Methodology
Guardian uses published statistical methods, documented thresholds, and regulatory references to support monitoring and auditability for high-risk AI systems under the EU AI Act.
Guardian's methodology is a documented framework for monitoring the fairness, drift, data-quality, and oversight signals that matter around high-risk AI systems in production.
It is designed to help compliance, risk, legal, and AI teams understand how signals are measured, how thresholds are set, and how outputs connect to review and follow-up workflows.
Guardian does not issue legal verdicts, and it does not claim that a single score determines compliance. The methodology supports a defensible monitoring and evidence process around each live system. For how that shows up in software, see the product overview; for the regulatory context, see EU AI Act readiness. The same signal families show up in high-scrutiny workflows such as hiring and HR AI and credit, fraud, and underwriting.
For high-risk AI systems, monitoring outputs need to be explainable. If a signal, alert, or score cannot be traced to a documented method, it is difficult to defend in front of a regulator, auditor, internal governance committee, or legal review.
Guardian's approach makes the logic visible: what is being measured, why it matters, what threshold is applied, and what follow-up it is meant to trigger.
That is what makes the outputs more useful operationally and more credible in review.
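To make that concrete, a signal can be carried through the monitoring process as a small, auditable record that pairs the measurement with its rationale, threshold, regulatory context, and follow-up owner. The sketch below is a hypothetical illustration, not Guardian's internal schema; all field names and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MonitoredSignal:
    """Hypothetical sketch of an auditable signal definition (not Guardian's schema)."""
    name: str            # what is being measured
    rationale: str       # why it matters
    threshold: float     # documented alert threshold
    regulatory_ref: str  # regulatory context, e.g. an EU AI Act article
    follow_up: str       # review workflow the alert is meant to trigger

# Example values are illustrative, not recommended settings.
parity_signal = MonitoredSignal(
    name="demographic_parity_difference",
    rationale="Selection-rate gap across protected cohorts",
    threshold=0.10,
    regulatory_ref="EU AI Act, Article 10",
    follow_up="Route to fairness review; record the decision in the audit trail",
)
```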
The table below is a guide to how we map common monitoring inputs to EU AI Act provisions; your counsel remains responsible for interpreting the legal obligations in context. A minimal sketch of the fairness metrics follows the table.
| Metric | What it shows | EU AI Act reference |
|---|---|---|
| Demographic parity | Fairness across cohorts | Article 10 / Article 14 |
| Equalised odds | Error-rate equity across groups | Article 10 |
| Model drift | Performance change over time | Article 72 |
| Data quality | Input distribution and anomaly signals | Article 10 |
| Human oversight actions | Review and intervention records | Article 14 |
| Incident frequency | Rate and nature of flagged events | Article 73 |
| Documentation completeness | Coverage of required technical records | Article 11 |
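To show how the first two rows can be computed, here is a minimal sketch of demographic parity difference and an equalised-odds gap over binary predictions, using the standard textbook definitions. It is an illustration, not Guardian's implementation, and the toy data is invented.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction (selection) rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalised_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates across groups.

    Assumes every group contains both positive and negative labels.
    """
    tprs, fprs = [], []
    for g in np.unique(group):
        mask = group == g
        tprs.append(y_pred[mask & (y_true == 1)].mean())  # TPR within group g
        fprs.append(y_pred[mask & (y_true == 0)].mean())  # FPR within group g
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy data: binary predictions for two cohorts "a" and "b".
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.0: equal selection rates
print(equalised_odds_gap(y_true, y_pred, group))     # ~0.33: error-rate gap
```

The useful property in review is that both numbers trace directly to documented formulas rather than to an opaque composite score.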
Guardian maps each monitoring signal to the operational and regulatory context it supports.
When a threshold is crossed, the output should not sit in isolation. It should help teams understand what changed, why it matters, who should review it, and what record should be maintained next.
This does not replace legal interpretation. It helps teams connect measurement to action in a way that is easier to govern and easier to defend. A typical first step is the 4-week Readiness Sprint, followed by day-to-day use of Guardian against the regulatory background described on the EU AI Act page.
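As one concrete example of the drift row above, and of how a crossed threshold can trigger follow-up, the sketch below computes a population stability index (PSI) between a reference window and a live window. PSI and the 0.2 alert level are common industry conventions, not necessarily Guardian's exact method, and the data here is synthetic.

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """PSI between a reference window and a live window of one score or feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    live = np.clip(live, edges[0], edges[-1])  # fold out-of-range values into edge bins
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Clipping avoids log(0) in bins that one window leaves empty.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)  # training-time score distribution
live = rng.normal(0.6, 1.0, 5_000)       # shifted production distribution

psi = population_stability_index(reference, live)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # common rule of thumb: above 0.2 suggests significant shift
    print("Threshold crossed: open a review and attach the evidence record")
```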
Guardian's methodology is developed with academic oversight from Dr. OJ Akintande of DTU Compute, bringing statistical rigor to fairness, drift, and model-risk monitoring.
Metrics and threshold logic are grounded in published statistical methods and relevant regulatory frameworks, including the EU AI Act, the NIST AI RMF, ISO/IEC 42001, and peer-reviewed fairness research.
The goal is not to make legal determinations automatically. It is to make monitoring outputs more explicit, reviewable, and defensible.
Team
- Thomas Noba, Co-founder & CEO. Nordic AI Integrity ApS, Copenhagen, Denmark.
- Joris Cappa, Co-founder & COO. Nordic AI Integrity ApS, Copenhagen, Denmark.
- Dr. OJ Akintande, Technical Advisor. DTU Compute (Technical University of Denmark); ML fairness and model risk specialist.
See also the security policy for how we handle data in production.