Guardian · Nordic AI Integrity
Guardian is a continuous monitoring and auditability platform for high-risk AI systems under the EU AI Act. It helps compliance, risk, legal, and AI teams see what their AI is doing in production, maintain audit-ready evidence, and respond faster when regulators or internal auditors ask questions.
Guardian gives compliance, risk, legal, and AI teams a practical operating layer for production monitoring, incident logging, oversight records, and evidence maintenance.
Guardian is not a legal verdict or a one-click compliance claim. It helps teams maintain a shared, defensible record of what is happening around a live AI system — without reconstructing evidence under pressure.
Guardian helps teams understand what to monitor for one real system first — then turn that into a repeatable operating model.
Guardian is purpose-built for the strict requirements of Annex III high-risk AI systems, securing your operations where regulatory scrutiny is highest.
More use-case notes and guides live in Resources.
See in-scope systems in one inventory: owners, context, and review state available to both technical and governance teams.
View drift, fairness, and data-quality signals next to open incidents and oversight actions, so the operating picture is legible in one pass.
Route alerts to owners and keep a living record of what changed, when, and who responded for internal and external review.
| Model | Status | Incidents | Risks |
|---|---|---|---|
| Credit v2.1 | AT-RISK | 2 open | 4 |
| HR screen | ON TRACK | 0 | 1 |
| Triage API | WATCH | 1 open | 2 |
Illustrative — status from monitored signals and thresholds; not a legal or conformity determination.
Guardian sits on top of your models and existing monitoring inputs, turning performance and drift information into a structured operating record that risk, compliance, and model owners can use. You see which systems are off baseline, which cohorts need review, and which obligations the evidence should speak to — in one place.
Guardian brings fairness, drift, data quality, and incident signals into one place and labels what needs attention, with explicit references to obligations such as Articles 9, 10, 11, 14, and 62. Trends and thresholds are documented so teams can show what changed and when—without treating a single headline number as a legal outcome.
When something degrades, Guardian can surface what it means in obligation terms — for example, how an event touches Article 10 (data and data governance) and Article 14 (human oversight). Risk and legal teams get incidents framed for audits and internal review, not only model metrics.
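As a hedged illustration of what obligation framing could look like in practice, the sketch below maps monitored signal types to the articles mentioned above. The signal names, mapping, and function are hypothetical examples, not Guardian's actual schema:

```python
# Illustrative only: a minimal mapping from hypothetical monitoring signal
# types to the EU AI Act obligations an incident record might reference.
OBLIGATION_MAP = {
    "data_quality": ["Article 10 (data and data governance)"],
    "drift": ["Article 9 (risk management system)"],
    "fairness": ["Article 10 (data and data governance)",
                 "Article 14 (human oversight)"],
    "incident": ["Article 62 (reporting of serious incidents)"],
}

def obligations_for(signal_type: str) -> list[str]:
    """Return the obligations an event of this type should be framed against.

    Unknown signal types fall back to the general documentation obligation.
    """
    return OBLIGATION_MAP.get(signal_type, ["Article 11 (technical documentation)"])
```

The value of an explicit mapping like this is that the same alert can be read by a model owner as a metric breach and by a legal reviewer as an obligation reference.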
Guardian is designed for financial services, AML, HR, and public-sector style deployments under the EU AI Act and GDPR. Data stays in the EU, the focus is on metrics and records rather than unnecessary raw personal data, and documentation is exportable for boards and advisors.
The point is to know where high-risk AI is off track before a review becomes a fire drill—not to replace legal judgment or a formal conformity process where you need one.
Explanation: The model shows a sustained demographic parity drop over the last 72 hours, breaching the documented fairness threshold.
Response: Risk and model owners received an alert within minutes, paused automatic approvals for the affected cohort, and documented mitigation steps directly in Guardian.
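The kind of check behind such an alert can be sketched in a few lines. This is an illustrative example only: the metric is the standard demographic parity difference (gap in approval rates between two cohorts), and the 0.10 threshold is a placeholder, not a value Guardian ships with:

```python
# Hypothetical, documented threshold; real systems calibrate this per use case.
DP_THRESHOLD = 0.10

def demographic_parity_difference(approved_a, total_a, approved_b, total_b):
    """Absolute gap in approval rates between cohort A and cohort B."""
    return abs(approved_a / total_a - approved_b / total_b)

def check_parity(approved_a, total_a, approved_b, total_b):
    """Evaluate the gap against the documented threshold and return a record."""
    gap = demographic_parity_difference(approved_a, total_a, approved_b, total_b)
    return {
        "metric": "demographic_parity_difference",
        "value": round(gap, 3),
        "threshold": DP_THRESHOLD,
        "breached": gap > DP_THRESHOLD,
    }

# Example: 80% approval in cohort A vs 62% in cohort B is a 0.18 gap,
# which exceeds the 0.10 placeholder threshold.
record = check_parity(80, 100, 62, 100)
```

Returning a structured record rather than a bare boolean is what makes the alert auditable later: the metric, value, and threshold travel together.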
Open Methodology
Guardian uses monitoring thresholds derived from published statistical methods and regulatory references — not opaque proprietary scoring.
Our methodology is developed with Dr. OJ Akintande of DTU Compute, bringing academic rigor to fairness, drift and model-risk monitoring.
We do not rely on black-box compliance scoring. Guardian operationalizes academically grounded metrics and open threshold logic so risk, legal and compliance teams can defend every alert and audit output.
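As one example of open threshold logic, the population stability index (PSI) is a published drift statistic with widely cited rule-of-thumb bands. The sketch below is illustrative only; the band labels and cut-offs are the common rule of thumb, not Guardian's calibrated thresholds:

```python
import math

def population_stability_index(expected, actual):
    """PSI over matched bins.

    `expected` and `actual` are per-bin proportions (each summing to 1) for the
    baseline and current distributions. Empty bins are skipped to avoid log(0).
    """
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

def drift_status(psi):
    """Commonly cited rule of thumb: <0.10 stable, 0.10-0.25 watch, >0.25 alert."""
    if psi < 0.10:
        return "STABLE"
    if psi <= 0.25:
        return "WATCH"
    return "ALERT"
```

Because both the statistic and the cut-offs are written down, a reviewer can recompute any status label from the underlying bin counts rather than taking a score on trust.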
Nordic AI Integrity is co-founded by Thomas Noba (CEO) and Joris Cappa (COO), Copenhagen, Denmark, with academic oversight from Dr. OJ Akintande (DTU Compute).
AI Act, NIS2, GDPR, and ISO 42001 obligations are converging across the Nordics. Guardian helps you unify your AI inventory, risk register, and technical documentation in one platform, avoiding the overhead of running four separate compliance programs.
Guardian is currently being evaluated by Nordic organisations in finance and HR for their 2026 AI Act readiness programmes.
A steady pace for evidence work, not a one-off project
Drift checks, fairness views, and audit-oriented documentation are easier to keep current when the record lives next to production signals, instead of being rebuilt in slides before every review.
Map your high-risk AI systems and define your exact regulatory scope.
Connect 1–2 critical models (credit, hiring, healthcare) to measure immediate compliance gaps.
Scale continuous monitoring and automated documentation across your entire AI estate.
Pricing
Choose your entry point: fixed-scope audit or continuous monitoring.
Integrity-4 Audit
One-time investment
Best for first-time AI Act readiness on 1 AI system
Best for organisations with multiple high-risk systems
Best for complex organisations needing broader coverage
Includes: 4-week delivery + 12 months support
Book Scoping Call
Guardian Monitoring
Monthly subscription
Best for 1 production system
Best for several high-risk models
Best for enterprise-wide AI governance
14-day free trial · No credit card required
Start Free Trial
Bundle offer: Get 3 months of Guardian free when you purchase an Integrity-4 Audit first.
All resources — central list of guides, the checklist, the glossary, and the API entry point.
Guardian is not legal advice and does not guarantee compliance. Nordic AI Integrity is the company behind Guardian.
Guardian is designed for regulated organisations. Every control is documented and auditable.
Co-founded by Thomas Noba (CEO) and Joris Cappa (COO), Nordic AI Integrity ApS · Copenhagen, Denmark.
The fastest path to AI Act readiness is not an enterprise-wide programme. It is a focused baseline around one real system already in production.
Guardian's 4-week Readiness Sprint helps teams identify what to monitor, what evidence is missing, and what operating structure is needed to maintain a credible baseline.
Book a readiness call to choose one system, understand what you would monitor, and see what your first 4-week evidence baseline could look like.