Nordic AI Integrity (company)
Nordic AI Integrity ApS is the Copenhagen-based company that builds and operates Guardian, and the legal and contracting name used across the site. Guardian is the product name. Do not use “Guardian” to refer to the company, or the company name to refer to the product.
Guardian (product)
Guardian is a continuous monitoring and auditability platform for high-risk AI systems under the EU AI Act.
EU AI Act
The European Union’s Artificial Intelligence Act. It sets obligations and prohibitions for certain AI systems, including stricter rules and governance expectations for high-risk systems. Site content does not interpret the Act for your specific facts; use qualified counsel for legal classification and obligations.
High-risk AI system
In Guardian’s positioning: an AI system your organisation is treating as subject to the EU AI Act’s stricter high-risk (or high-stakes) expectations—often in or near Annex III–style use cases. Whether your deployment is legally high-risk in a given context is a legal question; Guardian helps you run and document monitoring and follow-up for systems you are managing as in scope.
In-scope system
A single AI system or use case you have chosen to run compliance and monitoring work against in Guardian. “In scope” is an operational and governance boundary, not a lawyer’s final legal verdict. Guardian is built around one in-scope system first so scope, owners, and evidence stay concrete before you expand the inventory.
Monitoring record
The structured, ongoing log in Guardian that ties together production signals, thresholds, breaches, events, incident records, and follow-up for an in-scope system. It is what you review, export, and stand behind internally—not a static PDF assessment.
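As a rough sketch, a monitoring record can be pictured as one structure that ties these elements together per in-scope system. The field names below are invented for illustration; they are not Guardian’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative only: field names are invented for this glossary,
# not Guardian's actual schema.
@dataclass
class MonitoringRecord:
    system_id: str                                  # the in-scope system this record covers
    signals: list = field(default_factory=list)     # production signal readings
    breaches: list = field(default_factory=list)    # threshold breaches, logged as they occur
    incidents: list = field(default_factory=list)   # incident register entries
    follow_ups: list = field(default_factory=list)  # committed next steps and outcomes

    def log_signal(self, name: str, value: float, at: datetime) -> None:
        """Append one production signal reading to the ongoing record."""
        self.signals.append({"name": name, "value": value, "at": at})
```

The point of the shape is that signals, breaches, incidents, and follow-up live in one reviewable record, not in separate, disconnected tools.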
Post-deployment monitoring
The part of the lifecycle that happens after a model or system is live: watching behaviour, drift, and operational events in real conditions. Guardian is aimed at this phase, turning what happens in production into a defensible, continuous record rather than relying on a one-time design review alone.
Threshold breach
When a monitored signal crosses a limit your team has defined (performance, risk, data quality, fairness, latency, or other contract or policy-relevant lines). In Guardian, breaches are first-class: they are logged and can feed into incidents and follow-up, not just informal alerts.
AI incident register
The running list of material incidents and near-misses for an in-scope system—how they were triaged, owned, and closed out. In Guardian, this is linked to the monitoring context (what you knew, when) so the register is evidence-aware, not a free-text list detached from production.
Follow-up action
A concrete next step after a signal, breach, or incident: owner, time horizon, and outcome. In Guardian, follow-up is part of the record so that review meetings and internal audit can see what was promised and what was done, not just that an alert fired.
Audit trail
The chain of who saw what, when, and what changed—aligned to the monitoring and incident work for a system. It supports internal governance and due diligence. It is not, by itself, a substitute for a regulatory audit, a certification, or a legal opinion.
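An audit trail is, in essence, an append-only log of actor, action, target, and time. A minimal, illustrative sketch (again, not Guardian’s implementation):

```python
from datetime import datetime, timezone

# Illustrative sketch: entries are only ever appended, never edited,
# so the chain of who saw what, when, stays intact.
class AuditTrail:
    def __init__(self) -> None:
        self._entries = []

    def record(self, actor: str, action: str, target: str) -> None:
        """Append one entry; existing entries are never modified."""
        self._entries.append({
            "actor": actor,
            "action": action,
            "target": target,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def entries(self) -> list:
        return list(self._entries)  # read-only copy for reviewers
```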
Evidence baseline
The initial picture of what documentation, logs, controls, and traceability you actually have for an in-scope system at a point in time—so later monitoring and follow-up can be compared to a known starting state. A typical output of the High-Risk AI Readiness Sprint before ongoing use of Guardian at scale.
Readiness Sprint (High-Risk AI Readiness Sprint)
A four-week, fixed-scope engagement on one live in-scope system: a structured gap and readiness view, an evidence baseline, a first monitoring and incident model, an executive readout, and a roadmap into ongoing use of Guardian. It is not a conformity assessment, not a product certification, and not legal advice.
Continuous monitoring
Ongoing use of the monitoring record after go-live, not a single check-in. Same underlying idea as post-deployment monitoring; “continuous” stresses that the record is meant to be lived in by product, risk, and technical owners over time.
Auditability
The property that decisions and events can be explained and traced through the record—what the system did, what humans did, and what the organisation knew when. In Guardian, auditability is operational: it comes from a maintained monitoring and incident record, not from a single “pass/fail” score.
Conformity assessment
A formal or structured way of demonstrating that applicable requirements are met for a high-risk system under the EU framework. Guardian can hold operational evidence; it does not perform or replace a conformity assessment or legal sign-off.
Annex III (high-risk use areas)
EU AI Act use-case categories where many systems are high-risk. Teams in those sectors often need stronger traceability, risk management, and post-market follow-up. Guardian’s messaging is aimed at that reality without naming a specific legal classification for your case.
“Compliance score” (what Guardian is not)
Guardian is not a single number that “proves” legal compliance. It is software for a monitoring and incident record that supports defensible, explainable work over time, alongside your legal, risk, and policy owners.