About us
Guardian was built to help teams move from point-in-time AI compliance reviews to continuous monitoring and auditability for high-risk AI systems.
Guardian is a continuous monitoring and auditability platform for high-risk AI systems under the EU AI Act. For the operating layer, see the product; for the regulatory view, the EU AI Act page; for the defensibility of metrics, the methodology; for a first engagement, the 4-week Readiness Sprint. For a machine-readable summary for assistants and search engines, see For AI.
Teams operating high-risk AI systems often prepare for review in one of two ways: they commission a periodic assessment, or they reconstruct evidence when someone asks for it.
Both approaches break down for systems that are already live. Risk signals change over time. Incidents happen between reviews. Oversight actions are taken — or missed — in the flow of operations.
The result is the same: when scrutiny arrives, the record is incomplete and hard to defend. That is the gap the Readiness Sprint and the product are designed to address.
A periodic audit can show where a system stood at one moment. It cannot show what changed in the months between reviews.
It cannot reliably capture fairness drift that emerged later, incident patterns that built up over time, or oversight actions that were logged inconsistently across teams.
For high-risk AI systems, readiness is not just about proving something once. It is about maintaining a record that stays usable over time — the focus of the methodology and of ongoing monitoring in the product.
Guardian is built around a simple idea: if a high-risk AI system is important enough to review, it is important enough to monitor continuously.
That means tracking production signals, maintaining incident and oversight records, and preserving evidence in a form that can be reviewed later without reconstruction.
It also means starting with one real system, because operational credibility is built system by system, not through abstract transformation language. The 4-week Readiness Sprint is the usual on-ramp; the EU AI Act page makes the operational case in regulatory terms.
Guardian is built by Nordic AI Integrity ApS, a Copenhagen-based company founded by Thomas Noba and Joris Cappa. Our methodology is developed with academic oversight from Dr. OJ Akintande of DTU Compute, helping ensure that monitoring metrics, thresholds, and review logic are grounded in rigorous statistical methods. We are building Guardian for organisations that need a more durable way to monitor high-risk AI systems and maintain evidence over time.
Thomas Noba
Co-founder & CEO
Nordic AI Integrity ApS — AI governance, product, and go-to-market.
Joris Cappa
Co-founder & COO
Nordic AI Integrity ApS — operations, delivery, and clearly scoped client work.
Dr. OJ Akintande
Technical Advisor
DTU Compute — statistical rigour behind fairness, drift, and model-risk monitoring logic.
Book a readiness call to choose one system, define the right monitoring scope, and understand what a credible first evidence baseline could look like.