EU AI Act
Guardian helps teams turn EU AI Act obligations into practical monitoring, incident, documentation, and audit workflows for AI systems already in production.
The product page describes the software, the methodology page covers the metrics and thresholds, and the 4-week Readiness Sprint is a common first engagement. Educational guides cover post-market monitoring and how to monitor high-risk AI systems.
For teams operating high-risk AI systems, the EU AI Act is not just a documentation exercise. It creates an ongoing operational expectation: monitor what the system is doing, log incidents, maintain oversight records, and keep evidence ready before someone asks for it.
Most readiness gaps are not about awareness of the law. They are about execution after deployment — when monitoring, incident handling, and evidence maintenance are not yet part of day-to-day operations.
Annex III of the EU AI Act lists categories in which systems are typically treated as high-risk. Treat it as a practical first scan; your counsel should confirm how the rules apply to your facts.
If your system materially affects rights, access, eligibility, safety, or legal status, it is likely to require closer review under the EU AI Act.
The challenge is rarely knowing these categories exist. The challenge is meeting the resulting obligations continuously around a live system.
Many organisations have at least some documentation from the build or approval phase. The problem begins after go-live.
Once a system is in production, evidence becomes fragmented across tools, teams, and documents. Incidents are logged inconsistently. Oversight actions are hard to reconstruct. When an audit or regulator request appears, teams scramble to rebuild the record.
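One way to make incident logging consistent is to agree on a single structured record shape that every team fills in the same way, so oversight actions can be reconstructed later. The sketch below is illustrative only; the field names are assumptions for the example, not Guardian's actual schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Hypothetical structured incident entry for an evidence trail."""
    system_id: str          # which AI system the incident concerns
    severity: str           # e.g. "minor" or "serious"
    description: str        # what was observed
    oversight_action: str   # what a human reviewer did in response
    recorded_at: str = field(
        # timestamp captured at creation, in UTC, ISO 8601 format
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry, with illustrative values
record = IncidentRecord(
    system_id="credit-scoring-v2",
    severity="serious",
    description="Score drift beyond the agreed threshold",
    oversight_action="Model output suspended pending human review",
)
print(asdict(record))
```

Because every entry carries the same fields and a creation timestamp, the log can be queried or exported wholesale when an audit request arrives, rather than reassembled from scattered tickets and chat threads.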
Guardian is designed to make that record continuous rather than reactive. See the product overview and how a 4-week Readiness Sprint creates a first baseline. For the measurement layer, the methodology page describes defensible monitoring inputs.
Guardian supports the monitoring and evidence layer of AI Act readiness.
It is not a legal advice tool and it does not certify compliance. It helps teams monitor one high-risk AI system in production, maintain incident and oversight records, and keep a defensible evidence trail ready for review.
It is often easiest to start with one real system through the Readiness Sprint, then extend the same operating model over time. The product page describes what you run week to week; the methodology page explains the metrics and thresholds.