What post-market monitoring means under the EU AI Act
Post-market monitoring means maintaining a continuous record of how a high-risk AI system behaves after deployment, including incidents, oversight actions, and performance changes over time.
Guardian is a continuous monitoring and auditability platform for high-risk AI systems under the EU AI Act. For the regulatory angle, start with the EU AI Act page, then see the methodology, the product, and the 4-week Readiness Sprint. For data and operations, see the security page; for a machine-readable summary, see For AI.
What post-market monitoring is
Post-market monitoring is the ongoing observation and documentation of how a high-risk AI system performs after it is deployed.
It is not limited to technical performance. It also includes incidents, oversight, follow-up actions, and the evidence needed for later review.
What teams need to maintain after deployment
- Performance and drift records
- Fairness-related signals where relevant
- Incident and remediation records
- Human oversight and intervention records
- Documentation updates and evidence continuity
- Review triggers and threshold history
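To make the record types above concrete, here is a minimal data-model sketch of what a single system's monitoring record could hold. All class and field names are illustrative assumptions for this sketch, not a Guardian schema or a prescribed EU AI Act format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MetricSnapshot:
    metric: str            # e.g. "drift_score", "error_rate", "fairness_gap"
    value: float
    captured_at: datetime

@dataclass
class IncidentRecord:
    summary: str
    detected_at: datetime
    remediation: str = ""  # follow-up action, filled in as it happens
    resolved: bool = False

@dataclass
class OversightAction:
    actor: str             # who intervened
    action: str            # e.g. "override", "escalation", "periodic review"
    taken_at: datetime

@dataclass
class MonitoringRecord:
    """One durable record per in-scope live system."""
    system_id: str
    metrics: list[MetricSnapshot] = field(default_factory=list)
    incidents: list[IncidentRecord] = field(default_factory=list)
    oversight: list[OversightAction] = field(default_factory=list)

# Append entries as the system runs, so the record accumulates over time.
record = MonitoringRecord(system_id="credit-scoring-v2")
record.metrics.append(
    MetricSnapshot("drift_score", 0.07, datetime.now(timezone.utc))
)
record.incidents.append(
    IncidentRecord("Spike in rejected applications", datetime.now(timezone.utc))
)
```

The point of the structure is continuity: every entry is timestamped and kept alongside the others, so later review can reconstruct what happened and when.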
Why point-in-time review is not enough
A one-time review can show how a system looked at a single moment. It cannot show what changed over time or how the organisation responded once the system was live.
That is why post-market monitoring matters operationally: it turns isolated review into a continuous record.
How to build a practical monitoring record
- Define one in-scope live system
- Track the small set of signals that matter most
- Maintain incident and oversight records
- Set clear review triggers
- Keep evidence exportable and reviewable
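The "clear review triggers" step above can be sketched as a simple threshold check: the metric names and threshold values here are illustrative assumptions, chosen only to show the shape of the mechanism.

```python
# Illustrative thresholds; real values depend on the system and its risk profile.
REVIEW_THRESHOLDS = {
    "drift_score": 0.10,
    "error_rate": 0.05,
    "fairness_gap": 0.08,
}

def check_review_triggers(latest: dict[str, float]) -> list[str]:
    """Return the metrics whose latest value breaches its review threshold."""
    return [
        metric
        for metric, threshold in REVIEW_THRESHOLDS.items()
        if latest.get(metric, 0.0) > threshold
    ]

# A breached drift threshold flags the system for review.
triggered = check_review_triggers({"drift_score": 0.12, "error_rate": 0.02})
```

Keeping the thresholds explicit in configuration, rather than in someone's head, is what turns "we would notice" into a reviewable trigger history.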
See also About for why Guardian was built, and the companion resource on how to monitor high-risk AI systems.
FAQ
- What does post-market monitoring include?
- It includes performance changes, incidents, oversight actions, documentation updates, and the evidence needed to review what happened after deployment.
- Is post-market monitoring only for providers?
- The exact obligations depend on role and context, but operationally, any organisation running a high-risk system benefits from maintaining a durable monitoring record.
- Why does evidence often break down after launch?
- Because records of monitoring, incidents, and follow-up actions tend to fragment across teams and tools once the system is live.
- What is the easiest way to get started?
- Usually by picking one live system and building a focused monitoring baseline around it.
Start with one live high-risk AI system
Book a readiness call to understand what post-market monitoring should look like for one system and how to create a credible baseline.