Risk classification, transparency obligations, and audit-ready documentation for AI-powered products. Code-level implementation, not just checklists.
Systematic analysis under Articles 3–6. We determine whether your system is minimal, limited, high, or unacceptable risk — and document the reasoning at each step so it holds up under audit.
Art. 3(1), Art. 5, Art. 6(1–3), Annex III

Interaction disclosure (Art. 50(1)) and machine-readable content marking (Art. 50(2)). We write the code — CLI banners, HTML comments with JSON metadata, API response headers — not just the policy document.
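For illustration, a minimal Python sketch of Art. 50(2)-style marking for HTML output. The `ai-content-marker` name, the `mark_ai_generated` helper, and the JSON fields are our own assumptions — the Act requires machine-readable marking but prescribes no particular schema:

```python
import json
from datetime import datetime, timezone

def mark_ai_generated(html: str, system: str, model: str) -> str:
    """Prepend a machine-readable provenance comment to AI-generated HTML.

    The marker name and metadata fields are hypothetical; adapt them to
    whatever schema your disclosure policy settles on.
    """
    meta = {
        "ai_generated": True,
        "system": system,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # HTML comment keeps the marker invisible to readers but trivially
    # parseable by downstream tooling.
    return f"<!-- ai-content-marker {json.dumps(meta)} -->\n{html}"
```

For non-HTML output, the same JSON payload can travel as an API response header or a CLI banner printed before the generated text.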
Art. 50(1), Art. 50(2), Recital 132

Classification record, transparency report, data processing documentation, and ToS clauses. One document an auditor picks up and reads, with everything cross-referenced to your codebase.
Art. 4, Art. 11, Art. 50

Compliance checks as part of your build pipeline. When the system changes, the compliance docs flag it. Automated — not a quarterly manual review.
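As a sketch of what such a pipeline check can look like, assuming a hypothetical `compliance/manifest.json` that records a SHA-256 digest per tracked file at the time the docs were last reviewed (the manifest name and layout are our invention, not anything the Act mandates):

```python
import hashlib
import json
import pathlib

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def stale_files(manifest: pathlib.Path) -> list[str]:
    """Return tracked files whose current hash differs from the hash
    recorded when the compliance docs were last reviewed."""
    recorded = json.loads(manifest.read_text())
    return [name for name, digest in recorded.items()
            if sha256(pathlib.Path(name)) != digest]

def main(manifest: str = "compliance/manifest.json") -> int:
    drift = stale_files(pathlib.Path(manifest))
    for name in drift:
        print(f"compliance docs stale: {name} changed since last review")
    return 1 if drift else 0  # non-zero exit fails the CI job
```

Wired into the pipeline via `sys.exit(main())`, the job fails whenever a tracked file drifts from the manifest — the cue to re-check classification and refresh the docs before shipping.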
Ongoing maintenance

We don't sell compliance templates. We work from the source code, classify the system based on what it actually does, and implement the obligations in the codebase. The goal is transparency reports that reference specific files and functions — not vague descriptions.
We're building this methodology on our own open-source projects first. You can follow the process in our EU AI Act blog series, where we're classifying and implementing compliance for a CLI tool from scratch.
Send us a short description of your AI system and we'll tell you the risk level and which obligations apply — no charge, no call.
Get a free risk assessment →