EU AI Act — Our Position
Last updated: March 2026

Regulatory context
The EU AI Act entered into force on 1 August 2024. It becomes generally applicable on 2 August 2026. Prohibited practices apply from 2 February 2025. GPAI obligations apply from 2 August 2025.
The Act applies a risk-based model. High-risk AI systems face requirements including: risk management, data quality and governance, logging, transparency, human oversight, robustness, and accuracy.
Our position at prototype stage
We do not claim that the current prototype is fully compliant with the EU AI Act for all future deployment scenarios. That assessment depends on the final deployment model, intended purpose, users, and jurisdictions — none of which are fixed at prototype stage.
What we do state:
- The prototype is being designed in alignment with the Act's governance logic
- We implement human oversight, transparency, traceability, and risk controls by architectural design, not just policy
- Any final compliance determination will be conducted before commercial launch
Human oversight — by architecture, not policy
Human oversight for material findings is a structural commitment:
- AI assists screening and reporting
- AI does not issue final legal or compliance determinations
- RED FLAG findings require human expert review before any real-world use
- This checkpoint cannot be bypassed — it is enforced at the system level
This aligns with the Act's human oversight expectations for higher-risk systems.
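The "cannot be bypassed" checkpoint above can be sketched as a release gate that refuses to export RED FLAG findings without a recorded human sign-off. This is a minimal illustration, not the actual implementation; the names `Finding`, `release_finding`, and `OversightError` are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    finding_id: str
    severity: str                      # e.g. "INFO", "WARNING", "RED_FLAG"
    reviewed_by: Optional[str] = None  # expert sign-off, recorded before release

class OversightError(Exception):
    """Raised when a gated finding is released without human review."""

def release_finding(finding: Finding) -> Finding:
    # System-level checkpoint: a RED FLAG finding cannot leave the system
    # without a named human reviewer. There is deliberately no bypass flag.
    if finding.severity == "RED_FLAG" and finding.reviewed_by is None:
        raise OversightError(
            f"Finding {finding.finding_id} requires expert review before release"
        )
    return finding
```

Because the gate lives in the release path rather than in a policy document, removing it requires a code change, which is the sense in which the checkpoint is architectural rather than procedural.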
Transparency and traceability
Outputs are designed to be:
- source-linked (every finding cites its source list, entry ID, date, and URL)
- structured and independently reviewable
- logged with timestamps and audit context
- presented as decision-support, never as final determinations
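The source-linked, independently reviewable shape described above might look like the following. The record and field names (`FindingRecord`, `SourceRef`) are illustrative assumptions, not the product's actual schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class SourceRef:
    source_list: str   # name of the list the entry came from
    entry_id: str
    entry_date: str    # ISO 8601 date of the list entry
    url: str

@dataclass(frozen=True)
class FindingRecord:
    finding_id: str
    summary: str
    source: SourceRef  # every finding carries its citation
    logged_at: str     # ISO 8601 timestamp for the audit trail

# Findings serialize to JSON so reviewers and auditors can inspect them
# independently of the system that produced them.
finding = FindingRecord(
    finding_id="F-0001",
    summary="Name match against a consolidated list entry",
    source=SourceRef("Example Consolidated List", "EX.1234.56",
                     "2026-01-15", "https://example.invalid/entry/EX.1234.56"),
    logged_at="2026-03-01T12:00:00Z",
)
print(json.dumps(asdict(finding), indent=2))
```

A structured record like this is what makes "independently reviewable" concrete: a reviewer can follow the citation to the source entry without trusting the system's narrative.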
Audit records use RFC 3161 timestamping for immutable evidence of report content at the time of delivery.
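Under RFC 3161, what gets timestamped is a cryptographic digest of the content, not the content itself. The following sketch, with an assumed helper name `message_imprint`, shows the digest step; the round-trip to a Time Stamping Authority (TSA) is out of scope here:

```python
import hashlib

def message_imprint(report_bytes: bytes) -> str:
    """SHA-256 digest of the delivered report content.

    In RFC 3161, this digest is placed in the TimeStampReq's
    MessageImprint and sent to a TSA; the signed TimeStampResp then
    proves the content existed at the attested time. For example,
    `openssl ts -query -data report.json -sha256` builds such a request.
    """
    return hashlib.sha256(report_bytes).hexdigest()

report = b'{"finding_id": "F-0001", "summary": "..."}'
digest = message_imprint(report)
# Any later change to the report changes the digest, so a stored
# timestamp token no longer verifies against the altered content.
assert message_imprint(report + b" ") != digest
```

This is why the timestamping yields immutable evidence: the TSA's signature binds the attested time to the exact bytes delivered, and any post-hoc edit is detectable.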
High-risk sensitivity
Depending on the actual deployment context, parts of this product may be subject to the Act's high-risk obligations, particularly where outputs could influence access to regulated services or compliance workflows.
For this reason, the prototype is framed as:
- a decision-support tool
- a prototype and pilot
- not an autonomous decision-making system
- not a substitute for qualified professional judgement
No commercial or decision-use at this stage
Until formal commercial launch:
- the prototype is not a commercial service
- commercial use is prohibited
- decision-use is prohibited
- all outputs require independent human review and, where relevant, qualified legal assessment