Compliance · EU AI Act
High-risk AI obligations become applicable on 2 August 2026. Arkova anchors your risk management, data governance, technical documentation, and audit logs to a public ledger, so an auditor or regulator can verify every claim without trusting your file system.
What it is
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework specifically governing artificial intelligence. It takes a risk-based approach: AI systems are classified into four tiers based on their potential to cause harm, and obligations scale accordingly.
The law applies extraterritorially: if you place an AI system on the EU market, or its output affects people in the EU, you are in scope regardless of where your company is incorporated. Penalties for prohibited-AI violations reach up to €35 million or 7% of global annual turnover, whichever is higher; most other violations carry up to €15 million or 3%.
The act entered into force on 1 August 2024 and phases in through 2 August 2027. The prohibition tier and General-Purpose AI obligations are already applicable. The high-risk tier — where most enterprise AI deployments will land — becomes applicable on 2 August 2026.
Implementation timeline
12 July 2024: EU AI Act published in the Official Journal. Phased implementation begins on entry into force (1 August 2024).
2 February 2025: Unacceptable-risk AI systems banned. Covered practices include social scoring, manipulative AI, real-time biometric ID in public (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, and untargeted facial-image scraping.
2 August 2025: General-Purpose AI model providers must comply: technical documentation, copyright policy, training-data transparency, and systemic-risk model evaluations.
2 August 2026: High-risk AI systems must comply with risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity requirements.
2 August 2027: Obligations apply to high-risk systems used as safety components of products covered by EU sectoral legislation (medical devices, machinery, toys, etc.). Full EU AI Act compliance is required across all in-scope deployments.
Risk classification
Unacceptable risk
Examples
Social scoring · subliminal manipulation · real-time biometric ID in public · workplace and school emotion recognition · untargeted facial-image scraping
Obligation
Banned. No deployment in the EU permitted.
High risk
Examples
Annex III: biometric ID, critical infrastructure, education access, employment, essential services (creditworthiness, public benefits), law enforcement, migration, justice, democratic processes. Annex I: AI as a safety component of regulated products (medical devices, machinery, toys).
Obligation
Risk management system · data governance · technical documentation · record-keeping · transparency · human oversight · accuracy + robustness + cybersecurity · post-market monitoring · conformity assessment + CE marking. Most onerous tier.
Limited risk
Examples
Chatbots, emotion-recognition systems, biometric categorization, deepfakes (text, audio, video).
Obligation
Transparency obligations: users must be informed they are interacting with AI. Generated content must be labeled as AI-generated.
Minimal risk
Examples
Spam filters, AI in video games, inventory optimization, recommendation engines below the high-risk threshold.
Obligation
No mandatory obligations. Voluntary codes of conduct encouraged.
Most enterprise AI deployments land in the high-risk tier, which is where the evidence-and-audit burden sits. The rest of this page focuses on the high-risk obligations Arkova helps you anchor.
Who's in scope
The EU AI Act applies to providers (you build or rebrand an AI system), deployers (you use an AI system in your operations), importers, distributors, and authorized representatives. It applies regardless of where you are incorporated if you place an AI system on the EU market or its output affects people in the EU.
In practice this captures most US, UK, APAC, and LATAM enterprises operating globally. The extraterritorial scope mirrors GDPR.
How Arkova maps to high-risk obligations
High-risk AI obligations require continuous documentation. Arkova does not replace your existing AI governance, MLOps, or risk-management tools — it sits next to them and anchors their output to a public ledger so a regulator, auditor, or counterparty can verify each claim independently.
Risk management (Article 9)
Requirement
Continuous, iterative risk management process across the AI system lifecycle. Documented identification, analysis, evaluation, and mitigation of risks.
Arkova
Cryptographically anchored risk-assessment records with immutable timestamps. Each risk-management cycle is anchored, so an auditor can reconstruct the exact state of your risk register at any review date.
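Arkova's API is private, so the following is only a minimal sketch of the underlying idea, with all names hypothetical: hash a canonical serialization of each risk-management record and pair the digest with a timestamp. In a real anchoring system the digest would be committed to a public ledger rather than held locally.

```python
import hashlib
import json
from datetime import datetime, timezone

def canonical(record: dict) -> bytes:
    # Stable serialization: sorted keys, no whitespace, so the same
    # content always yields the same digest.
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def anchor(record: dict) -> dict:
    """Return a receipt pairing the record's SHA-256 digest with a timestamp."""
    return {
        "sha256": hashlib.sha256(canonical(record)).hexdigest(),
        "anchored_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(record: dict, receipt: dict) -> bool:
    """Check that the record presented today matches the anchored digest."""
    return hashlib.sha256(canonical(record)).hexdigest() == receipt["sha256"]

risk_register = {
    "system": "credit-scoring-v2",  # illustrative system name
    "risks": [{"id": "R-12", "severity": "high", "mitigation": "human review"}],
}
receipt = anchor(risk_register)
assert verify(risk_register, receipt)

risk_register["risks"][0]["severity"] = "low"  # retroactive edit
assert not verify(risk_register, receipt)      # ...is detectable
```

The value for an auditor is the second assertion: any after-the-fact change to the risk register fails verification against the receipt, so the register's state at the anchoring date is fixed.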
Data and data governance (Article 10)
Requirement
Training, validation, and test datasets must be relevant, representative, free of errors, and complete. Documentation of data provenance, gathering processes, and data preparation.
Arkova
Anchored data-source attestations with cryptographic fingerprints of dataset versions. Independent verification that the dataset claimed in your audit is the dataset actually used.
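The dataset-fingerprint idea can be sketched without any Arkova-specific API: a SHA-256 digest over the dataset bytes identifies the exact version that was used. A minimal sketch (file name and function names are illustrative, not Arkova's):

```python
import hashlib
from pathlib import Path

def dataset_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a file, streamed in 1 MiB chunks so large
    datasets never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Attest the training set at freeze time...
Path("train-v3.csv").write_bytes(b"age,income,label\n34,52000,0\n")
frozen = dataset_fingerprint("train-v3.csv")

# ...and re-check at audit time: same bytes, same fingerprint.
assert dataset_fingerprint("train-v3.csv") == frozen
```

Anchoring `frozen` at training time is what lets an auditor later confirm that the dataset presented is byte-for-byte the one that was used.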
Technical documentation (Article 11)
Requirement
Comprehensive technical documentation covering system design, development methodology, training data, validation procedures, performance metrics, and post-market monitoring.
Arkova
Versioned, anchored technical documentation. Every revision is timestamped. Auditors verify that the version provided matches the version reviewed at any prior date — without trusting your file system.
Record-keeping (Article 12)
Requirement
Automatic event logging during system operation. Logs must enable traceability and post-market monitoring.
Arkova
Append-only audit log with cryptographic anchoring. Logs cannot be retroactively altered. Each event has an independently verifiable receipt.
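An append-only log that resists retroactive edits can be sketched as a hash chain, where each entry's hash covers the previous entry's hash. This is the general technique, not Arkova's actual implementation; all names here are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append(log: list, event: dict) -> None:
    """Append an event; its hash commits to the previous entry's hash."""
    prev = log[-1]["entry_hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev, "entry_hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every link; any altered entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True

log = []
append(log, {"type": "inference", "model": "v2.1"})
append(log, {"type": "override", "operator": "jdoe"})
assert verify_chain(log)

log[0]["event"]["model"] = "v2.0"  # retroactive alteration
assert not verify_chain(log)       # ...breaks every later link
```

Periodically anchoring the head hash to a public ledger extends this property beyond the log file itself: even deleting and rebuilding the whole log would fail against the anchored head.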
Human oversight (Article 14)
Requirement
Documented human oversight measures, training of oversight personnel, and intervention protocols.
Arkova
Oversight events (intervention, override, escalation) anchored with operator identity and timestamp. Full chain of custody from AI decision to human review.
Quality management system (Article 17)
Requirement
Documented QMS for compliance with the EU AI Act, including configuration management, testing protocols, and post-market plan.
Arkova
QMS document hierarchy with cryptographic version control. Each policy revision creates a new anchored receipt — the QMS history is independently auditable.
Transparency (Article 50)
Requirement
Users must be informed when interacting with AI. AI-generated content (deepfakes, synthetic media) must be labeled. Documentation of disclosures to data subjects.
Arkova
Anchored disclosure attestations: when, to whom, what was disclosed. Verifiable proof that transparency obligations were met at the time required.
What an EU AI Act audit asks for
At most enterprises, each of these evidence categories (risk registers, dataset attestations, technical documentation, operational logs, oversight records, QMS revisions, disclosure records) is produced by 3–7 different tools: MLOps, data catalog, GRC, e-signature, document management, ticketing. The hard part is not generating the evidence; it is assembling a coherent, audit-ready evidence package that survives vendor transitions. That is the problem Arkova exists to solve.
If you're deploying high-risk AI in the EU and want evidence that anchors each risk-management cycle independently of your file system, we'd like to discuss an early-access pilot.
Arkova is in private beta. Features described on this page are being built and refined with pilot customers right now. Some controls and integrations are live today; others are in active development. Talk to us about the parts most relevant to your workload.
Request Early Access
Or read the State of Compliance in 2026 for the broader regulatory picture.