Compliance · EU AI Act

Private Beta · Building

EU AI Act compliance, on a substrate auditors can verify.

High-risk AI obligations become applicable on 2 August 2026. Arkova anchors your risk management, data governance, technical documentation, and audit logs to a public ledger, so an auditor or regulator can verify every claim without trusting your file system.

What it is

The first comprehensive horizontal AI law.

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework specifically governing artificial intelligence. It takes a risk-based approach: AI systems are classified into four tiers based on their potential to cause harm, and obligations scale accordingly.

The law applies extraterritorially. If you place an AI system on the EU market, or its output affects people in the EU, you fall within scope regardless of where your company is incorporated. Penalties for prohibited-AI violations reach up to €35 million or 7% of global annual turnover, whichever is higher; most other violations carry up to €15 million or 3%.

The act entered into force on 1 August 2024 and phases in through 2 August 2027. The prohibition tier and General-Purpose AI obligations are already applicable. The high-risk tier — where most enterprise AI deployments will land — becomes applicable on 2 August 2026.

Implementation timeline

Five dates that determine your obligations.

1 Aug 2024 · In effect

Entered into force

EU AI Act published in the Official Journal. Phased implementation begins.

2 Feb 2025 · In effect

Prohibitions applicable

Unacceptable-risk AI systems banned: social scoring, manipulative AI, real-time biometric ID in public (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, untargeted facial-image scraping.

2 Aug 2025 · In effect

GPAI obligations applicable

General-Purpose AI model providers must comply: technical documentation, copyright policy, training-data transparency, systemic-risk model evaluations.

2 Aug 2026 · Next deadline

High-risk obligations applicable

High-risk AI systems must comply with risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity requirements.

2 Aug 2027 · Upcoming

Annex I high-risk extension

Obligations become applicable for high-risk AI systems used as safety components of products covered by EU sectoral legislation listed in Annex I (medical devices, machinery, toys, etc.). Full EU AI Act compliance required across all in-scope deployments.

Risk classification

Four tiers determine what you owe.

Unacceptable risk

Examples

Social scoring · subliminal manipulation · real-time biometric ID in public · workplace and school emotion recognition · untargeted facial-image scraping

Obligation

Banned. No deployment in the EU permitted.

High risk

Examples

Annex III: biometric ID, critical infrastructure, education access, employment, essential services (creditworthiness, public benefits), law enforcement, migration, justice, democratic processes. Annex I: AI as a safety component of regulated products (medical devices, machinery, toys).

Obligation

Risk management system · data governance · technical documentation · record-keeping · transparency · human oversight · accuracy + robustness + cybersecurity · post-market monitoring · conformity assessment + CE marking. Most onerous tier.

Limited risk

Examples

Chatbots, emotion-recognition systems, biometric categorization, deepfakes (text, audio, video).

Obligation

Transparency obligations: users must be informed they are interacting with AI. Generated content must be labeled as AI-generated.

Minimal risk

Examples

Spam filters, AI in video games, inventory optimization, recommendation engines below the high-risk threshold.

Obligation

No mandatory obligations. Voluntary codes of conduct encouraged.

Most enterprise AI deployments land in the high-risk tier, and the high-risk tier is where the evidence-and-audit burden sits. The rest of this page focuses on the high-risk obligations Arkova helps you anchor.

Who's in scope

If you place AI on the EU market, you are covered.

The EU AI Act applies to providers (you build or rebrand an AI system), deployers (you use an AI system in your operations), importers, distributors, and authorized representatives. It applies regardless of where you are incorporated, if any of these conditions are met:

  • You place an AI system or General-Purpose AI model on the EU market
  • You put an AI system into service in the EU
  • You are a provider or deployer outside the EU but the AI system's output is used in the EU

In practice this captures most US, UK, APAC, and LATAM enterprises operating globally. The extraterritorial scope mirrors GDPR.

How Arkova maps to high-risk obligations

Evidence that does not depend on trust in your vendors.

High-risk AI obligations require continuous documentation. Arkova does not replace your existing AI governance, MLOps, or risk-management tools — it sits next to them and anchors their output to a public ledger so a regulator, auditor, or counterparty can verify each claim independently.

Article 9 — Risk management system

Requirement

Continuous, iterative risk management process across the AI system lifecycle. Documented identification, analysis, evaluation, and mitigation of risks.

Arkova

Cryptographically anchored risk-assessment records with immutable timestamps. Each risk-management cycle is anchored, so an auditor can reconstruct the exact state of your risk register at any review date.

Article 10 — Data governance

Requirement

Training, validation, and test datasets must be relevant, representative, free of errors, and complete. Documentation of data provenance, gathering processes, and data preparation.

Arkova

Anchored data-source attestations with cryptographic fingerprints of dataset versions. Independent verification that the dataset claimed in your audit is the dataset actually used.
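Anchoring a dataset attestation starts with a deterministic fingerprint of the dataset version. A minimal sketch of one way such a fingerprint could be computed (the function name and file layout are illustrative, not Arkova's actual API):

```python
import hashlib

def dataset_fingerprint(files: dict[str, bytes]) -> str:
    """Compute one SHA-256 fingerprint over a dataset version.

    Files are hashed in sorted path order so the result does not
    depend on filesystem iteration order. Hypothetical helper, for
    illustration only.
    """
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode())                       # bind content to its path
        h.update(hashlib.sha256(files[path]).digest())
    return h.hexdigest()

v1 = dataset_fingerprint({"train.csv": b"a,b\n1,2\n", "test.csv": b"a,b\n3,4\n"})
v2 = dataset_fingerprint({"train.csv": b"a,b\n1,9\n", "test.csv": b"a,b\n3,4\n"})
assert v1 != v2  # any change to any file changes the fingerprint
```

Because the fingerprint is deterministic, an auditor who recomputes it over the dataset you hand over can confirm it matches the value anchored at training time.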

Article 11 + Annex IV — Technical documentation

Requirement

Comprehensive technical documentation covering system design, development methodology, training data, validation procedures, performance metrics, and post-market monitoring.

Arkova

Versioned, anchored technical documentation. Every revision is timestamped. Auditors verify that the version provided matches the version reviewed at any prior date — without trusting your file system.
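The auditor-side check reduces to recomputing a hash and comparing it to the anchored value. A bare-bones sketch (a real anchoring service would also commit a timestamp and a ledger inclusion proof; these function names are hypothetical):

```python
import hashlib

def anchor(document: bytes) -> str:
    """Value that would be anchored to the ledger at publication time."""
    return hashlib.sha256(document).hexdigest()

def verify(document: bytes, anchored_hash: str) -> bool:
    """Auditor-side check: does the document supplied today match the
    version whose hash was anchored at the review date?"""
    return hashlib.sha256(document).hexdigest() == anchored_hash

receipt = anchor(b"Annex IV technical documentation, rev 3")
assert verify(b"Annex IV technical documentation, rev 3", receipt)
assert not verify(b"Annex IV technical documentation, rev 4", receipt)
```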

Article 12 — Record-keeping (logging)

Requirement

Automatic event logging during system operation. Logs must enable traceability and post-market monitoring.

Arkova

Append-only audit log with cryptographic anchoring. Logs cannot be retroactively altered. Each event has an independently verifiable receipt.
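The tamper-evidence property of an append-only log comes from hash chaining: each entry commits to the hash of the one before it, so editing any past entry breaks every link after it. A minimal sketch under that assumption (not Arkova's implementation):

```python
import hashlib
import json

GENESIS = "0" * 64

def append(log: list[dict], event: dict) -> None:
    """Append an event, chained to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "event": event, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; returns False if any entry was altered."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append(log, {"ts": "2026-08-02T09:00Z", "type": "inference", "id": 1})
append(log, {"ts": "2026-08-02T09:01Z", "type": "override", "id": 2})
assert verify_chain(log)
log[0]["event"]["type"] = "edited"   # tamper with history
assert not verify_chain(log)
```

Periodically anchoring the latest chain hash to a public ledger is what lets a third party verify the whole log without trusting the operator's storage.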

Article 14 — Human oversight

Requirement

Documented human oversight measures, training of oversight personnel, and intervention protocols.

Arkova

Oversight events (intervention, override, escalation) anchored with operator identity and timestamp. Full chain of custody from AI decision to human review.
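An anchored oversight event is just a structured record whose hash is committed at the moment of review, binding operator, action, and timestamp together. A sketch with illustrative field names (not Arkova's schema):

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class OversightEvent:
    """Hypothetical record shape; field names are illustrative."""
    decision_id: str   # the AI decision under review
    operator: str      # identity of the human reviewer
    action: str        # "intervention" | "override" | "escalation"
    timestamp: str     # ISO 8601, fixed at anchoring time

def receipt(event: OversightEvent) -> str:
    """Value a ledger anchor would commit to for this event."""
    payload = json.dumps(asdict(event), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

e = OversightEvent("dec-4711", "j.doe", "override", "2026-08-02T10:15:00Z")
r = receipt(e)
# Any later change to operator, action, or timestamp yields a different receipt
assert receipt(OversightEvent("dec-4711", "j.doe", "override",
                              "2026-08-02T10:16:00Z")) != r
```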

Article 17 — Quality management system

Requirement

Documented QMS for compliance with the EU AI Act, including configuration management, testing protocols, and post-market plan.

Arkova

QMS document hierarchy with cryptographic version control. Each policy revision creates a new anchored receipt — the QMS history is independently auditable.

Articles 26 + 50 — Transparency

Requirement

Users must be informed when interacting with AI. AI-generated content (deepfakes, synthetic media) must be labeled. Documentation of disclosures to data subjects.

Arkova

Anchored disclosure attestations: when, to whom, what was disclosed. Verifiable proof that transparency obligations were met at the time required.

What an EU AI Act audit asks for

The seven evidence categories every high-risk system needs.

  1. Risk management documentation. The risk register at every milestone. Identification, analysis, evaluation, mitigation steps, residual-risk assessments. Versioned across the AI system's lifecycle.
  2. Data governance records. Training, validation, and test dataset provenance. Sampling decisions. Bias assessments. Data-preparation and labeling protocols. Data-subject rights handling for personal data.
  3. Technical documentation (Annex IV). Full system description, intended purpose, hardware, software, design choices, training methodology, performance metrics on relevant demographics, accuracy and robustness measurements, cybersecurity controls, and human-oversight measures.
  4. Automatic operation logs. Append-only event logs covering the AI system's operational period. Sufficient detail to enable post-market monitoring, incident reconstruction, and traceability of decisions.
  5. Conformity assessment records. Either internal control (Annex VI) or third-party assessment (Annex VII), depending on the system. EU declaration of conformity. CE marking. Notified-body certificates where applicable.
  6. Quality management system documentation. QMS scope, policies, procedures, configuration management, change-control records, test protocols, and post-market plan, all with audit trail.
  7. Transparency and human-oversight evidence. Disclosures made to deployers and end-users. Records of human-oversight interventions. Training of oversight personnel. Operator-error and override events.

Each of these categories is currently produced by 3–7 different tools at most enterprises (MLOps, data-catalog, GRC, e-signature, document management, ticketing). The hard part is not generating the evidence — it is assembling a coherent, audit-ready evidence package that survives vendor transitions. That is the problem Arkova exists to solve.

Get ready for 2 August 2026.

If you're deploying high-risk AI in the EU and want evidence that anchors each risk-management cycle independently of your file system, we'd like to discuss an early-access pilot.

Arkova is in private beta. Features described on this page are being built and refined with pilot customers right now. Some controls and integrations are live today; others are in active development. Talk to us about the parts most relevant to your workload.

Request Early Access

Or read the State of Compliance in 2026 for the broader regulatory picture.