🇪🇺 EU AI Act — Annex III, Articles 10–16 Obligations

The Peer Assessment Platform
Built for EU AI Act Compliance

ChallengeMe acknowledges its High-Risk classification under the EU AI Act — and has already started the conformity work. Every obligation documented. Every deadline tracked.

🛡️
GDPR Compliant
🇪🇺
EU AI Act Ready
🏛️
Trusted by 90+ Institutions
📍
European Company, Founded 2017
⚠️

HIGH-RISK AI System — Annex III Classification

Under the EU AI Act, AI systems used to evaluate learning outcomes and assess students are classified as High-Risk under Annex III, Point 3(b). This applies directly to peer assessment platforms like ChallengeMe. High-risk systems face the most stringent obligations — technical documentation, human oversight, bias testing, FRIA, and mandatory conformity assessment. ChallengeMe is addressing all of them.

Compliance Timeline
August 2, 2026 — The Deadline That Matters

High-risk AI system obligations under Articles 10–16 become enforceable on August 2, 2026. ChallengeMe is building compliance now — not at the last minute.

Feb 2025
Prohibited AI Practices Ban
Effective
Aug 2025
GPAI Model Rules
In force
Mar 2026
You Are Here
5 months to deadline
Q2 2026
FRIA Completed
ChallengeMe target
Aug 2, 2026
High-Risk Obligations Enforceable
EU AI Act Art. 10–16
~144 days

Until High-Risk Obligations Are Enforceable

Institutions that deploy non-compliant high-risk AI tools after Aug 2, 2026 face compliance exposure of their own as deployers. Procurement decisions should happen now — not after the deadline.

Our Classification
Why We're High-Risk — And Why That's Good News

Most vendors won't tell you this. We will.

⚠️ HIGH-RISK — Annex III, Point 3(b)

ChallengeMe Assesses Learning Outcomes

The EU AI Act classifies AI systems in education under Annex III, Point 3. Point 3(b) specifically covers AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process (access and admission decisions fall under Point 3(a)). ChallengeMe's AI-assisted rubric evaluation, automated scoring, and peer calibration algorithms all fall within this scope.

Annex III, Point 3(b): "AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in education and vocational training institutions at all levels." — Regulation (EU) 2024/1689 (EU AI Act), Annex III

Why this is good news for institutions: A vendor that acknowledges High-Risk status has done the legal analysis. Vendors who claim their AI tools are "low-risk" in education either haven't read the Act or are hoping you haven't. ChallengeMe's transparency means you can verify our compliance work instead of taking a "low-risk" claim on faith.

What We're Building
Four Compliance Pillars

The EU AI Act's high-risk obligations aren't checkbox items. Here's exactly what ChallengeMe is delivering.

📋

Technical Documentation

Article 11 requires complete technical documentation before market placement. We document our AI model architecture, training data, intended purpose, performance benchmarks, and risk mitigation measures.

⏳ In Progress
🔍

Bias Testing & Data Governance

Article 10 mandates training data governance and bias auditing. We test rubric scoring across demographic groups, flag systematic scoring gaps, and maintain audit logs of AI-assisted grades.

⏳ In Progress
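As an illustration only — the function, group labels, and threshold below are hypothetical and are not ChallengeMe's actual methodology — a demographic score-gap check of the kind described above can be sketched as:

```python
from statistics import mean

# Hypothetical sketch of a demographic score-gap check (Art. 10-style bias audit).
# Group labels, scores, and the 5-point threshold are invented for illustration.
def flag_score_gaps(scores_by_group, max_gap=5.0):
    """Return group pairs whose mean rubric scores differ by more than max_gap."""
    means = {group: mean(scores) for group, scores in scores_by_group.items()}
    groups = sorted(means)
    return [
        (a, b, round(abs(means[a] - means[b]), 2))
        for i, a in enumerate(groups)
        for b in groups[i + 1:]
        if abs(means[a] - means[b]) > max_gap
    ]

gaps = flag_score_gaps({
    "group_a": [78, 82, 75, 80],
    "group_b": [70, 68, 72, 71],
})
print(gaps)  # → [('group_a', 'group_b', 8.5)]
```

A real audit would also test statistical significance and control for confounders; this sketch only shows the shape of the "flag systematic scoring gaps" step.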
🧾

Fundamental Rights Impact Assessment (FRIA)

Article 27 requires certain deployers — including bodies governed by public law, which covers most higher education institutions — to conduct a FRIA. We provide a pre-built FRIA template for higher education institutions, co-completed with ChallengeMe's technical team — reducing your institutional burden dramatically.

⏳ In Progress

Conformity Assessment & CE Mark

Articles 43–49 require conformity assessment for high-risk systems. We are undergoing internal conformity assessment with documentation sufficient to support the EU Declaration of Conformity and CE marking pathway.

⏳ Q2 2026
Article-Level Obligations
What the EU AI Act Requires of Us

Articles 9–16 define specific obligations for providers of high-risk AI systems; Article 27 adds a deployer-side FRIA that we support. Here's our status.

Art. 9 — Risk Management System

Continuous risk identification, testing, and mitigation lifecycle

Art. 10 — Data Governance

Training data quality, relevance, and bias auditing procedures

Art. 11 — Technical Documentation

Complete Annex IV documentation package for regulators

Art. 12 — Record Keeping

Automatic logging of AI decisions with sufficient detail for audit

Art. 13 — Transparency

Clear disclosure to students when AI assessment is used

Art. 14 — Human Oversight

Instructor override controls and AI decision review workflows

Art. 15 — Accuracy & Robustness

Accuracy benchmarks, adversarial testing, and error rate disclosure

Art. 16 — Provider Obligations (incl. Registration)

Registration in the EU database for high-risk AI systems (per Articles 49 and 71) prior to market placement, including provider details, intended purpose, and conformity declaration reference

Art. 27 — FRIA Support

Pre-built FRIA template and technical support for institutions
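To make the record-keeping obligation (Art. 12) above concrete: here is a minimal sketch of what one logged AI-assisted scoring decision might contain. All field names are illustrative assumptions, not ChallengeMe's actual schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical audit-log record for one AI-assisted scoring decision.
# Field names are illustrative; they are not ChallengeMe's actual schema.
@dataclass
class AIDecisionRecord:
    submission_id: str
    rubric_criterion: str
    ai_suggested_score: float
    final_score: float
    overridden_by_instructor: bool
    model_version: str
    timestamp: str

record = AIDecisionRecord(
    submission_id="sub-001",
    rubric_criterion="argument-structure",
    ai_suggested_score=7.5,
    final_score=8.0,
    overridden_by_instructor=True,
    model_version="rubric-eval-v2",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

The point of a record like this is that an auditor can reconstruct, for any grade, what the AI suggested, what a human decided, and which model version was involved.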

Market Landscape
How Competitors Compare on EU AI Act

We researched public compliance positions across all major peer assessment platforms. The results are stark.

Platforms were compared on high-risk acknowledgment, FRIA support, technical documentation, and bias testing. Public compliance positions:

ChallengeMe: ★ Most Transparent (full public commitment)
FeedbackFruits: Vague (GDPR mentions only)
Kritik: No public AI Act statement
Turnitin PeerMark: No public AI Act statement
Peerceptiv: No public AI Act statement

Research conducted March 2026. Based on publicly available documentation, terms of service, and published compliance statements.

For Procurement Officers & DPOs
What Your Institution Needs Before Aug 2026

EU institutions face shared accountability for high-risk AI deployed on campus. Here's what you need from any peer assessment vendor.

01

Vendor Technical Documentation Package

Your AI tool supplier must provide an Annex IV documentation package. Request this in writing from any shortlisted vendor.

02

Fundamental Rights Impact Assessment

Your institution must conduct a FRIA. A good vendor supplies a template and completes it jointly. A bad vendor has never heard of it.

03

Bias & Accuracy Disclosure

Article 15 requires declared accuracy metrics, and Article 10 requires bias examination. Ask vendors for their accuracy benchmarks and bias audit results across gender, nationality, and disability status.

04

Human Oversight Architecture

Every AI assessment decision must be overrideable by a human educator. Confirm the override workflow exists and is documented.

Trusted by forward-looking institutions
University A
Institution B
College C
University D
Institute E
Academy F
🇪🇺

Request a Compliance-Ready Demo

See ChallengeMe's EU AI Act documentation package, FRIA template, and human oversight architecture — in a 30-minute call tailored to your institution's procurement process.

GDPR Compliant EU AI Act Ready 90+ Institutions

For procurement officers, DPOs, and academic technology leads. No sales pressure — just facts.

Questions
FAQ — EU AI Act & ChallengeMe

The questions procurement and compliance teams ask most.

Is ChallengeMe genuinely High-Risk under the EU AI Act?
Yes. Annex III, Point 3 covers AI in education that determines or influences assessment outcomes. ChallengeMe uses AI for rubric suggestion, peer calibration, and scoring assistance — all of which constitute AI-assisted assessment of learning outcomes. We believe any vendor claiming their peer assessment AI is "low-risk" has not read the Act carefully.
When does ChallengeMe plan to complete conformity assessment?
We are targeting completion of internal conformity assessment and technical documentation package by Q2 2026 — well before the August 2, 2026 enforcement date. We will publish our Declaration of Conformity when the process is complete.
Does my institution also have obligations under the EU AI Act?
Yes. Institutions that deploy high-risk AI tools are "deployers" under the Act, with obligations under Article 26; deployers that are bodies governed by public law — which covers most universities — must also conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27. ChallengeMe provides a pre-built FRIA template and technical support to make this as straightforward as possible for your institution.
How does ChallengeMe handle human oversight requirements?
Article 14 requires that high-risk AI systems allow qualified humans to override AI outputs. In ChallengeMe, instructors can review, modify, or reject any AI-assisted score at any time. All AI interventions are clearly labeled, and a full decision audit log is available to instructors and administrators.
Is ChallengeMe GDPR-compliant in addition to EU AI Act?
Yes. ChallengeMe is hosted in the EU, processes student data under GDPR, offers Data Processing Agreements (DPAs) for institutions, and does not transfer student data outside the EEA without appropriate safeguards. EU AI Act compliance builds on top of existing GDPR compliance.
What evidence can ChallengeMe provide for our IT security review?
We provide: Technical Documentation (Annex IV format), Data Processing Agreement, AI system description with intended purpose scope, bias testing methodology and results summary, human oversight architecture documentation, and FRIA template. Request the full compliance package via the form above.
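The oversight pattern described in the FAQ above — the AI only suggests, a human decides, and every step is logged — can be sketched as follows. The function and its argument shapes are hypothetical, not ChallengeMe's actual API.

```python
# Hypothetical sketch of an Art. 14-style oversight flow: the AI output is
# only a suggestion, and a human decision is required to finalize the score.
def finalize_score(ai_suggestion, instructor_decision, audit_log):
    """instructor_decision is ("accept", None), ("modify", score), or ("reject", score)."""
    action, value = instructor_decision
    final = ai_suggestion if action == "accept" else value
    # Every decision is appended to the audit log, whether overridden or not.
    audit_log.append({
        "ai_suggestion": ai_suggestion,
        "action": action,
        "final_score": final,
    })
    return final

log = []
assert finalize_score(7.0, ("accept", None), log) == 7.0   # AI suggestion kept
assert finalize_score(7.0, ("modify", 8.5), log) == 8.5    # instructor override
print(len(log))  # → 2
```

The design choice this illustrates is that the human decision sits between the AI output and the stored grade, so an "accept" is itself a logged, reviewable action rather than a silent default.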
In-depth article
EU AI Act Compliance Guide for Universities (2026)
Full breakdown: what to audit, what to document, what ChallengeMe provides.
Read the full guide →