ChallengeMe acknowledges its High-Risk classification under the EU AI Act — and has already started the conformity work. Every obligation documented. Every deadline tracked.
High-risk AI system obligations under Articles 9–16 become enforceable on August 2, 2026. ChallengeMe is building compliance now, not at the last minute.
Institutions procuring AI tools after Aug 2, 2026 face liability if those tools are non-compliant. Procurement decisions should happen now — not after the deadline.
Most vendors won't tell you this. We will.
The EU AI Act classifies AI systems in education under Annex III, Point 3. Point 3(b) covers AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in education and vocational training institutions. ChallengeMe's AI-assisted rubric evaluation, automated scoring, and peer calibration algorithms all fall within this scope.
Why this is good news for institutions: A vendor that acknowledges High-Risk status has done the legal analysis. Vendors who claim their AI tools are "low-risk" in education either haven't read the Act or are hoping you haven't. ChallengeMe's transparency means your procurement and legal teams can verify compliance instead of taking it on faith.
The EU AI Act's high-risk obligations aren't checkbox items. Here's exactly what ChallengeMe is delivering.
Article 11 requires complete technical documentation before market placement. We document our AI model architecture, training data, intended purpose, performance benchmarks, and risk mitigation measures. Status: ⏳ In Progress.
Article 10 mandates training data governance and bias auditing. We test rubric scoring across demographic groups, flag systematic scoring gaps, and maintain audit logs of AI-assisted grades. Status: ⏳ In Progress.
Article 27 requires deployers (institutions) to conduct a FRIA. We provide a pre-built FRIA template for higher education institutions, co-completed with ChallengeMe's technical team, dramatically reducing your institutional burden. Status: ⏳ In Progress.
Articles 43–49 require conformity assessment for high-risk systems. We are undergoing internal conformity assessment, with documentation sufficient to support the EU Declaration of Conformity and CE marking pathway. Status: ⏳ Q2 2026.
Articles 9–16 define the core provider obligations for high-risk AI systems. Here's our status on those and the related registration and FRIA provisions.
- Article 9 (Risk management): Continuous risk identification, testing, and mitigation lifecycle
- Article 10 (Data and data governance): Training data quality, relevance, and bias auditing procedures (a minimal sketch of such a check follows this list)
- Article 11 (Technical documentation): Complete Annex IV documentation package for regulators
- Article 12 (Record-keeping): Automatic logging of AI decisions with sufficient detail for audit
- Article 13 (Transparency): Clear disclosure to students when AI assessment is used
- Article 14 (Human oversight): Instructor override controls and AI decision review workflows
- Article 15 (Accuracy and robustness): Accuracy benchmarks, adversarial testing, and error rate disclosure
- Article 49 (Registration): Registration in the EU database for high-risk AI systems prior to market placement, including provider details, intended purpose, and conformity declaration reference
- Article 27 (Deployer support): Pre-built FRIA template and technical support for institutions
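The Article 10 and Article 15 items above imply a concrete audit step: compare AI-assisted rubric scores across demographic groups and flag gaps beyond a tolerance. The following is a minimal sketch of what such a check could look like in Python; the field names (`group`, `ai_score`), the score scale, and the 0.05 threshold are illustrative assumptions, not ChallengeMe's production pipeline.

```python
from collections import defaultdict
from statistics import mean

# Illustrative AI-assisted rubric scores on an assumed 0.0-1.0 scale.
# Field names and the grouping dimension are hypothetical.
scores = [
    {"submission_id": "s1", "group": "A", "ai_score": 0.82},
    {"submission_id": "s2", "group": "B", "ai_score": 0.74},
    {"submission_id": "s3", "group": "A", "ai_score": 0.79},
    {"submission_id": "s4", "group": "B", "ai_score": 0.71},
]

GAP_THRESHOLD = 0.05  # assumed tolerance for mean-score gaps between groups

def scoring_gaps(records):
    """Return per-group mean scores and the spread between highest and lowest."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r["ai_score"])
    means = {g: mean(v) for g, v in by_group.items()}
    spread = max(means.values()) - min(means.values())
    return means, spread

means, spread = scoring_gaps(scores)
print(f"group means: {means}")
if spread > GAP_THRESHOLD:
    print(f"FLAG: systematic scoring gap of {spread:.3f} exceeds threshold")
```

In practice a check like this would run per rubric criterion, and its outputs are the kind of bias audit results institutions can request from any vendor (see the procurement checklist below).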
We researched public compliance positions across all major peer assessment platforms. The results are stark.
| Platform | Public Compliance Position | High-Risk Acknowledgment | FRIA Support | Technical Docs | Bias Testing |
|---|---|---|---|---|---|
| ChallengeMe (★ Most Transparent) | Full public commitment | ✓ | ✓ | ✓ | ✓ |
| FeedbackFruits | Vague GDPR mentions only | — | — | — | — |
| Kritik | No public AI Act statement | — | — | — | — |
| Turnitin PeerMark | No public AI Act statement | — | — | — | — |
| Peerceptiv | No public AI Act statement | — | — | — | — |
Research conducted March 2026. Based on publicly available documentation, terms of service, and published compliance statements.
EU institutions face shared accountability for high-risk AI deployed on campus. Here's what you need from any peer assessment vendor.
Your AI tool supplier must provide an Annex IV documentation package. Request this in writing from any shortlisted vendor.
Your institution must conduct a FRIA. A good vendor supplies a template and completes it jointly. A bad vendor has never heard of it.
Article 15 requires accuracy metrics. Ask vendors for their bias audit results across gender, nationality, and disability status.
Every AI assessment decision must be overrideable by a human educator. Confirm the override workflow exists and is documented.
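Structurally, the oversight and record-keeping requirements come down to one guarantee: every AI-generated grade is stored as a reviewable record that an instructor can override, with both the AI's score and the human decision retained for audit. The sketch below illustrates that guarantee in Python; the class and field names (`AssessmentDecision`, `ai_score`, `overridden_by`, etc.) are hypothetical, not ChallengeMe's actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AssessmentDecision:
    """One AI-assisted grading decision, kept overrideable and auditable.

    Field names are illustrative assumptions, not a real vendor schema."""
    submission_id: str
    rubric_criterion: str
    ai_score: float                      # score proposed by the AI
    ai_rationale: str                    # disclosed to the student (transparency)
    final_score: Optional[float] = None  # set by the instructor, never by the AI
    overridden_by: Optional[str] = None
    audit_log: list = field(default_factory=list)  # record-keeping trail

    def instructor_review(self, instructor_id: str, score: float, note: str = ""):
        """Record a human decision; the AI score is preserved, not replaced."""
        self.final_score = score
        self.overridden_by = instructor_id if score != self.ai_score else None
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "instructor": instructor_id,
            "ai_score": self.ai_score,
            "final_score": score,
            "note": note,
        })

# Usage: the AI proposes, the instructor decides.
decision = AssessmentDecision("s1", "argument_quality", ai_score=0.78,
                              ai_rationale="Thesis supported by two cited sources.")
decision.instructor_review("instructor_42", score=0.85,
                           note="Stronger than the rubric suggests.")
print(decision.overridden_by, decision.final_score)
```

Whatever the vendor's actual schema, ask to see the documented equivalent of this record and the workflow instructors use to review and override it.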
See ChallengeMe's EU AI Act documentation package, FRIA template, and human oversight architecture — in a 30-minute call tailored to your institution's procurement process.
The questions procurement and compliance teams ask most.