GradGenie
AI-Powered Exam Grading for Irish Leaving Certificate
A multi-agent AI system that grades student essays through a collaborative conversation between AI models.
The Problem
60,000+ Irish students sit Leaving Certificate exams annually. Grading is:
- Slow: weeks of manual marking by thousands of human examiners
- Expensive: coordinating and paying examination teams carries a massive cost
- Inconsistent: variation between individual markers affects fairness
Traditional AI grading uses single models that miss nuance and introduce systematic bias.
The Solution
A five-stage agentic workflow in which OpenAI and Anthropic models debate each answer:
1. Initial Grade (OpenAI GPT): grade against the official LC rubric
2. Bias Review (Anthropic Claude): check for known grading biases
3. Grade Revision (OpenAI GPT): reconsider based on bias feedback
4. Teacher Compare (Anthropic Claude): align with real teacher examples
5. Final Calibration (OpenAI GPT): final decision plus confidence score
- Low-confidence answers (< 0.85) route to human review
- High-confidence answers are auto-graded with a full audit trail
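The five stages and the confidence routing above can be sketched as a single pipeline. This is a minimal illustration, not the project's actual code: `call_model` is a hypothetical stand-in for real OpenAI/Anthropic API calls, and the canned grade and confidence values are invented so the sketch runs end to end.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # below this, the answer routes to human review

# The five stages and the model family assumed to run each one.
STAGES = [
    ("initial_grade",     "openai",    "Grade against the official LC rubric"),
    ("bias_review",       "anthropic", "Check for known grading biases"),
    ("grade_revision",    "openai",    "Reconsider based on bias feedback"),
    ("teacher_compare",   "anthropic", "Align with real teacher examples"),
    ("final_calibration", "openai",    "Final decision plus confidence score"),
]

@dataclass
class GradingDecision:
    grade: float
    confidence: float
    transcript: list = field(default_factory=list)  # full audit trail
    needs_human_review: bool = False

def call_model(provider: str, stage: str, context: dict) -> dict:
    """Hypothetical stand-in for a real OpenAI or Anthropic API call.

    A production version would send the accumulated conversation to the
    provider's chat API; here we return a canned response so the sketch runs.
    """
    return {"stage": stage, "provider": provider,
            "grade": context.get("grade", 72), "confidence": 0.91}

def grade_answer(answer: str, rubric: str) -> GradingDecision:
    context = {"answer": answer, "rubric": rubric}
    transcript = []
    for stage, provider, _description in STAGES:
        reply = call_model(provider, stage, context)
        transcript.append(reply)           # log every turn of the debate
        context["grade"] = reply["grade"]  # later stages see earlier output
    final = transcript[-1]
    return GradingDecision(
        grade=final["grade"],
        confidence=final["confidence"],
        transcript=transcript,
        needs_human_review=final["confidence"] < CONFIDENCE_THRESHOLD,
    )
```

Each stage sees the output of the previous one, which is what turns five separate API calls into a conversation rather than five independent gradings.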
Technical Innovations
Two Models Debating
Rather than a single model scoring in isolation, two models hold a structured conversation about each piece of student work, catching biases and calibrating against real teacher examples.
Question Type Grouping
Answers are grouped by question type (personal_reflection, text_analysis, writing_evaluation), so teacher examples can be retrieved by type rather than requiring an exact question match.
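The type-based retrieval described above amounts to indexing teacher examples by question type. The sketch below assumes a simple in-memory index; the example data and function names are illustrative, with only the question-type labels taken from the project description.

```python
from collections import defaultdict

# Hypothetical teacher-graded examples; only the question_type labels
# come from the project description, the rest is invented for illustration.
TEACHER_EXAMPLES = [
    {"question_type": "personal_reflection", "answer": "...", "grade": 68},
    {"question_type": "text_analysis",       "answer": "...", "grade": 74},
    {"question_type": "personal_reflection", "answer": "...", "grade": 81},
]

def index_by_type(examples):
    """Group teacher examples by question type, not by exact question."""
    index = defaultdict(list)
    for example in examples:
        index[example["question_type"]].append(example)
    return index

def retrieve_examples(index, question_type, k=2):
    """Fetch up to k teacher examples sharing the same question type."""
    return index.get(question_type, [])[:k]
```

A new question of type `personal_reflection` can then draw on any graded reflection essay, even if that exact question has never been marked before.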
Confidence-Based Routing
Psychometric confidence thresholds determine whether an answer is auto-graded or routed to human review.
Complete Awareness
The AI receives the full source text, marking rubric, indicative material, and teacher examples for every answer.
Conversation Logging
A full audit trail of the AI's decision-making supports transparency and appeals.
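An audit trail like this is essentially an append-only log of every AI turn, serialisable so it can be replayed during an appeal. A minimal sketch, with hypothetical class and field names:

```python
import json
import time

class ConversationLog:
    """Append-only record of each AI turn in the grading conversation."""

    def __init__(self, answer_id: str):
        self.answer_id = answer_id
        self.turns = []

    def record(self, stage: str, provider: str, message: str) -> None:
        """Log one turn: which stage, which model, and what it said."""
        self.turns.append({
            "stage": stage,
            "provider": provider,
            "message": message,
            "timestamp": time.time(),
        })

    def to_json(self) -> str:
        """Serialise the whole conversation for storage or an appeal."""
        return json.dumps({"answer_id": self.answer_id, "turns": self.turns})
```

Because turns are only ever appended, the log reflects the order of the debate exactly, which is what makes a grading decision explainable after the fact.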
Database Design
Each graded answer is stored with:
- API calls per answer
- Processing time
- Confidence score
Every grading decision is logged and explainable.
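One way to model the per-answer record described above is a single flat row per graded answer. This is a sketch of a plausible schema, not the project's actual one; all field names are illustrative.

```python
from dataclasses import dataclass, asdict

@dataclass
class GradingRecord:
    """One row per graded answer; field names are hypothetical."""
    answer_id: str
    api_calls: int         # API calls made while grading this answer
    processing_ms: int     # end-to-end processing time in milliseconds
    confidence: float      # final calibrated confidence score
    auto_graded: bool      # False when routed to human review
    transcript_json: str   # serialised AI conversation, for explainability

def as_row(record: GradingRecord) -> dict:
    """Flatten a record for insertion into a relational table."""
    return asdict(record)
```

Keeping the serialised transcript alongside the numeric metrics means a single lookup retrieves both the decision and its full justification.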
Delivered by Akella inMotion as part of a team effort.