Assessment, Grading & Feedback
AI can grade certain types of student work - multiple choice, short answer, and increasingly, longer written responses - providing feedback faster than most teachers can manage. Automated essay scoring systems assess structure, coherence, grammar, and, to some degree, content quality. For teachers overwhelmed by marking workloads, this offers genuine relief. AI can also provide formative feedback during the learning process, helping students improve their work before final submission. Plagiarism detection tools have been joined by AI-detection tools that attempt to identify AI-generated student work, though these are unreliable and raise problems of their own.

The deeper challenge is that assessment is not just about measuring performance: it is about understanding what a student has learned, providing meaningful feedback, and making judgements that affect educational trajectories. AI can handle the mechanical aspects of grading, but the pedagogical judgement about which feedback will actually help a particular student learn is far more subtle.

There are also fairness concerns. Automated grading systems can encode biases, and their limitations may disadvantage students who write in non-standard ways or come from different cultural backgrounds.
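To make the "mechanical aspects of grading" concrete, here is a minimal sketch of rubric-based short-answer scoring. The rubric, keywords, and point values are all hypothetical illustrations; real automated scoring systems use trained statistical or neural models, not keyword matching, and this toy version shows exactly the kind of crudeness that can disadvantage students who phrase correct answers in non-standard ways.

```python
# Hypothetical example: award points for each rubric keyword found
# in a student's response. This is a deliberately simplistic sketch,
# not how production essay-scoring systems work.

def score_short_answer(response: str, rubric: dict) -> float:
    """Sum the points for every rubric keyword present in the response."""
    text = response.lower()
    return sum(points for keyword, points in rubric.items() if keyword in text)

# Hypothetical rubric for a biology question.
rubric = {"photosynthesis": 2.0, "chlorophyll": 1.0, "sunlight": 1.0}

answer = "Plants use sunlight and chlorophyll during photosynthesis."
print(score_short_answer(answer, rubric))  # 4.0

# A correct answer phrased differently scores lower - the fairness problem:
paraphrase = "Leaves turn light into chemical energy using green pigment."
print(score_short_answer(paraphrase, rubric))  # 0.0
```

The second example illustrates the limitation discussed above: a student who understands the concept but avoids the expected vocabulary is penalised by the mechanism itself, which is why pedagogical judgement cannot simply be delegated to the scorer.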