Evidence-based methodology developed through 14 years of analyzing UK university assessment practices
The IAC methodology was not created overnight. It emerged from 14 years of rigorous research into UK university marking standards, culminating in a breakthrough: an evidence-based explanation of why AI-generated content consistently fails.
Founded by Dr. Emma Richardson (University of Manchester) and Professor David Chen (LSE), IAC began as an eight-year research initiative to decode UK university marking criteria, work that produced the UKASF framework.
When ChatGPT launched in November 2022, we immediately began investigating its impact on UK higher education. Between December 2022 and September 2023, we conducted the UK AI Academic Writing Study.
The results were striking.
More importantly, we discovered that AI drafts fail for predictable, fixable reasons: not because they are "bad writing", but because they systematically violate specific UK assessment criteria.
By cross-referencing our UKASF research with the AI draft analysis, we identified 10 systematic failure patterns that cause AI essays to underperform. These are not general writing issues; they are specific violations of UK marking criteria.
Each failure point was validated by testing interventions on 350 volunteer student essays (January-August 2024):
1. Fabricated or mis-formatted references. AI invents fake sources or incorrectly formats real ones. UK markers spot this immediately and apply severe penalties, often an automatic fail.
2. Description instead of analysis. AI describes but does not evaluate, synthesize, or argue. UK essays require critical engagement with sources; description alone caps you at 50%.
3. No clear thesis. AI produces generic statements instead of specific, arguable positions. UK marking schemes explicitly require clear thesis statements.
4. AI-detection flags. High AI detection scores automatically flag work for investigation, regardless of quality. This alone can result in penalties or failure.
5. Wrong essay structure. AI follows the five-paragraph American essay format, not UK academic essay conventions (signposting, paragraph development, integrated argumentation).
6. Ignoring the brief. AI does not understand your specific assignment brief or marking criteria. It writes generically, missing key assessment requirements.
7. Unanchored quotations. AI drops in quotes without explaining their significance or linking them to the argument. UK essays require evidence → analysis → connection.
8. Vague vocabulary. AI uses vague academic vocabulary instead of discipline-specific terminology that demonstrates understanding.
9. Weak conclusions. AI restates the introduction instead of synthesizing arguments, acknowledging limitations, and suggesting implications.
10. Generic analysis. AI produces generic analysis that markers have seen hundreds of times. First Class work requires original interpretation and insight.
To validate our framework, we conducted a controlled study with 350 volunteer students from 18 UK universities (January-August 2024).
Students received one of three levels of intervention:
- A report showing what to fix
- A report plus track-changed edits and model paragraphs
- A full professional rewrite to UK standards
We're not proofreaders; we're UK assessment specialists.
Dr. Emma Richardson • University of Manchester • PhD in Higher Education Assessment
14 years researching UK marking standards. Led the development of the UKASF framework and the 2022-2024 AI writing studies.
Professor David Chen • London School of Economics • 22 years of teaching experience
Former external examiner for 8 Russell Group universities. Specializes in assessment criteria design and academic writing pedagogy.
Let our evidence-based methodology transform your AI draft into work that meets UK academic standards.