At some point in the last two years, most universities will have convened a working group on generative AI. Some will have produced policy frameworks. Many will have run staff development sessions. A good number will have updated their academic integrity guidelines, published guidance for students, or commissioned an internal review. All of this activity is genuine, and some of it is genuinely useful.
Most universities offer their academic staff some form of teaching development. A workshop on active learning. An induction session for new starters. A seminar series that appears in the calendar each year. The intentions behind these programmes are genuine. The evidence that they change teaching practice is, on the whole, thin.
Conversations about educational technology often orbit around efficiency, but the rapid rise of generative AI has forced universities into a long-overdue reckoning with a much deeper question: what exactly are we doing when we provide feedback? If feedback is merely the transfer of corrective information, then large language models have already won. They can parse essays, spot logical flaws, and debug code with astonishing speed. However, reducing feedback to a glorified diagnostic tool misses the fundamental reality of how university students actually learn.
Three years in. That's where we are now with generative AI in higher education. ChatGPT's arrival in late 2022 feels like both yesterday and a lifetime ago. The initial panic ("How do we AI-proof assessment?") has given way to something more interesting, more nuanced, and dare I say it, more hopeful.
Step into the lobby of almost any university, and you will likely find a mission statement etched onto the glass façade. It usually speaks of "excellence", "innovation", and "global citizenship". Yet, a mere few hundred metres away in a lecture hall, the reality often feels worlds apart from those lofty aspirations.
The landscape of higher education is increasingly defined by complexity. As institutions navigate financial pressures, technological disruption, and shifting student demographics, the nature of academic leadership is being actively renegotiated. While strategic plans frequently emphasise "transformation" and "agility", the operational reality often reveals a different trajectory: one characterised by intensified management and data-driven oversight.
Higher education institutions create remarkable teaching innovations. Early adopters experiment, grants sometimes fund pilots, and conferences celebrate successes. Yet a frustratingly consistent lifecycle unfolds: innovations remain trapped in their local silos. They flourish within the boundaries of specific modules or departments, but often evaporate as soon as the pilot funding is exhausted or the pioneering academic moves on.
Conversations about technology in higher education often swing between two extremes. On one side, there is immense optimism that digital tools will revolutionise teaching. On the other, there is a wary scepticism about whether technology adds any real value to the student experience. A more balanced view suggests the reality is far more nuanced: the success of digital learning is inextricably linked to the human element of education.
When we discuss generative AI in education, the conversation often defaults to technical skills. But there is far more to this shift than 'literacy' or tool mastery. The further we go, the clearer it becomes that the challenge is also deeply human. It gets personal, it gets messy, and for many educators, it is an upheaval that strikes at the heart of their professional identity.