On 21st May 2025, Edge hosted the second in our latest series of debates on the philosophy of vocational education. Chaired by Professor Chris Winch, the debate focused on the impact of emerging technologies on the workplace and within vocational education and training (VET).
Offering the first provocation of the debate, Professor Kevin Orr (Visiting Professor and Former Associate Dean, University of Huddersfield) introduced a healthy dose of scepticism around the role and expectations of AI in teaching and assessment. His primary concern was whether these technologies are genuinely enhancing pedagogical practice.
Despite the buzz around AI, Kevin argued that there is, so far, little solid peer-reviewed evidence to suggest it is significantly altering pedagogy. In his view, bold claims that it can replace teachers or significantly lighten workloads remain unproven. He also warned against any slide towards reimagining the teacher’s role as that of a mere technician, responsible for operating tools rather than engaging in person-to-person education. While celebrating the role that technology plays in supporting certain aspects of teaching – using AI to spot plagiarism, curate learning materials, or simulate hazardous tasks with virtual reality – he was clear that these applications should enhance, rather than replace, thoughtful, human-led interactions.
Kevin remained sceptical about what he called ‘unsubstantiated hopes from policymakers’ about the ability of AI to revolutionise teaching and learning. He noted that while tools like ChatGPT – which he admitted to using to create his provocation – are good at curating content, they echo online hype rather than offering any original critical perspective.
His larger concern was the growing influence of commercial interests. Citing Stanford’s 2022 AI Index, he pointed out that education is one of the top areas for AI investment, with much of that money going into intelligent tutoring systems. These systems often rely on standardised content and are designed to avoid ‘failure’ (in the pass/fail sense) – providing a version of learning that the researchers Holmes and Littlejohn have described as a naive, spoon-feeding approach: one that is “behaviorist or instructionist, and ignores more than 60 years of pedagogical research and development.”
Kevin was keen to stress that none of this was meant to suggest technology in teaching and learning is inherently bad – only that we need solid evidence of how effective these innovations really are.
He concluded with a wry comment he’d heard during his youth in Belfast: “When your boss uses the words ‘exciting’ and ‘opportunity’ in the same sentence, you know your life is about to get harder.” That, he suggested, might sum up the future for teachers navigating a wave of new technologies. Rather than making life easier, these tools risk increasing workloads while chipping away at professional autonomy. Although openness to the possibilities of new technology is essential, we must not lose sight of what matters most: the role of well-informed, dedicated teachers. Especially in VET, where students are often young and professionally inexperienced, it is the teacher, not the tool, that makes the difference. New technologies should earn their place by supporting human connection, not sidelining it.
The debate’s second provocation came from Chris Thomson (Programme Lead for Digital Practice, Jisc). He highlighted two challenges for VET relating to AI. Firstly, what kind of skills are learners really developing around these technologies? Secondly, how are educators and training organisations using AI to shape their practice?
He opened with a short anecdote about Rio Ferdinand, the former footballer turned podcast host. In a recent sponsored segment, Ferdinand and his co-host asked Gemini AI how to deal with someone who takes post-work football too seriously. The AI suggested having a calm conversation and recommending alternative outlets for that person’s aggression – advice that the hosts found surprisingly insightful. Chris was less impressed. The advice, he argued, was sensible but standard. What mattered more was the novelty of hearing a device sound human. If a person had said it, would they have been quite so entranced?
Generative AI, Chris pointed out, is designed to produce satisfying, familiar output. It gives us what we expect, which is a problem. He described how a recent Jisc internship ad led to a flood of AI-generated CVs and cover letters. They were neat, competent, and bland. As a recruiter, he found it impossible to connect with the applicants. They had squandered an opportunity to make a human impression by essentially handing responsibility over to a generic template. Although just one example, this highlights the negative impact AI can have when used indiscriminately.
The same concern extends into VET more broadly. Vocational learners must be capable of communicating with impact. But generative AI struggles with anything that requires individualism, authenticity or creative thinking. The World Economic Forum’s 2023 jobs report lists critical thinking, resilience, curiosity, and motivation among the key skills for the future – the usual suspects. But these things can’t be outsourced to an algorithm. The question is: how do we design and deliver curricula that embrace AI while helping learners develop individuality and voice?
On the education provider side, Chris acknowledged the appeal of AI tools that promise efficiency – automated feedback, adaptive platforms and so on. But he warned against dehumanising important aspects of the learning experience. Real learning is social, shaped through interactions, mistakes, and messy conversations. Over-automation risks robbing learners of the chance to develop essential core skills and relationships.
Chris closed by urging educators to think carefully about the indirect effects of these tools. What way of life and work are we modelling within VET? What expectations do these tools set for the future? AI might be capable of generating something competent and ordinary, but it’s up to us to make sure education still fosters the extraordinary.
The third and final provocation came from Patrick Craven (Director of Policy, City and Guilds). His focus was on balancing technological innovation with authentic interactions in learning and assessment. While innovation is often equated with technology, he argued – in contrast to the other speakers – that we shouldn’t see this as sidelining educators. On the contrary, technology can enhance tutor expertise. And that is needed: recent decades have seen a gradual erosion of adult education, and technology offers new ways to re-engage adult learners. It is always available, it is scalable, and it is intuitive to use. If thoughtfully deployed, it can support lifelong learning.
However, Patrick was not blind to the risks of automation, such as adaptive learning tools becoming too reliant on simplified forms of assessment. But even these, he suggested, can offer benefits in the right context. For instance, automated assessment can quickly identify where learners are struggling, allowing human educators to intervene. He gave the example of confidence-based assessment, where learners not only give an answer but also rate how certain they are that it is correct. This can provide a fuller picture and highlight the dangerous combination of high confidence in an incorrect answer, paving the way for swift human intervention.
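To make the mechanism concrete, here is a minimal sketch of how such flagging might work; the quiz items, the 0-to-1 confidence scale, and the 0.8 threshold are illustrative assumptions of ours, not details of any platform Patrick described.

```python
# A minimal sketch of confidence-based assessment flagging. The items,
# confidence scale, and threshold below are illustrative only.
from dataclasses import dataclass


@dataclass
class Response:
    question: str
    correct: bool
    confidence: float  # learner's self-rating: 0.0 (guessing) to 1.0 (certain)


def flag_for_intervention(responses: list[Response],
                          threshold: float = 0.8) -> list[Response]:
    """Return answers that were wrong but given with high confidence,
    the combination most in need of swift human follow-up."""
    return [r for r in responses if not r.correct and r.confidence >= threshold]


answers = [
    Response("Identify the live conductor", correct=False, confidence=0.9),
    Response("Select the correct fuse rating", correct=False, confidence=0.3),
    Response("State the first isolation step", correct=True, confidence=0.95),
]

# Only the confidently wrong answer is flagged; the low-confidence
# mistake can be left to ordinary feedback.
for r in flag_for_intervention(answers):
    print(f"Review with tutor: {r.question}")
```

The division of labour here mirrors Patrick’s point: automation handles the cheap triage, while the educator handles the intervention.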
Patrick also nodded to the growing use of simulation-based assessment. This is not new – the defence and utilities industries have long relied on simulations to expose people to situations that would otherwise be dangerous and costly. What is new is their fidelity: innovations like wearable tech and highly realistic graphics help improve the transfer of experience to the real world. While simulations are not a replacement for real-world experience, Patrick linked his argument to traditional VET frameworks that distinguish between more easily teachable ‘core’ tasks, commonly encountered in the workplace, and wider ‘range’ tasks that are less frequently encountered and harder to teach. Simulations provide an opportunity to cover this wider range of activities much more safely and effectively.
Ultimately, Patrick argued, all innovations must be anchored in trust – not just trust in the learner, but trust in the systems and institutions that validate their skills. Trust is fundamental in education, particularly when qualifications carry such weight in employment. Learners must believe their assessments are fair, and society must believe in the institutions awarding them.
Ending on a cautious note, Patrick said that while technological innovation is inevitable, AI’s knack for compiling what merely seems like common sense is a serious risk. If learners stop questioning received wisdom, they risk dulling their creativity and critical thinking – some of the very traits VET should be aiming to develop. Over-reliance on AI would be a disservice to this deeper purpose.
Breakout and discussion sessions
After these three thought-provoking contributions, we opened the floor to wider discussion. Unsurprisingly, there was broad consensus that technology should enable, not replace, teaching professionals. There was also scepticism about AI’s role as a cost-cutting silver bullet. But there was much more hope around how we might use this tech to free up teachers to do what they do best – teach.
As for how AI itself is applied in teaching, there was broad agreement that it is less about ‘teaching the tech’ and more about understanding how AI transforms the way people think. This means developing learners’ information literacy: VET curricula might need to include how to write effective AI prompts and then critically analyse the outputs, along the lines of the sketch below. A higher-level issue is how AI will transform work and relationships, not just within VET environments, but between clients and professionals in the workplace. All this must be at the centre of our thinking.
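By way of illustration, here is a minimal sketch of such a prompt-and-critique exercise; the `generate` callable stands in for whatever AI tool a provider actually uses, and the checklist questions are assumptions of ours rather than anything proposed in the discussion.

```python
# An illustrative prompt-literacy exercise: learners draft a prompt,
# inspect the model's output, then work through a critique checklist.
# `generate` is a hypothetical stand-in for any text-generation tool.
from typing import Callable

CRITIQUE_CHECKLIST = [
    "Does the output answer the question you actually asked?",
    "Which claims could you verify against a trusted source?",
    "What has the model left out or oversimplified?",
    "How would you rewrite the prompt to get a more specific answer?",
]


def run_exercise(prompt: str, generate: Callable[[str], str]) -> None:
    """Print the model's output followed by the questions learners discuss."""
    output = generate(prompt)
    print(f"Prompt: {prompt}\n")
    print(f"Output:\n{output}\n")
    for i, question in enumerate(CRITIQUE_CHECKLIST, start=1):
        print(f"{i}. {question}")


# Example with a canned stand-in response, so the sketch runs offline.
run_exercise(
    "Explain how to isolate a domestic circuit before replacing a socket.",
    generate=lambda p: "(model output would appear here)",
)
```

The point of the structure is that the prompt and the critique sit side by side, so analysing the output becomes part of the task rather than an afterthought.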
Another theme was evidence of efficacy: how do we determine AI’s effectiveness in teaching and assessment, given the speed of its development? One contributor raised the concern that tech doesn’t automatically improve outcomes – rather, outcomes depend on how the technology is used. Furthermore, results may not become apparent for years. Given that longitudinal evidence collection can be tricky at the best of times, how do we determine whether something is worth investing in and delivers the results we want? Is it enough for institutions to conduct these evaluations themselves, or will it require specialist input, perhaps even at the national level? This tied into a final point: commercial interests. We must ensure any rollout is based on solid evaluation, not marketing.
Curriculum innovation sparked another interesting exchange. Furniture making, for example, now integrates many technological tools. Is it possible to adopt these without losing the core skills and understanding of manual trades? Others noted that skills evolve – some become heritage skills, such as roof thatching. They’re still taught, just in proportion to the need. This doesn’t mean craft skills are lost; it means the emphasis shifts. For example, part of a design curriculum might embrace tech that automatically cuts wood while teaching how humans then refine the output.
Another discussion centred on the impact of tech on adult learning. Some contributors expressed caution, questioning whether AI alone could truly re-engage people, particularly older adults or the economically inactive. The primary concern was that technology might not offer the sense of community and motivation needed to bring learners back into training. Tech can provide automation, progress tracking, and interaction, yes – but what may be missing is the human connection that shared learning environments and peer support provide.
Others pointed to a shift in attitudes catalysed by the pandemic. They observed that many older adults who once resisted digital platforms became more comfortable out of necessity, learning to video call and access virtual services. The argument was that generational barriers to digital engagement are not fixed. With the right scaffolding and inclusive design, even resistant learners can thrive in tech-enabled environments.
Despite contrasting views on the granular details, the overall consensus was that tech should be seen as a support tool. It can ease admin and offer new approaches to teaching and assessment. But beneath that, nothing can – or should – replace real professionalism and relationships. As for what the future of VET looks like, that all depends on how we strike the balance going forward.