Recent AI tech signals and what they might mean for higher education
Eddie Major, AI in Learning and Teaching Coordinator, explores some recent AI signals and ponders what they might mean for higher education.

Every few months, the AI world seems to shed another layer of stability.
Breakthroughs that once cost billions now come from small teams, and what were once standalone AI tools are quickly becoming part of the digital infrastructure shaping how we teach, learn and share information.
For university educators, trying to work out what the latest AI announcements will mean for teaching practice can feel bamboozling. 'What new AI thing announced today will my students all be using tomorrow?'
Visually stunning, or synthetic 'AI slop'?
Multimodal AI can now produce photorealistic images and short videos from text instructions, with outputs often indistinguishable from real photos or footage. But critics dismiss much of this content as 'AI slop' that is degrading the internet.
Recent signals:
- OpenAI's new flagship video generation app produces realistic videos from a user's text instructions, leading misinformation experts to caution it will lead to 'fraud, bullying and intimidation' (Oct 2025)
- Google's new image generation AI tool (aka Nano Banana) can preserve subject identity through multiple output images, meaning a person or object looks the same across different angles and scenes, something other AI apps struggle with (Aug 2025)
Six months ago I wrote about the shocking realism of ChatGPT's new image generation capability, and the potential impacts for university teaching in the near future. While there's nothing new about fabricated images, the ease and scale with which 'perfect'-looking media can be created presents a reality universities can no longer overlook.
What might this mean for higher education?
- Multimodal AI content makes media literacy an urgent priority right across curricula
- AI-in-assessment guidelines and policies, currently focused on AI-generated text in written assessments, will need to be updated to cover acceptable use of AI-generated media
- Traditional media and visual arts-based disciplines are now grappling with what AI is doing to their practice
From assistant to actor: the rise of AI agents
AI agents are systems that can plan, take proactive steps, and operate across multiple applications or environments while performing complex tasks.
Recent signals:
- Google launched a new AI agent capability, which can autonomously browse the web, click buttons, fill in forms and complete digital tasks on behalf of the user (Oct 2025)
- METR measures AI performance by the length of tasks (in person work-hours) that AI agents can complete. Its researchers report that this task-length horizon is growing rapidly, with leading agents now achieving a 50% success rate on tasks that would take a person 2h17m (Aug 2025)
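METR's headline number can be unpacked with a small sketch: fit a logistic curve of success probability against (log) task length, then report the length at which predicted success crosses 50%. The data below is invented for illustration and the fitting code is a toy, not METR's actual dataset or methodology:

```python
import math

# Hypothetical agent results as (task length in person-minutes, 1 = success).
# Invented numbers for illustration only; NOT METR's data.
results = [(5, 1), (15, 1), (30, 1), (60, 1), (90, 0),
           (137, 1), (137, 0), (200, 0), (300, 0), (480, 0)]

xs = [math.log(m) for m, _ in results]
ys = [y for _, y in results]
mean_x = sum(xs) / len(xs)
xs = [x - mean_x for x in xs]  # centre log-lengths for a stable fit

# Fit P(success) = sigmoid(a + b * x) by plain gradient descent.
a, b = 0.0, 0.0
for _ in range(20000):
    ga = gb = 0.0
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-(a + b * x)))
        ga += p - y
        gb += (p - y) * x
    a -= 0.2 * ga / len(xs)
    b -= 0.2 * gb / len(xs)

# The 50% point is where a + b*x = 0; undo the centring to recover minutes.
horizon = math.exp(mean_x - a / b)
print(f"estimated 50% time horizon: {horizon:.0f} person-minutes")
```

Because success falls off with task length, the fitted slope is negative, and the "time horizon" is simply where the fitted curve crosses even odds.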
AI agents mark a shift from tools that simply respond, to ones that act on their own initiative. This isn't necessarily about automating the work people do, but about rethinking how that work is organised. Some, though, warn of the risks if lower-skilled work is fully handed over to AI agents.
What might this mean for higher education?
- Many routine administrative campus tasks might benefit from AI-agent semi-automation, freeing educators to focus on higher value work; but safe and reliable integration remains a challenge
- Industries are beginning to test deployment of AI agents in task-execution workflows. Graduates will need to know how to manage, supervise, and collaborate with autonomous systems in their discipline context.
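The "plan, act, observe" behaviour that defines an agent can be sketched as a minimal loop. Everything here is hypothetical: the tool names are invented, and the stub stands in for the large language model that real agent systems call at each step:

```python
# Minimal sketch of an agent's plan-act-observe loop.
# Tool names are hypothetical; stub_model stands in for an LLM call.

def stub_model(goal, observations):
    """Stand-in for an LLM: picks the next action given what's happened so far."""
    steps = ["open_browser", "fill_form", "submit", "done"]
    return steps[min(len(observations), len(steps) - 1)]

TOOLS = {
    "open_browser": lambda: "browser opened",
    "fill_form":    lambda: "form filled",
    "submit":       lambda: "form submitted",
}

def run_agent(goal, max_steps=10):
    observations = []
    for _ in range(max_steps):
        action = stub_model(goal, observations)  # plan: choose next step
        if action == "done":
            break
        observations.append(TOOLS[action]())     # act, then observe the result
    return observations

print(run_agent("register for a conference"))
```

The point of the sketch is the loop itself: the model is consulted repeatedly, each decision conditioned on the results of earlier actions, which is what distinguishes an agent from a one-shot chatbot reply.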
The great embedding: what if AI becomes unavoidable?
Embedded AI refers to capability that's built into existing software platforms or products, rather than as separate new tools you'd use deliberately.
Recent signals:
- A year after Google started placing AI-generated overviews above its search results, some leading websites are reporting steep drops in visitor traffic (Oct 2025). Now, its new AI-driven search mode (Oct 2025) has come to Australian users, filling the whole screen with AI-generated results, with traditional web link results now relegated to a small box at the side.
- Instructure, maker of Canvas, has partnered with OpenAI to embed AI capabilities into the platform's feature set (Jul 2025)
- The Consumer Electronics Show (CES) this year saw AI embedded in a huge range of consumer devices (Jan 2025)
Until recently, you either used ChatGPT, or you didn't, and you weren't confused about that. But the AI goldrush has seen developers update software ecosystems with shiny new AI features shoehorned into every crevice. AI may stop being a choice, and that could unsettle many.
One day you open a PDF and Adobe Acrobat's new AI Assistant pops up to say 'hey, that report looks long', and offers up an AI summary to save you the onerous task of actually reading it.
But that鈥檚 a small-scale individual experience. Beneath the surface, AI is now also rewiring some of the fundamental digital infrastructure that we rely on to access the open internet. Google is finally facing its Kodak moment,[1] and undermining its search ad business by placing AI-generated content directly in the search interface. How long before people don鈥檛 even click the links anymore?
For anyone working in the facts business, whether educators, journalists, scientists or lawyers, we've hit the iceberg, but the panic hasn't set in yet.
What might this mean for higher education?
- The academic publishing and discovery ecosystem will likely change, as publishers confront AI systems that scrape, summarise and repackage their outputs. Traditional subscription models, metadata structures and indexing models are under pressure to adapt.
- Incoming undergraduate cohorts may soon arrive with even less experience in navigating the open web or evaluating information quality. As AI tools become the dominant interface to information, we'll need to rethink how we teach the fundamental academic skills that enable students to search, verify, and critically interrogate sources for themselves.
After the 'T' in ChatGPT: smaller, smarter AI models emerge
Some AI commentators think the transformer, the underlying architecture that powers many of the current AI models, might soon see its limelight dim. The breakthrough that helped unleash ChatGPT is showing its limits: more data and parameters are yielding diminishing returns at scale, and transformers can't take on new information once trained without being retrained.
Researchers are exploring what might be next. A peek at the latest pre-print papers offers a glimpse of what might be on the horizon.
Recent signals:
- One new model (BDH) takes a biology-inspired approach, using locally connected 'neurons' that strengthen over time, allowing the model to keep learning instead of relying on scale (Sept 2025)
- Another uses a single small network that loops over its own reasoning rather than stacking layers, solving complex problems with far less data and computing power (Oct 2025)
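The idea of a network 'looping over its own reasoning' can be illustrated with a toy: one tiny weight matrix reused across many refinement passes, instead of a deep stack of distinct layers. This is purely illustrative and not the actual architecture from either pre-print:

```python
import math
import random

random.seed(0)

# Toy sketch of recursive reasoning: ONE small weight matrix applied
# repeatedly to refine a latent state, instead of many distinct layers.
# Purely illustrative; not any real pre-print's architecture.
DIM = 4
W = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(DIM)]

def refine(state):
    """One refinement pass: state <- tanh(W @ state), same W every pass."""
    return [math.tanh(sum(W[i][j] * state[j] for j in range(DIM)))
            for i in range(DIM)]

state = [1.0, 0.0, 0.0, 0.0]
for _ in range(16):  # 16 passes all reuse the SAME parameters
    state = refine(state)

# A stack of 16 distinct layers would need 16x the parameters.
print(f"parameters: {DIM * DIM} (vs {16 * DIM * DIM} for a 16-layer stack)")
```

The parameter count stays constant no matter how many reasoning passes run, which is the intuition behind doing more with far smaller models.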
What might this mean for higher education?
- A new wave of AI tools might soon be coming: smaller, faster, and more diverse in capability, many of which may not fit easily into current university policies and guidelines
- Students will need to understand not just how to use AI, but how to use many different kinds of AI
These signals point to a future where AI systems become smaller, smarter, and more deeply embedded in our digital infrastructure. For universities, the challenge is no longer whether to respond to AI, but how to adapt policy and practice fast enough to keep up.
[1] Just as Kodak invented the digital camera technology that ended up destroying its photographic film business, Google invented the transformer architecture powering the AI models now displacing its lucrative search advertising business. Last year, more than half the revenue of Google's parent company, Alphabet, came from search advertising, which was its first commercial product (AdWords launched in October 2000).