The future and AI

Whether we like it or not, artificial intelligence is here to stay. The genie is out of the bottle.

Its rapid evolution has been embraced by some and met with raised eyebrows by others.

In an earlier issue of Lumen this year, we asked readers to describe their hopes and fears for the future. AI was an overwhelmingly present theme.

We shared some of these letters with academics from both the University of Adelaide and the University of South Australia to help clarify and respond to concerns on four broad themes: impact on jobs; global security; wellbeing; and the potential for cognitive decline.

Employment

Your thoughts: "The rapid development of AI brings both opportunities and challenges. One major concern is job displacement, as AI automates tasks previously done by humans, leaving many unemployed. While new jobs may emerge, not everyone can adapt quickly. Another worry is that AI could surpass human intelligence in the future. If AI becomes too powerful, its impact on society and human decision-making remains uncertain. Will AI serve humanity or control it? Striking a balance between innovation and ethical responsibility is crucial to ensuring AI benefits rather than harms society. Thoughtful regulation and adaptation are necessary for a stable future." - Wing Huang

AI itself is neither good nor bad; it's a tool, and its impact depends entirely on the values and intentions of the people behind it. What we truly need is not to fear AI, but to cultivate a strong, responsible value system around its development and use. Just as when cars were first invented, many coachmen protested, fearing the loss of their livelihoods. But over time, society adapted, new jobs emerged, and transportation advanced. AI is going through a similar phase. It won't replace humans, but it will change how we work and live. The challenge is not AI itself, but how we choose to guide it. With ethical principles and thoughtful leadership, AI can be a powerful force for good.

is a postdoctoral researcher and AI Education and Outreach Liaison for the Responsible AI Research (RAIR) Centre at the Australian Institute for Machine Learning at the University of Adelaide.

AI itself is neither good nor bad; it's a tool, and its impact depends entirely on the values and intentions of the people behind it

Security

Your thoughts: "I am concerned that AI is accelerating the spread of misinformation, making it more convincing and harder to detect. Deepfakes, AI-generated videos and pictures, and automated disinformation campaigns have distorted public opinion, influenced elections, and fuelled political unrest. Existing laws (defamation, media regulation, and electoral rules) were not designed to deal with matters of this type and scale. Without legal frameworks to address AI-driven misinformation, Australia risks greater political instability, public distrust, and manipulation by bad actors." - Naaman Kranz

It is the case that AI-generated content can cause real harm to our democratic processes, which rest upon the ability of citizens to engage in reasoned, respectful and considered debate about the issues that impact all Australians. This is a challenge that is not unique to our democracy. At the University of Adelaide, researchers are working across the disciplines of psychology, mathematical sciences, political sciences, philosophy, criminology and computing sciences to address these challenges. For example, we are conducting research that explores how people reason about information they encounter online, using a mock social media environment that resembles many of the features of actual social media platforms. This work is aimed at developing strategies for social media users to engage with when trying to form their opinions about all sorts of issues, from climate change to vaccination and nuclear energy. In understanding the abilities and weaknesses of our reasoning processes in interaction with AI-generated content and questionable information sources, we can strengthen the capacity for people to form opinions that are well-considered and based on reliable sources of information. We can also help inform lawmakers of our research findings so that they can devise governance and regulation processes that support a healthy democracy.

, University of Adelaide, uses AI through the lens of psychology to better understand how human-to-machine interfaces work for applications in defence and national security.

While not a world leader, Australia is fortunately better than many countries at trying to address issues with AI, although we are far from where we could be. The Australian Government is active in this area, but the rapid escalation in the capabilities of these technologies will always pose a challenge and put governments on the back foot. On the plus side, we are now starting to see AI work for us, detecting and helping to manage these threats. At the end of the day, with the recent election, it's on us to pressure our representatives to focus on topics that, long term, pose significant threats.

, University of Adelaide, uses AI to enhance engineering practices and has researched how AI has been used to manipulate online users into handing over data.

Your thoughts: "I suppose that there will be substantial progress in AI in the future, potentially even to the point where machines come to outperform humans in many tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don't find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks." - Neil Addleton

You are absolutely right! AI has significant benefits, but it comes with important risks that must be considered carefully. Importantly, there is a need for stronger collaboration between governments, AI providers and experts from a wide range of domains (economics, education, sociology, psychology) to ensure we mitigate risks as best we can. We are still learning the extent of societal changes that can be caused by AI, so there is a significant need for research on the potential impacts of AI, as well as strong public engagement, transparency and accountability. Government regulation is also critical, as AI providers are mainly driven by profit and business goals, and international cooperation between governments around AI is critically important to avoid AI being used as a political weapon. Finally, all of this comes with the significant issue of rapidly changing AI technologies, which makes it very challenging to devise simple and clear policy guidance around AI.

, University of South Australia, researches the interplay between human and artificial cognition to understand how it affects our learning.

Wellbeing

Your thoughts: "One of my biggest concerns for the future is the growing mental health crisis among young people, especially as they navigate urban-rural transitions and climate-related challenges. While technological advancements, including artificial intelligence, offer opportunities for progress, they also risk deepening inequalities if access remains limited. At the same time, the University's evolving landscape presents both challenges and opportunities for students and researchers. I am excited about the potential for interdisciplinary collaboration to address these pressing issues, ensuring that research and policy solutions are both equitable and impactful for future generations." - Trang Dang

I completely agree. We are seeing increasing mental health issues across all age groups, to the extent that even the Australian Institute of Health and Welfare has joined others in highlighting a 'loneliness epidemic'. AI has great potential to improve access to mental health support, offering services 24/7, with virtually no waiting times and without judgement. However, and this is a significant 'however', there is currently insufficient research on the impact and effectiveness of these tools. Educational technology companies are rapidly introducing AI-based self-help products, yet we lack robust evidence on how these tools actually function and whether they might inadvertently cause harm. While I firmly believe there's no substitute for genuine human connection, AI might effectively bridge gaps, reduce wait times, and help people build stronger coping skills. Nevertheless, we urgently need high-quality research before we widely implement these technologies.

, from the University of South Australia, researches the impact of AI on teacher and student wellbeing.

I agree that access to emerging technologies will continue to be a defining fault-line for equity. AI, in particular, is advancing faster than many had anticipated and offers exciting interdisciplinary potential. However, I've already noticed disparities in how AI is being integrated into education. While some institutions actively encourage balanced, exploratory use, others are enforcing bans, delaying students' opportunity to build critical skills. These uneven policies risk exacerbating existing educational and social inequalities.

, from the University of Adelaide, is researching the early impacts of AI on teaching and learning.

Cognitive decline

Your thoughts: "My concern currently is in regard to technology increasingly handling many cognitive tasks. I am concerned about people's ability to think critically, solve problems, or even remember basic information without relying on digital devices. This cognitive decline could have lasting effects on intellectual development and society's ability to think independently, especially with the latest generation." - Despina Koutlakis

This is a totally valid concern and an active area of research in cognitive psychology. The 'cognitive atrophy' hypothesis is the idea that if we rely upon technology to carry out tasks that we may have previously carried out ourselves, we will essentially lose the skills associated with them. At the moment, the evidence on this is mixed, with many studies focusing on how the ability to write well is impacted by large language models. As yet, there is not enough evidence to suggest that these skills are likely to decline, and there is complexity in designing studies with appropriate 'control' conditions to determine the impact of the use of technology on cognition. Much of the expertise of individuals in highly specialised fields relies upon adequate memorisation of information, and the effective and creative use of this knowledge for solving problems. In this regard, it is possible that the way we develop this expertise will be enhanced by our use of technology. However, we need to be sure that this does not remove our agency and rights as humans.

, University of Adelaide.

Article created by Isaac Freeman, Communications Officer for the University of Adelaide. Main image created by Lachlan Wallace, Communications Officer for the University of Adelaide, and Isaac Freeman using ChatGPT and Photoshop. Additional image sourced from iStock.

Tagged in Lumen Wirltuti Warltati 2025, Future, Research