Mitigating misinformation on social media

Using innovative AI tools and ‘big data’ analysis, University of Adelaide researchers are protecting the public against harmful online influence.

Social media has become an integral part of our daily lives; it shapes how we connect with friends, work, learn about our world, and entertain ourselves. However, this reliance also exposes us to misinformation, disinformation, and online communities designed to manipulate public perception, influence political actions, or deceive. The threat is a key concern in Australia, with ‘Information Warfare’ highlighted as a priority for Australian defence and national security. Fortunately, University of Adelaide researchers are equipping us with the tools we need to navigate these challenges. Through studies on how digital interactions shape real-world behaviour, and the development of tools to track false narratives and combat political misinformation, their work aims to better understand—and protect us from—harmful online influence.

Professor Carolyn Semmler, Professor Lewis Mitchell, Dr Rachel Stephens, and Dr Keith Ransom work collaboratively on a wide range of research in this area, including the impact of online interactions on in-person protest activity. In one instance, the team analysed a corpus of online interactions before, during, and after the anti-lockdown protests in Melbourne. The research showed that in digital conversations, humour was often effective in reinforcing group identity and encouraging agreement, while attacks and challenges tended to create conflict and disagreement. It also found that the in-person protesters’ actions were informed by a handful of influential internet users, who exploited these patterns of agreement and disagreement to their advantage.

"This is an important step toward making the links between online and offline behaviour that has consequences for the understanding and perceptions of social cohesion in Australia," Semmler says.

"It certainly shows that information and influence is directed by central players in online groups who use specific methods for gaining attention and reinforcing ‘in-group’ norms."

The team’s study, which has been funded by the Digi+FAME scheme, the Defence Science and Technology Group, and most recently an ASCA grant, is one of very few to use fine-grained analysis methods together with social network analysis. It’s one of many studies being conducted through the University’s Defence and Security Institute in the online influence space.

The team is also bringing together cognitive science and the large volumes of data generated by social media analyses to detect, model, and prevent false online narratives. They will combine their gathered information and behavioural findings to create one of the first multidisciplinary tools for monitoring virtual narratives.

"To be able to counter misinformation and false narratives, you first need to be able to identify them through ‘narrative situational awareness’. This is what our research is working to be able to d