The widespread adoption of artificial intelligence is outpacing media literacy, according to a report released in Australia.
Research published in Australia on Monday reveals that adults’ media literacy is lagging behind the rapid development of generative artificial intelligence (AI). This trend, the researchers warn, leaves internet users increasingly vulnerable to misinformation.
The AI industry experienced explosive growth in 2022 following the launch of ChatGPT, a chatbot and virtual assistant created by OpenAI, a US AI research organization. Since then, the sector has attracted billions of dollars in investment, with tech giants like Google and Microsoft introducing tools such as image and text generators.
However, the ‘Adult Media Literacy in 2024’ paper by Western Sydney University found that users’ confidence in their own digital media skills remains low.
In a sample of 4,442 adult Australians, participants were asked to rate their confidence in performing 11 media-related tasks requiring critical or technical abilities and knowledge. On average, respondents reported feeling confident completing only four of the 11 tasks.
The results are “largely unchanged” since 2021, when previous research was conducted, the paper noted.
The ability to identify misinformation online has not improved at all, the data shows: in both 2021 and 2024, only 39% of respondents expressed confidence in their ability to verify the truthfulness of information found online.
The recent integration of generative AI into online environments makes it “even more difficult for citizens to know who or what to trust online,” the report stated.
The slow growth in media literacy is particularly concerning given the ability of generative AI tools to produce high-quality deepfakes and misinformation, according to associate professor and study author Tanya Notley, as quoted by the news outlet Decrypt.
“It’s getting harder and harder to identify where AI has been used. It’s going to be used in more sophisticated ways to manipulate people with disinformation, and we can already see that happening,” she warned.
Combating this will require regulation, Notley said, though progress on that front has been slow.
Last week, the US Senate passed a bill designed to protect individuals from the non-consensual use of their likeness in AI-generated pornographic content. The bill was adopted following a scandal involving deepfake pornographic images of US pop singer Taylor Swift that spread through social media earlier this year.
Australians now favor online content over television and print newspapers as their source of news and information, the report noted, calling this a “milestone in the way in which Australians are consuming media.”