AI, Media Literacy, and the Next Generation
By Seden Anlar and Maria Luís Fernandes, 9 December 2024
Seden Anlar and Maria Luís Fernandes look at how AI has made its way into young people’s lives. What opportunities does it offer, and what threats? And how can policymakers, both nationally and internationally, offer young people tools to deal with AI? This is the first article in a three-part series.
Artificial intelligence (AI) is no longer a distant concept. Although it has existed for decades in various forms – from basic automation to advanced machine learning – its development has now reached a new frontier. In recent years, generative AI – a type of AI that can create original content – has made significant advances.
A major milestone in this evolution came in November 2022 with the public release of OpenAI’s ChatGPT. This chatbot made generative AI widely accessible to the general public, allowing users to interact with it and explore its capabilities through a simple conversational interface. By August 2024, ChatGPT had over 200 million weekly active users – double the number it had a year earlier. Since the release of ChatGPT and many similar AI-driven bots and tools, AI has become deeply embedded in everyday life, influencing domains ranging from social media entertainment to education and politics.
Disinformation and Generative AI
One area particularly impacted by generative AI is the information space. AI has fundamentally transformed the way news and content are created and distributed, introducing new challenges to the ongoing issue of disinformation. With freely available and largely unregulated AI tools, it has become easier than ever to generate and distribute false information and create convincing fake content that can spread quickly across digital platforms.
The advanced capabilities of generative AI are evident in how convincingly it can mimic human communication and behaviour. We spoke to Juliane von Reppert-Bismarck, Executive Director and founder of Lie Detectors, a Europe-wide journalist-driven non-profit that helps teenagers, pre-teens and teachers tell news facts from news fakes and understand ethical journalism. She offered a striking example: ‘You just have to think about the recent case of a major engineering firm, which in May 2024 became the victim of a gigantic deepfake hoax. They actually transferred 20 million pounds to different bank accounts because someone had become convinced that during a video conference call they were speaking to real people. But they weren’t. They were talking to AI-manufactured beings, using voices that sounded familiar. And it convinced them to make that transfer.’
Even when used with good intentions, however, AI systems are prone to error. ChatGPT and similar chatbots routinely caution users that their responses may contain inaccuracies. Moreover, these systems are only as reliable as the data they are trained on. If the training data reflect societal biases or inequalities, which they usually do, AI algorithms risk perpetuating, or even amplifying, those biases.
AI in the Newsroom
This concern becomes particularly pronounced in the media sector, where AI is rapidly being integrated into various aspects of news production. AI-generated content is increasingly reaching audiences, often without clear labelling or guidelines to differentiate it from human-created material. This lack of transparency raises significant concerns about the potential misuse of the technology and its impact on public trust in information. One of the most troubling issues is the phenomenon known as ‘AI hallucinations’, where the technology generates false or misleading information that appears highly credible. Hallucinations occur because AI systems, while sophisticated, lack true understanding and can produce inaccuracies from patterns in their training data. The absence of clear labelling therefore not only misleads audiences but also erodes confidence in legitimate content, as people struggle to distinguish truth from fabrication.
The lack of transparency about AI-generated content raises significant concerns about the potential misuse of the technology and its impact on public trust in information.
Such risks are particularly alarming in contexts like journalism, education, or political discourse, where accuracy is paramount. Interestingly, according to the 2024 Digital News Report, audiences appear to be less concerned about AI-driven stories on sports and entertainment, where the stakes are perceived to be lower.
While news publishers recognise the potential benefits of AI – particularly for automating backend tasks like transcription, copyediting, and recommendation systems – many still view its involvement in content creation as a serious threat. In fact, they worry that AI's role in producing news articles, headlines, or other editorial content could further erode public trust in journalism.
The Role of Media Literacy
Public trust in information has been declining for many years, exacerbated by the spread of fake news on social media platforms. This erosion of trust could worsen with the increasing presence of AI, especially around elections. Concerns are rising over how these new technologies might be leveraged by political campaigns, or by external actors looking to influence election outcomes.
The Bureau of Investigative Journalism has found that such tactics have been used to spread Russian disinformation ahead of this year’s elections in the UK and France. Moreover, NewsGuard, a media watchdog, has noted a rise in websites filled with AI-generated content, often designed to look like legitimate news outlets but peddling low-quality or false information. This surge in misleading content has raised alarms among experts, who warn that it could further erode trust in the media.
In Slovakia, for example, a faked audio recording of a candidate allegedly discussing how to rig the election surfaced just days before a closely contested vote, and fact-checkers struggled to counter the disinformation effectively. Similarly, UK politics witnessed its first major deepfake incident in October 2023, when an audio clip of then-opposition leader Keir Starmer, apparently swearing at staffers, went viral on X (formerly Twitter). The clip gained millions of views, even after being exposed as false. These cases highlight how AI-generated content can be leveraged to sow confusion and mistrust at critical moments.
The biggest danger may not be that people believe false information, but rather that they stop believing anything at all.
Trust in media is likely to decline further as AI becomes more deeply integrated into digital platforms. Research shows that people often rely on images and videos as ‘mental shortcuts’ when deciding what to trust online, adhering to the notion that ‘seeing is believing’. With the rise of synthetic imagery and AI-manipulated content, the reliability of visual evidence – long considered a cornerstone of trust – is increasingly being undermined, leaving the public more uneasy and uncertain.
However, the biggest danger may not be that people believe false information, but rather that they stop believing anything at all. As political philosopher Hannah Arendt warned, during times of upheaval the most significant threat is when ‘nobody believes anything any longer’. This growing scepticism threatens to erode the foundations of democracy.
The consequences of this breakdown in trust are profound. As Juliane from Lie Detectors explained, ‘democracy is based on the assumption that we are capable of making informed decisions. So media literacy and the guarantees it provides to ensure informed decisions, not disinformed decisions, are absolutely core to the democratic process and to our democratic society.’
The Growing Challenge for Young People
This issue is especially critical for young people, who are the most exposed to the digital sphere and social media platforms where AI is increasingly prevalent. Often regarded as ‘digital natives’, young people interact with the online world more than any other group.
According to the Reuters Institute’s 2024 report, younger generations are showing a weaker connection to traditional news brands than in the past, making them even more susceptible to disinformation. With the voting age lowered to 16 in countries like Belgium, and given that today’s young people will shape significant political decisions and take on influential roles in society in the coming decades, it is vital that they understand how to navigate the rapidly evolving information landscape.
While news organisations are making efforts to protect the integrity of information by adopting AI usage guidelines, and social media platforms like X (formerly Twitter) and TikTok are introducing measures such as community notes and labelling AI-generated content, a critical question remains: Is this enough?
The remedy most often cited for this growing concern is digital media literacy: the ability to critically evaluate and interact with digital content. It encompasses a wide set of skills, ranging from identifying credible sources of information to understanding how algorithms and AI shape the content we consume, and it involves not only technical know-how but also the capacity to distinguish accurate information from disinformation and to engage responsibly in online environments.
While news organisations are making efforts to protect the integrity of information by adopting AI usage guidelines, a critical question remains: Is this enough?
But are young people actually media literate in the digital sphere? The 2024 Ofcom report Children and Parents: Media Use and Attitudes provides some insight into how young people assess their own skills. When asked, 69% of children aged 12-17 said they were confident in their ability to judge the authenticity of online content, with confidence higher among boys and older teenagers. Notably, confidence among 16-17-year-olds dropped from 82% in 2022 to 75% in 2024.
However, as Juliane von Reppert-Bismarck from Lie Detectors pointed out: ‘Asking young people whether they think they are media literate is not an effective way to measure their actual media literacy skills.’ Indeed, the 2024 Ofcom report highlights that confidence does not always correlate with actual media literacy skills. It states: ‘Confidence does not just follow from good media literacy skills but intersects with it in ways that can either strengthen or undermine critical understanding. Someone whose confidence exceeds their ability is more likely to make mistakes, leading to potential harm. Conversely, someone with good critical understanding but lacking confidence may not trust their own judgement, which could leave them feeling unsure or unsafe online.’
Despite being referred to as ‘digital natives’, young people’s familiarity with technology does not automatically equate to strong digital media literacy skills. Safa Ghnaim, Associate Programme Director at Tactical Tech, an international non-profit focused on helping individuals and communities navigate the societal impacts of digital technology, highlighted this misconception:
‘I've definitely heard terms like “digital native”, which implies that young people are fluent in technology because they’ve grown up with it. But I think people sometimes jump to the conclusion that they are also good at discerning misinformation or spotting scams. That’s not necessarily the case. These skills still need to be taught. We've done research with teenagers about their hopes and fears around technology, and the results were revealing. Some expressed concerns about the erosion of human-to-human relationships, showing the depth of their anxieties.’
‘Regardless of the framework implemented, digital media literacy is very contextual. A young person in Germany isn’t the same as a young person in Portugal.’
Helderyse Rendall, Senior Project Coordinator of What The Future Wants, Tactical Tech’s youth-focused project, echoes this sentiment: ‘Young people’s ability to use technology is often mistaken for a deeper understanding of how technology intersects with their relationships, communities, and society in general. These are things that need to be explored and discussed in environments where they can reflect on their interactions and their effects.’
Data support this concern. According to 2022 EU research, one in three 13-year-old students in Europe lacks basic digital skills when tested directly. Moreover, the OECD reports that only just over half of 15-year-olds in the EU have been taught how to detect whether information is subjective or biased.
EU Initiatives on Digital Media Literacy
Recognising this growing need, the European Union has launched several initiatives aimed at promoting digital media literacy. On July 1, 2020, the European Commission published the European Skills Agenda, which promotes digital skills for all and supports the goals of the Digital Education Action Plan – a plan that aims to improve digital skills and competencies for the digital transformation and to foster a high-performing digital education ecosystem. In addition, the Digital Compass and the European Pillar of Social Rights Action Plan set ambitious goals for the EU: to ensure at least 80% of the population has basic digital skills and to have 20 million ICT specialists by 2030.
In October 2022, the European Commission released its guidelines for educators on promoting digital skills and tackling disinformation in primary and secondary schools. This toolkit covers three main topics: building digital literacy, tackling disinformation, and assessing and evaluating digital literacy. Then, in February 2023, the Commission published the Media Literacy Guidelines, which provide a framework for member states to share best practices and report on their media literacy efforts.
The State of Media Literacy in Europe
However, despite these EU-level guidelines, implementation ultimately falls to individual member states and their national education systems; the Audiovisual Media Services Directive (AVMSD) requires member states to promote media literacy and report on their progress every three years. As Helderyse Rendall explains: ‘How you evaluate these competencies from UNESCO or the European Union is well documented, but how that translates to different contexts is more complicated. Research we published last year showed that, regardless of the framework implemented, digital media literacy is very contextual. A young person in Germany isn’t the same as a young person in Portugal.’
Despite these challenges, there is progress – at least at the international and local levels. According to Juliane from Lie Detectors, ‘there’s a lot of activity at the EU, UNESCO, and OECD levels.’ Local initiatives also stand out, with a growing number of mayors actively collaborating through associations and launching efforts like Digital Competence Weeks, which have proven highly popular. However, Juliane emphasised the need for action at the national level: ‘What’s missing is the middle ground – the action needed from national and regional governments. This is where the real solutions will come from.’ That being said, there are notable exceptions: Scandinavia, particularly Finland, is often seen as a leader in media literacy, while Austria has also made strides by mandating that basic digital competencies be taught as a separate subject in schools.
In addition to formal education, there is a growing need for non-formal education initiatives to help close the gap. As Helderyse from Tactical Tech points out, many organisations are working to create frameworks that support education systems in teaching digital media literacy.
Safa from Tactical Tech adds: ‘There’s a real need for non-formal, creative, and co-creational interventions like ours, which focus on critical thinking and cross-applicable skills. These are the skills that help young people understand how algorithms work, how data collection functions, and how AI operates. While current education systems tend to focus on practical applications of technology, such as coding or robotics, it’s important to teach these alongside critical thinking skills.’
In the next article of this series, Seden Anlar and Maria Luís Fernandes will explore the contrasting approaches of Belgium and Portugal, hearing directly from teachers and students in both countries to understand their perspectives on media literacy and AI education.
This text is part of the Come Together Fellowship Program, a training program for young journalists led by cultural journal Gerador. The text was written under the guidance of rekto:verso.
This article was published in the context of Come Together, a project funded by the European Union.