AI DISINFORMATION

Enhancing critical thinking through chatbots: tackling disinformation

In today’s digital ecosystem, where content spreads rapidly and often without filters, countering disinformation has become an essential educational priority. Generative AI-based chatbots – such as ChatGPT – can provide meaningful support to both students and teachers. The approach presented here outlines a model for the mindful and pedagogically informed use of customised chatbots to strengthen the ability of learners aged 16 and over to recognise, analyse, and counter large-scale disinformation.

Disinformation refers to false, misleading or manipulated content that is deliberately disseminated with the intention of deceiving, confusing or influencing public opinion. Unlike misinformation (which consists of inaccurate information shared in good faith), disinformation involves a deliberate attempt to distort reality in order to gain an advantage, provoke strong emotional reactions, cause harm to individuals or groups, or influence political, social or economic decisions. It can take various forms: entirely fabricated news stories, decontextualised images, misleading headlines, manipulation of statistical data, altered videos (such as deepfakes), or narratives that mix accurate facts with distorted elements.

Its effectiveness lies in how quickly it spreads via social media, the widespread lack of media literacy, and the use of persuasive techniques that exploit emotions such as fear, anger, or a sense of group identity. In the educational and civic context, the ability to identify and counter disinformation entails the development of critical thinking, source verification skills, and an active awareness of one’s role within the digital information ecosystem.


The educational context

This activity is part of broader learning paths dedicated to digital civic education, media literacy, and the development of critical thinking skills. It is designed for students in the final two years of upper secondary education. The initiative pursues a twofold objective: on the one hand, it aims to enable students to engage actively with generative AI-based chatbots, specifically trained to detect and deconstruct manipulative or distorted narratives. On the other, it seeks to involve teachers in a co-design process where conversational tools are adapted and customised to reflect the learning needs and competence levels of their specific classroom context.

By integrating AI-powered chatbots into critical thinking development programmes, schools can create meaningful learning opportunities where students are not only consumers of digital content, but also active investigators of its reliability. Teachers, in turn, become facilitators of inquiry, curators of intelligent prompts, and co-creators of pedagogical scenarios that prepare students to navigate complexity with autonomy and discernment.

This approach does not present chatbots as infallible arbiters of truth. On the contrary, it encourages a critical engagement with their outputs, fostering discussions on how such systems are trained, what data they rely on, and what limitations they present. In doing so, it promotes a shift from passive reception to active questioning – a key step towards building resilience against manipulation and cultivating informed, responsible digital citizens.

Integrating chatbots into teaching practice

The integration of customised artificial intelligence-based chatbots into educational settings presents a strategic opportunity to renew teaching practices and foster the development of transversal critical competences. In particular, within the domains of digital citizenship education and media literacy, these tools enable the promotion of active learning, cognitive autonomy, and reasoned interaction with complex informational content.

Chatbots can be incorporated into both physical and digital school environments through structured platforms such as learning management systems or online collaborative spaces. Their adoption does not imply replacing the teacher but rather reinforcing the teacher’s role as a pedagogical guide. This supports a more flexible and personalised approach to teaching, one that is centred on nurturing critical thinking.

Chatbots may serve a range of educational purposes across the different phases of the learning process:

  • In the initial phase, to trigger discussion and stimulate curiosity;
  • During the activity, to assist with research, guide analysis, and facilitate peer dialogue;
  • In the concluding phase, to support metacognitive reflection and the critical evaluation of one’s learning path.

Their pedagogical application can be structured according to four main operational approaches:

1. Interactive simulations
Chatbots engage with students by simulating complex communicative situations that require decoding ambiguous, partial or misleading information. These interactions prompt learners to identify problematic elements within messages, detect signs of manipulation, and evaluate the logical coherence and argumentative structure of the claims presented. This fosters vigilance, precision in interpretation, and a habit of questioning.

2. Guided source verification activities
Chatbots can lead students through structured sequences of questioning, analysis, and validation of information sources. This scaffolds a methodical and informed approach to evidence gathering. Through such tasks, students consolidate their ability to recognise the quality of information, assess the reliability of sources, and document their verification processes in a traceable manner.

3. Argumentative discussions
Chatbots can be programmed to adopt dialectical roles that encourage critical engagement. When prompted to interact with divergent opinions or problematic assertions, students are required to exercise their argumentative skills by defending their positions with data, references, and logically coherent reasoning. These exchanges take place in a respectful and well-documented dialogue, helping learners to refine their communication and persuasive capacities.

4. Workshops on cognitive mechanisms
By engaging in structured conversations, chatbots can help students explore their own decision-making processes and the ways in which they interpret and select information. This type of activity fosters awareness of cognitive biases and of emotional or social influences, and it supports the development of reflective strategies for interacting with digital content. Through this introspective dimension, students strengthen their ability to manage information overload and become more conscious agents in digital spaces.

In sum, chatbots offer a valuable pedagogical resource not merely as content generators or repositories of information, but as dynamic interlocutors that stimulate thinking, invite critical questioning, and mirror the complexities of the real-world information environment. For such tools to be effective, however, their use must be accompanied by careful instructional planning, ongoing reflection, and alignment with educational objectives rooted in ethical and civic responsibility.

The effectiveness of integrating chatbots depends on pedagogical clarity and critical design

The effectiveness of integrating chatbots into educational contexts hinges on several key factors: the clarity of the educational objectives, the quality of the sources used to train the chatbot, the coherence of the instructions given to the system, and the teacher’s ability to actively manage the interaction between students and technology. It is essential that the use of chatbots be accompanied by structured activities involving discussion, documentation, and collective reflection. This helps prevent uncritical reliance on the machine and, instead, fosters autonomous thinking and a sense of responsibility in the process of knowledge construction.

Thanks to the functionalities offered by platforms such as OpenAI’s ChatGPT and Brisk Teaching, educators now have the opportunity to create customised chatbots tailored to support specific educational aims – in this case, the fight against disinformation. These chatbots go beyond simply providing generic responses: they are configured directly by teachers to reflect the content, sources, roles, and teaching strategies that align with the topic at hand.

Importantly, this customisation process does not require the teacher to write code or possess advanced programming skills. The teacher acts as a “pedagogical curator” of the chatbot, deciding what it should say, how, to whom, and for what educational purpose. In doing so, generative artificial intelligence becomes an instrument of education, not a substitute for it.
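
For teachers who nonetheless want to look under the bonnet, the same configuration logic can be sketched in code. The example below is a minimal illustration, assuming the OpenAI Python SDK and an API key; the model name, the prompt wording, and the ask() helper are illustrative choices, not a prescribed setup, and a Custom GPT itself is configured entirely through the ChatGPT interface without any of this.

    # Minimal sketch: an anti-disinformation tutor via the OpenAI Python SDK.
    # Assumptions: the `openai` package is installed and OPENAI_API_KEY is set;
    # the model name and prompt text are illustrative, not prescribed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TEACHER_CONFIG = (
        "You are a fact-checking tutor for students aged 16 and over. "
        "Base every answer on verifiable sources and cite them. "
        "If you cannot verify a claim, say so explicitly. "
        "Guide the student with questions rather than flat verdicts."
    )

    def ask(question: str) -> str:
        # One self-contained exchange: teacher-defined role + student question.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": TEACHER_CONFIG},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Is this headline reliable: 'Scientists admit vaccines do not work'?"))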


Guided simulations with chatbots: training to uncover disinformation

Description:
Students interact with a chatbot that presents ambiguous, controversial, or potentially manipulated content. The goal is to develop their ability to critically analyse claims, verify the consistency of sources, and justify their judgement regarding the reliability of the information.

Implementation guidelines:

  • The teacher configures the chatbot with prompts such as: “Present the student with an online article containing misleading information about vaccine safety.”
  • Students work individually or in pairs, responding to the chatbot, noting their doubts, and flagging suspicious elements (e.g., excessive emotional language, data without references, clickbait headlines).
  • In a whole-class setting, students share the strategies they used to identify disinformation and compare outcomes.
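
As a concrete illustration of these guidelines, the sketch below wires the teacher's configuration prompt into a simple multi-turn session. It assumes the same OpenAI Python SDK set-up as the earlier sketch; every prompt string is an example to be adapted, not a fixed script.

    # Sketch of a guided-simulation session (same SDK assumptions as above).
    from openai import OpenAI

    client = OpenAI()

    SIMULATION_PROMPT = (
        "Present the student with a short, invented online article containing "
        "misleading claims about vaccine safety (emotional language, statistics "
        "without references, a clickbait headline). Then ask which elements look "
        "suspicious, and respond to the student's analysis step by step, without "
        "revealing everything at once."
    )

    messages = [
        {"role": "system", "content": SIMULATION_PROMPT},
        {"role": "user", "content": "Start the simulation."},
    ]

    while True:
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        text = reply.choices[0].message.content
        print(text)
        messages.append({"role": "assistant", "content": text})
        student = input("Student> ")  # students type 'stop' to end the session
        if student.strip().lower() == "stop":
            break
        messages.append({"role": "user", "content": student})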

Fact-checking workshops: the chatbot as digital tutor

Starting from carefully designed prompts, the chatbot guides students through a structured verification process, helping them to distinguish between factual information, personal opinions, arbitrary interpretations, and deliberately manipulated content.

Implementation guidelines:

  • The teacher introduces a controversial claim or a common narrative circulating on social media (e.g., “Climate change is a hoax invented for political reasons”).
  • Students question the chatbot to assess the credibility of the claim, using prompts such as: “What sources support this statement?” or “Is this information consistent with official data?”
  • The chatbot responds with links to reliable sources, suggests fact-checking tools, and encourages cross-referencing.
  • The workshop concludes with a reflective worksheet in which students summarise their process and identify three criteria for detecting disinformation in the future.
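
The question sequence and the closing worksheet can also be kept as plain, platform-independent data, so that the same workshop runs on ChatGPT, Brisk, or any other tool. In the minimal sketch below, the prompts, steps, and criteria are all illustrative placeholders.

    # Sketch: the verification prompts and reflective worksheet as plain data.
    VERIFICATION_PROMPTS = [
        "What sources support this statement?",
        "Is this information consistent with official data?",
        "Who first published the claim, and with what possible interest?",
        "Do independent fact-checkers (e.g. Facta, EUvsDisinfo) cover it?",
    ]

    def reflective_worksheet(claim: str, steps: list[str], criteria: list[str]) -> str:
        # Summarises the process and the three detection criteria
        # requested in the final step of the guidelines above.
        lines = [f"Claim examined: {claim}", "Verification steps taken:"]
        lines += [f"  - {step}" for step in steps]
        lines.append("Three criteria for detecting disinformation in the future:")
        lines += [f"  {i}. {c}" for i, c in enumerate(criteria, start=1)]
        return "\n".join(lines)

    print(reflective_worksheet(
        "Climate change is a hoax invented for political reasons",
        ["Searched IPCC reports", "Cross-checked two fact-checking sites"],
        ["Check the primary source", "Look for emotional framing",
         "Compare independent outlets"],
    ))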

Simulated argumentative discussions: the chatbot as a provocative interlocutor

The chatbot adopts the role of a character who holds a controversial stance on a current issue (e.g., immigration, ecological transition, artificial intelligence). Students must respond with counterarguments, citing verifiable data and trustworthy sources.

Implementation guidelines:

  • The teacher sets the chatbot to a specific role: “Defend the idea that the Earth is flat, using common yet misleading rhetorical arguments.”
  • Students must refute these claims, using sources such as ESA, NASA, Treccani Encyclopaedia, or peer-reviewed academic articles.
  • The activity may take the form of a team debate, with the chatbot acting as moderator and assessing consistency, argumentative effectiveness, and use of sources.
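
A deliberately platform-neutral way to formalise the two roles in this activity is sketched below; the role texts and the three assessment dimensions are illustrative and should be adapted to the class and topic.

    # Sketch: the two roles of the debate, as reusable prompt constants.
    DEBATER_ROLE = (
        "Defend the idea that the Earth is flat, using common yet misleading "
        "rhetorical arguments. Stay in character, but never invent fake citations."
    )

    MODERATOR_ROLE = (
        "You moderate a classroom debate. After each student rebuttal, briefly "
        "assess three dimensions: logical consistency, argumentative "
        "effectiveness, and use of verifiable sources "
        "(e.g. ESA, NASA, peer-reviewed articles)."
    )

    def debate_turn(history: list[dict], rebuttal: str) -> list[dict]:
        # Appends a student rebuttal to the running message history,
        # ready to be sent to whichever chatbot platform is in use.
        return history + [{"role": "user", "content": rebuttal}]

    history = [{"role": "system", "content": DEBATER_ROLE}]
    history = debate_turn(history, "ESA satellite imagery shows the Earth's curvature.")
    print(history)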

Personalised and multidisciplinary learning paths: adapting chatbots to the school context

Depending on the school level, subject area, or educational track, the chatbot can be programmed to address disinformation from different disciplinary perspectives – linguistic, psychological, legal, scientific.

Implementation guidelines:

  • In a language-focused secondary school, the chatbot analyses the language of fake news in multiple languages, comparing tone, vocabulary, and rhetorical structure.
  • In a technical institute, the chatbot presents case studies on environmental or technological disinformation, guiding students through data analysis.
  • In a humanities-based school, the chatbot helps students explore the psychological effects of fear induced by disinformation and the role of cognitive biases.
  • Students may choose a “thematic path” and work in small groups on different scenarios. The teacher can then facilitate a collective synthesis using multimedia materials, in-depth articles, or oral presentations.

By adopting such structured approaches, educators can transform chatbot technology into a versatile pedagogical ally. Far from being mere digital assistants, these systems can become cognitive partners that encourage questioning, debate, and discernment. Yet, their value depends entirely on how they are framed and facilitated. The challenge – and opportunity – for today’s teachers lies in designing meaningful, ethically sound learning experiences where AI contributes not to replacing human judgement, but to deepening it.

Using customised chatbots in disinformation education: more than a technological exercise

The use of customised chatbots for educational activities on disinformation is far more than a mere exercise in applied technology. It represents a redefinition of the relationship between teacher, content, and student by introducing a new mediator – one that provokes reasoning, poses challenging questions, offers stimuli, and proposes real-time checks. However, it is crucial to remember that these tools are not infallible. Students must also be trained in digital metacognition: the ability to reflect on the reliability and limitations of chatbot-generated responses. It is the teacher’s responsibility to supervise the learning experience, carefully select the data input into GPTs, and guide the comparison between automated answers and independent, critical thinking.

It is therefore essential that the teacher:

  • Takes on an active pedagogical role, monitoring student–chatbot interactions and integrating chatbot contributions with further explanations, clarifications, and in-person discussions.
  • Addresses the limitations of the technology openly, warning students against uncritical acceptance of automatically generated responses.
  • Encourages metacognitive reflection, for example by asking questions such as:
    “Would you trust this answer blindly?”,
    “How would you verify this information, even if provided by a chatbot?”,
    “Did you notice inconsistencies or omissions?”

Concrete examples of customisation

Fact-Checker GPT (created with ChatGPT)

The teacher creates a GPT incorporating guidelines from reputable Italian and international fact-checking organisations (e.g. Pagella Politica, Facta, Snopes, EUvsDisinfo). The chatbot is programmed to respond solely on the basis of verified sources and to cite them during interactions. Students ask questions such as:
“Was the war in Ukraine provoked by NATO?”
The chatbot replies by referencing geopolitical data and fact-checked articles, helping students understand how to distinguish verified claims from strategic disinformation.

Disinformation Detective (using Brisk Teaching)

Within a Google Docs assignment, the teacher inserts a fabricated text containing six rhetorical manipulations. Brisk generates a chatbot that interacts with the student, asking:
“What do you notice in this statement? Could it be a sign of disinformation?”
The chatbot provides immediate feedback, suggesting strategies to investigate the statement further or rephrase it in a more neutral manner.


Creating Custom GPTs with ChatGPT

Through OpenAI’s ChatGPT platform, educators can access the Custom GPT creation area at https://chat.openai.com/gpts. The process is designed to be intuitive and user-friendly, allowing even those without programming expertise to configure their own educational chatbot.

During configuration, teachers can personalise the chatbot’s behaviour, tone, and sources according to their specific pedagogical goals. The first step is to define the role the chatbot will assume – for instance, it can be instructed to behave as a fact-checker, a media literacy educator, or a neutral moderator in digital debates. This initial guidance is essential to ensure consistency in the model’s responses.

A central component of this personalisation involves uploading or referencing a reliable and up-to-date set of sources, such as:

  • Articles and dossiers from accredited fact-checking projects (e.g. Facta, Pagella Politica, Snopes, EUvsDisinfo);
  • Institutional reports on media literacy, such as UNESCO guidelines on disinformation or documents from the European Commission on digital information quality;
  • Teacher-selected materials, including training modules, worksheets, or manipulated content examples for educational use.

In addition to content, teachers can also provide custom behavioural instructions, defining precisely how the chatbot should respond to users. These custom instructions go well beyond setting a formal or friendly tone – they allow the teacher to shape the depth of reasoning, the structure of the response, and the type of content to prioritise. Examples include:

  • “Explain the difference between disinformation, misinformation, and malinformation, using current events as examples.”
  • “Always emphasise the importance of cross-checking sources, inviting the student to consult at least two reliable references.”
  • “When dealing with questionable content, guide the student with Socratic questions to stimulate critical thinking, rather than providing an immediate answer.”
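
Taken together, the role definition and the behavioural rules form the single instruction block that the teacher pastes into the Custom GPT configuration. A minimal sketch of that assembly, where the role text and the concatenation format are illustrative choices:

    # Sketch: building one instruction block from a role plus behavioural rules.
    BEHAVIOURAL_RULES = [
        "Explain the difference between disinformation, misinformation, "
        "and malinformation, using current events as examples.",
        "Always emphasise cross-checking: invite the student to consult "
        "at least two reliable references.",
        "When dealing with questionable content, guide the student with "
        "Socratic questions rather than providing an immediate answer.",
    ]

    ROLE = "You are a media-literacy educator for upper-secondary students."

    SYSTEM_PROMPT = ROLE + "\n\nRules:\n" + "\n".join(f"- {r}" for r in BEHAVIOURAL_RULES)
    print(SYSTEM_PROMPT)  # the result is what goes into the Instructions field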

By adopting this approach, the teacher transitions from being a passive user of AI tools to an active designer of meaningful learning experiences. Customised chatbots become powerful pedagogical mediators, capable of fostering engagement, dialogue, and independent judgement – but only when used within a framework of critical awareness and reflective practice.

The aim is not to make students dependent on AI, but to equip them with the skills to question it. As such, the true value of these technologies lies not in their automation, but in the educational conversations they enable.

Adapting language and testing custom GPTs: from AI tool to tailored pedagogical assistant

In addition to defining the role and behavioural instructions of a chatbot, teachers can modulate the complexity of its language to suit the age, education level, or specific school context. For example, the chatbot can be instructed to use simple, accessible language for lower secondary students, or a more technical and argumentative register for students in their final year or in humanities and social science programmes.
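
This modulation can itself be parameterised, so that one chatbot definition serves several classes. A small sketch, in which the level labels and register descriptions are illustrative assumptions:

    # Sketch: selecting the chatbot's register by school level.
    REGISTERS = {
        "lower_secondary": "Use simple, accessible language, short sentences, "
                           "and everyday examples.",
        "final_year": "Use a more technical and argumentative register; "
                      "expect counterarguments and ask for sources.",
    }

    def system_prompt(level: str) -> str:
        base = "You help students analyse online information critically."
        return f"{base} {REGISTERS[level]}"

    print(system_prompt("final_year"))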

Moreover, teachers can test their Custom GPT directly within the ChatGPT interface by simulating an interaction with a student. This allows them to observe how the chatbot handles questions, responds to sensitive topics, and reasons in the face of false or misleading content. Such testing is a crucial step before classroom deployment, allowing for refinements to ensure the chatbot functions as an effective educational tool.

The possibility of creating a Custom GPT using ChatGPT enables teachers to transform generative artificial intelligence into a tailor-made teaching assistant. This assistant can guide students through the critical exploration of digital information, support the deconstruction of deceptive narratives, and foster the responsible and conscious use of media.

An example of such a Custom GPT, Media Mentor, is available at:
https://chatgpt.com/g/g-6825dd4b5f488191a36347fdd3fc73af-media-mentor


Custom chatbots with Brisk Teaching

Brisk Teaching is a platform integrated with the Google Workspace for Education environment (especially Google Docs, Slides, and Classroom). It enables teachers to rapidly generate customised chatbots, each designed to perform specific instructional functions. A distinctive strength of Brisk lies in its ability to adapt in real time to the materials provided and to defined learning goals. This flexibility makes it a powerful tool for activities focused on digital citizenship education and critical thinking.

The process begins by uploading or selecting a targeted set of educational resources. Teachers may choose either authentic or simulated materials, such as:

  • News articles or blog extracts, which support analysis of journalistic language and source selection;
  • Official statements from institutions such as the WHO, the European Union, or national public health agencies, to be compared with manipulated or decontextualised versions;
  • Debunking content from accredited fact-checking sites (e.g. Facta, Pagella Politica, EUvsDisinfo), to enable contrast between accurate reporting and misleading narratives.

Once the materials are in place, Brisk automatically generates a teaching chatbot capable of simulating realistic scenarios in which students are challenged to assess the quality and reliability of information. The chatbot can be programmed to assume various roles depending on the educational context:

  • The Disinformation Influencer: The chatbot impersonates a digital persona spreading sensationalist messages, pseudoscientific content, or fake news. Students must identify persuasive techniques, recognise missing or distorted sources, and construct counterarguments based on evidence.
  • The Digital Fact-Checker: The chatbot guides students through verification tasks, offering reliable sources, explaining how to trace claims back to primary data, and identifying manipulation, omission, or false balance in posts and articles.

In both cases, the interaction unfolds directly within the working document, with a smooth interface that provides immediate feedback, contextual suggestions, and progression to interactive tasks.

Progressive learning paths and pedagogical impact

Brisk can also be configured to propose progressively challenging exercises, enabling students to tackle increasingly complex informational scenarios. Examples include:

  • Analysing a viral post that contains logical fallacies, such as overgeneralisations, false causality, or ad hominem attacks;
  • Examining misleading graphs or statistics, verifying the origin of the data and assessing the accuracy of their interpretation;
  • Rewriting emotionally charged or biased texts into balanced, fact-based articles that avoid rhetorical manipulation.

From a pedagogical standpoint, Brisk supports teachers in monitoring students’ cognitive processes and encourages metacognitive reflection. Rather than merely labelling an answer “right” or “wrong,” students are guided to understand why a piece of content may be unreliable and how it can be improved.

Thanks to its seamless integration with Google Workspace, all interactions can be archived, shared, and evaluated. This makes Brisk an ideal solution for personalised learning, digital civic education, and the development of transversal competences such as argumentative analysis, critical communication, and media responsibility.

Brisk and the scalable integration of generative AI in education

Brisk offers educators a concrete and scalable opportunity to integrate generative AI into teaching practice, particularly in fostering active digital citizenship. By empowering students to recognise, deconstruct, and resist disinformation through critical tools and practical skills, Brisk aligns with the broader objective of equipping young people to navigate complex digital ecosystems with discernment and integrity. It is not merely about detecting falsehoods, but about nurturing a sceptical, informed, and ethically grounded relationship with information technologies – a vital competence for contemporary education.


Using educational chatbots to counter cognitive biases and strengthen informational awareness

In today’s educational context – marked by increasing exposure to all kinds of digital content – information literacy cannot be reduced to teaching students how to verify sources. It must also include a guided exploration of the mental mechanisms that shape how individuals perceive and interpret information, often unconsciously. Among these mechanisms, confirmation bias and the Dunning–Kruger effect represent two significant obstacles to developing balanced critical thinking and resisting disinformation.

The integration of customised chatbots into the learning environment provides a concrete opportunity to address these cognitive distortions in an active and reflective way. Thanks to their ability to generate adaptive interactions, pose metacognitive questions, and present differentiated scenarios, chatbots can act as facilitators of self-analysis, argumentative dialogue, and awareness of one’s own knowledge limits.

Confirmation bias: recognising the influence of personal beliefs

Confirmation bias is the tendency to seek out, interpret, and remember information that supports pre-existing beliefs, while ignoring or dismissing evidence that contradicts them. Although rooted in the natural human preference for cognitive coherence, this bias undermines objectivity and contributes to the formation of “information bubbles”, reinforcing polarisation and the spread of partial or distorted content.

In schools, educational efforts must aim to make this bias visible, turning it into an object of critical reflection. Appropriately configured chatbots can help by presenting students with content that challenges their assumptions, posing questions that prompt them to revise their views, and encouraging engagement with contrasting sources. They can also be programmed to detect signs of cognitive closure or interpretive selectivity, offering feedback that invites students to explore alternative perspectives.

Through such dialogues, students can be trained to:

  • recognise how personal preferences influence their interpretation of information;
  • engage meaningfully with opposing viewpoints – even when they feel uncomfortable;
  • verify all information, including that which appears to confirm what they already believe.

The Dunning–Kruger effect: encouraging metacognitive awareness

The Dunning–Kruger effect is a metacognitive bias whereby individuals with low competence in a given area tend to overestimate their knowledge, while those with high competence are more likely to recognise their limitations. This phenomenon contributes to the unwitting spread of disinformation: those lacking the tools to critically evaluate complex issues are often more inclined to share simplified, inaccurate, or misleading content with unwarranted confidence.

In education, countering the Dunning–Kruger effect means fostering metacognition – the ability to reflect on one’s level of understanding, acknowledge knowledge gaps, and adopt a mindset open to learning. Educational chatbots can support this by posing reflective questions, offering adaptive feedback, and encouraging self-assessment. They can be used to:

  • constructively highlight when a student makes unfounded or overly simplistic claims;
  • provide scaffolded explanations that reveal the complexity of a topic and suggest further exploration;
  • promote caution in judgement, countering the impulse to draw hasty or absolute conclusions.
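
These moves can be condensed into a reusable prompt fragment and appended to any of the configurations discussed earlier; the wording below is an illustrative example rather than a validated protocol:

    # Sketch: a metacognitive prompt fragment targeting overconfidence.
    METACOGNITION_PROMPT = (
        "When the student makes a factual claim:\n"
        "1. First ask them to rate their confidence from 1 to 5.\n"
        "2. If the claim is unfounded or overly simplistic, say so "
        "constructively and show one further layer of complexity.\n"
        "3. Close with a question that invites verification, never with "
        "an absolute conclusion."
    )
    print(METACOGNITION_PROMPT)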

Building a culture of cognitive awareness

The aim is not to discourage students from expressing opinions, but to create the conditions in which they can become aware of their cognitive limitations and develop the habit of questioning what they know – and what they do not know. Both confirmation bias and the Dunning–Kruger effect act as invisible barriers to accurate and impartial evaluation of information. For this reason, educational activities aimed at countering disinformation must go beyond fact-checking techniques and include explicit instruction in cognitive awareness.

Chatbots, when used intentionally and guided by sound pedagogy, can provide valuable support in this area. For them to be effective, they must be configured not just to provide correct answers, but to stimulate cognitive and emotional processes: to provoke doubt, encourage critical revision, and emphasise the learning journey over the immediate result.

The teacher retains a central role in this process – as mediator and facilitator – supervising student–chatbot interactions, interpreting cognitive reactions, guiding metacognitive dialogue, and integrating digital activities with moments of collective reflection.

Reframing the fight against disinformation

Countering disinformation is not only a matter of improving the quality of content, but also of improving the quality of thinking. Confirmation bias and the Dunning–Kruger effect, though invisible, exert a powerful influence on how individuals form beliefs and interact with information. Using chatbots to surface, recognise, and deconstruct these biases transforms artificial intelligence into a tool for cognitive self-education, capable of strengthening critical thinking, self-evaluation, and informational responsibility.

A school that aspires to educate competent, aware digital citizens cannot ignore this deeper level of educational engagement. It is by confronting the way we think, not just the things we know, that we prepare learners to navigate a world of complexity, uncertainty, and ever-evolving digital communication.

A meaningful learning opportunity

The use of customised chatbots to counter disinformation represents a meaningful educational opportunity to promote digital responsibility and informational competence among young people. Thanks to the ability to configure GPTs to suit specific learning goals, teachers can offer students authentic, engaging, and cognitively rich experiences – positioning AI as an ally in the formation of informed and critically aware citizens. However, such an alliance demands vigilance, pedagogical mediation, and a critical approach to technology. Only under these conditions can AI truly serve as an educational resource.

The capacity to create tailored educational chatbots offers a powerful, flexible, and accessible means of addressing the pressing issue of disinformation in schools. By using platforms such as ChatGPT and Brisk in a targeted and intentional way, educators can design teaching experiences that not only engage learners but also build essential skills: critical analysis, source evaluation, argumentation, and responsible digital behaviour.

Yet, this innovation must be anchored in a robust pedagogical framework. Chatbots do not replace teaching – they enhance it when embedded within a thoughtful and intentional educational project. Their value lies not in the automation of content delivery, but in their capacity to facilitate meaningful dialogue, stimulate reflection, and support the development of independent thinking.

In an era marked by informational manipulation and social polarisation, teaching students to recognise and resist disinformation is not merely a technical skill – it is an act of civic engagement. Doing so with the support of artificial intelligence, when guided by pedagogical rigour and critical awareness, can make a genuine and lasting impact. It is not the chatbot alone that fosters change, but the way it is integrated, questioned, and used to promote a deeper understanding of truth, knowledge, and responsible participation in the digital world.



Bibliography

European Commission. (2022). Final report of the Commission Expert Group on tackling disinformation and promoting digital literacy through education and training.
URL: https://education.ec.europa.eu/document/final-report-of-the-expert-group-on-tackling-disinformation-and-promoting-digital-literacy-through-education-and-training

Joint Research Centre. (2022). DigComp 2.2 – The Digital Competence Framework for Citizens.
URL: https://joint-research-centre.ec.europa.eu/digcomp-digital-competence-framework_en

Machete, P., & Turpin, M. (2020). The use of critical thinking to identify fake news: A systematic literature review. In Responsible Design, Implementation and Use of ICT. Springer.
URL: https://link.springer.com/chapter/10.1007/978-3-030-45002-1_20

McDougall, J. (2019). Media literacy versus fake news: Critical thinking, resilience and civic engagement. Medijske Studije, 10(19), 29–45.
URL: https://hrcak.srce.hr/file/330876

Mihailidis, P., & Viotty, S. (2017). Spreadable spectacle in digital culture: Civic expression, fake news, and the role of media literacies in ‘post-fact’ society. American Behavioral Scientist, 61(4), 441–454.
URL: https://journals.sagepub.com/doi/abs/10.1177/0002764217701217

Troia, S. (2020). Mi posso fidare? Contrastare con le competenze la disinformazione [Can I trust it? Countering disinformation through competences]. BRICKS, 4.
URL: https://www.rivistabricks.it/wp-content/uploads/2020/09/2020_04_19_Troia.pdf

Troia, S. (2022). Disinformation, the EU guidelines for teachers and educators: What they are and the next steps. Agenda Digitale.
URL: https://www.agendadigitale.eu/scuola-digitale/lotta-alla-disinformazione-le-linee-guida-ue-per-insegnanti-e-educatori-cosa-sono-e-prossimi-step

Troia, S. (2023). Understanding the European vision for digital education to foster active engagement. Impara Digitale.
URL: https://imparadigitale.nova100.ilsole24ore.com/2023/02/05/conoscere-la-visione-europea-per-listruzione-digitale-per-vivere-un-coinvolgimento-attivo-deap-digcomp

Troia, S. (2025). Guide to generative AI in education and research. Cittadinanza Digitale.
URL: https://www.cittadinanzadigitale.eu/?cat=315

Troia, S. (2025). The European Digital Skills Certificate: Why it matters. Agenda Digitale.
URL: https://www.agendadigitale.eu/giornalista/sandra-troia

Troia, S. (2025). LAB_Cyberbullingfreezone. Cittadinanza Digitale.
URL: https://www.cittadinanzadigitale.eu/?cat=315

UNESCO. (2018). Journalism, “fake news” and disinformation: A handbook for journalism education and training.
URL: https://en.unesco.org/fightfakenews

Vraga, E. K., Bode, L., & Tully, M. (2020). Creating news literacy messages to enhance expert corrections of misinformation on Twitter. Communication Research.
URL: https://journals.sagepub.com/doi/full/10.1177/0093650220910064

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking. Council of Europe.
URL: https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c

Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11), 1–40.
URL: https://www.tc.columbia.edu/faculty/sw2342/faculty-profile/files/LateralReadingWineburgMcGrewTCR2019.pdf


Fact-checking Resources and Platforms

EUvsDisinfo – European Union initiative to counter disinformation.
URL: https://euvsdisinfo.eu

Facta News – Italian project dedicated to fact-checking and debunking misinformation.
URL: https://facta.news

Pagella Politica – Italian fact-checking platform focused on political statements.
URL: https://pagellapolitica.it

Snopes – International fact-checking website.
URL: https://www.snopes.com