

WEF promotes AI to combat disinformation with censorship

Mairenis Gomez

June 19, 2024 | 6:00 pm

The World Economic Forum (WEF) has decided to use artificial intelligence (AI) as a tool to combat disinformation

The World Economic Forum (WEF) continues to promote the use of artificial intelligence (AI) to combat “disinformation”, while warning of the dangers of AI in creating that same disinformation. The paradox is clear: the same tool that, by their own account, generates disinformation would be harnessed to censor it.

In a recent WEF report, Cathy Li, Head of AI, Data and Metaverse, and Agustina Callegari, leader of the Global Coalition for Digital Security Project, urge policymakers, technology companies, researchers and civil rights groups to collaborate in developing advanced AI systems to combat disinformation and misinformation.

Advanced AI proposals and techniques

They propose techniques based on the analysis of patterns, language and context to improve content moderation. According to the authors, AI has reached a level at which it can almost perfectly distinguish the nuances between intentional disinformation and unintentional misinformation.

The proliferation of artificial intelligence in the digital age has led to notable innovations and unique challenges, particularly in information integrity. AI technologies, with their ability to generate convincing fake text, images, audio and video, make it significantly harder to distinguish authentic content from synthetic content.

AI as a double-edged tool

This capability allows bad actors to automate and amplify disinformation campaigns, increasing their reach and impact. However, AI also plays a crucial role in the fight against disinformation. Advanced AI-powered systems can analyze patterns, language usage, and context to assist in content moderation, fact-checking, and detecting false information.
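The report does not publish any concrete system, but the three signal types it names (patterns, language usage, and context) can be illustrated with a deliberately simple sketch. Everything below is hypothetical: real AI moderation relies on trained language models, not hand-written rules, and the function and pattern names here are invented for illustration.

```python
import re

# Illustrative only: a toy rule-based scorer, not any real moderation system.
# It mirrors the three signal types mentioned in the report: textual
# patterns, language usage, and context about the source.

SENSATIONAL_PATTERNS = [r"\bshocking\b", r"\bthey don't want you to know\b", r"!!+"]

def misinformation_signals(text: str, source_verified: bool) -> dict:
    """Score a post on three illustrative signals, each in the range 0.0-1.0."""
    lowered = text.lower()
    # Pattern signal: presence of sensationalist phrasing.
    pattern_hits = sum(bool(re.search(p, lowered)) for p in SENSATIONAL_PATTERNS)
    pattern_score = min(1.0, pattern_hits / len(SENSATIONAL_PATTERNS))
    # Language-usage signal: share of all-caps words, a crude proxy for shouting.
    words = text.split()
    caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    language_score = min(1.0, caps / max(1, len(words)) * 5)
    # Context signal: unverified sources are treated as higher risk.
    context_score = 0.0 if source_verified else 0.5
    return {
        "pattern": round(pattern_score, 2),
        "language": round(language_score, 2),
        "context": context_score,
    }

scores = misinformation_signals("SHOCKING cure THEY hid!!", source_verified=False)
print(scores)
```

A production system would replace each heuristic with a model trained on labeled examples; the point of the sketch is only that moderation combines several independent signals rather than a single verdict.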

Social cost of misinformation

The consequences of unchecked AI-driven disinformation are profound and can erode the very fabric of society. The WEF's Global Risks Report 2024 identifies disinformation as a serious threat in the coming years, highlighting a possible rise in internal propaganda and censorship.

Political and social risks

The political misuse of AI poses serious risks, as the rapid spread of deepfakes and AI-generated content makes it increasingly difficult for voters to discern truth from falsehood. This can influence voter behavior and undermine the democratic process. Elections may be affected, public trust in institutions may decline, and social unrest may increase, even triggering violence.

Targeted disinformation campaigns

Additionally, disinformation campaigns can target specific demographic groups with harmful AI-generated content. Gender misinformation, for example, perpetuates stereotypes and misogyny, further marginalizing vulnerable groups. These campaigns manipulate public perception, causing widespread social harm and deepening existing social divisions.

Multi-pronged approach to tackling misinformation

The rapid development of AI technologies often outpaces government oversight, leading to potential societal harms if not carefully managed. Industry initiatives such as content authenticity and watermarking address key concerns about misinformation and content ownership.


Initiatives and collaborations for authenticity

The Coalition for Content Provenance and Authenticity (C2PA), whose members include Adobe, Arm, Intel, Microsoft and TruePic, addresses the prevalence of misleading information online by developing technical standards to certify the source and history of multimedia content.
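The core idea behind provenance standards of this kind can be sketched in a few lines: bind a hash of the content and its edit history into a manifest, then sign the manifest so that tampering with either the bytes or the history invalidates the signature. This is not the C2PA specification or its API, only an assumed minimal model of the same mechanism; the key, function names, and manifest fields are all invented for illustration.

```python
import hashlib
import hmac
import json

# Illustrative sketch only, NOT the C2PA standard. Real implementations
# sign with X.509 certificates; a shared HMAC key stands in for that here.
SIGNING_KEY = b"demo-key"

def make_manifest(content: bytes, history: list[str]) -> dict:
    """Bind a content hash and its edit history into a signed manifest."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the bytes still match the claim."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"original pixel data"
manifest = make_manifest(image, ["captured 2024-06-01", "cropped"])
print(verify_manifest(image, manifest))             # True: content untouched
print(verify_manifest(b"edited pixels", manifest))  # False: bytes changed
```

The design choice worth noting is that the signature covers the history as well as the hash, so a manifest cannot be silently transplanted onto altered content or stripped of an editing step.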

To mitigate the risks associated with AI, developers and organizations must implement robust safeguards, transparency measures, and accountability frameworks. As AI continues to transform our world, it is imperative to advance our approach to digital security and information integrity.
