- Meta launches anti-disinformation team
- Targets AI misuse, election threats
- Signs pledge against AI deception
Ahead of the forthcoming European Parliament elections, Meta, the owner of Facebook, has announced plans to set up a specialised team to counter disinformation and the adverse effects of artificial intelligence (AI).
Marco Pancini, Head of EU affairs at Meta, stated that the “EU-specific Elections Operations Centre” would bring together experts from across the company to tackle misinformation, influence operations, and threats stemming from the misuse of artificial intelligence.
In a blog post on Sunday, Pancini wrote, “Ahead of the election period, we will facilitate our fact-checking partners across the EU in finding and rating content related to the elections, recognising the importance of speed during breaking news events.”
“We will use keyword detection to collate related content in one place, simplifying the process for fact-checkers to find it.”
Pancini also noted that Meta’s efforts to mitigate AI risks would include a feature requiring users to disclose when they share AI-generated audio or video, with potential penalties for non-compliance.
“We already label photorealistic images generated by Meta AI, and we are developing tools to label AI-generated images from Midjourney, Shutterstock, Google, and OpenAI that users post on Facebook, Instagram, and Threads,” he added.
The advent of AI platforms such as Google’s Gemini and OpenAI’s GPT-4 has raised alarms amid concerns that fake information, images, and videos could sway election outcomes.
Global Elections Tackle AI Threats
The European Parliament elections, scheduled for June 6 to 9, are among several crucial polls taking place in 2024, a year widely regarded as the most significant for elections in history.
Voters in over 80 countries, including the United States, India, Mexico, and South Africa, will go to the polls, representing around half of the world’s population.
Meta, along with 19 other technology firms, including Google, Microsoft, X, Amazon, and TikTok, signed a commitment earlier this month to clamp down on AI-generated content designed to deceive voters.
Under the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” the companies agreed on an eight-step plan to reduce election risks. This plan involves creating tools to identify AI-generated content and enhancing transparency about the actions taken to tackle potentially harmful material.
Several electoral processes have already come under scrutiny over AI’s influence on voters.
Earlier this month, ahead of Pakistan’s parliamentary elections, the incarcerated former Prime Minister Imran Khan mobilised supporters with AI-generated speeches.
In January, a fake robocall impersonating US President Joe Biden discouraged voters from participating in the New Hampshire primary election.