A Year of Elections: Meta’s Evolving Role in Safeguarding Democratic Processes

The recent presidential election in the United States concluded an extraordinary year where nearly two billion individuals were expected to participate in elections across some of the world’s largest democracies, including India, Indonesia, Mexico, and the European Union. As a platform fostering public dialogue, Meta recognizes its critical responsibility to protect the democratic process, ensuring people can freely express their voices while preparing for elections worldwide.

Since 2016, we have refined our approach to managing election integrity, adapting to lessons learned and anticipating emerging risks. Our cross-functional election integrity team includes experts from intelligence, data science, product development, engineering, public policy, and legal disciplines. In 2024, we operated election operations centers globally to promptly address election-related challenges, covering major electoral events in the US, Bangladesh, Indonesia, India, Pakistan, the EU, France, the UK, South Africa, Mexico, and Brazil.

Now that the US election is behind us, we’re sharing key trends observed on our platforms and explaining how we balanced safeguarding free expression with maintaining user safety.


Balancing Free Expression and Safety

Allowing users to voice their opinions while ensuring a secure environment is a complex challenge, and no platform can achieve perfection in this area. We acknowledge that errors in policy enforcement can hinder the very freedom of expression we strive to promote, sometimes resulting in the removal or restriction of non-violative content. Throughout the year, we’ve focused on updating and applying our content policies equitably, and we remain committed to this goal.

  • Enhanced Political Content Controls: We introduced new tools on Facebook, Instagram, and Threads to let users control the amount of political content they see. These controls have rolled out in the US and are expanding globally.
  • Clarified Election-Related Policies: Users can post organic content questioning election processes, but we prohibit claims of corruption or irregularities when paired with threats of violence. For paid content, ads disputing the legitimacy of ongoing or upcoming elections remain banned.
  • Improved Penalty Systems: In 2023, we adjusted our penalties to ensure fairness, enabling people to voice their opinions while addressing repeat policy violations.
  • Hate Speech Policy Audits: Annual reviews of slur designations prioritize markets with imminent elections to protect vulnerable communities while supporting healthy political discussions.
  • Updated Protocols for Public Figures: Revisions to our policies for public figures ensured Americans could hear from presidential candidates, including during periods of civil unrest, with periodic reviews to determine whether enhanced penalties remained warranted.

Promoting Voter Awareness

In 2024, Meta prioritized connecting users with accurate election information:

  • In-App Notifications: Facebook and Instagram’s reminders about registration, mail-in voting, and election day logistics garnered over 1 billion impressions during the US general election, with 20 million clicks leading to official sites.
  • Voting Alerts: Collaborating with local election officials, we delivered over 765 million notifications since 2020, adapting dynamically to local needs.
  • Search Result Interstitials: US users searching election-related terms received links to authoritative voting resources on Facebook and Instagram.
  • International Engagement: Election notifications reached millions worldwide. For example, India’s alerts via Facebook and WhatsApp reached 145 million and 400 million users, respectively, while in the UK, reminders reached 43.6 million people.

Transparency in Political Advertising

Meta remains a leader in ad transparency for political and social issues. Advertisers must complete a verification process and include a “paid for by” disclaimer, with ads stored in the public Ad Library for seven years. In 2024, we introduced additional requirements for AI-generated political content, ensuring accountability.


Monitoring AI’s Role in Elections

The rise of generative AI raised concerns about deepfakes and AI-enabled misinformation campaigns. In practice, the impact during the 2024 elections was modest, and our existing safeguards proved sufficient to mitigate the risks: AI-generated election-related misinformation accounted for less than 1% of all fact-checked misinformation.

Key actions included rejecting over 590,000 deepfake image requests related to political figures, closely monitoring covert AI-influenced campaigns, and collaborating with industry leaders on AI-related standards, such as the AI Elections Accord. Initiatives like India’s WhatsApp tipline and the European Fact-Checking Standards Network helped combat AI-driven misinformation globally.


Combating Foreign Interference

This year, Meta dismantled 20 covert influence operations across regions including the Middle East, Asia, and Europe. Russia remains the largest source of such networks, with 39 disrupted since 2017, followed by Iran and China. These operations often relied on fake accounts and platforms outside Meta’s ecosystem to evade detection. Our efforts to counter operations like Doppelganger have resulted in the exposure of thousands of fake domains.

Ahead of the US elections, Meta expanded restrictions on Russian state media entities, banning accounts associated with Rossiya Segodnya and RT globally for violating policies against foreign interference.


Reflecting on 2024 and Beyond

The year 2024 has underscored the ongoing challenge of balancing freedom of expression with security. As Meta reflects on this year’s unprecedented electoral landscape, we will continue refining our policies and approaches to uphold integrity and support democratic participation worldwide.

Source: https://about.fb.com
