Meta vows to do better after study shows it worsens rights risks in Philippines

Many experts consider the Philippines “Patient Zero” of organized political disinformation.


A smartphone with the Meta logo and a 3D printed Facebook logo is placed on a laptop keyboard in this illustration taken Oct. 28, 2021. (File photo from REUTERS)

December 8, 2021

MANILA, Philippines — Acknowledging that the platform continues to exacerbate “multiple human rights risks” in the Philippines, tech giant Meta (formerly known as Facebook) has vowed to mitigate such risks and to “stop coordinated campaigns” that seek to manipulate public debate on its apps, as the country holds presidential elections next year.

Meta made this commitment after last week’s release of an independent human rights assessment by ethics consulting firm Article One, which found that Facebook remains the main platform not only for disinformation but also for hate speech targeting journalists, human rights defenders, critics, and progressives.

Article One’s assessment was commissioned by Meta as part of its commitment to “understand the role [its] technologies play in society,” following public pressure to address how it fans existing social tensions and destabilizes democracies worldwide.

Many experts consider the Philippines “Patient Zero” of organized political disinformation, which is largely believed to have catapulted then-Davao City Mayor Rodrigo Duterte to the presidency in 2016. Several tech giants, including Facebook, were criticized for failing to curb what is now a burgeoning disinformation industry of troll farms and clickbait models.

Article One surveyed 2,000 Facebook users and interviewed 22 Philippine-based stakeholders, including members of civil society organizations, academics, journalists, and human rights defenders, from March to August.

Among its findings was that Meta “faces multiple salient human rights risks in the Philippines, [which] largely relate to the ways in which the platform exacerbates existing tensions and risks.”

These included misinformation and disinformation, surveillance, terrorist organizing, sexual exploitation, and human and organ trafficking.

Of these, political disinformation continued to be the most prevalent and the hardest to suppress. Fake news was used not only to “influence voter perceptions” but also to “target and attack political opponents.”

Article One recommended, among other measures, that Meta develop a risk-mitigation plan for the 2022 presidential elections, focusing especially on organized disinformation campaigns.
