Facebook, TikTok, Twitter failed electoral integrity test in Kenyan elections • TechCrunch

Social media platforms Facebook, TikTok and Twitter failed to deliver on their electoral integrity promises during Kenya’s August elections, according to a new study from the Mozilla Foundation. The report says that content labeling failed to stop disinformation, while political advertising served to amplify propaganda.

The study found that within hours of voting ending in Kenya, these social media platforms were awash with misinformation and disinformation about candidates who had supposedly won the election, and that labeling on Twitter and TikTok was spotty and failed to stop the spread of these falsehoods. The report says the irregular labeling of posts calling the election ahead of the official announcement affected some parties more than others, making the platforms appear partisan.

Facebook largely failed on this front, displaying “no visible labels” during the election and allowing the spread of propaganda, such as claims about the kidnapping and arrest of a prominent politician that had been debunked by local media outlets. Facebook only recently placed a label on the original post making that claim.

“The days after the Kenyan general election were an online dystopia. More than anything, we needed platforms that delivered on their promises of being trusted places for election information. Instead, they were just the opposite: sites of conspiracy, rumors, and false claims of victory,” said Odanga Madung, a Mozilla Tech and Society Fellow who conducted the study and had previously raised concerns about the platforms’ inability to moderate content in the period leading up to the Kenyan elections. Mozilla found similar flaws during the 2021 German elections.

“This is especially discouraging given the platforms’ promises ahead of the election. In a matter of hours after the polls closed, it became clear that Facebook, TikTok and Twitter lack the resources and cultural context to moderate electoral information in the region.”

Prior to the elections, these platforms had issued statements about the steps they were taking in the run-up to the Kenyan elections, including partnerships with fact-checking organizations.

Madung said that in markets like Kenya, where levels of institutional trust are low, there was a need to explore whether labeling as a solution (which had been tried in Western contexts) could also be applied.

This year’s Kenyan general election was unlike any other, as the country’s electoral body, the Independent Electoral and Boundaries Commission (IEBC), released all results data to the public in its quest for transparency.

The media, the parties of the main presidential contenders — Dr. William Ruto (now president) and Raila Odinga — and individual citizens conducted parallel tallies that produced mixed results, “trigger[ing] even more confusion and anxiety throughout the country.”

“This untamed anxiety found its home in online spaces where a plethora of misinformation and disinformation thrived: premature and false claims of winning candidates, unverified statements related to electoral practices, fake and parodic accounts of public figures…”

Madung added that the platforms implemented interventions when it was already too late and ended them shortly after the elections. This is despite the knowledge that in countries like Kenya, where results have been challenged in court in the last three elections, more time and effort is required to counter misinformation and disinformation.

Political advertising

The study also found that Facebook allowed politicians to advertise within 48 hours of election day, in breach of Kenyan law, which requires campaigning to end two days before elections. It found that people could still buy ads during this window and that Meta applied less strict rules in Kenya than in markets like the U.S.

Madung also identified several ads that contained premature election announcements and results, something Meta said it did not allow, raising questions about enforcement.

“None of the ads had any warning labels – the (Meta) platform just took the money from the advertiser and allowed them to spread unverified information to audiences,” he said.

“Seven ads on their own can hardly be considered dangerous. But what we identified, along with the findings of other researchers, suggests that if the platform was unable to identify offensive content in what was supposed to be its most controlled environment, then questions should arise about whether there is any safety net on the platform,” the report said.

Meta told TechCrunch that “it’s up to advertisers to ensure they comply with relevant election laws,” but that it has put measures in place to ensure compliance and transparency, including verification of people running ads.

“We prepared a lot for the Kenyan elections over the past year and put in place a number of measures to keep people safe and informed, including tools to make political ads more transparent, so people can scrutinize them and hold those responsible accountable. We make it clear in our Advertising Standards that advertisers must ensure that they comply with the relevant electoral laws in the country in which they wish to run ads,” said the Meta spokesperson.

Mozilla is asking platforms to be transparent about the actions they take in their systems so that it is possible to determine what works to stop misinformation and disinformation, to initiate interventions early enough (before elections are held), and to sustain those efforts after results have been declared.
