Meta Oversight Board’s Mixed Bag of Decisions

David Inserra

Meta’s Oversight Board, an independent review body sometimes called the Supreme Court for Facebook, recently issued a series of decisions on controversial issues, mostly regarding hate speech. Unfortunately, the Board’s decisions continue a concerning trend toward a limited and inconsistent view of free expression online. 

The Board was originally imagined as a way to help Meta with difficult decisions and protect free expression on Meta’s platforms in the face of growing internal and external pressures to limit speech. This is reflected in the Board’s charter, which states that “the purpose of the board is to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Meta’s content policies.” 

But during my time at Meta, when I directly supported the work of the Board, and in my research into the Board over the past year or so, it has become clear that the Board’s decisions have fallen short of this lofty goal. While I have forthcoming research that analyzes the successes, failures, and overall impact of the Board, its recent decisions are a microcosm of the challenges facing the institution. Namely, these cases reflect an increasing tension between contrasting views of free expression, one that has resulted in an inconsistent defense of expression online and a muddled approach to setting the norms and principles that Meta can use to craft and enforce its policies.

The Board announced four significant decisions, along with a note that is generally critical of Meta’s recent policy changes, both procedurally and substantively.

  1. The first set of cases involves a pair of posts critical of transgender individuals entering women’s bathrooms or participating in women’s sporting events. The content appears to have been posted by the prominent right-wing accounts Libs of TikTok and the Daily Wire. The Board upheld Meta’s decision to allow this content to remain online, reasoning that greater expression is important around issues of significant public debate.
     
    But while the Board allowed this speech on a topic that is widely and hotly debated by politicians and the public alike, a significant minority of the Board would have taken this content down, and the minority view is given nearly as much space in the decision as the majority view. And rather than the Board driving change to expand speech in this area, it was Meta itself that changed its policies in January to allow speech that explicitly called for restricting traditionally gendered spaces. Rather than use this case as an opportunity to support such changes, the Board took a critical eye toward them, calling for Meta to identify how its new policies might harm the LGBTQ community and to mitigate and report on those harms. The other recommendations in this case focus not on how Meta can expand expression in this area but on how to improve enforcement against potential harassment. Certainly, the Board deserves credit for getting the core decision right, but it could have used this case to affirm broader speech norms.

  2. A second case involved a pair of posts opposed to immigration, one in Polish and one in German, which Meta had left up but the Board said should be taken down. The Polish post criticizes the EU and current Prime Minister Donald Tusk’s more open approach to immigration and uses the term “Murzynów,” which some consider offensive and whose status is a matter of debate in Poland. While the Board and Meta may want to remove obvious slurs from the platform, Meta already has a process for determining when a term is a slur, driven by Meta’s regional teams. The decision here thus reflects an ad hoc addition to the slur list of a term that Meta’s local teams did not find to be consistently used as a slur. The Board also recommends broad external engagement with civil society on slurs, potentially to improve the process. The effect of this decision is to suggest that a broader set of terms across all languages could be considered slurs.
     

    The German case called for stopping immigration because “they don’t need any more gang rape specialists,” along with a link to a German government website about “Non-German suspects in gang rapes.” The post refers to a German parliamentary debate over migration and crime, in which German government statistics found that since 2015, 46 to 56 percent of gang rapes were committed by non-German suspects, despite foreigners representing only about 13 percent or less of the population. The German government formally responded to these criticisms by citing its own research attributing the criminality of foreign-born individuals to their poverty and youth, factors associated with lawbreaking.

    The Board has chosen to interpret these remarks uncharitably, saying that the user is accusing all or most immigrants of being sexual criminals and that the cumulative harm of such comments justifies their removal. A more charitable interpretation, and one that aligns with the cited documents, is that the user believes Germany’s much-debated migration and asylum policy has admitted more people who, according to the statistics, end up engaging in sexual crimes, and that this justifies a more restrictive approach to immigration.
     

    Without getting into the debate over immigration, the point is that this is not a straightforward case of all immigrants being called sexual predators. Rather, the speech is directly linked to specific government statistics, to parliamentary debates over immigration in Germany, and to the broader political debate across the EU. The Board not only interprets these remarks in the most hostile way possible but also directly recommends that Meta reverse its current presumptions. Today, Meta does not assume that unclear content is violating; the Board recommends that Meta instead assume that hate speech attacks target an entire protected characteristic group (and are thus violating) unless explicit qualifications are added. Requiring Meta to make this assumption of ill intent based on a nebulous standard of cumulative harm guarantees significant overenforcement against political and social speech (a brief sketch after this list illustrates how flipping that default plays out).

    Furthermore, Meta announced in January that it was loosening its policies to avoid silencing debate around immigration. While the Board states that the revised policy did not change the outcome of this case, the decision again takes aim at Meta’s new policy direction of allowing greater expression on such issues, explicitly recommending that more immigration-related speech be removed and that Meta again identify, mitigate, and report on the harms of the new policies. And unlike in the transgender case, the free speech minority here offers a much smaller rebuttal, leaving less of an impact.

  3. The third case concerns a pair of posts in which users displayed the former national flag of South Africa from the apartheid era and made comments expressing some fondness for that prior time, including potential support for apartheid by asking users to “read between the lines.” Through a variety of majority and minority perspectives, the Board decided these posts were not clearly hate speech and that, while they did violate Meta’s dangerous individuals and organizations policy, the content should not be removed but rather demoted or not recommended. Ultimately, the Board recommended that Meta change its policy to add apartheid to its list of “hateful ideologies.”
     

    What is considered a hateful ideology within Meta’s dangerous individuals and organizations policy is a topic worthy of greater Board consideration. The policy currently covers an ad hoc list of far-right ideologies (Nazism, white supremacy, white nationalism, and white separatism) rather than articulating a clear set of principles and standards for what makes an ideology hateful and dangerous. If the policy is meant to emphasize the hatefulness of a given ideology, there are plenty of supremacist and nationalist ideologies that could be added to the list, given their claims of inherent superiority over others. Alternatively, the policy could turn on how dangerous and deadly an ideology has been or could be. That would likely expand the list to include various far-right and far-left totalitarian ideologies, notably Leninism, Stalinism, Maoism, and other communist systems that resulted in the deaths of millions of people and the repression of hundreds of millions more.

    Crafting a set of standards for what constitutes a hateful ideology, or at least pushing Meta to do so, would be a great task for the Board. Unfortunately, the Board only asked Meta to further expand its ad hoc list, again targeting far-right speech, and missed an opportunity to set a broad norm for when speech should be restricted regardless of ideological lines. Thankfully, the Board did prefer softer actions like demotion over hard removal.

  4. The fourth case involved three pieces of content about the UK immigration riots that Meta’s automated tools had left up. Upon deeper review, Meta found one piece of content violating for calling for mosques and buildings where “migrants,” “terrorists,” and “scum” live to be destroyed or set on fire. The other two pieces involved seemingly AI-generated images in which Muslims are shown chasing a blond toddler or being chased by British men saying “Enough is Enough.” The Board ultimately decided that all three pieces of content were violating, given the context of the riots. It recommended expanding the violence and incitement policy to cover places as well as people and removing more image-based content of the kind the Board believes should violate the Hateful Conduct policy.
     

    The Board also used this case as an opportunity to scrutinize Meta’s move to a community notes-style system, expressing concerns about the speed, accuracy, and volume of notes compared with third-party fact-checking. The Board instead seems to favor even greater use of fact-checking to address misinformation, as the decision expresses concern that the pool of “fact checkers Meta relies on is limited,” meaning a lot of content is “never assessed.”
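
To make concrete the presumption flip discussed in the second case above, here is a minimal, hypothetical sketch of an enforcement decision rule. It is not Meta’s actual system: the Post type, the review labels, and the sample posts are all invented for illustration, and real enforcement involves far more than a single default. The only difference between the two runs below is the default applied to ambiguous content.

```python
# Hypothetical sketch of how a default presumption shapes enforcement
# outcomes. This does not reflect Meta's real systems; the labels and
# sample posts are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    # What review could establish: "attack_on_group", "not_attack", or
    # "ambiguous" (e.g., criticism tied to statistics or policy debate,
    # with no explicit qualification either way).
    finding: str


def decide(post: Post, presume_violating_when_ambiguous: bool) -> str:
    """Return 'remove' or 'keep' for a post under a given default."""
    if post.finding == "attack_on_group":
        return "remove"
    if post.finding == "not_attack":
        return "keep"
    # The contested question: what happens when intent is unclear?
    return "remove" if presume_violating_when_ambiguous else "keep"


posts = [
    Post("explicit slur aimed at a protected group", "attack_on_group"),
    Post("criticism of asylum policy citing crime statistics", "ambiguous"),
    Post("news report on a parliamentary immigration debate", "not_attack"),
]

# Current approach per the decision: unclear content is not presumed violating.
print([decide(p, presume_violating_when_ambiguous=False) for p in posts])
# -> ['remove', 'keep', 'keep']

# The Board's recommended default: ambiguity counts as violating unless the
# speaker adds explicit qualifications.
print([decide(p, presume_violating_when_ambiguous=True) for p in posts])
# -> ['remove', 'remove', 'keep']
```

Flipping that single default moves the ambiguous middle category, where most contested political and social speech lives, from keep to remove, which is exactly the overenforcement dynamic described above.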

These cases showcase the Board’s mixed approach to advocating for greater expression. The Board has often chosen norms that limit speech based on strained views of harm. In some cases, the Board may advance expression but temper its decisions with other limits on expression or calls to remove more content. The Board may also appear to advance principles that are in tension or even contradictory. As a brief example, the Board has in several cases argued that women’s and feminist speech reviewed against the Hate Speech policy should be read with charitable assumptions and context that allow the content to remain up.

But in this batch of cases, we see the Board frequently arguing for limits on speech based on less than charitable views of the content. This shortcoming may be indicative of a divide within the Board between more American-centric views of expression that are generally more supportive of speech and less open to viewpoint-based moderation, and more European and international views that are more willing to limit speech in general as well as specific viewpoints. 

My forthcoming paper will explore these trends and challenges in more detail, so stay tuned. 
