IMPUTING INTERNATIONAL CRIMINAL LIABILITY TO SOCIAL MEDIA COMPANIES: THE CASE OF FACEBOOK IN MYANMAR

By Isha Ahlawat and Aakanksha Singh

[Isha Ahlawat and Aakanksha Singh are penultimate-year law students at Jindal Global Law School.]

Between August and November 2017, Myanmar’s government and military institutions orchestrated a crackdown on the country’s ethnic Rohingya minority in the northern state of Rakhine. What the UN Independent International Fact-Finding Mission (“FFM”) on Myanmar termed “clearance operations” began with troops and local mobs burning Rohingya villages and attacking civilians in response to the Arakan Rohingya Salvation Army’s attacks on police posts, and culminated in the forced displacement of hundreds of thousands of refugees who fled to Bangladesh. Facebook in particular was chastised for playing a “determining role” in the ethnic cleansing. Through Facebook, politicians, religious leaders, and citizens weaponized decades of ethnic tensions to spread hate speech and propaganda against the Rohingyas. Facebook became fertile ground for Myanmar’s state institutions to build a narrative that the Rohingyas were a threat to the majority Bamar ethnic group and the Buddhist religion.

Facebook may never be indicted for the role it played in the Rohingya crisis: at the international level, corporate accountability for war crimes remains elusive because of the limited scope of prosecution and legal responsibility. The Genocide Convention and international criminal law recognize only states and natural persons as subjects of legal responsibility when regulating incitement to genocide. Similarly, the UN Guiding Principles on Business and Human Rights follow a soft-law approach, merely providing a roadmap for corporate conduct without seeking to impose sanctions for violations. Given this absence of corporate liability in international criminal law, there is an urgent need to develop standard regulations and liabilities for social media platforms such as Facebook.

*

During the years leading up to the expulsion of the Rohingyas from Myanmar, Facebook had come to dominate cyberspace in the country. The platform became so ubiquitous that, for many citizens, Facebook and the internet were synonymous. The social media giant thus assumed the role of a State-like entity offering the illusion of participative democracy, as any citizen could comment on and share posts made by government officials. In an interesting paradox, the regulated became the regulator when, in December 2018, Facebook banned Myanmar’s commander-in-chief for hate speech. This action, which came far too late and amounted to far too little, revealed how Facebook’s governing rules mimic a constitution, while its community standards become the law of the land in a country with pervasive Facebook use. The moderators who enforce those standards assume the role of a state’s enforcement agencies, while the Facebook Oversight Board interprets the law, much like a Supreme Court. The ban on the official’s speech exemplifies the transformation of a corporation into a State-like watchman that regulates individual and collective action to protect human rights. But who watches the watchman when it errs?

Ultimately, Facebook is a company that works for profit maximization, user retention, and engagement. Its policies are not grounded in any one national legal order but are shaped by competing interests and the preferences of its top-level management. In several rounds of group discussions with Facebook’s employees, researchers found a lack of understanding of human rights norms. There was no formal framework or set of guidelines for content moderation decisions, and some employees admitted to “making rules up”. Although Facebook has stated that it looks to Article 19 of the International Covenant on Civil and Political Rights (ICCPR) for guidance when setting standards for restricting freedom of speech, its interpretation of Article 19 is conclusory: it collapses the tests of legality, legitimacy, and necessity under Article 19(3), as well as proportionality, into an undefined “risk of harm”. The company thus continues its practice of ad hoc decision-making and undefined discretion. Matters are further complicated because “risk of harm,” “newsworthiness,” “public interest,” and “international human rights standards” are nowhere defined in Facebook’s community guidelines or press statements, so questionable content can easily slip under the radar or be ignored at will. Facebook’s lackadaisical approach has led many to believe that the company’s constitution of an Oversight Board, meant to address the deficit of transparency and legitimacy surrounding its content moderation rules and processes, is mere eyewash.

The list of problematic elements does not end here. A judge in Washington, D.C. recently criticized Facebook for refusing to hand over information to investigators working toward prosecuting Myanmar for international crimes against the Rohingyas. Facebook withheld the information citing “privacy concerns” and sought refuge under U.S. laws that bar electronic communication services from disclosing user communications.

In light of all this, several important questions arise. To what extent are corporations like Facebook and their executives responsible under international law for the mismanagement of large-scale atrocity crimes? Moreover, do Facebook’s claims of preserving freedom of speech legitimize its inaction? In Prosecutor v. Nahimana, Barayagwiza, & Ngeze before the International Criminal Tribunal for Rwanda, the founders of extremist media outlets were charged with direct and public incitement to commit genocide for encouraging the Hutu population to kill the Tutsis. The Appeals Chamber, however, later reversed several aspects of the judgment by drawing a clear distinction between international crimes and hate speech, making it difficult to hold those who foment hatred accountable for the violence that stems from their actions. It has thus become an immense legal challenge to prosecute military leaders who perpetrate genocidal propaganda, much less to censure executives of social media companies who allow such propaganda to flourish unabated on their platforms.

Existing mechanisms to address state liability in international criminal and humanitarian law were designed through a statist lens and are structurally ill-equipped to address the many manifestations of business operations when applied to corporations. One may ask: could Facebook be held liable in a civil suit for “complicity in genocide” or for “aiding and abetting” a crime against humanity? Outside international criminal law, Rohingya plaintiffs might bring a tort claim against Facebook for negligence. They would likely fail, however, because in most jurisdictions providers of interactive computer services such as Facebook enjoy broad immunity for content posted by third parties, since they are not considered the publishers of the incriminating information. A prominent example is Section 230 of Title 47 of the U.S. Code, which establishes that websites cannot be held liable for third-party content. In the analogue era, in cases such as Prosecutor v. William Samoei Ruto and Others (2012) or the Nuremberg Trials, publishers and broadcasters of hate speech were placed on the same plane for incitement to genocide. In the internet age, by contrast, social media platforms produce a structure in which the instigator and the broadcaster are treated as legally separate entities. Yet while Facebook did not itself propagate hate speech in Myanmar, it acted as a third-party participant: its software encoded and distributed the message, ultimately making the speech public.

Caroline Kaeb of The Wharton School has argued that criminal law’s focus on imprisonment and deprivation of liberty has constrained the development of corporate criminal liability. To adapt criminal law to this gap, Kaeb argues that courts can issue decrees for the confiscation of a company’s assets, the closure of the implicated corporate unit, or even a corporate “death penalty” in the form of dissolution or monitorship. Another scholar, V.S. Khanna, has advocated a variant of civil liability to circumvent the higher standard of proof demanded in criminal proceedings. Although most of these solutions have been proposed for implementation under municipal law, they are equally relevant to international criminal and human rights law. Moreover, while there is no evidence that criminal liability will effectively constrain the actions of corporations, tortious liability may not give corporations sufficient incentive to be socially responsible either.

*

Facebook’s State-like conduct in regulating speech resembles the restrictions enforced by nations, except that Facebook is not answerable to its users the way governments are answerable to their people and judicial systems. This absence of legal regulation widens the corporate accountability gap in situations of mass atrocity, as the company internally assumes a State-like role without any attendant liability. To close this gap, international law must transform soft-law-based corporate accountability into binding criminal conventions. Corporate executives can indeed be held liable under international criminal law, as the prosecution at the Nuremberg Trials of directors of companies complicit in the Nazi regime demonstrates. When the Rome Statute was drafted, however, the possibility of prosecuting corporations was rejected because the practice was uncommon in participating States. The global dynamic has since shifted, and any new convention seeking to build a framework for corporate criminal liability must ensure that corporate governance and policy formation are structured to provide transparency and accountability.

While social media corporations may not themselves publish hateful, genocidal content, they construct their platforms in a manner that makes it easy for such material to proliferate and reach millions. The recommendation algorithms used by Facebook and other social media platforms are powerful tools that track user activity on the application and across other websites in order to serve content and advertisements that encourage users to scroll, click, comment, share, and shop. While such algorithms make the user experience more personalized, they are heavily criticized for exacerbating social issues such as violence and racism by promoting misinformation, whose shock value drives high user engagement. Guidelines for community standards must therefore adhere to the norms of international human rights law, and the development of artificial intelligence that supports efficient, non-arbitrary moderation decisions must be prioritized.
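To make the structural criticism above concrete, consider a minimal, purely illustrative sketch of engagement-based ranking, written in Python. Nothing here reflects Facebook’s actual, proprietary system: the Post fields, the scoring weights, and the example posts are all hypothetical. The point is structural: an objective built only from predicted clicks, shares, and comments contains no term penalizing hateful or false content, so material with shock value is rewarded rather than suppressed.

```python
# Hypothetical illustration of engagement-based feed ranking.
# NOT Facebook's actual algorithm; all fields and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model estimate of click-through
    predicted_shares: float    # model estimate of reshares
    predicted_comments: float  # model estimate of comments

def engagement_score(post: Post) -> float:
    # A pure engagement objective with arbitrary illustrative weights.
    # Note the absence of any penalty for hateful or false content:
    # whatever drives clicks and shares is rewarded, not suppressed.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 1.5 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order the feed by predicted engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Community fundraiser this weekend", 0.10, 0.02, 0.05),
        Post("Inflammatory rumour about a minority", 0.40, 0.30, 0.50),
    ]
    for post in rank_feed(feed):
        print(round(engagement_score(post), 2), post.text)
```

Under this toy objective, the inflammatory post outranks the benign one simply because it is predicted to generate more engagement, which is precisely the dynamic critics attribute to engagement-optimized feeds.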

Views expressed in this article are the authors’ own and are not representative of the official views of Jus Cogens Blog or any other institute or organization with which the authors may be affiliated.
