The Use of Digital Open-Source Information as Evidence in Human Rights Adjudication: A Reality-Check

[Ruwadzano Patience Makumbe is a doctoral researcher under the ERC funded project DISSECT: Evidence in International Human Rights Adjudication at the Human Rights Centre, Ghent University in Belgium.]

User-generated content, such as material posted on Facebook, Twitter and YouTube, has arguably become one of the sources of information that society, including the media and civil society, relies on most for developments in human rights violations. Content posted by witnesses, victims, perpetrators and independent civil society groups provides relatively comprehensive accounts of events. For instance, Syrian opposition activists provided information on the conflict that various actors have found a useful basis for action. It has been pointed out that more hours of video footage of the Syrian conflict now exist than the actual length in hours of the conflict itself. There is also a surging interest among practitioners, lawyers and legal scholars in how digital open source information can be utilized as evidence that meets standards accepted in courts. The immense shift that digital open source information and investigations have triggered in international justice and accountability is receiving considerable scholarly and practitioner attention, particularly the complexities around using this type of information as evidence in international courts. This blog post contributes to this discussion by highlighting some of the key challenges inhibiting the harnessing of digital open source information as evidence and proposes solutions to address them.

Globally, courts are progressively transitioning to the new machinery offered by digital technologies and adapting the manner in which law is practiced. Forensic evidence such as fingerprints or DNA has long been used in court, unlike digital open source evidence. The use of digital open source evidence such as electronic images, video footage and satellite imagery is still maturing, particularly in human rights related cases. In recent years, there have been changes in the evidential system: digital open source evidence is being used in prosecutions related to human rights violations before the International Criminal Court (ICC) and in Europe, particularly in Germany, the Netherlands, Sweden and Finland, where universal jurisdiction prosecutions are brought against (often) asylum seekers from Syria and Iraq identified as alleged perpetrators of international crimes. Despite the increase in the production of digital open source information, it is not yet commonly used as evidence in courts. However, this is set to shift given the increasing quantities of digital information useful to cases being gathered by human rights practitioners, lawyers, victims and witnesses, and its widespread availability and accessibility.

The lack of clarity on the use of digital open source information presents challenges and questions to lawyers and human rights practitioners who may want to present it before courts as evidence. This is largely because it remains relatively untested in human rights courts and because of its inherently fragile characteristics. Potential hindrances to harnessing digital open source information as evidence are also experienced by open source investigators and judicial officers. This post highlights three key challenges:

First, digital information is inherently unstable. This poses a significant difficulty in ensuring that the material can be relied on for evidence, proof and truth in court proceedings. That digital information is generally susceptible to manipulation is well-documented here, with this danger also having been discussed specifically in relation to international criminal investigations here and its vulnerabilities and mitigation strategies here. Concerns have been raised as to the appropriateness of relying on material that is impermanent, can be easily manipulated and is vulnerable to attack from misinformation and disinformation campaigns. This is even more worrying as tools such as deepfake technologies are constantly improved, making it difficult to tell when a piece of digital information such as a video has been manipulated. The perceived credibility deficit from which digital information suffers thus produces a bias that only much more efficient authentication and verification mechanisms can reverse.
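One widely used building block of such verification mechanisms is cryptographic hashing: an investigator records a digest of a file at the moment of collection, and any later alteration, however small, produces a different digest. The sketch below is a minimal, hypothetical illustration of this idea in Python; the file contents are invented for the example and do not correspond to any real investigation.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of the data: a tamper-evident fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical file contents recorded at the moment of collection.
original = b"frame 0413 of eyewitness video, captured 2021-05-02"
tampered = b"frame 0413 of eyewitness video, captured 2021-05-03"  # one byte altered

print(fingerprint(original) == fingerprint(original))  # True: hashing is deterministic
print(fingerprint(original) == fingerprint(tampered))  # False: any alteration changes the digest
```

A digest recorded and published at collection time lets a court later confirm that the exhibit it receives is bit-for-bit identical to what was originally gathered, though it cannot by itself prove that the original recording was authentic.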

Second, digital open source information may vanish, as it is constantly at risk of being taken down by third party companies, particularly those that run online social media platforms like Facebook, Twitter and YouTube. Content is usually taken down because it incites or promotes violence. A video containing hate speech or an extra-judicial killing, for example, is generally considered restricted material by the third party entities, who remove content from their platforms in order to protect their user community from harmful content. It is possible, however, that this material would have been invaluable to victim representatives and human rights lawyers to prove what happened and to corroborate or substantiate further the accounts of victims and witnesses.

Once deleted, the material is no longer available to the public. Although it is usually retained by the social media company, it is not archived within a system that allows relevant authorities or institutions to access it as part of online investigations. Facebook even tried to fight the application for discovery which The Gambia had filed in June 2020 with the U.S. District Court for the District of Columbia in order to compel it to provide information related to the personal Facebook accounts of Myanmar officials who had allegedly perpetrated human rights violations against the Rohingya. Facebook argued that complying with the request would violate the Stored Communications Act (SCA) (18 U.S.C. § 2702), which restricts entities that provide an electronic communication service to the public from sharing the information. However, the Court disagreed and granted the application. Highlighting that only permanently removed content may be divulged, it noted that failure to produce the requested information “would compound the tragedy that has befallen the Rohingya”, thereby recognising and prioritising the need for accountability for international human rights violations. Nevertheless, a sustainable measure has to be designed to address the uncertainty caused by takedowns while enhancing the existing responsibility of social media platforms to moderate the information posted on their platforms.

Third, the use of digital open-source information as evidence in human rights courts has not yet been fully tested, so it remains to be seen how these courts will approach this type of evidence. As digital open source information becomes increasingly useful in human rights work, more cases will inevitably be built on digital open-source evidence as primary evidence. This applies in particular to regional human rights courts, where the use of digital open-source information as key evidence is still in its early stages. Additionally, unlike international and domestic criminal courts, human rights courts apply complex and varied evidentiary rules and are not distinctively designed to conduct fact-finding; they are thus not equipped with forensic specialists who could be useful in processing digital open source evidence. Nevertheless, testing its use will allow for an understanding of key issues attached to digital open source evidence, including proving its credibility, how the metadata and source information should be presented to the court, and the sufficiency and detail required to ensure that the evidence is considered admissible and credible by the courts.

The challenges discussed are complex and require multi-stakeholder responses designed to remain effective over time as technology advances rapidly. Essentially, the inherent instability of digital information is a key issue that is difficult to address, particularly because digital open source information is often user-generated and thus at risk of manipulation. Online open source investigators have the task of continually keeping up with misinformation, disinformation campaigns and deepfake technologies by developing verification and authentication tools as well as archival techniques to preserve digital information. They also require financial resources to keep pace with changing technological tools. Further, it may be difficult to regulate takedowns, but laws that address what social media companies do after takedowns would be useful in a digital landscape. Such laws could ensure that relevant content removed from social media platforms is archived in a manner that makes it shareable with authorities and credible institutions involved in accountability efforts. The lessons learnt in international and domestic criminal prosecutions are certainly useful for human rights courts and practice. However, these practices cannot simply be duplicated, as the systems in place in criminal courts differ from those in human rights courts, including the latter's lack of specialised investigative and fact-finding mechanisms. Human rights courts and lawyers have the opportunity to design a working mechanism for the use of digital open source evidence, including identifying key evidentiary considerations that will be useful in assessing its admissibility, credibility and probative value.

Conclusion

Digital open source information will increasingly become important evidence in court proceedings. International justice institutions including both domestic and international courts should be prepared to handle the upsurge in cases that are built on digital open source evidence. Actors involved in justice processes have the opportunity to contribute to the use of digital open source evidence by ensuring that approaches that are designed are responsive and adaptive to the changing landscape prompted by the digital age.

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with. 

Why We Need to Stop Distinguishing Current Autonomous Weapon Systems

By Nurbanu Hayır

In 2018, a group of experts under the framework of The Heinrich Böll Foundation published a report on autonomy in weapon systems. As this report is a policy recommendation to the German government on the legality of autonomous weapon systems (AWS), it sets out a definition of them. After defining AWS as “any weapon system with autonomy in the critical functions of target selection and engagement”, as inspired by the International Committee of the Red Cross, the report summarizes specific characteristics of some weapon systems that “keep them distinct” from fully AWS “that raise concerns” under international law. It enumerates these characteristics as (1) use of the weapon system in “highly structured and predictable environments”, (2) inability to “dynamically initiate a new targeting goal”, (3) constant human supervision, and (4) anti-material uses of the weapon system, in order to argue that such systems do not qualify as AWS.

This article claims that these distinctive characteristics muddle the debate on what AWS are and whether AWS are illegal. Weapon systems with autonomy in their critical functions, i.e., systems that can “select (i.e. search for or detect, identify, track) and attack (i.e. intercept, use force against, neutralise, damage or destroy) targets without human intervention”, should be defined as autonomous weapon systems irrespective of these characteristics, because the characteristics do not mean that a particular system lacks autonomy in its critical functions but only that the use of the AWS might be legal under International Humanitarian Law (IHL).

The purpose of this article is not to argue that everything that qualifies as AWS is illegal, but rather that everything that qualifies as AWS should be regulated under international law. We should not allow AWS to escape regulation by distorting their definition. Considering that an essential part of the discussions held globally is whether AWS require the development of new norms under IHL, defining AWS as broadly as necessary is crucial to determining the scope of application of these new rules.

  1. Use of the weapon system in highly structured and predictable environments

The use of a weapon system in highly structured and predictable environments may decrease the likelihood of misidentifying targets. Nevertheless, such conditions should not be treated as grounds to refrain from defining these systems as AWS, but rather as elements to consider when deciding whether the use of a particular AWS is legal in casu.

Autonomy is the ability to operate independently from a human operator. It is the product of Artificial Intelligence, a field of study that has allowed machines to develop functions initially performed by humans. One way of doing this is through hand-coded programming, where coders define everything beforehand, which yields no predictability issues unless an exceptional malfunction occurs. However, this method is increasingly being displaced by machine learning, a coding technique that provides more autonomy to machines. To put it very roughly, machine learning algorithms, which are series of combinations to solve a function as in an algebra class, allow the machine to make its own decisions after receiving, with the help of humans, data about the environment and the task it must perform. This has increased predictability issues, since not everything can be pre-programmed by the coder and machine-learning algorithms are not transparent enough for humans to untangle. This is so because the machine operates through thousands of combinations when deciding, of which humans eventually lose track due to the limits of their cognition. Thus, although humans set the goal for the machine, they cannot foresee the pathway by which the machine makes a decision that might lead to an unpredictable result.
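The contrast between hand-coding and machine learning can be made concrete with a deliberately simplified, hypothetical sketch (all names and data are invented, and the example bears no relation to any actual weapons software). In the first function, every decision criterion was written by a human and can be audited line by line; in the second, the criterion is derived from data, so it moves whenever the data changes, and with thousands of such learned parameters interacting, tracing why a given input produced a given output quickly exceeds human inspection.

```python
# Hand-coded rule: the threshold is explicit and was chosen by the programmer.
def hand_coded_is_positive(value: float) -> bool:
    return value > 10.0

# "Learned" rule: the threshold is derived from labelled examples instead.
def learn_threshold(examples):
    """Fit a decision boundary as the midpoint between the two class means."""
    positives = [x for x, label in examples if label]
    negatives = [x for x, label in examples if not label]
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

# Invented training data: (value, label) pairs supplied by humans.
data = [(2.0, False), (4.0, False), (16.0, True), (18.0, True)]
threshold = learn_threshold(data)  # 10.0 for this data, but it shifts with the data

def learned_is_positive(value: float) -> bool:
    return value > threshold
```

Here the learned boundary happens to coincide with the hand-coded one, but only because of this particular training set; real machine-learning systems combine vastly more parameters in non-transparent ways, which is the source of the foreseeability problem discussed above.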

This foreseeability issue is particularly important because systems are likely to misidentify targets due to the limits of current technology. Machines’ perception of the environment remains radically different from that of humans: they use grids of hundreds of shaded squares (pixels) to recognize an object. In contrast, humans see and interpret objects in a cognitive way that is unmatched by machines. When used for target recognition in weapon systems, this has serious repercussions, as machines may “misidentify” targets. Target recognition is equally as important as target engagement in determining whether a weapon system qualifies as an AWS. This should be the case because, although a human may intervene in the target engagement phase, where target recognition is completely independent of humans, the decision to engage will heavily rely on the target recognized by the autonomous function. Autonomous target recognition in the critical function of selecting targets should be sufficient to define the system as an AWS.

Thus, the use of the weapon system in highly structured and predictable environments should not prevent it from being defined as an AWS.

2. Inability to “dynamically initiate a new targeting goal”

Initiation of a new targeting goal based on an objective introduced to a system would be a hallmark of near-General AI, which can perform practically all the functions traditionally performed through human cognitive abilities. Today’s AI is Narrow AI, which can perform only some of the functions a human can. Echoing a near-General AI, the United Kingdom defines AWS as weapon systems “capable of understanding higher-level intent and direction.” However, the ability to select and attack targets, possible with Narrow AI, is sufficient to raise questions of compliance with the IHL principles of distinction, proportionality, and precaution, without any need for a General-AI system. For instance, a weapon system that is given image and speed details to autonomously recognize and engage a target raises questions under the principle of distinction, as it is uncertain whether it can properly distinguish between lawful and unlawful targets. Although such a system is not capable of understanding the goal of the operator’s command, it nevertheless raises concerns under IHL.

Thus, as above, the limits of today’s technology should not prevent defining a system as an AWS. For instance, although a system may be incapable of “dynamically initiating a new targeting goal”, it may still have the autonomy to recognize or engage a target, which is likely to cause issues under IHL independent of the high-level complexity required by some States.

3. Constant human supervision

Although constant human supervision may rule out autonomy entirely, the mere ability of a system to allow for human supervision does not render an AWS non-autonomous per se. Many weapon systems with autonomy are able to operate in autonomous mode and sometimes do. More importantly, human supervision may be exercised over functions independent of targeting. A good example is active protection systems (APS), which are designed to protect armored vehicles at a speed that surpasses the human capability to detect targets. Though exercising human supervision is possible, the aim behind APS is to engage targets faster than humans can, so they usually operate autonomously, without human supervision, in target engagement. Hence, human supervision over the targeting functions is deliberately limited, and such a weapon system should still be defined as an AWS.

Further, it is unclear how much reliance the human operator will place in the weapon system. Concerns about automation bias also support the view that human supervision, unless it rules out the system’s ability to operate independently, cannot be a ground to disregard autonomy in current weapon systems’ functions.

4. Anti-material uses of the weapon

IHL protects civilians and civilian objects under the principles of distinction, proportionality and precaution, applicable to both the design and the use of weapon systems during armed conflicts. If a weapon system is constrained by design not to be used against humans, there will arguably be no issues concerning the protection of civilians during armed conflict. Yet civilian objects (e.g., an operational hospital) might still be threatened. Further, civilian presence is an element independent of the characteristics of the weapon system. Thus, the target type cannot be a ground to claim that a weapon system is not autonomous; it merely makes the use of that weapon more likely to comply with IHL.

Further, some weapon systems are not constrained by design to anti-material uses, but their deployment area happens to be sparsely populated by humans. This is the case for the US Phalanx Close-In Weapon System (Phalanx), deployed in naval areas with almost no civilian presence. The targeting software of Phalanx can select and attack its targets. The fact that it does so in naval areas does not mean that it lacks autonomy in its critical functions. It signifies that its use in autonomous mode is likely to comply with IHL rules; yet there are instances where Phalanx misidentified its targets and opened friendly fire.

Hence, the fact that the system in question is used as an anti-material weapon system is sometimes irrelevant to the design of the weapon system, and it does not always mean that consequences in violation of IHL are impossible.

Conclusion

The Heinrich Böll Foundation’s summary of the characteristics that distinguish current weapon systems from AWS demonstrates a phenomenon in the debate on the definition of AWS that should be eliminated: the definition of an AWS must be independent of the criteria that are likely to render its use legal under the norms of IHL on the use of such weapons. The use of a weapon system in “highly structured and predictable environments”, its inability to “dynamically initiate a new targeting goal”, “constant human supervision” over it, and its “anti-material uses” are merely factors that increase the likelihood of the AWS’s compliance with IHL. They do not mean that a particular system lacks autonomy in its critical target selection and attack functions. This is particularly important to clarify because, when a system is excluded from the definition of AWS, it is no longer possible to include it in the scope of application of the emerging rules on AWS.

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with. 

The New Cyber Plague Demands A “King’s Ransom”: Who is to Blame?

By Başak Köksal

[Başak Köksal is a senior law student at Istanbul University Faculty of Law, in Turkey. She is interested in International Cyberspace Law and Human Rights on the Internet. She is a member of Istanbul Center of International Law (ICIL) and International Law Students Association (ILSA).]

I. INTRODUCTION

Ransomware attacks are cyber-related malicious activities aimed at encrypting the files or systems on a target device until the ransom demanded in exchange for decryption is paid. While the files are encrypted, the purpose is to render the files and systems that the targeted party needs for carrying out its services inoperable until the ransom payment in cryptocurrency (generally Bitcoin) is made. Several ransomware operations have already been carried out against institutions providing health care services, governmental entities and global companies. These may lead to dire consequences, including loss of life and physical damage, that were previously considered possible only via kinetic attacks. One of the most striking recent operations was mounted against Kaseya, which provides IT infrastructure to many transnational corporations and small businesses. Because Kaseya's products interact with clients across much of the world, the operation constituted a supply chain attack through which thousands of businesses were exploited.

Considering all the damage that such attacks cause, one important question may, or should, arise: who is the main actor behind, and therefore responsible for, these malicious activities? Is it the state from which the attack is launched, or a non-state actor performing independently of the state where it operates? This post will deal with the circumstances under which states can be held responsible for a given ransomware attack. It will first lay down the conditions for the attribution of ransomware campaigns to states, and second discuss which norms of international law they may violate.

II. STATE RESPONSIBILITY ARISING FROM RANSOMWARE ATTACKS

As endorsed by the 2015 Report of the United Nations Group of Governmental Experts (UN GGE), states’ positions and Tallinn Manual 2.0, which reflects the teachings of the most highly qualified publicists in the cyber field, the Articles on Responsibility of States for Internationally Wrongful Acts (ARSIWA) are also applicable to the activities of states in cyberspace. Therefore, states can be held responsible under international law for their internationally wrongful cyber acts.

Pursuant to Article 2 of ARSIWA, relating to the elements of an internationally wrongful act, the act in question must be attributable to a state and must be in violation of international obligations imposed on that state. Attribution of cyber activities to a state consists of three main phases: first, the identification of the devices from which the cyber activities concerned are launched; second, the identification of the persons or groups of persons behind them; and finally, the establishment of a sufficient link between the state and the entity concerned (Delerue, 2020, p. 55). In that regard, Articles 4-11 of ARSIWA, stating the circumstances in which conduct is attributable to a state, must be taken into account when assessing the involvement of states in cyberattacks. Among them, Article 8 needs a closer look in the cyber context specifically, because states tend to act through private groups of specialized hackers to shield themselves from accusations (Collier, 2017, p. 25). According to that article, the conduct of a person or group of persons shall be considered an act of a state so long as the conduct is carried out upon the instructions or directions, or under the control, of that state.

In terms of cyber activities, there is controversy regarding the degree of control required to attribute the conduct of private entities to states. On the one hand, some states (e.g. Brazil and Norway) have reaffirmed the “effective control” test, which was introduced by the International Court of Justice in the Nicaragua Case (Nicaragua v. United States, 1986, p. 65, para. 115) and afterwards endorsed in the Bosnian Genocide Case (Bosnia and Herzegovina v. Serbia and Montenegro, 2007, p. 209, para. 401). According to that test, the conduct of private persons or groups can be imputed to a state provided that the state is able to determine the execution of the actions concerned and terminate them whenever it wants (Tallinn Manual 2.0, p. 96, para. 6).

On the other hand, some experts maintain that this strict threshold is hardly attainable and requires considerable effort for the injured state to establish conclusively; there is therefore a need for a lower threshold (e.g. the “virtual control” test, the “control and capabilities” test) (Stockburger, 2017, p. 7; Margulies, 2013, p. 19).

Once the attack is conclusively attributed to a state, it must be considered whether it violates international obligations owed by the allegedly responsible state to the injured state. The following section analyzes the norms of international law that might be breached by state-sponsored ransomware attacks, illustrated with real-life examples and different scenarios.

III. STATE-SPONSORED RANSOMWARE ATTACKS

In the event that the actions of the hackers are attributable to a state, those actions may constitute a breach of the prohibition of the use of force, the principle of non-intervention, or the duty to respect the sovereignty of other states.

a) Use of Force

According to Roscini (2014), for cyber operations to fall within the scope of Article 2(4) of the UN Charter on the prohibition of the use of force, the cyber operation in question must amount to a “threat” or “use of force”, and the threat or use of force must be exerted in the conduct of “international relations” (p. 44). That force must reach the level of an “armed attack” (The Charter of the United Nations: A Commentary, Vol. I, p. 208, para. 16). For a cyber-attack to qualify as an armed attack, the effects-based approach requires that it must “cause or reasonably likely to cause the damaging consequences normally produced by kinetic weapons” (Roscini, 2014, p. 47). Under this approach, ransomware attacks with detrimental impacts on people’s lives (e.g. the Springhill Medical Center ransomware attack) or on national critical infrastructure (e.g. the SamSam ransomware incidents) are likely to amount to a use of force.

Moreover, Schmitt (2012) has also suggested criteria for identifying cyber operations that constitute armed attacks: severity, immediacy, directness, invasiveness, measurability of effects, military character, state involvement, and presumptive legality (pp. 314-315). Under these criteria, ransomware attacks causing loss of life or injuries, or critical damage to state property, may satisfy the severity requirement. However, the directness and immediacy requirements are harder to satisfy: the initial act of a ransomware operation, the encryption of data, does not inevitably and directly cause the severe adverse consequences mentioned above, and those results do not occur immediately after the attack. Generally speaking, there is a length of time between the encryption of data and files and the resulting outcomes. Thus, pursuant to Schmitt’s criteria, despite its grave consequences, the prohibition of the use of force is unlikely to be invoked for ransomware operations.

b) Intervention

Alternatively, a ransomware attack may be deemed a violation of the non-intervention rule, regarded as “part and parcel of customary international law” in the Nicaragua Case (Nicaragua v. United States, 1986, p. 106, para. 202): a violation not as serious as a use of force, but grave nonetheless. A ransomware attack may be considered a prohibited intervention provided that it interferes with the inherent governmental functions of the target state and is coercive, depriving the target state of the ability to determine its own affairs freely (Nicaragua v. United States, 1986, p. 108, para. 205). The first requirement would be met if the attack is intended to render data or services protected and offered by the injured state inoperable. To illustrate, a Texas municipality was hit by a ransomware attack in 2019 that rendered vital records, including birth and death certificates, inaccessible. As for the second condition, the attack could be coercive if it compels the target state to act or to change its attitude with respect to a matter that falls within its internal affairs (see the positions of the Netherlands and Germany). The injured state is compelled to follow one of two predetermined paths: pay the ransom and decrypt the files it needs for the exercise of its functions, or deal with the problem on its own at the risk of delay in providing public services. In this way, attacks that necessitate the payment of a huge ransom shackle the target state into passively carrying out a policy concerning its internal affairs.

c) Sovereignty Principle

Violation of sovereignty in cyberspace is an issue that also deserves attention. The international community is divided (Heller, 2021, pp. 1444-1445) as to whether sovereignty is a standalone rule in cyberspace (with explicit non-acceptance from the UK and the USA). Assuming that sovereignty is not just a principle but a rule that may be violated by states’ cyber actions, the ransomware attack in question must reach one of the thresholds stipulated in Tallinn Manual 2.0, namely physical damage, loss of functionality, or interference with inherently governmental functions of that state.

Physical damage is deemed to exist in cases where the ransomware attack causes loss of life or bodily harm (e.g. the University Hospital Düsseldorf attack) or destroys sophisticated systems or data (e.g. the NotPetya attack). If the attack does not cause material harm but necessitates costly and/or arduous repairs or replacements of the physical components of affected devices (e.g. the National Ink attack), the loss of functionality threshold would be crossed (Tallinn Manual 2.0, p. 21, para. 13). Interference with or usurpation of inherently governmental functions through ransomware attacks may occur when data or services necessary for the exercise of governmental functions are encrypted and rendered unusable, thus violating the sovereignty principle.

IV. CONCLUSION

As can be seen, acts not only in the physical world but also in the virtual world may bring about physical and damaging consequences. Ransomware attacks pose a serious threat to states, global companies and businesses alike. They may target the critical infrastructure of states – such as health, justice and administration – and the databases of companies, including those containing personal data or information needed for the exercise of their functions. It is imperative to identify the origin of these attacks and hold the perpetrators responsible in accordance with the law. Where a ransomware attack is attributable to a certain state, it is unlikely to be considered a use of force, because its nature does not allow the immediacy and directness criteria to be met. Under the non-intervention rule, it may be challenging to prove that the attack is coercive. The most probable way to hold the attacking state internationally responsible is therefore to invoke a violation of sovereignty. Last but not least, the strict criteria regarding both attribution and the use of force developed for kinetic attacks should be duly softened for cyber operations; otherwise, states may escape international responsibility and move freely in this gray zone.

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with. 

Platforming Violence? Incitements to Genocide on Social Media Platforms: a Legal Analysis

Claudia Hyde

[Claudia Hyde holds an LLM (Hons) in Public International Law from the London School of Economics and is a legal researcher.]

Introduction

As jurisdictions such as the United Kingdom and the European Union grapple with the challenge of regulating tech giants, the use of social media platforms during mass atrocities has been brought to the fore by the protracted legal battle between Meta/Facebook and The Gambia.

In November 2019, The Gambia instituted proceedings against Myanmar at the International Court of Justice (ICJ) alleging breaches of the 1948 Genocide Convention committed against Myanmar’s Rohingya minority. At the height of the violence, Facebook emerged as a powerful tool for intensifying and spreading the conflict: as many as 700 individuals were employed by the Tatmadaw (the Myanmar military) to create fake profiles on the platform and flood them with propaganda and incitement to violence. In support of its case before the ICJ, in June 2020 The Gambia filed an application for discovery with the US District Court for the District of Columbia requesting that Facebook disclose information about the now-deleted accounts.

The legal issues raised by the dispute have received extensive comment elsewhere. What has received relatively little attention, however, is the extent to which those Facebook posts constitute breaches of international law in their own right, as violations of the prohibition on incitement to genocide. This post briefly surveys the case law on incitement emanating from the International Criminal Tribunal for Rwanda (ICTR), the legal principles it establishes, and their application to speech on social media.

The prohibition on incitement

Article III(c) of the Genocide Convention prohibits “direct and public incitement to commit genocide.” Each of these elements is an essential component of the crime and will be discussed in turn.

Direct

Incitement to genocide must be “direct” to be punishable in the sense of being understood as a call to commit genocide (Timmermann, 2006). The ICTR Akayesu judgment provides the most detailed description of the “direct” requirement in case law, stating in para. 557 that the incitement must “assume a direct form and specifically provoke another to engage in a criminal act.” Speech that is “mere vague or indirect suggestion” will not constitute incitement. 

Nonetheless, the ICTR has interpreted the “direct” requirement expansively, with a focus on the meaning of the message in its context. Its case law includes clear-cut cases in which defendants called on others to commit genocide unambiguously and directly. In Bikindi, for instance, the accused’s comments included “Hutus should hunt and search for the Tutsis and kill them” [para. 125]. Less direct and euphemistic language has also been held to constitute direct incitement: in Kambanda, the accused was convicted for stating “you refuse to give your blood to your country and the dogs drink it for nothing” [para. 39]. The meaning of the message in its historical, cultural and linguistic contexts determines whether the incitement is direct.

In the context of social media, the sociolinguistic nuances particular to each platform define the relevant context. Language is codified on social media platforms by users who develop shared reference points and adopt similar sentence structure, terminology and syntax, such as acronyms and “memes.” This context may be taken into account in determining whether a direct call to genocide has been made. For instance, users of 4chan’s /pol/ board employed triple parentheses as a coded means of referring to Jews, stylised as (((Them))) (Tuters and Hagen, 2020). The meaning of “direct” in “direct and public incitement” is flexible enough to allow such neologisms and coded language to be considered, and to incorporate new forms of media and new ways of presenting information. This is significant in the Burmese context, where Rohingya people have been vilified as “terrorists” and “traitors” by the government to legitimize violence. These accusations pervade the portrayals of Rohingya people in the impugned Facebook posts.

More controversially, ICTR jurisprudence suggests that sharing incitement posts may be prohibited as well. In Niyitegeka, the accused was found guilty of incitement after commending a member of a militia at a public meeting for his “good work” [para. 142]. Similarly, the accused in Ruggiu was found guilty of incitement after referring to genocidaires as “valiant combatants” [para. 44]. This suggests that other speech acts in which the author endorses or glorifies acts of genocide will constitute incitement, regardless of whether the statement calls on others to partake in genocide. The logical conclusion of the decisions in Ruggiu and Niyitegeka is that the act of sharing, such as “retweeting,” a post that incites genocide would be sufficiently direct. Again, this will be context specific. A retweet of another’s incitement to genocide accompanied by criticism of the message will not be understood by the audience as adopting or endorsing that incitement. However, retweeting another’s incitement to genocide with an affirmative message could be viewed in the same way as the speeches in Ruggiu and Niyitegeka and be considered sufficiently direct. Thus, social media users need not be the authors of a post that incites genocide in order to commit the crime of incitement; sharing or “retweeting” it with approval would suffice.

Public

Various factors have been considered throughout the ICTR’s case law in determining the “publicness” of a statement. Most significant for this analysis is the medium of communication employed. Certain forms of communicating incitement to genocide, including through print media and radio, have been considered “public” by their very nature. For instance, in the Media case, neither the circulation figures of Kangura nor the average number of listeners of Radio Télévision Libre des Mille Collines (RTLM), a radio station that played a significant role in spurring on the violence, was considered in any depth by the Chamber. The fact that mass media were employed determined the publicness of the remarks. Similarly, the broadcast over radio waves of violent and patriotic songs written by the defendant in Bikindi was considered public in and of itself. Relevant here is the ILC’s 1996 commentary, where it was argued that “public incitement is characterized by a call for criminal action…by such means as the mass media.”

This would lead to the conclusion that speech or statements made through social media, being a mass communication platform, would necessarily be “public.” However, the publicness of social media is not so easily discerned: invite-only or “elite” social media platforms such as Raya cannot be accessed by a “mass” audience, as their audiences are by design select and limited. Similarly, if a Twitter user with 20 followers were to incite genocide on their account, it would be difficult to view this as “mass” communication. Given that social media users’ followings and reach vary widely, it is difficult to gauge when a user’s posts will be “public” and when they will not.

The fake profiles created on the orders of Burmese generals often had thousands of followers, meaning that any posts published on those profiles would likely be considered “public” for the purposes of the Convention. But the application of these legal principles to social media posts is currently untested, and each post would need to be considered individually, in context and in light of all the facts of the case. It cannot be assumed that every post that is “public” on social media is “public” in the Convention sense.

Incitement

The act of incitement is not defined in the Genocide Convention or subsequent instruments. In Kajelijeli, the ICTR provided some guidance by stating that in “common law jurisdictions, incitement to commit a crime is defined as encouraging or persuading another to commit the crime, including by use of threats or other forms of pressure” [para. 850]. The Chamber did not, however, endorse any particular definition of incitement. 

Scholars are divided on the question of whether incitement must be linked to an act of genocide in order to be considered a crime. Benesch, for instance, has argued that a statement should be considered incitement only where there is a “reasonable possibility that a particular speech will lead to genocide” (Benesch, 2008). This reading receives little support in scholarly comment (see for instance Wilson, 2017 and Scott Maravilla, 2008) or in case law. In Nahimana, the ICTR confirmed in para. 981 that there is no requirement that incitement be linked to an act of genocide for it to be punishable. The latter reading better reflects one of the key purposes of the Convention: to prevent genocide. It would be inconsistent with the object and purpose of the treaty for unsuccessful incitement to genocide to be treated as any more lawful than successful incitement.

This conclusion is important for the purposes of incitement on social media, where the effect of a statement may be remote from the statement itself. The global nature of social media communication means that a statement made in one country can have effects in another. In the example of Myanmar, where several hundred accounts were created to incite genocide against Rohingya people, there is no way of proving which particular Facebook post or account prompted a reader to commit violence. International criminal law does not require such proof; it is sufficient that the post could prompt such action.

Conclusion

This analysis shows that incitement conducted over social media is capable of being covered by the international criminal prohibition on incitement to genocide. However, the ambiguities in the law present a barrier to accountability. Social media platforms have clearly displaced other media as the new frontiers for the dissemination of hate. Considering the role that these platforms are already playing in spurring on mass atrocities, it is essential to understand where the deficiencies in the current legal framework lie and what must be remedied to hold the perpetrators to account.