Exploring the “world’s town square”: online protests and the scope of the right of peaceful assembly 

[María José Escobar is a law graduate of the University of Bucharest.]

By now, it is not controversial to state that the COVID-19 pandemic has challenged what the world took for granted regarding the exercise of our most fundamental freedoms. Not only did States adopt strict and unusual restrictions on mobility and expression, but we also witnessed how these rights had their boundaries tested as daily reality shifted to an online forum. 

In line with this trend, it must be noted that both experts and States have consistently affirmed that human rights, such as freedom of assembly, must be protected online as well as offline. 

As correct as this affirmation is regarding freedom of assembly, it is also clear that social media is not a mere instrument for facilitating the organization of physical gatherings, but a tool that allows us to congregate in ways that were simply not possible 30 years ago. Consequently, the question must be raised: is the current legal framework prepared to safeguard the digital exercise of this right? 

Freedom of assembly and online protests: a general sketch

The right of peaceful assembly is protected by numerous international and regional instruments, including the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, the American Convention on Human Rights, and the European Convention on Human Rights. 

In July 2020 the Human Rights Committee adopted General Comment No. 37 on the right of peaceful assembly. The document defined freedom of assembly as the right to “organize or take part in a gathering of persons for a purpose such as expressing oneself, conveying a position on a particular issue or exchanging ideas.” General Comment No. 37 was praised by activists partially because it unequivocally asserted that the protection granted to freedom of assembly extends to peaceful gatherings held and organized within an online forum. 

To be sure, online forms of exercising freedom of assembly, including those seeking to enable civic engagement, precede the General Comment. Movements such as the Arab Spring had long before discovered the effectiveness of social media, more concretely of Facebook groups, as a tool for facilitating and encouraging mobilization. 

However, recent events have proved that activists may feel tempted to choose less traditional forms of virtual gatherings and protests. For instance, in Russia, people decided to rally on Yandex.Maps – the Russian version of Google Maps – to complain about a constitutional reform that would allow President Putin to stay in power until 2036. A further, more widely known example is the 28 million Instagram users who used the #BlackOutTuesday hashtag to upload pictures of black squares as a symbol of protest against police brutality in the United States. 

These novel forms of collective expression may help illustrate the premise that the dynamics of freedom of assembly can fundamentally differ when they take place in an online forum. For instance, the fact that online protests are not limited in space, and sometimes not in time, makes the phenomenon difficult to delimit. Notably, it had been argued, before General Comment No. 37 was issued, that a gathering must be temporary to be considered an assembly (Nowak, 2005, p. 484). Other authors considered “bodily proximity” one of the key dimensions of both assemblies and demonstrations (Butler, 2015, p. 178). 

Given these discussions, concerns had been raised regarding the general applicability of rules dealing with freedom of assembly to every form of online gathering. These concerns mostly focus on how key elements related to this right, such as its relationship with freedom of expression and the permissibility of certain restrictions, will play out in the virtual arena. Consequently, it was suggested that priority be given to freedom of expression, rather than freedom of assembly, when analyzing certain forms of digital collective expression (see here, here, and here for deeper analysis of the discussion). 

General Comment No. 37 settled these debates and specifically asserted that article 21 of the ICCPR protects online assemblies. Still, even this highly authoritative clarification leaves questions unanswered. How can we establish participation when a gathering takes place in an online forum? Can shutting down a hashtag be considered a restriction on freedom of assembly? Is it possible to determine when a virtual assembly starts and ends? 

Limitations specific to online forums

The characteristics specific to virtual assemblies demand a closer look at the validity of certain limitations on the right of peaceful assembly that have traditionally been accepted as legitimate. Typically, any peaceful, non-violent gathering falls under the protection granted by freedom of assembly. As such, restrictions on these types of assemblies can only be imposed if they pursue certain legitimate aims and comply with the standards of legality, necessity, and proportionality. 

In this regard, it remains unclear if and how virtual demonstrations can become violent since, according to General Comment No. 37, violence implies “the use by participants of physical force against others that is likely to result in injury or death, or serious damage to property.” This distinction is relevant because only peaceful gatherings benefit from protection. Incidentally, a Recommendation of the Committee of Ministers of the Council of Europe warned that, even though individuals do have the right to protest online, legal consequences might be in order if the virtual protest “leads to blockages, the disruption of services and/or damage to the property of others”. 

On a similar note, it is important to clarify that, ordinarily, individual acts of violence are not enough for an assembly to be characterized as violent. For an assembly to fall outside the protection granted by freedom of assembly, General Comment No. 37 asserts that violence must be “widespread and serious.” The European Court of Human Rights, for its part, has ruled that for a gathering not to be considered peaceful, its “organizers and participants must have had violent intentions, incite violence or otherwise reject the foundations of a democratic society.” It is worth noting that, to this day and to the best of my knowledge, no online demonstration has been characterized as violent. 

Similarly, when it comes to permitted restrictions on freedom of assembly, the protection awarded to this right ensures that a peaceful protest must be able to reach its targeted audience. Given the amplifying effect of the internet, any restriction targeting the virtuality or virality of a protest would severely prevent the targeted audience – i.e., an indefinite group of internet or social media users – from witnessing the campaign. This raises the question: is there any room for “time and place restrictions” when analyzing online gatherings? 

Nevertheless, one thing is certain as far as restrictions are concerned: indiscriminate internet blockages, used to censor speech or crack down on gatherings or protests, have clearly been considered disproportionate. The Special Rapporteur on freedom of assembly has even asserted, in paragraph 52 of a report on the rights to peaceful assembly and association, that “network shutdowns are in clear violation of international law and cannot be justified in any circumstances.” This standard is particularly relevant, as the practice seems to have become more and more common, even among well-established democracies. 


The right to freedom of assembly has been described as an “essential component of democratic governance.” This remains true in a virtually connected world: the right to collective expression is still one of the pillars that help a tolerant and broad-minded society thrive. Whether they spread calls for social action, become a space to share common interests, or even a stage for civic protest, online forums must be protected as means to both enable and exercise freedom of expression and assembly. Social media and similar tools have value for human rights because of, and not despite, the new features and challenges they bring to the table. 

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with.


By Isha Ahlawat and Aakanksha Singh

[Isha Ahlawat and Aakanksha Singh are penultimate-year law students at Jindal Global Law School.]

Between August and November 2017, Myanmar’s government and military institutions orchestrated a crackdown on the country’s ethnic Rohingya minority in the northern state of Rakhine. What the UN Independent International Fact-Finding Mission (“FFM”) on Myanmar termed “clearance operations” began with troops and local mobs burning Rohingya villages and attacking civilians in response to the Arakan Rohingya Salvation Army’s attack on police posts, and culminated in the forced displacement of hundreds of thousands of refugees who fled to Bangladesh. Facebook in particular was chastised for playing a “determining role” in the ethnic cleansing. Through Facebook, politicians, religious leaders, and citizens weaponized decades of ethnic tensions to spread hate speech and propaganda against the Rohingyas. Facebook became fertile ground for Myanmar’s state institutions to build a narrative that the Rohingyas were a threat to the majority Bamar ethnic group and the Buddhist religion. 

Facebook may never be indicted for the role it played in the Rohingya crisis, as at the international level there persists a lack of corporate accountability for war crimes due to the limited scope of prosecution and legal responsibility. The Genocide Convention and international criminal law recognize states and natural persons as the sole subjects of legal responsibility when regulating incitement to genocide. Similarly, the UN Guiding Principles on Business and Human Rights follow a soft law approach and merely provide a roadmap for corporate conduct without seeking to impose strict sanctions in case of violations. Given the absence of corporate liability in international criminal law, there is an urgent need for the development of standard regulations and liabilities for social media platforms such as Facebook. 


During the years leading up to the expulsion of the Rohingyas from Myanmar, Facebook had come to dominate cyberspace in the country. The platform became so ubiquitous that, for many citizens, Facebook and the internet were synonymous. This meant that the social media giant assumed the role of a State-like entity in which the public was served the illusion of participative democracy, as any citizen could interact with, comment on, and share posts made by government officials. In an interesting paradox, the regulated became the regulator when, in December 2018, Facebook banned Myanmar’s commander-in-chief for hate speech. This action, which came far too late and amounted to far too little, revealed how Facebook’s governing rules mimic a constitution, while its community standards become the law of the land in a country with pervasive Facebook use. In a way, the moderators who keep the community standards in check assume the role of a state’s enforcement agencies, while the Facebook Oversight Board interprets the law, much like a Supreme Court. The imposition of a ban on the official’s speech by Facebook exemplifies the transformation of a corporation into a State-like watchman that regulates individual or collective actions to protect human rights. But who watches the watchman when it errs? 

Ultimately, Facebook is a company that works for profit maximization, user retention, and engagement. Its policies are not grounded in any one national legal order but are influenced by competing interests and the preferences of its top-level management. In several rounds of group discussions with Facebook’s employees, researchers found a lack of understanding of human rights norms. A formal framework or guidelines for content moderation decisions were absent, and some employees admitted to “making rules up”. Even though Facebook has stated that it looks for guidance to Article 19 of the International Covenant on Civil and Political Rights (ICCPR) when setting standards for restricting freedom of speech, its interpretation of Article 19 is conclusory and collapses the tests of legality, legitimacy, and necessity under Article 19(3), as well as proportionality, into an undefined “risk of harm”. Thus, the company continues its practice of ad hoc decision-making and wielding undefined discretion. Matters are further complicated because “risk of harm,” “newsworthiness,” “public interest,” and “international human rights standards” are not defined in Facebook’s community guidelines or press statements, so questionable content can easily slip under the radar or be ignored at will by the platform. Facebook’s lackadaisical approach has led many to believe that the company’s constitution of an Oversight Board, meant to address the deficit of transparency and legitimacy surrounding its current content moderation rules and processes, is merely eyewash. 

The list of problematic elements does not end here. A judge in Washington, D.C. recently criticized Facebook for not handing over information to investigators working toward prosecuting Myanmar for international crimes against the Rohingyas. Facebook withheld the information citing “privacy concerns” and tried to take refuge under U.S. laws which bar electronic communication services from disclosing user communications. 

In light of all this, several important questions arise. To what extent are corporations like Facebook and their executives responsible under international law for the mismanagement of large-scale atrocity crimes? Moreover, do Facebook’s claims of preserving freedom of speech legitimize its inaction? In Prosecutor v. Nahimana, Barayagwiza, & Ngeze before the International Criminal Tribunal for Rwanda, the founders of extremist media outlets were charged with direct and public incitement to commit genocide for encouraging the Hutu population to kill the Tutsis. However, the Appeals Chamber later reversed several aspects of the judgment by drawing a clear distinction between international crimes and hate speech, making it difficult to hold individuals who foment hatred accountable for the violence that stems from their actions. It has thus become an immense legal challenge to prosecute military leaders who perpetrate genocidal propaganda, much less censure executives of social media companies who allow such propaganda to flourish unabated on their platforms.

Existing mechanisms to address state liability in international criminal and humanitarian law were designed through a statist gaze and are structurally ill-equipped to address the numerous manifestations of corporate business operations. One may ask: could Facebook be held liable in a civil suit for “complicity in a genocide” or for “aiding and abetting” a crime against humanity? Besides international criminal law, Rohingya plaintiffs may bring a state tort law claim against Facebook for negligence. However, they may not succeed because, in most nations, providers of interactive computer services such as Facebook are granted broad immunity for content posted by third parties, since they are not considered the publishers of incriminating information. A prominent example is Section 230 of Title 47 of the US Code, which establishes that websites cannot be held liable for third-party content. In the analogue era, in cases such as Prosecutor v. William Samoei Ruto and Others (2012) or during the Nuremberg Trials, publishers and broadcasters of hate speech were placed on the same plane as the speakers themselves for incitement to genocide. In the internet age, however, social media platforms produce a structure in which the instigator and the broadcaster are considered legally separate entities. Regardless, while Facebook in Myanmar did not itself propagate hate speech, it did act as a third-party participant by encoding the message through its software, which ultimately made the speech public.

Caroline Kaeb of The Wharton School has argued that criminal law’s focus on imprisonment and deprivation of liberty has served to constrain the development of corporate criminal liability. To transform criminal law so that it addresses the legislative gap in corporate liability, Kaeb argues that courts can issue decrees for the confiscation of a company’s assets, the closure of the implicated corporate unit, or even corporate death penalties in the form of dissolution or monitorship. Another scholar, V.S. Khanna, has advocated a variant of civil liability to circumvent the higher standard of proof demanded in criminal proceedings. Although most of these solutions have been proposed for implementation under municipal laws, they are equally relevant for international criminal or human rights law. Moreover, while there is no evidence that criminal liability will be effective in constraining the actions of corporations, tortious liability may not provide sufficient incentive to push corporations to be socially responsible.


Facebook’s State-like conduct in regulating speech is similar to the restrictions enforced by nations, except that the former is not answerable to its users the way governments are answerable to their people and judicial systems. This lack of legal regulation widens the gap in corporate accountability in situations of mass atrocities, as the State-like role is internally assumed by the company without any liability. To fill this gap, international law must transform soft law-based corporate accountability into strict criminal conventions. Corporate executives can indeed be held liable under international criminal law, as evidenced by the prosecution, during the Nuremberg Trials, of directors of companies complicit in the Nazi regime. However, when the Rome Statute was being drafted, the possibility of prosecuting corporations was rejected because the practice was uncommon in participant States. The global dynamic has since shifted, and any new convention seeking to build a framework for corporate criminal liability must ensure that corporate governance and policy formation are structured to provide for transparency and accountability. 

While social media corporations may not themselves publish hateful, genocidal content, they construct their platforms in a manner that makes it easy for such material to proliferate and reach millions. The algorithms used by Facebook and other social media platforms are powerful tools that track user activity on the application and other websites to serve content and advertisements that encourage users to scroll, click, comment, share and shop. While the use of such algorithms leads to a more personalized user experience, they are heavily criticized for exacerbating social issues such as violence and racism by promoting misinformation for its shock value and high user engagement. It is thus important that community standards adhere to norms of international human rights law and that the development of artificial intelligence supporting efficient and non-arbitrary decision-making be prioritized. 

Views expressed in this article are the authors’ own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the authors may be affiliated with.

The Use of Digital Open-Source Information as Evidence in Human Rights Adjudication: A Reality-Check

[Ruwadzano Patience Makumbe is a doctoral researcher under the ERC funded project DISSECT: Evidence in International Human Rights Adjudication at the Human Rights Centre, Ghent University in Belgium.]

User-generated content, such as social media material posted on Facebook, Twitter and YouTube, has arguably become the source of information on developments in human rights violations most relied upon by society, including the media and civil society. Content posted by witnesses, victims, perpetrators and independent civil society groups provides relatively comprehensive accounts of events. For instance, Syrian opposition activists provided information on the conflict that has been a useful basis for various actors to act upon. It has been pointed out that more hours of video footage of the Syrian conflict now exist than the actual length in hours of the conflict itself. There is also surging interest among practitioners, lawyers and legal scholars in how digital open source information can be utilized as evidence that meets standards accepted in courts. The immense shift that digital open source information and investigations have triggered in international justice and accountability is receiving considerable scholarly and practitioner attention, particularly the complexities of using this type of information as evidence in international courts. This blog post contributes to this discussion by highlighting some of the key challenges inhibiting the harnessing of digital open source information as evidence and proposing solutions to address them. 

Globally, courts are progressively transitioning to the new tools proffered by digital technologies and adapting the manner in which law is practiced. Forensic evidence such as fingerprints or DNA has long been used as evidence, unlike digital open source evidence. With regard to digital open source evidence such as electronic images, video footage and satellite imagery, its use is still maturing, particularly so in human rights related cases. In recent years, there have been changes in the evidential system: digital open source evidence is being used in prosecutions related to human rights violations before the International Criminal Court (ICC) and in Europe, particularly in Germany, the Netherlands, Sweden and Finland, where universal jurisdiction prosecutions target (often) asylum seekers from Syria and Iraq identified as alleged perpetrators of international crimes. Despite the increase in the production of digital open source information, it is not yet commonly used as evidence in courts. However, this is set to shift, given the increasing quantities of digital information useful to cases being gathered by human rights practitioners, lawyers, victims and witnesses, and its widespread availability and accessibility.

The lack of clarity on the use of digital open source information presents challenges and questions for lawyers and human rights practitioners who may want to present it before courts as evidence. This is largely because it remains less tested in human rights courts and because of its inherently flawed characteristics. Potential hindrances to harnessing digital open source information as evidence are also experienced by open source investigators and judicial officers. This post highlights three key challenges.

First, digital information is inherently unstable. This poses a significant difficulty for ensuring that the material can be relied on for evidence, proof and truth in court proceedings. That digital information is generally susceptible to manipulation is well-documented here, with this danger also having been discussed specifically in relation to international criminal investigations here, and its vulnerabilities and mitigation strategies here. Concerns have been raised as to the appropriateness of relying on material that is impermanent, can be easily manipulated and is vulnerable to attack from misinformation and disinformation campaigns. This is even more worrying as tools such as deepfake technologies are constantly being improved, making it difficult to tell when a piece of digital information, such as a video, has been manipulated. The perceived credibility deficit from which digital information suffers thus creates a bias against it, one that only more efficient authentication and verification mechanisms can reverse.

Second, digital open source information may vanish: it is often, if not always, at risk of being taken down by third-party companies, particularly those that run online social media platforms like Facebook, Twitter and YouTube. Content is usually taken down because it incites or promotes violence. A video containing hate speech or an extrajudicial killing, for example, is generally considered restricted material by the third-party entities, who remove content from their platforms to protect their user community from harmful content. It is possible, however, that this material would have been very valuable to victim representatives and human rights lawyers to prove what happened and to corroborate or substantiate further the accounts of victims and witnesses.

Once deleted, the material is no longer available to the public. Although it is usually retained by the social media company, it is not archived within a system that allows relevant authorities or institutions to access it as part of online investigations. Facebook even tried to fight the application for discovery which The Gambia had filed in June 2020 with the U.S. District Court for the District of Columbia in order to compel it to provide information related to the personal Facebook accounts of Myanmar officials who had allegedly perpetrated human rights violations against the Rohingya. Facebook argued that complying with the request would violate the Stored Communications Act (SCA) (18 U.S.C. § 2702), which restricts entities that provide an electronic communication service to the public from sharing the information. However, the Court disagreed and granted the application. Holding that only permanently removed content may be divulged, it noted that failure to produce the requested information “would compound the tragedy that has befallen the Rohingya”, thereby recognising and prioritising the need for accountability for international human rights violations. Still, a sustainable measure has to be designed to address the uncertainty caused by takedowns while enhancing the existing responsibility of social media platforms to moderate the information posted on their platforms.

Third, the use of digital open-source information as evidence in human rights courts has not yet been tested, so it remains to be seen how these courts will approach this type of evidence. As digital open-source information becomes increasingly useful in human rights work, more cases will inevitably be built on digital open-source material as primary evidence. This applies in particular to regional human rights courts, where the use of digital open-source information as key evidence is still in its early stages. Additionally, unlike international and domestic criminal courts, human rights courts apply complex and varied evidentiary rules and are not distinctively designed to conduct fact-finding; they are thus not equipped with forensic specialists who may be useful in processing digital open-source evidence. Nevertheless, testing its use will allow for an understanding of key issues attached to digital open-source evidence, including proving its credibility, how metadata and source information should be presented to the court, and the sufficiency and detail required to ensure that the evidence is considered admissible and credible by the courts.

The challenges discussed are complex and require multi-stakeholder responses designed to remain effective over time as technology advances rapidly. Essentially, the inherent instability of digital information is a key issue that is difficult to address, particularly because digital open source information is often user-generated and thus at risk of manipulation. Online open source investigators must continually keep up with misinformation, disinformation campaigns and deepfake technologies by developing verification and authentication tools as well as archival techniques to preserve digital information. They also require financial resources adequate to keep pace with changing technological tools. Further, it may be difficult to regulate takedowns, but laws that address what social media companies do after takedowns will be useful in a digital landscape. Such laws can ensure that relevant content removed from social media platforms is archived in a manner that makes it shareable with authorities and credible institutions involved in accountability efforts. The lessons learnt in international and domestic criminal prosecutions are certainly useful for human rights courts and practice. However, these practices cannot simply be duplicated, as the systems in place in criminal courts differ from those in human rights courts, including the latter’s lack of specialised investigative and fact-finding mechanisms. Human rights courts and lawyers have the opportunity to design a working mechanism for the use of digital open source evidence, including identifying key evidentiary considerations that will be useful in assessing its admissibility, credibility and probative value.


Digital open source information will increasingly become important evidence in court proceedings. International justice institutions, including both domestic and international courts, should be prepared to handle the upsurge in cases built on digital open source evidence. Actors involved in justice processes have the opportunity to contribute to its use by ensuring that the approaches they design are responsive and adaptive to the changing landscape prompted by the digital age.

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with. 

Why We Need to Stop Distinguishing Current Autonomous Weapon Systems

By Nurbanu Hayır

In 2018, a group of experts working under the framework of the Heinrich Böll Foundation published a report on autonomy in weapon systems. As the report is a policy recommendation to the German government on the legality of autonomous weapon systems (AWS), it offers a definition of them. After defining AWS as “any weapon system with autonomy in the critical functions of target selection and engagement”, a definition inspired by the International Committee of the Red Cross, the report summarizes specific characteristics of some weapon systems that “keep them distinct” from fully autonomous weapon systems “that raise concerns” under international law. It enumerates these characteristics as (1) use of the weapon system in “highly structured and predictable environments”; (2) inability to “dynamically initiate a new targeting goal”; (3) constant human supervision; and (4) anti-material uses of the weapon system, in order to argue that such systems do not qualify as AWS.

This article claims that these distinctive characteristics muddle the debate on what AWS are and whether AWS are illegal. Weapon systems with autonomy in their critical functions, i.e., systems that can “select (i.e. search for or detect, identify, track) and attack (i.e. intercept, use force against, neutralise, damage or destroy) targets without human intervention”, should be defined as autonomous weapon systems irrespective of these characteristics, because the characteristics do not mean that a particular system lacks autonomy in its critical functions, but only that the use of the AWS might be legal under International Humanitarian Law (IHL).

The purpose of this article is not to argue that everything that qualifies as an AWS is illegal, but rather that everything that qualifies as an AWS should be regulated under international law. We should not allow AWS to escape regulation by distorting their definition. Considering that an essential part of the discussions held globally concerns whether AWS require the development of new norms under IHL, defining AWS as broadly as necessary is crucial to determining the scope of application of these new rules.

1. Use of the weapon system in highly structured and predictable environments

Use of a weapon system in highly structured and predictable environments may well decrease the likelihood of misidentifying targets. Nevertheless, such conditions should not be treated as grounds for declining to define these systems as AWS, but rather as elements to consider when deciding whether the use of a particular AWS is legal in casu.

Autonomy is the ability to operate independently of a human operator. It is a product of artificial intelligence, a field of study that has allowed machines to perform functions initially performed by humans. One way of achieving this is hand-coded programming, where coders define every behavior beforehand, which yields no predictability issues unless an exceptional malfunction occurs. This method, however, is increasingly being displaced by machine learning, a coding technique that gives machines more autonomy. Put very roughly, machine-learning algorithms, which iteratively adjust a function much as one solves for unknowns in an algebra class, allow the machine to make its own decisions after being trained, with human assistance, on data about its environment and the task it must perform. This has increased predictability issues, since not everything can be pre-programmed by the coder, and machine-learning models are not transparent enough for humans to untangle. The machine works through thousands of combinations when deciding, and humans eventually lose track of them due to the limits of their cognition. Thus, although humans set the goal for the machine, they cannot foresee the pathway by which the machine reaches a decision, which might lead to an unpredictable result.
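
The contrast between hand-coded rules and learned models can be made concrete with a short, purely illustrative sketch (all names and toy data below are invented for illustration; no real targeting software works this way). A hand-coded rule can be audited line by line, while even a tiny learned model compresses its decision into numeric weights that read like no human-written rule:

```python
def hand_coded_rule(size, speed):
    # Every branch was written by a human and can be inspected directly.
    return size > 5 and speed > 5

def train_perceptron(samples, labels, epochs=200, lr=0.1):
    # Learn a decision boundary from examples instead of encoding it by hand.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # weights change only on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy data encoding the same "both size and speed are high" pattern.
samples = [(6, 6), (7, 7), (1, 1), (2, 1), (1, 2), (6, 1), (1, 6)]
labels = [1, 1, 0, 0, 0, 0, 0]
w, b = train_perceptron(samples, labels)

# The learned parameters decide the outcome, but they are just numbers:
# nothing in them reads like the auditable rule above.
print("learned weights:", w, "bias:", b)
```

Running the sketch prints a weight vector and bias that determine every classification, yet nothing in those numbers explains why a given input was classed as a target; scaled up to models with millions of parameters, this is the opacity problem described above.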

This foreseeability issue is particularly important because systems are likely to misidentify targets given the limits of current technology. Machines’ perception of the environment remains radically different from that of humans: they rely on grids of light and dark squares (pixels) to recognize an object, whereas humans see and interpret objects in a cognitive way that machines cannot match. When such perception is used for target recognition in weapon systems, the result can be serious “misidentification” of targets. Target recognition is as important as target engagement in determining whether a weapon system qualifies as an AWS. This should be the case because, although a human may intervene in the target engagement phase, where target recognition is completely independent of humans, the decision to engage will rely heavily on the target recognized by the autonomous function. Autonomy in the critical function of selecting targets should therefore be sufficient to define the system as an AWS.

Thus, the use of the weapon system in highly structured and predictable environments should not prevent it from being defined as an AWS.

2. Inability to “dynamically initiate a new targeting goal”

Initiation of a new targeting goal based on an objective introduced to a system is a good example of near-General AI, which could perform practically all the functions traditionally performed through human cognitive abilities. Today’s AI is Narrow AI, which can perform only some of the functions a human can. Illustrating the near-General-AI conception, the United Kingdom defines AWS as weapon systems “capable of understanding higher-level intent and direction.” However, the ability to select and attack targets, which is possible through Narrow AI, is sufficient to raise questions of compliance with the IHL principles of distinction, proportionality, and precaution, without any need for a General-AI system. For instance, a weapon system programmed with image and speed details to autonomously recognize and engage a target raises questions under the principle of distinction, as it is uncertain whether it can properly distinguish between lawful and unlawful targets. Although such a system is not capable of understanding the goal of the operator’s command, it nevertheless raises concerns under IHL.

Thus, as above, the limits of today’s technology should not prevent a system from being defined as an AWS. Although a system may be incapable of “dynamically initiating a new targeting goal,” it may still have the autonomy to recognize or engage a target, which is likely to cause issues under IHL independently of the high-level complexity required by some States.

3. Constant human supervision

Although constant human supervision may rule out autonomy entirely, the mere fact that a system allows for human supervision does not render it non-autonomous per se. Many weapon systems with autonomous capability are able to operate in autonomous mode and sometimes do. More importantly, human supervision may be exercised over functions unrelated to targeting. A good example is active protection systems (APS), which are designed to protect armored vehicles at a speed that surpasses the human capability to detect targets. Though human supervision is possible, the aim behind APS is to engage targets faster than humans can, so they usually operate autonomously, without human supervision, in target engagement. Human supervision over the targeting functions is thus deliberately limited, and such a weapon system should still be defined as an AWS.

Further, it is unclear how much reliance the human operator will place in the weapon system. Concerns about automation bias likewise support the view that human supervision, unless it rules out the system’s ability to operate independently, cannot be a ground for disregarding autonomy in current weapon systems’ functions.

4. Anti-material uses of the weapon

IHL protects civilians and civilian objects through the principle of distinction, the principle of proportionality, and the principle of precaution, applicable to both the design and the use of weapon systems during armed conflicts. If a weapon system is constrained by design so that it cannot be directed at humans, there will arguably be no issues concerning the protection of civilians during armed conflict. Yet civilian objects (e.g., an operational hospital) might still be threatened. Further, civilian presence is independent of the characteristics of the weapon system. Thus, the target type cannot be a ground to claim that a weapon system is not autonomous; at most, it makes the use of that weapon more likely to comply with IHL.

Further, some weapon systems are not constrained by design to anti-material uses; their deployment area simply happens to be scarcely populated by humans. This is the case for the US Phalanx Close-In Weapon System (Phalanx), deployed in naval areas with almost no civilian presence. Phalanx’s targeting software can select and attack targets on its own. The fact that it does so in naval areas does not mean that it lacks autonomy in its critical functions; it means only that its use in autonomous mode is likely to comply with IHL rules. Even so, there have been instances where Phalanx misidentified its targets and fired on friendly forces.

Hence, the fact that a system is used as an anti-material weapon is sometimes unrelated to its design; and even where it is a matter of design, it does not mean that consequences violating IHL are impossible.


The Heinrich Böll Foundation report’s summary of the characteristics said to distinguish current weapon systems from AWS demonstrates a phenomenon in the debate on the definition of AWS that should be eliminated: the definition of an AWS must be kept independent of the criteria that are likely to render its use legal under the norms of IHL on the use of such weapons. The use of the weapon system in “highly structured and predictable environments”, the inability of the system to “dynamically initiate a new targeting goal”, “constant human supervision” over the weapon system, and “anti-material uses” of the system are merely factors that increase the likelihood that an AWS will comply with IHL. They do not mean that a particular system lacks autonomy in its critical target selection and attack functions. This is particularly important to clarify because once a system is excluded from the definition of AWS, it can no longer be included in the scope of application of the emerging rules on AWS.


The New Cyber Plague Demands A “King’s Ransom”: Who is to Blame?

By Başak Köksal

[Başak Köksal is a senior law student at Istanbul University Faculty of Law, in Turkey. She is interested in International Cyberspace Law and Human Rights on the Internet. She is a member of Istanbul Center of International Law (ICIL) and International Law Students Association (ILSA).]


Ransomware attacks are cyber-related malicious activities that encrypt the files or systems on a target device until the ransom demanded in exchange for decryption is paid. While the files are encrypted, the aim is to render the files and systems that the targeted party needs to carry out its services inoperable until the ransom payment, generally in a cryptocurrency such as Bitcoin, is made. Several ransomware operations have already been carried out against institutions providing health care services, governmental entities, and global companies. These may lead to dire consequences, including loss of life and physical damage, that were previously considered possible only via kinetic attacks. One of the most striking recent examples was mounted against Kaseya, which provides IT infrastructure to many transnational corporations and small businesses. Because Kaseya interacts with clients across much of the world, the attack constituted a supply chain attack through which thousands of businesses were compromised.

Considering all the damage they have caused, one important question may, or should, arise: who is the main actor behind, and therefore responsible for, these malicious activities? Is it the state from which the attack is launched, or a non-state actor performing independently of the state where it operates? This post deals with the circumstances under which states can be held responsible for a given ransomware attack. It will first lay down the conditions for attributing ransomware campaigns to states, and will then discuss which norms of international law such attacks may violate.


As endorsed by the 2015 Report of the United Nations Group of Governmental Experts (UN GGE), by states’ positions, and by Tallinn Manual 2.0, which reflects the teachings of the most highly qualified publicists in the cyber field, the Articles on Responsibility of States for Internationally Wrongful Acts (ARSIWA) also apply to the activities of states in cyberspace. Therefore, states can be held responsible under international law for their internationally wrongful cyber acts.

Pursuant to Article 2 of ARSIWA, which sets out the elements of an internationally wrongful act, the act in question must be attributable to a state and must violate international obligations imposed on that state. Attribution of cyber activities to a state consists of three main phases: first, identification of the devices from which the cyber activities concerned are launched; second, identification of the persons or groups of persons behind them; and third, establishment of a sufficient link between the state and the entity concerned (Delerue, 2020, p. 55). In that regard, Articles 4-11 of ARSIWA, which state the circumstances in which conduct is attributable to a state, must be taken into account when assessing the involvement of states in cyberattacks. Among them, Article 8 deserves a closer look in the cyber context, because states tend to act through private groups of specialized hackers to shield themselves from accusations (Collier, 2017, p. 25). According to that article, the conduct of a person or group of persons shall be considered an act of a state if the conduct is carried out upon the instructions or directions, or under the control, of that state.

In terms of cyber activities, there is controversy regarding the degree of control required to attribute the conduct of private entities to states. On the one hand, some states (e.g., Brazil and Norway) have reaffirmed the “effective control” test, which was introduced by the International Court of Justice in the Nicaragua Case (Nicaragua v. United States, 1986, p. 65, para. 115) and afterwards endorsed in the Bosnian Genocide Case (Bosnia and Herzegovina v. Serbia and Montenegro, 2007, p. 209, para. 401). According to that test, the conduct of private persons or groups can be imputed to a state provided that the state is able to determine the execution of the actions concerned and to terminate them whenever it wants (Tallinn Manual 2.0, p. 96, para. 6).

On the other hand, some experts maintain that this strict threshold is hardly attainable and requires considerable effort for the injured state to establish conclusively, and that a lower threshold is therefore needed (e.g., the “virtual control” test or the “control and capabilities” test) (Stockburger, 2017, p. 7; Margulies, 2013, p. 19).

Once the attack is conclusively attributed to a state, it must be considered whether the attack violates international obligations owed by the allegedly responsible state to the injured state. The following section analyzes the norms of international law that might be breached by state-sponsored ransomware attacks, illustrated with real-life examples and different scenarios.


In the event that the actions of the hackers are attributable to a state, they may constitute a breach of the prohibition of the use of force, the principle of non-intervention, or the duty to respect the sovereignty of other states.

a) Use of Force

According to Roscini (2014), for cyber operations to fall within the scope of Article 2(4) of the UN Charter on the prohibition of the use of force, the operation in question must amount to a “threat” or “use of force,” and the threat or use of force must be exerted in the conduct of “international relations” (p. 44). That force must reach the level of an “armed attack” (The Charter of the United Nations: A Commentary, Vol. I, p. 208, para. 16). For a cyber-attack to qualify as an armed attack, the effects-based approach requires that it “cause or [be] reasonably likely to cause the damaging consequences normally produced by kinetic weapons” (Roscini, 2014, p. 47). Under this approach, ransomware attacks that have detrimental impacts on people’s lives (e.g., the Springhill Medical Center ransomware attack) or on national critical infrastructures (e.g., the SamSam ransomware incidents) are likely to amount to a use of force.

Moreover, Schmitt (2012) has suggested criteria for identifying cyber operations constituting armed attacks: severity, immediacy, directness, invasiveness, measurability of effects, military character, state involvement, and presumptive legality (pp. 314-315). On these criteria, ransomware attacks causing loss of life or injuries, or critical damage to state property, may satisfy the severity requirement. In terms of the directness and immediacy requirements, however, the assertion is harder to justify: the initial act of a ransomware operation, the encryption of data, does not inevitably and directly cause the severe adverse consequences mentioned above, and those results do not occur immediately after the attack. Generally speaking, there is a length of time between the encryption of data and files and the resulting outcomes. Thus, pursuant to Schmitt’s criteria, despite their grave consequences, the prohibition of the use of force is unlikely to be invoked against ransomware operations.

b) Intervention

Alternatively, a ransomware attack may be deemed a violation of the non-intervention rule, regarded as “part and parcel of customary international law” in the Nicaragua Case (Nicaragua v. United States, 1986, p. 106, para. 202): not as serious a breach as a use of force, but still a grave one. A ransomware attack may be considered a prohibited intervention provided that it interferes with the inherent governmental functions of the target state and is coercive, depriving the target state of the ability to determine its matters freely (Nicaragua v. United States, 1986, p. 108, para. 205). The first requirement would be met if the attack is intended to render inoperable data or services protected and offered by the injured state. To illustrate, a Texas municipality was hit by a ransomware attack in 2019 that rendered vital records, including birth and death certificates, inaccessible. As for the second condition, the attack could be coercive if it compels the target state to act, or to change its attitude, with respect to a matter that falls within its internal affairs (see the positions of the Netherlands and Germany). The injured state is compelled to follow one of two predetermined paths: pay the ransom and decrypt the files it needs for the exercise of its functions, or deal with the problem on its own and risk delays in providing public services. In this way, attacks demanding a huge ransom shackle the target state into a path in which it passively carries out a policy regarding its internal affairs.

c) Sovereignty Principle

The violation of sovereignty in cyberspace also deserves attention. The international community is divided (Heller, 2021, pp. 1444-1445) as to whether sovereignty is a standalone rule in cyberspace, with explicit non-acceptance coming from the UK and the USA. Assuming that sovereignty is not just a principle but a rule that may be violated by states’ cyber actions, the ransomware attack in question must reach one of the thresholds stipulated in Tallinn Manual 2.0, namely physical damage, loss of functionality, or interference with inherently governmental functions of the target state.

Physical damage is deemed to exist where a ransomware attack causes loss of life or bodily harm (e.g., the University Hospital Düsseldorf attack) or destroys sophisticated systems or data (e.g., the NotPetya attack). If the attack does not cause material harm but necessitates costly and/or arduous repairs or replacements of the physical components of affected devices (e.g., the National Ink attack), the loss-of-functionality threshold would be crossed (Tallinn Manual 2.0, p. 21, para. 13). Interference with or usurpation of inherently governmental functions, in violation of the sovereignty principle, may occur when data or services necessary for the exercise of governmental functions are encrypted and rendered unusable.


All in all, acts not only in the physical world but also in the virtual world may bring about physically damaging consequences. Ransomware attacks pose a serious threat to states, global companies, and businesses alike. They may target critical infrastructures of states, such as health, justice, and administration, and the databases of companies, including those containing personal data or information necessary for the exercise of their functions. It is imperative to identify the origin of these attacks and hold the perpetrators responsible in accordance with the law. Even where a ransomware attack is attributable to a certain state, it is unlikely to be considered a use of force, because its nature prevents the immediacy and directness criteria from being met. As for the non-intervention rule, it may be challenging to prove that the attack is coercive. The most probable way to hold the responsible state internationally accountable is to invoke a violation of sovereignty. Last but not least, the strict criteria regarding both attribution and the use of force developed for kinetic attacks should be duly softened for cyber operations; otherwise, states may escape international responsibility and move freely in this gray zone.


Platforming Violence? Incitements to Genocide on Social Media Platforms: a Legal Analysis

Claudia Hyde

[Claudia Hyde holds an LLM (Hons) in Public International Law from the London School of Economics and is a legal researcher.]


As jurisdictions such as the United Kingdom and the European Union grapple with the challenge of regulating tech giants, the use of social media platforms during mass atrocities has been brought to the fore by the protracted legal battle between Meta/Facebook and The Gambia.

In November 2019, The Gambia instituted proceedings against Myanmar at the International Court of Justice (ICJ) alleging breaches of the 1948 Genocide Convention committed against Myanmar’s Rohingya minority. At the height of the violence, Facebook emerged as a powerful tool for intensifying and spreading conflict: as many as 700 individuals were employed by the Tatmadaw (Myanmar military) to create fake profiles on the platform and flood them with propaganda and incitement to violence. In support of its case before the ICJ, in June 2020 The Gambia filed an application for discovery with the US District Court for the District of Columbia requesting that Facebook disclose information about the now-deleted accounts.

The legal issues raised by the dispute have received extensive comment elsewhere. What has received relatively little attention, however, is the extent to which those Facebook posts constitute breaches of international law in their own right, as violations of the prohibition on incitement to genocide. This post will briefly survey the case law on incitement emanating from the International Criminal Tribunal for Rwanda (ICTR), the legal principles it established, and their application to speech on social media.

The prohibition on incitement

Article 3(c) of the Genocide Convention prohibits “direct and public incitement to genocide.” Each of these elements, “direct,” “public,” and “incitement,” is an essential component of the crime and will be discussed in turn.


Incitement to genocide must be “direct” to be punishable, in the sense that it must be understood as a call to commit genocide (Timmermann, 2006). The ICTR’s Akayesu judgment provides the most detailed description of the “direct” requirement in the case law, stating in para. 557 that the incitement must “assume a direct form and specifically provoke another to engage in a criminal act.” Speech that is “mere vague or indirect suggestion” will not constitute incitement.

Nonetheless, the ICTR has interpreted the “direct” requirement expansively, focusing on the meaning of the message in its context. Its case law contains clear-cut cases of defendants calling on others, unambiguously and directly, to commit genocide. In Bikindi, for instance, the accused’s comments included “Hutus should hunt and search for the Tutsis and kill them” [para. 125]. Less direct and euphemistic language has also been held to constitute direct incitement: in Kambanda, the accused was convicted for stating “you refuse to give your blood to your country and the dogs drink it for nothing” [para. 39]. The meaning of the message in its historical, cultural, and linguistic contexts determines whether the incitement is direct.

In the context of social media, the sociolinguistic nuances particular to a platform define the relevant context. Language is codified on social media platforms by users who develop shared reference points and adopt similar sentence structures, terminology, and syntax, such as acronyms and “memes.” This context may be considered in determining whether a direct call to genocide has been made. For instance, users of 4chan’s /pol/ board employed triple parentheses as a coded means of referring to Jews, stylised as (((Them))) (Tuters and Hagen, 2020). The flexibility within the meaning of “direct” in “direct and public incitement” would allow such coded language and platform-specific conventions to be considered, and is sufficiently flexible to incorporate new forms of media and new ways of presenting information. This is significant in the Burmese context, where Rohingya people have been vilified as “terrorists” and “traitors” by the government to legitimize violence. These accusations pervade the portrayals of Rohingya people in the impugned Facebook posts.

More controversially, ICTR jurisprudence suggests that sharing inciting posts may be prohibited as well. In Niyitegeka, the accused was found guilty of incitement after commending a member of a militia at a public meeting for his “good work” [para. 142]. Similarly, the accused in Ruggiu was found guilty of incitement after referring to genocidaires as “valiant combatants” [para. 44]. This suggests that other speech acts in which the author endorses or glorifies acts of genocide will constitute incitement, regardless of whether the statement calls on others to partake in genocide. The logical conclusion of the decisions in Ruggiu and Niyitegeka is that the act of sharing, such as “retweeting,” a post that incites genocide could be sufficiently direct. Again, this will be context specific. A retweet of another’s incitement to genocide accompanied by criticism of the message will not be understood by the audience as adopting or endorsing that incitement. However, retweeting another’s incitement to genocide with an affirmative message could be viewed in the same way as the speeches in Ruggiu and Niyitegeka and be considered sufficiently direct. Thus, social media users need not be the authors of a post that incites genocide in order to commit the crime of incitement; sharing or “retweeting” would suffice.


Various factors have been considered throughout the ICTR’s case law in determining the “publicness” of a statement. Most significant for this analysis is the medium of communication employed. Certain forms of communicating incitement to genocide, including through print media and radio, have been considered “public” by their very nature. For instance, in the Media case, the circulation of Kangura and the average number of listeners of Radio Télévision Libre des Mille Collines (RTLM), a radio station that played a significant role in spurring on the violence, were not considered in any depth by the Chamber. The fact that mass media were employed determined the publicness of the remarks. Similarly, the broadcast over radio waves of violent and patriotic songs written by the defendant in Bikindi was considered public in and of itself. Relevant here is the ILC’s 1996 commentary, which argued that “public incitement is characterized by a call for criminal action…by such means as the mass media.”

This would lead to the conclusion that speech or statements made through social media, being a mass communication platform, are necessarily “public.” However, the publicness of social media is not so easily discerned: invite-only or “elite” social media platforms such as Raya, for instance, cannot be accessed by a “mass” audience in the sense that their audience is select or limited. Similarly, if a Twitter user with 20 followers were to incite genocide on their account, it would be difficult to view this as “mass” communication. Considering that social media users’ followings and reach vary widely, it is difficult to gauge when a user’s posts will be “public” and when they will not.

The fake profiles created on the orders of Burmese generals often had thousands of followers, meaning that any posts published on those profiles would likely be considered “public” for the purposes of the Convention. But the application of these legal principles to social media posts is currently untested, and each post would need to be considered individually in context, taking account of all the facts of the case. It cannot be assumed that every post that is “public” on social media is “public” in the Convention sense.


The act of incitement is not defined in the Genocide Convention or subsequent instruments. In Kajelijeli, the ICTR provided some guidance by stating that in “common law jurisdictions, incitement to commit a crime is defined as encouraging or persuading another to commit the crime, including by use of threats or other forms of pressure” [para. 850]. The Chamber did not, however, endorse any particular definition of incitement. 

Scholars are divided on whether incitement must be linked to an act of genocide in order to be considered a crime. Benesch, for instance, has argued that a statement should be considered incitement only where there is a “reasonable possibility that a particular speech will lead to genocide” (Benesch, 2008). This reading receives little support in scholarly comment (see, for instance, Wilson, 2017 and Scott Maravilla, 2008) or in the case law. In Nahimana, the ICTR confirmed in para. 981 that there is no requirement that incitement be linked to an act of genocide for it to be punishable. Such a reading better reflects one of the key purposes of the Convention: to prevent genocide from happening. It would be inconsistent with the object and purpose of the treaty for unsuccessful incitement to genocide to be considered any more lawful than successful incitement.

This conclusion is important for incitement on social media, where the effect of a statement may be remote from the statement itself. The global nature of social media communication means that a statement in one country can have effects in another. In the example of Myanmar, where several hundred accounts were created to incite genocide against Rohingya people, there is no way of proving which particular Facebook post or account prompted a reader to commit violence. International criminal law, however, does not require proof that a post directly caused the reader to commit violence; it is sufficient that the post could prompt such action.


From this analysis, it is clear that incitement communicated over social media is capable of being covered by the international criminal prohibition on incitement to genocide. However, ambiguities in the law present a barrier to accountability. Social media platforms have clearly displaced other media as the new frontier for the dissemination of hate. Considering the role these platforms are already playing in spurring on mass atrocities, it is essential to understand where the deficiencies in the current legal framework lie and what must be remedied to hold the perpetrators to account.


Weapon Systems with Autonomous Functions and the Martens Clause: Is the use of these weapons in line with the principles of humanity and the dictates of public conscience?

By Clea Strydom

[Clea Strydom completed her B.A. Law and LL.B at Stellenbosch University, South Africa, before writing her LL.M dissertation on the International Humanitarian Law implications of weapon systems with autonomous functions through the University of Johannesburg, South Africa.]


States are increasingly implementing artificial intelligence (AI) to pursue autonomy in weapon systems for armed conflict for various reasons, including faster reaction times, faster data collection and processing, and the ability to use robots instead of risking human combatants’ lives. There are, however, concerns that weapon systems with autonomous functions cannot be used in compliance with International Humanitarian Law (IHL), that it is unethical for machines to lethally target humans, and that their use could lead to an accountability gap. There has therefore been an ongoing debate about whether to ban the development of these weapon systems. The mere fact that these systems have autonomy is not the focus of the ongoing legal debate; rather, it is the delegation of critical functions, i.e., acquiring, tracking, selecting, and attacking targets, to weapon systems that is of concern. The ICRC has correctly identified that “ethics, humanity and the dictates of the public conscience are at the heart of the debate about the acceptability of autonomous weapon systems.” 

Weapon Systems with Autonomous Functions

Autonomy in weapon systems should not be seen as a mere development of conventional weapons; rather, it is a paradigm shift in weapons technology that could change warfare drastically. Autonomy does not denote a specific new weapon but a shift in the level of human control over critical functions, and thus a change in how warfare is conducted. While the most widely used terms are Lethal Autonomous Weapon Systems (LAWS) and Autonomous Weapon Systems (AWS), ascribing autonomy to the whole system is problematic. Autonomy is not a type of technology but a characteristic of technology: it relates to certain functions rather than attaching to the object itself. For this reason, Andrew Williams suggests referring to “autonomous functioning in a system” in general, or “systems with autonomous functions” when referring to a specific platform or system. The author has therefore adopted the term weapon systems with autonomous functions (WSAF), as it indicates that the whole machine is not autonomous, but rather that it can perform certain functions with varying degrees of human intervention, depending on factors such as the system’s design or intelligence, the external environmental conditions in which the system must operate, the nature and complexity of the mission, and policy and legal regulations. It must be kept in mind that while several States are pursuing autonomy in weapon systems, weapon systems that can perform critical functions autonomously remain a thing of the future. The debate, including over the advantages and disadvantages of autonomy in weapon systems, is therefore still speculative at this stage.

The Martens Clause

The Martens Clause made its first appearance in the 1899 Hague Convention II and has since been included in Article 1(2) of Additional Protocol I to the Geneva Conventions: 

“In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience”.

The International Court of Justice in its Legality of the Threat or Use of Nuclear Weapons Advisory Opinion confirmed the principle contained in the Martens Clause as customary IHL and held that it has “proved to be an effective means of addressing rapid evolution of military technology”. Concerning WSAF, the crux is whether delegating life and death decisions to a robot would be in line with the dictates of public conscience and the principles of humanity.

Professor Michel Veuthey has highlighted the importance of public conscience in IHL, identifying that it can trigger the codification of IHL principles, serve as an impetus for the implementation and enforcement of IHL, and provide a safeguard for situations not provided for or considered in the law. On the other side of the argument, Michael Schmitt argues that the Martens Clause only applies in the absence of applicable law in the Geneva Conventions and Additional Protocols or in international agreements such as treaties, and that since 1899, when the Martens Clause first appeared, the law relating to weapons has developed to such an extent that it covers all existing and future weapons; as a result, the role of the Martens Clause has been diminished. On this view, it is unlikely that any weapon would be found to contravene the Martens Clause if it has been found to comply with IHL and applicable treaties. However, Robin Geiss points out that the IHL principles applicable to weapons are framed in a human-centric manner and might not be able to deal sufficiently with autonomy in weapon systems; the Martens Clause could therefore be used to create new law or act as a safety net, as Veuthey suggests.

Even if it is accepted that a weapon could be banned based on the Martens Clause, several questions with no clear answers arise: first, how does one determine what the public conscience is, and secondly, which public? It is unlikely that the global public shares a common ‘conscience’; the public conscience and principles of humanity are neither timeless nor universal. Several researchers have conducted surveys to try to determine public opinion on the weapon systems in question. Political scientist Michael Horowitz found that public opinion depends on context: in the first round of questions, 48% of participants in his survey were opposed to “autonomous weapons”, but once he placed the use of the weapons in context and highlighted their benefits, opposition dropped to 27%. In a survey by the American roboticist and robo-ethicist Ronald Arkin, participants acknowledged that “autonomous weapon systems” have a role to play, but the majority felt that they should not be allowed to use force. Ipsos, a global market research and public opinion company, has conducted various surveys on views of “killer robots” for Human Rights Watch and the Campaign to Stop Killer Robots (which have called for a ban on “weapon systems that can perform critical functions autonomously”). Interestingly, the latest survey, conducted between November 2020 and January 2021 across 28 countries, shows a correlation between opposition and the age of respondents: opposition averaged 54% among those under 35 years of age and 69% among those aged 50-74. This may reflect several factors, including that the younger generation is more accepting of technology and that the older population is more likely to have had first-hand experience of the horrors of war. 

HRW believes that States should consider these views when reviewing “autonomous weapons”. Such perspectives do not create binding rules but may influence treaties and decisions to deploy the weapons. It is important to keep in mind that opinions change over time: while 50 years ago we could not have imagined unmanned remote-controlled systems becoming an integral part of military arsenals as they are today, we have largely come to accept them. Surveys need to be seen in the context of their time, the way the questions are framed, and, in this case, advancements in technology. As autonomy in weapon systems develops and the technology becomes more advanced, views on it will change. Armin Krishnan notes, in his book Killer Robots: Legality and Ethicality of Autonomous Weapons, that with “social conditioning” views on WSAF will evolve. 

Regarding the principles of humanity, there is concern about the importance of human agency in life and death decisions. Much anxiety exists about losing human control over weapon systems and war in general, which raises questions beyond compliance with the law, including whether the deployment of such weapon systems is in line with our values. Delegating decisions about life and death may dehumanize armed conflict even further. The concern is that allowing weapon systems to lethally target humans means those targeted are not treated as unique human beings, which is an affront to human dignity; the late Professor Heyns referred to this as “death by algorithm”. It has also been argued that the anthropocentric formulation of IHL principles implicitly requires human judgment over decisions regarding force.


To date, the Martens Clause has never been used to ban a weapon, and it must be kept in mind that at this stage the debate remains largely speculative. Weapon systems that can perform critical functions autonomously, however, offer numerous advantages, and it is unlikely that States will refrain from developing and deploying weapons that would give them the upper hand on the basis of personally held views. What the Martens Clause does is remind us that, in deciding whether and how to design, develop, and use WSAF, we must do so in a way that safeguards our values instead of rendering them unsustainable. 

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with.

The Jus ad Bellum Spatialis and the potential impact of Soft Law in regulating the Use of Force against Space Objects

By Sören Sommer

[Sören Sommer (LL.M.) is a PhD law student at the University of Glasgow.]

The recent Russian anti-satellite missile test has abruptly brought the risk of potential future conflicts in outer space back onto the international space and security agenda. As has been repeated time and time again, outer space is becoming increasingly competitive, congested, contested, and even weaponised (Schrogl et al., 2015, pp. 521-716; Steer, 2017, p. 9). Due to the ever-increasing reliance of modern societies and modern militaries on space assets, sophisticated means and methods of space warfare to use force against space objects are being rapidly developed. Potential future conflicts over space resources, and geopolitical conflicts on Earth that might spill over into space, contribute to the fragility of the continued peaceful and cooperative use of outer space and further increase the risk that space objects will be targeted in future conflicts. Such targeting would entail grave humanitarian consequences due to the potential outage of essential space-based services (Thomas, 2011; Sommer, 2019) and environmental risks due to the creation of harmful space debris.

Fortunately, actual hostilities have not been conducted in outer space to date. This also means, however, that there is no sufficient State practice on the matter so far, but rather much political and legal uncertainty. The jus ad bellum spatialis (the international regime governing inter-State armed force in outer space) is far from conclusively developed (manual projects like MILAMOS and Woomera are still ongoing) and remains insufficient to appropriately prevent and regulate conflicts in outer space and to ensure its continuing sustainability, peacefulness, and security. Various hard law initiatives, such as the longstanding efforts of the UN Conference on Disarmament (UNCD) to conclude a Treaty on the Prevention of an Arms Race in Outer Space (PAROS) and the ultimately unsuccessful drafting and negotiation of the Draft Treaty on the Prevention of the Placement of Weapons in Outer Space, the Threat or Use of Force against Space Objects (PPWT), which aimed at prohibiting the use of force against (another State’s) space objects, have been (and will likely remain) unsuccessful due to lacking or diverging State interests when it comes to regulating, and especially restricting, military uses of the “ultimate high ground” (Sheehan, 2015, pp. 12-13; Mutschler, 2015, pp. 43-48).

While States have failed to formulate a clear prohibition on the use of force in outer space through hard law despite the increasing risks, I would like to point out in this post how, beyond the UN Charter and general international space law (which in my view already prohibit the use of force against other States’ space objects), soft law in particular might play a crucial role in ensuring the continued sustainability, peacefulness, and security of outer space by contributing to the formation of an international customary norm prohibiting such uses of force, thus filling a dangerous legal gap in the jus ad bellum spatialis. In my opinion, such a customary norm has already started to form through soft law, primarily expressed in a series of consistent and widely supported United Nations General Assembly (UNGA) resolutions on the matter, which can be seen as evidence of existing opinio juris. Furthermore, the absence of open uses of force against space objects in inter-State conflicts can be viewed as concurring State practice (for now).

First, I would like to briefly revisit how existing hard law – the UN Charter and international space law in particular – already, though in my view insufficiently, prohibits the use of force in outer space.

The rules of the UN Charter are generally considered to apply in outer space, but they are very general themselves, including their well-known, sometimes more but often less force-restrictive interpretations. Art. 2 (4) of the UN Charter is in principle sufficiently broad to also cover (illegal) uses of force in outer space, despite the fact that there can be no conventional cross-border use of force in the res communis environment of outer space, where targeted objects are principally outside the territory of any State. This is because Art. 2 (4) of the UN Charter not only prohibits the use of force “against the territorial integrity” of another State, which is conventionally understood as prohibiting cross-border force (Hakimi & Cogan, 2016, p. 257), but also broadly prohibits the use of force “in any other manner inconsistent with the Purposes of the United Nations”. These “Purposes of the United Nations” are laid down in Art. 1 of the UN Charter and are above all “to maintain international peace and security” (Art. 1 (1) UN Charter), which is irreconcilable with using force in outer space. On that basis, it has been convincingly argued that the UN Charter’s use of force prohibition also extends to uses of force in outer space (Goh, 2004, p. 263; Cheng, 1997, pp. 70-72; Sommer, 2019, pp. 22-35).

The outer space use of force prohibition is in my view also implicitly reflected in international space law. The international framework regulating outer space activities consists of five multilateral space treaties at the core (most of which today enjoy wide ratification), which were concluded under the auspices of the UN, and nowadays also of various soft law agreements, such as UNGA resolutions, transparency and confidence-building measures, and policy guidelines (Freeland, 2015, p. 91). Many of the space treaties’ provisions have customary international law status today (Lee, 2003, p. 93; Schmitt, 2017, p. 270; in fact, all provisions referenced in this post enjoy such status) and generally, custom and soft law instruments are of particular importance for regulating outer space activities (Cheng, 1997, pp. 127-150; Tronchetti, 2011, pp. 619-633).

The rules of international space law focus almost exclusively on the peaceful uses of outer space and remain largely silent on the issue of the use of force. On the one hand, this entails a lack of normative clarity regarding the use of force in outer space; on the other, it is, in my opinion, indicative of how the international community imagines its shared use of outer space. Importantly, international space law is also linked to the general jus ad bellum regime.

Particularly, the central and widely ratified international agreement on the use of outer space, the Outer Space Treaty (OST), states that outer space use shall be in accordance with international law and the UN Charter (Art. III OST). The jus cogens use of force prohibition as found in the UN Charter as well as in customary international law thus also applies to outer space use.

Furthermore, the so-called “launching States” (Arts. VII OST, I (a) Registration Convention (REG)) retain sovereignty over their space objects under international law by exercising “jurisdiction and control” according to Arts. VIII OST, II REG (Schmidt-Tedd & Mick, 2017, pp. 520-524). This is similar to maritime law and the concept of the “flag state”, which shall also “exercise jurisdiction and control […] over ships flying its flag” (Art. 94 United Nations Convention on the Law of the Sea). In the Nicaragua case, the ICJ held that the “principle of respect for State sovereignty […] is […] closely linked with the [principle] of the prohibition of the use of force”. (para. 212) Since space objects remain under the sovereign control of their respective launching States it is my opinion that the use of force against another State’s space object therefore qualifies as a prohibited use of force (Sommer, 2019, p. 34).

Apart from the general jus ad bellum rules, international space law is clear that outer space is first and foremost to be used for peaceful purposes (Finch, 1968, p. 365), despite its past and present military use (Goh, 2004, p. 269). Paras. 2 and 4 of the OST preamble first mention the principle of the peaceful purposes of outer space use, which is considered customary law (Blount, 2012, p. 2) and appears in almost all UN documents relating to outer space. While the peaceful purposes principle is often seen as indicative of how the international community imagines its shared use of outer space, the fact remains that the international space treaties are largely silent regarding non-peaceful uses of outer space; only Art. IV OST prohibits the placement of WMDs in space.

The lack of sufficient normative clarity under the UN Charter regime and general international space law regarding the legality of using force in outer space carries the risk that States abuse the existing legal gaps or act in ways that others consider unlawful. This could also alter contemporary, force-restrictive interpretations of the jus ad bellum spatialis through contrary State practice. Since it is inconceivable at the moment that the major global space powers will be willing or able to agree on any new space treaty in the foreseeable future due to lacking or diverging State interests, especially with regard to restricting the use of force in outer space or preventing its weaponization (as the unsuccessful drafting and negotiation of the aforementioned PPWT show), looking to means beside treaty law to restrict the use of force in outer space seems appropriate given the potentially highly adverse effects of space warfare.

A peculiarity of international space law is not only the particular significance of its customary law, which for the past decades has filled and continues to fill the gaps of lacking State support for new UN space treaties and compensates for their inadequacies (Tronchetti, 2011, pp. 619-633), but also that such customary space law is frequently formed through soft law like UNGA resolutions. In its 1996 Nuclear Weapons Advisory Opinion, the International Court of Justice (ICJ) generally stressed the potential relevance of soft law (UNGA resolutions in particular) for the development of customary law:

“The Court notes that General Assembly resolutions, even if they are not binding, may sometimes have normative value. They can, in certain circumstances, provide evidence important for establishing the existence of a rule or the emergence of an opinio juris. To establish whether this is true of a given General Assembly resolution, it is necessary to look at its content and the conditions for its adoption; it is also necessary to see whether an opinio juris exists as to its normative character. Or a series of resolutions may show the gradual evolution of the opinio juris required for the establishment of a new rule.” (para. 70)

One year later, Cheng (Cheng, 1997, pp. 127-150) famously noted the possibility of “instant custom” in international space law with regard to UNGA resolutions and thus underlined the importance of soft law for the development of international space law, which continues to be relevant today.

Since 1959, the UNGA has adopted 64 resolutions on International Co-operation in the Peaceful Uses of Outer Space and 26 resolutions on the Prevention of an Arms Race in Outer Space, almost always with overwhelming support. These resolutions can be considered as authoritative interpretations of the UN Charter in the outer space-context and contribute to the formation of customary international law regarding the prohibition of using force in outer space (Goh, 2004, p. 260). Moreover, the 2017 UNGA Resolution on Further Practical Measures for the Prevention of an Arms Race in Outer Space explicitly encourages all States to actively contribute to the “prevention of […] the use of force against space objects.”

In my opinion, the series of consistent and widely supported UNGA resolutions on the matter can be seen as evidence of emerging opinio juris through soft law prohibiting the use of force in outer space in line with the ICJ’s criteria in its aforementioned Nuclear Weapons Advisory Opinion. Furthermore, the absence of open uses of force against space objects (although clearly feasible from a technical standpoint, as several successful anti-satellite weapons tests in the past have shown) in cases of inter-State conflicts can be regarded as concurring State practice on the matter.

As Cheng has shown, customary international law can develop rapidly from UNGA resolutions. Soft law will thus continue to play a crucial role in regulating space activities in the future. The emergence of an international custom prohibiting the use of force against space objects might provide an exit from the international community’s deadlock with regard to sufficiently regulating the use of force in outer space, and could therefore be vital in ensuring the continuing sustainability, peacefulness, and security of outer space and its beneficial use for mankind.

Views expressed in this article are the author’s own and are not representative of the official views of Jus Cogens Blog or any other institute or organization that the author may be affiliated with.