The outbreak of COVID-19 poses a near-unparalleled challenge for humanitarian agencies as the disease impacts fragile and conflict states across the world, driving a surge in humanitarian need and demand for support.
The changing contextual dynamics also include COVID-19’s impact on humanitarian security risk management. Broader security risk management, which also relies on community acceptance, must adapt its frameworks to account for this issue.
The United Nations Educational, Scientific and Cultural Organisation (UNESCO) recently published two reports on what it terms the ‘disinfodemic’. According to these reports, disinformation at the international, state and community levels in certain fragile and conflict states has meant that humanitarian workers are increasingly at risk because they are perceived as carriers of COVID-19. Moreover, both state and non-state actors have developed increasingly sophisticated disinformation tactics that can significantly affect the strategic and operational risks facing humanitarian organisations.
These developments require humanitarian agencies to adapt their security risk management techniques, including more closely monitoring reputational and media sentiment towards their operations. These efforts should also include training humanitarian staff who are responding to the COVID-19 pandemic to ensure they understand how disinformation can impact operations and their personal security.
The new disinformation threat environment
There remains some confusion about the difference between mis- and dis-information. The two can broadly be defined as:
Misinformation: the sharing of inaccurate information, without the intent to deceive.
Disinformation: the deliberate sharing of inaccurate information in order to spread a false narrative.
Disinformation, the focus of this article, poses an array of risks to humanitarian organisations, including reputational, security and cyber risks. The likelihood of it impacting humanitarian organisations has increased significantly in the context of COVID-19.
State and non-state actors have adopted a range of disinformation tactics. These include brigading (coordinated pile-ons against a target), sock puppets (fake online personas), deep fakes (synthetic audio or video), botnets (networks of automated accounts) and content farms (operations that mass-produce misleading content).
The UN has labelled the current pandemic an ‘infodemic’ that has spread across the world. In the United Kingdom, for example, 5G phone masts have been attacked because of false claims spread online linking them to the spread of COVID-19. Meanwhile, countries such as Russia, China and Iran have all sought to increase their influence, power and diplomatic strength through influence operations that draw on a range of new techniques and hybrid power. In a paper released in June 2020, the European Union accused China and Russia of waging targeted coronavirus disinformation campaigns across the world.
Non-state actors, including terrorist groups and criminal organisations, also use disinformation as a technique to increase their power and strengthen their relationships with populations under their control.
The history of the disinformation threat environment
COVID-19 is the latest example of how disinformation impacts humanitarian agencies. Historically, outbreaks of disease have often been blamed on foreign countries or migrants. In the 1980s, the Soviet Union spread disinformation attributing the spread of AIDS in Africa to the United States (US).
Contemporary incidents include the 2009 decision by the Bashir government in Sudan to expel 13 international non-governmental organisations (NGOs) after falsely accusing them of supporting rebels in Darfur.
More recently, a Russian government-backed disinformation operation aimed at the White Helmets (Syria Civil Defence), a volunteer humanitarian organisation, has sought to link the organisation to Al-Qaeda. The campaign was supported by an online network of activists, conspiracy theorists and trolls, who spread falsified information through techniques such as ‘rapid retweeting’.
COVID-related disinformation campaigns
COVID-19 presents significant challenges for NGOs due to the politicisation of both the pandemic and government responses to it, creating a complex environment to navigate.
The disinformation threat environment is further complicated by a number of populist leaders, such as the Brazilian President, Jair Bolsonaro, who has frequently issued statements that underplay the risk posed by COVID-19.
The unfortunate reality is that when powerful figures have sought to contradict the Brazilian President’s messages, they have often become targets of disinformation campaigns. For instance, when the country’s Health Minister contradicted the president on containing COVID-19, a viral campaign followed, which claimed the Minister was corrupt. Similarly, when the Congressional President publicly questioned Bolsonaro’s handling of COVID-19, a social media campaign labelled him as an enemy of the country.
Other variations on this disinformation tactic include accusations that COVID-19 is a Western disease and that the staff of international humanitarian organisations will spread the virus.
Human rights and advocacy organisations that speak out against a figure such as the Brazilian president therefore risk having similar disinformation campaigns directed against them.
Disinformation at the community level
Disinformation also poses a risk to NGO staff at a local or community level. For instance, disinformation (alongside widespread misinformation) became a notable issue during the West Africa Ebola outbreak in early 2014, resulting in an increased risk to humanitarian responders.
Humanitarian agencies responding to the outbreak were increasingly targeted amid suspicions among armed groups and community members that their work presented a threat to locals’ health.
Meanwhile, Russian internet trolls spread claims that the US was to blame for bringing Ebola to the Democratic Republic of Congo (DRC) in 2018. This, among other factors, contributed to various armed groups conducting attacks against Ebola treatment centres in DRC.
In the current COVID-19 pandemic, certain states have used their control over media outlets to spread disinformation. For example, alleged Russian and Chinese state actors have planted stories in Sub-Saharan Africa and Latin America asserting that COVID-19 is a biological weapon and that vaccinations are already available.
As elaborated below, there are clear security implications of such disinformation for NGOs, particularly those with political figures on their leadership team or board, and those that receive funding from the US government and other governments.
Organisational risks
These changes in the humanitarian threat environment translate to an evolving set of risks to organisations’ security, reputations and their ability to operate.
The risks detailed below are not meant to be exhaustive; rather, they are based on an interpretation of the current threat environment and of the capabilities and intentions of the groups, states and individuals that employ disinformation.
Perhaps the highest-impact risk is damage to an organisation’s brand and reputation in the event that it is targeted by a disinformation campaign. Such a scenario would most likely arise after the organisation reports or documents an incident that embarrasses a state power such as Russia, China or the US, all of which have the capability to conduct a sustained information operation.
State tactics could entail unearthing controversial issues from the organisation’s or its leadership’s past, or connecting the organisation to a controversial non-state actor in one of its areas of operation.
The campaign against the White Helmets described earlier provides the clearest example of this. However, comparisons can also be drawn with journalists reporting from conflict zones and elsewhere, who have had information and images from their private lives circulated online in an attempt to call their impartiality into question.
The consequences of such an incident for an NGO could include significant financial damage or the forced closure of its operations in a particular region or state.
Individual risks
As mentioned earlier, one of the most significant risks to individual aid workers is the deliberate targeting of those responding to COVID-19 who are perceived as spreading the disease.
These attacks could be conducted by a multitude of actors and could range from a single attack by an individual with a firearm or a knife targeting a healthcare worker, to a sophisticated or prolonged attack by a local armed group that opposes the humanitarian agency’s presence or activities. In Syria, for example, organisations viewed as a threat by certain states, such as NGOs working with displaced persons, could be subjected to targeted airstrikes.
Another risk that organisations should be prepared for is the potential for disinformation to inspire far-right groups to conduct targeted attacks against high-profile spokespersons. Given the considerable rise of far-right groups, many of which are motivated by a xenophobic and nationalistic agenda, there is a growing risk of targeted attacks against prominent individuals and organisations. In 2016, for example, a politically motivated attacker killed a British parliamentarian, Jo Cox, who had been publicly involved in improving access to the asylum system.
Implications for security risk management
Given the contextual changes, adjustments should be made to ensure that disinformation is given due attention within humanitarian agencies’ security risk management processes.
It, of course, remains critical to prioritise community relations and acceptance within security risk management, but such an approach must go beyond a reliance on neutrality and ‘good intentions’. Instead, organisations should actively share verified facts about their activities with community, religious and non-state actor groups as part of a communications strategy that incorporates engagement with non-state actors into regular, ongoing communications.
Examples of such programmes include the UN Office for the Coordination of Humanitarian Affairs’ (OCHA) use of widespread text messaging to share information with relevant stakeholders, along with the UN’s ‘Verified’ initiative, which aims to provide accurate information to counter dis- and misinformation about COVID-19.
Another important strategy is to build relationships with local media organisations and radio outlets to distribute truthful messaging about humanitarian activities in the area to counter negative perceptions and disinformation.
Finally, organisations might also consider systematically monitoring negative reporting that could indicate a risk to the security of their staff and operations. Such an approach may also include incorporating disinformation into risk assessments alongside more typical threats such as terrorism, crime and kidnapping.
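As an illustration only, the sketch below shows one way such monitoring could start: a short Python script that scans news feeds for mentions of an organisation alongside hostile narratives. The feed URLs, organisation name and keyword list are hypothetical placeholders rather than recommended sources or terms, and any automated flagging of this kind would need to be paired with human review and local-language expertise.

```python
# A minimal, illustrative sketch of keyword-based media monitoring.
# The feed URLs, organisation name and keywords below are placeholders.
import feedparser  # widely used RSS/Atom parser (pip install feedparser)

ORGANISATION = "example aid agency"           # hypothetical organisation name
RISK_KEYWORDS = [                             # hypothetical hostile narratives
    "spreading covid", "brought the virus",
    "foreign agents", "spies", "bioweapon",
]
FEEDS = [
    "https://example.org/local-news/rss",     # placeholder feed URL
    "https://example.org/regional-radio/rss", # placeholder feed URL
]

def flag_items(feed_url):
    """Return (title, link) pairs that mention the organisation alongside a risk keyword."""
    flagged = []
    for entry in feedparser.parse(feed_url).entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
        if ORGANISATION in text and any(keyword in text for keyword in RISK_KEYWORDS):
            flagged.append((entry.get("title", ""), entry.get("link", "")))
    return flagged

if __name__ == "__main__":
    for url in FEEDS:
        for title, link in flag_items(url):
            # In practice, flagged items would feed into the security risk
            # assessment and incident reporting process, not simply be printed.
            print(f"Possible hostile narrative: {title} ({link})")
```

Keyword matching of this sort is deliberately crude but transparent; it could be supplemented over time with sentiment analysis or social media listening tools as an organisation’s monitoring capacity matures.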
Conclusion
Disinformation is a relatively new issue which, due to the prevalence of social media and the difficulty of regulating social media companies, will only become more significant in the future. Humanitarian agencies should evolve with this emerging trend, ensuring that they understand how the threat impacts their risk management frameworks and how it can be mitigated within the context of COVID-19 responses.
About the author
James Blake has worked in security and humanitarian risk management for over a decade, which includes helping organisations establish new programmes in conflict zones. He writes for a range of publications including Foreign Policy, The New Humanitarian, Devex and Jane’s Intelligence Review and has recently published several articles on disinformation.
Related:
Communications Technology and Humanitarian Delivery: Challenges and opportunities for security risk management
Twenty-one authors have contributed to this publication, analysing how communications technology is changing the operational environment, the ways in which communications technology is creating new opportunities for humanitarian agencies to respond to emergencies, and the impact that new programmes have on how we manage security.
Digital Security of LGBTQI Aid Workers: Awareness and Response
This article discusses the digital risks that LGBTQI aid workers may face while working in areas that are hostile to people who identify or are perceived as LGBTQI, and ways in which aid workers and non-governmental organisations can prepare and respond to these risks.
Cyber-Warfare and Humanitarian Space
In this article, Daniel Gilman explores the impact that cyber-warfare can have on humanitarian space.