
3.2. Cybersecurity and human rights: Can we have both?


🎯What is the interplay between human rights and security? 

🎯Does more security imply less privacy and freedom of expression for internet users?

🎯How can the pressing challenges to internet rights be addressed?

Cybersecurity is usually discussed in the context of national or international systems, rather than as a right of an individual. Within that framing, the discussion of human rights and security often follows a binary logic – we can have either human rights OR security. However, we might ask whether it is possible to balance the two. 

The area of human rights online consists of numerous issues including privacy and data protection, and freedom of expression, to name but a few. It may appear that we must weigh these rights against security measures such as surveillance or control of encryption; yet, there are certain measures that can enhance both security and rights, such as digital literacy, smart use, and digital hygiene, as illustrated in Figure 12. 

Figure 12. Some measures to support both security and human rights

Digital literacy is more than ICT skills; it implies a critical assessment of the impact of digital technology on personal development and society. In addition to ICT competences, it incorporates three pillars: smart use, nurturing values, and an understanding of the digital age (see the illustration below). In this context, smart use refers to the skills needed for the responsible and safe use of the internet; nurturing values implies critical thinking and an awareness of personal rights and responsibilities in the digital context; and understanding relates to grasping the societal and economic implications of the digital age (e.g. how emerging technology is changing the labour market).

Figure 13. Digital literacy pillars
Source: DiploFoundation

Contribute and engage

To learn more about cyber capacity building, education and developing skills, refer to the Knowledge Module 4.

Case study: Cybersecurity and Cybercrime Laws in the SADC Region: Implications on Human Rights

The report, published by MISA Zimbabwe in partnership with the Konrad Adenauer Foundation, discusses enacted and proposed cybersecurity and cybercrime laws in the SADC region and their implications for the right to privacy, freedom of expression, and media freedom. The publication also makes a comparative analysis of these laws against international conventions, standards, and norms.


3.2.1. Privacy and security


Privacy and security online were not a matter of serious international discussion before the commercialisation of the internet. However, as the internet and its structural components evolved, perceptions of these concepts have also changed. In the past, privacy was mainly discussed in relation to the protection of personal data from disclosure and trade by – and to – third parties, such as Facebook, Google, and advertising agencies. Terrorist attacks in the USA, the UK, France, and Belgium (among others) have contributed to a shift in the discourse towards the protection of personal data from (mis)use by governments under the justification of national security. 

In attempting to define privacy within national laws, the overarching emphasis by governments has been on the handling of one’s personal data and, therefore, the principles used to protect this information. Even the definition of personal data, however, varies from country to country. For instance, there are debates about whether an IP address – which provides an indispensable trace (sometimes called an electronic footprint) for e-forensics and information for cybersecurity protection measures – should be considered personal data, since it may, in some circumstances, provide a link to the real identity of the person using it. The GDPR, for instance, is clear that ‘online identifiers’ – such as IP addresses – can be considered personal data. Moreover, the Court of Justice of the EU has ruled that even dynamic IP addresses can constitute personal data.
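The practical consequence of treating IP addresses as personal data is that systems which log them for security purposes often anonymise them before storage. A minimal illustrative sketch in Python, using the standard `ipaddress` module (the prefix lengths shown are a common anonymisation convention, not a legal requirement):

```python
import ipaddress

def anonymise_ip(ip_string: str) -> str:
    """Zero out the host portion of an IP address before logging.

    IPv4: keep the first 24 bits (drop the last octet).
    IPv6: keep the first 48 bits.
    These prefix lengths are an illustrative convention,
    not a standard mandated by the GDPR.
    """
    ip = ipaddress.ip_address(ip_string)
    prefix = 24 if ip.version == 4 else 48
    # strict=False allows host bits to be set; the network
    # address is the original IP with those bits zeroed out.
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymise_ip("203.0.113.42"))  # -> 203.0.113.0
print(anonymise_ip("2001:db8::1"))   # -> 2001:db8::
```

The anonymised address still supports coarse analysis (e.g. per-network abuse rates) while weakening the link to an individual user.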

Government databases contain an increasing volume of citizens’ records. In addition, in certain cases, security policies require the corporate sector – including the internet industry, which holds enormous amounts of personal data about online customers – to share this data with security services and law enforcement agencies. The UK surveillance law – dubbed ‘the snoopers’ charter’ by some internet human rights activists – even requires internet service providers (ISPs) to store user browsing histories for one year and make them available upon court request, and requires companies to decrypt user data on demand. It also allows security services to hack into users’ computers and devices – although journalists and some other entities remain protected from this scrutiny. Civil liberties groups advocate for strong mechanisms at the national and global levels to ensure the protection of personal data and prevent misuse by security services and law enforcement agencies.

There is, however, a more direct cybersecurity dimension related to personal data. With increasing links between government agencies and corporate sector databases containing personal data, there is a higher risk of criminals gaining access to these databases, which represent a very lucrative asset for them. Therefore, governments are increasingly obliged to create national regulations for data protection, not only due to their obligations (and pressure) to respect human rights, but also in response to the need to additionally secure their own services and systems.

In the digital age, the flow of personal data, and to some extent the processing of such data, is inevitable. Therefore, current policy debates revolve around the details of what information is considered private, who should be able to collect and disseminate it, when and in what manner, what the acceptable duration of data retention is, and finally, what the minimum standards for data processing and management to ensure security should be.

The EU, for example, adopted the Data Retention Directive (Directive 2006/24/EC) in 2006, requiring telecommunications service providers and operators to retain certain categories of personal data for a period of six months to two years. This requirement was strongly opposed by privacy activists, and the directive was declared invalid by the Court of Justice of the EU in 2014. The GDPR provides a comprehensive framework for EU countries and also defines relations with entities processing EU personal data that are located in third countries. In addition, it defines accountability, requires ‘privacy by design’, mandates data breach notifications, and regulates international transfers, among other topics.

Another angle of the privacy and cybersecurity debate is related to the rise and rapid expansion of social media and user-generated content. To gain access or membership to social media sites, users are required to provide personal information. In essence, the user ‘pays’ for online services by providing personal data; the data has become the ultimate currency on the net. To make things worse, every piece of information uploaded is usually copied a number of times and stored by caching servers around the world, making it hard, if not impossible, to remove pieces of ourselves from the internet.

Reflection point – Explaining the reasons for limited privacy protection in Africa

The authors of the article titled ‘Privacy and Security Concerns Associated with Mobile Money Applications in Africa’ aim to elucidate the reasons behind the limited privacy protection on the African continent. Below is an excerpt from the article.

There are a number of reasons to explain the limits of privacy protection in Africa. First, a strong communitarian strain exists throughout much of Africa. This mindset deemphasises the rights of individuals in favour of those of the community. In such a context, the privacy of individuals is given little consideration. Second, traditional economies with limited electronic communication and commerce have less need for individual privacy protection as there are few means to collect, use, and exploit sensitive information. Until very recently, the vast majority of Africans did not engage in data compiling transactions. For both of the reasons above, there are few established legal protections in African nations.

What are your thoughts regarding the above-mentioned attempts to elucidate reasons for limited privacy protection in Africa? Do they still hold merit given that the article was published almost a decade ago?

Is there any other reason specific to the African continent?

Case study: Privacy and personal data protection in Africa: A rights-based survey of legislation in eight countries

Part of a project by the African Declaration on Internet Rights and Freedom (AfDec) Coalition, the survey offers an in-depth analysis of the status of privacy and data protection legislation in Ethiopia, Kenya, Namibia, Nigeria, South Africa, Tanzania, Togo, and Uganda. The authors looked into the countries’ regional and global commitments to privacy and the impact of their legislative environment on the right to privacy. They also conducted an analysis of the data protection laws, identified main actors and institutions, assessed data protection practices in internet country code top-level domain name (ccTLD) registration, and examined the status of each country’s data protection authority.

The research shows a discrepancy between the formal adoption of the relevant legislative framework and practice. Of the eight countries presented in the report, only four have enacted comprehensive data protection privacy acts: Kenya, South Africa, Togo, and Uganda. This does not, however, indicate that a specific country is committed to upholding privacy rights. For instance, Togo enacted a data protection law in 2019, and is one of the few countries in Africa to have ratified the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention). However, recent reports show the widespread unlawful surveillance of journalists and human rights activists in the country. 

In the majority of the countries covered by the research, comprehensive privacy and data protection frameworks have yet to be tested, as some of the laws are new (passed in 2019) or still in draft form.

Each country’s report provides a set of recommendations to different stakeholders in the respective countries. A key role for civil society identified in the reports’ recommendations is to monitor the implementation of privacy laws and other related legislation. At the local and national level, part of this monitoring involves documenting and reporting breaches of data protection and privacy legislation. At the regional and international levels, there is a need for the formation of coalitions by civil society groups in order to strengthen monitoring capacity as well as for their active participation at forums such as the Human Rights Council’s Universal Periodic Review when countries are due to report.

Contribute and engage: MOOC – Right to privacy in the digital age in Africa

Hosted by the Centre for Human Rights, University of Pretoria, with the support of Google, the course addresses the key elements of the right to privacy and data protection in the digital age in Africa. The course developers aimed to tackle the challenges that African countries face in enacting adequate legislation to regulate the collection, control, and processing of personal data. The course was implemented in 2021 and there are no indications so far that it will take place in 2022.


3.2.2. Encryption and security – striking the right balance


Traditionally, only governments had the power and the know-how to develop and deploy strong encryption for their military and diplomatic communications. Now, with user-friendly packages such as Pretty Good Privacy (PGP), encryption has become accessible to any internet user, including criminals and terrorists. The increasing use of encryption raises the challenge of finding the right balance between the rights of internet users to private communication, and the need for governments to monitor some types of communication of relevance to national security (i.e., to help curb potential criminal and terrorist activity).
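The point that strong encryption is now within any user’s reach can be illustrated in a few lines of code. The toy below implements a one-time pad (XOR with a random key) using only Python’s standard library; it is a deliberately simplified teaching sketch, not PGP and not production cryptography:

```python
import secrets

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR the message with a random key of equal length.

    Unbreakable if the key is truly random, used once, and kept
    secret -- but impractical at scale, which is why real tools
    (PGP, TLS) combine key exchange with block or stream ciphers.
    """
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: applying the same key recovers the message.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"meet at noon"
ciphertext, key = encrypt(message)
assert decrypt(ciphertext, key) == message
print(ciphertext.hex())  # random-looking bytes without the key
```

Without the key, the ciphertext reveals nothing about the message content, which is exactly the property that frustrates lawful interception and drives the policy debate described below.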

Governments and security services in many countries are trying to introduce limits to the strength of encryption algorithms within mainstream products and services, and to insert backdoors that would allow government agencies to access encrypted data if necessary. The 2017 Joint Communiqué of the political heads of the intelligence services of the ‘Five Eyes’ alliance – Canada, New Zealand, Australia, the UK, and the USA – warned that encryption can severely undermine public safety, as it prevents lawful access to the content of communications for investigation of crime and terrorism. Civil society and human rights communities have voiced strong concerns about these developments, fuelled by the Snowden revelations, suggesting that limits to encryption and backdoors could be used for political censorship and disproportionate (mass) surveillance. In addition, such measures could compromise the identity of political activists, bloggers, and journalists in authoritarian states, thereby risking their individual security. Some researchers, in fact, claim that the internet is not going ‘dark’ (i.e., encrypted), and that law enforcement and security agencies still have sufficient digital trails to follow, without the need to weaken encryption systems.

The debate on an international regulatory framework for encryption centres on the interplay among complex security and human rights issues. From a security standpoint, governments have reiterated the need to access encrypted data to prevent crime and ensure public safety. In this context, there have been revelations of backdoors in encrypted software and products, and pressure on the internet and tech companies to allow governments access to data. Moreover, in some countries, such as the USA, the UK, and Russia, there have been efforts to introduce specific legislation requiring tech companies to allow or assist law enforcement agencies to access encrypted data and/or devices (under more or less defined circumstances).

From a human rights standpoint, the right to privacy and other human rights should be protected, and encryption tools – including pervasive encryption (across the whole network) – are essential to protect privacy as well as personal security. The need to protect encryption and anonymity was highlighted, for example, in the 2015 report of the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, David Kaye, on the use of encryption and anonymity to exercise the rights to freedom of opinion and expression in the digital age. Kaye’s 2017 report addresses the roles played by internet and telecommunications access providers. He reviews state obligations to protect and promote freedom of expression online, then evaluates the digital access industry’s roles, concluding with a set of principles that guide the private sector’s steps to respect human rights.

The case of the FBI versus Apple, in which a US federal court ordered Apple to assist the FBI in unlocking the iPhone of one of the shooters who murdered 14 people in San Bernardino in December 2015, serves as an example of the many perplexing aspects of this debate. The case triggered two opposing views. On the one hand, Apple, backed by other internet companies and human rights activists, argued that complying with the request would create a dangerous precedent and would seriously undermine the privacy and security of all of its clients, as illustrated in Figure 14. On the other hand, authorities argued that the case did not involve backdoors or the decryption of devices, but rather a one-time solution, necessary in this particular case. They also accused Apple of prioritising its business interests over a terrorism investigation.

Figure 14. Pandora’s iPhone. Source: Carlson, 2016

The case raised a number of questions that remain open. 

  • Under what circumstances are authorities entitled to request tech companies to downgrade the security of their devices?
  • What safeguards are, or should be, in place?
  • Should authorities be allowed to influence the way companies design their products?
  • At the same time, to what extent should companies protect the privacy of their users?
  • Should privacy be protected at any cost?

3.2.3. Freedom of expression and objectionable content


The principle of freedom of expression is based on internationally recognised standards such as the Universal Declaration of Human Rights, where Article 19 includes the right ‘to seek, receive, and impart information and ideas through any media and regardless of frontiers’. 

Although freedom of expression is a recognised right, the issue of objectionable content is used in some cases to restrict this right. This raises the question of how ‘objectionable’ is defined. Different cultural and political traditions lead to a variety of classifications across the globe. What is legal or acceptable in one place may be illegal or unacceptable elsewhere. Therefore, we need to observe and analyse the issues case by case. 

Child sexual abuse material (CSAM) is classified as objectionable and illegal content by global consensus and is thus prohibited by international law (ius cogens). Nevertheless, many countries do not have regulations in place that extend the coverage of conventional laws to the distribution of, or access to, CSAM via the internet, thereby leaving the online space out of reach of judicial authorities. Even when national legislation does cover the online space, however, prosecution might not be possible without the harmonisation of regulations at the international level and enhanced cooperation among various institutions. For example, online distribution, possession, and access might be carried out by people residing outside the country of impact, and thus out of reach of the national jurisdiction.

Violence, racism, and hate speech are types of content that are ‘sensitive for particular countries, regions, or ethnic groups due to their particular religious and cultural values’. The line between objectionable content and freedom of speech is often blurred, and political nuances vary from state to state. Nonetheless, for many countries across the world, the implications of such content, or of the online activities of certain groups or individuals, for national security serve as a pretext to clamp down on freedom of expression.

Reflection point

An ongoing debate questions who should be responsible for online content, especially concerning hate and extremist speech. Should internet giants like Facebook monitor content? Facebook CEO Mark Zuckerberg argues that they should respect freedom of speech, especially by politicians. Reactions are mixed, in line with individual positions on hate speech and freedom of speech.

The UN Special Rapporteur on Freedom of Opinion and Expression, David Kaye, wrote in his 2019 Annual Report to the UN General Assembly on online hate speech:

The prevalence of online hate poses challenges to everyone, first and foremost the marginalised individuals who are its principal targets … Unfortunately, States and companies are failing to prevent ‘hate speech’ from becoming the next ‘fake news’, an ambiguous and politicised term subject to governmental abuse and company discretion.

also:

… new laws that impose liability on companies are failing basic standards, increasing the power of those same private actors over public norms, and risk undermining free expression and public accountability…

Does this have any implications for security or cybersecurity?

Case study

The African Digital Rights Network (ADRN) has produced the first study on the opening and closing of online civic space in ten African countries (Cameroon, Egypt, Ethiopia, Kenya, Nigeria, South Africa, Sudan, Uganda, Zambia, and Zimbabwe). The study identified 65 examples of activists using digital tools to open up civic space online, but almost twice as many examples (115) of governments using tech tools and tactics to close down online space. There are individual reports for each country.

The main pattern identified in all ten countries is that each new generation of digital technology used by activists to exercise freedom of expression is met by harsh government measures developed precisely to curb those freedoms and deny citizens their digital rights. 

For instance, SMS activism was the first widespread digital tool used to create virtual civic space, and there were numerous examples across Africa where text messaging was used to voice political dissent, advocate for marginalised and vulnerable groups, or mobilise the masses. However, this was followed by a range of repressive measures, such as mandatory SIM card registration, message surveillance, bans on bulk SMS, and arrests for political speech in SMS messages. A similar fate befell blogging, social media, and even privacy and anonymisation tools.

The paper presents the key challenges to digital rights on the continent, such as regressive online content regulation, network disruptions, and surveillance, and proposes numerous actions and measures by state and non-state actors to address them.

Contribute and engage

Enrol in Diplo’s Introduction to internet governance online course! The ten-week course introduces IG policy and covers the main issues – including human rights, development, infrastructure and standardisation, cybersecurity, legal, economic, and sociocultural issues, and IG processes and actors – in dedicated modules.
