
The benefits of “Bugs in our Pockets”

6 minute read • Tudor N. Pana • 1 November 2021


Recently, we saw a number of cybersecurity experts condemning both Apple and the European Union for their latest proposals on the monitoring of phones. In their quest to root out illicit content, both entities have decided to strengthen the surveillance of digital activity. However, expert opinion is deeply divided, with many seeing such monitoring as risky and as potentially licensing intrusive government and corporate supervision.

Security specialists have cautioned that the methods proposed by tech companies to identify child abuse online raise significant security and privacy issues, fuelling concerns about the forthcoming EU legislation (see below). The Irish Times calls the new client-side scanning technology ‘seriously flawed’, while others believe that scanning photos on phones amounts to mass surveillance and should therefore be forbidden, irrespective of the objective in sight.

Understanding why they take this position requires, a priori, a clear picture of what the technology entails. Thus, after describing the technology in question, we will move on to the EU anti-child-abuse law, exposing its possible shortcomings as drawn from the critique. The article will then explore the US and Apple controversy on the same topic, ending with a discussion of the hypothetical effects of such surveillance, adopting risk-based thinking.

What are these new scanning technologies?

(Photo by cottonbro from Pexels)

There are many different solutions available, as evidenced by the EU’s ‘technical solutions’ report, and although they are not entirely identical, their purpose is the same: fighting child abuse by scanning everyone’s digital activity.

One of the most prominent technologies is known as client-side scanning (CSS). Briefly, in a typical CSS system, every device (phones, computers and tablets, but also watches and smart speakers) would come with pre-installed software that effectively records the user’s activity and alerts the relevant authorities when/if the user acquires any targeted material (e.g., evidence of child abuse) and sends it to other users. This could circumvent encryption (the process of converting human-readable plaintext into incomprehensible text, known as ciphertext) completely, by accessing all content before the user encrypts, sends, receives, or backs it up to the cloud. It could even monitor notes written for personal use, with no intention of backing them up or sending them to anyone else, replicating the behaviour of a law-enforcement wiretap.
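To make the mechanism more concrete, here is a minimal, purely illustrative sketch of the scanning step in Python. Everything in it is assumed for the sake of the example: the flagged-hash database, the reporting hook and the placeholder ‘encryption’ are hypothetical, and real deployments would use perceptual hashing rather than exact SHA-256 matching. The only point it shows is that the scan runs on plaintext, before encryption ever happens.

```python
# Conceptual sketch of client-side scanning (CSS): the scan runs on
# plaintext, before any encryption, so end-to-end encryption is bypassed.
# Every name here is hypothetical; real systems use perceptual hashing
# (not exact SHA-256) and far more elaborate reporting pipelines.
import hashlib

# Hypothetical database of hashes of known targeted material.
FLAGGED_HASHES = {hashlib.sha256(b"known illegal image bytes").hexdigest()}


def report_to_authorities(content: bytes) -> None:
    # Placeholder reporting hook.
    print("flagged before encryption:", len(content), "bytes")


def encrypt_end_to_end(content: bytes) -> bytes:
    # Placeholder for real end-to-end encryption; a trivial XOR is used
    # here only so the sketch runs.
    return bytes(b ^ 0x5A for b in content)


def send(content: bytes) -> None:
    # 1. Scan the plaintext on the device itself.
    if hashlib.sha256(content).hexdigest() in FLAGGED_HASHES:
        report_to_authorities(content)
        return
    # 2. Only unflagged content reaches the encryption step.
    ciphertext = encrypt_end_to_end(content)
    print("uploading", len(ciphertext), "encrypted bytes")


send(b"holiday photo")              # passes the scan, gets encrypted
send(b"known illegal image bytes")  # matched and reported, never sent
```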

EU law and other controversies

The European Parliament adopted the ePrivacy derogation in July 2021 (final version here). This is an interim piece of legislation, allowing service providers to take additional measures to combat child abuse, such as the CSS described above. The derogation is due to be replaced by a regulation specifically aimed at tackling child abuse – scheduled for early December this year.

While the Commission says these screening methods would not hinder privacy, the report (titled ‘Bugs in our Pockets: The Risks of Client-Side Scanning’), written by 14 cybersecurity experts and published on a Cornell University open-access service, focuses on several points and hints to the contrary. The authors argue that citizens’ capacity to freely use digital devices, to produce, save and download materials, and to interact with others relies heavily on our ability to feel safe while doing so. Their stance is that the arrival of large-scale scanning of our personal devices, where we store information ranging from work assignments and to-do lists to messages and photographs from friends, family and loved ones, strikes at the core of our private life. Accordingly, the report states that such mass surveillance will have a detrimental effect on our freedom of expression and even, where it applies, on democracy itself.

Equally intriguing is that, even though the bill passed by a strong majority (537 for, 133 against, 23 abstentions), many EU lawmakers are concerned that the new rules are ‘legally flawed’, in the sense that they might not stand up to scrutiny when challenged in court (see the opinion of the European Parliament Vice-President). This is because the new legislation provides for CSS-like technology, which some believe could amount to a breach of our right to a private life, protected by Article 8 of the European Convention on Human Rights.

US & Apple

However, end-to-end encryption has also been severely criticised. For instance, the US Department of Justice has linked air-tight encryption to public safety in a sort of inversely proportional relationship, where a balance must be found. The main idea is that lawful access to content by the authorities should not be precluded, considering that 1 in 3 internet users is a child (UNICEF) and that, case in point, 12 million of the 18.4 million global reports of online child abuse in 2018 indicated Facebook as the platform used.

Acknowledging the issue, Apple announced it would start using its NeuralHash technology, which scans images just before they are saved to iCloud (Apple’s cloud service). NeuralHash would then match them against known child sexual abuse material in a database maintained by the National Center for Missing and Exploited Children in the US. However, after receiving considerable backlash, the tech giant decided to postpone the rollout of the new features until further ‘improvements’ are made – it is unclear what these might be, but they would most likely relate to limiting the company’s access to any irrelevant or ‘residual’ data.
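For readers curious about the matching step, here is a toy Python sketch of how a perceptual-hash comparison of this general kind can work, assuming the hashes have already been computed. The hash values, bit width and threshold below are invented, and NeuralHash’s actual neural-network hashing and the privacy machinery Apple described are not reproduced; the snippet only shows that nearly identical images can still match, unlike with exact cryptographic hashes.

```python
# Toy illustration of perceptual-hash matching, not Apple's NeuralHash:
# the hash values, bit width and threshold below are all assumed.

KNOWN_DATABASE_HASHES = {0b1011_0010_1111_0000, 0b0001_1110_0011_1100}
MATCH_THRESHOLD = 3  # max number of differing bits still treated as a match


def hamming_distance(a: int, b: int) -> int:
    """Count the bits in which two hashes differ."""
    return bin(a ^ b).count("1")


def is_flagged(image_hash: int) -> bool:
    """True if the hash is close enough to a known database entry."""
    return any(hamming_distance(image_hash, known) <= MATCH_THRESHOLD
               for known in KNOWN_DATABASE_HASHES)


print(is_flagged(0b1011_0010_1111_0001))  # one bit off: still a match
print(is_flagged(0b0110_0101_0000_1010))  # unrelated image: no match
```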

Along the same lines, the FBI–Apple clash of 2016 is one of the highest-profile privacy disputes between a technology company and a government. Pressured by the authorities to unlock a terror suspect’s iPhone following a shooting in California, Apple went to court against the FBI, defending its use of encryption. The company defended the freedom of its users, invoking civil liberties, while the FBI invoked public safety. The Bureau eventually managed to access the iPhone in question by hiring a private (undisclosed) contractor. The main question for us is whether Apple’s decision not to help the authorities should have been trumped by the urgency of the matters at stake. Equally fascinating is the idea that, essentially, both parties relied on arguments based on societal welfare – safety and freedom.

Now, it is interesting that Apple has decided to change its approach (as explained above), declaring it would start using technologies that allow it to track users’ activity on public safety grounds – the very arguments invoked by the FBI and the US Department of Justice a few years back.

Final words

Although appreciative of the critique, one has to accept that it is always a matter of striking the right balance between multiple societal interests, and this case is no exception. On one side, there is the potential (emphasis added) impact on our private life. On the other, there are the tangible and already well-known effects of the online realm in facilitating child abuse.

Now, the unavoidable questions follow: are we willing to take the risk of disclosing some of our personal information to protect countless children from abuse? Can the fight against child abuse be put on hold until we find better solutions? Certainly not, and in the absence of viable alternatives we must use the tools at our disposal.

(Photo by Markus Spiske from Pexels)
