Contact-tracing apps have truly taken off and it looks like they’re here to stay. But not all contact-tracing apps are made equal.
Globally, the variety of approaches can be overwhelming as no consensus has emerged. It can be difficult to work your way through the chaos and understand why countries are diverging in such significant ways.
Buzzwords like informed consent, anonymity and privacy are thrown around by governments defending their stance, but it is rare for them to be fully explained. As the network of tracing apps expands, so too does the confusion.
Acronyms add to the noise (PEPP-PT, DP-3T, TCN Coalition), but grasping their meaning is another matter entirely. Moreover, understanding any individual app and its privacy implications now requires wading through the jargon of centralisation, decentralisation and data minimisation before the apps are updated, sending us back to the drawing board all over again.
In this time of high anxiety and unprecedented government-backed data collection, we don’t yet have all the answers, but this makes it all the more important for legislators across the world to ask the right questions. Here, we outline some of the key questions our representatives should be asking.
- ON EFFICACY
Questions on privacy and on efficacy cannot be separated. If this is a contest between health and privacy, we need to be sure these apps are effective in the first place:
Q1: How can an app protect us from COVID-19? And what’s the evidence that they work?
In their most basic form, these apps rely on two pieces of information, infection data and location data, to identify known COVID cases and trace their contacts. But, before introducing such widespread data collection, governments should be able to give substantial evidence that this will make a difference. This question may be obvious, but the answer is not: experts, such as the Ada Lovelace Institute in the UK, have questioned what level of tracing current evidence on efficacy can justify. We must involve epidemiologists in the conversation and ask governments to back up these strategies with evidence.
Q2: How does each methodological choice impact on efficacy?
Questions on efficacy should not supersede questions on privacy, but they should frame them. Assuming some form of app has the potential to help stop COVID-19:
- Are mandatory apps more effective?
- Do centralised servers provide greater health benefits?
- And what is the most effective tracing tool? Geolocation? Bluetooth? Credit card data? All of the above?
We must ask governments to explain how each choice impacts on efficacy if we are to weigh these options against our own privacy.
Q3: How will we limit the purpose of these apps?
Tracing apps can be easily expanded, taking on new functions. But, as Carmela Troncoso, head of EPFL’s Security and Privacy Engineering Lab (SPRING), has asserted, “we cannot do general purpose privacy protection. That is an oxymoron.” This makes it essential for governments to define their goals from the start and commit to these limitations.
- ON CONSENT AND CONTRACTS
Consent is seen by many as necessary, but this does not mean it is sufficient. If we are being asked to waive our rights, governments must answer specific questions on consent:
Q4: What sort of consent do these apps require?
- What about future updates?
The software will likely need to be updated. But is consent iterative or is the power now in the developers’ hands?
- Is consent a red line?
Pandemics evolve quickly, so introducing a voluntary app is not the same as a guarantee of continued consent requirements. Governments need red lines on consent. For example, in Australia the Deputy Chief Medical Officer left the door open to mandatory app downloads in the future, forcing Prime Minister Scott Morrison to reassure users that apps would remain voluntary.
- Whose consent is important? And why?
Some countries have introduced mandatory tracking, but only for certain groups. In Hong Kong, those arriving from overseas underwent a 14-day quarantine accompanied by a mandatory tracking wristband. But if consent is not universal, how do governments justify differentiating between groups?
- ON DATA MINIMISATION AND DECENTRALISATION
In recent weeks, the debate between decentralisation and centralisation has been central to European conversations on contact-tracing, and we must continue to hold governments to account on these choices.
Q5: Why are you going for a centralised/decentralised approach? And what does this mean?
This can appear to be a technical distinction, but the privacy implications are significant. Governments must be able to justify their choices and explain them to us in simple terms.
- What’s the difference?
Generally, a centralised system differs from a decentralised one in two key ways. First, anonymous identifiers are generated by the server rather than the phone, meaning we must trust the central server to keep these identifiers anonymous. Second, when contact between people occurs, data is uploaded to a central system for both the infected individual and their contacts. This creates a graph, recording a web of social interactions. Decentralised systems record no such graphs: contacts of infected individuals are notified locally, through a “handshake” from the infected individual’s phone. Subtle variations of these approaches are being introduced, so we must demand clear explanations of how each system centralises data collection.
- What is the Apple-Google approach? And why is it decentralised?
In order for these apps to work effectively, they must be able to access Bluetooth data even when you are not using the app. This is what allows them to trace contacts. Currently, this is heavily restricted, particularly on iPhones. For this reason, Apple and Google have collaborated to create their own decentralised approach to contact tracing. This would allow apps to use Bluetooth in the background, but only if the app uses a decentralised approach. This approach is designed to increase privacy, but governments should not rely on Apple or Google for this and must still be able to explain exactly how this setup works.
- Is a centralised system guaranteed to be more effective?
Not necessarily. Centralised systems can create graphs of social networks, and these can be used by epidemiologists modelling the disease. But there is an issue: iPhones do not allow apps to use Bluetooth in the background. Either the app must always remain open, as was required in Singapore, or the system must rely more heavily on Android data, leading to the phrase “Android immunity”. Simply not collecting data from iPhones is a serious limitation. If governments opt for a centralised system, this must be justified on the basis of efficacy, and if it starts to look like centralised apps are not more effective, governments must face significant questioning on this.
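The decentralised “handshake” described above can be sketched in a few lines. This is a deliberately simplified illustration, loosely modelled on DP-3T/TCN-style designs rather than any country’s actual protocol: each phone derives short-lived broadcast identifiers from a private daily key, and matching against an infected user’s published key happens entirely on the phone, so no server ever sees the contact graph.

```python
import hashlib
import hmac
import os

def rolling_ids(daily_key: bytes, slots: int = 96) -> list:
    """Derive short-lived broadcast identifiers from a daily key.

    One identifier per 15-minute slot; without the key, observers
    cannot link the identifiers to each other or to a person.
    """
    return [
        hmac.new(daily_key, f"slot-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(slots)
    ]

# Each phone holds its own random daily key and broadcasts the derived IDs.
alice_key = os.urandom(32)
alice_ids = rolling_ids(alice_key)

# Bob's phone stores the identifiers it overhears via Bluetooth
# (here, two of Alice's) -- raw random-looking bytes, nothing more.
bob_observed = {alice_ids[10], alice_ids[42]}

# If Alice tests positive, she publishes only her daily key.
# Bob re-derives her identifiers locally and checks for overlap on
# his own phone, so exposure is detected without a central record.
exposed = any(i in bob_observed for i in rolling_ids(alice_key))
print(exposed)  # True: Bob is notified of a contact
```

Real protocols add key rotation, attenuation thresholds, and upload authorisation, but the core privacy property is visible even in this sketch: the server’s only job is to distribute the keys of infected users, and all matching stays on the device.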
Q6: How can you guarantee anonymity? And are you minimising data collection?
Beyond questions on decentralisation, the more fundamental concern is perhaps data minimisation. We must ask governments to justify the need for each piece of data they collect. Minimising data collection minimises the chance of identification. In South Korea, the detail of location data shared with the public raised concerns that individuals were identifiable. We should ask why apps require each piece of data from us, and how that is useful to an epidemiologist:
- Why are you asking me to put in my email address/my postcode/my name if the system is anonymous? And why are you asking for extra location data?
Sometimes apps will ask for extra information that is not strictly necessary. This can enable additional functions, for example facilitating further disease modelling. But tracking is possible without this information, so we should ask governments to be explicitly clear about why they want this extra data.
- ON TRUST AND TRANSPARENCY
One way of conceptualising the debate is as one of trust. Who do we trust with our data? And what level of transparency do we require of them?
Q7: Who are you asking us to trust? And who is accountable?
This will depend on the system in place. For centralised systems, the power is in the hands of the government and the health service as they are in possession of large quantities of data. In a decentralised system, citizens may place more trust in themselves as less data is collected centrally. Angela Merkel has defended a decentralised approach: “If you can only trust the federal government and no-one else, that contradicts our understanding of democracy. I trust citizens.” Whether trust is placed in the hands of citizens or governments, separate questions must be asked on accountability as this will rest firmly in the hands of governments and app developers.
Q8: What sort of transparency is required?
Trust does not exempt these apps from scrutiny, so questions need to be asked on transparency. Transparency in some form must be a prerequisite to “informed” consent.
- Are governments being transparent on the options available?
- Will the source code be made transparent?
- And if this is not possible, who do we trust to look over these apps and preserve our rights? Will there be an independent review?
Whatever form of transparency is required, these apps must be scrutinised and we need to ask ourselves who we trust with this task.
- ON BIAS AND SHARING THE BURDEN
Reports tend to focus on privacy, but such debates should not eclipse questions on bias.
Q9: Who will face the greatest burden of these apps? And what measures do you have in place to prevent bias?
This pandemic is not impacting everyone equally, and we should not assume apps will be immune from this problem:
- Is there potential for individuals or communities to be stigmatised if patterns of higher infection are picked out?
- How will you include vulnerable groups? And those without smartphones?
- Will certain groups be hyper-visible in the data and therefore easier to de-anonymise on the basis of social interactions?
Governments must be asked about the safeguards in place to prevent bias in these apps.
- ON LEGISLATIVE GUARANTEES AND LONG-TERM IMPACTS
As more apps are released, we should already be asking questions on the future. Verbal guarantees on privacy are insufficient and we need to ask what legislation is accompanying these apps.
Q10: What legislation is already in place to protect our rights? And what more do you plan on doing?
The scope and timescale of these apps should be defined, alongside who has access to the data. But we also need to ask what legal guarantees are in place.
- Are there legal guarantees that data will not be shared with other groups, for example law enforcement?
- Are there legal guarantees that data collection cannot continue indefinitely without review? At what point will these apps be deleted?
Q11: How will these apps evolve during the pandemic?
As social distancing measures are eased, apps will need to adapt to new circumstances, and so their function may change.
- Is there potential for more international cooperation between apps, especially when travel restrictions are eased?
Given the variety of approaches, many questions are already being asked on compatibility. If these apps cannot communicate with one another, this may impact on travel restrictions. Particularly potent is the Irish case where a centralised system in the North and decentralised system in the South could limit cooperation.
- What’s your plan for when the pandemic is over?
Whether data is set to be deleted or used in future research, these strategies should be questioned right from the start.
As these apps are implemented, governments around the world are weighing health against privacy. If they can come to such differing conclusions, so can you. Before the public go ahead and download these apps, their representatives must ensure that governments are doing everything in their power to preserve privacy and keep citizens informed.