An App That Spotlights Your Risks

By Lindsay Beck | October 31, 2013


Earlier this month, I had the opportunity to participate in an event exploring how human rights defenders, given their diverse risks, might approach an emergency alert mobile app, and how to ensure that activists consider those risks before adopting such a tool.

The following is a summary of this event, written by Alix Dunn of the Engine Room and Libby Powell of Radar. 

How can an app developer make sure that an app doesn’t do more harm than good? For Amnesty International, that question could be one of life or death for human rights defenders using their new Panic Button app.

Over the last year, the Technology and Human Rights team at AI have been developing a mobile app that responds directly to the high-risk scenarios that human rights defenders (HRDs) face. Built for Android, the Panic Button app embeds a hidden function that, when activated, will send a distress signal via SMS to three trusted contacts in an emergency. The app is designed specifically to respond to the high risk of detention that HRDs face, and the development team have spent a year creating and testing a prototype, which received £100,000 in the Google Global Impact Challenge this summer. Having released a functional first version of the app, the project team are currently planning three pilots with HRD networks in Asia-Pacific, Central America and East Africa. You can read more here about what it does and who it’s for.
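To make that core behaviour concrete, here is a minimal Kotlin sketch of what a “send a distress SMS to trusted contacts” routine on Android could look like. It is an illustration only, not the Panic Button’s actual code; the function name, message wording, and contact handling are assumptions, and permission prompts are left out.

import android.content.Context
import android.location.LocationManager
import android.telephony.SmsManager

// Illustrative sketch only: not the real Panic Button implementation.
// Assumes the SEND_SMS and location permissions have already been granted
// and that the trusted contacts' numbers come from the app's settings.
fun sendDistressAlert(context: Context, trustedContacts: List<String>) {
    val locationManager =
        context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

    // Use the last known GPS fix if one is available; otherwise send a plain alert.
    val lastFix = try {
        locationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER)
    } catch (e: SecurityException) {
        null // location permission not granted
    }

    val message = if (lastFix != null) {
        "HELP. I may have been detained. Last known location: " +
            "${lastFix.latitude}, ${lastFix.longitude}"
    } else {
        "HELP. I may have been detained. Location unknown."
    }

    // One SMS per trusted contact (the app described above uses three).
    val smsManager = SmsManager.getDefault()
    for (number in trustedContacts) {
        smsManager.sendTextMessage(number, null, message, null, null)
    }
}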

Unfortunately, the Panic Button is not a cure-all for threatened activists. In some cases, having a phone with GPS enabled can pose a bigger threat (it reveals your location to your telecommunications company, which may be in league with an adversary) than the one Panic Button can fix (you have been detained or abducted and no one knows your location or that you’ve been seized). Thinking critically about when and how a human rights defender should use Panic Button should be part of a larger process of understanding the ecosystem of threats and risks an individual faces. That process is the substance of trainings, exercises, curricula, group discussions, even entire books. So how does a human rights defender who has access to the Panic Button app know whether it is smart for them to use it? And if they decide to use the app, how do they know the best way to incorporate it into their work to get maximum benefit? And how does AI make sure that users have enough information to make smart choices about using the app?

These are tough questions.

To explore how to approach them and what is possible with in-app risk assessments, AI hosted a workshop at its secretariat. The Engine Room helped with the framing, facilitation, and invitations, and for one day a human rights defender who has tested the app through its iterations, digital security trainers, media development trainers, user experience experts, and AI team members came together to tackle the problem and scope out solutions.

The challenge was to design a guided risk assessment within the app with the dual goal of improving HRDs’ decision-making around security and increasing the chances of the app being effective. Risk assessments are ideally carried out well ahead of any action, and should guide users through the potential risks that may arise and how they might best prepare for them.

Without a trainer to explain jargon or adapt scripted scenarios, the language and examples used within the guide may not be accessible or applicable, and could be a dead end for users. While the app primarily targets an active, informed user group of HRDs, it also needs to be accessible to those who are new to activism or are thrown into high-risk action ‘overnight’ by sharp turns in policy or governance.

So at every stage developers need to consider the scope of the process and ask themselves: ‘Who is this for? What level of risk is this dealing with?’ Rather than considering this process in a vacuum, the group were given potential use cases and incident scenarios that would allow them to consider the use of the Panic Button and help to guide the development of the risk assessment function.

One of the use cases was a female HRD from the DRC, the well-known head of a women’s rights organisation based in Goma, who documents abuses by both government and rebel forces in regions where territorial conflict is active. As head of the organisation, she travels widely, sharing information and advocating for justice, and regularly faces threats when crossing checkpoints controlled by various armed groups. On a particular trip to Rwanda, on her way to an international conference, her car is pulled over by police, allegedly due to an issue with her Congolese licence plates. While the police speak to the driver, she is pulled out of her car and into another vehicle.

She is travelling with her Android phone, and she activates the Panic Button from the back of the car as she is taken to a local police station and questioned. She thinks her mobile has some credit (at least enough for one message), but she’s not sure, as she doesn’t use the smartphone very often.

When the group discussed this case, it was clear that in order for the app to be useful in the moment of an attack, HRDs would need to prepare extensively. She would need to agree on protocols and procedures with contacts for what happens when the Panic Button is activated. She would need to be ready for the moment.

With this emphasis on preparation, the workshop group discussed how to structure a clear-cut preparation process. We were able to map out key steps that activists could follow in order to make sure that at the moment they pushed the alert they would be able to spark a chain of actions that would bring them meaningful assistance as rapidly as possible whilst also warning others in their network.

This narrowing of scope, from how to assess your risk to how to effectively prepare to use Panic Button, made it possible to start designing possible models for the in-app risk assessment. By the end of the day the group had sketched and agreed upon two plausible formats, a rudimentary design of possible user interfaces, and the scope of questions and threats that an in-app assessment can address. As the app itself is piloted and disseminated, information to help individuals prepare for the moment when they might need to activate Panic Button must be available, including to those who are out of range of trainers.
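As a purely hypothetical illustration of what one such format could look like (the questions, weights, and advice below are invented for this post and are not the formats the group agreed), an in-app assessment might be modelled as a short yes/no questionnaire whose answers feed a rough score and a few preparation prompts:

// Hypothetical sketch of one possible in-app risk-assessment format.
// The questions, weights, and advice are invented for illustration only.
data class AssessmentQuestion(
    val id: String,
    val text: String,
    val riskWeightIfYes: Int
)

val sampleQuestions = listOf(
    AssessmentQuestion("contacts", "Have you agreed an alert protocol with three trusted contacts?", 0),
    AssessmentQuestion("credit", "Do you keep enough phone credit to send at least one SMS?", 0),
    AssessmentQuestion("checkpoints", "Does your work involve crossing checkpoints or border posts?", 2),
    AssessmentQuestion("tracking", "Could your phone reveal your location to an adversary?", 3)
)

// Turns yes/no answers into a crude risk score plus next-step prompts.
fun summarise(answers: Map<String, Boolean>): Pair<Int, List<String>> {
    val score = sampleQuestions
        .filter { answers[it.id] == true }
        .sumOf { it.riskWeightIfYes }

    val advice = buildList {
        if (answers["contacts"] != true)
            add("Agree a response protocol with three trusted contacts before relying on the alert.")
        if (answers["credit"] != true)
            add("Keep enough credit on your phone for at least one SMS.")
        if (score >= 3)
            add("Weigh whether carrying a GPS-enabled phone adds more risk than the alert removes.")
    }
    return score to advice
}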
