Amidst a frenzy of events for NYC’s Climate Week, frog hosted our friends and former partners from UN OCHA’s Centre for Humanitarian Data along with a few distinguished speakers to discuss data collection, building unbiased algorithms, and how we can use the power of predictive analytics to get more proactive about delivering humanitarian aid in times of crisis.
Climate Week sheds light on this pressing issue as extreme weather and unpredictable storms create greater and more frequent humanitarian crises around the world. In the last few months alone we’ve seen the devastation Hurricane Dorian brought to the Bahamas, and the massive food crisis caused by flooding after Cyclone Idai hit Malawi, Mozambique and Zimbabwe. These storms devastate communities, wipe out access to clean water, and destroy crops, homes and other resources that people depend on for survival. UN OCHA is dedicated to providing aid in these times of crisis and is working hard to become proactive, rather than reactive, in our greatest times of need.
frog worked with OCHA in 2014 to develop the Humanitarian Data Exchange (HDX), a platform that enables workers from the UN, NGOs, governments and universities to radically improve data sharing during extreme situations. Since then, the Centre for Humanitarian Data was founded to manage HDX and the Humanitarian Exchange Language (HXL). A trusted environment like HDX gives humanitarians confidence that personally identifiable information will be protected. HXL adds depth to that intelligence by making the sharing of important data less all-or-nothing: overlapping organizations can benefit from knowing which types of data are being collected without needing every detail, so partners can discern meaningful signals without putting individuals at risk. In addition to providing data services, data literacy, data policy and network engagement, the Centre is also expanding its capabilities into predictive analytics to help manage and finance aid for humanitarian crises in a more proactive way.
The evening was filled with expert speakers and a keynote presentation, followed by a panel Q&A with the audience. Our lineup included Andrew Kruczkiewicz, Technical Advisor at the Red Cross Red Crescent Climate Centre, who discussed forecast-based financing in light of climate change; Leonardo Milano, Predictive Analytics Team Lead at the OCHA Centre for Humanitarian Data; and Nadia Piffaretti, Senior Economist at the Global Center on Conflict, Security and Development at The World Bank. The keynote was delivered by Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, who discussed the ethical implications of using predictive models for humanitarian response.
After welcome remarks by frog’s Global Impact Lead Jon Wasserman and Centre Lead Sarah Telford, Lisa Carty, Director of UN OCHA’s Humanitarian Financing & Resource Mobilization Division, took the stage to speak about the global humanitarian situation and how UN OCHA is using predictive analytics and anticipatory humanitarian financing to make response times faster and more cost-effective. “‘We are fanatical about improving the world,’” Lisa said, quoting frog’s manifesto. “That’s what brings us together—the humanitarian and design communities.”
And when it comes to that intersection, we know that “when organizations collaborate on common goals, they can learn from each other’s strengths and challenges,” says Jon Wasserman, “but it’s not only best practices that get shared. When partnerships are forged, the durability and efficacy of the humanitarian data landscape increases.”
To delve further into the principles surrounding the current humanitarian data landscape, we welcomed to the stage Cathy O’Neil, who spoke more specifically about the ethics of algorithms. Cathy opened by stating that you don’t need a PhD in mathematics to understand how the algorithms used in predictive analytics work. In fact, she urged, it is everyone’s responsibility to care about the implications of these algorithms. “You already know what a predictive algorithm is—it’s in your head,” she said. Algorithms follow the same human logic we draw on every day—collecting data on what was “successful” in the past in order to predict what will work in the future.
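Cathy’s point that prediction is, at bottom, counting past “successes” can be sketched in a few lines of Python. This is purely a hypothetical illustration (the lunch-spot data and names are invented, not from the talk): we score each option by its historical success rate and predict the best scorer.

```python
# Hypothetical toy: "predict" tomorrow's best lunch spot by counting which
# choices were "successful" in the past -- the same everyday logic Cathy
# O'Neil describes as the predictive algorithm already in your head.
from collections import Counter

history = [
    ("taqueria", "good"), ("diner", "bad"), ("taqueria", "good"),
    ("ramen", "good"), ("diner", "bad"), ("taqueria", "bad"),
]

# Score each option by its past success rate.
visits = Counter(spot for spot, _ in history)
successes = Counter(spot for spot, outcome in history if outcome == "good")
scores = {spot: successes[spot] / visits[spot] for spot in visits}

prediction = max(scores, key=scores.get)
print(prediction)  # -> "ramen": a perfect record, but from a single visit
```

Note that the “winner” here is the option with the least evidence behind it: the same small-sample fragility that, at scale, makes real predictive systems biased.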
The problem is that human logic is inherently biased by the experiences of the individual—meaning “success” for you may look entirely different from “success” for someone else facing different circumstances. In much the same way, algorithms are often built without accounting for the full ecosystem of stakeholders involved.
One apt example Cathy cites involves the education and incarceration systems. Predictive analytics are often used to estimate an individual’s propensity to commit a crime. And while this kind of predictive criminology may not look like a Minority Report dystopia yet, it can be just as harmful. In many cases, data on crimes actually committed is hard to obtain, so police use data on arrests by neighborhood, or suspensions in school districts, instead. The problem is that an arrest is not always indicative of a crime committed. Many reports find that arrests and suspensions are doled out disproportionately in low-income neighborhoods and underserved school districts. Still, police looking at this data are more likely to keep going to those neighborhoods and schools to look for crime. The biased dataset functions as a self-fulfilling prophecy, perpetuating the same problems without ever getting to their root. “Biased algorithms create feedback loops that reward destructive predictive behavior,” says Cathy. The numbers show the algorithms are “succeeding”—but at what cost?
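The feedback loop Cathy describes can be made concrete with a small simulation. Everything below is a hypothetical sketch with invented numbers, not real crime data: two neighborhoods have identical true crime rates by construction, but patrols are allocated in proportion to past arrests, so the arrest record keeps “confirming” the initial bias.

```python
# Hypothetical simulation of a predictive-policing feedback loop.
# Both neighborhoods have the SAME true crime rate, but patrols are
# allocated based on past arrest counts -- a biased proxy for crime.
import random

random.seed(0)  # fixed seed so the run is reproducible

TRUE_CRIME_RATE = 0.10            # identical everywhere, by construction
PATROLS_PER_DAY = 10
arrests = {"A": 5, "B": 1}        # historical bias: A was over-policed

for day in range(1000):
    total = sum(arrests.values())
    for hood in arrests:
        # Allocate patrols in proportion to past arrests (the feedback loop).
        patrols = round(PATROLS_PER_DAY * arrests[hood] / total)
        for _ in range(patrols):
            if random.random() < TRUE_CRIME_RATE:
                arrests[hood] += 1  # more patrols -> more arrests -> more patrols

print(arrests)  # arrests pile up in A even though crime rates are equal
```

Even though both neighborhoods are equally “criminal” by design, the arrest record ends up lopsided, and any model trained on that record would dutifully recommend policing A even harder.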
So what’s the answer if we want to harness the power of predictive algorithms to aid our most vulnerable populations in times of crisis? The answer, of course, is not so simple. But we do know it will take human-centered, diverse and inclusive teams to root out bias. And, Cathy posits, perhaps even a Hippocratic oath for those building the algorithms. Like doctors, shouldn’t they promise to “do no harm”?
What is clear is that we will continue to face difficulty in collecting data on some of the most underserved and vulnerable populations around the world. As designers, coders, engineers and policymakers using predictive analytics, we must always be asking ourselves, “Who is going to be affected by this algorithm?” And even further, “For whom does this algorithm fail?”
Jon points out that while we talk a lot about “unconscious bias” or “implicit bias” when critiquing data analysis, that framing ignores the potential for active, conscious bias. Selective targeting or response can have severe impacts on vulnerable communities. There must be responsible humans making thoughtful decisions before, during and after data collection and analysis. By fostering a trusted environment for sharing and discourse, HDX improves the landscape of responsible data use.
More than ever, algorithms are being used everywhere—they aid us in our day-to-day activities and function as life-altering decision makers. But algorithms are only ever as just as the systems that make and perpetuate them. This means it’s more important than ever to have diverse voices in the room working on every aspect of these problems, from engineering and coding to designing and strategizing how they will be deployed and how the data will be collected. There are no more excuses for biased data being perpetuated by biased systems. By putting people first and operating with empathy and understanding—along with engineering and design expertise—we believe we can start to use predictive analytics for good.