Paul Currion wrote a fairly lengthy guest piece on MobileActive’s blog (http://mobileactive.org/how-useful-humanitarian-crowdsourcing) last week. Unfortunately, for the most part he wasn’t even criticizing the right people, software or communications.
From January to April this year I coordinated the translation, geolocation and categorization of emergency text messages sent in Haiti in the wake of the January 12 earthquake. This was the first step in the only emergency response service available to people within Haiti during this critical period. I worked with some amazing people during this process, including thousands of Kreyol speakers around the world who translated these messages as they arrived, categorized them and clicked on a map where they knew locations to be. When I say that I was ‘coordinating’ this, what it really means is that I was the cheer-squad and tech support for these people globally who had come together to help, many of them for weeks on end, to do what the majority of the responders could not: translate from Kreyol to English and identify locations not yet labeled on any map.

The messages were streamed back to the emergency responders within Haiti. Shortly after launch, the messages were also taken by a group of volunteers at Tufts University, who put many of them on a public map. They had been mapping crisis information since the day of the earthquake and added many of these messages to this map, in many cases refining the coordinates and working with the responders to flag ‘actionable’ messages. The group at Tufts were using the Ushahidi platform to do this, primarily because Patrick Meier of Ushahidi was there. We were brought together by Josh Nesbit of FrontlineSMS:Medic, and there were many other organizations involved (see http://www.mission4636.org/history/ – I’m with Energy for Opportunity), but it is easy to see why Ushahidi received the lion’s share of publicity: they were the public face of the effort, hosting the map and providing the media-friendly images of students working together.
It is understandable that the press have been wrong in thinking that this was the extent of the effort. I have never bothered to correct the press when they have attributed (and in some cases awarded) our individual or collective achievements to the wrong people (or a subset of the right people) because the overall message was spot on: you can make a real-time difference from anywhere in the world. But it is a real disappointment to see someone in the humanitarian space make the same error and unintentionally spread bad information within our community, so it is worth correcting.
Currion’s article was a review of Mission 4636, undertaken with evidence from the popular press and a data-dump from the Ushahidi-Haiti map. It went no deeper than this – he did not contact those of us working on the system or those responding to the messages. He makes an excellent point that only people on the ground can define exactly what constitutes ‘actionable’ data, but then rests his entire argument on a scenario where he simply imagines himself on the ground.
This is not how you carefully review an information system and it is not surprising that his assumptions were way off. He presupposed that most of these messages were not actionable and there was not the capacity to respond. In fact, the majority of the messages he read were actionable; it was the responders on the ground who rightly defined what was ‘actionable’; and within two weeks of the earthquake they told us they had the capacity to respond to more messages than were coming through our system.
This should be enough to simply ignore his post, but that doesn’t help anyone trying to do a better job of this. Like Currion, I have worked in information management for about 10 years. This does not qualify me to review information systems. As part of my work, I have trained and worked as a systems analyst. This does qualify me. Reviews of humanitarian organizations should take place in private for a number of reasons, not least of which is people’s willingness to be honest and open when they are not subject to the scrutiny of commentators working with partial information. I know of half a dozen people and organizations currently conducting critical reviews of crowdsourcing for humanitarian response, mostly very detailed studies about specific aspects of translation and geolocation. They are rightly not blogging as they go. It is very flattering that commentators on the web want to get involved too: systems analysis is not very exciting and won’t get CNN banging on your door. Methodologies for critical analysis such as Currion’s would not pass review by engineers in our field, but this doesn’t (and perhaps shouldn’t) stop Currion and those like him from reporting their opinions through other avenues like blogging. So here are some small (non-exhaustive) pointers about how to conduct reviews of information management systems, with reference to a few of the misconceptions in the original article:
- Review all data. The publicly downloadable data from Ushahidi Haiti (about 3,000 records) is only about 1-2% of all the structured data that went through Mission 4636. For example, there were 80,000 messages to 4636; this data doesn’t contain the ‘actionable’ flag; and it doesn’t contain phone numbers, which are one part of identifying the source of information to establish veracity (and allow a quick ‘tell us more’ response). It would be ideal to get as close to the actual reports as possible, too: strictly speaking, Currion did not read the messages sent to 4636, he read the (crowdsourced) translations.
- Establish use-cases with the people who used the data. I don’t know of any aid workers or emergency responders who simply took data dumps in the manner reported in the article. Those who passively received the data took category- and/or geography-specific reports in real time as they were posted. Actionable items were identified in conjunction with the responders. As Currion said, only people on the ground can define actionable data and the format in which to receive it, which is exactly what did happen – why would anyone who knows about humanitarian response assume otherwise? It was misleading to report otherwise, and it created a false dichotomy between us and the broader response community. For example, we collected reports of unaccompanied minors according to specifications given by OXFAM. For obvious reasons, reports of (and sometimes from) unaccompanied minors were excluded from the public map and data, so these are not in the publicly available data dump. We were able to turn these unstructured messages in Haitian Kreyol into structured, geolocated reports with English translations (and return numbers), taking the burden of filtering, translating and structuring reports off the already overstretched workers within Haiti.
- Talk to stakeholders. The article focused on water management. Individual requests for water through 4636 were defined as actionable by the responders, as were clustered requests for food, especially in areas unknown to aid workers at the time of report. Currion argued that he would not use these reports for water management. I don’t know which would be the better strategy. It is fine to establish hypothetical situations within your own area of expertise, but for real, past events, it might be useful to ask what really happened. If someone within Haiti sent a request for water to 4636, the translation of that message ended up in the hands of one of the responders (in this case, Southern Command), often within just a few minutes. My understanding is that their main concern was guarding against disease outbreaks, but you should ask them about specific response cases and the exact action taken. We are extremely fortunate that we did not see the current cholera outbreak during this period, and it is the people in charge of water management that we have to thank. I am sure they were not idle and were carefully balancing different information sources, so we can presume that they were not wasting their own time when they requested more information from the crisis-affected population.
- Establish appropriate metrics for evaluation. The raw number of messages is not really relevant – the majority of calls to 911 are not emergencies either. It is better to evaluate the uniqueness of individual reports compared to what other information is available. For example, the PakReport instance Currion cites as a failure for having too few records is actually the most detailed map of village-level assessment reports, thanks to the efforts of crowdsourced volunteers outside of Pakistan who geolocated text reports. We received thanks from MapAction within Pakistan for this last week: “we are in desperate need for better information on village locations, so this is perfect!” I extend this thanks to everyone who helped map these reports! To the doubters who voiced skepticism and did not help, I ask you to please swallow your pride, join us, and contribute just a few hours to help next time.
- Consider multiple interpretations before drawing conclusions. Currion did not find the Ushahidi map to be useful. The director of FEMA called it the most accurate map of the crisis. Neither analysis negates the other, and I respect both positions equally. But perhaps this should give pause before anyone draws the broad conclusion that the entire deployment was not useful, and the even broader conclusion that crowdsourcing is not useful.
- Understand the user community. The biggest users of this system were not humanitarians; they were Haitians. A trickle of messages ended up on the public Ushahidi map and a river went through the entire system, but these pale in comparison to the absolute flood of messages between the crisis-affected population and their friends and relatives outside of Haiti. Members of the Haitian community were using the map when in contact with friends and relatives within Haiti who possessed only cellphones. They were directing people to the nearest locations where they could obtain food and explaining the system for obtaining it (e.g.: “there is a food distribution point 1 kilometer north, and you can only collect food for so many people”). The volunteers I was working with did this for many of the people who texted 4636, too – it was a community helping itself, and this was an order of magnitude greater than anything we in the humanitarian world achieved. People would not have been silent if there was no 4636 service or up-to-date crisis map: they would have been trying to help themselves through whatever information and communication means possible, so the more we can systematize this information, the more we can directly aid the crisis-affected population by helping them help themselves. These are the hardest stats to quantify, as the people who were helped this way are those who do not otherwise clog up the information and aid channels on the ground in unnecessary ways.
As for my opinion of the Ushahidi platform itself? I have never really used a Ushahidi deployment directly, so I am not qualified to comment. In Haiti, the translation platform was originally hosted on an Ushahidi server by a very talented Ushahidi developer, but the code was adapted from a missing-persons platform, and we later transferred this service to CrowdFlower. A few weeks after the transfer to CrowdFlower, we staggered the transfer of the translation service from volunteers to paid Samasource workers within Haiti. Maybe 90% of all work on what some people have called ‘Ushahidi Haiti’ did not take place on Ushahidi. But that misses the point that it was the people, not the technology, that made the biggest difference. I have worked with people from the Ushahidi team twice – in Haiti and in Pakistan. They are among the most professional software developers I have ever worked with, and also among the first to talk about the limitations of their own platform (http://blog.ushahidi.com/index.php/2010/05/19/allocation-of-time-deploying-ushahidi/). The press might report that Ushahidi is the solution to all the world’s problems, but I have not observed this attitude in the organization itself. When people in Pakistan needed a (slightly different) crowdsourcing platform for part of the PakReport initiative, the first thing Ushahidi did was admit this and reach out. This is certainly not the behavior of an organization that believes it has a broad solution to all problems. On the back of my experience with their staff, I would certainly like to see them expand.
It is easy to get caught up in our own bubbles and overestimate the level of exposure that Ushahidi has received. Even within the crowdsourcing parts of industry and academia, most people have not heard of Ushahidi. Those who have might struggle to remember what they read about them many months ago. Just as many would ask me if ‘Ushahidi’ is that new conveyor-belt sushi restaurant (which is a form of crowdsourcing, I suppose). It is not that they have received too much attention, just that the rest of us receive very little – but that is business as usual. It worries me that at least some of the criticism they have received is the result of simple jealousy. We should be happy that an organization with the same goal as many of us is getting recognition.
I understand the concern about allocating resources, so here’s a simple comparison. One of the leaders of the search and rescue teams in Haiti told me that the average cost per successful rescue was about $1,000,000. If we had paid for every 4636 message to be translated, categorized, mapped and flagged as actionable/non-actionable, it would have cost $200,000-$300,000. In other words, the entire ecosystem (which was 50 times larger than Currion estimated) would probably have cost about one quarter as much as a single search and rescue success. Whatever percentage of that million dollars went to gathering intelligence, this system would pay for itself very quickly. In the case of Haiti, it was overwhelmingly a volunteer effort – the first months were free, providing actionable data and supporting the crisis-affected community. In addition to the dozens of lives we know we helped save, and the hundreds that the responders assured us we helped save, we took data-structuring off the hands of those within the crisis-affected region in a way that was a net gain in monetary resources. Funding initiatives like this to be even more scalable, and just as importantly more prepared, is an obvious allocation of future resources.
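For anyone who wants to check the arithmetic behind that comparison, here is a minimal sketch using only the figures quoted above (the 80,000 messages, the $200,000-$300,000 paid-processing estimate, and the $1,000,000 average rescue cost); the per-message rate is simply derived from those estimates, not an audited figure:

```python
# Back-of-envelope cost comparison, using the post's own estimates.
messages = 80_000                                   # messages sent to 4636
total_cost_low, total_cost_high = 200_000, 300_000  # paid translation/mapping estimate, USD
rescue_cost = 1_000_000                             # average cost per successful rescue, USD

# Implied cost to translate, categorize, map and flag a single message.
per_message_low = total_cost_low / messages         # $2.50
per_message_high = total_cost_high / messages       # $3.75

# Whole pipeline relative to one search-and-rescue success.
midpoint = (total_cost_low + total_cost_high) / 2
fraction_of_one_rescue = midpoint / rescue_cost     # 0.25, i.e. about one quarter

print(f"${per_message_low:.2f}-${per_message_high:.2f} per message")
print(f"whole pipeline is about {fraction_of_one_rescue:.0%} of one rescue")
```

A few dollars per fully processed message against a million dollars per rescue is the whole argument in two lines of division.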
All crisis response is an exercise in failure. We cannot help everyone who needs help, so we are simply trying to find the best ways to fall short. Hundreds of lives saved and tens of thousands receiving first aid sounds large. To the people we helped it was everything, but on the scale of the whole crisis it was small. However, for a short time in Haiti the ability to respond to requests for help was wider than at any point in Haiti’s past: even child-births reported through 4636 were being responded to. This level of response never occurred with the ‘114’ emergency reporting service that became inoperable at the time of the earthquake, or at any other time in Haiti’s past. Hopefully, it will again in the future. Response systems are evaluated on the entire effort, not individual cases, but it is positive to think about those handful of mothers who reached out to 4636 for help and received a level of aid that would have been beyond their expectations even prior to the earthquake.
One beauty of crowdsourcing is that anyone can step up to help, even if it is just tagging locations on a map, and this can truly have a multiplier effect for those on the ground. So please step up if you wish to help.