GiveWell Panel

Earlier this week, I attended a panel discussion at GiveWell that focused on issues surrounding the Randomized Controlled Trial (RCT) movement within the non-profit/NGO sector, which is a fancy way of saying that organizations should have some evidence that their programs work before trying to scale them up.

Before I get to my notes, some context. Like many other fields, the non-profit sector faces a growing demand for data-driven, evidence-based solutions. Clearly not everything can be measured, and there is room for experimentation and unproven ventures, yet many organizations cannot reliably show that they have much (if any) impact relative to the status quo counterfactual.

An implicit premise of the discussion was that a lot of RCT data is dirty and results from similar trials can be highly variable. This is not surprising, since much of this data is from impoverished locales.

There was some debate over the bias toward studies that show globally consistent conclusions versus recognizing that region-specific differences can and do exist. There was also a debate over bottom-up grassroots work versus top-down public policy solutions.

Even in charitable, intervention, and development programs, there are no sure bets. Having a 95% confidence level in what you’re doing is unrealistic (95% corresponds to roughly two standard deviations and is often treated as the threshold for a sure thing). Even an 80% confidence level may be too high a bar. The panelists agreed that there is no hard and fast threshold for supporting a program; confidence levels, cost, practical considerations, and externalities should all be weighed.
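For the curious, here is a minimal sketch of the arithmetic behind that “two standard deviations” shorthand. It simply assumes a normal distribution; it is my own illustration, not anything the panel presented.

```python
# Arithmetic behind the "two standard deviations" shorthand: for a normal
# distribution, a two-sided 95% confidence interval spans about +/- 1.96
# standard deviations, while 80% spans only about +/- 1.28.
from statistics import NormalDist

for confidence in (0.95, 0.80):
    # split the leftover probability evenly between the two tails
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    print(f"{confidence:.0%} confidence ~ +/- {z:.2f} standard deviations")
```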

Below are several other comments that I found interesting:

  • It is difficult, if not impossible, to separate a program’s effectiveness from the organization running it. 
  • Data collection could be a lot better because much of it is driven by academics, who have different incentives than decision-makers and practitioners.
  • A case can be made for supporting a less organized organization with poor data monitoring over an organization with great data, if there’s a wide disparity in estimated impact (e.g., deworming programs can increase incomes by roughly ten times as much as cash transfer programs).
  • Some of the biggest surprises the panelists have seen have been behavioral incentives with outsized positive impact (e.g., immunization rates shooting up dramatically when a kilo of lentils is offered).

Overall, I enjoyed the event. Rather than showing overconfidence in RCTs and data, the panelists were upfront about the limitations and misuses of RCT data. This surprised me, since GiveWell has built its reputation on data-driven giving. To some extent, philanthropy is like anything else: you make decisions about an uncertain future with incomplete and imperfect information.

