
Posted on 02/06/2024

Inspirations and lessons learned for updating doebem's research methodology

By Luan Paciência

Reading Time: 11 min

TL;DR

  • In order to update and improve our research methodology, we studied the methodologies of leading evaluation organizations: GiveWell, Giving What We Can, SoGive, Founders Pledge, Rethink Priorities, Open Philanthropy and Animal Charity Evaluators. This post aims to share these practices, which served as inspiration for doebem's new research methodology.
  • In general, there is a set of assumptions and beliefs shared by the evaluation organizations that serve as a backdrop for their research, but there are disagreements about implementation.
  • As a way of optimizing the search for the best donation opportunities, the evaluating organizations focus on prioritized causes, generally selected using the criteria of Importance, Tractability and Neglectedness.
  • Organizations and solutions are generally evaluated on the criteria of evidence of effectiveness, cost-effectiveness and openness to new donations. To be feasible, this process works like a funnel, starting with superficial evaluations of a large number of organizations and progressing to in-depth evaluations of a very restricted set of the most promising ones.
  • In cases where there is an interest in identifying the most cost-effective solution regardless of the cause, it is necessary to put the different impacts on a single scale by assigning moral weights.

doebem's current research methodology was built on the lessons learned in its early years and inspired by the methodologies of leading evaluation organizations around the world that share the mission of maximizing the positive social impact of donations by identifying the most cost-effective solutions, namely GiveWell, Giving What We Can, SoGive, Founders Pledge, Rethink Priorities, Open Philanthropy and Animal Charity Evaluators. The aim of this article is to share the main mapped practices that guided the updating of doebem's research methodology.

In addition to sharing a mission with doebem, the organizations that served as inspiration also share a diagnosis of the third sector worldwide: philanthropy operates in far-from-optimal conditions, with resources used inefficiently, in other words, with a large proportion of donations going to solutions with little or no impact. There are many causes for this, the most notable being the scarcity and/or poor dissemination of evidence and knowledge production in and about the third sector. This, in turn, is one of the factors explaining why many donation decisions are made without the support of data and evidence.

In this way, these organizations, like doebem, are committed to producing data and evidence and using them to guide all their activities, especially the recommendation of the best solutions for receiving donations. As a consequence, and with implications for the research methodology, no truth can be assumed to be absolute: there must be total openness to changing opinions, positions and recommendations as new evidence is found. What's more, skepticism must be the starting point, i.e. a solution is understood to be effective only when there is data and evidence to support this claim.

Faced with the huge number of solutions and civil society organizations competing for resources, as well as the enormous complexity of social problems around the world and the limited manpower to thoroughly evaluate all the existing solutions, it is necessary to adopt some strategies that make this search for the best donation opportunities feasible.

Prioritizing Causes

A first commonly used strategy is to identify the most promising causes in order to focus the search for solutions, in a process called Cause Prioritization. In general, causes are prioritized using the Importance, Tractability and Neglectedness criteria that give the framework its name, ITN.

The Importance criterion seeks to capture the social value of solving a given problem, i.e. how many people are currently affected by it and how significant this impact is on their lives. The Tractability criterion seeks to capture the chance of the problem being solved as more resources are invested in its solution; in other words, whether the problem appears to be solvable. Finally, the Neglectedness criterion aims to identify the amount of resources and attention currently devoted to solving the problem. The hypothesis behind this is that causes that stand out in all three criteria have a greater chance of encompassing solutions with high cost-effectiveness, i.e. solutions that generate a high positive social impact per unit of money invested.
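
As a rough illustration of how the ITN criteria might be combined into a single score, the sketch below ranks two hypothetical causes; the 1-to-5 scale, the example causes and the multiplicative aggregation are illustrative assumptions, not the actual scoring rules of doebem or of any of the organizations cited.

```python
# Minimal sketch of ITN-style cause scoring. The scale, the example causes
# and the multiplicative aggregation are illustrative assumptions only.
causes = {
    # cause: (importance, tractability, neglectedness), each scored 1-5
    "hypothetical cause A": (5, 4, 3),
    "hypothetical cause B": (2, 4, 1),
}

def itn_score(importance, tractability, neglectedness):
    """Combine the three criteria; a higher score suggests a cause more
    likely to contain highly cost-effective solutions."""
    return importance * tractability * neglectedness

ranked = sorted(causes, key=lambda c: itn_score(*causes[c]), reverse=True)
print(ranked)  # causes ordered from most to least promising
```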

Although the above description serves as a common backdrop for many organizations, there are differences in implementation. Some organizations include additional criteria, such as urgency and the likelihood of there being organizations working on the cause. There is also divergence in how values are assigned to the criteria, which can happen through the analysis of quantitative data or through the perceptions of members of the evaluating organizations' teams. In other cases, the criteria are broken down into sub-criteria before being evaluated.

Evaluation of organizations/solutions

Once the causes have been prioritized and the search for solutions has been focused, the process of evaluating the organizations/solutions begins. In general, this process works like a funnel, with a first phase of very superficial analysis of a large number of organizations, followed by phases in which the number of organizations decreases and the depth of analysis increases, until reaching a stage of in-depth analysis of a few organizations. The first phase primarily involves the analysis of public data, such as information on the organizations' websites. The last phase involves very close interaction with the teams of the organizations being assessed, in order to share more specific data and check information. It is important to note that this model carries a probability of error, i.e. very cost-effective organizations/solutions may not be identified and recommended because they lack the specific information required at each phase and therefore do not advance in the process; but it is necessary because manpower is too limited to evaluate all organizations/solutions at the same depth.
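
To make the funnel concrete, here is a minimal sketch of staged filtering; the stage names, scores and thresholds are hypothetical and do not correspond to any real evaluation process.

```python
from dataclasses import dataclass

# Minimal sketch of a funnel-style evaluation: a cheap screen over many
# organizations, then progressively deeper (and costlier) checks on fewer
# of them. All names, scores and thresholds are hypothetical.
@dataclass
class Org:
    name: str
    public_data_score: float  # stage 1: screen based on public information
    evidence_score: float     # stage 2: review of the impact evidence
    deep_dive_score: float    # stage 3: in-depth review with the org's team

orgs = [
    Org("Org A", 0.9, 0.8, 0.85),
    Org("Org B", 0.7, 0.4, 0.0),
    Org("Org C", 0.3, 0.0, 0.0),
]

stage1 = [o for o in orgs if o.public_data_score >= 0.5]   # many orgs, shallow analysis
stage2 = [o for o in stage1 if o.evidence_score >= 0.6]    # fewer orgs, deeper analysis
finalists = [o for o in stage2 if o.deep_dive_score >= 0.7]
print([o.name for o in finalists])
```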

Throughout the evaluation process, organizations/solutions are generally assessed on three criteria: evidence of effectiveness, cost-effectiveness and openness to new donations. The first of these seeks to capture evidence that the solution being evaluated actually generates a positive impact, i.e. contributes to solving the problem it sets out to solve. It is assumed that there are different types of evidence, from impact assessments to reports from those who have benefited, and all of them have their value. However, it is also understood that there is a kind of ranking of the robustness and quality of these types of evidence, with randomized evaluations being preferable to the others. Effective solutions are expected to have and present various types of evidence about their effectiveness.

Cost-effectiveness is estimated quantitatively from the magnitude of the solution's impact and its cost. The cost is mapped together with the organization being evaluated, through financial reports, for example. It includes all the costs of implementing the solution and, in general, also a share of the organization's operational maintenance costs, estimated from the part the solution represents in the organization as a whole.

As far as impact is concerned, the ideal is for the solution in question (in the territory in which it operates and with the public it benefits) to have already been evaluated and its impact estimated through randomized evaluations, for example. But this is not the rule: impact evaluations, especially randomized ones, are not that common. For this reason, the impact evidence presented for the solutions of interest is often drawn from evaluations of similar solutions in other contexts, which raises a considerable problem: the external validity of these results. External validity is the ability of a result to be generalized or applied to other contexts and, strictly speaking, the external validity of randomized evaluations is quite low. What is usually done to mitigate (rather than solve) this problem is to establish correspondence factors between the context actually assessed and the context of interest. It is common for these factors to be assigned arbitrarily, according to the perception and analysis of the evaluation team.

An important consideration when calculating cost-effectiveness is how precise it needs to be. Although it is a quantitative estimate, the aim is not to arrive at an exact cost-effectiveness figure for each solution evaluated, but rather to capture significant differences between them.
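
A minimal numerical sketch of this kind of estimate, assuming hypothetical figures and an arbitrary external-validity factor of the sort described above:

```python
# All figures are hypothetical; they only illustrate the arithmetic of a
# cost-effectiveness estimate that borrows impact evidence from another context.
impact_measured_elsewhere = 1000.0  # outcome units per year found by an evaluation in another context
external_validity_factor = 0.6      # judgment-based correspondence between that context and the one of interest
implementation_cost = 40_000.0      # annual cost of delivering the solution
overhead_share = 10_000.0           # share of the organization's operating costs attributed to the solution

expected_impact = impact_measured_elsewhere * external_validity_factor
total_cost = implementation_cost + overhead_share
cost_effectiveness = expected_impact / total_cost
print(f"{cost_effectiveness:.3f} outcome units per unit of money donated")
```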

Finally, the criterion of openness to new donations aims to identify whether the main limiting factor for expanding the solution is financial, i.e. to explore whether the organization will be able to apply new resources to the solution effectively, strategically and in a timely manner. This is important because, if the obstacles to expansion are of a different nature, for example a shortage of skilled workers or territorial challenges, new resources will not be enough to increase the reach of the solution, or will do so much less efficiently, making the cost-effectiveness estimate unrepresentative of the organization's new context.

As with the cause prioritization stage, the assessment of organizations also takes place in a customized way in each evaluating organization. Some include additional criteria according to specific institutional or contextual interests, such as transparency and institutional culture. There is also an ongoing debate about the sufficiency of the cost-effectiveness criterion, i.e. whether all the relevant criteria could be summarized and considered in the single cost-effectiveness estimate. The table below gives a brief summary of the criteria used by some reference evaluation organizations:

[Table: summary of the evaluation criteria used by reference evaluation organizations]

Another important aspect of the work of identifying the best solutions to social problems is the assignment of moral weights to compare projects that aim to solve problems of different natures. If the goal is to find the best donation opportunities (regardless of the cause), it is necessary to put all the results on a single scale. This is a highly complex challenge, involving philosophical debates and different worldviews; it means comparing, for example, a contemporary human life saved with a future human life saved. In addition, the moral weights have a significant effect on the final results, i.e. on which projects end up being recommended. However, there is no gold standard for defining the weights, and it is common for this to be done arbitrarily, based on the perceptions of people on the evaluating organizations' teams. Whatever the path, it is essential to make the process of defining moral weights explicit, as a commitment to transparency, which must permeate the entire methodological process.
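
A minimal sketch of what putting different outcomes on a single scale can look like; the outcome types, the weights and the project figures are purely illustrative, and choosing real weights is precisely the difficult, value-laden step discussed above.

```python
# Hypothetical moral weights expressing how many "impact units" each type of
# outcome is worth; the values here are illustrative, not recommendations.
moral_weights = {
    "death_averted": 100.0,
    "year_of_schooling_added": 1.0,
    "income_doubled_for_one_year": 2.0,
}

def weighted_impact(outcomes):
    """Convert a mix of different outcome types into one comparable figure."""
    return sum(moral_weights[kind] * quantity for kind, quantity in outcomes.items())

# Two hypothetical projects with outcomes of different natures
project_a = {"death_averted": 3}
project_b = {"year_of_schooling_added": 250, "income_doubled_for_one_year": 40}
print(weighted_impact(project_a), weighted_impact(project_b))  # 300.0 330.0
```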

Still at the stage of evaluating organizations and solutions, it is worth highlighting the importance of listening to the public that benefits from the solutions, and recognizing that the absence of processes of this nature can lead to erroneous results. As these are the people most affected by the solutions of interest, it is important that the evaluation process includes some strategy for giving them a voice and checking whether the solution is effective in practice. This becomes even more relevant when the transformative power of the evaluated solution is less obvious.

After recommendation

Moving on to the period after organizations have been identified and recommended, there are differences in the level of flexibility regarding the use of donations. Some organizations stipulate that donations must be used exclusively in the evaluated solution. In these cases, the choice is for control and certainty that the donations will necessarily generate the estimated impact. At the other extreme, some organizations allow donations to be used in whatever way the recipient organization deems most strategic. In these cases, the choice is for autonomy and the understanding that the organization, once it has been extensively evaluated, can enjoy this freedom and apply the resources in the way that seems most strategic in order to maximize social impact.

Regardless of the model chosen, and as a means of guaranteeing the proper use of resources, the importance of monitoring the organizations supported and the donations made is highlighted. This is seen as one of the most important factors leading donors to make their donations through the evaluating organizations.

Finally, a challenge for organizations of this nature is to avoid the trap of getting caught up in what is most obvious. Because of the literature's bias towards producing more knowledge about what is easily observable, many promising solutions can remain "hidden". It is important that evaluation organizations do not settle for what is easiest to measure and always aim to identify the best donation opportunities, even if that involves more research effort.

The points listed above were extensively debated, selected and adapted to the Brazilian reality and used as inputs for updating and improving doebem's research methodology.

We believe that by doing so, we will be able to carry out more qualified work and thus maximize the social impact of donations.

References:

https://80000hours.org/problem-profiles/

https://animalcharityevaluators.org/charity-reviews/

https://rethinkpriorities.org/research

https://sogive.org/methodology

https://www.effectivealtruism.org/articles/introduction-to-effective-altruism

https://www.founderspledge.com/our-methodology

https://www.givewell.org/research

https://www.givingwhatwecan.org/our-research-and-approach?locale=en

https://www.openphilanthropy.org/cause-selection/