When Data Stays on the Shelf: Why Organizations Collect but Rarely Apply Evidence
March 22, by Ivan Munguongeyo
Across humanitarian and development organizations, enormous amounts of data are collected every year, whether through baseline surveys or midterm and endline evaluations. Whichever method is used, the intention is clear: to generate evidence that can guide programs and improve results. Yet much of that information never shapes decisions. It is collected, analyzed, written up, and then quietly set aside.
The pattern follows familiar stages: teams design tools and gather large volumes of information from communities, often from vulnerable groups such as refugees and host communities. Analysts clean datasets, run statistical analyses, and produce comprehensive reports. Findings are then presented at workshops where stakeholders acknowledge the recommendations. Beyond that moment, little happens.
Rarely is there structured reflection on why results turned out the way they did. When unexpected outcomes emerge, such as declining cassava yields or stagnating behavior-change indicators, they are seldom examined in depth. Recommendations remain unprioritized, responsibility for follow-up is unclear, and the process quietly resets with the next round of data collection. This raises an uncomfortable question: why do organizations continue to collect so much data if so little is used?
Part of the problem lies in accountability pressure. Donors expect evidence, indicators, and measurable results, and organizations respond by collecting more information to demonstrate progress. Digital tools such as KoBoToolbox have made surveys easier to deploy, while storage is cheap and compliance rules encourage organizations to retain large datasets. Collecting more data can feel safer than deciding what not to collect.
Another issue is that many assessments are not designed with specific decisions in mind. Data are gathered, analyzed, and reported, yet the findings remain separate from the choices programs actually face. Program teams collect information, M&E units analyze it, and leadership reviews the results, but these processes stay largely disconnected from one another. Even when reports are technically strong, their findings may not lead to changes in program implementation.
Over time, the analytical process can become an end in itself. Reports are produced to meet deliverables rather than to drive change. Dissemination workshops focus on presenting findings instead of agreeing on what should happen next. Recommendations gradually lose momentum because no one is responsible for tracking them. As a result, organizations can appear evidence-driven while programs themselves remain largely unchanged.
The consequences are increasingly visible in communities. Households are repeatedly surveyed by multiple actors, such as NGOs, researchers, and government teams, often on similar issues. Communities find themselves answering the same questions without seeing improvements, contributing to data fatigue, and respondents eventually begin to question the value of participating. For vulnerable populations, repeated surveys without visible change can feel less like participation and more like extraction. Trust erodes, cooperation becomes harder, and the quality of future data is threatened.
Addressing this pattern requires a shift in how organizations think about evidence. Too often, the starting point is how much information can be gathered, rather than what decisions it is meant to inform. Before launching a survey, teams need to ask: what will we do differently once we have these results? Evidence becomes valuable only when it leads to discussion, adjustment, and accountability. That means translating findings into clear actions, assigning responsibility, and revisiting them as programs evolve.
Leadership also plays an important role. Managers do not need to be statisticians, but they do need to be comfortable engaging with evidence and asking what it implies for the work. When data becomes part of everyday management conversations rather than something produced only for reporting, it begins to influence real choices about priorities, resources, and program design.