Data-Driven Decision-Making: Are We Fooled by Fancy Words?

“Without big data, you are blind and deaf in the middle of a freeway.” – Geoffrey Moore

In recent years, the landscape of data-driven decision-making has evolved significantly. We now have access to more data than ever before, supported by advanced computing tools that enable sophisticated analysis. This blog is not intended to be a sensational article but rather a thought-provoking discussion for those interested in the intricacies of data-driven decision-making, monitoring, and evaluation. The use of data has permeated decision-making across all facets of life. Data-driven decision-making, popularized in the 1980s and 1990s, has evolved into a vastly more sophisticated concept known as Big Data, which relies on software approaches generally referred to as analytics. The growth in data is driven in part by the demand for organizations to be “data-driven”, effectively utilizing their data and unlocking its potential value.

There is more data available today than there was a decade ago. The tools and techniques for analyzing this data have also made significant strides, making it easier to derive insights that were previously unimaginable and to make informed decisions. However, the terms “data-driven,” “decision-making,” and “monitoring and evaluation” are often used without a deep understanding of their implications. Monitoring is widely recognized as crucial, but evaluation is where opinions often diverge, even though both rely on data, often the very same data. From the outset, it is also important to note that there is an opportunity cost to not using data, since the value of data is only derived when it is used.

The opportunities and challenges of data-driven decision-making and monitoring and evaluation are well documented. Proponents argue that data-driven approaches can lead to more informed and objective decision-making, while critics caution about the potential for misuse, bias, and unintended consequences of relying on data.

While many programs excel in the use of data for monitoring, large gaps exist in evaluation. Evaluations often remain descriptive, and few go beyond this to apply standardized methods at the various stages of the evaluation process. My bias leans towards causal evaluations, as they provide concrete methods, especially when the goal is to scale interventions. Despite the importance of thorough evaluation, it is often neglected or inadequately performed, perhaps due to capacity, resource, and priority constraints. Peter Heydon argues that this is because “we tend to put too much emphasis on the technology and not enough on the people and process.” He argues that data technology is built not for users but for data scientists. A dual problem arises in organizations where the data analyst is a subject matter expert but not a statistician, or, conversely, a statistician but not a subject matter expert.

“Since most of the world’s data is unstructured, an ability to analyze and act on it presents a big opportunity.” — Michael Shulman

This means there must be a deliberate effort to build people’s capacity to analyze, interpret, and make sense of data. To incorporate data into evaluation meaningfully, we need to build a culture and infrastructure that values data-driven decision-making at every stage.

This brings us back to evidence-based decision-making. While research results are used to inform decisions, it is crucial to question the nature of the data, the possibility of selective reporting, and the transparency of the project implementation process. Data is used to shape narratives, and narratives are about power, profit, or both. This means the way data is communicated can significantly affect its interpretation. These questions, though seemingly basic, are essential for upholding research ethics. To encourage the uptake of our research and its implementation by others, we must be transparent and accountable. This involves sharing the complete implementation process and the data, including raw data and any processing applied to it. Transparency fosters trust and allows for ethical and effective critique of research findings. Other essential elements include the use of standardized methods, robust study designs, and clear communication of limitations.

“Everybody needs data literacy, because data is everywhere. It’s the new currency, it’s the language of the business. We need to be able to speak that.” – Piyanka Jain

By incorporating these principles, we can move towards a more rigorous, ethical, and impactful use of data in decision-making, monitoring, and evaluation. Diverse sources of data and perspectives can help researchers and decision-makers understand and adapt evidence to local contexts for more effective interventions. Supporting national, subnational, county, and school systems to build crosscutting, routine data-generating and data-sharing platforms that bring together stakeholders from different sectors is therefore crucial for the successful implementation of learning interventions.

“If you want people to make the right decisions with data, you have to get in their head in a way they understand.” — Miro Kazakoff

At the core of Zizi Afrique Foundation is education-based data, and the amount of data generated and demanded will continue to grow as the organization grows. To continue influencing policy change through data, Zizi must be deliberate about improving the credibility of the data it collects, including how that data is handled and packaged. This calls for continuous capacity building for those handling data and for its consumers (both internal and external) on how to unlock the data’s value. As Dan Vesset notes, “People spend 60% to 80% of their time trying to find data which is a huge productivity loss.”
