It’s a familiar question for anyone working in development — whether setting up a savings club for young mothers or a public awareness campaign to encourage HIV testing — how do you know if it’s working? This big question has even more relevance now that many development organizations are turning to digital products to reach and engage program participants.
In traditional development terms, we are accustomed to thinking of outcomes and impact in terms of behavior change: Higher savings rates or increased HIV testing rates, for example. Our desired outcomes might be the same for new digital interventions — and now, thanks to the nature of digital interventions, we have significantly more data to add to the evidence base, alongside new metrics such as dwell time, page views, and bounce rates.
“The development sector still has much to learn about how digital interventions can affect behavior change.”
— Kecia Bertermann, director of digital research and learning at Girl Effect

But what does it all mean? Does the fact that we have data for digital interventions mean that we can understand our impact more accurately and quickly, or does more data just mean more confusion?
At Girl Effect we’ve been thinking about how we can measure the impact of our digital interventions targeting adolescent girls for some time.
These are some of our learnings for how to best incorporate digital data and metrics in a monitoring and evaluation, or M&E, framework.
Understand your audience
This foundational tenet of sound development work is just as important for digital interventions as it is for more traditional programs. Learn about your target audience — or “user” in digital terms — as well as their challenges, aspirations, and opportunities, but also understand how they use digital interventions and how your digital tool can help support and enhance their lives.
Learn about your metrics
If you’re new to digital interventions, it’s important to first come to grips with the metrics available. Websites, apps, social media, and interactive voice response each have their own catalog of metrics. Dive into the definitions and learn how the crucial backend data for digital interventions can help you understand how users interact with your digital tool.
Select your core metrics carefully
It’s easy to become overwhelmed when learning about digital metrics. Spend some time with the metrics; put vanity metrics aside, and consider those that best help measure your intervention. If you’re measuring the impact of a savings app, a weekly return rate might be more essential in helping understand your intervention’s use than a bounce rate, for instance. Similarly, if your interactive voice response line is designed to help users learn about family planning methods, the user journey completion rate might be one of your core metrics of choice. By selecting a small set of core metrics, you can concentrate on tracking the elements most relevant to your intervention’s success.
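To make a metric like weekly return rate concrete, it can be computed from raw visit logs along these lines. This is a minimal sketch: the `(user_id, visit_date)` log format and the `weekly_return_rate` function are illustrative assumptions, not a specific analytics product's API.

```python
from datetime import date

def weekly_return_rate(visits, week_start, week_end, prev_start, prev_end):
    """Share of last week's users who came back this week.

    `visits` is a list of (user_id, visit_date) tuples -- a
    hypothetical log format used only for this illustration.
    """
    prev_users = {u for u, d in visits if prev_start <= d <= prev_end}
    this_users = {u for u, d in visits if week_start <= d <= week_end}
    if not prev_users:
        return 0.0
    return len(prev_users & this_users) / len(prev_users)

visits = [
    ("a", date(2020, 1, 6)), ("b", date(2020, 1, 7)),
    ("a", date(2020, 1, 14)),  # user "a" returns the following week
]
rate = weekly_return_rate(visits,
                          date(2020, 1, 13), date(2020, 1, 19),
                          date(2020, 1, 6), date(2020, 1, 12))
# rate == 0.5: one of two previous-week users returned
```

Even a simple calculation like this makes the metric auditable: the team can see exactly which users and date windows the number is built from.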
Digital data offers an intimate window into a vital area of M&E: Understanding the progress of a user toward impact. In offline interventions, gathering monitoring data is often time consuming and expensive. Digital interventions offer a wealth of monitoring data that can help M&E, program, and design teams understand quickly when and how a digital intervention is missing the mark.
At Girl Effect — a creative nonprofit empowering girls to change their lives — we have found it helpful to think about digital metrics in four categories:
Reach: Metrics about the number of users disaggregated by number of visits and user demographics.
Engagement: Metrics that describe how a user interacts with the site or service. For example, dwell time, page views, and user journey completion rates.
Participation: When available, these metrics describe how a user contributes to the site or interacts with others, such as comment rates or questions posted to a forum. Note that if a digital intervention is a “read-only” site, participation metrics might not be available.
Outcomes: Changes in knowledge, attitude, or behavior, which contribute to impact as a result of visiting the site.
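One simple way to operationalize the four categories above is to map each backend metric to its category, so dashboards and reports can be grouped consistently. The metric names in this sketch are illustrative, not a fixed taxonomy:

```python
# Hypothetical mapping of backend metrics to the four categories;
# each team would substitute the metrics its own platform exposes.
METRIC_CATEGORIES = {
    "reach": ["unique_users", "visits", "user_demographics"],
    "engagement": ["dwell_time", "page_views", "journey_completion_rate"],
    "participation": ["comment_rate", "forum_questions"],
    "outcomes": ["knowledge_change", "attitude_change", "behavior_change"],
}

def category_of(metric):
    """Look up which category a metric belongs to (None if unknown)."""
    for category, metrics in METRIC_CATEGORIES.items():
        if metric in metrics:
            return category
    return None
```

Grouping metrics this way keeps reporting honest: a dashboard full of reach numbers alone, for example, becomes easy to spot.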
For example, on a site’s backend, teams can quickly see that they might be losing two-thirds of their users when they visit a specific page on the site. This immediately tells the team that the page might need improvement; perhaps the terminology on the page is not useful for the target audience, or the content is difficult to understand. Spotting these trends quickly can help the entire digital team pinpoint areas where a digital intervention is not succeeding, in order to use resources more efficiently to investigate the critical “why” questions, and course-correct when needed.
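A drop-off check like the one just described can be sketched in a few lines. The session format here is a simplified, hypothetical stand-in for real backend analytics data:

```python
def drop_off_rate(sessions):
    """For each page, the share of its viewers who went no further.

    `sessions` maps a user ID to their ordered list of pages viewed --
    a simplified, illustrative format, not a real analytics export.
    """
    viewers, exits = {}, {}
    for user, pages in sessions.items():
        for page in set(pages):
            viewers[page] = viewers.get(page, 0) + 1
        last_page = pages[-1]
        exits[last_page] = exits.get(last_page, 0) + 1
    return {page: exits.get(page, 0) / viewers[page] for page in viewers}

sessions = {
    "u1": ["home", "savings"],
    "u2": ["home", "savings"],
    "u3": ["home", "savings", "signup"],
}
rates = drop_off_rate(sessions)
# rates["savings"] == 2/3: two of three viewers left on that page
```

A result like this flags the problem page; it still takes qualitative follow-up to answer the “why.”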
Understanding the user journey is a crucial component for measuring a digital intervention.
When thinking about evaluating the impact of an intervention, these suggestions can be helpful:
Measure quickly, measure often
Digital technologies, by nature, are iterative and fast-changing; the underlying tech of a given intervention is subject to enhancements at any time, and content and user contributions can quickly change site and community dynamics. Additionally, a user’s lifecycle on a site is often shorter than a person’s participation in an offline intervention. While we continue to learn more about long-term, longitudinal effects of digital interventions, the nature of digital tools and users’ interactions with content and digital programs requires us to think differently about measuring outcomes and impact than we might have in traditional, offline interventions. Within a three-year evaluation plan, for example, shorter, iterative evaluations of outcomes are key in a digital context.
Explore ways to investigate impact
This varies across types of digital interventions and might include online surveys, comment analysis, and analysis of user-generated content. Developing a coding framework to classify comments by type and content can be particularly useful for understanding intention and impact among users. Additionally, if your database is large, data science techniques and machine learning can be used to analyze data and evaluate impact.
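As a toy illustration of such a coding framework, comments can be matched against a keyword codebook. The codes and cue phrases below are invented for the example; a real codebook would be developed and validated with qualitative researchers, and machine-learning classifiers could replace the keyword matching at scale:

```python
# Illustrative codebook mapping codes to cue phrases; a real one
# would be built and refined iteratively with qualitative researchers.
CODEBOOK = {
    "question": ["how do i", "what is", "?"],
    "intention": ["i will", "i plan to", "i want to"],
    "experience": ["i tried", "i went to", "i used"],
}

def code_comment(text):
    """Return every code whose cue phrases appear in the comment."""
    text = text.lower()
    return [code for code, cues in CODEBOOK.items()
            if any(cue in text for cue in cues)]

codes = code_comment("I plan to visit the clinic. How do I get tested?")
# codes == ["question", "intention"]
```

Tallying coded comments over time gives a rough, quantifiable signal of stated intention alongside the engagement metrics above.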
Offline measurement is still important
While digital data is timely and comprehensive, using a case study, focus group, or ethnographic approach to augment your digital data is essential for answering the “why” questions about your digital intervention. Sessions with users can be useful for understanding site design issues as well as understanding how and why a user utilizes content from the site in their everyday life; and just as crucially, when they choose not to apply learnings from a site in their daily practice.
The development sector still has much to learn about how digital interventions can affect behavior change. As we continue on our learning journey, understanding how users engage with sites, services, or apps, is important for teasing out the mechanics of change from outcomes and impact. Using a holistic measurement approach — which can include engagement, participation, and impact — can be useful for understanding desirable outcomes, as well as providing a framework for quickly understanding when to make necessary site adjustments.
So, if you’re looking to implement and measure a digital intervention, remember: Dig into the data; be selective and perceptive when creating your digital measurement framework; and learn, iterate, and continue to improve. Digital measurement offers exciting possibilities for understanding change. While we continue to learn about the best ways to use digital to change lives, let’s continue to share and improve what works in the “how” and “why” of digital measurement.