
Measuring Simple, Complicated and Complex Impact for Change Networks

Posted by Steve Waddell in M&E on June 15, 2010

There are many ways to approach impact measurement, but using the wrong methods can actually undermine a change network’s efforts. Appropriate impact measurement not only helps explain to funders their return on investment; it is also an important tool for priority-setting, decision-making and managing.

Traditional evaluation approaches come from an industrial “input/output” model. This is fine for simple tasks, but it is inappropriate for the complicated and complex tasks that are part and parcel of change networks.

Simple, Complicated and Complex Activities

The three key differences between these types of tasks shown in the Table reveal that a change network undertakes all three kinds of activity. However, these networks are distinguished by an over-arching mission that requires complex activities. Therefore, although the networks need impact measurement methods that address all three types of activity, their umbrella measurement method must accommodate complexity.

In change networks, the need for methods that can address complicated and complex activities is evidenced in a number of ways, such as:

The demands of complex systems are reflected in “developmental evaluation” (DE), both an approach and the title of a forthcoming book by Michael Quinn Patton. Michael writes:

“Developmental evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments. … Informed by systems thinking and sensitive to complex nonlinear dynamics, developmental evaluation supports social innovation and adaptive management. Evaluation processes include asking evaluative questions, applying evaluation logic, and gathering real-time data to inform ongoing decision making and adaptations.”[1]

As in action research strategies, the evaluator is part of the development team from beginning to end, rather than someone who comes in at the end simply to conduct an after-the-fact analysis.

Two Examples

Ricardo Wilson-Grau, a colleague who works with the DE approach, points out that there are a number of methodologies that can be used under that heading. He has, for example, practiced DE using Outcome Mapping with the Global Partnership for the Prevention of Armed Conflict (GPPAC) and the Global Water Partnership (GWP).

Ricardo explains that traditional evaluation poses questions such as:

DE, however, is more interested in answering other questions about the strategy as something in development. For example, GPPAC introduced Outcome Mapping in 2007 as a planning tool. In 2009, Ricardo advised on:

The first question was a reflection on the system itself; the second was about further development of the system. Based on those findings, GPPAC is now further developing Outcome Mapping.

The other example is with the GWP, which operates in a highly complex, dynamic environment. It has thousands of constantly changing members, grouped into 60-70 country water partnerships whose actual number at any given moment is unknown. These country partnerships are in turn grouped into 13 regional water partnerships with a global secretariat in Stockholm. This example concerns the approach to measurement. Over ten years, GWP had placed the issue of integrated water resource management on the environmental agenda.

In traditional evaluation, performance and success are measured against predetermined goals and SMART outcomes: specific, measurable, achievable, realistic, and time-bound. DE is quite different. With Ricardo’s support, GWP created a monitoring procedure that applies DE principles to develop measures and tracking mechanisms as outcomes emerge. They introduced the procedure into one region and, according to what did and did not work, adjusted it for the next region. That is, the measures could change as the process unfolded. They tracked the forks in the road – specifically, how different regions had to adjust the monitoring procedure – and used this information to point out the implications of key decisions as the innovative monitoring system evolved. Consequently, their donors are being informed of the governmental policy and practice changes that GWP – directly or indirectly, usually partially and sometimes unintentionally – influences. That’s simple, complicated and complex.
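To make the contrast concrete, here is a minimal, hypothetical sketch in Python of the difference between checking progress against a fixed SMART target and logging outcomes and measure adjustments as they emerge, in the spirit of the GWP procedure described above. All names (such as `OutcomeLog`, the region label, and the sample measures) are invented for illustration; this is not GWP’s actual tool.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: traditional evaluation checks results against
# predetermined SMART targets, while a developmental-evaluation log records
# outcomes and changes to the measures themselves as they emerge.

@dataclass
class OutcomeLog:
    region: str
    measures: list = field(default_factory=list)   # current measures; allowed to change
    outcomes: list = field(default_factory=list)   # emergent outcomes and how they were influenced
    forks: list = field(default_factory=list)      # adjustments made along the way, and why

    def adjust_measures(self, new_measures, reason):
        """Record a 'fork in the road': the measures change as the process unfolds."""
        self.forks.append({"from": list(self.measures),
                           "to": list(new_measures),
                           "reason": reason})
        self.measures = list(new_measures)

    def record_outcome(self, description, influence):
        """Log an emergent outcome and how it was influenced (direct, indirect, partial...)."""
        self.outcomes.append({"outcome": description, "influence": influence})


# Traditional-style check against a fixed, predetermined target (simple/complicated work).
smart_target = {"indicator": "country partnerships reporting", "target": 60}
actual = 48
target_met = actual >= smart_target["target"]

# Developmental-style log for one region (complex work): measures and outcomes evolve.
log = OutcomeLog(region="Region A", measures=["policy mentions of IWRM"])
log.record_outcome("ministry cites IWRM in draft water policy", influence="indirect, partial")
log.adjust_measures(["policy mentions of IWRM", "practice changes in river basins"],
                    reason="experience in Region A suggested tracking practice, not only policy")

print(target_met)
print(log.forks[0]["reason"])
```

The point of the sketch is simply that in the second case the measures themselves are data: what changed, when, and why is recorded alongside the outcomes, so the story of the evolving system can be told to donors.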

Ricardo is an independent evaluator and organizational consultant based in Brazil and the Netherlands. He can be reached at ricardo.wilson-grau@inter.nl.net.

[1] Patton, M. Q. (2010). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York, NY: Guilford Press.