
=ASSESSING AND LEARNING ABOUT NETWORK IMPACT=

Why is it important to contribute to learning and evaluation in a network context and why does this require working in new ways?
Catalyzing and supporting networks requires creative and experimental grantmaking practices. It requires taking calculated risks, learning alongside network leaders to find out what works, and adapting based on new insights. Adopting a network mindset requires experimenting with new behaviors and practices and openly learning from these experiences. An investment in learning and evaluation is critical for both.

According to a recent GEO publication, learning and evaluation is “the process of asking and answering questions that grantmakers and nonprofits need to understand to improve their performance as they work to address urgent issues confronting the communities they serve.” Contributing to learning and evaluation in a network context means asking and answering questions about what’s working in partnership with others involved in the network, sharing what you’re learning so others can benefit from your experience, adapting your network or experiment, and then asking new and better questions.

This is not about disregarding accountability concerns. If anything, accountability is increasingly important in a network context, where responsibility and action are decentralized. Focusing on network learning and adaptation is a means of engaging network participants and leaders in a collaborative assessment process, where ownership of insights and recommendations can be shared and thereby motivate collective action.

While the number of funders investing in and experimenting with networks and network approaches is growing, there is limited evidence to make the case that networks work. Many of the funders investing in social networks for social change have an intuitive sense that network benefits – connectivity, trust, reciprocity, reach – are critical to most social change endeavors. However, the ability to make this case and the accompanying practice and tools are in their early days.

It’s difficult to assess network impact for many of the same reasons that broad-reaching efforts toward systems change are hard to measure:


 * Quantification: Many changes can’t be measured in quantitative terms, and what can be measured may not always be what’s most important.
 * Long time horizons: Field-level results may take many years to be realized. Even in the short term, outputs may be inconsistent given the organic nature of network organizing. Moreover, in a network context, the results may differ from what was originally intended. You need to be patient and perhaps willing to continue providing support even if the outcomes you’d like to see aren’t yet being delivered.
 * Causality: It’s rarely possible to attribute causality to a single program, let alone a network, where you might not be aware of all of the players and activities, which are many, inter-related and constantly changing.

Roberto Cremonini underscored the challenges of long time horizons to impact and the difficulty of attributing causality when he said, “Grantmakers are used to supporting projects with clear goals. But you never know when the value of a network will become clear. This can be difficult for grantmakers that seek a linear return on investment. Yet as networks grow, they often build upon many small acts of problem-solving and knowledge-sharing. Over time, these small acts build confidence within the network and position it for even greater potential. The key is patience: networks may lie dormant, but become active when necessary.”

To add to this complexity, in order to be effective, network evaluation needs to be shaped by participants who reflect the network’s diversity. However, since most networks are driven by volunteer participants, it’s hard to get people to participate in the evaluation process. Plus, participants often enter and exit networks fluidly, making it difficult to know who’s in and who’s out. And when you can get participants to engage, their perspectives on assessment will reflect their diverse reasons for participation, making it hard to align on and clarify desired outcomes.

How to get started contributing to learning and evaluation?
Despite the challenges, we’re learning about how to learn about network impact. While there is no easy formula, there is an emerging set of principles that can help inform network impact assessment and, more generally, evaluation of efforts to change complex systems: consider the context, assess multiple pathways to impact, and enable ongoing learning and collaboration.

At the heart of most of the recommendations is a shift from a “logic model” view of the world that assumes linearity, pure objectivity and controllable comparisons, to a systems orientation that understands networks as complicated webs of relationships embedded in complex and messy systems.

__Consider the context__


 * Understand the context, how it’s changing and the implications for comparison. Networks are embedded in a context. The context changes the network and the network changes the context. As a result, you can’t easily measure network success by comparing one network to another, or by positing what might have happened otherwise. Take, for example, an analysis of the impact of women’s organizations in Egypt today versus prior to the overthrow of Mubarak. The context has changed so much it’s not a valid comparison. Instead, it’s more fruitful to track patterns and pattern changes over time. As Sterman writes, “Complex systems are in disequilibrium and evolve. Many actions yield irreversible consequences. The past cannot be compared well to current circumstance. The existence of multiple interacting feedbacks means it is difficult to hold other aspects of the system constant to isolate the effect of the variable of interest.”
 * Calibrate results against what might be expected at a given point in a network’s lifecycle. This was the approach that the Campaign to End Pediatric HIV/AIDS (CEPA) took to assessing their impact. CEPA is a networked campaign that cuts across six African countries, with coordination at the regional, national, and global levels. Their assessment process, led by iScale, looked at the degree to which these global, national, and regional networks were vibrant and connected and matched this against how far along the network lifecycle each was. Understanding the various lifecycles of the network helped create a shared understanding of the campaign’s current state, challenges and future potential.

__Assess multiple pathways to network impact__

 * Focus on meaningful contribution toward impact, rather than attribution. Given the complexity of networks and the systems in which they’re embedded, causal attribution is difficult to assign, if not impossible. Instead, focus on how network participants and projects are contributing toward long-term aspirations. This has been the approach the Barr Foundation has used for its ongoing learning about the Barr Fellows Program, an intense leadership development experience and network of diverse community heroes in Boston. The program is organized for impact at the individual, network and community levels. When assessing impact on the City of Boston, they are not trying to make direct causal links. Instead, they’re focusing on gathering stories about the ways in which Barr Fellows and the social capital built through their network are contributing to local community vitality. For instance, coordination among Fellows and their work to benefit the city was cited as an important contribution to Boston winning competitive federal “Promise Neighborhood” funding.

> When considering process indicators, like the nature of relationships and the state of network formation, the point is not to be ‘goal-free’ but rather to figure out what can help increase the likelihood of long-term success by linking these indicators to outcomes and impacts. For example, American Public Media is conducting a multi-stage evaluation of their Public Insight Network, a network of volunteer sources and newsrooms that tap these sources for reporting inputs. They’re first looking at the process of network formation and implementation, and then exploring links to longer-term community change.
>
> Similarly, Lawrence Community Works (LCW), a community development corporation in Massachusetts that is approaching community organizing with a network lens, is revamping its approach to data collection so it’s more reflective of what people are doing in the network, and therefore informs action. They’re gathering data at the individual level (e.g., the different types of members and their experiences and outcomes in the network), the network level (e.g., how many people are moving in and out of LCW) and the field level (e.g., how LCW’s work and practices are making a difference in the city of Lawrence and informing practice in other places throughout the country).
 * Look at indicators of impact at multiple levels: the nature of relationships, the process of network formation and the field you’re trying to change. More specifically, look at:
 * Connectivity: what is the nature of relationships within the network? Is everyone connected who needs to be? What is the quality of these connections? Does the network effectively bridge and embrace differences? Is the network becoming more interconnected? What is the network’s reach?
 * Network formation: how healthy is the network along multiple dimensions -- participation, network form, leadership, capacity, etc.? (See “Questions to Consider when Investing in Networks for Good,” page __.) Also, what products and services are the immediate result of network activity?
 * Field level outcomes: what progress is the network making on achieving its intended social impact (e.g. policy outcomes, change in the system)? How do you know?


 * Evolve the evaluation approach with the network. Because networks themselves are dynamic and always evolving, it’s impossible to determine the evaluation design fully in advance. It will likely shift as the network changes. This is the approach the Annie E. Casey Foundation took in its ongoing efforts to evaluate the Making Connections Initiative over the course of its ten-year duration. They co-evolved their approach alongside the initiative design and came to consider “evaluation as a work in progress, and developed new goals, measurements, techniques and tools as the initiative grew while also focusing on initial evaluation questions.”

__Contribute to ongoing learning and enable collaboration__


 * Assess often and early. Action can take a long time to emerge from networks and tends to come in waves. Recognize that patterns of network activity may be sporadic and spread over a long time period, and adopt approaches to learning and evaluation that reflect this rhythm. Early-stage and regular evaluation can also be a way to find things to celebrate and thereby increase momentum and commitment to the shared work. This was the experience of the Franklin Community College Network when they paused to map their learning and progress two years in. They were surprised and pleased by all they had achieved in a short period of time with a loose group of people and had renewed energy to carry the work forward.


 * Emphasize learning over near-term judgment, given the long time horizon for many networks. It’s less about answers and assessing success or failure at a point in time, and more about continuous learning and adaptation to accelerate progress toward your goal. For instance, the Tides Foundation and the California Endowment are supporting the efforts of community clinics to work with both traditional and nontraditional partners to address community health in new ways as part of their “Networking for Community Health Initiative.” The content area, focus and strategy are very different for each of the grantees, addressing problems ranging from Hepatitis A in the water used by fishermen to green healthcare practices, community markets and exercise places to combat obesity. The grantees have formed a learning community to reflect on insights about what works across this diversity and assess their work. Also, mini-grants allow the grantees to take information and apply it.

> For a lot of grantmakers, there’s little latitude for ‘failed’ grants – investments that don’t achieve the stated outcomes. In the network context, this risk aversion is especially problematic because network participants may decide to take action that’s different from a funder’s original vision. In addition, funders often have high expectations for short-term results. Yet groups working through a model of loose network connections can take a long time to evolve and deliver tangible outcomes. As Beth Kanter said, “Many things look like failure when you’re in the middle of it.” Investing in and openly sharing learning can be one way of better understanding networks, helping networks adapt and building a base of knowledge about what works. For grants that really are failures, there’s opportunity – in the words of Chris Van Bergeijk of the Hawaii Community Foundation, “Failures can create fertile ground for other things to happen later. It’s like compost: you throw all kinds of things in there and make sure air comes in... It’s the compost theory of network grantmaking!”
 * Evaluate networks collaboratively. Engage network participants in developing a system-wide picture of what is being tried and achieved by the various players. If you build a shared vision and theory of the change you’d like to see, it becomes possible to collectively develop shared indicators that you can all track progress against. This is what the Conservation Alliance for Seafood Solutions did when it developed its common vision for environmentally sustainable seafood. The Conservation Alliance, a group of NGOs all working on standards for sustainable seafood sourcing, began to coordinate their individual efforts to influence major seafood buyers in 2006, when they were connected by the David and Lucile Packard Foundation. Over the course of two years, the group worked together in person and through conference calls with Packard’s support and the help of a facilitator, and in 2008 they arrived at a common vision that was ratified by 17 of the original __ network participants.
 * Build capacity for ongoing learning and evaluation. Because networks are ever-changing and leadership, at its best, is distributed, participants across the network need to be constantly gathering feedback on what works and acting on it, individually and collectively. One way to do this is to invest in feedback loops and learning systems for ongoing assessment that help everyone build understanding together. This ensures real-time feedback, engages network participants in an ongoing strategic conversation and helps strengthen ownership of the network. For instance, the RE-AMP energy network, supported by the Garfield Foundation and others, has developed a “learning and progress system” that tracks activities across the network, creating the habits and data for an ongoing network-wide conversation about what works. Members input data online and track progress against their goals. A learning and progress analyst analyzes this data and looks for cross-cutting patterns, gaps and opportunities to share information with other members. The system was created recently, and RE-AMP is working on increasing member uptake. Once participation is widespread, it will create a shared picture of progress and evaluative insight, while also decreasing the burden on each organization of doing separate reporting to funders.
 * Learn openly and with others. Capture what you’re learning, both from your own experiments in working with a network mindset and from the networks you’re supporting. Along the way, share what you’re learning so others can learn from you, and open yourself up to learning from others.


