5-17 Network Learning and Evaluation Funders Guide

=The "Ask"=
 * //We'd love your thoughts / reflections on the preliminary outline of the guide, specifically://
 * 1) General reactions: what works? what doesn't?
 * 2) What’s missing?
 * 3) What's not needed / what can we drop?
 * 4) What additional examples / experiences from your work or others might we include to illustrate ideas?

= **Network Learning and Evaluation** =

//The Learner wants to know how change happens. His foundation supports a range of approaches, and he and his colleagues want to figure out which interventions in a system make a difference. When should they invest in policy change, in spreading new ideas, or in on-the-ground community building? And what are the best ways to do so? In addition to supporting established NGOs, his foundation is experimenting with catalyzing networks. The hypothesis is that the foundation can influence systems by working transparently and participating in networks, and by investing in movement building, coalitions, alliances and loose groups of change makers. But how can the Learner know if it's working, and how can he engage others in exploring with him?//

//It's tough work for so many reasons: there's no shared picture of the system, there are many variables and they're always changing, and systems take a long time to shift. Can't there be more straightforward ways to think about impact? Can't there be quicker and clearer answers to whether or not the money is making a difference? Is there a simple way to make the case for working with a network mindset?//

__**//What does it mean to contribute to learning and evaluation in a network context? And, what’s the link between learning and evaluation?//**__
Catalyzing and supporting networks requires creative and experimental grantmaking practices. It requires taking calculated risks, learning alongside network leaders to find out what works, and adapting based on new insights. Adopting a network mindset requires experimenting with new behaviors and practices and openly learning from these experiences. An investment in learning and evaluation is critical for both.

According to a recent GEO publication, learning and evaluation is “the process of asking and answering questions that grantmakers and nonprofits need to understand to improve their performance as they work to address urgent issues confronting the communities they serve.” [iv] Contributing to learning and evaluation in a network context means asking and answering questions about what’s working //in partnership// with others involved in the network, sharing what you’re learning so others can benefit from your experience, adapting your network or experiment, and then asking new and better questions. [we’ll add diagram here]

This is not about disregarding accountability concerns. If anything, accountability is even more important in a network context, where responsibility and action are decentralized. Focusing on network learning and adaptation is a means of engaging network participants and leaders in a collaborative assessment process, where ownership of insights and recommendations can be shared and thereby motivate collective action. [example?]

__**//Why is assessing impact difficult in a network context?//**__ While the number of funders investing in and experimenting with networks and network approaches is growing, there is limited //evidence// to make the case that networks work. Many of the funders investing in networks for good have an intuitive sense that network benefits – connectivity, trust, reciprocity, reach – are critical to most social change endeavors. However, the ability to make this case, and the accompanying practices and tools, are still in their early days. It's difficult to assess network impact for many of the same reasons that broad-reaching efforts toward systems change are hard to measure:
 * //Quantification//: Many changes can't be measured in quantitative terms, and what can be measured may not always be what's most important.
 * //Long time horizons//: Field-level results may take many years to be realized. Even in the short term, outputs may be inconsistent given the organic nature of network organizing. Moreover, in a network context, the results may differ from what was originally intended. You need to be patient and perhaps willing to continue providing support even if the outcomes you'd like to see aren't yet being delivered.
 * //Causality//: It's rarely possible to attribute causality to a single program, let alone to a network, where the players and activities are many, interrelated, constantly changing, and not always visible to you.

Furthermore, to be effective, network evaluation needs to be shaped by participants who reflect the network's diversity. However, since most networks are driven by volunteers, it's hard to get people to participate in the evaluation process. Plus, participants often enter and exit networks fluidly, making it difficult to know who's in and who's out. [i] And when you can get participants to engage, their perspectives on assessment will reflect their diverse reasons for participation, making it hard to align on and clarify desired outcomes.

__**//How to get started contributing to learning and evaluation?//**__
Despite the challenges, we're learning how to learn about network impact. While there is no easy formula, there is an emerging set of principles that can help inform network impact assessment and, more generally, evaluation of efforts to change complex systems: consider the context, assess multiple pathways to impact, contribute to ongoing learning and enable collaboration, and embrace a systems orientation.

**Consider the context**
 * //Understand the context, how it's changing, and the implications for comparison.// Networks are embedded in a context. The context changes the network, and the network changes the context. As a result, you can't easily measure network success by comparing one network to another, or by positing what might have happened otherwise. Take, for example, an analysis of the impact of women's organizations in Egypt today versus prior to the overthrow of Mubarak. The context has changed so much that it's not a valid comparison. Instead, it's more fruitful to track patterns and pattern changes over time. [i] As Sterman writes, “Complex systems are in disequilibrium and evolve. Many actions yield irreversible consequences. The past cannot be compared well to current circumstance. The existence of multiple interacting feedbacks means it is difficult to hold other aspects of the system constant to isolate the effect of the variable of interest.” [ii] [Example?]
 * //Calibrate results against what might be expected at a given point in a network's lifecycle.// This was the approach that the Campaign to End Pediatric HIV/AIDS (CEPA) took to assessing their impact. CEPA is a networked campaign that cuts across six African countries, with coordination at the regional, national, and global levels. Their assessment process, led by iScale, looked at the degree to which these global, national, and regional networks were vibrant and connected, and matched this against how far along the network lifecycle each was. Understanding where each network sat in its lifecycle helped create a shared understanding of the campaign's current state, challenges and future potential. [iii]

**Assess multiple pathways to network impact**
 * //Focus on meaningful contribution toward impact, rather than attribution//. Given the complexity of networks and the systems in which they're embedded, causal attribution is difficult, if not impossible, to assign. Instead, focus on how network participants and projects are contributing toward long-term aspirations. This has been the approach the Barr Foundation has used for their ongoing learning about the Barr Fellows Program, an intense leadership development experience and network of diverse community heroes in Boston. The program is organized for impact at the individual, network and community levels. When assessing impact on the City of Boston, they are not trying to make direct causal links. Instead, they're focusing on gathering stories about the ways in which Barr Fellows and the social capital built through their network are contributing to local community vitality. For instance, coordination among Fellows and their work to benefit the city was cited as an important contribution to Boston winning competitive federal “Promise Neighborhood” funding. [v]

When considering process indicators, like the nature of relationships and the state of network formation, the point is not to be ‘goal-free’ but rather to figure out what can increase the likelihood of long-term success by linking these indicators to outcomes and impacts. For example, American Public Media is conducting a multi-stage evaluation of its Public Insight Network, a network of volunteer sources and the newsrooms that tap these sources for reporting inputs. They're first looking at the process of network formation and implementation, and then exploring links to longer-term community change. [i]

Similarly, Lawrence Community Works (LCW), a community development corporation in Massachusetts that is approaching community organizing with a network lens, is revamping its approach to data collection so it's more reflective of what people are doing in the network, and therefore informs action. They're gathering data at the individual level (e.g., what are the different types of members, and what are their experiences and outcomes in the network?), the network level (e.g., how many people are moving in and out of LCW?) and the field level (e.g., how are LCW's work and practices making a difference in the city of Lawrence and informing practice in other places throughout the country?).
 * //Look at indicators of impact at multiple levels//: the nature of relationships, the process of network formation, and the field you're trying to change. More specifically, look at the following (a brief illustrative sketch follows this list):
 * //Connectivity//: What is the nature of relationships within the network? Is everyone connected who needs to be? What is the quality of these connections? Does the network effectively bridge and embrace differences? Is the network becoming more interconnected? What is the network's reach?
 * //Network formation//: How healthy is the network along multiple dimensions – participation, network form, leadership, capacity, etc.? (See “Questions to Consider when Investing in Networks for Good,” page __.) Also, what products and services are the immediate result of network activity?
 * //Field-level outcomes//: What progress is the network making on achieving its intended social impact (e.g., policy outcomes, change in the system)? How do you know?
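To make the //connectivity// questions above more concrete: where you have even rough data on who works with whom, social network analysis (SNA) tools can help quantify them (see the SNA resources at the end of this section). Below is a minimal, illustrative sketch in Python using the open-source networkx library; the members and relationships are hypothetical, and a real assessment would draw on surveys, participation records, or interviews.

```python
# A minimal SNA sketch with hypothetical data: nodes are network members,
# edges are working relationships.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Aisha", "Ben"), ("Ben", "Carla"), ("Carla", "Aisha"),  # a tight cluster
    ("Carla", "Dev"), ("Dev", "Elena"),                      # a bridge outward
])

# Is the network becoming more interconnected? Density compares actual
# ties to all possible ties (0 = no ties, 1 = everyone connected).
print("density:", nx.density(G))

# Is everyone connected who needs to be? Multiple components reveal
# isolated groups within the network.
print("components:", nx.number_connected_components(G))

# Does the network bridge differences? High-betweenness members sit on
# the paths that connect otherwise separate parts of the network.
print("bridgers:", nx.betweenness_centrality(G))

# What is the network's reach? Average shortest path length indicates
# how quickly information and ideas can travel among members.
print("avg path length:", nx.average_shortest_path_length(G))
```

Point-in-time scores like these mean little on their own; consistent with the guidance above, they are most useful when tracked over time to reveal pattern changes.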


 * //Evolve the evaluation approach with the network//. Because networks are dynamic and always evolving, it's impossible to fully determine the evaluation design in advance. It will likely shift as the network changes. This is the approach the Annie E. Casey Foundation took in its ongoing efforts to evaluate the Making Connections Initiative over its ten-year duration. They co-evolved their approach alongside the initiative design, treating “evaluation as a work in progress,” and “developed new goals, measurements, techniques and tools as the initiative grew while also focusing on initial evaluation questions.” [i]

**Contribute to ongoing learning and enable collaboration**

For a lot of grantmakers, there's little latitude for ‘failed’ grants – investments that don't achieve the stated outcomes. In the network context, this risk aversion is especially problematic because network participants may decide to take action that's different from a funder's original vision. In addition, funders often have high expectations for short-term results. Yet groups working through a model of loose network connections can take a long time to evolve and deliver tangible outcomes. As one NNF participant said, “Many things look like failure when you're in the middle of it.” Investing in and openly sharing learning can be one way of better understanding networks, helping networks adapt, and building a base of knowledge about what works. And for grants that really are failures, there's opportunity – in the words of one NNF participant, “Failures can create fertile ground for other things to happen later. It's like compost: you throw all kinds of things in there and make sure air comes in... It's the compost theory of network grantmaking!”
 * //Assess early and often//. Action can take a long time to emerge from networks and tends to come in waves. Recognize that patterns of network activity may be sporadic and spread over a long time period, and adopt approaches to learning and evaluation that reflect this rhythm. Early-stage and regular evaluation can also be a way to find things to celebrate and thereby increase momentum and commitment to the shared work. This was the experience of the Franklin Community College Network when they paused to map their learning and progress two years in. They were surprised and pleased by all they had achieved in a short period of time with a loose group of people, and had renewed energy to carry the work forward.
 * //Emphasize learning over near-term judgment//, given the long time horizon for many networks. It's less about answers and assessing success or failure at a point in time, and more about continuous learning and adaptation to accelerate progress toward your goal. For instance, the Tides Foundation and the California Endowment are supporting the networking efforts of community clinics to work with both traditional and nontraditional partners to address community health in new ways as part of their “Networking for Community Health Initiative.” [fn] The content area, focus, and strategy are very different for each of the grantees, addressing problems ranging from Hepatitis A in the water used by fishermen, to green healthcare practices, community markets, and exercise spaces to combat obesity. The grantees have formed a learning community to reflect on insights about what works across this diversity and to assess their work. Mini-grants also allow the grantees to take what they learn and apply it.
 * //Evaluate networks collaboratively//. Engage network participants in developing a system-wide picture of what is being tried and achieved by the various players. If you build a shared vision and theory of the change you'd like to see, it becomes possible to collectively develop shared indicators that you can all track progress against. This is what the Garfield Foundation did when it brought together grantmakers and activists to develop a systems map of Midwest energy issues. The process, albeit time-consuming and labor-intensive, resulted in a map and a set of conversations that helped participants align around a common vision of what change would require, while developing trusting relationships along the way. [fn] [Find a different example to mix things up. Maybe the Conservation Alliance for Seafood Solutions' Common Vision?]
 * //Build capacity for ongoing learning and evaluation//. Because networks are ever-changing and leadership, at its best, is distributed, participants across the network need to be constantly gathering feedback on what works and acting on it, individually and collectively. One way to do this is to invest in feedback loops and learning systems for ongoing assessment that help everyone build understanding together. This ensures real-time feedback, engages network participants in an ongoing strategic conversation and helps strengthen ownership of the network. For instance, the RE-AMP energy network, supported by the Garfield Foundation and others, has developed a “learning and progress system” that tracks activities across the network, creating the habits and data for an ongoing network-wide conversation about what works. Members input data online and track progress against their goals. A learning and progress analyst analyzes this data and looks for cross-cutting patterns, gaps, and opportunities to share information with other members. The system is new, and RE-AMP is still working on member uptake. Once participation is widespread, it will create a shared picture of progress and evaluative insight, while also decreasing the burden on each organization to do separate reporting to funders. (A toy sketch of what such a progress log might look like follows this list.)
 * //Learn openly and with others.// Capture what you’re learning, from your own experiments to work with a network mind-set and from the networks you’re supporting. Along the way, share what you’re learning so others can learn from you and open yourself up to learning from others.
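The RE-AMP example suggests how simple the underlying mechanics of a shared progress system can be. The sketch below is a thought experiment only (RE-AMP's actual system is not specified here, and the member names, goals, and figures are invented): member updates are logged against shared goals, then rolled up so an analyst can surface cross-cutting patterns and gaps.

```python
# Illustrative only: a hypothetical, minimal "learning and progress" log.
# All member names, goals, and progress figures are invented.
from collections import defaultdict

# Each member posts updates against shared network goals.
updates = [
    {"member": "Org A", "goal": "reduce emissions", "activity": "policy brief", "progress": 0.4},
    {"member": "Org B", "goal": "reduce emissions", "activity": "coalition meeting", "progress": 0.6},
    {"member": "Org C", "goal": "clean energy jobs", "activity": "training pilot", "progress": 0.2},
]

# An analyst's first pass: roll updates up by goal to surface
# cross-cutting patterns and gaps in the shared picture of progress.
by_goal = defaultdict(list)
for u in updates:
    by_goal[u["goal"]].append(u)

for goal, items in sorted(by_goal.items()):
    avg = sum(i["progress"] for i in items) / len(items)
    members = ", ".join(i["member"] for i in items)
    print(f"{goal}: {len(items)} update(s) from {members}; avg progress {avg:.0%}")
```

The design point is less the tooling than the habit: a shared log, visible to all members, turns reporting into an ongoing strategic conversation rather than separate reports to each funder.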

**Embrace a systems orientation**
 * //Shift from a “logic model” view to a systems orientation.// At the heart of many of the above recommendations is a shift from a “logic model” view of the world, which assumes linearity, pure objectivity, and controllable comparisons, to a systems orientation that understands networks as complicated webs of relationships embedded in complex and messy systems. The chart below looks at each approach in its extreme form. [i] The point here isn't to throw out all that has worked in the past in favor of a systems-oriented approach. Instead, look for opportunities to artfully blend a logic model approach and a systems orientation, depending on what your situation calls for.

//(Adapted from Ben Ramalingam)//
|| **Logic Model Approach** || **Systems-Oriented Approach** ||
|| Evaluator positioned outside to assure objectivity || Evaluator and evaluation positioned inside, integrated into network action ||
|| Evaluation for funder audience || Evaluation for and by network participants and others working in the system, including funders ||
|| Evaluator and funder control the evaluation and design || Evaluator and funder collaborate with network participants to shape a process that will contribute to network learning and adaptation ||
|| Evaluator and funder control data and information || Network controls and owns data and information ||
|| Evaluation focused on linking cause and effect || Evaluation focused on surfacing patterns that can help the network learn ||
|| Environment/context is separate from what is being evaluated || Context is part of the network; as the context changes the network changes, and vice versa ||
|| Evaluation addresses a single or unified theory / perspective || Evaluation addresses a multiplicity of perspectives and values, some of which may conflict with the collective view ||
|| Accountability to control and locate blame for failures || Learning to respond to lack of control and strategically adapt to what's unfolding ||
|| Fear of failure || Hunger for learning ||

__ **//FAQs about Contributing to Learning and Evaluation://** __

 * What's the relationship between network theory, system dynamics and complexity theory? (See page __.)
 * What skills are needed to effectively contribute to learning and assess network impact? (See page __.)

__ **//Additional Resources//** __
 * **Network Evaluation: Cultivating Healthy Networks for Social Change**
Outlines a framework for thinking about network evaluations – including handy worksheets – and provides a toolkit for evaluators comprising specific tools, skill sets, and strategies to help facilitate the task. Eli Malinsky and Chad Lubelsky, Centre for Social Innovation and Canada Millennium Scholarship Foundation, 2008. ONLINE: http://s.socialinnovation.ca/files/NetworkEvaluation_Pocket_english.pdf

 * **Evaluation in Philanthropy: Perspectives from the Field**
This publication offers a brief overview of how grantmakers are looking at evaluation through an organizational learning and effectiveness lens, including learning and evaluation stories from 19 GEO members. Grantmakers for Effective Organizations, 2009. ONLINE: www.geofunders.org/document.aspx?oid=a06600000056W4x

 * **Social Network Analysis in Program Evaluation: New Directions for Evaluation**
This issue highlights social network analysis (SNA) methodology and its application within program evaluation through four diverse case studies, providing a basis to model common applications of network analysis within the field. Maryann M. Durland and Kimberly A. Fredericks, Jossey-Bass and the American Evaluation Association, 2006.

 * **Next Generation Network Evaluation**
Scans the current field of network monitoring and evaluation with the goal of identifying where progress has been made and where further work is still needed. Innovations for Scaling Impact and Keystone Accountability, June 2010. ONLINE: []

 * **“Networks and Evaluation”**
Outlines a three-pronged approach to assessing the impact of networks: network mapping, network indicators, and network outcomes. June Holley, 2007. ONLINE: []

 * **“Social Network Analysis and the Evaluation of Leadership Networks”**
An in-depth look at various forms of leadership networks. Discusses the value of social network analysis (SNA) as a promising evaluation approach for networks. Claire Reinelt and Bruce Hoppe, //Leadership Quarterly//, January 2009. ONLINE: _

 * **“Complexity and International Social Change Networks”**
Sketches the challenges of planning, monitoring, and evaluating results in international social change networks, with suggestions on how they might be met. The principles in this paper are highly relevant to other complex networks. Ricardo Wilson-Grau, in //Assessing Progress on the Road to Peace: Planning, Monitoring and Evaluating Conflict Prevention and Peacebuilding Activities//, European Centre for Conflict Prevention, May 2008.

 * **“Evaluating Performance in a Complex Adaptive System”**
This chapter summarizes the characteristics of a complex adaptive system (CAS) from an organizational perspective, identifying properties of evaluation systems that are consistent with the nature of a CAS and describing tools and techniques for more effective evaluation. Glenda H. Eoyang and Thomas H. Berkas, in //Managing Complexity in Organizations//, Quorum Books, 1999. ONLINE: []

Go back to 5-17 Draft Funders Guide TOC.