4-5 Convening -- Digging Deeper B-2

=== **//Measuring the strength of our networks//**, how they change over time, what theory of change to develop, what longer-term outcomes might be achieved through networks, etc. (Led by Roberto Cremonini) ===

//What we need to assess in the network//
 * We can use the growth cycle to help us determine what to evaluate about the network (e.g., initially assess connectivity, but wait 2-4 years to measure growth). This may mean that the evaluation needs to be sequenced.
 * There are several dimensions of change that are important to capture in the evaluation:
  * The benefit of participation to participants, e.g., shared resources, stronger ability to advocate, working with a network mindset, etc.;
  * In some cases there are people outside the network who also benefit, e.g., through changes created or influenced by the network action.
  * Effective networks can also result in accelerated potential for learning, a dimension of network health.
 * //How can we measure this as a field-level outcome?//
 * Steven Johnson wrote a book titled //Where Good Ideas Come From// that is great for thinking about how to nurture innovation in the initiatives we support, and in our own organizations. He's also produced a YouTube video on the topic.
 * There’s a tension between measuring connectivity and measuring field-level outcomes… the question always comes back to //“What are they doing?”//
 * Funders are interested in knowing how networks work, but the people involved are less interested. That’s a tension. One of the things that Barr did in their work with weavers was to draw a clear line between the outcomes they were expecting from a network approach vs. what they expected to observe in the field.
 * //How do we tailor the amount of capital given to the network’s lifecycle stage? Is it a good idea to give the network more/less money as it moves to a different stage?//
 * Some networks – like organizations – have no need to advance to the next stage or institutionalize; others have a clear long-term trajectory and shared desire to reach a defined end-state. It really depends on what you’re funding them for and what they want to do.

//Learning from the evaluation//
 * The application of learning – and the rapid iteration of what is learned by doing – is of great value for society. Our current evaluation processes do not address application as much as they should.
 * //Where are the lessons held? Are people learning individually, as a team, at their respective institutions… or sharing learnings with the field?//
 * Even foundations that have a strong emphasis on learning from the get-go hold their grant results reports internally... Sometimes, those results aren’t even circulated within the foundation!
 * Funders invest substantial time / energy in creating materials for an internal audience. However, they don’t share that information with a broader audience that stands to benefit from it; they also work on timelines that aren’t useful for learning and taking immediate action. These are barriers to working with a network mindset.

//The foundation’s role//
 * Framing evaluations as a way to learn (e.g., about what’s working or not) may take some of the pressure off the process, particularly in the context of network-centric initiatives in which people feel like they are playing in new territory.
 * We need to accelerate the capacity for learning inside the network by building capacity (and a supportive environment) for sharing failures and learning in general.
 * It’s important for funders to be up front about the fact that we don’t have all the answers, that we’d like to learn so we can innovate our own funding strategies in the networks space, and that it’s OK for grantees to talk to us when things aren’t going well — because we’re “in it” with them. From the very outset, one of the most important ingredients for learning is to create an environment where there’s permission to make mistakes and no permission to keep mistakes private.
 * Foundation staff needs to be able to take that kind of feedback.
 * There’s also the Google Way: If you’re going to fail, fail fast, and fail by design. Google releases its features selectively to small groups, to test whether they’ll be well received (i.e., it has built testing into its project design).
 * Due to the power dynamic, “learning with the network” can be difficult for the funder.
 * //Is it even an option? If it’s not, how can we build this capacity for learning without overburdening nonprofit leaders?//
 * //What’s the alternative?//
 * One option is for learning to happen on its own (with some facilitation and support from the funder, who’s not directly involved). At a certain point, the group will have to report what they’ve learned to the funder. //What form does that take? Is it the facilitator who reports? Is it the network?// //At what point do learnings become a pitch to receive additional funding?//
 * Even when there is agreement around the goal and mission, it’s the role of the funder to point out problems with project design and / or implementation. //How does that influence the relationship between funder and grantees?//
 * The beauty of working in networks is that you can improvise more, i.e., work more through intuition than strategy and planning. In early stages of the work this can help create / strengthen relationships; in later stages of the work, as you move to take action, you need a plan just like any other project.
 * In the absence of a peer learning group for weavers, the funder can play that role.
 * We don’t know how to look for the wonderful things that happen in the periphery.
 * //How open-minded are we about saying we’re looking for emergent outcomes on the periphery?//
 * It’s so much about when you decide to measure.
 * Funders are often tied to the end-point of the grant period, which is often the wrong time to be reflecting on outcomes.

//Other comments//
 * We’re lacking a portfolio strategy tool to help us incorporate networks into our overall strategy… it’s rarely the case that we fund a single network to achieve a specific goal.
 * Distributing grant money piecewise (vs. in half-million-dollar chunks) and based on performance creates a strong incentive for grantees to learn about what works and what doesn’t.

//Reflections and takeaways//
 * Evaluation has been so much about outcomes for people. A good process evaluation would help to highlight that process is also important.
 * We can break down evaluations into task, process, and relationship.
 * Whether or not there’s a power dynamic, the funder is a stakeholder and should be in the room and at the table. The question is //How?// That comes down to creating safe spaces for conversation… so much of it is contextual.
 * You need a good facilitator to help you get through and learn from difficult-to-have conversations.
 * There’s a need to balance celebrating failure and holding people accountable.
 * “A lesson learned is not learned until it is applied.” You have to ask what has changed in the way people think, and what people will actually do differently next time.
 * When we’re knee-deep and engaged in our work, we tend not to recognize the breakthroughs. //What is it that helps create those breakthroughs?// Often it’s uncomfortable messiness (e.g., difficult-to-have conversations) that pushes us to the next level.
 * The question of when to measure is a central one; we usually measure too early.
 * There’s a need for formative evaluation along the way.
 * It’s important to incorporate lifecycle into the evaluation, to have a better sense of what to measure & when.
 * Funders should be open to unintended consequences in their evaluation.
 * There’s a need for better tools to look at whole systems.
 * //Is systems analysis a good replacement?//
 * We need something more multi-dimensional than a linear logic model.

Go back to April 5-6 Convening Notes.