Measuring Networks

Use this section to help monitor and evaluate the results of your network.

Network Monitoring, Evaluation, and Learning (MEL) Overview

What you measure in your network depends on where the network falls on the continuum and what its goals are. The resources below are a good starting point for your MEL efforts.

Monitoring

Monitoring a network entails regular collection of mostly quantifiable data to see whether the network you are supporting is growing in size, strength, or influence. The further you move from outputs to outcomes, the fewer “standard” indicators there will be, because your monitoring, evaluation, and learning will be less about how the network functions and more about what it achieves.

For support networks, the results to monitor typically include the rate and/or quality of sharing between network members (e.g. knowledge, skills, etc.) and members’ use of shared resources. For example, how many network members does each individual contact, on average? How often do individuals engage other network members, on average? What percentage of network members open messages on the network mailing list or participate in a group chat? If network members are providing resources or trainings, how many other members are engaging with that content?
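To make these metrics concrete, here is a minimal Python sketch that computes two of them; the member names, contact log, and open list are invented for illustration and are not a prescribed data format.

    # Sketch: support-network monitoring metrics on invented data.

    # Each tuple records one member contacting another in the reporting period.
    contact_log = [
        ("ana", "bo"), ("ana", "chen"), ("bo", "chen"),
        ("chen", "ana"), ("dina", "ana"),
    ]

    members = {"ana", "bo", "chen", "dina", "eli"}

    # Average number of distinct members each individual contacted.
    contacts_per_member = {m: set() for m in members}
    for sender, receiver in contact_log:
        contacts_per_member[sender].add(receiver)
    avg_contacts = sum(len(c) for c in contacts_per_member.values()) / len(members)
    print(f"Average distinct contacts per member: {avg_contacts:.2f}")

    # Share of members who opened at least one mailing-list message.
    opens = {"ana", "bo", "chen"}  # members who opened >= 1 message (hypothetical)
    open_rate = len(opens & members) / len(members)
    print(f"Mailing-list open rate: {open_rate:.0%}")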

For coordination networks, consider monitoring member adoption of specific roles and responsibilities and implementation of joint or complementary activities towards the network goal(s). For example, how many joint activities did network members conduct? Is the network growing more technically sophisticated as members use their unique skill sets by leading working groups or committees? Did the network diversify by bringing in members that can help to achieve the network goal(s)? How many members did the network recruit or retain through offering incentives for participation?

Network Indicators

There is no universal set of indicators relevant to every project that supports a network. In many projects, a network is just a means to an end, supporting some other kind of long-term outcome. However, indicators that measure network structure and engagement are relevant because they can tell you whether network growth, coordination, or autonomy is increasing, which is a useful short-term outcome when long-term results might not materialize until years into your project.

Click here for a list of network indicators, and here for network examples that include indicators.

Evaluation

Evaluating a network entails collecting and analyzing information about your network, often combining indicator data with supplementary qualitative and contextual information. All of this information is critical in assessing the degree to which the network is functioning as intended and/or judging its effectiveness.

Consider a baseline evaluation if you need supplementary data beyond what your indicators provide to assess how the network functions at the outset, particularly if you are planning a new intervention with an existing network and need additional context for comparisons at the end of your program. Consider a midline evaluation if you are less confident in the approach to network development you have chosen and would like to test it with an opportunity to adjust midway. Use an endline evaluation to assess network achievements. Note that these evaluation categories are flexible and that you can conduct small-scale evaluation efforts at several points in the project lifecycle.

Learning

Think of a learning activity as a way to make sense of your project that is more flexible than an indicator and less time-consuming than an evaluation. Periodic learning activities (such as after-action reviews, reflection sessions, scenario planning, or mapping exercises) can help you make sense of what your indicator data is telling you and determine the need for programmatic adaptation. These activities are also good opportunities for participatory MEL with network members themselves. Consider how frequently to implement such learning activities (balancing the timeliness of these discussions against fatigue if non-IRI stakeholders are included) and who will participate in order to make them most effective. Then document them and share your lessons learned and adaptations with key stakeholders (like your donor in a quarterly report, or your network members in a message or group activity).

Networks and Sustainability

Measuring network results can be challenging for a variety of reasons. One challenge is that most projects contain more than just a network. Sometimes a project doesn’t achieve its objective but still sets up a healthy and sustainable network. Other times, a project achieves its objective, but the network did not contribute to that result. It is important to have intentional and continuous discussions about what success looks like. Is creating a healthy and sustainable network a vital component of that success? If so, what kind? With most projects, you can make choices over time to move IRI-supported networks toward one side of the continuum or the other.

An additional consideration is sustainability. Is the network you are supporting intended to exist indefinitely or is it convened to achieve a specific purpose and then disband? Perhaps you aren’t quite sure yet. Ask yourself if the existence of the network in and of itself is a key outcome you wish to achieve. If your goal is to set up a group of people or organizations to support each other or coordinate to achieve great things long after IRI has left, then sustainability is probably important to you. If your goal is less about the means of achieving a social or political result than the ends, then sustainability might be less of an issue once those ends are met. It may be useful to explicitly explain in your results chain or other MEL documents the degree to which you are prioritizing network sustainability vs. other kinds of results. See below for a typology and examples.

Data Collection Tools and MEL Approaches

Establishing whether a network activity caused results is difficult but achievable! It requires careful attention to data collection and evaluation methods (and resources to support both).

Measurement Options

Note 1: Measurement in Closed and Closing Spaces

In closed and closing spaces, pay special attention to feasibility as well as the safety of participants. For instance, Digital Storytelling may not be possible in some contexts without risking participants’ safety. To avoid exposing participants to accidental risk, work closely with them to ensure they understand what they are being asked to do and are aware of any potential risks.

Detailed Overview

Measurement Option: Social Network Analysis

Use for: mapping and analyzing relationships between individuals, groups, organizations, or other actors in a network. It is useful when what you really care about is increased collaboration between network members, especially in relatively large networks. It could apply to either coordination or support networks, but note that SNA won’t tell you whether the network was successful in any goal beyond increased growth, communication, or collaboration.

SNA is also appropriate if your theory of change operates on the structure of the network. For example, is your theory that hierarchical networks are more effective at advocacy than decentralized networks? Is your intervention designed to reduce degrees of separation between members? In short, SNA can tell you a lot about the structure of the bonds of a network but will tell you relatively little about what flows over those bonds.
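As a rough illustration of these structure measures, the sketch below uses the open-source networkx Python library on an invented tie list; the member names and relationships are hypothetical, and the metrics shown (density, degree centrality, average path length) are standard SNA measures rather than a prescribed indicator set.

    # Sketch: basic SNA structure metrics with networkx (toy data).
    import networkx as nx

    # Each edge is a reported tie between two members (hypothetical).
    ties = [
        ("ana", "bo"), ("ana", "chen"), ("bo", "chen"),
        ("chen", "dina"), ("dina", "eli"),
    ]
    G = nx.Graph(ties)

    # Density: share of all possible ties that actually exist (0 to 1).
    print("Density:", nx.density(G))

    # Degree centrality: how connected each member is relative to the rest.
    print("Degree centrality:", nx.degree_centrality(G))

    # Average shortest path length: the "degrees of separation" an
    # intervention might aim to reduce between baseline and endline.
    print("Avg. path length:", nx.average_shortest_path_length(G))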

When to use in program cycle: SNA isn’t particularly useful as a one-time effort; it is best when you can compare what a network looks like at baseline or midline to what it looks like at endline. That said, it could serve as a needs assessment tool if conducted at baseline only.

If you are collecting social network data at only one point, consider measuring separate networks as a point of comparison. For example, decentralized support networks might distribute resources in response to a disaster more efficiently than hierarchical networks. In this case, a comparison of networks on the same results measures can provide some evidence for your hypothesis.

Time Commitment: 3 (this takes about as much time as a rigorous qualitative evaluation)

Technical Commitment: 3 (specialized software and other prerequisites are necessary, though the software is fairly straightforward to learn)

M, E, or L: SNA is a great option for rigorous project monitoring. It could be the measurement tool for an indicator on network growth or increased collaboration. In some cases, SNA can help evaluate whether a network activity caused key results. Again, using SNA for evaluation requires comparing networks or measuring the same network across time; the data collection process and evaluation method should therefore be designed and resourced accordingly. SNA is also useful for evaluation and learning if supplemented by other kinds of evidence about member incentives and results, particularly about the content of network relationships as opposed to only network structure.

Measurement Option: Ripple Effect Mapping (REM)

Use for: capturing higher-level results, including the impact of complex programs and collaborative processes, through a popular participatory evaluation technique. Well suited to evaluating group-focused efforts, REM combines aspects of Appreciative Inquiry, mind mapping, facilitated discussion, and qualitative data analysis. This technique could be used to capture higher-level results for all types of networks, but it is an especially good fit for bringing members of a coordination network together to discuss progress made and see the collective changes the network achieved as a whole.

When to use in program cycle: REM works best when applied post-midline, closer to endline, to allow more time for network members to achieve higher-level results (the ripples themselves).

Time Commitment: 1 (this can be completed in a single session)

Technical Commitment: 1 (facilitation skills are necessary but not technically complicated)

M, E, or L: Most relevant for evaluation of performance and effectiveness, as well as capturing both intended and unintended effects of a program.

Measurement Option: Outcome Mapping

Use for: mapping out and measuring results of a network that seeks to influence the behavior, relationships, and actions of other people, groups, and organizations. OM allows program participants to set goals about which actors should be targeted, what changes participants expect to see, and which strategies will reach and measure progress toward these goals. It is most applicable to coordination or hybrid networks that collectively seek to influence the behavior of other actors.

When to use in program cycle: Most useful when utilized at baseline and throughout a project for monitoring results.

Time Commitment: 1 (the strategy-setting step can be completed in a single session, but the method is most valuable with two or more sessions)

Technical Commitment: 1 (facilitation skills are necessary but not technically complicated)

M, E, or L: Most helpful for monitoring a network’s results, since it provides meaningful progress markers of success in influencing the behavior and actions of other actors. It could also be used for evaluation and learning, but may need to be supplemented by other data collection if you are evaluating network functions or results that go beyond influencing outside actors.

Resources: Better Evaluation Guide on Outcome Mapping

Measurement Option: Most Significant Change (MSC)

Use for: capturing unintended results and encouraging a more participatory approach that allows participants to define their own success through personal accounts. This method could be appropriate for all types of networks. Note that MSC as a stand-alone method may focus overwhelmingly on successes and not provide a complete picture of a network’s function and results. MSC is therefore best used in combination with other measurement options, including more quantitative metrics.

When to use in program cycle: Best used at baseline and throughout a project rather than as a one-off exercise.

Time Commitment: 2 (this approach requires multiple sessions to be most meaningful)

Technical Commitment: 1 (facilitation skills are necessary but not technically complicated)

M, E, or L: MSC is most useful for monitoring and learning purposes: periodically understanding trends in achievements by network members and learning from discussions about the significance of stories and the underlying values that define success for beneficiaries and implementers. While MSC could also be used in evaluations, it needs to be combined with other kinds of evidence.

Resources: Better Evaluation Guidance on MSC

Original MSC Guide by Davies and Dart

Measurement Option: Digital Storytelling (DST)

Use for: a highly participatory method that encourages beneficiaries to share aspects of their life stories by creating their own short digital media productions. ‘Media’ may include digital film, animation, photos, audio recordings, or other electronic files. Though mainly used to highlight the stories of communities and individuals within them, DST can be a great method to learn how coordination network members prioritize issues and perceive progress made and, in support networks, what is important to members personally.

Similarly, PhotoVoice, a participatory photography method that allows communities and individuals to represent themselves, could also provide rich information about the work and progress of networks. The photographs could inform and enable wider advocacy efforts and communications to show progress made by coordination networks. PhotoVoice could also be helpful to incorporate for support networks, in which case the change represented would be more at an individual/personal level, rather than collective.

When to use in program cycle: Introduced at baseline, DST (as well as PhotoVoice) can be used throughout the program cycle. As with MSC, gauging buy-in and agreeing on set times and expectations for collecting digital stories are key to success, providing structure and informing all MEL practices.

Time Commitment: 2

Technical Commitment: 2 (technical commitment is relatively low for implementers and partners with access to technology and experience with basic camera functions, but may be higher for those with less experience or access)

M, E, or L: While DST is primarily meant to enhance learning and encourage sharing, the stories can also inform monitoring and evaluation.

Resources: USAID Learning Lab guide on Story Telling

Measurement Option: Difference-in-Differences

Use for: impact evaluation; DiD is one of a suite of “quasi-experimental” evaluation methods for determining whether, and to what degree, your network activity caused an outcome of interest. DiD works by comparing measures taken over time (e.g. at baseline, midline, and endline) from your network participants against a cohort of similar actors who did not participate in the program. For example, a DiD evaluation of a support network might measure participants on a skills test over time and compare results at each point to similar individuals who did not participate.
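To make the underlying arithmetic concrete, here is a minimal Python sketch of the core DiD computation, using a two-point (baseline and endline) design; all test scores below are invented for illustration.

    # Sketch: the core difference-in-differences computation
    # (all scores are invented for illustration).

    def mean(xs):
        return sum(xs) / len(xs)

    # Skills-test scores at baseline and endline.
    participants_pre = [52, 48, 55, 50]
    participants_post = [68, 64, 70, 66]
    comparison_pre = [51, 49, 53, 50]
    comparison_post = [56, 54, 58, 55]

    # Change in each group over time.
    participant_change = mean(participants_post) - mean(participants_pre)
    comparison_change = mean(comparison_post) - mean(comparison_pre)

    # DiD estimate: the participants' change, net of the trend the
    # comparison group experienced anyway.
    did_estimate = participant_change - comparison_change
    print(f"Estimated program effect: {did_estimate:.1f} points")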

A variation on DiD “phases in” participants. If you can’t measure non-participants, consider systematically onboarding cohorts of participants in phases; with this design, comparing initial participants to those incorporated at later stages can provide information about the effect of the program.

When to use in program cycle: DiD, like many impact evaluation methods, requires data collection at a minimum of two points in the program cycle, plus repeated post-endline data collection to measure the durability of any causal effects. A high-quality DiD evaluation might include data collection at baseline, midline, endline, endline + one month, and endline + three months.

Time Commitment: 3

Technical Commitment: 3

M, E, or L: DiD and other quasi-experimental designs are usually reserved for impact evaluation – determining whether a program activity caused a specific result. However, repeated data collection presents opportunities for learning, especially if network sustainability is an intended result.

Resources: Poverty Action Lab presentation on experimental (i.e. randomized evaluations, randomized controlled trials (RCTs)) and quasi-experimental evaluations, including an example of a DiD evaluation design.

Innovations for Poverty Action (IPA) impact evaluation methods table – includes methods, descriptions, assumptions, and data collection requirements for several experimental and quasi-experimental evaluation designs, including DiD.

Example Network Indicators (General)

Example Support Network Indicators

Example Coordination Network Indicators
