
First learning bulletin
How to develop a strong monitoring and evaluation plan
This is the first of a series of learning bulletins supporting monitoring and evaluation (M&E) of large capital active travel schemes within local authorities. These bulletins are written by The National Centre for Social Research (NatCen).
NatCen, commissioned by Active Travel England (ATE), is reviewing and giving feedback on local authorities’ M&E plans for these schemes.
This project responds to ATE's requirement that large schemes be formally evaluated. As part of this, local authorities should produce M&E plans covering impact, process, and value-for-money (VfM) evaluations. This work is part of the broader Active Travel Portfolio evaluation led by Sheffield Hallam University. It sits alongside Mosodi’s evaluation capability support programme for local authorities.
To review each plan, we are using a framework aligned with the latest ATE guidance and the Magenta Book. The goal is to strengthen local authorities' M&E capabilities by providing feedback, which will help improve M&E practice over time.
We have reviewed a number of M&E plans and can now share ideas to consider when developing scheme M&E plans. In this first learning bulletin, we focus on three areas that might be considered ‘quick wins’ for developing an improved M&E plan.
Rationale for the investment
M&E plans and the Theory of Change could be strengthened by including more evidence about the problems the scheme will address and the rationale for the investment, drawing on the business case. For example, this could include evidence about the rates of active travel in different population groups, or evidence that key stakeholders support the project.
Evaluation questions
We have seen that plans may list evaluation objectives, but not include questions to guide the design of the process, impact and VfM evaluations. You can often infer evaluation questions from the evaluation objectives, but a strong, explicit set of questions sharpens and strengthens the M&E plan. Example questions related to impact include:
- did the scheme achieve the expected outcomes
- did the scheme cause the difference
- what were the unintended outcomes
- how has the context influenced outcomes
- what are the journey types of users
- what are users' attitudes towards the scheme, for example safety
- is the scheme used disproportionately by one demographic group
Dissemination
Few of the plans we have seen so far include a good dissemination plan: one that lists which key groups or audiences require what information, at which point in time, and for what purpose. A strong dissemination plan also considers different formats for greater impact. These include:
- one-page summaries
- video outputs
- infographics
- data sharing
- newsletters
- social media posts
- conference presentations
We hope that some of these suggestions are useful when developing or refining your M&E plan. In the next bulletin we will explore suggestions related to impact evaluation and types of analysis to consider.

Second learning bulletin
Overview of approaches to quantitative data analysis
In this bulletin, we give an overview of quasi-experimental methods for analysing active travel data. We look at methods that can be used with different levels of data availability, and at the limitations of each method.
What is a quasi-experimental evaluation?
Quasi-experimental evaluations estimate the causal impact of an intervention without randomly allocating the intervention to specific groups. A quasi-experimental design (QED) evaluation uses statistical methods to estimate a 'counterfactual': what would have happened to the outcomes of interest had the intervention not taken place. These methods account for confounding factors that are related to both the intervention and the outcomes of interest and would otherwise bias the results.
The main area of concern is selection bias. This arises from systematic differences between the intervention and comparison groups that are associated with the outcomes being investigated. Such differences bias the causal estimates: effects may be attributed to the intervention when they are actually due to pre-existing differences between the groups.
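As a rough illustration with made-up numbers, suppose an intervention area already had higher cycling counts than its comparison area before a scheme opened. A naive comparison of the two areas after the scheme then confounds the scheme's effect with that pre-existing gap, while differencing out each group's own baseline (the logic of the difference-in-differences method) removes it:

```python
# Hypothetical illustration of selection bias; all numbers are invented.
# The intervention area starts with higher cycling counts than the
# comparison area, so a naive post-only comparison overstates the effect.

pre = {"intervention": 120.0, "comparison": 80.0}    # daily cyclists before
post = {"intervention": 150.0, "comparison": 90.0}   # daily cyclists after

# Naive estimate: compare the two groups after the scheme only.
# This mixes the true effect with the pre-existing gap of 40.
naive = post["intervention"] - post["comparison"]

# Difference-in-differences: subtract each group's own baseline first,
# removing time-invariant differences between the areas.
did = ((post["intervention"] - pre["intervention"])
       - (post["comparison"] - pre["comparison"]))

print(naive, did)  # → 60.0 20.0
```

Here the naive estimate (60) is three times the difference-in-differences estimate (20), and the gap between them is exactly the baseline difference between the areas.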
Different quasi-experimental designs need different types of data and vary in how reliable and strong their findings are. The table below provides an overview of common methods.
Method | What is it? | When to use it | Strengths | Limitations |
---|---|---|---|---|
Difference-in-differences | Comparison of changes in outcomes before and after an intervention between the intervention and comparison groups. | Data is available at one time point before and one after the intervention, for both the intervention and comparison groups. | Simple to apply with limited data, and controls for time-invariant differences between the groups. | Relies on the assumption that both groups would have followed parallel trends without the intervention, which is hard to test with a single pre-intervention time point. |
Synthetic control methods | Comparison of outcome trends between the intervention group and a weighted combination of multiple comparison groups, combined into a ‘synthetic control’ with similar trends to the intervention group. | A long time series of data is available before and after the intervention, at aggregate level, for the intervention group and multiple comparison groups. | Provides a transparent, data-driven comparison group and works even with a single intervention unit. | Needs a long pre-intervention series and a pool of suitable comparison groups, and standard statistical inference is less straightforward. |
Interrupted time series (ITS) | Assessment of a time series of intervention group outcomes, to identify whether the intervention has led to a significant deviation from the pre-intervention trend. | Longitudinal data is available at multiple time points before and after the intervention for a single intervention group. | Needs no comparison group, as the group's own pre-intervention trend serves as the counterfactual. | Vulnerable to other events and seasonal patterns that coincide with the intervention, and needs many time points to estimate trends reliably. |
Statistical matching or weighting | Pre-processing methods used to develop a comparison group based on endline data. Statistical matching techniques identify a subset of the most similar comparison group individuals, whereas weighting uses all comparison individuals but varies how they contribute to the average. | Post-implementation data with detailed information on fixed characteristics that are associated with outcomes is available for both intervention and comparison groups. | Improves comparability on observed characteristics and can be combined with other methods. | Only adjusts for observed characteristics, so unobserved differences between the groups can still bias estimates. |
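To make the interrupted time series approach concrete, here is a minimal segmented-regression sketch on synthetic data (all numbers invented): monthly counts follow a steady trend and then jump when an imaginary scheme opens at month 12, and an ordinary least-squares fit recovers the size of that jump:

```python
import numpy as np

# Hypothetical interrupted-time-series sketch (segmented regression).
# Synthetic monthly cycle counts: a steady trend, then a level jump of
# +15 when an imaginary scheme opens at month 12.
rng = np.random.default_rng(0)
t = np.arange(24, dtype=float)
post = (t >= 12).astype(float)          # 1 after the intervention
counts = 100 + 2.0 * t + 15.0 * post + rng.normal(0.0, 1.0, size=t.size)

# Design matrix: intercept, pre-existing trend, level change at the
# interruption, and change in slope after the interruption.
X = np.column_stack([np.ones_like(t), t, post, post * (t - 12)])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)

level_change = beta[2]   # estimated jump at the interruption (~15)
print(round(level_change, 1))
```

The same design matrix also yields the change in slope (the fourth coefficient), which is useful when a scheme is expected to alter the trend in usage rather than produce a one-off shift.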
Conclusion
In summary, quasi-experimental methods are useful tools for assessing the causal impact of active travel interventions, particularly when randomised trials are not possible. However, the methods differ in the data they need and in how far they address bias. By understanding and addressing these limitations, researchers can conduct better analyses, which supports decision-making and improvements in active travel infrastructure and policies.

Third learning bulletin
Process evaluation
This is the third bulletin in our series focused on improving monitoring and evaluation (M&E) practices within local authorities for large capital active travel schemes.
In this bulletin, we focus on process evaluation, which you should consider alongside impact and value-for-money evaluations. Our review found that process evaluation is often overlooked, with almost half of the plans making no mention of it.
Based on our findings, we highlight five key considerations for planning a robust process evaluation.
Define clear process evaluation questions
A process evaluation should address the following, consistent with the Magenta Book guidance:
- was the scheme delivered as intended
- what aspects of planning and delivery worked well, and what didn’t, and why
- how and to what extent has stakeholder feedback been incorporated
- what improvements could be made
- how has the local context influenced delivery
Assessment against the logic model
A high-quality process evaluation should look at how activities were designed to work together. This should include checking whether the mechanisms for achieving positive impact worked the way they were planned in the logic model.
Identify appropriate data sources and collection methods
In your plan, you should include a full set of data sources and methods mapped to the evaluation questions. This should include a mix of objective and subjective data to make sure the approach is thorough.
Recommended data collection methods and data sources include:
- analysis of scheme documentation and monitoring information. For example, comparing forecasted scheme costs against actual costs, inputs and delivery milestones, and explaining any variances
- qualitative fieldwork. This includes interviews and focus groups to explore challenges, successes, and lessons learned. These should inform recommendations to improve future scheme delivery and management
Assess the contribution of activation activities
Process evaluation should consider how any planned activation activities influence outcomes. For example, this can be explored through Route User Intercept Surveys. We encourage you to integrate evaluation of these activities to better understand how they help to achieve benefits.
Plan for sharing process evaluation learning
It’s important that plans set out how learning from the process evaluation will be shared in good time, and with which stakeholders. Creating reports or summaries during the project helps make sure that findings are applied within its lifecycle, and also helps improve future projects.
Final thoughts
We hope these insights help you develop or refine your M&E plan for active travel schemes. Strengthening your process evaluation provides useful insights to sit alongside impact and value-for-money assessments. This will help you deliver more effective projects and keep improving over time.
The NatCen M&E Plan Review Team