Today we published a news story on the site about what we learned from a review of 200 Heritage Grants evaluation reports.
Over the coming weeks and months I'm going to pull out some key points from the review which I think OC users will find most useful, starting with this list of dos and don'ts.
Observations from the strengths of the excellent reports:
- All of these reports embedded robust data collection methods at the outset of the project.
- Many of them used evaluation throughout the project lifespan to continually test and refine activities and to monitor their impact. Some were able to demonstrate how self-evaluation findings had already had an impact on the ongoing success of the project.
- All demonstrated expert understanding of research methodologies and of impact evaluation.
- Many provided commentary on the robustness of the data and guidance for how to interpret the findings, taking into account sample sizes and confidence intervals.
- There was often a good balance between the detail of qualitative case studies, showing what projects meant to people individually, and the broader picture provided by quantitative surveys.
- Some reports benchmarked their evaluation metrics and project successes against external references, thereby setting their findings within a wider context, e.g. other surveys produced by visitor / volunteer organisations, or Generic Learning Outcomes.
- In some cases the more succinct excellent reports were supported by appendices, which included detailed supporting evidence such as a list of who was consulted, evaluation plans, activity plans and separate research reports.
- Many were explicit about how the learning from the project and the evaluation report would be used to inform best practice in future. Some explained exactly who the report would be shared with, how they would use it and what impact this was then expected to have.
- All the reports clearly and explicitly focused on outcomes and legacy.
Observations from the weaknesses of the poor reports:
- It was clear that evaluation had not been a priority for these projects.
- Many reports were short summaries and lacked sufficient detail about their project. Some were incomplete despite being titled ‘evaluation report’: they reported on only one aspect of the project, or focused on the achievement of milestones in terms of project management and processes rather than outcomes. Some contained only activity summaries, and some even consisted of conservation plans and marketing plans.
- Some reports lacked a clear structure, or had no introduction, aims or objectives section from which to understand the context of the project.
- In many cases there was no evidence of any evaluation data. Therefore, reports often relied on the perspective of the author/s, or used selective anecdotal data to support the findings. It was often unclear how the report’s judgments had been made.
- Where some data was provided, the reports did not demonstrate that it had been analysed robustly. Some included information in appendices but neither referred to it in the report nor attempted to analyse it.
- Many focused on universally positive or anecdotal verbatim quotations and commentary, asserting that all objectives had been met despite a lack of robust evidence to support this. Where some objectives or targets had clearly not been met, there was often no explanation for this.
- Many lacked reflection and insight into the strengths and weaknesses of the project.
- Many did not attempt to consider objectively whether any lessons had been learned.
- Many did not consider outcomes or refer to the project’s wider impact outside of the fact that it had taken place and been delivered on time and to budget.
- A few reports had clearly not been proofread and had missing sections.
I'd be keen to hear your thoughts on what does and doesn't make a good evaluation report, or ways in which you've approached your own evaluation which others might find useful.