Conducting Third-Party Evaluations
WestEd conducts independent, third-party evaluations of implementation and outcomes, using methods that are appropriate to the local context.
Getting any new program or intervention underway takes significant time, energy, and financial resources. So it’s important to know whether your efforts are paying off, whether the program is being implemented well, and, ultimately, whether it is meeting its intended outcomes. A third-party, or external, evaluator can help answer these and other questions of interest.
A neutral third-party evaluator acts as a mirror, reflecting back to clients the information they need to answer key questions and assess progress. WestEd evaluators begin by working with clients to develop a study design that is sensitive to the local context and addresses the clients’ questions, while maintaining an arm’s-length relationship that guards against bias. Our evaluators gather and use data to review a program’s plans and progress, track its performance against its objectives, and, ultimately, make recommendations about program efficacy. Armed with information from our evaluations, program implementers and funders are better equipped to make key decisions about program enhancement, expansion, and sustainability.
Evaluations may include summative components that use outcome data to measure a program’s effectiveness. These components may rely on rigorous impact designs, such as randomized controlled trials or quasi-experimental studies, or on a longitudinal design that tracks participant outcomes over time.
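For readers unfamiliar with why random assignment supports credible impact estimates, the sketch below illustrates the core logic in Python. It is a hypothetical illustration only: the participant count, outcome measure, and effect size are invented for demonstration and are not drawn from any WestEd study.

```python
# Hypothetical sketch of a simple randomized impact design: units are
# assigned to treatment or control at random, and the program's impact
# is estimated as the difference in mean outcomes. All names and
# numbers are illustrative.
import random
import statistics

random.seed(42)  # reproducible illustration

participants = [f"participant_{i}" for i in range(200)]
random.shuffle(participants)
treatment = set(participants[:100])  # a randomly chosen half gets the program

def observed_outcome(participant: str) -> float:
    """Stand-in for a real outcome measure (e.g., an assessment score)."""
    base = random.gauss(70, 10)          # invented baseline distribution
    return base + (5 if participant in treatment else 0)  # assumed true effect

scores = {p: observed_outcome(p) for p in participants}
treat_mean = statistics.mean(scores[p] for p in treatment)
control_mean = statistics.mean(scores[p] for p in participants if p not in treatment)

# Because assignment was random, the two groups are comparable in
# expectation, so the difference in means estimates the average impact.
print(f"Estimated impact: {treat_mean - control_mean:.2f} points")
```

Quasi-experimental and longitudinal designs follow different logic, relying on statistical controls or repeated measurement rather than randomization, but the goal is the same: isolating the program’s contribution to observed outcomes.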
Formative evaluation components provide rich information that can guide midcourse corrections during implementation, yield insight into what was implemented and how, and provide valuable detail on the “whys” behind the impact results. Formative components may draw on interviews and focus groups with key program staff and participants, as well as reviews of program documents and artifacts.
Feedback from third-party evaluators is useful only if it is communicated in a relevant and timely way. WestEd evaluators tailor their communications to the program and the needs of the client, providing pertinent information at critical implementation points, not just after a project is over, to support decision-making. They also share information in a variety of formats, ranging from infographics, data dashboards, and presentations to summary briefs and longer reports.
Illustrative examples from our work
Summative evaluation
WestEd is leading a summative evaluation of the effectiveness of Khan Academy’s online mathematics resources for community college students. Funded by the U.S. Department of Education, the evaluation randomly assigns community colleges either to use Khan Academy resources or to maintain their standard instructional practices. Student outcomes are then analyzed to estimate the impact of the Khan Academy resources.
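A distinctive feature of this design is that whole colleges, not individual students, are the unit of assignment. The Python sketch below shows what that cluster-level assignment looks like; the college names and counts are invented for illustration and do not reflect the actual study.

```python
# Hypothetical sketch of cluster random assignment: entire community
# colleges are randomized to conditions, as in the design described
# above. College names and the number of sites are invented.
import random

random.seed(7)  # reproducible illustration

colleges = [f"College {chr(65 + i)}" for i in range(8)]  # Colleges A-H
random.shuffle(colleges)
half = len(colleges) // 2
assignment = {
    c: ("Khan Academy resources" if i < half else "standard practice")
    for i, c in enumerate(colleges)
}

for college, condition in sorted(assignment.items()):
    print(f"{college}: {condition}")

# Every student in a college shares that college's condition, so the
# outcome analysis must account for the clustered assignment rather
# than treating students as independently randomized.
```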
Formative evaluation
Funded by the Bechtel Foundation, the Math in Common evaluation has been using formative evaluation in the initiative’s early years to document challenges and successes across 10 districts and to report the program’s progress toward its objectives. Each of the more than 10 evaluation reports draws on data from interviews, surveys, focus groups, and other sources to examine a feature of immediate interest to the districts, such as professional learning structures, curriculum use, or site leadership. The frequency of the reports allows program participants and the funder to understand how implementation is progressing and to make adaptations as necessary.
Mixed methods
Many WestEd evaluations combine formative and summative components to answer questions about program outcomes and to provide context for them. For example, our evaluation of the Kansas Multi-Tier System of Supports (MTSS) included interviews, focus groups, and document review to understand how MTSS was implemented across schools in the state and to document who was involved in implementation. This information was cycled back to the program implementation team to guide continuous improvement. Additionally, the evaluation used quantitative data, from both surveys and a school-level state assessment, to answer questions about how MTSS contributed to student outcomes.
WestEd’s Center for Prevention & Early Intervention led the evaluation of sites identified by the California Department of Public Health (CDPH) under the Maternal, Infant, & Early Childhood Home Visiting (MIECHV) Program competitive grant. The evaluation assessed program outcomes and impacts on families, providers, and communities through on-site interviews with key staff and community partners, family focus groups, family surveys, and multiple surveys of home visitors and program leadership. Findings were shared in written reports to CDPH and the sites, as well as in two-page Issue Briefs and infographics designed for use by state and county programs.