This module is designed to help you prepare to monitor and assess the implementation process and client health outcomes at your site. Monitoring and evaluation measures should be considered at the beginning and throughout the implementation process, not just at the end. You will want to collect data on the process of implementation, outcomes of the program on participants (clients, staff, partners, etc.), and the long-term impact of the program on the population.
Describe tools and mechanisms to monitor and evaluate the implementation process, outcomes, and impact.
Monitoring
Monitoring involves collecting and analyzing data and using this data to inform decisions from program onset and throughout the program’s life course.
Regularly collecting information on how program tasks and activities are carried out will indicate whether the program is being implemented as planned, identify barriers to implementation (e.g., barriers to enrollment or program utilization, staff concerns), and provide opportunities to make the modifications or adaptations necessary for successful implementation. Program monitoring will also allow you to identify milestones, lessons learned, and successes in both the implementation process and its outcomes for clients, staff, and partners.
Monitoring Tools
Organizations that have previously implemented the Gabby System have used monthly implementation logs to track client enrollment and engagement. Staff members involved in enrolling clients complete these logs to record the date of enrollment, the number of clients who were introduced to the Gabby System during a staff-client encounter, and the number of clients who chose to enroll. Keeping track of this information allows you to visualize and track your progress, understand client needs, and analyze your organizational and client resources.
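If your site keeps its implementation log as a simple spreadsheet, a short script can tally enrollment by month. The sketch below is illustrative only and assumes a hypothetical CSV export with columns named encounter_date, clients_introduced, and clients_enrolled; it is not a prescribed Gabby System file format.

    # Illustrative sketch: tally a monthly implementation log kept as a CSV.
    # Column names (encounter_date, clients_introduced, clients_enrolled)
    # are hypothetical, not a prescribed Gabby System format.
    import csv
    from collections import defaultdict

    introduced = defaultdict(int)
    enrolled = defaultdict(int)

    with open("implementation_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            month = row["encounter_date"][:7]  # e.g., "2023-04" from "2023-04-18"
            introduced[month] += int(row["clients_introduced"])
            enrolled[month] += int(row["clients_enrolled"])

    for month in sorted(introduced):
        rate = enrolled[month] / introduced[month] if introduced[month] else 0.0
        print(f"{month}: {introduced[month]} introduced, {enrolled[month]} enrolled ({rate:.0%})")

A running summary like this makes it easy to spot months when introductions were high but enrollment lagged, which may signal a barrier worth discussing with frontline staff.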
When collecting data on the process of implementation, you will want to know about the site staff's experiences with implementation and any additional resources you may need to provide. Feedback from frontline staff can assist when making modifications to improve the implementation process. This feedback can be collected during staff meetings or morning team huddles, through a staff survey, via a weekly suggestion box, or through other channels.
The Gabby System allows you to view data about client usage. Through the Administrative Page (refer to Module 3, Step 6), you are able to view the amount of time each client spends on the Gabby System, the health topics that each client flagged, the health topics that each client discussed with the Gabby System, and each client's stage of change for each respective health topic. These data can point you to client needs and to the resources clients might require for optimal health and well-being. For example, if you see that several women have flagged topics within a certain health domain (e.g., social-emotional well-being), you may choose to hold educational sessions or workshops on mental health and associated illnesses, or on other topics in the social-emotional well-being realm.
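As one illustration, if the Administrative Page data can be exported or transcribed into a spreadsheet, a short script can show which health domains clients flag most often. The sketch below assumes a hypothetical CSV with client_id, health_domain, topic, and stage_of_change columns; the actual export format of the Administrative Page may differ.

    # Illustrative sketch: count flagged topics by health domain from an assumed
    # CSV export (client_id, health_domain, topic, stage_of_change). These column
    # names are hypothetical; check the Administrative Page for what is available.
    import csv
    from collections import Counter

    domain_counts = Counter()
    with open("gabby_usage_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            domain_counts[row["health_domain"]] += 1

    # Domains flagged most often can suggest where workshops or referrals may help.
    for domain, count in domain_counts.most_common():
        print(f"{domain}: {count} flagged topics")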
Evaluation
Evaluation asks the question: Did the program do what it was intended to do? There are two key populations you may consider in your evaluation: staff and clients. Collaborating partners may also be considered when the effort involves multiple stakeholders outside of your organization. Evaluating staff feedback helps ensure that resources are adequate and allocated appropriately. To understand whether the Gabby System improved the health of users, you will want to evaluate the health outcomes of clients who used the system and their experiences while using it. There are three main types of evaluation that can guide your evaluation efforts: process, outcome, and impact.
Process Evaluation
A process evaluation helps determine which program components were implemented according to plan, which were not, and why. It is important to monitor the implementation process throughout the life course of the program to allow for timely adjustments during implementation.
Process evaluation questions to consider include:
How was the program implemented? Note: Perspectives of clients and staff are useful.
Were the program activities implemented as planned?
Did the program activities reach the population they were intended for?
Was the program delivered to all participants in the same way?
Outcome Evaluation
An outcome evaluation measures the changes in health outcomes that are attributable to the program, such as how the system is improving pregnancy and birth outcomes for your clients, behavior changes made as a result of the Gabby System, and increased knowledge gained from using the Gabby System.
Outcome evaluation questions to consider include:
To what extent were the goals of the program met?
What worked? What didn’t work?
Were there any unexpected outcomes?
What behavior changes did clients make as a result of the program?
What percentage of clients moved from pre-contemplation or contemplation to action or maintenance? (A worked example follows this list.)
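A minimal sketch of this calculation, assuming you have each client's stage of change at enrollment and at follow-up for a given topic (for example, transcribed from the Administrative Page); the records shown are made up for illustration.

    # Illustrative sketch: percentage of clients who started in pre-contemplation or
    # contemplation and reached action or maintenance. The records are made up.
    records = [
        ("client_a", "pre-contemplation", "action"),
        ("client_b", "contemplation", "contemplation"),
        ("client_c", "contemplation", "maintenance"),
        ("client_d", "preparation", "action"),
    ]

    starting = {"pre-contemplation", "contemplation"}
    target = {"action", "maintenance"}

    eligible = [r for r in records if r[1] in starting]
    moved = [r for r in eligible if r[2] in target]

    pct = 100 * len(moved) / len(eligible) if eligible else 0.0
    print(f"{len(moved)} of {len(eligible)} eligible clients reached action/maintenance ({pct:.0f}%)")
    # Here: 2 of 3 eligible clients, or about 67%.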
Impact Evaluation
An impact evaluation assesses long-term outcomes of the program with a focus on the population.
Impact evaluation questions to consider include:
What are the differences in the infant mortality rate, and can these differences be attributed to the program?
What are the differences in the maternal mortality rate, and can these differences be attributed to the program?
Logic Models
A logic model is an evaluation tool designed to outline program activities and outcomes. This visualization can assist you and other program stakeholders in understanding the program and in identifying gaps and the resources needed for successful implementation.
Steps for building a logic model:
Conduct a needs assessment.
What are the needs and assets of your site?
What is the status of stakeholder engagement? Who are your stakeholders? How involved are they?
Assess your priorities.
Does the program align with your site’s values and mission?
What are your intended outcomes of the program?
Consider inputs.
What tools and resources will you be able to dedicate to achieve the goals of the program?
List activities.
What activities are involved in the program? What will staff do during program implementation?
Determine outputs.
What direct, tangible products will result from the activities?
Establish outcomes.
What short-term outcomes do you expect to see as a result of the program?
What medium-term outcomes do you expect to see?
What long-term outcomes do you expect to see?
Keep in mind that outputs are the tangible results of activities, while outcomes are the overall desired results of the program.
Your inputs should be linked to activities, and activities should be linked to outputs. A logic model helps you visualize whether there is a disconnect in this chain and, if so, where it occurs.
Below is a sample logic model of an organization that has previously implemented the Gabby System.
Key Takeaways:
Monitoring and data collection are necessary to improve implementation processes, better meet client needs, and enhance best practices.
Process, outcome, and impact evaluations are invaluable not only for understanding implementation activities but also for demonstrating the program's effect.