I alone cannot change the world, but I can cast a stone across the waters to create many ripples.

Mother Teresa

Got Outcomes?

Let’s face it. A lot of people are afraid of program evaluation. There are countless explanations we use to rationalize not evaluating our programs. How often have you heard (or said):

  • “I’m not an evaluator.”
  • “Evaluation is expensive. We don’t have those kinds of resources.”
  • “Participants love the program, so it must be working.”
  • “Our program is complicated; it’s impossible to measure outcomes.”

To make matters worse, a lack of staff expertise in evaluation is sometimes accompanied by the fear that the program won’t show the results we expect. Then what?

I understand the reluctance and the fear.

Yet as concepts like results-based accountability become the norm and funders increasingly require evaluation plans in grant proposals, it’s important to step back and consider what can be gained from program evaluation. Most importantly, evaluation provides critical information about whether your program is achieving its desired results.

We all know resources are limited. Isn’t it better to know sooner rather than later whether those resources are being used effectively? And for those whose work focuses on our most vulnerable populations, there is even greater urgency to ensure that programs produce positive outcomes.

So the next time you’re tempted to fall back on all the reasons you can’t evaluate your program, think about all you have to gain if you do. Remind yourself that evaluating your program and tracking your outcomes will pay off by helping you to:

  • Clarify program objectives. What are you trying to accomplish? How will you define success?
  • Solidify support and raise money. People, including funders, want to support successful efforts. Demonstrating positive outcomes is an invaluable fundraising tool.
  • Monitor your program. Are you really doing what you said you would do?
  • Make informed decisions about program changes. Do you need to alter the program or make mid-course corrections? Better to make those changes before the opportunity to get back on track is lost.
  • Identify unintended program effects. Are there “side effects” that need to be addressed?
  • Assess overall effectiveness. Did the program work as intended? Are program recipients better off? If so, how?
  • Assess program cost versus benefit(s). Do the program outcomes justify the investment?
  • Do even more good. If you can show your program works, you have the opportunity to increase your reach and broaden your impact.

So, got outcomes?

Animal Assisted Intervention International Conference

On May 15, 2016, Lisa was in Prague, Czech Republic, conducting two workshops at the Animal Assisted Intervention International Conference. During “Pairing veterans and shelter dogs: A comparison of two different program models,” Lisa noted that the number of animal-assisted programs pairing veterans with shelter dogs is growing, yet there is great variability in program models and little information on effectiveness. Program goals vary and may include helping veterans re-enter civilian life, develop job skills, reduce PTSD symptoms, and gain social support. Lisa’s presentation outlined findings from evaluations of two different types of programs that pair veterans and shelter dogs: Soldier’s Best Friend (SBF) and VALOR. She compared the similarities and differences between the two program models, presented preliminary research findings from interviews with veterans in both programs, and closed with lessons learned and considerations for implementing similar programs.

During “Got Outcomes?” Lisa emphasized that the success of animal-assisted programs depends on providers’ ability to clearly articulate program elements, implement programs as designed, and track outcomes to ensure the program delivers intended results. Participants learned how to use a variety of tools (logic models, fidelity assessments, performance indicators) to develop, implement, and assess programs. Lisa used program scenarios to help participants practice building logic models and selecting performance indicators to evaluate the effectiveness of their programs. She also discussed the importance of measuring program fidelity and provided real-world examples of ways to assess whether programs are being delivered as intended.
