Program and Service System Evaluation (PSSE)
Jessica Costeines, MSW, Evaluation Assistant
Amy Griffin, MA, Senior Evaluation Consultant
Doreen Fulara, MSW, Evaluation Consultant
Emily Melnick, MA, Evaluation Consultant
Diane Purvin, Ph.D., Evaluation Consultant
Joanne Richardson, BS, Program Coordinator
Ida Salusky, MA, Doctoral Psychology Fellow
Charlene Voyce, MPH, Evaluation Coordinator/Research Associate
Evaluation has been an integral part of The Consultation Center since its inception more than 30 years ago. The overall mission of the Center's evaluation services is to enhance scientific knowledge about a given question, program, or service, and to inform public policy. Evaluations are conducted by Center faculty and professional staff and are tailored to the needs of an individual client or project. Both qualitative and quantitative methods are employed, and in many instances they are combined in a single design to understand a particular issue in context. Evaluations are conducted on a local, regional, or national basis and may involve single- or multi-year assessments. Our evaluation activities include needs and resource assessments, process and outcome evaluations, focus group studies, cost-outcome evaluations, service system analyses, and assessments of community coalitions.
Evaluations conducted in collaboration with The Consultation Center faculty and staff combine scientific rigor with the practical realities of implementing feasible evaluations that are responsive to local needs. As a result, evaluations are consistently useful: they inform practice, program planning and management, and policy development. Center staff and faculty also provide training and technical assistance to community-based organizations to enhance their evaluation capacity. In addition to designing evaluations that yield data useful for understanding the processes and outcomes of a given project or organization, we strive to develop evaluation infrastructures that are sustainable beyond our tenure, so that programs and organizations can continue to use data to inform their program and policy planning.
Our team takes a collaborative approach to evaluation in which we join with key stakeholders (e.g., funders, policy makers, program staff, consumers) to develop data collection variables, methods, and outcomes. We first work with stakeholders to create measurable language for identifying and articulating the goals, objectives, indicators, and outcomes that relate to the overall vision of their work. We then create structures to collect and report key process and outcome data that track progress toward their outcomes and goals. Data are analyzed and fed back to stakeholders at regular intervals, both through presentations and in written form through mechanisms such as quarterly reports and newsletters. Our presentations and reports are developed to be accessible and relevant to multiple stakeholders, including community members, program staff, and policy makers. The creation of structures for ongoing data collection and reporting provides an opportunity for continuous quality improvement and also educates stakeholders on how to interpret and effectively use data. Our ultimate goal in any endeavor is to build the capacity of community-based organizations and their funders to collect and use data to inform program and policy decision making. Our evaluation results have been used to inform funders about structures for program replication, to provide process and outcome data that help secure additional funding for sustainability, and to offer lessons learned that shape policies and procedures for programming. Additionally, faculty and staff from the PSSE area partner with our community collaborators to prepare manuscripts that highlight strategies for bridging the gap between science/evaluation and practice.
Examples of current evaluation projects include: