Evaluation Practice for Collaborative Growth: A Guide to Program Evaluation with Stakeholders and Communities


Evaluation Practice for Collaborative Growth

A Guide to Program Evaluation with Stakeholders and Communities

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2018

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data

Names: Bakken, Lori L., author.

Title: Evaluation practice for collaborative growth : a guide to program evaluation with stakeholders and communities / by Lori L. Bakken.

Description: New York, NY : Oxford University Press, [2018] | Includes bibliographical references and index.

Identifiers: LCCN 2017057063 (print) | LCCN 2017059395 (ebook) | ISBN 978-0-19-088538-0 (updf) | ISBN 978-0-19-088539-7 (epub) | ISBN 978-0-19-088537-3 (pbk. : alk. paper)

Subjects: LCSH: Evaluation research (Social action programs)

Classification: LCC H62 (ebook) | LCC H62.B2865 2018 (print) | DDC 658.4/013—dc23

LC record available at https://lccn.loc.gov/2017057063

9 8 7 6 5 4 3 2 1

Printed by WebCom, Inc., Canada

CONTENTS

Preface

Acknowledgments

About the Authors

PART ONE: Prepare

1. Thinking Like an Evaluator

Purchasing a Car: An Example of Evaluation in Everyday Life

Major Components of an Evaluation Process

Program Planning and Evaluation

Summary

References

2. Acquiring Requisite Knowledge and Skills

Roles of Evaluators

Evaluation Standards

Ethics and Human Subjects Protections

Evaluation Competencies

Communication and Negotiation

Building Partnerships and Capacity for Evaluation

Effective Partnerships and Collaborations

Inclusive Practice in Evaluation

Summary

References

3. Choosing an Evaluation Approach

Philosophical Perspectives that Influence Evaluation

Expertise-oriented Approaches

Consumer-oriented Approaches

Program-oriented Approaches

Decision-oriented Approaches

Participant-oriented Approaches

Systems Approaches

Approaches and the Evaluation Tree

Matching Approaches with Evaluation Questions

Examples of Integrated Approaches to Evaluation

Summary

References

4. Planning a Program Evaluation

Understanding the Evaluation’s Context

Identifying and Engaging Stakeholders

A Program’s Purpose, Goals, and Activities

Using Theory to Focus an Evaluation

Evaluability Assessment

An Evaluation’s Purpose and Use

Evaluation Questions

Evaluation Proposals and Contracts

Summary

References

PART TWO: Design

5. Designing a Program Evaluation

Qualitative Designs

Case Study Designs

Quantitative Designs

Study Designs for Evaluating Contribution

Threats to Internal and External Validity in Quantitative Studies

Mixed Methods Designs

Complexity in Study Designs

Summary

References

6. Choosing Samples, Sampling Methods, and Data Sources

Data Sources and Units of Analysis

Sample Size, Selection, and Strategies

Defining Inclusion and Exclusion Criteria for Your Sample

Sampling Bias

Sample Size

Sampling Considerations in Relation to Study Designs

Strategies for Participant Recruitment and Retention

Ethical Practice

Sampling and Data Collection Plans and Protocols

Summary

References

PART THREE: Conduct

7. Collecting, Storing, and Tracking Information

What and Who Defines Credible Evidence?

Survey Design and Development

Psychometric Tests

Interviews and Focus Groups

Observations and Video-recordings

Checklists and Rubrics

Maps and Drawings

Photographs

Existing Sources

Capturing Accurate and Reliable Information

Designing Databases for Electronic Storage

Tracking Information

Summary

References

8. Analyzing and Interpreting Quantitative Data

Cleaning Data and Handling Missing Data

Matching Statistics to Quantitative Study Designs

Variables, Constants, and Levels of Measurement

Descriptive Statistics

Simple Plots

Statistical Assumptions

Evaluation Hypotheses

Sample Size, Power, and Effect Size

Aligning Statistical Analyses with Analytical Designs

Multivariate Analyses

Statistical Tests for Multiple Dependent Variables

Nonparametric Statistics

Reporting Statistics and Statistical Analysis

Working with Statisticians and Building Your Own Capacity

Summary

References

9. Analyzing and Interpreting Qualitative Data

Qualitative Thinking

Methodological Approaches and Analytical Strategies

Qualitative Approaches and How They Influence an Analysis

Ethical Issues in Qualitative Analysis

Preparing for the Analytical Process

Qualitative Analysis

Managing Data in Qualitative Analysis

Summarizing, Organizing, and Describing Qualitative Information

Summary

References

PART FOUR: Report and Prepare Again

10. Reporting and Disseminating Findings

Written Evaluation Reports

Brief Written Reports

Oral Presentations

Storytelling

Electronic and Social Media

Acting

Publications in Professional Journals

Summary

References

11. Preparing for Complexity in Evaluation

From Program Theory to Systems Theory

Simple, Complicated, and Complex Situations and Problems

Static, Dynamic, and Dynamical Change

Realistic Evaluation for Complicated Situations and Dynamic Change

Systems Thinking and Complexity

Developmental Evaluation

Social Justice and Inclusive Evaluation Practice

Preparing for the Future: Evaluation Skills for a Changing Field

Summary and Implications for Evaluation Practice

References

Index

PREFACE

Practitioners across professions are continually faced with funders’ growing requirements for information that demonstrates a program’s worthiness of financial support and value to those it serves. Low programming budgets often prevent small organizations, especially nonprofits, from hiring professional evaluators to meet these requirements, so practicing professionals must have some understanding of program evaluation and the ability to design and conduct one themselves.

This book is a resource for readers who want to build their capacity for program evaluation and be guided through its seemingly daunting and elusive process. It is written for those who develop or coordinate programs and work with people, partners, and communities in disciplines such as public health, social work, education, environmental sciences, and community development. It provides a fundamental understanding of program evaluation concepts, strategies, and practices, with an emphasis on those that have been most useful to me and my collaborators. It therefore fills a gap among books on program evaluation through its focus on basic concepts, a simple writing style, familiar examples, and practical tools.

Throughout the book, I encourage readers to collaborate and partner with a program’s key stakeholders during the evaluation process so that the final product is both useful to and used by them. Collaborations and partnerships in evaluation can trigger disagreements and controversy among stakeholders with competing interests. So, this book prepares readers for some of the ethical and political challenges that may be encountered when conducting a program evaluation and provides strategies for how to handle them in today’s complex sociopolitical environment. At times, the book’s contents may seem a bit advanced for those who are not specialists in evaluation. Some advanced concepts are intentionally incorporated to build a reader’s evaluation capacity and avoid misapplied concepts, oversimplified approaches, or easy strategies that reduce the accuracy of information and potential power of evaluation.

Although this book is designed and written as a resource for practitioners, it can be used to support courses, workshops, and other capacity-building efforts in which practitioners (e.g., nonprofit directors or program coordinators) and students learn to develop, conduct, and lead a collaborative program evaluation. The book’s contents are applicable in disciplines such as community psychology, community leadership, education, public health, social policy, citizen science, environmental sociology, and agroecology. Prior knowledge of evaluation is not assumed; however, readers who have knowledge or skills related to research, program planning, or data analysis may find some of the book’s contents easier to grasp than novice readers.

BOOK’S CONTENT AND ORGANIZATION

The book is organized into four major sections. Chapters One through Four provide readers with the foundational knowledge and skills necessary to plan and lead or facilitate a collaborative program evaluation. Chapters Five through Nine represent the “doing” steps of an evaluation; in other words, the components of an evaluation that prepare readers to answer an evaluation’s questions and achieve its purpose. Chapter Ten describes various ways of reporting and disseminating an evaluation’s findings. The final chapter describes emergent trends in the evaluation field and the new knowledge and tasks they suggest. Although the book is structured in parts that reflect the evaluation process, each chapter was written to stand on its own for readers who want to skip to the chapters of most interest to them. Readers will find worksheets, organizational tools, references, and illustrations throughout the book to facilitate their evaluation efforts. Evaluation competency is best acquired through education and experience. I invite readers to build and enrich their competence, confidence, and capacity to do a collaborative program evaluation so that they are better prepared to meet requirements for information, make important decisions about a program, and use an evaluation’s findings to improve it.

MY EXPERIENCE

This book reflects my nearly 30 years of experience as an internal and external evaluator. I have evaluated educational and, to a lesser extent, service programs both locally and nationally. Most recently, my experiences have provided me with opportunities to work more closely with nonprofit organizations and community residents. In doing so, I have worked alongside Cooperative Extension educators, community developers, coalition coordinators, nonprofit directors, and social workers who have been inspirational in their efforts to evaluate the programs they develop and provide. These experiences and relationships have been some of the most rewarding of my career, and they have motivated the capacity-building emphasis of this book.

My practice philosophy advocates collaboration and stakeholders’ involvement in all phases of evaluation planning, implementation, reporting, and dissemination; it continually emphasizes practices in which one plans an evaluation during a program’s planning process. My work draws from a range of evaluation approaches, including objectives-based, program-oriented, participatory, decision-oriented, expertise-oriented, and systems evaluation. I have performed both retrospective and prospective program evaluations using qualitative, quantitative, and mixed-methods approaches. As an academic, I encourage scholarship and evidence-based practice; therefore, I often have integrated program evaluation with educational research in ways that are fruitful and informative to both practice and the field more broadly. It is through this book that I share these experiences and highlight the tools and strategies I have found to be most useful when conducting program evaluations.

ACKNOWLEDGMENTS

The author would like to acknowledge the individuals without whose help and encouragement this book would not have been written. I would like to thank David Follmer who encouraged me to realize a career goal and supported me throughout the process. Thank you for your kind support and patience as I struggled with the ebbs and flows of writing.

Thank you also to Cynthia Jasper, who wouldn’t let me forget that this effort was important for a variety of reasons and encouraged me to stay on task. My husband, Curtis Olson, is always a believer in me and a constant supporter of every task to which I set my mind. Curt, I really appreciate your unending faith in me and my ability to get the job done. Thank you for being by my side during this journey. I also wish to thank my parents, who have been anxiously awaiting my announcement that this book is complete. I suspect they look forward to sharing their daughter’s accomplishment with family and friends. Last, I wish to thank all the students, collaborators, and colleagues who inspired me to write this book and kept me engaged as a lifelong learner. Their questions, challenges, and ideas were never-ending and helped me create a book that is better because of them.

ABOUT THE AUTHORS

Dr. Lori Bakken, MS, PhD, Professor, University of Wisconsin-Madison, has over 25 years of experience leading and conducting evaluation studies in the medical, public health, and education fields. The early part of her academic career focused on evaluating and improving laboratory performance in medicine and public health. In 1995, her focus shifted to designing, implementing, and evaluating educational programs to improve the quality and quantity of clinical research conducted in the United States. Over the next decade, she developed one of the nation’s first, and highly successful, education and career development programs for clinical researchers and established a National Institutes of Health-funded research program to study the career development of clinician-scientists. In 2010, she joined the University of Wisconsin (UW) School of Human Ecology’s faculty and assumed the role of evaluation specialist for the University’s Cooperative Extension Services. In this role, she began working more closely with community and nonprofit organizations, thereby expanding her rich experiences and expertise in evaluation. In 2013–2014, Dr. Bakken took a brief leave from the UW to serve as the Director for Teaching and Research in Education Outcomes Assessment for the Center for Continuing Education in the Health Professions at the Dartmouth-Hitchcock Medical Center and Dartmouth College. Following a productive year at Dartmouth, Dr. Bakken returned to the University of Wisconsin-Madison, where her studies currently focus on evaluations that utilize systems thinking and collective approaches to demonstrate organizational and community impact. She is also interested in how gatekeeping influences inclusive evaluation practice, specifically at the intersections of food security and health. Dr. Bakken is a member of the American Evaluation Association, and she holds degrees in Medical Technology (BS, 1980), Medical Microbiology (MS, 1991), and Continuing and Vocational Education (PhD, 1998) from the University of Wisconsin-Madison.

Vikram Koundinya is an Evaluation Specialist in the Department of Human Ecology at the University of California (UC)-Davis and UC Cooperative Extension. Vikram’s research focuses on program evaluation, needs assessment, and other extension educational processes. His teaching includes building the evaluation capacity of UC Cooperative Extension and UC-Davis colleagues through workshops, webinars, educational materials, and one-on-one consulting. Prior to joining UC-Davis, he served as an Evaluation Specialist at the Environmental Resources Center of University of Wisconsin (UW)-Extension, where he worked with extension educators and project partners to plan, develop, and implement evaluations of agricultural, conservation, natural resources, and environmental programs. Before joining UW-Extension, Vikram worked as a postdoctoral fellow at the University of Connecticut and Iowa State University, supporting the evaluation of agricultural, extension, and economic development programs. He holds a bachelor’s degree in Agriculture from A.N.G.R. Agricultural University in India, and master’s and doctoral degrees in Agricultural and Extension Education from A.N.G.R. Agricultural University and Iowa State University, respectively.


PART ONE

Prepare

Program evaluation is as much a process as a way to make decisions about a program’s need, value, worth, or fidelity. Therefore, an adequate background and preparation for conducting a program evaluation are essential. The time and energy you invest in preparing for a program’s evaluation will go a long way toward its success and utility to stakeholders. Chapters One through Four equip you with the mindset, knowledge, and skills you will need to adequately prepare for a program evaluation. These preparations include thinking like an evaluator; understanding your role in an evaluation; conducting the evaluation using high standards of professional, inclusive, and ethical practice; and communicating and negotiating with stakeholders to determine the evaluation’s purpose, use, and questions. This part of the process provides you with the clues and information you need to select an evaluation approach. There are multiple approaches to evaluation, and it is important that you understand their philosophical underpinnings and theoretical foundations in order to select one or more of them to guide the evaluation appropriately. You also will learn that some programs are not necessarily ready to be evaluated, so you will acquire the background needed to determine whether to proceed with a program evaluation. You also will be given guidance on preparing an evaluation proposal so that the agreement between you and the evaluation’s stakeholders is very clear. Collectively, these four chapters will prepare you for the next phase of the process, which is to determine the details of the evaluation’s design.

Thinking Like an Evaluator

In today’s world of accountability, it is becoming increasingly important for social science practitioners to evaluate the programs and interventions they develop and implement. Moreover, government agencies and foundations that typically fund these programs are increasingly requesting that social science professionals demonstrate their programs’ impacts on and value to communities through evidence-based practices. An evaluation skillset, therefore, is essential for practitioners in service-related fields, such as education, nonprofit management, social work, or public health (Davis, 2006).

Although external evaluators often are called upon to perform this service, social science practitioners must develop the requisite knowledge and skills to engage in evaluation practice themselves. Doing so enhances the internal evaluation capacity of their organizations and respective fields (Stevenson, Florin, Mills, & Andrade, 2002). Active participation in evaluation activities increases a sense of ownership, which promotes greater use of evaluation results in decision-making and program implementation, thereby facilitating successful outcomes (Hoole & Patterson, 2008; Mercier, 1997). Consequently, practitioners who incorporate evaluative thinking into daily professional practice experience and acknowledge the benefits of evaluation (Taut, 2007). Building practitioners’ capacity to conduct evaluations creates sustainable practices of informed decision-making and action planning, which, in turn, foster high-quality, effective public services (Preskill & Boyle, 2008).

With that background, let us take a moment to engage your evaluative thinking skills. Suppose you and a group of friends were given two types of freshly baked chocolate chip cookies and asked to pick the one you liked most. What criteria would you use to select your favorite cookie? Do you prefer crispy cookies or soft ones? Do you like them loaded with chocolate chips? Do you prefer a rich, buttery flavor over a less rich cookie? Do you prefer chocolate chunks over small chocolate chips? How might you negotiate differences of opinion among your friends? How would you assess each cookie so your personal biases don’t interfere with your judgment about the best cookie? What evidence would you collect, and how would you collect it, to determine which cookie is best? As we go about our daily lives, we constantly use evaluative thinking to assess the things and phenomena we encounter. Cookies are just one example. We use these skills when we purchase groceries, buy a new car, determine whether we need or want to learn a new skill, select a life partner, and so on. We apply these same thinking patterns when we evaluate programs. It’s just that programs and interventions in the social sciences are more complex, because they involve people and communities.

This book will expand your evaluative thinking skills and guide you through a four-step evaluation process. It is designed to provide you with a fundamental understanding of evaluation approaches, methods, tools, and practices so that you will become more proficient at evaluating your own programs and interventions. This chapter begins by introducing you to a few basic evaluation concepts using an example from everyday life. I will then use these same concepts to shift your thinking about program evaluation from an isolated activity to one that is intricately intertwined with program planning.

PURCHASING A CAR: AN EXAMPLE OF EVALUATION IN EVERYDAY LIFE

When was the last time you purchased a car? What did you do before you decided which car to buy? What type of information did you use to decide among several makes and models of cars? Why did you purchase the type of vehicle you chose? As you think about the answers to these questions, I envision a process not unlike my own. It goes something like this.

I decide that I like the nine-year-old mini sport utility vehicle (SUV) that I currently own, but it is time to purchase a new one. As models change over time, I also decide that I want to explore similar vehicles made by other manufacturers. I begin with a few ideas about the car’s features. I like the fact that SUVs sit higher than regular cars and have more cargo space. I also like the way they can accommodate my recreational gear (e.g., skis, bikes, kayaks). My current SUV has a standard transmission, of which I am growing weary, so I decide I would like an automatic transmission. My current SUV is dark green, which shows dirt, so I would like a lighter-colored car, preferably beige. I also am getting older, and I like the standard features that come with many midsized cars (air conditioning, power door locks, CD/MP3 player, etc.), so I keep these features on my wish list.

At this point, I have given some thought to the type and features of the vehicle I am looking for, so I go online to compare models that have these features. While online, I also look at prices, maintenance costs, and features that are standard on each make and model. I also like my cars to have a “small” feel because I am a petite woman, so I look at the overall dimensions of each vehicle. “Perks,” such as built-in cargo racks, also are factored into my comparison because these features can be expensive and make a difference in the overall purchase price.

After comparing models and narrowing my selection to two vehicles, I decide to test drive each of them. During my test drive, I climb steep hills, navigate tight corners, and accelerate quickly to interstate speeds. I notice that one of the models feels a bit underpowered, but it has the smaller feel I am looking for. It is also a bit less expensive than the more powerful model, so I consult a few friends and acquaintances who own that model to get their opinions on its performance. After pondering the information, I decide to begin my negotiations with a salesperson. Let’s pause now and think about the process thus far and what I have done.

I determined the criteria for my vehicle purchase when I decided that I wanted a beige SUV with an automatic transmission and cargo capacity to accommodate my recreational activities. Each of these features (i.e., beige color, automatic transmission, and cargo capacity) is a criterion that ultimately would help me compare models and make a decision, or judgment, about the car. When I did the online comparison, I collected evidence to facilitate my decision and acquired information about each vehicle’s cost, maintenance, and features. I collected additional evidence about each vehicle’s engine power when I did the test drive and consulted the opinions of friends and acquaintances. In summary, I collected evidence and compared it to my criteria to make a judgment about the vehicle I intended to purchase.
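To make these elements concrete, the following is a minimal sketch, in Python, of the car-purchase decision expressed as a weighted criteria matrix. The criteria names, weights, and scores are hypothetical values chosen for illustration; they are not drawn from the book.

```python
# A minimal sketch of evidence, criteria, and judgment as a weighted
# decision matrix. All criteria, weights, and scores are hypothetical,
# chosen only to mirror the car-purchase example above.

CRITERIA = {  # criterion -> weight (relative importance, summing to 1.0)
    "automatic transmission": 0.25,
    "cargo capacity": 0.25,
    "engine power": 0.20,
    "purchase price": 0.20,
    "preferred color": 0.10,
}

# Evidence gathered online and from test drives, scored 1 (poor) to 5 (excellent).
evidence = {
    "Model A": {"automatic transmission": 5, "cargo capacity": 4,
                "engine power": 2, "purchase price": 4, "preferred color": 5},
    "Model B": {"automatic transmission": 5, "cargo capacity": 5,
                "engine power": 4, "purchase price": 3, "preferred color": 2},
}

def judge(scores: dict) -> float:
    """Combine evidence with criteria weights into a single judgment score."""
    return sum(CRITERIA[criterion] * score for criterion, score in scores.items())

for model, scores in evidence.items():
    print(f"{model}: {judge(scores):.2f}")

# The judgment: the model whose evidence best fits the weighted criteria.
best = max(evidence, key=lambda model: judge(evidence[model]))
print("Best fit:", best)
```

The same pattern (explicit criteria, systematically collected evidence, and a transparent rule for combining them into a judgment) scales up to program evaluation, where stakeholders negotiate the criteria and their relative importance together.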

Evidence, criteria, and judgment are the basic elements of every evaluation, and you use these concepts every day as you go about your lives. You are evaluators when you purchase fruits and vegetables in the grocery store, select a physician to oversee your health care, meet a new neighbor, or taste a friend’s chocolate chip cookies just out of the oven. If you do it every day, why does it seem so foreign and difficult when, in your professional roles, you are asked to evaluate the programs you or others develop? The answer to this question is multifaceted. First, a social science practitioner’s language, knowledge, and skills for evaluation practice typically have not been developed. Second, evaluation is an interdisciplinary field that has grown tremendously in recent years and, in doing so, has adopted highly rigorous and sophisticated approaches, methods, and tools. Although this book will not cover the evaluation field in any depth (other textbooks do that very well), it will provide you with an overview of program evaluation so that you can incorporate some evaluation practices into your programming efforts and build your personal capacity to work with professional evaluators.

MAJOR COMPONENTS OF AN EVALUATION PROCESS

Over the years, the evaluation field has expanded and become increasingly sophisticated as the questions and problems addressed by programs and interventions have become more challenging and expansive in scope. The approaches, methods, and tools that evaluators use also have expanded as the field shifted from objectives-based approaches to those of complex systems. This book takes you through a four-part process of preparing for, designing, conducting, and reporting an evaluation (Figure 1.1).

Within this process are multiple tasks and activities that will require your attention. The preparation and design phases are the most time-consuming parts of the process, but they are critical to an evaluation’s success. To set the stage for subsequent chapters and provide you with an overview of program evaluation, the remainder of this chapter briefly describes the basic activities and tasks that are part of an evaluation.

[Figure 1.1. Four-step Evaluation Process: Prepare, Design, Conduct, Report.]

Prepare

The first step in preparing for any evaluation is to be certain you have the requisite background to plan and conduct the evaluation. This background includes knowledge of the professional and ethical standards that guide evaluation practice, the competencies needed to conduct an evaluation, and the skills to build and maintain partnerships and collaborations throughout the evaluation process. It is also important to know about the various perspectives and approaches that guide an evaluation’s design. Evaluation approaches are driven by the philosophies or worldviews that both you and your stakeholders bring to the process. Will your approach be based on program goals and learning objectives? Will it be theory driven? Will it advocate participation of all relevant stakeholders? Does it need to be designed in a way that will aid decision-making? The answers to these questions will become apparent as you engage with key stakeholders to gain an understanding of the evaluation’s context, purpose, and questions.

This part of the preparation process requires conversations with several individuals or groups of people who have a stake in the program or its evaluation (i.e., stakeholders). An examination of a program’s context also necessitates visits to the locations and communities in which the program is provided. Therefore, the preparation needed to design and conduct an evaluation is a time-intensive process that requires strong listening skills, excellent communication, the ability to negotiate successfully, keen observation, and patience. Often, the language and processes familiar to evaluators are unfamiliar and may feel intimidating to a program’s stakeholders; therefore, it is best to avoid jargon and explain concepts in ways that facilitate understanding. For example, evaluation concepts such as desired or expected “outcomes” are more easily elicited with questions such as, “What change do you expect, or is likely to happen, as a result of your program?” It often takes a series of conversations with key stakeholders to acquire the information necessary to evaluate a program. A worksheet I sometimes use to guide these conversations is presented in Box 1.1.
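As an illustration of this kind of translation, here is a minimal sketch of a jargon-to-plain-language mapping in Python. The prompts for “outcomes,” “purpose,” and “use” paraphrase questions given in this chapter; the “stakeholders” prompt and the fallback are hypothetical additions, and none of this reproduces the actual worksheet in Box 1.1.

```python
# A sketch of mapping evaluation jargon to stakeholder-friendly prompts.
# The "outcomes", "purpose", and "use" prompts paraphrase this chapter;
# the "stakeholders" entry and the fallback are hypothetical illustrations.

PLAIN_LANGUAGE_PROMPTS = {
    "outcomes": ("What change do you expect, or is likely to happen, "
                 "as a result of your program?"),
    "purpose": "Why would you like to do this evaluation?",
    "use": "How will you use the findings from the evaluation?",
    "stakeholders": ("Who cares about this program, and who will act "
                     "on what the evaluation finds?"),
}

def prompt_for(concept: str) -> str:
    """Return a jargon-free question for an evaluation concept."""
    return PLAIN_LANGUAGE_PROMPTS.get(
        concept,
        f"Tell me more about the program's {concept}.",  # generic fallback
    )

# Example: open a stakeholder conversation without evaluation jargon.
print(prompt_for("outcomes"))
```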

Once acquired, this information should be conveyed back to stakeholders in order to reach a mutual understanding about how a program is to be evaluated. Other considerations that are important in the preparation process are a clear understanding of everyone’s role in the evaluation, cultural norms and practices, protection of individuals and groups who will participate in the evaluation study, and adherence to evaluation principles and standards (American Evaluation Association, 2004; Yarbrough, Shulha, Hopson, & Caruthers, 2011). It is often helpful to conduct an evaluability assessment (Wholey, 2004) to be certain that critical elements of the preparation process are discussed.

Your early conversations with stakeholders should attempt to ascertain the evaluation’s purpose, use, goals, and guiding questions. The purpose can be determined with the question, “Why would you like to do this evaluation?” By asking, “How will you use the findings from the evaluation?” you can establish stakeholders’ intended use for the evaluation’s results. Evaluation questions can be determined with the question, “What would you
