Why do we care about the difference between evaluation and research? How does it influence our professional development or practical work as evaluators?
Thank you for sharing with us so many great resources!
In your response, you talked about how Scriven and others frame the difference between evaluation and research in contradictory ways (e.g., methodology, particularization versus generalization, and so on). I believe they are not the only evaluators/educators who have discussed the difference between evaluation and research.
So my question is: Why do we care about the difference between evaluation and research? How does it influence our professional development or practical work as evaluators?
– Suhan Yao
For the question "Why do we care about the difference between evaluation and research?" I think that Michael Quinn Patton (1998) has a very interesting answer: "The purpose of making such distinctions, then, must guide the distinctions made. In my practice, most clients prefer and value the distinction. They want to be sure they're involved in evaluation, not research. The distinction is meaningful and helpful to them, and making the distinction helps engender a commitment from them to be actively involved – and deepens the expectation for use."
I think what Patton is saying is that we care because the stakeholders care. Why do the stakeholders care? Probably because they are investing something (like time and money), so they want to be part of a solution, not an open-ended knowledge hunt with no goal of improving the thing being evaluated.
This influences the work of evaluators because it can pressure them to find a problem that needs fixing when there might not be one. If there isn't one, the stakeholders will probably question why the evaluation was conducted in the first place and may even feel that their time and money were wasted.
I very much agree with your summation and appreciate the Patton example. My own experiences parallel Patton's view and your description of stakeholder motives. Projects may be political, ego-driven, or needs-based; regardless of the motive, it is the evaluator's goal to enlighten stakeholders about the value and potential end goals of evaluation. Unfortunately, stakeholders have sometimes been misguided or have worked with under-trained evaluators. When this happens, negative connotations, misapprehensions, or unintended expectations can take hold around both evaluation and research processes.
I would also like to mention that sometimes, even in large corporate structures, research may be valuable and necessary alongside evaluation. While evaluation and research often diverge in their criteria, metrics, and end goals, they sometimes overlap. Many organizations, including nonprofits, government agencies, contracting firms, advertising agencies, the military, business corporations, small businesses, and startups, see the value in expanding their internal knowledge base. This can be done while business resources are in use, and post-briefs can be written to expand the knowledge of teams. These briefs may include pattern libraries, style guides, training tutorials, facilitator groups, sites for shared resources, collaboration in training meetings, lessons learned, documentation of newly learned code, or documented resources used in the project (just to name a few).
A lot of bigger businesses even have internal policies that promote or require research alongside everyday business. Google is an excellent example of a research and development culture: it has a policy of building innovative products and releasing unfinished or lightly evaluated products (in beta form) in an effort to release early, find bugs fast, and fix on the go. In some cases this has failed miserably, and in other cases it has been wildly successful (Google Wave, Google Glass, Google Search, and Google Earth are all part of this structure). Google, though, is only now really starting to appreciate the value of UI/UX designers; it initially focused heavily on A/B testing (science-driven choices) rather than a holistic approach to design, which has only recently trickled into its culture. My own experience very much reflects this, and there are a number of TED Talks and articles that describe this model in lengthy detail if anyone is interested, but I'm afraid I've veered off topic and would like to jump back to research vs. evaluation.
In your final paragraph you state that evaluators who "…plan and meet regularly with *primary* stakeholders may find themselves in a bind to address defensible criteria." However, if you use proven methodology and follow UX/project-management best practices, it is significantly less likely you will run into these issues. Please don't take these statements as facts; I'm only writing them to expand the conversation and to broaden this discussion about stakeholders and evaluation.
Signed contracts and business requirements all back this up: the client has to agree that those items are out of scope, but they may be an opportunity to expand the work, give the evaluator more money, and continue evaluations on those broader topics. It's important to note the words 'if' and 'opportunity' in the above sentences, because everything depends on how well the evaluator plans ahead. It also very much depends on how well an evaluator documents important meetings, follows up conversations with emails that provide a written record of those meetings, and gets approved, signed contracts and business requirements. The word 'opportunity' is important because out-of-scope items are not the end of the world; they are certainly not a bad thing, and they definitely should not be addressed within a budget where they were not agreed upon (if you do, you will probably run out of money before completing your original goal).