Learning the art of evaluation: presume the presence of politics
Abstract and Keywords
This chapter endorses the view that every evaluation is a political act, political in the sense that there is continuing competition for stakes among the clients of an evaluation. It examines where politics finds expression in a student's learning environment, and argues that the politics of evaluation represent more than a single, tokenistic topic: they underpin every aspect of evaluation, from commissioning, design and delivery through to dissemination.
In tandem with the expansion of evaluation activity across all sectors has come an increased demand for teaching and training which involves those who will become ‘external’ evaluators and those whose main interest is in participatory and self-evaluation within organisations. In the development of academic courses, as well as a wide range of other courses, which aim to convey succinctly the essence of evaluation as both theory and practice, the characteristics associated with evaluation activities may be seen as central to achieving agreed learning objectives. The nature of the interactions and contributions of evaluators and other social actors within individual settings and contexts (from inception and design to execution and emergence of any evaluation outcomes) are important to an understanding of evaluation. Within the experience of evaluators, attempts to elucidate praxis therefore take their place alongside consideration of evaluation principles and theory in addressing the challenge of how to develop effective learning and teaching on this topic.
The discussion presented here is concerned with reflections on learning and teaching about one specific dimension of theory and practice that holds a key to understanding evaluation – that is, the political. It does not describe the political effects of the process or findings from a ‘real’ evaluation, nor does it discuss the substantive political implications of different types of evaluation outcome. Its focus is how the politics of evaluation feature in learning and teaching. Where and how do the politics of evaluation come to the fore? The politics of evaluation in this context operate at macro- and micro-levels and need to be understood as the “plurality of perspectives and the existence of competing interest groups” (Clarke, 1999, p 19), or as Alkin (1990, p 51) states, “every evaluation is a political act, political in the sense that there is continuing competition for stakes among the clients of an evaluation”.
If evaluation is agreed to be an inherently political activity (through a commitment to change, by making judgements or providing the information on which judgements about policies, programmes and projects will be made), as argued by Pawson and Tilley (1997) among others, where do politics find expression in the learning environment? If it is further assumed that politics have a bearing on how those new to evaluation learn about the process and practices involved, we would argue there are a number of different routes by which this takes place.
First, those who participate in courses will frequently come from politicised environments, including health, social welfare and education. Second, the range of material presented to them is likely to contain some explicit description of individual and collective experience of situations where politics have been highlighted as influential in the process and outcome of evaluation. Third, participants will often be aware of the institutional processes in place within their learning or work environments, which may be useful and successful or equally be undermined by the politics of the environment. This third route for the entry of the political also points to a distinctive issue for evaluation courses delivered in contemporary higher education institutions, given what could be described as their predominantly ‘performance cultures’. In addition to student satisfaction surveys and the use made of their findings, other educational quality assurance procedures or one-off evaluations may surface directly or indirectly within the students' course experience, together with their associated local institutional political contexts. The institutional environment in which teaching and learning take place thus shapes how students make sense of the politics of evaluation.
This chapter, through an exploration of key interrelated areas, examines some of the issues that have emerged from discussions we have held to review the way in which evaluation is taught at London Metropolitan University (North Campus). One such issue has been how the politics of evaluation are threaded through, and underpin, the courses and supervision provided to students undertaking a Master's degree in evaluation and social research (and other participants attending stand-alone courses). The areas are as follows:
• student expectations (the extent to which they are met by course design and content);
• incorporating the political (examples of our current strategies for addressing micro- and macro-politics); and
• aspects of politics (those we have yet to capture).
In order to make sense of student expectations, it is useful to acknowledge that, in our experience, those who consider embarking on a full Master's programme or who wish to attend short courses in evaluation are usually in paid work within statutory and non-statutory organisations. Many are already involved in evaluation in some way, either through commissioning services or using evaluation in policy development. However, some arrive with the perception that policy development and its implementation is a simple and transparent process of change influenced by evaluation findings. Others may come to evaluation without the language or skills of research methodology, yet wanting a ‘takeaway toolkit’ despite the recognition that “evaluation research is more than the application of methods. It is also a political and managerial activity” (Rossi and Freeman, 1993, p 15). Still others may resist the possibility that evaluation itself is subject to the same influences and vagaries as policy, programme and project development.
Do these students get what they hoped for? To date, module evaluations suggest that they get both less and more. The less is the perceived absence of research methods teaching within evaluation modules. This is deliberate on our part. Within an emerging discipline or trans-discipline of evaluation (Scriven, 1991), there are many models or approaches, including goal-free, goal-focused, participatory and realistic evaluation, among many others. Each of these could make use of the full range of research ‘tools’ or techniques available, and some will inevitably adopt a multi-method design. However, when working towards an understanding of the principles and practice of evaluation, the emphasis must be placed on unpacking what these are, rather than concentrating on the tools of data collection and analysis. A student who designs an excellent questionnaire does not, through this, demonstrate an understanding of evaluation.
Providing students with more than they hoped for, however, relates to the contexts covered. These include the opportunity to meet with evaluators, discuss live examples and give thought to issues of politics and ethics in the evaluation process. Some of the ways we work are described later in this chapter.
Incorporating the political: examples of our current strategies
To convey this message successfully, providers of courses and teaching in evaluation may need to address the political dimensions explicitly. Published evaluation findings are rarely presented in such a way that the politics surface in reports, although the use of case studies goes some way towards uncovering the ‘real story’. This, however, may have implications in terms of confidentiality and other ethical issues. In addition, it is important to offer description and discussion of the range of evaluation approaches available and their appropriate use, thereby avoiding or, better, making explicit any methodological or political bias on the part of course facilitators. Here we give three examples of how we have attempted to address these issues.
The first is consistent and repeated reference to values, politics and ethics throughout the formal content of modules. This ethos underpins the whole approach to modules, yet it can still remain implicit, or seemingly so, rather than being made explicit through sessions which address these specific topics. The following examples illustrate our attempts to elaborate on such general referencing with more focused pedagogic strategies, drawing out the political texture of both the practical and structural features of evaluation practice.
Thus our next example is to work with groups of students to create person specifications (‘Who is the evaluator?’) and job descriptions (‘What will the evaluator do?’). The purpose of this type of exercise is to move beyond the application of methods or toolkit approaches to a discussion which often leads to the identification of key qualities, including negotiation skills, interpersonal skills, and comprehension of organisational strategy and planning. Developing a job description leads to a consideration of tasks, which will usually include defining evaluation boundaries, making sense of sometimes competing perspectives and an acknowledgement of the need to go below the surface.
A further example is an attempt to facilitate a broader discussion of values within evaluation practice, such as an examination of ‘performativity’ within public services and policy review frameworks. This is an important dimension, often embedded within introductions to and development of models of evaluation and associated activities aspiring or claiming to provide evaluative statements about programmes and services within the public sector. In examining the range of such contemporary approaches, characterised by more continuous, integrated assessment of programmes and services, the notion of performance is discussed explicitly in teaching and located in context. This serves to illustrate the importance of analysing performance assessment as more than a set of objective tools, techniques and procedures, and of identifying it as a social construct. The specific issue of performance and performativity is developed when examining the range of models and frameworks in use, considering their status as evaluation and their relationship to evaluative activities. As part of this strategy, conceptual linkages to a wider socio-political context are explored.

But what does it all mean if not located within specific case examples, which in turn raise concerns about confidentiality? It may be that these broader evaluation issues can be illuminated using ‘real’ evaluation. One example of this was a specific evaluation study in which an issue emerged that related to the context for the evaluation: a change in the staffing and management arrangements of an organisation raised questions about how two post-holders worked together. This shift generated changes in the extent to which they linked their work programmes, which in turn had implications for an evaluation plan being developed around certain areas of the work.
There was a division of work between the two post-holders, which had some basis in service need and different ethnic communities, but it was also historical: a permanent post and then a new short-contract, separately funded post. The latter post was part of a demand for more coordinated approaches to working with communities and so implied close working with the other similar post anyway. But funding boundaries, and experience of previous evaluation activities, restricted evaluation planning to specific areas of the work. The temporary shift of the permanent post to be managed by the second post-holder's line manager while a vacancy was filled resulted in closer planning of programmes.
A second issue that emerged from this in thinking about evaluation was that both post-holders identified the evaluation with the evaluation of posts, and not with the evaluation of a programme of work. They reported concern at this, and given that at this point they were now planning a specific coordinated project together, they wanted the evaluation of this project to be undertaken separately to other aspects of the evaluation already anticipated. If this did not happen, the permanent post-holder was concerned that his or her own input to any activity would be invisible, not recognised or would be attributed to the contract post-holder.
Another relevant dimension forming part of the context here was that the coordination and project planning were behind schedule and more funding was desired than was available, so a further funding bid was being put together. It was an expectation of the post-holders preparing this that an evaluation strategy be included to add clout to the bid. They were also hoping to use discussions about a desirable evaluation with their evaluation consultant as a way of persuading a line manager to agree to apply for further funding. The line manager was concerned about delays and, indeed, about incurring further delays if they waited for the additional appointments envisaged with enhanced funding. So, in this case, the evaluation was inextricably bound up in the politics of management of work.
Evaluation itself can also be seen as a tool for visibility. The line manager accepted the need for a distinct evaluation focus on the coordinated project but wished to carefully restrict the inclusion of other permanent post-holder activities in this area (which might legitimately be considered as relevant) due to a priority to evaluate contract-funded work.
How did the evaluation consultant respond? By agreeing that, if a work aim is to achieve more coordination in activities, then the evaluation should reflect this in its boundary and scoping, but also that it is not possible to evaluate something until it is happening, and that evaluation is not responsible for determining the programme content even though it may influence its direction. In this example, then, the political issues related to the following:
• drawing the boundaries of evaluation;
• the use of evaluation as a tool among staff and managers negotiating work plans;
• the identification of evaluation with assessment of the post-holders not the activity; and
• the politics of profile yielded by evaluation.
An example such as the one described earlier in this chapter provides an excellent opportunity to illustrate how and where politics enter the evaluation process. Nevertheless, what must be considered when illustrating aspects of evaluation with a case study in teaching is the difficulty of anonymising sufficiently, and the importance of doing so when discussing political issues. The presentation here has removed any reference to topic or subject area or type of organisation, and in doing so its usefulness in teaching becomes questionable in the absence of important contextual material. The reduction in specific information necessary to achieve anonymity makes it more difficult to illustrate either how the political context operates or how competing interests and perspectives are played out in particular evaluation settings. Political contexts are created through the detail of day-to-day practices and relationships within and between organisations and individuals.
The construction of hypothetical case studies is one way of developing this approach, enabling a more anonymised demonstration of typical or important political issues by drawing upon a series of ‘real’ evaluation stories. However, the provision of case studies to indicate what happened (real), or what might happen (hypothetical), in evaluation practice, while useful, is essentially a descriptive exercise which remains limited in its scope as a tool to support learning for action and reflection. Case studies take time to introduce and discuss productively in a class context, yet need to be balanced within curriculum priorities alongside delivery using more generalised approaches to topics. They are also constrained in their capacity to deliver insights for students about their own responses as evaluators to situations they may face, what the options and choices available to them might be, and how they would actually respond to the politics of evaluation.
Alternative strategies need to be devised which can help students to distil the essence of politics within evaluation through exploration of a range of contexts, while using their own practical experience as a positive learning tool. It is important, in our view, that learning and teaching about evaluation move beyond just hearing about what other evaluators do (both real and hypothetical), and one way in which we have attempted to achieve this has been through the introduction of workshop activities. These, while drawing on ‘real’ evaluation situations, invite students to consider their response to a specified scenario. In relation to the micro-politics of evaluation, they might be asked: what would you do on meeting a key stakeholder (with whom there had been some difficulties during an ongoing evaluation) at a social occasion? Alternatively, they are presented with a brief to develop a proposal to evaluate a safer communities initiative within an urban area, likely to involve varied stakeholders including residents, the police and those perceived to make the community less safe. Both of these exercises would be based on existing projects but used in ways which do not ‘give the game away’ and could therefore not jeopardise the evaluator or others involved in the evaluation. These types of activity involve students in a dynamic and active process – facing the challenges perhaps before they are required to face them in the ‘real’ setting.
Aspects of politics: those we have yet to capture
The conclusion of the real evaluation story above illustrates the areas we do not believe we have yet captured in teaching and learning. As previously noted, we are aware that it is not always possible to convey the nature of politics within evaluation through what are essentially class-based experiences. There are many areas to consider (Box 15.2). It seems, however, that this is not just a question of building in more issues for student discussion, as there are a number of reasons why this may not be appropriate in all settings. The following are some examples.
There is a level of descriptive detail needed to identify the subtle and sometimes complex realities which shape stakeholders' perceptions of evaluation, but which may easily breach the confidentiality of participants.
There will almost invariably be a shifting context for evaluation, where internal or external changes can alter the way in which an evaluation is perceived, and some aspect of the setting or relationships can suddenly become sensitive. Coupled with this is the elusiveness of the experience of evaluation as negotiation.
Student familiarity with organisational context
To many students, the organisational environment and its inherent politics may be unfamiliar territory. Appreciation of the principle of stakeholder interests is fundamental to learning and teaching in evaluation, but students often bring the experience of only one such interest to their learning, and perhaps none. Conveying the diversity and validity of such interests and perspectives within the evaluation context in a constructive rather than seemingly negative manner can be difficult. In some cases, there may be limited experience of organisational or service settings in general to underpin teaching-led illustration of micro- or wider political processes.
We conclude by suggesting that the ‘politics of evaluation’ represent more than a single, and therefore tokenistic, course topic. They underpin every aspect of evaluation – from commissioning, design and delivery through to dissemination – and as a result need to underpin how evaluation is introduced to those keen to become evaluators, enhancing the evaluation process rather than detracting from or being obscured within it. The challenge is: how do we do so?
Alkin, M. (1990) Debates on evaluation, Newbury Park, CA: Sage Publications.
Clarke, A. with Dawson, R. (1999) Evaluation research, London: Sage Publications.
Pawson, R. and Tilley, N. (1997) Realistic evaluation, London: Sage Publications.
Rossi, P.H. and Freeman, H.E. (1993) Evaluation: A systematic approach, San Francisco, CA: Sage Publications.
Scriven, M. (1991) Evaluation thesaurus (4th edn), London: Sage Publications.