The New Technocracy

Anders Esmark

Print publication date: 2020

Print ISBN-13: 9781529200874

Published to Policy Press Scholarship Online: January 2021

DOI: 10.1332/policypress/9781529200874.001.0001


Technocratic Calculation: Economy, Evidence and Experiments

Chapter:
(p.173) 7 Technocratic Calculation: Economy, Evidence and Experiments
Source:
The New Technocracy
Author(s):

Anders Esmark

Publisher:
Policy Press
DOI: 10.1332/policypress/9781529200874.003.0007

Abstract and Keywords

The chapter takes up the technocratic preoccupation with quantification, measurement and scientific politics. While this is a consistent feature of technocratic governance, the proliferation of performance management, accountability and evaluation systems, evidence-based policy and experimental learning also reflects a new commitment to radical incrementalism and a ‘what works’ approach, which differs significantly from earlier, industrial technocracy. The chapter illustrates the implications of this development in the cases of experimental EU governance and nudging interventions.

Keywords: Knowledge society, Econocracy, Performance management, Accountability, Evaluation

Cost-benefit analysis reflects a firm (and proud) commitment to a technocratic conception of democracy.

(Sunstein, 2018: xi, original emphasis)

The last (and first) piece of the puzzle

This chapter concludes the analysis of the three basic paradigms of the new technocracy by looking in more detail at the search for scientific results and evidence in public policy, conducted under the broader label of performance management. Historically, performance management has been developed through a steady accumulation of concepts, practices and standards such as quality management, management by numbers, evidence-based policy, evaluation practices, auditing and inspection. The common denominator, however, is a fundamental commitment to improve public policies through measurement of effects in ways that make it possible to confer the status of quantifiable and objective evidence on these effects more or less directly. The core idea of performance management is thus to make the policy process part of, and ideally identical to, the scientific process of discovery. As such, the chapter tackles what is often considered the first and last dimension of technocracy more directly: governmental scientism and the preoccupation with quantification, measurement and objective evidence as the foundation of political authority and effective social engineering.

This focus comes with a particular challenge: given the historical constancy of governmental scientism in technocratic rule, what is then (p.174) new about the new technocracy in this respect? Whereas the paradigms of network organization and reflexive risk regulation introduce rather clear shifts and reversals in technocratic rationality and practice, the current attempt to provide a scientific basis for public policy through performance management seems rather to confirm the idea of eternally recurring Saint-Simonism: the claim that technocracy has remained largely unchanged since its initial conception. This degree of historical permanence is certainly part of the equation when it comes to performance management. As much as I would like to list more decisive events of the 1980s in the spirit of the two preceding chapters, such an exercise would not make much sense. If anything, this would be where the interpretation of the 1980s as the decisive decade in the rise of neoliberal hegemony becomes relevant, at least when it comes to the question of the econocracy and the influence of economic knowledge and expertise on public policy.

In some ways, the current technocratic exercise of performance management is as close to an affirmation of Bell’s baseline hypothesis about the belated realization of technocracy in post-industrial society as we can get: performance management thrives because our current society is still a knowledge society based on the continued ‘centrality of theoretical knowledge as the source of innovation and policy formulation in society’ (Bell, 1999: 14). Whereas network society and risk society both comprise dimensions of post-industrial society that did indeed only come into full view roughly around the 1980s, Bell’s axial principle, primus inter pares among the five basic dimensions of post-industrial society, turned out to be the most prescient part of his social forecasting. Together with the ancillary development of new intellectual technologies, the axial principle still has significant purchase on the current state of affairs in public policy. Performance management is to some extent the most recent result of the mutual constitution of knowledge society and a form of political management exercising social control and engineering on the basis of theoretical knowledge for the purpose of directing innovation and change.

However, knowledge society has also, in important ways, become a learning society, an experimental society, an evaluation society and an audit society. The structures and dynamics underlying all of these developments have occurred in the interplay between technology and methods for the production and calculation of knowledge. The common denominator between the various extensions and permutations of the knowledge society, correspondingly, is that they challenge the importance of purely theoretical knowledge and its ancillary intellectual technologies originally observed by Bell. The (p.175) new technocracy has changed accordingly: rather than a univocal commitment to theoretical knowledge, the new technocracy is guided by a broader and more inclusive principle of learning from evidence and the continuous improvement of policy. On the one hand, this principle clearly reflects the historical continuity with earlier versions of technocratic scientific government and policy. On the other hand, however, the principle also reflects a significant degree of change in the span and purpose of knowledge applied in public policy and in the way evidence is collected and calculated. This has only served to make current performance management a broader and more influential policy paradigm, in contrast to the limited influence sometimes attributed to its forerunners.

This success is largely due to the efforts of the new governance. Performance management is widely seen as one of the cornerstones and lasting contributions of NPM, even among those inclined to declare NPM dead. In the NPM version, performance management largely comes down to a focus on quantification, measurement and calculation of individual and organizational performance for the broader purpose of mimicking the informational flows and signals of the market in the public sector. The result is the pervasive spread of accounting systems designed to calculate efficiency and, by the same token, enable auditing, inspection and public accountability. However, although NPM reforms have certainly had a bigger impact on performance management than network governance and risk management, the NPG version of the new governance has brought its own version to the table. The NPG version of performance management calls for better (quasi-)scientific procedures of data collection, calculation and dissemination in order to find out ‘what works’ in public policy, often with the help of engaged citizens. The result is a set of evaluation systems designed for the creation of public value, problem-solving, experimentation, innovation and public participation. Although the accountability and evaluation systems are rather different in intention and potential effects, they can be hard to disentangle in practice and they share certain fundamental features such as a commitment to hard evidence, comparison, ranking and cost-benefit calculation, broadly understood.

Both versions of performance management exacerbate a conflict with bureaucracy that was always inherent to the technocratic reliance on quantification, measurement and scientific evidence in the pursuit of public policy. Although this conflict may have been moderated by the compromise of techno-bureaucracy, current performance management is less inclined to strike a compromise with bureaucracy. Regardless (p.176) of whether performance is viewed as a matter of increased efficiency and accountability or innovation and problem-solving, bureaucracy is largely understood as an obstacle to improved performance. And, following a now familiar pattern, this anti-bureaucratic stance more or less invariably correlates with a new claim to democratic legitimacy, be it through efficiency and improved accountability or improved public value and social welfare. Either way, the democratic gain in performance management comes down to a matter of output legitimacy. This is how, as represented by Sunstein’s proud commitment to a technocratic conception of democracy discussed earlier, performance management contributes to the new technocratic alliance with democracy and simultaneous rejection of bureaucracy. In order to gauge this development, however, we have first to consider the interplay of technology and dominant methods of calculation and choice more fundamentally.

Still the age of econocracy? Technocratic knowledge and methods of calculation

In institutional terms, Bell’s identification of post-industrial society as a knowledge society is based on a new alliance of science and technology under the auspices of a politically guided search for innovation (see Chapter 2). The specific axial principle arising out of this development, however, is at one and the same time highly specific and extremely general: ‘what has become decisive for the organization of decisions and the direction of change’, he states, ‘is the centrality of theoretical knowledge – the primacy of theory over empiricism and the codification of knowledge into abstract systems that, as in any axiomatic system, can be used to illuminate many different and varied areas of experience’ (Bell, 1999: 20, original emphasis). In other words, the axial principle builds on a direct contrast between theoretical knowledge and empirical knowledge, or at least the amateur ‘empiricism’ that drove the major innovations of industrial society. The nature of this claim and its importance for the assumption about the rise of technocracy in knowledge society give rise to a number of more specific questions at the heart of the belated realization thesis.

Scientific disciplines and fields of knowledge

First and foremost, what scientific fields and bodies of knowledge support technocratic power and influence? On the one hand, the axial principle clearly refers to the advent of ‘hard’ sciences in general and (p.177) in public policy in particular. As Bell’s later restatement of the axial principle makes clear, theoretical knowledge principally means the axiomatic and symbolic systems of mathematics applied to domains such as quantum theory, relativity theory, solid-state physics, materials science and so on (Bell, 1999: 39). In this way, the axial principle refers to the core academic disciplines at the heart of the new relationship between technology and science, that is, the involvement of physicists, chemists and mathematicians in the technological innovations of modern warfare, propelled by the Second World War but carried over to the peacetime operations of the military-industrial complex and institutions with a broader public policy agenda such as RAND. In other words, the proliferation and importance of theoretically codified knowledge is what draws the hard sciences into public policy and helps technocracy overcome its previous limitations as the rule of engineers.

However, the involvement of hard science and foundational research in technological innovation is not the most important implication of the axial principle for technocratic influence on public policy. For this, we have to look to the impact of the new primacy of theoretically codified knowledge in a ‘less direct but equally important way’, in the ‘formulation of government policy, particularly in the management of the economy’ (Bell, 1973: 22). More specifically, the axial principle of post-industrial society is brought most decisively to bear on public policy and social engineering through ‘the attempted use of an increasingly rigorous, mathematically formalized body of economic theory, derived from the general equilibrium theory’, resulting in a situation where ‘managing the economy is only a technical offshoot of a theoretical model’ and ‘economic policy formulations, though not an exact art, now derive from theory and must find their justification in theory’ (Bell, 1973: 25).

In other words, the axial principle is also the immediate source of the econocracy – a technocracy operating more or less exclusively through the body of knowledge nurtured by neoclassical economics (see Chapters 1 and 2) – and the corresponding idea that a background in economics, the quintessential form of ‘technical’ social science, is the most important indicator of a technocratic orientation (see Chapter 3). Bell’s axial principle thus points us to an almost unbroken line of mutually reinforcing formalization of economics and involvement in economic policy since the economic discipline started its path to academic and political prominence more than 70 years ago: even the experience of the last great economic recession, largely unforeseen and by most accounts at least partially caused by economic experts, has done little to reduce the status of the neoclassical paradigm (p.178) and its influence on public policy (Chang, 2014; Earle et al, 2017). However, the econocracy is also a ‘changeable beast that has evolved with economic knowledge but that always values this knowledge above all else and always involves experts in turning political problems into purely “economic” ones’ (Earle et al, 2017: 17).

The major changes in the nature of the econocratic beast include additions to the neoclassical paradigm such as (new) institutional economics, behavioural economics and, to a lesser degree, ecological economics. Indeed, these traditions have been welcomed into the economic mainstream: key representatives of institutional economics such as Coase, North, Ostrom and Williamson have all received the ‘memorial’ Nobel prize, as have representatives of the behaviouralist school such as Kahneman and Thaler, the latter being one of the principal architects behind the nudging agenda that rose to prominence in the United States under Obama’s tenure, although it was represented at the governmental level even earlier in the United Kingdom by the so-called Nudge Unit. Simon, who received the prize in 1978, is a key figure in both traditions. Nordhaus, a recent laureate, received the prize for integrating climate change into standard economic models, thus providing official recognition for one of the economists ‘flirting with the idea of moving away from GDP as the central measure of economic success’ (Earle et al, 2017: 16). The question is, then, whether outcrops such as institutional economics, behavioural economics or ecological economics signal deeper changes in the established body of economic knowledge or its influence on public policy.

On the one hand, it is clear that none of the more recent additions represent a fundamental break with the dominant paradigm. Although they proceed from observations of certain limitations in the basic assumptions of neoclassical economics, their ambition is still to improve and expand on the basic axiomatic system of economics. Moreover, they remain firmly committed to the image of economics as a form of hard or technical science, even if advanced mathematics is sometimes used more sparingly. On the other hand, institutional economics and behavioural economics also place greater emphasis on the ‘layered’ or ‘bounded’ rationality of individuals and their interplay with institutions as an important part of the economy. Correspondingly, proponents of new institutional economics and behavioural economics will typically take a more flexible stance on government interventions and, more importantly, look to a wider range of policy domains as areas of intervention rather than the traditional reliance on macroeconomic policy levers.

(p.179) This development can also be seen to suggest that the new technocracy is in some sense more pluralist or even cross-disciplinary than a straightforward econocracy. Institutional economics is situated somewhere between political science and economics in a manner akin to a revised form of political economy. Kahneman and Tversky, the godfathers of behavioural economics, were both trained as psychologists. The creator of the British nudge unit, also a psychologist, has suggested that the real revolution in behavioural public policy lies in bringing psychologists to the heart of public policy (Halpern, 2015: 30). Taken to its radical conclusion, this suggests that the use of behavioural economics in public policy amounts to a ‘psychocracy’ rather than an econocracy (Feitsma, 2018). Although economists still maintain a substantial grip on public policy, the econocracy has in this sense also experienced pressure from other disciplines with their own particular version of formal and technical social science. These disciplines have to a large extent been incorporated into the econocracy, which has in turn begun to offer policy advice based more on experiments, empirical studies and data, in addition to the purely theoretical knowledge traditionally provided by economic model simulations and calculations.

Intellectual technologies and methods

In addition to the axial principle itself, the fifth and last dimension of post-industrial society pointed to the rising importance of a set of ancillary ‘intellectual technologies’, enabling the ‘substitution of algorithms (problem-solving rules) for intuitive judgements’ and projected in the original forecast to become, by ‘the end of the century’ if not sooner, ‘as salient in human affairs as machine technology has been for the past century and a half’ (Bell, 1973: 29). In essence, this suggests that the influence of theoretically codified knowledge captured by the axial principle extends to different techniques or methods of decision making and ‘algorithmic’ problem-solving based on the abstract and formal system of mathematics and logic. These technologies include, inter alia, cybernetics, systems analysis, game theory, decision theory, utility theory and, in mathematics, advanced set theory, probability theory and stochastic processes.

The proliferation of such intellectual technologies clearly has an intimate relationship with the opportunities offered by the calculative power and programming language of computer technology. The terminology of algorithms and technology thus lends itself easily to the idea that intellectual technologies can be equated with computer (p.180) software, programmes and models in relation to the hardware of information technology. Indeed, Bell’s revisiting of the original forecast confirms that this is the most important aspect of the informational revolution: the proliferation of intellectual technologies is greatly helped along by the transition from mechanical machine technology to information technology, but this is merely a development subsumed under the axial principle of knowledge society (Bell, 1999: 38). Current preoccupation with Big Data and machine learning clearly testifies to the continued enthusiasm for the power of algorithms in this respect. However, the new intellectual technologies ultimately invoke a broad and dual ‘methodological promise’ for the second half of the century: the ‘management of organized complexity (the complexity of large organizations and systems, the complexity of theory with a large number of variables)’; and ‘the identification and implementation of strategies for rational choice in games against nature and games between persons’ (Bell, 1973: 28).

Techniques and methods for the management of complex organizations and systems generally fall under the province of systems theory and cybernetics. Both are cross-disciplinary approaches loosely structured around the attempt to understand mechanical, biological, cognitive and social systems under the assumption that these display the same basic properties of organized complexity, such as emergent and spontaneous order, self-organization, recursivity, feedback loops, learning and adaptation, which is then taken as a starting point for the pursuit of more adequate tools of management, steering and control (Wiener, 1961). The implied idea of a technical social science of steering and control based on the attribution of formal system properties to social systems has led some to view cybernetics and systems theory as the technocratic intellectual technology par excellence (Meynaud, 1968; Fischer, 1990). While cybernetics and systems theory do indeed bridge between mathematics, computer science, engineering, biology, psychology, sociology, law, organizational studies and management, they have also failed to consolidate themselves scientifically largely for the same reason. This is particularly pronounced in social science, where the approach is associated with functionalism and thus largely out of fashion, save for a stronghold in theories of organization and leadership. Moreover, direct influence on public policy has always been debatable.

To some degree, the intellectual technologies related to the identification and implementation of strategies for rational choice originate in the same branches of mathematics as cybernetics and systems theory. However, their development has also been defined (p.181) by a much closer alignment with the econocracy and its core body of knowledge. Game theory, in particular, originates from the basic axiomatic system of the neoclassical paradigm (Earle et al, 2017: 96). In general, however, game theory is not a theory or model of markets or any particular object, but a method for the calculation of optimal choices under different strategic conditions of cooperation and conflict, probability and availability of information, which lends itself easily to a wider set of applications outside the market context or the economy understood in more narrow terms. In the domain of policy and politics, this logic is archetypically represented by public and social choice theories, which specifically attempt to develop a unified framework for economic and political science, based on the application of economic tools to political choices and decisions.
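To illustrate what this kind of ‘calculation of optimal choices’ involves in its simplest form, the following sketch finds the pure-strategy equilibria of a two-player game. It is a minimal, hypothetical illustration: the strategies, payoffs and function name are assumptions made for the example and are not drawn from the chapter.

from itertools import product

# Illustrative 2x2 game (a stylized cooperation/conflict situation).
# Payoffs are (row player, column player); the numbers are assumptions
# chosen only for the example, not taken from the text.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 4),
    ("defect", "cooperate"): (4, 0),
    ("defect", "defect"): (1, 1),
}
strategies = ("cooperate", "defect")

def pure_nash_equilibria(payoffs, strategies):
    # A strategy pair is an equilibrium if neither player can gain
    # by unilaterally switching to another strategy.
    equilibria = []
    for row, col in product(strategies, strategies):
        u_row, u_col = payoffs[(row, col)]
        row_cannot_improve = all(payoffs[(r, col)][0] <= u_row for r in strategies)
        col_cannot_improve = all(payoffs[(row, c)][1] <= u_col for c in strategies)
        if row_cannot_improve and col_cannot_improve:
            equilibria.append((row, col))
    return equilibria

print(pure_nash_equilibria(payoffs, strategies))  # [('defect', 'defect')]

The point of the sketch is only that, once strategic conditions are expressed as payoffs, ‘optimal’ choices become a matter of mechanical comparison, which is what allows the method to travel beyond markets to any domain of cooperation and conflict.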

However, the more recent development and application of game theory and the wider set of intellectual technologies for identification and implementation reflect the same basic ambiguity observed on the level of economy as a field and body of knowledge more broadly. On the one hand, the success of game theory and rational choice can be seen as an essential component in the steady expansion of economic reasoning to non-economic fields and ultimately to everything (Chang, 2014). Moreover, current use of computer models and simulations to determine optimal choice is often taken straight out of Bell’s playbook for intellectual technologies, only now with exponentially increased powers of calculation. On the other hand, however, we also find signs of pluralism, cross-disciplinary interaction and, in particular, a renewed focus on empirical variation and experimentation, intertwined with institutional economics and behavioural economics. In relation to the former, the use of game theory, evolutionary algorithms, agent-based modelling, simulation and lab experiments has thus been seen to provide institutional economics with new tools to tackle core issues such as interdependence and complex interaction (Elsner et al, 2015).

The same line of reasoning lies behind ‘design economics’, which has been involved in the construction of markets for, inter alia, labour and professional services, education, electricity, telecommunications bandwidth and, not least, finance since the 1990s. As defined by Roth, one of its principal proponents (and yet another Nobel laureate), design economics involves the use of game theory, experimentation and simulations in the comprehensive construction of specific markets, in contrast to just analysing naturally existing markets, which means that the design economist cannot simply rely on ‘the simple conceptual models used for theoretical insights into the general working of markets’ (Roth, 2002: 1341). Rather, design (p.182) economics envisions ‘the economist as engineer’ standing in a relationship to the core neoclassical body of knowledge identical to the ‘relationship of engineering to physics, or of medicine to biology’ (Roth, 2002: 1343). Looking at economics from the outside, the simple conceptual model of the neoclassical paradigm obviously already places the economist in the role of a social engineer. However, design economics nevertheless suggests a more direct and experimental approach to social engineering.

Cost-benefit analysis

The most influential ‘intellectual technology’ of decision making used by the econocracy, however, is not particularly theoretical or formal, but rather the potentially simple calculation of costs and benefits. The centrality of cost-benefit analysis (CBA) was highlighted in the original analysis of the econocracy as a ‘vice worse’ than any form of technocracy run by experts in hard or technical sciences: CBA is viewed here as the ‘supreme example of econocracy’, based on the broader point that ‘much of the rationale of economic science is, or is supposed to be, that of bringing a diversity of factors into the common language of accounting’ (Self, 1975: 44). Thus understood, CBA has been the key source of economic influence on public policy and the primary tool for the expansion of welfare economics to a wide array of policy domains. In a decisively less critical vein, this is also the essence of Sunstein’s more recent argument for the development of a Cost-Benefit State (2002a) and the ‘triumph of the technocrats’ based on a ‘silent cost-benefit revolution’ from Reagan’s presidency onwards (2018: 7). Maybe the 1980s is also a watershed in the case of performance management after all.

At its core, CBA is a simple and flexible form of addition and subtraction determining whether the benefits outweigh the costs, based on values assigned within a more or less developed system of data collection. The basic operation of CBA thus remains simple and flexible: it can be applied to practically any decision with a variety of data and more or less advanced methods of calculation. In this respect, the idea that CBA is the quintessential econocratic intellectual technology presents a slightly different view of econocracy than the emphasis on the dominance of the neoclassical paradigm and its highly abstract and theoretical model-building. Although CBA is still a part of the econocracy from this perspective, it plays a rather marginal role (Earle et al, 2017: 10). In other words, the two versions of econocracy are not necessarily strongly linked: the common language of accounting and CBA can be applied with little or no theoretical or practical connection (p.183) to the econocracy of mainstream economics and the neoclassical paradigm. The use of CBA may of course also rely more directly on theoretical assumptions and models, but a key to its success is that it can be broadly applied without such refinements.
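As a minimal sketch of this basic operation, the following calculation sums discounted benefits and costs for a hypothetical programme; the figures, the discount rate and the function name are illustrative assumptions, not values taken from the text.

def net_present_value(benefits, costs, discount_rate):
    # Sum of (benefit - cost) per period, discounted back to the present.
    return sum(
        (b - c) / (1 + discount_rate) ** t
        for t, (b, c) in enumerate(zip(benefits, costs))
    )

# Hypothetical monetized values over four years (year 0 carries the up-front cost).
benefits = [0, 120, 140, 160]
costs = [300, 20, 20, 20]

npv = net_present_value(benefits, costs, discount_rate=0.03)
print(round(npv, 1))  # positive: on these assumptions, benefits outweigh costs

Whether the values entered into such a calculation rest on elaborate theoretical models or rough accounting conventions is precisely what the flexibility of CBA leaves open.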

This is to say that the use of CBA cannot be seen as an extension of the axial principle or its ancillary intellectual technologies per se: CBA may be economic in a broad sense, but it does not require the possession or use of advanced theoretical knowledge. In a broader sense, this has to do with the fact that CBA predates modern econocracy. Indeed, CBA can be viewed (anachronistically) as the historical raison d’être for the development and use of statistics and accounting systems in public policy more generally (Porter, 1995: 115). Historically, economic quantification has been ‘closely allied to accounting’, as has ‘much of economics itself, especially the parts of it that has [sic] been created or mobilized to aid in management, planning and regulation’ (Porter, 1995: 114). As Porter goes on to say, this creation and mobilization of economic quantification as an extension of accounting practices is integral to both the French and American technocratic revolutions, albeit in different ways. This view places CBA in the broader historical sociology of statistics, or more generally the development of statistics as the ‘knowledge of the things that comprise the very reality of the state … of the forces and resources that characterize a state at a given moment’ (Foucault, 2007: 274).

This is a history that precedes post-war econocracy by centuries. Compiling and calculating statistics in this way is a matter of accounting, which is simply to say placing values on the various dimensions of the population as an object of regulation and then counting (setting aside for now the massive practical obstacles to this operation throughout much of its early history). As such, statistics and accounting are historical precursors to modern economics, embodied for example by the first ever ‘modern’ model of interacting economic forces (‘the economic table’) developed by the French physiocrats (Foucault, 2007: 342). This ancestry is reflected by the modern discipline of econometrics and the development of national budgets aggregating economic data into relevant categories for statistical analysis, used for historical and structural analysis of the economy, forecasting and policy advice (Bell, 1973: 24; Earle et al, 2017: 97). Econometrics thus provides the econocracy with a pervasive statistical apparatus for the collection and analysis of empirical data, historically rooted in accounting but also lending itself to various tests of the theoretical model of the neoclassical paradigm. Econometrics has resulted in a steady expansion and refinement of statistical data on national economies accumulated (p.184) both by states themselves and by various international organizations. However, the new technocracy is less about econometrics than about the use of CBA, accounting and calculation of value in a much broader sense, but usually also on a smaller scale. This, in turn, has to do with more fundamental changes in the role of knowledge at the intersection of state and society.

Beyond knowledge society? Audit society vs learning society

The organization of post-industrial society around knowledge, in Bell’s original version of the argument, has a dual purpose: directing innovation and exercising social control. More to the point, the political focus on the process of innovation in the newly fused system of technology and science leads to a broader demand for the ability to exercise social control and direct processes of change. The dual focus on innovation and ‘commitment to social control’, in turn, ‘introduces the need for planning and forecasting into society’, which is also to say a demand for the technocratic ‘hallmarks’ of rationality, planning, and foresight (Bell, 1973: 28). In an attempt to illustrate the implications of this development for public policy in more concrete terms, Bell provides a blueprint for a ‘System of Social Accounts’, meant to create a ‘balance sheet that would be useful in clarifying policy choices’ by providing a ‘broader framework’ for the use of economic accounting and CBA (Bell, 1973: 326). In addition to established data on the national economy, the system should include measures of the social cost and return of innovation, social gains such as opportunity and mobility, ills such as crime, family disruption and the like, and lead to the creation of ‘performance budgets’ in areas of defined social needs.

Such integration of social indicators and performance budgeting clearly resonates widely in current performance management. Indeed, current performance management has been seen as the result of a continuous development from scientific management to early performance budgeting and the social indicators movement in the 1960s and 1970s, onwards to NPM in the 1980s, evidence-based policy from the 1990s, and finally ongoing review and revision of accumulated practices since 2010 (Van Dooren et al, 2015: 42). In this interpretation, the major changes in the history of recurring ‘quantification of government activity’ are new technological tools making the reinvention and/or realization of old ideas possible (such as computational mapping of social geography and Big Data on health issues), and the increasing institutionalization, professionalization and specialization of production (p.185) and use of performance data (Van Dooren et al, 2015: 55). While the link between current performance management and earlier attempts at quantification in the name of planning and accounting efficiency is certainly worth emphasizing, there are at least two possible historical ruptures that must be taken into account: the audit explosion and the experimental revolution.

The audit explosion

At the most fundamental level, the audit explosion suggests a mutation of knowledge society into an audit society, defined through and through by the principles of formalized accountability and practices of constant checking and verification, roughly since the 1980s, at least in the case of the United Kingdom (Power, 1999). In general, this development can be understood as a more or less subtle mutation of external powers of oversight, policing and surveillance into more indirect, but potentially also more omnipresent, forms of control and supervision based on the ‘programmatic restructuring’ of organizational life in order to make individuals and organizations accountable, that is, capable of giving auditable accounts, through the creation of performance criteria that set the standards of internal improvements and make such improvements externally verifiable and certifiable. There are certainly traces of earlier scientific management, planning and control of efficiency to be found in systems designed for purposes of accountability in this way. However, the audit explosion has less ‘to do with policing or surveillance in the normal sense of external observation, although elements of this may exist; it has more to do with attempts to re-order the collective and individual selves that make up organizational life’ (Power, 1999: 42).

Although rooted in earlier practices of budgetary accounting and auditing, the audit explosion thus represents a more or less subtle development from efficiency to accountability as the overriding standard of performance management, creating a demand for extensive collection of performance data, examination of such data and authoritative views, judgements and assessments based on the results (Power, 1999: 66). An important element in the audit explosion is thus the creation of auditing bodies with extensive independence and powers of oversight in domains such as health, environment, social services, education and ultimately public policy as such, turning public and private organizations into ‘auditable’ subjects. Broadening the geographical scope from the United Kingdom to Sweden, the Netherlands, Finland and France, but limiting the analytical focus more to so-called supreme audit institutions (SAIs), the audit explosion can be said to (p.186) have merged older forms of auditing and performance management in a new formula of ‘performance auditing’ in the context of broader public management reforms and the new governance, roughly since the late 1970s and early 1980s (Pollitt, 1995).

Inspection and auditing have often been associated with technocratic influence. Indeed, the three ‘administrative’ (non-engineering) Grands Corps (Conseil d’État, Cour des Comptes and Inspection des Finances) of the institutionalized French technocracy are all, in various degrees and ways, ‘supreme’ audit institutions or inspectorates. However, the audit explosion is not merely a quantitative extension of older forms of public auditing and inspection of financial and regulatory compliance. What is at stake here is, rather, a new centrality of performance auditing in public policy and a foundational shift towards more or less systematic development of internal planning, control and/or accountability systems that facilitate continuous self-inspection and self-evaluation under the auspices of general audit institutions and systems of accreditation and quality assurance. Through this development, narrower concepts of CBA, efficiency and compliance are taken over by a broader standard of accountability and expansive systems of performance measurement designed to ensure governmental, organizational and individual accountability.

The experimental revolution

The second potential rupture in the otherwise continuous history of performance management is a new focus on empirical and experimental knowledge, the kind of knowledge explicitly rejected by the emphasis on the scientifically codified and validated theoretical knowledge following from the axial principle of knowledge society. This experimental revolution, however, is not an argument for a return to the ‘amateur’ empiricism of the industrial era, but rather the result of a competing vision for the role of social science in public policy. Roughly around the time of Bell’s forecasting, Donald T. Campbell, a noted psychologist, godfather of programme evaluation and a prominent advocate of experimental methods in social science and public policy, presented a rather different vision of post-industrial society as an experimental society. This society, although ‘nowhere yet an actuality’, would be more properly scientific, active, honest, nondogmatic, open to criticism, accountable, decentralized, responsive and voluntarist than the thoroughly programmed and planned knowledge society. Briefly put, it would be an ‘evolutionary, learning society’ (Campbell, 1991: 224, original emphasis).

(p.187) This agenda was pitted explicitly against the ‘dogmatic’ and ‘nonexperimental’ extrapolations from established theories for purposes of ‘optimal social organization design’ conducted by ‘governmental and industrial planners everywhere’, that is, the ‘economists, operations researchers and mathematical decision theorists’ that ‘trustingly extrapolate from past science and conjecture, but in general fail to use the implemented decisions to correct or expand knowledge’: the experimenting society, by contrast, means a ‘scientific society in the fullest sense of the word … distinguished from an earlier use of the term scientific in social planning’ (Campbell, 1991: 225, original emphasis). This all-out confrontation with social planning and engineering clearly suggests that technocratic influence on public policy based purely on theoretical knowledge and intellectual technologies of decision making should be viewed as a failure. However, the problem is not technocratic influence per se, but rather a failure to truly live up to the goal of technocratic scientism and conduct proper social engineering by treating ‘reforms as experiments’. In order to achieve this goal, technocrats should be recruited from the full array of social sciences and act as frontrunners for the experimenting society, taking it upon themselves to ensure that public policy reforms and ameliorative programmes are used as opportunities for social experiments (Campbell, 1969).

This vision has to a large extent been realized with the subsequent development of evidence-based policy, identification of best practices and evaluation systems. In particular, Campbell’s vision of social engineering as experimentation and the corresponding idea of the experimental learning society have gained traction with the widespread commitment to evaluation and evidence-based policy in most advanced democracies since the 1990s. Indeed, this development amounts to an experimental revolution and a new commitment to finding ‘what works’ in public policy, or, in the fuller version: ‘what works for whom under what circumstances and why?’ (Sanderson, 2002: 19). While evidence-based policy and evaluation are rooted in ideas about extensive use of economic and social measures and indicators as touchstones of public policy, they also shift the underlying idea of social engineering fundamentally to social experimentation and continuous innovation in order to ensure adequate solutions to policy problems and optimal individual and collective welfare (Sunstein, 2018). In other words, learning from evidence and the continuous improvement of public policy becomes a matter of sufficient experimentation.

The combined result of the audit explosion and the experimental revolution is a basic duality in the current rationality and practice of (p.188) performance management. This duality can be described in terms of a distinction between the nature and function of the evidence provided. The systems created in and through the audit explosion are designed to provide evidence of accountability in a way that still harks back to the search for evidence of governmental efficiency to guide public policy decisions found in planning, programming and budgeting systems and other planning systems. Although the standards of performance are wider and the reach of inspection deeper, the principal function of accountability systems is still to provide documentation for results and overall political legitimacy. By contrast, evaluation systems seek to provide more experimental evidence of how well policies and programmes work under different circumstances (Sanderson, 2002: 33). This is sometimes viewed simply as a difference between the mere provision of performance information and the active use of such information (Berman, 2006). However, the schism between efficiency and experimentation in performance management emphasizes that, viewed from the latter perspective at least, a more fundamental difference between forms of evidence and the systems that produce them is at stake. Nowhere has this been more evident than in the development of the new governance.

The new technocracy and innovative governance

The difference between performance management as a search for increased efficiency vis-à-vis innovation has been a matter of considerable attention in the new governance, broadly reflecting the respective positions of NPM and NPG. While variations in specific reforms under the NPM label remain a matter of ongoing discussion, they are widely seen to share a preoccupation with government efficiency, explicit and quantifiable measures and indicators of performance, output control and accountability (Hood, 1991; Christensen & Lægreid, 2011; Van Dooren et al, 2015). Indeed, improved performance management stands out as the lasting effect of NPM reforms in the landmark analysis declaring the death of NPM, including extensive valuation of public assets, systematic comparison and publication of performance results, and rewards for improved performance (Dunleavy et al, 2006). A more recent critical review also highlighted the performance-enhancing approach to public policy and management by numbers as the crucial component of NPM reforms in the United Kingdom since the 1980s, albeit with the particular point that the reality of NPM reforms is a paradoxical combination of ‘evidence hunger’ and ‘evidence destruction’, the latter being the result of shifting systems of measurement, volatile indicators and data breaks (p.189) that make accumulation and comparison of data over time difficult or even impossible (Hood & Dixon, 2015: 45).

Correspondingly, analyses of the audit explosion and the rise of performance auditing also point to NPM as a decisive factor. For one, the emphasis on more indirect mechanisms of steering, output controls and organizational autonomy gives accounting practices and systems a central role in operationalizing the administrative ideals of NPM, aptly summarized in the characterization of the basic NPM formula as ‘autonomous entities with financial reporting and audit requirements’ (Power, 1999: 44). The audit explosion can be seen as a crucial and necessary part of NPM governance reforms insofar as the emphasis on increased accountability, output controls and government spending leads naturally to the common language of accounting, intensification of financial and non-financial information flows and the use of CBA as a generalized method of calculation. In other words, as the state becomes increasingly and explicitly committed to an indirect supervisory role, auditing and accounting practices assume a decisive function. Correspondingly, the performance audit appears as the result of direct and indirect influences back and forth between earlier auditing practices and NPM reforms modernizing, streamlining and in some cases minimizing the entire state apparatus (Pollitt et al, 1999: 22). Other effects notwithstanding, the principal legacy of NPM is performance management, and even if NPM is presumed dead on most other accounts, performance management is generally assumed to be alive and well.

It would appear, then, that the new technocracy is more defined by NPM than by network governance and NPG when it comes to performance management. Compared to network organization and risk regulation, the influence of NPM on the new technocracy is certainly also more apparent. However, post-NPM reforms and NPG are clearly not without effects in this case either. The central issue at stake in this version of the NPM–NPG debate essentially comes down to diverging perspectives on how to generate and use knowledge as a source of innovation and of policy formulation. On one level, the issue has to do with the underlying standard of public value. In this respect, NPM tends to equate value with efficiency in a manner that broadly resembles earlier planning, programming and budgeting systems of performance management. In the narrowest sense, the underlying standard of efficiency is ‘value for money’ or similar concepts of basic economic efficiency and productivity. In a slightly expanded version, the basic values of NPM have been defined as the so-called ‘three Es’: economy, efficiency and effectiveness (Pollitt et al, 1999; Power, (p.190) 1999; Kickert, 2011). However, this also introduces a distinction between the first two Es and the last one (Pollitt, 1995). Whereas the former two remain a matter of input and output, broadly speaking, the latter involves a change of focus to quality or more specifically to outcomes and impact on users of public services and citizens in general. This schism is particularly evident in outcrops from the NPM paradigm such as ‘entrepreneurial governance’ (Osborne & Gaebler, 1992) and management of ‘public value’ (Moore, 1995) that aim to shift the focus from efficiency to outcome, impact and innovation in the development of public services.

This schism has then been further exacerbated by post-NPM criticism calling for ‘wider’ and ‘deeper’ understanding of performance information and broader standards of public value such as utility and sustainability of policy solutions and programmes (Van Dooren et al, 2015: 21). The post-NPM approach thus moves decisively beyond the three Es to a standard of problem-solving capacity, innovation and learning, thus defining the ultimate objective of performance management as ‘effectiveness in tackling the problems that the public most cares about; stretching from service delivery to system maintenance’, and the use of knowledge to improve public policy as a matter of ‘reflection, lesson drawing, and continuous adaptation’ (Stoker, 2006a: 44). Rather than efficiency or effectiveness, the post-NPM approach thus ties innovation to learning, either in the sense of instrumental (first order) learning with the aim of improving policy effectiveness through evidence of what works, or in the broader and more disruptive sense of (second order) ‘reflexive social learning’ associated with more paradigmatic changes and shifts in preferences, identities and foundational beliefs (Gilardi & Radaelli, 2012).

A related aspect of the NPM–NPG debate over the collection and use of knowledge as a source of policy innovation concerns the use of evaluation and the role of evidence-based policy. Although largely coinciding with the gradual transition from NPM to NPG, the proliferation of evaluation and evidence-based policy can, in one interpretation, be seen as a consolidation of the kind of performance management set in motion by NPM reforms (Van Dooren et al, 2015). Thus understood, evaluation effectively functions as an extension of the audit explosion and its particular form of steering and control for purposes of accountability, in effect turning audit society into an ‘evaluation society’ (Dahler-Larsen, 2012). However, the turn to evaluation and evidence-based policy can also be seen to present a rather different version of performance management and a way of learning from evidence more in line with the experimentalist agenda, that is, a form of performance (p.191) management invested less in the provision of performance information for purposes of measuring efficiency and providing accountability than in the pursuit of experimental evidence as part of a process of policy innovation based on instrumental and/or socially reflexive learning. While these two approaches can be hard to disentangle in current performance management, the latter can reasonably be seen as part of the post-NPM development of the new governance and NPG in the broad sense.

In addition to the emphasis on policy innovation and learning on the level of public values and evaluation systems, the post-NPM approach to performance management is also, to some degree, a project of cross-fertilization with risk management. On a more general level, this reflects the logic that performance management often functions as a powerful tool of risk processing (Power, 1999). The more operational expression of this coupling of risk and performance management can be called change management or change governance, focused on the need for management of change and innovation created by technological, economic, political and social changes (Osborne & Brown, 2005). The focus of change management and governance, correspondingly, is the development of individual, organizational and systemic capacities for change, learning and adaptation under circumstances of unpredictability and uncertainty, parallel to the search for individual, organizational and societal resilience in risk management. However, a more apparent and pervasive connection has been established between the NPG version of performance management and the involvement of stakeholders.

This intersection of network governance and performance management revolves around an attempt to develop an adequate post-NPM ‘management response’ that combines an ‘evidence-based approach’ to policy innovation and effectiveness with the idea of ‘stakeholder involvement’ as a basis for efficient and legitimate decisions and the pursuit of active citizen ‘involvement’ in, and ‘endorsement’ of, public policy solutions (Stoker, 2006a: 49). The result is embodied in the idea and practice of performance governance – an ‘interactive’ and ‘hyper dynamic’ form of performance management, based on a recognition of the need to ‘organize the public sector to allow for citizens and customers of public services to participate in the whole policy cycle. This means that citizens are involved in co-designing, co-deciding, co-producing and co-evaluating public services in society’ (Bouckaert & Halligan, 2008: 189). Such performance governance ranges from ‘co-production’ of public services (Brandsen & Pestoff, 2006) to ‘collaborative innovation’ in public policy more generally (Sørensen & Torfing, 2016; Torfing et al, 2019). This approach (p.192) clearly draws on broader ideas about networks as a source of critical information about policy problems and services vital to learning, innovation and the development of sustainable policy solutions in a wider sense (Goldsmith & Eggers, 2004; Eggers, 2005; Agranoff, 2007; Ansell & Gash, 2008; Goldsmith & Kettl, 2009; Ulibarri & Scott, 2016).

However, performance governance also links stakeholder inclusion and citizen involvement more directly to innovation and ongoing improvement of public policies and services. Indeed, performance governance can be seen as a model for policy innovation and improvement based on a combination of the experimental method and citizen involvement (Stoker & John, 2009; John et al, 2013). This model, in turn, rests on a claim about a natural affinity between the role of evaluation and experimentation in the improvement of policy design and the need for strategic guidance of governance networks under conditions of increased complexity and uncertainty, placing significant responsibility ‘upon policy experimentation and evaluation as key institutional practices in interactive governance to provide the basis for reflexive social learning’ (Sanderson, 2002: 99). In addition to the focus on stakeholder involvement, the emphasis on policy learning in the NPG version of performance management also overlaps significantly with the broader design principles of MLG, which can thus be seen as mechanisms for ‘policy transfer’ on and between all levels of the MLG system (Evans & Davies, 1999). I conclude the chapter by discussing two examples that reflect both the internal logic of performance management and its intersection with risk regulation and network organization in the new governance.

Experimentalist governance: the EU and beyond

One of the more apparent expressions of the new governance approach to policy innovation, learning and experimentalism is so-called experimentalist governance. Experimentalist governance revolves around the idea of a distinct architecture for systematic policy learning and design based on three general design principles. First, experimentalist governance architectures combine centrally stated goals, frameworks and targets with autonomy for locally developed solutions in order to accommodate diversity of particular conditions. The second design principle of experimental governance architecture is the provision of a mechanism for coordinated learning from local experiences through regular and systematic comparison of locally adopted solutions, based on more or less extensive lists of measures (p.193) and indicators, peer review, supervision and plans for improvement. Third, both means and ends are considered provisional and subject to change in the light of new evidence and experience, thus allowing for instrumental policy learning within existing goals as well as socially reflexive or ‘double loop’ learning where fundamental goals, values and frameworks are questioned and potentially revised (Zeitlin, 2015: 10).

Originally extrapolated from features of policy coordination and development in the EU associated with the so-called ‘open method of coordination’ (OMC), the architecture of experimentalist governance rests on the institutional features of the MLG system (Szyszczak, 2006). However, the focus is on the iterative process of policy innovation and learning within the basic structure of any ‘tiered’ or ‘nested’ governance system with two or more levels of decision-making power and autonomy. For the same reason, experimentalist governance is by no means exclusive to the EU, although it can be considered an early and leading adopter of experimentalist governance due to the particular combination of policy expansion, uncertainty and internal diversity (Sabel & Zeitlin, 2008). More generally, however, the development of experimental governance architecture can be seen as a ‘widespread response to turbulent, polyarchic environments, where strategic uncertainty means that effective solutions to problems can only be defined in the course of pursuing them, while a multi-polar distribution of power means that no single actor can impose her will on others without taking into account the views of others’ (Zeitlin, 2015: 11). While this description increasingly also fits national governance, architectures of experimentalist governance have been identified and studied mostly in the transnational realm.

In the case of the EU, experimentalist governance has developed well beyond the OMC and its use in areas such as macroeconomic policy guidelines, the European Employment Strategy and social inclusion. In the current state of affairs, experimentalist governance architecture has become an institutionalized feature of the EU across a variety of policy domains such as environmental policy (in particular water management and industrial emissions regulation), consumer protection, food, drug and water safety (genetically modified organisms, for example), energy and telecommunications, finance, justice, security, data privacy and fundamental rights (Sabel & Zeitlin, 2008; Zeitlin, 2015). Moreover, the EU more or less systematically seeks to extend experimentalist governance architecture to non-member states as part of its external and extended governance efforts. Outside the EU, a number of ‘transnational experimentalist regimes’ with architectural features similar to the EU have developed across a similarly wide policy (p.194) spectrum. Examples include the Montreal Protocol for the protection of the ozone layer, the Forest Stewardship Council for sustainable forestry, the Financial Stability Board, the UN Convention on the Rights of Persons with Disabilities and the Global Food Safety Initiative (Zeitlin, 2015).

As is clear from these examples, there is a considerable overlap between the development of experimentalist governance architectures and core risk policies such as environmental protection, consumer protection and critical infrastructures. Indeed, experimentalist governance architecture is in this sense an attempt at risk regulation in a situation where strategic uncertainty has overwhelmed the capacities of traditional regulation, in some cases drawing direct inspiration from systems of risk management and risk regulation regimes (Sabel & Zeitlin, 2012). In line with the original debate on the OMC, the design principles of experimentalist governance architecture are often viewed as a form of ‘soft law’, albeit in various combinations with hard law. However, experimentalist governance remains rather indistinct as a form of regulation and a risk regulation regime in a narrower sense: there is no overarching regulatory strategy or distinct orientation towards precaution, laissez-faire or resilience building in response to risk, uncertainty and turbulence. What is distinctive about experimentalist governance is its particular architecture and iterative procedure of policy innovation and learning from diversity, based on continuous and systematic evaluation, comparison and supervision.

Although there is an element of scoring, identification of top performers and so-called naming and shaming to experimentalist governance, the principal purpose is not measurement of efficiency or formal accountability, but rather effectiveness in dealing with (wicked) policy problems. Information is not compiled for purposes of broad publication or legitimation of government performance to the general citizenry, but for supervised peer review and exchange. Moreover, data compilation and calculation are relatively simple. Although the list of indicators according to which particular solutions are measured and compared can be extensive, experimentalist governance does not rest on complex models, advanced calculation methods or formal intellectual technologies of decision making. Rather, governance processes are designed to ‘systematically provoke doubt about their own assumptions and practices; treat all solutions as incomplete and corrigible; and produce an ongoing, reciprocal readjustment of ends and means through comparison of different approaches to advancing general common aims’ (Zeitlin, 2015: 3). This approach is experimental, so the argument goes, in the manner of American (p.195) pragmatism and John Dewey (Sabel, 2012). Extending this point, Campbell’s experimental learning society is certainly also an ideational forefather of experimental governance.

In the manner of performance governance more generally, this emphasis on evaluation and experiments goes hand in hand with a focus on stakeholder inclusion and collaborative governance. Indeed, experimental governance is viewed as a decisively post-NPM approach, relying on the vertical and horizontal links of the MLG architecture (Sabel & Zeitlin, 2010). This means that networks are, first and foremost, networks of public authorities formally included at the national and/or local level of MLG systems. Correspondingly, private stakeholders included in the wider networks of transnational experimentalist governance are more or less always high-level interest organizations, business interests and well-organized NGOs within the particular policy domains in question. While ordinary citizens may, of course, be involved in local experimentation with policy solutions, they are generally absent from the higher levels of the MLG system.

In this respect, experimentalist governance is subject to the well-known problems of democracy beyond the national level and the democratic deficit in the EU. Nevertheless, experimentalist governance also reflects the broader claim to ‘near-democratic’ legitimacy advanced by the new governance. Although experimentalist governance conforms to neither the ‘traditional canon’ of input nor of output legitimacy, it is deemed ‘normatively attractive’ due to its ability to accommodate diversity and provide ‘policy space’ for local solutions (Zeitlin, 2015: 11). More generally, experimentalist governance is seen to provide a form of ‘directly deliberative polyarchy’, based on a deliberative procedure where administrative authorities are forced to justify their solutions in light of comparable choices made by peers and richer performance information about possible alternatives than is available in traditional forms of hierarchical governance (Sabel & Zeitlin, 2008; 2010). Ultimately, however, such claims reflect the basic legitimation strategy of the new technocracy: equating less bureaucracy with more democracy.

The focus on MLG systems means that experimentalist governance architectures have rarely been analysed in the national context, and then mostly in federal systems such as the United States. However, experimentalist governance is broadly comparable to processes and mechanisms of performance governance at the national level associated with co-production, public sector innovation and collaborative innovation. Although couched in a somewhat different terminology, (p.196) the basic architecture of experimentalist governance is clearly visible in the image of policy innovation as a ‘design process’ where ideas and solutions are ‘tested and redesigned until they work in a satisfactory way and produce desirable outcomes’ through ‘pragmatic methods of experimentation and trial and error’ and ‘iterative rounds of inspiration, ideation, selection and implementation’ (Ansell & Torfing, 2014: 44). In contrast to the emphasis on systematic evaluation, this design process is clearly more open-ended and focused on creative and reflexive learning rather than instrumental policy learning (Stoker & John, 2009; Stoker, 2010). However, the basic objective is still problem-solving capacity, utility and sustainability. Being an extension of collaborative governance and NPG, this approach to policy innovation and change places greater emphasis on stakeholder involvement and on the creation and management of collaborative forums around the design process, in a way that broadly reiterates the logic of the institutional design and hands-on management of networks (Torfing et al, 2019).

Experimenting with citizens: nudging and the construction of choice architecture

A rather different but highly significant variation of experimentalist governance can be found in the nudging agenda spearheaded by Thaler and Sunstein’s (2009) widely debated manual on the improvement of ‘health, wealth and happiness’ through the use of ‘nudges’ and the design of ‘choice architecture’ around individual welfare choices across a wide array of domains such as health, environmental policy, savings and investment, and education. In a broad sense, the nudging agenda is merely an expression of a wider movement towards behavioural public policy, meaning that it can aptly be described as ‘the basic manual for applying behavioral economics to policy’ (Kahneman, 2011: 372). On the one hand, the success of the nudging agenda and the underlying discipline of behavioural economics can be taken as evidence of the changeable nature of econocracy and the fact that the ‘role of economic experts has been expanded beyond simply designing policies to influencing the decisions of citizens’ (Earle et al, 2017: 17). On the other hand, however, the nudging agenda is also so far removed from the original parameters of the econocracy that the only remaining link is a vague connection with the cost-benefit ‘revolution’ (Sunstein, 2018). The other revolutions claimed by the nudging agenda, however, have less to do with econocracy.

Nudging is, first and foremost, part of the behavioural revolution and the broader movement towards ‘behavioural public policy’ (Oliver, (p.197) 2013; Shafir, 2013). Indeed, key proponents of nudging have associated themselves rather straightforwardly with this agenda (Halpern, 2015; John, 2016). Behavioural economics has thus been instrumental in developing and testing the core claim of the nudging agenda: that public policy can be significantly improved by replacing standard assumptions about rational choice with consistent attention to the psychological heuristics, biases and mechanisms affecting individual welfare choices (Halpern, 2015; John, 2016; Madrian, 2014). In the place of rational choice, the nudging agenda operates with a distinction between two systems of cognition and decision making called the ‘automatic system’ and the ‘reflective system’ by Thaler and Sunstein (2009: 21), but known more generally as ‘system 1’ and ‘system 2’ in behavioural economics (Kahneman, 2011; Thaler, 2015), and ‘dual process theory’ in psychology (Evans & Stanovich, 2013). The cognitive processes of system 1 are fast, uncontrolled, unconscious, effortless, associative and skilled, but also based on heuristics, cues and shortcuts responsible for a plethora of biases and cognitive mistakes. The processes of system 2 are slow, controlled, effortful, logical, calculating and rule-following, and in this sense a precondition for reflection and reasoned decision-making.

The distinction between system 1 and system 2 has been aptly described as ‘a simple dichotomy common to all variants of behavioural theory: that we possess both “reflective” (rational) and “automatic” (emotion-driven) “systems”, arguing that the automatic system repeatedly wins out’ (Leggett, 2014: 5). In behavioural psychology and economics, the distinction serves as a common reference point for experimental research generating an ever-expanding list of biases and cognitive problems created by system 1 (Kahneman, 2011; Shafir, 2013). Concrete nudging interventions may or may not include specific assumptions about such biases, but in general the distinction between systems 1 and 2 serves to distinguish two modes of nudging relying on rather different techniques. The ideal path of intervention for choice architects is activation of system 2 in order to correct for the cognitive flaws of system 1 and generate more deliberate, reflective and reasoned welfare choices. In this group we find three techniques allowing for different degrees of free choice and free ‘thinking’: mapping, feedback and social influence. Nudging can, however, also target the various biases, heuristics and shortcuts of system 1 directly and thus seek to programme choices and thinking with increasing degrees of automation through priming, framing and gaming, as is more or less routinely done in electoral and commercial campaigning (Esmark, 2019).

(p.198) The application of such techniques in public policy is clearly not limited to the traditional domain of economic policy, nor do these techniques rely extensively on economic knowledge, data or forms of calculation. The more formal variants of behavioural economics do make use of game theory to some degree, but at its core behavioural public policy simply attempts to design policy in accordance with the assumed existence of cognitive flaws and biased decision-making leading to suboptimal welfare choices about, inter alia, food consumption, exercise, education, savings, energy conservation and pro-environmental choices more generally. In other words: ‘just as an engineer with a better understanding of air flows and wind resistance can use this knowledge to design a more economical car, a better turbine, or faster plane, so we try to use behavioural insights to improve the design of lots of different policy levers’ (Halpern, 2015: 318). Good policy design, thus understood, also requires application of CBA where both costs and benefits are understood and calculated broadly, including material wealth, security, health and happiness (Halpern, 2015; Sunstein, 2015).
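
The arithmetic of such a broadly construed cost-benefit test can be sketched, purely as an illustration rather than as Halpern’s or Sunstein’s own formalization, as a discounted comparison of monetized and non-monetized welfare effects, where the benefit terms deliberately include health and wellbeing alongside material wealth:

\[ NB \;=\; \sum_{t=0}^{T} \frac{B_{t}^{\mathrm{wealth}} + B_{t}^{\mathrm{health}} + B_{t}^{\mathrm{wellbeing}} - C_{t}}{(1+r)^{t}}, \qquad \text{adopt the intervention if } NB > 0. \]

The point of the sketch is simply that once benefits are defined this broadly, the familiar discounting machinery of CBA carries over unchanged; the contested work lies in monetizing the non-material terms.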

This approach to policy design is both generally and specifically linked to risk regulation based on the principle of resilience (Sunstein, 2002b). Nudging and the construction of choice architecture involve the use of policy instrumentation and tools in a way that is highly averse to traditional regulation, legislation and prohibitions. This is reflected in the very definition of a nudge as ‘any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives’ (Thaler & Sunstein, 2009: 6), and ‘a means of encouraging or guiding behaviour, but without mandating or instructing, and ideally without the need for heavy financial sanctions … It stands in marked contrast to an obligation; a strict requirement; or the use of force’ (Halpern, 2015: 22). More generally, the nudging agenda is in this sense based on a government–citizen relation modelled on the idea of reflexivity (Leggett, 2014). Policy design as nudging and construction of choice architectures is essentially meant to correct the flaws in the risk calculi exercised by individual citizens in their welfare choices. In this way, nudging trains ‘self-reflexivity’ and strives to build resilient citizens who adjust their behaviour based on continuous assessment and calculation of risks.

Second, nudging is also part of the informational revolution. Nudging is akin to established communicative policy tools such as notification, moral suasion, persuasion, exhortation and indeed more or less traditional public information campaigns (Vedung & van (p.199) der Doelen, 1998; Howlett, 2009; Rice & Atkin, 2013). However, nudging applies such tools as a form of ‘smart information provision’, adding behavioural insights and new media to well-established tools such as the public information campaign and canvassing (John, 2013, 2016). In similar fashion, nudging has been characterized as a form of ‘behaviourally shaped informing’, meaning a revision of established tools aided not only by behavioural insights, but no less so by new media, digital government and big data ‘shaping and enhancing the power of nudges’ (Halpern, 2015: 180). As such, nudging is clearly a form of communicative governance displaying all the hallmarks of a network state operating in the informational flows of network society (Esmark, 2019). Indeed, the job of a choice architect consists in the deliberate construction of informational networks and flows around specific welfare choices within particular policy domains.

The intersection between nudging and network governance also extends, albeit to a lesser degree, to collaboration and the inclusion of stakeholders. Although most nudging interventions are designed specifically to correct welfare choices and thus target individual citizens as objects of regulation, the expansion of the broader nudging agenda into the domain of so-called ‘think’ strategies accurately reflects the ambitions of collaborative governance in its application of behavioural insights to political participation and deliberation rather than individual choice (John et al, 2013). In contrast to the choice-correcting function of most nudging interventions, the purpose of ‘thinking’ interventions is to create institutional spaces for participation and deliberation, shared policy platforms and learning in a manner that largely amounts to democratic network governance, citizen-driven innovation and co-production (Mathur & Skelcher, 2007; Edelenbos et al, 2013).

Last but not least, the nudging agenda has positioned itself as the spearhead of an ‘experimental revolution’ in public policy (Halpern, 2015). The experimental approach of the nudging agenda is inherited from the roots of behavioural economics in cognitive psychology, leading to an even split between laboratory and field experiments among the 156 nudging studies carried out so far (Szaszi et al, 2018). In more practical terms, the source of the experimental revolution is the development and consolidation of evidence-based policy in the field of health and medicine. Referring to the case of the United Kingdom, the consolidation and spread of the ‘what works approach’ from health policy (starting with the creation of the National Institute for Clinical Excellence in 1999) to other policy domains have been described somewhat programmatically but aptly as the (p.200) ‘rise of experimental government’ based on a commitment to ‘radical incrementalism’, and as a process of ‘industrializing the experimental approach’ (Halpern, 2015: 281). The ‘industrial’ aspect refers to the creation of so-called ‘clearing houses’, which collect, compare and disseminate accumulated evidence on the impact of different types of intervention and policy tools. Reflecting Campbell’s vision of public policy as social experimentation more or less to the letter, experimental government involves a commitment to plan, execute and evaluate all policy interventions, big or small, as experiments to the greatest extent possible. Where the gold standard of randomized controlled trials cannot be applied, the experimental logic should still be followed as far as practically possible (Stoker & John, 2009; Stoker, 2010).
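
The bare logic of such a trial can be illustrated with a minimal sketch, offered here as a hypothetical example rather than the protocol of any actual nudge unit: participants are randomly split into treatment and control groups, and the effect of the nudge is estimated as the difference in mean outcomes between the two.

    import random
    import statistics

    def run_nudge_trial(participants, outcome, label="hypothetical nudge"):
        """Minimal sketch of a randomized controlled trial of a nudge.

        participants: list of units to be randomized (people, households, letters sent)
        outcome: function mapping (participant, nudged: bool) to a numeric outcome
        label: name of the intervention, used for reporting only
        """
        random.shuffle(participants)                       # random assignment (shuffles in place)
        half = len(participants) // 2
        treatment, control = participants[:half], participants[half:2 * half]

        treated = [outcome(p, True) for p in treatment]    # outcomes under the nudge
        untreated = [outcome(p, False) for p in control]   # outcomes without it

        # Average treatment effect: difference in group means.
        effect = statistics.mean(treated) - statistics.mean(untreated)

        # Simple standard error of the difference in means for independent samples.
        se = (statistics.variance(treated) / len(treated)
              + statistics.variance(untreated) / len(untreated)) ** 0.5

        return {"intervention": label, "effect": effect, "standard_error": se}

Real trials obviously add power calculations, pre-registration and subgroup analysis; the sketch only captures the comparative core that ‘clearing houses’ then accumulate and compare across interventions and policy tools.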

This is a rather different version of learning: learning from evidence and continuously improving policy, rather than social engineering and planning based on an axial principle of purely theoretical knowledge and corresponding technologies for the calculation of optimal choices. Optimal choices are certainly still being calculated by proponents of the nudging agenda and choice architects, and they may indeed rely to some degree on old mainstays of earlier techno-bureaucracy such as game theory. Moreover, the use of CBA, albeit in a broad and flexible sense beyond efficiency and value for money, is more or less endemic to the nudging agenda and the broader movement towards behavioural public policy. However, the nudging agenda also adheres to a very specific understanding of scientific public policy as the design of interventions based on and providing experimental evidence in a continuous search for better solutions, public value and general welfare. This logic ultimately runs counter to the axial principle of knowledge society and suggests a reconfiguration of the state/society nexus in the image of an experimental learning society. The consolidation of experimental government, correspondingly, does not depend on large-scale systems ensuring value for money, but rather on the expansion of the experimental logic beyond the various ‘nudge units’ to more and more policy interventions and on the creation of systems for the accumulation and dissemination of experimental evidence. The result, as indicated by Sunstein, will be decisively technocratic, but also less bureaucratic and more democratic.

Conclusion

The preoccupation with learning from evidence and the continuous improvement of public policy in current performance management goes to the core of technocratic scientism, measurement, calculation (p.201) and the culture of objectivity. Although the technocratic idea of government by science has its roots in engineering and was pushed further by the new role of physics, chemistry and mathematics in the post-war alliance of science, technology and politics, economic knowledge and forms of calculation have been most instrumental to technocratic influence on public policy. In this dimension at least, industrial technocracy was first and foremost an ‘econocracy’. The kind of performance management exercised by the new technocracy, however, is not just an extension of the econocracy. For one, this has to do with the fact that the econocracy, at least understood restrictively, is to some extent limited to the domain of economic policy. Even when the econocracy is understood more expansively as a form of decision making based on the application of CBA, however, the new technocracy has moved beyond the formula of industrial econocracy.

On the one hand, the basic principle of cost-benefit calculation is used more pervasively (and proudly) than ever, even to the extent that we are in the midst of a cost-benefit revolution, according to some. On the other hand, however, CBA is simply an anchor point for processes of performance management that are not based on economic knowledge or techniques in a narrow sense, if at all. The new technocratic use of performance management has in this sense moved beyond industrial econocracy in two ways. First, through the creation and management of systems designed to calculate efficiency and enable auditing, inspection and public accountability. Such systems may be considered an extension of earlier econocracy in the domain of accounting and budgeting practices, but they are also much more expansive and push further into every aspect of organizational life. Second, performance management can be pursued more as a matter of creating and managing evaluation systems designed with an eye to public value, problem-solving, experimentation, innovation and public participation. Here, we have moved more decisively beyond the scope of econocracy and into the avant-garde of the new technocracy: experimentalist governance treating social engineering as a matter of social experimentation in the learning society. (p.202)