Performance measurement in Canadian employment service delivery, 1996-2000.
Grundy, John
Introduction
Performance measurement is at the centre of public sector reform
initiatives across a range of jurisdictions. Establishing performance
measures and benchmarks, tracking organizational activity and publicly
reporting results are now essential tasks for organizations under
mounting pressure to demonstrate value for money. The current emphasis
on measuring results has transformed the administration of employment
services for the unemployed in many jurisdictions, including Canada.
From the early 1990s onward, the OECD stressed rigorous results
measurement of employment service delivery as a key component of active
labour market policy, and it became a major site of policy transfer and
expertise in the area. While the performance measurement of employment
services in the U.S., Europe and Australia is well documented (Kerr,
Carson and Goddard 2002; Nunn, Bickerstaffe and Mitchell 2009; Soss,
Fording and Schram 2011; Weishaupt 2010; Brodkin 2011), the practice in
Canada remains under-explored.
This paper presents a historical case study of an employment
service performance measurement system implemented by Human Resources
Development Canada in the mid-1990s, known as the Results-Based
Accountability Framework (RBAF). Adopted at the height of the federal
government's embrace of new public management (NPM), and
complementing labour market policy reforms associated with the
Employment Insurance (EI) Act (1996), the RBAF was a key means for
embedding a rapid re-employment orientation in service delivery. It
altered the terms for measuring public and third-party employment
service providers. In place of process-based measures such as the number
of individuals served, the RBAF imposed two primary results indicators:
the number of individuals returned to work and the savings in unpaid EI
benefits generated as a result of re-employment. Administrators
repeatedly asserted that these metrics would engender a culture of
accountability for results.
The paper approaches the RBAF through the analytical lens of
Foucauldian governmentality, which is defined most succinctly as
"the conduct of conduct" (Foucault 1991: 93). Researchers
across many disciplines have elaborated Foucault's initial
formulation of governmentality into a critical analytical approach that
illuminates any deliberate attempt to govern people and things. This
approach does not attempt to explain governance in terms of causal
variables such as parties, institutions, or economic forces. It remains
agnostic about questions of "why" and focuses instead on the
"how" questions of governance: how do the problems concerning
authorities in different sites emerge and how do they change over time?
Through what discourses and techniques do authorities seek to bring
about desired outcomes? As a pervasive technique of governance, the
practice of performance measurement is a prominent topic within
governmentality research. Scholars in this field emphasize how
performance measurement governs conduct at a distance by imposing new
forms of calculative scrutiny and self-surveillance. Numerous studies
stress the disciplinary capacity of performance measurement regimes to
"re-shape in their own image the organizations they monitor"
and fabricate calculating selves in the process (Shore and Wright 1999:
570; see also Miller 1994).
The analysis of the RBAF presented below departs from this image of
performance measurement, however. Its central claim is that the RBAF did
not gain the traction often implied in governmentality scholarship. The
efforts of administrators to impose calculability encountered a series
of dilemmas. These include the mundane but significant difficulties of
technical coordination which led many to doubt the RBAF's validity
from the outset; forms of contestation on the part of organizational
actors and external stakeholders invested in different terms of
measurement; and finally, growing recognition that the meaning of the
performance data was inherently ambiguous.
The RBAF's implementation difficulties warrant reflection on
the part of researchers of governmentality. A dominant theme in this
literature is the diffusion of neoliberal governmentality, a mode of
governance characterized by the entrenchment of logics of enterprise and
calculation, including auditing and performance measurement, in more and
more areas of social life (Rose 1996; Dean 1999). While undoubtedly
enriching our understanding of neoliberalism, studies in this vein are
under mounting criticism for too often attributing a false coherence and
effectiveness to governmental practices, and for making them
"appear settled and sometimes even complete in ways that they are
not" (Brady 2011: 266; see also O'Malley 1996; Mckee 2009;
Walters 2012). A number of scholars call for governmentality research to
focus less on programs of governance as expressed in official policy
reports and statements, and to pay greater attention to what Li (2007:
28) describes as "the beyond of programing": the refractory
processes that precede and invariably confound governmental ambitions.
This entails more attentiveness to the limits to governance posed by
difficulties of technical coordination, realities of political
contestation, and the limits and ambiguities of expertise. Analyses that
do so are yielding more complex accounts of the work of governing. They
show how governance practices that often appear in governmentality
scholarship as unproblematically implemented and successful in their
intended reach and effects are actually deeply contested, incoherent,
and often failure prone (Larner and Walters 2000; Higgins 2004; Howard
2006; Li 2007; McKee 2009; Best 2014; Brady 2011 and 2014). (1) Along
these lines, the case study presented here illustrates how performance
measurement appears very differently when we elevate rather than elide
these realities. It also shows how public administration research on
performance measurement has many insights that can inform inquiry into
the fragility and incoherence of calculative techniques of governance.
The analysis proceeds in the following manner. The first section
provides a brief overview of performance measurement and its growing use
in employment service delivery. Turning to the empirical study, section
two traces the implementation of the RBAF and its embroilment in
technical difficulties, political contestation and the ambiguities of
measurement. Drawing out the practical and theoretical implications of
the analysis, the conclusion points to the limits of performance
measurement as a means of coordinating employment service delivery. It
also confirms recent calls for more variegated accounts of governance in
governmentality studies.
A final note on methods is warranted here. The case study is based
on the analysis of department reports and administrative records
acquired through a series of requests made under the Access to
Information Act (2) and research conducted in the now defunct Library
and Archives of Human Resources and Skills Development Canada (since renamed
Employment and Social Development Canada). This yielded extensive
departmental documentation including technical and consultancy reports
on various aspects of employment service delivery, meeting minutes,
department memos, presentation notes and other correspondence on the
RBAF and employment services more generally. For another perspective on
administrative reforms to employment services during the nineties, I
consulted back issues of a newsletter produced by members of the Canada
Employment and Immigration Union (CEIU), and housed at the union's
Toronto office. Documentary research was supplemented by
semi-structured, anonymous interviews with two HRDC staff at different
levels of the organization. (3) Interview participants were asked
questions relating to the implementation of the RBAF and other reforms
to employment service delivery in the 1990s.
Conceptualizing performance measurement
Performance measurement has both a short and a long history. On one
hand, its diffusion reflects the ascendance of NPM over the past three
decades in nearly all advanced industrialized countries (Pollitt and
Bouckaert 2000). Premised on a dim view of Weberian bureaucratic
administration, NPM seeks to remake public sector bureaucracy in the
image of the private sector through measures such as performance
measurement and performance pay, cost-unit accounting, competitive
tendering of government services, and privatization. As Brodkin (2012:
5) notes, few managerial techniques have been as widely replicated as
performance measurement. On the other hand, the current enthusiasm for
performance measurement reflects a preoccupation with the efficiency and
effectiveness of government dating back more than a century. It is the
most recent in a long list of managerial innovations including
scientific management of the Progressive Era, management by objectives
of the fifties, and experiments in cost-benefit analysis during the
sixties and seventies, which sought to bring rigorous quantitative
visibility to all government functions "in search of the public
sector approximation of private enterprise's 'bottom
line' and for the operational control and clarified political
choices consequent thereon" (French 1984: 33).
Studies based on Foucault's lectures on governmentality open
up new ways to interpret the diffusion of performance measurement. This
scholarship starts from a broad understanding of governance as "any
more or less calculated and rational activity, undertaken by a
multiplicity of authorities and agencies, employing a variety of
techniques and forms of knowledge, that seeks to shape conduct ..."
(Dean 1999: 11). Governmentality studies investigate the forms of
expertise and discourses involved in defining problems to be solved, the
often mundane technical practices and procedures that enable
governmental ambitions to become practical interventions, and the modes
of selfhood and subjectivity fostered through governance projects
(Walters 2000). A central theme in this literature is the diffusion of
neoliberal governance characterized by logics of enterprise and
competition, and the proliferation of techniques such as contractualism
and performance measurement that aim to autonomize and responsibilize
organizations and actors at a distance (Rose and Miller 1992; Miller
1994). From the perspective of governmentality studies, performance
measurement appears as a key technique of neoliberal governance that
enables the activities of widely dispersed actors to be "made
inscribable and comparable in numerical form, in figures that can be
transported to centres of calculation, aggregated, related, plotted over
time, represented in league tables, judged against national averages,
and utilized for future decisions about the allocation of contracts and
budgets" (Rose, 1999: 153; Larner and Le Heron 2004).
Governmentality scholarship also emphasizes how performance measurement
can induce self-monitoring on the part of scrutinized individuals and
organizations. Those under performance measurement may internalize its
norms and values and conduct themselves accordingly (Miller 1994;
Lambert and Pezet 2012). Studies of the adoption of performance
measurement in sites such as universities, hospitals, cultural
organizations and social service agencies emphasize its capacity to
delineate what activities constitute performance, and to exclude others
from the organizational record (Shore and Wright 1999; Doolin 2004;
McDonald 2006; Suspitsyna 2010).
Governmentality-based approaches to performance measurement are
highly relevant for understanding changes to the administration of
employment services for the unemployed. The rigorous performance
measurement of employment services is a key plank of reforms associated
with the "activation" paradigm of labour market policy, now
dominant in nearly all advanced welfare states (Nunn, Bickerstaffe and
Mitchell 2009; van Berkel 2009; Weishaupt 2010; Soss, Fording and Schram
2011; Brodkin 2012). According to the activation paradigm, so-called
passive income security programs such as Employment Insurance produce
work disincentives for unemployed individuals and harmful labour market
rigidities. It calls on governments to activate the unemployed through
"help and hassle" employment service measures oriented
primarily toward rapid re-employment. According to the activation
paradigm's logic, traditional bureaucratic administration is
inadequate to effect this transformation in the governance of the
unemployed. Instead, it calls on employment service providers to embrace
a new management culture in which "outcomes are tracked, program
impacts estimated and less effective programs are replaced with more
effective ones" (OECD 2005: 214). Soss, Fording and Schram (2011:
1205) characterize this new regime of activation as entailing the
"interplay of paternalist systems for disciplining clients (e.g.,
sanctions) and neoliberal systems for disciplining service-providers
(e.g., performance management)". The results-based measures used in
most jurisdictions relate to the number and speed of job seekers
re-employed and the off-flow of individuals from benefits as a result of
service interventions. Performance measures may also be established for
specific populations such as the long-term unemployed, youth or older
workers. Many jurisdictions use performance measurement along with
performance pay, competitive tendering, customer satisfaction surveys or
the engineering of quasi-markets of employment service providers.
Reflecting the disciplinary capacity of performance measurement that
governmentality scholarship highlights, policy makers often set
performance benchmarks continually higher to induce service providers to
increase job placements. In a study of U.S. welfare-to-work programs,
Schram et al. (2010) describe how performance measurement operates as a
hierarchical chain of disciplinary relationships that runs from the
federal government through lower levels of government, to individual
offices, case workers, and ultimately to the individual client: "At
each point in this cascade, benchmarks for outcomes are established and
monitored, and managerial techniques, incentives, and penalties are used
to discipline actors below" (Schram et al. 2010: 746).
Breaking with overly systematized accounts of neoliberal
governmentality, recent scholarly interventions call for better
recognition of the fragile and uncertain work involved in constituting
centres of calculation, the difficulties of making disparate spaces and
subjects calculable, and the forms of contestation that arise from such
efforts (Higgins and Larner 2010a; Best 2014; Prince 2014). The large
public administration literature that details the technical challenges
and unintended consequences associated with performance measurement can
advance this new direction in governmentality scholarship on calculative
techniques. Numerous studies undertaken by public administration
scholars underscore the difficulties of management information system
design and utilization, and the effects such difficulties can have on
the interpretability and legitimacy of performance data (Doolin 2004;
Rist and Stame 2006). Other studies emphasize how performance
measurement can skew organizational activity toward those aspects that
are measured, often at the expense of the substantive objectives of an
organization, and often in conflict with administrative due process or
equity (Perrin 1998; Kerr, Carson and Goddard 2002; Radin 2006; Chan and
Rosenbloom 2010; Brodkin 2005 and 2011). Public administration
literature also highlights the fundamental ambiguity of measurement in
light of the thorny question of causality. In the context of social
service delivery including employment services, many factors beyond the
particular agency can be the cause of the observed outcomes, and the
more obvious those other factors are the less credible performance
measurement becomes (Mayne 1999: 7). As the following sections
illustrate, foregrounding these dilemmas within governmentality-based
analyses of performance measurement can yield more complex and
multifaceted accounts of this key technique of neoliberal governance.
Human Resources Development Canada's Results-Based Accountability Framework (RBAF)
The development of an employment service performance measurement
system reflected government-wide shifts in the administration of federal
bureaucracies during the nineties. The Liberal government, which was
elected in 1993, was deeply influenced by the tenets of NPM and put
questions of performance and efficiency at the centre of its agenda
(Ilcan 2009). Under the Liberal government, the Treasury Board assumed
new powers as a catalyst of managerial reform and implemented a
twice-yearly departmental performance reporting process. In this
context, employment service delivery increasingly stood out as an
evaluative challenge. Employment service outcomes are not directly
observable and are notoriously difficult to specify (Breslau 1998). The
primary measures of organizational activity that existed for the
employment service were input or process measures, such as the money
spent providing services or the number of clients served, which provided
no indication of outcomes. The Auditor General of Canada also criticized
the employment service over the lack of results-based assessment (Office
of the Auditor General of Canada 1988). In response to such criticism,
employment service administrators ramped up net impact evaluation of
service delivery in the late eighties. They also established a National
Working Group on the Impact of Employment Counseling, which explored
options for rendering frontline staff accountable for results
(Employment and Immigration Canada 1991 and 1993). In short, results
measurement was increasingly at the centre of service delivery reform
initiatives.
With the adoption of the new Employment Insurance (EI) Act in 1996,
officials at HRDC NHQ set out to institute a new performance measurement
system for Employment Benefits and Support Measures (EBSMs), which
included training, targeted wage subsidies, self-employment assistance,
as well as more short-term services including counseling, resume
preparation and group sessions. Discussions among officials over
possible performance metrics were framed by the priorities of the day.
Given the emphasis on rapid re-employment in both the influential OECD
Jobs Study of 1994 and the federal government's social policy
reform agenda, officials adopted the measures of 1) the number of
clients employed or self-employed as a result of a service intervention,
and 2) the amount of savings in unpaid EI benefits resulting from client
re-employment. Staff in HRDC's NHQ developed benchmarks for these
indicators using administrative data on service users and benefit
claimants in previous years. Regional targets were derived from these
benchmarks and distributed to regional headquarters in 1996. (4) A
computerized performance tracking system went online shortly thereafter
along with the EI Act's new employment services. For active EI
claimants who received services and returned to work before the end of
their benefit entitlement, the system credited the found-work count and
benefits savings to the office where services were provided. Clients who
attained employment after benefit exhaustion would count only for the
found-work measure, and would be captured through a follow-up telephone
survey. Data from service interventions were compiled at NHQ and posted
monthly on a departmental website. Office managers were encouraged to
use the data to monitor the performance of their offices. The intended
effect of this procedure was to induce a cultural change throughout the
organization, and to communicate the message that staff would be made
responsible for their performance in achieving results rather than
following administrative processes (HRDC 1996a: 2).
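The crediting rules described above can be made concrete with a short sketch. The following Python fragment is a hypothetical reconstruction for illustration only, not HRDC's actual system; the data structure, field names and function are invented, and only the two crediting rules are drawn from the departmental documentation.

    from dataclasses import dataclass
    from typing import Iterable, Tuple

    @dataclass
    class ClientOutcome:
        found_work: bool         # client employed or self-employed after a service intervention
        before_exhaustion: bool  # returned to work before the EI entitlement ran out
        unpaid_benefits: float   # dollars of EI entitlement left unpaid at re-employment

    def credit_office(outcomes: Iterable[ClientOutcome]) -> Tuple[int, float]:
        """Tally the two primary RBAF indicators for a single office."""
        returns_to_work = 0
        ei_savings = 0.0
        for outcome in outcomes:
            if outcome.found_work:
                # The found-work count is credited in all cases; clients who
                # found work only after exhausting benefits were captured
                # through the follow-up telephone survey.
                returns_to_work += 1
                if outcome.before_exhaustion:
                    # Benefit savings are credited only when re-employment
                    # occurs before the end of the benefit entitlement.
                    ei_savings += outcome.unpaid_benefits
        return returns_to_work, ei_savings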
HRDC's performance measurement system undoubtedly exerted an
influence over service delivery. Shortly following its implementation
along with the new EI Act, service delivery shifted toward a short-term,
rapid re-employment orientation. The proportion of service users
participating in short-term services quickly increased, and the cost per
participant fell sharply (Canada Employment Insurance Commission [CEIC]
1997 and 1999). One evaluation concluded that, "the emphasis on
short-term results has dominated the implementation of EBSMs" (HRDC
1998: 73). These effects are consistent with governmentality-based
accounts of the way performance measurement can discipline and reshape
organizational practice. Yet the implementation of the RBAF generated
other effects that cannot be accounted for within governmentality
narratives of discipline and surveillance. The RBAF was quickly
entangled in dilemmas of technical coordination, forms of political
contestation, and the indeterminacies of quantification. In turning to
these now, the case study is intended to help redress what Brockling,
Krasmann and Lemke (2011: 20) suggest is the failure of governmentality
research to adequately accentuate "the dislocations, translations,
subversions, and collapses of power with as much meticulousness as [its]
programs and strategic operations."
Problems with technical coordination
The case of the RBAF is instructive to scholars of governmentality
because it illustrates the difficulties involved in making performance
measurement workable in everyday practice. The mundane challenges of
technical coordination undermined the legitimacy and integrity of the
technology. Problems associated with data entry were perhaps the most
immediate challenge to the RBAF. The process of tracking results relied
on the standardized entry of client information in case management
software at the outset of a client's employability intervention.
Perhaps not surprisingly, an evaluation report indicated that a sizable
portion of frontline staff was not consistently carrying out this task
(HRDC 2001: 15). There were also a range of difficulties with client
follow-up surveys after the end of service provision. (5) Despite the
promise of adequate funding by NHQ, offices reported having scant human
and financial resources to devote to client follow-up (HRDC 1998).
Ensuring standardized data collection was made even more challenging
given the diverse arrangements with third-party agencies providing
employment assistance services. One administrator's assessment of
the management information system was that it was more sensitive to
changes in data entry and follow-up practices than it was to picking up
changes in client employment (Personal Interview, HRDC Administrator).
Such formidable challenges associated with office-level data entry
confirm what Higgins and Larner (2010b: 6) describe as the
precariousness of technologies of calculation that rely on the
standardization of activity across different sites.
Difficulties of data entry were compounded by growing concerns over
the possibility of gaming and creaming at the individual office level
(Working Group on Medium-Term Savings [WGMTS] 1999). Departmental
documents note two different strategies adopted by offices. Given that
performance results were only counted for case-managed clients (that is,
clients who had a return-to-work action plan entered in case management
software), some offices had begun to expand their case management
practices to "everyone who walks through the door" (HRDC 1997:
3). A computer record was thus generated in the hope of later
accumulating employment and savings counts. This effect of performance
measurement is common in employment service delivery as agencies seek to
accumulate performance points through unnecessary service provision
(Nunn, Bickerstaffe and Mitchell 2009: 15). On the other hand, there was
widespread concern that many offices had found ways to limit services to
those who facilitate performance achievement, essentially those without
more complex or multiple employment-related needs (WGMTS 1999). This
practice was acknowledged in the formative evaluation of EBSMs released
by HRDC in 1998. It conveyed service providers' concern that
"some organizations have adapted or changed the clientele they
served in order to obtain results. Consequently, community partners
believed that some clients were 'falling through the
cracks'" (HRDC 1998: 42-43). In response, HRDC's senior
management took measures to mitigate the risk of creaming. They produced
communications and organized field workshops admonishing office managers
and staff to maintain what they called a "balanced portfolio"
of clients. Their message was that reconciling short-term placement and
savings targets with equitable service provision was possible, but
required skillful decision-making on the part of frontline staff (HRDC
1996b: np). However, records indicate that a contingent of NHQ
administrators recognized that exhorting offices simply to avoid
creaming would have little effect (WGMTS 1999).
The RBAF therefore illustrates the tenuousness of officials'
efforts to impose new forms of calculability, a dynamic that remains
under-explored in governmentality literature. In this case, difficulties
related to data entry and growing suspicions of office-level gaming
quickly generated doubts among staff over the integrity of the
performance data, and the very possibility of measuring results.
Reflecting such difficulties, one administrative report noted
"confusion at the local level concerning how results are calculated
and, very importantly from management's point of view, what they
mean and how to use them once they have been reported" (HRDC 1998:
43). Attending to difficulties of technical coordination such as these
can productively complicate governmentality studies, and counter the
risk of reifying the coherence of governmental techniques such as
performance measurement.
Challenges to performance measurement
Examination of the RBAF's implementation can yield another set
of insights for researchers of governmentality into the contested nature
of performance measurement. Governmentality scholarship on performance
measurement tends to emphasize the formation of calculable spaces and
disciplined subjects rather than the forms of contestation that take
shape in and around calculative practices. As the case of the RBAF
shows, however, the power of officials to define what counted as
organizational performance and to embed this definition in the way
service delivery was measured was deeply contested from the outset.
The RBAF was implemented in a politically charged organizational
environment made increasingly turbulent by successive managerial
reforms. Its implementation followed a downsizing exercise amounting to
a twenty percent reduction in the department's full time staff and
involving much greater use of community-based and for-profit service
providers (Bakvis 1997; Good 2003). The Canada Employment and
Immigration Union (CEIU), an active union within the federal public
service (McElligott 2001), was deeply critical of these initiatives. The
newsletter produced by members of CEIU's Ontario branch, Paranoia
(a name inspired by the department's newsletter Panorama), reported
numerous times on how funding for government-run employment services was
being redirected to third-party providers including for-profit agencies
(CEIU Ontario 1994: 3). One vocal HRDC staff member stated that the new
contractual and performance based service delivery model was eliminating
the role of the employment counselor: "We've been in
situations where counselors are being sent out to train community
partners how to do our jobs ... there's no counseling going to be
left under this model. If you're just going around monitoring
contracts, then you're no longer a counselor" (CEIU Ontario
1997: 3). The National President of the CEIU later characterized this
period as one in which employment counselors were "reduced to
passive compilers of paperwork" (Meunier McKay 2005). These
developments were criticized by many staff whose sense of
professionalism remained closely tied to human service work rather than
NPM's rituals of verification (Personal Interview, HRDC Employment
Counselor 2008). Perhaps not surprisingly, as senior administrators went
out into the field to promote the new performance measurement framework,
many staff objected to the idea of being rendered accountable for the
outcomes of service users (Office of the Auditor General of Canada 1997;
Personal Interview, HRDC Administrator 2010). This reaction is
consistent with much public administration research that highlights the
negative effects of performance measurement on staff morale (Dias and
Maynard-Moody 2007; Diefenbach 2009). It also challenges simplistic
narratives, common in literature on neoliberal governmentality, of the
fabrication of calculating selves who self-govern in accordance with
calculative techniques.
A practice administrators adopted in the years following the
RBAF's implementation was to have individual Human Resource Centres of
Canada (HRCCs) establish their own numerical performance targets in
consultation with NHQ. This was intended to mitigate conflict likely to
arise from top-down, mechanical imposition of targets, and ideally to
facilitate local level ownership over the results measurement process.
Such ownership and participation did not extend to the initial
determination of the primary short-term oriented results measures, which
remained controversial. A common concern among staff was that the
short-term measures did not provide a way to account for the
intermediate steps many service users with more extensive needs were
required to undertake prior to securing employment (WGMTS 1999). Equally
problematic for many was the lack of any way to account for the quality
of work found in terms of duration or wages. Such concerns illustrate
how in complex systems of social welfare provision, there is often no
single obvious measure of outcomes, but instead, a range of differently
situated actors who are invested in different terms of measurement
(Paton 2003: 45). Such multivocality within governance requires careful
theorization in governmentality scholarship as it has important
implications for how governance plays out in everyday practice. As the
practice of having offices set their own numerical targets illustrates,
it often requires forms of negotiation, compromise, and some
degree of mutual accommodation (O'Malley 1996: 313; see also Brady
2011).
Amidst growing controversy over developments in employment service
delivery, the Unemployed Workers Council (UWC), established in 1992 by
the Toronto Labour and Building Trades Councils, organized a
twenty-seven city tour of Ontario with a former HRDC employee. The tour
sought to raise awareness about the increasingly exclusionary nature of
employment service delivery under the new EI Act and HRDC's
performance-based regime. The UWC claimed that the department's
emphasis on generating EI savings amounted to discrimination against the
disabled, immigrants, women, and others with special needs. It even
sought to initiate a complaint with the Canadian Human Rights
Commission. While the UWC failed in its bid to initiate a complaint, it
established a hotline and encouraged anyone who felt they were denied a
service unfairly to call (CEIU Ontario 1998: 3; Fort Frances Times
Online 1998).
The RBAF generated forms of contestation poorly captured in much
governmentality scholarship on performance measurement and neoliberalism
more generally. It was uneasily grafted onto an organizational context
characterized by a range of actors with divergent visions of service
delivery goals. Values of due process, equity and quality service
provision posed obstacles to the new performance based regime. The RBAF
thus underscores how techniques of neoliberal governmentality do not
simply extend themselves unproblematically across social and
organizational fields, steamrolling over past formations in the process.
Instead, as Brodie (2008: 148) argues, "[p]reviously cultivated
identities, political consensus, and cultural ideals ... constitute
obstacles to the promotion of a new governing order, and its particular
way of representing and intervening." Documentation of these
obstacles and their implications is necessary to avoid overstating the
reach and effects of calculative technologies associated with neoliberal
governmentality.
Limits of organizational knowledge within performance measurement
The RBAF facilitates a third insight for scholars of
governmentality concerning the role and limits of expertise in
governance. One of the central tenets of this literature is that
governance is a knowledge-intensive activity. Foucault's account of
the emergence of modern governmentality highlights the formation of
arrangements through which formal state apparatuses incorporated expert
knowledge in areas such as public health and statistics in projects of
social administration. Scholars of governmentality continue to explore
the interactions between political authorities and experts that assist
in governance. Many studies document an ongoing shift whereby experts
able to wield powerful know-hows of calculation and monitoring are
gaining influence over other forms of professional power established
over the 20th century by teachers, social workers, counselors, doctors
and others (Rose and Miller 1992; Isin 2002). While accounts of this
development tend to emphasize the increasing clout of calculative
expertise, a growing strand of governmentality research highlights the
persistence of ambiguity and incoherence within calculative practices
associated with neoliberal governance (Higgins and Larner 2010a; Best
2014; Prince 2014).
The RBAF exemplifies the ambiguities that can confound calculative
technologies. While promotional material from NHQ stressed the ability
of the RBAF to capture organizational results, many recognized that
there was no necessary relation between the work of staff and the
performance data recorded in the management information system. While
the RBAF could provide some information as to how an employment office
functioned, it could not indicate whether the outcomes recorded were a
result of service interventions and not the result of any number of
factors, including chance (HRDC [Strategic Evaluation and Monitoring]
1998). This fueled concern among both staff and management that the RBAF
was unfairly penalizing offices where the failure to meet placement and
savings targets reflected poor local economic conditions rather than any
deficiency in service delivery. Conversely, there was concern that it
was crediting offices with performance points that were more likely the
result of a buoyant local economy (WGMTS 1999). Given that the majority
of EI claimants do not exhaust benefits even when they do not receive
services, many questioned the logic of treating savings in unpaid
benefits as an attribute of service delivery and a measure of
performance (Personal Interview, HRDC Administrator 2010). These
dilemmas of attribution undermined the capacity of the RBAF to
discipline and reshape organizational practice in the ways typically
stressed in governmentality studies.
The deficiencies of the RBAF had been a serious concern for a
number of the department's program evaluators. They asserted that
determining the results of service delivery in a manner that met a bare
minimum of scientific legitimacy required a "net" impact
evaluation that could isolate program impacts from other influences.
Only in this way could officials determine the benefit that would not
have occurred in the program's absence. The concerns of program
evaluators reflected divisions between program evaluation, which
emphasizes methodological sophistication and is both time and resource
intensive, and performance measurement, which privileges managerial
utility over scientific rigour. Over the past few decades performance
measurement has gained prominence over the practice of program
evaluation given officials' preference for continual streams of
easily understood performance data that can facilitate managerial
control (Bastoe 2006; McDavid and Huse 2006).
A number of HRDC administrators well versed in program evaluation
established a working group to develop a measure of office performance
over the medium term. The group sought to devise an "analytically
meaningful operational measure of medium-term [EI] savings to help
mitigate the effects of undue reliance on short-term measures and
confusing signals the accountability regime was sending to HRCCs"
(WGMTS 1999: i). It settled on an evaluation method known as
difference-in-differences to determine medium-term "net" EI
savings. (6) This involved comparing EI use among claimants three years
before and three years after an employability intervention. It then
entailed a non-experimental exercise to estimate what claimants' EI
use would have been without receiving an employment service. Medium-term
"net" EI savings resulted if, over three years following an
employability intervention, claimants' actual EI use was less than
their estimated EI use in the absence of an intervention. According to
the working group, this method of calculation would correct for external
influences on employment office results such as local economic
conditions. By allowing for a three-year time-horizon in which results
could be documented, it would alleviate the pressure placed on frontline
staff to generate short-term results, and allow agencies to better
accommodate the needs of individuals and communities. For these reasons,
the development of medium-term measures was a priority among many
program evaluators, senior regional managers, office managers and staff
(WGMTS 1997a).
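The logic of the working group's method can be sketched in a few lines. The Python fragment below is a minimal illustration of difference-in-differences in its simplest form, using a matched comparison group to stand in for the non-experimental estimate of what claimants' EI use would have been absent services; the function and all figures are invented and do not reproduce the working group's actual calculations.

    def net_ei_savings(part_before, part_after, comp_before, comp_after):
        """Estimate medium-term "net" EI savings per claimant.

        part_before / part_after: mean EI benefits drawn by service
            participants in the three years before / after an intervention.
        comp_before / comp_after: the same means for a matched comparison
            group of claimants who received no employment service.
        """
        change_participants = part_before - part_after
        change_comparison = comp_before - comp_after
        # The comparison group's change proxies external influences such as
        # local economic conditions; only the difference between the two
        # changes is attributed to the service intervention.
        return change_participants - change_comparison

    # Invented figures: participants' average EI use fell by $2,000, but
    # non-participants' use also fell by $1,200, leaving $800 in net savings.
    print(net_ei_savings(9000, 7000, 8800, 7600))  # prints 800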
HRDC staff involved in the working group recognized the need to
present their message carefully to secure senior management support
(WGMTS 1997b). Performance measurement systems always advance certain
organizational interests over others, and changes to them can shift the
balance of organizational power. Administrative records indicate that
the response of senior management to the working group's proposal
was mixed. Some members of HRDC's Audit and Evaluation Committee
were unreceptive to the working group's report, and perceived its
method to be in competition with the existing RBAF (HRDC [Strategic
Evaluation and Monitoring] 1998: 3). Medium-term operational measures
might have allowed for an alteration of service delivery practices in
ways not consistent with the broader policy orientation of the Liberal
government toward short-term, rapid re-employment interventions. As
research on program evaluation frequently demonstrates, evaluation
methods at odds with the political preferences of policy makers often
remain marginal in policy deliberation (Weiss 1999). Nevertheless, some
among senior management did support efforts to develop medium-term
measures of performance (see Good 1999), and longer-term outcome
tracking was included in periodic employment service evaluations carried
out in the different provinces.
The point to stress here is that the RBAF did not bear out the
coherence usually attributed to performance measurement in
governmentality literature. The performance measurement system did not
generate a clear picture of organizational results as its measures were
widely recognized to be inherently ambiguous. As one report put it:
"both HRDC and [the] TB [Treasury Board] are in transition towards
a new government-wide accountability and performance measurement regime
that neither may be fully comfortable with, nor understands" (WGMTS
1999: 13). One administrator similarly recalled that, "as we went
to implement it, there was a lot of struggle ... if you pulled at these
measures, they weren't that robust" (Personal Interview, HRDC
Administrator, 2010). In this way, the RBAF gave rise to new ambiguities
around the effects of service delivery and the measurability of service
outcomes.
Beginning in 1996, Labour Market Development Agreements (LMDAs)
were established between the federal and provincial governments. The
agreements eventually transferred the administration of EI Act related
employment services to the provinces. This development saw the
externalization of the RBAF as a means of coordinating intergovernmental
accountability. Two of the three primary measures of provincial
performance in the LMDAs were drawn directly from HRDC's internal
performance measurement system: the number of EI claimants returned to
work and the EI savings generated as a result of service interventions.
The LMDAs incorporated the number of active EI claimants served as a
third measure. While the performance measures of the LMDAs are analyzed
elsewhere (Wood and Klassen 2009), it is important to note how they
reproduced the dilemmas of HRDC's internal RBAF. Critiques
initially leveled against HRDC's internal performance measures were
soon made of the LMDAs. As an administrator recalled: "[a]fter the
agreements were signed ... there was some legitimate criticisms that the
definition of found work wasn't rigorous enough or the benefits
[measure] wasn't meaningful" (Personal Interview, HRDC
Administrator 2010). These concerns underscore scholarly criticism of
public performance reporting as a mechanism for ensuring
intergovernmental accountability in Canada (Anderson and Findlay 2010;
Graefe and Levesque 2013).
Conclusion
HRDC's Results-Based Accountability Framework was a key
element of neoliberal labour market policy reforms adopted by the
federal government in the 1990s. Through its implementation, administrators
sought to mobilize the disciplinary capacity of performance measurement,
amply documented in governmentality scholarship, to reshape the
activities of frontline staff and managers in accordance with the
objective of rapid re-employment of EI claimants. This entailed the
interplay of new pressures aimed at the unemployed, who were increasingly
subject to rapid re-employment measures, and employment service staff
made accountable for results.
Yet the case of the RBAF exemplifies complexities too often
discounted in governmentality scholarship on calculative techniques.
Discipline and self-surveillance on the part of staff were by no means
the RBAF's only organizational effects. It became embroiled in
technical and political challenges and numerous unintended consequences.
Far from simply generating calculating subjectivities, the RBAF did not
sit easily with the values of many staff and was contested. Rather than
imposing a grid of calculability, the RBAF generated considerable
confusion over the validity and meaning of the performance data. Actors
throughout the employment service recognized that the integrity of the
data could not be assured given the difficulties involved in
standardized data entry and client follow-up. An even more fundamental
ambiguity arose over the question of attribution and causality.
Ultimately, rather than furnishing a bottom-line measure of
organizational results, the performance measurement regime gave rise to
new ambiguities around the measurability of service delivery.
The RBAF provides a vivid illustration of the difference induced in
governmentality analysis when more attention is given to the
difficulties and unintended effects of governance practices. This study
therefore confirms the need for further elaboration of governmentality
studies more closely attuned to the fragility and contestability of
practices of governance (O'Malley 1996; McKee 2009; Brady 2011;
Walters 2012). It also underscores the capacity of public administration
research to deepen and extend this line of governmentality-based
inquiry. Public administration scholarship on the technical and
political obstacles that confound performance measurement warrants close
consideration in studies of calculative governmentalities.
Finally, the foregoing analysis points to several broader
implications. It confirms previous research which shows that the
imposition of narrow performance metrics on a complex service delivery
organization is likely to lack legitimacy and generate contestation
(Dias and Maynard-Moody 2007). This study also highlights how
performance measurement should be considered a form of policy making,
rather than simply a neutral technical or administrative exercise
(Brodkin 2011). The matter of how officials define performance warrants
a much more prominent place in public deliberation over policy
implementation. This is especially pressing given the well documented
tendency of performance measurement to induce organizations to
"make the numbers" in ways that may run up against
legislation, the entitlements of service users, as well as norms of
equity or quality service delivery. Without careful assessment of the
full effects of performance measurement, this central pillar of
managerialism may exacerbate deficits of organizational transparency and
democratic accountability (Brodkin 2011).
References
Anderson, Lynell, and Tammy Findlay. 2010. "Does public
reporting measure up? Federalism, accountability and child-care policy
in Canada." Canadian Public Administration 53 (3): 417-438.
Bakvis, Herman. 1997. "Getting the giant to kneel: A new human
resources delivery network for Canada." In Alternative Service
Delivery: Transcending Boundaries, edited by Robin Ford, and David
Zussman. Toronto: IPAC/KPMG.
Bastoe, Per Oyvind. 2006. "Implementing results-based
management." In From Studies to Streams: Managing Evaluative
Systems, edited by Ray Rist, and Nicoletta Stame. New Brunswick, NJ:
Transaction Publishers.
Best, Jacqueline. 2014. Governing Failure: Provisional Expertise
and the Transformation of Global Development Finance. Cambridge:
Cambridge University Press.
Brady, Michelle. 2011. "Researching governmentalities through
ethnography: The case of Australian welfare reforms and programs for
single parents." Critical Policy Studies 5 (3): 265-283.
--. 2014. "Ethnographies of neoliberal governmentalities: From
the neoliberal apparatus to neoliberalism and governmental
assemblages." Foucault Studies 18: 11-33.
Breslau, Daniel. 1998. In Search of the Unequivocal: The Political
Economy of Measurement in U.S. Labor Market Policy. Westport: Praeger.
Brockling, Ulrich, Susanne Krasman, and Thomas Lemke. 2011.
"From Foucault's lectures at the College de France to studies
of governmentality: An introduction." In Governmentality: Current
Issues and Future Challenges, edited by Ulrich Brockling, Susanne
Krasmann, and Thomas Lemke. New York: Taylor and Francis.
Brodie, Janine. 2008. "We are all equal now: Contemporary
gender politics in Canada." Feminist Theory 9 (2): 145-164.
Brodkin, Evelyn. 2005. "Toward a contractual welfare state?
The case of work activation in the United States." In
Contractualism in Employment Services: A New Form of Welfare State
Governance, edited by Els Sol, and Maria Westerveld. The Hague: Kluwer
Law International.
--. 2011. "Policy work: Street-level organizations under new
managerialism." Journal of Public Administration Research and
Theory 21: i253-i277.
--. 2012. "Reflections on street-level bureaucracy: Past,
present, and future." Public Administration Review 72 (6): 940-949.
Canada Employment Insurance Commission (CEIC). 1997. 1997
Employment Insurance Monitoring and Assessment Report. Ottawa: Her
Majesty the Queen in Right of Canada.
--. 1999. 1999 Employment Insurance Monitoring and Assessment
Report. Ottawa: Her Majesty the Queen in Right of Canada.
Canada Employment and Immigration Union (CEIU), Ontario Region.
1994. "CEC jobs threatened by contracting out." Paranoia: The
Workers Magazine of CEIU 12 (3) May-June: 3.
--. 1997. "CEIU members resist devolution in Metro
Toronto." Paranoia: The Workers Magazine of CEIU 14 (3)
January-February: 3.
--. 1998. "Former CEIU member slams HRDC." Paranoia: The
Workers Magazine of CEIU 15 (3) June-July: 3.
Chan, Hon, and David Rosenbloom. 2010. "Four challenges to
accountability in contemporary public administration: Lessons from the
United States and China." Administration and Society 42: 11S-33S.
Dean, Mitchell. 1999. Governmentality: Power and Rule in Modern
Society. London: Sage.
Dias, Janice Johnson, and Steven Maynard-Moody. 2007.
"For-profit welfare: Contracts, conflicts, and the performance
paradox." Journal of Public Administrtion Research and Theory 17
(2): 189-211.
Diefenbach, Thomas. 2009. "New public management in public
sector organizations: The dark sides of managerialistic
'enlightenment'." Public Administration 87 (4): 892-909.
Doolin, Bill. 2004. "Power and resistance in the
implementation of a medical management information system."
Information Systems Journal 14: 343-362.
Employment and Immigration Canada (Working Group on the Impact of
Counseling). 1991.
Employment Counseling Into the 1990's. Gatineau: Employment
and Immigration Canada.
--. 1993. Employment Counseling Measurement and Accountability
Framework. Gatineau: Employment and Immigration Canada.
Fort Frances Times Online. 1998. "Workers' group fighting
government over EI." July 8 1998. Available at:
http://newsite.fftimes.com/node/55041.
Foucault, Michel. 1991. "Governmentality." In The Foucault
Effect: Studies in Governmentality, edited by Graham Burchell, Colin
Gordon, and Peter Miller. Chicago: University of Chicago Press.
French, Richard. 1984. How Ottawa Decides: Planning and Industrial
Policy-Making, 1968-1984.
Toronto: James Lorimer and Company with the Canadian Institute for
Economic Policy.
Good, David. 1999. "Management response." In Managing by
Results: Employment Benchmarking and Savings Impacts for Employment
Insurance, Final Report, by Ging Wong and Lesle Wesa. Strategic
Evaluation and Monitoring, Evaluation and Data Development, Human
Resources Development Canada. Available at:
http://dsp-psd.pwgsc.gc.ca/Collection/RH63-2-062-02-99E.pdf.
--. 2003. The Politics of Public Management: The Human Resources
Development Canada Audit of Grants and Contributions. Toronto:
University of Toronto Press.
Graefe, Peter, and Mario Levesque. 2013. "Accountability in
labour market policies for persons with disabilities." In
Overpromising and Underperforming? Understanding and Evaluating New
Intergovernmental Accountability Regimes, edited by Peter Graefe, Julie
M. Simmons, and Linda Ann White. Toronto: University of Toronto Press.
Higgins, Vaughan. 2004. "Government as a failing operation:
Regulating administrative conduct 'at a distance' in
Australia." Sociology 38 (3): 457-476.
Higgins, Vaughan, and Wendy Larner (eds). 2010a. Calculating the
Social: Standards and the Reconfiguration of Governing. Houndmills:
Palgrave Macmillan.
Higgins, Vaughan, and Wendy Larner. 2010b. "Standards and
standardization as a social science problem." In Calculating the
Social: Standards and the Reconfiguration of Governing, edited by
Vaughan Higgins, and Wendy Larner. Houndmills: Palgrave Macmillan.
Howard, Cosmo. 2006. "The new governance of Australian
welfare: Street-level contingencies." In Administering Welfare
Reform: International Transformations in Welfare Governance, edited by
Paul Henman, and Menno Fenger. Bristol: Policy Press.
Human Resources Development Canada (HRDC). 1996a. Results-Based
Accountability Framework. Ottawa: Her Majesty the Queen in Right of
Canada.
--. 1996b. "Corporate incremental El savings objective."
Memo from Mel Cappe, Deputy Minister, Human Resources Development
Canada, to Regional Director Generals. Gatineau: Human Resources
Development Canada.
--. 1997. "National workshop to take stock of employment
targets." Interoffice Memorandum (07/21/97). Gatineau: Human
Resources Development Canada.
--. 1998. Formative Evaluation of the Employment Benefits and
Support Measures, Final Report. Gatineau: Human Resources Development
Canada.
--. 2001. Canada-Newfoundland and Labrador LMDA/EBSM Evaluation of
Support Measures, Final Report. Gatineau: Human Resources Development
Canada.
Human Resources Development Canada (Strategic Evaluation and
Monitoring). 1998. "The Role of Evaluation in the Human Resources
Development Canada Results-Based Accountability Framework, Draft."
Memo. Gatineau: Human Resources Development Canada.
Ilcan, Suzan. 2009. "Privatizing responsibility: Public sector
reform under neoliberal government." Canadian Review of Sociology
46 (3): 207-234.
Isin, Engin. 2002. Being Political: Genealogies of Citizenship.
Minneapolis, MN: University of Minnesota Press.
Kerr, Lorraine, Ed Carson, and Jodi Goddard. 2002.
"Contractualism, employment services and mature age job-seekers:
The tyranny of tangible outcomes." The Drawing Board: An Australian
Review of Public Affairs 3 (2): 83-104.
Lambert, Caroline, and Eric Pezet. 2012. "Accounting and the making
of homo liberalis." Foucault Studies 13: 67-81.
Larner, Wendy, and Richard Le Heron. 2004. "Global
benchmarking: Participating 'at a distance' in the globalizing
economy." In Global Governmentality: Governing International
Spaces, edited by Wendy Larner, and William Walters. London: Routledge.
Larner, Wendy, and William Walters. 2000. "Privatization,
governance and identity: The United Kingdom and New Zealand
compared." Policy and Politics 28 (3): 361-377.
Li, Tania. 2007. The Will to Improve: Governmentality, Development
and the Practice of Politics. Durham: Duke University Press.
Mayne, John. 1999. "Addressing attribution through
contribution analysis: Using performance measures sensibly."
Available at: http://www.oag-bvg.gc.ca/internet/docs/99dp1_e.pdf.
McDavid, James C., and Irene Huse. 2006. "Will evaluation
prosper in the future?" The Canadian journal of Program Evaluation
21 (3): 47-72.
McDonald, Catherine. 2006. "Institutional transformation: The
impact of performance measurement on professional practice in social
work." Social Work and Society 4 (1): 25-37.
McElligott, Greg. 2001. Beyond Service: State Workers, Public
Policy, and the Prospects for Democratic Administration. Toronto:
University of Toronto Press.
McKee, Kim. 2009. "Post-Foucauldian governmentality: What does
it offer critical social policy analysis?" Critical Social Policy
29: 465-486.
Meunier McKay, Janet. 2005. "The 'call for
proposals' process: A view from inside HRSDC." A presentation
to the House of Commons Standing Committee on Human Resources, Skills
Development, Social Development and the Status of Persons with
Disabilities, 38th Parliament, 1st Session, April 12th 2005.
Miller, Peter. 1994. "Accounting and objectivity: The
invention of calculating selves and calculable spaces." In
Rethinking Objectivity, edited by Allan Megill. Durham: Duke University
Press.
Nunn, Alex, Tim Bickerstaffe, and Ben Mitchell. 2009.
"International review of performance management systems in public
employment services." Department for Work and Pensions Research
Report No 616. Available at:
http://campaigns.dwp.gov.uk/asd/asd5/rports20092010/rrep616.pdf.
OECD. 2005. "Public employment services: Managing
performance." In OECD Employment Outlook 2005. Available at:
http://www.oecd.org/dataoecd/2/40/36780883.pdf.
Office of the Auditor General of Canada. 1988. Report of the
Auditor General of Canada to the House of Commons Fiscal Year Ended 31
March 1988. Ottawa: Minister of Supply and Services Canada.
--. 1997. October Report of the Auditor General of Canada. Chapter
17--Human Resources Development Canada--A Critical Transition Toward
Results-Based Management. Ottawa: Supply and Services Canada.
O'Malley, Pat. 1996. "Indigenous governance."
Economy and Society 25 (3): 310-326.
Paton, Rob. 2003. Managing and Measuring Social Enterprises.
London: Sage Publications.
Perrin, Burt. 1998. "Effective use and misuse of performance
measurement." American journal of Evaluation 19 (3): 367-379.
Pollitt, Christopher, and Geert Bouckaert. 2000. Public Management
Reform: A Comparative Analysis. Oxford: Oxford University Press.
Prince, Russell. 2014. "Calculative cultural expertise?
Consultants and politics in the UK cultural sector." Sociology 48
(4): 747-762.
Radin, Beryl. 2006. Challenging the Performance Movement:
Accountability, Complexity, and Democratic Values. Washington:
Georgetown University Press.
Rist, Ray, and Nicoletta Stame (eds). 2006. From Studies to
Streams: Managing Evaluative Systems. New Brunswick, NJ: Transaction
Publishers.
Rose, Nikolas. 1996. "Governing advanced liberal
democracies." In Foucault and Political Reason, edited by Andrew
Barry, Thomas Osborne, and Nikolas Rose. Chicago: University of Chicago
Press, 37-65.
--. 1999. Powers of Freedom: Reframing Political Thought.
Cambridge: Cambridge University Press.
Rose, Nikolas, and Peter Miller. 1992. "Political power beyond
the state: Problematics of government." The British Journal of
Sociology 43 (2): 173-205.
Shore, Chris, and Susan Wright. 1999. "Audit culture and
anthropology: Neo-liberalism in British higher education." Journal
of the Royal Anthropological Institute 5 (4): 557-575.
Schram, Sanford, Joe Soss, Linda Houser, and Richard Fording. 2010.
"The third level of U.S. welfare reform: Govemmentality under
neoliberal paternalism." Citizenship Studies 14 (6): 739-754.
Soss, Joe, Richard Fording, and Sanford Schram. 2011. "The
organization of discipline: From performance management to perversity
and punishment." Journal of Public Administration Research and
Theory 21 (supplement 2): i203-i232.
Suspitsyna, Tatiana. 2010. "Accountability in American
education as a rhetoric and a technology of governmentality."
Journal of Education Policy 25 (5): 567-586.
van Berkel, Rik. 2009. "The provision of income protection and
activation services for the unemployed in 'active' welfare
states. An international comparison." Journal of Social Policy 39
(1): 17-34.
Walters, William. 2000. Unemployment and Government: Genealogies of
the Social. Cambridge: Cambridge University Press.
--. 2012. Governmentality: Critical Encounters. New York: Routledge.
Weishaupt, J. Timo. 2010. "A silent revolution? New management
ideas and the reinvention of European public employment services."
Socio-Economic Review 8: 461-486.
Weiss, Carol H. 1999. "The interface between evaluation and
public policy." Evaluation 5 (4): 468-486.
Wood, Donna, and Thomas R. Klassen. 2009. "Bilateral
federalism and workforce development policy in Canada." Canadian
Public Administration 52 (2): 249-270.
Working Group on Medium-Term Savings (WGMTS). 1997a. "Minutes
of the meeting of the Working Group on Medium-Term Savings, November 17,
1997." Gatineau: Human Resources and Skills Development Canada.
--. 1997b. "Minutes of the meeting of the Working Group on
Medium-Term Savings, December 12, 1997." Gatineau: Human Resources
and Skills Development Canada.
--. 1999. Report of the Working Group on Medium-Term Savings.
Gatineau: Human Resources and Skills Development Canada.
Notes
(1) For an in-depth discussion of this new direction in
governmentality studies, see the October 2014 special issue of Foucault
Studies, titled "Ethnographies of Neoliberal
Governmentalities."
(2) The Access to Information Requests made to HRDC were broad in
scope. They sought all records related to the Results-Based
Accountability Framework as well as the Service Outcome Measurement
System, an outcome measurement initiative administrators began work on
in 1994.
(3) The interview with the HRDC administrator was conducted on
September 10, 2010. The interview with the Employment Counselor took
place on January 4, 2008.
(4) According to one report, regional headquarters tended to allot
performance targets to individual offices based on the proportion of
resources they used. For instance, an office that absorbed ten percent
of the regional budget would be responsible for achieving ten percent of
the region's targets (HRDC 1998: 43).
(5) Departmental documents convey a lack of uniformity in methods
for conducting such surveys. Some indicate that surveys would be
conducted by national or regional headquarters. Others suggest that
individual offices would be provided with resources to conduct the
surveys either by using local third-parties, office staff or regional
tele-centre facilities.
(6) In using the difference-in-differences method, the working
group built on previous efforts undertaken by staff in HRDC's
evaluation branch.
John Grundy is a postdoctoral fellow, School of Occupational
Therapy, University of Western Ontario,
[email protected]. This research
was supported by a Social Sciences and Humanities Research Council of
Canada Postdoctoral Fellowship. The author thanks the journal's
anonymous reviewers for helpful comments.