
Performance Management in Hong Kong 303

efficiency savings obtained from implementing the Enhanced Productivity Program introduced throughout government in late 1998 (Civil Service Bureau, 2000b: 4). To ensure that the pilot scheme was conducted fairly and equitably, external consultants were employed to offer advice and assistance to the Civil Service Bureau and participating departments.

Whether team-based performance rewards would be extended service-wide was supposed to depend on the government’s evaluation of the pilot experiments. To date, however, the Civil Service Bureau has officially reached no conclusion either way. In a list of major initiatives to reinforce a performance-based culture released in 2005 (Civil Service Bureau, 2005), the bureau identified the following measures, none of which amounts to a vigorous mechanism for enforcing a performance-related human resource management regime:

The annual Customer Service Award Scheme launched in 1999

The Management-initiated Retirement Scheme for directorate officers launched in 2000 (up to October 2005, 13 directorate officers had retired under the scheme)

Streamlining the procedures for handling substandard performers in 2003 and 2005

The Secretary for the Civil Service’s Commendation Award Scheme launched in 2004

15.4 Assessment of Outcomes of Performance Management Reforms

Performance management is a principle pursued by most governments and managers in the current era of NPM. Citizens, as taxpayers, also demand that government be more accountable for its performance, in terms of both policy outcomes and service delivery results. Introducing performance management measures is thus both an attempt at self-improvement and a response by government to legitimate public expectations. Performance pledges and indicators, such as response times to emergency calls by the police and other disciplined forces, have been found to be very useful in crime control and security risk management (Brewer and Huque, 2004). Questions remain, though, as to how effective performance management measures actually are in practice. Here we focus on four aspects that are critical to sustaining a performance management regime:

Are departments properly measuring their performance?

Are budget decisions based on performance results?

Are pay rewards reflecting performance?

Are customers involved in performance target setting and performance monitoring?

Answers to these questions would reveal whether a proper causal link exists between motives and rewards, and between means and results, and whether there is an inbuilt modus operandi (and culture) that can support and sustain a cycle of continuous improvement.

15.4.1 Are Departments Properly Measuring Their Performance?

In October 1994, the Audit Commission conducted the first review of the adequacy and quality of financial and performance information provided by policy bureaus and government departments to the Legco. In 2005, it again examined the appropriateness and adequacy of the performance

© 2011 by Taylor and Francis Group, LLC

304 Public Administration in Southeast Asia

information reported in the CORs, as well as the reliability of such information. The results were presented in Chapter 6 of its Report No. 45 in November 2005 (Director of Audit, 2005).

The Efficiency Unit’s guide on performance measurement directs that targets should be set wherever possible, as they improve the clarity of expectations, motivate performance, and improve accountability. Similarly, the FSTB guidelines require that targets reported in the CORs should indicate the extent to which a department’s operational objectives are being achieved, and that controlling officers should focus on reporting the effectiveness of their departments’ operations (Director of Audit, 2005: para. 2.8). However, the Audit Commission identified several shortfalls in performance measurement:

The lack of quantified targets: Of the 3262 performance measures reported by all controlling officers in 2004–2005, only some 30% were “quantified” targets while the rest were “nonquantified” indicators (Ibid., para. 2.9). On many occasions, the performance information reported in CORs had not provided stakeholders with a complete and meaningful view of the bureau’s/department’s performance. Some performance measures reported by departments were found not to be the key and most meaningful ones that could best indicate the quality, efficiency, and effectiveness of their work. Of the performance measures, 59% related to workload, 25% to service quality, 11% to effectiveness, and only 5% to efficiency (of which 3% related to unit cost measures and 2% to productivity measures) (Ibid., Figure 2, p. 16).

Most performance measures did not deal with program outcomes: For example, for the Education and Manpower Bureau’s primary education and secondary education programs, the targets only set out those initiatives launched to improve education, like language support schemes, but “hardly reflect how students’ achievements have been improved” (Ibid., para. 2.17 (b) (ii)).

Performance measures reported often did not address inter-departmental and horizontal issues: An example was food business license applications, of which the overall time required for processing across several departments concerned was not stipulated. In some cases, there was insufficient information to facilitate the proper interpretation of the performance measures, and an explanation for the significant deviation from targets was found lacking.

The reliability of performance information in CORs was also called into question. The following anomalies were found (Ibid., Part 3):

Incorrect/misleading performance results being reported

Clear definition of performance measures not always provided

Proper validation procedures not always established

Proper performance records not always kept

In terms of incorrect and misleading information, the Audit Commission cited the case of the Student Financial Assistance Agency’s (SFAA) report on the processing of student loan applications (Ibid., para. 3.3(b)). SFAA included a target processing time of “two months” for all applications, and reported “two months” as the actual processing time for each type of application under the Local Student Loan Scheme, implying a 100% compliance rate. Its COR also stated that “the Agency was generally able to process all applications with complete information within the time frame as pledged.” The Audit Commission, however, found that “applications with complete information” referred only to applications for which complete information, including supporting documents, was furnished upon first submission. The target did not cover the processing of applications for which additional information was submitted subsequently. As applications without complete information at first submission accounted for 65% and 77%, respectively, of the 2002–2003 and 2003–2004 applications, the “two months” target processing time was applicable to less than 35% of all applications. It was also found that only 90.8% and 98.6% of the applications with complete information for 2002–2003 and 2003–2004, respectively, met the target, not the 100% compliance reported by the SFAA.
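The coverage arithmetic behind the Audit Commission’s finding is easy to reproduce. The short Python sketch below uses only the percentages quoted above; the variable names are mine, for illustration:

```python
# Shares of applications lacking complete information at first submission,
# as reported by the Audit Commission for the Local Student Loan Scheme.
incomplete_at_first_submission = {"2002-03": 0.65, "2003-04": 0.77}

for year, share in incomplete_at_first_submission.items():
    # Share of applications the "two months" pledge actually covered.
    covered = 1 - share
    print(f"{year}: pledge covered {covered:.0%} of applications")
```

This confirms that the pledged target could apply to at most 35% and 23% of applications in the two years, consistent with the “less than 35%” figure cited above.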

The merits of a system of management by results are not in doubt, but two problems stand in the way: the technical feasibility of devising proper, reliable, and preferably “quantifiable” performance measures that cover the full range of departmental services and activities; and the mindset of service managers and providers—whether they genuinely believe in performance management, simply pay lip service to it, or even produce incomplete and misleading reports, as illustrated above. The Audit Commission reviews suggest that most government departments tend to pay lip service to performance management and pursue it only to satisfy the requirements.

15.4.2 Are Budget Decisions Based on Performance Results?

To understand what has changed and what has not changed in the new mode of budgetary behaviors under one-line budget and budgeting for results, the author conducted a survey of departmental/bureau financial managers in 2004.2 Three bureaus and twenty-two departments/agencies responded and their answers were illuminating. Despite considerable devolution of financial management, which they all welcomed, spending departments still felt frustrated with:

Insufficient resources being allocated and the need to “do more with less” in an environment where the public has escalating expectations of government services

Inter-bureau coordination problems3

The cash-based accounting system that constrained multi-year expenditure planning

While most agreed with the notion of budgeting for results, there were quite a few skeptics as illustrated by the following expressions by interviewees:

We have worked out the broad performance targets and indicators for the different policy programme areas with the policy bureaus concerned. But in the process of resource allocation to departments, both the Treasury Branch [of FSTB] and the policy bureaus would take into account criteria or factors from a higher perspective. At present a top-down approach is adopted in budget/resource allocation. …Departments are required to achieve efficiency savings/additional savings by various economy measures to achieve the cost-cutting targets. (Author’s emphasis)

In theory [budgeting for results] is correct, but in practice there are still so many constraints. “Results” tend to be derived from the easiest measurable indicator of what we already do which reflects us in a good light.

2A comparative research project on budget reform in Hong Kong and Singapore, fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 1063/02H).

3Under the system of resource allocation according to policy program areas, for a department that delivers services under several program areas controlled by more than one policy bureau, it has to depend on multi-bureau funding. Insufficient inter-bureau coordination may give rise to conflicting program targets and gaps in overall service delivery.



The application of the concept of “Budgeting for Results” is not very obvious in the… reforms, although the Save & Invest arrangement4 can be said to partly achieve this concept. In this context, “results” are measured by the extent of the Controlling Officer’s success in effecting genuine savings in recurrent expenditure. The “results” thus identified will be used to help determine the level of budget allocation by including a percentage of these genuine savings in the Operating Expenditure Envelope of each Director of Bureau in the Estimates of the next nearest financial year.

Although the COR accompanying the annual expenditure estimates now specifies the department’s program areas, the resources allocated to each program area, performance targets, PIs, and the actual performance in achieving those stated targets, such performance information is more to satisfy the Legco and its Finance Committee than to really guide the annual resource allocation decisions made by FSTB and the financial secretary. Budget caps imposed by the financial secretary are determined across the board and not tied to the specific measurement of performance. In any case, the fragmented nature of PIs would not be conducive to the determination of precise amounts of budget allocations. Any major deficiencies in meeting performance targets would probably be taken into account when policy secretaries review the budget request submitted by department heads under their portfolio, and might have some bearing on the level of budgetary allocation, but such impact is at best a holistic one. There are no clear links between the composition of budgetary allocations and the spending departments’ various PIs.

Expenditure envelopes were introduced in 2003 mainly to cap departmental budgets at a time of fiscal stress, when balancing the budget became the government’s priority objective. As bureau and departmental managers interviewed confirmed, the financial secretary had in practice referred to departments’ historical spending patterns when arriving at a provisional expenditure envelope, applying a percentage deduction to take account of the ongoing need for budget cutbacks and of his own targets for eliminating fiscal deficits. The operating expenditure envelopes ultimately given to bureaus by the financial secretary were all based on historical spending levels adjusted by targeted cutback rates, plus additional provisions for any major new services approved by the chief executive to be launched where sufficient savings could not be obtained by way of redeployment of existing resources. One respondent to our 2004 survey added:

While the Treasury Branch [of FSTB] plays a role in assisting the FS [Financial Secretary] in determining the envelope provision and efficiency savings, the negotiation between the FS and Directors of Bureaus [i.e. policy secretaries] seems more important in deciding the envelope provisions and efficiency savings targets at the bureau level.

The implications seem clear. Budgetary decisions are not made on the basis of performance measurement per se.
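The allocation logic described above—a historical baseline, less a targeted cutback, plus provisions for approved new services—can be sketched in a few lines. The figures below are purely hypothetical; only the shape of the calculation comes from the text:

```python
# Hypothetical numbers for illustration; the formula mirrors the narrative:
# provisional envelope = historical spending, less a targeted cutback,
# plus provision for approved new services not fundable by redeployment.
historical_spending = 1_000.0   # HK$ million, baseline from past spending patterns
cutback_rate = 0.05             # across-the-board deduction set by the financial secretary
new_service_provision = 30.0    # chief-executive-approved new services

provisional_envelope = historical_spending * (1 - cutback_rate) + new_service_provision
print(provisional_envelope)  # 980.0
```

Nothing in this calculation references a performance indicator, which is precisely the point: the envelope is driven by history and across-the-board targets, not by measured results.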

4The Save & Invest Account was introduced in 1999–2000, together with the launch of the Enhanced Productivity Program (EPP), whereby departments were required to achieve targeted efficiency savings and, in exchange, were given the flexibility to retain a portion of the savings in a centrally held account (Save & Invest), from which withdrawals could be made subsequently to fund re-engineering projects to improve productivity and efficiency.



15.4.3 Are Pay Rewards Reflecting Performance?

The straight answer to this question is no, based on the lukewarm attitude toward performance-related pay from both management and the staff side. Even the team-based performance rewards pilot scheme was not pursued after its first round in 2001. The lack of enthusiasm for change is illustrated by the fact that only six departments took part in the pilot scheme, while others, notably the Social Welfare Department, Education Department, and even the business-oriented Post Office, decided not to join, reflecting the worry among some departmental senior managers about the potential divisiveness of such bonus payments and the practical difficulties of performance measurement and evaluation.

The skepticism and hostility toward linking employee remuneration to performance measurement were strong throughout the civil service, from the frontline to top management. Feedback from departmental managers and staff representatives alike during the 2002 review of the Task Force on Civil Service Pay Policy and System (Task Force, 2002) also reflected continuing reservations about the merits and workability of performance-related pay arrangements.5 Civil service pay issues are not just managerial ones, but entail wider political and even legal and constitutional repercussions in Hong Kong’s case (Cheung, 2005a). Forcing through any performance-related pay scheme by government fiat alone may risk open confrontation and conflict with the staff side and upset the stability of the civil service. So the existing pay system persists, whereby performance and rewards remain disconnected and salary remuneration is not determined on the basis of the individual civil servant’s performance. At the same time, departmental managers cannot use pay rewards as a motivator to induce staff to work to prescribed performance targets and levels.

Putting aside the political and institutional repercussions, the validity of any performance pay system in practical terms remains problematic because of methodological complexities, and managerial attitude, prejudice, and cynicism (Cheung, 1999). A performance pay system only works if the “performance” of individual civil servants can be reliably measured by objective PIs that are acceptable to both staff and management. Otherwise, performance evaluation will tend to focus on only those aspects of work that are technically easier to recognize and measure, overlooking the importance of the non-quantifiable aspects (the so-called “intangibles”), or PIs might turn out to be just “political products” of bureaucratic bargaining between staff and managers, making a mockery of the system.

There is also the question of transaction costs in implementation, for example, in terms of bureaucratic bargaining over the formulation of PIs and the valuation of performance output, the need to institute adequate scrutiny and monitoring mechanisms over supervisors to ensure fair and comprehensive performance evaluation, and managerial time and efforts to implement the new system. Unlike the private sector where performance can ultimately be denominated in gross quantifiable terms and monetary currency, government work is often team-based and quality- or process-oriented, not easily susceptible to simple measurement and quantification. What is measurable may not be what is most significant, but what gets measured tends to be what is performed. Performance pay is not necessarily a more effective performance motivator, either. For many public sector managers, job independence, sense of accomplishment, challenging work, and respect and fair treatment are more important factors than performance pay (OECD, 1997).

5This author was a member of the Task Force, established by the Standing Commission on Civil Service Salaries and Conditions of Service, and heard first hand such skeptical views.



The successful implementation of a performance pay system, apart from the hardware elements such as reliable and trust-building performance measurement methodologies and tools, also depends critically on the attitude and culture of managers. If managers are risk averse, as many middle-level managers are, they may tend to play a “nice boss” to subordinates by adopting an egalitarian approach to attribute equal merit to their staff and to reward them more or less equally. To some managers, performance pay would mean greater control over staff performance; but to others, it may mean more work on performance measurement documentation, more office politics and less staff trust, and thus an unwelcome exercise (Cheung, 1999).

15.4.4 Are Customers Involved in Target-Setting and Performance Monitoring?

In the aforementioned Efficiency Unit survey of 1998, customers seemed generally satisfied with departments’ performance against their pledged standards, and departments generally found the pledges a useful means of managing their operations and staff performance. Yet fewer than two-thirds of the departments surveyed included all their services in the pledges, and fewer than two-thirds of the customers surveyed thought the pledges had contributed significantly to service improvements. In other words, the effectiveness of performance pledges as “contracts with customers” and as tools for securing service improvements remained obscure.

This author conducted a content analysis of 80 performance pledges published between 2001 and 2003 to appraise their effectiveness with respect to five dimensions—access, choice, information, redress, and participation (representation)—following Potter’s (1988) five-principle framework for public sector consumerism (Cheung, 2005b). Most performance pledges were found to follow some kind of template, with standard listings like:

Vision, Mission, and Values—usually separately listed

Description of the department and its work and scope of services

Performance standards and targets—comparing pledged targets, achievements in the past year, and targets for the current or a future year

Customer feedback survey results (if any)

Effective monitoring

The public’s right of appeal

Contact details (including telephone numbers and sometimes fax numbers and email addresses)

Of the 80 pledges examined, only seven met all five criteria of access, choice, information, redress, and participation in some ways. Most pledges looked more like publicity material than a carefully crafted tool to encourage customer response and involvement. The rights of customers and information on how they could be involved in the formulation of service provision decisions and the monitoring of the range and quality of services were largely absent, leaving little trace of the “culture of service” that was supposed to underpin the performance pledges. The pledges had thus served managerial interests rather than customer purposes. There was no evidence of customer input to the design of the pledges themselves.

Our findings corroborated the Efficiency Unit’s 1998 survey results, only more sharply. They pointed to the gradual routinization of the performance pledges program, rendering it just another management exercise that takes the customer role as a peripheral, if not superficial, one. What is contained in the performance pledge is grossly insufficient for discharging performance

