Public Spending, Outcomes and Accountability: Citizen Report Card as a Catalyst for Public Action
The government’s failure to effectively monitor the outcomes of public expenditure is a major reason for corruption and the low level of accountability in the country. There are genuine problems of observation, measurement and incentives behind this failure. In respect of public services, it is possible to substantially compensate for this failure by seeking “user feedback” on services. This paper presents the findings of a civil society initiative in Bangalore that produced “citizen report cards” on the city’s services, based on user feedback and stimulated public agencies over a period of a decade to improve service outcomes. While citizen monitoring may be the only option when a government is indifferent to outcomes, there is no reason why the latter should not seek user feedback and benchmark its services when its internal monitoring is weak or incomplete. The citizen report cards of Bangalore offer a replicable approach that has now been tried out in other cities both in India and abroad, and even in rural areas. Whether the advocacy that follows the report card will improve service outcomes in all contexts is difficult to say as the responses from public agencies may vary. But even when a government and its agencies are indifferent, citizen report cards can be used to nudge them to pay more attention to service outcomes and public accountability.
SAMUEL PAUL
The reform of public services is by no means an easy task. The monopolistic nature of service delivery and other related imperfections invariably lead to inefficiency and non-responsiveness in public agencies. Furthermore, the tasks of linking each and every service to specific outcomes and managing large bureaucracies are indeed complex. Those who operate on the supply-side of services are not always able to observe and monitor the actual outcomes in the field. Often, outcomes are difficult to quantify and the incentives to track outcomes are weak in any case. It is in this context that citizens, as users of services, can act as a valuable source of information to fill this gap. User feedback on services can shed light on the outcomes generated and on how accountable service providers are. Scholars have analysed user feedback and confirmed the validity of this hypothesis [Deichmann and Lal 2003].
There is no unique way to define the outcomes of government’s policies and services. Often, policy-makers report outcomes in terms of physical achievements, outputs and growth rates. Thus, the outcome of a road transport service may be represented by the passenger kilometres run or the tonnage of goods carried. Sometimes, the rate of return or the surplus generated by the service may be shown as the outcome. While each of these measures captures aspects of outcomes, a summative measure of outcomes needs to reflect the quality and other attributes of the service that gives satisfaction to its users.
The term “user feedback” may sound novel to many in government and policy circles who may question its relevance to the functions they perform. In reality, however, government seeks and responds to user feedback without ever using this term. Whenever new policies or laws are introduced or changes are made, governments receive representations from those affected and often respond to them. The organised interests such as trade, industry and labour unions are well known for providing such feedback and seeking remedies for the problems they face. In respect of public services, there are no organised groups to “voice” the problems of those affected.
A citizen report card (CRC) is an innovative way to remedy this gap. A CRC rates a public agency or department based on the user feedback on its services or functions. Needless to say, such ratings also reflect the expectations of the people from service providers.1 The CRCs have been experimented with in India as well as several other countries by the Public Affairs Centre in Bangalore for over a decade now, and the evidence shows that civil society groups and governments can use this approach to improve the outcomes and accountability of public service providers [Paul 1995; Balakrishnan and Iyer 1998; Paul and Shekhar 2000]. In this paper, we propose to narrate the experience of using CRCs in Bangalore where the feasibility of this approach was first tested in relation to the major public services in the city. The Bangalore case is of special interest as the evidence pertains to the lessons from three CRCs spread over a period of 10 years. It is the replication of such micro level initiatives across the country that could lead to increased public awareness and demand for change, and eventually make governments more accountable.
I CRCs in Bangalore
Bangalore was a city with a population of over four million in 1993. It was a growing industrial city and was turning into India’s hub of information technology in the early 1990s. A quarter of its population was poor, most of them living in slums spread throughout the city. As in other Indian cities, Bangalore’s residents too depended on several public agencies established by the provincial (state) government for their essential services. Thus the city’s municipal corporation provided services such as roads, street lights and garbage removal, while electricity was supplied by another large agency. Similarly, water, transport, telecom, healthcare and urban land and housing were the responsibility of other large public service providers. A common feature of all these services was that they were monopolistic or dominant supply sources. This mattered even more to the poor as they could not afford some of the high cost options that richer people could tap, in the event that public service providers failed.
It was against this background that a small citizens’ group in Bangalore launched a survey of citizens to gather feedback on the public services in the city. Based on the survey findings, a CRC was prepared that rated the city’s public services. The actual survey work was carried out by a market research firm that supported this initiative. The survey costs were met through local donations. The survey was launched after the group assessed the service related problems being faced by the people through focus group discussions. Structured questionnaires were designed in light of this knowledge and pretested to ensure their relevance and suitability for field level interviews. The survey covered nearly 1,200 households selected from among the middle class and low income households. Separate questionnaires were used for interviewing these two segments. But the objectives of the survey in both cases were to find out (a) how satisfactory were the public services from the user’s perspective; (b) what aspects of the services were satisfactory and what were not; and (c) what were the direct and indirect costs incurred by the users for these services. Satisfaction was measured on a rating scale (1 to 7) and aggregated to yield averages for its different dimensions.
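To make the aggregation step concrete, the sketch below (in Python, purely illustrative and not PAC's actual analysis code) shows how individual 1-7 ratings might be rolled up into an average score and a per cent satisfied figure for each agency and service dimension; the agency names, cut-off and data are invented for the example.

```python
# A minimal sketch of aggregating 1-7 satisfaction ratings from a household
# survey by agency and dimension. Data, names and the cut-off are illustrative.
from collections import defaultdict

# Hypothetical responses: (agency, dimension, rating on a 1-7 scale).
responses = [
    ("water_board", "overall", 5),
    ("water_board", "staff_behaviour", 3),
    ("electricity", "overall", 6),
    ("electricity", "overall", 2),
    ("municipal_corp", "overall", 4),
]

ratings_by_key = defaultdict(list)
for agency, dimension, rating in responses:
    ratings_by_key[(agency, dimension)].append(rating)

for (agency, dimension), ratings in sorted(ratings_by_key.items()):
    average = sum(ratings) / len(ratings)
    # Treat ratings of 5 or above as "satisfied" -- an assumed cut-off,
    # not necessarily the one used in the Bangalore surveys.
    pct_satisfied = 100 * sum(r >= 5 for r in ratings) / len(ratings)
    print(f"{agency:15s} {dimension:16s} avg={average:.1f} satisfied={pct_satisfied:.0f}%")
```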
The first report card on Bangalore (1994) revealed several interesting patterns about the city’s public services. It showed that the satisfaction levels of the middle income respondents did not exceed 25 per cent for any of the service providers covered by the survey. The dissatisfaction levels, on the other hand, were much higher, and in the case of the Bangalore Development Authority reached as high as 65 per cent. Public satisfaction with staff behaviour in these agencies was a mere 25 per cent, and over a quarter of the people had to make three visits or more to the agencies to solve their problems. The problem resolution rate was 57 per cent when all agencies were taken together. On an average, 14 per cent of the respondents had paid bribes to the agency staff and 50 per cent of them claimed that bribes were demanded by the staff. Many households incurred additional costs because of the investments they had to make to compensate for the unreliability of the services (e g, generators to cope with power outages).
The feedback from the sample of low income households was also similar. Over 70 per cent of them had to make three or more visits to the agencies to solve their problems. Nearly a third of them had to pay bribes. Their problem resolution rate was much lower than that of the middle class households. Yet, their satisfaction with the service providers was not as low as in the middle income sample, perhaps because of their low expectations from services. The report card from both the middle income and low income households presented a picture of highly unsatisfactory and non-responsive service providers in the city [Paul 1995].
The report card findings were widely publicised through the press in Bangalore. The government and the service providers were also kept informed about the full report card. Citizen groups were invited to debate the findings and propose ways and means to deal with the problems being highlighted by the report card. The newspapers played a major role in creating public awareness about the findings of the report card.
Beyond the publication of the report card, the citizen group that started the initiative did not take any other follow up action. But enquiries began to reach the leader of the group about how this work, along with advocacy for reform, could be scaled up. The growing public interest in this endeavour persuaded the leader of the group to establish a new non-profit body called “Public Affairs Centre” (PAC) in Bangalore in 1994 to expand and strengthen this work in the country. One of its early activities was to respond to the requests for advice from three of the city’s service providers covered by the CRC. One of them was the worst rated agency, which sought PAC’s help in further probing into its problems and finding remedies.
PAC prepared a second report card on Bangalore’s public services in 1999. It provided new evidence on the state of public services in the city after a lapse of five years. The survey methodology used was essentially the same as in 1993, but the sample size was increased to 2,000 households. The results showed a partial improvement in public satisfaction with most of the agencies, but the satisfaction level was still below 50 per cent even for the better performers. A disturbing finding was that corruption levels in several agencies had increased. The low income people continued to visit agencies more often than their middle income counterparts to solve their problems. The report cards indicated a clear link between petty corruption and the procedures and mindsets of many agencies. But the two report cards demonstrated how such phenomena could be tracked and highlighted through credible methods and used to bring the agencies under a “public scanner” [Paul and Shekhar 2000].

The follow up actions in 1999 differed significantly from those in 1994. Well before the public dissemination of results, PAC presented mini report cards to the major service providers in the city on a one-on-one basis. This was followed by a seminar for the management teams from selected agencies to exchange their experiences with reforms since the first report card. The objective of this exercise was to learn from each other. The deliberations showed that agencies other than those who sought its help were also engaged in improving their services in different ways. The final event was a public meeting where the report card findings were presented to both leaders and staff of all the service providers with citizen groups and media also present. The leaders of the agencies addressed the gathering and explained to the public their plans to deal with the problems highlighted in the report card. This event and the report card findings were widely covered in the news media.
Though the CRC of 1999 showed only partial improvements in the city’s services, it was clear that several of the service providers had initiated action to improve service quality and respond to the specific issues raised in the first report card. One example is the improvement in billing procedures and dissemination of information in some of the agencies. Another is the increasing use of joint forums with users to improve the responsiveness of staff.

But within a few months of the second report card, the new chief minister of the state of which Bangalore is the capital announced the creation of a Bangalore Agenda Task Force (BATF) to improve the services and infrastructure of the city with greater public participation. He set up BATF as a public-private partnership with several non-official and eminent citizens as members along with the heads of all service providers. In contrast to the more limited agency responses, this move by the chief minister raised the level to systemic responses across agencies. It created a forum where all the stakeholders could be brought together both to solve the city’s problems and to tap ideas and funds from the private sector. It was the first time that a chief minister had launched an initiative to improve services in response to citizen feedback. BATF began its work in earnest in 2000 and catalysed several reforms in a number of agencies.

[Chart 1: Overall Satisfaction with Public Services, 2003, General Households. Bars show the proportions of users completely satisfied and partly satisfied with each agency: BMP (Bangalore Municipal Corporation), BDA (Bangalore Development Authority), BESCOM (Bangalore Electricity Company), BWSSB (Bangalore Water Supply and Sewerage Board), BMTC (Bangalore Metropolitan Transport Corporation), BSNL (Bharat Sanchar Nigam), Bangalore Police, RTO (Road Transport Authority) and government hospitals.]

[Chart 2: Satisfaction with Public Services across CRCs, General Households — per cent satisfied in 1994, 1999 and 2003.]
II Outcomes and Impact
We summarise below the evidence on outcomes that has thus been gathered through user feedback on the services at different points in time in Bangalore. We begin with the findings of the third CRC on Bangalore (2003). A comparison of these findings with the earlier report cards will show whether there has been any improvement in the city’s public services. Due to space constraints, only summary findings will be presented.
A person’s satisfaction with an agency’s services reflects his/ her overall assessment of that agency. Full satisfaction with an agency implies a higher rating of its services than partial satisfaction. Satisfaction can be measured for different dimensions of the quality of a service or agency. In addition to an overall satisfaction score, we present below three measures of agency responsiveness, namely, problem incidence, staff behaviour and bribes paid or demanded. These measures reflect different aspects of quality and responsiveness as experienced by the users of services. An increase in the proportion of users who are satisfied with a service/agency is an indirect indicator of an improvement of that service/agency.
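As a hedged illustration of how the three responsiveness measures named above (problem incidence, staff behaviour and bribes paid) can be computed from household-level feedback, the Python sketch below works over invented records; the field names and data are assumptions made for the example, not the survey instrument actually used.

```python
# Illustrative computation of problem incidence, satisfaction with staff
# behaviour and bribe incidence per agency from hypothetical household records.
households = [
    {"agency": "electricity", "had_problem": True,  "staff_satisfied": False, "paid_bribe": False},
    {"agency": "electricity", "had_problem": False, "staff_satisfied": True,  "paid_bribe": False},
    {"agency": "water_board", "had_problem": True,  "staff_satisfied": False, "paid_bribe": True},
    {"agency": "water_board", "had_problem": True,  "staff_satisfied": True,  "paid_bribe": False},
]

def per_cent(records, field):
    """Share of respondents (in per cent) for whom the given indicator is true."""
    return 100 * sum(r[field] for r in records) / len(records)

for agency in sorted({r["agency"] for r in households}):
    users = [r for r in households if r["agency"] == agency]
    print(agency,
          f"problem incidence={per_cent(users, 'had_problem'):.0f}%",
          f"staff satisfaction={per_cent(users, 'staff_satisfied'):.0f}%",
          f"paid bribe={per_cent(users, 'paid_bribe'):.0f}%")
```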
The findings presented below are divided into two parts: the first pertains to general households (mainly middle income), and the second to low income or slum households.
Feedback from General Households
Overall satisfaction: The satisfaction of the middle income citizens of Bangalore with the different services ranged from 70 to 96 per cent in the third CRC (2003). Chart 1 shows the user satisfaction levels (measured by the proportions of users who are fully or partially satisfied) for the nine agencies.
User satisfaction among general households ranged between 97 per cent for BMTC and 73 per cent for BWSSB, BMP and government hospitals. Agencies did vary, however, in respect of the proportions of people who gave a rating of “completely satisfied”. Those with large proportions of “partially satisfied” users clearly have much more work to do.

[Chart 3: Problem Incidence, General Households — per cent of users reporting problems, 1999 and 2003.]

[Chart 4: Satisfaction with Staff Behaviour across CRCs, General Households.]

[Chart 5: Corruption Incidence across CRCs, General Households — per cent that paid a bribe, 1994, 1999 and 2003.]

Improvements in services between 1994 and 2003: The comparison of user satisfaction presented below calls for a word of explanation. Comparability of data over time is a problem as changes invariably occur in the survey setting and the methods used. In the present case, the survey methodology used in the second and third report cards was fine tuned in the light of experience, especially with regard to the rating scale. To ensure comparability of the data between periods, Chart 2 uses the evidence from all respondents who had interacted with one agency or another. The data for all agencies are aggregated for each report card. For 2003, the column represents the proportion of users that is fully satisfied with a service. This is compared with the proportion of users in the upper end of the scale, namely, “very satisfied” and “satisfied”, in 1994 and 1999. A comparison of these categories with “completely satisfied” is defensible, though the former slightly exaggerates the level of satisfaction in 1994 and 1999.2 In other words, the data given against 2003 are a more restrictive measure of user satisfaction. Despite this limitation, the chart shows that average user satisfaction increased by over 40 per cent between 1999 and 2003. Focus on the upper end of the scale is appropriate also because it sets a goal for the service provider to achieve, namely, giving full or high satisfaction to the user.
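A rough sketch of this comparability adjustment follows (Python, with invented counts): the 1994 and 1999 waves pool the top two categories ("very satisfied" and "satisfied"), while the 2003 wave counts only those "completely satisfied", which is the more restrictive measure described above.

```python
# Illustrative comparison of satisfaction across survey waves. The category
# counts are invented; only the pooling rule mirrors the text above.
waves = {
    1994: {"very satisfied": 30, "satisfied": 60, "other": 910},
    1999: {"very satisfied": 70, "satisfied": 130, "other": 1800},
    2003: {"completely satisfied": 950, "partly satisfied": 450, "other": 600},
}

top_categories = {
    1994: ("very satisfied", "satisfied"),
    1999: ("very satisfied", "satisfied"),
    2003: ("completely satisfied",),  # more restrictive measure for 2003
}

for year, counts in waves.items():
    total = sum(counts.values())
    in_top = sum(counts[c] for c in top_categories[year])
    print(year, f"{100 * in_top / total:.0f}% in the upper end of the scale")
```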
Chart 2 indicates a trend of across-the-board improvement over time in user satisfaction when all the agencies are taken together (a weighted average). It is important to understand what lies behind this change. Does the improvement in satisfaction reflect real changes that might have occurred in the quality of services, the responsiveness of the service providers, and the efficiency of service delivery? Did the need for interaction with the agencies significantly decrease? What actions might have been taken by the government and its service providers to achieve such positive outcomes? The comparative Charts 3-8 provide some answers to these questions.

Problem incidence: People are likely to be more satisfied when they have fewer problems in getting a service or while interacting with an agency. The extent to which users of services experience problems has come down in 2003 in comparison with 1999 (Chart 3). Fewer problems mean fewer interactions with the agencies. This usually happens when more people experience relatively more reliable or hassle-free services. Since this aspect was not quantified in 1994, it is not possible to say whether the same pattern existed between 1994 and 1999.
Reduction of problems is an important reason for improved satisfaction. It is likely that the reduction in the frequency of routine problems translates into fewer interactions with citizens, thereby reducing the scope for delay, harassment or corruption.
Staff behaviour: Chart 4 indicates a positive change in staff behaviour for all the agencies taken together. It is difficult to imagine that people who gave low ratings in the past to the same staff would applaud them now without valid reasons.

Corruption: An important question is whether service improvement has been accompanied by a reduction in corruption. The evidence on this is given in Chart 5. Despite some improvement in the services, corruption seems to have increased between 1994 and 1999. But compared to the report card of 1999, the findings of CRC 2003 for general households show that corruption in the agencies has come down (Chart 5). We suspect that this reflects a reduction in the bribes demanded and paid by people in routine transactions. The streamlining of procedures and systems and increased transparency may well have contributed to this outcome. This does not imply that all pockets of corruption have been eliminated. In specialised areas such as building permits and approvals of various kinds, corruption may still be substantial, but this survey was not designed to unearth it. The findings definitely support the premise that simpler procedures and improved efficiency in routine operations, such as self-assessment of property tax by the City Corporation, simplified land transfer by the Urban Development Authority and the like, as well as measures such as IT-enabled billing systems in BESCOM (the electricity company), served to reduce the harassment and extortion citizens faced in the late 1990s.
Feedback from Slum Households
Slum dwellers’ satisfaction with services: While the poor (slum dwellers) also indicated substantial improvement in satisfaction with services, their ratings are significantly lower, with four of the six agencies receiving satisfaction ratings above 70 per cent. The poor do not use the entire range of services shown against the general households (middle income). The ratings given by slum dwellers ranged between 93 per cent for BMTC and 64 per cent for the Bangalore police (Chart 6). Overall, a relatively smaller proportion of slum dwellers were satisfied with most services (despite their low expectations) in comparison with general households.

[Chart 6: Overall Satisfaction 2003, Slum Households — per cent satisfied with BMP, BESCOM, BWSSB, Police, BMTC and government hospitals.]

Problem incidence in services: This is not to suggest that the quality of services in the slums has not improved. The feedback from slum dwellers indicated that service quality in terms of availability of water in public toilets and regularity of garbage clearance had improved substantially. Problem incidence has also declined and compared well with that reported by general households (Chart 7). In 1999, slum dwellers encountered problems in their interactions with agencies most often while dealing with the Bangalore police. This has come down significantly in 2003.

[Chart 7: Problem Incidence across Report Cards, Slum Households — per cent reporting problems with BMP, BESCOM, BWSSB, Police, BMTC and government hospitals, 1999 and 2003.]

Satisfaction with staff behaviour: In most agencies, satisfaction with staff behaviour was higher among slum households than among general households (Chart 8).

[Chart 8: Satisfaction with Behaviour of Staff, Slum Households — per cent satisfied with BWSSB, BESCOM, BMTC, BMP, Police and government hospitals.]

The corruption score: The slum household survey also shows a decline in the proportion of people that have paid bribes. The CRC of 1994 had shown that a third of the poor had paid bribes. This proportion declined to 25 per cent in 1999 and to 19 per cent in 2003. In this regard, the experience of the poor is somewhat similar to that of the middle class. But the proportions are higher for the poor than for the middle class households, a highly iniquitous outcome.

In summary, the CRC findings discussed above show that a turnaround has taken place in Bangalore’s public services over a 10-year period. The improvement in public satisfaction levels reported above has cut across all the major service providers. This improvement is reflected in the feedback provided by both middle and low income households. The positive changes reported in the quality dimensions of the services are consistent with the higher overall satisfaction ratings of the different agencies.

There is a surprising degree of internal consistency among the foregoing findings. If, through various reforms, streamlining, etc, most agencies have managed to reduce the problems or hassles that people encounter during their interaction with agency staff, the scope for petty corruption would tend to decline. This is an unusual finding and has major implications for corruption control strategies. The improvement of services and the reduction of problems in the course of interactions tend to reduce the scope for corruption.

III Drivers of Change

Many observers believe that the improvement in services reported above did not happen overnight. Starting with the first Bangalore CRC in 1994, the spotlight on public services had set in motion a series of actions by different stakeholders that converged and cumulated to produce these results. Some agencies had taken remedial steps to improve their services, as is evident from the CRC of 1999. How these and other factors interacted and cumulated to achieve this turnaround in Bangalore is not easy to measure and explain. Nor is it possible to attribute the precise contribution of each of these factors to the turnaround. Since these changes occurred over a decade, other factors such as the levels of income and education of citizens would have improved and these, in turn, might also have contributed to the positive outcomes noted above.3 But a quick check of the data did not reveal any major shifts in income or education. The information technology sector, known for its higher income and education levels, has certainly grown in Bangalore, but it represents too small a population segment to have had a major impact on the average income and education levels in Bangalore. In the present analysis, we assume, therefore, that the improved service outcomes in the city could not be attributed to changes in long-term variables such as per capita income and education.
The drivers of change in Bangalore can be divided into two categories: one set of factors operated from the demand-side, and the other from the supply-side. The demand for better services tends to operate from outside the government system. Citizen demands and media pressure are some examples. In a real sense, all demand-side factors act as external catalysts. They have no direct role in the design or delivery of services. These external pressures can be sustained, however, only in open, democratic societies that tolerate dissent and debate.
The supply of services, on the other hand, is the business of government itself. The factors that cause supply responses to happen therefore tend to be linked to government and are largely within its control. Governments could take action on their own, or they may act in response to demand-side drivers of change. The interaction between the demand-side and supply-side factors that caused positive service outcomes has been a special feature of the past decade in Bangalore. In terms of sequence, demand-side forces were the first to appear on the city scene. The supply responses came later.
Demand-Side Interventions
The Glare Effect of Citizen Report Cards
The Bangalore report cards exerted pressure on the city’s service providers in three ways. First, the focused information on their performance from the citizens’ perspective (CRCs) put them under the “public scanner”. Since such information was new to them, and much of it was negative, it had the effect of “shaming” the poor performers. The evidence from the corporate world shows that measuring and quantifying work and outputs tend to make organisations pay more attention to what is being measured.
Second, inter-agency comparisons seem to have worked as a surrogate for competition. Though each service provider is a monopoly and its area of activity is distinctive, the CRC challenges this power by permitting an inter-agency comparison of certain common attributes. The users, media and civil society groups see delays, bribery and non-responsiveness as negative features in any service provider.
This sense of competition can percolate down even to the lowest levels in a public agency. After the CRC of 2003 was announced, a conductor in a public bus (BMTC) in Bangalore is reported to have proudly told a quarrelsome passenger: “Don’t you know that PAC has rated our transport service as the best among all the services in the city?” That the report card’s message had gone down to the conductor’s level is instructive. It means that the leadership of the agency has spread the word among its employees. The incentive effects of inter-agency comparisons cannot be overemphasised.
Third, it appears that at least the chairmen of some of the agencies saw the CRCs as an aid in their efforts to reform their agencies. Though the feedback on their agencies was negative to begin with, these leaders took a positive view of the exercise. They used the CRC findings to goad their colleagues to take action to improve the services. It shows that a CRC, when prepared impartially and professionally, can be used to encourage the more proactive among the public leaders to move ahead on the reform front.
The CRC work did not end with the dissemination of its findings. The dissemination was followed up with advocacy for more responsive and efficient agencies. It was the repeated report cards (three in 10 years) and the subsequent public advocacy work together that seem to have made a cumulative impact on the government and citizens of Bangalore. This work was done along with many other civic groups and NGOs in the city. Their education and networking abilities were part of the outcome of the advocacy work. After the public meeting held in Bangalore in connection with the second CRC of 1999, a leading newspaper, The Times of India, said in an editorial: “…PAC, in creating this forum, has opened doors, even windows, for a healthy tete-a-tete with our service providers. The honesty on display was remarkable...this is the spirit of democracy in action. The civil society working in tandem with government for the greater good of all.”4
Demand Pressure through Civil Society Groups
As noted above, PAC’s advocacy work was carried out through a network of civil society groups in Bangalore. In fact, the number of such groups increased significantly since the time of the first report card. There were two types of organisations in the network. The neighbourhood groups called residents’ associations have a direct interest in all the service providers. Then there are public interest groups that work citywide, but on specific civic or service related issues. There were only about 20 such active groups in Bangalore in 1994. By 2000, their number exceeded 200. Most of them are civic groups with local members, local resources, and with no staff of their own. This is a different breed from the conventional NGOs one comes across. Not all of them are dynamic groups, but many did participate in the campaigns and meetings organised by PAC. The demand pressure created by them can be divided into two types: First, their participation in public meetings and seminars where report cards or other civic issues were discussed became an effective means to voice people’s concerns about the services and to demand improvement in agency performance.
Second, the citywide NGOs have made a different kind of contribution to these dialogues. Their focus on specific issues and their citywide campaigns have given greater visibility to the demand-side pressure on the agencies. PAC has assisted and partnered them in most cases, thus strengthening the city’s “social capital”.
Reinforcement of Pressure by the Media
The print media in Bangalore played an unusual role by adding their weight to the pressure for better services. In 1994, all that the newspapers had done was to publicise the negative findings of the report card or other similar critical assessments. The investigative reports on civic issues were few and far between. But the scene has changed since then, as some of the newspapers decided to devote more space to public service problems and related civic issues. A large number of public officials were thus exposed to the issues of the localities and stimulated to respond with answers.
As noted above, these three factors are demand-side interventions and hence can be credited with adding to the external pressure on the service providers to deliver better services to the people. They have worked both in sequence and in an interactive mode. Thus the first report card stimulated media publicity as well as civil society activism. By the time of the second CRC, civic groups and PAC were working together interactively.
Supply-Side Interventions
Entrepreneurial Responses by Agency Heads
Until 1999, the modest improvements in services that occurred in Bangalore and were reflected in the second CRC could be attributed to the actions taken by the agency leaders on their own initiative after the first report card in 1994. A specific problem that agencies such as Bangalore Telecom and BWSSB (the water board) addressed was the problem of excess billing. The streamlining and computerisation of the billing system reduced errors and the need for customers to keep visiting these offices. The reform of grievance redressal procedures was another aspect that agencies such as BMP attempted. The BMP commissioner also took the lead in creating a new forum called ‘Swabhimana’ (self-esteem) for interaction between his officials and civic groups on important local issues. The electricity provider (BESCOM’s predecessor) initiated periodic meetings between citizens and the agency staff and used these occasions to inform the people about the reforms being planned.
A common feature of all these interventions from 1994 by agency heads is that the initiative came from them. They responded, despite the lack of political commitment and support at the level of the chief minister at that time. In a real sense, they were entrepreneurial responses, well within the purview of the authority and resources of the agencies. In almost all cases, they were responding to the first report card’s findings. These leaders saw the CRC as an aid to their endeavour to reform the system rather than as a threat.
The Bangalore Agenda Task Force: A State Initiative
The scene changed for the better in 2000 when the new chief minister of Karnataka created a new body called the “Bangalore Agenda Task Force” (BATF) to work with the major service providers in a partnership mode. This happened a few months after the release of the second CRC and showed the government’s determination to deal with the problems being experienced by the public. BATF consisted of several prominent persons from the private sector and the professional world along with the chairpersons of seven service provider agencies. This public-private partnership was authorised to mobilise funds and expertise to assist and stimulate change in the functioning of these agencies, and to involve the public in appropriate ways in the process. It provided a forum for the service providers to test and experiment with reform ideas, seek assistance and give a public account of their plans and outcomes. This was indeed an institutional innovation that could potentially stimulate the service providers to adopt better practices and be more accountable. BATF launched a series of six-monthly summits where citizens were also invited to listen to these plans and achievements. The main contributions of BATF can be summarised as follows:
as well as the chief minister who also questioned the heads of agencies on their plans and achievements.
BATF had no legal or administrative authority over the public agencies with which it worked. It did not approve their budgets and plans, or oversee their programmes and projects. Its influence stemmed solely from its partnership and catalytic mode of operation, reinforced by the political support behind it. It provided strategic inputs and assistance to the agencies that found them valuable and timely.

Resource mobilisation by the agencies: A parallel development since BATF was set up was a visible improvement in the resources available to the seven service provider agencies. New projects and expansion of infrastructure did call for more resources. There is clear evidence that the leaders of the different agencies mobilised additional resources through a variety of sources. In the case of BMP, its roads and related infrastructure programme was financed by a loan from the Housing and Urban Development Corporation. Similar loans were accessed by BMTC, BWSSB and BESCOM from other sources. BDA was a unique case where most of the funds required for new infrastructure projects were raised from its own internal surpluses.

Role of the ‘Lok Ayukta’ (ombudsman): The ombudsman (Lok Ayukta) in Karnataka state played an indirect role in enhancing accountability in the agencies. He has powers not only to investigate grievances from the people about public agencies, but also to initiate investigations into the operations of the agencies on his own. In Bangalore, the ombudsman has been active on both fronts ever since his appointment in 2000. His raids on public offices and the subsequent actions taken to penalise public officials who indulged in corruption have given much adverse publicity to many agencies and departments of the state government.

Political commitment and support: The common thread that runs through the different supply-side interventions (except the first) discussed above is the political commitment and support of the chief minister of the state. This was a weak factor during 1994-99. The change in the chief ministership in 1999 made a decisive difference. The new chief minister was a leader committed to improving public services and infrastructure. He was determined to find answers to the citizen dissatisfaction with essential services and industry’s dissatisfaction with infrastructure. That political commitment can vary with changes in leaders and governments does raise questions about the sustainability of reforms.
In 2004, a new coalition government took over the reins of power in Karnataka. Coalition politics has already led to the dismantling of BATF and a weakening of the political commitment for reform that was at work in Bangalore. A new CRC may well show a deterioration in the city’s services at this time. In the absence of the erstwhile champions, it is the civil society groups that are once again taking up the cause of public services and corruption. That progress has been achieved for a period is no guarantee that the process will go forward in a linear fashion. It underscores the critical role of civil society institutions as monitors of governance and catalysts for reform. The civil society initiatives and demand for accountability are essential for coping with the vagaries of political commitment.
The foregoing discussion highlights the contributions made by a variety of interventions that reinforced one another in the Bangalore context. It is their joint influence that is reflected in the CRC of 2003 (see the charts). As noted above, the precise influence of each of the factors is difficult to quantify. An agency head, for example, could take credit for the turnaround in his/her agency. But the fact remains that without the support or pressure from the other factors mentioned above, the agency head may not have taken the necessary actions.
IV Conclusions
The internal monitoring mechanisms of government are seldom able to effectively track the outcomes of its policies and programmes. Some aspects of outcomes are best known to the users and beneficiaries of programmes and services. This source of information is rarely used by government. In this paper, we have shown how service providers and their supervising authorities can learn a great deal about the quality and adequacy of their services by listening to citizen feedback. The citizen report cards offer a valuable tool to gather such feedback in a systematic and representative manner from the users of services. The diagnostic information and benchmarks provided by CRCs separate this tool from the conventional feedback from people through protests and complaints against public agencies. Furthermore, they highlight both positive and negative features of services, and hence present a balanced view of the realities on the ground.
The relevance of this tool for the poor cannot be overemphasised. It is difficult and costly for poor people to make their voice heard in powerful and large public agencies. Often their voice may not be correctly represented by their leaders or mediating organisations. Even public hearings tend to be confined to those who are nearby and who could afford the time to participate. The survey methods used by report cards, on the other hand, permit the poor to make their voice heard directly and with minimal bias. CRC findings can empower the poor by giving them information that they can use in their interactions with service providers.
When a government and its service providers are non-responsive or perform poorly, the only option left is for civil society to demand greater accountability. CRCs in conjunction with advocacy can then become a tool for civil society to stimulate government and its service providers to respond to the systemic problems being experienced by the people. The Bangalore CRCs summarised in this paper show how this has been accomplished. CRCs work only from the demand-side and hence there is no guarantee that such positive impacts will occur in every case.
Though a CRC on public services can be conducted as a technical exercise, the dissemination and advocacy work that follows will benefit a great deal from the involvement of local civil society institutions in the process from the start. Citizen groups and other civic associations, NGOs and the media can play a useful role both in supporting the initiative and in taking it forward through advocacy and dialogue. In Bangalore, consultations with NGOs working with the poor helped sharpen the survey’s focus on their problems. The public-private partnerships acted as an effective vehicle for catalysing reform and improving services, once the government took a positive stance in favour of change.
A major share of government expenditure is devoted to the delivery of essential services for ordinary citizens. The outcomes of a significant segment of government’s policies and programmes can therefore be understood and measured with the aid of user feedback of the kind that resulted in the CRCs of Bangalore. Needless to say, public expenditure that does not directly impact on the people cannot be tracked or assessed in this manner. Many reforms and policies that aim to make the economy more competitive or to improve the government’s internal housekeeping may have to be assessed in other ways. But let us begin with the easier and more significant part where user feedback can assist us in assessing the outcomes of public services and programmes that matter most to the people.

Email: pacindia@vsnl.com
Notes
[An earlier version of this paper was presented at the Shanghai Conference organised by the World Bank in 2004. The author is grateful to Deepa Narayan, Suresh Balakrishnan, Sekhar Shah, Sita Sekhar, M Vivekananda and K Gopakumar for their advice and assistance in the preparation of this paper. The author alone is responsible for any errors that remain.]
1 For the most part, public service providers do not announce or enforce any standards. If users knew what standards applied to a service, they could have assessed the service against those standards.
2 This is because all those who come under “satisfied” may or may not be “completely satisfied”.
3 There are two ways in which changes in income and education levels can impact on service outcomes. First, citizens can become more aware, and hence, more demanding when they move up the income and education ladder. The service providers may become more responsive under such conditions. On the other hand, when people become more demanding, they may also apply tougher standards in judging services. This may result in services being rated more stringently than would be the case when people are less demanding and aware.
4 See The Times of India, Bangalore, November 8, 1999.
References
Balakrishnan, S and S Manjunath (2004): Civic Engagement for Better Public Governance, Public Affairs Centre, Bangalore.
Balakrishnan, S and A Iyer (1998): Bangalore Hospitals and the Urban Poor: A Report Card, Public Affairs Centre, Bangalore.
Deichmann, U and V L Lal (2003): ‘Are You Satisfied? Citizen Feedback and Delivery of Urban Services’ (mimeo), Development Research Group, World Bank, Washington DC.
Goetz, A M and J Gaventa (2001): Bringing Citizen Voice and Client Focus into Service Delivery, IDS, Sussex, UK.
Nilekani, N (2003): Bangalore Agenda Task Force: Partnership with Promise?, Public Affairs Centre, Bangalore.
Paul, Samuel (2002): Holding the State to Account: Citizen Monitoring in Action, Books for Change, Bangalore.
Paul, S (1995): A Report Card on Public Services in Indian Cities: A View from Below, Public Affairs Centre, Bangalore.
Paul, S and S Shekhar (2000): Benchmarking Urban Services: The Second Report Card on Bangalore, Public Affairs Centre, Bangalore.
Paul, S et al (2004): ‘The State of India’s Public Services: Benchmarks for the States’, Economic and Political Weekly, February 28, pp 920-33.
Ravindra, A (2004): ‘An Assessment of the Impact of the Bangalore Citizen Report Cards on the Performance of Public Agencies’, OED Working Paper, World Bank, Washington DC.
World Bank (2004): World Development Report 2004, Oxford University Press, New York.