Citizen report cards score in India - Monitoring public service delivery

After more than a decade of monitoring by civil society organisations, the city of Bangalore in Southern India has achieved real progress in improving the quality and cost-effectiveness of its public services.

User feedback is a cost-effective way for a government to find out whether its services are reaching the people, especially the poor. Users of a public service can tell the government a great deal about its quality and value. Strangely enough, this method is neither known to nor used by most developing country governments, and the continuing neglect of service quality is in part a consequence of that fact.

In sharp contrast, there is an active practice of seeking customer feedback in the business world, or at least among those who produce and sell goods in the competitive marketplace. The ‘take it or leave it’ attitude one comes across, especially at the lower levels of the public service delivery bureaucracy, is no doubt due to the fact that government is the sole supplier of most essential services. But the lack of interest among the higher levels of the political and bureaucratic leadership in seeking public feedback on the quality and responsiveness of service providers reinforces this tendency.


What is a citizen report card?

When a government is indifferent, the initiative for change must come from civil society. Citizens who elect and pay for governments cannot and should not remain quiet when essential services are in disarray and public accountability is lacking. It was against this background that the citizen report card (CRC) on public services in Bangalore, Southern India, was launched in 1994. The CRC represents an assessment of the city’s public services from the perspective of its citizens. The latter are the users of these services and can provide useful feedback on the quality, efficiency, and adequacy of the services and the problems they face in their interactions with service providers. When there are different service providers, it is possible to compare their ratings across services. The resultant pattern of ratings (based on user satisfaction) is then converted into a ‘report card’ on the city’s services.

A citizen report card on public services is not just one more opinion poll. Report cards reflect the actual experiences of people with a wide range of public services. The survey on which a report card is based covers only those individuals who have had experiences in the use of specific services, and interactions with the relevant public agencies. Users possess fairly accurate information, for example, on whether a public agency actually solved their problems or whether they had to pay bribes to officials. Of course, errors of recall cannot be ruled out, but the large numbers of responses that sample surveys generate lend credibility to the findings.

Stratified random sample surveys using well-structured questionnaires are the basis on which report cards are prepared. It is generally assumed that people from similar backgrounds in terms of education, culture, and so forth, are likely to use comparable standards in their assessments. But these standards may be higher for higher income groups than for the poor whose expectations of public services tend to be much lower. Dividing households into relatively homogeneous categories is one way to minimise the biases that differing standards can cause.
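To make the stratification idea concrete, here is a minimal Python sketch of drawing a stratified random sample from a household listing. The household records, stratum labels, and sampling fraction are illustrative assumptions, not part of the PAC methodology.

```python
import random
from collections import defaultdict

# Hypothetical household listing: (household_id, stratum) pairs, where a
# stratum groups households of broadly similar income and education levels.
households = [
    ("HH-0001", "low-income"), ("HH-0002", "middle-income"),
    ("HH-0003", "low-income"), ("HH-0004", "high-income"),
    ("HH-0005", "middle-income"), ("HH-0006", "low-income"),
]

def stratified_sample(records, fraction, seed=42):
    """Draw the same fraction of households from every stratum, so each
    group is represented and ratings can be compared across strata."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for household_id, stratum in records:
        by_stratum[stratum].append(household_id)
    sample = []
    for ids in by_stratum.values():
        k = max(1, round(len(ids) * fraction))
        sample.extend(rng.sample(ids, k))
    return sample

print(stratified_sample(households, fraction=0.5))
```

Because every stratum contributes in proportion to its size, low-income households cannot be crowded out of the sample, and their (typically lower) expectations do not distort the ratings of other groups.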

The Bangalore experiment

The Public Affairs Centre (PAC) in Bangalore has done pioneering work on CRCs over the past decade. The first report card on public agencies in 1994 covered municipal services, water supply, electricity, telecommunications and transport. Since then, PAC has brought out report cards on several other cities and rural areas, and also on social services such as health care. But since it has tracked services for a longer period in Bangalore, we shall refer only to this experiment.

The findings of this first CRC on Bangalore were most striking. Almost all the public service providers received low ratings from the people. Agencies were rated and compared in terms of public satisfaction, corruption and responsiveness. The media publicity that these findings received, and the public discussions that followed, brought the issue of public services out in the open. Civil society groups began to organise themselves to voice their demands for better performance. Some of the public agencies responded to these demands and took steps to improve their services. The inter-agency comparisons and the associated public glare seem to have contributed to this outcome. When the second report card on Bangalore came out in 1999, these improvements were reflected in the somewhat better ratings that the agencies received. Still, several agencies remained indifferent and corruption levels continued to be high.

The third CRC on Bangalore, in 2003, showed a surprising turnaround in the city’s services. It noted a remarkable rise in citizen ratings of almost all the agencies. Not only did public satisfaction improve across the board, but the incidence of problems and corruption in routine transactions between the public and the agencies also seems to have declined perceptibly. Clearly, the agencies took more decisive steps to improve services between 1999 and 2003.

Lessons

What accounts for this distinct turnaround in Bangalore’s public services? And what lessons can we learn from this experiment? Needless to say, without deliberate interventions by the government and the service providers, no improvement in the services would have taken place. But the key question is: what made them act? A whole complex of factors seems to have been at work. The new Chief Minister who took over in 1999 was deeply concerned about public dissatisfaction with the city’s services. He set in motion new mechanisms such as the Bangalore Agenda Task Force, a forum for public-private partnerships that helped energise the agencies and assisted in upgrading the services. Civil society groups and the media supported and monitored these efforts. It is significant that the initial trigger for these actions came largely from civil society’s citizen report card initiative.

What are the preconditions for such civil society initiatives to work? Obviously, they are more likely to succeed in a democratic and open society; without adequate space for participation, CRCs are unlikely to make an impact. A tradition of civil society activism also helps: people should be willing to organise themselves to engage in advocacy and to seek reforms backed by credible information. Political and bureaucratic leaders, for their part, must have the will and the resources to respond to such information and to the people’s call for improved governance.

The credibility of those who craft CRCs is equally important. The initiators of the exercise should be seen as non-partisan and independent. They need to maintain high professional standards. The conduct of the survey and the interpretation of the findings should be done with utmost professional integrity. A report card does not end with the survey and its publication. Much of the advocacy work that follows will draw upon the report card findings. The CRC thus is a starting point, to be followed by further action through organised advocacy efforts, including civic engagements and dialogues with the relevant public agencies.

Conclusion

When a government on its own improves its services and accountability, initiatives such as CRCs may not be necessary. But even under these ideal conditions, a report card can be an effective means for civil society groups to monitor the performance of government and its service providers. Public agencies can, on their own, initiate report cards on their performance as indeed some in Bangalore have done. However, when a government is indifferent to these concerns, the report card approach can be an aid to civil society groups that wish to induce the government to perform better.

The key stages of a citizen report card study

1. Assess the applicability of citizen report cards. Conditions which affect the outcomes of CRCs include the receptiveness of the political context, the extent of decentralisation, the extent to which citizens can voice opinions freely, and local competency to carry out surveys and advocacy. The Public Affairs Foundation (PAF), a sister organisation of PAC, provides advisory services to various clients and has developed a structured assessment exercise to explore the applicability of the tool to the local context.

2. Determine the scope and plan the procedures. The next step is to identify the key sectors/services to be included in the survey, map service provision structures, and identify local partners who will participate in the survey.

3. Design the questionnaire. Focus group discussions involving both service providers and users are necessary to provide input for the design of the questionnaire. Providers of services may indicate not only what they have been mandated to provide, but also areas where feedback from clients can improve their services. Users may give their initial impressions of the service, so that areas that need attention can be determined.

4. Sampling. Collecting feedback from the entire population would demand too much time and money. Sampling, when carried out accurately, gathers feedback from a group that is representative of the larger population. The appropriate sampling design must be determined. Knowledge of statistics and prior experience in developing a sampling plan are necessary, and it may also be useful to consult an expert on sampling techniques if the population in question is complex (a worked sample-size calculation follows this list).

5. Execute the survey. First, select and train a cadre of survey personnel. Second, after a certain proportion of interviews are complete, perform random spot monitoring of question sessions to ensure that the recording of household information is accurate. Third, upon completion of each interview, go over the information collected to identify any inconsistencies.

6. Analyse the data. Typically, respondents rate aspects of government services on a numeric scale (e.g. –5 to +5 or 1 to 7). These ratings are then aggregated and averaged, and percentage measures are produced (a short aggregation sketch follows this list). A typical finding may look like this: boys tend to drop out of school more than girls; of those children who drop out of elementary school, 60% do so in grades 4 and 5.

7. Disseminate the results. There are three important points to consider with regard to the dissemination of CRC findings:

  • The findings should be constructively critical; the aim is neither to embarrass a service provider nor to flatter it.

  • The media is the biggest ally for dissemination. Prepare press kits with small printable stories, media-friendly press releases, and translations of the main report into local languages.

  • Following the publication of the CRC survey findings, service providers and users should meet in a town-hall type setting. This not only allows for a constructive dialogue, but also puts pressure on service providers to improve their performance for the next round. If more than one agency is being evaluated, these settings can foster a sense of healthy competition among them.

8. Advocacy and service improvements. The findings of the pilot citizen report card survey can then be used in an advocacy programme which seeks to increase public pressure, build coalitions and partnerships and influence key players.
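As flagged in step 4, deciding how many households to interview is one place where basic statistics enters the sampling plan. The sketch below computes a standard sample size for estimating a satisfaction percentage; the 95% confidence level, the conservative 0.5 proportion, and the 5-point margin of error are illustrative assumptions, not figures from the Bangalore surveys.

```python
import math

z = 1.96        # z-score for a 95% confidence level
p = 0.5         # assumed proportion; 0.5 yields the most conservative size
margin = 0.05   # acceptable margin of error (plus or minus 5 points)

# Standard sample-size formula for estimating a proportion.
n = math.ceil(z**2 * p * (1 - p) / margin**2)
print(f"Households to interview: {n}")  # -> 385
```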
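And as noted in step 6, the analysis itself is largely a matter of averaging ratings and computing percentage measures. A minimal sketch follows, assuming hypothetical ratings on a 1-to-7 scale and an assumed cut-off of 5 for counting a user as satisfied; the agency names and scores are invented for illustration.

```python
# Hypothetical user ratings for two agencies on a 1-to-7 scale; the agency
# names, scores, and satisfaction cut-off are illustrative assumptions.
ratings = {
    "water supply": [5, 6, 3, 7, 4, 6, 2, 5],
    "electricity":  [2, 3, 4, 2, 5, 3, 1, 4],
}
SATISFIED_AT = 5  # ratings of 5 and above count as "satisfied"

for agency, scores in ratings.items():
    average = sum(scores) / len(scores)
    pct = 100 * sum(s >= SATISFIED_AT for s in scores) / len(scores)
    print(f"{agency}: average rating {average:.1f}, {pct:.0f}% satisfied")
```

Reporting both the average rating and the percentage satisfied, agency by agency, is what makes the inter-agency comparisons described above possible.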
