Counting the Wrong People: The Hidden Errors Behind Welfare Policy
Bad data can make good policies look bad – and bad ones look good. Reliable measurement is the hidden foundation of social protection.
Akanksha Negi: Monash University.
Digvijay Singh Negi: Ashoka University.
SDG 16: Peace, Justice and Strong Institutions
Institutions: Ministry of Statistics and Programme Implementation | Ministry of Cooperation
Governments spend billions each year on welfare programmes that promise to support vulnerable citizens – pensions for the elderly, food subsidies for the hungry, cash transfers for struggling families. Their purpose is clear; what remains in doubt is whether they work.
The answer depends on data: who actually received the benefits, and how their lives changed. Yet those basic facts are rarely reliable. When the information on who was helped is inaccurate, even the most sophisticated evaluations can mislead policymakers.
A deep-dive into two of India’s biggest welfare schemes – the Indira Gandhi National Old-Age Pension Scheme (IGNOAPS) and the Public Distribution System (PDS) – shows how flawed data can distort the story: one makes an effective programme look weak, while the other makes a failing one appear strong.
When Pensions Quietly Work
The IGNOAPS offers elderly Indians a modest monthly payment, about ₹200–₹400, or roughly $5, for those without steady income or family support. The sums are small, yet in poor households the money rarely stays with the elderly alone: it often buys food or school supplies for children.
On paper, the scheme seems to underperform. Household data from the India Human Development Survey (IHDS) show pensions reaching only half as many people as official records claim, and many supposed recipients deny ever receiving benefits – sometimes to stay eligible for other schemes, sometimes from mistrust of surveyors.
Taken at face value, the findings appear discouraging. Children in households that report receiving pensions show little improvement in nutrition, and even a rise in underweight cases.
But when the data are corrected for hidden recipients – those who received payments but did not disclose them – the picture reverses: children in pension-receiving households turn out to be less likely to be stunted or underweight. Under-reporting had quietly erased the evidence of success, making an effective scheme look feeble.
When Food Aid Reaches the Wrong Plate
Data from the PDS, which supplies subsidised grains such as rice and wheat to families holding Below Poverty Line (BPL) ration cards, tell the opposite story. According to the same IHDS data, 56 percent of poor households lacked a BPL ration card, while 8 percent of non-poor households had one. Many poor families were excluded due to bureaucratic hurdles or outdated rolls, while better-off ones slipped in through influence or clerical error.
At face value, the programme looks like a success. Recipients appear to consume about 2.7 percent more calories per person than non-recipients – a result easily interpreted as evidence that food subsidies improve nutrition.
But the illusion fades once the analysis corrects for mistargeting by distinguishing true from nominal beneficiaries: the effect virtually disappears, and the calorie gain falls to near zero. The apparent success was a statistical artefact that counted benefits where they did not belong.
The contrast between the two programmes is striking. The pension scheme looked weaker than it was; the food subsidy looked stronger than it was. Both were victims of the same underlying problem: errors in identifying who actually received the benefit.
How Good Economics Can Go Wrong
These contradictions reveal a broader problem in how policy effectiveness is measured. Economists often rely on a method called difference-in-differences (DiD) to assess impact. The idea is straightforward: compare how outcomes change over time for those who received a programme and for those who did not. The difference between these changes is attributed to the programme’s effect.
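In the simplest two-period setting, that comparison reduces to a single formula – written here in generic textbook notation rather than anything specific to the Indian surveys:

$$
\widehat{\tau}_{\mathrm{DiD}} \;=\; \big(\bar{Y}^{\mathrm{treated}}_{\mathrm{after}} - \bar{Y}^{\mathrm{treated}}_{\mathrm{before}}\big) \;-\; \big(\bar{Y}^{\mathrm{control}}_{\mathrm{after}} - \bar{Y}^{\mathrm{control}}_{\mathrm{before}}\big)
$$

where each $\bar{Y}$ is the average outcome for the group recorded as treated or control, measured before or after the programme.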
It sounds simple enough. But the method assumes that the researcher correctly observes who was treated and who was not. When that assumption fails, the results can be badly skewed.
If some recipients fail to report benefits – as in the pension scheme – the measured effect will usually be understated, because treated households are hiding within the control group. If, instead, the wrong people are counted as recipients – as in the PDS – the effect can be overstated, because the treated group includes those never meant to be treated.
What is less obvious, and more troubling, is how severe the distortion can be. In these two cases, even a 10 percent rate of false reporting made the estimates more than 20 times more biased. Small data errors can produce massive policy misjudgements.
Worse, the direction of bias flips with context. The same flaw can make one programme look too weak and another too strong.
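A small simulation makes both mechanics concrete. The numbers below are purely illustrative – they are not drawn from the IHDS or from the authors’ estimates – and the second scenario assumes that recording errors are correlated with how households’ outcomes happened to change, which is one stylised way mistargeting can inflate an estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def did(y_pre, y_post, treated):
    """Two-period difference-in-differences: difference of mean outcome changes."""
    change = y_post - y_pre
    return change[treated].mean() - change[~treated].mean()

n = 200_000  # synthetic households; all parameters are made up for exposition

# --- Scenario 1: a programme that works, but some recipients deny receiving it ---
treated = rng.random(n) < 0.3                              # true recipients
y_pre = rng.normal(0.0, 1.0, n)
y_post = y_pre + 0.2 + 0.5 * treated + rng.normal(0.0, 1.0, n)   # true effect = 0.5

reported = treated & (rng.random(n) > 0.4)                 # 40% of recipients deny benefits
print("true effect 0.5 | correct labels:", round(did(y_pre, y_post, treated), 3),
      "| under-reported labels:", round(did(y_pre, y_post, reported), 3))

# --- Scenario 2: a programme with no effect, but record errors favour households
# whose outcomes improved anyway (a crude stand-in for outcome-correlated mistargeting) ---
treated2 = rng.random(n) < 0.3
y_pre2 = rng.normal(0.0, 1.0, n)
y_post2 = y_pre2 + 0.2 + rng.normal(0.0, 1.0, n)           # true effect = 0
change2 = y_post2 - y_pre2
false_positive = (~treated2) & (rng.random(n) < 0.15) & (change2 > np.median(change2))
recorded2 = treated2 | false_positive
print("true effect 0.0 | correct labels:", round(did(y_pre2, y_post2, treated2), 3),
      "| mistargeted labels:", round(did(y_pre2, y_post2, recorded2), 3))
```

In the first scenario, a genuine effect is understated because hidden recipients sit inside the control group; in the second, a programme with no effect appears to work because the wrong, faster-improving households are counted as beneficiaries.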
The Invisible Cost of Bad Data
Behind these statistical debates lies a more practical truth. The credibility of welfare policy depends as much on information systems as on budgets.
In countries like India, this challenge is magnified by the scale. Programmes reach hundreds of millions of people through overlapping channels, each with its own paperwork, eligibility criteria, and reporting incentives. At every step, there is room for data to go wrong.
Surveys remain indispensable, but their quality depends on how respondents perceive risk and reward. Beneficiaries may hide information for fear of losing access or to claim eligibility elsewhere. Local officials, too, may have incentives to understate errors or inflate reach.
The result is that data quality becomes a policy variable in its own right – one that shapes not only how well a programme runs but how we think it runs.
Building Data into Delivery
Improving this picture requires attention to both delivery and diagnosis.
First, welfare systems must treat data accuracy as infrastructure, not an afterthought. Transparent beneficiary lists, regular audits, and better integration between administrative records and household surveys can sharply reduce errors. With safeguards, digital ID systems such as Aadhaar can link delivery and measurement more precisely. But such measures must be backed by grievance mechanisms to catch exclusions.
Second, targeting processes need more checks against local discretion. Community verification, social audits, and data-sharing between programmes can ensure that eligibility lists reflect current reality rather than outdated rosters.
Third, evaluation itself must evolve. Too often, analysts take the data as given and proceed to apply standard econometric tools. But as these examples show, data imperfections are not mere noise. In fact, they can completely reverse conclusions. Testing how sensitive findings are to reporting errors, or using corrective techniques that account for misclassification, should become standard practice in applied policy research.
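What such a sensitivity check might look like in practice is sketched below. The adjustment is a deliberately simple back-of-the-envelope device – it assumes misreporting is one-sided (some true recipients deny receipt, nobody falsely claims it) and unrelated to outcomes, which is far stronger than what a formal misclassification-robust method would require, and it is not the authors’ estimator – but it shows how a finding can be stress-tested against a range of assumed error rates:

```python
def corrected_did(naive_did, reported_share, assumed_denial_rate):
    """Back-of-the-envelope adjustment of a two-group DiD estimate when an assumed
    share of true recipients deny receipt (one-sided, outcome-independent
    misreporting). A sensitivity device only."""
    q, r = reported_share, assumed_denial_rate
    # implied share of the recorded 'control' group that actually received benefits
    hidden_share = r * q / ((1 - r) * (1 - q))
    if hidden_share >= 1:
        raise ValueError("assumed denial rate is inconsistent with the reported share")
    # under these assumptions the naive estimate is attenuated by (1 - hidden_share)
    return naive_did / (1 - hidden_share)

# Illustrative inputs, echoing the first simulation above: a naive estimate of 0.43
# with 18 percent of households reporting receipt.
for r in (0.0, 0.1, 0.2, 0.4):
    print(f"assumed denial rate {r:.0%}: corrected estimate "
          f"{corrected_did(0.43, 0.18, r):.3f}")
```

With those illustrative inputs, the corrected figure climbs to about 0.50 when the assumed denial rate reaches 40 percent – a reminder of how quickly conclusions can move with the error rate, and why reporting such ranges should be routine.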
Stronger data systems build not just better evidence but greater trust in government decisions.
Seeing Clearly
At its heart, evaluation is an exercise in accountability. But accountability begins with seeing clearly. Before asking whether welfare policies work, governments must first know for whom they work.
Data, in this sense, are not just numbers; they are the record of the social contract. They determine who is recognised as deserving support and who is invisible to the state. When those records are wrong, both policy and justice falter.
Fixing the plumbing of welfare delivery – the systems that record, verify, and target benefits – is therefore not a technical chore; it is part of the social contract itself.
The pension that quietly helps children and the food subsidy that fails to reach the poor both tell the same story. In development policy, as in life, seeing things clearly is the first step to getting it right.
The discussion in this article is based on the authors’ research published in the Journal of Applied Econometrics (Volume 40). Views are personal.


