Saturday, July 07, 2007
Big Brother just wants to help
Mar 8th 2007
From The Economist print edition
Software: The use of data mining by governments need not be sinister, and could help to deliver public services more efficiently
[Illustration: Belle Mellor]
WHEN you order books from an online bookstore or buy groceries from a supermarket's website, the personalised book suggestions that pop up, and the reminder that you normally buy milk, are generated by data-mining software that analyses buying habits. The use of such technology by retailers is commonplace. But now governments are adopting it too, in fields from education to tax collection, in order to plan, implement and assess new policies. “Not only do firms like Tesco have good operational systems that control their costs, but they understand their customers and can offer particular product mixes which are attractive to certain groups,” says Peter Dorrington of SAS, one of the biggest providers of data-mining and analysis software. Why, he asks, shouldn't governments do the same?
After all, government policies, like a supermarket's special offers, are designed to meet the needs of particular subsets of the population. Using data-mining tools, it is possible to spot trends and optimise processes. Take, for example, the British government's efforts to encourage more people from poor backgrounds to go to university. The government gives universities extra funds if they recruit and retain students from poor backgrounds. The Universities and Colleges Admissions Service (UCAS) categorises the 2m university applications it processes each year by age, gender, ethnic origin, parental occupation, domicile, and the desired institution and course. Universities use this data when selecting candidates and the government uses it to see how its policy is working and to assess the effects of changes in policy.
Last year UCAS tested the use of data-mining software from SAS to evaluate applicants' suitability for courses based on their personal statements and references. For a set of applicants—those who had applied for medical school in recent years—the text from these documents was analysed to look for keywords such as “patient”, “experience”, “hospital” and “team” that might indicate that applicants had relevant experience and other signs of commitment. Data-mining software then looked for links between the occurrence of these keywords and outcomes, such as whether an applicant was accepted on a course or whether that applicant completed the course. If such links can be reliably identified, universities would be able to select the students most likely to complete a given course, irrespective of their socio-economic backgrounds. That could help to reduce discrimination against poorer applicants, who may be regarded as bad risks by universities.
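The article does not describe how SAS's software actually models these links. As a purely illustrative sketch of the general approach it outlines (keyword features drawn from personal statements, linked to a completion outcome), the following Python fragment fits a simple logistic regression; the statements, outcomes and keyword list are invented for illustration and do not reflect the UCAS trial itself.

```python
# Illustrative sketch only: invented data, not the SAS/UCAS method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

statements = [
    "I gained experience shadowing a hospital team and talking to patients",
    "I enjoy biology and want to study medicine",
    "Volunteering on a ward taught me how a clinical team supports each patient",
]
completed_course = [1, 0, 1]  # hypothetical outcomes: 1 = completed the course

# Restrict features to the kind of keywords mentioned in the article.
keywords = ["patient", "experience", "hospital", "team"]
vectorizer = CountVectorizer(vocabulary=keywords, binary=True)
X = vectorizer.fit_transform(statements)

model = LogisticRegression()
model.fit(X, completed_course)

# Positive coefficients would suggest keywords associated with completion.
for word, coef in zip(keywords, model.coef_[0]):
    print(f"{word}: {coef:+.2f}")
```

In practice such a model would be trained on many thousands of historical applications, and the keyword list itself would be derived from text mining rather than fixed in advance.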
Similarly, a number of school districts in American states including Iowa, New York, Alabama, Colorado and Minnesota are using data-mining tools from SPSS, another software firm, to analyse students' records and spot trends in order to meet the requirements of the No Child Left Behind Act. These are relatively small projects so far, but could easily be scaled up. Big commercial users of SPSS's software, such as telecoms firms, use it to analyse databases of over 40m customers, says Colin Shearer of SPSS. So there is no technical reason why large government databases cannot be mined for insights.
One of the largest government systems to employ data mining is Centrelink, Australia's benefits agency, which deals with over 6.4m claimants and carries out more than 5 billion computerised transactions a year. Centrelink already has a predictive model, called the Job Seeker Classification Instrument, which evaluates benefit claimants and assesses the risk that they will become long-term unemployed. Claimants thought to be at high risk are then given more help in finding a new job. The agency is now planning a scheme to test the use of data mining to identify fraudulent claimants. The inspiration comes from insurance companies, which use predictive risk models (developed from thousands of claim histories) to analyse claims. Low-risk claims are paid quickly, and high-risk claims are investigated further. Similarly, Centrelink plans to use data mining to identify claimants for whom further investigation is merited.
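Centrelink's actual model is not described in the article. The triage pattern it borrows from insurers, however, is simple to sketch: score each claim, pay low-risk claims automatically and refer high-risk claims for investigation. The Python below is a toy illustration with invented features and thresholds; a real system would derive its scoring function from thousands of historical claim outcomes.

```python
# Illustrative sketch only: invented features, weights and thresholds.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    prior_overpayments: int
    weeks_since_last_claim: int

def risk_score(claim: Claim) -> float:
    """Toy linear risk score in [0, 1]; a real model would be trained on data."""
    score = 0.0
    score += min(claim.amount / 10_000, 1.0) * 0.4
    score += min(claim.prior_overpayments / 3, 1.0) * 0.4
    score += (1.0 if claim.weeks_since_last_claim < 4 else 0.0) * 0.2
    return score

INVESTIGATE_THRESHOLD = 0.6  # hypothetical cut-off

def triage(claims: list[Claim]) -> tuple[list[Claim], list[Claim]]:
    """Split claims into those paid automatically and those investigated."""
    pay_now = [c for c in claims if risk_score(c) < INVESTIGATE_THRESHOLD]
    investigate = [c for c in claims if risk_score(c) >= INVESTIGATE_THRESHOLD]
    return pay_now, investigate
```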
Tax agencies around the world already mine data to look for possible fraud. But a more recent trend is text-mining to help taxpayers avoid errors. Sweden's tax authority is using SPSS's software to analyse the patterns of mistakes in tax returns so as to provide better guidance and improve the design of tax forms. And Australia's tax office is employing SAS tools to sort queries from taxpayers who are uncertain whether the rules apply to them or not. The office can then supply taxpayers with the right information—and learn which parts of the tax code are causing the most confusion. Data-mining software is also used by Denmark's National Board of Health, France's benefits agency, the South African treasury and Belgium's finance ministry for performance measurement and policy planning.
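Again, the article does not say how the SAS tools sort these queries. One plausible approach to the task it describes, grouping similar questions so that recurring topics point to the most confusing parts of the tax code, is a simple text-clustering sketch like the one below; the queries and cluster count are invented for illustration.

```python
# Illustrative sketch only: invented queries, not the tax office's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

queries = [
    "Can I deduct travel between two jobs on the same day?",
    "Is travel from home to my second job deductible?",
    "Do I pay capital gains tax when I sell my main residence?",
    "Is my home exempt from capital gains tax if I rented part of it?",
]

X = TfidfVectorizer(stop_words="english").fit_transform(queries)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Queries landing in the same cluster suggest a common area of confusion.
for label, query in zip(kmeans.labels_, queries):
    print(label, query)
```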
All of these schemes use data mining in an effort to improve the delivery of public services. But despite the good intentions, the collection and analysis of personal data by governments inevitably raises Big Brotherish concerns over civil liberties. In Britain, for example, the National Health Service is establishing a national database so that the most important data about patients can be called up by any hospital in the country. But many family doctors are refusing to hand over records, which are now kept in local surgeries, because to do so would break patient confidentiality. Liberty, a human-rights campaign group, worries that data collected by one arm of government will be made available to others.
Another worry is that data mining could prove counterproductive. “Those at the more vulnerable end of the social scale are likely to stop seeking advice and help if they know that the information will be noted and generally available,” says Gareth Crossman of Liberty. Yet another concern is that data-mining and classification schemes can get things wrong. The American Civil Liberties Union, for example, is worried about the Automated Targeting System, an American security scheme that uses data mining to assign a risk score to anyone who enters the country. If a model labels someone as high-risk, there is no way to find out why or to challenge the label.
Dr Paul Henman from the University of Queensland, who has written extensively on the subject, raises a rather more philosophical objection to government data-mining: that the technology starts to transform the nature of government itself, so that the population is seen as a collection of sub-populations with different risk profiles—based on factors such as education, health, ethnic origin, gender and so on—rather than a single social body. He worries that this undermines social cohesion. “A key principle in liberal democracies is that we are all peers and equal before the law,” he says. But for proponents of the technology, such segmentation is the whole point: policies, like supermarket special offers, are often aimed at groups—and the more accurately they can be targeted, the better.
Copyright © 2007 The Economist Newspaper and The Economist Group. All rights reserved.