How fair is an algorithm? A comment on the Algorithm Assessment Report

The worldwide use of faster, smarter and more complex algorithms has the potential to make many things better, but some things worse. The now-famous controversies over facial recognition software and the COMPAS criminal recidivism prediction tool, which overstated the risk of reoffending for African Americans, are cases in point. Of course, we have had our own history of controversy over the use of predictive analytics in child welfare – first proposed to deliver preventive services, then trialled in child protection decision-making at the intake office of what is now Oranga Tamariki. Neither is currently in use, as confirmed in the Algorithm Assessment Report released last week by Stats NZ, which outlines the ways algorithms are currently used in government.

The report is a great start towards transparency around the ways algorithmic tools are currently used in Aotearoa, and shows a commitment to greater public openness about the use of such tools. It gives some insight into how algorithms are used across a range of services – from identifying school leavers at risk of long-term unemployment to flagging dodgy packages arriving at the border for the New Zealand Customs Service. But how should we evaluate the ways algorithms impact on rights? Algorithmic tools used in the social policy and criminal justice spheres inevitably shape who qualifies for limited resources and how the state interacts with those caught up in the criminal justice system. In both areas there are important ethical implications, and these depend on the data used, the type of algorithm, and the extent to which it is used in actual decision-making.
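To make the fairness question concrete: one common way to evaluate a predictive tool of the COMPAS kind is to compare its false positive rate across demographic groups – how often people who did not go on to reoffend were nonetheless flagged as high risk. The sketch below is purely illustrative, using hypothetical made-up data, not anything from the report:

```python
def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend (outcome 0) that the
    model nonetheless flagged as high risk (prediction 1)."""
    flagged_non_reoffenders = [p for p, y in zip(predictions, outcomes) if y == 0]
    if not flagged_non_reoffenders:
        return 0.0
    return sum(flagged_non_reoffenders) / len(flagged_non_reoffenders)

# Hypothetical data: 1 = flagged high risk / did reoffend, 0 = not.
group_a_pred, group_a_actual = [1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0]
group_b_pred, group_b_actual = [0, 1, 0, 0, 0, 1], [0, 1, 0, 0, 0, 1]

fpr_a = false_positive_rate(group_a_pred, group_a_actual)
fpr_b = false_positive_rate(group_b_pred, group_b_actual)

print(f"Group A false positive rate: {fpr_a:.2f}")  # 0.25
print(f"Group B false positive rate: {fpr_b:.2f}")  # 0.00
print(f"Disparity: {abs(fpr_a - fpr_b):.2f}")
```

A gap like the one above is exactly the pattern at the centre of the COMPAS controversy: even when overall accuracy looks similar, the burden of wrongful flagging can fall disproportionately on one group. Which disparity measure matters is itself a contested ethical choice, not a purely technical one.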