Predictive risk modelling: on rights, data and politics.

One of the items included in the scope of the current New Zealand government’s review of the Child, Youth and Family services (CYFS) is this one: ‘The potential role of data analytics, including predictive risk modelling, to identify children and young people in need of care and protection’.

Predictive risk modelling (PRM) is a simple and seductive idea. If we can accurately predict who is likely to abuse children before they have done so, then we can target services to those families, fulfilling the dual objectives of preventing harm before it occurs and being highly efficient with taxpayer dollars. Such seductive ideas are often worth investigating, especially in an age where access to the ‘big data’ required to attempt such a proposition is viable: enormous datasets can be mined, a large number of variables can be included, and patterns of particular combinations of risk factors for certain populations can be identified. In the case of the proposed Ministry of Social Development (MSD) PRM tool, however, there are a number of issues. In particular: the accuracy of the PRM tool is overstated; the data it relies on has serious problems; its usefulness as a practice decision-making tool is minimal; it has significant rights implications; and using it to decide who should be offered preventive services may not be any more effective than the current state of affairs (although, to be fair, this is difficult to ascertain – but it needs to be).
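To see why headline accuracy figures can overstate a tool's practical value, consider the effect of base rates. The sketch below uses entirely illustrative numbers (not the actual performance of the MSD tool) to show how, when the outcome being predicted is rare in the population, even a model with respectable sensitivity and specificity flags mostly families who would never have maltreated a child:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Share of families flagged 'high risk' who actually go on to the
    predicted outcome, via Bayes' rule on the confusion-matrix rates."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative assumptions only: a 2% base rate of substantiated
# maltreatment, 76% sensitivity, 80% specificity.
ppv = positive_predictive_value(prevalence=0.02,
                                sensitivity=0.76,
                                specificity=0.80)
print(f"PPV: {ppv:.1%}")  # roughly 7% under these assumed numbers
```

Under these assumed numbers, over nine in ten families flagged by the tool would be false positives – which is why a quoted accuracy or AUC figure, on its own, says little about what happens when the tool is used to target real families for intervention.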