Pluralistic: "Attacking Machine Learning Training By Re-ordering Data" by Cory Doctorow

Writing in Pluralistic, Cory Doctorow explains how certain weaknesses of machine learning (and perhaps of human nature) can be exploited. In particular, he looks at the special problem of initialization bias: the fact that the results of machine learning depend on the order in which the algorithm is fed data. This matters to everyone, because the stakes are high:
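
To see why order matters, here is a minimal sketch (not Doctorow's code, nor the paper's): per-example SGD on a toy least-squares problem, where the dataset, learning rate, and epoch count are all illustrative. Two runs start from identical weights and see identical examples, differing only in the order those examples arrive, and they end up with different models.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 examples, 5 features
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

def sgd(order, epochs=3, lr=0.05):
    """One-example-at-a-time SGD for least squares, visiting rows in `order`."""
    w = np.zeros(5)                    # identical initialization for both runs
    for _ in range(epochs):
        for i in order:
            grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x.w - y)^2
            w -= lr * grad
    return w

natural = np.arange(200)
shuffled = rng.permutation(200)        # same examples, different order

w1, w2 = sgd(natural), sgd(shuffled)
print("max weight difference:", np.abs(w1 - w2).max())  # nonzero: order matters
```

The gap here is small because the toy problem is convex; the point is that it is nonzero at all, which is the lever an adversary who controls data ordering can pull on.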

We have increasingly outsourced our decision-making to machine learning models ("the algorithm"). The whole point of building recommendation, sorting, and "decision support" systems on ML is to undertake assessments at superhuman speed and scale, which means that the idea of a "human in the loop" who validates machine judgment is a mere figleaf, and it only gets worse from here.

There are real consequences to this. I mean, for starters, if you get killed by a US military drone, chances are the shot was called by a machine-learning model.

And it's worse than that, since these errors probably can't even be audited.

Full article here.
