Pluralistic: "Attacking Machine Learning Training By Re-ordering Data" by Cory Doctorow
We have increasingly outsourced our decision-making to machine learning models ("the algorithm"). The whole point of building recommendation, sorting, and "decision support" systems on ML is to undertake assessments at superhuman speed and scale, which means that the idea of a "human in the loop" who validates machine judgment is a mere fig leaf, and it only gets worse from here.
There are real consequences to this. I mean, for starters, if you get killed by a US military drone, chances are the shot was called by a machine-learning model.
And it's worse than that: these errors probably can't even be audited.
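The order-sensitivity that the title refers to shows up even in a toy setting. This is an illustrative sketch, not the attack from the paper: it assumes plain per-sample SGD on a one-parameter linear model and shows that running one epoch over the same four data points in two different orders ends at different weights. That order-dependence is the surface a data-ordering attacker manipulates.

```python
def sgd_epoch(data, w=0.0, lr=0.05):
    # One pass of per-sample SGD fitting y ~ w * x by squared error.
    # The final w depends on the order the (x, y) pairs arrive in.
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

# Slightly noisy data (with perfectly consistent data the order
# would wash out; real datasets are never that clean).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (0.5, 0.9)]

w_forward = sgd_epoch(data)
w_reversed = sgd_epoch(list(reversed(data)))

# Same data, same learning rate, different order: different model.
print(w_forward, w_reversed)
```

An attacker who controls only the order in which a training pipeline sees its (untampered) data can steer which of these endpoints the model lands on, which is why shuffling that relies on a predictable or attacker-influenced source is a real surface.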