Algorithmic fairness is "philosophy: hard mode"

Got thinking about “fairness” today.

Most of the conversation I’ve heard has been about things that are obviously and grossly unfair, like sentencing algorithms that give black people longer sentences, or the DHS/ICE risk-assessment tool that was tweaked to recommend detention for 100% of the people it evaluated.

That’s a good place to start! For these dumb cases, we can probably say “stop that.” Sure. But it doesn’t give us a lot to go on in terms of figuring out algorithmic fairness as a field.

Algorithmic fairness didn’t use to be a problem. For the old-fashioned analogues of a lot of things getting algorithmized today, you would just accept a little inaccuracy, and that was fine. If you had a little warehouse and you were trying to hire workers, you’d try to hire the best workers, and as long as they were working hard and you shipped as many things as you needed, that was probably fine. You didn’t have to think too hard about what basis you were hiring workers on. “Well, honest hard workers, that’s all!” But if you’re Amazon and you’re trying to stock your modern warehouse, every 1% efficiency boost is 1% more profit, so if you can fire Slightly-Less-Efficient Steve and hire Efficient Earl, company-wide, you’ll make another $100m or something. And you can monitor workers like never before - and tweak your scanner-machine algorithm to squeeze out every last second!

So now, if you’re Amazon, you have to think about the weird questions: what do you actually want from your workers, and how are you going to tweak your algorithms to optimize for that? And if you’re a regulator or organized labor, how do you write your laws to force Amazon not to optimize for the wrong thing?

Turns out, we never even solved this! We never agreed on what it’s fair/just/Right to ask of workers; we just kinda outlawed the worst abuses, settled on some defaults like the “40-hour work week”, and hoped “people being decent” would do the rest. (In the case of labor, Reagan and the anti-union turn of the last couple decades apparently decided “let the companies optimize for whatever they want, it’ll be fine!")

Take the same argument and turn it on pricing models, or prison sentencing, or ad targeting. We never solved the underlying problem; we just kludged it along to a point where nobody with enough power argued too terribly much. Now we’re trying to algorithmize this stuff, and we’re realizing all these gaps exist.

I guess this is why people are working on this. But I think it’s under-resourced if it’s being treated as just an HCI problem. (or “just a ____ problem”, where ____ is any one field.) We’re prying up a couple rotten floorboards, and we’re going to discover our whole ethical foundation is not really as strong as we’d hoped.
(or maybe, instead of “we’re going to discover”, I mean “legal scholars and philosophers already know, but techies are now discovering.")

