For some reason there’s a sudden interest in what seems like a minor topic: whether machine learning is biased. The basic argument is that since computer programs are written by humans and trained on human data, they inevitably inherit human biases along the way. Therefore, we need to…
…do something. But I’m not sure what. Nobody ever seems to suggest much of anything.
I suspect there’s a reason for that: the problem of AI bias is far less pervasive than critics suggest. It happens, of course. But the thing to keep in mind about AI and machine learning is that they don’t have to be perfect to be useful. They just have to be better than humans. As long as an algorithm is no more biased than the average person—and I’ve never heard of an example where one is—it’s a useful thing.
But there’s more than that to say in favor of automation: even if an algorithm is biased, it’s far easier to correct than it is in a human. For example, if you’re concerned about why an algorithm is making its decisions, you can program it to tell you. It will be 100 percent honest and 100 percent non-defensive about this. Humans, by contrast, frequently don’t even know why they make particular decisions, and if they do they’ll often lie about it.
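To make “you can program it to tell you” concrete, here’s a minimal sketch of a fully transparent scoring model that reports exactly which factors drove each decision. Everything here—the feature names, the weights, the threshold—is hypothetical, chosen only for illustration:

```python
# A toy loan-approval scorer whose decisions are fully inspectable.
# All feature names, weights, and the threshold are hypothetical.

WEIGHTS = {"income": 0.5, "credit_history": 0.3, "debt_ratio": -0.4}
THRESHOLD = 0.4

def decide(applicant):
    """Return (approved, explanation) for a dict of normalized features.

    The explanation lists each factor's exact contribution to the score,
    largest first. There is nothing hidden and nothing to be defensive
    about: the model's entire "reasoning" is right there.
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return approved, explanation

approved, why = decide({"income": 0.9, "credit_history": 0.8, "debt_ratio": 0.5})
print(approved)  # True
print(why)       # [('income', 0.45), ('credit_history', 0.24), ('debt_ratio', -0.2)]
```

A real model is rarely this simple, but the point stands: an algorithm’s decision process can be laid bare on demand, in a way a human’s never can.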
Likewise, if you’re concerned that a training set has introduced bias, you can retrain an algorithm. Once you have a goal in mind, this is fairly quick and painless. Humans, by contrast, are all but impossible to retrain once they become adults.
It’s useful to be aware of these things, and to insist that algorithm designers build in safeguards against bias from the start. Algorithms of any complexity shouldn’t be black boxes. They shouldn’t be hard to retrain. They should be written to hunt for possible biases and report them. This isn’t trivial, but it’s hardly the biggest programming challenge in the world. Given all this, the notion that machine intelligence will end up more biased than human beings is, to anyone who’s aware of just how biased human beings are, pretty laughable.
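A basic version of “hunt for possible biases and report them” can be as simple as comparing outcome rates across groups. Here’s a minimal sketch, with made-up data and group labels; the 80 percent cutoff is one common convention (the “four-fifths rule”), not the only possible test:

```python
# Audit a batch of decisions for disparate outcome rates between groups.
# The data and group labels are hypothetical; 0.8 is the common
# "four-fifths rule" threshold, used here purely as an illustration.

def audit(decisions):
    """decisions: list of (group, approved) pairs.

    Returns the approval rate per group and a list of groups whose
    rate falls below 80 percent of the best-off group's rate.
    """
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + (1 if approved else 0)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = [g for g, r in rates.items() if r < 0.8 * best]
    return rates, flagged

decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 5 + [("B", False)] * 5
rates, flagged = audit(decisions)
print(rates)    # {'A': 0.8, 'B': 0.5}
print(flagged)  # ['B'], since B's rate is under 80 percent of A's
```

Ten lines of bookkeeping, run routinely over a model’s output, is already more self-scrutiny than most human decision-makers ever submit to.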