HIRING BIAS
Can machines do any better?
Financial institutions are looking to shake off their pale, male and stale reputations.
But will handing over hiring decisions to machines make workplaces more diverse?
Experts in the field of artificial intelligence (AI) and recruitment weigh in on whether bias in machine learning models is a problem and, if so, what’s being done about it.
“I don’t think the goal should be to completely eliminate all possible biases in one fell swoop but to do better than the status quo and keep improving over time,” says Ariel Procaccia, associate professor in computer science at Carnegie Mellon University in Pittsburgh, in the US.
Procaccia says significant progress has been made in tackling the problem of bias in machine learning but that a complete fix is still a long way off. Researchers have identified sources of bias, defined formal notions of fairness, and designed AI algorithms that are fair according to those ideas, he says.
However, Procaccia says there are two obstacles to putting this into practice. “First, ironically, there is an embarrassment of riches when it comes to definitions of fairness and potential fixes, and it’s still unclear how to choose among them,” he says.
“Second, researchers have identified inherent trade-offs between notions of fairness and other qualities of AI algorithms; it seems that pushing bias out of algorithms must come at some cost to their effectiveness.”
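The tension Procaccia describes can be seen with a toy calculation. The sketch below uses fabricated numbers to show that when groups differ in their base rate of qualification, even a perfectly accurate classifier cannot satisfy two popular fairness definitions at once: "demographic parity" (equal selection rates) and "equal opportunity" (equal true-positive rates).

```python
# Toy illustration: with different base rates of qualification between
# groups, a perfectly accurate classifier equalises true-positive rates
# but not selection rates. All data here is fabricated.

def selection_rate(preds):
    """Fraction of candidates the classifier selects."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of genuinely qualified candidates the classifier selects."""
    qualified = [p for p, y in zip(preds, labels) if y == 1]
    return sum(qualified) / len(qualified)

# Group A: 4 of 5 candidates qualified; group B: 1 of 5.
labels_a = [1, 1, 1, 1, 0]
labels_b = [1, 0, 0, 0, 0]

# A perfectly accurate classifier predicts the labels exactly, so the
# true-positive rate is 1.0 for both groups (equal opportunity holds) ...
preds_a, preds_b = labels_a[:], labels_b[:]
tpr_gap = abs(true_positive_rate(preds_a, labels_a)
              - true_positive_rate(preds_b, labels_b))

# ... yet it selects 80% of group A and only 20% of group B, a large
# demographic-parity gap. Closing that gap would require selecting some
# unqualified candidates or rejecting qualified ones, i.e. losing accuracy.
parity_gap = abs(selection_rate(preds_a) - selection_rate(preds_b))
```

This is why choosing a fairness definition matters: forcing the parity gap to zero here would necessarily cost the algorithm some of its accuracy, which is the trade-off Procaccia points to.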
“It’s a multi-step process,” says Ashutosh Garg, CEO and co-founder of Eightfold.ai, an AI-powered recruiting platform based in Mountain View in California.
Despite all the scepticism about the technology, Garg says it’s possible to train machines to be unbiased. You start by collecting data and models from thousands of sources, he says. You then remove “anything that can create division like gender, race, and ethnicity” from the data.
Machine learning systems can be optimised for equal opportunity, Garg says, and analytics can be used to detect and measure bias.
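A minimal sketch of the two steps Garg outlines might look like the following: strip protected attributes before a model ever sees a candidate record, then audit selection rates by group afterwards. The field names and the four-fifths threshold are illustrative assumptions, not Eightfold.ai's actual schema or method.

```python
# Illustrative sketch, not Eightfold.ai's implementation. Step 1: remove
# protected attributes from model inputs. Step 2: measure outcome
# disparity between groups using the original, unstripped records.

PROTECTED_FIELDS = {"gender", "race", "ethnicity"}

def strip_protected(record):
    """Copy of a candidate record with protected attributes removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

def selection_rates(candidates, selected_ids, group_field):
    """Selection rate per group, computed on the ORIGINAL records so
    outcomes can be audited even though the model never sees the field."""
    totals, hits = {}, {}
    for c in candidates:
        g = c[group_field]
        totals[g] = totals.get(g, 0) + 1
        if c["id"] in selected_ids:
            hits[g] = hits.get(g, 0) + 1
    return {g: hits.get(g, 0) / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by highest; values below 0.8 trip
    the 'four-fifths rule' heuristic from US employment guidelines."""
    return min(rates.values()) / max(rates.values())
```

Note the design point buried in Garg's description: removing a field from the model's inputs is not enough on its own, because the audit step still needs the original data to detect whether outcomes diverge by group.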
But Joy Buolamwini, founder of the Algorithmic Justice League and a research assistant at MIT Media Lab in Massachusetts, says: “You’ll think you’re being neutral because you’re using data, but our research shows that’s not always the case.”
Buolamwini says researchers have been finding algorithmic bias in machine learning systems for years. “Now these systems are being sold and incorporated into the tools we use every day,” she says. “This is part of why we’re seeing the algorithmic bias.”
Buolamwini says in some cases machine learning tools are still at an early stage and aren’t an appropriate foundation for commercial applications such as recruiting bots. “If you don’t have [the right] foundation for building these systems, you’re going to perpetuate discrimination.”
Rashida Richardson, director of policy research at the AI Now Institute at New York University, says AI hiring tools are only as unbiased as the people who feed the systems data and interpret the results.
“Hiring is a multi-step process. If you’re not looking through the entire pipeline of that process and how this tool will interact with all of the other decision points, then you’re choosing to take a very narrow view on what you think that problem is.”
Richardson says research shows women and people of colour aren’t proportionately represented in higher-paying sectors.
“If you apply an AI hiring tool in that environment, it’s only going to accelerate that problem, favouring whoever is currently benefiting from the power structure within a company.”
Mekala Krishnan, a senior fellow at McKinsey Global Institute in Boston, says it’s important that “technology is made by diverse individuals”.
“Women make up about 20% or less of tech workers in developed economies, and so there’s a lot to be done to increase women’s participation.”