All disruption has to fight against fear

By Barbara Hyman

A collection of our thoughts and opinions regarding AI in recruitment

Depending on which media you read, technology, and specifically Artificial Intelligence, will create or destroy thousands of jobs. It is already radically changing many, as well as how we apply and hire for them.

Back in the day when cars were first introduced, there was such fear about the danger they posed to society that drivers approaching a junction were required to stop the car, get out and fire a warning shot, so that people in the surrounding area would be safe from unexpected danger.

I was reminded of this when reading the commentary around Amazon and its use of AI to screen talent.

In case you missed it, Amazon ran an experiment. They analysed 10 years of CV data to build a predictive model to help filter what I am sure is hundreds of thousands of applications to work at the company. Because the sample group was mostly male, the CVs were naturally biased towards male ‘traits’, if there is such a thing. The model built from this training data ended up mirroring that sample group, which meant it preferred male CVs to female ones.
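The mechanism is easy to reproduce in a few lines. Here is a deliberately crude toy illustration (invented data, nothing like Amazon's actual system): a scorer that simply learns word frequencies from a male-dominated set of past hires will down-weight terms that mostly appear in the minority group's CVs.

```python
from collections import Counter

# Toy training set: CVs of past hires, skewed heavily male.
# Entirely made-up data to illustrate the mechanism, not Amazon's system.
hired_cvs = [
    "software engineer java captain rugby club",
    "software engineer python rugby",
    "java engineer chess club",
    "software engineer java python",
    "engineer women's chess club python",   # the only CV containing "women's"
]

# "Train": count how often each word appears among successful hires.
word_freq = Counter(word for cv in hired_cvs for word in cv.split())
total = sum(word_freq.values())

def score(cv: str) -> float:
    """Average per-word frequency in the training set: a crude proxy
    for 'looks like our past hires'. Words unseen in training score zero."""
    words = cv.split()
    return sum(word_freq[w] / total for w in words) / len(words)

# Two CVs identical except that one mentions a women's club.
plain = "software engineer java chess club"
with_womens = "software engineer java women's chess club"
print(score(plain) > score(with_womens))  # True: the rare word drags the score down
```

No one wrote "prefer men" anywhere in that code; the preference falls out of the skewed sample on its own.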

It is pretty obvious to all of us that if you build a product off one homogenous group, you will end up flavouring it with the characteristics of that group. YouTube found this when the team building its iOS app didn’t consider left-handed users when it added mobile uploads: videos recorded in a left-handed person’s view of landscape came out upside down, presumably because the team was made up entirely of right-handed people.

Humans are heavily prone to unconscious bias. In fact, we rely on it to survive. While these biases help us not go insane, they have unfortunately led us to a point where they have a very significant effect on the workforce. There are many serious forms of bias, but the best known is gender bias. A recent study showed that simply by changing the name on an application from a woman’s to a man’s, with every other detail kept the same, the ‘male’ applicant was more likely to progress to an interview. The exact same CV.

When humans do screening, they are prone to making snap judgements based on superficialities, ignoring the many factors that can actually help predict whether a candidate will perform. This is where data platforms have an advantage: by doing ‘blind screening’, they make the process both faster and fairer. However, this only works when the data that goes into the model accounts for human frailties.
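In practice, ‘blind screening’ starts with something very simple: identity signals are stripped from a candidate record before any scoring happens, so the model only ever sees job-relevant inputs. A minimal sketch, with entirely hypothetical field names:

```python
# Hypothetical candidate record; field names are illustrative only.
candidate = {
    "name": "Alex Smith",
    "gender": "F",
    "age": 42,
    "photo_url": "https://example.com/alex.jpg",
    "test_scores": {"problem_solving": 0.82, "teamwork": 0.91},
    "structured_answers": ["...", "..."],
}

# Fields a blind-screening pipeline would drop before scoring.
IDENTITY_FIELDS = {"name", "gender", "age", "photo_url"}

def blind(record: dict) -> dict:
    """Return a copy with identity signals removed, so downstream
    scoring can only use job-relevant inputs."""
    return {k: v for k, v in record.items() if k not in IDENTITY_FIELDS}

screened = blind(candidate)
print(sorted(screened))  # ['structured_answers', 'test_scores']
```

A human screener cannot un-see a name or a photo; a pipeline can simply never be shown them.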

When it comes to using data to build predictive models to inform and guide decision-making, it is important to really dig deep on the input data.

The key insight from this experiment for Amazon is that relying on CVs to assess talent is inherently flawed. This is accentuated even more when you accept that what differentiates talent now, and will become even more acute in the future, is not hard skills, not what uni someone went to or what degree they have, but soft skills. Jeff Weiner, who has the benefit of this kind of rich data from 600m users, attested to that this week.

At PredictiveHire, working with dozens of companies across the world to help blind screen thousands of candidates, we know that it’s the behaviours and values of a potential coworker that influence their performance and tenure. Values such as commitment, and attitudes, are invisible in a CV. They’re not easy to see in an interview either. But they’re easily tested using well-crafted data platforms.

So let’s try to look beyond the news grab: headlines naturally attract attention when Amazon is in the first line.

  • Algorithms will be biased if the data they are built with is biased.
  • Algorithms can be tested for bias. Humans can’t be.
  • Algorithms can be trained to remove bias. Humans, truthfully, can’t be.
  • Algorithms are blind to your gender, skin colour and age. Humans are very sensitive to these, especially when it comes to hiring.
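The claim that algorithms can be tested for bias is concrete: given a model’s decisions and group labels, you can measure selection rates directly. Here is a sketch of the widely used ‘four-fifths’ adverse-impact check (the outcomes below are invented for illustration):

```python
# Invented screening outcomes: (group, was_shortlisted)
decisions = [
    ("F", True), ("F", False), ("F", True), ("F", False),
    ("M", True), ("M", True), ("M", True), ("M", False),
]

def selection_rate(group: str) -> float:
    """Fraction of candidates in a group who were shortlisted."""
    outcomes = [shortlisted for g, shortlisted in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def passes_four_fifths(a: str, b: str) -> bool:
    """Adverse-impact check: the lower selection rate should be
    at least 80% of the higher one."""
    ra, rb = selection_rate(a), selection_rate(b)
    return min(ra, rb) / max(ra, rb) >= 0.8

print(selection_rate("F"), selection_rate("M"))  # 0.5 0.75
print(passes_four_fifths("F", "M"))              # False: 0.5/0.75 ≈ 0.67
```

Run the same audit on a human panel’s decisions and you can only measure the damage after the fact; run it on an algorithm and you can retrain until the check passes.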

We have a once-in-a-millennium opportunity to extend and enable better, fairer thinking through careful and conscious AI-assisted decisions. The algorithms we build aren’t sentient beings or unmanageable acts of nature; they are built by humans. When we recognise that and are conscious of those risks, we can start to counteract these biases through technology, helping humans see what’s in front of us more clearly, without the filters of bias.