Amazon has scrapped a secret AI recruiting program after it showed bias in favour of men.

According to Reuters, Amazon’s machine-learning specialists began working on a program in 2014 that automated the recruitment process by independently reviewing candidates’ CVs. It then scored applicants from one to five – before anyone at the company had looked at the CVs themselves.

“They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those,” an unnamed Amazon employee told Reuters.

Just one year after launch, developers realised the new system rated men and women differently, preferring male candidates for technical posts even when women had comparable qualifications.

The problem lay in the training data. Amazon’s model had analysed CVs submitted to the company over a 10-year period, and because men dominated technical roles during that time, the system mistook gender as an indicator of success.
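Amazon has not published its system, so the mechanism can only be illustrated with a toy sketch. The Python example below uses entirely invented CVs and hiring labels, and a simple bag-of-words classifier built with scikit-learn, to show how this kind of skew gets baked in: because the word “womens” only appears in CVs that were historically rejected, the model learns a negative weight for it, even though it says nothing about job performance.

```python
# Illustrative only: Amazon has not published its model. This trains a toy
# bag-of-words "CV ranker" on synthetic, deliberately skewed hiring data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical CVs and outcomes: past hires skew heavily male,
# so terms that correlate with gender also correlate with the "hired" label.
cvs = [
    "software engineer java aws",                          # hired
    "machine learning python chess club captain",          # hired
    "software engineer python womens chess club captain",  # rejected
    "data engineer sql aws",                               # hired
    "backend developer java womens college graduate",      # rejected
    "devops engineer aws python",                          # hired
]
hired = np.array([1, 1, 0, 1, 0, 1])

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)

model = LogisticRegression().fit(X, hired)

# The learned weight for "womens" comes out negative purely because of the
# skew in the historical labels, not because it predicts job performance.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
for term in sorted(weights, key=weights.get):
    print(f"{term:12s} {weights[term]:+.2f}")
```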

In an alarming example of short-sightedness, the system also penalised graduates of all-women’s colleges, treating the schools as a mark against otherwise qualified candidates.

Although Amazon refined and corrected the algorithm’s more obvious biases, in the end, it could not guarantee the machines weren’t using other discriminatory metrics to rank candidates. In addition, the system recommended many unqualified candidates for technical jobs, seemingly at random. Amazon scrapped the project last year.
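One reason such guarantees are hard to give is that bias rarely lives in a single, removable feature. Continuing the toy sketch above (again with invented data and an invented “proxy” token, not anything from Amazon’s system), stripping the explicitly gendered term from the CVs simply shifts the negative weight onto whatever tokens happen to correlate with it:

```python
# Continuation of the toy sketch above, still with invented data. Removing
# the explicitly gendered token does not remove the bias: a correlated
# "proxy" token absorbs the negative weight instead.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer java aws rowing club",
    "python developer womens netball team captain",
    "data engineer sql aws football",
    "backend developer womens netball college",
    "devops engineer aws python rugby",
]
hired = np.array([1, 0, 1, 0, 1])

def fit_weights(texts):
    vectoriser = CountVectorizer()
    X = vectoriser.fit_transform(texts)
    model = LogisticRegression().fit(X, hired)
    return dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))

# Naive "de-biasing": delete the explicitly gendered token and retrain.
scrubbed = [cv.replace("womens ", "") for cv in cvs]

for term, weight in sorted(fit_weights(scrubbed).items(), key=lambda kv: kv[1])[:3]:
    print(f"{term:12s} {weight:+.2f}")
# "netball", which co-occurred with the removed token in this toy data, now
# carries the negative signal, so the ranking stays skewed.
```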

Increasingly, companies are turning to algorithms to replace service-sector workers, in the hope that they will make more rational, informed, and reliable decisions. Not all jobs are at risk, but when it comes to filling vacancies, firms no longer want to leave anything to chance.

A large number – including Hilton and Goldman Sachs – are turning to machine learning to automate their recruitment processes, either in-house or through recruitment-as-a-service solutions.

Critics have pointed out that, aside from amplifying our unconscious biases, algorithms cannot yet adequately judge character. Candidates have also complained that they are being turned away out of hand on the basis of fuzzy logic and unproven science such as image-based personality detection.