The Ethics Of Artificial Intelligence & Talent Acquisition: Assessment

Artificial Intelligence’s efficiency proposition for talent acquisition teams is undeniable. Organisations, however, need to be not only efficient but also ethical. In this three-part series, based on the Acolyte whitepaper It is ethical to apply AI to recruitment, we look at how artificial intelligence is being applied to recruitment and the ethical challenges being raised. In part two we explore the impact of AI on assessment.


How can AI be applied to assessment?

AI can be used to automate the decision-making process, determining whether or not an applicant matches the hiring criteria or fits a hiring profile.

What is the AI doing?

AI can be used in a number of different ways: to automate decision making based on complex candidate inputs, to process natural language, and to learn over time (machine learning) what characterises applicants who ‘pass’ versus applicants who ‘fail.’

In essence, however, in all situations a target profile is established (and continually reinforced) and applicants are automatically measured against that target profile using objective data points.
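The target-profile matching described above can be sketched in a few lines. This is purely illustrative: the field names, target values, and weights below are hypothetical, and real assessment platforms use far more complex (and often proprietary) models.

```python
# Illustrative sketch only: a naive weighted match of applicant
# attributes against a target hiring profile. All field names,
# targets, and weights are hypothetical.

TARGET_PROFILE = {
    "years_experience": (5, 0.4),    # (target value, weight)
    "skills_matched":   (8, 0.4),
    "assessment_score": (75, 0.2),
}

def match_score(applicant: dict) -> float:
    """Return a 0-1 score for how closely an applicant meets each target."""
    score = 0.0
    for field, (target, weight) in TARGET_PROFILE.items():
        value = applicant.get(field, 0)
        # Credit is capped at 1.0 per field: exceeding the target
        # scores the same as meeting it exactly.
        score += weight * min(value / target, 1.0)
    return round(score, 2)

applicant = {"years_experience": 4, "skills_matched": 8, "assessment_score": 60}
print(match_score(applicant))  # → 0.88
```

Even this toy example shows where the ethical risk enters: whoever sets the targets and weights encodes their own assumptions about what a ‘good’ candidate looks like, and the model inherits those assumptions wholesale.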

Does AI make assessment more or less ethical?

The application of AI to assessment, particularly in a volume hiring context, was seen as one of the areas with the most potential within recruitment. The idea of removing the transactional assessment load from humans, whilst also increasing objectivity and consistency, is very appealing.

Unfortunately, however, this is also the area in which the application of AI has been most controversial. In fact, many early adoptions of AI in this capacity have been withdrawn: some video interview platforms, for example, used AI to evaluate candidate body language, but this feature has been removed from most tools after court cases challenged the validity of the assessments being made.

The effectiveness of AI-driven assessment decision making rests on the quality of the data that underpins it: the data must be unrestricted in range, free of bias, and plentiful enough to drive reliable insights. Unfortunately, we are still a very long way from that being the case. Mass data-sets are coded by humans, so they inherit human biases, and they have other limitations: a photo of someone smiling might be labelled ‘happy’, but the person could also be smiling as a defence mechanism, for example to mask pain. They can also over-simplify, for instance defining gender in a binary way rather than in more fluid terms.

Organisational data-sets are limited by the characteristics of the existing workforce: ‘success’ will only be defined in the way it is achieved by your current population, excluding the ways in which different populations, who aren’t currently represented within your workforce, might achieve (or even define!) ‘success’.

There are also still limitations in the capability of the technology itself, whether it is the accuracy of speech-to-text transcription (especially when accents or speech impediments are factored in) or the technology’s ability to separate relevant details from irrelevant ones (for example, analysing the person rather than the wallpaper behind them).

Finally, just because something characterises ‘most’ people who fit a role, it does not mean it characterises ‘all’ of them, and it will take an unimaginable amount of algorithm training before AI can account for all potential variables whilst still delivering its efficiency proposition. For example, AI might pick up that people who live nearest to an office are most likely to be offered, and to accept, a job, but it does not follow that someone who lives further from the office will never be right for, or never accept, that role.

So, whilst this remains the area in which AI has the most potential to transform recruitment, we can say definitively that its application to assessment is not, at present, ethical. Assessment decision making involves candidate jeopardy: a wrong decision can have a material impact on an applicant’s life, so it is critical that the process is as robust as possible, and we cannot currently trust AI to make reliable, unbiased decisions. The only remaining question is whether it is any less ethical than human decision making, which has its own significant ethical limitations.

DOWNLOAD THE FULL WHITEPAPER

Read part 1: AI for Candidate Sourcing

Read part 3: AI for Recruitment Process Management
