AI Alone Can’t Make You Better at Recruiting

Artificial intelligence (AI) is often pitched as the be-all and end-all solution for tasks that range from tedious to tortuous. In recruiting, there are indeed many benefits of using AI: It can take on the monotony of resume screening, reduce the number of interviews required, and help avoid the unconscious bias that often creeps into interpersonal interactions.

But it can only do these things well (accurately, fairly, and efficiently) if the data it is trained on is good: representative, relevant, and vast. That requirement raises difficult questions: What data do companies deem relevant or necessary? Do they have enough of it to be representative and to systematically avoid bias? Are the inputs themselves biased? What leads to genuinely predictive outcomes?

It’s worth taking a closer look at the pros and cons of some common recruiting applications of AI, especially as they relate to data inputs:

Assessments and AI

The recruiting industry has used assessments successfully for many years to get a more holistic understanding of candidates’ personalities, innate abilities, and potential. Only recently have assessments been paired with AI, which offers the benefit of being able to predict a candidate’s likelihood of succeeding on the job based on an analysis of their assessment responses.

However, to really reap the benefits of this customized approach, it is essential that the same assessments be given to both employees and candidates. The quality of your AI’s predictions depends on gathering data about existing employee performance. After all, the AI needs some yardstick against which to measure candidate potential. Equally important is the relevance of the questions posed in the assessment: if you want a high level of predictive accuracy, your assessment’s prompts must be job-specific.
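To make that yardstick concrete, here is a minimal sketch of what such a prediction model might look like, assuming scikit-learn and entirely hypothetical assessment scores and performance ratings; a real system would need far more data and validation.

```python
# A minimal sketch of assessment-based prediction, assuming scikit-learn.
# All data here is hypothetical; in practice the features would be scored
# responses to the same job-specific assessment given to both employees
# and candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Rows: current employees. Columns: scored responses to assessment items.
employee_responses = np.array([
    [4, 2, 5, 3],
    [1, 4, 2, 2],
    [5, 5, 4, 4],
    [2, 1, 3, 1],
    [4, 4, 5, 5],
    [1, 2, 1, 3],
])
# Labels: 1 = rated a high performer on the job, 0 = not.
performance = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression()
model.fit(employee_responses, performance)

# Check that the assessment actually predicts performance before trusting it.
accuracy = cross_val_score(LogisticRegression(), employee_responses,
                           performance, cv=3).mean()
print("cross-validated accuracy:", accuracy)

# Score a candidate who took the same assessment.
candidate = np.array([[3, 4, 4, 2]])
print("estimated probability of success:", model.predict_proba(candidate)[0, 1])
```

The cross-validation step is the safeguard here: if the assessment items aren’t job-specific, accuracy on held-out employees will hover near chance, which is a signal to fix the assessment before scoring any candidates.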

Video Interviews and AI

Some companies use video technology and facial recognition to help determine whether someone will be a good fit for a role. Applicants answer predetermined questions on camera, and AI is said to analyze their facial expressions, determine their moods, and assess their personalities.

If you are hiring for a large number of customer-service or customer-facing roles, this technology may be useful and unproblematic. Facial recognition can predict some simple components of behavior, like whether someone smiles often, which may be all you need to know about a candidate. However, let’s say your company values cooperation. AI could determine that a cooperative person may smile more than a less cooperative person, but that doesn’t mean a smiling candidate is necessarily cooperative.
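The gap in that last step is a base-rate problem: even if cooperative people almost always smile, a smile can still be weak evidence of cooperativeness whenever smiling is common overall. A toy Bayes’-rule calculation, using entirely made-up numbers, shows how far the two probabilities can diverge:

```python
# Toy Bayes'-rule calculation with entirely made-up numbers, illustrating
# why a high P(smiles | cooperative) does not imply a high
# P(cooperative | smiles).
p_cooperative = 0.2            # base rate: 20% of candidates are cooperative (assumed)
p_smile_given_coop = 0.9       # cooperative candidates usually smile (assumed)
p_smile_given_not_coop = 0.6   # but many less cooperative candidates smile too (assumed)

# Total probability of observing a smile.
p_smile = (p_smile_given_coop * p_cooperative
           + p_smile_given_not_coop * (1 - p_cooperative))

# Bayes' rule: P(cooperative | smiles)
p_coop_given_smile = p_smile_given_coop * p_cooperative / p_smile
print(f"P(cooperative | smiles) = {p_coop_given_smile:.2f}")  # ~0.27
```

With these assumed numbers, observing a smile moves the probability of cooperativeness only from a 20% base rate to about 27%, hardly grounds for a hiring decision.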

Facial expressions also tell you little about how a candidate will fare in a more specialized role; they simply are not predictive of that kind of job performance. It’s also important to consider whether the AI making these determinations is trained on relevant data. For example, if you are prioritizing thoughtfulness and empathy in candidates, you need to be sure the models your AI uses are trained on data related to expressions of thoughtfulness and empathy rather than, say, enthusiasm and positivity.

You should also vet the data the AI is trained on for representativeness. Were faces of all different races, genders, ages, shapes, and sizes fed into the system? If not, your AI’s predictions will be compromised, usually at the expense of women, older people, and people of color.
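A basic first step in that vetting is auditing the composition of the training set itself. Here is a sketch that assumes each training image carries demographic metadata; the field names, values, and 10% flag threshold are all illustrative, not a standard:

```python
# Sketch of a representativeness audit, assuming hypothetical demographic
# metadata attached to each face in the training set.
from collections import Counter

# Hypothetical records: one dict per training image.
training_faces = [
    {"race": "white", "gender": "woman", "age_band": "25-34"},
    {"race": "white", "gender": "man",   "age_band": "25-34"},
    {"race": "black", "gender": "woman", "age_band": "45-54"},
    {"race": "asian", "gender": "man",   "age_band": "55+"},
    # ...thousands more in a real audit
]

for attribute in ("race", "gender", "age_band"):
    counts = Counter(face[attribute] for face in training_faces)
    total = sum(counts.values())
    print(attribute)
    for value, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented?" if share < 0.10 else ""
        print(f"  {value}: {share:.0%}{flag}")
```

In practice you would also compare these shares against your actual applicant population, since a group can be present in the training data yet still underrepresented relative to who applies.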

Gamification and AI

Neuroscience is also finding practical application in the recruiting space through interactive puzzles and games meant to determine how someone may perform or behave in certain situations. These gamified assessments are typically more engaging and less daunting than other screening tools, which drives higher completion rates and greater investment of effort.

While these games can measure things like impulsivity, attention to detail, and how one processes information, context must be considered when interpreting the results. For example, someone who shows a high risk tolerance within a game won’t necessarily have the same tolerance in the real world: people are often far more willing to take risks in a virtual setting than they are on the job, where risks carry real consequences. Human beings are complex, and gamified applications of AI risk reducing a person’s potential to a narrow set of binary characteristics when it deserves to be considered in a three-dimensional way.
