AI Recruitment: Fair Judgment or Bias Trap?

Reinforcing Preconceptions: Examining How We Unintentionally Exacerbate Social Biases

This article analyzes the hidden biases in everyday behaviors and decisions, reveals their potential impact on social equity, and proposes directions for reducing bias.

New Challenge in Job Applications: "Impartial" Human-AI Interaction

It's graduation season again, and millions of graduates are once more entering the job market.

The artificial intelligence revolution has begun, affecting almost every aspect of people's professional and personal lives, including academic interviews and job recruitment.

Corporate managers increasingly recognize that AI can bring greater efficiency to areas such as supply chain management, customer service, product development, and human resources.

One such application is the so-called "AI interviewer" used in recruitment. It is essentially an interactive question-and-answer bot: it poses predetermined questions, analyzes and organizes candidates' responses with algorithms such as semantic recognition, facial expression recognition, and voice recognition, and ultimately produces a reference score.
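As a rough illustration, and not any vendor's actual system, such a pipeline might fuse the sub-scores from each analysis module into a single weighted reference score. The module names and weights below are hypothetical:

```python
# Hypothetical sketch of an AI interviewer's final scoring step:
# fuse per-modality sub-scores (0-100) into one weighted reference score.
WEIGHTS = {"semantic": 0.5, "facial": 0.2, "voice": 0.3}  # illustrative weights

def reference_score(sub_scores: dict[str, float]) -> float:
    """Weighted average of the sub-scores produced by each module."""
    return sum(WEIGHTS[name] * score for name, score in sub_scores.items())

# Example: scores emitted upstream by the semantic, facial-expression,
# and voice-recognition analyzers for one candidate.
print(reference_score({"semantic": 82.0, "facial": 74.0, "voice": 90.0}))
# -> 82.8
```

Note that every design choice here, from the weights to which modalities are measured at all, is made by humans, which is exactly where bias can enter.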

For example, when screening resumes against job requirements, AI outperforms humans in both accuracy and efficiency. In relatively simple, standardized first-round interviews, it can also complete the initial screening far faster than human interviewers.

Moreover, AI interviewers appear inherently "impartial": they avoid biases born of prejudice, discrimination, or nepotism, thereby improving the fairness and consistency of decisions.

The "2023 China Online Recruitment Market Development Research Report" shows that AI video interviews already account for 31.8% of application scenarios, and this proportion will only increase in the future.

Not only in China, but the use of AI to improve recruitment efficiency is also becoming increasingly common globally.

Consumer goods giant Unilever has reported that AI helps it save 100,000 hours of interview time and $1 million in recruitment costs annually.

However, although introducing AI can cut costs and raise efficiency, the technology and the training data behind it carry human imprints. Human biases inevitably creep in, and existing prejudices may even be amplified.

Amplifying Human Biases

Although one of the reasons for using artificial intelligence in recruitment is to be more objective, multiple studies have found that this technology is likely to be biased.

The root cause is, once again, the data: if the training data contains biases and flaws, the AI will replicate those defects.
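To make this concrete, here is a minimal, fully synthetic sketch (all data and numbers invented) of how a model trained on biased historical hiring labels reproduces the bias even when the protected attribute is withheld from it:

```python
# Synthetic demonstration: biased labels are reproduced via a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (never shown to the model) and a correlated "proxy"
# feature, e.g. membership in a gendered club or college.
gender = rng.integers(0, 2, n)                  # 0 = male, 1 = female
proxy = (gender == 1) & (rng.random(n) < 0.6)   # fires mostly for women
skill = rng.normal(0.0, 1.0, n)                 # genuinely job-relevant signal

# Historical hiring decisions: skill mattered, but past recruiters also
# penalized women, baking bias into the labels.
hired = (skill - 1.0 * gender + rng.normal(0.0, 0.5, n)) > 0

# Train WITHOUT the gender column; only skill and the proxy remain.
X = np.column_stack([skill, proxy.astype(float)])
model = LogisticRegression().fit(X, hired)

print("weights on [skill, proxy]:", model.coef_[0])
```

The learned weight on the proxy comes out negative: even though gender never enters the model, it reconstructs the historical penalty against women from a correlated stand-in, which is essentially what happened in the Amazon case described below.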

After interviewing 22 human resources professionals, The Decoder identified two common recruitment biases: "stereotype bias" and "similarity bias."

"Stereotype bias," as the name suggests, comes from stereotypes about a certain group. For example, favoring applicants of a particular gender, leading to gender inequality.

"Similarity bias" refers to recruiters favoring applicants with similar backgrounds or interests to themselves.

These biases seriously undermine the fairness of recruitment. Worse, they seep into historical hiring records, which are then used to train AI systems, reproducing the bias in the AI itself.

For example, in 2014 Amazon began developing an AI tool for resume screening, hoping to quickly pick out the most promising candidates from a vast pool of applications.

Within about a year, however, the company found that the tool's screening results showed a strong gender bias.

Even if the candidate's resume did not explicitly state gender, AI would look for "clues" in the text, such as "captain of the women's chess club" or graduation from a women's college.

Insiders said the model had been trained on the company's hiring records from the previous ten years. In the technology sector, long-standing occupational stereotypes and a "male-friendly culture" meant that male employees far outnumbered female ones, especially in technical roles.

In 2017, Amazon abandoned this AI model for resume screening.

The persistence of such biases demonstrates that careful planning and monitoring are necessary to ensure the fairness of the recruitment process, whether or not AI is used.

Can Humans Eliminate Bias?

In addition to the human resources professionals, The Decoder also interviewed 17 AI developers to study how recruitment AI might be built to reduce, rather than exacerbate, bias.

Based on the interviews, they proposed a collaboration model in which HR professionals and AI engineers exchange information, question each other's assumptions, and strip out preconceptions while examining datasets and developing algorithms.

However, the research found that the main obstacle to implementing this model is the gap in education and professional background between HR professionals and AI developers.

These differences hinder the ability to communicate effectively, collaborate, and even understand each other.

Human resources professionals are traditionally trained in personnel management and organizational behavior, while AI developers are proficient in data computation and technology. These different backgrounds can lead to misunderstandings and inconsistencies during collaboration.

How to Optimize AI + HR

A recent survey of 11,004 Americans by the Pew Research Center found that 66% of people do not want to apply for jobs with employers who use AI for recruitment.

Only 32% said they would apply, while the rest were unsure. And 71% of respondents opposed having AI make hiring decisions.

Therefore, if companies and the human resources industry want to address the bias problem in AI recruitment, several changes need to be made.

First, human resources professionals need structured training focused on information systems development and artificial intelligence.

The training content should include the basic principles of AI, how to identify biases in the system, and how to reduce these biases.
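As one concrete example of the kind of bias check such training could cover: US adverse-impact analysis commonly applies the "four-fifths rule," flagging cases where one group's selection rate falls below 80% of another's. A minimal sketch with invented numbers:

```python
# Four-fifths rule check: compare selection rates across two groups.
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical numbers: 50 of 100 men advance vs. 30 of 100 women.
ratio = adverse_impact_ratio(50, 100, 30, 100)
print(f"adverse impact ratio: {ratio:.2f}")   # 0.60
if ratio < 0.8:  # below the four-fifths threshold
    print("selection rates differ enough to warrant review")
```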

In addition, promoting better cooperation between human resources professionals and AI developers is also important.

Companies should establish teams that include both human resources and AI experts. This helps bridge communication gaps and better coordinate the work of both parties.

Furthermore, establishing high-quality datasets with cultural diversity ensures that the data used in AI recruitment processes can represent different population groups.
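As a small illustration of what checking and improving representation might look like in practice (the column names and counts here are hypothetical), one can audit group proportions in the training data and rebalance before training:

```python
# Hypothetical representation audit and rebalancing with pandas.
import pandas as pd

df = pd.DataFrame({
    "group": ["A"] * 800 + ["B"] * 200,  # e.g. a demographic attribute
    "feature": range(1000),
})

# Audit: group B makes up only 20% of the data.
print(df["group"].value_counts(normalize=True))

# One simple remedy: downsample every group to the size of the smallest
# one so each group is equally represented before training.
n_min = df["group"].value_counts().min()
balanced = df.groupby("group").sample(n=n_min, random_state=0)
print(balanced["group"].value_counts())  # A: 200, B: 200
```

Downsampling is only one option; collecting more data from underrepresented groups is usually preferable when feasible.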

Finally, countries need to develop guidelines and ethical standards for using AI in recruitment to help build trust and ensure fairness. Organizations should implement accountability in AI decision-making processes and increase transparency.

By taking these measures, we can build a more inclusive and fair recruitment system. AI excels at analyzing objective data and informing decisions, so it should serve as an assistive tool; it should not rashly be made the judge of people's fates, carrying "stupidity" from insufficient training and "badness" from copied biases.

Reference:

The Decoder, "What will a robot make of your resume? The bias problem with using AI in job recruitment." https://the-decoder.com/what-will-a-robot-make-of-your-resume-the-bias-problem-with-using-ai-in-job-recruitment/