Employing artificial intelligence in the hiring process may seem like a dependable strategy: humans, after all, come with their own imperfections and biases. Surprisingly, AI exhibits the very same shortcomings.

For years, critics of artificial intelligence have warned that the technology would displace jobs from the labor market. Recent research indicates that AI is indeed costing people jobs, but in a manner that was not initially foreseen: by displaying bias against qualified candidates on the grounds of their gender, race, and age.

In theory, AI could create a level playing field in recruitment, except for one vital consideration: it is programmed and trained by humans. Let's delve into why that matters.

Where does this bias originate from?

In theory, an AI-driven screening tool seems ideal. Since it can't think or form judgments of its own, it should make objective decisions, right? Not quite. As AI adoption has grown, so has the number of reported bias incidents. Let's examine some instances from 2023 alone.

In June, Bloomberg published an analysis of 5,000 images generated by Stable Diffusion, which reinforced gender and racial stereotypes to an even greater extent than those observed in the real world. Higher-paying jobs were consistently depicted with lighter-skinned subjects, while lower-paying jobs were associated with darker skin tones.

Gender stereotypes were also apparent, as roles like cashiers and social workers were predominantly represented by women, while images of politicians and engineers almost exclusively featured men. In fact, the majority of occupations in the dataset were depicted by male subjects. 

A spokesperson for Stability AI, the startup behind Stable Diffusion, told Bloomberg that all AI models exhibit biases based on the datasets they are trained on. This is where the problem arises.

From Flawed Data to Flawed Decisions

Companies either possess or acquire datasets, and they use these datasets to train AI models. The AI models then perform tasks based on the patterns they’ve observed. If the dataset is flawed, the model will be flawed. 
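To make that mechanism concrete, here is a minimal Python sketch using scikit-learn and entirely synthetic data (the features, group labels, and numbers are illustrative assumptions, not any vendor's real system). A model fit to historically skewed hiring labels reproduces the skew even for equally qualified candidates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A genuine qualification score plus a protected attribute (0 or 1).
qualification = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)  # hypothetical: 0 = majority, 1 = minority

# Skewed historical labels: past decisions tracked qualification but
# systematically penalized group 1.
hired = (qualification - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Naively train on everything, protected attribute included.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates who differ only by group membership.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
probs = model.predict_proba(candidates)[:, 1]
print(f"P(advance | group 0) = {probs[0]:.2f}")
print(f"P(advance | group 1) = {probs[1]:.2f}")  # lower, same qualification
```

The model never "decides" to discriminate; it simply fits the labels it was given, which is exactly how historical bias gets laundered into an apparently objective score.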

So, what happens when these same AI models are utilized for the hiring process? Consider the case of Workday, a software company specializing in HR and finance solutions. 

The company currently faces a class-action lawsuit led by Derek Mobley, a job seeker who claims that its AI screening tool discriminates against older, Black, and disabled applicants.

Mobley, a 40-year-old Black man diagnosed with anxiety and depression, alleges that he has applied to around 100 positions since 2018, meeting the job qualifications each time, but has been rejected consistently. He represents an undisclosed number of individuals who have reported similar discrimination. 

A Workday spokesperson asserts that the company conducts regular internal audits and legal reviews to ensure compliance with regulations, and that it considers the lawsuit to be without merit. The matter will be decided by a jury, but it wouldn't be the first instance of AI recruitment software demonstrating bias.


Gender Bias in Amazon’s AI Screening

In 2017, Amazon famously abandoned its AI-powered screening tool after finding it was less likely to rate female applicants as qualified for roles in a male-dominated industry. The problem extends beyond screening tools, too: in July 2023, Stanford University researchers found that seven AI detection tools frequently misclassified writing by non-native English speakers as AI-generated.

These detection tools measure a text's perplexity: the more complex or unexpected each successive word is, the less likely the text is judged to be AI-generated. Non-native English speakers often draw on a smaller vocabulary and write in simpler, more predictable phrasing, which leads detection tools to flag their work as AI-generated.
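For illustration, here is a rough Python sketch of that idea, using GPT-2 from Hugging Face as the scoring model (an assumption for demonstration; the commercial detectors in the Stanford study use their own, more elaborate methods):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # Score how predictable the text is under the language model;
    # lower values read as "more AI" to a naive perplexity-based detector.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

simple = "I am happy to apply for this job. I think I am a good fit for it."
varied = "Thrilled by the role's unusual scope, I bring a decade of eclectic, hands-on experience."
print(perplexity(simple))  # lower: plainer wording, more likely to be flagged
print(perplexity(varied))  # higher: less predictable word choices
```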

In the Stanford study, the stakes were a grade. In a professional setting, the consequences can be far more severe for non-native applicants whom a screening tool deems less qualified because of how they use language.

All of these incidents point to the same issue: when an AI tool is trained on existing data that contains cultural, racial, and gender disparities, it will inadvertently reflect the very biases that companies aim to eliminate.

Government Responses to AI Recruitment Bias

  • Illinois’s Notification Requirement (2019): In 2019, Illinois passed a bill requiring employers to notify candidates about AI analysis during video interviews. This was an early step in addressing AI bias in recruitment.
  • Maryland's Facial Recognition Ban (2020): In 2020, Maryland prohibited the use of facial recognition technology during pre-employment interviews without the explicit consent of job applicants. This move aimed to safeguard privacy and reduce potential biases.
  • The EEOC’s AI and Algorithmic Fairness Initiative (2021): In 2021, the Equal Employment Opportunity Commission (EEOC) initiated the “Artificial Intelligence and Algorithmic Fairness Initiative” to monitor and evaluate the use of technology in hiring practices. This marked a significant federal effort to address AI bias.
  • New York City’s Disclosure and Audit Law (2021): During the same year, the New York City Council enacted a law that mandates employers using AI technology in hiring to disclose its use to applicants and undergo yearly audits to check for bias. Enforcement of this law began in July 2023.
  • U.S. Justice Department and EEOC Guidance: As part of the AI initiative, the U.S. Justice Department and the EEOC jointly released guidance cautioning employers against “blind reliance on AI.” The guidance is intended to prevent civil rights violations like those alleged in cases such as Workday's.
  • EEOC Hearing on AI in the Workplace (2023): In January 2023, the EEOC held a hearing featuring computer scientists, legal experts, and employer representatives to discuss the potential benefits and harms of AI in the workplace, reflecting an ongoing commitment to addressing AI bias.

What’s next for AI in the hiring process?

Many U.S. companies have already incorporated AI into their hiring procedures. According to a study conducted in 2022 by SHRM, 79% of companies utilized automation and AI for recruitment and hiring.

In the near future, organizations should anticipate increased scrutiny of their hiring methods. New York City's law is expected to set a precedent that may prompt other states, such as California, New Jersey, and Vermont, to develop their own regulations governing the use of AI in hiring and recruitment.

Large corporations like Amazon, with abundant resources, have the capacity to build, train, and evaluate their own AI tools to catch and correct bias. Companies that purchase AI tools from third-party vendors, however, face a higher risk of introducing bias and violating civil rights.

For businesses using AI technology for employment purposes, thorough vetting and auditing procedures are essential. The key message: in the near future, simply attributing decisions to AI won't suffice. Companies must consider the human element on both sides when implementing new AI hiring software.
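As a concrete starting point for such audits, here is a minimal Python sketch of one widely used check, the EEOC's "four-fifths rule" for adverse impact (the outcome data and group labels below are hypothetical):

```python
from collections import Counter

# Hypothetical screening outcomes: (demographic group, passed the AI screen?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
passed = Counter(group for group, ok in outcomes if ok)
rates = {g: passed[g] / applied[g] for g in applied}

# Flag any group whose selection rate falls below 80% of the highest rate.
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / highest
    verdict = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {verdict}")
```

A check like this is only a first pass; formal bias audits, such as those New York City's law requires, go further, but the underlying arithmetic starts here.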
