Artificial intelligence has a foot in the door in every industry, from chatbots responding to customer inquiries to programs informing hiring and firing decisions. As AI becomes more widely adopted, the Equal Employment Opportunity Commission (EEOC) is taking a proactive approach to monitoring for discrimination.
What Is the Current Situation with AI in Employment Practices?
EEOC Vice Chair Jocelyn Samuels has explained that the agency wants businesses “to enjoy the benefits of new technology while protecting the fundamental civil rights that are enshrined in our laws.”
On January 31, the EEOC held a conference with professionals from a range of fields, including computer scientists, employer representatives, lawyers, and civil rights advocates, to discuss the impact of AI on employment discrimination.
A recent study found that AI hiring programs scan resumes for specific keywords and evaluate candidates’ facial expressions and speech patterns. These practices can introduce bias and lead to discriminatory outcomes.
Tax incentives exist for hiring from certain groups, such as minorities and women. This has led many companies to use AI to screen for these protected categories, which can result in discrimination against candidates outside those groups.
Even when companies aren’t deliberately targeting certain groups, discrimination can come into play in other ways. One example is screening out candidates with a poor credit history or a criminal record; AI tools surface this information with little to no effort on the employer’s part.
How Is the EEOC Combating Potential Discrimination?
States and localities have begun proposing legislation to govern how companies use AI in employment decisions. New York is at the forefront of this movement, with a new law taking effect on April 15 that requires a bias audit and notification to applicants before AI can be used.
Furthermore, legislation and oversight of AI in hiring and firing decisions will only become more prevalent over the next few years as the EEOC prioritizes creating a level playing field.
Employers that train AI tools to mimic a “model candidate” are more likely to be caught and fined for discrimination. For example, if a business labels an employee named Jim who plays soccer as its model employee, the system may learn to prioritize white male candidates, because the name and gender are highly correlated with that group.
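To make the proxy problem concrete, here is a minimal, purely hypothetical sketch (toy data, scikit-learn assumed) of how a screening model trained on past “model employee” labels can end up weighting a demographic proxy more heavily than job-related experience:

```python
# Hypothetical sketch: a screening model trained on biased "model employee"
# labels picks up a demographic proxy feature. Data and feature names are
# fabricated for illustration; scikit-learn is assumed to be installed.
from sklearn.linear_model import LogisticRegression

# Each row: [plays_soccer, name_on_traditionally_male_list, years_experience]
# The past "model employee" labels happen to track the name proxy,
# not the job-related experience column.
X = [
    [1, 1, 2],
    [1, 1, 3],
    [0, 1, 4],
    [1, 0, 5],
    [0, 0, 6],
    [0, 0, 3],
]
y = [1, 1, 1, 0, 0, 0]  # 1 = previously labeled a "model employee"

model = LogisticRegression().fit(X, y)

# The learned weights show the demographic proxy dominating the
# job-related feature, the kind of pattern a bias audit looks for.
for feature, weight in zip(
    ["plays_soccer", "name_proxy", "years_experience"], model.coef_[0]
):
    print(f"{feature}: {weight:+.2f}")
```

A real bias audit would rely on actual applicant-flow data and a disparate-impact analysis, but even this toy example shows why seemingly neutral “model employee” training data deserves scrutiny.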
What Does This Mean for Your Business?
This doesn’t mean you need to eliminate all use of AI in your employment decisions. However, you should understand what each AI tool is designed to do and confirm that it doesn’t perpetuate historical discrimination.
Many companies favor disclosing the use of AI in the employment process, but new requirements, like the New York law, may discourage businesses from using AI at all. This means you need to strike a balance: leverage AI to improve efficiency and productivity in hiring while keeping the process neutral and free of discriminatory practices.
This can be done by overhauling or tweaking your existing hiring processes with the right education and resources, like pre-employment assessments. At CRI, we provide assessments tailored to the position you are looking to fill that do not use any form of AI, ensuring you have a clear paper trail and can effectively evaluate candidates without harmful discriminatory practices.