AI, or artificial intelligence, is a concept that seems vaguely capable of anything. The idea of thinking machines spinning big data into extraordinary solutions captivates our imaginations with optimistic visions of the future. Perhaps because the term is so loosely defined, it has become a euphemism for “business magic”: a one-size-fits-all replacement for traditional solutions, courtesy of the wizards of Silicon Valley. Unfortunately, when the curtain is pulled back, the reality often fails to live up to the hype.
Legendary sci-fi author Kim Stanley Robinson commented on this disparity in an interview, saying, “All of the AI people that are actually working on it at Google and Silicon Valley will freely admit that AI is just a fundraising phrase. There’s nothing to it. There isn’t even machine learning to speak of. You have to understand that most talk now about technological innovation has to do with fundraising.”
As AI garners more attention as a promotional term, it’s important to question how much true AI is actually in play and what benefits it truly delivers. HR professionals must be aware of this gulf between the sizzle and the actual steak, or risk becoming targets for salespeople relying on lofty buzzwords.
Currently, AI can be useful for narrow, repetitive tasks, but no AI is capable of learning, adapting, and making complex assessments outside of a tightly defined ruleset, such as playing a game of chess. These algorithms require data that has been curated and interpreted by humans as a reference point, and even then the results must be carefully verified before they can be considered valid.
Further, there are significant compliance issues involved with AI. A myriad of intricate factors must be built into AI to ensure that the results are valid, reliable, and justifiable should the employer face a discrimination claim. Weighting the various assessment measures relative to job requirements is quite complicated from a machine-learning standpoint, yet the human brain can do it in an instant, along with the justification necessary to ensure compliance.
AI developers argue that their systems are blind to anything other than applicant data and are therefore inherently unbiased. While it is true that the code is blind from an operational perspective, humans write the code, and their biases are inherently present. It is extremely difficult for code to strike the right balance across many data points and job requirements; it can only work on a relatively narrow set of characteristics or a small number of job requirements, all while leaving the employer with no real means of justifying why a decision was made or whether the process used was valid.
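To make the weighting point concrete, here is a minimal sketch, in Python, of the kind of weighted scoring an automated screening tool performs. The criteria, weights, and scores are entirely hypothetical and not drawn from any real product; the point is that the weights themselves are choices made by whoever builds the system, which is where human judgment, and potential bias, enters.

```python
# Minimal sketch of an automated applicant-scoring rule.
# The criteria, weights, and scores below are hypothetical; real screening
# tools are far more elaborate, but the core issue is the same:
# someone had to decide the weights.

WEIGHTS = {
    "years_experience": 0.5,   # why 0.5 and not 0.3? a human chose this
    "skills_match": 0.3,
    "education_level": 0.2,
}

def score_applicant(applicant: dict) -> float:
    """Weighted sum of normalized criterion scores (each 0.0 to 1.0)."""
    return sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)

applicant = {"years_experience": 0.8, "skills_match": 0.6, "education_level": 1.0}
print(round(score_applicant(applicant), 2))  # 0.78
```

Defending a hiring decision made this way ultimately means defending those weights, which is exactly the justification burden described above.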
With societal changes demanding a focus on diversity, employers will increasingly be required to justify their hiring decisions. AI solutions can assist in specific situations, but they are unlikely to ever outpace the traditional, human-centered approach to hiring decisions. The “old school” method still provides the greatest versatility and accountability, identifying the right applicant in the majority of cases while giving the employer confidence that its decisions can be justified.