The push toward the use of “people analytics” in HR keeps growing. According to People Analytics: Recalculating the Route, “…71% of companies see people analytics as a high priority in their organizations.” Companies consider this so important that in some organizations the use of these analytics has been taken away from HR and handed to the C-suite directly. Their complaint is that HR has been too slow in adopting and matching analytics to business needs. That may not be such a good thing.
The Problem with AI
Most of you are aware of the embarrassing debacle United Airlines had when it removed a passenger from one of its flights. The flight was overbooked, a passenger was selected and asked to give up his seat, and when he refused he was hauled, or rather manhandled, off the plane. That was, in all likelihood, not a people decision. AI, in the form of an algorithm, looked at the passenger list and selected a passenger whose ticket had probably cost less than others and whose frequent-flyer status was lower than that of the surrounding passengers. The algorithm did not know that he needed to arrive at his destination on time in order to see a patient. The AI selected him based on the parameters of the algorithm; humans then carried out its instructions, apparently without much thought about what was going on.
Google had a similar AI malfunction in some of its photo-identification software.
AI is trained by humans and humans have biases
In his article 4 challenges Artificial Intelligence must address, Ben Dickson says:
Machine Learning, the popular branch of AI that is behind face recognition algorithms, product suggestions, advertising engines, and much more, depends on data to train and hone its algorithms….The problem is, if the information trainers feed to these algorithms is unbalanced, the system will eventually adopt the covert and overt biases that those data sets contain. And at present, the AI industry is suffering from diversity troubles that some label the “white guy problem,” or largely dominated by white males.
Another report found that an AI system assigned “male” pronouns to the concept of “doctor.”
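The mechanism behind that kind of mistake is simple to demonstrate. Here is a toy sketch (my own illustration, not taken from any of the systems mentioned above) in which a trivial “model” just counts pronoun co-occurrences in a deliberately skewed training corpus. Because the data it sees pairs “doctor” with “he” more often, its majority-vote prediction inherits exactly that bias:

```python
# Toy illustration of "garbage in, garbage out": a co-occurrence
# model trained on imbalanced text reproduces the imbalance.
from collections import Counter

# Hypothetical training corpus, skewed the way real-world text often is.
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
]

def pronoun_counts(word):
    """Count which pronoun begins each sentence mentioning `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts[tokens[0]] += 1  # first token is the pronoun
    return counts

def predicted_pronoun(word):
    """The 'model' simply picks the majority pronoun from its training data."""
    return pronoun_counts(word).most_common(1)[0][0]

print(predicted_pronoun("doctor"))  # prints "he"  -- learned from the skew
print(predicted_pronoun("nurse"))   # prints "she" -- same mechanism
```

Real machine-learning systems are vastly more complex, but the principle is the same: the model has no notion of fairness, only of the statistics in its training data.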
Not as infallible as we would like
Because AI learns from humans, we sometimes have to apply the old adage “garbage in, garbage out.” Until we know for a fact that what an AI has learned is unbiased, we need to apply human reasoning to the actions it instructs us to take. Blindly following a program and setting aside your own judgment can lead to problems, whether you are following a traffic suggestion or selecting your next sales candidate.
Futurists like Martin Ford, Ray Kurzweil and James Canton all say that it is “human qualities” that will preserve your job. If you abdicate all your decision-making to machine directions, you will be more likely to lose that job to a “robot” someday.