Why Emotional Intelligence Is the Key to Managing Artificial Intelligence

By: Les Williams | December 23, 2019

Les Williams, CRM, is Cofounder and Chief Revenue Officer of Risk Cooperative. He holds a B.S. in mechanical engineering from the University of Virginia and an MBA from Harvard Business School. Prior to joining Risk Cooperative, Les served in various institutional sales positions at SoHookd, JLL, and IBM.

Technology that once seemed like science fiction from a Hollywood blockbuster is now becoming reality, taking the form of artificial intelligence (AI).

AI can be found in a variety of industries. In healthcare, Austin-based Diligent Robotics developed Moxi, a robot assistant that helps with hospital tasks, allowing nurses and doctors to focus more strategically on patient care.

Moxi utilizes AI to learn from humans so that it can complete new tasks, further lightening the workload of hospital staff. In the security industry, Athena Security offers an AI-based security system that allows staff to better identify objects in a video or picture from one of its security cameras.

The system's main goal is to identify someone who may be carrying a deadly weapon into a building, flagging cues such as an individual walking in a manner that suggests a concealed weapon or waving a firearm at different angles. AI is even being used in the financial sector: Nasdaq utilizes machine learning to identify spoofing, or suspicious trading done to illegally manipulate the market.

AI Still Needs Human Intelligence

While AI is an important tool to help humans, we must remember that it should not be used in a vacuum. According to the American Hospital Association, 40% of the occupations supporting healthcare could be replaced by AI functions similar to the tasks that Moxi performs.

Robots still do not have the human touch required to care for sick patients, and these positions will continue to require emotional intelligence, not simply artificial intelligence. How do you gauge the bedside manner of a machine, and what is the patient-care proposition of an algorithm or advanced learning machine making prescriptive choices?

Athena Security's advanced AI does have a significant capability gap; according to a recent article in Fortune, the algorithm failed to recognize an individual pointing a firearm directly at a security camera from 30 feet away. This highlights how AI is best deployed to augment human judgment and watchful eyes, rather than replace them.

Even the machine learning system deployed at Nasdaq has its shortcomings. Fortune highlighted the concerns of the Nasdaq security team regarding the staggering number of "false positive" spoofing alerts the system could trigger during peak trading volume at the exchange.
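
The false-positive problem is, at its core, a base-rate effect: when genuine spoofing is rare among millions of daily orders, even a highly accurate detector produces mostly false alarms. A deliberately simplified sketch makes the arithmetic concrete (the figures below are invented for illustration, not Nasdaq's):

    # Hypothetical base-rate illustration -- invented figures, not Nasdaq data.
    # Even a detector that catches 99% of real spoofing and misfires on only
    # 1% of normal orders produces mostly false alarms when spoofing is rare.

    orders_per_day = 10_000_000
    spoof_rate = 0.0001           # assume 1 in 10,000 orders is spoofing
    true_positive_rate = 0.99     # share of real spoofing the detector catches
    false_positive_rate = 0.01    # share of normal orders it wrongly flags

    spoofs = orders_per_day * spoof_rate          # 1,000 real spoofing orders
    normal = orders_per_day - spoofs

    real_alerts = spoofs * true_positive_rate     # ~990 true alerts
    false_alarms = normal * false_positive_rate   # ~99,990 false alerts

    precision = real_alerts / (real_alerts + false_alarms)
    print(f"Share of alerts that are real spoofing: {precision:.1%}")  # ~1.0%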

The need for human intelligence to be used in concert with AI is especially important when dealing with perceived social biases, as discovered during the analysis of recent results from an algorithm used by Optum, the health-services arm of UnitedHealth Group. According to The Wall Street Journal, the algorithm identified patients with certain health issues, such as diabetes and heart disease, who could be candidates for a special program in which specialists manage their healthcare routines.

Although several of the black patients had more critical health issues than their white counterparts, the algorithm considered "healthcare spend" in its equation. Since white patients were likely to spend more money on healthcare than black patients, white patients gained entry into this specialized program at a higher rate than black patients.
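
This is a classic proxy problem: spending stands in for medical need, but the two diverge for patients with less access to care. A minimal, hypothetical sketch of the mechanism (invented data, not Optum's actual model):

    # Illustrative proxy-bias sketch -- hypothetical data, not Optum's model.
    # Ranking patients by historical healthcare spend instead of clinical
    # need excludes high-need patients who simply spend less on care.

    patients = [
        # (id, clinical_risk_score, annual_healthcare_spend_usd)
        ("A", 0.90, 4_000),   # high need, low spend
        ("B", 0.60, 9_000),   # moderate need, high spend
        ("C", 0.85, 5_000),   # high need, low spend
        ("D", 0.55, 8_500),   # moderate need, high spend
    ]
    program_slots = 2

    # Proxy-based selection: rank by spend, as the flawed equation does.
    by_spend = sorted(patients, key=lambda p: p[2], reverse=True)[:program_slots]

    # Need-based selection: rank by clinical risk directly.
    by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:program_slots]

    print("Selected by spend:", [p[0] for p in by_spend])  # ['B', 'D']
    print("Selected by need: ", [p[0] for p in by_need])   # ['A', 'C']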

This is a prime example of a situation where a human must intervene and exercise emotional intelligence to ensure that proper patient outcomes are not undermined by algorithmic bias.

Apple recently faced a controversy in its use of AI. Several men found that the credit limits on their Apple Cards were higher than their wives', even though the couples shared the same asset base. According to a CNN article, variables such as income and credit card spending factor into a user's credit limit.

Statistically, women tend to make less money than their male counterparts, which plays a major role in determining credit limits. The New York Department of Financial Services is currently investigating the matter to gauge whether gender bias is influencing credit outcomes.
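
Note how a formula can be gender-blind on its face and still produce gendered outcomes. A deliberately simplified sketch (the formula and weights are hypothetical, not the card's actual underwriting model):

    # Hypothetical credit-limit formula -- illustrative only, not Apple's
    # or its bank partner's model. Gender is never an input, yet an
    # income-driven term reproduces the statistical earnings gap.

    def credit_limit(income: float, card_spend: float) -> float:
        return round(0.15 * income + 0.50 * card_spend, 2)  # made-up weights

    # Same household, same shared assets, different individual incomes.
    husband_limit = credit_limit(income=95_000, card_spend=20_000)
    wife_limit = credit_limit(income=76_000, card_spend=20_000)

    print(husband_limit, wife_limit)  # 24250.0 21400.0 -- gap, no gender input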

The use of AI in the hiring process has also raised ethical concerns. The Guardian reported that HireVue, a maker of software that helps employers search and screen job applicants, offers an application in which job candidates answer interview questions in front of a camera.

The software analyzes the changes in a job candidate's facial expressions, verbal intonation, and posture during the interview. This information is then compared against the results of current employees who are deemed "high achievers," the idea being that applicants who share the mannerisms of an outstanding current employee will also have a better chance of being successful new hires.
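
Mechanically, this kind of screening boils down to a similarity score, which makes the "clone" risk discussed below concrete. A hypothetical sketch (invented features and numbers, not HireVue's actual method):

    # Hypothetical similarity-based screening -- not HireVue's actual method.
    # Each interview is reduced to a feature vector (e.g., intonation,
    # expression, posture metrics); candidates closest to the averaged
    # "high achiever" profile score highest, structurally rewarding sameness.

    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norms

    # Averaged feature vector of current "high achievers" (made-up numbers).
    high_achiever_profile = [0.8, 0.6, 0.7]

    candidates = {
        "resembles_incumbents": [0.82, 0.58, 0.71],
        "different_style":      [0.30, 0.90, 0.40],
    }

    for name, features in candidates.items():
        print(name, round(cosine_similarity(features, high_achiever_profile), 3))
    # resembles_incumbents ~1.0, different_style ~0.84 -- sameness wins.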

This method of hiring may run counter to the well-documented finding that diverse teams tend to generate more creative ideas and perform better as a whole. What makes the method even more troubling is that an organization runs the risk of hiring and promoting a large percentage of workers who are virtual clones of one another, an issue that already plagues certain industries such as financial services and real estate.

Applying Lessons Learned from the Boeing 737 Max

Businesses that use AI to influence their decision-making could learn from the recent actions of Federal Aviation Administration (FAA) Administrator Stephen Dickson.

Last month, The Wall Street Journal reported that Dickson wants the FAA to work closely with airplane manufacturers from the earliest stages of designing a new plane.

The article stated, “human factors — such as how rapidly airline pilots realistically are able to react in certain emergency situations — should be more of a priority in the process of designing jets.”

The investigation in the wake of the two 737 Max tragedies found that Boeing made unrealistic assumptions about how pilots would behave when it designed the aircraft's flight-control system. In this battle between man and machine, the machine won twice, resulting in the loss of human lives.

The renowned physicist Stephen Hawking shared a profound thought: "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."
