CAMBRIDGE, Mass., Oct. 12, 2021 /PRNewswire/ -- There is no single universal human response to artificial intelligence (AI), and individuals make completely different choices based on identical AI inputs, according to new research released today in an article by MIT Sloan Management Review.
A new analysis finds that these differences in AI-based decision-making have a direct financial effect on organizations. Depending on their particular decision-making style, some executives invest up to 18% more in important strategic initiatives based on the exact same AI advice.
"To champion AI in the boardroom, leaders must acknowledge human biases and decision-making styles," said Philip Meissner, professor of strategy and decision-making at ESCP Business School in Berlin. "If we do not understand the human dimension, we will only comprehend half the equation when it comes to optimizing the interplay between AI and human judgment."
The Human Factor in AI-Based Decisions
New research findings suggest that executives using AI to make strategic decisions fall into three archetypes based on their individual decision-making styles:
- Skeptics do not follow the AI-based recommendations. They prefer to control the process themselves. When using AI, skeptics can fall prey to an illusion of control, leading them to overestimate their own judgment and underestimate the AI.
- Interactors balance their own perception with the algorithm's advice. When AI-based analyses are available, interactors trust these recommendations and base their decisions on them.
- Delegators largely transfer their decision-making authority to AI. Delegators may misuse AI to reduce their perceived individual risk and avoid personal responsibility, treating the AI recommendations as a personal insurance policy in case something goes wrong.
These different decision-making archetypes show that the quality of the AI recommendation itself and how executives make sense of and act on this advice are equally important in assessing the quality of AI-based decision-making in organizations. "What's interesting is that the same behavioral patterns remain relevant whether or not AI is involved," said Christoph Keding, research associate at ESCP Business School in Berlin. "In the era of AI-advised decision-making, executives' decision behavior is still shaped by their underlying decision-making styles."
3 Strategies to Optimize the Interplay Between AI and Human Judgment
To utilize AI's full potential, companies need a human-centered approach to address the cognitive dimension of human-machine interactions beyond automation. With the right balance of analytics and experience, AI-augmented decision processes can increase the quality of an organization's most critical choices and drive tremendous value for companies in an increasingly complex world.
The MIT Sloan Management Review article, "The Human Factor in AI-Based Decision-Making," provides three recommendations for boards of directors and senior executives to integrate AI into strategic decision-making processes successfully.
- Create awareness. Communicate to all executives who interact with AI-based systems that human judgment remains a decisive factor when AI augments the top management team's decisions. Executives should learn about the specific biases they have toward AI, which vary depending on their individual decision-making styles. This awareness is the crucial foundation for successfully integrating AI into organizations' decision-making processes.
- Avoid risk shift and illusion of control. Emphasize that ultimate decision authority stays with the executives, even when AI is involved, and explain the potential benefits of AI as well as the parameters and data on which the suggested course of action is based. According to the article, "This intervention can interrupt the decision maker's subconscious autopilot process and elevate the decision to a more conscious and unbiased choice."
- Embrace team-based decisions. Balance the predominant tendencies of the three decision-making archetypes in teams to avoid choices that are overly risky or overly risk-averse. Different perspectives and multiple options improve human decision-making processes, whether or not AI is involved. Framing the AI as an additional source of input, not as a superior, indisputable authority, can help successfully integrate AI-based recommendations into discussions.
The article is based on a study of 140 U.S. senior executives. Each person was presented with an identical strategic choice: whether or not to invest in a new technology that would let them pursue potential new business opportunities. Study participants were told that an AI-based system tasked with evaluating new business opportunities had recommended investing in the new technology. The executives were then asked how likely they would be to invest in the technology and, if they chose to do so, how much money they would be willing to commit.
MEDIA CONTACT:
Veronica Kido
[email protected]
508-242-5134
About the Authors
Philip Meissner is a professor of strategy and decision-making at ESCP Business School in Berlin, as well as the cofounder and director of the European Center for Digital Competitiveness.
Christoph Keding is a research associate at ESCP Business School in Berlin and a visiting scholar at the University of California, Berkeley.
About MIT Sloan Management Review
MIT Sloan Management Review (MIT SMR) is an independent, research-based magazine and digital platform for business leaders, published at the MIT Sloan School of Management. MIT SMR explores how leadership and management are transforming in a disruptive world. We help thoughtful leaders capture the exciting opportunities — and face down the challenges — created as technological, societal, and environmental forces reshape how organizations operate, compete, and create value.
SOURCE MIT Sloan Management Review