Wall Street’s Next Crash Won’t Be Human-Induced: The Looming Threat of AI-Triggered Market Meltdowns

The Scenario: Fast forward a few election cycles. A new president takes office. As per usual, a changing regime comes with a changing stance on economic policy. The nation’s new leader calls for sweeping regulatory changes that will increase government oversight and tighten financial controls that impact major U.S. corporations across every industry. Naturally, this changes the financial forecast for these companies. Expenses will increase and profits will drop – along with investor confidence. Stock prices fall and sales of shares accelerate.
This is nothing new. Differing stances on regulation, taxation and debt tolerance always trigger different market reactions. Take, for example, how both the Dow and the S&P 500 plummeted after President Trump’s announcement of broad import tariffs, only to rebound after some of those tariffs were rolled back. Powered by speculation and predictions, the stock market is volatile by nature. A little bit of uncertainty is all it takes to send stock prices into a tailspin.
Only now, artificial intelligence is playing a bigger role in trading decisions, amplifying the effects of quick reactions that rule stock market fluctuations.
Every firm on Wall Street has an AI platform that tracks market shifts 24/7. Machine learning algorithms scan the web and detect pessimistic viewpoints about the impact of tightened regulations on market activity and growth. They predict, based on decades of historical data, that this outlook means stock prices will drop. They react as traders have reacted every time prices drop: sell, sell, sell. But this time the reaction is swifter and more universal. Computers don’t stop working at 5 p.m. when markets “close.” There is no naturally built-in pause that has typically allowed human investors to wait and watch before making sweeping decisions. The systems plow through every circuit breaker and continue doing what the data has trained them to do. Selling incites more selling.
Overnight, the market tanks by 40%, more than the initial crash that marked the start of the Great Depression in 1929, when the Dow dropped by 25%.
Companies lose millions. Budgets are tightened and jobs are cut. The average U.S. citizen sees their retirement savings decimated, and unemployment climbs, peaking at 15%. Household budgets shrink. The drop in consumer spending makes economic recovery progress at a snail’s pace. More and more businesses fail. The country enters a years-long recession, which triggers a global economic downturn given the U.S.’s significant role in international trade.
Analysis: While a scenario this severe has yet to occur, it demonstrates how increasing reliance on AI has the potential to turn expected stock market dips into economy-wrecking plunges with extensive ripple effects around the world.
Jim Rickards, an American lawyer and investor who has written about this possibility in his book “Money GPT: AI and The Threat to the Global Economy,” describes the scenario as an example of the fallacy of composition: the mistaken assumption that what is true for an individual part must also be true for the whole.
While it might make sense for an individual investor to sell when stock prices fall, if AI systems in charge of massive amounts of capital do the same thing all at once, the result could be catastrophic. Speed and the simultaneous nature of their actions make AI platforms particularly powerful in their decisions.
Rickards discusses how automated trading could create feedback loops that human traders would otherwise be able to break. He writes, “What is new is the speed at which they can happen, the amplifying effect and the recursive function.”
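Rickards’ point about speed, amplification and recursion can be made concrete in a few lines. The Python sketch below is purely illustrative, not a model of any real trading system: the initial shock, the sell trigger and the amplification factor are invented numbers, and the only thing the sketch shows is how a rule that reacts to its own output can compound a routine dip into a rout.

```python
# Minimal sketch of a sell-off feedback loop. All parameters are hypothetical.
def simulate_selloff(price=100.0, shock=-0.02, threshold=-0.01,
                     amplification=1.5, rounds=8):
    """Return the price path when automated sellers keep reacting to the last move."""
    path = [round(price, 2)]
    move = shock                      # an ordinary dip starts the loop
    for _ in range(rounds):
        price *= (1 + move)
        path.append(round(price, 2))
        if move > threshold:          # a small move would end the cascade here...
            break
        move *= amplification         # ...but a breach triggers selling that makes
                                      # the next move even larger
    return path

print(simulate_selloff())  # each round's selling deepens the next round's drop
```

Run as written, the same 2% dip a human desk might sit out compounds into a loss of well over half the starting value within eight rounds.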
As investors themselves, insurance companies and their clients would be heavily impacted by such a scenario. The industry must assess the risk that financial institutions create by incorporating AI and take steps to help build a risk management framework.
“AI use requires constant monitoring and recalibration to ensure that it performs as desired and represented to customers,” said Kevin Koehler, SVP and head of financial institutions for Westfield Specialty.
“The protection and governance of data is key and requires constant vigilance to ensure that it is used ethically. Like other corporate strategies the organization should utilize a multi-layered compliance approach to make sure that the institution maintains its fiduciary duty to its clients. Lastly, the institution must accurately represent its use to shareholders, stakeholders, and customers,” he added.
“The proper risk management strategy to govern the use of AI starts at the board level,” continued Koehler.
“Senior leadership and board of the institution must understand the organization’s current and future use of AI and take appropriate steps to ensure that such use is consistent with the enterprise’s risk appetite and broader corporate strategy. Further, the organization should establish robust AI risk committees that focus on regulatory compliance, risk management and IT issues to help manage its use.”
How Is AI Being Used in Financial Markets Now?
According to a 2024 report detailing the global growth of AI in trading, the global AI trading market was valued at $18.2 billion in 2023, and could potentially triple in size by 2033.
AI is currently being used primarily to analyze risk, make predictions, and inform stock purchasing and selling decisions. In some cases, it is being used to execute buying and selling automatically, though so far mostly for smaller transactions.
Useful for taking in, organizing and analyzing massive amounts of data, AI can quickly recognize patterns that provide clues around overall economic health and the impact of any movement on stock performance.
AI platforms can track market trends, analyze stock price changes, and monitor signs of overall financial wellness. According to a 2024 study analyzing the accuracy of machine learning algorithms in predicting financial distress, artificial neural network models were 98% accurate in predicting financial distress in Toronto Stock Exchange-listed companies, mostly by detecting anomalies and signs of fraud in financial statements.
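For readers curious what such a model looks like mechanically, the sketch below shows a generic scikit-learn neural-network classifier trained on synthetic financial ratios. It is not the cited study’s model or data; the feature names, labels and any accuracy it prints are hypothetical stand-ins meant only to illustrate the workflow.

```python
# Illustrative only: a small neural network on synthetic financial ratios,
# mimicking the general shape of distress-prediction studies. The features,
# data and resulting accuracy are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: debt-to-equity, current ratio, return on assets
X = rng.normal(size=(n, 3))
# Synthetic label: "distressed" when leverage is high and profitability is low
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```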
AI platforms can also analyze sentiment by scanning social media content, online community forum discussions, and online news platforms, which provides indicators of how a company is performing and can identify early red flags of underperformance.
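Production systems use trained language models for this, but the underlying idea, turning text into a numeric signal that a trading model can consume, can be shown with a deliberately crude keyword scorer. Everything in the snippet below, including the word lists and headlines, is invented for illustration.

```python
# Toy sentiment scorer: +1 per positive keyword, -1 per negative keyword.
NEGATIVE = {"miss", "lawsuit", "recall", "layoffs", "downgrade", "probe"}
POSITIVE = {"beat", "record", "upgrade", "expansion", "buyback", "growth"}

def sentiment_score(headline):
    """Return a crude net sentiment count for one headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Regulator opens probe into accounting practices",   # scores -1
    "Company announces record revenue and buyback",      # scores +2
]
for h in headlines:
    print(sentiment_score(h), h)
```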
These functions allow investors to optimize portfolios and diversify investment among corporations according to their perceived level of risk.
Algorithmic or automated trading in some portfolios is enabled by predetermined parameters that tell the system at what price to buy and at what price to sell. By tracking multiple markets at once, automated trading platforms can buy in one market and sell in another to reap the most profit from a transaction. High-frequency trading (HFT), a subset of algorithmic trading, executes small trade orders based on tiny price differences within fractions of a second. As of 2023, HFT constituted about half of all U.S. trading, according to a Pacific-Basin Finance Journal analysis.
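The sketch below shows those two mechanics at their simplest: a predetermined buy/sell threshold rule and a cross-venue price comparison. The venues, prices and minimum-edge parameter are hypothetical, and real systems add market data feeds, order routing and transaction-cost models far beyond this.

```python
# Hypothetical threshold rule and cross-venue spread check; values are invented.
def threshold_signal(price, buy_below, sell_above):
    """Return 'buy', 'sell' or 'hold' based on predetermined price limits."""
    if price <= buy_below:
        return "buy"
    if price >= sell_above:
        return "sell"
    return "hold"

def arbitrage_opportunity(quotes, min_edge=0.05):
    """Return (cheap venue, expensive venue, spread) if the spread beats min_edge."""
    cheapest = min(quotes, key=quotes.get)
    dearest = max(quotes, key=quotes.get)
    edge = quotes[dearest] - quotes[cheapest]
    return (cheapest, dearest, round(edge, 2)) if edge >= min_edge else None

print(threshold_signal(price=98.40, buy_below=99.00, sell_above=105.00))        # 'buy'
print(arbitrage_opportunity({"NYSE": 100.02, "BATS": 100.11, "ARCA": 100.05}))  # ('NYSE', 'BATS', 0.09)
```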
Taking all of these capabilities together, the benefits of AI-driven trading seem clear. Efficient data analysis leads to smarter decision-making and the ability to buy and sell at the right price at lightning speed. Increased efficiency and accuracy of trading decisions comes with the promise of eliminating some overhead expense and garnering more profit.
Risks of AI in Financial Markets
But utilizing AI algorithms in financial markets is a double-edged sword. No prediction is 100% certain. Because machine learning algorithms rely on historical data, there is always a gray area in market predictions where unforeseen events can completely disrupt the status quo.
Given that part of the appeal of AI is reducing human intervention, questions of liability naturally will arise when something goes wrong.
Who would be blamed for an AI-induced stock market crash?
Currently, AI providers have largely been explicit in disclaiming liability for any real or perceived harms that arise from use of their technology. OpenAI states in its licensing agreement that users “will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services.”
Human users remain responsible for any output generated or actions executed based on the work of AI.
Professional liability coverage as it stands today should respond to a claim arising from an AI-related financial loss, as such an event amounts to an error in judgment rather than intentional or willful misconduct.
However, as AI becomes more widely implemented and the potential for related losses increases, underwriters may look to exclude certain AI-related exposures or break this risk out into separate endorsements. In any case, underwriters will be looking very closely at the risk mitigation plans corporations have in place around the use and oversight of AI applications.
The Current State of AI Risk Management
Mitigating the risks of AI while reaping its benefits and giving it room to grow is a fine balancing act. There are three key components of AI risk management – ethics and security frameworks built in by AI developers themselves, government regulation, and company-specific risk management strategies.
1) AI development
SaferAI, a non-profit organization “aiming to incentivize the development and deployment of safer AI systems through better risk management,” developed a methodology to rate the risk management maturity of various AI platforms based on three pillars: risk identification, risk tolerance and analysis, and risk mitigation.
Out of the six platforms studied, the highest-scoring system, Anthropic, still received only a 1.9 rating out of 5, considered “weak” by SaferAI’s standards. Overall, AI developers lack rigor in their risk management approach.
“There are several likely reasons for this. First, since there have not yet been any large-scale AI incidents, AI is not yet seen as a risk the way it is in nuclear power, aviation or oil & gas. Indeed, opinion is split between whether the risks from AI are exaggerated or not. Second, AI comes out of the Silicon Valley tech ecosystem, where moving fast and breaking things has long been the motto, rather than out of a safety-critical industry ecosystem. Third, AI is seen as highly complex. Popular media often refers to AI models as ‘black boxes.’ This makes risk management seem more difficult,” said Malcolm Murray, Research Lead with SaferAI.
SaferAI asserts that AI companies need more precise quantitative benchmarks to identify and assess risks, and that these findings need to be more transparent to users and policymakers.
In an explanation of their AI risk management framework, SaferAI researchers write that “the preferred approach expresses risk tolerance as a product of quantitative probability and severity per unit of time.” They assert that AI companies should identify key risk indicators and key control indicators that set boundaries for acceptable risk and measure the effectiveness of mitigation strategies.
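As a hypothetical numeric illustration of that framing (not an example drawn from SaferAI’s own documentation), risk can be expressed as an expected loss per year and compared against a stated tolerance, with a key risk indicator flagging when the tolerance is breached:

```python
# Hypothetical figures: risk as probability x severity per unit of time,
# compared against a stated tolerance. A key risk indicator (KRI) flags a breach.
annual_probability = 0.02          # estimated chance of the incident in a given year
severity = 50_000_000              # estimated loss in dollars if it occurs
risk_per_year = annual_probability * severity   # expected loss: $1,000,000 per year

risk_tolerance = 750_000           # most expected loss the firm will accept per year
kri_breached = risk_per_year > risk_tolerance   # True here, so mitigation is required

print(f"Expected loss: ${risk_per_year:,.0f}/year; mitigation required: {kri_breached}")
```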
There is also a need for dedicated risk governance among AI developers. SaferAI calls for AI companies to appoint a “risk owner” for each identified risk and establish an audit process to independently test risk management controls. They emphasize the need for checks and balances so that decision-making at the executive level is not driven entirely by the desire for innovation at the expense of safety.
2) Government regulation
Governments have a role to play in pushing the AI community to develop these benchmarks. In late January, the Trump Administration issued an executive order revoking “certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.”
This move toward deregulation seeks to spark rapid advancements in AI and help the U.S. gain a competitive edge, but deemphasizes the need for a methodical approach to that development that has risk mitigation baked in. It decreases accountability for AI developers to take ownership over the risks introduced by their technology.
3) User responsibility
Adopters of AI are the final piece of the puzzle. Corporations need specific risk mitigation plans that address known risks of AI, including cybersecurity vulnerabilities, and include a system of checks and balances that ensures the accuracy and prudence of any recommendations, documentation or actions generated by AI.
As AI capabilities continue to evolve, risk management will be an ever-moving target. However, organizations don’t need to reinvent the wheel to address emerging risks. In the Deloitte report “AI and Risk Management: Innovating with confidence,” the authors detail the components of an AI risk management strategy but emphasize that “firms do not require completely new processes for dealing with AI, but they will need to enhance existing ones to take into account AI and fill the necessary gaps.”
Transparency will be key. Financial institutions should clearly outline the role of AI in their decision making and clarify how those systems will be monitored and maintained. Human oversight and the implementation of parameters that trigger human intervention may always be necessary. Organizations need to clearly identify those parameters, establish internal audit processes to test them consistently, and name an individual to take ownership of this area of risk.
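As a closing illustration, the kind of intervention parameter described above can be as simple as a hard limit that pauses automated activity and escalates to the named risk owner. The limits and values below are hypothetical; the point is that the trigger conditions are explicit, testable and auditable.

```python
# Hypothetical human-intervention trigger: pause automation and escalate when
# any predefined limit is breached. Limit values are illustrative only.
from dataclasses import dataclass

@dataclass
class OversightLimits:
    max_drawdown_pct: float = 5.0      # pause if the portfolio falls this far in a day
    max_orders_per_minute: int = 500   # pause if automated order flow spikes past this

def requires_human_review(drawdown_pct, orders_per_minute, limits=OversightLimits()):
    """Return True when any monitored parameter breaches its limit."""
    return (drawdown_pct >= limits.max_drawdown_pct
            or orders_per_minute >= limits.max_orders_per_minute)

if requires_human_review(drawdown_pct=6.2, orders_per_minute=120):
    print("Automated trading paused; escalating to the designated risk owner.")
```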