First Impressions of the AI Order’s Impact on Fintech

Jack Solowey

Jack Solowey is a policy analyst at the Cato Institute’s Center for Monetary and Financial Alternatives.

This week, the Biden administration issued a long-anticipated Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”). Given the breadth of the nearly 20,000-word document’s whole-of-government approach to AI—addressing the technology’s intersection with issues ranging from biosecurity to the labor force to government hiring—it unsurprisingly contains several provisions that address financial policy specifically.

Notably, the EO names financial services as one of several “critical fields” where the stakes of AI policy are particularly high. But because it gives financial regulators no clear framework for validating whether AI actually poses heightened or novel risks, or for weighing the benefits lost to intervention, the EO risks inviting agency overreach.

As a general matter, the EO largely calls on a host of administrative agencies to work on reports, collaborations, and strategic plans related to AI risks and capabilities. But the EO also orders the Secretary of Commerce to establish reporting mandates for those developing or providing access to AI models of certain capabilities. Under those mandates, developers of so-called “dual-use foundation models”—those meeting certain technical specifications and posing a “serious risk” to security and the public—must report their activities to the federal government.

In addition, those providing computing infrastructure of a certain capability must submit Know-Your-Customer reports to the federal government regarding foreign persons who use that infrastructure to train large AI models “that could be used in malicious cyber-enabled activity.”

While it’s conceivable that these general-purpose reporting provisions could impact the financial services sector where financial companies develop or engage with covered advanced models, the provisions most relevant to fintech today are found elsewhere in the EO.

Where financial regulators are concerned, the EO requires varying degrees of study and action. On the study side, the Treasury Department must issue a report on AI-specific cybersecurity best practices for financial institutions. More concretely, the Secretary of Housing and Urban Development is tasked with issuing additional guidance on whether technologies like tenant screening systems and algorithmic advertising are covered by, or violate, federal laws on fair credit reporting and equal credit opportunity.

But the EO puts most financial regulators in a gray middle ground between the “study” and “act” ends of the spectrum, providing that agencies are “encouraged” to “consider” using their authorities “as they deem appropriate” to weigh in on a variety of financial AI policy issues. The Federal Housing Finance Agency and Consumer Financial Protection Bureau, for instance, are encouraged to consider requiring regulated entities to evaluate certain models (e.g., for underwriting and appraisal) for bias. More expansively, independent agencies generally—which would include the Federal Reserve and Securities and Exchange Commission—are encouraged to consider rulemaking and/or guidance to protect Americans from fraud, discrimination, and threats to privacy, as well as from (supposed) financial stability risks due to AI in particular.

The wisdom—or lack thereof—of these instructions can hinge on how the agencies interpret them. On the one hand, agencies should first ask whether existing authorities are relevant to AI issues—so as not to exceed those authorities. Similarly, agencies should ask whether applying those authorities to AI issues is appropriate—as opposed to blindly assuming AI presents heightened or novel risks requiring new rules without validating those assumptions.

On the other hand, to the extent agencies read the EO’s instructions as some version of “don’t just stand there, do something (or at least make it look like you are),” the order could end up prompting the very misapplied authorities and excessive rules it should guard against. Because the EO does not offer financial regulators a clear framework for confirming the presence of elevated or new risks from AI, or for minimizing the costs of intervention, it risks being interpreted more as a call for financial regulators to hurry up and regulate than to deliberate thoughtfully. In so doing, the EO risks undercutting its own goal of mitigating AI’s risks while “[h]arnessing AI for good and realizing its myriad benefits.”

For a chance to deliberate about financial AI policy questions, join the Cato Institute’s Center for Monetary and Financial Alternatives on November 16 for a virtual panel: “Being Predictive: Financial AI and the Regulatory Future.”
