The Financial Stability Oversight Council says emerging technology poses “safety and soundness risks” as well as benefits.

Financial regulators in the United States have for the first time cited artificial intelligence (AI) as a risk to the financial system.

In its latest annual report, the Financial Stability Oversight Council said the increasing use of AI in financial services is a “vulnerability” to watch.

While AI offers the promise of reducing costs, improving efficiency, identifying more complex relationships, and improving performance and accuracy, it can also “introduce certain risks, including safety and soundness risks such as cyber and model risks,” the FSOC said in its annual report, published Thursday.

The FSOC, which was established in the wake of the 2008 financial crisis to identify excessive risks in the financial system, said developments in AI need to be monitored to ensure that supervisory mechanisms “take into account emerging risks” while facilitating “efficiency and innovation.”

Authorities also need to “deepen expertise and capacity” to monitor the field, the FSOC said.

US Treasury Secretary Janet Yellen, who chairs the FSOC, said the use of AI could grow as the financial sector adopts emerging technologies, and that the council will play a role in monitoring “emerging risks.”

“Supporting responsible innovation in this area can enable the financial system to reap the benefits of greater efficiency, for example, but there are also existing risk management principles and rules that need to be applied,” Yellen said.

US President Joe Biden issued a sweeping executive order on AI in October that largely focused on the technology’s potential implications for national security and discrimination.

Governments and academics around the world have raised concerns about the rapid development of AI, amid ethical questions surrounding individual privacy, national security and copyright infringement.

In a recent study conducted by researchers at Stanford University, technology workers involved in AI research warned that their employers were failing to implement ethical safeguards, despite their public pledges to prioritize safety.

Last week, European Union policymakers agreed on landmark legislation requiring AI developers to release data used to train their systems and test high-risk products.
