Technology has transformed financial markets and services in recent years, much as it has transformed life in general. While this has presented many opportunities – efficiency, speed, cost reductions, etc. – it has also created a dependency that has caused headaches for regulators across the globe. What is the best approach to oversight and regulation of areas such as artificial intelligence (especially generative AI), cloud services and tokenisation?
With a keen eye on operational resilience, regulators globally seek to keep financial markets safe and robust. As the Bank of England sets out, operational resilience means “the ability of firms, and the financial sector as a whole, to absorb and adapt to shocks and disruptions, rather than contribute to them.”
The debate continues about the extent to which some of this technology might contribute to these shocks and disruptions. IOSCO’s 2024 Work Programme illustrates this focus, highlighting plans to address new rules on sustainability and fintech. Having already published recommendations on crypto and digital assets, as well as decentralised finance, IOSCO’s Fintech Task Force has turned its attention to monitoring policy implementation and to work on artificial intelligence and tokenisation.
Data shows AI is already in widespread use in financial markets. A Bank of England survey published in November 2024 found that 75% of firms already use AI, with a further 10% planning to do so in the coming years. And with respondents reporting that 55% of all AI use cases involve some degree of automated decision making, it should surprise no one that regulators are paying close attention – even if they have yet to establish exactly how to respond.
While concerns include the impact on data and privacy, the BoE says that “the largest perceived non-regulatory constraint of AI is safety, security and robustness of AI models, followed by insufficient talent and access to skills.”
In November 2024, the BoE also established an Artificial Intelligence Consortium to provide a platform for public-private engagement, gathering input from stakeholders on the capabilities, development, deployment and use of AI in UK financial services. This will ultimately inform the Bank’s approach to addressing risks and challenges, and to promoting the safe adoption of AI.
Despite the perceived benefits of AI, such as improved operational efficiency, regulatory compliance and advanced data analytics, many regulators point to concerns that it could amplify risks. In its November 2024 report, The Financial Stability Implications of Artificial Intelligence, the Financial Stability Board highlighted several AI-related vulnerabilities that could increase systemic risk in financial markets. These include third-party dependencies and service-provider concentration.
The BoE’s 2024 survey found that one third of all AI use cases are third-party implementations, and that the top three third-party providers account for 73%, 44% and 33% of all reported cloud, model and data providers, respectively.
According to the FSB, “The reliance on specialised hardware, cloud services and pre-trained models has increased the potential for AI-related third-party dependencies. The market for these products and services is also highly concentrated, which could expose [financial intermediaries] to operational vulnerabilities and systemic risk from disruptions affecting key service providers.”
Increased correlations in trading, lending and pricing, driven by widespread use of AI models and data sources, also concern the FSB. “This could amplify market stress, exacerbate liquidity crunches and increase asset price vulnerabilities. AI-driven market correlations could be exacerbated by increasing automation in financial markets,” its report suggests.
The FSB also fears that AI uptake by malicious actors could increase the frequency and impact of cyber-attacks. “Intense data usage, novel modes of interacting with AI services and greater usage of specialised service providers increase the number of cyber-attack opportunities.”
The Monetary Authority of Singapore also highlighted these increased threats in its information paper, Cyber Risks Associated with Generative Artificial Intelligence, published in July 2024. The paper sets out examples of the risks, such as the use of deepfakes and GenAI-enabled phishing, as well as malware generation and enhancement, and suggests potential mitigating measures.
Despite these concerns, regulators across the globe tend to agree that existing financial policy frameworks generally address the vulnerabilities associated with AI. A staff advisory published in December 2024 by the Commodity Futures Trading Commission said CFTC-regulated entities are expected to “assess the risks of using AI and update policies, procedures, controls and systems, as appropriate, under applicable CFTC statutory and regulatory requirements.”
The CFTC set out a non-exhaustive list of existing statutory and regulatory requirements that may be implicated by CFTC-regulated entities’ use of AI. This list covers order processing and trade matching, market surveillance and system safeguards. It reminds futures commission merchants, for example, that a significant portion of CFTC regulations applicable to FCMs relates to customer protection. FCMs that consider using AI to account for segregated funds “would still be required to ensure that, among other things, all of the requirements of Part 1 of the CFTC’s regulations are being met.”
In announcing the advisory, then CFTC Chair Rostin Behnam described it as “emblematic of the CFTC’s technology-neutral approach, which balances market integrity with responsible innovation in the derivatives markets.”
This echoes the industry’s stance. FIA has emphasised in letters to the CFTC and the US Treasury Department that regulators should consider existing rules and guidance applicable to financial institutions when deciding whether additional regulation is warranted in this area.
In October 2023, the White House issued an executive order encouraging US agencies – such as the CFTC – to “consider using their full range of authorities to protect American consumers from fraud, discrimination and threats to privacy, and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking as well as emphasising or clarifying where existing regulations and guidance apply to AI.”
FIA has also urged regulators to consider the outcomes and use-cases of AI, rather than the technology itself. In its letter to the CFTC, co-signed by CME Group and Intercontinental Exchange, FIA stated: “In many instances, existing CFTC rules and guidance provide the controls and oversight needed for the CFTC to promote and protect the integrity and resilience of our markets. CFTC’s risk-based approach means that its rules likely already address perceived risks.”
The BoE has largely echoed this approach. While Deputy Governor Sarah Breeden has cautioned that regulators must not be complacent, she added that the central bank was not yet ready to change its approach to regulating generative AI, preferring instead to keep a close eye on developments.
For its part, the FSB believes regulatory frameworks need further work before they can be considered sufficiently comprehensive. Its November 2024 report concludes that the FSB, standard-setting bodies and national authorities may wish to:
• Consider ways to address data and information gaps in monitoring developments in AI use in the financial system and assessing their financial stability implications.
• Assess whether current regulatory and supervisory frameworks adequately address the vulnerabilities identified in the report, both domestically and internationally.
• Consider ways to enhance regulatory and supervisory capabilities for overseeing policy frameworks related to the application of AI in finance, for instance, through international and cross-sectoral cooperation and sharing information and good practices.
In a separate paper, the FSB commented on the financial stability implications of tokenisation. While adoption of tokenisation is still relatively low, the FSB acknowledges its potential benefits while identifying several financial stability vulnerabilities associated with tokenisation based on distributed ledger technology (DLT). These relate to liquidity and maturity mismatch, leverage, asset price and quality, interconnectedness and operational fragilities.
The FSB’s report states that, notwithstanding these vulnerabilities, “the use of tokenisation in the financial sector does not currently pose a material risk to financial stability, mostly due to its small scale.”
It goes on to highlight three issues for national regulators to consider. First, they should consider ways to address data and information gaps in monitoring tokenisation adoption. Second, they should seek to increase understanding of how tokenisation and its related features fit into existing legal and regulatory frameworks and supervisory approaches. And finally, they should continue to facilitate cross-border regulatory and supervisory information sharing on tokenisation.
Europe has taken a broad-brush approach to regulating technology’s impact on operational resilience with the European Commission’s Digital Operational Resilience Act. The regulation entered into force in January 2023 and applied from 17 January 2025. It aims to strengthen the information and communication technology (ICT) security of financial entities and to ensure the European financial sector remains resilient in the event of a severe digital operational disruption. It requires financial institutions to establish effective governance of risks stemming from outsourcing and to strengthen frameworks for technology security and cyber resilience, including risks arising from AI.
Implementing DORA has raised a number of concerns for financial institutions and third-party ICT providers alike, necessitating significant changes to risk management processes and adjustments to existing frameworks across multiple operational and technological domains within a firm.
The recognition of dependencies created by outsourcing key functions is clear. Elsewhere, the European Central Bank requested comments on its own separate Guide on Outsourcing Cloud Services to Cloud Service Providers. The guide aims to clarify the ECB’s understanding of related legal requirements (such as DORA) and its expectations for the banks it supervises.
As the ECB points out, banks are increasingly using third-party cloud computing services, which are potentially cheaper, more flexible and more secure. That dependency, however, presents risks.
“If a bank cannot easily substitute outsourced services during a failure, its functions may be interrupted,” the ECB said. “In addition, the market for cloud services is highly concentrated, with many banks relying on just a few service providers located in non-European countries. Therefore, the ECB considers it good practice for banks to explicitly take these risks into consideration.”
Of course, even the best-laid plans may not prevent a cyber-attack or an outage caused by some other factor, so the question becomes how trading venues respond. In its Market Outages report, published in June 2024, IOSCO proposed several steps to mitigate the more harmful fallout of an outage.
First, trading venues should establish and publish an ‘outage plan’ that includes, for example, the organisation’s communication plan, its reopening strategy, the arrangements for operating a closing auction and the methodology for providing the market with alternative closing prices, if required.
The communication plan itself should establish an appropriate communication channel through which the venue can provide an initial notice of the outage as soon as practicable, and then regular updates to all market participants on the status of the outage and the recovery pathway.
Trading venues should then share the plan for reopening trading “in a timely and simultaneous manner” with all market participants, providing status updates and the processes and steps involved in the reopening. These may well interact with existing operational resilience measures, such as business continuity and disaster recovery plans.
IOSCO also recommends that trading venues conduct a ‘lessons-learnt’ exercise and share the findings with the relevant regulators, in addition to adopting a post-outage plan with clearly defined timelines and allocation of responsibilities for remediation. The lessons-learnt exercise should include both a root cause analysis, with remediation actions for those root causes, and an evaluation of how the outage was handled.
It is clear that regulators continue to navigate the best approach to managing the potential risks presented by a growing dependence on technology. Education, dialogue with industry and international consensus will be key in ensuring that markets are as prepared as they can be to mitigate some of the threats that will inevitably arise along the way. And there will almost certainly be many ‘lessons learnt.’