Regulatory Sandbox: Ten Years Later
Author: Hilary J. Allen Source: American University
Ten years after the UK Financial Conduct Authority launched its fintech regulatory sandbox, the model has spread worldwide, yet its core design (pairing regulatory relaxation with tailored guidance) still lacks strong empirical evidence of actual effect. Existing evidence shows only that sandboxes benefit participating firms; it does not demonstrate any impact on the overall regulatory system, nor that the resulting innovation delivers broadly shared benefits. The two major concerns voiced at the sandbox's inception, namely the erosion of regulatory effectiveness and doubts about the value of regulatory learning, have not only persisted through a decade of practice but have at times intensified. Design optimizations may alleviate some problems, but the fundamental challenge is the need to reassess the sandbox model itself, especially amid the current push to promote generative AI innovation. Given the inherent limitations of generative AI and the significant harms its rapid expansion poses to privacy, intellectual property, and the ecosystem, the risks of hastily adopting a sandbox mechanism that weakens legal protections in order to boost AI are too high. The Institute of Fintech Research at Renmin University of China has compiled the core parts of the research.
1. Introduction
Regulatory agencies across countries and fields are actively exploring regulatory paths suited to technological innovation. In 2015, the UK's Financial Conduct Authority (FCA) announced a regulatory sandbox mechanism for financial technology, and the model swept the globe over the following decade. The core design of the regulatory sandbox is that selected firms may conduct limited product trials in an environment of eased regulatory constraints and reduced enforcement risk. Its goals are twofold: first, to lower entry barriers that may hinder fintech innovation; second, to give regulators an opportunity to understand emerging technologies and adjust their regulatory strategies as they supervise sandbox experiments. In recent years, policymakers in various countries have also shown keen interest in using the sandbox mechanism to promote artificial intelligence innovation and to build a new regulatory framework for AI. Yet a decade of fintech sandbox practice indicates that transplanting this policy tool into the AI field lacks sufficient basis.
Despite the widespread adoption of regulatory sandboxes, empirical evidence assessing their degree of goal achievement remains scarce. Existing empirical studies focus on innovation indicators: the financing capabilities of participating companies, the number of patents obtained, etc. Such data neither reveals the impact of the sandbox mechanism on the overall regulatory landscape of fintech nor proves whether the innovations spurred by the sandbox benefit groups beyond the innovators themselves.
This lack of data matters, because the prospects for fintech sandboxes achieving their goals are not promising. First, it is unclear whether fintech innovation generates enough social benefit to justify relaxing important regulatory provisions designed to protect consumers and the financial system from harm. Second, because sandbox participants are unrepresentative and the sandbox environment lends itself to regulatory capture, the knowledge regulators gain from the experiments has significant limitations, and the channels through which they can share that knowledge are also restricted.
2. The Theoretical Basis of Regulatory Sandbox
In 2016, the UK's Financial Conduct Authority (FCA) defined its first regulatory sandbox as a "safe space where businesses can test innovative products, services, business models, and delivery mechanisms while ensuring that consumers are adequately protected." Over the next decade, participants in the FCA sandbox primarily focused on using technology to develop new credit, investment, banking, and payment products. Many jurisdictions around the world have subsequently followed suit in establishing financial technology regulatory sandbox mechanisms. Although there are significant differences in the structure and objectives of sandboxes designed by different regulatory agencies, their core goals typically include the following elements:
Support fintech companies seeking to provide innovative products, services, or business models;
Build a more efficient and better-managed financial services system;
Clarify how emerging technologies and business models interact with the regulatory framework, and identify potential market entry barriers;
Promote effective competition that benefits consumers;
Enhance the inclusiveness of financial services.
Regulatory sandboxes are generally seen as a win-win-win mechanism: they help innovators access funding and accelerate product launches; they give consumers access to more fintech products; and they enable regulators to understand fintech products and their relationship to regulatory compliance (not to mention shaping an "innovation-friendly" image for the jurisdiction).
Since its inception by the FCA, the concept of regulatory sandboxes has transcended the fintech sector and expanded to various scenarios such as autonomous driving and legal practices. According to the OECD 2023 report, approximately 100 sandbox programs had been implemented globally at that time. Particularly in the field of artificial intelligence, there is an increasing call to suspend regulations through sandboxes to promote AI experiments.
In practice, some jurisdictions have already initiated AI sandbox testing. Fintech sandbox operators in the UK, Singapore, and elsewhere have begun exploring financial applications of AI (the US has at least proposed a bill to establish a sandbox for financial institutions to conduct AI experiments). AI-specific sandboxes independent of financial regulation have also emerged: the UK, Norway, and other jurisdictions have established AI sandboxes focused on privacy regulation. With the EU's Artificial Intelligence Act requiring each member state to operate at least one AI regulatory sandbox or participate in a multinational joint sandbox by August 2, 2026, such mechanisms are expected to proliferate within the EU in the coming years. The Act envisions the possibility of cross-border AI sandboxes; given that AI companies operate across multiple jurisdictions and that AI technology cuts across regulatory domains, even a sandbox within a single jurisdiction will require inter-departmental regulatory collaboration.
In response to the cross-border nature of financial services, the Global Financial Innovation Network (GFIN) was established in 2019 to explore a "Cross-Border Testing (CBT) mechanism" (also known as the "Global Sandbox") aimed at "creating an environment that allows businesses to test new technologies, products, or business models continuously or simultaneously across multiple jurisdictions." In October 2020, GFIN opened the first round of applications for cross-border testing, requiring applicants to meet the entry standards of every target jurisdiction. The results were underwhelming: of 38 applications, only 9 passed assessment, and ultimately only 2 companies entered the live-testing phase. No second round has been launched, casting a shadow over cross-border sandbox practice. But is the existing empirical evidence sufficient?
3. Empirical Evidence of Ten Years of Sandbox Operation
The UK's Financial Conduct Authority released its first regulatory sandbox "scorecard" in 2017, a self-assessment of its initial experiments. The report credits the sandbox with effectiveness in the following areas:
Shortening time to market for innovations, with potential cost reductions;
Broadening financing channels for innovators by reducing regulatory uncertainty;
Enabling more products to enter testing, with the expectation of reaching the market;
Promoting collaboration between regulators and innovators, embedding consumer protection mechanisms into new products and services.
The first three objectives directly benefit innovative entities, while the last one focuses on public interest — the FCA's satisfaction with the fourth item is partly based on "customized testing assurance measures developed in collaboration with businesses."
So far, independent empirical research on regulatory sandboxes remains insufficient. An important study published by economists at the Bank for International Settlements (BIS) in 2024 pointed out: "Although regulatory sandboxes are widely adopted and attract significant attention from policymakers, there is still a lack of systematic empirical evidence on whether they truly help fintech companies with financing, innovation, or establishing viable business models." BIS confirmed through the analysis of capital acquisition, survival rates, and patent data of UK sandbox firms that "sandboxes have achieved one of their core goals: to help emerging fintech companies secure financing and stimulate innovative activities."
Such research, like the FCA's self-assessment, focuses on the sandbox's impact on innovators, demonstrating that admission to the sandbox benefits the admitted companies. This conclusion, however, raises concerns about government agencies "picking winners": companies not selected may face a harder innovation environment. And while the BIS researchers acknowledge that participants' financing advantage "aligns with the logic that sandboxes reduce information barriers and compliance uncertainty costs in investment and financing," they do not rule out an alternative explanation: "eligibility for sandbox access may itself serve as a credit endorsement, assisting companies in financing."
More critically, the limited existing research addresses only the tip of the iceberg on whether regulatory sandboxes are, on the whole, good policy. The BIS authors emphasize: "The research findings do not necessarily prove that sandboxes clearly enhance social welfare. The operation of sandboxes often requires public funding support, and aiding enterprise financing is only one of the goals; enhancing consumer welfare and maintaining financial stability are equally important." Furthermore, the BIS research rests on the assumption that sandboxes enable regulators to foresee a product's social-welfare impacts before it reaches the market. Yet law professor Doug Sarro's recent study of Canadian securities regulators' cryptocurrency sandbox practice indicates that sandbox products continue to affect consumer welfare and financial stability long after they are launched to the public.
Sarro found that, despite the widespread expectation that companies will be fully compliant after "graduating" from the sandbox, Canadian provincial securities regulators "not only oversee trading platforms within the sandbox but also implement regulation long after their (nominal) exit from the sandbox." He further questioned the effectiveness of consumer protection measures tailored for the sandbox:
Regulators often fail to anticipate the emerging risks of trading platforms, only taking action when these risks are similar to those in traditional securities or have caused significant consumer harm that raises public scrutiny.
A 2019 report by the United Nations Secretary-General's Special Advocate for Inclusive Finance for Development (UNSGSA) and the Cambridge Centre for Alternative Finance (CCAF) raised further questions, with the following core conclusions:
Early experiences with regulatory sandboxes indicate that this mechanism is neither necessary nor sufficient for promoting financial inclusion. While sandboxes have their advantages, they are complex to establish and costly to operate. Practice has shown that most regulatory issues involved in sandbox testing can be effectively resolved without a real-world testing environment. Similar effects can be achieved at a lower cost through tools such as innovation offices.
In other words, if the resources consumed by fintech sandboxes were directed elsewhere, they might yield better results (the report notes that many regulators were unprepared for how resource-intensive sandboxes are). The main driver of that intensity is the customized guidance regulators must provide to participants; this "regulatory support" is costly, yet without it the sandbox's value, at least from participating firms' perspective, is questionable. These findings lead inevitably to deeper questions. Is regulatory exemption via a sandbox truly necessary to promote fintech innovation? Would guidance alone suffice to stimulate it (most financial regulators have already set up "innovation offices" to provide exactly such services)? And, most fundamentally, is using public resources to nurture private-sector innovation in the public interest?
4. Deep-seated Concerns
Previous studies have revealed multiple hidden dangers of this model: regulators' selection of sandbox companies essentially amounts to "picking winners," undermining regulatory fairness; operational costs often exceed expectations; benefits flow more to innovators than to the public; and as sandboxes proliferate globally, the marginal value of an "innovation-friendly" policy signal keeps shrinking. Recent research has focused on the core contradiction: the fintech sandbox requires suspending key regulations intended to protect consumers and the financial system.
Sandbox supporters implicitly accept an increased risk of public harm on the strength of two theoretical claims: first, that innovation will benefit the public by enhancing efficiency and competition; second, that the sandbox helps regulators understand how new technologies perform in the market, thereby optimizing long-term regulation. This section argues that these assumptions do not hold in the fintech sector and are equally hard to sustain for artificial intelligence. It should be noted at the outset that innovation does not necessarily benefit society: although innovation is treated as a precondition for improving efficiency and competition, what "efficiency" and "competition" mean is always contested in context, and many interpretations do little for overall social welfare. Moreover, when financial regulators become the "cheerleaders" and sponsors of their chosen innovations, their objectivity and willingness to share knowledge weaken, and their regulatory understanding is already biased by the self-selected pool of sandbox participants.
A. The Sandbox as a Regulatory Learning Environment
Because participation in the sandbox is entirely voluntary, the sandbox contains only those innovators who actively apply. This creates a dual blind spot: regulators cannot fully understand compliant firms that see no need to join, nor entities that judge themselves outside the reach of current regulation. Even among applicants, selection criteria are often vague, and many applications are rejected without clear justification.
The knowledge regulators gain from the sandbox is therefore inherently biased. Even if a biased sample retains some informational value, the sandbox should not be treated as the only or best way to acquire it. As UN agencies have observed, regulators can learn about new technologies from startups through informal channels; regulatory relaxation is by no means a prerequisite for understanding fintech or artificial intelligence.
Another flaw in how sandboxes generate regulatory knowledge is that the admission mechanism fosters unhealthy government-business relationships, heightening the risk of "regulatory capture." Put simply, regulatory capture means regulators placing industry interests above the public interest, induced either explicitly (for example, through corruption) or implicitly. A typical form of implicit capture arises when regulators obtain information primarily from the industry itself, without consulting independent researchers and consumer groups: their understanding inevitably absorbs the industry's perspective. This process is called "cognitive capture," and the apparent technical complexity of fintech business models makes it more likely. If regulators do not build a baseline of technical understanding through hiring or internal training, their capacity to critically assess industry claims is constrained. The problem is equally acute in AI regulation, where global AI companies actively work to capture regulators with narratives such as "regulation slows innovation" and "will drive entrepreneurs away."
In conclusion, whether the sandbox can truly enhance the regulatory capacity of regulators is highly questionable. The author has previously pointed out: "Regulatory sandboxes may occasionally assist financial regulators in performing their risk prevention functions, but their popularity stems from a superficial presupposition—that is, the assumption that innovations in the private sector of financial technology inevitably align with the best interests of society." The following text will focus on examining the reasonableness of this presupposition.
B. Innovation as a Regulatory Objective
As law professor Deirdre Ahern observes, the regulatory sandbox concept rests on the idea that "regulators take on the public interest function of improving consumer choice, price, and efficiency," which departs fundamentally from risk-control-oriented regulatory logic. There is ample reason to question whether the "competition" and "efficiency" fostered by fintech sandboxes truly benefit the public; abandoning risk control may well prove a misjudgment. Growing signs suggest the same skepticism applies to the public benefits of AI innovation. Against this backdrop, the rationality of weakening public protection mechanisms for the sake of fostering innovation, which is precisely the essential logic of sandbox design, is in doubt.
Policies that promote innovation primarily benefit the innovators themselves. The theoretical assumption is that innovation generates spillover benefits for others; in reality, not all innovation is mutually beneficial, and the assumption may not hold. Doug Sarro's study of the Canadian cryptocurrency sandbox, for example, found that "regulatory practices at least partially confirm concerns that the sandbox may prioritize innovators over consumers." Earlier research by the author and other scholars likewise revealed that many fintech products contain little genuine technological innovation beyond a smooth app interface, and that some amount to harmful "predatory inclusion": appearing to serve marginalized groups long excluded from financial services while actually exploiting them systematically. Fintech profits often stem not from technological advantage but from evading, in the name of "innovation," the consumer protection rules that ought to apply.
A growing body of evidence suggests that the same doubts about the "win-win theory" apply to generative AI (broad AI encompasses diverse technologies; generative AI specifically refers to tools that learn relationships from massive training data in order to generate new content). Since 2024, sharp questions about the practical value of generative AI have emerged from unexpected quarters. For instance, Jim Covello, head of equity research at Goldman Sachs, who has tracked the tech industry since the internet bubble, argues that the generative AI developed in Silicon Valley lacks clear application scenarios. He warns: "Never before has a technology been predicted to have a trillion-dollar valuation right after its emergence... In the past, technological iterations always replaced expensive solutions with cheaper ones, but now expensive technology is trying to replace low-cost labor, which fundamentally makes little sense."
The core flaw of these models is their tendency to hallucinate: they frequently generate responses that sound authoritative but are wrong. Well-known errors include Google's model suggesting adding Elmer's glue to pizza, and OpenAI's model miscounting the number of letter "r"s in the word "strawberry." Worse, AI often fabricates sources to support its conclusions: a 2025 BBC study found that 13% of the quotes AI assistants attributed to BBC content were altered or absent from the original articles.
Companies that deploy such models without oversight may pay a heavy price, as Air Canada's experience shows: after its chatbot gave an incorrect answer about bereavement-fare policy, the airline argued that "the chatbot should be held accountable," but a civil tribunal ruled that the airline must compensate the customer. Introducing human-in-the-loop review can reduce the risk of errors, but it erodes the cost advantage AI is meant to deliver, since detecting and correcting AI hallucinations requires substantial specialized labor. A 2024 study by the freelance platform Upwork found that 96% of executives expect AI tools to raise corporate productivity (39% mandate their use and 46% encourage it), yet nearly 47% of employees using AI admit they do not know how to meet the efficiency goals their employers have set.
Given these limitations, generative AI's scarcity of commercial application scenarios is unsurprising. It may even be fortunate that enterprises have been resistant to such tools: recent research finds a significant negative correlation between reliance on AI tools and critical thinking ability. Although AI is touted as liberating humans from basic tasks to focus on higher-order creativity, higher-order abilities typically grow out of sustained practice on the fundamentals.
Even when examining the sandbox mechanism outside of specific fields, there are still reasonable doubts about this regulatory tool. Policymakers must be particularly vigilant about the distorted incentives created by the sandbox: ideally, laws and regulatory bodies should convey a clear signal to the industry that "compliant innovation is necessary to safeguard public interest," but the sandbox may be interpreted as "sacrificing legal authority to make way for innovation."
"Competition" and "efficiency" are, in effect, Rorschach tests reflecting regulators' values. Efficiency, for example, carries different value judgments in different fields and cannot serve as a neutral, unified regulatory goal. Nor do the goals of efficiency and competition give regulators clear signposts: in assessing a sandbox, regulators must ask, "From whose perspective are we judging competition and efficiency? The participating firms'? The industry's? The public's?"
Rather than expending effort to construct sandboxes that accommodate innovation, regulators should adopt proactive prevention strategies to curb the public harms of new technologies. Former Acting Comptroller of the Currency Michael Hsu proposed an "accommodate and tame" framework for fintech regulation that applies to the regulation of technological innovation more broadly.
Accommodating policies risk endorsing flawed technologies and artificially sustaining business models that have no standalone viability. Given that innovators generally lack a holistic understanding of the environments they operate in (as noted earlier), taming is often the better path. As technology culture scholar Alati Ward observes of AI tools:
AI technologists are far less able to assess the social and political impacts of their technology than the professionals whose fields they claim to disrupt. Doctors, teachers, social workers, and policymakers are not outsiders to the AI conversation; they are precisely the people best qualified to understand the risks of misapplied automation in their own fields.
To be clear: codified regulations sometimes need to evolve for the public good, but when regulatory change is pushed through piecemeal and mainly benefits a few sandbox companies, caution is warranted. If regulators genuinely need to experiment with new strategies, many industry-wide tools long predate the sandbox. Evaluating fintech sandboxes, the UN agencies emphasize: "Proportionality principles or risk-based licensing systems can reduce compliance costs for startups, and unlike sandbox testing, they cover all market participants."
Informal regulatory measures may be effective when dealing with rapidly evolving technologies, but they always come at a cost—especially the lack of public participation rights and transparency in regulatory decision-making. These costs are particularly pronounced in the sandbox context: private enterprises have significant influence over regulatory terms, and affected groups may not even be aware of the terms, let alone raise objections. When the technological complexity of sandbox enterprises' products is extremely high, regulators often yield to their "technical authority," making it easier for them to dominate the formulation of terms.
Regulators acting as "cheerleaders" for sandbox companies leads to a steady lowering of regulatory standards. The Canadian case shows that cryptocurrency firms still could not operate in compliance after "graduation," because their profitability rested on regulatory arbitrage rather than technological innovation. When temporary exemptions expire, regulators face a dilemma: enforce compliance and force businesses to close, or make the exemptions permanent. Political and economic realities often compel the latter: the ecosystem of employees and customers a firm builds up becomes a vested-interest network that makes it difficult for regulators to tighten the rules.
The result is rule fragmentation: different firms face different standards, creating an unfair competitive environment that wholly contradicts the sandbox's stated aim of cultivating full compliance. Policymakers must be clear-eyed: once firms enter the sandbox, regulators slide into a passive, accommodating position, forced to tolerate public risks over the long term. The fundamental solution is to shift to a taming model, restricting the boundaries of innovation through a unified regulatory framework rather than sacrificing public interests for technological development.
C. The Governance Dilemma of Cross-Border Sandboxes
The EU Artificial Intelligence Act's promotion of a cross-border sandbox mechanism highlights a particular challenge of cross-border regulation: firms need to operate across multiple jurisdictions, while a sandbox's effectiveness depends on the legal authority of each individual jurisdiction. Cross-border implementation faces deep obstacles, including fragmented regulatory standards, high coordination costs, and diluted policy signals, which further corroborate the doubts about sandboxes as a regulatory tool.
The Global Financial Innovation Network (GFIN), established in 2019, aims to operate a cross-border fintech sandbox, but to date it has completed only one cross-border trial round, with only two companies reaching the live-testing phase. A major reason for the low pass rate is that participants must satisfy the differing regulatory requirements of each jurisdiction. To reduce the coordination costs of reaching multi-jurisdictional consensus, GFIN adopted a "lead regulator" mechanism, but acknowledges that:
The lead regulator bears immense resource pressure: it must coordinate the handling of 38 applications across 23 regulatory bodies, investing substantial human and material resources to ensure that firms' and regulators' questions are resolved promptly and that the application process stays compliant and on schedule.
Enhancing the utility of cross-border sandboxes inevitably requires harmonizing legal standards, but cross-border coordination is an intensely political process, often shaped by domestic interest-group dynamics. Any sandbox "policy signal" dissipates in that harmonization: once all jurisdictions adopt uniform standards, there is no longer an "innovation-friendly jurisdiction" to stand out. Challenges of allocating resources and responsibility also persist, whether for cross-border operations or domestic inter-agency collaboration. Although sandboxes claim to promote new technologies, these coordination challenges are old, much-discussed problems to which regulatory sandboxes offer no novel solution.
5. Conclusion
This article builds on the author's previous research in arguing that, in fintech, regulators should prioritize preventing public risk over promoting efficiency and competition through private innovation. Growing evidence suggests the same principle applies to generative artificial intelligence, hence the multiple concerns about implementing AI sandboxes.
Although clever sandbox design can mitigate some risks, we should not skip the fundamental questions and jump straight to technical fixes: the urgent task is to re-examine whether regulatory sandboxes are appropriate in a given context at all. Society urgently needs a collective reckoning with "Silicon Valley-style innovation worship," and greater vigilance toward the sandbox model (and the regulatory mindset it embodies) should be a core part of that reflection. After all, more than a decade after the UK's Financial Conduct Authority first introduced the regulatory sandbox, there is still little conclusive evidence that these resource-intensive regulatory tools have genuinely enhanced public welfare.