Future of Finance Q2 2023

Why we need to carefully regulate artificial general intelligence


Susanne Chishti

CEO & Founder, FINTECH Circle

Rapid technological advancements have brought us to the brink of a new era, where Artificial General Intelligence (AGI) has arrived with all its risks and opportunities.


AGI's risks and opportunities extend not only across the financial services sector but into all parts of our daily lives. AGI refers to highly autonomous systems that can outperform humans at economically valuable work, which raises concerns about the control and oversight of such systems.

Artificial general intelligence ethics and job impacts 

Without regulation, AGI could be developed with inadequate safety measures, potentially resulting in catastrophic outcomes. Establishing a regulatory framework would help ensure that AGI is developed responsibly and ethically, with proper safety precautions.

Another key argument for regulation is AGI's impact on the job market. AGI can automate a wide range of tasks and jobs, potentially leading to job losses. Without proper regulation, this transition could occur without adequate support mechanisms for those affected.

I don't believe that regulation will stifle innovation and progress. On the contrary, regulation can help us harness AGI's capabilities. We have seen this in the fintech sector, where regulation has kick-started innovation, for example in open banking.

Establishing ethical guidelines and promoting transparency 

To effectively regulate AGI on a global scale, policymakers must focus on four key areas: establishing ethical guidelines; promoting transparency; fostering international cooperation; and investing in education and reskilling.

On 16 May 2023, OpenAI CEO Sam Altman testified before the US Congress to advocate for regulatory measures in AI, as he believes it could cause irreparable damage if not controlled. He proposed the creation of a regulatory agency tasked with issuing licences and setting safety benchmarks for companies like OpenAI that develop large models. In other words: treat AI models like pharmaceutical drugs.

Ethical guidelines such as these must be established to ensure AGI is developed and utilised in a responsible and beneficial way. Potential risks, such as job losses and privacy breaches, require proactive measures.

Transparency is crucial to building trust and accountability. Developers and organisations must be transparent about the data used, decision-making processes and potential biases.  

Fostering international cooperation and investing in education  

Global cooperation is essential to effectively regulate AGI. International collaboration supported by organisations such as FINTECH Circle can foster the exchange of knowledge, best practices and regulatory standards. Furthermore, investing in education and reskilling programmes will prepare the workforce for the challenges and opportunities ahead.

We need to quickly develop a regulatory framework to help mitigate the many risks while allowing society to harness the benefits of AGI in a responsible and controlled manner. 
