While laws and regulations are needed to govern AI, addressing the transparency issues around the technology's use may also require help from blockchain and quantum computing.

In an industry where transparency is of paramount importance, the opacity of AI systems in the financial sector is of particular concern and needs to be regulated, said Joshua Dupuy, a British-American lawyer, in a recent commentary.

The EU has trailblazed in this regard with the drafting of the «AI Act», which aims to regulate the opacity of AI systems, including in finance. The act, expected to receive final parliamentary approval in April 2024, sets out a comprehensive legal standard to address the risks associated with high-stakes AI applications, promote ethical AI practices and enhance overall system transparency and accountability.

«This legislative structure is designed to demystify AI decision-making processes, ensuring that advancements in AI are in harmony with societal values,» Dupuy said. 

The EU AI Act

The EU AI Act introduces a pyramid framework for AI governance, focusing on risk-based classification, rigorous transparency requirements and human intervention to maintain the accountability of AI systems.

Following closely on the heels of the EU, the Biden administration issued an Executive Order in October 2023 advocating transparency without imposing stringent regulations.

UK White Paper

The UK, in a white paper released in July 2023, advocates striking a balance between fostering innovation and managing risk. It proposes a regulatory approach that supports innovation while also involving international collaboration.

Singapore Regulation

In Asia, Singapore is one of the frontrunners in promoting the use of AI, but it has so far not introduced any law or regulation governing the new technology. Nonetheless, provisions embedded in certain existing laws address AI concerns, as do guidelines issued by the Monetary Authority of Singapore (MAS).

Singapore’s Personal Data Protection Commission, for example, has issued a model AI governance framework for organizations that develop or own AI systems. The framework addresses a number of concerns such as internal governance, risk assessments, data quality and management, transparency, and other human-centric ethical principles when deploying AI systems.

FEAT Principles

MAS issued the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the use of AI and data analytics in the local financial sector. The MAS also launched the Veritas Initiative, which enables financial institutions to evaluate their AI and data analytics solutions against the principles of FEAT.

But laws and regulations may not be sufficient to address the many concerns arising from the use of AI, said Ronald JJ Wang, a Singapore-based lawyer, in a separate commentary. He pointed out that some of the risks posed by AI technology can be mitigated with technological countermeasures.

«Already, some people are working on technological systems which can detect AI-generated content,» he said.

Black Box To Glass Box

To address transparency concerns that laws and regulations may not be able to resolve, Dupuy also calls for a move from opaque 'black box' AI systems to more transparent 'glass box' models, which offer insights into decision-making and foster trust.

«This framework emphasizes transparency over secrecy. Put simply, a 'glass box' world champions openness and accountability, while a black box world cloaks the decision-making process in secrecy,» he added. 

Explainable AI

According to Dupuy, explainable AI, backed by blockchain and quantum computing, will aid the transition from black box to glass box models.

«Blockchain technology has the potential to introduce an unprecedented level of transparency in financial transactions. Securing transaction records in an immutable ledger allows blockchain to build trust while streamlining regulatory compliance,» he said.
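The immutability Dupuy describes rests on hash chaining: each record is hashed together with the hash of the record before it, so altering any past entry breaks every link after it. A minimal sketch of the idea (all parties and amounts here are hypothetical, and a real blockchain adds consensus and distribution on top of this):

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a transaction record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny two-entry ledger starting from a fixed genesis hash.
genesis = "0" * 64
tx1 = {"from": "A", "to": "B", "amount": 100}
tx2 = {"from": "B", "to": "C", "amount": 40}
h1 = block_hash(tx1, genesis)
h2 = block_hash(tx2, h1)  # h2 depends on h1, which depends on tx1

# Tampering with the earlier record produces a different hash,
# so the stored link from the second block (h1) no longer matches.
tampered = block_hash({"from": "A", "to": "B", "amount": 999}, genesis)
assert tampered != h1
```

Because every block's hash is an input to the next, a regulator or auditor can verify an entire transaction history by recomputing the chain, which is the property that makes such ledgers attractive for compliance.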

Quantum Computing

Quantum computing could also enhance AI's analytical power, though it raises ethical considerations around data privacy and security, Dupuy said.

The speed at which artificial intelligence is permeating many aspects of our lives, and its potential to bring fundamental disruption, require laws and institutions to catch up, Wang added.