AI efficiently handles key business processes. But it can also yield biased decisions. What can financial firms do to mitigate the risks and reap the rewards?
By Gery Zollinger, Head of Data Science & Analytics, Avaloq
Artificial intelligence combines machine learning with large volumes of real-world data. But this data may contain prejudices – explicit or implicit – that the AI system can learn. In certain areas, most notably human resources, training machines on historical data may reinforce common human biases.
So, when designing an AI system, it’s important to identify which areas of the business are high risk and to define a clear plan to continuously monitor and retrain the AI engine. By understanding the risks, firms can mitigate potentially unethical outcomes and maximize the benefits of AI in their business.
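To illustrate what that monitoring could look like in practice, the sketch below screens a set of historical decisions for disparate impact before the data is used to train a model. It is a minimal Python example: the column names and the 0.8 threshold (the informal "four-fifths" rule) are assumptions made for illustration, not a description of any particular vendor's system.

```python
# Minimal sketch of a pre-training bias check, assuming a tabular history of
# past decisions. Column names ("approved", "gender") and the 0.8 threshold
# (the informal "four-fifths" rule) are illustrative, not any specific system.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups."""
    rates = df.groupby(group)[outcome].mean()
    return rates.min() / rates.max()

# Made-up historical lending decisions, for illustration only.
history = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender":   ["f", "f", "m", "m", "m", "m", "f", "f"],
})

ratio = disparate_impact_ratio(history, outcome="approved", group="gender")
if ratio < 0.8:
    print(f"Potential bias in training data: disparate impact ratio = {ratio:.2f}")
```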
What are the implications for financial institutions?
The use of AI is becoming increasingly widespread in financial services, and the pressure to integrate AI into business processes to gain a competitive edge is immense. At the same time, regulation has lagged considerably behind innovation, so it can be difficult for financial institutions to find guidance on AI best practices.
The European Commission (EC) is one of the first regulatory bodies in the world to produce a draft proposal on the use of AI. It classifies AI applications by risk, from unacceptable to minimal, with credit lending, for example, classified as high risk due to the potential for biased decisions.
This proposal will likely act as a template for similar regulations in jurisdictions such as Switzerland and Singapore, so financial institutions across the globe should take note.
Could you share some use cases of AI in the financial industry?
The traditional use case for AI in finance is automating and standardizing routine tasks, allowing businesses such as wealth managers to focus on enhancing their value proposition and strengthening client relationships. But today, AI is capable of much more.
For example, financial institutions can now leverage AI to instantly create personalized portfolio recommendations based on investors’ risk appetite and goals. Another innovative area is conversational banking, where AI systems use natural language processing (NLP) to interact with clients and understand their intent. This goes beyond just improving efficiency – it enhances the client experience and boosts engagement.
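As a rough illustration of the first use case, the sketch below maps a client's risk appetite and investment horizon to a simple equity/bond split. It is a hypothetical, rule-based example; an actual recommendation engine would draw on much richer client, portfolio and market data.

```python
# Deliberately simplified, hypothetical sketch of a rule-based portfolio
# recommendation from a client's risk appetite and investment horizon.
# A real engine would use far richer client, portfolio and market data.
def recommend_allocation(risk_appetite: int, horizon_years: int) -> dict:
    """risk_appetite ranges from 1 (cautious) to 5 (aggressive)."""
    base_equity = 0.2 + 0.15 * (risk_appetite - 1)     # 20%-80% from risk profile
    horizon_tilt = min(horizon_years, 20) / 20 * 0.10  # up to +10% for long horizons
    equity = round(min(base_equity + horizon_tilt, 0.9), 2)
    return {"equity": equity, "bonds": round(1 - equity, 2)}

print(recommend_allocation(risk_appetite=4, horizon_years=10))
# {'equity': 0.7, 'bonds': 0.3}
```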
How can financial institutions get the most out of AI?
To maximize the value of AI, firms need a partner that understands the technology behind the system, the regulatory landscape and the financial industry as a whole. AI needs to be coupled with a robust monitoring system to continually improve performance and to identify and rectify any potential shortcomings, including unethical outcomes (a simple illustration of such a check is sketched below).
And in line with EC recommendations, AI systems should only be used in low-risk areas – such as investment recommendations, client churn predictions and chatbots – to minimize the severity of any unfair bias. By combining these factors, financial institutions can use the efficiency of AI to gain a competitive edge while ensuring fair outcomes for their clients.
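As a hedged illustration of the kind of ongoing monitoring described above, the sketch below compares the share of positive recommendations per client segment against a baseline and flags segments that drift beyond a set tolerance for human review. The segments, rates and threshold are all illustrative assumptions.

```python
# Hedged sketch of ongoing output monitoring: flag client segments whose share
# of positive recommendations has drifted from a baseline beyond a tolerance,
# so a human can review them. Segment names and rates are illustrative only.
from typing import Dict

def flag_drifting_segments(
    baseline: Dict[str, float],   # segment -> expected positive-recommendation rate
    current: Dict[str, float],    # segment -> observed rate in the latest period
    tolerance: float = 0.10,
) -> Dict[str, float]:
    """Return segments whose observed rate moved more than `tolerance` from baseline."""
    return {
        seg: round(current[seg] - baseline[seg], 2)
        for seg in baseline
        if seg in current and abs(current[seg] - baseline[seg]) > tolerance
    }

baseline = {"retail": 0.40, "affluent": 0.55, "private": 0.60}
latest   = {"retail": 0.25, "affluent": 0.57, "private": 0.61}
print(flag_drifting_segments(baseline, latest))   # {'retail': -0.15}
```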
- For more information about ethical AI in the financial industry, please visit Avaloq’s latest insights page.