With key tech experts voicing concern that AI could lead to human extinction, UBS decides to take a distinctly different approach – by simply embedding it into the bank’s risk framework. finews.asia takes a look.
Risk management can be a very dark art in the financial industry. When an internal specialist identifies a new but not yet quite materialized risk outside of normal assessment procedures, for example, the business or management usually tells them to stop speculating.
Although not great, it is kind of understandable given a business can’t do all that much with innumerable theoretical eventualities. The last thing you probably want is a senior executive or relationship manager weighed down by existentialist qualms elicited by a bevy of overly obsessed non-financial risk experts.
But this is the year when conversational AI chatbots such as ChatGPT reared their heads in general conversation, even if tech experts had been talking about artificial intelligence for years. At the risk of sounding like some corporate Cassandra, will things be different this time?
Extinction Risk
Right now, it is hard to tell. A recent public declaration by scientists and notable figures on the dangers of artificial intelligence is vague, and some say that was the intention.
«Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.»
The funny thing about the signed statement above is that if you take out the words «extinction», «AI», «societal», «pandemic» and «nuclear war», it almost sounds like something any generic chief risk officer of a global bank would say at a committee meeting to a very uninterested quorum of members.
No Details
Even though the writers of «Popular Mechanics», a renowned US-based publication, called the signed statement «simple-yet-haunting», they also pointed out that the 22 words give no details on how to actually mitigate the risk. With that, the journalists and editors came dangerously close to sounding like senior bankers at the abovementioned hypothetical committee meeting.
UBS, however, doesn’t seem to be much fussed by any of this. In its most recent quarterly report, it indicated that the use of AI and machine learning has found its very own place in the non-financial risk portion of the risk management and control section.
«The increasing interest in data-driven advisory processes, and use of artificial intelligence (AI) and machine learning, is opening up new questions related to the fairness of AI algorithms, data life cycle management, data ethics, data privacy and security, and records management. We are actively enhancing and implementing the required frameworks, which are designed to ensure proper controls are in place to meet regulatory expectations,» the bank indicated then.
The Fair Algorithm
There are a number of important takeaways from those two sentences. Although finews.asia contacted the bank directly for more detail, it has apparently decided to remain tight-lipped for now.
Still, it would be beyond fascinating to uncover what the bank believes constitutes a fair or an unfair algorithm.
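For readers wondering what such a test could even look like in practice, here is a minimal, hypothetical sketch of one widely cited fairness check, demographic parity, which simply compares how often a model approves different customer segments. The function name, the segments and the toy data are purely illustrative assumptions; nothing here reflects UBS's actual framework or definitions.

```python
# Hypothetical illustration only: demographic parity is one common way to quantify
# whether an algorithm treats groups of customers similarly. It is not UBS's method.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rates across groups.

    predictions: list of 0/1 model decisions (e.g. loan approved or not)
    groups: list of group labels, one per decision (e.g. customer segment)
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        approved, total = rates.get(group, (0, 0))
        rates[group] = (approved + pred, total + 1)

    positive_rates = [approved / total for approved, total in rates.values()]
    return max(positive_rates) - min(positive_rates)


# Toy example: segment "A" is approved 3 times out of 4, segment "B" only once.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
segments = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, segments))  # prints 0.5
```

In that toy example the gap is 0.5, meaning one segment is approved three times as often as the other, which is presumably the kind of number a regulator would want explained.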
Another thought-provoking point is the framework enhancement the bank appears to be actively undertaking, how that might relate to AI, and whether additional controls would actually help it better meet regulatory expectations.
No More Credit Suisse
It would also be interesting to find out whether the decision to look more closely at AI algorithms came out of the regular risk assessment process or was escalated through other channels.
Anyway, if the bank ever wants a break from the quasi-permanent coverage of its forced, government-prompted takeover of Credit Suisse, it knows who to get in touch with.
That is understandably improbable in the next few weeks, as the transaction is expected to close soon and its erstwhile competitor’s shares are on the cusp of being delisted. But we have time – or at least we think we do.