European financial services hesitate on AI adoption amid job and regulatory concerns

Financial services leaders are hesitant to implement artificial intelligence (AI) amid concerns that its risks outweigh the productivity gains and cost savings it promises, according to European FinTech executives.

AI has the potential to revolutionise the industry, but big banks have been slow to adopt these technologies compared to FinTech companies. Only 6% of retail banks are prepared to implement AI at scale across their business, a Capgemini study found.  

The Bank for International Settlements has urged central banks to enhance their AI capabilities, recognising both the productivity gains and the risks associated with AI, such as misinformation and hacking. Large language models, which sit at the core of most generative AI systems, often generate inaccuracies, raising concerns about whether the technology can be trusted with sensitive information.

“There’s not necessarily a rejection of AI, but there is hesitancy”, said Wincie Wong, head of digital at NatWest, who called for the technology’s risks, ethics and vulnerabilities to be assessed.   

Oseloka Obiora, CTO of RiverSafe, commented: “The vast volumes of sensitive data held by financial services institutions make them a prime target for cyber breaches, both from malicious external actors and from insider threats. Without the proper security protocols and regulations in place, AI only serves to heighten the threat facing these institutions, so hesitancy around mass adoption is wise. In fact, 20% of CISOs admitted that an employee at their organisation exposed company data using AI tools such as ChatGPT, according to our recent research, highlighting the risks the industry faces from within, let alone from outside attackers.”

Anssi Ruokonen, Director of AI Research and Enablement at Basware, said: “For financial services to adopt AI safely, industry and regulators should be collaborating on best practices and standards for AI implementation. Ideally, solutions should be sandboxed before being rolled out, but it is important that the industry recognises that vulnerabilities do exist and that not all solutions can be tested beforehand. Therefore, AI systems should be built with continuous business monitoring and investment, alongside clear audit trails, to mitigate risks, especially within finance departments.”

“To oversee the rollout of AI systems across the industry, enterprises could consider appointing a Chief AI Ethics Officer. With this role in place, use cases can be measured from concept to implementation, with more robust testing to evaluate risks. AI has an important role to play in streamlining outdated manual processes, so it is up to the industry to reach a state of confidence where it can adopt AI safely.”

AI’s ability to quickly analyse large volumes of text and numerical data can significantly reduce industry costs, yet job loss fears and regulatory concerns are among the factors preventing bankers from fully embracing the systems.   

Kelly Fordham, Director of Financial Services at Investigo, commented: “Financial services are facing a significant shortage of staff, still recovering from experience gaps formed during the pandemic, and institutions are struggling to fill core functions such as financial crime, risk and compliance as a result. AI has its place supporting staff, but, ultimately, the sector needs to overhaul its approach to recruitment and make the industry attractive again.”  

“Banks, for example, require specialised staff who can give them tighter balance sheet control. However, finding qualified staff who want to take on what can sometimes be seen as a less glamorous role has proven challenging for years. Therefore, financial services need to revisit staffing, considering routes such as interim staff for urgent projects, better career progression opportunities, and personal and skills development to make staff feel valued, all with the goal of boosting attraction and retention.”