Increasing reliance on complex technology leaves banks vulnerable

When a mainframe failure caused a three-day outage at Barclays earlier this year, millions of UK customers were unable to access even the most basic banking services.

The disruption not only damaged the bank’s reputation but also left it facing a compensation bill of as much as £7.5mn. Incidents like this are becoming alarmingly common in the financial services sector.

Despite investing billions in state-of-the-art security tools and seeking to reassure both customers and regulators of their resilience, banks remain highly vulnerable. The increasing complexity of their software ecosystems and the long, tangled supply chains required to support them are key culprits.

In the UK, Barclays suffered 33 system failures between January 2023 and February 2025, according to data from the House of Commons Treasury select committee. Over the same period, HSBC and Santander were both hit by 32 outages.

The challenges are not limited to outages. Last year, Citigroup credited a client’s account with $81tn when it meant to send only $280, after an employee at the Wall Street bank made an input error while using a backup system with a cumbersome user interface.

“Banks operate in complex environments that contain countless applications, ranging from trading platforms to fraud detection tools,” says Alois Reitbauer, chief technology strategist at US software group Dynatrace. “These applications run on highly distributed cloud infrastructures, draw data from multiple stores, and rely on the support of a variety of third-party vendors.”

“Even a minor miscalculation or anomaly across the software supply chain can lead to widespread outages that disrupt services,” he adds.

As financial institutions race to modernise — shifting to the cloud and adopting emerging technologies such as artificial intelligence and quantum computing — many remain hamstrung by so-called “technical debt”. The term describes the mounting cost of maintaining and building on top of outdated, poorly written code, which is one of the key causes of such failures.

“The recent errors from Barclays and Citigroup relate to legacy IT systems, likely developed during less mature development cycles. Having more rigorous development life cycles with proper vulnerability testing can help flag potential issues early on,” says Justin Kuruvilla, chief cyber security strategist at Risk Ledger, a London-based supply chain security specialist.

Alicja Cade, director of the office of the chief information security officer for Google Cloud, agrees. “Often financial institutions grapple with legacy technology and obsolete processes, leading to operational fragility and simple errors when stretched by new demands,” she says, adding that “insufficient testing in new contexts and overwhelmed interconnected systems further exacerbate these risks”.

A 2024 survey of 200 IT decision makers by 10x Banking found that 53 per cent cited data silos and production bottlenecks as barriers to scaling legacy systems. Tackling technical debt would also help banks improve the security of their IT systems in the face of a growing cyber threat from both nation states and criminals looking to drain funds or steal data for extortion or espionage.

But making large-scale changes to upgrade systems, as well as testing, can be costly and disruptive. Banks are reluctant to introduce downtime, particularly given the underlying “consumerisation” of the financial user experience, according to Joshua McKenty, chief executive and co-founder of Polyguard.


“Customers expect their mobile apps to be as convenient and instantaneous as Instagram or PayPal, and banks have had to scale up and scale out their application development and supporting IT operations,” McKenty says. “The pressure of expectations for ‘new features, faster, and for everyone,’ and the increasing complexity of the financial operations banks offer, have spread security thin.”

To keep pace, banks are outsourcing ever more of their IT systems to cloud service providers. Proponents argue that doing so offers opportunities to strengthen security, potentially allowing for automated updates, real-time global monitoring, and quicker remediation if there is an incident. But others disagree, pointing out that it can leave data more exposed in a centralised location.

Jayant Dave, chief information security officer for Check Point Software Technologies in Asia Pacific and Japan, says the “growing prevalence of hybrid architectures — spanning on-premises systems, cloud platforms, and mobile environments — adds layers of complexity.”

Organisations cede a degree of control and visibility over their underlying infrastructure as the cloud provider takes on more responsibility. Julien Richard, vice-president of information security at Lastwall, points out that this can complicate processes around incident response and compliance.

“The shared responsibility model — while well-documented — is still a source of confusion, especially in complex environments with multiple vendors and services. When something goes wrong, knowing exactly who is responsible for what isn’t always clear, and that ambiguity can create real risk,” he says.

This makes third-party vendor due diligence, mapping and management all the more important. “Organisations need to establish clear processes for assessing the third parties they work with — not just at onboarding, but continuously over time — to ensure those relationships don’t become blind spots,” Richard adds.

“In this exposed environment, financial services organisations must remember they’re only as strong as their supply chain,” says Alex Laurie, senior vice-president at Ping Identity.

The realities of supply chain risk were highlighted by an incident in the tech sector last year, when a botched CrowdStrike update took down millions of Microsoft Windows PCs and servers in a global IT outage.

“Organisations need to deploy controls that prevent both malicious acts and unintended errors, while also gathering the required telemetry to detect when a control has failed or been bypassed,” says John Shier, field chief information security officer at Sophos. “Overlapping sets of controls and detections, at different points in a process chain, provide redundancy and will reduce the impact of a single failure.”

Some security experts advocate for further automating systems, particularly given the advent of AI. Check Point’s Dave urges financial groups to leverage AI to “accelerate the modernisation of their technology stacks and workflows, reducing manual touchpoints and minimising human error”.

Reitbauer agrees, urging banks to shift from reactive to proactive approaches to IT outages and security incidents, using AI to help predict and prevent problems before they occur. “The key lies in real-time visibility into system health, user experience, and any anomalies in normal business processes,” he says.

Still, the headlong race by many financial services companies to introduce AI without due care brings challenges of its own. “AI fundamentally changes a bank’s risk profile, introducing new vulnerabilities like model manipulation, demanding a strategic response,” says Google Cloud’s Cade.

“As AI model usage is incorporated into critical infrastructure sectors, such as financial services, they are targeted by attackers, hence poorly secured or biased AI can lead to losses, penalties, and reputational damage,” she adds.

Banks should also think twice before embracing the push for greater deregulation, and should take the instability and breaches in the far less regulated cryptocurrency sector as a cautionary tale, according to Lastwall’s Richard.

“Mitigating these risks comes down to applying the fundamentals — strong policies, well-defined processes, empowered and informed people, and the principle of ‘trust but verify’,” he says. “What’s crucial now is doubling down on those practices, not stepping away from them.”
