How to create massive value in banking with trusted AI

Banks must deploy AI to increase customer acquisition and engagement, reduce costs, and drive growth.

Simon Axon
November 21, 2023 · 4 min read

There’s a lot of buzz these days around the potential of artificial intelligence (AI), especially generative AI, to revolutionize the banking sector. It was the hot topic at the recent Money20/20 conference in Amsterdam, and media outlets such as the Financial Times are rolling out a steady stream of articles exploring use cases such as fraud prevention, customer service, and automation of core customer banking activities. Leading consultancies such as Accenture are predicting that AI could drive a 30% employee productivity gain over the next five years in roles spanning customer service to back-office operations, and banks are paying attention.

The message is clear: Banks have an imperative to deploy AI to increase customer acquisition and engagement, reduce costs, and drive growth. But for institutions making high-stakes decisions in highly regulated markets, there’s a linchpin in the case for AI that’s getting lost in the hype: AI requires high-quality data. It’s not just AI that’s needed—it’s trusted AI.

Not all AI is trusted AI

The primary technology behind today’s explosion in AI is machine learning (ML), whereby machines learn to identify patterns in data and make accurate predictions. An ML model can spot previously unnoticed relationships, interactions, and causal effects. To do this effectively, it must be trained on large volumes of data for which outcomes are known. For example, a model could examine tens of thousands of fraudulent activities for commonalities, then use that information to flag similar transactions as potentially fraudulent before they happen.
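The pattern described above—learn from labeled historical outcomes, then score new cases—can be sketched in a few lines. This is a deliberately minimal illustration, not Teradata's or any bank's actual method; the feature (country code), thresholds, and data are all hypothetical stand-ins for the richer features a real ML model would use.

```python
from collections import defaultdict

def train_fraud_rates(transactions):
    """Learn per-country fraud rates from labeled historical transactions.

    `transactions` is a list of (country, amount, is_fraud) tuples --
    an illustrative stand-in for the labeled outcome data a real
    model would be trained on, not an actual bank schema.
    """
    counts = defaultdict(lambda: [0, 0])  # country -> [fraud_count, total_count]
    for country, _amount, is_fraud in transactions:
        counts[country][0] += int(is_fraud)
        counts[country][1] += 1
    return {c: fraud / total for c, (fraud, total) in counts.items()}

def flag_transaction(rates, country, amount,
                     rate_threshold=0.5, amount_threshold=1000):
    """Flag a new transaction as suspicious if it comes from a
    high-fraud-rate origin or is unusually large (illustrative rules)."""
    return rates.get(country, 0.0) >= rate_threshold or amount >= amount_threshold

history = [
    ("AA", 100, True), ("AA", 120, True), ("AA", 90, False),
    ("BB", 50, False), ("BB", 70, False), ("BB", 60, False),
]
rates = train_fraud_rates(history)
print(flag_transaction(rates, "AA", 80))   # learned high fraud rate -> flagged
print(flag_transaction(rates, "BB", 40))   # low rate, small amount -> not flagged
```

A production model would replace the hand-coded rules with a trained classifier over many features, but the core dependency is the same: the quality of the flags is bounded by the quality and coverage of the labeled data.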

As AI is deployed in more and wider-ranging use cases, it must be trained on a larger and more diverse pool of data. Unstructured, fast-moving, and granular data from across and beyond the enterprise must be fed into the ML models. The quality of that data, and the knowledge of which data was used and where it came from, are critical to developing trusted AI—AI that makes accurate, justifiable, auditable, and responsible decisions. Otherwise, as with much in data science, the GIGO principle applies: garbage in, garbage out.

The quality of AI matters in all industries, but it’s especially critical in regulated sectors. The European Union (EU) has acknowledged this reality with the creation of the AI Act, which aims to strengthen rules around data quality, transparency, and accountability to protect individuals from potential discrimination. Likewise, the U.K. recently released a “pro-innovation” vision for AI that empowers regulators to scrutinize the technology and how it is used. Banks must ensure the privacy and sensitivity of the data they use to train AI models. They can't simply ship customer and transaction data off to third parties and trust them to keep it safe.

Building the foundation of an AI-driven future

If trusted AI is essential for the future of banking, leaders need to make decisions now to create the data infrastructure needed to support it. This requires leveraging a data analytics ecosystem that not only supports advanced analytics for specific applications, but also provides the foundation for bank-wide deployment of reliable, robust, and reusable AI models. The Royal Bank of Canada (RBC) has done just that by leveraging Teradata VantageCloud, the complete cloud analytics and data platform for AI, to integrate data from across and outside the enterprise and provide actionable answers to any question against any data at any time. According to RBC's senior director of data architecture, VantageCloud empowered the financial institution to “use AI to serve our clients according to what their needs are—not according to what we think their needs are.” 

To truly capitalize on the AI opportunity, banks need to bring models to their data—or, even better, develop and train their own advanced analytics in house. Teradata’s powerful analytics engine, ClearScape Analytics™, builds on the speed and scalability of VantageCloud to create the ideal environment for advanced analytics and AI/ML model development, testing, and enterprise-wide deployment. It provides support for popular coding languages and a Bring Your Own Model (BYOM) capability, giving data scientists freedom to use the best tools. Teams can test and train models securely and at scale on real data within VantageCloud, then automatically deploy and monitor them for drift and other anomalies. Data features, notebooks, and other evidence on model development are all stored for reuse and available for audit. ClearScape Analytics enables banks to quickly deliver value from AI-driven innovation and advanced analytics while enforcing clear data rules and governance. This can ensure transparency, quality, and auditability of AI decisions.
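Monitoring deployed models for drift, mentioned above, can be illustrated with one common drift metric: the population stability index (PSI), which compares a feature's distribution at training time with what the model sees in production. This is a generic sketch of that widely used technique, not ClearScape Analytics' implementation; the 0.2 threshold is a conventional rule of thumb, not a Teradata-specific value.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are lists of bin proportions (each summing
    to ~1). Larger values mean the live distribution has moved further
    from the training distribution; PSI > 0.2 is a common rule-of-thumb
    signal of significant drift.
    """
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

training_dist = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training time
live_dist     = [0.10, 0.20, 0.30, 0.40]   # distribution observed in production

print(psi(training_dist, live_dist) > 0.2)  # True: distributions have diverged
```

When a drift check like this fires, the audit trail the article describes—stored features, notebooks, and model evidence—is what lets a bank trace which data the model was trained on and decide whether to retrain.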

If you’d like to learn about building a robust data analytics ecosystem that can enable your organization to unlock the potential of trusted AI, please reach out to the Teradata Financial Services Consulting team to schedule a meeting.



About Simon Axon

Simon Axon leads the Financial Services Industry Strategy & Business Value Engineering practices across EMEA and APJ. His role is to help our customers drive more commercial value from their data by understanding the impact of integrated data and advanced analytics. Prior to his current role, Simon led the Data Science, Business Analysis, and Industry Consultancy practices in the UK and Ireland, applying his diverse experience across multiple industries to understand customers' needs and identify opportunities to leverage data and analytics to achieve high-impact business outcomes. Before joining Teradata in 2015, Simon worked for Sainsbury's and CACI Limited.



