If you are a financial institution that had a data science or applied artificial intelligence (AI) program established before large language model (LLM) darling ChatGPT was announced, congratulations! You more than likely already have the right governance, ethics, data, and explainability elements in place to allow your firm to do useful things with LLMs and generative artificial intelligence (GenAI). Your existing investments in AI, including the all-important AI responsibility guardrails, provide a unique opportunity to swiftly drive high-value use cases and potential competitive advantage while the rest of the world figures out how to do what financial services has been doing for years: safely and effectively using AI. With the theme of "building on existing AI capabilities" in mind, we caught up with Michael "GenAI before it was cool" Conway, a Data, AI, and Technology Transformation Partner with IBM Consulting, about the cognitive work IBM has been doing with a UK bank since 2017.
Pre-pandemic, a UK bank inked a deal to migrate applications to a private cloud environment. While data center modernization was the headline, it also opened the innovation doors within the bank to other opportunities. One of them was sorting out the bank’s cognitive strategy, inclusive of the Triple-A Trifecta of automation, AI, and analytics. Michael and his team have been with the bank since 2017 across various functions and lines of business such as retail banking, insurance, commercial banking, and enterprise shared services, initially doing a lot of advisory and cognitive build work like identifying use cases and co-developing the bank’s chatbot capability. Over time, they’ve developed into a high-performing data science team with a continued focus on driving better customer experience through cognitive innovation for the bank’s contact centers.
More recently, in June 2022, the bank was looking to further improve its retail banking chatbot's functionality. Michael and his team identified and trialed seven different use cases leveraging what we now call LLMs, using a proprietary closed model and internal bank data. Of the seven, five yielded exciting value and stood out as star performers.
These proof-of-concept use cases and others were quickly put into production internally with close human-in-the-loop oversight before being added to customer-facing chatbot functionality. The bank describes the benefits as improving customers' "virtual assistant experience by reducing unsuccessful searches, improving virtual assistant performance, and personalizing search performance for its customers. The implemented LLM solution resulted in an 80% reduction in manual effort and an 85% increase in accuracy of classifying misclassified conversations."
In the post-ChatGPT world, BFSI firms continue to explore their options and potential use cases for GenAI. In HFS' growing database of in-production GenAI use cases, BFSI enterprises comprise about a quarter of all entries. An analysis of the BFSI industry use cases reveals that analytics and insights (such as lending credit decisioning and underwriting) is the top category at 42%. Customer experience (CX), with a heavy focus on better enablement of agents and enhanced chatbot capabilities, and contextual search, spanning internal knowledge management and refined customer-facing search, rounded out the top use cases. As with the UK bank, these leading use cases are squarely aimed at beating down manual labor and dramatically increasing productivity, yielding tangible cost savings. The depth of in-production BFSI use cases is fueled by strong leverage of existing AI competencies.
[Exhibit. Sample: Analysis of 51 in-production GenAI use cases with BFSI enterprises. Source: HFS Research, 2023]
While Michael likes to (rightly) state that IBM and the UK bank were using GenAI before it was cool, perhaps more relevant is why they could do so and what that means in the post-ChatGPT world. IBM and the UK bank could test and deploy new work rapidly because they had an AI baseline. They already had strong AI responsibility protocols, closed proprietary models, data, and fledgling GenAI expertise. Once ChatGPT was released, that existing baseline helped them rapidly assess what would work for the bank and complement their existing capabilities and AI responsibility protocols.
Financial services firms are unlikely to use public foundational models like ChatGPT due to significant security, risk, and data privacy concerns, which can slow the initial path to in-production use cases. Even so, any existing AI baseline is a clear asset that can help financial services firms effectively embrace and explore the power of GenAI. The future of AI is leveraging your baseline, not starting from scratch.