Firmer Ground from SRM

ChatGPT and Large Language Model Use Cases for Financial Institutions

Neil Dougherty / Connor Heaton Season 1 Episode 2

In this episode, our AI subject matter expert Connor Heaton discusses the topic everyone’s talking about – OpenAI’s ChatGPT and similar AI large language models. Following a surge of investment and interest in AI tech and startups, Connor explains why the financial services industry is taking notice and onboarding these tools for internal and external functions. He covers a variety of active use cases. Even if you’re new to the world of AI large language models, you’ll enjoy the information and depth Connor provides on this topic. Give it a listen!

Welcome to Firmer Ground from SRM, where we explore trends and strategies impacting the current and future state of financial institutions in North America and across the globe. My name is Neil Dougherty, host of today's podcast and Managing Director of Global Marketing at SRM. Every episode features experts from the world of banking and financial services, including thought leaders here at SRM, executives at forward-thinking financial institutions, and other experts from all corners of the industry. In this episode, I speak with SRM's artificial intelligence subject matter expert Connor Heaton on the topic everyone's talking about: OpenAI's ChatGPT and similar large language models. Following a surge of investment and interest in AI tech and startups, Connor explains how it all works and why the financial services industry is taking notice and onboarding these tools for internal and external functions. Even if you're new to the world of AI large language models, you'll enjoy the information and insights Connor provides.

So, let's get started.

Neil Dougherty here, Managing Director of Global Marketing at SRM. My guest today in the studio is Connor Heaton. Connor is the VP of Advisory Services here at SRM, and he's highly focused on delivering high-value strategy and action plans related to next-generation technology for our clients in the financial services industry. More specifically, Connor leads all delivery related to AI solutions and intelligent automation, so happy to have Connor here today. Prior to joining, Connor was a technology strategy consultant at Deloitte and is a graduate of the Ohio State University. Today we're going to discuss Connor's deep dive into AI platforms like OpenAI's ChatGPT, also known as large language models, and how they're impacting business at large and financial institutions across North America and the globe. So welcome, Connor, how are you?

Doing well, doing well, glad to be here.

Yeah, I'm happy to have you today. So, we'll just jump right into it. Obviously, this is an exciting time for AI solutions, and I know you've taken on the brunt of our coverage and perspective here at SRM, including the recent Tech Talks presentation you've done, which, I should note, is also available on our YouTube channel, so check that out. I want to start with a question that's been on my mind, which is: why are AI solutions so attractive to businesses at large right now?

It is the same draw as any automation has ever been for business: the bottom line, right? Doing things more efficiently, quicker, and more cheaply. This sort of automation has been available in one form or another for a long time, but it's been very specialized, where a lot of these sorts of capabilities have cost a lot to implement. So in a sense, the spread of large language models, the democratization, the accessibility, is a huge part of the current revolution, and the business interest is in making new kinds of tasks possible, cheaper, and more available. The rest, I would say, comes down to what the accessibility of automation means for the future of work. When you can complete an audit, or a targeted marketing campaign, or a review process basically instantly and for negligible cost, the world looks different. Do we have audit review cycles running continuously now, instead of every half year? Or a real-time connection to internal audit, or even to external audit bodies? If we take this one or two steps further and get a little more speculative: if we have autonomous software agents which can analyze the performance and cost of an entire tech stack in essentially real time, and undertake the research process to identify alternatives and change over any component of that stack, what does that mean for middleware? What does that mean for software sales? What does it mean for consultants? Who runs RFIs? Does that fuel commoditization of products to a greater extent? If our future of work is underpinned by thousands of autonomous agents working directly with other autonomous agents, what does that mean for the role of humans and how we build our systems and processes? Companies are exploring that as well, and a lot of what we're seeing in the startup space is people attempting to build out, test, expand, and gain adoption for different components of that potential future ecosystem.

Yeah, there's a lot to dig into there, but what I would say is it does feel like we're at this critical mass point. You're starting to see the development of this technology, these solutions, effectively look like a bit of an arms race, right? You've got OpenAI, Google, Meta, and some legacy companies; obviously, IBM has been doing this forever to some degree, investing heavily in this AI capability. And I guess, just as someone who's looking at this day in and day out, someone who's invested a lot of time in this: do you feel like there's an eventual winner here in this particular race? Do you see something else in terms of consolidation? I'm just curious, if you could look a couple years out, what that looks like to you?

It's definitely a race. I'm afraid it's too early to say, really, which of these entities might come out on top. As far as consolidation, I expect probably to some extent. But if the tools continue to exist in their current form, as sort of standalone assistants, and they'll probably evolve, but something comparable enough to the ChatGPT equivalents of today will likely still exist into the future, you might have a variety of pre-trained, pre-specialized models for different things, and another model over top of them that helps to route queries, combine answers, and so on and so forth, limited just by what ends up being efficient and effective given the computing costs and the infrastructure of it. As far as what that offering looks like and who owns it, it's just too early to say. There is an interesting question around open source, which, until even just this month maybe, wasn't something that was being talked about as much, but there have been a lot of advances. Meta released a model into the open-source community a month or two back, and the open-source community has made tremendous strides, around image-based AI and creative endeavors, but also for things that are more or less intended to mimic and directly compete with Bard, ChatGPT, GPT-4, Anthropic's Claude, and products like that. It's making a lot of, I would say, unexpected advances in scale and in the underlying infrastructure and economics of some of these models, and achieving surprising performance.

Back in December, everyone really thought that OpenAI had kind of a secret sauce that wouldn't be easy for anyone to emulate or really catch up to, and Google's sort of rushed release of Bard just reinforced that perception, I think. So having open-source, inexpensive, locally trainable models that are advancing at a pace that implies the potential to seriously compete with some of these massive tech-giant models, with decades of AI expertise behind them, is a really interesting development. If the future of AI is open source, that may be a very different sort of world, at least from a monetization and business perspective. And one other thing I'll add here is that while it's too early to call a winner in the AI arms race, I can say that the chip companies like AMD, NVIDIA, and Intel are winners regardless. This boom in AI is going to mean demand for specialized chips to run it all, and as the people who make the hardware, they're in a really good spot.

That's an interesting point. OK. So, to summarize, we'd basically say that officially the race is too close to call, but that the open-source piece will be something to really keep an eye on, just watching how that develops, as it has historically with other solutions. So that's super helpful. Let's take a step back, maybe, and help our listeners understand, in the simplest terms possible, how the ChatGPT and large language model technology works, and its reliability. Can you help us through that process, Connor, and talk a little bit about the secret sauce?

Yeah, so, in a sentence, taken to the simplest possible terms, you can think of it as next word prediction taken to an extreme. When you're texting and your phone suggests the next word in your sentence as an assistive measure, it's the same sort of idea for what large language models are doing. They're just doing it with a huge neural net in the back end, which is pre-trained with sophisticated methods on a massive amount of data, to be able to predict what it should say across a tremendous variety of domains: to answer questions, to take on a role and supply answers in that context, to generate fiction, to summarize data. It's all based on this. It's a deceptively simple idea, and this is an oversimplification, inevitably, but that's the core idea of it. Taking a little bit of a deeper look: when you run models through a lot of text data, you're teaching them something about the underlying data set. And given enough curated context, and adjustments to the neural net via human input, basically saying which answer is better for a given query, you can end up with capabilities that kind of surprised everyone a little bit, you know.

This sort of model has been around for a while. It's not a new architecture. But the abilities that these models have shown at scale, and with the particular type of training that's being done, reinforcement learning from human feedback, surprised people. The ability these models have shown to do things like extrapolate beyond their data set, not by a lot usually, but by a little bit, a sort of glimmer of generalization outside of the training data, is part of what's so impressive and so exciting about this: the ability to mix and match contexts, and to extrapolate past a certain domain, for certain tasks, into something that we would say more closely approximates reasoning.
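To make the next-word-prediction idea concrete, here is a tiny, purely illustrative sketch in Python. It is not how a real large language model is built (real models use transformer neural networks trained over subword tokens, not word-pair counts), but it shows the core loop Connor describes: given the text so far, predict the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Toy 'training': for each word, count which words follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def predict_next(model, prompt):
    """Predict the most frequent continuation of the prompt's last word."""
    last = prompt.lower().split()[-1]
    candidates = model.get(last)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = (
    "the bank reviews the loan and the bank approves the loan "
    "when the customer repays the loan on time"
)
model = train_bigram(corpus)
print(predict_next(model, "the bank approves the"))  # → "loan"
```

A real LLM replaces this count table with billions of learned parameters and conditions on the entire preceding context rather than just the last word, but the interface is the same: text in, most likely continuation out.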

Right. And it might just be that that surprise element is also part of the reason that ChatGPT and some of these other solutions are having their moment right now, just as you mentioned, exceeding expectations of what we knew they could do. So that's really interesting. I want to go a little more micro, if we can, around how you see these AI solutions benefiting our clients: banks, credit unions, leaders in that space, technology and operations folks who are frankly already challenged with the weight of digital transformation and managing that process. I know this is part of the talk you gave the other week, and I was just curious if you could share with our listeners some of the opportunities there, as well as some of the use cases you've seen that have wide potential.

Sure. So I guess the first thing I'll say, as far as how large language models and this technology can help FIs do things that they're already trying to do today: a lot of it is improvements to a wide variety of tasks by a small amount through assistive automation. That's pretty much just what you can get out-of-the-box using free or very cheaply licensed tools like ChatGPT to, you know, make drafting an email easier, to draft policies or board reports, to help summarize content in a knowledge base, to customize marketing outreach. Things which are very much in the wheelhouse of LLMs, and which are just humans using the tools to do things better and faster and more autonomously.

So, and I'll just interrupt there, Connor, but are these things that they should have already been putting in place for some time now? Or how do you see that from a kind of opportunity, ease of onboarding, and integration standpoint? Is this something that could have already been happening?

I mean, it could be happening, no question. Some of our clients have already started using LLMs for exactly these sorts of use cases. But yeah, the stance of the financial industry almost always is sort of conservative, you know, wait and see. Maybe it's not fast follower, maybe kind of medium follower, or even slow follower on new

Right.

technologies, to make sure that there aren't unknown risks that are going to pop up associated with new tech. But you know, there are some early adopters that are already making use of these, and there are other industries which have more or less immediately adopted them. Particularly with large FIs, large institutions, you have a lot of people, and a community of a particular role or department which works closely with each other or shares kind of tips and tricks. Relatively early in the cycle of LLMs hitting and being used for business, I think it was JP Morgan or Morgan Stanley, one of the two, that restricted use of ChatGPT and other LLMs.

That's JP Morgan Chase. And I was going to ask you about that too, kind of what your take there was, considering that, you know, some are obviously onboarding in a scalable way and others are taking a different stance.

Yeah, so.

My read from the public disclosure there is that the wealth managers at JP Morgan started playing around with ChatGPT and realized how much of their jobs it could do for them, you know, and how much easier it could make knowledge searching, summarizing, and communicating with clients, to just handle a lot of the everyday tasks that didn't need tremendously deep financial expertise but just needed to be done, or that, given the right sort of context and prompt, it could do what they needed done. But at that point in time, unless it was licensed and being used through the API, that data wasn't private. Anything that was put into ChatGPT at that time was sort of fair game for training of the model by OpenAI, so that data was retained in their database.

The folks at JP Morgan realized that and basically had a temporary lockdown on the LLMs, because they didn't want that proprietary data making its way out into the world, right? Their innovative, forward-thinking employees kind of outpaced corporate governance, and they noticed it because the tools are so useful. And there have been news articles, now even a couple months old, about Realtors using these tools to automate huge chunks of their jobs. Jobs that involve a lot of personalized communication as an almost primary chunk of their time have seen a lot of opportunity and a lot of impact from these tools already.

And so that's the sort of use case where financial institutions could be moving, could have moved, on this in December of last year. It's been five months, and things are only accelerating and expanding and getting more interesting. And as far as the more enterprise use of the AI, which is what JP Morgan is doing now, the reaction basically was: OK, we'll stop using the free tool because we can't enforce consistency on it, and there are risks associated with it, but we'll bring this in house and build our own version.

Right and manage our data appropriately, right?

So, it's going from something which is a generalized tool, which can do a surprisingly good job at being a wealth management advisor, to a specialized tool that specifically uses the institutional knowledge a firm like JP Morgan has, to do that job even better, and do it more consistently, by implementing a custom in-house use case.

And that sort of thing, where you are fine-tuning a model or building an application on top of GPT or some of these other solutions, tends to look more like a traditional IT project in terms of the expertise, know-how, and resourcing that you have to apply to the problem. So that's the sort of project that a smaller FI may want to wait on. It's hard to say in this case how much first mover advantage there is. Certainly there are hundreds of startups working on making tools built on top of the LLM architecture which are able to deliver services and value in a more turnkey sort of way to a wide variety of industries.

You know, today most of those turnkey solutions aren't at the level of maturity that most FIs would be comfortable with. So I hesitate to say hold off on doing that sort of enterprise implementation. There is a possibility that some vendor tool, or some existing vendor, might integrate capabilities that you'll be able to use more out-of-the-box, where it will be done for you and you can just license it. But experience with this sort of project, with doing this kind of thing, is likely to have its own value in the future.

The first step of any custom AI project is pretty much always getting your data house in order, so to speak: making sure that your data is well organized, that you have the appropriate storage, that if it's going to be used for applications like machine learning, you have the right tagging on it, the right metadata, that it's consistent, that you have historical trends and aren't working from disjointed databases or systems. And that's work that a lot of FIs, even a lot of big FIs, really haven't done, or haven't done well enough. It's something that, as part of digital modernization, as part of projects we were already seeing before the advent of LLMs, a lot of institutions were already pursuing, already thinking about, or that was already on the road map. AI is just another incentive on top of that to get it done.

So those are the two considerations I would put out as far as doing something that is customized for your institution: the experience of doing the project, and the things you'll have to do to get it done, will have strong benefits into the future, for a lot of likely future AI developments, products, and services, as well as just overall use of data; but there may be some more turnkey tools coming out in the next year, or that are even already out but immature, that might let you get some of the benefits while skipping over some of that work.

And then the other half, using the tools that are out there for assistive automation: there isn't really a reason not to be doing that today.

Right. So I mean, clearly there's a proper sequence to all this. One of the things that you mentioned before, when we've talked about this, and maybe you can just provide a little bit of perspective on this, is the fact that a lot of times the companies that are thinking about integrating this are focused on the technology itself, but there's a human resources element to this that really has to be part of the focus and the build. Is that accurate to say?

Yeah, absolutely. And there are a few pieces. One is the sort of governance and controls perspective of: how do you make sure that your people know enough about the tools, their strengths, and their limitations to be making effective use of them, and to not be risking data leaks, or putting too much faith in them and ending up with, you know, biases manifesting in your business environment, or other issues?

So there's a level of education, understanding, policy, and oversight that needs to be in place for that. And, as with any effort that involves automation, particularly any highly visible automation technology like this, where it's in the news and there's a lot of anxiety about it, there's the change management and communication effort: clarifying what this means for employees. How exactly is this going to be implemented? How is it going to be used? What does this mean for the day-to-day of employees? Giving them enough familiarity and understanding, so it's not just empty words, so that there isn't the fear in the room of, you know, who's going to get replaced by ChatGPT today.

And really, the reality here is that it's pretty rare for an entire position to be fully replaced by LLMs or by other automation tools. Looking back at things like robotic process automation, or intelligent automation, most roles today have enough components of different sorts of work, of decision making and communication, that automating end-to-end is rare. It's much more common that you are automating a piece of someone's job, and most often it's a piece of their job that they're not particularly fond of doing. It's the grunt work, the data cleaning, the report generation. So when people understand that this is probably going to get rid of, you know, my least favorite part of my day, they tend to be much more enthusiastic about it.

This is a situation too where, you know, this is an example of jobs evolving, right? I think you shared some data last week, some statistics on companies looking to hire engineers and people who are experienced with these types of tools. I think the number was something like 29% of companies were looking to hire prompt engineers this year. Were you surprised by those numbers?

Uh, yes and no, a little bit.

Prompt engineering, I should level set, is making advanced use of large language models as they are today, out-of-the-box, without fine-tuning, just through being more precise and more scientific in how you are interacting with them. You could think of it a bit like a developer, like a SQL developer who is used to working with databases, or the super user of a particular tool, who's just able to get more out of the tool due to their knowledge of how to interact with it than the average user. And having 30% of companies looking to hire prompt engineers this year...
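As a rough illustration of what "more precise and more scientific" prompting can look like, here is a hedged sketch contrasting a vague request with a structured one. The section labels, the helper function, and the example content are illustrative conventions, not a standard or a specific product's API:

```python
# A vague prompt leaves role, audience, and format for the model to guess.
vague_prompt = "Write something about our new savings account."

def build_prompt(role, task, context, constraints, output_format):
    """Assemble a structured prompt from explicitly labeled sections,
    a common prompt-engineering pattern: state the model's role, task,
    context, constraints, and desired output format rather than leaving
    them implicit."""
    sections = [
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ]
    return "\n".join(sections)

structured_prompt = build_prompt(
    role="You are a marketing copywriter at a community credit union.",
    task="Draft a short announcement for a new high-yield savings account.",
    context="Audience is existing members; 4.0% APY; no minimum balance.",
    constraints="Under 100 words, friendly tone, no financial advice.",
    output_format="One paragraph followed by a single call-to-action line.",
)
print(structured_prompt)
```

The structured version tends to produce more consistent, on-spec output for the same underlying model, which is the super-user effect Connor describes.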

I'm not surprised, maybe. A lot of companies don't really know what to do with this and are just kind of looking for what expertise they can find, and the expertise they can find is people putting prompt engineer on their resume, even though...

You know, I'm sure if you look at Google Trends, the frequency of search on the term prompt engineer, there's probably like nothing prior to December, maybe even prior to April of this year. It's incredibly new. Right? So if someone says they're a prompt engineer with a year of experience...

They're probably translating some previous experience, which is definitely not prompt engineering as we know it today, into doing that sort of work. And by the same token, if a company is looking for prompt engineers with a year of experience, they're going to be very disappointed. So I'm not too surprised that there's kind of this bandwagoning. I do question if perhaps these companies have not all considered whether they need that level of expertise.

Right.

You know, do they need to have prompt engineers? Is that going to be the right hire for them? What are they expecting those people to do within their organization? Are they expecting them to lead this change program, basically roll out the use of LLMs at their organization for everyone? Are they looking for them to tackle specific higher-value use cases? I think it's something that may be a little under-considered on the part of many companies that are, I think, more afraid of being left behind than anything else. And I think that's what bears out in some of the other statistics, around, you know, 65 to 66% of companies believing that hiring ChatGPT-experienced workers gives their company a competitive edge. And for a lot of industries, I think that is true. Having your workforce able to use these tools effectively is going to benefit you and let you be more efficient.

But just sort of doing that without a plan for it, or an understanding of what the value is going to be for the organization, how you're going to make use of it, how this fits into the broader adoption you're doing, they'll tend to realize probably less value than they're hoping for without understanding those things.

I mean, some of this is trying to get out far in front of it. It's sort of an attitude of: let's start the hiring pipeline now, because these resources are scarce, and by the time we get one in, we'll figure out what to do with them. So, you know, hopefully that works out for folks.

Or potentially those hires could recruit and lead a team as well if need be, so it's no longer the centralized headache that it could be. That's an interesting take there, and thanks for covering off on that. I think we've covered a lot so far, but one question I had for you, and maybe we can bring it back around here: as interesting as AI is, and with all this popularity, this groundswell, it seems to be very polarizing. There are a lot of fears related to the widespread use of AI. We just saw that a long-time innovator in AI, Geoffrey Hinton, had left Google to speak about the dangers that he sees in AI. So I'm just curious what your take on this is, and does it change your outlook as a professional?

Sure. I guess, and this is its own rabbit hole, really, but...

Sure, I totally get that. So feel free to just put a very broad answer on this one.

Yeah, there have been groups which are very interested in, and dedicated to, the potential and the risks of AI, specifically fully generalized, human-capable AI, and what that means for humanity from a sort of existential risk perspective, and then what we do about that. Getting into: if we have a human-or-greater-intelligence-level AI, is there a way that we can keep it aligned with human values? That's its own entire problem. There are research organizations dedicated to trying to figure that sort of thing out, and there have been since well before the rise of large language models. I guess I'll sketch out the thinking, because I know that without a little bit of an example, this sort of thing can just kind of slide off.

To give an example of what the thinking is here: the idea is that progress is accelerating, right? You have exponentially accelerating progress in AI, you're having breakthroughs, and you're getting much faster processing speeds that are already well beyond human capabilities in terms of things like data storage and retrieval, processing speed, automation, parallelism, and the ability of software to replicate itself.

And so the idea is, if we end up creating an AI with human intelligence in a general sort of way, right, it has a theory of mind, it understands the world, it understands how it interacts, then such a thing would have the capability, so goes the theory, to improve itself tremendously quickly, a double or triple exponential curve, and that would give that sort of entity just tremendous power effectively overnight. It scales faster than humans have any real hope of keeping up with.

And so there's this worry amongst various communities of AI risk and AI researchers that if we aren't able to solve the problem of alignment, of how we can keep an AI entity aligned with human values, it could potentially mean the end of the human race, effectively. A classic parable that's thrown around to describe this is the paper clip optimizer.

Suppose that, for whatever reason, a company that produces paper clips is the first to develop general AI, and it has the resources to improve itself. Maybe it makes savvy stock market investments and gains a lot of resources. But its utility function, what it was built to do, is just to make more paper clips. The parable goes that it stays in line with human values for a while, because that lets it make more paper clips, but as soon as it's gained enough resources and capabilities, it no longer makes sense to continue to serve human interests, and it instead just makes paper clips at the expense of humanity, the world, and eventually, you know, the broader solar system, et cetera. The idea is just: if the only thing a highly capable entity cares about is maximizing profit or producing paper clips, if that value function doesn't match what humans want, then humans end up as an afterthought, or just a casualty, almost by accident.

So it's kind of a silly thought experiment, but it's one of the ways that people discuss non-human intelligence and what that may mean for the species. And on the opposite end of things, there's the AI utopia school of thought: well, if we have aligned superintelligent agents, that may help us cure cancer, that may help us eliminate scarcity, produce better policy and governance that benefits humanity more broadly, instead of concentrating power and wealth in the hands of a few. There's potential and there's risk here. And it may sound hyperbolic to talk about, but one of the relevant statistics here is that something like 50% of professional AI researchers believe that there is a 10% or greater chance that the development of general AI, of superintelligent AI, could result in human extinction. So some very smart people take this very seriously. And the Overton window around this, how OK it is for, you know, serious, buttoned-up people to discuss it, is widening. That's why you see personalities like Elon Musk talking about it, and why you have Geoffrey Hinton expressing concern and leaving Google to be able to talk more about this and engage more on the risks and control side of things.

Uh, so that's really the long answer. But as I said, it's a very deep rabbit hole of thinking, and it has been for far longer than large language models have been around.

Yeah, I mean, it's fascinating, and obviously something that is going to continue to evolve. I know that was not an easy question to get into, but just trying to understand the kind of risk-reward scenarios is helpful, and I think we're just scratching the surface on this as it relates to what we can share with our listeners here, so I hope we can have you back on again. Appreciate the time, Connor, and I would encourage anyone who's interested in this topic of AI large language models to reach out and connect with Connor on LinkedIn, and be sure to read his pieces on our SRM blog, The Bottom Line. I hope you'll join us again on the podcast. Thanks for listening, everybody, and bye for now. And Connor, thank you again.

Thanks so much, Neil.

Thanks for listening to Firmer Ground from SRM. Please stay tuned for our upcoming podcast and until then you can visit us at srmcorp.com or on LinkedIn and Twitter.