Q&A: NY Life exec says AI will reboot hiring, training, change management

During its 178 years, New York Life has had to adapt many times; now, AI is affecting nearly every corner of the insurance business, from hiring to client services, says Alex Cook, senior vice president and head of strategic capabilities at the firm.


Are you using external AI models or developing domain-specific internal models to address your business-specific needs, and what security concerns do you have? "As we started to understand some of the potential of generative AI, earlier this year we formed a steering committee, which I chair, to ensure we had a tight focus on multiple dimensions. One of those dimensions was enablement: making sure, from a tech-stack perspective, that we had access to a set of models — OpenAI’s models, Anthropic’s models — from either Microsoft Azure or Amazon Web Services (AWS), and that we had the right security review around accessing those models. We wanted to ensure we could start using them in the context of our proprietary data and be comfortable that the information wasn’t going to get used to train one of those models [and] inadvertently end up getting leaked somehow."

A lot of the focus in enterprises today seems to be shifting toward smaller, domain-specific LLMs developed in-house versus the more general, amorphous LLMs like GPT-4, Llama 2, and PaLM 2. Is that the direction New York Life is headed? "We’re trying to take advantage of what’s publicly available; there are lots of places where employees or agents would use something like ChatGPT to help them in their role, and this is where we’ve been leaning into training folks rather than saying, 'You cannot use it,' because it’s going to be harder and harder to contain. You’re going to be better off educating people.

"So we said you can use those tools, but you cannot use any PII or PHI or confidential information and we introduced scanning tools to look at any flow of information from our networks to prevent any inadvertent use of those tools for that purpose.

"Then you move into proprietary use cases, so development where you're actually taking those models and using them with the intent of saying we’ve got a huge amount of our own internal data that we want to use with those models. At this stage, our approach has been to develop and train our own models. We’re still using what is available from companies like OpenAI and Anthropic. We’re also working with a few different models to test them out, such as Llama 2 and the Claude models from Anthropic, as well as the GPT models from OpenAI. What we’re doing is ensuring we’re constructing those with a focus primarily on use cases around knowledge management — so, like a lot of other companies, tools to aid service reps and our agent advisors. We’ve got decades of policies still on the books that are still active. All of the different options in a lot of those products, it’s hard for anyone in our organization today to be able to say about that policy written 30 years ago, 'Here are all the different options and features you could use.'

"So a lot of our focus has been on standing up a generative AI conversational interface to a lot of that historical policy feature set, and other support areas within our service organization. So it’s a tool that should help our service reps be more productive and limit the degree to which they get a call from a client or an agent and have to say, 'I can’t answer that one. Let me put you on hold while I go find someone who can.''

Internal, domain-specific chatbots — is this the kind of technology that will allow a new employee to quickly come up to speed and answer client questions that previously might have taken months of training? "Much faster than would have been the case in the past. It enables relatively inexperienced service reps to respond to clients’ questions much faster than they otherwise would — and without needing to tap into a group of experts so frequently to get an answer to relay back.

"It’s really acting as an internal expert assistant, so that when you have complex questions being asked by clients or agents, the service rep is in a good position to respond to that directly.

Did you find the contracts with genAI providers offer the same data protections you have with other vendors? "We pretty quickly bifurcated between the use of publicly available tools like ChatGPT and enterprise access to the underlying models. OpenAI has progressed from their original release of ChatGPT, when they were taking all the information people were putting in — all the prompts and responses — and using that to train the model. Then they [OpenAI] launched a subscription service with an option to not have your prompts and responses included in training their models. But you have to pay for that, effectively. That led into an enterprise option from OpenAI that similarly has those kinds of protections. So, they’ve come along in their development.

"We were working with Microsoft Azure and their APIs for access to OpenAI’s models — GPT 3.5 and GPT 4. And via Microsoft Azure, they have the right kind of licensing to ensure none of your information will be used for training their model or otherwise retain or expose it. As soon as you start to work with AWS or Azure, they’ll typically have that kind of contracting capability and platform capability to ensure you’re able to monitor that usage."

Copyright © 2023 IDG Communications, Inc.
