Q&A: ServiceNow CIO sees an 'iPhone moment' for genAI

Chris Bedi has spent years leading a full-court press to take advantage of the productivity, efficiency, and efficacy gains of artificial intelligence. But when ChatGPT launched a year ago, everything changed.

Have you been concerned about baked-in biases that have been evident in some genAI platforms, such as automated hiring assistant applications? "For sure. Bias and hallucinations. Hallucinations are real. How do you monitor for hallucinations? If you take my instant expert example, how do you make sure the models are good enough so that you don’t lead someone down the wrong path? If you think back years ago, remember when Apple Maps first came out and was directing people to drive into lakes?

"How do we make sure genAI isn’t interpreting accurate content ,but when it puts it together it becomes inaccurate. So, I think those are real issues which the industry is still unpacking. It also comes back to your level of sophistication and being able to measure hallucination rates and only displaying an answer when the confidence level is above ‘X’. There are ways to do that. It’s a story that’s yet to unfold as to the final answer, but yes I do worry about it."

At a high level, how do you go about establishing ‘X’, i.e., an acceptable level of AI accuracy? "We keep stuff in the lab until we’re confident in the hallucination rates. Then, there’s also no substitute for a human in the loop who can say, 'This isn’t right.' We’re still smarter than the machines."
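Bedi didn’t spell out how that threshold works in practice, but the pattern he describes, scoring each generated answer and only displaying it when the score clears ‘X’, with a human reviewer as the fallback, can be sketched roughly as follows. The function names, scoring logic, and threshold value here are hypothetical placeholders for illustration, not ServiceNow’s actual implementation.

CONFIDENCE_THRESHOLD = 0.85  # the "X" discussed above; tuned in the lab before release

def score_answer(question: str, answer: str) -> float:
    """Return an estimated confidence (0.0 to 1.0) that the answer is grounded.
    A real system might compare the draft answer against retrieved source
    documents or ask a separate evaluator model to grade it."""
    return 0.0  # placeholder so the sketch runs; always routes to a human

def respond(question: str, draft_answer: str) -> str:
    confidence = score_answer(question, draft_answer)
    if confidence >= CONFIDENCE_THRESHOLD:
        return draft_answer  # confident enough to display to the user
    # Below the threshold: suppress the possibly hallucinated answer and
    # keep a human in the loop instead.
    return "This answer needs review; routing to a human expert."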

How are you getting your own people up to speed on AI development internally and with skills like prompt engineering? "When we met in June, we laid out rules, objectives, and training. Here’s this new kind of training. For my own organization, we laid out an AI training path: there’s an AI 101 and an AI 201. We have to do some work to curate that, and it’s a combination of training we developed from publicly available sources.

"The key is making sure the talent in the organization has a path to learning AI, and this isn’t something being done to them. This is a path to a journey. And, actually, we’ve broadened that to the whole company this past quarter. We’re going to hold an AI learning day for all 20,000-plus ServiceNow employees to get very familiar with AI, because we’re all going to be working alongside AI.

"In the future, we’ll be taking that down to a department and persona. As we craft our AI strategies, we have to marry that up with what this means for the human who’s now going to be working with genAI or traditional AI, and where maybe AI is now doing x-percent of their job, which can be discomforting; but that’s our job as leader: to bring the workforce along and give them the talent, tools and training they need to be successful in an AI-centric world."

AI is probably going to eliminate some tasks and, in some cases, jobs. Do you see AI having an impact on employee headcount? "Not really. We’ve had automation technologies for a long, long time. Go back to Excel and think about all the work analysts had to do before it. I haven’t seen anything showing there are fewer analysts in the workplace today. I think it’s going to allow people to do more interesting things, now that you can relegate those repetitive tasks to machines. And I’ve yet to find a workplace that believes it has a shortage of work to do. If we can relieve people of the 20% of their work that’s toil we can relegate to machines, I’m confident there’s 20% more work to do in these organizations."

What keeps you up at night when it comes to AI? "I think about whether I’m moving fast enough. That’s what keeps me up at night. We all intellectually know there’s a massive unlock of productivity, of efficiency, of efficacy, of experiences we can create. Are we moving fast enough? I know we have to pay attention to security and hallucinations; that’s a given.

"The biggest constraint on pace of change has typically not been security governance; it’s been the human capacity to absorb the change. So, am I creating the right conditions for us to absorb the change? Because I firmly believe companies that fully embrace AI are going to be the winners of tomorrow."

What are you using for a genAI platform? GPT, Llama, PaLM 2? Or are you mostly using your own homegrown LLMs or open-source models? "We have large language models in the ServiceNow platform. For a lot of the use cases, we’re using that. For certain use cases, we’re using Azure OpenAI. I think the industry will evolve a lot. We’re also dabbling with some open-source models for some data that’s highly confidential, where we don’t want to take risks. That’s what I’m hearing from most CIOs. Yes, they’re doing stuff with the large hyperscale models, but they’re also doing some things with open source, and they’re consuming it through platforms like ServiceNow."

Do you see genAI moving away from hyperscale models like GPT-4 and toward smaller LLMs that are domain-specific? "One hundred percent the latter, 100%. That’s what we’re developing within our platform: domain-specific LLMs, specific to use cases where there’s a high density of data already within our platform, and we’ve found the efficacy of those models is better than that of the more general-purpose models."

It costs a lot of money to train LLMs. How are you dealing with that? "Well, the domain-specific models require less compute power. We also have a partnership with Nvidia that’s been public knowledge. We’re using Nvidia software and chips to power our LLMs, and we’ve been pretty successful at it. We’re already in production with that. Again, that’s a domain-specific model, versus the economics of a general-purpose LLM."

