Prepare for generative AI with experimentation and clear guidelines

Figure out your most probable use cases and get the tech into users’ hands, with guardrails. Expect to adapt your business processes as the technology matures.


Generative AI is catching on extremely quickly in the corporate world, with particular attention from the C-suite, but it’s still new enough that there aren’t any well-established best practices for deployment or training. Preparing for the technology can involve several different approaches, from conducting pilot projects and lunch-and-learns to forming centers of excellence based around experts who teach other employees and act as a central resource.

IT leaders may remember how, in the past 10 years or so, some user departments ran off to the cloud and made their own arrangements for spinning up instances of software — then dumped the whole mess into IT’s lap when it became unmanageable. Generative AI can make that situation look like child’s play, but there are strategies for starting to manage it ahead of time.

“It’s remarkable how quickly this set of technologies has entered the consciousness of business leaders,” says Michael Chui, a partner at consulting firm McKinsey. “People are using this without it being sanctioned by corporate, which indicates how compelling this is.”

Ritu Jyoti, group vice president of worldwide AI research at IDC, says the drive to adopt generative AI is coming from the top down. “The C-suite has become voracious AI leaders. It’s now mainstream, and they’re asking tough questions of their direct reports.” Her bottom line: Embrace generative AI, set up a framework for how to use it, and “create value for both the organization and employees.”

Getting all that done won’t be easy. Generative AI comes with plenty of risks — including incorrect, biased, or fabricated results; copyright and privacy violations; and leaked corporate data — so it’s important for IT and company leaders to maintain control of any generative AI work going on in their organizations. Here’s how to get started.

Decide which use cases to pursue

Your first step should be deciding where to put generative AI to work in your company, both short-term and into the future. In a recent report, Boston Consulting Group (BCG) calls these your “golden” use cases: “things that bring true competitive advantage and create the largest impact” compared to using today’s tools. Gather your corporate brain trust to start exploring these scenarios.

Look to your strategic vendor partners to see what they’re doing; many are planning to incorporate generative AI into software ranging from customer service to freight management. Some of these tools already exist, at least in beta form. Offer to test these apps; doing so will teach your teams about generative AI in a context they’re already familiar with.

Much has already been written about the interesting uses of today’s wildly popular generative AI tools, including ChatGPT and DALL-E. And while it’s cool and fascinating to create new forms of art, most businesses won’t need an explainer of how to remove a peanut-butter-and-jelly sandwich from a VCR written in the style of the King James Bible anytime soon.

Instead, most experts suggest organizations begin by using the tech for first drafts of documents ranging from summaries of relevant research to information you can insert into business cases or other work. “Almost every knowledge worker can have their productivity increased,” says McKinsey’s Chui.

In fact, McKinsey ran a six-week generative AI pilot program with some of its programmers and saw double-digit increases in both code accuracy and the speed of coding.

Jonathan Vielhaber, director of information technology at contract-research firm Cognitive Research Corp. (CRC), is using ChatGPT to research security issues, such as how to test for different exploits, and to outline the advantages, challenges, and implementation guidelines for adopting a new password manager. He does some wordsmithing to make sure the result is in his own style, and then drops the information into a business case document.

This approach has saved him two of the four hours needed to create each proposal — “well worth” the $20/month fee, he says. Security exploits in particular “can get technical, and AI can help you get a good, easy-to-understand view of them and how they work.”
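
For teams that want to script this kind of drafting rather than work in a chat window, the general shape is simple. Below is a minimal sketch using OpenAI’s Python client; the model name and prompt are illustrative choices, not the setup CRC uses.

```python
# A minimal sketch of scripting a first draft with the OpenAI Python client.
# The model name and prompt are illustrative, not a specific company's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Draft a one-page business case for adopting a password manager. "
    "Cover the advantages, challenges, and implementation guidelines, "
    "in plain language an executive audience can follow."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do here
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,        # a low temperature keeps the draft conservative
)

print(response.choices[0].message.content)  # still needs human wordsmithing
```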

Let your users have at it

To help discern the applications that will benefit the most from generative AI in the next year or so, get the technology into the hands of key user departments, whether it’s marketing, customer support, sales, or engineering, and crowdsource some ideas. Give people time and the tools to start trying it out, to learn what it can do and what its limitations are. And expect both sides of that equation to keep changing.

Ask employees to apply generative AI to their existing workflows, making absolutely sure nobody uses any proprietary data or personally identifying information about customers or employees. Many generative AI tools feed the data you supply back into their large language models (LLMs) to learn from it, and at that point the data is out in the ether.
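
One practical guardrail is a pre-filter that screens prompts before they leave the company. The sketch below is hypothetical and deliberately simple; the patterns are examples, and a production setup would rely on a real data-loss-prevention tool rather than a handful of regexes.

```python
# A hypothetical pre-filter that blocks obviously sensitive strings before a
# prompt is sent to an external generative AI service. Illustrative only; a
# production deployment would lean on a real DLP tool, not a few regexes.
import re

BLOCKED_PATTERNS = {
    "email address":      re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN":             re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card":       re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project ID": re.compile(r"\bPROJ-\d{4}\b"),  # hypothetical scheme
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = check_prompt("Summarize jane.doe@example.com's notes on PROJ-1234")
if violations:
    raise ValueError(f"Prompt blocked, contains: {', '.join(violations)}")
```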

Track who’s doing what so teams can learn from each other and so you understand the bigger picture of what’s going on in the company.

Now that CRC’s Vielhaber is a paying ChatGPT customer, he plans to implement lunch-and-learn sessions in his company to help introduce generative AI to others and allow them to “see what the possibilities are.”

Start training your employees

Depending on what your long-term goals are for the technology, you might need to plan for more formal means of spreading the knowledge. IDC’s Jyoti is a big fan of the center-of-excellence approach, where a central group can train different employees or even embed in various business units to help them adopt generative AI most effectively.

New types of jobs might be needed down the road, from a chief AI officer to AI trainers, auditors, and prompt engineers who understand how to create queries tailored for each generative AI tool so you get the results you want.
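
To make “queries tailored for each generative AI tool” concrete: much of a prompt engineer’s output is reusable templates rather than one-off questions. Here is a simple illustrative pattern; the fields and wording are assumptions, not a standard.

```python
# An illustrative prompt template of the kind a prompt engineer might
# maintain. The role/task/constraints/format structure is a common
# convention, not a requirement of any particular tool.
TEMPLATE = """You are a {role}.
Task: {task}
Constraints: {constraints}
Respond in this format: {output_format}"""

prompt = TEMPLATE.format(
    role="security analyst writing for a non-technical audience",
    task="explain how a credential-stuffing attack works",
    constraints="under 200 words; define any jargon in one line",
    output_format="three short paragraphs",
)
print(prompt)
```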

Hiring generative AI experts will only get harder as demand grows. You will need to look to recruiters and job boards, attend AI-focused conferences, and build relationships with local colleges and universities. You might also decide it’s in your company’s best interest to create your own LLMs, fine-tune models already available from vendors, or host LLMs in-house to avoid security problems. All of those options will require more technical experts as well as additional infrastructure, according to the BCG report.
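
Hosting a model in-house can start smaller than it sounds. As a rough sketch, an open-weight model can be run locally with the Hugging Face transformers library; the model named below is one example of many, and larger models quickly run into the GPU and infrastructure costs the BCG report flags.

```python
# A rough sketch of running an open-weight model in-house with Hugging Face
# transformers. The model name is one example of many; larger models need
# real GPU capacity, which is part of the infrastructure cost noted above.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumption: swap in your model
    device_map="auto",  # spreads layers across GPUs; requires the accelerate package
)

result = generator(
    "Summarize the key risks of adopting generative AI in a regulated industry.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```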

Geetanjli Dhanjal, senior director of consultancy Yantra, is expanding her firm’s AI practice. She’s focusing on cross-skilling existing employees, hiring external resources, and putting recent college grads through “enablement” programs that include data science, web-based training, and workshops. She’s building out centers of excellence in both India and California and says that makes it “easier to hire local talent” in both regions.

And remember to talk to your employees about how their careers may change as a result. Even now, AI can conjure up fears about specific jobs going away. One analogy McKinsey’s Chui uses is to spreadsheets. “We still use them, but now we have analysts who are modeling data instead of calculating,” he says. Programmers using generative AI, for instance, can concentrate on improving code quality and ensuring security compliance.

When AI creates first drafts, humans are still needed to check and refine the content and to seek out new types of customer-facing strategies. “Track employee sentiment,” the BCG report advises. “Create a strategic workforce plan and adapt it as the technology evolves.”

It’s a two-way street, Dhanjal says. “We have to support employees with training, resources, and the right environment to grow.” But individuals also need to be open to change and to cross-skilling in new areas.

Be careful out there

As important as it is to jump in, it’s also critical to maintain some perspective about the risks of today’s tools. Generative AI is prone to a phenomenon known as “hallucinations,” where, in the absence of enough relevant data, the tool simply makes up information. Sometimes this can yield amusing results, but it’s not always obvious — and your corporate lawyers may not find it so funny.

Indeed, generative AI “can be wrong more than it’s right,” says Alex Zhavoronkov, CEO of Insilico Medicine, a pharmaceutical and AI firm that has based its business model around generative AI. But unlike most companies, Insilico uses 42 different engines to test the accuracy of each model. In the broader world, “you can sacrifice accuracy for snappiness” with some of today’s consumer-oriented generative AI tools, he says.
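
Insilico’s 42 validation engines are specific to molecule design, but the underlying idea of never trusting a single model’s output carries over to text workflows. Below is a minimal sketch of that cross-checking pattern; the stub models and majority-vote rule are illustrative, not Insilico’s pipeline.

```python
# A minimal sketch of the cross-checking idea: ask several models the same
# question and only accept an answer a clear majority agrees on. The models
# and the agreement rule here are illustrative, not Insilico's pipeline.
from collections import Counter

def cross_check(question: str, models: list) -> str | None:
    """Return the majority answer, or None if the models disagree too much."""
    answers = [model(question).strip().lower() for model in models]
    answer, votes = Counter(answers).most_common(1)[0]
    # require a strict majority before trusting the result
    return answer if votes > len(models) // 2 else None

# 'models' would be callables wrapping different LLM endpoints; stubbed here.
models = [lambda q: "Paris", lambda q: "paris", lambda q: "Lyon"]
print(cross_check("What is the capital of France?", models))  # -> "paris"
```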

In February, Insilico received clearance from the US Food and Drug Administration to begin Phase 1 trials of a medication based on an AI-generated molecule to treat a rare lung disease. The company reached that milestone in under 30 months and spent around $3 million, versus traditional costs of around 10 times that amount, Zhavoronkov says. The economic benefits of using generative AI mean the company can target other rare illnesses, also called “orphan” diseases, where most pharma companies have been reluctant to invest, as well as conditions affecting broader segments of society.

The company uses its own highly technical tools, in the hands of chemists and biophysicists and other experts. But interestingly, “we’re still cautious” about using generative AI for text generation because of inaccuracy and intellectual property issues, Zhavoronkov explains. “I want to see Microsoft and Google introduce this into their software suites before I start relying on it more broadly,” he says.

Vendors and researchers are working on ways to identify and bar copyrighted content from AI results, or at least alert users about the sources of the results, but it’s very early going. And that’s why, at least until the tools improve, humans still very much need to be in the loop as auditors.

Get your guidelines on

In this world, ethical AI is more important than ever, says Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute. He also serves on Microsoft’s CSE AI Ethics Review Board and as an ethical AI expert for the Boston Consulting Group.

“Responsible AI is an accelerant to give you the ability to experiment safely and with confidence,” he explains. “It means you’re not constantly looking over your shoulder,” and it’s well worth the time to develop controls about what employees may and may not do.

“Set some broad guardrails,” he suggests, based around corporate values and goals. Then “capture those into enforceable policies” that you communicate to staff.
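
What “enforceable policies” can look like in practice is written rules paired with an automated check at the point of use. Here is a hypothetical sketch; the policy values are examples, not a standard.

```python
# A hypothetical sketch of turning broad guardrails into an enforceable,
# point-of-use policy check. The policy values are examples only.
POLICY = {
    "approved_tools": {"chatgpt", "internal-llm"},
    "forbidden_data": {"customer PII", "source code", "unreleased financials"},
    "human_review_required": True,
}

def is_allowed(tool: str, data_categories: set[str]) -> bool:
    """Check a proposed AI use against the corporate policy."""
    if tool.lower() not in POLICY["approved_tools"]:
        return False
    return not (data_categories & POLICY["forbidden_data"])

print(is_allowed("chatgpt", {"public research summaries"}))  # True
print(is_allowed("chatgpt", {"customer PII"}))               # False
```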

Going forward, creating guidelines for AI will be on his agenda, CRC’s Vielhaber says. The company is in the process of rewriting its IT- and security-related policies anyway, and AI will be a piece of that.

“I think we’ve crossed a threshold in AI that will open up a lot of things in the next few years,” he says, “and people will come up with really ingenious ways to use it.”

