When San Francisco startup OpenAI launched ChatGPT on Nov. 30, 2022, the technology landscape was shaken to its core — and artificial intelligence (AI) rapidly moved from being a fringe idea to mainstream adoption.
“We spent a couple of decades learning how to talk to machines. What changed in November 2022 is that machines learned how to talk to us,” said Cisco CIO Fletcher Previn. “By December, it was clear [ChatGPT] would have a significant impact, and for something that’s been around a year, it continues to amaze and terrorize.”
Like other enterprises, Cisco believes generative AI (gen AI) tools such as ChatGPT will eventually be embedded into every back-end IT system and external product.
"ChatGPT's explosive global popularity has given us AI's first true inflection point in public adoption," said Ritu Jyoti, group vice president of Worldwide Artificial Intelligence and Automation Market Research at IDC. "As AI and automation investments grow, focus on outcomes, governance, and risk management is paramount.”
AI itself is not new. Companies have been investing heavily in predictive and interpretive AI for years; consider Microsoft Outlook and its AutoComplete feature. But the release of GPT-3.5 captured the world's attention and triggered a surge of investment in genAI generally and in the large language models (LLMs) that underpin the various tools.
In the simplest terms, LLMs are next-word, next-image, or next-code prediction engines. For example, ChatGPT (short for "Chat Generative Pre-trained Transformer") is built atop the GPT LLM, an algorithm that processes natural-language input and predicts the next word based on what it's already seen. It then predicts the next word, and the next, and so on until its answer is complete.
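To make that prediction loop concrete, here is a deliberately tiny sketch in Python. It is not a real LLM — a bigram word-frequency table stands in for the billions of learned parameters — and the `corpus`, `followers`, and `generate` names are invented for illustration. But the loop structure is the same: predict the most likely next word, append it, and repeat.

```python
from collections import Counter, defaultdict

# Toy stand-in for LLM training: count which word follows which
# in a tiny "training corpus."
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(prompt: str, max_words: int = 5) -> str:
    """Autoregressive generation: repeatedly predict the next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = followers.get(words[-1])
        if not candidates:          # no known continuation: stop early
            break
        # Greedy decoding: always pick the most frequent next word.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

A real model replaces the frequency table with a transformer network that conditions on the entire preceding context (not just the last word), but the generate-one-token-at-a-time loop is the same idea.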
AI’s adoption journey is not unique. Technologists such as Previn liken it to the early days of cloud computing, which spurred similar discussions and debates about security, privacy, data ownership, and liability.
“People were saying no bank will ever put their data on a public cloud, and no enterprise will ever host their email on the Internet,” Previn said. “I think there was a lot of similar angst around what it means to put your crown-jewel data assets in someone else’s data center.”
Full speed ahead, with problems
Most enterprises are still experimenting with ChatGPT and other genAI tools, trying to figure out where their return on investment will be. And most remain uncertain about how to use the tools and how to benefit from them, according to Avivah Litan, a distinguished vice president analyst with Gartner Research.
“They are seriously worried that they will fall behind if they don’t adopt these new technologies, but are not adequately prepared to adopt it,” Litan said. “Organizational readiness is severely lacking in terms of skills, risk and security management, and overall strategy.”
Along with the promise of automating mundane tasks, creating new forms of digital content, and increasing workplace productivity, there was a palpable apprehension across industry and academia when ChatGPT burst onto the scene. In the months after its launch, some of the biggest names in technology publicly warned the world it could be the beginning of the end of humankind; they urged a sharp pause in ChatGPT's development.
Tech luminaries such as Apple co-founder Steve Wozniak, Microsoft CTO Kevin Scott, and even OpenAI CEO Sam Altman joined more than 33,000 signatories of an open letter warning of societal-scale risks from genAI. While the letter had little impact on AI's march, it did spur government initiatives to rein in the technology. The EU Parliament, for instance, passed the AI Act.
“The bad guys and malicious nation states will also use these technologies to attack freedom and foster their own agendas of crime, autocracy and harm. In the end, ChatGPT and genAI will make the world more extreme — from both a negative and a positive point of view,” Litan said.
In the US, President Joseph R. Biden Jr. issued two executive orders demanding, among other things, that federal agencies fully vet generative AI applications for any security, privacy, and safety issues. But most other efforts have amounted to little more than a patchwork of regional or state rules aimed at protecting privacy and civil rights.
To date, no federal legislation aimed at controlling AI has been passed.
ChatGPT and the other AI platforms are "very immature" in their development and highly flawed, which is why regulation is needed, according to Frida Polli, a technology ethicist and Harvard- and MIT-trained neuroscientist.
For example, earlier this month, global consultancy KPMG lodged a complaint about factually inaccurate information generated by Google's Bard AI tool; Bard produced case studies that never occurred, which Polli cited as an example of why structural reform is needed.
“Instead of trying to fix all the problems with generative AI, people are simply plowing ahead to make the technology more powerful for a variety of reasons. It’s that ‘move fast and break things’ philosophy that has shown itself to be problematic,” Polli said.
The LLMs that power ChatGPT, Bard, Claude, and other genAI platforms have also been accused of ingesting copyrighted art, books, and video from the internet — all fodder for training the models. Douglas Preston and 16 other authors, including George R.R. Martin, Jodi Picoult, and Jonathan Franzen, accused OpenAI of gobbling up their works without permission and have sued the company for copyright infringement.
Technologists are helping artists fight back against what they see as intellectual property (IP) theft by genAI tools, whose training algorithms automatically scrape the internet and other sources for content. One weapon is the "data poisoning attack," which manipulates LLM training data to introduce unexpected behaviors into machine learning models. One such tool, called Nightshade, uses "cloaking" to trick a genAI training algorithm into believing it's ingesting one thing when in reality it's ingesting something completely different.
Implicit biases have also been found in ChatGPT and other genAI tools. Sayash Kapoor, a Princeton University PhD candidate, tested ChatGPT and found gender biases even when a person's gender was not explicitly stated; the model apparently inferred it from cues such as pronouns. Those biases can carry through into hiring platforms powered by genAI, and states and cities have responded with laws against AI hiring bias.
New York City passed Local Law 144, also known as the Bias Audit Law, which requires hiring organizations to inform job applicants when AI algorithms are being used to automate the process; those companies must also have a third party audit the software to check for bias.
Will generative AI eliminate your job?
There were also fears that ChatGPT and similar tools would eliminate enormous swaths of the job market by automating many tasks. But most analysts, industry experts, and IT leaders have scoffed at the threat of job losses, saying instead that genAI has already begun assisting workers by tackling mundane tasks, freeing them up to perform more creative, knowledge-based work.
“The only scenario where AI takes away all jobs is one where it gets released without any human oversight,” Polli said. “I think we’re all going to need to know how to use it in order to be more successful at our jobs; that much is clear. You’re going to have to learn a new technology, just like you had to learn email or how to use the internet or a smartphone. But I do not think it’s going to be this job destroyer.”
Cliff Jurkiewicz, vice president of Global Strategy at Phenom, an AI-powered talent acquisition platform, said personal assistants that run on genAI will become as routine as a phone.
“It’s going to know everything about us the more we feed it data. Since we live in a task replacement ecosystem, a co-bot will extend well beyond setting calendar appointments the way Siri and Alexa do now, by interconnecting all the tasks in our lives and managing them,” Jurkiewicz said.
Cisco’s Previn agreed, saying it has become clearer over the past year that generative AI will be a teammate that “sits on your shoulder” and not an assassin killing off jobs. Believing that any technology will completely eliminate jobs, he argued, is a fallacy based on the concept of a finite labor pie.
“I believe it will be a force multiplier for being more productive, for offloading menial tasks, and essentially the pie gets bigger,” Previn said. “Twenty years ago, there was no such thing as a mobile app developer. Technology creates these new opportunities and roles, and I think that’s what we’re starting to see happen with AI.”
In fact, job postings demanding genAI-related skills have soared 1,848% in 2023 as companies work to develop new AI applications, according to Lightcast’s recent labor market analysis.
In 2022, there were only 519 job postings calling for generative AI knowledge, Lightcast data shows. So far in 2023, since the debut of ChatGPT, there have been 10,113 genAI-centric postings, and more than 385,000 postings for all forms of AI roles, according to Lightcast.
The top genAI employers include side hustle app Fud, educational company Chegg, Meta, Capital One, and Amazon, according to Lightcast. “This shows the wide range of organizations working to integrate this technology into their services,” Lightcast Senior Economist Layla O’Kane said.
“Adding a new skill to job descriptions is often a sign that a company has moved from experimenting with a new technology to making a real strategic commitment to it,” O’Kane said. “Right now, a lot of organizations are still in the experimental stage. But as they make key business decisions, we may well see this list grow.”
The natural progression of a truly disruptive technology such as AI is the creation of brand-new job roles, Jurkiewicz said.
Those new roles will include:
- AI Ethicist (focused on using the tools ethically)
- AI Curator
- Policy Maker & Legal Adviser
- Trainer (prompt engineer)
- Auditor
- Interpreter (someone who translates how tech is being used)
ChatGPT’s surprising use cases
One of the roles Previn never believed AI would touch is that of the software developer, a craft he regarded as a type of art form requiring unique creative abilities. ChatGPT, however, has proved adept at creating code that addresses corporate data hygiene and security, and it can reuse code to build new apps.
A study by Microsoft showed that the GitHub Copilot tool, which is powered by OpenAI's GPT models, can help developers code up to 55% faster — and more than half of all code now being checked into GitHub was aided by AI in its development. That number is expected to jump to 80% of all code checked into GitHub within the next five years, according to GitHub CEO Thomas Dohmke.
“That’s very interesting, because historically there was no way to compress software development timelines,” Previn said. “Now, it turns out you can get a significant acceleration in velocity by helping developers with things like Copilot for code readings, code hygiene, security, commenting; it’s really good at those things.”
Knowing what code has or has not been touched by AI, however, will be critical to trust in the future, Previn said. It should be required that any code generated by AI be watermarked and have at least two human beings review it. “You want to have a human being in the loop on these things,” he said. (By “watermarking,” Previn was referring to either including metadata or simply stating in a code snippet that AI assisted in its creation.)
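As an illustration of the comment-style "watermark" Previn describes, here is a hypothetical Python helper. The header format, the `watermark` function, and the two-reviewer check are assumptions made for this sketch — there is no established standard for marking AI-assisted code:

```python
def watermark(code: str, model: str, reviewers: list[str]) -> str:
    """Prepend a provenance header to an AI-assisted code snippet.

    Hypothetical policy sketch: the snippet must name the model that
    helped generate it and at least two human reviewers.
    """
    if len(reviewers) < 2:
        raise ValueError("policy requires at least two human reviewers")
    header = (
        f"# AI-assisted: generated with {model}\n"
        f"# Human reviewers: {', '.join(reviewers)}\n"
    )
    return header + code

snippet = watermark(
    "def add(a, b):\n    return a + b\n",
    model="example-llm",            # placeholder model name
    reviewers=["alice", "bob"],     # placeholder reviewer names
)
print(snippet)
```

A metadata-based variant would record the same provenance in a sidecar file or commit trailer rather than in the source text itself; either way, the point is that the AI's involvement is recorded where a human reviewer will see it.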
GenAI’s ability to develop or engineer software also changed Cisco’s internal IT system and external product strategy. Since last November, Previn’s IT department has developed a more “fully formed strategy” in terms of AI as a foundational infrastructure.
Internally, that means using AI to find productivity enhancements, including areas such as automated help desk functions. Externally, Cisco now thinks in terms of how to “bake AI into every product portfolio and augment the entire digital estate we’re managing in digital IT.
“Then how do we better support our customers, shorten the time it takes for customers to get answers?” Previn said. “Then, [it's important to have] the policies, security, and legal [guardrails] in place to be able to safely adopt and embrace AI capabilities other vendors are rolling out into other people’s tools. You can imagine all the SaaS vendors…, everybody’s on this journey. But are we set up to take advantage of this journey in a way that’s compatible with our responsible AI policies?”
Beyond code generation, genAI has been quickly embraced in the field of testing tools and automated software quality. “We are also seeing the convergence of generative AI and predictive AI usage,” IDC’s Jyoti said.
Human oversight remains critical
Over the next three or so years, genAI will need to significantly reduce and limit hallucinations and other unwanted outputs so organizations can reliably use it for decision making and processes. Its application in the real world needs to mature into what Litan called “game-changing use cases,” as opposed to just turning to genAI to try to achieve higher efficiency and productivity.
Litan believes multimodal capabilities will dramatically expand. (Multimodal AI can process, understand and generate outputs for more than one type of data.)
“The bottom line is that you can’t just put ChatGPT or genAI on autopilot," she said. "You need to screen responses for hallucinations, inaccurate outputs, mis- and disinformation. And to do that, you need the tooling to highlight suspect transactions and to manually investigate them."
It is a double-edged sword, however, as new tools and processes represent added expenses that subtract from any potential corporate ROI. But without that type of exception screening, organizations will be steered into faulty decision making, processes, and communications.
Another concern for the future of genAI is self-awareness, or what is known as artificial general intelligence — when AI no longer needs human input to think.
Phenom’s Jurkiewicz believes artificial general intelligence will start to show human-like, “almost infantile responses to experiences” in the near future. “We’re going to start to see some of that next year, as more people use the technology. It’s an undeniable endpoint that this will happen sooner than later.
“Previously, AI was blocked from answering certain questions because it was programmed by a human not to respond,” Jurkiewicz continued. “AI will advance to the point that it can make a decision on what it thinks is appropriate for it to respond to. Humans can't control it, unless they shut it off.”