OpenAI launches ‘enterprise-grade’ ChatGPT

OpenAI said the newest version of its ChatGPT service comes with enterprise-grade security and privacy features.


OpenAI today unveiled ChatGPT Enterprise, a version of its generative AI service that allows companies to decide how the model is trained and how long it can store corporate data used for that purpose.

The new ChatGPT service also offers increased security and privacy, encrypting data at rest (AES-256) and in transit (TLS 1.2+) and supporting Security Assertion Markup Language (SAML) single sign-on for enterprise authentication.

OpenAI also said it has opened up the bandwidth for organizations to connect to GPT-4, the large language model (LLM) on which ChatGPT is based and trained. The company plans to offer “unlimited higher-speed” access to the LLM and 32k-token context windows, allowing inputs and follow-ups four times longer than previously available. (One token is approximately four characters or 0.75 words. As a point of reference, the collected works of Shakespeare are about 900,000 words, or 1.2 million tokens, according to OpenAI.)

“This marks another step towards an AI assistant for work that helps with any task, protects your company data, and is customized for your organization,” OpenAI said in a blog post. “You own and control your business data in ChatGPT Enterprise. We do not train on your business data or conversations, and our models don’t learn from your usage. ChatGPT Enterprise removes all usage caps and performs up to two times faster."

OpenAI also offers a “tokenizer tool” through which users can see how many tokens a given piece of text produces.
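For a programmatic estimate, OpenAI’s open-source tiktoken library exposes the same encodings its models use. The snippet below is a minimal sketch, assuming tiktoken is installed and that the cl100k_base encoding used by GPT-4-family models is the right one for your case; the sample sentence is purely illustrative.

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by the GPT-4 family of models.
encoding = tiktoken.get_encoding("cl100k_base")

text = "ChatGPT Enterprise supports 32k-token context windows."
tokens = encoding.encode(text)

print(f"{len(tokens)} tokens for {len(text)} characters")
# Rule of thumb from OpenAI: roughly 4 characters or 0.75 words per token,
# so a 32k-token window covers on the order of 24,000 words of input and output.
```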

The San Francisco-based start-up also said the Enterprise version of ChatGPT has a new admin console with bulk member management, domain verification, and an analytics dashboard for usage insights. Enterprise users also get unlimited access to advanced data analysis, the ChatGPT feature previously known as Code Interpreter.

“This feature enables both technical and non-technical teams to analyze information in seconds, whether it's for financial researchers crunching market data, marketers analyzing survey results, or data scientists debugging an ETL script,” OpenAI said. (ETL stands for extract, transform, and load.)

ChatGPT and other generative AI models have been trained to understand natural language and code and can provide text outputs in response to user inputs or queries. The inputs to GPTs are also referred to as "prompts." Designing prompts is essentially how users "program" a GPT model, usually by providing instructions or some examples of how to successfully complete a task; the process is known as prompt engineering.
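To make the idea of “programming by prompt” concrete, here is a minimal sketch using OpenAI’s Python SDK and the Chat Completions API; the model name, system instructions, and example messages are illustrative placeholders rather than anything specified in the ChatGPT Enterprise announcement.

```python
# pip install openai  (the SDK reads OPENAI_API_KEY from the environment)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model your plan provides
    messages=[
        # The system message carries the instructions -- the prompt-engineering part.
        {"role": "system", "content": "You summarize meeting notes as three short bullet points."},
        # One worked example showing the model how to complete the task.
        {"role": "user", "content": "Notes: Q3 launch slipped two weeks; hiring freeze lifted."},
        {"role": "assistant", "content": "- Q3 launch delayed two weeks\n- Hiring freeze lifted\n- No other changes"},
        # The actual input.
        {"role": "user", "content": "Notes: security review passed; pricing page still blocked on legal."},
    ],
)

print(response.choices[0].message.content)
```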

GPTs can be used across a wide variety of tasks, including content or code generation, summarization, conversation, creative writing, and more. 

Most LLMs, such as OpenAI’s GPT-4, are pretrained as next-word or content-prediction engines; that is how most businesses use them out of the box, as it were. And while LLM-based chatbots have produced their share of errors, pretrained LLMs work relatively well at producing mostly accurate and compelling content that, at the very least, can serve as a jumping-off point for other tasks.

Many industries, however, require more customized LLMs, those that understand specific jargon and produce content tailored for their users. LLMs for the healthcare industry, for instance, might need to process and interpret electronic health records (EHRs), suggest treatments, or create a patient healthcare summary based on physician notes or voice recordings. An LLM tuned to the financial services industry can summarize earnings calls, create meeting transcripts, and perform fraud analysis to protect consumers.
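The article doesn't say how that kind of industry tuning is done, but one common route with OpenAI's API is fine-tuning a base model on example conversations. The sketch below assumes a prepared JSONL file of chat examples (finance_examples.jsonl is a hypothetical name) and a model that supports fine-tuning, which at the time of writing meant gpt-3.5-turbo rather than GPT-4.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of the JSONL file holds one training conversation, for example:
# {"messages": [{"role": "system", "content": "You are a financial-services assistant."},
#               {"role": "user", "content": "Summarize this earnings-call excerpt: ..."},
#               {"role": "assistant", "content": "Revenue grew 8% year over year ..."}]}
training_file = client.files.create(
    file=open("finance_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job against the uploaded file.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)
```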

OpenAI customers will be able to securely extend ChatGPT’s knowledge with a company's data by connecting it to the applications they already use, but that feature is still in development, according to a spokesperson. In the meantime, OpenAI is including free API credits in ChatGPT Enterprise pricing so customers who need a fully custom solution can build one against its APIs.
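Until those connections are available, a custom solution built on the API can ground answers in company data by retrieving relevant internal text and passing it into the prompt. The sketch below stubs out the retrieval step; retrieve_company_context and the policy snippet are hypothetical, and this is only one way such an integration might look.

```python
from openai import OpenAI

client = OpenAI()

def retrieve_company_context(question: str) -> str:
    """Stand-in for a real lookup against internal documents or applications.
    In practice this might query a search index, database, or vector store."""
    return "Expense policy: reimbursements over $500 require VP approval."

question = "Who needs to approve a $700 expense?"
context = retrieve_company_context(question)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using only the provided company context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)

print(response.choices[0].message.content)
```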

ChatGPT Enterprise is available now; OpenAI did not detail pricing, which will depend on each organization's needs.