Last Updated on November 18, 2023 by Team Yantra
To better serve businesses, OpenAI has introduced ChatGPT Enterprise, a version of its generative AI service that lets organizations control how the model is trained and how long company data used for that purpose is kept in storage.
The newest version of the chatbot promises improved data privacy and security (a weakness of prior versions), faster response times, and more options for personalizing content. OpenAI also claims this version is particularly effective at data analysis, which has been one of the platform’s strongest selling points.
The new ChatGPT service also provides greater security and privacy by encrypting data in transit (TLS 1.2+) and at rest (AES-256), and by supporting single sign-on for business authentication via the Security Assertion Markup Language (SAML).
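For readers unfamiliar with what encryption at rest looks like in practice, the short Python sketch below encrypts a record with AES-256 in GCM mode using the open-source cryptography package. It is a generic illustration of the technique, not OpenAI’s implementation, and the function names are purely illustrative.

```python
# Illustrative only: generic AES-256-GCM encryption of a record before storage.
# This is a sketch of the technique described above, not OpenAI's internal code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key; held in a key-management service in practice
aesgcm = AESGCM(key)

def encrypt_record(plaintext: bytes, associated_data: bytes = b"") -> bytes:
    nonce = os.urandom(12)                  # unique 96-bit nonce per record
    return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)

def decrypt_record(blob: bytes, associated_data: bytes = b"") -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, associated_data)

print(decrypt_record(encrypt_record(b"confidential business prompt")))
```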
Moreover, according to OpenAI, GPT-4, the large language model (LLM) on which ChatGPT is built and trained, now has more bandwidth available for organizations connecting to it. The company intends to provide “unlimited higher-speed” access to the LLM, along with a 32k-token context window for inputs and follow-ups, four times longer than what is currently offered. (A token is roughly 4 characters, or 0.75 words; according to OpenAI, the collected works of Shakespeare total around 900,000 words, or 1.2 million tokens.)
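To see how those token figures translate in practice, the short sketch below counts tokens with OpenAI’s open-source tiktoken tokenizer. The encoding name and sample text are illustrative; the point is simply the rough words-to-tokens ratio cited above.

```python
# A minimal sketch of how token counting works, using OpenAI's "tiktoken" library.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")   # encoding family used by GPT-4-class models

text = "To be, or not to be, that is the question."
tokens = encoding.encode(text)

print(len(text.split()), "words ->", len(tokens), "tokens")
# By the ~0.75 words-per-token rule of thumb, a 32k-token context window
# holds roughly 32,000 * 0.75 = 24,000 words of prompt plus conversation history.
```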
“This marks another step towards an AI assistant for work that helps with any task, protects your company data, and is customized for your organization,” OpenAI said in a blog post. “You own and control your business data in ChatGPT Enterprise. We do not train on your business data or conversations, and our models don’t learn from your usage. ChatGPT Enterprise removes all usage caps and performs up to two times faster.”
The Enterprise edition of ChatGPT includes Advanced Data Analysis (previously known as Code Interpreter), along with a new admin portal offering bulk member management, domain verification, and an analytics dashboard for usage insights. In response to user inputs or requests, ChatGPT and other generative AI models produce text outputs, having been trained to understand both natural language and computer code. Inputs to GPT models are known as “prompts,” and the act of crafting prompts, called prompt engineering, is essentially how users “program” a GPT model, typically by giving it instructions or a few examples of how to complete a task successfully.
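As a concrete illustration of prompt engineering, the hedged sketch below “programs” a model with a system instruction plus a few worked examples (few-shot prompting) using the OpenAI Python SDK. The model name, task, and prompt contents are placeholders rather than anything specific to ChatGPT Enterprise.

```python
# Few-shot prompting sketch with the OpenAI Python SDK (v1+).
# Model name and example task are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The instruction that defines the task
        {"role": "system", "content": "Classify each support ticket as 'billing', 'technical', or 'other'."},
        # A few examples showing how to do the task successfully
        {"role": "user", "content": "I was charged twice this month."},
        {"role": "assistant", "content": "billing"},
        {"role": "user", "content": "The dashboard won't load in Safari."},
        {"role": "assistant", "content": "technical"},
        # The actual input to classify
        {"role": "user", "content": "How do I update my company logo?"},
    ],
)
print(response.choices[0].message.content)
```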
ChatGPT Enterprise pricing will also include free credits for OpenAI’s API, aimed at organizations that want a fully customized version of GPT-4 rather than the more general-purpose GPT-4 LLM. OpenAI did not provide pricing details, which may vary based on enterprise requirements, but ChatGPT Enterprise is available now.
According to investor reports, CEO Sam Altman said the business intends to grow revenue to $200 million this year and $1 billion next year, and ChatGPT Enterprise likely figures into those ambitions.
Future plans for ChatGPT Enterprise, according to OpenAI, include a ChatGPT Business option for smaller teams, the ability for businesses to connect apps to ChatGPT Enterprise, “more powerful” and “enterprise-grade” versions of Advanced Data Analysis and web browsing, as well as tools for data analysts, marketers, and customer support.