OpenAI has announced a turbocharged version of GPT-4, the large language model (LLM) that currently powers its popular AI chatbot, ChatGPT. The new GPT-4 Turbo model is even more capable than the original GPT-4, released in March this year, OpenAI says. The announcement was made today at the company’s first-ever “DevDay” conference, where CEO Sam Altman also announced that users will be able to create their own custom versions of ChatGPT.
The new GPT-4 Turbo, currently available in preview via the API, has a 128K context window. According to OpenAI, this allows the model to “fit the equivalent of about 300 pages of text in a single prompt.” That is a significant bump over its predecessor, which comes in two versions with 8K and 32K context windows.
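To give a sense of how the preview model is reached through the API, here is a minimal sketch using OpenAI’s Python SDK. The model identifier (gpt-4-1106-preview was the preview name at launch), the file name, and the prompt are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: sending a long document to the GPT-4 Turbo preview model.
# The model name, file, and prompt below are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical long document that fits inside the 128K-token window
long_document = open("annual_report.txt").read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview identifier at launch
    messages=[
        {"role": "system", "content": "Summarize the document in five bullet points."},
        {"role": "user", "content": long_document},
    ],
)
print(response.choices[0].message.content)
```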
The turbocharged model has been trained on data up to April 2023. It also retains audio capabilities, with six voices to choose from, supports image prompts, and is integrated with the DALL-E 3 text-to-image generator.
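For context, a minimal sketch of how the text-to-speech and DALL-E 3 endpoints are typically called from OpenAI’s Python SDK; the model names, voice choice, prompt, and file path below are illustrative assumptions, not details from the announcement.

```python
# Illustrative sketch of the text-to-speech and DALL-E 3 endpoints.
from openai import OpenAI

client = OpenAI()

# Text-to-speech: "alloy" is one of the six preset voices.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo was announced at OpenAI's DevDay conference.",
)
speech.stream_to_file("announcement.mp3")

# DALL-E 3 image generation (prompt and size chosen for illustration).
image = client.images.generate(
    model="dall-e-3",
    prompt="A retro-futuristic turbocharged engine made of glowing text",
    size="1024x1024",
)
print(image.data[0].url)
```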
The latest version of GPT-4 can accept images as inputs. This mode, called “GPT-4 Turbo with vision,” can be used to generate captions, analyze real-world images in detail, and read documents containing figures.
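A hedged sketch of what an image-captioning request might look like through the Chat Completions API follows; the model name gpt-4-vision-preview (the vision-enabled preview identifier at launch), the example image URL, and the prompt text are assumptions for illustration.

```python
# Minimal sketch: asking the vision-enabled preview model to caption an image.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-enabled Turbo preview at launch
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Write a one-sentence caption for this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
```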
Notably, GPT-4 Turbo is cheaper to run than GPT-4: input tokens are priced at $0.01 per 1,000 tokens, a third of GPT-4’s price, and output tokens at $0.03 per 1,000 tokens, half of GPT-4’s price. OpenAI is also reducing prices for other models, making GPT-3.5 Turbo roughly 3x cheaper than the previous 16K model. This comes as OpenAI doubles the rate limits for all paying GPT-4 customers.
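To make the comparison concrete, here is a back-of-the-envelope cost calculation based on the per-1,000-token prices quoted above; the model labels, token counts, and the helper function are illustrative assumptions, not an official pricing tool.

```python
# Rough cost comparison using the per-1,000-token prices quoted above.
PRICES = {                      # USD per 1,000 tokens
    "gpt-4-8k":    {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single request."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# Example: a request with 6,000 prompt tokens and 1,000 completion tokens
print(cost("gpt-4-8k", 6_000, 1_000))     # 0.24 USD
print(cost("gpt-4-turbo", 6_000, 1_000))  # 0.09 USD
```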