OpenAI announces GPT-4 Turbo, can perform more complex tasks in one prompt


OpenAI has announced GPT-4 Turbo, an improved version of its AI model GPT-4, at its own developer conference. The model is available in preview to paid developers via an API and will become more widely available ‘in the coming weeks’.

GPT-4 Turbo comes in two versions: one that handles text only and one that also understands images alongside text. In addition, the model’s knowledge now extends to April 2023. When GPT-4 was released, its knowledge cutoff was September 2021; it was later extended to January 1, 2022 and recently to April 2023. GPT-4 Turbo should also be able to perform more complex tasks based on a single prompt.

Furthermore, GPT-4 Turbo has a 128K context window, meaning it can take in far more text at once and provide better thought-out answers. For comparison, GPT-4 is available in an 8K and a 32K context window version.

OpenAI says GPT-4 Turbo is cheaper for developers, with input priced at $0.01 per 1,000 tokens. With GPT-4 this was still $0.03 per 1,000 tokens, which makes the new GPT version a third of the price of its predecessor. For image input, the price of GPT-4 Turbo depends on the image size: an image of 1080×1080 pixels costs $0.00765, according to OpenAI.
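To put those per-token prices in perspective, here is a small back-of-the-envelope calculation. It is a sketch based only on the input-token prices quoted above; output tokens are billed at different rates, which this does not cover.

```python
# Rough input-cost comparison using the per-1,000-token prices quoted above.
# Output-token pricing differs and is not modeled here.

GPT4_TURBO_INPUT_PER_1K = 0.01  # USD per 1,000 input tokens (GPT-4 Turbo)
GPT4_INPUT_PER_1K = 0.03        # USD per 1,000 input tokens (GPT-4)

def input_cost(tokens: int, price_per_1k: float) -> float:
    """Cost in USD for a given number of input tokens."""
    return tokens / 1000 * price_per_1k

# A prompt that fills GPT-4 Turbo's full 128K context window:
turbo_cost = input_cost(128_000, GPT4_TURBO_INPUT_PER_1K)
gpt4_rate_cost = input_cost(128_000, GPT4_INPUT_PER_1K)

print(f"128K input tokens on GPT-4 Turbo: ${turbo_cost:.2f}")      # $1.28
print(f"Same token count at GPT-4 rates:  ${gpt4_rate_cost:.2f}")  # $3.84
print(f"Ratio: {gpt4_rate_cost / turbo_cost:.0f}x")                # 3x
```

In other words, filling the entire new 128K context window costs about $1.28 in input tokens, where the same volume at GPT-4 rates would cost $3.84.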

At its first developer conference, OpenAI also announced GPTs. These are customized versions of ChatGPT that users can create without any coding required. OpenAI mentions assistants such as a ‘writing coach’, ‘sous chef’ or ‘math teacher’ as examples. There will also be a GPT Store ‘later this month’ where makers of GPTs can generate income.
