ChatGPT has been the leading AI platform since its launch in November 2022. Over time, new models and regular UI updates have been offered, but has OpenAI made any behind-the-scenes upgrades to GPT-4o that have made it more responsive?
I've been spending a lot of time working with Anthropic's Claude recently. It's no secret that I'm a big fan of the artifacts and how the Claude model responds. It's often more verbose in its output, understands what I'm asking from a single prompt, and is faster thanks to Sonnet 3.5.
However, I regularly switch platforms, including Llama on Groq, Google Gemini, and the many models available on Poe. I recently noticed that ChatGPT has become as performant as Claude was when Sonnet 3.5 launched, especially for longer tasks.
Just last week, I built a full iOS app in an hour using ChatGPT, rewrote multiple letters, and created shot-by-shot outlines for AI video projects, all without ChatGPT breaking a sweat. It handles every request without stumbling, is noticeably faster, and feels more creative.
What has changed in ChatGPT?
While OpenAI’s new models like GPT-4o get all the attention, the company often releases an improved version of an existing model that can have a major impact on performance. These models get little attention outside of developer circles because the labeling doesn’t change in ChatGPT.
Last week, GPT-4o got an upgrade and a new model called GPT-4o-2024-08-06 was released to developers. Its main promise was cheaper API calls and faster responses, but each new update also brings overall improvements through fine-tuning.
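For context, dated snapshots like this matter because developers can either pin a specific version or use the generic alias, which OpenAI can repoint at newer snapshots over time. A minimal sketch of the difference, assuming the standard Chat Completions request shape (the helper function here is hypothetical, and no network call is made):

```python
import json

def build_chat_request(prompt: str, pin_snapshot: bool = True) -> dict:
    # Pinning the dated snapshot locks behavior to that release;
    # the "gpt-4o" alias may silently move to newer snapshots.
    model = "gpt-4o-2024-08-06" if pin_snapshot else "gpt-4o"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Inspect the request body a pinned call would send:
print(json.dumps(build_chat_request("Summarize this letter."), indent=2))
```

ChatGPT users never see this choice: the app decides which snapshot serves their requests, which is why an upgrade like this can land silently.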
These updates have likely also been rolled out to ChatGPT – after all, it makes sense for OpenAI to use the cheapest-to-run version of GPT-4o in its public chatbot.
While this may not have the glamour of a GPT-4o launch, it has led to subtle improvements. I suspect there is also an element of behind-the-scenes infrastructure changes that allow for longer outputs and faster responses beyond simple model updates.
It seems faster and more creative
I'm just basing this on my own experience with ChatGPT over the last week. I've run it through the same types of queries I previously used on Claude, and ChatGPT seems faster to me.
An example of this is how ChatGPT handles a very long block of code. I built a to-do list app for iPhone that uses gamification to encourage task completion. This often requires multiple messages for each block of code, and in the past, if the block was too long, ChatGPT would truncate a response.
It then expects you to splice those changes into your own code yourself. Lately, ChatGPT has been displaying entire blocks of code for every update request without being asked.
ChatGPT still spreads this across multiple messages, but the very clever “continue generating” feature stitches everything into a single block of code, rather than scattering it disjointedly across messages and breaking the code's layout or structure in the process.
I also noticed that it was more creative in its responses to tasks such as “come up with 5 ideas for a short film about ordinary people” or “rewrite this letter for a specific audience.”
While I can't say for sure that ChatGPT has received an upgrade, its performance has definitely increased compared to where it was about two weeks ago – and I'm using it more than I have in a while.