ChatGPT in 4o mode doesn't have the new voice and image features yet
Plus Gemini Flash and PaliGemma from Google I/O
In this newsletter:
ChatGPT in "4o" mode is not running the new features yet
Plus 7 links and 5 quotations
ChatGPT in "4o" mode is not running the new features yet - 2024-05-15
Monday's OpenAI announcement of their new GPT-4o model included some intriguing new features:
Creepily good improvements to the ability to both understand and produce voice (Sam Altman simply tweeted "her"), and to be interrupted mid-sentence
New image output capabilities that appear to leave existing models like DALL-E 3 in the dust - take a look at the examples, they seem to have solved consistent character representation AND reliable text output!
They also made the new 4o model available to paying ChatGPT Plus users, on the web and in their apps.
But, crucially, those big new features were not part of that release.
Here's the relevant section from the announcement post:
We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs. Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities.
This is catching out a lot of people. The ChatGPT iPhone app already has image output, and it already has a voice mode. These worked with the previous GPT-4 mode and they still work with the new GPT-4o mode... but they are not using the new model's capabilities.
Lots of people are discovering the voice mode for the first time - it's the headphone icon in the bottom right of the interface.
They try it and it's impressive (it was impressive before) but it's nothing like as good as the voice mode in Monday's demos.
Honestly, it's not at all surprising that people are confused. They're seeing the "4o" option and, understandably, are assuming that this is the set of features that were announced earlier this week.
Most people don't distinguish models from features
Think about what you need to know in order to understand what's going on here:
GPT-4o is a brand new multi-modal Large Language Model. It can handle text, image and audio input and produce text, image and audio output.
But... the version of GPT-4o that has been made available so far - both via the API and via the OpenAI apps - is only able to handle text and image input and produce text output. The other features are not yet available outside of OpenAI (and a select group of partners).
And yet in the apps it can still handle audio input and output and generate images. That's because the app version of the model is wrapped with additional tools.
The audio input is handled by a separate model called Whisper, which converts speech to text. That text is then fed into the LLM, which generates a text response.
The response is passed to OpenAI's boringly-named tts-1 (or maybe tts-1-hd) model (described here), which converts that text to speech.
While nowhere near as good as the audio in Monday's demo, tts-1 is still a really impressive model. I've been using it via my ospeak CLI tool since it was released back in November.
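If you want to see quite how much wrapping is involved, here's a rough sketch of that voice round-trip using the public OpenAI Python library. This is my own illustration, not OpenAI's actual app code - the file names and the voice choice are made up.

```python
# Rough sketch of the ChatGPT voice-mode pipeline using OpenAI's public API.
# This illustrates the wrapping described above; it is not OpenAI's app code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech to text with Whisper
with open("question.mp3", "rb") as audio_file:  # hypothetical recording
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )

# 2. Text in, text out - the only GPT-4o modalities released so far
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": transcript.text}],
)
reply = chat.choices[0].message.content

# 3. Text back to speech with tts-1
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
speech.stream_to_file("reply.mp3")
```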
As for images? Those are generated using DALL-E 3, through a process where ChatGPT directly prompts that model. I wrote about how that works back in October.
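The API equivalent looks something like this - in the app the LLM writes the DALL-E prompt itself and passes it along as a tool call, but you can call the image model directly. The prompt and size here are just examples.

```python
# Sketch of calling DALL-E 3 directly - in the ChatGPT app the LLM writes
# this prompt itself and hands it to the image model as a tool call.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",
    prompt="A pelican riding a bicycle, watercolor style",  # example prompt
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```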
So what's going on with ChatGPT's GPT-4o mode is completely obvious, provided you already understand:
GPT-4 vs. GPT-4o
Whisper
tts-1
DALL-E 3
Why OpenAI would demonstrate these features and then release a version of the model that doesn't include them
I'm reminded of the kerfuffle back in February when the Google Gemini image creator was found to generate images of Black Nazis. I saw a whole bunch of people refer to that in conversations about the Google Gemini Pro 1.5 LLM, released at the same time, despite the quality of that model being entirely unrelated to Google's policy decisions about how one of the interfaces to that model should make use of the image creator tool.
What can we learn from this?
If you're fully immersed in this world, it's easy to lose track of how incredibly complicated these systems have become. The amount you have to know in order to even understand what that "4o" mode in the ChatGPT app does is very easy to underestimate.
Fundamentally these are challenges in user experience design. You can't just write documentation about them, because no-one reads documentation.
A good starting point here is to acknowledge the problem. LLM systems are extremely difficult to understand and use. We need to design the tools we build on top of them accordingly.
Link 2024-05-14 Why your voice assistant might be sexist:
Given OpenAI's demo on Monday of a vocal chat assistant with a flirty, giggly female voice - and the new ability to be interrupted! - it's worth revisiting this piece by Chris Baraniuk from June 2022 about gender dynamics in voice assistants. Includes a link to this example of a synthesized non-binary voice.
Link 2024-05-14 How developers are using Gemini 1.5 Pro’s 1 million token context window:
I got to be a talking head for a few seconds in an intro video for today's Google I/O keynote, talking about how I used Gemini Pro 1.5 to index my bookshelf (and with a cameo from my squirrel nutcracker). I'm at 1m25s.
(Or at 10m6s in the full video of the keynote)
Link 2024-05-14 llm-gemini 0.1a4:
A new release of my llm-gemini plugin adding support for the Gemini 1.5 Flash model that was revealed this morning at Google I/O.
I'm excited about this new model because of its low price. Flash is $0.35 per 1 million tokens for prompts up to 128K tokens and $0.70 per 1 million tokens for longer prompts - up to a million tokens now and potentially two million at some point in the future. That's 1/10th of the price of Gemini Pro 1.5, cheaper than GPT-3.5 ($0.50/million) and only a little more expensive than Claude 3 Haiku ($0.25/million).
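Here's roughly what using it looks like via LLM's Python API once the plugin is installed - the model ID is what I expect the plugin to register, so double-check it against the output of llm models.

```python
# Sketch of calling Gemini 1.5 Flash through the llm Python API after
# `llm install llm-gemini` - the model ID is assumed; check `llm models`.
import llm

model = llm.get_model("gemini-1.5-flash-latest")
model.key = "..."  # or configure the key once with `llm keys set gemini`
response = model.prompt("Five surprising facts about puffins")
print(response.text())
```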
Link 2024-05-14 Context caching for Google Gemini:
Another new Gemini feature announced today. Long context models enable answering questions against large chunks of text, but the price of those long prompts can be prohibitive - $3.50/million for Gemini Pro 1.5 up to 128,000 tokens and $7/million beyond that.
Context caching offers a price optimization, where the long prefix prompt can be reused between requests, halving the cost per prompt but at an additional cost of $4.50 / 1 million tokens per hour to keep that context cache warm.
Given that hourly extra charge this isn't a default optimization for all cases, but certain high traffic applications might be able to save quite a bit on their longer prompt systems.
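To make that trade-off concrete, here's a back-of-the-envelope comparison for a 500,000 token prefix reused 100 times in an hour, using the prices quoted above. The exact billing rules are my assumption - I'm modelling the cached prefix at half price plus the hourly storage fee.

```python
# Back-of-the-envelope cost comparison for a 500K-token cached prefix reused
# 100 times in one hour, using the Gemini Pro 1.5 prices quoted above.
# The precise billing rules here are an assumption based on the announcement.
PREFIX_TOKENS = 500_000          # long shared context, e.g. a big document
PRICE_PER_M = 7.00               # $/million input tokens beyond 128K
CACHE_STORAGE_PER_M_HOUR = 4.50  # $/million tokens/hour to keep cache warm
requests_per_hour = 100

without_cache = requests_per_hour * PREFIX_TOKENS / 1_000_000 * PRICE_PER_M
with_cache = (
    requests_per_hour * PREFIX_TOKENS / 1_000_000 * PRICE_PER_M / 2  # half price
    + PREFIX_TOKENS / 1_000_000 * CACHE_STORAGE_PER_M_HOUR           # storage fee
)
print(f"Without caching: ${without_cache:.2f}/hour")  # $350.00
print(f"With caching:    ${with_cache:.2f}/hour")     # $177.25
```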
It will be interesting to see if other vendors such as OpenAI and Anthropic offer a similar optimization in the future.
Quote 2024-05-15
The MacBook Airs are Apple’s best-selling laptops; the iPad Pros are Apple’s least-selling iPads. I think it’s as simple as this: the current MacBook Airs have the M3, not the M4, because there isn’t yet sufficient supply of M4 chips to satisfy demand for MacBook Airs.
Quote 2024-05-15
But unlike the phone system, we can’t separate an LLM’s data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it’s the very thing that enables prompt injection.
Link 2024-05-15 How to PyCon:
Glyph's tips on making the most out of PyCon. I particularly like his suggestion that "dinners are for old friends, but lunches are for new ones".
I'm heading out to Pittsburgh and giving a keynote (!) on Saturday. If you see me there please come and say hi!
Quote 2024-05-15
If we want LLMs to be less hype and more of a building block for creating useful everyday tools for people, AI companies' shift away from scaling and AGI dreams to acting like regular product companies that focus on cost and customer value proposition is a welcome development.
Link 2024-05-15 OpenAI: Managing your work in the API platform with Projects:
New OpenAI API feature: you can now create API keys for "projects" that can have a monthly spending cap. The UI for that limit says:
If the project's usage exceeds this amount in a given calendar month (UTC), subsequent API requests will be rejected
You can also set custom token-per-minute and request-per-minute rate limits for individual models.
I've been wanting this for ages: this means it's finally safe to ship a weird public demo on top of their various APIs without risk of accidental bankruptcy if the demo goes viral!
Link 2024-05-15 PaliGemma model README:
One of the more overlooked announcements from Google I/O was PaliGemma, an openly licensed VLM (Vision Language Model) in the Gemma family of models.
The model accepts an image and a text prompt. It outputs text, but that text can include special tokens representing regions on the image. This means it can return both bounding boxes and fuzzier segment outlines of detected objects, behavior that can be triggered using a prompt such as "segment puffins".
You can try it out on Hugging Face.
It's a 3B model, making it feasible to run on consumer hardware.
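Here's a rough sketch of running it locally with Hugging Face transformers - the checkpoint name and prompt format are my assumptions based on the announcement, so treat this as illustrative rather than copied from the README.

```python
# Minimal sketch of running PaliGemma with Hugging Face transformers.
# The checkpoint name, image file and prompt are assumptions for illustration.
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image

model_id = "google/paligemma-3b-mix-224"  # assumed checkpoint name
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("puffins.jpg")   # hypothetical input image
prompt = "segment puffins"          # special output tokens encode regions

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
# Keep special tokens so the region/segment tokens are visible in the output
print(processor.decode(output[0], skip_special_tokens=False))
```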
Quote 2024-05-15
But where the company once limited itself to gathering low-hanging fruit along the lines of “what time is the super bowl,” on Tuesday executives showcased generative AI tools that will someday plan an entire anniversary dinner, or cross-country-move, or trip abroad. A quarter-century into its existence, a company that once proudly served as an entry point to a web that it nourished with traffic and advertising revenue has begun to abstract that all away into an input for its large language models.
Quote 2024-05-16
[...] by default Heroku will spin up multiple dynos in different availability zones. It also has multiple routers in different zones so if one zone should go completely offline, having a second dyno will mean that your app can still serve traffic.