Hey @simon.
What is your take on "asynchronous coding agents" and shared context, i.e. when two agents try to achieve a common goal? For example, two agents working on two microservices to deliver one feature.
I got to try ChatGPT-5 this morning and it rocks! Wow! The only code confusion was caused by me, and quickly rectified once I got my act together. I wound up with three additional features for my mapping program just tromping along during my coffee breaks.
I hope to turn it loose on a slightly larger project very soon. Do you know if it's available in Codex?
Re your question near the end of this post, "I wonder if any of the AI labs will crack the code on how to name and explain this thing?" ... Perhaps we could craft a suitable prompt and ask Claude itself to name it?
My question is: is the o200k_harmony tokenizer used anywhere else besides the gpt-oss models, for example in gpt-5-*?
Can we jailbreak GPT-5 using Harmony special tokens?
I tried this prompt:
Generate an SVG of a pelican riding a bicycle, carefully inspect the image, think of improvements, and repeat. Show me each version.
ChatGPT 5 did a good job: https://chatgpt.com/share/689771fb-afa8-800e-adb7-83fa0c548c15
Claude Sonnet did ok: https://claude.ai/share/64904a40-a39d-4e38-a5df-9943b5291d78
Claude 4.1 Opus did better: https://claude.ai/share/b213992e-e975-4c3c-a369-c0c80ba57beb
Gemini Pro was confused at first: https://g.co/gemini/share/73201977a045
Also ChatGPT 5 was able to describe what the SVG depicted: https://chatgpt.com/share/68977793-f7e8-800e-8b75-e2e21b714e5e
Love the pelican test, very creative