1 Comment
Bill Prin

The coining of 'context rot' seemed to really resonate with you, and you mention it here, but it's something I'd still want to learn a lot more about. In an overly simplistic model of LLMs, it feels like more context is always better, but practitioners are clearly saying that's not true. Do we have any way of understanding why, or of measuring it? I suppose if we had an "eval" system, we could start with a working prompt and then test how and when irrelevant information breaks it. I'm also unclear on how much placement in the prompt/context matters: is irrelevant information more harmful if it's recent? This topic feels barely talked about so far, yet it's going to be critical for any sort of agent development.
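
One way to make that eval idea concrete is a small harness that takes a prompt known to work, injects irrelevant text at different positions, and measures how often the model still answers correctly. This is only a sketch under assumptions: `query_llm`, the distractor texts, and the substring-based scoring are all placeholders you'd swap for your own client and grading logic.

```python
# Sketch of a "context rot" eval: start from cases that work without noise,
# inject irrelevant filler at the start, middle, or end of the context, and
# compare accuracy by position. query_llm is a placeholder for whatever
# model client you actually use.

import random


def query_llm(prompt: str) -> str:
    """Placeholder: call your model of choice and return its text response."""
    raise NotImplementedError


def inject_distractor(context: str, distractor: str, position: str) -> str:
    """Insert irrelevant text at the start, middle, or end of the context."""
    if position == "start":
        return distractor + "\n\n" + context
    if position == "end":
        return context + "\n\n" + distractor
    mid = len(context) // 2
    return context[:mid] + "\n\n" + distractor + "\n\n" + context[mid:]


def run_eval(cases, distractors, positions=("start", "middle", "end"), trials=5):
    """cases: list of (context, question, expected_answer) that pass cleanly."""
    results = {pos: {"correct": 0, "total": 0} for pos in positions}
    for context, question, expected in cases:
        for pos in positions:
            for _ in range(trials):
                noisy = inject_distractor(context, random.choice(distractors), pos)
                answer = query_llm(f"{noisy}\n\nQuestion: {question}")
                results[pos]["total"] += 1
                # Crude scoring: does the expected answer appear in the output?
                if expected.lower() in answer.lower():
                    results[pos]["correct"] += 1
    return {pos: r["correct"] / r["total"] for pos, r in results.items()}
```

Comparing the per-position accuracies against the clean baseline would at least give a first answer to whether recency of the irrelevant text matters.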
