Discussion about this post

Patrick Senti:

I think it's interesting, but also flawed due to the halting problem: in general it is impossible to decide whether an arbitrary sequence of operations can be safely executed. In my opinion, the path to safe operation of LLM-based agents is to use state machines, with safeguards on both the state transitions and the operations that execute those transitions, a process I like to call Safe-state Agentic Workflow (SAW).
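A minimal sketch of the guarded-state-machine idea described above, assuming a simple design where every agent action must be a declared transition and each transition runs a guard predicate before executing. All names here (`AgentStateMachine`, `add_transition`, the file-deletion example) are illustrative, not from any published SAW implementation:

```python
class TransitionBlocked(Exception):
    """Raised when an action is undeclared or its guard rejects it."""
    pass

class AgentStateMachine:
    def __init__(self, initial):
        self.state = initial
        # Maps (from_state, action) -> (to_state, guard predicate).
        self.transitions = {}

    def add_transition(self, src, action, dst, guard=lambda ctx: True):
        self.transitions[(src, action)] = (dst, guard)

    def execute(self, action, ctx=None):
        # Undeclared actions are blocked outright: the agent can only
        # do what the state machine explicitly allows in this state.
        key = (self.state, action)
        if key not in self.transitions:
            raise TransitionBlocked(f"{action!r} not allowed in state {self.state!r}")
        dst, guard = self.transitions[key]
        # The guard inspects the proposed operation before it runs.
        if not guard(ctx):
            raise TransitionBlocked(f"guard rejected {action!r}")
        self.state = dst
        return self.state

# Hypothetical example: the agent may only delete a file it has
# previously listed, enforced by the guard rather than by the LLM.
sm = AgentStateMachine("idle")
sm.add_transition("idle", "list_files", "reviewed")
sm.add_transition(
    "reviewed", "delete_file", "idle",
    guard=lambda ctx: bool(ctx) and ctx.get("path") in ctx.get("listed", []),
)

sm.execute("list_files")
sm.execute("delete_file", {"path": "a.txt", "listed": ["a.txt"]})
```

The point of this structure is that safety no longer depends on deciding what an arbitrary operation sequence will do; the machine only ever permits transitions that were declared and whose guards pass.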

