It might be worth going back ~25 years to see how the distributed AI community was using the term "agent". At the time (1997-2000) I was working at Sun Microsystems Laboratories, and my team was involved in FIPA, https://en.wikipedia.org/wiki/Foundation_for_Intelligent_Physical_Agents . While we were interested in a wide range of use cases (from industrial automation to office productivity to robotic search-and-rescue!), we agreed on a core abstraction for agents: the Belief-Desire-Intention (BDI) model, https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93intention_software_model . My group was particularly concerned with multi-agent systems, for use cases such as calendar management - how can your agent and mine interact to successfully schedule a meeting?
Two key requirements emerged from this: the concepts of attribution and reputation - what was the source of a particular belief, and what kind of trust can I place in a particular source of beliefs (in part so that I can resolve conflicting beliefs)? The industrial robotics folks were particularly good at modeling the interaction between different agents (robots) dealing with different aspects of a process. For office automation, we assumed a relationship between reputation/trust and ownership/delegation, with an obvious cybersecurity angle to both.
I've been out of the field for 20 years, although my later work on smart city "digital twins" ran into similar problems. Absent a compelling alternative, I still think that any software system intended to exhibit "on behalf of" agency is going to have to fit into a BDI framework.
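To make the attribution/reputation requirement concrete, here is a minimal sketch of beliefs that carry source attribution, with conflicts resolved by source reputation. The names (`Belief`, `resolve`) and the numeric trust scores are my own illustration, not anything from the FIPA specs:

```python
# Illustrative only: beliefs carry attribution (a source), and the agent
# holds a reputation score per source, used to resolve conflicting beliefs.
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str     # e.g. "Alice is free at 10:00"
    source: str    # attribution: where this belief came from
    trust: float   # reputation of the source, 0.0 (none) to 1.0 (full)

def resolve(conflicting: list[Belief]) -> Belief:
    """Resolve conflicting beliefs by preferring the most trusted source."""
    return max(conflicting, key=lambda b: b.trust)

beliefs = [
    Belief("Alice is free at 10:00", source="alice_calendar", trust=0.9),
    Belief("Alice is busy at 10:00", source="stale_cache", trust=0.3),
]
print(resolve(beliefs).claim)  # "Alice is free at 10:00"
```

Delegation ("on behalf of" agency) would then hang off the same source identities, which is where the cybersecurity angle comes in.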
> An LLM agent runs tools in a loop to achieve a goal.
I think this is as good a definition as any, but it seems to include modern chatbots as well. If I tell ChatGPT/Claude/Gemini to "recommend restaurants in [city]", it will run tools in a loop to do that.
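For what it's worth, the loop in that definition is tiny, which may be why it covers chatbots too. Here's a minimal sketch; the scripted stub stands in for a real model API, and all the names are illustrative, not any particular vendor's interface:

```python
# Minimal "tools in a loop" sketch. The LLM is faked with a scripted stub so
# the example runs; a real agent would call a model API here instead.
def make_scripted_llm(script):
    """Returns a fake LLM that replays canned steps, ignoring the history."""
    steps = iter(script)
    return lambda history: next(steps)

def run_agent(goal, llm, tools, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = llm(history)                     # decide: tool call or answer
        if reply.get("tool") is None:
            return reply["content"]              # goal reached, loop ends
        result = tools[reply["tool"]](**reply["args"])
        history.append({"role": "tool", "content": str(result)})
    raise RuntimeError("no answer within max_steps")

# Usage: the "model" first calls a search tool, then gives a final answer.
tools = {"search": lambda query: f"results for {query!r}"}
llm = make_scripted_llm([
    {"tool": "search", "args": {"query": "restaurants in Lisbon"}},
    {"tool": None, "content": "Here are three places to try."},
])
print(run_agent("recommend restaurants in Lisbon", llm, tools))
```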
An agent is just an actor that aims to achieve some objective given constraints.
A simple function is an agent; it's just a deterministic one.
A function that calls an LLM is an agent, a non-deterministic one.
A function that loops is also a non-deterministic agent with more orchestration steps.
Nothing has really changed except that we can now make more non-deterministic decisions with LLMs; agents were always there.
"Agent" was defined very clearly by Russell and Norvig in their classic AI book (Artificial Intelligence: A Modern Approach, first published in 1995).
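A toy sketch of that progression - plain function, single model call, model call in a loop - with a random stub standing in for any real model (all names illustrative):

```python
# Toy illustration: three "agents" with increasing non-determinism.
import random

def llm(prompt):
    """Stand-in for a model call; randomness fakes non-determinism."""
    return prompt + random.choice([" step", " done"])

def agent_fn(x):           # a plain function: a deterministic agent
    return x * 2

def agent_llm(prompt):     # one model call: a non-deterministic agent
    return llm(prompt)

def agent_loop(goal):      # looping: same agent, more orchestration steps
    state = goal
    while not state.endswith("done"):   # hypothetical stopping condition
        state = llm(state)
    return state

print(agent_fn(21))                  # 42, the same answer every time
print(agent_llm("draft an email"))   # varies run to run
print(agent_loop("plan a meeting"))  # varies in both path and length
```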
System prompts can be full of good life advice
> If you need to write a plan, only write high quality plans, not low quality ones.
> An LLM agent runs tools in a loop to achieve a goal.
I love how simple the definition of 'agent' is; I think that's a sign that it's been distilled down to its basics.