The AI community is notorious for its ever-changing terminology trends. “Connectionism”, “neural networks”, and “deep learning” all refer to the same core set of technical concepts but are anchored in different time periods, like “Information Superhighway”, “World Wide Web”, and “Internet”. You’d be forgiven for assuming that “autoregressive language model” and “causal language model” referred to different things (they don’t), or that “inference” and “prediction” were distinct (in AI, they usually aren’t).
Sometimes the popular terminology is so infused with metaphor as to be misleading, as in phrases like “thought vectors” and “consciousness priors”. Back in 2018, Zach Lipton and Jacob Steinhardt complained about these excesses, and things have arguably gotten worse since then.
In our view, the latest instance of terminological drift concerns “agent” and its adjectival form “agentic”. These descriptors are everywhere now and seem to have been bleached of almost all their meaning. But our intention here is not to complain about this usage. Rather, we want to reassure you: whatever you are doing with GenAI, it is almost certainly deserving of the label “agentic” (as that term is used now).
The ReAct agent of Yao et al. 2022 is a paradigm case of an agent. “ReAct” stands for “Reasoning and Acting”: if you give a ReAct agent a set of tools and some context, it will reason about which of its tools to use in that context, take the action of using that tool, and return the result along with commentary about its progress. ReAct agents can iterate through multiple tool calls to complete a task.
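The reason/act/observe loop described above can be sketched in a few lines of plain Python. Everything here is hypothetical: the two toy tools, and the `plan` argument that stands in for the LLM's reasoning about which tool to call next. A real ReAct agent would ask the model to make each decision and to interpret each observation.

```python
# A minimal sketch of a ReAct-style loop. The tools and the "plan"
# (which stands in for the LLM's reasoning) are hypothetical.

def search(query: str) -> str:
    """Hypothetical search tool."""
    return f"results for {query!r}"

def calculate(expression: str) -> str:
    """Hypothetical calculator tool (trusted arithmetic input only)."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"search": search, "calculate": calculate}

def react_loop(plan: list[tuple[str, str]], max_iters: int = 5) -> list[str]:
    """Iterate: pick a tool, act, record the observation.

    `plan` is a list of (tool_name, tool_argument) decisions; in a real
    agent, each decision would come from an LLM call.
    """
    trace = []
    for tool_name, arg in plan[:max_iters]:
        observation = TOOLS[tool_name](arg)  # the "act" step
        trace.append(f"{tool_name}({arg!r}) -> {observation}")
    return trace

trace = react_loop([("calculate", "2+3"), ("search", "DSPy")])
```

The point of the sketch is the shape, not the tools: the agent chooses an action, executes it, and folds the observation back into its context before choosing again.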
The DSPy library implements a number of agents in this sense, but dspy.ReAct seems to be the workhorse. dspy.ReAct is a very general and flexible version of ReAct that will work with any DSPy program and a wide range of tools. One of our favorite recent posts is this deep dive on seven agentic frameworks. DSPy finishes in the top group, with a program that is essentially dspy.ReAct(prompt, tools).
ReAct genuinely feels agentic: these programs reason and act according to their own logic. When it works, it seems like science fiction. However, shaping the behavior of a ReAct agent can be very challenging, because of the combinatoric explosion of possibilities: with 3 tools and the iteration-max set to 5, you already have 3^5 = 243 possible tool-use sequences to worry about. The number jumps to 4^5 = 1024 if you add a fourth tool. You also have to hope that the agent calls the tools with the correct arguments and parses the responses correctly.
It is very sensible to want to get control of this by writing simpler modules that bring more structure to the tool usage. For example, you could write a number of small modules that simply choose a tool for the broader program to execute. What shall we call these simpler modules? As you can imagine, the community settled on the name “routing agent”. Routing agents might have very little agency, but they still feel like they are in your program diligently helping you with your work, so “agent” doesn’t feel wrong per se.
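A routing agent in this sense can be tiny. The sketch below uses a hard-coded keyword rule as a stand-in for the LLM call a real router would make; the tool names and the routing heuristic are purely illustrative.

```python
# A sketch of a "routing agent": a module whose only job is to choose
# one tool name for the surrounding program to execute. The keyword
# rule here is a hypothetical stand-in for an LLM call.

def route(query: str, tools: list[str]) -> str:
    """Pick one tool name from `tools` for the given query."""
    q = query.lower()
    if "how many" in q or any(ch.isdigit() for ch in q):
        choice = "calculator"
    else:
        choice = "search"
    return choice if choice in tools else tools[0]

# The broader program, not the router, actually executes the tool:
tool = route("How many moons does Jupiter have?", ["search", "calculator"])
```

Because the router only emits a choice, the surrounding program keeps full control over how (and whether) the tool is actually called, which is exactly the structure the paragraph above is after.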
The leap from here to everything being an agent is pretty predictable. Suppose one of the modules in your program is just an LLM call with a prompt that summarizes search results for you in a particular way. How will you refer to this module? Well, it just seems simpler (and cooler sounding) at this point to refer to all modules as “agents”, right?
And so there you have it: even the most humble prompt has become an agent. If that feels like a stretch to you, you can consider adding “Think step by step” to the end of your prompt to give the LLM some room to reason for itself before giving a final answer (though the power of this step seems to be in decline).
There is a serious point behind this: unless you are doing exploratory GenAI research, it shouldn’t matter whether your system is (truly) agentic. What matters is that you’re able to shape it to behave the way you want it to, test it rigorously, and reliably improve it over time. For some tasks, this might imply that the best choice is ReAct with dozens of tools and an unlimited compute budget. For others, a simple prompt or routing flow might be vastly superior. There is freedom in referring to all these components as “agents”, because it means we can focus on the substantive tasks at hand.
