In Part 1 of this tutorial series, we introduced AI Agents, autonomous programs that perform tasks, make decisions, and communicate with other Agents.
Agents perform actions through Tools. It might happen that a Tool doesn't work on the first try, or that multiple Tools must be activated in sequence. Agents should be able to organize tasks into a logical progression and adapt their strategies in a dynamic environment.
To put it simply, the Agent's structure must be solid, and its behavior must be reliable. The most common way to achieve that is through:
- Iterations – repeating a certain action multiple times, often with slight modifications or improvements in each cycle. Every cycle might involve the Agent revisiting certain steps to refine its output or reach an optimal solution.
- Chains – a series of actions that are linked together in a sequence. Each step in the chain depends on the previous one, and the output of one action becomes the input for the next (both patterns are sketched right after this list).
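To make the two patterns concrete before applying them to Agents, here is a minimal, library-agnostic sketch: a retry loop for iterations and a pipeline of dependent steps for chains. The function names (run_with_retries, run_chain) are purely illustrative and don't appear in the rest of the tutorial.
## iteration: retry an action until it succeeds or a maximum number of attempts is reached
def run_with_retries(action, max_attempts=3):
    feedback = None
    for attempt in range(max_attempts):
        result = action(feedback)   # each attempt can use the feedback from the previous failure
        if result["success"]:
            return result
        feedback = result["error"]  # pass the failure details into the next attempt
    return {"success": False, "error": "max attempts reached"}

## chain: a linear sequence of steps, where the output of one becomes the input of the next
def run_chain(steps, first_input):
    output = first_input
    for step in steps:
        output = step(output)
    return output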
In this tutorial, I'm going to show how to use iterations and chains for Agents. I'll present some useful Python code that can easily be applied to other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example (link to the full code at the end of the article).
Setup
Please refer to Part 1 for the setup of Ollama and the main LLM.
import ollama
llm = "qwen2.5"
We'll use the YahooFinance public APIs with the Python library (pip install yfinance==0.2.55) to download financial data.
import yfinance as yf

stock = "MSFT"
yf.Ticker(ticker=stock).history(period='5d') #1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max
Let's embed that into a Tool.
import matplotlib.pyplot as plt

def get_stock(ticker:str, period:str, col:str):
    data = yf.Ticker(ticker=ticker).history(period=period)
    if len(data) > 0:
        data[col].plot(color="black", legend=True, xlabel='', title=f"{ticker.upper()} ({period})").grid()
        plt.show()
        return 'ok'
    else:
        return 'no'
tool_get_stock = {'type':'function', 'function':{
  'name': 'get_stock',
  'description': 'Download stock data',
  'parameters': {'type': 'object',
                 'required': ['ticker','period','col'],
                 'properties': {
                     'ticker': {'type':'str', 'description':'the ticker symbol of the stock.'},
                     'period': {'type':'str', 'description':"for 1 month input '1mo', for 6 months input '6mo', for 1 year input '1y'. Use '1y' if not specified."},
                     'col': {'type':'str', 'description':"one of 'Open','High','Low','Close','Volume'. Use 'Close' if not specified."},
}}}}
## test
get_stock(ticker="msft", period="1y", col="Close")
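Note that when yfinance doesn't recognize a ticker, the history typically comes back empty, so get_stock returns 'no'. That return value is the signal the Agent will rely on later to decide whether to retry. A quick check (assuming "facebook" isn't a valid symbol on Yahoo Finance):
## a wrong ticker should return 'no' because no price data is found
get_stock(ticker="facebook", period="1y", col="Close")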
Moreover, taking the code from the previous article as a reference, I shall write a general function to process the model response, such as when the Agent wants to use a Tool or when it just returns text.
def use_tool(agent_res:dict, dic_tools:dict) -> dict:
    ## use tool
    if "tool_calls" in agent_res["message"].keys():
        for tool in agent_res["message"]["tool_calls"]:
            t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
            if f := dic_tools.get(t_name):
                ### calling tool
                print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                ### tool output
                t_output = f(**tool["function"]["arguments"])
                print(t_output)
                ### final res
                res = t_output
            else:
                print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")

    ## don't use tool
    if agent_res['message']['content'] != '':
        res = agent_res["message"]["content"]
        t_name, t_inputs = '', ''

    return {'res':res, 'tool_used':t_name, 'inputs_used':t_inputs}
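Before plugging it into a conversation, we can sanity-check use_tool with a mock response shaped like the dictionary that ollama.chat returns (the mock below is just for illustration, it is not a real model reply):
## mock of a chat response that requests a tool call
mock_res = {"message": {"content": "",
                        "tool_calls": [{"function": {"name": "get_stock",
                                                     "arguments": {"ticker":"MSFT", "period":"1y", "col":"Close"}}}]}}

use_tool(mock_res, {'get_stock':get_stock})
## expected (if the download succeeds): {'res':'ok', 'tool_used':'get_stock', 'inputs_used':{'ticker':'MSFT', 'period':'1y', 'col':'Close'}}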
Let's start a quick conversation with our Agent. For now, I'm going to use a simple generic prompt.
prompt = '''You are a financial analyst, assist the user using your available tools.'''
messages = [{"role":"system", "content":prompt}]
dic_tools = {'get_stock':get_stock}
while True:
    ## user input
    try:
        q = input('🙂 >')
    except EOFError:
        break
    if q == "quit":
        break
    if q.strip() == "":
        continue
    messages.append( {"role":"user", "content":q} )

    ## model
    agent_res = ollama.chat(model=llm, messages=messages,
                            tools=[tool_get_stock])
    dic_res = use_tool(agent_res, dic_tools)
    res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]

    ## final response
    print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
    messages.append( {"role":"assistant", "content":res} )
As you can see, I started by asking an “easy” question. The LLM already knows that the symbol of Microsoft stock is MSFT, therefore the Agent was able to activate the Tool with the right inputs. But what if I ask something that might not be included in the LLM knowledge base?
Seems that the LLM doesn’t know that Facebook changed its name to META, so it used the Tool with the wrong inputs. I will enable the Agent to try an action several times through iterations.
Iterations
Iterations refer to the repetition of a process until a certain condition is met. We can let the Agent try a specific number of times, but we need to let it know that the previous parameters didn’t work, by adding the details in the message history.
max_i, i = 3, 0
while res == 'no' and i < max_i:
    ## tell the model that the previous inputs didn't work
    comment = f"When using the tool 'get_stock', the inputs {inputs_used} didn't work. Try again with different inputs."
    messages.append( {"role":"user", "content":comment} )

    ## model
    agent_res = ollama.chat(model=llm, messages=messages, tools=[tool_get_stock])
    dic_res = use_tool(agent_res, dic_tools)
    res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]

    i += 1
    if i == max_i:
        res = f'I tried {i} times but something is wrong'

## final response
print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
messages.append( {"role":"assistant", "content":res} )
The Agent tried 3 times with different inputs but it couldn’t find a solution because there is a gap in the LLM knowledge base. In this case, the model needed human input to understand how to use the Tool.
Next, we’re going to enable the Agent to fill the knowledge gap by itself.
Chains
A chain refers to a linear sequence of actions where the output of one step is used as the input for the next step. In this example, I will add another Tool that the Agent can use in case the first one fails.
We can use the web-searching Tool from the previous article.
from langchain_community.tools import DuckDuckGoSearchResults
def search_web(query:str) -> str:
    return DuckDuckGoSearchResults(backend="news").run(query)
tool_search_web = {'type':'function', 'function':{
  'name': 'search_web',
  'description': 'Search the web',
  'parameters': {'type': 'object',
                 'required': ['query'],
                 'properties': {
                     'query': {'type':'str', 'description':'the topic or subject to search on the internet'},
}}}}
## test
search_web(query="facebook stock")
So far, I've always used very generic prompts because the tasks were relatively simple. Now, I want to make sure that the Agent understands how to use the Tools in the right order, so I'm going to write a proper prompt. This is how a prompt should be structured:
- The goal of the Agent
- What it must return (i.e. format, content)
- Any relevant warnings that might affect the output
- Context dump
prompt = '''
[GOAL] You are a financial analyst, assist the user using your available tools.
[RETURN] You must return the stock data that the user asks for.
[WARNINGS] In order to retrieve stock data, you need to know the ticker symbol of the company.
[CONTEXT] First ALWAYS try to use the tool 'get_stock'.
If it doesn't work, you can use the tool 'search_web' and search 'company name stock'.
Get information about the stock and deduct what is the right ticker symbol of the company.
Then, you can use AGAIN the tool 'get_stock' with the ticker you got using the previous tool.
'''
We can simply add the chain to the iteration loop that we already have. This time the Agent has two Tools, and when the first one fails, the model can decide whether to retry or to use the second one. Then, if the second Tool is used, the Agent must process its output and work out the correct input for the first Tool that originally failed.
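One assumption in the code below (not shown explicitly in the snippets above) is that the Tool dictionary and the message history have been updated with the new Tool and the new prompt, something like:
dic_tools = {'get_stock':get_stock, 'search_web':search_web}
messages = [{"role":"system", "content":prompt}]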
max_i, i = 3, 0
while res in ['no',''] and i < max_i:
    ## tell the model that the previous attempt didn't work
    comment = f"When using the tool '{tool_used}', the inputs {inputs_used} didn't work. Try a different way."
    messages.append( {"role":"user", "content":comment} )

    ## model (now with both Tools available)
    agent_res = ollama.chat(model=llm, messages=messages, tools=[tool_get_stock, tool_search_web])
    dic_res = use_tool(agent_res, dic_tools)
    res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]

    ## chain: use the output of the web search to deduct the right ticker for the first Tool
    if tool_used == 'search_web':
        comment = f"Read the following information and deduct the right ticker symbol of the company: {res}. Answer with the ticker only."
        messages.append( {"role":"user", "content":comment} )
        llm_res = ollama.chat(model=llm, messages=messages)["message"]["content"]
        messages.append( {"role":"user", "content":f"Use the tool 'get_stock' with the ticker '{llm_res}'"} )
        print("👽 >", f"\x1b[1;30mI can try with {llm_res}\x1b[0m")

        ## call the first Tool again with the ticker found on the web
        agent_res = ollama.chat(model=llm, messages=messages, tools=[tool_get_stock])
        dic_res = use_tool(agent_res, dic_tools)
        res, tool_used, inputs_used = dic_res["res"], dic_res["tool_used"], dic_res["inputs_used"]

    i += 1
    if i == max_i:
        res = f'I tried {i} times but something is wrong'

## final response
print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
messages.append( {"role":"assistant", "content":res} )
As expected, the Agent tried to use the first Tool with the wrong inputs, but instead of repeating the same action as before, it decided to use the second Tool. By processing the search results, it can work out the correct ticker without the need for human input.
In summary, the AI tried to do an action but failed due to a gap in its knowledge base. So it activated Tools to fill that gap and deliver the output requested by the user… that is indeed the true essence of AI Agents.
Conclusion
This article has covered more structured ways to make Agents more reliable, using iterations and chains. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases.
Stay tuned for Part 3, where we will dive deeper into more advanced examples.
Full code for this article: GitHub
I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.
👉 Let’s Connect 👈