
    AI Agents from Zero to Hero – Part 1

    By FinanceStarGate · February 21, 2025 · 11 min read

    Intro

    AI Agents are autonomous programs that perform tasks, make decisions, and communicate with other agents. Typically, they use a set of Tools to help complete tasks. In GenAI applications, these Agents process sequential reasoning and can use external tools (like web searches or database queries) when the LLM's knowledge isn't enough. Unlike a basic chatbot, which generates random text when uncertain, an AI Agent activates Tools to provide more accurate, specific responses.

    We are moving closer and closer to the concept of Agentic AI: systems that exhibit a higher level of autonomy and decision-making ability, without direct human intervention. While today's AI Agents respond reactively to human inputs, tomorrow's Agentic AIs will proactively engage in problem-solving and adjust their behavior based on the situation.

    Today, building Agents from scratch is becoming as easy as training a logistic regression model was 10 years ago. Back then, Scikit-Learn provided a straightforward library to quickly train Machine Learning models with just a few lines of code, abstracting away much of the underlying complexity.

    In this tutorial, I'm going to show how to build different types of AI Agents from scratch, from simple to more advanced systems. I'll present some useful Python code that can be easily applied to other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example.

    Setup

    As I said, anyone can have a custom Agent running locally for free, without GPUs or API keys. The only necessary library is Ollama (pip install ollama==0.4.7), as it allows users to run LLMs locally, without needing cloud-based services, giving more control over data privacy and performance.

    First of all, you need to download Ollama from the website.

    Then, from the prompt shell of your laptop, use the command to download the selected LLM. I'm going with Alibaba's Qwen, as it's both smart and lightweight.
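    For reference, with the model tag used later in this tutorial, the pull command looks like this (check the Ollama model library for the exact tag available on your machine):

```shell
# Download the Qwen 2.5 model locally (run once; may take a while)
ollama pull qwen2.5
```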

    After the download is complete, you can move on to Python and start writing code.

    import ollama
    llm = "qwen2.5"

    Let's test the LLM:

    stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
    for chunk in stream:
        print(chunk['response'], end='', flush=True)

    Clearly, the LLM per se is very limited: it can't do much besides chatting. Therefore, we need to give it the possibility to take action, or in other words, to activate Tools.

    One of the most common tools is the ability to search the Internet. In Python, the easiest way to do it is with the well-known privacy-focused search engine DuckDuckGo (pip install duckduckgo-search==6.3.5). You can directly use the original library or import the LangChain wrapper (pip install langchain-community==0.3.17).

    With Ollama, in order to use a Tool, the function must be described in a dictionary.

    from langchain_community.tools import DuckDuckGoSearchResults

    def search_web(query: str) -> str:
      return DuckDuckGoSearchResults(backend="news").run(query)

    tool_search_web = {'type':'function', 'function':{
      'name': 'search_web',
      'description': 'Search the web',
      'parameters': {'type': 'object',
                    'required': ['query'],
                    'properties': {
                        'query': {'type':'str', 'description':'the topic or subject to search on the web'},
    }}}}
    ## test
    search_web(query="nvidia")

    Internet searches can be very broad, and I want to give the Agent the option to be more precise. Let's say I plan to use this Agent to learn about financial updates, so I can give it a specific tool for that topic, like searching only a finance website instead of the whole web.

    def search_yf(query: str) -> str:
      engine = DuckDuckGoSearchResults(backend="news")
      return engine.run(f"site:finance.yahoo.com {query}")

    tool_search_yf = {'type':'function', 'function':{
      'name': 'search_yf',
      'description': 'Search for specific financial news',
      'parameters': {'type': 'object',
                    'required': ['query'],
                    'properties': {
                        'query': {'type':'str', 'description':'the financial topic or subject to search'},
    }}}}

    ## test
    search_yf(query="nvidia")

    Simple Agent (WebSearch)

    In my opinion, the most basic Agent should at least be able to choose between one or two Tools and re-elaborate the output of the action to give the user a proper and concise answer.

    First, you need to write a prompt to describe the Agent's purpose, the more detailed the better (mine is very generic), and that will be the first message in the chat history with the LLM.

    prompt = '''You are an assistant with access to tools, you must decide when to use tools to answer the user's message.'''
    messages = [{"role":"system", "content":prompt}]

    In order to keep the chat with the AI alive, I'll use a loop that starts with the user's input; then the Agent is invoked to respond (which can be a text from the LLM or the activation of a Tool).

    while True:
        ## user input
        try:
            q = input('🙂 >')
        except EOFError:
            break
        if q == "quit":
            break
        if q.strip() == "":
            continue
        messages.append( {"role":"user", "content":q} )

        ## model
        agent_res = ollama.chat(
            model=llm,
            tools=[tool_search_web, tool_search_yf],
            messages=messages)

    Up to this point, the chat history could look something like this:
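    As a sketch, the history is just a list of role/content dictionaries (the user question below is invented for illustration):

```python
# Illustrative chat history after one user turn (the question is made up)
messages = [
    {"role": "system",
     "content": "You are an assistant with access to tools, you must decide when to use tools to answer the user's message."},
    {"role": "user",
     "content": "what is the latest news about nvidia?"}
]
```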

    If the model wants to use a Tool, the appropriate function should be run with the input parameters suggested by the LLM in its response object:
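    A minimal sketch of that response object (field values are invented for illustration; the real object returned by ollama.chat may carry extra fields):

```python
# Shape of a tool-call response (values are made up for the example)
agent_res = {
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"function": {"name": "search_web",
                          "arguments": {"query": "nvidia"}}}
        ]
    }
}

# Extracting the tool name and its arguments:
for tool in agent_res["message"]["tool_calls"]:
    t_name = tool["function"]["name"]          # "search_web"
    t_inputs = tool["function"]["arguments"]   # {"query": "nvidia"}
```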

    So our code needs to get that information and run the Tool function.

        ## response
        dic_tools = {'search_web':search_web, 'search_yf':search_yf}

        if "tool_calls" in agent_res["message"].keys():
            for tool in agent_res["message"]["tool_calls"]:
                t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
                if f := dic_tools.get(t_name):
                    ### calling tool
                    print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                    messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                    ### tool output
                    t_output = f(**tool["function"]["arguments"])
                    print(t_output)
                    ### final res
                    p = f'''Summarize this to answer the user's question, be as concise as possible: {t_output}'''
                    res = ollama.generate(model=llm, prompt=q+". "+p)["response"]
                else:
                    print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")

        if agent_res['message']['content'] != '':
            res = agent_res["message"]["content"]

        print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
        messages.append( {"role":"assistant", "content":res} )

    Now, if we run the full code, we can chat with our Agent.

    Advanced Agent (Coding)

    LLMs know how to code by being exposed to a large corpus of both code and natural language text, where they learn patterns, syntax, and semantics of Programming languages. The model learns the relationships between different parts of the code by predicting the next token in a sequence. In short, LLMs can generate Python code but can’t execute it, Agents can.

    I shall prepare a Tool allowing the Agent to execute code. In Python, you can easily create a shell to run code as a string with the built-in function exec().

    import io
    import contextlib
    
    def code_exec(code: str) -> str:
        output = io.StringIO()
        with contextlib.redirect_stdout(output):
            try:
                exec(code)
            except Exception as e:
                print(f"Error: {e}")
        return output.getvalue()
    
    tool_code_exec = {'type':'function', 'function':{
      'name': 'code_exec',
      'description': 'execute python code',
      'parameters': {'type': 'object',
                    'required': ['code'],
                    'properties': {
                        'code': {'type':'str', 'description':'code to execute'},
    }}}}
    
    ## test
    code_exec("a=1+1; print(a)")
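    One thing worth checking is that exceptions come back as text instead of crashing the chat loop. A quick self-contained check (repeating the helper defined above):

```python
import io
import contextlib

def code_exec(code: str) -> str:
    # Same helper as above: run the code, capture stdout, turn errors into text
    output = io.StringIO()
    with contextlib.redirect_stdout(output):
        try:
            exec(code)
        except Exception as e:
            print(f"Error: {e}")
    return output.getvalue()

print(code_exec("print(sum(range(5)))"))  # -> 10
print(code_exec("1/0"))                   # -> Error: division by zero
```

    Keep in mind that exec() runs arbitrary code in the current process, so only use this with inputs you trust.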

    Just like before, I'll write a prompt, but this time, at the beginning of the chat loop, I'll ask the user to provide a file path.

    prompt = '''You are an expert data scientist, and you have tools to execute python code.
    First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
    If you create a plot, ALWAYS add 'plt.show()' at the end.
    '''
    messages = [{"role":"system", "content":prompt}]
    start = True

    while True:
        ## user input
        try:
            if start is True:
                path = input('📁 Provide a CSV path >')
                q = "path = "+path
            else:
                q = input('🙂 >')
        except EOFError:
            break
        if q == "quit":
            break
        if q.strip() == "":
            continue

        messages.append( {"role":"user", "content":q} )

    Since coding tasks can be a little trickier for LLMs, I'm also going to add memory reinforcement. By default, during one session, there is no real long-term memory. LLMs have access to the chat history, so they can remember information temporarily and track the context and instructions you've given earlier in the conversation. However, memory doesn't always work as expected, especially if the LLM is small. Therefore, a good practice is to reinforce the model's memory by adding periodic reminders in the chat history.

    prompt = '''You are an expert data scientist, and you have tools to execute python code.
    First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
    If you create a plot, ALWAYS add 'plt.show()' at the end.
    '''
    messages = [{"role":"system", "content":prompt}]
    memory = '''Use the dataframe 'df'.'''
    start = True

    while True:
        ## user input
        try:
            if start is True:
                path = input('📁 Provide a CSV path >')
                q = "path = "+path
            else:
                q = input('🙂 >')
        except EOFError:
            break
        if q == "quit":
            break
        if q.strip() == "":
            continue

        ## memory
        if start is False:
            q = memory+"\n"+q
        messages.append( {"role":"user", "content":q} )

    Please note that the default context window in Ollama is 2048 tokens. If your machine can handle it, you can increase it by changing the number when the LLM is invoked:

        ## model
        agent_res = ollama.chat(
            model=llm,
            tools=[tool_code_exec],
            options={"num_ctx":2048},
            messages=messages)

    In this use case, the output of the Agent is mostly code and data, so I don't want the LLM to re-elaborate the responses.

        ## response
        dic_tools = {'code_exec':code_exec}

        if "tool_calls" in agent_res["message"].keys():
            for tool in agent_res["message"]["tool_calls"]:
                t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
                if f := dic_tools.get(t_name):
                    ### calling tool
                    print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                    messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                    ### tool output
                    t_output = f(**tool["function"]["arguments"])
                    ### final res
                    res = t_output
                else:
                    print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")

        if agent_res['message']['content'] != '':
            res = agent_res["message"]["content"]

        print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
        messages.append( {"role":"assistant", "content":res} )
        start = False

    Now, if we run the full code, we can chat with our Agent.

    Conclusion

    This article has covered the foundational steps of creating Agents from scratch using only Ollama. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases. 

    Stay tuned for Part 2, where we will dive deeper into more advanced examples.

    Full code for this article: GitHub

    I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.

    👉 Let’s Connect 👈


