
    AI Agents from Zero to Hero – Part 1

By Editor Times Featured | February 20, 2025

    Intro

AI Agents are autonomous programs that perform tasks, make decisions, and communicate with other agents. Usually, they use a set of tools to help complete tasks. In GenAI applications, these Agents process sequential reasoning and can use external tools (like web searches or database queries) when the LLM's knowledge isn't enough. Unlike a basic chatbot, which generates random text when uncertain, an AI Agent activates tools to provide more accurate, specific responses.

We are moving closer and closer to the concept of Agentic AI: systems that exhibit a higher level of autonomy and decision-making ability, without direct human intervention. While today's AI Agents respond reactively to human inputs, tomorrow's Agentic AIs will proactively engage in problem-solving and adjust their behavior based on the situation.

Today, building Agents from scratch is becoming as easy as training a logistic regression model was 10 years ago. Back then, Scikit-Learn provided a straightforward library for quickly training Machine Learning models with just a few lines of code, abstracting away much of the underlying complexity.
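To make the analogy concrete, this is the kind of few-line workflow that made Scikit-Learn popular (a minimal sketch; the choice of the bundled iris dataset is mine, for illustration):

```python
# Minimal Scikit-Learn workflow: train and score a logistic regression
# in a few lines, with the library hiding the optimization details.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # max_iter raised so the solver converges
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```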

In this tutorial, I'm going to show how to build different types of AI Agents from scratch, from simple to more advanced systems. I'll present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example.

    Setup

As I said, anyone can have a custom Agent running locally for free, without GPUs or API keys. The only necessary library is Ollama (pip install ollama==0.4.7), as it allows users to run LLMs locally, without needing cloud-based services, giving more control over data privacy and performance.

First of all, you need to download Ollama from the website.

Then, in the prompt shell of your laptop, use the command to download the selected LLM (for example, ollama pull qwen2.5). I'm going with Alibaba's Qwen, as it's both smart and lightweight.

After the download is completed, you can move on to Python and start writing code.

    import ollama
    llm = "qwen2.5"

Let's test the LLM:

    stream = ollama.generate(model=llm, prompt='''what time is it?''', stream=True)
    for chunk in stream:
        print(chunk['response'], end='', flush=True)

Clearly, the LLM per se is very limited, and it can't do much besides chatting. Therefore, we need to give it the possibility to take action, or in other words, to activate Tools.

One of the most common tools is the ability to search the Internet. In Python, the easiest way to do it is with the well-known private browser DuckDuckGo (pip install duckduckgo-search==6.3.5). You can directly use the original library or import the LangChain wrapper (pip install langchain-community==0.3.17).

With Ollama, in order to use a Tool, the function must be described in a dictionary.

    from langchain_community.tools import DuckDuckGoSearchResults
    def search_web(query: str) -> str:
      return DuckDuckGoSearchResults(backend="news").run(query)
    
    tool_search_web = {'type':'function', 'function':{
      'name': 'search_web',
      'description': 'Search the web',
      'parameters': {'type': 'object',
                    'required': ['query'],
                    'properties': {
                        'query': {'type':'str', 'description':'the topic or subject to search on the web'},
    }}}}
    ## test
    search_web(query="nvidia")

Internet searches can be very broad, and I want to give the Agent the option to be more precise. Let's say I'm planning to use this Agent to learn about financial updates, so I can give it a specific tool for that topic, like searching only a finance website instead of the whole web.

    def search_yf(query: str) -> str:
      engine = DuckDuckGoSearchResults(backend="news")
      return engine.run(f"site:finance.yahoo.com {query}")
    
    tool_search_yf = {'type':'function', 'function':{
      'name': 'search_yf',
      'description': 'Search for specific financial news',
      'parameters': {'type': 'object',
                    'required': ['query'],
                    'properties': {
                        'query': {'type':'str', 'description':'the financial topic or subject to search'},
    }}}}
    
    ## test
    search_yf(query="nvidia")

Simple Agent (WebSearch)

In my opinion, the most basic Agent should at least be able to choose between one or two Tools and re-elaborate the output of the action to give the user a proper and concise answer.

First, you need to write a prompt to describe the Agent's purpose (the more detailed, the better; mine is very generic), and that will be the first message in the chat history with the LLM.

    prompt='''You are an assistant with access to tools, you must decide when to use tools to answer user message.'''
    messages = [{"role":"system", "content":prompt}]

In order to keep the chat with the AI alive, I'll use a loop that starts with the user's input; then the Agent is invoked to answer (which can be a text from the LLM or the activation of a Tool).

    while True:
        ## user input
        try:
            q = input('🙂 >')
        except EOFError:
            break
        if q == "quit":
            break
        if q.strip() == "":
            continue
        messages.append( {"role":"user", "content":q} )
       
        ## model
        agent_res = ollama.chat(
            model=llm,
            tools=[tool_search_web, tool_search_yf],
            messages=messages)

Up to this point, the chat history could look something like this:
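The original screenshot of the chat history isn't reproduced here, but at this point it contains roughly the following (the user question is illustrative):

```python
# Sketch of the chat history after the first user turn (example content).
messages = [
    {"role": "system", "content": "You are an assistant with access to tools, "
                                  "you must decide when to use tools to answer user message."},
    {"role": "user",   "content": "what's the latest news about nvidia?"},
]
print(messages)
```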

If the model wants to use a Tool, the appropriate function needs to be run with the input parameters suggested by the LLM in its response object:
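For reference, when the model decides to call a Tool, the response returned by ollama.chat carries the function name and arguments in a tool_calls list, roughly shaped like this (the values are illustrative, not an actual model response):

```python
# Sketch of an ollama.chat response that requests a tool call (values illustrative).
agent_res = {
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"function": {"name": "search_web",
                          "arguments": {"query": "nvidia"}}},
        ],
    }
}
tool_call = agent_res["message"]["tool_calls"][0]["function"]
print(tool_call["name"], "->", tool_call["arguments"])
```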

So our code needs to get that information and run the Tool function.

        ## response
        dic_tools = {'search_web':search_web, 'search_yf':search_yf}
    
        if "tool_calls" in agent_res["message"].keys():
            for tool in agent_res["message"]["tool_calls"]:
                t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
                if f := dic_tools.get(t_name):
                    ### calling tool
                    print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                    messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                    ### tool output
                    t_output = f(**tool["function"]["arguments"])
                    print(t_output)
                    ### final res
                    p = f'''Summarize this to answer user question, be as concise as possible: {t_output}'''
                    res = ollama.generate(model=llm, prompt=q+". "+p)["response"]
                else:
                    print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
     
        if agent_res['message']['content'] != '':
            res = agent_res["message"]["content"]
         
        print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
        messages.append( {"role":"assistant", "content":res} )

    Now, if we run the full code, we can chat with our Agent.

    Advanced Agent (Coding)

LLMs know how to code because they are exposed to a large corpus of both code and natural language text, where they learn patterns, syntax, and semantics of programming languages. The model learns the relationships between different parts of the code by predicting the next token in a sequence. In short, LLMs can generate Python code but can't execute it; Agents can.

I shall prepare a Tool allowing the Agent to execute code. In Python, you can easily create a shell to run code passed as a string with the built-in function exec().

    import io
    import contextlib
    
    def code_exec(code: str) -> str:
        output = io.StringIO()
        with contextlib.redirect_stdout(output):
            try:
                exec(code)
            except Exception as e:
                print(f"Error: {e}")
        return output.getvalue()
    
    tool_code_exec = {'type':'function', 'function':{
      'name': 'code_exec',
      'description': 'execute python code',
      'parameters': {'type': 'object',
                    'required': ['code'],
                    'properties': {
                        'code': {'type':'str', 'description':'code to execute'},
    }}}}
    
    ## test
    code_exec("a=1+1; print(a)")

Just like before, I'll write a prompt, but this time, at the beginning of the chat loop, I'll ask the user to provide a file path.

    prompt='''You are an expert data scientist, and you have tools to execute python code.
    First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
    If you create a plot, ALWAYS add 'plt.show()' at the end.
    '''
    messages = [{"role":"system", "content":prompt}]
    start = True
    
    while True:
        ## user input
        try:
            if start is True:
                path = input('📁 Provide a CSV path >')
                q = "path = "+path
            else:
                q = input('🙂 >')
        except EOFError:
            break
        if q == "quit":
            break
        if q.strip() == "":
            continue
       
        messages.append( {"role":"user", "content":q} )

Since coding tasks can be a little trickier for LLMs, I'm also going to add memory reinforcement. By default, during one session, there is no real long-term memory. LLMs have access to the chat history, so they can remember information temporarily and track the context and instructions you've given earlier in the conversation. However, memory doesn't always work as expected, especially if the LLM is small. Therefore, a good practice is to reinforce the model's memory by adding periodic reminders in the chat history.

    prompt='''You are an expert data scientist, and you have tools to execute python code.
    First of all, execute the following code exactly as it is: 'df=pd.read_csv(path); print(df.head())'
    If you create a plot, ALWAYS add 'plt.show()' at the end.
    '''
    messages = [{"role":"system", "content":prompt}]
    memory = '''Use the dataframe 'df'.'''
    start = True
    
    while True:
        ## user input
        try:
            if start is True:
                path = input('📁 Provide a CSV path >')
                q = "path = "+path
            else:
                q = input('🙂 >')
        except EOFError:
            break
        if q == "quit":
            break
        if q.strip() == "":
            continue
       
        ## memory
        if start is False:
            q = memory+"\n"+q
        messages.append( {"role":"user", "content":q} )

Please note that the default context window in Ollama is 2048 tokens. If your machine can handle it, you can increase it by changing the number when the LLM is invoked:

        ## model
        agent_res = ollama.chat(
            model=llm,
            tools=[tool_code_exec],
            options={"num_ctx":2048},
            messages=messages)

In this use case, the output of the Agent is mostly code and data, so I don't want the LLM to re-elaborate the responses.

        ## response
        dic_tools = {'code_exec':code_exec}
       
        if "tool_calls" in agent_res["message"].keys():
            for tool in agent_res["message"]["tool_calls"]:
                t_name, t_inputs = tool["function"]["name"], tool["function"]["arguments"]
                if f := dic_tools.get(t_name):
                    ### calling tool
                    print('🔧 >', f"\x1b[1;31m{t_name} -> Inputs: {t_inputs}\x1b[0m")
                    messages.append( {"role":"user", "content":"use tool '"+t_name+"' with inputs: "+str(t_inputs)} )
                    ### tool output
                    t_output = f(**tool["function"]["arguments"])
                    ### final res
                    res = t_output
                else:
                    print('🤬 >', f"\x1b[1;31m{t_name} -> NotFound\x1b[0m")
     
        if agent_res['message']['content'] != '':
            res = agent_res["message"]["content"]
         
        print("👽 >", f"\x1b[1;30m{res}\x1b[0m")
        messages.append( {"role":"assistant", "content":res} )
        start = False

    Now, if we run the full code, we can chat with our Agent.

    Conclusion

    This article has covered the foundational steps of creating Agents from scratch using only Ollama. With these building blocks in place, you are already equipped to start developing your own Agents for different use cases. 

    Stay tuned for Part 2, where we will dive deeper into more advanced examples.

    Full code for this article: GitHub

    I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.

    👉 Let’s Connect 👈


