# Introduction
AI has moved from simply chatting with large language models (LLMs) to giving them arms and legs, which allows them to perform actions in the digital world. These are often called Python AI agents — autonomous software programs powered by LLMs that can perceive their environment, make decisions, use external tools (like APIs or code execution), and take actions to achieve specific goals without constant human intervention.
If you have been wanting to experiment with building your own AI agent but felt weighed down by complex frameworks, you are in the right place. Today, we are going to look at smolagents, a powerful yet incredibly simple library developed by Hugging Face.
By the end of this article, you will understand what makes smolagents unique, and more importantly, you will have a functioning code agent that can fetch live data from the internet. Let’s explore the implementation.
# Understanding Code Agents
Before we start coding, let’s understand the concept. An agent is essentially an LLM equipped with tools. You give the model a goal (like “get the current weather in London”), and it decides which tools to use to achieve that goal.
What makes the Hugging Face agents in the smolagents library special is their approach to reasoning. Unlike many frameworks that generate JSON or text to decide which tool to use, smolagents agents are code agents. This means they write Python code snippets to chain together their tools and logic.
This is powerful because code is precise. It is the most natural way to express complex instructions like loops, conditionals, and data manipulation. Instead of the LLM guessing how to combine tools, it simply writes the Python script to do it. As an open-source agent framework, smolagents is transparent, lightweight, and perfect for learning the fundamentals.
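To make the contrast concrete, here is an illustrative (not framework-specific) comparison: a JSON-style tool call expresses one call per step, while a code agent emits an ordinary Python snippet that can loop over tools and combine results. The `get_weather` stub below is a stand-in for a real tool.

```python
import json

# JSON-style tool calling (format is illustrative; it varies by framework):
# the model emits one structured call per reasoning step.
json_call = json.loads('{"tool": "get_weather", "arguments": {"city": "London"}}')

# Code-agent style: the model emits a Python snippet that can chain
# multiple calls in a single step. Here we run it against a stub tool.
def get_weather(city):  # stand-in for a real tool
    return f"Sunny in {city}"

snippet = 'results = [get_weather(c) for c in ["London", "Paris"]]'
namespace = {"get_weather": get_weather}
exec(snippet, namespace)
print(namespace["results"])
```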
// Prerequisites
To follow along, you will need:
- Python knowledge. You should be comfortable with variables, functions, and pip installs.
- A Hugging Face token. Since we are using the Hugging Face ecosystem, we will use their free inference API. You can get a token by signing up at huggingface.co and visiting your settings.
- A Google account is optional. If you do not want to install anything locally, you can run this code in a Google Colab notebook.
# Setting Up Your Environment
Let’s get our workspace ready. Open your terminal (or a new Colab notebook) and create a project directory:

```bash
mkdir demo-project
cd demo-project
```
Next, let’s set up our security token. It is best to store this as an environment variable. If you are using Google Colab, you can use the Secrets tab in the left panel to add HF_TOKEN and then access it via userdata.get('HF_TOKEN').
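On a local machine, the quickest way to experiment is to export the variable in your current shell session (the .env file approach we set up later is better for projects). The token value here is a placeholder:

```shell
# Set the token for the current terminal session (placeholder value)
export HF_TOKEN=your_huggingface_token_here

# Verify it is visible to child processes
echo "$HF_TOKEN"
```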
# Building Your First Agent: The Weather Fetcher
For our first project, we will build an agent that can fetch weather data for a given city. To do this, the agent needs a tool. A tool is just a function that the LLM can call. We will use a free, public API called wttr.in, which can return weather data as plain text or JSON; we will use its plain-text format.
// Installing and Setting Up
Create a virtual environment:

```bash
python -m venv env
```

A virtual environment isolates your project’s dependencies from your system. Now, let’s activate the virtual environment.

Windows:

```bash
env\Scripts\activate
```

macOS/Linux:

```bash
source env/bin/activate
```

You will see (env) in your terminal prompt when the environment is active.
Install the required packages:

```bash
pip install smolagents requests python-dotenv
```
We are installing smolagents, Hugging Face’s lightweight agent framework for building AI agents with tool-use capabilities; requests, the HTTP library for making API calls; and python-dotenv, which will load environment variables from a .env file.
That is it — all with just one command. This simplicity is a core part of the smolagents philosophy.
Figure 1: Installing smolagents
// Setting Up Your API Token
Create a .env file in your project root and add the following line, replacing the placeholder with your actual token:

```
HF_TOKEN=your_huggingface_token_here
```
Get your token from huggingface.co/settings/tokens. Your project structure should look like this:
Figure 2: Project structure
// Importing Libraries
Open your demo.py file and paste the following code:

```python
import os

import requests
from dotenv import load_dotenv
from smolagents import CodeAgent, InferenceClientModel, tool

# Load variables (including HF_TOKEN) from the .env file
load_dotenv()
```
- requests: For making HTTP calls to the weather API
- os: To securely read environment variables
- smolagents: Hugging Face’s lightweight agent framework, providing:
  - @tool: A decorator to define agent-callable functions.
  - CodeAgent: An agent that writes and executes Python code.
  - InferenceClientModel: Connects to Hugging Face’s hosted LLMs.
In smolagents, defining a tool is straightforward. We will create a function that takes a city name as input and returns the weather condition. Add the following code to your demo.py file:
```python
@tool
def get_weather(city: str) -> str:
    """
    Returns the current weather forecast for a specified city.

    Args:
        city: The name of the city to get the weather for.
    """
    # wttr.in is a free weather service; %C+%t requests condition and temperature
    response = requests.get(f"https://wttr.in/{city}?format=%C+%t", timeout=10)
    if response.status_code == 200:
        # The response is plain text like "Partly cloudy +15°C"
        return f"The weather in {city} is: {response.text.strip()}"
    return "Sorry, I couldn't fetch the weather data."
```
Let’s break this down:
- We import the tool decorator from smolagents. This decorator transforms our regular Python function into a tool that the agent can understand and use.
- The docstring (the `""" ... """` block) in the get_weather function is critical. The agent reads this description to understand what the tool does and how to use it.
- Inside the function, we make a simple HTTP request to wttr.in, a free weather service that returns plain-text forecasts.
- Type hints (city: str) tell the agent what inputs to provide.
This is a perfect example of tool calling in action. We are giving the agent a new capability.
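To see why the docstring and type hints matter, here is a sketch of the mechanism (not smolagents' actual implementation): a framework can read a function's signature and docstring with the standard `inspect` module to build the tool description that is shown to the LLM.

```python
import inspect

def get_weather(city: str) -> str:
    """Returns the current weather forecast for a specified city."""
    return f"Weather for {city}"

# Build a minimal tool spec from the function's own metadata
sig = inspect.signature(get_weather)
spec = {
    "name": get_weather.__name__,
    "description": inspect.getdoc(get_weather),
    "inputs": {name: p.annotation.__name__ for name, p in sig.parameters.items()},
}
print(spec)
```

Without the docstring and type hints, this spec would be empty, and the LLM would have no idea what the tool does or what arguments it expects.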
// Configuring the LLM
```python
hf_token = os.getenv("HF_TOKEN")
if hf_token is None:
    raise ValueError("Please set the HF_TOKEN environment variable")

model = InferenceClientModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
    token=hf_token,
)
```
The agent needs a brain — a large language model (LLM) that can reason about tasks. Here we use:
- Qwen2.5-Coder-32B-Instruct: A powerful code-focused model hosted on Hugging Face
- HF_TOKEN: Your Hugging Face API token, stored in a .env file for security
Now, we need to create the agent itself.
```python
agent = CodeAgent(
    tools=[get_weather],
    model=model,
    add_base_tools=False,
)
```
CodeAgent is a special agent type that:
- Writes Python code to solve problems
- Executes that code in a sandboxed environment
- Can chain multiple tool calls together
Here, we are instantiating a CodeAgent. We pass it a list containing our get_weather tool and the model object. The add_base_tools=False argument tells it not to include any default tools, keeping our agent simple for now.
// Running the Agent
This is the exciting part. Let’s give our agent a task. Run the agent with a specific prompt:
```python
response = agent.run(
    "Can you tell me the weather in Paris and also in Tokyo?"
)
print(response)
```
When you call agent.run(), the agent:
- Reads your prompt.
- Reasons about what tools it needs.
- Generates code that calls get_weather("Paris") and get_weather("Tokyo").
- Executes the code and returns the results.
Figure 3: smolagents response
When you run this code, you will witness the magic of a Hugging Face agent. The agent receives your request. It sees that it has a tool called get_weather. It then writes a small Python script in its “mind” (using the LLM) that looks something like this:
The following is what the agent generates internally, not code you write yourself:

```python
weather_paris = get_weather(city="Paris")
weather_tokyo = get_weather(city="Tokyo")
final_answer(f"Here is the weather: {weather_paris} and {weather_tokyo}")
```
Figure 4: smolagents final response
It executes this code, fetches the data, and returns a friendly answer. You have just built a code agent that can browse the web via APIs.
// How It Works Behind the Scenes
Figure 5: The inner workings of an AI code agent
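smolagents handles this loop for you, but the core idea can be sketched in a few lines. This is a toy, with a hard-coded "model" standing in for the LLM: the model emits Python source, and the framework executes it in a namespace that exposes the tools.

```python
# Toy sketch of one code-agent step (not smolagents' real internals)
def fake_model(prompt):
    # A real LLM would generate this snippet based on the prompt;
    # here it is hard-coded for illustration.
    return 'final_answer(get_weather("Paris"))'

def get_weather(city):  # stub tool
    return f"Cloudy +12°C in {city}"

answer = {}
def final_answer(text):
    answer["value"] = text

# The framework executes the generated code with the tools in scope
code = fake_model("What is the weather in Paris?")
exec(code, {"get_weather": get_weather, "final_answer": final_answer})
print(answer["value"])
```

The real library adds the important parts this sketch omits: sandboxing the execution, feeding errors back to the model for retries, and iterating over multiple steps.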
// Taking It Further: Adding More Tools
The power of agents grows with their toolkit. What if we wanted to save the weather report to a file? We can create another tool.
```python
@tool
def save_to_file(content: str, filename: str = "weather_report.txt") -> str:
    """
    Saves the provided text content to a file.

    Args:
        content: The text content to save.
        filename: The name of the file to save to (default: weather_report.txt).
    """
    with open(filename, "w") as f:
        f.write(content)
    return f"Content successfully saved to {filename}"

# Re-initialize the agent with both tools
agent = CodeAgent(
    tools=[get_weather, save_to_file],
    model=model,
)

agent.run("Get the weather for London and save the report to a file called london_weather.txt")
```
Now, your agent can fetch data and interact with your local file system. This combination of skills is what makes Python AI agents so versatile.
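For the two-tool request above, the agent's generated snippet would plausibly look like the code below. It is shown here with stub tools so it runs standalone (in a real run, the agent calls your decorated functions instead):

```python
import os
import tempfile

# Stub tools standing in for the decorated get_weather / save_to_file
def get_weather(city):
    return f"The weather in {city} is: Rainy +9°C"

def save_to_file(content, filename="weather_report.txt"):
    with open(filename, "w") as f:
        f.write(content)
    return f"Content successfully saved to {filename}"

# Plausible agent-generated code: fetch the data, then persist it
path = os.path.join(tempfile.gettempdir(), "london_weather.txt")
report = get_weather("London")
print(save_to_file(report, path))
```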
# Conclusion
In just a few minutes and with fewer than 20 lines of core logic, you have built a functional AI agent. We have seen how smolagents simplifies the process of creating code agents that write and execute Python to solve problems.
The beauty of this open-source agent framework is that it removes the boilerplate, allowing you to focus on the fun part: building the tools and defining the tasks. You are no longer just chatting with an AI; you are collaborating with one that can act. This is just the beginning. You can now explore giving your agent access to the internet via search APIs, hook it up to a database, or let it control a web browser.
Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to craft compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.

