
How to Build an AI Agent with LangChain (2026)

AI agents don't just answer questions — they take actions. They search the web, query databases, call APIs, and chain multiple steps to accomplish goals. LangChain makes building these agents straightforward. Here's how.

What Is an AI Agent?

Chatbot:  User asks → LLM responds → done
Agent:    User asks → LLM decides what to do → uses tools → evaluates result
          → decides next step → uses more tools → returns final answer

An agent has:

  1. An LLM (the brain — decides what to do)
  2. Tools (what it can do — search, calculate, query APIs)
  3. Memory (what it remembers across turns)
  4. A loop (plan → act → observe → repeat)
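The plan → act → observe loop can be sketched in plain Python. This is illustrative only — `llm_decide` is a hypothetical stub standing in for a real LLM call, and the hard-coded decision path mimics what the model would choose:

```python
# Minimal sketch of the agent loop. "llm_decide" is a hypothetical stub
# standing in for a real LLM call.
def llm_decide(question, observations):
    if not observations:
        # No observations yet: plan a tool call
        return ("tool", "calculate", "2 + 2")
    # Enough information gathered: finish
    return ("final", f"The answer is {observations[-1]}")

def run_agent(question, tools):
    observations = []
    while True:
        action = llm_decide(question, observations)        # plan
        if action[0] == "final":
            return action[1]
        _, tool_name, tool_input = action
        observations.append(tools[tool_name](tool_input))  # act, then observe

print(run_agent("What is 2 + 2?", {"calculate": lambda e: str(eval(e))}))
# prints "The answer is 4"
```

LangChain's agent classes implement exactly this loop for you, with real tool-call parsing and retries.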

Prerequisites

pip install langchain langchain-openai langchain-community langgraph

Then set your API key:

import os
os.environ["OPENAI_API_KEY"] = "your-key"

Step 1: Define Tools

Tools are functions your agent can call:

from langchain_core.tools import tool
from langchain_community.tools import DuckDuckGoSearchRun

# Built-in search tool
search = DuckDuckGoSearchRun()

# Custom tools
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. Use for any math calculations."""
    try:
        result = eval(expression)  # In production, use a safe math parser
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city. Use when asked about weather."""
    import requests
    response = requests.get(f"https://wttr.in/{city}?format=3", timeout=10)
    return response.text

@tool
def query_database(sql: str) -> str:
    """Run a read-only SQL query against the business database.
    Use for questions about sales, customers, or inventory."""
    import sqlite3
    conn = sqlite3.connect("business.db")
    try:
        result = conn.execute(sql).fetchall()
        return str(result)
    except Exception as e:
        return f"SQL Error: {e}"
    finally:
        conn.close()

tools = [search, calculate, get_weather, query_database]

Key principle: Tool docstrings matter. The LLM reads them to decide which tool to use.
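To see why, here's a rough sketch (not LangChain internals — just an illustration built with the standard library) of the function-calling spec a tool turns into. The docstring becomes the description field, and that text is what the model reads when picking a tool:

```python
import inspect

def tool_spec(fn):
    """Build an OpenAI-style function-calling spec from a Python function.
    Illustrative only: LangChain's @tool decorator does this (and more) for you."""
    params = {
        name: {"type": "string"}  # assume string args for this sketch
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),  # the docstring the LLM reads
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. Use for any math calculations."""
    return str(eval(expression))

spec = tool_spec(calculate)
print(spec["description"])  # this text drives tool selection
```

A vague docstring like "does math stuff" gives the model nothing to match against; the explicit "Use for any math calculations" sentence is what makes selection reliable.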

Step 2: Create the Agent

Using LangGraph (the recommended approach in 2026):

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Create the agent
agent = create_react_agent(llm, tools)

# Run it
result = agent.invoke({
    "messages": [("user", "What's the weather in Tokyo and convert the temperature from C to F?")]
})

print(result["messages"][-1].content)

What happens internally:

1. LLM reads the question
2. Decides: "I need the weather tool for Tokyo"
3. Calls get_weather("Tokyo") → "Tokyo: ☁️ +15°C"
4. Decides: "Now I need to convert 15°C to °F"
5. Calls calculate("15 * 9/5 + 32") → "59.0"
6. Returns: "Tokyo is currently 15°C (59°F) and cloudy"

Step 3: Add Memory

Agents need memory to handle follow-up questions:

from langgraph.checkpoint.memory import MemorySaver

# Add memory
memory = MemorySaver()
agent = create_react_agent(llm, tools, checkpointer=memory)

# First message
config = {"configurable": {"thread_id": "user-123"}}
agent.invoke({"messages": [("user", "What's the weather in Tokyo?")]}, config)

# Follow-up — agent remembers context
agent.invoke({"messages": [("user", "What about Osaka?")]}, config)
# Agent knows you're asking about weather because of conversation history
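What the checkpointer is doing can be sketched as a dict of histories keyed by thread_id. This is a simplification — MemorySaver persists full graph state, not just messages — and the reply line is a stand-in for a real agent call:

```python
# Rough sketch of checkpointing: conversation state keyed by thread_id.
checkpoints = {}

def invoke_with_memory(thread_id, user_message):
    history = checkpoints.setdefault(thread_id, [])
    history.append(("user", user_message))
    # Stand-in for the agent: a real call passes the whole history to the LLM
    reply = f"(reply informed by {len(history) - 1} earlier messages)"
    history.append(("assistant", reply))
    return reply

invoke_with_memory("user-123", "What's the weather in Tokyo?")
invoke_with_memory("user-123", "What about Osaka?")  # sees the Tokyo turn
invoke_with_memory("user-456", "What about Osaka?")  # new thread: no context
```

This is why the same thread_id must be passed on every call: a new thread_id starts an empty history, and the follow-up loses its meaning.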

Step 4: Add a System Prompt

Control the agent's personality and behavior:

system_prompt = """You are a helpful business analyst assistant. You have access to:
- Web search for current information
- A calculator for math
- Weather data
- The company database for sales and customer data

Rules:
- Always show your reasoning before giving answers
- When querying the database, explain what data you're looking for
- Format numbers with commas and currency symbols
- If you're unsure, say so — don't make up data
- Never run DELETE or UPDATE queries — read-only access only
"""

agent = create_react_agent(llm, tools, prompt=system_prompt)  # older langgraph versions used state_modifier=

Step 5: Build a Multi-Step Agent

For complex tasks, agents chain multiple tools:

@tool
def scrape_webpage(url: str) -> str:
    """Fetch and extract text from a URL. Use for reading web pages."""
    import requests
    from bs4 import BeautifulSoup
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    return soup.get_text()[:3000]  # Limit to 3000 chars

@tool
def save_report(filename: str, content: str) -> str:
    """Save a report to a file. Use when the user asks to save or export results."""
    with open(f"reports/{filename}", "w") as f:
        f.write(content)
    return f"Report saved to reports/{filename}"

tools = [search, calculate, scrape_webpage, save_report, query_database]
agent = create_react_agent(llm, tools)

# Complex query that requires multiple steps
result = agent.invoke({
    "messages": [("user", """
        Research our top 3 competitors' pricing pages. 
        Compare their prices with our current pricing from the database.
        Save a competitive analysis report.
    """)]
})

The agent will:

  1. Search for competitor pricing pages
  2. Scrape each competitor's pricing page
  3. Query your database for current pricing
  4. Compare the data
  5. Generate and save a report

Step 6: Error Handling and Safety

Production agents need guardrails:

import re
import sqlite3
from langchain_core.tools import tool

@tool
def query_database(sql: str) -> str:
    """Run a read-only SQL query against the business database."""
    sql_upper = sql.upper().strip()

    # Block dangerous operations (word-boundary match avoids false positives
    # on identifiers like "updated_at")
    dangerous = ["DROP", "DELETE", "UPDATE", "INSERT", "ALTER", "TRUNCATE"]
    for keyword in dangerous:
        if re.search(rf"\b{keyword}\b", sql_upper):
            return f"Error: {keyword} operations are not allowed. Read-only access only."

    # Block multiple statements (one trailing semicolon is fine)
    if sql_upper.count(";") > 1:
        return "Error: Multiple SQL statements not allowed."

    conn = sqlite3.connect("business.db")
    try:
        conn.execute("PRAGMA query_only = ON")  # Read-only mode
        result = conn.execute(sql).fetchall()
        if len(result) > 100:
            return str(result[:100]) + f"\n... ({len(result)} total rows, showing first 100)"
        return str(result)
    except Exception as e:
        return f"SQL Error: {e}"
    finally:
        conn.close()

Step 7: Deploy as an API

Wrap your agent in a FastAPI endpoint:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    message: str
    thread_id: str = "default"

@app.post("/agent")
async def run_agent(query: Query):
    config = {"configurable": {"thread_id": query.thread_id}}
    result = agent.invoke(
        {"messages": [("user", query.message)]},
        config
    )
    return {"response": result["messages"][-1].content}

Common Patterns

1. RAG Agent (Search Your Docs)

from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma(embedding_function=OpenAIEmbeddings())

@tool
def search_knowledge_base(query: str) -> str:
    """Search the company knowledge base for internal information,
    policies, procedures, and documentation."""
    docs = vectorstore.similarity_search(query, k=3)
    return "\n\n".join(doc.page_content for doc in docs)
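Under the hood, similarity_search embeds the query and ranks stored chunks by vector similarity. Here's a toy sketch of that ranking with hand-made three-dimensional vectors standing in for real embeddings (the documents and numbers are invented for illustration):

```python
import math

# Hand-made vectors stand in for real embeddings.
docs = {
    "Refund policy: refunds within 30 days.": [0.9, 0.1, 0.0],
    "Office hours: 9am to 5pm weekdays.":     [0.1, 0.8, 0.2],
    "VPN setup guide for remote staff.":      [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def similarity_search(query_vec, k=2):
    # Rank every stored chunk by similarity to the query vector
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector "close to" the refund chunk retrieves it first
print(similarity_search([0.85, 0.15, 0.05], k=1))
```

In production, Chroma handles the embedding and the nearest-neighbor index; the `k=3` in the tool above is the same knob as `k` here — how many chunks the agent gets to read.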

2. Approval-Required Actions

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Draft an email for review. The email will NOT be sent automatically —
    it will be queued for human approval."""
    # Save to approval queue instead of sending
    # (save_to_approval_queue is your own queueing implementation — not shown)
    save_to_approval_queue(to=to, subject=subject, body=body)
    return f"Email drafted to {to}. Queued for human approval."

3. Structured Output

from pydantic import BaseModel, Field

class AnalysisResult(BaseModel):
    summary: str = Field(description="Brief summary of findings")
    key_metrics: list[str] = Field(description="Important numbers found")
    recommendations: list[str] = Field(description="Actionable recommendations")
    confidence: float = Field(description="Confidence in analysis, 0-1")

structured_llm = llm.with_structured_output(AnalysisResult)
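What this buys you, sketched with stdlib json: the model is constrained to emit JSON matching the schema, which parses into fields you can trust instead of free text you have to scrape. The `raw` string below is a made-up example response:

```python
import json

# "raw" is a made-up model response for illustration.
raw = ('{"summary": "Q3 sales up 12%", "key_metrics": ["12%"], '
       '"recommendations": ["Expand the pro tier"], "confidence": 0.8}')
result = json.loads(raw)

# Every schema field is present, so downstream code can index safely
required = {"summary", "key_metrics", "recommendations", "confidence"}
missing = required - result.keys()
assert not missing, f"Model omitted fields: {missing}"
print(result["summary"])  # prints "Q3 sales up 12%"
```

With `with_structured_output`, LangChain does this parsing and validation for you and hands back an `AnalysisResult` instance rather than a dict.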

FAQ

LangChain vs building from scratch?

LangChain saves weeks of boilerplate — tool calling, memory, streaming, and error handling are built in. For simple agents, raw API calls work. For anything complex, LangChain's abstractions pay off.

Which LLM is best for agents?

GPT-4o and Claude 3.5 Sonnet are the best for tool calling in 2026. Both handle multi-step reasoning well. GPT-4o is slightly better at tool selection; Claude is better at reasoning about results.

How do I prevent hallucination in agents?

Give agents tools to verify information instead of guessing. An agent with a search tool hallucinates less than one without. Also: explicit system prompt rules like "If you don't know, search first."

Is LangGraph better than AgentExecutor?

Yes. LangGraph (the current recommended approach) gives you more control over agent flow, better error handling, and supports complex multi-agent architectures. AgentExecutor is legacy.

Bottom Line

Building AI agents with LangChain follows a clear pattern: define tools, create the agent, add memory, deploy as an API. Start simple — a search tool and a calculator — then add domain-specific tools as needed.

The key insight: agents are only as good as their tools. Invest time in building reliable, well-documented tools and the LLM will use them effectively.
