Building an Agentic Chatbot with Vercel AI SDK

A practical guide to building AI agents that can call tools, chain actions, and solve real problems using the Vercel AI SDK in Next.js.

By Sourav Mishra · 7 min read

Most AI chatbot tutorials stop at "hello world." They show you how to stream text, maybe add a toy weather tool that returns random numbers, and call it a day.

That's not useful.

This guide builds an actual agentic chatbot—one that chains tools together, makes decisions, and accomplishes real tasks.

What Makes an Agent "Agentic"?

An agent isn't just an LLM that responds to prompts. It's a system that:

  1. Decides what tools to use based on context
  2. Executes those tools with real-world effects
  3. Chains multiple tool calls to accomplish complex tasks
  4. Adapts based on tool results

The key difference from a basic chatbot: an agent operates in a loop, calling tools until the task is done.
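Here is that loop in miniature. This is a conceptual sketch, not the SDK's actual internals: `callModel` is a hypothetical stand-in for the LLM, mocked here so the loop runs end to end without an API key.

```typescript
// Conceptual sketch of the agent loop; not the SDK's real implementation.
// `callModel` is a hypothetical stand-in for the LLM, mocked so the
// loop is runnable without an API key.
type ToolCall = { tool: string; args: Record<string, unknown> };
type ModelTurn = { text?: string; toolCalls?: ToolCall[] };
type Message = { role: string; content?: string; result?: unknown };

const tools: Record<string, (args: any) => unknown> = {
  add: ({ a, b }: { a: number; b: number }) => a + b,
};

// Mock model: requests one tool call, then answers once it sees a result.
function callModel(history: Message[]): ModelTurn {
  const hasToolResult = history.some((m) => m.role === 'tool');
  return hasToolResult
    ? { text: 'The answer is 5.' }
    : { toolCalls: [{ tool: 'add', args: { a: 2, b: 3 } }] };
}

export function runAgentLoop(prompt: string, maxSteps = 10): string {
  const history: Message[] = [{ role: 'user', content: prompt }];
  for (let step = 0; step < maxSteps; step++) {
    const turn = callModel(history);
    if (turn.text) return turn.text; // plain text means the task is done
    for (const call of turn.toolCalls ?? []) {
      const result = tools[call.tool](call.args); // execute the tool
      history.push({ role: 'tool', result }); // feed result back to the model
    }
  }
  return 'Step limit reached without an answer.';
}
```

Everything the SDK adds on top of this shape—streaming, typed tool schemas, stop conditions—is refinement of the same loop.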

The Setup

We're using the Vercel AI SDK with Next.js App Router. If you don't have a project yet:

npx create-next-app@latest my-agent --typescript --tailwind --app
cd my-agent
npm install ai @ai-sdk/openai zod

Create a .env.local file with your API keys (the web search tool below also needs a Serper key):

OPENAI_API_KEY=sk-your-key-here
SERPER_API_KEY=your-serper-key-here

Building Real Tools

Forget random number generators. Here are tools that actually do something.

Tool 1: Web Search

// lib/tools/search.ts
import { tool } from 'ai';
import { z } from 'zod';

export const searchWeb = tool({
  description: 'Search the web for current information on a topic',
  inputSchema: z.object({
    query: z.string().describe('The search query'),
  }),
  execute: async ({ query }) => {
    // Using a real search API (Serper, SerpAPI, or similar)
    const response = await fetch('https://google.serper.dev/search', {
      method: 'POST',
      headers: {
        'X-API-KEY': process.env.SERPER_API_KEY!,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ q: query, num: 5 }),
    });
    
    const data = await response.json();
    return data.organic?.slice(0, 3).map((r: any) => ({
      title: r.title,
      snippet: r.snippet,
      link: r.link,
    })) ?? [];
  },
});
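One refinement worth making: pull the response mapping out of execute into a pure function, so it can be unit-tested without hitting the Serper API. The organic shape below is an assumption based on the request above.

```typescript
// Pure helper mirroring the mapping inside searchWeb's execute(), so it
// can be tested without network access. The `organic` field shape is an
// assumption based on Serper's response format.
type OrganicResult = { title: string; snippet: string; link: string };

export function topResults(
  data: { organic?: OrganicResult[] },
  limit = 3,
): OrganicResult[] {
  return (
    data.organic?.slice(0, limit).map((r) => ({
      title: r.title,
      snippet: r.snippet,
      link: r.link,
    })) ?? []
  );
}
```

Capping at three results is deliberate: everything a tool returns flows back into the model's context window, so smaller payloads mean cheaper, more focused follow-up steps.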

Tool 2: Read URL Content

// lib/tools/read-url.ts
import { tool } from 'ai';
import { z } from 'zod';

export const readUrl = tool({
  description: 'Read and extract the main content from a URL',
  inputSchema: z.object({
    url: z.string().url().describe('The URL to read'),
  }),
  execute: async ({ url }) => {
    const response = await fetch(url);
    const html = await response.text();
    
    // Simple content extraction (use a proper library in production)
    const textContent = html
      .replace(/<script[^>]*>[\s\S]*?<\/script>/gi, '')
      .replace(/<style[^>]*>[\s\S]*?<\/style>/gi, '')
      .replace(/<[^>]+>/g, ' ')
      .replace(/\s+/g, ' ')
      .trim()
      .slice(0, 4000);
    
    return { url, content: textContent };
  },
});
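The regex extraction is deliberately naive. Factoring it into a standalone function makes it easy to test now, and easy to swap for a proper readability library later without touching the tool definition:

```typescript
// The same naive extraction as readUrl's execute(), as a pure function.
// Good enough for a demo; replace with a real HTML parser in production.
export function extractText(html: string, maxChars = 4000): string {
  return html
    .replace(/<script[^>]*>[\s\S]*?<\/script>/gi, '') // drop scripts
    .replace(/<style[^>]*>[\s\S]*?<\/style>/gi, '') // drop styles
    .replace(/<[^>]+>/g, ' ') // strip remaining tags
    .replace(/\s+/g, ' ') // collapse whitespace
    .trim()
    .slice(0, maxChars); // keep tool output small for the model's context
}
```

The `maxChars` cap matters for the same reason as the search limit: the extracted text is injected back into the conversation, and an unbounded page dump can blow past the context window.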

Tool 3: Execute Calculations

// lib/tools/calculate.ts
import { tool } from 'ai';
import { z } from 'zod';

export const calculate = tool({
  description: 'Perform mathematical calculations. Use this for any math.',
  inputSchema: z.object({
    expression: z.string().describe('The mathematical expression to evaluate'),
  }),
  execute: async ({ expression }) => {
    try {
      // Avoids direct eval, but this still executes arbitrary JS;
      // use a dedicated math parser in production
      const result = Function(`"use strict"; return (${expression})`)();
      return { expression, result };
    } catch (error) {
      return { expression, error: 'Invalid expression' };
    }
  },
});
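The Function constructor avoids eval syntactically, but it still executes whatever string it is given. A minimal hardening step, shown here as an illustrative sketch rather than SDK functionality, is to whitelist arithmetic characters before evaluating:

```typescript
// Hypothetical hardening for the calculate tool: reject any expression
// containing characters outside a small arithmetic whitelist. This
// blocks strings like `process.exit()` while allowing `(1 + 2) * 3`.
export function safeCalculate(expression: string): number {
  if (!/^[\d\s+\-*\/().%eE]+$/.test(expression)) {
    throw new Error('Invalid expression');
  }
  // Still Function under the hood; a real app should use a proper
  // math parser instead of string evaluation.
  return Function(`"use strict"; return (${expression})`)() as number;
}
```

A whitelist is a stopgap, not a guarantee; for anything user-facing, a dedicated expression parser is the right tool.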

Tool 4: Create Tasks/Reminders

// lib/tools/tasks.ts
import { tool } from 'ai';
import { z } from 'zod';

// In-memory store (use a database in production)
const tasks: { id: string; task: string; dueDate?: string }[] = [];

export const createTask = tool({
  description: 'Create a new task or reminder for the user',
  inputSchema: z.object({
    task: z.string().describe('The task description'),
    dueDate: z.string().optional().describe('Optional due date in ISO format'),
  }),
  execute: async ({ task, dueDate }) => {
    const id = crypto.randomUUID();
    tasks.push({ id, task, dueDate });
    return { success: true, id, task, dueDate };
  },
});

export const listTasks = tool({
  description: 'List all current tasks',
  inputSchema: z.object({}),
  execute: async () => {
    return { tasks };
  },
});

The Agent Route Handler

Now wire these tools together:

// app/api/chat/route.ts
import { streamText, UIMessage, convertToModelMessages, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { searchWeb } from '@/lib/tools/search';
import { readUrl } from '@/lib/tools/read-url';
import { calculate } from '@/lib/tools/calculate';
import { createTask, listTasks } from '@/lib/tools/tasks';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    system: `You are a capable AI assistant that helps users accomplish tasks.
    
You have access to tools for:
- Searching the web for current information
- Reading content from URLs
- Performing calculations
- Creating and managing tasks

When given a task:
1. Break it down into steps
2. Use the appropriate tools
3. Synthesize the results into a clear response

Always explain what you're doing and why.`,
    messages: convertToModelMessages(messages),
    stopWhen: stepCountIs(10), // allow up to 10 agent steps
    tools: {
      searchWeb,
      readUrl,
      calculate,
      createTask,
      listTasks,
    },
  });

  return result.toUIMessageStreamResponse();
}

The stopWhen: stepCountIs(10) setting is crucial. It tells the SDK to keep looping, feeding each tool result back to the model, until either:

  • The model generates a plain text response (task complete)
  • 10 steps have run (safety limit)

Without it, the default is a single step: the model would make its first tool call and stop, never synthesizing the results.
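Conceptually, stepCountIs(n) just builds a stop predicate over the steps completed so far. The sketch below models the idea; the SDK's real types and internals differ.

```typescript
// Conceptual model of stopWhen: a predicate over completed steps.
// Not the SDK's actual types—just the shape of the idea.
type Step = { toolCalls: number; text?: string };
type StopCondition = (steps: Step[]) => boolean;

export const stepCountIs =
  (n: number): StopCondition =>
  (steps) => steps.length >= n;

// The loop halts when the condition fires OR the model emits plain text.
export function shouldStop(steps: Step[], stopWhen: StopCondition): boolean {
  const last = steps[steps.length - 1];
  return stopWhen(steps) || Boolean(last?.text);
}
```

Framing the limit as a predicate is what makes it composable: a step budget, a token budget, or a custom "stop when this tool was called" condition all share the same shape.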

The Frontend

// app/page.tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage, status } = useChat();
  const isLoading = status === 'submitted' || status === 'streaming';

  return (
    <div className="flex flex-col w-full max-w-2xl mx-auto p-4 min-h-screen">
      <div className="flex-1 space-y-4 pb-32">
        {messages.map((message) => (
          <div key={message.id} className="space-y-2">
            <div className="font-medium text-sm text-zinc-500">
              {message.role === 'user' ? 'You' : 'Agent'}
            </div>
            {message.parts.map((part, i) => {
              switch (part.type) {
                case 'text':
                  return (
                    <div key={i} className="prose prose-zinc dark:prose-invert">
                      {part.text}
                    </div>
                  );
                default:
                  // Tool calls - show them for transparency
                  if (part.type.startsWith('tool-')) {
                    return (
                      <div key={i} className="text-xs bg-zinc-100 dark:bg-zinc-800 p-2 rounded">
                        <span className="text-zinc-500">Tool: </span>
                        {part.type.replace('tool-', '')}
                      </div>
                    );
                  }
                  return null;
              }
            })}
          </div>
        ))}
        {isLoading && (
          <div className="text-zinc-500 animate-pulse">Thinking...</div>
        )}
      </div>

      <form
        onSubmit={(e) => {
          e.preventDefault();
          if (!input.trim()) return;
          sendMessage({ text: input });
          setInput('');
        }}
        className="fixed bottom-0 left-0 right-0 p-4 bg-white dark:bg-zinc-900 border-t"
      >
        <input
          className="w-full max-w-2xl mx-auto block p-3 border rounded-lg"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Ask me anything..."
          disabled={isLoading}
        />
      </form>
    </div>
  );
}

Using the Agent Class (Cleaner Approach)

For more complex applications, use the ToolLoopAgent class:

// lib/agent.ts
import { ToolLoopAgent } from 'ai';
import { openai } from '@ai-sdk/openai';
import { searchWeb } from './tools/search';
import { readUrl } from './tools/read-url';
import { calculate } from './tools/calculate';
import { createTask, listTasks } from './tools/tasks';

export const researchAgent = new ToolLoopAgent({
  model: openai('gpt-4o'),
  instructions: `You are a research assistant that helps users find and analyze information.
  
When researching a topic:
1. Search for relevant sources
2. Read the most promising URLs
3. Synthesize the information
4. Provide citations`,
  tools: {
    searchWeb,
    readUrl,
    calculate,
    createTask,
    listTasks,
  },
});

Then in your route:

// app/api/chat/route.ts
import { createAgentUIStreamResponse } from 'ai';
import { researchAgent } from '@/lib/agent';

export async function POST(request: Request) {
  const { messages } = await request.json();
  return createAgentUIStreamResponse({
    agent: researchAgent,
    messages,
  });
}

This is cleaner because:

  • Agent configuration is centralized
  • Can be reused across multiple routes
  • Easier to test in isolation

What Makes This Different

The example chatbots in most tutorials use fake tools—random weather, mock APIs, placeholder data. They demonstrate syntax but not capability.

This agent can:

  • Research anything: "What's the latest on React Server Components?" → searches, reads sources, synthesizes
  • Do real math: "Calculate the compound interest on $10,000 at 7% for 20 years" → executes calculation
  • Chain actions: "Find the top 3 AI papers this week, summarize them, and create a task to read them" → search → read multiple URLs → create tasks
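The compound-interest request, for example, reduces to a single expression the model can hand to the calculate tool:

```typescript
// A = P * (1 + r)^n with P = $10,000, r = 7%, n = 20 years.
// This mirrors what the calculate tool would evaluate for that request.
const principal = 10_000;
const rate = 0.07;
const years = 20;

export const finalAmount = principal * (1 + rate) ** years;
console.log(finalAmount.toFixed(2)); // roughly 38696.84
```

The agent's job isn't the arithmetic itself; it's recognizing that the question maps to this formula and routing it to the right tool.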

The difference isn't in the code structure—it's in connecting to real capabilities.

Key Takeaways

  1. stopWhen is essential - Without it, your agent stops after one tool call
  2. Real tools make real agents - Mock data teaches syntax, not capability
  3. The Agent class centralizes behavior - Use it for production apps
  4. Tool descriptions matter - The LLM uses them to decide what to call

Build something that does things, not something that pretends to.
