Building AI-powered applications has never been more accessible. In this tutorial, you'll learn how to create a full-featured AI chatbot using Nuxt, Nuxt UI, and the Vercel AI SDK. We'll build everything from scratch, step by step, so you understand how each piece works together.
By the end of this tutorial, you'll have a fully functional AI chatbot with real-time streaming responses, markdown rendering, automatic chat titles, and a model selector.
Before we start, make sure you have Node.js installed along with a package manager (pnpm, yarn, npm, or bun), basic familiarity with Vue and Nuxt, and a Vercel AI Gateway API key.
Let's start by creating a new Nuxt project using the Nuxt UI starter template. This gives us Nuxt UI pre-configured with Tailwind CSS, color mode support, and the UApp wrapper already in place.
npx nuxi@latest init -t ui nuxt-ai-chat
cd nuxt-ai-chat
Install the AI-specific dependencies with the package manager of your choice (run just one of the following):
pnpm add @nuxtjs/mdc ai @ai-sdk/vue @ai-sdk/gateway zod
yarn add @nuxtjs/mdc ai @ai-sdk/vue @ai-sdk/gateway zod
npm install @nuxtjs/mdc ai @ai-sdk/vue @ai-sdk/gateway zod
bun add @nuxtjs/mdc ai @ai-sdk/vue @ai-sdk/gateway zod
Update your nuxt.config.ts to add the MDC module for markdown rendering:
export default defineNuxtConfig({
  modules: [
    '@nuxt/ui',
    '@nuxtjs/mdc'
  ],
  css: ['~/assets/css/main.css'],
  mdc: {
    headings: {
      anchorLinks: false // Disable anchor links in AI responses
    }
  },
  compatibilityDate: '2025-01-01'
})
Create a .env file with your AI Gateway API key:
AI_GATEWAY_API_KEY=your-api-key-here
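The server code later in this tutorial reads this key via process.env. If you prefer Nuxt's runtime config, you could expose it there instead (a sketch; aiGatewayApiKey is a name chosen here, and Nuxt would also populate it from a NUXT_AI_GATEWAY_API_KEY environment variable):

```typescript
// nuxt.config.ts (sketch): expose the key to server code via runtime config
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-only; read in handlers with useRuntimeConfig(event).aiGatewayApiKey
    aiGatewayApiKey: process.env.AI_GATEWAY_API_KEY
  }
})
```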
Nuxt UI provides purpose-built components for AI chat interfaces: UChatPrompt for the input area and UChatMessages for displaying the conversation.
Let's create the home page where users can start a new conversation. The UChatPrompt component provides a beautiful textarea with auto-resize, keyboard shortcuts, and a submit button:
<script setup lang="ts">
const input = ref('')
const loading = ref(false)

async function createChat() {
  if (!input.value.trim()) return

  loading.value = true
  try {
    // Create a new chat on the server
    const chat = await $fetch('/api/chats', {
      method: 'POST',
      body: { message: input.value }
    })
    // Navigate to the chat page
    await navigateTo(`/chat/${chat.id}`)
  } finally {
    // Reset so the prompt recovers if the request fails
    loading.value = false
  }
}
</script>
<template>
  <UContainer class="min-h-dvh flex flex-col justify-center gap-6 py-8">
    <h1 class="text-3xl sm:text-4xl text-highlighted font-bold">
      How can I help you today?
    </h1>
    <UChatPrompt
      v-model="input"
      :status="loading ? 'streaming' : 'ready'"
      variant="subtle"
      placeholder="Ask me anything..."
      @submit="createChat"
    >
      <UChatPromptSubmit color="neutral" />
    </UChatPrompt>
  </UContainer>
</template>
The UChatPrompt component automatically handles:
- Submitting when the user presses Enter
- Disabling submission while status is set to streaming

Now let's build the chat page where the actual conversation happens. This is where we'll integrate the AI SDK's Chat class for real-time streaming.
<script setup lang="ts">
import { Chat } from '@ai-sdk/vue'
import { DefaultChatTransport } from 'ai'
import { getTextFromMessage } from '@nuxt/ui/utils/ai'

const route = useRoute()
const toast = useToast()

// Fetch existing chat data
const { data: chatData } = await useFetch(`/api/chats/${route.params.id}`)

if (!chatData.value) {
  throw createError({ statusCode: 404, statusMessage: 'Chat not found', fatal: true })
}

const input = ref('')

// Initialize the Chat class from AI SDK
const chat = new Chat({
  id: chatData.value.id,
  messages: chatData.value.messages,
  transport: new DefaultChatTransport({
    api: `/api/chats/${chatData.value.id}`
  }),
  onData(dataPart) {
    // Refresh the chat list when a title is generated
    if (dataPart.type === 'data-chat-title') {
      refreshNuxtData('chats')
    }
  },
  onError(error) {
    toast.add({
      title: 'Error',
      description: error.message,
      color: 'error'
    })
  }
})

function handleSubmit(e: Event) {
  e.preventDefault()
  if (input.value.trim()) {
    chat.sendMessage({ text: input.value })
    input.value = ''
  }
}

// Auto-generate response for first message
onMounted(() => {
  if (chatData.value?.messages.length === 1) {
    chat.regenerate()
  }
})
</script>
<template>
  <UContainer class="min-h-dvh flex flex-col py-4 sm:py-6">
    <UChatMessages
      :messages="chat.messages"
      :status="chat.status"
      should-auto-scroll
      class="flex-1"
    >
      <template #content="{ message }">
        <MDC
          :value="getTextFromMessage(message)"
          :cache-key="message.id"
          class="*:first:mt-0 *:last:mb-0"
        />
      </template>
    </UChatMessages>
    <UChatPrompt
      v-model="input"
      :error="chat.error"
      variant="subtle"
      class="sticky bottom-0"
      @submit="handleSubmit"
    >
      <UChatPromptSubmit
        :status="chat.status"
        color="neutral"
        @stop="chat.stop()"
        @reload="chat.regenerate()"
      />
    </UChatPrompt>
  </UContainer>
</template>
Let's break down the key parts:
The Chat Class
The Chat class from @ai-sdk/vue manages the entire conversation state. It handles:
- chat.messages: the reactive list of messages in the conversation
- chat.status: the current state (ready, submitted, streaming, error)
- chat.sendMessage(): sends a new user message
- chat.stop(): aborts the in-flight response
- chat.regenerate(): re-runs the last assistant response

The onData callback receives custom data events from the server (like data-chat-title), allowing you to react to server-side events during streaming.
UChatMessages Component
The UChatMessages component is purpose-built for AI chatbots: it renders user and assistant messages with distinct styling, exposes a #content slot for custom rendering, and auto-scrolls as new content streams in (via the should-auto-scroll prop).
Rendering Markdown with MDC
AI models often respond with markdown formatting (code blocks, lists, bold text, etc.). We use the MDC component from @nuxtjs/mdc to render this content beautifully. The getTextFromMessage utility from @nuxt/ui/utils/ai extracts the text content from AI SDK v5 message parts.
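To build intuition for what a text-extraction helper does with AI SDK v5 messages (the real implementation lives in @nuxt/ui/utils/ai), here is an illustrative sketch: each message stores an array of typed parts, and the plain text is the concatenation of its text parts.

```typescript
// Simplified stand-ins for the AI SDK v5 message shape (illustrative only)
interface MessagePart {
  type: string
  text?: string
}

interface UIMessageLike {
  id: string
  role: string
  parts: MessagePart[]
}

// Keep only text parts and join their contents into one string
function extractText(message: UIMessageLike): string {
  return message.parts
    .filter(part => part.type === 'text' && typeof part.text === 'string')
    .map(part => part.text)
    .join('')
}

const message: UIMessageLike = {
  id: 'msg_1',
  role: 'assistant',
  parts: [
    { type: 'step-start' },            // non-text parts are skipped
    { type: 'text', text: 'Hello, ' },
    { type: 'text', text: 'world!' }
  ]
}

console.log(extractText(message)) // "Hello, world!"
```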
UChatPromptSubmit Component
This component adapts based on the chat status: it shows a submit button when the chat is ready, switches to a stop button while a response is submitted or streaming (emitting @stop), and offers a reload action after an error (emitting @reload).
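As a mental model (not the component's actual source), the status-to-action mapping looks roughly like this:

```typescript
// Sketch of how a chat status maps to the action the submit button offers.
// The real UChatPromptSubmit component implements this internally.
type ChatStatus = 'ready' | 'submitted' | 'streaming' | 'error'

function promptAction(status: ChatStatus): 'submit' | 'stop' | 'reload' {
  switch (status) {
    case 'submitted':
    case 'streaming':
      return 'stop'   // a response is in flight: offer to abort
    case 'error':
      return 'reload' // something failed: offer to regenerate
    default:
      return 'submit' // ready: offer to send the prompt
  }
}

console.log(promptAction('streaming')) // "stop"
```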
Now for the exciting part: integrating AI on the server. We'll create API endpoints using Nitro.
First, let's create the endpoint that initializes a new chat and saves the first message to the database:
export default defineEventHandler(async (event) => {
  const { message } = await readBody(event)
  const db = useDrizzle()

  // Create a new chat
  const [chat] = await db.insert(tables.chats).values({}).returning()

  // Save the first user message
  await db.insert(tables.messages).values({
    chatId: chat.id,
    role: 'user',
    parts: [{ type: 'text', text: message }]
  })

  return chat
})
Now let's create the endpoint that handles the AI conversation. This is where the magic happens:
import { createGateway } from '@ai-sdk/gateway'
import {
  convertToModelMessages,
  createUIMessageStream,
  createUIMessageStreamResponse,
  generateText,
  streamText
} from 'ai'
import type { UIMessage } from 'ai'
import { eq } from 'drizzle-orm' // may also be re-exported by your Drizzle setup
import { z } from 'zod'

export default defineEventHandler(async (event) => {
  const { id } = await getValidatedRouterParams(event, z.object({
    id: z.string()
  }).parse)

  const { model, messages } = await readValidatedBody(event, z.object({
    model: z.string().default('openai/gpt-4o-mini'),
    messages: z.array(z.custom<UIMessage>())
  }).parse)

  const db = useDrizzle()

  // Fetch the chat from the database
  const chat = await db.query.chats.findFirst({
    where: (chat, { eq }) => eq(chat.id, id)
  })

  if (!chat) {
    throw createError({ statusCode: 404, statusMessage: 'Chat not found' })
  }

  // Initialize the AI Gateway
  const gateway = createGateway({
    apiKey: process.env.AI_GATEWAY_API_KEY
  })

  // Generate a title for the chat if it doesn't have one
  if (!chat.title) {
    const { text: title } = await generateText({
      model: gateway('openai/gpt-4o-mini'),
      system: `Generate a short title (max 30 characters) based on the user's message. No quotes or punctuation.`,
      prompt: JSON.stringify(messages[0])
    })
    await db.update(tables.chats).set({ title }).where(eq(tables.chats.id, id))
  }

  // Save the user message if it's a follow-up
  const lastMessage = messages[messages.length - 1]
  if (lastMessage?.role === 'user' && messages.length > 1) {
    await db.insert(tables.messages).values({
      chatId: id,
      role: 'user',
      parts: lastMessage.parts
    })
  }

  // Create the streaming response
  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      const result = streamText({
        model: gateway(model),
        system: `You are a helpful AI assistant. Be concise and friendly.`,
        messages: convertToModelMessages(messages)
      })

      // Notify the client that a title was generated
      if (!chat.title) {
        writer.write({
          type: 'data-chat-title',
          data: { message: 'Title generated' },
          transient: true
        })
      }

      writer.merge(result.toUIMessageStream())
    },
    onFinish: async ({ messages }) => {
      // Save the assistant's response to the database
      await db.insert(tables.messages).values(messages.map(message => ({
        chatId: chat.id,
        role: message.role as 'user' | 'assistant',
        parts: message.parts
      })))
    }
  })

  return createUIMessageStreamResponse({ stream })
})
Let's understand what's happening:
AI Gateway
The createGateway function creates a unified interface to access any AI model. You specify the model using the format provider/model-name:
- openai/gpt-4o-mini
- anthropic/claude-3-5-sonnet-latest
- google/gemini-2.0-flash

Automatic Title Generation
When a chat doesn't have a title yet, we use generateText to create one based on the first message. This provides a better UX by showing meaningful titles in the chat history instead of "Untitled".
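Models don't always respect instructions like "max 30 characters", so it can be worth clamping the result defensively before saving it. A hypothetical helper (not part of the tutorial's code):

```typescript
// Strip surrounding quotes the model may add despite instructions,
// then hard-limit the length with an ellipsis.
function clampTitle(raw: string, max = 30): string {
  const title = raw.trim().replace(/^["']+|["']+$/g, '').trim()
  return title.length <= max
    ? title
    : title.slice(0, max - 1).trimEnd() + '…'
}

console.log(clampTitle('"Weather in Paris"')) // "Weather in Paris"
```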
Streaming with streamText
The streamText function generates a streaming response from the AI model. Key options include:
- model: The AI model to use
- system: Instructions that guide the AI's behavior
- messages: The conversation history

UIMessageStream
The createUIMessageStream and createUIMessageStreamResponse functions create a stream that the AI SDK client can consume. The response streams chunks as they're generated, creating the real-time typing effect.
The writer.write() method allows sending custom data events to the client (like data-chat-title), while onFinish is called when streaming completes, perfect for persisting the assistant's response.
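Under the hood, the client consumes a ReadableStream and updates the UI chunk by chunk; that incremental consumption is the entire "typing" effect. A simplified sketch (the AI SDK's transport does this for you, plus protocol parsing):

```typescript
// Read a byte stream incrementally, handing each decoded chunk to a callback
// as soon as it arrives instead of waiting for the full body.
async function readStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<void> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    onChunk(decoder.decode(value, { stream: true }))
  }
}

// Demo with a synthetic stream standing in for a fetch response body
const demo = new ReadableStream<Uint8Array>({
  start(controller) {
    const encoder = new TextEncoder()
    for (const chunk of ['Hel', 'lo ', 'world']) {
      controller.enqueue(encoder.encode(chunk))
    }
    controller.close()
  }
})

let received = ''
readStream(demo, text => { received += text }).then(() => {
  console.log(received) // "Hello world"
})
```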
Add an endpoint to fetch existing chat data from your database:
export default defineEventHandler(async (event) => {
  const { id } = getRouterParams(event)
  const db = useDrizzle()

  const chat = await db.query.chats.findFirst({
    where: (chat, { eq }) => eq(chat.id, id as string),
    with: {
      messages: {
        orderBy: (message, { asc }) => asc(message.createdAt)
      }
    }
  })

  if (!chat) {
    throw createError({ statusCode: 404, statusMessage: 'Chat not found' })
  }

  return chat
})
These endpoints assume your database layer provides a useDrizzle helper along with chats and messages schemas. Check the AI Chat template for a complete database setup with PostgreSQL.

One of the benefits of using AI Gateway is the ability to switch between models seamlessly. Let's add a model selector to our chat.
export function useModels() {
  const models = [
    { value: 'openai/gpt-4o-mini', label: 'GPT-4o Mini', icon: 'i-simple-icons-openai' },
    { value: 'anthropic/claude-3-5-haiku-latest', label: 'Claude 3.5 Haiku', icon: 'i-simple-icons-anthropic' },
    { value: 'google/gemini-2.0-flash', label: 'Gemini 2.0 Flash', icon: 'i-simple-icons-google' }
  ]

  const model = useCookie<string>('ai-model', {
    default: () => 'openai/gpt-4o-mini'
  })

  return {
    models,
    model
  }
}
<script setup lang="ts">
const model = defineModel<string>({ required: true })
const { models } = useModels()

const selectedModel = computed(() =>
  models.find(m => m.value === model.value)
)
</script>

<template>
  <USelectMenu
    v-model="model"
    :items="models"
    :icon="selectedModel?.icon"
    variant="ghost"
    value-key="value"
  />
</template>
Update the chat page to include the model selector and pass the selected model to the server:
<script setup lang="ts">
import { Chat } from '@ai-sdk/vue'
import { DefaultChatTransport } from 'ai'
import { getTextFromMessage } from '@nuxt/ui/utils/ai'

const route = useRoute()
const { model } = useModels()

const { data: chatData } = await useFetch(`/api/chats/${route.params.id}`)

const input = ref('')

const chat = new Chat({
  id: chatData.value.id,
  messages: chatData.value.messages,
  transport: new DefaultChatTransport({
    api: `/api/chats/${chatData.value.id}`,
    body: {
      model: model.value // Pass the selected model
    }
  })
})

// ... rest of the component
</script>

<template>
  <UContainer class="min-h-dvh flex flex-col py-4 sm:py-6">
    <UChatMessages :messages="chat.messages" :status="chat.status" should-auto-scroll class="flex-1">
      <template #content="{ message }">
        <MDC :value="getTextFromMessage(message)" :cache-key="message.id" class="*:first:mt-0 *:last:mb-0" />
      </template>
    </UChatMessages>
    <UChatPrompt v-model="input" :error="chat.error" variant="subtle" @submit="handleSubmit">
      <template #footer>
        <ModelSelect v-model="model" />
      </template>
      <UChatPromptSubmit :status="chat.status" color="neutral" @stop="chat.stop()" @reload="chat.regenerate()" />
    </UChatPrompt>
  </UContainer>
</template>
You now have a working AI chatbot! For production applications, you'll want to add:
Chat History Persistence
Store conversations in a database using Drizzle ORM with PostgreSQL or SQLite.
User Authentication
Add authentication with nuxt-auth-utils to let users access their chat history across devices.
AI Tools
Extend your chatbot with tools that can fetch real-time data, generate charts, or interact with external APIs:
import { tool } from 'ai'
import { z } from 'zod'

const weatherTool = tool({
  description: 'Get the current weather for a location',
  inputSchema: z.object({ // named `parameters` in AI SDK v4
    location: z.string().describe('The city name')
  }),
  execute: async ({ location }) => {
    // Fetch weather data from an API
    return { location, temperature: 22, condition: 'Sunny' }
  }
})
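Tools are registered on the server by passing them to streamText (e.g. tools: { weather: weatherTool }); the SDK then runs the call-execute-respond loop for you. A stripped-down sketch of that dispatch step, using a hypothetical getWeather tool:

```typescript
// Minimal model of tool dispatch: look up the tool the model asked for,
// run its execute function with the model-supplied arguments, and return
// the result that gets fed back into the conversation.
interface ToolDef {
  description: string
  execute: (args: Record<string, unknown>) => Promise<unknown>
}

const tools: Record<string, ToolDef> = {
  getWeather: {
    description: 'Get the current weather for a location',
    execute: async args => ({
      location: args.location,
      temperature: 22,
      condition: 'Sunny'
    })
  }
}

async function handleToolCall(name: string, args: Record<string, unknown>) {
  const tool = tools[name]
  if (!tool) throw new Error(`Unknown tool: ${name}`)
  return tool.execute(args)
}

handleToolCall('getWeather', { location: 'Paris' }).then(result => {
  console.log(result) // { location: 'Paris', temperature: 22, condition: 'Sunny' }
})
```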
You can also scaffold all of this at once from the complete template with npx nuxi@latest init -t ui/chat my-chat-app.

Deploy your chatbot to Vercel with zero configuration:
npx vercel
Make sure to add your environment variables in the Vercel dashboard:
- AI_GATEWAY_API_KEY: Your Vercel AI Gateway API key

You've built a complete AI chatbot with real-time streaming responses, markdown rendering, automatic title generation, model switching, and server-side persistence.
The combination of Nuxt's full-stack capabilities, Nuxt UI's purpose-built chat components, and the AI SDK's streaming infrastructure makes building AI applications straightforward and enjoyable.
We're excited to see what you'll build!