Node.js HTTP Server
You can use the AI SDK in a Node.js HTTP server to generate text and stream it to the client.
Examples
The examples start a simple HTTP server that listens on port 8080. You can test it with `curl`, for example:

```sh
curl -X POST http://localhost:8080
```
The examples use the OpenAI `gpt-4o` model. Ensure that the OpenAI API key is set in the `OPENAI_API_KEY` environment variable.
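If you want the server to fail fast when the key is missing, a minimal startup check (an optional addition, not part of the examples below) could look like this:

```ts
// Optional sanity check: abort at startup if the OpenAI API key is not set,
// instead of failing with an authentication error on the first request.
if (!process.env.OPENAI_API_KEY) {
  throw new Error('The OPENAI_API_KEY environment variable is not set');
}
```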
Full example: github.com/vercel/ai/examples/node-http-server
UI Message Stream
You can use the `pipeUIMessageStreamToResponse` method to pipe the stream data to the server response.
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createServer } from 'http';

createServer(async (req, res) => {
  const result = streamText({
    model: openai('gpt-4o'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  result.pipeUIMessageStreamToResponse(res);
}).listen(8080);
```
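The response is written to the client incrementally. As a rough sketch (assuming the server above is running locally; this consumer is not part of the AI SDK examples), you can read the raw stream chunk by chunk from Node:

```ts
// Minimal consumer sketch: POST to the local server and print the raw
// streamed response body as each chunk arrives.
const response = await fetch('http://localhost:8080', { method: 'POST' });
const reader = response.body!.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value, { stream: true }));
}
```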
Sending Custom Data
You can use `createUIMessageStream` and `pipeUIMessageStreamToResponse` to send custom data to the client.
```ts
import { openai } from '@ai-sdk/openai';
import {
  createUIMessageStream,
  pipeUIMessageStreamToResponse,
  streamText,
} from 'ai';
import { createServer } from 'http';

createServer(async (req, res) => {
  switch (req.url) {
    case '/stream-data': {
      const stream = createUIMessageStream({
        execute: ({ writer }) => {
          // write some custom data
          writer.write({ type: 'start' });

          writer.write({
            type: 'data-custom',
            data: {
              custom: 'Hello, world!',
            },
          });

          const result = streamText({
            model: openai('gpt-4o'),
            prompt: 'Invent a new holiday and describe its traditions.',
          });

          writer.merge(
            result.toUIMessageStream({
              sendStart: false,
              onError: error => {
                // Error messages are masked by default for security reasons.
                // If you want to expose the error message to the client,
                // you can do so here:
                return error instanceof Error ? error.message : String(error);
              },
            }),
          );
        },
      });

      pipeUIMessageStreamToResponse({ stream, response: res });

      break;
    }
  }
}).listen(8080);
```
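Note that this example only responds on the `/stream-data` route (other URLs fall through the `switch` without ending the response), so include the path when testing:

```sh
curl -X POST http://localhost:8080/stream-data
```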
Text Stream
You can send a text stream to the client using `pipeTextStreamToResponse`.
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createServer } from 'http';

createServer(async (req, res) => {
  const result = streamText({
    model: openai('gpt-4o'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  result.pipeTextStreamToResponse(res);
}).listen(8080);
```
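With the server above running, you can watch the text arrive incrementally by disabling curl's output buffering with `-N`:

```sh
curl -X POST http://localhost:8080 -N
```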
Troubleshooting
- Streaming not working when proxied
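A common cause of this issue is a reverse proxy (for example nginx) that buffers or compresses responses, holding back the streamed chunks until the response is complete. The fix generally belongs in the proxy configuration, but as a rough sketch (assuming an nginx-style proxy; the header names are proxy-specific and not part of the AI SDK), you can also hint from the server that the response should not be buffered or cached:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createServer } from 'http';

createServer(async (req, res) => {
  // Assumption: an nginx-style proxy sits in front of this server.
  // These headers ask intermediaries not to buffer or cache the streamed
  // response; they do not change how the AI SDK writes the stream itself.
  res.setHeader('X-Accel-Buffering', 'no');
  res.setHeader('Cache-Control', 'no-cache');

  const result = streamText({
    model: openai('gpt-4o'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  result.pipeUIMessageStreamToResponse(res);
}).listen(8080);
```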