Documentation Index
Fetch the complete documentation index at: https://docs.browsernode.com/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Here’s how to configure the models.
Migration from Langchain
Replace your Langchain imports with import { ChatOpenAI } from "browsernode"; or import { ChatOpenAI } from "browsernode/llm"; etc. The methods should be compatible(ish).
We also made an example here to help you stay with Langchain in case your workflow requires it.
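As a sketch of what the migration usually looks like, only the import line changes while the constructor call stays the same (the Langchain package name and options below are illustrative; check your own setup):

```typescript
// Before (Langchain):
// import { ChatOpenAI } from "@langchain/openai";

// After (browsernode's compatible-ish replacement):
import { ChatOpenAI } from "browsernode/llm";

const llm = new ChatOpenAI({
  model: "gpt-4.1",
  apiKey: process.env.OPENAI_API_KEY,
});
```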
Model Recommendations
We recommend GPT-4.1 for the best performance (best accuracy, ~$0.01 per step). The best price-to-performance ratio comes from gemini-2.5-flash (currently also the most popular model, at ~$0.001 per step).
Supported Models
Our library natively supports the following models:
- OpenAI
- Anthropic
- Azure OpenAI
- Gemini
We also support any other model that can be called via an OpenAI-compatible API (deepseek, novita, x, qwen). Please open a PR if you want to add a model.
Where possible, we have switched to native structured output.
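Providers with OpenAI-compatible APIs can typically be reached by pointing the OpenAI client at a different base URL. A minimal sketch, assuming ChatOpenAI accepts a baseURL option and using DeepSeek as the example provider (option name, model name, and endpoint are assumptions; verify them against the library and provider docs):

```typescript
import { Agent } from "browsernode";
import { ChatOpenAI } from "browsernode/llm";

// DeepSeek exposes an OpenAI-compatible endpoint.
// The baseURL option name here is an assumption, not confirmed API.
const llm = new ChatOpenAI({
  model: "deepseek-chat",
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: "https://api.deepseek.com/v1",
});

const agent = new Agent({
  task: task,
  llm: llm,
});
```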
OpenAI
OpenAI’s GPT-4.1 models are recommended for best performance.
import { Agent } from "browsernode";
import { ChatOpenAI } from "browsernode/llm";

// Initialize the model
const llm = new ChatOpenAI({
  model: "gpt-4.1",
  apiKey: process.env.OPENAI_API_KEY,
});

// Create agent with the model
const agent = new Agent({
  task: task,
  llm: llm,
});
Required environment variables:
OPENAI_API_KEY=
Anthropic
import { Agent } from "browsernode";
import { ChatAnthropic } from "browsernode/llm";

// Initialize the model
const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Create agent with the model
const agent = new Agent({
  task: task,
  llm: llm,
});
And add the variable:
ANTHROPIC_API_KEY=
Azure OpenAI
import { Agent } from "browsernode";
import { ChatAzureOpenAI } from "browsernode/llm";

// Initialize the model
const llm = new ChatAzureOpenAI({
  model: "gpt-4.1",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
});

// Create agent with the model
const agent = new Agent({
  task: task,
  llm: llm,
});
Required environment variables:
AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/
AZURE_OPENAI_API_KEY=
Gemini
[!IMPORTANT] GEMINI_API_KEY was the old environment variable name; as of 2025-05 it should be called GOOGLE_API_KEY.
import { Agent } from "browsernode";
import { ChatGoogle } from "browsernode/llm";

// Initialize the model
const llm = new ChatGoogle({
  model: "gemini-2.5-flash",
  apiKey: process.env.GOOGLE_API_KEY,
});

// Create agent with the model
const agent = new Agent({
  task: task,
  llm: llm,
});
Required environment variables:
GOOGLE_API_KEY=