Models & Registry
node-llm ships with a comprehensive, built-in registry of hundreds of models, powered by data from Parsera. This lets you discover models and their capabilities programmatically.
Inspecting a Model
You can look up any supported model to check its context window, costs, and features.
import { LLM } from "@node-llm/core";
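// find() returns the model's metadata, or a falsy value for unknown IDs (hence the guard below)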
const model = LLM.models.find("gpt-4o");
if (model) {
console.log(`Provider: ${model.provider}`);
console.log(`Context Window: ${model.context_window} tokens`);
console.log(`Input Price: $${model.pricing.text_tokens.standard.input_per_million}/1M`);
console.log(`Output Price: $${model.pricing.text_tokens.standard.output_per_million}/1M`);
}
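The same metadata works for runtime feature gating. A minimal sketch, reusing the capabilities array that the discovery examples below rely on:
const model = LLM.models.find("gpt-4o");
if (model?.capabilities.includes("vision")) {
  // Safe to send image input to this model
  console.log(`${model.id} accepts images.`);
}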
Discovery by Capability
You can filter the registry to find models that match your requirements.
Finding Vision Models
const visionModels = LLM.models.list().filter(m =>
m.capabilities.includes("vision")
);
console.log(`Found ${visionModels.length} vision-capable models.`);
visionModels.forEach(m => console.log(m.id));
Finding Tool-Use Models
const toolModels = LLM.models.list().filter(m =>
m.capabilities.includes("tools")
);
Finding Audio Models
const audioModels = LLM.models.list().filter(m =>
m.capabilities.includes("audio_input")
);
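Capability filters compose like ordinary array predicates. As a sketch, here is one way to find the cheapest model that supports both vision and tools, using the pricing fields shown earlier (entries without pricing data are sorted last, an assumption about the registry shape):
// Models lacking pricing data sort last
const inputPrice = m => m.pricing?.text_tokens?.standard?.input_per_million ?? Infinity;

const candidates = LLM.models.list()
  .filter(m => m.capabilities.includes("vision") && m.capabilities.includes("tools"))
  .sort((a, b) => inputPrice(a) - inputPrice(b));

console.log(`Cheapest vision+tools model: ${candidates[0]?.id}`);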
Supported Providers
The registry includes models from the following providers (see the filtering sketch after the list):
- OpenAI (GPT-4o, GPT-3.5, DALL-E)
- Anthropic (Claude 3.5 Sonnet, Haiku, Opus)
- Google Gemini (Gemini 1.5 Pro, Flash)
- Vertex AI (via Gemini)
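Each entry carries the provider field shown in the inspection example, so you can also slice the registry by provider. A minimal sketch, assuming provider IDs are lowercase strings such as "anthropic" (matching the "openai" value used later in this guide):
const anthropicModels = LLM.models.list().filter(m => m.provider === "anthropic");
anthropicModels.forEach(m => console.log(m.id));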
Custom Models & Endpoints
Sometimes you need models that are not in the registry, such as Azure OpenAI deployments, local models (Ollama/LM Studio), or brand-new releases.
Using assumeModelExists
This flag tells node-llm to bypass the registry check.
Important: You MUST specify the provider when using this flag, as the system cannot infer it from the ID.
const chat = LLM.chat("my-custom-deployment", {
provider: "openai", // Mandatory
assumeModelExists: true
});
// Note: Capability checks are bypassed (assumed true) for custom models.
await chat.ask("Hello");
Custom Endpoints (e.g. Azure/Local)
To point to a custom URL (like an Azure endpoint or local proxy), configure the base URL globally.
LLM.configure({
openaiApiBase: "https://my-azure-resource.openai.azure.com",
openaiApiKey: process.env.AZURE_API_KEY
});
// All OpenAI requests now go to the custom base URL
const chat = LLM.chat("gpt-4", { provider: "openai" });