n8n Basic LLM Chain Node

Master the n8n Basic LLM Chain node for AI-powered automation. Learn prompt configuration, output parsers, model selection, error handling, and when to use chains vs agents.

Chains are the reliable workhorses of AI automation. While agents get all the attention with their ability to reason and use tools, chains quietly power the vast majority of production AI workflows. They do one thing exceptionally well: take a prompt, send it to an LLM, and return a predictable response.

The Basic LLM Chain node is your gateway to AI in n8n without the complexity of agents. Need to analyze sentiment? Summarize documents? Extract structured data from messy text? Classify incoming messages? The Basic LLM Chain handles these tasks with straightforward configuration and reliable output.

The Simplicity Advantage

When you see tutorials showing complex AI agents with multiple tools and memory systems, it’s tempting to build everything that way. But that complexity comes at a cost: unpredictable behavior, higher token usage, and harder debugging.

The Basic LLM Chain offers a simpler contract. You provide a prompt. The LLM processes it. You get a response. No tool selection decisions. No memory management. No reasoning loops that might run forever. Just clean, predictable AI processing.

This predictability matters in production. A chain that extracts customer sentiment from support tickets will behave the same way on ticket 1 and ticket 10,000. An agent doing the same task might decide to use a tool, search for context, or go down an unexpected path.

What You’ll Learn

  • When to use the Basic LLM Chain versus AI agents or direct API calls
  • How chains work under the hood and why they don’t support memory
  • Step-by-step setup with any LLM provider (OpenAI, Anthropic, Google, Ollama)
  • Prompt configuration techniques for consistent results
  • Output parsers for structured JSON responses
  • Troubleshooting common errors that frustrate users
  • Real-world examples you can adapt to your workflows
  • Production best practices from experienced builders

When to Use the Basic LLM Chain Node

The Basic LLM Chain isn’t always the right choice. Sometimes you need an agent’s flexibility. Sometimes a direct API call is simpler. This table helps you decide:

| Scenario | Use Basic LLM Chain? | Notes / Better Alternative |
| --- | --- | --- |
| Simple text transformation | Yes | Perfect for single-step processing |
| Classification into categories | Yes | Add output parser for structured results |
| Sentiment analysis | Yes | Chain + Structured Output Parser |
| Content summarization | Yes | Ideal for deterministic summarization |
| Text translation | Yes | Simple prompt with target language |
| Data extraction from text | Yes | Use output parser for JSON |
| Multi-step research task | No | AI Agent with tools |
| Task requiring external tools | No | AI Agent can call APIs and search |
| Conversation with memory | No | AI Agent with memory sub-node |
| Dynamic tool selection | No | Only agents can decide which tools to use |
| Question answering over documents | Depends | Use Q&A Chain for RAG workflows |

Rule of thumb: If your task is “take this input and transform it according to these instructions,” use a chain. If your task is “figure out what to do and then do it,” use an agent.

Chains vs Agents: The Core Difference

The AI Agent node operates in a reasoning loop. It thinks about the task, decides which tool to use, executes the tool, observes the result, and repeats until done. This makes agents powerful but unpredictable.

The Basic LLM Chain has no loop. It executes once:

Input → Prompt Template → LLM → Output

That’s it. No tool selection. No iteration. No memory of previous messages. Each execution is independent and, with temperature set to 0, effectively deterministic for the same input.

This architectural difference has practical implications:

| Aspect | Basic LLM Chain | AI Agent |
| --- | --- | --- |
| Memory support | None | Yes (multiple types) |
| Tool usage | None | Yes (required) |
| Execution flow | Single pass | Loop until complete |
| Token usage | Predictable | Variable |
| Debugging | Simple | Complex |
| Reliability | High | Moderate |
| Flexibility | Low | High |

When Chains Beat Agents

Reddit users consistently report that trying to use agents for simple tasks creates unnecessary complexity. One common pattern: users build elaborate agent workflows for sentiment analysis, only to realize a simple chain with an output parser does the job better.

Chains excel when:

  • The transformation is well-defined
  • You need consistent, reproducible output
  • Token costs matter (chains use fewer tokens)
  • Debugging simplicity is important
  • The task doesn’t require external data fetching

For a detailed comparison of architectural patterns, see agents vs chains in the official n8n documentation.

Understanding How Chains Work

Before configuring your first chain, understanding the underlying mechanics helps you avoid common pitfalls.

The LangChain Foundation

n8n’s AI nodes are built on LangChain, a popular framework for building LLM applications. The Basic LLM Chain implements LangChain’s chain concept: a sequence of operations that processes input and produces output.

In LangChain terminology, a “chain” is any sequence of calls to LLMs or other components. The Basic LLM Chain is the simplest form: input → LLM → output. More complex chains (like the Summarization Chain or Q&A Chain) add retrieval, splitting, or other processing steps.

The Processing Flow

When you trigger a Basic LLM Chain node, here’s what happens:

  1. Input Collection: The node receives data from the previous node or trigger
  2. Prompt Assembly: Your prompt template is populated with input values
  3. LLM Request: The assembled prompt is sent to your configured chat model
  4. Response Processing: The LLM’s response is captured
  5. Output Parsing (optional): If an output parser is connected, it validates and structures the response
  6. Output: The processed result flows to the next node

Each step is synchronous. There’s no decision-making, no tool selection, no iteration. The chain runs once and completes.
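
Conceptually, a single execution resembles this sketch (illustrative JavaScript, not n8n’s actual source code):

// One chain execution, simplified
async function runChain(item, template, chatModel, outputParser) {
  // 2. Prompt assembly: expressions are resolved against the input item
  const prompt = template.replace('{{ $json.text }}', item.json.text);
  // 3-4. LLM request and response capture
  const response = await chatModel.invoke(prompt);
  // 5. Optional parsing, then 6. the result flows to the next node
  return outputParser ? outputParser.parse(response) : response;
}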

Why Chains Don’t Have Memory

This surprises many users: Basic LLM Chain nodes cannot remember previous messages. Each execution starts fresh with no context from prior interactions.

This is by design. Chains are meant for stateless processing. If you need conversation memory, you have two options:

  1. Use an AI Agent with a memory sub-node
  2. Manually pass conversation history in your prompt using expressions

The second approach works but requires careful prompt engineering. You’d collect previous messages in a database or variable and include them in each chain prompt. For most conversational use cases, agents are the better choice.
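
As a rough sketch of that second option, assuming you store prior messages as an array of role/content pairs on the incoming item (a hypothetical $json.history field), the prompt could embed them like this:

You are continuing a conversation. Previous messages:

{{ $json.history.map(m => m.role + ': ' + m.content).join('\n') }}

User: {{ $json.chatInput }}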

Setting Up Your First Chain

Let’s build a working chain step by step. We’ll create a simple sentiment analyzer that classifies text as positive, negative, or neutral.

Step 1: Add the Basic LLM Chain Node

  1. Open your n8n workflow
  2. Click + to add a node
  3. Search for “Basic LLM Chain”
  4. Select Basic LLM Chain from the results

You’ll see a node with connection points for a Chat Model (required) and optional Output Parser. For the complete technical reference, see the official n8n Basic LLM Chain documentation.

Step 2: Connect a Chat Model

The chain needs an LLM to process prompts. Click the + under “Chat Model” and select your provider.

For OpenAI:

  1. Select “OpenAI Chat Model”
  2. Create or select your OpenAI credentials
  3. Choose a model appropriate for your task (see OpenAI’s model documentation for current options)
  4. Set temperature to 0 for consistent classification results

For Anthropic:

  1. Select “Anthropic Chat Model”
  2. Create or select your Anthropic credentials
  3. Choose a Claude model (see Anthropic’s model comparison)
  4. Adjust temperature based on task needs

For self-hosted models:

  1. Select “Ollama Chat Model”
  2. Enter your Ollama server URL
  3. Select your locally running model

Step 3: Configure the Prompt

Click on the Basic LLM Chain node to open its settings. The Prompt section offers two options:

Option 1: Take from previous node automatically

This expects the incoming data to have a field called chatInput. If your trigger or previous node provides this field, the chain uses it as the prompt.

Option 2: Define below

Write your prompt directly. Use expressions to include dynamic data:

Analyze the sentiment of the following text and classify it as POSITIVE, NEGATIVE, or NEUTRAL.

Text: {{ $json.text }}

Respond with only one word: POSITIVE, NEGATIVE, or NEUTRAL.

The {{ $json.text }} expression pulls the text field from the incoming data.

Step 4: Test Your Chain

  1. Add a Manual Trigger node before the chain
  2. Add a Set node to create test data:
    {
      "text": "I absolutely love this product! Best purchase I've ever made."
    }
  3. Connect: Manual Trigger → Set → Basic LLM Chain
  4. Click Test Workflow

You should see the response: POSITIVE

Common Setup Mistakes

Mistake 1: Missing chatInput field

Error: Prompt is empty or invalid
Cause: Using "Take from previous node automatically" without a chatInput field
Fix: Add an Edit Fields node to rename your input field to chatInput

Mistake 2: No chat model connected

Error: No chat model connected
Cause: Forgot to add a chat model sub-node
Fix: Click + under Chat Model and configure a provider

Mistake 3: Invalid credentials

Error: Authentication failed
Cause: API key is incorrect or expired
Fix: Check your credentials in n8n's credential manager

Prompt Configuration Deep Dive

The prompt is everything in a chain. A well-crafted prompt produces consistent results. A vague prompt produces unpredictable output.

Static vs Dynamic Prompts

Static prompts work when your task is fixed:

Summarize the following text in exactly 3 bullet points.

Text: {{ $json.content }}

Dynamic prompts adapt to different scenarios:

{{ $json.taskType === 'summarize' ? 'Summarize' : 'Expand' }} the following text.
Target length: {{ $json.targetLength }} words.
Tone: {{ $json.tone }}

Text: {{ $json.content }}

For expression syntax details, see our n8n expressions guide.

The chatInput Convention

When using “Take from previous node automatically,” the chain looks for a field named chatInput. This convention comes from the Chat Trigger node, which outputs messages in this format.

If your data uses a different field name, use the Edit Fields node to rename it:

  1. Add Edit Fields before the chain
  2. Set mode to “Manual Mapping”
  3. Add field: chatInput with value {{ $json.yourFieldName }}
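
For example, assuming your incoming field is called message, the mapping transforms the item like this:

Before: { "message": "Great service!" }
After:  { "chatInput": "Great service!" }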

Adding System Messages

System messages guide the LLM’s behavior and tone. In the chain settings, expand Options and find Messages:

AI Message: Provide an example of ideal output. The model will try to match this style.

System Message: Set context, personality, and constraints:

You are a professional content analyst. You provide objective assessments without personal opinions. You always cite specific evidence from the text to support your conclusions.

System messages are powerful for:

  • Setting consistent tone across all responses
  • Defining output format expectations
  • Establishing constraints and boundaries
  • Providing role-specific context

Prompt Best Practices

Be specific about output format:

Bad: "Analyze this text"
Good: "Analyze this text and respond with a JSON object containing 'sentiment' (positive/negative/neutral), 'confidence' (0-100), and 'keywords' (array of up to 5 strings)"

Include examples for complex tasks:

Extract product information from the text below.

Example input: "The new iPhone 15 Pro costs $999 and comes in 128GB, 256GB, and 512GB storage options."
Example output: {"product": "iPhone 15 Pro", "price": 999, "storage_options": ["128GB", "256GB", "512GB"]}

Now process this text:
{{ $json.text }}

Handle edge cases explicitly:

If the text contains no product information, respond with: {"error": "No product found"}
If multiple products are mentioned, include all of them in an array.

Connecting Chat Models

The Basic LLM Chain works with any chat model that supports the LangChain interface. Your choice affects cost, speed, and capability.

Supported Providers

| Provider | Node Name | Best For | Considerations |
| --- | --- | --- | --- |
| OpenAI | OpenAI Chat Model | General tasks, reliable API | Most popular, good docs |
| Anthropic | Anthropic Chat Model | Complex reasoning, long context | Excellent for nuanced tasks |
| Google | Google AI Chat Model | Gemini models, Google ecosystem | Strong multimodal support |
| Azure OpenAI | Azure OpenAI Chat Model | Enterprise compliance | Same models as OpenAI |
| Ollama | Ollama Chat Model | Self-hosted, privacy | Free, requires local setup |
| AWS Bedrock | AWS Bedrock Chat Model | AWS ecosystem | Enterprise features |
| Groq | Groq Chat Model | Speed-critical tasks | Very fast inference |
| Mistral | Mistral AI Chat Model | European hosting, open weights | Good balance of cost/quality |

Model Selection Factors

Task complexity:

  • Simple classification: Smaller, faster models from any provider
  • Complex reasoning: Larger, more capable models

Cost sensitivity:

  • High volume: Use smaller models, optimize prompts
  • Quality-critical: Invest in better models

Latency requirements:

  • Real-time: Groq, smaller OpenAI models
  • Batch processing: Any model works

Privacy requirements:

  • Sensitive data: Self-hosted Ollama
  • Standard use: Cloud providers

Temperature Settings

Temperature controls randomness in LLM responses:

| Temperature | Effect | Use For |
| --- | --- | --- |
| 0 | Deterministic, same input = same output | Classification, extraction |
| 0.3-0.5 | Slightly varied but consistent | Summarization, analysis |
| 0.7-0.9 | Creative, varied responses | Content generation, brainstorming |
| 1.0+ | Highly random | Experimental only |

For chains used in automation, keep temperature low (0-0.3) for predictable results.

Output Parsers for Structured Responses

When you need the chain to return structured data instead of free-form text, connect an output parser. Parsers validate LLM responses against a schema and format the output.

Why Output Parsers Matter

Without a parser, you get raw text:

The sentiment is positive because the customer mentions "love" and "best purchase."

With a Structured Output Parser, you get clean JSON:

{
  "sentiment": "positive",
  "confidence": 95,
  "keywords": ["love", "best purchase"]
}

Structured output integrates cleanly with downstream nodes. No regex extraction. No hoping the LLM formatted things correctly.

Structured Output Parser Setup

  1. In the Basic LLM Chain node, expand Options
  2. Enable Require Specific Output Format
  3. Click + next to the new Output Parser connection
  4. Select Structured Output Parser

Method 1: From Example

Provide a JSON example of your desired output:

{
  "sentiment": "positive",
  "confidence": 85,
  "keywords": ["great", "excellent"]
}

The parser infers the schema from your example.

Method 2: From JSON Schema

For precise control, define an explicit JSON Schema:

{
  "type": "object",
  "properties": {
    "sentiment": {
      "type": "string",
      "enum": ["positive", "negative", "neutral"]
    },
    "confidence": {
      "type": "number",
      "minimum": 0,
      "maximum": 100
    },
    "keywords": {
      "type": "array",
      "items": { "type": "string" },
      "maxItems": 5
    }
  },
  "required": ["sentiment", "confidence"]
}

Auto-fixing Output Parser

LLMs sometimes produce almost-correct output with minor formatting issues. The Auto-fixing Output Parser adds resilience:

  1. Add Auto-fixing Output Parser
  2. Connect it to a Structured Output Parser
  3. Connect the Auto-fixing Parser to a Chat Model

When parsing fails, this parser sends the malformed output back to the LLM with instructions to fix it. This costs extra tokens but recovers from many failures.
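
Conceptually, the retry behaves like this simplified sketch (illustrative only, not the actual LangChain implementation):

// Auto-fixing behaviour, simplified
async function parseWithAutoFix(llmOutput, structuredParser, chatModel) {
  try {
    return structuredParser.parse(llmOutput);
  } catch (err) {
    // Ask the connected chat model to repair its own output, then re-parse
    const fixed = await chatModel.invoke(
      'Fix this output so it matches the required schema.\n' +
      'Parsing error: ' + err.message + '\n' +
      'Original output: ' + llmOutput
    );
    return structuredParser.parse(fixed); // one extra LLM call
  }
}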

Use when:

  • Production systems where failures are costly
  • Complex schemas that LLMs sometimes get slightly wrong
  • You’d rather retry than fail

Item List Output Parser

For responses that should be simple arrays:

["tag1", "tag2", "tag3", "tag4"]

Use the Item List Output Parser. Configure the separator (default: comma) and connect it to the chain.

Known limitation: Some users report the Item List Output Parser returns a maximum of 3 items regardless of actual output. If you encounter this, use a Structured Output Parser with an array schema instead.
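
For example, a schema along these lines (an illustrative sketch) returns the tags as a proper array without the item cap:

{
  "type": "object",
  "properties": {
    "tags": {
      "type": "array",
      "items": { "type": "string" }
    }
  },
  "required": ["tags"]
}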

Common Parser Issues

Issue: Schema not included in prompt

Some models don’t receive the schema in the prompt automatically. The parser expects structured output, but the LLM doesn’t know the format.

Fix: Include the expected format explicitly in your prompt:

Respond with a JSON object in this exact format:
{
  "sentiment": "positive" or "negative" or "neutral",
  "confidence": number between 0-100,
  "keywords": array of strings
}

Issue: Parser fails with certain models

Not all models handle structured output equally well. If parsing fails consistently:

  1. Try a different model
  2. Add explicit format instructions to your prompt
  3. Use Auto-fixing Output Parser
  4. Simplify your schema

For JSON formatting issues in your workflow data, try our JSON fixer tool.

Processing Multiple Items

Understanding how the Basic LLM Chain handles multiple input items prevents unexpected costs and performance issues.

Default Behavior: One LLM Call Per Item

When your workflow sends multiple items to a Basic LLM Chain, the node processes each item separately. Ten input items means ten separate LLM API calls.

This differs from platforms like Make.com that process the entire flow for each item sequentially. In n8n, the chain completes all item processing before data flows to the next node.

Cost implication: Processing 100 customer reviews for sentiment analysis means 100 API calls, not one.

Execute Once: Process All Items Together

To process all items in a single LLM call:

  1. Open node Settings (gear icon)
  2. Enable Execute Once
  3. Access all items in your prompt:
{{ $input.all().map(item => item.json.text).join('\n\n') }}

This sends all text content to the LLM in one request. Useful for summarizing multiple items or finding patterns across a dataset.

Warning: Large item sets may exceed the model’s context window. Be selective about which fields you include.

Sub-node Expression Limitation

Expressions in output parser sub-nodes always resolve to the first item only, regardless of which item is being processed.

If you have five items and use {{ $json.category }} in a Structured Output Parser schema, it always returns the first item’s category value.

Workaround: Define static schemas without expressions, or process items in a loop where each iteration has only one item.

Rate Limiting for High Volume

The Basic LLM Chain has no built-in rate limiting. Processing hundreds of items simultaneously can trigger 429 (Too Many Requests) errors from your LLM provider.

Solution: Use the Loop Over Items node before your chain:

  1. Add Loop Over Items node
  2. Set batch size to 1-5 items
  3. Add a Wait node with appropriate delay between batches
  4. Connect to your Basic LLM Chain

This prevents overwhelming the API while still processing all items.

Token Usage Tracking

Token usage is not directly accessible within the chain’s output. To monitor costs:

  • Check your LLM provider’s dashboard for usage metrics
  • Use n8n’s execution logs to count chain invocations
  • Build a separate monitoring workflow using the n8n API to aggregate usage data

For production workflows with cost concerns, track executions and estimate based on average prompt/response lengths.

Error Handling and Troubleshooting

Production chains need robust error handling. Here are the most common issues and their fixes.

“Prompt is empty or invalid” Error

Cause 1: You selected “Take from previous node automatically” but the incoming data has no chatInput field.

Fix: Either:

  • Add an Edit Fields node to create the chatInput field
  • Switch to “Define below” and write your prompt with expressions

Cause 2: Your expression references a field that doesn’t exist.

Fix: Check your expression syntax. Use the expression editor to verify field paths.
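
A defensive fallback inside the expression also helps, for example:

{{ $json.text || 'No text provided' }}

This keeps the prompt valid when the field is missing, though you should still fix the upstream data.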

Template Escaping Issues

If your prompt contains literal curly braces, they conflict with expression syntax.

Problem:

Format your response as: {key: value}

The {key: value} looks like an expression and breaks.

Fix: Use expression mode and escape properly:

{{ "Format your response as: {key: value}" }}

Or restructure to avoid curly braces in static text.

Output Parser Validation Failures

Error: “Could not parse LLM output”

Causes:

  • LLM didn’t follow the expected format
  • Schema is too complex for the model
  • Temperature is too high causing varied output

Fixes:

  1. Lower temperature to 0 for deterministic output
  2. Add explicit format instructions to your prompt
  3. Use Auto-fixing Output Parser for resilience
  4. Simplify your schema
  5. Test with a more capable model

Timeout Errors

Large prompts or slow models can timeout.

Fix: In the chain’s Settings (gear icon), increase the timeout value. For persistent timeout issues, see our timeout troubleshooting guide.

Continue On Fail

For production workflows, enable Continue On Fail in the node settings. This prevents a single chain failure from crashing your entire workflow.

With Continue On Fail enabled, failures return an error object instead of stopping execution:

// In subsequent nodes, check for errors
{{ $json.error ? "Chain failed: " + $json.error : $json.response }}

Real-World Examples

Example 1: Sentiment Analysis Pipeline

Analyze customer feedback and route based on sentiment.

Chain Configuration:

  • Model: Any cost-effective model (classification doesn’t require large models)
  • Temperature: 0 (deterministic)
  • Output Parser: Structured Output Parser

Prompt:

Analyze the sentiment of this customer feedback.

Feedback: {{ $json.feedback }}

Respond with a JSON object containing:
- sentiment: "positive", "negative", or "neutral"
- confidence: number from 0-100
- summary: one sentence explaining your assessment

Schema:

{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
    "confidence": { "type": "number" },
    "summary": { "type": "string" }
  },
  "required": ["sentiment", "confidence", "summary"]
}

Workflow: Webhook → Basic LLM Chain → Switch node (route by sentiment) → appropriate handler
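
In the Switch node, route on the parsed sentiment field. The exact output property name can vary by n8n version, so verify it in the chain’s execution output first; the expression often looks something like:

{{ $json.output.sentiment }}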

Example 2: Content Summarization

Summarize long documents into consistent formats.

Chain Configuration:

  • Model: Any capable model with good instruction following
  • Temperature: 0.3 (slight variety in phrasing)
  • System Message: “You are a professional editor who creates concise, informative summaries.”

Prompt:

Summarize the following document in {{ $json.bulletCount }} bullet points.
Focus on: {{ $json.focusAreas.join(", ") }}
Maximum length: {{ $json.maxWords }} words total.

Document:
{{ $json.document }}
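
An input item for this prompt might look like this (illustrative values):

{
  "bulletCount": 5,
  "focusAreas": ["key decisions", "action items"],
  "maxWords": 120,
  "document": "Full text of the document goes here..."
}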

Example 3: Data Extraction from Unstructured Text

Extract structured data from emails, documents, or messages.

Chain Configuration:

  • Model: A capable model with strong JSON output abilities
  • Temperature: 0
  • Output Parser: Structured Output Parser with schema

Prompt:

Extract the following information from this email.
If any field is not found, use null.

Email:
{{ $json.emailBody }}

Extract:
- sender_name: full name of the sender
- company: company name if mentioned
- request_type: "inquiry", "complaint", "order", or "other"
- urgency: "high", "medium", or "low"
- action_items: array of specific requests or questions
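
A matching Structured Output Parser schema could look like this sketch (adjust the required fields to your needs):

{
  "type": "object",
  "properties": {
    "sender_name": { "type": ["string", "null"] },
    "company": { "type": ["string", "null"] },
    "request_type": { "type": "string", "enum": ["inquiry", "complaint", "order", "other"] },
    "urgency": { "type": "string", "enum": ["high", "medium", "low"] },
    "action_items": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["request_type", "urgency", "action_items"]
}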

Example 4: Translation Workflow

Translate content while preserving formatting.

Chain Configuration:

  • Model: Any capable model
  • Temperature: 0.2
  • System Message: “You are a professional translator. Preserve all formatting, including markdown, HTML tags, and special characters.”

Prompt:

Translate the following {{ $json.sourceLanguage }} text to {{ $json.targetLanguage }}.
Maintain the exact formatting and structure.

Text:
{{ $json.content }}

Example 5: Classification with Routing

Classify incoming requests and prepare routing data.

Chain Configuration:

  • Model: Any cost-effective model (classification is straightforward)
  • Temperature: 0
  • Output Parser: Structured Output Parser

Prompt:

Classify this support ticket into one of these categories:
- billing: payment, invoice, subscription issues
- technical: bugs, errors, how-to questions
- account: login, password, profile issues
- sales: pricing, features, upgrades
- other: anything that doesn't fit above

Ticket: {{ $json.ticketBody }}

Respond with:
- category: the classification
- confidence: 0-100
- suggested_priority: "low", "medium", or "high"
- key_issue: one sentence summary

Pro Tips and Best Practices

1. Keep Prompts Focused

Each chain should do one thing well. If you find yourself writing prompts with multiple tasks, split into separate chains connected by n8n nodes.

Bad: "Analyze sentiment, extract entities, translate, and summarize this text"
Good: Separate chains for each task, connected in sequence

2. Use Chains for Reliability, Agents for Flexibility

Chains are deterministic workhorses. Agents are flexible thinkers. Match the tool to the task.

If you’re building something that needs to “figure things out,” use an agent. If you’re building a consistent transformation, use a chain.

3. Test with Edge Cases

Before deploying, test your chain with:

  • Empty input
  • Very long input
  • Input in unexpected format
  • Input in different languages (if relevant)
  • Input with special characters

4. Monitor Token Usage

Chains have predictable token usage, but it still adds up. Add a Code node to log prompt length and response length. Watch for unexpectedly long responses that indicate the LLM is rambling.
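
A minimal Code node sketch for this (run once for all items; the prompt and response field names are assumptions, so adjust them to your data):

// Rough usage logging: ~4 characters per token is a common approximation
return $input.all().map(item => {
  const prompt = String(item.json.prompt ?? '');
  const response = String(item.json.text ?? '');
  return {
    json: {
      ...item.json,
      promptChars: prompt.length,
      responseChars: response.length,
      estTokens: Math.ceil((prompt.length + response.length) / 4),
    },
  };
});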

5. Separate Parsing from Processing

If you’re using an AI Agent and need structured output, don’t try to parse directly in the agent. Route the agent’s response through a separate Basic LLM Chain with an output parser:

[AI Agent] → [Edit Fields: extract response] → [Basic LLM Chain + Output Parser]

This pattern produces more reliable structured output than trying to get agents to output JSON directly.

6. Version Your Prompts

Store prompts in a separate file or database, not hardcoded in the workflow. When output quality changes, you can review prompt history to find what changed.
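
One way to do this (a sketch using a hypothetical node named 'Get Prompt' that fetches the template from your database): reference that node in the chain’s prompt expression and inject the input text into a placeholder.

{{ $('Get Prompt').item.json.template.replace('{input}', $json.text) }}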

For complex workflow architectures, our n8n consulting services can help you design maintainable systems. For hands-on implementation, see our workflow development services.

Frequently Asked Questions

What’s the difference between Basic LLM Chain and AI Agent?

The Basic LLM Chain processes a single prompt and returns a response. It cannot use tools, has no memory of previous messages, and runs exactly once per execution.

The AI Agent operates in a reasoning loop. It can call tools to gather information, maintains conversation history with memory nodes, and continues processing until it decides the task is complete.

Use chains for: Deterministic transformations like sentiment analysis, summarization, and data extraction.

Use agents for: Tasks where the AI needs to decide what actions to take or gather information dynamically.

Why doesn’t my chain remember previous messages?

This is by design. Basic LLM Chain nodes are stateless, meaning each execution starts fresh with no knowledge of prior interactions.

This makes chains predictable and reliable, but unsuitable for conversations.

If you need conversation memory, you have two options:

  • Use an AI Agent with a memory sub-node
  • Manually include conversation history in your prompt using n8n expressions and a database to store previous messages

How do I get structured JSON output from my chain?

Enable Require Specific Output Format in the chain’s options, then connect a Structured Output Parser.

Define your schema either by providing a JSON example or writing an explicit JSON Schema. Additionally, include explicit format instructions in your prompt so the LLM knows exactly what structure to produce.

Set temperature to 0 for deterministic output.

If parsing still fails occasionally, add an Auto-fixing Output Parser which will retry with corrections.

Why is my output parser failing with certain models?

Different LLM providers handle structured output with varying success.

Common causes:

  • The model not receiving the schema in the system prompt
  • Temperature being too high
  • Schema being too complex for the model to follow consistently

Fixes to try in order:

  1. Add explicit format examples in your prompt
  2. Simplify your schema
  3. Use Auto-fixing Output Parser
  4. Switch to a more capable model

Some older model versions have known issues with structured output. Check your provider’s documentation for current recommendations.

Can I use my own self-hosted LLM with the Basic LLM Chain?

Yes. Connect the Ollama Chat Model sub-node to use locally hosted models.

Install Ollama on your server, download your preferred model, and configure the Ollama node with your server URL and model name. This keeps all data on your infrastructure with no external API calls.

Other self-hosted options:

  • LocalAI
  • text-generation-webui with compatible API endpoints

For enterprise deployments requiring specific models or compliance requirements, AWS Bedrock and Azure OpenAI provide managed options with data residency controls.
