# LLMNode

The LLMNode class sends an instruction to a Claude model and captures the response. It supports template variables so you can inject dynamic data from workflow inputs or previous nodes.
## Constructor

```python
from fibonacci import LLMNode

node = LLMNode(
    id="analyzer",
    name="Analyze Text",
    instruction="Analyze the following text and summarize key points: {{input.text}}"
)
```
## Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| id | str | Required | Unique node identifier (lowercase letters, numbers, underscores, hyphens) |
| name | str | Required | Human-readable node name |
| instruction | str | Required | Instruction sent to Claude — supports {{variable}} template syntax |
| model | str | "claude-haiku-4-5" | Claude model to use (see table below) |
| max_tokens | int | 2000 | Maximum tokens in the response |
| temperature | float | 1.0 | Sampling temperature — 0.0 (deterministic) to 1.0 (creative) |
| dependencies | list[str] | [] | Node IDs this node waits for before running |
| enable_retry | bool | False | Retry the node on failure |
| max_retries | int | 3 | Maximum retry attempts (used when enable_retry=True) |
| retry_delay | float | 1.0 | Initial delay in seconds between retries |
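The id character rules above can be expressed as a simple pattern. This is an illustrative sketch based on the description in the table — the regex and helper are assumptions, not the SDK's actual validator:

```python
import re

# Hypothetical check mirroring the documented id rules:
# lowercase letters, numbers, underscores, hyphens.
ID_PATTERN = re.compile(r"^[a-z0-9_-]+$")

def is_valid_node_id(node_id: str) -> bool:
    """Return True if node_id uses only the documented character set."""
    return bool(ID_PATTERN.match(node_id))

print(is_valid_node_id("analyzer_v2"))  # allowed characters only
print(is_valid_node_id("Analyzer"))     # uppercase is not allowed
```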
## Supported Models

| Model | Identifier | Best For |
|---|---|---|
| Claude Haiku 4.5 | claude-haiku-4-5 | Fast, cost-effective tasks — SDK default |
| Claude Sonnet 4.6 | claude-sonnet-4-6 | Balanced quality and speed — general purpose |
| Claude Opus 4.6 | claude-opus-4-6 | Most capable — complex reasoning and analysis |
## Template Variables

Use {{variable}} syntax to inject dynamic values into the instruction:

```python
node = LLMNode(
    id="writer",
    name="Write Email",
    instruction="""
    Write a {{input.tone}} email about {{input.topic}}.
    Previous analysis: {{analyzer}}
    """
)
```
### Available Variable Sources

| Syntax | Where it comes from |
|---|---|
| {{input.field}} | Workflow input data |
| {{node_id}} | Full output of another node |
| {{node_id.field}} | Specific field from a node’s JSON output |
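To make the three variable sources concrete, here is a minimal sketch of how {{...}} resolution could work against workflow input and node outputs. This is an illustration only, not the SDK's actual resolver:

```python
import re

def resolve_template(instruction: str, inputs: dict, node_outputs: dict) -> str:
    """Replace {{input.field}}, {{node_id}}, and {{node_id.field}} placeholders."""
    def replace(match: re.Match) -> str:
        path = match.group(1).strip()
        if path.startswith("input."):
            # {{input.field}} -> workflow input data
            return str(inputs[path[len("input."):]])
        name, _, field = path.partition(".")
        value = node_outputs[name]
        # {{node_id.field}} -> one field; {{node_id}} -> full output
        return str(value[field]) if field else str(value)
    return re.sub(r"\{\{(.*?)\}\}", replace, instruction)

resolved = resolve_template(
    "Write a {{input.tone}} email. Sentiment: {{analyzer.sentiment}}",
    inputs={"tone": "friendly"},
    node_outputs={"analyzer": {"sentiment": "positive"}},
)
print(resolved)  # Write a friendly email. Sentiment: positive
```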
## Selecting a Model

```python
# Default — fast and cheap
quick = LLMNode(
    id="quick",
    name="Quick Summary",
    instruction="Summarize in one sentence: {{input.text}}"
    # model defaults to claude-haiku-4-5
)

# Balanced — good for most tasks
balanced = LLMNode(
    id="balanced",
    name="Balanced Analysis",
    instruction="Analyze and explain: {{input.text}}",
    model="claude-sonnet-4-6"
)

# Most capable — use for complex tasks
complex_task = LLMNode(
    id="complex_task",
    name="Deep Reasoning",
    instruction="Perform a detailed multi-step analysis of: {{input.data}}",
    model="claude-opus-4-6"
)
```
## Temperature

Controls randomness. Range is 0.0 to 1.0:

```python
# Deterministic — good for extraction, classification, structured output
extract = LLMNode(
    id="extract",
    name="Extract Entities",
    instruction="Extract all names from: {{input.text}}",
    temperature=0.0
)

# Default (1.0) — good for general tasks
general = LLMNode(
    id="general",
    name="General Response",
    instruction="Respond helpfully to: {{input.message}}"
    # temperature defaults to 1.0
)

# More creative — good for writing, brainstorming
creative = LLMNode(
    id="creative",
    name="Creative Writing",
    instruction="Write a short story about: {{input.topic}}",
    temperature=0.9
)
```
## Max Tokens

Limit how long the response can be:

```python
# Short response
classify = LLMNode(
    id="classify",
    name="Classify",
    instruction="Classify as positive, negative, or neutral: {{input.text}}",
    max_tokens=10  # single-word response
)

# Long response
report = LLMNode(
    id="report",
    name="Generate Report",
    instruction="Write a detailed report on: {{input.topic}}",
    max_tokens=2000  # default
)
```
## Dependencies

Control execution order by declaring which nodes must finish before this one runs:

```python
from fibonacci import Workflow, LLMNode

wf = Workflow(name="pipeline")

# First node — no dependencies
analyzer = LLMNode(
    id="analyzer",
    name="Analyze",
    instruction="Analyze: {{input.text}}"
)

# Second node — waits for analyzer to complete
summarizer = LLMNode(
    id="summarizer",
    name="Summarize",
    instruction="Summarize this analysis: {{analyzer}}",
    dependencies=["analyzer"]
)

# Third node — waits for both
reporter = LLMNode(
    id="reporter",
    name="Create Report",
    instruction="""Create a report:
    Analysis: {{analyzer}}
    Summary: {{summarizer}}""",
    dependencies=["analyzer", "summarizer"]
)

wf.add_nodes([analyzer, summarizer, reporter])
```
You can also use the fluent .depends_on() method:

```python
reporter = LLMNode(
    id="reporter",
    name="Create Report",
    instruction="Report: {{analyzer}} / {{summarizer}}"
).depends_on("analyzer", "summarizer")
```
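Dependencies form a directed acyclic graph, and nodes run in an order where every dependency finishes first. The ordering can be sketched with Kahn's algorithm — a plain-Python illustration of the concept, not the SDK's scheduler:

```python
from collections import deque

def execution_order(dependencies: dict[str, list[str]]) -> list[str]:
    """Return a valid run order where every node follows its dependencies."""
    indegree = {node: len(deps) for node, deps in dependencies.items()}
    dependents: dict[str, list[str]] = {node: [] for node in dependencies}
    for node, deps in dependencies.items():
        for dep in deps:
            dependents[dep].append(node)
    # Start with nodes that have no unfinished dependencies
    ready = deque(node for node, count in indegree.items() if count == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for child in dependents[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(dependencies):
        raise ValueError("dependency cycle detected")
    return order

print(execution_order({
    "analyzer": [],
    "summarizer": ["analyzer"],
    "reporter": ["analyzer", "summarizer"],
}))  # ['analyzer', 'summarizer', 'reporter']
```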
## Retry Configuration

Enable retries using enable_retry=True or the fluent .with_retry() method:

```python
# Via constructor
node = LLMNode(
    id="analyzer",
    name="Analyze",
    instruction="Analyze: {{input.text}}",
    enable_retry=True,
    max_retries=5,
    retry_delay=2.0
)

# Via fluent method (equivalent)
node = LLMNode(
    id="analyzer",
    name="Analyze",
    instruction="Analyze: {{input.text}}"
).with_retry(max_retries=5, delay=2.0)
```
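The retry loop itself can be sketched in plain Python. Since retry_delay is described as an "initial" delay, this sketch assumes the delay doubles after each attempt (exponential backoff) — the SDK's actual policy may differ:

```python
import time

def run_with_retry(task, max_retries: int = 3, retry_delay: float = 1.0):
    """Call task(); on failure, retry up to max_retries more times.

    Assumes the delay doubles after each failed attempt; the SDK's
    actual backoff policy may differ.
    """
    delay = retry_delay
    for attempt in range(max_retries + 1):
        try:
            return task()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted — surface the last error
            time.sleep(delay)
            delay *= 2

attempts = 0
def flaky():
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retry(flaky, max_retries=5, retry_delay=0.01))  # ok
```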
## Conditional Execution

Run this node only when a condition is met using .with_condition():

```python
# Only run the expensive model when the analysis level is detailed
deep = LLMNode(
    id="deep",
    name="Deep Analysis",
    instruction="Perform deep analysis of: {{input.text}}",
    model="claude-opus-4-6"
).with_condition(
    left_value="{{input.analysis_level}}",
    operator="equals",
    right_value="detailed"
)
```
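Conceptually, the condition compares a resolved left value against a right value with the named operator, and the node is skipped when the comparison fails. A minimal sketch — "equals" comes from the example above, but "not_equals" and the helper itself are assumptions, not the SDK's operator set:

```python
# Hypothetical condition evaluation mirroring .with_condition().
OPERATORS = {
    "equals": lambda a, b: a == b,
    "not_equals": lambda a, b: a != b,  # assumed operator, not confirmed by the docs
}

def should_run(left_value, operator: str, right_value) -> bool:
    """Return True when the node's condition is satisfied."""
    return OPERATORS[operator](left_value, right_value)

# With input {"analysis_level": "detailed"}, the resolved left value
# "detailed" equals the right value, so the node runs.
print(should_run("detailed", "equals", "detailed"))  # True
```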
## YAML Configuration

```yaml
nodes:
  - id: summarizer
    name: Summarize Text
    type: llm
    instruction: "Summarize the following text: {{input.text}}"
    config:
      model: claude-haiku-4-5  # optional, this is the default
      max_tokens: 500
      temperature: 0.5
    dependencies:
      - fetch_data
```
## Complete Example

```python
from fibonacci import Workflow, LLMNode

wf = Workflow(name="blog-pipeline")

# Outline
outliner = LLMNode(
    id="outliner",
    name="Create Outline",
    instruction="Create a structured outline for a blog post about: {{input.topic}}",
    temperature=0.7
)

# Draft
writer = LLMNode(
    id="writer",
    name="Write Draft",
    instruction="""
    Write a blog post following this outline:
    {{outliner}}
    Style: {{input.style}}
    """,
    model="claude-sonnet-4-6",
    max_tokens=2000,
    dependencies=["outliner"]
)

# Edit
editor = LLMNode(
    id="editor",
    name="Edit Draft",
    instruction="Edit for clarity and engagement: {{writer}}",
    model="claude-opus-4-6",
    temperature=0.3,
    dependencies=["writer"]
)

wf.add_nodes([outliner, writer, editor])

result = wf.run(input_data={
    "topic": "AI in Healthcare",
    "style": "informative yet approachable"
})
print(result.output_data["editor"])
```