Intent Graphs: Protocol-Driven Automation for the LLM Era


Key Points
How do intent graphs differ from traditional workflow engines?
Intent graphs are protocol-driven and AI-native by design. Unlike traditional workflow engines that rely on imperative handlers, intent graphs represent workflows as explicit, inspectable data structures that LLMs can read, understand, and even generate.
Why are intent graphs better for LLM automation than imperative code?
Imperative code creates hidden state and brittle handlers that break when LLMs are involved. Intent graphs keep everything explicit—every node, transition, and parameter is part of the protocol, making them testable, debuggable, and AI-friendly.
What's the relationship between intent graphs and MCP (Model Context Protocol)?
Both embrace protocol-driven design principles. Intent graphs provide the structure for LLM workflows, while MCP defines the communication protocols. Together, they create a foundation for reliable, composable AI automation that's explicit and inspectable.
Why Imperative Code Breaks in the LLM Era
I’ve spent years writing both imperative and functional code. Here’s the hard truth: Imperative code just doesn’t scale when LLMs, APIs, and automation are in the mix.
- You end up with a mess of step-by-step handlers, global state, and hidden side effects.
- Debugging is a nightmare. Testing is worse.
- Change one thing, and you risk breaking everything else.
LLM-powered automation needs a new kind of structure: Explicit. Inspectable. Protocol-driven. That’s where intent graphs come in.
What Is an Intent Graph?
An intent graph is a protocol-driven map of your workflow.
- Every node is explicit—splitters, classifiers, actions.
- The graph structure itself is the system.
- No more glue code or handler spaghetti.
You model what needs to happen, not how to do it imperatively.
Example: Real-World Intent Graph (LLM-Native Pattern)
{
  "root": "llm_splitter",
  "nodes": {
    "llm_splitter": {
      "id": "llm_splitter",
      "type": "splitter",
      "name": "llm_splitter",
      "description": "LLM-powered splitter for multi-intent handling",
      "splitter_function": "llm_splitter",
      "llm_config": {
        "provider": "openrouter",
        "api_key": "${OPENROUTER_API_KEY}",
        "model": "moonshotai/kimi-k2"
      },
      "children": ["main_classifier"]
    },
    "main_classifier": {
      "id": "main_classifier",
      "type": "classifier",
      "name": "main_classifier",
      "description": "LLM-powered intent classifier",
      "classifier_function": "llm_classifier",
      "children": ["greet_action", "calculate_action", "weather_action", "help_action"]
    },
    "greet_action": {
      "id": "greet_action",
      "type": "action",
      "name": "greet_action",
      "description": "Greet the user",
      "function": "greet_action",
      "param_schema": {"name": "str"}
    },
    "calculate_action": {
      "id": "calculate_action",
      "type": "action",
      "name": "calculate_action",
      "description": "Perform a calculation",
      "function": "calculate_action",
      "param_schema": {"operation": "str", "a": "float", "b": "float"}
    },
    "weather_action": {
      "id": "weather_action",
      "type": "action",
      "name": "weather_action",
      "description": "Get weather information",
      "function": "weather_action",
      "param_schema": {"location": "str"}
    },
    "help_action": {
      "id": "help_action",
      "type": "action",
      "name": "help_action",
      "description": "Get help",
      "function": "help_action",
      "param_schema": {}
    }
  }
}
What’s Happening Here?
- The root node (llm_splitter) is your entry point—an LLM-powered splitter that routes input.
- The splitter hands off to a classifier (main_classifier), another LLM node that decides user intent.
- Each action node is pure, explicit, and has a clear schema—no hidden state, no mystery meat.
LLM configs are data, not code. Children are just references. No magic, no black box.
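To make "pure and explicit" concrete, here is a minimal sketch of what the action functions behind those nodes could look like in plain Python. It assumes the runtime calls each function with keyword arguments matching its param_schema; the bodies are placeholders, not intent-kit's actual implementation.

# Sketch: the action functions referenced by the graph above.
# Assumption: each function is called with keyword arguments that match
# its node's param_schema. Pure and stateless by design.

def greet_action(name: str) -> str:
    # param_schema: {"name": "str"}
    return f"Hello, {name}!"

def calculate_action(operation: str, a: float, b: float) -> float:
    # param_schema: {"operation": "str", "a": "float", "b": "float"}
    ops = {
        "add": lambda x, y: x + y,
        "subtract": lambda x, y: x - y,
        "multiply": lambda x, y: x * y,
        "divide": lambda x, y: x / y,
    }
    return ops[operation](a, b)

def weather_action(location: str) -> str:
    # param_schema: {"location": "str"}; a real node would call a weather API.
    return f"(stub) Weather for {location}"

def help_action() -> str:
    # param_schema: {}; no inputs at all.
    return "Try: greet, calculate, or weather."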
Why Intent Graphs Crush Imperative Workflows
- Everything’s explicit. You see the entire workflow in one place.
- No hidden state. Every transition, param, and side effect is part of the protocol.
- Composable and testable. Each node is reusable, inspectable, and AI-native.
- AI-ready. LLMs can read, extend, or even generate intent graphs.
- Easy to evolve. Add or swap nodes without a full rewrite.
Compare that to an imperative mess of nested if/else, call stacks, and hidden bugs—and it’s no contest.
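For example, "composable and testable" can be as plain as the following sketch using Python's built-in unittest. It assumes the action functions from the earlier snippet live in a hypothetical actions.py module.

# Sketch: pure action nodes need no mocks, fixtures, or framework setup.
# Assumes the functions from the earlier snippet live in a hypothetical
# actions.py module.
import unittest

from actions import calculate_action, greet_action

class ActionNodeTests(unittest.TestCase):
    def test_calculate_is_deterministic(self):
        # Same inputs, same output: there is no hidden state to reset.
        self.assertEqual(calculate_action(operation="add", a=2.0, b=3.0), 5.0)
        self.assertEqual(calculate_action(operation="divide", a=9.0, b=3.0), 3.0)

    def test_greet_uses_only_its_declared_params(self):
        self.assertEqual(greet_action(name="Ada"), "Hello, Ada!")

if __name__ == "__main__":
    unittest.main()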
Protocols: The Real Superpower
Intent graphs work because they embrace protocol-driven design (see MCP, intent-kit, etc.).
- Every workflow is self-documenting and portable.
- Steps/nodes are reusable across projects, clouds, and languages.
- You can version, audit, and test every path in your automation.
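Because the workflow is data, "audit every path" can be a short script that walks the JSON. A minimal sketch, assuming the graph above is saved as intent_graph.json (the filename is my own choice):

# Sketch: auditing an intent graph is just walking a JSON document.
# Assumes the graph shown earlier is saved as intent_graph.json.
import json

with open("intent_graph.json") as f:
    graph = json.load(f)

nodes = graph["nodes"]

# Every child reference must resolve to a real node.
for node_id, node in nodes.items():
    for child in node.get("children", []):
        assert child in nodes, f"{node_id} references missing node {child}"

# Every node should be reachable from the root, so there are no dead branches.
reachable, stack = set(), [graph["root"]]
while stack:
    current = stack.pop()
    if current in reachable:
        continue
    reachable.add(current)
    stack.extend(nodes[current].get("children", []))

print("Unreachable nodes:", set(nodes) - reachable or "none")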
Where Should You Use Intent Graphs?
Perfect for:
- LLM-powered APIs, backends, and toolchains
- Automating multi-step processes (with or without AI)
- Systems of “AI experts” built from reusable parts
- Anything you want to debug, inspect, and scale
Not for:
- Tight, low-level system code (where imperative rules)
- One-off scripts you’ll throw away tomorrow
Getting Started with intent-kit
Want to ship reliable, protocol-driven automation? intent-kit is the Python framework I built to make this dead simple.
How to start:
- Model your workflow as an intent graph (just like above, as JSON/YAML).
- Define each node (splitter, classifier, action) as a pure, stateless function.
- Run, test, and iterate—intent-kit takes care of the rest.
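To show the shape of the pattern end to end, here is a rough sketch in plain Python. It is not intent-kit's actual API: it just reads the graph as data and dispatches to the pure action functions, while intent-kit layers the LLM-powered splitter and classifier nodes on top of the same idea.

# Rough sketch of the pattern, not intent-kit's actual API.
# The graph is plain data; dispatch is a lookup into a table of pure functions.
import json

from actions import (  # hypothetical module from the earlier sketch
    calculate_action,
    greet_action,
    help_action,
    weather_action,
)

ACTIONS = {fn.__name__: fn for fn in (greet_action, calculate_action, weather_action, help_action)}

def run_action(graph: dict, node_id: str, params: dict):
    node = graph["nodes"][node_id]
    assert node["type"] == "action", f"{node_id} is not an action node"
    return ACTIONS[node["function"]](**params)

if __name__ == "__main__":
    with open("intent_graph.json") as f:  # the JSON shown earlier
        graph = json.load(f)
    # In a real run, the splitter and classifier nodes would use the
    # configured LLM to pick the target node and extract params from user input.
    print(run_action(graph, "greet_action", {"name": "Ada"}))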
Ready to Build Smarter?
Stop fighting with brittle handlers and hidden state. Start modeling your workflows as intent graphs—future-proof, inspectable, and AI-ready.
👉 Check out intent-kit to see how to build protocol-driven LLM automation in Python.
TL;DR:
Protocol-driven intent graphs turn AI automation into explicit, scalable infrastructure. Imperative code is a dead end. The future is graph-based and protocol-native.