Getting Started with LangGraph - Workflows and AI Agents
Master LangGraph by building an intelligent support ticket system. Learn the critical difference between predictable, developer-controlled workflows and flexible, LLM-driven agents. This tutorial provides the foundational skills for orchestrating complex, stateful AI applications.

Simple LLM calls work great for basic tasks, but the moment you need your system to revise its work, choose between tools, or handle complex branching logic, everything falls apart. Wrestling with nested `if/else` statements and manual state management turns your clean prototype into an unmaintainable nightmare.
LangGraph solves this by letting you build AI systems as explicit graphs instead of tangled code. Think of it like designing a factory assembly line: you define workstations (Nodes) and the paths between them (Edges), making your entire process visual, debuggable, and robust. Note that LangGraph is a library built by the LangChain team on top of the LangChain library.
You'll build the same intelligent support ticket system twice. First as a predictable workflow where you control every step, then as an autonomous agent where the LLM calls the shots. By the end, you'll know exactly when to use each approach and have the skills to orchestrate any complex AI system with confidence.
Effective workflows and agents are built upon the foundational capabilities explored in the AI Engineer Toolkit tutorial:
- Memory: Maintaining context across interactions.
- Tool Use: Enabling interaction with external systems.
- Structured Output: Ensuring predictable data formats.
Tutorial Goals
- Master the three core LangGraph building blocks: State, Nodes, and Edges
- Build a complete support ticket system using the structured workflow pattern
- Rebuild the same system as an autonomous agent to see the key differences
- Implement automated quality control with evaluator-feedback loops
- Add human oversight checkpoints to prevent costly AI mistakes
Why LangGraph?
Ever tried building an AI system that needs to revise its work or choose between different tools? You probably started with simple LLM calls, then quickly found yourself writing nested `while` loops and `if/else` statements to manage the complexity. Soon enough, your clean prototype becomes a debugging nightmare where the actual business logic is buried in control flow code.
Here's what that messy approach looks like:
```python
# The Old Way - Good Luck Debugging This
def draft_and_revise(ticket):
    draft = draft_initial_response(ticket)
    for i in range(MAX_REVISIONS):
        evaluation = evaluate_draft(draft, ticket)
        if "PASS" in evaluation:
            return draft  # Success!
        else:
            # Logic buried in loops, state passed manually
            draft = revise_based_on_feedback(draft, evaluation)
    return draft  # Hope for the best
```
LangGraph flips this on its head. Instead of tangled control flow, you design your AI system like a factory assembly line where work moves through stations. You get three building blocks:
- **State**: Your system's memory - a shared object that tracks everything from the original `ticket_text` to the current `draft_response`.
- **Nodes**: Python functions that do one job well. Your `classify_ticket` node reads the ticket and updates the classification. Your `draft_response` node writes a response. Each node is simple and testable.
- **Edges**: The paths between nodes. Go directly from A to B, or add conditions like "if classification is billing, route to escalation; otherwise, retrieve knowledge."
The same revision loop becomes this clean graph:
What you get:
- Zero manual state management - the graph handles it
- Visual debugging - you can literally see what happened
- Natural loops and branches - complex logic becomes simple
- Production-ready reliability - your app logic is explicit, not buried
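If it helps to see the mental model in miniature, here is a dependency-free toy sketch of that revision loop expressed as a graph - plain functions and a routing table standing in for LangGraph's nodes and edges. Every name here (`run`, `route_after_evaluate`, the dict-based state) is illustrative only, not the LangGraph API:

```python
# Toy graph runner - an ILLUSTRATION of the mental model, not the LangGraph API.
# Nodes are functions returning partial state updates; edges are a routing
# table; a conditional edge is just a routing function.

def draft(state):
    return {"draft": f"Reply to: {state['ticket']}"}

def evaluate(state):
    # Pretend the first draft fails review and the revision passes.
    return {"verdict": "PASS" if state["revisions"] > 0 else "FAIL"}

def revise(state):
    return {"draft": state["draft"] + " (revised)",
            "revisions": state["revisions"] + 1}

def route_after_evaluate(state):
    return "revise" if state["verdict"] == "FAIL" and state["revisions"] < 3 else "END"

nodes = {"draft": draft, "evaluate": evaluate, "revise": revise}
edges = {"draft": "evaluate", "revise": "evaluate", "evaluate": route_after_evaluate}

def run(state, entry="draft"):
    current = entry
    while current != "END":
        state.update(nodes[current](state))  # the graph merges each node's update
        nxt = edges[current]
        current = nxt(state) if callable(nxt) else nxt
    return state

final = run({"ticket": "My login is broken", "revisions": 0})
print(final["draft"])  # Reply to: My login is broken (revised)
```

Notice that the business logic lives in small, testable functions while the loop lives entirely in the edge table - that separation is exactly what LangGraph gives you for real.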
Use Case: Automating Support Ticket Triage
Many companies are drowning in support tickets, and their teams are manually sorting through "my login is broken" messages while important issues get buried. This is exactly the kind of repetitive, rule-based work that AI excels at, and the perfect way to see LangGraph in action.
We're building a system that takes a raw support ticket and automatically:
- Classifies the ticket - technical issue, billing question, or general inquiry?
- Retrieves relevant solutions from your knowledge base for technical problems
- Drafts a helpful response using the retrieved information
- Reviews and revises its own work until it meets quality standards
Our knowledge base will be simple for this tutorial, but the pattern scales to any size:
```python
knowledge_base = [
    "For login issues, tell the user to try resetting their password via the 'Forgot Password' link.",
    "Billing inquiries should be escalated to the billing department by creating a ticket in Salesforce.",
    "The app is known to crash on startup if the user's cache is corrupted. The standard fix is to clear the application cache.",
]
```
We'll build this exact same system twice using different LangGraph patterns. First as a structured workflow where you control every step, then as an autonomous agent where the LLM decides its own path. By the end, you'll know exactly when to use each approach in your own projects.
Setup
Project Setup
You can find the complete code on GitHub: LangGraph Getting Started Notebook.
Before we begin, we need to install the necessary libraries. This setup includes the core `langgraph` library along with integrations for the LLM provider (`langchain-ollama`) and embeddings (`fastembed`):

```shell
pip install -Uqqq pip --progress-bar off
pip install -qqq langgraph==0.6.6 --progress-bar off
pip install -qqq langchain-ollama==0.3.7 --progress-bar off
pip install -qqq fastembed==0.7.2 --progress-bar off
```
With the dependencies installed, let's import the necessary modules:
```python
from dataclasses import dataclass, field
from typing import Annotated, List, TypedDict

from IPython.display import Image, display
from langchain.chat_models import init_chat_model
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_core.documents import Document
from langchain_core.messages import AnyMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_core.vectorstores import InMemoryVectorStore
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
```
And initialize the LLM:
```python
llm = init_chat_model("qwen3:8b", model_provider="ollama")
```
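One piece the snippets below rely on is a `retriever` object with an `.invoke(query)` method. The notebook builds it from the knowledge base using `FastEmbedEmbeddings` and `InMemoryVectorStore` (see the linked notebook for the exact code). If you just want to follow along without downloading embedding models, here is a hypothetical, dependency-free stand-in with the same interface - keyword overlap instead of embeddings:

```python
# Dependency-free stand-in for the notebook's retriever (the real one uses
# FastEmbedEmbeddings + InMemoryVectorStore). Skip this block if you build
# the real vector-store retriever.
from dataclasses import dataclass

@dataclass
class Document:  # minimal stand-in for langchain_core.documents.Document
    page_content: str

class KeywordRetriever:
    """Mimics the `.invoke(query) -> list[Document]` retriever interface."""

    def __init__(self, texts, k=2):
        self.docs = [Document(page_content=t) for t in texts]
        self.k = k

    def invoke(self, query: str) -> list:
        # Rank documents by how many query words they share.
        words = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(words & set(d.page_content.lower().split())),
            reverse=True,
        )
        return scored[: self.k]

knowledge_base = [  # repeated from above so this block is self-contained
    "For login issues, tell the user to try resetting their password via the 'Forgot Password' link.",
    "Billing inquiries should be escalated to the billing department by creating a ticket in Salesforce.",
    "The app is known to crash on startup if the user's cache is corrupted. The standard fix is to clear the application cache.",
]
retriever = KeywordRetriever(knowledge_base, k=1)
print(retriever.invoke("My login is broken")[0].page_content)
```

This is only a sketch so the later cells run end to end; in production you would keep the embedding-based retriever.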
The Workflow - You Are in Control
When you need predictable, reliable AI systems, workflows are your best friend. You design the exact path from start to finish. The LLM executes specific tasks at each step, but you're the architect calling the shots.
This approach is perfect when consistency matters and you have a clear, defined process. Every ticket will follow the same logical flow, making your system reliable and easy to debug. Let's build it step by step.
Defining the State
Think of State as your system's memory - everything important gets stored here as it flows through your workflow. No more juggling variables or worrying about losing data between steps. LangGraph handles all the plumbing.
```python
@dataclass
class TicketTriageState:
    ticket_text: str
    classification: str = ""
    retrieved_docs: List[Document] = field(default_factory=list)
    draft_response: str = ""
    evaluation_feedback: str = ""
    revision_count: int = 0
```
Here's what we're tracking:
- `ticket_text` - The original problem from the customer
- `classification` - What type of issue this is (technical, billing, etc.)
- `retrieved_docs` - Relevant solutions we found in our knowledge base
- `draft_response` - Our current attempt at a helpful response
- `evaluation_feedback` - How good our draft is (and how to improve it)
- `revision_count` - How many times we've tried to improve the response
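A detail worth internalizing before we write the nodes: each node returns only the fields it changed, and LangGraph merges that partial update into the state for you. Conceptually it behaves like `dataclasses.replace` (a simplification for illustration, not LangGraph's internal mechanism):

```python
from dataclasses import dataclass, replace

@dataclass
class TicketTriageState:  # trimmed to three fields for the illustration
    ticket_text: str
    classification: str = ""
    revision_count: int = 0

state = TicketTriageState(ticket_text="My login is broken")

# A node returns just the fields it changed...
node_update = {"classification": "Technical Issue"}

# ...and the graph merges them in, leaving everything else untouched.
state = replace(state, **node_update)

print(state.classification)  # Technical Issue
print(state.ticket_text)     # My login is broken
```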
Building the Nodes
Nodes are where the real work happens. Each one is just a Python function that takes the current state, does something useful, and returns what it wants to update. Simple, focused, and easy to test.
1. classify_ticket
First things first - what kind of problem are we dealing with? This node reads the ticket and figures out if it's technical, billing, or something else entirely.
```python
CLASSIFY_PROMPT = """
Classify this support ticket into one of the following categories:
'Technical Issue', 'Billing Inquiry', 'General Question'.
<ticket>
{ticket_text}
</ticket>
""".strip()

def classify_ticket(state: TicketTriageState) -> dict:
    # .content extracts the plain string from the AIMessage the LLM returns
    classification = llm.invoke(CLASSIFY_PROMPT.format(ticket_text=state.ticket_text)).content
    return {"classification": classification}
```
2. retrieve_knowledge
Now we know what type of problem we're solving, so let's find the relevant solutions. This node searches through our knowledge base and pulls up anything that might help answer the customer's question:
```python
def retrieve_knowledge(state: TicketTriageState) -> dict:
    retrieved_docs = retriever.invoke(state.ticket_text)
    return {"retrieved_docs": retrieved_docs}
```
3. draft_response
Time to write the actual response. This node combines the customer's original question with the solutions we found, then asks the LLM to craft a helpful, professional reply:
```python
DRAFT_PROMPT = """
Based on this context:
<context>
{context}
</context>
Draft a response for this ticket:
<ticket>
{ticket_text}
</ticket>
""".strip()

def draft_response(state: TicketTriageState) -> dict:
    context = "\n".join([doc.page_content for doc in state.retrieved_docs])
    prompt = DRAFT_PROMPT.format(context=context, ticket_text=state.ticket_text)
    draft = llm.invoke(prompt).content  # keep the draft as a plain string
    return {"draft_response": draft}
```
4. `evaluate_draft` and `revise_response` (Quality Control Loop)
Here we're implementing quality control. Instead of sending the first draft, we have the system critique its own work and revise until it's actually good.
The `evaluate_draft` node plays the tough critic, checking if our response actually solves the customer's problem:
```python
EVALUATE_PROMPT = """
Does this draft
<draft>
{draft_response}
</draft>
fully address the ticket?
<ticket>
{ticket_text}
</ticket>
If not, provide feedback.
Respond with 'PASS' or 'FAIL: [feedback]'.
""".strip()

def evaluate_draft(state: TicketTriageState) -> dict:
    evaluation_prompt = EVALUATE_PROMPT.format(
        draft_response=state.draft_response, ticket_text=state.ticket_text
    )
    # .content gives us a plain string so should_revise can check for "FAIL"
    evaluation_result = llm.invoke(evaluation_prompt).content
    revision_count = state.revision_count + 1
    return {"evaluation_feedback": evaluation_result, "revision_count": revision_count}
```
When the critic says "not good enough," the `revise_response` node takes that feedback and writes a better version:
```python
REVISE_PROMPT = """
Revise this draft:
<draft>
{draft_response}
</draft>
based on the following feedback:
<feedback>
{evaluation_feedback}
</feedback>
""".strip()

def revise_response(state: TicketTriageState) -> dict:
    revised_draft = llm.invoke(
        REVISE_PROMPT.format(
            draft_response=state.draft_response,
            evaluation_feedback=state.evaluation_feedback,
        )
    ).content
    return {"draft_response": revised_draft}
```
Connecting the Flow with Edges
Now for the fun part - wiring everything together. Edges tell LangGraph how work flows from one node to the next. You get to be the traffic controller, deciding exactly where each piece of data goes.
First, let's create our graph and add all our worker nodes:
```python
graph = StateGraph(TicketTriageState)

# Add all our functions as nodes
graph.add_node("classify", classify_ticket)
graph.add_node("retrieve", retrieve_knowledge)
graph.add_node("draft", draft_response)
graph.add_node("evaluate", evaluate_draft)
graph.add_node("revise", revise_response)
```
The main flow is straightforward - classify, then retrieve, then draft, then evaluate:
```python
# Define the main sequence of operations
graph.add_edge("classify", "retrieve")
graph.add_edge("retrieve", "draft")
graph.add_edge("draft", "evaluate")

# After revising, the draft must be evaluated again
graph.add_edge("revise", "evaluate")
```
Here's where it gets interesting - the conditional edge that creates our quality control loop. After evaluation, we need to decide: is this response good enough, or do we need another round of revision?
```python
def should_revise(state: TicketTriageState) -> str:
    feedback = state.evaluation_feedback
    revision_count = state.revision_count
    # If the draft failed evaluation and we haven't hit our revision limit, try again
    if "FAIL" in feedback and revision_count < 3:
        return "revise"
    # Otherwise, we're done
    else:
        return "end"

graph.add_conditional_edges(
    "evaluate",     # After evaluation...
    should_revise,  # Ask this function what to do next
    {
        "revise": "revise",  # Go back for another round
        "end": END,          # Or finish up
    },
)
```
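Because `should_revise` is plain Python, the routing logic can be unit tested without any LLM in the loop - just hand it lightweight stand-in states. A self-contained sketch (the function is re-declared here so the snippet runs on its own):

```python
# Minimal stand-in state carrying only the two fields the router reads.
class FakeState:
    def __init__(self, feedback, count):
        self.evaluation_feedback = feedback
        self.revision_count = count

def should_revise(state) -> str:
    if "FAIL" in state.evaluation_feedback and state.revision_count < 3:
        return "revise"
    return "end"

print(should_revise(FakeState("FAIL: too vague", 1)))    # revise
print(should_revise(FakeState("PASS", 1)))               # end
print(should_revise(FakeState("FAIL: still vague", 3)))  # end (revision cap hit)
```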
Finally, we wire it all together and compile our workflow into something we can actually run:
```python
graph.set_entry_point("classify")
app = graph.compile()
```
Want to see what we just built? LangGraph can generate a visual diagram of your entire system:
```python
display(Image(app.get_graph().draw_mermaid_png()))
```

Perfect! Our workflow flows from classification to drafting, then loops through evaluation and revision until we get a quality response.
Running the Workflow
Time to put our system to work! Just give it a ticket and watch the magic happen:
```python
initial_state = TicketTriageState(ticket_text="My login is broken, please help!")
final_state = app.invoke(initial_state)
```
When it's done, check the `final_state` to see what happened. The `draft_response` field contains your polished, quality-checked answer:
```
**Ticket Response:**
Your login issue can be resolved by resetting your password using the 'Forgot Password' link.
If you continue to experience issues, please provide additional details so we can assist further.
```
That's it! You've built a complete AI workflow that classifies tickets, finds solutions, drafts responses, and automatically improves its own work. Every step was predictable and under your control - exactly what you want for production systems that need to be reliable.
The Autonomous Agent - The LLM is in Control
Now let's flip the script. Instead of you controlling every step, what if you gave the LLM a toolbox and let it figure out the best way to solve the problem? This is where autonomous agents shine - they adapt, improvise, and handle complexity you didn't anticipate.
The magic happens through ReAct (Reason + Act) - a simple but powerful loop:
- Reason: "What should I do next based on what I know?"
- Act: Execute a tool to gather more information or perform an action
- Observe: "What did that tell me? What's my next move?"
The agent keeps cycling through these steps until it has everything it needs to give you a final answer. You provide the capabilities; the LLM provides the strategy.
Giving Your Agent Superpowers with Tools
An agent without tools is just an expensive chatbot. The real power comes from what it can do - and that's entirely up to you. Let's take our workflow functions and transform them into agent tools using the `@tool` decorator:
```python
@tool
def classify_ticket(ticket_text: str) -> str:
    """
    Classifies a support ticket into 'Technical Issue', 'Billing Inquiry', or 'General Question'.
    Use this tool first to understand the nature of the ticket.
    """
    return llm.invoke(CLASSIFY_PROMPT.format(ticket_text=ticket_text)).content.strip()

@tool
def retrieve_knowledge(ticket_text: str) -> list[str]:
    """
    Retrieves relevant knowledge base articles for a given ticket.
    Use this for 'Technical Issue' tickets to find potential solutions.
    """
    return [doc.page_content for doc in retriever.invoke(ticket_text)]

@tool
def draft_response(ticket_text: str, context: list[str]) -> str:
    """
    Drafts a helpful response to a support ticket, using provided context.
    """
    context_str = "\n".join(context)
    return llm.invoke(
        DRAFT_PROMPT.format(context=context_str, ticket_text=ticket_text)
    ).content.strip()

tools = [classify_ticket, retrieve_knowledge, draft_response]
```
That's it - three simple functions, but the agent can combine them in countless ways to solve problems. The `@tool` decorator automatically tells the LLM what each function does and how to use it. Your agent's intelligence comes from figuring out which tools to use and when.
Programming Your Agent's Personality
Your agent's "brain" is really just a well-crafted system prompt - the instructions that tell it how to behave and what its job is. This is where you set the rules of engagement:
```python
AGENT_SYSTEM_PROMPT = """
You are an expert support ticket triager. Your goal is to process a user's ticket by taking the following steps:
1. First, classify the ticket to understand its category.
2. If the ticket is a 'Technical Issue', retrieve relevant knowledge.
3. Finally, draft a response to the user.
You must use the provided tools to perform these actions in sequence. Respond ONLY with the final drafted response once all steps are complete.
"""
```
Now we wire the LLM to both the system prompt and the tools. The `.bind_tools()` method tells the LLM about every tool it can use:
```python
agent_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", AGENT_SYSTEM_PROMPT),
        ("placeholder", "{messages}"),
    ]
)

llm_with_tools = llm.bind_tools(tools)
agent = agent_prompt | llm_with_tools
```
Building the Agent Graph
Here's the beautiful thing about agents - they're way simpler than workflows. Instead of complex pipelines, you get a loop: the agent thinks, acts, observes, then thinks again.
State: Just Keep the Conversation
Your agent just needs to remember the conversation - every message, tool call, and result gets stored in one simple list:
```python
class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
```
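The `Annotated[..., add_messages]` part is doing real work: it tells LangGraph to append a node's returned messages to the list instead of overwriting it. Conceptually, such a reducer behaves like list concatenation (a simplification - the real `add_messages` also handles message IDs and in-place updates):

```python
# Conceptual sketch of a reducer: it defines HOW a node's returned value
# is merged into existing state (here: append instead of replace).
def append_reducer(existing: list, new: list) -> list:
    return existing + new

state = {"messages": ["user: My login is broken"]}
node_output = {"messages": ["ai: calling classify_ticket"]}

# Without a reducer, the new value would replace the old list;
# with one, the conversation history accumulates.
state["messages"] = append_reducer(state["messages"], node_output["messages"])
print(state["messages"])
```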
You only need two workers in your agent factory:
- `agent_node` - The thinker that decides what to do next
- `tool_node` - The doer that executes the chosen tools
```python
def agent_node(state: AgentState):
    response = agent.invoke(state)
    return {"messages": [response]}

tool_node = ToolNode(tools)
```
The ReAct Loop: Where Magic Happens
The conditional edge creates the agent's decision-making loop. After the agent thinks, we check: did it want to use a tool, or is it ready to give the final answer?
```python
def should_continue(state: AgentState) -> str:
    last_message = state["messages"][-1]
    if not last_message.tool_calls:
        return "end"   # Agent is done
    return "continue"  # Agent wants to use tools

# Build the graph
agent_graph = StateGraph(AgentState)
agent_graph.add_node("agent", agent_node)
agent_graph.add_node("tools", tool_node)
agent_graph.set_entry_point("agent")
agent_graph.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "tools", "end": END},
)
agent_graph.add_edge("tools", "agent")  # Tools always go back to agent

agent_app = agent_graph.compile()
```
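Since `should_continue` only inspects the last message's `tool_calls`, you can exercise the routing with simple stand-in message objects - no model required (the function is re-declared here so the snippet is self-contained):

```python
# Stand-in message object exposing only the attribute the router reads.
class FakeMessage:
    def __init__(self, tool_calls):
        self.tool_calls = tool_calls

def should_continue(state) -> str:
    last_message = state["messages"][-1]
    if not last_message.tool_calls:
        return "end"
    return "continue"

wants_tool = {"messages": [FakeMessage([{"name": "classify_ticket"}])]}
final_answer = {"messages": [FakeMessage([])]}

print(should_continue(wants_tool))    # continue
print(should_continue(final_answer))  # end
```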
Watch Your Agent Think
The graph structure is elegantly simple - just a thinking loop that can handle infinite complexity:
```python
display(Image(agent_app.get_graph().draw_mermaid_png()))
```

That's it - agent thinks, tools execute, agent thinks again. Simple structure, sophisticated behavior.
Let's see it in action! We'll stream the execution so you can watch the agent's thought process unfold in real-time:
```python
initial_state = AgentState(
    messages=[HumanMessage(content="My login is broken, please help!")]
)

for entry in agent_app.stream(initial_state, stream_mode="updates"):
    # Each entry maps the node that just ran to its state update;
    # one simple way to watch the run is to print each update as it arrives.
    for node_name, update in entry.items():
        print(node_name)
        print(update["messages"][-1].content)
        print("---")
```
Here's what happens when we run it:
```
Agent
Tool calls:
[{'name': 'classify_ticket', 'args': {'ticket_text': 'My login is broken, please help!'}, 'id': '893810d4-349e-4d2e-904f-91c870bc573c', 'type': 'tool_call'}]
---
Tool response:
The ticket "My login is broken, please help!" should be classified under **Technical Issue**.
**Reasoning**: The user is reporting a problem with their login functionality, which is a technical malfunction related to system access. This does not pertain to billing or general informational queries.
...
Agent
**Ticket Response:**
Hello, we're sorry to hear about your login issue. Please try resetting your password via the **'Forgot Password'** link. If you continue to experience problems, feel free to provide additional details so we can assist you further.
**Note:** If clearing the application cache resolves the issue, you may also try that as an additional troubleshooting step. Let us know how it goes!
```
Notice something incredible? We described the plan only in plain text - we just gave the agent tools and a goal, and the LLM figured out which tool to call, in what order, and when to stop, all on its own. This is the power of autonomous agents - they adapt to whatever comes their way without you having to specify every possible scenario. Of course, they can also get stuck or go off track, but that's where human oversight comes in.
Human-in-the-Loop (HITL): The Essential Safety Net
Autonomous agents are incredibly powerful, but they can also be incredibly expensive when they go wrong. An agent that misunderstands a customer request, accidentally escalates every minor issue, or gets stuck burning through API calls in an infinite loop? That can lead to a business nightmare.
This is why Human-in-the-Loop (HITL) isn't optional for production systems - it can be the difference between a cool demo and an AI system that won't bankrupt you. HITL gives you strategic checkpoints where humans can review, approve, or redirect the AI before it does anything costly or irreversible.
How LangGraph Makes HITL Simple
LangGraph was built with production reality in mind. It has a built-in "pause button" that lets you stop your AI mid-process and wait for human input - no hacky workarounds needed.
Here's how it works: when a node calls `interrupt()`, the entire graph pauses exactly where it is. Your application gets control back, can show the current state to a human (via UI, email, Slack, whatever), collect their decision, and then resume the graph from the exact same spot. It's like hitting pause on a video game to ask a friend what they'd do next.
Where to Add Your Safety Checkpoints
Here are some common places to add human oversight in our triage system - think of them as circuit breakers that prevent expensive mistakes:
1. **Before High-Impact Actions** - Escalating to billing costs your team time and might create unnecessary work. Add a checkpoint: "Hey manager, the AI wants to escalate this ticket to billing. Here's the draft - approve or reject?"
2. **When the AI is Uncertain** - Modify your classifier to output confidence scores. If it's only 60% sure about a classification, pause and ask a human: "I think this is billing, but I'm not confident. What do you think?"
3. **Final Quality Gate** - Before any response goes to a customer, especially for complex technical issues, let a human agent review and edit the AI's draft. One quick review can save you from embarrassing mistakes.
How to Actually Build It
Just create a node that calls `interrupt()`, wire it up with conditional edges, and you're done. Here's how to add approval before escalating tickets:
```python
from langgraph.types import interrupt

def request_human_approval(state: TicketTriageState) -> dict:
    """Pause the graph and wait for human approval."""
    # interrupt() freezes the graph here; the payload is surfaced to your app
    # so you can show it in a UI, email, Slack, etc.
    # Note: interrupt() requires compiling the graph with a checkpointer,
    # e.g. graph.compile(checkpointer=MemorySaver()).
    decision = interrupt({
        "message": "AI wants to escalate this ticket:",
        "draft": state.draft_response,
    })
    # On resume, `decision` holds whatever value the human sent back.
    print(f"Human decided: {decision}")
    return {}

def route_for_escalation(state: TicketTriageState) -> str:
    # Decide if we need approval based on the classification
    if state.classification == "Billing Inquiry":
        return "request_approval"    # Pause for human review
    else:
        return "retrieve_knowledge"  # Continue automatically

# Wire up your safety checkpoint
graph.add_node("request_approval", request_human_approval)
graph.add_conditional_edges(
    "classify",
    route_for_escalation,
    {
        "request_approval": "request_approval",
        "retrieve_knowledge": "retrieve",
    },
)
```
When your human approves (via a button click in your UI), resume the graph by invoking it again with `Command(resume=...)` from `langgraph.types`, using the same thread ID. It picks up exactly where it left off. This is how you go from "cool AI demo" to "production system I actually trust."
Conclusion: What to Choose
Building AI systems that actually work in production is hard, but LangGraph makes it easier. No more wrestling with tangled `if/else` statements or manually passing state around - you now have two powerful patterns that can handle any complexity:
Workflows give you complete control. Every step is predictable, every path is defined by you. Perfect when consistency matters more than creativity.
Agents give you adaptive intelligence. The LLM figures out its own strategy, handling complexity you couldn't anticipate. Perfect for open-ended problems where flexibility is key.
The decision comes down to one simple question: Who's in charge? You or the LLM?
| Criteria | Workflow | Agent |
|---|---|---|
| Process Path | Fixed, predictable steps you define | Dynamic strategy the LLM decides |
| Best For | Data pipelines, known processes, compliance | Research tasks, creative work, complex problem-solving |
| Reliability | Rock solid - same input, same output | Variable - depends on LLM reasoning quality |
| Flexibility | Low - follows your exact blueprint | High - adapts to new situations on the fly |
Start With a Workflow
Always start with a workflow if you can. It's easier to build and debug, and it gives you a solid foundation. You can use (sub)agents to add more complexity later.
LangGraph transforms AI development from hacky scripting to real systems engineering. Your applications become robust, debuggable, and maintainable - the kind of systems you can actually deploy with confidence.
Ready to build more sophisticated systems? Here's where to go next:
- Build Agentic Workflow - Multi-agent systems with human oversight
- Build an AI Agent - Database-connected agents that answer complex queries
Homework Exercises
Time to test your new skills! These exercises will cement your understanding of when and how to use each pattern:
1. Handle Simple Questions Without the Full Pipeline
Your workflow retrieves knowledge for every ticket, but what about simple questions like "What are your business hours?" that don't need the knowledge base?
- Add a third classification: `'General Question'`
- Route general questions directly from `classify` to a new `draft_general_response` node, skipping retrieval and revision
- Sketch out the conditional edge logic: when should the classifier route to this new path vs. the existing knowledge retrieval flow?
2. Give Your Agent Customer Context
Your agent is smart, but it's answering tickets without knowing anything about the customer's history. That's like a doctor treating you without checking your medical records.
- Design a `lookup_customer_history(ticket_text: str)` tool that finds the customer ID and returns their recent support interactions
- Update the `AGENT_SYSTEM_PROMPT` to teach the agent when to use this tool. Write the exact instruction you'd add (hint: consider checking for customer IDs before drafting responses)
3. Social Media Content Generation: Workflow or Agent?
You need to build a system that creates weekly social media content by reading 5 recent blog posts, analyzing 3 industry news articles, and generating 7 tweets.
- Which pattern would you choose: structured workflow or autonomous agent?
- Defend your choice in 2-3 sentences, considering the predictability of the process and reliability requirements