
How LangGraph Works

LangGraph is a framework for building stateful, multi-step AI agent workflows as a graph of nodes and edges.

The core idea — LangGraph models an agent as a directed graph. Instead of a linear chain of LLM calls, you have nodes (Python/JS functions) connected by edges (transitions). A single shared state object flows through the graph and gets updated at each node.

The three building blocks:

  1. Nodes — plain functions that receive the current state, do something (call an LLM, run a tool, parse output), and return an updated state. The agent node is usually where the LLM reasons and decides what to do next.
  2. Edges — define how control flows between nodes. A regular edge always goes A → B. A conditional edge inspects the state and routes to different nodes based on the result — this is how you get loops (tool → agent → tool → …) and branching (done? → respond, or → use another tool).
  3. State — a typed dict (or Pydantic model) shared across the entire run. Every node reads from it and can write to it. This replaces the fragile “pass variables through function arguments” pattern. A minimal state definition follows this list.
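
To make the state concrete, here is a minimal sketch of the typed dict most LangGraph agents use. The add_messages reducer is LangGraph's standard way to make returned messages append to the history rather than overwrite it:

from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages

# The shared state that flows through the graph. The add_messages
# reducer means that when a node returns {"messages": [...]},
# LangGraph appends those messages instead of replacing the list.
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]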

Why it’s powerful:

  • Cycles are first-class — an LLM can call tools, get results, reason again, call more tools, and loop until it decides it’s done. Stateless chains can’t do this cleanly.
  • Human-in-the-loop — you can pause the graph at any edge and wait for a human to approve or modify state before continuing.
  • Persistence — state can be checkpointed to a database, so a long-running agent can be resumed after a crash or across sessions. A sketch of pausing and checkpointing follows this list.
  • Multi-agent — you can compose multiple graphs together, where one agent’s output becomes another’s input.
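
As a rough sketch of what pausing and persistence look like in code, assuming a StateGraph named graph built the way the rest of this post describes, and a node named "tool_node":

from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer so state is saved after every step.
# interrupt_before pauses the run before the named node executes,
# giving a human the chance to inspect or edit state first.
# (MemorySaver is in-memory; production setups would use a
# database-backed checkpointer instead.)
app = graph.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["tool_node"],
)

config = {"configurable": {"thread_id": "session-42"}}
app.invoke({"messages": [("user", "Book me a flight")]}, config)
# ...inspect app.get_state(config), approve or modify, then resume:
app.invoke(None, config)

The thread_id is what makes resumption work: invoking again with the same config picks up from the last checkpoint instead of starting over.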

In short: LangGraph gives you the scaffolding to build agents that can loop, branch, pause, and recover — things that are awkward or impossible in a simple sequential chain.

The Agent Node

The agent node is the brain of a LangGraph graph — it’s where the LLM lives and makes decisions. Every time control reaches it, it does three things:

1. Reads the full state. It receives the current state["messages"] — the entire conversation history including the original user query, any previous tool calls, and all tool results so far. This is how the agent “remembers” what it’s already tried.

2. Runs the LLM. It calls the language model with that message history. The LLM has been bound to a list of tool schemas, so it knows what tools are available. Based on everything in context, it decides what to do next.

3. Returns a new message. The output is always an AIMessage, which gets appended to state. That message is either:

  • A regular text response (no tool calls) → the conditional edge routes to respond/END
  • A message with tool_calls → the conditional edge routes to the tool node

A minimal agent node looks like this:

from langchain_core.messages import AIMessage

# AgentState is the TypedDict defined earlier; llm_with_tools is a chat
# model with tool schemas attached via bind_tools() (see the tool section).
def agent(state: AgentState) -> AgentState:
    # The LLM already knows about the available tools via bind_tools()
    response: AIMessage = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

That’s genuinely it. The node itself is thin — the intelligence is in the LLM, and the routing logic lives in the conditional edge, not inside the node.

What makes it powerful is the loop. Because the agent node is revisited after every tool call, the LLM can see the result of its last action and adapt. It can call a second tool based on what the first one returned, correct course if a tool errored, or decide it now has enough information to answer. This is what separates a LangGraph agent from a simple one-shot LLM call — the agent node runs as many times as needed until it decides it’s done.

The agent node is also where you’d inject a system prompt, add memory from previous sessions, or pass in user context — anything you want the LLM to reason over goes into state["messages"] before this node runs.
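
For instance, here is a minimal sketch of injecting a system prompt on every pass (the prompt text is a placeholder):

from langchain_core.messages import SystemMessage

SYSTEM_PROMPT = SystemMessage(content="You are a careful research assistant.")

def agent(state: AgentState) -> AgentState:
    # Prepend the system prompt on every pass so the LLM always sees it,
    # without persisting it into the stored message history.
    response = llm_with_tools.invoke([SYSTEM_PROMPT, *state["messages"]])
    return {"messages": [response]}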

The Tool Node

Tools in LangGraph are how the agent reaches out to the world — search engines, databases, code runners, APIs.

Step 1 — The LLM decides to use a tool. The agent node runs the LLM with a list of available tool schemas (name, description, parameter types). If the LLM decides it needs one, it doesn’t call it directly — it emits an AIMessage that contains a tool_calls field with the tool name and arguments it wants to use.

Step 2 — The message goes into state. That AIMessage is appended to state["messages"]. LangGraph then checks the conditional edge: does the message contain tool_calls? If yes, route to the tool node. If no, route to END (or a respond node).

Step 3 — The tool node executes. LangGraph’s built-in ToolNode (or your custom one) reads the tool_calls from the last message, looks up the corresponding Python function by name, and calls it with the provided arguments. This is where real work happens — an HTTP request, a database query, a file read, running code.

Step 4 — The result goes back into state. The tool’s return value is wrapped in a ToolMessage (which carries the tool_call_id so the LLM can match it up) and appended to state["messages"]. Control loops back to the agent node, which now sees the full conversation including the tool result and reasons again.
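
To make steps 3 and 4 concrete, here is a simplified, hand-rolled sketch of what a tool node does internally; the prebuilt ToolNode handles this (plus error handling and parallel calls) for you. The tools_by_name registry is an assumption, and search_web is the tool registered in the next snippet:

from langchain_core.messages import ToolMessage

tools_by_name = {t.name: t for t in [search_web]}  # assumed tool registry

def call_tools(state: AgentState) -> AgentState:
    last = state["messages"][-1]
    results = []
    for call in last.tool_calls:            # each: {"name": ..., "args": ..., "id": ...}
        tool = tools_by_name[call["name"]]  # look up the function by name
        output = tool.invoke(call["args"])  # run it with the LLM-provided arguments
        results.append(ToolMessage(content=str(output), tool_call_id=call["id"]))
    return {"messages": results}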

A few important things to know about how tools are registered:

from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

@tool
def search_web(query: str) -> str:
    """Search the web for current information."""
    return do_actual_search(query)  # placeholder for a real search call

# Pass tools to the LLM so it knows they exist
llm_with_tools = llm.bind_tools([search_web])

# The ToolNode gets the same list so it can dispatch calls
tool_node = ToolNode([search_web])

The @tool decorator does two things: it generates a JSON schema from the function’s signature and docstring (which gets sent to the LLM), and it makes the function dispatchable by name (which the ToolNode uses to actually run it).

The docstring matters — the LLM uses it to decide when to call the tool. A vague docstring means the LLM will misuse or ignore the tool. A specific one (e.g. “Use this to search for real-time information unavailable in training data”) gives the model the context it needs to make good routing decisions.
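
You can inspect what the decorator generated; the outputs shown in the comments here are approximate:

print(search_web.name)         # search_web
print(search_web.description)  # Search the web for current information.
print(search_web.args)         # {'query': {'title': 'Query', 'type': 'string'}}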

The Respond Node

The respond node is the final step in a LangGraph agent — it’s where the agent stops looping and produces a clean answer for the user.

It’s actually the simplest node in the graph. After the agent has finished calling tools and gathering results, the conditional edge routes to the respond node instead of looping back. All the respond node does is take state["messages"], pull out the last AIMessage, and return it (or reformat it) as the final output.

In many simple graphs, there isn’t even a dedicated respond node — the agent node’s last message is the output, and the graph just ends. But explicit respond nodes are useful when you want to:

Format the final answer. The raw last AIMessage might contain JSON, intermediate reasoning, or tool result references. A respond node can clean that up into a structured response or a user-friendly string.

Validate before returning. You can check that the answer actually addresses the original question, or run guardrails, before the output leaves the graph.

Separate concerns cleanly. Keeping “reasoning + tool use” in the agent node and “output formatting” in the respond node makes each node easier to test and swap out independently.

A minimal respond node looks like this:

def respond(state: AgentState) -> AgentState:
    # The last message is already the LLM's final answer.
    # Assumes AgentState also declares a `final_answer: str` field
    # alongside `messages`.
    last_message = state["messages"][-1]
    return {"final_answer": last_message.content}

And the conditional edge that routes to it is the key piece:

def should_continue(state: AgentState) -> str:
    last = state["messages"][-1]
    # If no tool calls pending → we're done, go to respond
    if not last.tool_calls:
        return "respond"
    # Otherwise → keep looping through tools
    return "call_tools"

graph.add_conditional_edges("agent", should_continue, {
    "call_tools": "tool_node",
    "respond": "respond_node",
})

The respond node is essentially the graph’s exit ramp — it’s where the loop terminates and the result surfaces back to whoever called graph.invoke(...).
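
Putting all the pieces together, a complete wiring might look like this sketch. It reuses the agent, tool_node, respond, and should_continue definitions from above (including the conditional edge just shown), and assumes AgentState declares the final_answer field noted earlier:

from langgraph.graph import StateGraph, START, END

graph = StateGraph(AgentState)
graph.add_node("agent", agent)
graph.add_node("tool_node", tool_node)
graph.add_node("respond_node", respond)

graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, {
    "call_tools": "tool_node",
    "respond": "respond_node",
})
graph.add_edge("tool_node", "agent")    # the loop: tools always return to the agent
graph.add_edge("respond_node", END)     # the exit ramp

app = graph.compile()
result = app.invoke({"messages": [("user", "What changed in the latest release?")]})
print(result["final_answer"])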

 

/ Malin
