Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.

Overview

LangChain’s create_agent runs on LangGraph’s runtime under the hood. LangGraph exposes a Runtime object with the following information:
  1. Context: static information such as a user ID, database connections, or other dependencies for an agent invocation
  2. Store: a BaseStore instance used for long-term memory
  3. Stream writer: an object used for streaming information via the "custom" stream mode
You can access the runtime information within tools and middleware.

Access

When creating an agent with create_agent, you can specify a context_schema to define the structure of the context stored in the agent Runtime. When invoking the agent, pass the context argument with the values for that run:
from dataclasses import dataclass

from langchain.agents import create_agent

@dataclass
class Context:
    user_name: str

agent = create_agent(
    model="openai:gpt-5-nano",
    tools=[...],
    context_schema=Context  
)

agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    context=Context(user_name="John Smith")  
)
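
agent.invoke returns the final agent state as a dict, with the conversation under "messages". A minimal sketch of reading the model's reply from the call above (the exact wording of the reply will vary):

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    context=Context(user_name="John Smith")
)
# The last message in the returned state is the model's final response.
print(result["messages"][-1].content)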

Inside tools

You can access the runtime information inside tools to:
  • Access the context
  • Read or write long-term memory
  • Write to the custom stream (e.g., tool progress updates)
Use the ToolRuntime parameter to access the Runtime object inside a tool. The example below reads from the store; a second sketch after it covers store writes and the stream writer.
from dataclasses import dataclass
from langchain.tools import tool, ToolRuntime  

@dataclass
class Context:
    user_id: str

@tool
def fetch_user_email_preferences(runtime: ToolRuntime[Context]) -> str:  
    """Fetch the user's email preferences from the store."""
    user_id = runtime.context.user_id  

    preferences: str = "The user prefers you to write a brief and polite email."
    if runtime.store:  
        if memory := runtime.store.get(("users",), user_id):  
            preferences = memory.value["preferences"]

    return preferences
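
The same ToolRuntime handle covers the other two bullets above: writing long-term memory and streaming custom updates. A minimal sketch, assuming ToolRuntime exposes the stream writer as runtime.stream_writer and that the agent was created with a store attached; save_user_email_preferences is a hypothetical tool name:

from langchain.tools import tool, ToolRuntime

@tool
def save_user_email_preferences(preferences: str, runtime: ToolRuntime[Context]) -> str:
    """Save the user's email preferences to long-term memory."""
    user_id = runtime.context.user_id

    # Emit a progress update on the "custom" stream.
    runtime.stream_writer({"status": f"Saving preferences for {user_id}"})

    if runtime.store:
        # Write under the same ("users",) namespace the reader tool above uses.
        runtime.store.put(("users",), user_id, {"preferences": preferences})
        return "Preferences saved."
    return "No store configured; preferences were not saved."

Chunks written this way surface when streaming an agent built with this tool in the "custom" stream mode:

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Keep my emails brief."}]},
    context=Context(user_id="user_123"),
    stream_mode="custom",
):
    print(chunk)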

Inside middleware

You can access runtime information in middleware to create dynamic prompts, modify messages, or control agent behavior based on user context. In @dynamic_prompt middleware, the runtime is available on the ModelRequest parameter as request.runtime; @before_model and @after_model hooks receive the Runtime object as a separate argument.
from dataclasses import dataclass

from langchain.agents import create_agent, AgentState
from langchain.agents.middleware import dynamic_prompt, ModelRequest, before_model, after_model
from langgraph.runtime import Runtime

@dataclass
class Context:
    user_name: str

# Dynamic prompts
@dynamic_prompt
def dynamic_system_prompt(request: ModelRequest) -> str:
    user_name = request.runtime.context.user_name  
    system_prompt = f"You are a helpful assistant. Address the user as {user_name}."
    return system_prompt

# Before model hook
@before_model
def log_before_model(state: AgentState, runtime: Runtime[Context]) -> dict | None:  
    print(f"Processing request for user: {runtime.context.user_name}")  
    return None

# After model hook
@after_model
def log_after_model(state: AgentState, runtime: Runtime[Context]) -> dict | None:  
    print(f"Completed request for user: {runtime.context.user_name}")  
    return None

agent = create_agent(
    model="openai:gpt-5-nano",
    tools=[...],
    middleware=[dynamic_system_prompt, log_before_model, log_after_model],  
    context_schema=Context
)

agent.invoke(
    {"messages": [{"role": "user", "content": "What's my name?"}]},
    context=Context(user_name="John Smith")
)
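
Middleware hooks receive the full Runtime, so they can reach long-term memory as well as context. A minimal sketch of a before-model hook that checks the store, assuming a store was attached to the agent and seeded under the ("users",) namespace as in the tools section:

@before_model
def load_preferences(state: AgentState, runtime: Runtime[Context]) -> dict | None:
    # runtime.store is None unless the agent was configured with a store.
    if runtime.store and (memory := runtime.store.get(("users",), runtime.context.user_name)):
        print(f"Stored preferences: {memory.value['preferences']}")
    return None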
