Model Context Protocol (MCP) is an open protocol that standardizes how applications provide tools and context to LLMs. LangChain agents can use tools defined on MCP servers through the `langchain-mcp-adapters` library.
## Install

Install the `langchain-mcp-adapters` library to use MCP tools with LangChain agents:

```bash
pip install langchain-mcp-adapters
```
## Transport types

MCP supports different transport mechanisms for client-server communication:

- **stdio**: the client launches the server as a subprocess and communicates via standard input/output. Best for local tools and simple setups.
- **Streamable HTTP**: the server runs as an independent process handling HTTP requests. Supports remote connections and multiple clients.
- **Server-Sent Events (SSE)**: a variant of streamable HTTP optimized for real-time streaming communication.
The `langchain-mcp-adapters` library enables agents to use tools defined across one or more MCP servers.
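For example, an SSE server can be added to the client configuration as in the sketch below. The server name and URL are placeholders, not part of the reference servers used elsewhere on this page; the stdio and streamable HTTP cases are covered in the fuller example that follows.

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

# Sketch of an SSE connection entry; "docs" and the URL are hypothetical placeholders.
client = MultiServerMCPClient(
    {
        "docs": {
            "transport": "sse",  # Server-Sent Events transport
            "url": "http://localhost:8001/sse",
        }
    }
)
```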
### Accessing multiple MCP servers

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

client = MultiServerMCPClient(
    {
        "math": {
            "transport": "stdio",  # Local subprocess communication
            "command": "python",
            # Absolute path to your math_server.py file
            "args": ["/path/to/math_server.py"],
        },
        "weather": {
            "transport": "streamable_http",  # HTTP-based remote server
            # Ensure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp",
        },
    }
)

tools = await client.get_tools()
agent = create_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools,
)

math_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "what's (3 + 5) x 12?"}]}
)
weather_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "what is the weather in nyc?"}]}
)
```
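The agent returns its state, including the full message history. Assuming the default state shape, the final answer can be read from the last message, as sketched below.

```python
# Sketch: read the model's final answer from the returned state.
print(math_response["messages"][-1].content)
print(weather_response["messages"][-1].content)
```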
`MultiServerMCPClient` is stateless by default. Each tool invocation creates a fresh MCP `ClientSession`, executes the tool, and then cleans up.
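To illustrate the stateless flow, the sketch below calls a tool directly instead of going through an agent; each `ainvoke` call opens its own short-lived session against the server. It assumes the `client` and `tools` from the example above and the `add` tool from the math server defined below.

```python
# Look up the math server's "add" tool by name (assumes the math server below).
add_tool = next(t for t in tools if t.name == "add")

# Each call opens a fresh ClientSession, runs the tool, then closes the session.
result = await add_tool.ainvoke({"a": 3, "b": 5})
print(result)
```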
## Custom MCP servers

To create your own MCP servers, you can use the `mcp` library. It provides a simple way to define tools and run them as servers.

Use the following reference implementations to test your agent with MCP tool servers.
### Math server (stdio transport)

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
### Weather server (streamable HTTP transport)

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return "It's always sunny in New York"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```
For stateful servers that maintain context between tool calls, use `client.session()` to create a persistent `ClientSession`.

### Using MCP ClientSession for stateful tool usage
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools

client = MultiServerMCPClient({...})

async with client.session("math") as session:
    tools = await load_mcp_tools(session)
```
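A minimal sketch of using the session-bound tools, assuming the same `create_agent` setup as above: because the tools are tied to the open session, any agent work that depends on them should happen inside the `async with` block.

```python
from langchain.agents import create_agent

async with client.session("math") as session:
    tools = await load_mcp_tools(session)

    # The session stays open for the lifetime of this block, so repeated
    # tool calls reuse the same ClientSession instead of creating fresh ones.
    agent = create_agent("anthropic:claude-3-7-sonnet-latest", tools)
    response = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "what's (3 + 5) x 12?"}]}
    )
```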
## Additional resources