
# 🚀 MCP Client with Gemini AI

📢 Subscribe to The AI Language on YouTube!

Welcome! This project features multiple MCP clients integrated with Google Gemini AI to execute tasks via the Model Context Protocol (MCP) — with and without LangChain.

Happy building, and don’t forget to subscribe!

## MCP Client Options

This repository includes four MCP client options for various use cases:

| Option | Client Script | LangChain | Config Support | Transport | Tutorial |
|--------|---------------|-----------|----------------|-----------|----------|
| 1 | `client.py` | ❌ | ❌ | STDIO | Legacy Client |
| 2 | `langchain_mcp_client.py` | ✅ | ❌ | STDIO | LangChain Client |
| 3 | `langchain_mcp_client_wconfig.py` | ✅ | ✅ | STDIO | Multi-Server |
| 4 | `client_sse.py` | ❌ | ❌ | SSE (local & web) | SSE Client |

If you want to add or reuse MCP Servers, check out the MCP Servers repo.


## ✪ Features

✅ Connects to an MCP server (STDIO or SSE)
✅ Uses Google Gemini AI to interpret user prompts
✅ Allows Gemini to call MCP tools via the server
✅ Executes tool commands and returns results
✅ (Upcoming) Maintains context and history for conversations


## Running the MCP Client

Choose the appropriate command for your preferred client:

- Legacy STDIO: `uv run client.py path/to/server.py`
- LangChain STDIO: `uv run langchain_mcp_client.py path/to/server.py`
- LangChain Multi-Server STDIO: `uv run langchain_mcp_client_wconfig.py path/to/config.json` (see the example config below)
- SSE Client: `uv run client_sse.py sse_server_url`
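
For the multi-server client, `config.json` maps server names to launch commands. Here is a minimal sketch of what it might contain, assuming the common `mcpServers` layout; the server names and paths below are placeholders, not files from this repo:

```json
{
  "mcpServers": {
    "terminal_server": {
      "command": "python",
      "args": ["path/to/terminal_server.py"]
    },
    "weather_server": {
      "command": "python",
      "args": ["path/to/weather_server.py"]
    }
  }
}
```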

## Project Structure

```
mcp-client-gemini/
├── client.py                        # Basic client (STDIO)
├── langchain_mcp_client.py          # LangChain + Gemini
├── langchain_mcp_client_wconfig.py  # LangChain + config.json (multi-server)
├── client_sse.py                    # SSE transport client (local or remote)
├── .env                             # API key environment file
├── README.md                        # Project documentation
├── requirements.txt                 # Dependency list
├── .gitignore                       # Git ignore rules
└── LICENSE                          # License information
```
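
The `.env` file supplies the Gemini API key to the clients. A minimal sketch follows; the exact variable name is an assumption, so check the client scripts for the name they actually read:

```
GOOGLE_API_KEY=your-gemini-api-key
```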

## How It Works

1. You send a prompt, for example:

   `Create a file named test.txt`

2. The prompt is sent to Google Gemini AI.
3. Gemini reviews the available MCP tools and decides which one to call.
4. The selected tool is executed on the connected server.
5. The AI returns the result and maintains conversation context (if supported).
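
To make these steps concrete, below is a minimal sketch of the STDIO flow using the official `mcp` Python SDK and the `google-generativeai` package. It compresses tool selection into a plain-text exchange for brevity; the model name, environment variable, and tool arguments are illustrative assumptions, not the repo's exact code:

```python
import asyncio
import os
import sys

import google.generativeai as genai  # pip install google-generativeai
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client  # pip install mcp


async def main(server_script: str) -> None:
    # Step 2: configure Gemini (GOOGLE_API_KEY is an assumed variable name).
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")  # illustrative model name

    # Launch the MCP server over STDIO and open a client session.
    params = StdioServerParameters(command="python", args=[server_script])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Step 3: discover the tools the server exposes.
            tools = await session.list_tools()
            tool_names = [tool.name for tool in tools.tools]

            # Step 1: the user prompt; here Gemini sees tool names as plain text.
            prompt = (
                "Available tools: " + ", ".join(tool_names) + "\n"
                "User request: Create a file named test.txt\n"
                "Reply with only the name of the tool to call."
            )
            reply = model.generate_content(prompt)
            chosen = reply.text.strip()

            # Step 4: execute the chosen tool on the connected server.
            # The argument dict is a placeholder; real tools define their own schema.
            if chosen in tool_names:
                result = await session.call_tool(chosen, {"path": "test.txt"})
                print(result)


if __name__ == "__main__":
    asyncio.run(main(sys.argv[1]))
```

A full client would instead pass the tools' JSON schemas to Gemini's function-calling API and loop until the model stops requesting tool calls, which is roughly what the LangChain variants automate.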

## 🤝 Contributing

At this time, this project does not accept external code contributions.

This is to keep licensing simple and avoid any shared copyright.

You're very welcome to:

✅ Report bugs or request features (via GitHub Issues)
✅ Fork the repo and build your own version
✅ Suggest documentation improvements

If you'd like to collaborate in another way, feel free to open a discussion!
