TANZUPLATFORM-MCP-Client is a Spring application that can be deployed to Cloud Foundry to consume the platform's AI services. It is built with Spring AI, leveraging the Model Context Protocol (MCP) and memGPT to provide advanced memory capabilities. To build and deploy it, you will need:
- Java 21 or higher
- Maven 3.8+
- Access to a Cloud Foundry Foundation with the GenAI tile or other LLM services
- Developer access to your Cloud Foundry environment
- Build the application package:

  ```shell
  mvn clean package
  ```

- Push the application to Tanzu Platform:

  ```shell
  cf push
  ```

- Create a service instance that provides chat LLM capabilities:

  ```shell
  cf create-service genai ryan-chat chat-llm
  ```

- Bind the service to your application:

  ```shell
  cf bind-service ryan-chat chat-llm
  ```

- Restart your application to apply the binding:

  ```shell
  cf restart ryan-chat
  ```

Now your chatbot will use the LLM to respond to chat requests.
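With the chat-llm binding in place, Spring AI can auto-configure a chat model from the service credentials. As a minimal sketch (the `/chat` route and class names here are hypothetical, not necessarily this application's actual endpoints; the `ChatClient` fluent API is assumed from Spring AI 1.x), a controller that forwards user messages to the bound LLM might look like:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder from the bound chat-llm service.
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    // Hypothetical route; the real application may expose a different API.
    @PostMapping("/chat")
    String chat(@RequestBody String message) {
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
```

The key point is that no credentials appear in code: the Cloud Foundry binding supplies them at runtime.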
- Create a service instance that provides embedding LLM capabilities:

  ```shell
  cf create-service genai ryan-chat embedding-llm
  ```

- Create a Postgres service instance to use as a vector database:

  ```shell
  cf create-service postgres on-demand-postgres-db vector-db
  ```

- Bind the services to your application:

  ```shell
  cf bind-service ryan-chat embedding-llm
  cf bind-service ryan-chat vector-db
  ```

- Restart your application to apply the bindings:

  ```shell
  cf restart ryan-chat
  ```

Now your chatbot will respond to queries about the uploaded document.
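Behind these two bindings, the embedding-llm service turns text into vectors and the Postgres instance stores them. A rough sketch of how that pairing is typically used through Spring AI's `VectorStore` abstraction (the `DocumentIngestor` class is illustrative, not part of this repository; the `add`/`similaritySearch` calls are assumed from the Spring AI `VectorStore` interface):

```java
import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;

class DocumentIngestor {

    private final VectorStore vectorStore;

    // With both services bound, Spring AI can wire a Postgres-backed VectorStore
    // that uses the embedding-llm binding to embed text.
    DocumentIngestor(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    // Embed an uploaded document's text and persist the vectors.
    void ingest(String text) {
        vectorStore.add(List.of(new Document(text)));
    }

    // Retrieve stored chunks most similar to the user's question,
    // which are then stuffed into the LLM prompt (retrieval-augmented generation).
    List<Document> related(String query) {
        return vectorStore.similaritySearch(query);
    }
}
```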
Model Context Protocol (MCP) servers are lightweight programs that expose specific capabilities to AI models through a standardized interface. These servers act as bridges between LLMs and external tools, data sources, or services, allowing your AI application to perform actions like searching databases, accessing files, or calling external APIs without complex custom integrations.
- Create a user-provided service that provides the URL for an existing MCP server:

  ```shell
  cf cups mcp-server -p '{"mcpServiceURL":"https://your-mcp-server.example.com"}'
  ```

- Bind the MCP service to your application:

  ```shell
  cf bind-service ryan-chat mcp-server
  ```

- Restart your application:

  ```shell
  cf restart ryan-chat
  ```

Your chatbot will now register with the MCP agent, and the LLM will be able to invoke the agent's capabilities when responding to chat requests.
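On Cloud Foundry, the credentials from a `cf cups` service surface inside the application's `VCAP_SERVICES` environment variable, which is where the app finds `mcpServiceURL` at startup. A real application would use java-cfenv or Spring Boot's service-binding support rather than hand-parsing; this plain-Java regex sketch just shows where the value ends up:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class McpUrlResolver {

    // Pull a single string credential out of the raw VCAP_SERVICES JSON.
    // Illustrative only: a regex is fine for a demo, but use a JSON parser
    // or java-cfenv in production code.
    static String credential(String vcapServices, String key) {
        Matcher m = Pattern
                .compile("\"" + key + "\"\\s*:\\s*\"([^\"]+)\"")
                .matcher(vcapServices);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Shape of the entry created by `cf cups mcp-server -p '{...}'`.
        String vcap = "{\"user-provided\":[{\"name\":\"mcp-server\","
                + "\"credentials\":{\"mcpServiceURL\":\"https://your-mcp-server.example.com\"}}]}";
        System.out.println(credential(vcap, "mcpServiceURL"));
        // → https://your-mcp-server.example.com
    }
}
```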
If you have access to a compatible memGPT implementation service:
- Create a user-provided service for the memGPT service:

  ```shell
  cf cups memGPT -p '{"memGPTUrl":"https://your-memgpt-service.example.com"}'
  ```

- Bind the memGPT service to your application:

  ```shell
  cf bind-service ryan-chat memGPT
  ```

- Restart your application:

  ```shell
  cf restart ryan-chat
  ```