Description
🧭 Epic
Title: Dynamic Tool Creation from LLM Providers
Goal: Enable MCP Gateway operators to connect LLM providers and generate new tools on the fly: the operator sends a prompt, the LLM returns tool definitions, and the gateway registers them automatically.
Why now: Current MCP Gateway tooling is largely static or manually configured. Integrating LLM-driven tool creation unlocks automated generation and extension of tool catalogs, enabling more adaptive and intelligent workflows.
🧭 Type of Feature
- Enhancement to existing functionality
- New feature or capability
- Security Related (requires review)
🙋‍♂️ User Story 1
As a: gateway operator
I want: to connect an LLM provider (e.g., OpenAI, Anthropic, watsonx) to the gateway
So that: I can dynamically generate new tools by sending a prompt describing needed capabilities
✅ Acceptance Criteria
Scenario: Register an LLM provider with credentials
Given an admin registers an LLM provider with API key and model details
When the provider is validated
Then the provider is stored securely for use in tool generation
🙋‍♂️ User Story 2
As a: gateway operator
I want: to submit a prompt to the LLM provider asking it to generate JSON tool definitions
So that: the gateway can parse and register these tools dynamically without manual input
✅ Acceptance Criteria
Scenario: Generate tools from LLM prompt
Given an LLM provider is configured
When I submit a prompt "Create tools for finance and risk analysis"
And the LLM returns valid tool JSON definitions
Then those tools appear in the tool catalog dynamically
🙋‍♂️ User Story 3
As a: API consumer
I want: the generated tools to behave like any other tool with input validation and invocation
So that: I can invoke LLM-generated tools seamlessly
✅ Acceptance Criteria
Scenario: Invoke dynamically generated tool
Given an LLM-generated tool with an input schema
When I call the tool with valid parameters
Then the tool invokes its configured backend or simulated logic correctly
🙋‍♂️ User Story 4
As a: gateway admin
I want: to cache or store generated tools persistently
So that: tools remain available after gateway restarts
✅ Acceptance Criteria
Scenario: Generated tools persist across a restart
Given a tool generated from an LLM prompt and stored in the database
When the gateway restarts
Then the tool still appears in the tool catalog and can be invoked
📐 Design Sketch
📁 New API group: /llm-tools
| Endpoint | Description |
|---|---|
| `POST /llm-tools/providers` | Register a new LLM provider (API key, model) |
| `GET /llm-tools/providers` | List configured LLM providers |
| `POST /llm-tools/generate` | Submit a prompt to generate tools dynamically |
| `GET /tools/generated` | List all LLM-generated tools |
| `DELETE /tools/generated/{id}` | Remove a generated tool |
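For illustration, an operator-side call sequence against these endpoints could look like the sketch below; the gateway address and the response shapes are assumptions, not part of this design:

```python
# Hypothetical operator flow against the endpoints above, using httpx.
import asyncio

import httpx

GATEWAY = "http://localhost:4444"  # assumed local gateway address


async def main() -> None:
    async with httpx.AsyncClient(base_url=GATEWAY) as client:
        # 1. Register an LLM provider (payload matches the provider schema below).
        await client.post("/llm-tools/providers", json={
            "name": "OpenAI",
            "api_key": "sk-xxx",
            "model": "gpt-4o-mini",
            "endpoint": "https://api.openai.com/v1/chat/completions",
        })
        # 2. Ask the gateway to generate tools from a prompt.
        resp = await client.post("/llm-tools/generate", json={
            "provider_name": "OpenAI",
            "prompt": "Create tools for finance and risk analysis",
        })
        resp.raise_for_status()
        print(resp.json())
        # 3. The new tools show up in the generated-tools listing.
        print((await client.get("/tools/generated")).json())


asyncio.run(main())
```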
🧩 Provider Configuration Schema
```json
{
  "name": "OpenAI",
  "api_key": "sk-xxx",
  "model": "gpt-4o-mini",
  "endpoint": "https://api.openai.com/v1/chat/completions"
}
```
🧩 Tool Generation Request Schema
```json
{
  "provider_name": "OpenAI",
  "prompt": "Generate JSON tool definitions for legal document analysis."
}
```
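A minimal Pydantic sketch of these two schemas, as a starting point for the `LLMProvider` and `ToolGenerationRequest` models listed in the Tasks section (class and field names mirror the JSON above but are illustrative, not final):

```python
# Sketch only: field names mirror the JSON examples above; nothing here is final.
from pydantic import BaseModel, Field, HttpUrl, SecretStr


class LLMProvider(BaseModel):
    """Configuration for a registered LLM provider."""
    name: str
    api_key: SecretStr  # stored securely, never echoed back in API responses
    model: str
    endpoint: HttpUrl


class ToolGenerationRequest(BaseModel):
    """Request body for POST /llm-tools/generate."""
    provider_name: str
    prompt: str = Field(..., max_length=10_000)  # cap prompt size defensively
```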
🔄 Workflow
- Operator registers an LLM provider with API credentials.
- Operator submits a prompt via `POST /llm-tools/generate` (see the endpoint sketch after this list).
- Gateway sends the prompt to the configured LLM provider API.
- Gateway receives JSON describing one or more tools (name, description, URL, input schema, etc.).
- Gateway validates and registers these tools in the database.
- Tools become available through the normal tool-catalog APIs.
- Prompts can optionally be reused from the existing MCP prompt library for consistent generation behavior.
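A hypothetical FastAPI sketch of this flow, building on the schema models above; the in-memory `providers` dict stands in for the persisted provider table, and `call_llm` / `parse_and_validate` are sketched in the Runtime section below:

```python
# Hypothetical sketch only: a real implementation would use the gateway's
# persistence layer and ToolService; registration is elided here.
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/llm-tools")

providers: dict[str, LLMProvider] = {}  # stand-in for the stored provider configs


@router.post("/generate")
async def generate_tools(req: ToolGenerationRequest):
    provider = providers.get(req.provider_name)
    if provider is None:
        raise HTTPException(status_code=404, detail="Unknown LLM provider")

    raw = await call_llm(provider, req.prompt)   # send the prompt to the provider
    tool_defs = parse_and_validate(raw)          # strict JSON -> validated tool defs
    # Registering tool_defs into the DB would reuse the existing ToolService here.
    return {"generated": [t.name for t in tool_defs]}
```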
📦 Runtime
- LLM call uses the existing `httpx.AsyncClient` with auth from the provider config (see the sketch after this list).
- Tool validation reuses the existing `ToolCreate` validation logic.
- Prompts may be selected from or stored in the MCP prompt registry.
- Generated tools are tagged `generated=true` for admin visibility.
- Optional manual approval workflow.
- Cache LLM responses for prompt reuse and rate-limit control.
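A sketch of the `httpx.AsyncClient` call and the strict response validation described above; the request body assumes an OpenAI-style chat-completions API (other providers would need their own adapters), and the `ToolCreate` import path is an assumption:

```python
# Sketch of the provider call and strict validation of the LLM output.
import json

import httpx

from mcpgateway.schemas import ToolCreate  # assumed import path for the existing schema

MAX_RESPONSE_BYTES = 256 * 1024  # illustrative size cap


async def call_llm(provider: LLMProvider, prompt: str) -> str:
    """Send the generation prompt to the configured provider endpoint."""
    async with httpx.AsyncClient(timeout=30.0) as client:
        resp = await client.post(
            str(provider.endpoint),
            headers={"Authorization": f"Bearer {provider.api_key.get_secret_value()}"},
            json={
                "model": provider.model,
                "messages": [{"role": "user", "content": prompt}],
            },
        )
        resp.raise_for_status()
        # OpenAI-style response shape; other providers need their own adapter.
        return resp.json()["choices"][0]["message"]["content"]


def parse_and_validate(raw: str) -> list[ToolCreate]:
    """Reject oversized or malformed LLM output before anything touches the DB."""
    if len(raw.encode("utf-8")) > MAX_RESPONSE_BYTES:
        raise ValueError("LLM response exceeds size limit")
    payload = json.loads(raw)  # strict JSON only; no eval, no lenient parsing
    if not isinstance(payload, list):
        payload = [payload]
    return [ToolCreate(**item) for item in payload]  # reuse existing validation
```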
🔄 Alternatives Considered
- Manually writing JSON tool configs → cumbersome and non-scalable.
- External orchestration → harder to integrate and less discoverable.
📓 Additional Context
- Reuse MCP Gateway’s authentication and database abstractions.
- Strictly validate LLM output to avoid injection or malformed tools.
- Extend Admin UI for provider management and generated-tool review.
- Reuse the prompt system to standardize generation logic (e.g., use a named prompt with templated variables).
- Collect metrics on generation success/failure and prompt usage.
🧭 Tasks
| Area | Task | Notes |
|---|---|---|
| Schema | [ ] Define `LLMProvider` model & `ToolGenerationRequest` schema | |
| API | [ ] Implement `/llm-tools` endpoints (provider, generate) | FastAPI router |
| Logic | [ ] LLM API client integration (OpenAI, Anthropic, etc.) | `httpx.AsyncClient` |
| | [ ] Parse & validate LLM-generated tool definitions | Reuse Tool schemas |
| | [ ] Register generated tools into DB | Reuse `ToolService` |
| | [ ] Support prompt name + vars as alternative to raw text input | Integrate with `PromptService` |
| Security | [ ] Sanitize & validate all LLM responses | Limit size, strict JSON schema |
| UI | [ ] Admin UI for providers & generated tools | Preview, approve |
| Tests | [ ] Unit: provider registration & LLM call mocking | |
| | [ ] Integration: tool generation & catalog listing | |
| Docs | [ ] Document new API & examples | |
| Metrics | [ ] Add Prometheus metrics for LLM calls & tool generation | |
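For the Metrics task, one possible shape for the instrumentation using `prometheus_client` (metric names are placeholders, not final):

```python
# Placeholder metric definitions for the Metrics task above.
from prometheus_client import Counter, Histogram

LLM_GENERATION_CALLS = Counter(
    "llm_tool_generation_calls_total",
    "LLM provider calls made for tool generation",
    ["provider", "status"],  # status: success | error
)
LLM_GENERATION_LATENCY = Histogram(
    "llm_tool_generation_seconds",
    "Latency of LLM tool-generation calls",
    ["provider"],
)
LLM_TOOLS_GENERATED = Counter(
    "llm_tools_generated_total",
    "Tools successfully generated and registered",
    ["provider"],
)
```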
🔗 MCP Standards Check
- ✔️ Does not affect wire-level MCP spec
- ✔️ No backward-incompatibility for existing APIs
- ✔️ Optional — defaults to static-only mode if unused