A standardization tool for Ergo MCP API responses that transforms various output formats (JSON, Markdown, plaintext) into a consistent JSON structure for improved integration and usability.
The MCP API returns responses in inconsistent formats:
- Some endpoints return JSON
- Some return Markdown
- Some return plain text
- Some return mixed formats (Markdown with embedded JSON)
This inconsistency makes it difficult to integrate with other systems and requires custom handling for each endpoint.
The `MCPResponseStandardizer` transforms all responses into a consistent JSON structure:
```json
{
  "success": true,
  "data": {
    // Standardized response data extracted from the original
  },
  "meta": {
    "format": "json|markdown|text|mixed",
    "endpoint": "endpoint_name",
    "timestamp": "ISO-timestamp"
  }
}
```
For error responses:
```json
{
  "success": false,
  "error": {
    "code": 400,
    "message": "Error message"
  },
  "meta": {
    "format": "json|markdown|text|mixed",
    "endpoint": "endpoint_name",
    "timestamp": "ISO-timestamp"
  }
}
```
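The two envelopes above differ only in whether they carry a `data` or an `error` field. As a minimal sketch (the helper name `build_envelope` and its defaults are illustrative assumptions, not the tool's actual internals), building either envelope might look like this:

```python
from datetime import datetime, timezone


def build_envelope(endpoint, data=None, error=None, detected_format="json"):
    """Wrap extracted data (or an error) in the standardized envelope.

    Hypothetical helper for illustration; field names follow the
    structures shown above.
    """
    envelope = {
        "success": error is None,
        "meta": {
            "format": detected_format,
            "endpoint": endpoint,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    if error is None:
        envelope["data"] = data
    else:
        envelope["error"] = error  # e.g. {"code": 400, "message": "..."}
    return envelope
```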
- Automatically detects response format (JSON, Markdown, plaintext)
- Extracts structured data from Markdown responses
- Preserves original data structure for JSON responses
- Extracts embedded JSON from mixed-format responses
- Provides consistent error handling
- Includes metadata about original format and processing timestamp
```python
from mcp_response_standardizer import MCPResponseStandardizer

# Initialize the standardizer
standardizer = MCPResponseStandardizer()

# Standardize a response
endpoint_name = "blockchain_status"
response_content = "..."  # Content from the MCP API
status_code = 200  # HTTP status code from the API call

# Get standardized response
standardized = standardizer.standardize_response(
    endpoint_name,
    response_content,
    status_code,
)

# Access the standardized data
if standardized["success"]:
    data = standardized["data"]
    # Use the standardized data...
else:
    error = standardized["error"]
    print(f"Error {error['code']}: {error['message']}")
```
You can also use the standardizer from the command line:
```bash
python mcp_response_standardizer.py blockchain_status response.txt
```
Where:
- `blockchain_status` is the endpoint name
- `response.txt` is a file containing the response content
A test script, `test_standardizer.py`, is provided to demonstrate the standardizer with sample responses:

```bash
python test_standardizer.py
```
This script:
- Creates sample responses in different formats
- Saves them to the `sample_responses` directory
- Processes each sample using the standardizer
- Saves the standardized output for comparison
The standardizer uses the following approach:
- Check if response is an error based on HTTP status code
- Determine the original format (JSON, Markdown, text)
- Process the response according to its format:
  - JSON: Parse and preserve the structure
  - Markdown: Extract structured data (headers, lists, tables, code blocks)
  - Text: Convert to key-value pairs when possible
  - Mixed: Extract embedded JSON and combine with other extracted data
- Format the result in the standardized structure
- Include metadata about the original format and processing
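As a rough illustration of the format-detection step above, a heuristic detector might be sketched like this (the function name and the specific heuristics are assumptions for illustration, not the tool's actual implementation):

```python
import json
import re


def detect_format(content: str) -> str:
    """Guess the original response format: json, markdown, mixed, or text.

    Illustrative heuristic only; the real standardizer may use
    different rules.
    """
    stripped = content.strip()
    # Valid JSON as a whole document wins outright
    try:
        json.loads(stripped)
        return "json"
    except (ValueError, TypeError):
        pass
    # Markdown markers: headers, list bullets, or table pipes
    has_markdown = bool(
        re.search(r"^#{1,6}\s|^[-*]\s|\|.*\|", stripped, re.MULTILINE)
    )
    # A brace-delimited region suggests embedded JSON
    has_embedded_json = bool(re.search(r"\{[\s\S]*\}", stripped))
    if has_markdown and has_embedded_json:
        return "mixed"
    if has_markdown:
        return "markdown"
    return "text"
```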
- Python 3.6+
- No external dependencies required
Ergo Explorer Model Context Protocol (MCP) is a comprehensive server that provides AI assistants with direct access to Ergo blockchain data through a standardized interface.
This project bridges the gap between AI assistants and the Ergo blockchain ecosystem by:
- Providing structured blockchain data in AI-friendly formats
- Enabling complex blockchain analysis through simple natural language queries
- Supporting token analytics, address intelligence, and ecosystem monitoring
- Standardizing blockchain data access patterns for AI models
- Blockchain Exploration: Retrieve blocks, transactions, and network statistics
- Address Analysis: Query balances, transaction history, and perform forensic analysis
- Token Intelligence: View token information, holder distributions, historical ownership tracking, and collection data
- Ecosystem Integration: Access EIP information, oracle pool data, and address book
- Advanced Analytics: Analyze blockchain patterns, token metrics, and transaction flows
- Entity Identification: Detect related addresses using advanced address clustering algorithms
- Interactive Visualizations: Generate and interact with network visualizations for entity analysis
All endpoints in the Ergo Explorer MCP implement a standardized response format system that:
- Supports both human-readable (markdown) and machine-readable (JSON) formats
- Provides consistent structure across all endpoints
- Maintains backward compatibility through dual-format support
- Implements comprehensive error handling
- Uses the `@standardize_response` decorator for automatic format conversion
```json
{
  "status": "success", // or "error"
  "data": {
    // Endpoint-specific structured data
  },
  "metadata": {
    "execution_time_ms": 123,
    "result_size_bytes": 456,
    "is_truncated": false,
    "token_estimate": 789
  }
}
```
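To make the decorator's role concrete, here is a minimal sketch of what a `@standardize_response`-style decorator could look like (purely illustrative, assuming the envelope shape above; the real decorator also adds fields such as `token_estimate`, which are omitted here):

```python
import functools
import json
import time


def standardize_response(func):
    """Sketch of a decorator wrapping raw endpoint output in the
    standardized envelope. Hypothetical implementation for illustration."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            data = func(*args, **kwargs)
            status = "success"
        except Exception as exc:  # demo-only: real error handling is richer
            data, status = {"message": str(exc)}, "error"
        body = json.dumps(data)
        return {
            "status": status,
            "data": data,
            "metadata": {
                "execution_time_ms": round((time.perf_counter() - start) * 1000, 3),
                "result_size_bytes": len(body.encode("utf-8")),
                "is_truncated": False,
            },
        }

    return wrapper


@standardize_response
def get_height():
    # Stand-in for a real endpoint handler
    return {"height": 1_234_567}
```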
For more information on response standardization, see RESPONSE_STANDARDIZATION.md.
The Ergo Explorer MCP provides advanced entity identification capabilities through address clustering algorithms. This feature helps identify groups of addresses likely controlled by the same entity.
- Graph-based Clustering: Identifies related addresses using transaction graph analysis
- Co-spending Detection: Detects addresses used together in transaction inputs
- Confidence Scoring: Assigns confidence levels to detected entity clusters
- Address Relationship Mapping: Shows how addresses are related within an entity
- Interactive Visualization: Provides network graph visualization of entities
The following endpoints are available for entity identification:
- `/address_clustering/identify`
- `/address_clustering/visualize`
- `/address_clustering/openwebui_entity_tool`
- `/address_clustering/openwebui_viz_tool`
Ergo Explorer MCP integrates with Open WebUI to provide enhanced visualization and interaction capabilities:
- Entity Text Tool: Returns a text summary of detected entities for an address
- Interactive Visualization Tool: Renders an interactive D3.js network graph visualization
- Customizable Views: Filter and search entities, zoom and pan visualization
- Entity Analysis: Explore relationships between addresses and entities
To identify entities related to an address:
```python
from ergo_explorer.api import make_request

# Identify entities for an address
response = make_request("address_clustering/identify", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Get visualization for an address
viz_response = make_request("address_clustering/visualize", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Access entity clusters
entities = response["data"]["clusters"]
for entity_id, entity_data in entities.items():
    print(f"Entity {entity_id}: {len(entity_data['addresses'])} addresses")
    print(f"Confidence: {entity_data['confidence_score']}")
```
To use the Open WebUI tools:
```
[Tool: openwebui_entity_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]

[Tool: openwebui_viz_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]
```
The Ergo Explorer MCP includes built-in token estimation capabilities to help AI assistants optimize their context window usage. This feature provides an estimate of the number of tokens in each response for various LLM models.
- Automatic Token Counting: Each response includes an estimate of its token count
- Model-Specific Estimation: Supports various LLM models (Claude, GPT, Mistral, etc.)
- Breakdown by Response Section: Provides token counts for data, metadata, and status
- Configurable Thresholds: Response truncation based on token count thresholds
- Fallback Mechanism: Works even if `tiktoken` is not available
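A fallback of this kind is commonly implemented as a try/except around the `tiktoken` import. The sketch below illustrates the idea (the function name and the chars-per-token heuristic are assumptions, not the server's actual code):

```python
def estimate_tokens(text: str, model_type: str = "gpt-4") -> int:
    """Estimate the token count of `text` for a given model.

    Uses tiktoken when available; otherwise falls back to a rough
    ~4-characters-per-token heuristic. Illustrative sketch only.
    """
    try:
        import tiktoken

        encoding = tiktoken.encoding_for_model(model_type)
        return len(encoding.encode(text))
    except (ImportError, KeyError):
        # tiktoken missing, or the model name is unknown to it:
        # fall back to a coarse character-based estimate
        return max(1, len(text) // 4)
```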
Token estimation is included in the `metadata` section of all standardized responses:
```json
{
  "status": "success",
  "data": {
    // Response data
  },
  "metadata": {
    "execution_time_ms": 123,
    "result_size_bytes": 456,
    "is_truncated": false,
    "token_estimate": 789,
    "token_breakdown": {
      "data": 650,
      "metadata": 89,
      "status": 50
    }
  }
}
```
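A per-section breakdown like `token_breakdown` above can be produced by estimating each top-level key separately. This sketch assumes a simple chars-per-token heuristic (both the helper name and the heuristic are illustrative, not the server's implementation):

```python
import json


def token_breakdown(response: dict, chars_per_token: int = 4) -> dict:
    """Estimate tokens per top-level section of a response.

    Illustrative: serializes each section to JSON and applies a coarse
    chars-per-token heuristic.
    """
    return {
        key: max(1, len(json.dumps(value)) // chars_per_token)
        for key, value in response.items()
    }
```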
To access token estimates in responses:
```python
from ergo_explorer.api import make_request

# Make a request to any endpoint
response = make_request("blockchain/status")

# Access token estimation information
token_count = response["metadata"]["token_estimate"]
is_truncated = response["metadata"]["is_truncated"]

print(f"Response contains approximately {token_count} tokens")
if is_truncated:
    print("Response was truncated to fit within token limits")
```
You can specify which LLM model to use for token estimation:
```python
from ergo_explorer.api import make_request

# Request with a specific model type for token estimation
response = make_request(
    "blockchain/address_info",
    {"address": "9hdcMw4eRpJPJGx8RJhvdRgFRsE1URpQCsAWM3wG547gQ9awZgi"},
    model_type="gpt-4",
)

# The token_estimate will be calculated based on GPT-4's tokenization
```
| Response Type | Target Token Range | Optimization Strategy |
|---|---|---|
| Simple queries | < 500 tokens | Full response without truncation |
| Standard queries | 500-2000 tokens | Selective field inclusion |
| Complex queries | 2000-5000 tokens | Pagination or truncated response |
| Data-intensive | > 5000 tokens | Summary with optional detail retrieval |
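The table above maps token budgets to strategies; selecting a strategy from an estimate is then a simple threshold check. A sketch (the function name, strategy labels, and exact thresholds are illustrative assumptions based on the table):

```python
def apply_token_budget(response: dict, token_estimate: int) -> dict:
    """Annotate a response with the optimization strategy implied by
    the token-budget table above. Illustrative sketch only."""
    if token_estimate < 500:
        strategy = "full"                    # simple queries: no truncation
    elif token_estimate < 2000:
        strategy = "selective_fields"        # standard queries
    elif token_estimate < 5000:
        strategy = "paginate_or_truncate"    # complex queries
    else:
        strategy = "summarize"               # data-intensive responses
    response.setdefault("metadata", {})["optimization_strategy"] = strategy
    response["metadata"]["is_truncated"] = strategy != "full"
    return response
```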
The Ergo Explorer MCP includes comprehensive functionality for tracking the historical ownership of tokens and analyzing how distribution changes over time:
- Complete Token History: Track all boxes that have ever contained the token to provide a comprehensive view of token movements through the blockchain
- Block Height Tracking: Includes block height information for all token transfers
- Token Transfer Monitoring: Follow tokens as they move between addresses
- Distribution Metrics: Calculate concentration metrics (Gini coefficient) for token distribution
- Advanced Box Analysis: Uses efficient box-based method to analyze all transactions involving a token
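The Gini coefficient mentioned above measures how concentrated holdings are: 0 means every holder has an equal balance, values near 1 mean a few addresses hold nearly everything. The standard formula over sorted balances can be sketched as follows (a generic implementation, not necessarily the server's exact code):

```python
def gini_coefficient(balances):
    """Gini coefficient of token balances.

    0.0 = perfectly equal distribution, approaching 1.0 = fully
    concentrated. Generic textbook formula for illustration.
    """
    values = sorted(b for b in balances if b > 0)
    n = len(values)
    if n == 0:
        return 0.0
    total = sum(values)
    # Weighted cumulative sum over the sorted balances
    cum = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * cum) / (n * total) - (n + 1) / n
```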
```
// Simple request with just essential parameters
GET /token/historical_token_holders
{
  "token_id": "d71693c49a84fbbecd4908c94813b46514b18b67a99952dc1e6e4791556de413",
  "max_transactions": 200
}
```
Response format includes detailed token transfer history and snapshots of token distribution at various points in time (or block heights).
- Python 3.8+
- Access to Ergo Explorer API
- Optional: Access to Ergo Node API (for advanced features)
1. Clone the repository:

   ```bash
   git clone https://github.com/ergo-mcp/ergo-explorer-mcp.git
   cd ergo-explorer-mcp
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure your environment:

   ```bash
   # Set up environment variables
   export ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1"
   export ERGO_NODE_API="http://your-node-address:9053"  # Optional
   export ERGO_NODE_API_KEY="your-api-key"  # Optional
   ```

4. Run the MCP server:

   ```bash
   python -m ergo_explorer.server
   ```

Alternatively, run with Docker:

1. Build the Docker image:

   ```bash
   docker build -t ergo-explorer-mcp .
   ```

2. Run the container:

   ```bash
   docker run -d -p 8000:8000 \
     -e ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1" \
     -e ERGO_NODE_API="http://your-node-address:9053" \
     -e ERGO_NODE_API_KEY="your-api-key" \
     --name ergo-mcp ergo-explorer-mcp
   ```
To contribute to the project:
1. Fork the repository
2. Create a feature branch
3. Set up a development environment:

   ```bash
   pip install -r requirements.txt
   pip install -r requirements.test.txt
   ```

4. Run tests:

   ```bash
   pytest
   ```

5. Submit a pull request
For comprehensive documentation, see:
This project is licensed under the MIT License - see the LICENSE file for details.