This project demonstrates an Agentic AI system that interacts with a Feast feature store to perform machine learning tasks. The AI agent can retrieve features, process them, and make intelligent decisions for use cases like recommendations, fraud detection, and customer segmentation.
The application consists of three main components:
- Feature Store (Feast): Stores and serves ML features
- Agentic AI System: Retrieves features and makes decisions
- Frontend Dashboard: Visualizes the workflow and results
Features are registered in the Feast feature store with appropriate metadata:
- Feature Views: Define logical groupings of features (e.g., customer_features, product_features)
- Feature Services: Define collections of features for specific use cases
- Entities: Define the keys used to retrieve features (e.g., customer_id, product_id)
Example feature registration:
```python
customer_features = FeatureView(
    name="customer_features",
    entities=["customer_id"],
    features=[
        Feature("age", ValueType.INT64),
        Feature("income", ValueType.FLOAT),
        Feature("credit_score", ValueType.INT64),
        Feature("purchase_history", ValueType.INT64)
    ],
    batch_source=file_source
)
```

The AI agent is initialized with:
- Access to the feature store
- A set of tools for specific tasks
- An LLM for natural language reasoning
- Memory to retain context
```python
class AIAgent:
    def __init__(self, feature_store: FeatureStore):
        self.feature_store = feature_store
        self.llm = OllamaLLM(model="mistral", temperature=0.7)
        self.memory = ConversationBufferMemory(return_messages=True)
        self.tools = [
            Tool(name="recommend_products",
                 func=self._handle_recommendation,
                 description="Generate product recommendations for a customer"),
            Tool(name="detect_fraud",
                 func=self._handle_fraud_detection,
                 description="Analyze a transaction for fraud risk"),
            # ... more tools
        ]
        self.agent = initialize_agent(
            tools=self.tools,
            llm=self.llm,
            agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
            memory=self.memory,
        )
```

The agent retrieves features using:
- Online Features: Real-time feature values for current predictions
- Historical Features: Past feature values for training or analysis
```python
# Online feature retrieval
online_features = feature_store.get_online_features(
    entity_rows=[{"customer_id": customer_id}],
    features=["customer_features:age", "customer_features:income"]
)

# Historical feature retrieval (Feast requires an event_timestamp column
# in the entity dataframe for point-in-time joins)
entity_df = pd.DataFrame({
    "customer_id": [customer_id],
    "event_timestamp": [datetime.utcnow()]
})
historical_features = feature_store.get_historical_features(
    entity_df=entity_df,
    features=["customer_features:age", "customer_features:income"]
)
```

The agent processes features through:
- Task Identification: Determine the type of task (recommendation, fraud detection, etc.)
- Feature Selection: Select relevant features for the task
- LLM Reasoning: Use LLM to reason about the features
- Output Generation: Generate appropriate outputs or predictions
```python
async def process_action(self, action: AgentAction) -> AgentResponse:
    # Convert the action to a natural language query
    query = self._create_agent_query(action)
    # Run the agent with the query
    agent_response = await self.agent.ainvoke({"input": query})
    # Process the agent's response
    response = self._process_agent_response(action, agent_response)
    # Record the action in history
    self.add_to_history(action_type=action.action_type,
                        description=action.description)
    return response
```

Results are structured as AgentResponse objects containing:
- Status information
- Processed data
- Explanations or recommendations
```python
return AgentResponse(
    message=f"Generated product recommendations for customer {customer_id}",
    status="success",
    data={
        "customer_id": customer_id,
        "customer_features": customer_features,
        "recommendations": recommendations
    }
)
```

In this demo, we use a mock Feast implementation that simulates:
- Feature registration through feature views and services
- Online feature retrieval for real-time serving
- Historical feature retrieval for analysis
In a production setting, you would replace this with a real Feast instance connected to data sources.
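To make the mock concrete, here is a minimal sketch of what such a simulated store might look like. The class name, seeded data, and method shape are illustrative assumptions, not the project's actual implementation; only `get_online_features` with `entity_rows` and `features` mirrors the retrieval calls shown above.

```python
# Minimal sketch of a mock feature store (illustrative, not the
# project's actual implementation).
class MockFeatureStore:
    def __init__(self):
        # Pre-seeded online feature values keyed by feature view and entity id
        self._online = {
            "customer_features": {
                "c1": {"age": 34, "income": 72000.0, "credit_score": 710},
            }
        }

    def get_online_features(self, entity_rows, features):
        """Return current feature values for the requested entities."""
        results = []
        for row in entity_rows:
            values = {}
            for ref in features:
                # Feature references use the "view:feature" convention
                view, name = ref.split(":")
                values[name] = self._online[view][row["customer_id"]][name]
            results.append(values)
        return results

store = MockFeatureStore()
rows = store.get_online_features(
    entity_rows=[{"customer_id": "c1"}],
    features=["customer_features:age", "customer_features:income"],
)
```

Keeping the mock's method signature aligned with Feast's real `get_online_features` is what makes the production swap-in straightforward.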
The AI agent uses LangChain with the following components:
- LLM: Ollama running Mistral for reasoning
- Tools: Specialized functions for different tasks
- Memory: Conversation buffer for context retention
- Agent: REACT-style agent for reasoning and tool selection
The system implements robust error handling:
- If the LLM is unavailable, the agent falls back to direct function calls
- If feature retrieval fails, the agent uses defaults or cached values
- All errors are recorded in the agent history
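The fallback chain above can be sketched as follows. All names here are illustrative; the real implementation lives inside the agent's handlers rather than a standalone function.

```python
# Sketch of the fallback pattern: degrade gracefully and record every error.
def run_with_fallbacks(agent_call, direct_call, get_features, defaults, history):
    try:
        features = get_features()
    except Exception as exc:
        # Feature retrieval failed: fall back to defaults and record the error
        history.append({"status": "error", "description": str(exc)})
        features = defaults
    try:
        # Preferred path: let the LLM-backed agent handle the task
        return agent_call(features)
    except Exception as exc:
        # LLM unavailable: fall back to a direct function call
        history.append({"status": "error", "description": str(exc)})
        return direct_call(features)

def broken_store():
    raise ConnectionError("feature store offline")

def broken_agent(features):
    raise RuntimeError("LLM unavailable")

history = []
result = run_with_fallbacks(
    agent_call=broken_agent,
    direct_call=lambda f: {"status": "success", "features": f},
    get_features=broken_store,
    defaults={"age": 0},
    history=history,
)
```

Both failures are appended to the history list, matching the rule that all errors are recorded in the agent history.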
The agent maintains a chronological history of all actions:
```python
def add_to_history(self, action_type: str, description: str, status: str = "success"):
    history_action = AgentHistoryAction(
        timestamp=datetime.utcnow().isoformat(),
        action=action_type,
        description=description,
        status=status
    )
    # Newest actions first
    self.action_history.insert(0, history_action)
```

Product recommendation workflow:
- Agent retrieves customer features (age, income, purchase history)
- Features are processed to determine customer preferences
- Agent generates personalized product recommendations
- Recommendations are ranked by match score
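The ranking step can be sketched as below. The match-score heuristic, field names, and candidate products are all illustrative assumptions; the real scoring comes out of the LLM reasoning step.

```python
# Illustrative ranking: score candidate products against customer
# features and sort by match score (heuristic is an assumption).
def rank_recommendations(customer, products):
    def match_score(product):
        # Toy heuristic: affordability relative to income, plus category affinity
        affordability = 1.0 if product["price"] <= customer["income"] * 0.001 else 0.3
        affinity = 1.0 if product["category"] in customer["purchase_history"] else 0.5
        return round(affordability * affinity, 2)

    scored = [dict(p, match_score=match_score(p)) for p in products]
    return sorted(scored, key=lambda p: p["match_score"], reverse=True)

customer = {"income": 72000.0, "purchase_history": ["electronics"]}
products = [
    {"name": "Laptop", "category": "electronics", "price": 50.0},
    {"name": "Sofa", "category": "furniture", "price": 900.0},
]
ranked = rank_recommendations(customer, products)
```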
Fraud detection workflow:
- Agent retrieves transaction features and customer credit profile
- Features are analyzed for suspicious patterns
- A fraud risk score is calculated
- Risk factors are identified and explained
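A toy version of the scoring and risk-factor steps, with weights and thresholds chosen purely for illustration:

```python
# Illustrative fraud-risk scoring: accumulate weighted risk factors
# (weights and thresholds are assumptions, not the project's logic).
def fraud_risk(transaction, profile):
    factors = []
    score = 0.0
    if transaction["amount"] > profile["avg_amount"] * 5:
        score += 0.4
        factors.append("amount far above customer average")
    if transaction["country"] != profile["home_country"]:
        score += 0.3
        factors.append("foreign transaction")
    if profile["credit_score"] < 500:
        score += 0.3
        factors.append("low credit score")
    return min(score, 1.0), factors

score, factors = fraud_risk(
    {"amount": 5000.0, "country": "BR"},
    {"avg_amount": 120.0, "home_country": "US", "credit_score": 710},
)
```

Returning the factor list alongside the score is what lets the agent explain, not just flag, a risky transaction.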
Customer segmentation workflow:
- Agent retrieves comprehensive customer features
- Features are analyzed to determine customer value and behavior
- Customer is assigned to a segment (VIP, High Value, etc.)
- Tailored strategies are recommended for the segment
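The segment assignment could be sketched like this. The customer-value formula, thresholds, and strategies are illustrative assumptions:

```python
# Illustrative segment assignment: blend features into a value score,
# map to a segment, and attach a tailored strategy (all assumed).
def assign_segment(features):
    value = features["income"] * 0.0001 + features["purchase_history"] * 0.5
    if value >= 20:
        return "VIP", "offer concierge support and early access"
    if value >= 10:
        return "High Value", "target with loyalty rewards"
    return "Standard", "nurture with personalized promotions"

segment, strategy = assign_segment({"income": 150000.0, "purchase_history": 12})
```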
Run everything with Docker Compose:

```shell
docker-compose up
```

Or run the services individually:

```shell
# Backend
cd backend
pip install -r requirements.txt
python app.py

# Frontend
cd frontend
npm install
npm start

# Ollama (required for LLM)
ollama run mistral
```

The backend exposes the following API endpoints:
- /agent/action: Process an agent action
- /demo/recommendation: Generate product recommendations
- /demo/fraud-detection: Analyze transaction for fraud
- /demo/customer-segmentation: Segment a customer
- Backend ↔ Feature Store: Feature retrieval and storage
- Backend ↔ LLM: Natural language reasoning
- Frontend ↔ Backend: API calls for agent actions
- Frontend ↔ User: Visualization and interaction
To add new features:
- Define new feature views in the feature store
- Register the features with appropriate metadata
- Create feature services for specific use cases
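A sketch of those three steps, using the same legacy Feast API as the `customer_features` example above. The entity, source path, feature names, and service name are all illustrative, and this fragment requires a Feast installation and data source to actually apply:

```python
from feast import Entity, Feature, FeatureView, FeatureService, FileSource, ValueType

# New entity and data source (path is an assumption)
product = Entity(name="product_id", value_type=ValueType.STRING)
product_source = FileSource(
    path="data/product_features.parquet",
    event_timestamp_column="event_timestamp",
)

# New feature view with its metadata
product_features = FeatureView(
    name="product_features",
    entities=["product_id"],
    features=[
        Feature("price", ValueType.FLOAT),
        Feature("popularity", ValueType.INT64),
    ],
    batch_source=product_source,
)

# Feature service bundling features for the recommendation use case
recommendation_service = FeatureService(
    name="recommendation_v1",
    features=[product_features],
)
```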
To add a new agent capability:
- Implement a new handler function
- Add the function as a tool in the agent initialization
- Update the agent query creation for the new action type
- Add appropriate prompt templates if needed
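As a sketch, a hypothetical churn-prediction capability might look like the following. The handler logic and feature values are illustrative stubs, and the dict stands in for the LangChain `Tool(...)` wrapper used in the agent initialization above:

```python
# Hypothetical new handler, following the same shape as the existing ones.
def _handle_churn_prediction(query: str) -> str:
    # 1. Pull the features the task needs (stubbed here for illustration)
    features = {"days_since_last_purchase": 45, "support_tickets": 3}
    # 2. Apply task-specific logic and return a summary the agent can use
    at_risk = features["days_since_last_purchase"] > 30
    return f"churn_risk={'high' if at_risk else 'low'}"

# Registered alongside the existing tools in the agent initialization;
# this dict mirrors the fields the Tool wrapper takes.
new_tool = {
    "name": "predict_churn",
    "func": _handle_churn_prediction,
    "description": "Estimate churn risk for a customer",
}
```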
To move to production:
- Replace the mock feature store with a real Feast instance
- Configure offline and online stores
- Set up data ingestion pipelines
- Update feature retrieval code as needed
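A real Feast instance is configured through its `feature_store.yaml`; a minimal sketch, assuming a Redis online store and a file-based offline store (project name, hosts, and paths are illustrative):

```yaml
# Example feature_store.yaml for a production deployment
project: agentic_demo
registry: data/registry.db
provider: local
online_store:
  type: redis
  connection_string: "localhost:6379"
offline_store:
  type: file
```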
This agentic AI system demonstrates how AI agents can leverage feature stores for machine learning tasks. By combining structured feature data with LLM reasoning capabilities, the system can provide intelligent insights and recommendations in a variety of use cases.