112 changes: 108 additions & 4 deletions README.md
@@ -187,6 +187,9 @@ uv add graphiti-core[neptune]
### You can also install optional LLM providers as extras:

```bash
# Install with VS Code models support (no external API keys required)
pip install graphiti-core[vscodemodels]

# Install with Anthropic support
pip install graphiti-core[anthropic]

@@ -197,10 +200,10 @@ pip install graphiti-core[groq]
pip install graphiti-core[google-genai]

# Install with multiple providers
pip install graphiti-core[anthropic,groq,google-genai]
pip install graphiti-core[vscodemodels,anthropic,groq,google-genai]

# Install with FalkorDB and LLM providers
pip install graphiti-core[falkordb,anthropic,google-genai]
pip install graphiti-core[falkordb,vscodemodels,google-genai]

# Install with Amazon Neptune
pip install graphiti-core[neptune]
@@ -222,8 +225,8 @@ performance.

> [!IMPORTANT]
> Graphiti defaults to using OpenAI for LLM inference and embedding. Ensure that an `OPENAI_API_KEY` is set in your
> environment.
> Support for Anthropic and Groq LLM inferences is available, too. Other LLM providers may be supported via OpenAI
> environment, or install `graphiti-core[vscodemodels]` to use VS Code models, which require no external API key.
> Support for Anthropic, Groq, and Google Gemini LLM inference is also available. Other LLM providers may be supported via OpenAI
> compatible APIs.

For a complete working example, see the [Quickstart Example](./examples/quickstart/README.md) in the examples directory.
@@ -269,6 +272,24 @@ In addition to the Neo4j and OpenAI-compatible credentials, Graphiti also has a
If you are using one of our supported models, such as Anthropic or Voyage models, the necessary environment variables
must be set.

### VS Code Models Configuration

When using VS Code models, no external API keys are required. However, you can configure the behavior using these optional environment variables:

```bash
# Enable VS Code models (automatically detected when available)
USE_VSCODE_MODELS=true

# Optional: Override default model names (uses VS Code's available models)
VSCODE_LLM_MODEL="gpt-4o-mini"
VSCODE_EMBEDDING_MODEL="embedding-001"

# Optional: Configure embedding dimensions (default: 1024)
VSCODE_EMBEDDING_DIM=1024
```

The VS Code integration automatically detects whether VS Code is available and falls back gracefully when it is not, so your application behaves consistently across environments.
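A minimal sketch of how these variables might be read from the environment, using only the standard library (the parsing logic shown here is illustrative, not part of the graphiti-core API):

```python
import os

# Read the optional VS Code configuration from the environment.
# Variable names come from the configuration block above; the defaults
# mirror the documented ones (gpt-4o-mini, embedding-001, 1024 dims).
use_vscode = os.getenv("USE_VSCODE_MODELS", "false").lower() in ("1", "true", "yes")
llm_model = os.getenv("VSCODE_LLM_MODEL", "gpt-4o-mini")
embedding_model = os.getenv("VSCODE_EMBEDDING_MODEL", "embedding-001")
embedding_dim = int(os.getenv("VSCODE_EMBEDDING_DIM", "1024"))
```

When none of the variables are set, the defaults match the documented behavior, so the block is safe to run in any environment.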

### Database Configuration

Database names are configured directly in the driver constructors:
@@ -353,6 +374,89 @@ driver = NeptuneDriver(host=neptune_uri, aoss_host=aoss_host, port=neptune_port)
graphiti = Graphiti(graph_driver=driver)
```

## Using Graphiti with VS Code Models

Graphiti supports VS Code's built-in language models and embeddings for LLM inference, embedding generation, and cross-encoding. The integration uses the editor's native AI capabilities directly, so no external API keys are required when working within VS Code.

Install Graphiti with VS Code models support:

```bash
uv add "graphiti-core[vscodemodels]"

# or

pip install "graphiti-core[vscodemodels]"
```

```python
from graphiti_core import Graphiti
from graphiti_core.llm_client.vscode_client import VSCodeClient
from graphiti_core.embedder.vscode_embedder import VSCodeEmbedder
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.embedder.client import EmbedderConfig

# Initialize Graphiti with VS Code clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=VSCodeClient(
        config=LLMConfig(
            model="gpt-4o-mini",  # VS Code model name
            small_model="gpt-4o-mini",
        )
    ),
    embedder=VSCodeEmbedder(
        config=EmbedderConfig(
            embedding_model="embedding-001",  # VS Code embedding model
            embedding_dim=1024,  # 1024-dimensional vectors
        )
    ),
)

# Now you can use Graphiti with VS Code's native models
```

### VS Code Configuration

The VS Code integration automatically detects available models in your VS Code environment. Make sure you have:

1. **Language Models**: Any compatible VS Code language model extension (GitHub Copilot, Azure OpenAI, etc.)
2. **Embedding Models**: Compatible embedding model extensions or fallback to semantic chunking

**Environment Variables for VS Code:**
```bash
# Optional: Specify preferred models
VSCODE_LLM_MODEL=gpt-4o
VSCODE_EMBEDDING_MODEL=text-embedding-ada-002
VSCODE_EMBEDDING_DIM=1536

# For development/testing
USE_VSCODE_MODELS=true
```

The VS Code integration provides:
- **Native VS Code LLM support** with graceful fallbacks when models are unavailable
- **1024-dimensional embeddings** (configurable) with a semantic-clustering fallback that preserves similarity
- **No external API keys required** - uses VS Code's built-in AI capabilities
- **Seamless editor integration** - works directly within your VS Code environment

> [!NOTE]
> The VS Code models integration automatically detects VS Code availability and provides intelligent fallbacks when VS Code is not available, ensuring your application works across different environments.

### Troubleshooting VS Code Integration

**Common Issues:**

1. **Models not detected**: Ensure you have VS Code language model extensions installed and active
2. **Embedding dimension mismatch**: Configure `VSCODE_EMBEDDING_DIM` to match your model's output dimension
3. **Authentication errors**: Make sure your VS Code extensions are properly authenticated
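A dimension mismatch (issue 2) can be caught early by comparing one embedding's length against the configured dimension before anything is written to the graph. This is a stdlib-only sketch; the helper name `check_embedding_dim` is hypothetical, not a graphiti-core API:

```python
import os


def check_embedding_dim(vector: list[float]) -> None:
    """Raise early if an embedding's length differs from VSCODE_EMBEDDING_DIM."""
    expected = int(os.getenv("VSCODE_EMBEDDING_DIM", "1024"))
    if len(vector) != expected:
        raise ValueError(
            f"Embedding has {len(vector)} dimensions, "
            f"but VSCODE_EMBEDDING_DIM is {expected}"
        )
```

Calling this on the first embedding returned by your embedder turns a silent vector-index error into an immediate, descriptive failure.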

**Compatibility:**
- Works with GitHub Copilot, Azure OpenAI, and other VS Code AI extensions
- Requires VS Code with language model API support
- Falls back gracefully to semantic chunking when embeddings are unavailable

## Using Graphiti with Azure OpenAI

Graphiti supports Azure OpenAI for both LLM inference and embeddings. Azure deployments often require different
101 changes: 101 additions & 0 deletions examples/vscode_models/README.md
@@ -0,0 +1,101 @@
# VS Code Models Integration Example

This example demonstrates how to use Graphiti with VS Code's built-in AI models and embeddings.

## Prerequisites

1. **VS Code with AI Extensions**: Make sure you have VS Code with compatible language model extensions:
- GitHub Copilot
- Azure OpenAI extension
- Any other VS Code language model provider

2. **Neo4j Database**: Running Neo4j instance (can be local or remote)

3. **Python Dependencies**:
```bash
pip install "graphiti-core[vscodemodels]"
```

## Environment Setup

Set up your environment variables:

```bash
# Neo4j Configuration
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=password

# Optional VS Code Configuration
VSCODE_LLM_MODEL=gpt-4o-mini
VSCODE_EMBEDDING_MODEL=embedding-001
VSCODE_EMBEDDING_DIM=1024
USE_VSCODE_MODELS=true
```
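Note that for these values to reach the Python process they must be exported: a plain `NAME=value` line only sets a variable in the current shell. A typical session might look like:

```bash
# Export the configuration so child processes (like Python) inherit it
export NEO4J_URI=bolt://localhost:7687
export NEO4J_USER=neo4j
export NEO4J_PASSWORD=password
export USE_VSCODE_MODELS=true

python basic_usage.py
```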

## Running the Example

```bash
python basic_usage.py
```

## What the Example Does

1. **Initializes VS Code Clients**:
- Creates a `VSCodeClient` for language model operations
- Creates a `VSCodeEmbedder` for embedding generation
- Both clients automatically detect available VS Code models

2. **Creates Graphiti Instance**:
- Connects to Neo4j database
- Uses VS Code models for all AI operations

3. **Adds Knowledge Episodes**:
- Adds sample data about a fictional company "TechCorp"
- Each episode is processed and added to the knowledge graph

4. **Performs Search**:
- Searches the knowledge graph for information about TechCorp
- Returns relevant facts and relationships

## Expected Output

```
Adding episodes to the knowledge graph...
✓ Added episode 1
✓ Added episode 2
✓ Added episode 3
✓ Added episode 4

Searching for information about TechCorp...
Search Results:
1. John is a software engineer who works at TechCorp and specializes in Python development...
2. Sarah is the CTO at TechCorp and has been leading the engineering team for 5 years...
3. TechCorp is developing a new AI-powered application using machine learning...
4. John and Sarah collaborate on the AI project with John handling backend implementation...

Example completed successfully!
VS Code models integration is working properly.
```

## Key Features Demonstrated

- **Zero External Dependencies**: No API keys required, uses VS Code's built-in AI
- **Automatic Model Detection**: Detects available VS Code models automatically
- **Intelligent Fallbacks**: Falls back gracefully when VS Code models are unavailable
- **Semantic Search**: Performs hybrid search across the knowledge graph
- **Relationship Extraction**: Automatically extracts entities and relationships from text

## Troubleshooting

**Models not detected**:
- Ensure VS Code language model extensions are installed and active
- Check that you're running the script within VS Code or with VS Code in your PATH

**Connection errors**:
- Verify Neo4j is running and accessible
- Check NEO4J_URI, NEO4J_USER, and NEO4J_PASSWORD environment variables

**Embedding dimension mismatch**:
- Set VSCODE_EMBEDDING_DIM to match your model's output dimension
- Default is 1024 for consistent similarity preservation
88 changes: 88 additions & 0 deletions examples/vscode_models/basic_usage.py
@@ -0,0 +1,88 @@
#!/usr/bin/env python3
"""
Basic usage example for Graphiti with VS Code Models integration.

This example demonstrates how to use Graphiti with VS Code's built-in AI models
without requiring external API keys.

Prerequisites:
- VS Code with language model extensions (GitHub Copilot, Azure OpenAI, etc.)
- graphiti-core[vscodemodels] installed
- Running Neo4j instance

Usage:
python basic_usage.py
"""

import asyncio
import os
from datetime import datetime

from graphiti_core import Graphiti
from graphiti_core.embedder.vscode_embedder import VSCodeEmbedder, VSCodeEmbedderConfig
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.vscode_client import VSCodeClient


async def main():
    """Basic example of using Graphiti with VS Code models."""
    # Configure VS Code clients
    llm_client = VSCodeClient(
        config=LLMConfig(
            model="gpt-4o-mini",  # VS Code model name
            small_model="gpt-4o-mini",
        )
    )

    embedder = VSCodeEmbedder(
        config=VSCodeEmbedderConfig(
            embedding_model="embedding-001",  # VS Code embedding model
            embedding_dim=1024,  # 1024-dimensional vectors
            use_fallback=True,
        )
    )

    # Initialize Graphiti
    graphiti = Graphiti(
        uri=os.getenv("NEO4J_URI", "bolt://localhost:7687"),
        user=os.getenv("NEO4J_USER", "neo4j"),
        password=os.getenv("NEO4J_PASSWORD", "password"),
        llm_client=llm_client,
        embedder=embedder,
    )

    # Add some example episodes
    episodes = [
        "John is a software engineer who works at TechCorp. He specializes in Python development.",
        "Sarah is the CTO at TechCorp. She has been leading the engineering team for 5 years.",
        "TechCorp is developing a new AI-powered application using machine learning.",
        "John and Sarah are collaborating on the AI project, with John handling the backend implementation.",
    ]

    print("Adding episodes to the knowledge graph...")
    current_time = datetime.now()
    for i, episode in enumerate(episodes):
        await graphiti.add_episode(
            name=f"Episode {i + 1}",
            episode_body=episode,
            source_description="Example data",
            reference_time=current_time,
        )
        print(f"✓ Added episode {i + 1}")

    # Search for information
    print("\nSearching for information about TechCorp...")
    search_results = await graphiti.search(
        query="Tell me about TechCorp and its employees",
        center_node_uuid=None,
        num_results=5,
    )

    print("Search Results:")
    for i, result in enumerate(search_results):
        print(f"{i + 1}. {result.fact[:100]}...")

    print("\nExample completed successfully!")
    print("VS Code models integration is working properly.")


if __name__ == "__main__":
    asyncio.run(main())