
Commit 620c6b0

docs: list googles model provider (#1920)
1 parent dcfd58b commit 620c6b0

2 files changed: +234 -5 lines changed

docs/extra/components/choose_evaluator_llm.md

Lines changed: 115 additions & 3 deletions
@@ -1,4 +1,3 @@
-
=== "OpenAI"
Install the langchain-openai package

@@ -24,7 +23,7 @@
```

-=== "Amazon Bedrock"
+=== "AWS"
Install the langchain-aws package

```bash
@@ -67,7 +66,120 @@

If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.

-=== "Azure OpenAI"
+=== "Google Cloud"
Google offers two ways to access its models: Google AI Studio and Google Cloud Vertex AI. Google AI Studio requires just a Google account and an API key, while Vertex AI requires a Google Cloud account. Use Google AI Studio if you're just starting out.

First, install the required package for your chosen API (you only need one of the two):

```bash
# for Google AI Studio
pip install langchain-google-genai
# for Google Cloud Vertex AI
pip install langchain-google-vertexai
```

Then set up your credentials based on your chosen API:

For Google AI Studio:

```python
import os

os.environ["GOOGLE_API_KEY"] = "your-google-ai-key"  # From https://ai.google.dev/
```

For Google Cloud Vertex AI:

```python
import os

# Ensure you have credentials configured (gcloud, workload identity, etc.)
# or set the service account JSON path:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/service-account.json"
```

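If you develop locally, another common option (assuming the gcloud CLI is installed; this is a general gcloud workflow, not something specific to ragas) is to use application-default credentials instead of a service-account file:

```bash
# Authenticate once; Google client libraries pick these credentials up automatically
gcloud auth application-default login
# Optionally set a default project so you don't have to pass it everywhere
gcloud config set project your-project-id
```
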
Define your configuration:

```python
config = {
    "model": "gemini-1.5-pro",  # or other model IDs
    "temperature": 0.4,
    "max_tokens": None,
    "top_p": 0.8,
    # For Vertex AI only:
    "project": "your-project-id",  # Required for Vertex AI
    "location": "us-central1",     # Required for Vertex AI
}
```

Initialize the LLM and wrap it for use with ragas:

```python
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

# Choose the appropriate import based on your API:
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_vertexai import ChatVertexAI

# Initialize with Google AI Studio
evaluator_llm = LangchainLLMWrapper(ChatGoogleGenerativeAI(
    model=config["model"],
    temperature=config["temperature"],
    max_tokens=config["max_tokens"],
    top_p=config["top_p"],
))

# Or initialize with Vertex AI
evaluator_llm = LangchainLLMWrapper(ChatVertexAI(
    model=config["model"],
    temperature=config["temperature"],
    max_tokens=config["max_tokens"],
    top_p=config["top_p"],
    project=config["project"],
    location=config["location"],
))
```

You can optionally configure safety settings:

```python
from langchain_google_genai import HarmCategory, HarmBlockThreshold

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    # Add other safety settings as needed
}

# Apply to your LLM initialization
evaluator_llm = LangchainLLMWrapper(ChatGoogleGenerativeAI(
    model=config["model"],
    temperature=config["temperature"],
    safety_settings=safety_settings,
))
```

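If you want finer-grained control, the same enums cover the other harm categories as well. The mapping below is only an illustration of how a fuller configuration might look; the thresholds shown are examples, not recommendations:

```python
from langchain_google_genai import HarmBlockThreshold, HarmCategory

# Illustrative thresholds only; pick values appropriate for your application
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
}
```
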
Initialize the embeddings and wrap them for use with ragas (choose one of the following):

```python
# Google AI Studio Embeddings
from langchain_google_genai import GoogleGenerativeAIEmbeddings

evaluator_embeddings = LangchainEmbeddingsWrapper(GoogleGenerativeAIEmbeddings(
    model="models/embedding-001",    # Google's text embedding model
    task_type="retrieval_document",  # Optional: specify the task type
))
```

```python
# Vertex AI Embeddings
from langchain_google_vertexai import VertexAIEmbeddings

evaluator_embeddings = LangchainEmbeddingsWrapper(VertexAIEmbeddings(
    model_name="textembedding-gecko@001",  # or other available model
    project=config["project"],             # Your GCP project ID
    location=config["location"],           # Your GCP location
))
```

For more information on available models, features, and configurations, refer to: [Google AI Studio documentation](https://ai.google.dev/docs), [Google Cloud Vertex AI documentation](https://cloud.google.com/vertex-ai/docs), [LangChain Google AI integration](https://python.langchain.com/docs/integrations/chat/google_generative_ai), [LangChain Vertex AI integration](https://python.langchain.com/docs/integrations/chat/google_vertex_ai).

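Once wrapped, `evaluator_llm` and `evaluator_embeddings` are passed to ragas when you run an evaluation. The sketch below is only an illustration: `eval_dataset` is a placeholder for a dataset you have already prepared, and the metric classes and `evaluate` keyword arguments may differ slightly between ragas versions.

```python
from ragas import evaluate
from ragas.metrics import Faithfulness, AnswerRelevancy

# `eval_dataset` is assumed to be an evaluation dataset built elsewhere
# (questions, answers, and retrieved contexts).
results = evaluate(
    dataset=eval_dataset,
    metrics=[Faithfulness(), AnswerRelevancy()],
    llm=evaluator_llm,                # wrapped Google LLM from above
    embeddings=evaluator_embeddings,  # wrapped Google embeddings from above
)
print(results)
```
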
+=== "Azure"
Install the langchain-openai package

```bash

docs/extra/components/choose_generator_llm.md

Lines changed: 119 additions & 2 deletions
@@ -24,7 +24,7 @@
```

-=== "Amazon Bedrock"
+=== "AWS"
Install the langchain-aws package

```bash
@@ -67,7 +67,124 @@

If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.

-=== "Azure OpenAI"
+=== "Google Cloud"
Google offers two ways to access its models: Google AI and Google Cloud Vertex AI. Google AI requires just a Google account and an API key, while Vertex AI requires a Google Cloud account and is geared toward enterprise use.

First, install the required packages:

```bash
pip install langchain-google-genai langchain-google-vertexai
```

Then set up your credentials based on your chosen API:

For Google AI:

```python
import os

os.environ["GOOGLE_API_KEY"] = "your-google-ai-key"  # From https://ai.google.dev/
```

For Vertex AI:

```python
import os

# Ensure you have credentials configured (gcloud, workload identity, etc.)
# or set the service account JSON path:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/service-account.json"
```

Define your configuration:

```python
config = {
    "model": "gemini-1.5-pro",  # or other model IDs
    "temperature": 0.4,
    "max_tokens": None,
    "top_p": 0.8,
    # For Vertex AI only:
    "project": "your-project-id",  # Required for Vertex AI
    "location": "us-central1",     # Required for Vertex AI
}
```

Initialize the LLM and wrap it for use with ragas:

```python
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

# Choose the appropriate import based on your API:
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_vertexai import ChatVertexAI

# Initialize with Google AI Studio
generator_llm = LangchainLLMWrapper(ChatGoogleGenerativeAI(
    model=config["model"],
    temperature=config["temperature"],
    max_tokens=config["max_tokens"],
    top_p=config["top_p"],
))

# Or initialize with Vertex AI
generator_llm = LangchainLLMWrapper(ChatVertexAI(
    model=config["model"],
    temperature=config["temperature"],
    max_tokens=config["max_tokens"],
    top_p=config["top_p"],
    project=config["project"],
    location=config["location"],
))
```

You can optionally configure safety settings:

```python
from langchain_google_genai import HarmCategory, HarmBlockThreshold

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    # Add other safety settings as needed
}

# Apply to your LLM initialization
generator_llm = LangchainLLMWrapper(ChatGoogleGenerativeAI(
    model=config["model"],
    temperature=config["temperature"],
    safety_settings=safety_settings,
))
```

Initialize the embeddings and wrap them for use with ragas (choose one of the following):

```python
# Google AI Studio Embeddings
from langchain_google_genai import GoogleGenerativeAIEmbeddings

generator_embeddings = LangchainEmbeddingsWrapper(GoogleGenerativeAIEmbeddings(
    model="models/embedding-001",    # Google's text embedding model
    task_type="retrieval_document",  # Optional: specify the task type
))
```

```python
# Vertex AI Embeddings
from langchain_google_vertexai import VertexAIEmbeddings

generator_embeddings = LangchainEmbeddingsWrapper(VertexAIEmbeddings(
    model_name="textembedding-gecko@001",  # or other available model
    project=config["project"],             # Your GCP project ID
    location=config["location"],           # Your GCP location
))
```

For more information on available models, features, and configurations, refer to:

- [Google AI documentation](https://ai.google.dev/docs)
- [Vertex AI documentation](https://cloud.google.com/vertex-ai/docs)
- [LangChain Google AI integration](https://python.langchain.com/docs/integrations/chat/google_generative_ai)
- [LangChain Vertex AI integration](https://python.langchain.com/docs/integrations/chat/google_vertex_ai)

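Once wrapped, `generator_llm` and `generator_embeddings` are typically handed to ragas' test set generation. The sketch below is only an illustration: `docs` is a placeholder for LangChain documents you have already loaded, and the constructor and method signatures (for example `embedding_model` and `testset_size`) vary between ragas versions, so treat them as assumptions.

```python
from ragas.testset import TestsetGenerator

# `docs` is assumed to be a list of LangChain Document objects loaded elsewhere.
generator = TestsetGenerator(
    llm=generator_llm,                     # wrapped Google LLM from above
    embedding_model=generator_embeddings,  # wrapped Google embeddings from above
)
testset = generator.generate_with_langchain_docs(docs, testset_size=10)
```
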
+=== "Azure"
Install the langchain-openai package

```bash
