Generator: Add support for Google Gemini models #1306
Conversation
jmartin-tech
left a comment
Working with GCP based services has shown that auth via api_key is not always as straightforward as providing a single environment variable value. Can you provide details on how the key used would be scoped or generated?
Also consider enabling the google library to attempt auth even when no key is provided.
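A minimal sketch of the kind of fallback being asked for, assuming the newer google-genai package; the helper name build_client and the environment-variable precedence are illustrative assumptions, not the PR's actual code:

import os
from google import genai

def build_client(api_key=None):
    """Prefer an explicit key, otherwise let the library attempt its own auth."""
    if api_key:
        return genai.Client(api_key=api_key)
    if os.getenv("GOOGLE_API_KEY"):
        # google-genai can pick up GOOGLE_API_KEY itself when no key is passed
        return genai.Client()
    # No key available: fall back to Vertex AI with Application Default
    # Credentials (e.g. after `gcloud auth application-default login`)
    return genai.Client(
        vertexai=True,
        project=os.getenv("GOOGLE_CLOUD_PROJECT"),
        location=os.getenv("GOOGLE_CLOUD_LOCATION", "us-central1"),
    )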
garak/generators/gemini.py
responses = []
import logging

for _ in range(generations_this_call):
Iterating generations like this will not combine with backoff correctly. As written, any raised backoff exception will throw away all completed generations and start over.
Looking at the Gemini docs, multiple generations can be obtained in a single call by setting candidateCount=generations_this_call in GenerateContentConfig and passing generate_content() a config named parameter.
If calling for more than one generation, please validate how the response object will be formatted.
Addressed - The implementation now uses candidate_count=generations_this_call in GenerateContentConfig and makes a single API call via _generate_content_with_backoff(). This ensures backoff retries don't discard completed generations. The response format has been validated and properly handles multiple candidates from the API response.
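A minimal sketch of the pattern described here, assuming the google-genai client; the helper name _generate_content_with_backoff mirrors the comment above, and the backoff decorator parameters are illustrative rather than taken from the diff:

import backoff
from google.genai import types
from google.genai.errors import APIError

@backoff.on_exception(backoff.fibo, APIError, max_value=70)
def _generate_content_with_backoff(client, model_name, prompt, candidate_count):
    # One request carries all generations, so a backoff retry never
    # discards candidates that were already completed.
    config = types.GenerateContentConfig(candidate_count=candidate_count)
    return client.models.generate_content(
        model=model_name, contents=prompt, config=config
    )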
1. Update Gemini generator to handle test model names gracefully
2. Add missing documentation file for Gemini generator
3. Add Gemini generator to documentation toctree
4. Add google-generativeai dependency to pyproject.toml
remove api validation Co-authored-by: Jeffrey Martin <[email protected]> Signed-off-by: Divya Chitimalla <[email protected]>
- Update _call_model to use Gemini's native candidateCount parameter for multiple generations
- Process response candidates correctly to extract text from each generation
- Remove generation config from model initialization and set it per request
- Fix backoff handling to properly retry the entire batch of generations
- Ensure consistent number of responses are returned
General comments: the auth concern may be reasonable to defer; requiring an environment variable based API key is acceptable for initial acceptance.
Another concern that may be important to address:
This PR uses the package google-generativeai; however, current docs recommend using google-genai, which has the same imported package name but a different interface.
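For reference, a side-by-side sketch of the two packages; the commented calls and model names are illustrative examples, not taken from the PR:

# Legacy package: `pip install google-generativeai`
import google.generativeai as legacy_genai
# legacy_genai.configure(api_key=...)
# model = legacy_genai.GenerativeModel("gemini-1.5-flash")
# response = model.generate_content("hello")

# Newer package currently recommended by the docs: `pip install google-genai`
from google import genai
# client = genai.Client(api_key=...)
# response = client.models.generate_content(model="gemini-2.0-flash", contents="hello")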
tests/generators/test_gemini.py
# Create the generator with a native audio model
generator = GeminiGenerator(name="gemini-2.5-flash-native-audio")
output = generator._call_model("Transcribe this text.")
This does not make sense: the input modality suggested by this test is audio, not text.
The current generator also inherits the default modality, {"in": {"text"}, "out": {"text"}}; unless overridden, this generator should only accept text prompts.
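As a point of reference, a minimal sketch of how a garak generator declares its modality (class body elided; shown for illustration only):

from garak.generators.base import Generator

class GeminiGenerator(Generator):
    # Inherited default, shown explicitly: text in, text out. An audio-capable
    # variant would need to widen "in" and handle non-text prompts itself.
    modality = {"in": {"text"}, "out": {"text"}}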
Co-authored-by: Jeffrey Martin <[email protected]> Signed-off-by: Divya Chitimalla <[email protected]>
Fixed Failing Tests
Migrate to the new google-genai package, support both API keys and Vertex AI (ADC), and improve the backoff logic
DCO Assistant Lite bot: All contributors have signed the DCO ✍️ ✅
I have read the DCO Document and I hereby sign the DCO
recheck
jmartin-tech
left a comment
Code-only review; full functional testing will be performed after comments are addressed.
| """Clear the model to avoid pickling issues.""" | ||
| self.model = None | ||
|
|
||
| def __init__(self, name="", config_root=_config, api_key=None): |
api_key can be injected via the config_root parameter; please use a signature consistent with base.Generator:
Suggested change:
-    def __init__(self, name="", config_root=_config, api_key=None):
+    def __init__(self, name="", config_root=_config):
self.temperature = self.DEFAULT_PARAMS["temperature"]
self.top_p = self.DEFAULT_PARAMS["top_p"]
self.top_k = self.DEFAULT_PARAMS["top_k"]
self.max_output_tokens = self.DEFAULT_PARAMS["max_output_tokens"]
self.api_key = api_key
These values are populated by the Configurable._load_config() call made in super().__init__(); defer to the common code:
Suggested change:
-    self.temperature = self.DEFAULT_PARAMS["temperature"]
-    self.top_p = self.DEFAULT_PARAMS["top_p"]
-    self.top_k = self.DEFAULT_PARAMS["top_k"]
-    self.max_output_tokens = self.DEFAULT_PARAMS["max_output_tokens"]
-    self.api_key = api_key
# Defer model loading until actually needed
self.client = None
self.model = None
I would suggest that early validation that all required configuration is available during construction outweighs the benefits of deferred loading here.
Suggested change:
-    # Defer model loading until actually needed
-    self.client = None
-    self.model = None
+    self._load_model()
This becomes more clearly required considering that _load_model() currently impacts self.modality for the generator, which needs to be viable to evaluate on the instantiating thread before any request for inference has been performed.
    logging.error(f"All retries failed for {self.name}: {e}")
    return [None] * generations_this_call

@backoff.on_exception(backoff.expo, APIError, max_tries=5)
Note: this suggestion mirrors how other generators currently implement backoff. garak is tenacious and is not currently expected to abort or skip a generation unless the error indicates that there is no possibility of recovery.
Suggested change:
-    @backoff.on_exception(backoff.expo, APIError, max_tries=5)
+    @backoff.on_exception(backoff.fibo, APIError, max_value=70)
        config=generation_config
    )
    return response
    raise e  # This will trigger backoff retry
This would not trigger backoff: e does not exist at this point, and the return above ends execution of the block.
Suggested change:
-    raise e  # This will trigger backoff retry
try:
    response = self.client.models.generate_content(
        model=self.name,
        contents=prompt,
        config=generation_config
    )
    return response
    raise e  # This will trigger backoff retry
except Exception as e:
    logging.error(f"Unexpected error when calling {self.name}: {e}")
    raise e  # Re-raise to be handled by caller
What value does this try block add?
Suggested change:
-    try:
-        response = self.client.models.generate_content(
-            model=self.name,
-            contents=prompt,
-            config=generation_config
-        )
-        return response
-        raise e  # This will trigger backoff retry
-    except Exception as e:
-        logging.error(f"Unexpected error when calling {self.name}: {e}")
-        raise e  # Re-raise to be handled by caller
+    response = self.client.models.generate_content(
+        model=self.name,
+        contents=prompt,
+        config=generation_config
+    )
+    return response
If the exception is of type APIError, then @backoff.on_exception will trigger; otherwise it will bubble up for the caller to handle.
responses = [None] * generations_this_call

# Ensure we return the expected number of responses
while len(responses) < generations_this_call:
    responses.append(None)

return responses[:generations_this_call]
This logic seems odd: I would expect the response format either to have all generations required or none (maybe that is an invalid assumption on my part). I suspect this method can be a bit less opinionated and simply extract results from the response. The caller can be responsible for deciding what to do if the number of found responses does not total generations_this_call.
To that end, generations_this_call does not need to be passed into this method.
Suggested change:
-    responses = [None] * generations_this_call
-    # Ensure we return the expected number of responses
-    while len(responses) < generations_this_call:
-        responses.append(None)
-    return responses[:generations_this_call]
+    responses = [None]
+    return responses
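A sketch of the less opinionated extraction described above, assuming the google-genai response shape (response.candidates[i].content.parts[j].text); attribute access is defensive because the exact shape should be confirmed against real responses:

def _extract_texts(response):
    """Return whatever text the response carries; the caller decides how to
    handle a count that differs from generations_this_call."""
    texts = []
    for candidate in getattr(response, "candidates", None) or []:
        parts = getattr(candidate.content, "parts", None) or []
        text = "".join(p.text for p in parts if getattr(p, "text", None))
        texts.append(text or None)
    return texts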
]

DEFAULT_PARAMS = Generator.DEFAULT_PARAMS | {
    "name": "gemini-2.5-pro",
Do we want a default model here? The churn as model names change could be tedious to maintain.
| "name": "gemini-2.5-pro", | |
| "name": "gemini-2.5-pro", |
SUPPORTED_MODELS = [
    "gemini-2.5-pro",
    "gemini-2.5-flash",
    "gemini-2.5-flash-lite-preview",
    "gemini-2.5-flash-native-audio",
    "gemini-2.5-flash-preview-text-to-speech",
    "gemini-2.5-pro-preview-text-to-speech",
    "gemini-2.0-flash"
]
Most of these models have a very short TTL; the retirement dates suggest that the models listed are auto-updated aliases that are versioned further. To flag a possible need for clarification of results, it might be helpful to add a method that extracts and documents the stable version reference for the model tested by garak during a run.
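One possible shape for that, assuming the google-genai metadata call client.models.get() exposes a version field on the returned Model object (worth verifying against the SDK); the helper name is illustrative:

import logging

def log_resolved_model_version(client, alias):
    """Record which stable model version an auto-updated alias resolved to."""
    info = client.models.get(model=alias)
    logging.info(
        "garak tested %s, resolved to %s (version %s)",
        alias, getattr(info, "name", None), getattr(info, "version", None),
    )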
Supported models:
- gemini-2.5-pro: Gemini 2.5 Pro model (default)
- gemini-2.5-flash: Gemini 2.5 Flash model
- gemini-2.5-flash-lite-preview: Gemini 2.5 Flash Lite Preview model
- gemini-2.5-flash-native-audio: Gemini 2.5 Flash Native Audio model
- gemini-2.5-flash-preview-text-to-speech: Gemini 2.5 Flash Preview Text-to-Speech model
- gemini-2.5-pro-preview-text-to-speech: Gemini 2.5 Pro Preview Text-to-Speech model
- gemini-2.0-flash: Gemini 2.0 Flash model
I like having this level of detail; however, I think the lifecycle of models will cause the list to become stale quickly. Consider removing it from the static docs.
Faced issues trying to run Gemini models using LiteLLM. This generator natively supports Google models using Google's official google.generativeai library, which has proved more reliable than LiteLLM. It supports multiple models (2.5 Pro, Flash, etc.) with error handling and model-specific config options. #443