How to properly configure timeouts and retries in Elasticsearch Python client? #2856
Comments
These are for the latest version; I can't find the same page for v7.17, but I guess it will be the same logic.
Got it, thank you for your help @gioboa! So, does changing the …
To test this, I set a very small `request_timeout` and passed it directly to the call:

```python
self.client.index(
    index=index,
    body=object_to_save,
    request_timeout=0.001,
)
```
Is there a functional difference between setting `request_timeout` at the client instance level versus passing it directly to the `index` method?
From my understanding, the key difference lies in when and where the timeout is applied: a timeout set on the client instance becomes the default for every request that client makes, while one passed directly to a method such as `index` overrides that default for that single request only.
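To make that concrete, here is a minimal sketch assuming the 7.17-style client; the host, index name, and document are placeholders. A `timeout` given to the client constructor becomes the default for every request, while `request_timeout` passed to an individual call overrides it for that call only (both values are in seconds):

```python
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import ConnectionTimeout

# Client-level: 'timeout' (seconds) is the default read timeout for every request.
client = Elasticsearch(
    ["http://localhost:9200"],  # placeholder host
    timeout=30,
)

# Per-call: 'request_timeout' (seconds) overrides the client default for this call only.
try:
    client.index(
        index="my-index",            # placeholder index name
        body={"message": "hello"},   # placeholder document
        request_timeout=0.001,       # deliberately tiny, to force a read timeout
    )
except ConnectionTimeout:
    print("per-call request_timeout was hit, even though the client default is 30s")
```

In other words, the client-level value only sets a default; a per-call `request_timeout` changes nothing globally and applies to that one request.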
Perfect, thank you!
Hello,
I'm currently using the Elasticsearch Python client 7.17.12 on Python 3.10, and I want to eliminate the following read-timeout error. As shown in the log below, the document is saved on the second retry, but I want to increase the timeout so it doesn't fail, or, if it does fail, print a log message saying it is retrying (or something similar). I got confused by the different parameters `timeout` and `request_timeout`, and whether they are in milliseconds or seconds.

Here's my code:

and the index logic is:
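A minimal sketch of the kind of configuration being asked about, assuming the 7.17 client (the host, index name, document, and `save` helper below are placeholders rather than the original code): `timeout` and `request_timeout` are both expressed in seconds, and `retry_on_timeout` together with `max_retries` makes the client retry a timed-out request instead of failing immediately. Enabling logging for the `elasticsearch` logger should also surface the client's own warnings about failed requests:

```python
import logging

from elasticsearch import Elasticsearch
from elasticsearch.exceptions import ConnectionTimeout

# Show the client's WARNING-level messages (e.g. failed requests that will be retried).
logging.basicConfig(level=logging.WARNING)
logging.getLogger("elasticsearch").setLevel(logging.WARNING)

client = Elasticsearch(
    ["http://localhost:9200"],  # placeholder host
    timeout=30,                 # default read timeout per request, in seconds
    max_retries=3,              # how many times a failed/timed-out request may be retried
    retry_on_timeout=True,      # without this, a read timeout is not retried at all
)

def save(index, document):
    """Index a document, logging instead of crashing if every retry times out."""
    try:
        client.index(
            index=index,
            body=document,
            request_timeout=60,  # per-call override, also in seconds
        )
    except ConnectionTimeout:
        logging.warning("indexing into %s timed out after all retries", index)

save("my-index", {"message": "hello"})  # placeholder index name and document
```

Note that `retry_on_timeout` defaults to `False` in 7.x, so without it the client gives up on the first read timeout; `max_retries` only caps how many times the transport will retry.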