Redis connection gone from close event #247
Comments
I don't have much experience with this, apologies.
I've noticed that I get timeouts when using Redis on Nodejitsu if no data has been sent on the connection for a while (an idle connection). I noticed Redis has a PING command, but I believe it doesn't work on publish or subscribe connections, so I need another way to keep connections alive. I was thinking of a simple one-minute timer that just publishes messages to a channel I know nobody else will be listening on. While this is ugly, I think it will be a simple fix until there is some other way to keep the connection alive, or better detection of disconnected connections. I agree that it takes far too long for a command to time out, and having an auto-retry would be ideal. I was going to try to find you (@DTrejo) on the Nodejitsu channel to talk about it.

EDIT: I should mention that I also have the problem of idle MongoDB connections getting "stuck". If I don't keep the connection active I run into really, really strange issues on Nodejitsu where connections stay "active" for 140+ minutes; eventually mongoose tries each connection until it times out (one minute per connection, with 5 connections), at which point it finally emits a disconnected event and then attempts to reconnect.
This sounds like a reasonable option to include. Is this issue still at large? If so, would one of you like to submit a PR for this? Cheers,
I've implemented the ugly hack to publish to a channel every 60 seconds, and it has been running great ever since I went down that route. Honestly I haven't even thought about it since I submitted the ticket and implemented the "hack".
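The hack described above might look something like the sketch below. The function name, the channel name, and the interval are all illustrative (not from the thread); the client can be anything exposing a node_redis-style `publish` method.

```javascript
// Sketch of the keep-alive "hack": publish to a channel nobody
// subscribes to, on a fixed timer, purely to generate traffic on an
// otherwise idle connection.
function startKeepalive(client, intervalMs) {
  return setInterval(function () {
    // No one listens on this channel; the publish exists only so the
    // server's idle timeout never fires.
    client.publish("__keepalive__", String(Date.now()));
  }, intervalMs);
}

// Usage with node_redis (assumed API):
//   var client = require("redis").createClient();
//   var timer = startKeepalive(client, 60 * 1000);
//   client.on("end", function () { clearInterval(timer); });
```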
I actually run into this same issue and have done the same "hack" -- whenever my monitor fires, it does a couple of lookups with the dual purpose of logging stats and refreshing the Redis connection. The real issue seems to be the several minutes it takes the client to realize it has timed out and force a reconnection to Redis. Given how fast Redis is, this could be addressed by pessimistic PING checks, by forcing a new connection after a certain amount of time, or by smart connection pooling.
No status on this in over 11 months? |
What's the code for the "hack"? |
This is a simple workaround to prevent the timeout: Just call ping at regular intervals.
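A minimal sketch of that ping-on-a-timer workaround, assuming a client with a node_redis-style `ping(callback)` method; the helper name and the choice of interval are illustrative, not prescribed by the thread.

```javascript
// Issue a PING on a fixed timer so the connection never sits idle long
// enough to hit the server-side timeout.
function startPingKeepalive(client, intervalMs) {
  return setInterval(function () {
    client.ping(function (err) {
      if (err) {
        // A failed ping is a hint that the connection is already gone.
        console.error("keep-alive ping failed:", err);
      }
    });
  }, intervalMs);
}

// Usage (assumed node_redis API):
//   var client = require("redis").createClient();
//   startPingKeepalive(client, 60 * 1000);
```

Note the earlier caveat in this thread: PING is not usable on a connection that is in subscriber mode, which is why the publish-based variant exists.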
Two thoughts. The first is minor; the second reflects a possibly real issue either in the code or in the documentation.
Hi @benfleis -- hopefully the main issue here is fixed with 0.11.0; your second issue looks like something else and should probably be opened as its own issue.
@brycebaril Does that mean that this should no longer be an issue with v0.11, because of |
In our Redis configuration:
timeout: 7 seconds
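Assuming this refers to the standard `timeout` directive in redis.conf, which makes the server close a client connection after it has been idle for that many seconds, the setting would read:

```
timeout 7
```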
Whenever the connection is closed from the Redis end, we are able to catch the end event because of the timeout.
But in some cases (most probably Redis closing the connection without notifying the client) we see the command queue piling up, and requests take far too long to get a response [until the node-redis client is able to sense the close event]. In all such cases the command callback is returned with this error:
Redis connection gone from close event.
even after a long wait. The issue seems similar to this one: http://code.google.com/p/redis/issues/detail?id=368
Is there a way to specify that execution of a command [sending it and receiving the reply] should not exceed a threshold, replying with an error in that case instead of letting the client stall? When we run node-redis in debug mode we can clearly see the client stalling, with requests piling up in the command queue. We logged the `why` and the queue length inside the flush_on_error function. We have offline queueing disabled. Or is there any other way of triggering the close event in such cases, such as a socket timeout?

Sample log:
Redis connection is gone from close event.
offline queue 0
command queue 8
Response time of failed request
[2012-07-11 08:06:48.306] [INFO] Production - {"debug":[{"time":"2012-07-11T08:06:17.918Z","data":"xxxx"},{"time":"2012-07-11T08:06:17.918Z","data":"xxxredis"},{"time":"2012-07-11T08:06:48.306Z","data":{"xxxxrediserror":"Redis connection gone from close event."}},{"time":"2012-07-11T08:06:48.306Z","data":{"YYY"}}],"responsetime":"30388 ms"}
Usual response time
{"debug":[{"time":"2012-07-11T08:21:21.241Z","data":"xxxx"},{"time":"2012-07-11T08:21:21.241Z","data":"xxxxredis"},{"time":"2012-07-11T08:21:21.242Z","data":{"xxxxredisreply":"hai","xxxxrediserror":null}},{"time":"2012-07-11T08:21:21.242Z","data":"yyy"},{"time":"2012-07-11T08:21:21.242Z","data":{"xxxxredisreply":"YYY","xxxxrediserror":null}},{"time":"2012-07-11T08:21:21.242Z","data":{"YYY"}}],"responsetime":"1 ms"}
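The per-command deadline asked about above does not exist as a client option in node_redis of this vintage, but it can be approximated on the caller's side with a small wrapper. This is a sketch under assumptions: `withTimeout` and its error message are invented here, and late replies are simply dropped rather than retried.

```javascript
// Wrap a command callback so that, if it has not fired within `ms`
// milliseconds, it is invoked once with an error; a reply arriving
// after the deadline is ignored.
function withTimeout(ms, callback) {
  var done = false;
  var timer = setTimeout(function () {
    if (done) return;
    done = true;
    callback(new Error("redis command exceeded " + ms + " ms"));
  }, ms);
  return function (err, reply) {
    if (done) return;      // deadline already passed; drop the late reply
    done = true;
    clearTimeout(timer);
    callback(err, reply);
  };
}

// Usage (assumed node_redis API):
//   client.get("key", withTimeout(2000, function (err, reply) {
//     // err is the timeout Error if Redis stalled past 2 seconds
//   }));
```

This does not unstick the underlying connection; it only bounds how long a caller waits, so it would typically be paired with logic that tears down and recreates the client when timeouts start occurring.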