Describe the bug
When doing some load testing, the upstream S3 backend started returning connection resets due to rate limiting.
This error does not seem to be handled gracefully by the gateway's nginx JavaScript, which likely caused the nginx worker process to exit and the container to restart as a result.
To Reproduce
Steps to reproduce the behavior:
Start the container
Configure it against an S3-compatible backend
The S3 backend returns a connection reset
A JavaScript error occurs
The nginx worker process exits
Expected behavior
On HTTP response errors from requests made by the S3 gateway's server-side JavaScript, the nginx process should not end up exiting. A hedged sketch of that expectation follows below.
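To illustrate the expectation (this is only a sketch, not the gateway's actual code; the function name and URL are placeholders), an njs handler can catch upstream failures such as connection resets and turn them into an error response instead of letting the rejection propagate:

    // Hypothetical njs handler sketch: translate upstream failures into an
    // HTTP error response rather than an unhandled rejection.
    async function proxyToS3(r) {
        try {
            // Placeholder URL; the real gateway builds a signed S3 request.
            const reply = await ngx.fetch('http://s3-backend.internal/bucket/object');
            r.return(reply.status, await reply.text());
        } catch (e) {
            // Connection resets and other transport errors are caught here.
            r.error(`Upstream S3 request failed: ${e.message}`);
            r.return(502);
        }
    }

    export default { proxyToS3 };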
Your environment
Version of the repo: latest nginx-s3-gateway image
Version of the container used (if downloaded from Docker Hub or GitHub): nginx-s3-gateway:latest
S3 backend implementation you are using: internal S3-compatible backend
How you are deploying (Docker/stand-alone, etc.): Kubernetes
NGINX type (OSS/Plus): OSS
Authentication method (IAM, IAM with Fargate, IAM with K8S, AWS Credentials, etc.): AWS access key/secret credential auth
Additional context
Add any other context about the problem here. Any log files you want to share.
In order for me to understand this report, I'd like to try to restate it. Basically, when NGINX is running under the S3 Gateway configuration, outbound connections to S3 that are closed on the S3 side are causing nginx to shut down.
Is that a correct assessment?
Also, I assume, based on the log messages above, that there is no core dump.
Are you able to consistently reproduce the issue?
If so, can you turn the error log verbosity up to debug and then post the output?
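For reference, error log verbosity in NGINX OSS is controlled by the error_log directive, along these lines (the log path is just an example, and full debug output requires an nginx binary built with --with-debug):

    # Example only: raise error log verbosity to debug.
    error_log /var/log/nginx/error.log debug;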
For our reference, at the time this bug was filed, the latest container image was:
Tried running a load test with an AWS S3 bucket backend and encountered the same issue.
I didn't see anything useful with debug logging enabled.
It turns out the issue is that the /health endpoint used as a liveness check fails under load, which kills the pod and produces the "graceful shutdown" messages in the logs.
When I removed the liveness check (keeping the /health readiness check), the load tests were able to complete without the pods shutting down.
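For anyone else running into this, a rough sketch of the probe change under Kubernetes (container name, port, and timings are illustrative, not taken from the gateway's manifests): keep the readiness probe on /health and drop the liveness probe, or at least give it a much more forgiving failure threshold.

    # Illustrative Deployment snippet; names and values are assumptions.
    containers:
      - name: nginx-s3-gateway
        ports:
          - containerPort: 80
        readinessProbe:
          httpGet:
            path: /health
            port: 80
          periodSeconds: 5
        # livenessProbe intentionally omitted; if kept, use a generous
        # failureThreshold and timeoutSeconds so load spikes do not
        # restart the pod.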
Log excerpt from around the time of the worker exit:
1.2.3.4 - - [17/Jul/2023:15:15:55 +0000] "GET /caching-test/something.mp4 HTTP/1.1" 200 18874715 "-" "Apache-HttpClient/4.5.13 (Java/11.0.15)" "100.100.100.200"
2023/07/17 15:16:01 [info] 79#79: *867 client prematurely closed connection (104: Connection reset by peer), client: 1.2.5.6, server: , request: "GET /caching-test/something.mp4 HTTP/1.1", host: "somehost.com"
1.2.5.6 - - [17/Jul/2023:15:16:01 +0000] "GET /caching-test/something.mp4 HTTP/1.1" 200 11260809 "-" "Apache-HttpClient/4.5.13 (Java/11.0.15)" "100.100.100.201"
2023/07/17 15:16:01 [notice] 79#79: exiting
2023/07/17 15:16:01 [notice] 79#79: exit
2023/07/17 15:16:01 [notice] 1#1: signal 17 (SIGCHLD) received from 79
2023/07/17 15:16:01 [notice] 1#1: worker process 79 exited with code 0
2023/07/17 15:16:01 [notice] 1#1: exit