Describe the bug
During load testing, the upstream S3-compatible backend started returning connection resets due to rate limiting.
This error does not appear to be handled gracefully by the gateway's NGINX JavaScript (njs) code, which likely caused the NGINX worker process to exit and the container to restart as a result.
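For illustration, the failure mode presumably looks roughly like the pattern below. This is a hypothetical sketch, not the gateway's actual code; the function name proxyToS3 and the backend URL are made up.

```js
// Hypothetical njs handler sketching the suspected failure mode.
// The function name and backend URL are illustrative only.
async function proxyToS3(r) {
    // If the S3 backend resets the connection, ngx.fetch() rejects.
    // With no try/catch around the await, the rejection goes unhandled,
    // which appears to be what takes the worker process down.
    const reply = await ngx.fetch('http://s3-backend.internal' + r.uri);
    r.return(reply.status, await reply.text());
}

export default { proxyToS3 };
```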
To Reproduce
Steps to reproduce the behavior:
- Start the container
- Configure the gateway against an S3-compatible backend
- The S3 backend returns a connection reset (e.g. due to rate limiting)
- The njs code throws an unhandled JavaScript error
- The NGINX worker process exits and the container restarts
Expected behavior
When an HTTP request made by the S3 gateway's server-side JavaScript fails (for example, with a connection reset from the backend), the NGINX worker process should handle the error gracefully rather than exiting.
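A minimal sketch of the expected handling, again with hypothetical names (fetchFromS3 and the backend URL are not the gateway's real identifiers): the upstream failure is caught and surfaced as a 502 rather than crashing the worker.

```js
// Hypothetical njs handler showing graceful handling of an upstream
// connection reset; names and URL are illustrative only.
async function fetchFromS3(r) {
    try {
        const reply = await ngx.fetch('http://s3-backend.internal' + r.uri);
        r.return(reply.status, await reply.text());
    } catch (e) {
        // Log the upstream failure and answer with 502 Bad Gateway
        // instead of letting the rejection propagate.
        r.error(`upstream request failed: ${e.message}`);
        r.return(502);
    }
}

export default { fetchFromS3 };
```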
Your environment
- Version of the repo: latest nginx-s3-gateway image
- Version of the container used (if downloaded from Docker Hub or GitHub): nginx-s3-gateway:latest
- S3 backend implementation you are using: internal S3-compatible backend
- How you are deploying (Docker/Stand-alone, etc.): Kubernetes
- NGINX type (OSS/Plus): OSS
- Authentication method (IAM, IAM with Fargate, IAM with K8S, AWS Credentials, etc.): AWS access key/secret credentials
Additional context
NGINX access and error logs from around the time of the worker exit:
1.2.3.4 - - [17/Jul/2023:15:15:55 +0000] "GET /caching-test/something.mp4 HTTP/1.1" 200 18874715 "-" "Apache-HttpClient/4.5.13 (Java/11.0.15)" "100.100.100.200"
2023/07/17 15:16:01 [info] 79#79: *867 client prematurely closed connection (104: Connection reset by peer), client: 1.2.5.6, server: , request: "GET /caching-test/something.mp4 HTTP/1.1", host: "somehost.com"
1.2.5.6 - - [17/Jul/2023:15:16:01 +0000] "GET /caching-test/something.mp4 HTTP/1.1" 200 11260809 "-" "Apache-HttpClient/4.5.13 (Java/11.0.15)" "100.100.100.201"
2023/07/17 15:16:01 [notice] 79#79: exiting
2023/07/17 15:16:01 [notice] 79#79: exit
2023/07/17 15:16:01 [notice] 1#1: signal 17 (SIGCHLD) received from 79
2023/07/17 15:16:01 [notice] 1#1: worker process 79 exited with code 0
2023/07/17 15:16:01 [notice] 1#1: exit