
PYTHON-436 Change max_pool_size to limit the maximum concurrent connections #174


Closed
wants to merge 18 commits

Conversation

reversefold
Contributor

PYTHON-436 Change max_pool_size to limit the maximum concurrent connections rather than just the idle connections in the pool. Also add support for waitQueueTimeoutMS and waitQueueMultiple.

https://jira.mongodb.org/browse/PYTHON-436

See previous pull request for history:
#163
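
For context, here is a minimal sketch of how the new behavior is meant to be used, assuming the options are passed as keyword arguments like other PyMongo connection options; the values are illustrative, not recommendations:

```python
from pymongo import MongoClient

# Illustrative values. With this change, max_pool_size strictly caps
# concurrent sockets: extra threads wait for a socket instead of opening
# new ones, giving up after waitQueueTimeoutMS, and waitQueueMultiple
# bounds how many threads may wait at once (waitQueueMultiple * max_pool_size).
client = MongoClient(
    "mongodb://localhost:27017",
    max_pool_size=10,
    waitQueueTimeoutMS=500,
    waitQueueMultiple=2,
)
```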

Justin Patrin added 3 commits April 15, 2013 17:35
PYTHON-436 Change max_pool_size to limit the maximum concurrent connections rather than just the idle connections in the pool. Also add support for waitQueueTimeoutMS and waitQueueMultiple.
@reversefold
Contributor Author

@ajdavis @behackett Good to pull?

@ajdavis
Member

ajdavis commented Apr 18, 2013

Thanks Justin; I'm applying this patch locally and doing some final testing and fixup before we push to master. So don't change anything else, and thanks again for your patience!

@reversefold
Contributor Author

Ping: Are there some changes you'd like me to make so this can be pulled? Do you have a patch or pull request I should apply before this can be pulled?

@ajdavis
Member

ajdavis commented Apr 24, 2013

Hi Justin, this is very much not forgotten. There are subtle problems in
Python 2.6; I'm trying to determine whether they were introduced by this
patch or only revealed by it, and in any case how to address them. My goal
is to figure it out once & for all, then accept this patch.


@reversefold
Contributor Author

Ah, I see. If you have any patches for me I'd love to have them so I can help with the effort. I'm fairly sure the issue I uncovered was pre-existing, just not triggered by the existing tests; I'll see if I can write a test against upstream master that triggers it. As I mentioned elsewhere, I wasn't able to track down anything in the code that would cause the random socket leaks in Python 2.6. It was definitely the threadlocal not being reclaimed that kept on_thread_died from being called, but nothing appeared to be holding a reference to it. That said, if I remember correctly, when I had my extra monitor thread working the threadlocal was cleaned up when the monitor cleaned up the socket... though I'm not 100% sure of that.
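
For anyone following along, here is a minimal sketch of the thread-death-detection pattern under discussion; the class and method names are illustrative, not PyMongo's actual internals. The failure mode described above corresponds to the thread-local never being reclaimed, so the weakref callback never fires:

```python
import threading
import weakref


class _Vigil(object):
    """Sentinel stored on a thread-local; it should die with its thread."""


class ThreadWatcher(object):
    def __init__(self):
        self._local = threading.local()
        self._refs = {}  # id(vigil) -> weakref to it

    def watch_current_thread(self, on_thread_died):
        # When this thread exits, its thread-local storage should be
        # reclaimed, dropping the vigil's last reference and firing the
        # weakref callback. If the thread-local is *not* reclaimed (the
        # leak described above), the callback never runs.
        vigil = _Vigil()
        self._local.vigil = vigil
        key = id(vigil)

        def callback(ref, key=key):
            self._refs.pop(key, None)
            on_thread_died(key)

        self._refs[key] = weakref.ref(vigil, callback)
        return key
```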

@ajdavis
Member

ajdavis commented Apr 24, 2013

I'm suspicious that there really is a reclamation bug in PyMongo with old
Pythons. I built a minimal version of our thread-death-detection code here:

https://gist.github.com/ajdavis/5438545
The code should print "foobar" 1000 times. It sometimes misses a few in
Python 2.7.0. It always works in Python 2.7.1, so it's probably part of the
constellation of symptoms of Python bug 1868, which was first fixed in
Python 2.7.1:

http://bugs.python.org/issue1868

So that shows the possibility of a problem with our thread-cleanup
strategy. Is it actually reproducible with the full PyMongo pool? After
some effort, I can occasionally repro by adding a test to the code
currently on master. Check out the last 2 commits on this branch:

https://github.com/mongodb/mongo-python-driver/commits/test-socket-reclamation-2.6

Apparently if each thread calls start_request more times than
end_request, and calls end_request at least once, we sometimes leak one
socket out of 1000. I'm going to investigate further and fix this bug. That
may clear the way for your patch. I believe your patch merely changed the
tests and the timings such that the existing bug was revealed, but that
belief is yet unproven. =)
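
A hedged sketch of the kind of repro being described, assuming PyMongo 2.x (where start_request/end_request exist) and a local mongod; the thread count and collection names are illustrative, not the actual test on the linked branch:

```python
import threading

import pymongo

client = pymongo.MongoClient()  # assumes mongod on localhost:27017

def worker():
    # Unbalanced: start_request is called more often than end_request,
    # but end_request is called at least once -- the pattern that
    # occasionally fails to reclaim a request socket on affected Pythons.
    client.start_request()
    client.start_request()
    client.test.collection.find_one()
    client.end_request()

threads = [threading.Thread(target=worker) for _ in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# After the threads die, the pool should have reclaimed every request
# socket; the bug shows up as an occasional socket left checked out.
```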

ghost assigned ajdavis Apr 24, 2013
@ajdavis
Member

ajdavis commented Apr 25, 2013

OK, here's that bug:

https://jira.mongodb.org/browse/PYTHON-509

Once my fix for that is merged we'll apply your patch on top of the fix and
see how it goes.


@reversefold
Contributor Author

Nice find. Adding a lock is a much better solution than my extra watcher thread.

Justin Patrin added 2 commits April 24, 2013 19:15
@reversefold
Contributor Author

I've pulled from upstream and fixed the conflict in the tests.

@ajdavis
Member

ajdavis commented Apr 25, 2013

Cool, but don't worry about it; I'm going to rebase your patch on this fix
and push it very soon.


@reversefold
Contributor Author

The tests were failing after I merged, and I tracked it down to the CreateAndReleaseSocket tests using a max_pool_size smaller than nthreads, which of course breaks now that max_pool_size is strict. 9528e95 fixes this by waiting for the minimum of nthreads and max_pool_size threads at a time.
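
A minimal sketch of the idea behind that fix; the constants are illustrative, not the values in the actual test:

```python
NTHREADS = 10
MAX_POOL_SIZE = 4

# Under a strict max_pool_size, at most MAX_POOL_SIZE threads can hold a
# socket at the same time, so the test has to rendezvous threads in
# batches of this size instead of expecting all NTHREADS at once.
rendezvous_size = min(NTHREADS, MAX_POOL_SIZE)
assert rendezvous_size == 4
```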

@ajdavis
Member

ajdavis commented Apr 26, 2013

FYI, I have your patch ready to go in my repo. Your work is done. We're going to delay merging it for a bit longer while we push a bugfix release, PyMongo 2.5.1. Your patch will go out in PyMongo 2.6.

@reversefold
Contributor Author

Ok, sounds good. :-) Did you get my test fixes as well?


Justin Patrin added 2 commits April 28, 2013 22:59
… into PYTHON-436

Conflicts:
	pymongo/mongo_replica_set_client.py
	test/high_availability/test_ha.py
@reversefold
Contributor Author

PyMongo 2.5.1 is out, this branch has been merged with upstream/master, and tests are passing:
https://travis-ci.org/reversefold/mongo-python-driver/builds/7141128

I suggest that this patch and Pull Request #176 be merged together to decrease divergence.

@ajdavis
Member

ajdavis commented May 23, 2013

We've merged this manually. Thanks again!

ajdavis closed this May 23, 2013