Fix bug on distributed training in mnist using MirroredStrategy API #5183
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here (e.g. "I signed it!"). What to do if you already signed the CLA: see the instructions for individual signers and corporate signers.

I signed it!

CLAs look good, thanks!
Thanks for the PR, @parkjaeman. @robieta, @guptapriya -- can you take a look?
Two broad changes to make:

- Remove the `multi_gpu` flag and replace it with `num_gpus`. (See other models such as wide_deep and resnet.) This will also allow you to use `utils.misc.distribution_utils.get_distribution_strategy()` to get the distribution.
- Remove TowerOptimizer; it is not needed once replicate_model_fn is removed.
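As a rough illustration of the first suggestion (a hedged, pure-Python sketch; the function name `choose_strategy` and the returned strings are stand-ins, not the real helper in `utils.misc.distribution_utils`), the mapping from a `--num_gpus` value to a distribution strategy might look like:

```python
# Illustrative sketch only: maps a --num_gpus value to the kind of
# distribution strategy a helper like get_distribution_strategy() would
# return. Pure Python; strategy choices are represented as strings here.

def choose_strategy(num_gpus):
    """Pick a distribution strategy name for the given GPU count."""
    if num_gpus == 0:
        return "OneDeviceStrategy:/cpu:0"   # CPU-only run
    if num_gpus == 1:
        return "OneDeviceStrategy:/gpu:0"   # single GPU, no replication
    # Two or more GPUs: replicate the model across devices.
    return "MirroredStrategy(num_gpus=%d)" % num_gpus

print(choose_strategy(0))  # OneDeviceStrategy:/cpu:0
print(choose_strategy(2))  # MirroredStrategy(num_gpus=2)
```

This keeps a single integer flag as the user-facing knob, which is the pattern the reviewer points to in the wide_deep and resnet models.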
official/mnist/mnist.py (Outdated)

```
@@ -125,7 +125,10 @@ def model_fn(features, labels, mode, params):
  logits = model(image, training=True)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  accuracy = tf.metrics.accuracy(
  if params.get('multi_gpu'):
    accuracy = (tf.no_op(), tf.constant(0))
```
This should not be necessary. If you are having issues make sure your version of tf-nightly is up to date.
@robieta, I applied your comments to this PR:
- Removed multi_gpu
- Removed TowerOptimizer
- Replaced MirroredStrategy with distribution_utils.get_distribution_strategy()

I also checked that mnist runs without error when I add the '--num_gpus' parameter.
LGTM. Thanks for looking into this.
I tried to run distributed TensorFlow with mnist, but it did not work, so I fixed the problem using the MirroredStrategy API.
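For context on how such a fix fits together, here is a minimal sketch (pure Python, no TensorFlow import; `get_distribution_strategy` and the dict are stand-ins, where real TF 1.x code would pass the strategy via `tf.estimator.RunConfig(train_distribute=...)`):

```python
# Hedged sketch: resolve a strategy from --num_gpus and hand it to the
# run configuration. The dict stands in for tf.estimator.RunConfig.

def get_distribution_strategy(num_gpus):
    # Stand-in for the repo helper discussed in the review comments.
    if num_gpus < 2:
        return None              # no replication needed
    return "MirroredStrategy"    # replicate the model across GPUs

def make_run_config(num_gpus):
    # Real code would be: tf.estimator.RunConfig(train_distribute=strategy)
    strategy = get_distribution_strategy(num_gpus)
    return {"train_distribute": strategy}

print(make_run_config(2))  # {'train_distribute': 'MirroredStrategy'}
```

The point of the PR's final shape is that the Estimator receives the strategy from one shared helper, rather than each model constructing MirroredStrategy (or TowerOptimizer) by hand.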