
Speed up default FNet testing speedups. #894


Merged: 7 commits from FNet-test-speedups into keras-team:master on Mar 28, 2023

Conversation

Cyber-Machine (Contributor)

Partially fixes: #866

  • Add a serialization test for all classes.
  • Mark the saved model testing as "large" for all classes.
  • Make the backbone sizes as small as possible while still testing all logic (e.g. num_layers=2); a sketch of this pattern follows the list.
  • Only run a single jit_compile=False test per model class. E.g. test_classifier_fit_no_xla.
  • Other minor readability and efficiency improvements (check the original PR carefully).
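
A minimal sketch of the pattern described in this list, not the actual PR diff: the constructor arguments (hidden_dim, intermediate_dim, max_sequence_length), the input names, and the "large" pytest marker are assumptions and may differ from the repository code.

    import os

    import pytest
    import tensorflow as tf
    from absl.testing import parameterized

    from keras_nlp.models import FNetBackbone


    class FNetBackboneTest(tf.test.TestCase, parameterized.TestCase):
        def setUp(self):
            # Keep every size as small as possible while still covering all
            # logic: two layers, a tiny vocabulary, and short sequences.
            self.backbone = FNetBackbone(
                vocabulary_size=10,
                num_layers=2,
                hidden_dim=2,
                intermediate_dim=4,
                max_sequence_length=5,
            )
            # Input names are an assumption; FNet takes no attention mask.
            self.input_batch = {
                "token_ids": tf.ones((2, 5), dtype="int32"),
                "segment_ids": tf.zeros((2, 5), dtype="int32"),
            }

        def test_call(self):
            # A single forward pass exercises the model-building logic.
            self.backbone(self.input_batch)

        def test_serialization(self):
            # Serialization test for every class: round-trip the config.
            restored = FNetBackbone.from_config(self.backbone.get_config())
            self.assertEqual(restored.get_config(), self.backbone.get_config())

        @pytest.mark.large  # Saved-model tests only run in the "large" suite.
        def test_saved_model(self):
            path = os.path.join(self.get_temp_dir(), "model")
            self.backbone.save(path)
            restored = tf.keras.models.load_model(path)
            self.assertIsInstance(restored, FNetBackbone)

The single jit_compile=False fit test (e.g. test_classifier_fit_no_xla) would belong in the classifier's test file and is left out of this sketch.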

@Cyber-Machine changed the title from "F net test speedups" to "Speed up default FNet testing speedups." on Mar 20, 2023
@mattdangerw (Member)

This looks like it has some legitimate test failures to work through before we can merge; they seem related to sentencepiece.

@mattdangerw (Member) left a comment:


Let's fix the tests!

@Cyber-Machine (Contributor Author)

@mattdangerw, I will work through getting all the tests to pass.

@chenmoneygithub (Contributor) left a comment:


Thanks for the PR! One minor comment.

Meanwhile, could you post the time cost change before and after this PR? You can run

pytest keras_nlp/models/f_net 

on master and your branch separately.
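
For reference, the same comparison can be scripted with pytest's Python API; a minimal sketch, assuming it is run from the repository root (the --durations flag is standard pytest and reports the slowest tests alongside the total run time):

    import pytest

    # Run the FNet test suite and report the 10 slowest tests.
    # Execute this once on master and once on the branch, then compare totals.
    exit_code = pytest.main(["keras_nlp/models/f_net", "--durations=10"])
    raise SystemExit(exit_code)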

@@ -25,68 +25,56 @@
 class FNetBackboneTest(tf.test.TestCase, parameterized.TestCase):
     def setUp(self):
-        self.model = FNetBackbone(
+        self.backbone = FNetBackbone(
             vocabulary_size=1000,
Review comment (Contributor):

Let's reduce the vocab size too.

Reply (Contributor Author):


Sure, I have performed tests on my FNet-test-speedups branch and master branch, and here are the results.

Before changes (master branch):

  • Time taken: 140.05s (2 min 20 s)
  • 55 tests passed, 32 tests skipped

After changes (FNet-test-speedups branch):

  • Time taken: 45.73s (roughly a 3x speedup)
  • 40 tests passed, 38 tests skipped

Reply (Contributor):

Thanks!

@chenmoneygithub merged commit 46e0259 into keras-team:master on Mar 28, 2023.
@mattdangerw mentioned this pull request on Apr 4, 2023.
@Cyber-Machine deleted the FNet-test-speedups branch on August 12, 2023.
Development

Successfully merging this pull request may close these issues:

  • Mirror BERT testing speedups for other models

3 participants