
Performance issue in /server/embedding_as_service (by P3) #69

Open
DLPerf opened this issue Aug 27, 2021 · 2 comments
DLPerf commented Aug 27, 2021

Hello! I've found a performance issue in /text/xlnet/models/data_utils.py: `dataset.batch(bsz_per_core, drop_remainder=True)` (line 571) should be called before `dataset.cache().map(parser).repeat()` (line 570), which could make your program more efficient.

The TensorFlow tf.data performance documentation (the section on vectorizing mapping) supports this.

In addition, you need to check whether the `parser` function called in `.map(parser)` is affected, so that the reordered code still works properly. For example, if `parser` expects input with shape (x, y, z) before the fix, it will receive input with shape (batch_size, x, y, z) after the fix.
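
To make the suggestion concrete, here is a minimal sketch of the reordering, assuming a toy in-memory dataset and a trivially vectorizable parser. Only the names `dataset`, `parser`, and `bsz_per_core` come from data_utils.py; everything else below is made up for illustration.

```python
import tensorflow as tf

bsz_per_core = 4  # stands in for the batch size used in data_utils.py
dataset = tf.data.Dataset.from_tensor_slices(tf.range(100))  # toy data

def parser(example):
    # Hypothetical per-example transform; the real parser decodes
    # serialized records. tf.cast works element-wise, so this toy
    # parser handles a batched input unchanged.
    return tf.cast(example, tf.float32) * 2.0

# Current order (lines 570-571): parser runs once per element.
slow = dataset.cache().map(parser).repeat().batch(
    bsz_per_core, drop_remainder=True)

# Suggested order: batch first, so parser runs once per batch of
# bsz_per_core elements. parser now sees a leading batch dimension,
# which is why its shape assumptions have to be checked.
fast = dataset.cache().batch(
    bsz_per_core, drop_remainder=True).map(parser).repeat()

for batch in fast.take(2):
    print(batch.shape)  # (4,)
```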

Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.


DLPerf commented Nov 4, 2021

Hello, I'm still looking forward to your reply.

ashutoshsingh0223 (Collaborator) commented:

Hi

Sorry for the delayed response.
Please feel free to create a PR. We will review it.

Thanks
