fix: transform function to support proper batch inference #125
Issue #, if available:
This PR is related to #108 and #123.
Description of changes:
As mentioned in #123, TorchServe's batch inference actually delivers the incoming requests to the handler as a batch. However, the batch inference implementation in #108 simply runs single inferences in a loop, which is not a correct implementation of batch inference. TorchServe's documentation on batch inference shows an example where the developer handles this logic and feeds the entire input batch to the model.
If I understand correctly, keeping the batch inference implementation in its current state would be misleading to users.
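For illustration, here is a minimal standalone sketch of the difference (the model, shapes, and request format are placeholders, not code from this repository):

```python
import torch
import torch.nn as nn

# Toy stand-in for the served model; any nn.Module behaves the same way here.
model = nn.Linear(4, 2)
model.eval()

# Pretend TorchServe handed the handler 8 requests, one tensor per request.
requests = [torch.randn(4) for _ in range(8)]

# Loop-style "batch" inference (what #108 effectively does):
# one forward pass per request, so the model never sees a real batch.
with torch.no_grad():
    loop_outputs = [model(x.unsqueeze(0)) for x in requests]

# True batch inference: stack all requests into a single (8, 4) tensor
# and run one forward pass, which is what TorchServe's batching expects
# the handler to do.
with torch.no_grad():
    batch_outputs = model(torch.stack(requests))
```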
To make batch inference work correctly, this PR modifies the code so that a list of requests can be passed to _transform_fn().
However, this change also requires modifications to related functions such as default_input_fn(), as well as to the associated documentation, examples, etc. As far as I know there is no better alternative, so it would be good to review and discuss this PR before proceeding with the changes to those other functions.
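For reviewers, a rough sketch of the intended batch-aware flow (the signature, request format, and helper steps below are illustrative only, not the exact code in this PR):

```python
import json
import torch

def _transform_fn(model, requests, content_type, accept):
    # `requests` is now assumed to be a list of request bodies rather than
    # a single payload. Formats here are hypothetical: each body is a JSON
    # list of floats.
    inputs = [torch.tensor(json.loads(body)) for body in requests]  # input_fn step

    # predict_fn step: one forward pass over the whole batch.
    with torch.no_grad():
        predictions = model(torch.stack(inputs))

    # output_fn step: one response per original request, in order.
    return [json.dumps(pred.tolist()) for pred in predictions]
```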
Testing done:
yes
Merge Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General
Tests
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.