Refactor Outdated PyTorch Operations for AlexNet Training on CIFAR #57
This pull request includes several important updates and fixes to ensure compatibility with the latest version of PyTorch and to resolve issues related to tensor operations while training AlexNet on the CIFAR dataset.
The key changes are as follows:
- Removed the `async=True` argument from `cuda()` calls: This argument is deprecated in PyTorch 1.0.0 and has been removed to maintain compatibility (5c70d06). See the first sketch below.
- Refactored code to use `torch.no_grad()` instead of `volatile=True`: Updated to the context-manager approach for running operations without tracking gradients, which is the supported pattern in the latest PyTorch version (4d393ff). See the second sketch below.
- Fixed tensor reshaping in the accuracy function: Replaced `view` with `reshape` to correct errors when reshaping tensors, improving reliability (d378d87). See the third sketch below.
- Fixed 0-dim tensor indexing errors in `cifar.py`: Indexing into 0-dim tensors raised errors during training and metrics updates; the affected call sites were updated so training runs smoothly (c570bd6, 2935578). See the fourth sketch below.
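
A minimal sketch of the first change, assuming a typical data-transfer call site rather than the repository's exact code. Dropping `async=True` (or replacing it with `non_blocking=True`, its renamed equivalent in current PyTorch) keeps the transfer valid:

```python
import torch

x = torch.randn(4, 3, 32, 32)

if torch.cuda.is_available():
    # Old, pre-1.0 call that this PR removes:
    #     x = x.cuda(async=True)   # `async` is now a reserved Python keyword
    # Updated call: drop the deprecated argument, or use `non_blocking`,
    # which overlaps the copy with compute when the source is in pinned memory.
    x = x.cuda(non_blocking=True)
```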
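A sketch of the second change, using a hypothetical evaluation loop with `model`, `loader`, and `device` placeholders instead of the repository's actual code. The `torch.no_grad()` context manager replaces wrapping inputs in `Variable(..., volatile=True)`:

```python
import torch

def evaluate(model, loader, device):
    """Run a gradient-free evaluation pass (hypothetical helper)."""
    model.eval()
    correct, total = 0, 0
    # Previously each batch was wrapped as Variable(x, volatile=True);
    # torch.no_grad() now disables autograd for the whole block instead.
    with torch.no_grad():
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            outputs = model(images)
            preds = outputs.argmax(dim=1)
            correct += (preds == targets).sum().item()
            total += targets.size(0)
    return correct / total
```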
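A sketch of the third change, based on the top-k `accuracy` helper common to many PyTorch training scripts; the repository's version may differ in detail. `view` requires contiguous memory, while `reshape` copies when necessary, so it does not fail on non-contiguous slices:

```python
import torch

def accuracy(output, target, topk=(1,)):
    """Top-k accuracy in the style of the common PyTorch training scripts."""
    maxk = max(topk)
    batch_size = target.size(0)

    _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
    pred = pred.t()
    correct = pred.eq(target.reshape(1, -1).expand_as(pred))

    res = []
    for k in topk:
        # correct[:k] can be non-contiguous, so .view(-1) may raise a
        # RuntimeError; .reshape(-1) copies when needed and always succeeds.
        correct_k = correct[:k].reshape(-1).float().sum(0)
        res.append(correct_k.mul_(100.0 / batch_size))
    return res
```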
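A sketch of the fourth change. Whether `cifar.py` used exactly the `loss.data[0]` pattern is an assumption on my part, but `.item()` is the standard replacement for indexing into a 0-dim tensor:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
outputs = torch.randn(8, 10, requires_grad=True)   # stand-in for model output
targets = torch.randint(0, 10, (8,))

loss = criterion(outputs, targets)   # a 0-dim tensor in modern PyTorch

running_loss = 0.0
# Old pattern that raises "invalid index of a 0-dim tensor":
#     running_loss += loss.data[0]
# Updated pattern:
running_loss += loss.item()
```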
Additional Information: