
Refactor Outdated PyTorch Operations for AlexNet Training on CIFAR #57

Open
wants to merge 5 commits into master
Conversation

KaledDahleh

This pull request includes several important updates and fixes to ensure compatibility with the latest version of PyTorch and to resolve issues related to tensor operations while training AlexNet on the CIFAR dataset.

The key changes are as follows:

  1. Removed the async=True argument from cuda() calls: async became a reserved keyword in Python 3.7 and the argument was removed in PyTorch 1.0, so these calls now raise a SyntaxError; the argument has been dropped (its modern replacement is non_blocking=True).

    • Commit: 5c70d06
  2. Refactored code to use torch.no_grad() instead of volatile=True: volatile has had no effect since PyTorch 0.4; evaluation code now runs inside the torch.no_grad() context manager, which disables gradient tracking and reduces memory use during inference.

    • Commit: 4d393ff
  3. Fixed tensor reshaping in the accuracy function: view() fails on non-contiguous tensors, such as the transposed prediction tensor produced by topk(); reshape() copies the data when necessary, so the reshaping now succeeds reliably.

    • Commit: d378d87
  4. Addressed all 0-dim tensor indexing errors in cifar.py: since PyTorch 0.4, indexing a 0-dim tensor (for example, a scalar loss) raises an IndexError; these accesses were fixed so training and metric updates run without errors.

    • Commits: c570bd6, 2935578
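For reference, a minimal sketch of the first two migrations; the tensors, shapes, and variable names here are illustrative, not taken from the repository:

```python
import torch

# Old (pre-1.0): inputs = inputs.cuda(async=True)  -> SyntaxError in Python 3.7+
# New: pass non_blocking=True instead; the transfer is a no-op on CPU-only machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(4, 3, 32, 32, requires_grad=True)   # a CIFAR-sized batch
x = x.to(device, non_blocking=True)

# Old (pre-0.4): Variable(x, volatile=True) to skip gradient tracking.
# New: wrap inference in the torch.no_grad() context manager.
with torch.no_grad():
    y = (x * 2).sum()
print(y.requires_grad)   # False: no autograd graph was recorded
```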
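The last two fixes can be sketched the same way; the accuracy-style computation below is a hypothetical stand-in for the repository's own helper, using dummy logits and labels:

```python
import torch

# view() requires contiguous memory, which the transposed output of
# topk() does not guarantee; reshape() copies when needed and always works.
logits = torch.randn(8, 10)                    # batch of 8, 10 classes
targets = torch.zeros(8, dtype=torch.long)     # dummy labels
_, pred = logits.topk(5, dim=1)
correct = pred.t().eq(targets.view(1, -1))     # (5, 8) boolean matrix
top1 = correct[:1].reshape(-1).float().sum(0)  # was .view(-1), which could fail

# Since PyTorch 0.4 a loss is a 0-dim tensor: indexing it (loss.data[0])
# raises an IndexError, so the Python scalar is extracted with .item().
loss = torch.tensor(0.25)
print(loss.item())   # 0.25
```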

Additional Information:

  • Files Changed:
    • cifar.py: Updated to remove deprecated or obsolete arguments and fix tensor operations
    • eval.py: Adjusted tensor reshaping logic for accuracy calculation
