
Update deprecated torch.cuda.amp API calls for PyTorch 2.0+ compatibility #1077

Open
2 tasks done
lksrz opened this issue Mar 20, 2025 · 1 comment
Labels
enhancement New feature or request

Comments

@lksrz

lksrz commented Mar 20, 2025

Search before asking

  • I have searched the YOLOv6 issues and found no similar feature requests.

Description

YOLOv6 currently uses deprecated PyTorch API calls for automatic mixed precision (AMP) training. This causes deprecation warnings when using PyTorch 2.0 or newer.

When training with PyTorch 2.0+, the following warnings appear:
FutureWarning: torch.cuda.amp.GradScaler(args...) is deprecated. Please use torch.amp.GradScaler('cuda', args...) instead.
FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.

Proposed Solution
Update the following code patterns throughout the codebase:

  • Replace torch.cuda.amp.GradScaler() with torch.amp.GradScaler('cuda')
  • Replace torch.cuda.amp.autocast() with torch.amp.autocast('cuda')

This change maintains the same functionality while making the code compatible with PyTorch 2.0+ and eliminating deprecation warnings.
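For illustration, here is a minimal before/after sketch of the migration in a generic AMP training step (placeholder model and optimizer, not the actual YOLOv6 engine code):

```python
import torch

# Placeholder model/optimizer for illustration; the real code lives in
# yolov6/core/engine.py and yolov6/models/losses/loss.py.
model = torch.nn.Linear(8, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Deprecated pattern (emits FutureWarning on recent PyTorch):
#   scaler = torch.cuda.amp.GradScaler()
#   with torch.cuda.amp.autocast(): ...
# Updated pattern:
scaler = torch.amp.GradScaler('cuda')

inputs = torch.randn(4, 8, device='cuda')
targets = torch.randint(0, 2, (4,), device='cuda')

with torch.amp.autocast('cuda'):
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)

scaler.scale(loss).backward()   # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)          # unscale gradients and apply the optimizer step
scaler.update()
optimizer.zero_grad()
```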

Files that need changes

  • yolov6/core/engine.py
  • yolov6/models/losses/loss.py

I'm happy to contribute a PR for this change if desired.

Use case

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@lksrz added the enhancement (New feature or request) label Mar 20, 2025
lksrz added a commit to lksrz/YOLOv6 that referenced this issue Mar 20, 2025
…lity meituan#1077

  • Replace torch.cuda.amp with torch.amp
  • Add device specification ('cuda') to amp.autocast and GradScaler
  • Update imports to use torch.amp directly
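As a hypothetical illustration of the import update described above (not the actual commit diff), the change could look like:

```python
# Before (deprecated):
#   from torch.cuda import amp
#   scaler = amp.GradScaler()
#   with amp.autocast(): ...

# After: import torch.amp directly and pass the device type explicitly.
from torch import amp

scaler = amp.GradScaler('cuda')
with amp.autocast('cuda'):
    pass  # forward pass / loss computation under mixed precision
```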
@roxroxroxrox

Good job. I don't update my stack so frequently (I moved from biz to alg).
