Accelerate PyTorch with distributed training and inference
Accelerate is a library that lets the same PyTorch code run across any distributed configuration by adding just four lines of code. In short: training and inference at scale, made simple, efficient, and adaptable.
| Release | Stable | Testing |
|---|---|---|
| Fedora Rawhide | 1.12.0-3.fc44 | - |
You can contact the maintainers of this package via email at
python-accelerate-maintainers@fedoraproject.org.