This repository was archived by the owner on Nov 11, 2023. It is now read-only.
[mps] issue with Apple silicon compatibility #170
Open
Labels
bug? (the issue author thinks this is a bug)
Description
OS version
Darwin arm64
GPU
mps
Python version
Python 3.8.16
PyTorch version
2.0.0
Branch of sovits
4.0 (Default)
Dataset source (Used to judge the dataset quality)
N/A
Where the problem occurs or what command you executed
inference
Situation description
Tips:
- use `PYTORCH_ENABLE_MPS_FALLBACK=1` (see the sketch after this list)
- use `-d mps`
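For context, here is a minimal Python sketch of what those two tips amount to. The project's own CLI handling of `-d` is not shown here, so treat the device-selection lines purely as an illustration.

```python
import os

# Assumption: the fallback flag should be set before torch is imported, so that
# any operator without an MPS kernel silently falls back to the CPU.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Rough equivalent of passing "-d mps": pick the Apple-silicon backend when available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print("running inference on:", device)
```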
Issues:
- f0_mean_pooling fix: cast `f0_coarse` to int #142
- enhance

Related code:
so-vits-svc/vdecoder/nsf_hifigan/models.py, lines 144 to 146 in 0298cd4:

```python
is_half = rad_values.dtype is not torch.float32
tmp_over_one = torch.cumsum(rad_values.double(), 1)  # % 1  ##### %1 means the following cumsum can no longer be optimized
if is_half:
```
so-vits-svc/vdecoder/nsf_hifigan/models.py, lines 159 to 162 in 0298cd4:

```python
cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
rad_values = rad_values.double()
cumsum_shift = cumsum_shift.double()
sine_waves = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi)
```
There are some double type casts in the source code. Are they required?
Some operations on double (float64) tensors are not implemented on MPS devices.
I think float32 is enough, but I am not sure.
I have modified and tested it locally, and it works well.
Is there a significant loss of precision in moving the torch.cumsum operation from double to float?
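For reference, here is a minimal sketch of the float32-only variant I tested, plus a quick standalone drift check. The helper name `sine_phase_float32` and the test tensor are illustrative, not code from the repository.

```python
import torch
import numpy as np

# Sketch of keeping the phase accumulation in float32 (assumption: simply
# dropping the .double() casts shown above) so every op stays on the MPS device.
def sine_phase_float32(rad_values, cumsum_shift):
    rad_values = rad_values.float()
    cumsum_shift = cumsum_shift.float()
    return torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi)

# Rough drift check: cumulative sum of ~10 s of per-sample phase increments,
# accumulated in float32 vs. float64 (runs on CPU, no MPS needed).
x = torch.rand(1, 480000, 1)
ref = torch.cumsum(x.double(), dim=1)
approx = torch.cumsum(x.float(), dim=1).double()
print("max abs difference:", (ref - approx).abs().max().item())
```

This only measures numerical drift of the raw cumulative sum; whether any difference is still audible after the sine and later synthesis stages is exactly the question above.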
CC: @ylzz1997
Log
N/A
Supplementary description
No response