This repository was archived by the owner on Nov 11, 2023. It is now read-only.

[mps] issue with Apple silicon compatibility #170

@magic-akari

Description

OS version

Darwin arm64

GPU

mps

Python version

Python 3.8.16

PyTorch version

2.0.0

Branch of sovits

4.0(Default)

Dataset source (Used to judge the dataset quality)

N/A

Where the problem occurs or what command you executed

inference

Situation description

Tips:

  • use PYTORCH_ENABLE_MPS_FALLBACK=1
  • use -d mps
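
Concretely, the two tips combine into something like the following (the `inference_main.py` entry point shown in the comment is an assumption for illustration, not verified against the repository):

```shell
# Enable CPU fallback for ops that are not yet implemented on the MPS backend.
# This must be exported before the Python process starts, since PyTorch reads
# it at initialization.
export PYTORCH_ENABLE_MPS_FALLBACK=1

# Then select the MPS device for inference, e.g.:
#   python inference_main.py -d mps
```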

Issue:

Related code:

```python
is_half = rad_values.dtype is not torch.float32
tmp_over_one = torch.cumsum(rad_values.double(), 1)  # % 1  ##### %1 means the following cumsum can no longer be optimized
if is_half:
    ...  # (body elided in the excerpt)
cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
rad_values = rad_values.double()
cumsum_shift = cumsum_shift.double()
sine_waves = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi)
```

There are several casts to double in the source code. Are they required?

Some operations on double tensors are not implemented for MPS devices.

I think float is enough, but I am not sure.
I have modified the code and tested it locally, and it works well.

Is there a significant loss of precision in moving the torch.cumsum operation from double to float?
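
To get a feel for the scale of that precision loss, here is a small self-contained experiment (NumPy is used in place of torch so it runs anywhere; the constant phase increment is a made-up stand-in for `rad_values`, not data from the model):

```python
import numpy as np

# Hypothetical per-sample phase increments: a 220 Hz tone at 44.1 kHz,
# stored in float32 as in the original code.
rad_values = np.full((1, 4096, 1), 220.0 / 44100.0, dtype=np.float32)

# Accumulate the phase once in float64 (reference) and once in float32
# (the proposed replacement for the .double() casts).
phase64 = np.cumsum(rad_values.astype(np.float64), axis=1)
phase32 = np.cumsum(rad_values, axis=1).astype(np.float64)

# Compare the resulting sine waves sample by sample.
sine64 = np.sin(2 * np.pi * phase64)
sine32 = np.sin(2 * np.pi * phase32)
max_err = np.abs(sine64 - sine32).max()
print(f"max sine error with float32 cumsum: {max_err:.2e}")
```

One caveat: float32 rounding error in a running cumsum grows with the number of accumulated samples, so a few seconds of audio at a high sample rate is the case worth checking, not a short buffer like this one.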

CC: @ylzz1997

Log

N/A

Supplementary description

No response

Labels

bug? (the issue author thinks this is a bug)