From the FSDP docs:
"FSDP currently does not support gradient accumulation outside no_sync() when using CPU offloading. This is because FSDP uses the newly-reduced gradient instead of accumulating with any existing gradient, which can lead to incorrect results."
https://pytorch.org/docs/stable/fsdp.html
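
In other words, the supported pattern is to accumulate gradients *inside* `no_sync()` for all but the last micro-batch, and only run the final backward outside it so the reduce-scatter happens once. Below is a minimal sketch of that pattern; it assumes a process group is already initialized (e.g. under `torchrun`), and the model, loss, and batch shapes are placeholders for illustration only.

```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, CPUOffload

# Assumes dist.init_process_group(...) has already run (e.g. via torchrun)
# and that a GPU is available on this rank.
model = FSDP(
    nn.Linear(1024, 1024).cuda(),
    cpu_offload=CPUOffload(offload_params=True),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

accumulation_steps = 4
# Placeholder micro-batches; in practice these come from a dataloader.
batches = [torch.randn(8, 1024, device="cuda") for _ in range(accumulation_steps)]

optimizer.zero_grad()
for i, batch in enumerate(batches):
    if i < accumulation_steps - 1:
        # Accumulate locally inside no_sync(): gradients are not
        # reduce-scattered, so they are not overwritten by a new reduction.
        with model.no_sync():
            loss = model(batch).sum()
            loss.backward()
    else:
        # Final micro-batch runs outside no_sync() so the accumulated
        # gradients are reduced across ranks before the optimizer step.
        loss = model(batch).sum()
        loss.backward()
optimizer.step()
```

The per-docs caveat is the opposite case: calling `backward()` outside `no_sync()` on several micro-batches while CPU offloading is enabled, since each reduction then replaces rather than adds to the existing gradient.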