Revert "Parallel test with pytest-xdist" #526


Merged: 1 commit, Jul 19, 2024
3 changes: 1 addition & 2 deletions .github/workflows/regression_test.yml
@@ -68,5 +68,4 @@ jobs:
 pip install ${{ matrix.torch-spec }}
 pip install -r dev-requirements.txt
 pip install .
-pytest test --verbose -s -m "not multi_gpu" --dist load --tx popen//env:CUDA_VISIBLE_DEVICES=0 --tx popen//env:CUDA_VISIBLE_DEVICES=1 --tx popen//env:CUDA_VISIBLE_DEVICES=2 --tx popen//env:CUDA_VISIBLE_DEVICES=3
-pytest test --verbose -s -m "multi_gpu"
+pytest test --verbose -s
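
The reverted invocation split the single-GPU suite across four pytest-xdist workers, each pinned to its own GPU through a popen//env override of CUDA_VISIBLE_DEVICES, and ran the multi_gpu-marked tests in a separate pass; this revert returns to one serial pytest run. As a rough sketch (not part of this PR), the same per-worker pinning could live in a conftest.py, using the PYTEST_XDIST_WORKER variable that pytest-xdist sets in each worker process:

# conftest.py -- illustrative only, not part of this PR: pin each
# pytest-xdist worker to one GPU instead of spelling out --tx popen//env
# overrides on the command line.
import os

def pytest_configure(config):
    # pytest-xdist sets PYTEST_XDIST_WORKER (e.g. "gw0") in each worker
    # process; the variable is absent in a plain serial run.
    worker = os.environ.get("PYTEST_XDIST_WORKER", "")
    if worker.startswith("gw"):
        # Assumes 4 GPUs, matching the reverted workflow. This must run
        # before anything initializes CUDA for the pin to take effect.
        os.environ["CUDA_VISIBLE_DEVICES"] = str(int(worker[2:]) % 4)

With something like this in place, 'pytest -n 4' should reproduce the pinning without the long --tx flags, since -n defaults to the same load distribution mode.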
1 change: 0 additions & 1 deletion dev-requirements.txt
@@ -7,7 +7,6 @@ transformers
 hypothesis # Avoid test derandomization warning
 sentencepiece # for gpt-fast tokenizer
 expecttest
-pytest-xdist
 
 # For prototype features and benchmarks
 bitsandbytes #needed for testing triton quant / dequant ops for 8-bit optimizers
3 changes: 0 additions & 3 deletions pytest.ini

This file was deleted.
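
The deleted pytest.ini presumably existed to register the custom multi_gpu marker used by the decorators removed below, since unregistered marks trigger a PytestUnknownMarkWarning. A hypothetical conftest.py equivalent, assuming that was the file's purpose:

# conftest.py -- hypothetical stand-in for the deleted pytest.ini,
# assuming the file existed to register the custom marker.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "multi_gpu: tests that need more than one GPU"
    )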

1 change: 0 additions & 1 deletion test/dtypes/test_nf4.py
@@ -486,7 +486,6 @@ class TestQLoRA(FSDPTest):
     def world_size(self) -> int:
         return 2
 
-    @pytest.mark.multi_gpu
     @pytest.mark.skipif(
         version.parse(torch.__version__).base_version < "2.4.0",
         reason="torch >= 2.4 required",
9 changes: 3 additions & 6 deletions test/integration/test_integration.py
@@ -985,10 +985,7 @@ def forward(self, x):
         # save quantized state_dict
         api(model)
 
-        # unique filename to avoid collision in parallel tests
-        ckpt_name = f"{api.__name__}_{test_device}_{test_dtype}_test.pth"
-
-        torch.save(model.state_dict(), ckpt_name)
+        torch.save(model.state_dict(), "test.pth")
         # get quantized reference
         model_qc = torch.compile(model, mode="max-autotune")
         ref_q = model_qc(x).detach()
@@ -1001,8 +998,8 @@ def forward(self, x):
         api(model)
 
         # load quantized state_dict
-        state_dict = torch.load(ckpt_name, mmap=True)
-        os.remove(ckpt_name)
+        state_dict = torch.load("test.pth", mmap=True)
+        os.remove("test.pth")
 
         model.load_state_dict(state_dict, assign=True)
         model = model.to(device=test_device, dtype=test_dtype).eval()
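
With the suite serial again, the fixed "test.pth" name can no longer collide; under xdist, the per-test filename removed above was what kept parallel workers from overwriting each other's checkpoints. A minimal sketch of an alternative that stays collision-free without manual naming, using pytest's built-in tmp_path fixture (the test name and model here are illustrative, not from this PR):

# Illustrative sketch: tmp_path gives every test its own directory, so
# the checkpoint name cannot collide even under pytest-xdist.
import torch

def test_state_dict_roundtrip(tmp_path):  # hypothetical test name
    model = torch.nn.Linear(8, 8)         # stand-in for the quantized model
    ckpt = tmp_path / "test.pth"
    torch.save(model.state_dict(), ckpt)
    state_dict = torch.load(ckpt, mmap=True)
    model.load_state_dict(state_dict, assign=True)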
1 change: 0 additions & 1 deletion test/prototype/test_low_bit_optim.py
@@ -163,7 +163,6 @@ class TestFSDP2(FSDPTest):
     def world_size(self) -> int:
         return 2
 
-    @pytest.mark.multi_gpu
     @pytest.mark.skipif(not TORCH_VERSION_AFTER_2_4, reason="torch >= 2.4 required")
     @skip_if_lt_x_gpu(2)
     def test_fsdp2(self):
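
Like the test_nf4.py change above, this drops the multi_gpu mark while keeping skip_if_lt_x_gpu. FSDPTest-based cases spawn world_size processes and occupy several GPUs at once, which is why the reverted workflow scheduled them apart from the single-GPU shards. A rough sketch of the pattern, assuming PyTorch's internal test utilities (internal APIs, subject to change between releases):

# Illustrative sketch of an FSDPTest-style multi-GPU case; the class and
# test names here are hypothetical.
import torch
import torch.distributed as dist
from torch.testing._internal.common_distributed import skip_if_lt_x_gpu
from torch.testing._internal.common_fsdp import FSDPTest

class TestTwoGPUSmoke(FSDPTest):
    @property
    def world_size(self) -> int:
        # Each test method below runs in this many spawned processes.
        return 2

    @skip_if_lt_x_gpu(2)
    def test_all_reduce_smoke(self):
        # The FSDPTest harness initializes the process group; each spawned
        # process owns one rank and one GPU.
        t = torch.ones(1, device=f"cuda:{self.rank}")
        dist.all_reduce(t)  # defaults to SUM across ranks
        assert t.item() == self.world_size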