
Drop cuda 11 usages #19386


Merged: 11 commits, Jul 18, 2025
Changes from 2 commits
2 changes: 1 addition & 1 deletion build.sh
@@ -114,7 +114,7 @@ function buildAll {
}

function buildLibCudfJniInDocker {
-local cudaVersion="11.8.0"
+local cudaVersion="12.9.0"
local imageName="cudf-build:${cudaVersion}-devel-rocky8"
local CMAKE_GENERATOR="${CMAKE_GENERATOR:-Ninja}"
local workspaceDir="/rapids"
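The Docker image tag in `buildLibCudfJniInDocker` is derived directly from the bumped version string, so the tag changes with the CUDA version automatically. A small Python sketch of the same interpolation (mirroring the shell, not part of the PR):

```python
# Mirrors the shell interpolation in build.sh after the CUDA 12 bump:
#   imageName="cudf-build:${cudaVersion}-devel-rocky8"
cuda_version = "12.9.0"
image_name = f"cudf-build:{cuda_version}-devel-rocky8"
print(image_name)  # cudf-build:12.9.0-devel-rocky8
```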
2 changes: 1 addition & 1 deletion docs/cudf/source/cudf_pandas/benchmarks.md
@@ -43,7 +43,7 @@ source pandas/py-pandas/bin/activate
4. Install cudf:

```bash
-pip install --extra-index-url=https://pypi.nvidia.com cudf-cu12 # or cudf-cu11
+pip install --extra-index-url=https://pypi.nvidia.com cudf-cu12
```

5. Modify pandas join/group code to use `cudf.pandas` and remove the `dtype_backend` keyword argument (not supported):
315 changes: 159 additions & 156 deletions docs/cudf/source/user_guide/10min.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/cudf/source/user_guide/io/io.md
@@ -90,7 +90,7 @@
IO operations. GDS enables a direct data path for direct memory access
(DMA) transfers between GPU memory and storage, which avoids a bounce
buffer through the CPU. The SDK is available for download
[here](https://developer.nvidia.com/gpudirect-storage). GDS is also
-included in CUDA Toolkit 11.4 and higher.
+included in CUDA Toolkit.

Use of GDS in cuDF is controlled by KvikIO's environment variable `KVIKIO_COMPAT_MODE`. It has
3 options (case-insensitive):
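Since GDS behavior hinges on `KVIKIO_COMPAT_MODE`, a minimal sketch of setting it is shown below. The variable name comes from the docs above; treating `"ON"` as one of the accepted (case-insensitive) values is an assumption here, and the variable must be set before KvikIO/cuDF initialize:

```python
import os

# Hypothetical sketch: force KvikIO's compatibility mode (bypassing GDS).
# "ON" is assumed to be a valid option; the docs only say the value is
# case-insensitive with three options.
os.environ["KVIKIO_COMPAT_MODE"] = "ON"
print(os.environ["KVIKIO_COMPAT_MODE"])  # ON
```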
8 changes: 1 addition & 7 deletions java/src/test/java/ai/rapids/cudf/RmmTest.java
@@ -547,13 +547,7 @@ public void testPoolSize() {
@Tag("noSanitizer")
@Test
public void testCudaAsyncMemoryResourceSize() {
-try {
-Rmm.initialize(RmmAllocationMode.CUDA_ASYNC, Rmm.logToStderr(), 1024);
-} catch (CudfException e) {
-// CUDA 11.2 introduced cudaMallocAsync, older CUDA Toolkit will skip this test.
-assumeFalse(e.getMessage().contains("cudaMallocAsync not supported"));
-throw e;
-}
+Rmm.initialize(RmmAllocationMode.CUDA_ASYNC, Rmm.logToStderr(), 1024);
try (DeviceMemoryBuffer ignored1 = Rmm.alloc(1024)) {
assertThrows(OutOfMemoryError.class,
() -> {
12 changes: 3 additions & 9 deletions python/cudf/cudf/tests/test_groupby.py
@@ -15,8 +15,6 @@
from numba import cuda
from numpy.testing import assert_array_equal

-import rmm
-
import cudf
from cudf import DataFrame, Series
from cudf.api.extensions import no_default
@@ -1062,13 +1060,9 @@ def test_groupby_agg_decimal(num_groups, nelem_per_group, func):
)

expect_df = pdf.groupby("idx", sort=True).agg(func)
-if rmm._cuda.gpu.runtimeGetVersion() < 11000:
-with pytest.raises(RuntimeError):
-got_df = gdf.groupby("idx", sort=True).agg(func)
-else:
-got_df = gdf.groupby("idx", sort=True).agg(func)
-assert_eq(expect_df["x"], got_df["x"], check_dtype=False)
-assert_eq(expect_df["y"], got_df["y"], check_dtype=False)
+got_df = gdf.groupby("idx", sort=True).agg(func)
+assert_eq(expect_df["x"], got_df["x"], check_dtype=False)
+assert_eq(expect_df["y"], got_df["y"], check_dtype=False)


@pytest.mark.parametrize(
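With the CUDA 11 branch gone, the test always runs the grouped aggregation. The sorted-groupby pattern it exercises can be sketched in pure Python (hypothetical data, `sum` standing in for `func`):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical data; mimics df.groupby("idx", sort=True).agg("sum")
rows = [("b", 3), ("a", 1), ("b", 4), ("a", 2)]
rows.sort(key=itemgetter(0))  # sort=True: group keys come back in order
agg = {
    key: sum(val for _, val in grp)
    for key, grp in groupby(rows, key=itemgetter(0))
}
print(agg)  # {'a': 3, 'b': 7}
```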
3 changes: 0 additions & 3 deletions python/cudf/cudf/tests/test_parquet.py
@@ -4547,9 +4547,6 @@ def my_pdf(request):

@pytest.mark.parametrize("compression", ["brotli", "gzip", "snappy", "zstd"])
def test_parquet_decompression(set_decomp_env_vars, my_pdf, compression):
-if compression == "snappy":
-pytest.skip("Skipping because of a known issue on CUDA 11.8")

# PANDAS returns category objects whereas cuDF returns hashes
expect = my_pdf.drop(columns=["col_category"])

16 changes: 8 additions & 8 deletions python/cudf/cudf/utils/gpu_utils.py
@@ -108,14 +108,14 @@ def validate_setup():

cuda_runtime_version = runtimeGetVersion()

-if cuda_runtime_version < 11000:
-# Require CUDA Runtime version 11.0 or greater.
+if cuda_runtime_version < 12000:
+# Require CUDA Runtime version 12.0 or greater.
major_version = cuda_runtime_version // 1000
minor_version = (cuda_runtime_version % 1000) // 10
raise UnsupportedCUDAError(
"Detected CUDA Runtime version is "
f"{major_version}.{minor_version}. "
-"Please update your CUDA Runtime to 11.0 or above."
+"Please update your CUDA Runtime to 12.0 or above."
)
)

cuda_driver_supported_rt_version = driverGetVersion()
@@ -142,13 +142,13 @@ def validate_setup():
# Driver Runtime version is >= Runtime version
pass
elif (
-cuda_driver_supported_rt_version >= 11000
-and cuda_runtime_version >= 11000
+cuda_driver_supported_rt_version >= 12000
+and cuda_runtime_version >= 12000
):
# With cuda enhanced compatibility any code compiled
-# with 11.x version of cuda can now run on any
-# driver >= 450.80.02. 11000 is the minimum cuda
-# version 450.80.02 supports.
+# with 12.x version of cuda can now run on any
+# driver >= 525.60.13. 12000 is the minimum cuda
+# version 525.60.13 supports.
pass
else:
raise UnsupportedCUDAError(
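The thresholds above rely on CUDA's integer version encoding, `major * 1000 + minor * 10`, which is also how the error message derives `major_version` and `minor_version`. A self-contained sketch of the updated gate (simplified; names borrowed from the diff, not the actual cuDF module):

```python
class UnsupportedCUDAError(Exception):
    pass

def split_cuda_version(encoded):
    # CUDA encodes versions as major*1000 + minor*10, e.g. 12040 -> (12, 4)
    return encoded // 1000, (encoded % 1000) // 10

def require_cuda_12(cuda_runtime_version):
    # Simplified stand-in for the runtime check in validate_setup
    if cuda_runtime_version < 12000:
        major, minor = split_cuda_version(cuda_runtime_version)
        raise UnsupportedCUDAError(
            f"Detected CUDA Runtime version is {major}.{minor}. "
            "Please update your CUDA Runtime to 12.0 or above."
        )

print(split_cuda_version(12040))  # (12, 4)
```

Under this encoding a CUDA 11.8 runtime reports 11080, which now fails the gate instead of passing as it did before this PR.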
4 changes: 2 additions & 2 deletions python/cudf_polars/cudf_polars/callback.py
@@ -104,8 +104,8 @@ def default_memory_resource(
):
raise ComputeError(
"GPU engine requested, but incorrect cudf-polars package installed. "
-"If your system has a CUDA 11 driver, please uninstall `cudf-polars-cu12` "
-"and install `cudf-polars-cu11`"
+"If your system has a CUDA `x` driver, please uninstall `cudf-polars-cu12` "
+"and install `cudf-polars-cu<x>` instead."

Review comment on lines -107 to -108:

Contributor: How can this error be reached? Do we need this to say “Requires CUDA 12+” instead of recommending a reinstall?

I tried to trace down how this code is reached. Is it one of these two paths? https://github.com/search?q=repo%3Arapidsai%2Frmm+%22not+supported+with+this+CUDA+driver%2Fruntime+version%22&type=code

Contributor Author: Yes, those are the paths. I updated the error message.
) from None
else:
raise
Expand Down
4 changes: 2 additions & 2 deletions python/custreamz/README.md
@@ -48,8 +48,8 @@ Please see the [Demo Docker Repository](https://hub.docker.com/r/rapidsai/rapids

### CUDA/GPU requirements

-* CUDA 11.0+
-* NVIDIA driver 450.80.02+
+* CUDA 12.0+
+* NVIDIA driver 525.60.13+
* Volta architecture or better (Compute Capability >=7.0)

### Conda