(0.97.0) Move CUDA stuff to an extension #4499

Status: Merged

Commits (57 total; changes below shown from 20 commits)
6743772  Update .gitlab-ci.yml file (michel2323)
314ddea  Adding Aurora CI (michel2323)
5656758  Fix (michel2323)
15e1fb1  Fix (michel2323)
0eeb10f  Isolate CUDA (michel2323)
146bbec  Create a CUDA extension (michel2323)
d000990  Add basic CUDA extension test (michel2323)
44b312d  Rebase and various fixes (michel2323)
65884d4  No GPU in example (michel2323)
a39d7f0  Fix unified_array (michel2323)
01abe13  Add CUDA to test_init.jl (michel2323)
56bb0f8  Move CUDA to KA in docstrings (michel2323)
d511a1c  Populate AMDGPU extension (michel2323)
6e151dd  AMDGPU fixes (michel2323)
b052dc6  Fix MultiRegionObject (michel2323)
38c5b77  Fix docs (michel2323)
9238946  More MultiRegion fixes (michel2323)
ae953fe  Fix test_tripolar_grid (michel2323)
34ce8b0  Fix test_tripolar_grid (michel2323)
adcb00e  Fix allowscalar (michel2323)
c01eabf  One more MultiObject (michel2323)
ae48a9c  Fix multi_region_implicit (michel2323)
9b6104b  Fix docs (michel2323)
36f28c0  backend -> device for now (michel2323)
150bd0f  CI, why did I comment out this line? (michel2323)
85f154e  Update OceananigansAMDGPUExt.jl (glwagner)
f862d5f  Update OceananigansCUDAExt.jl (glwagner)
eb2a138  Update src/MultiRegion/multi_region_utils.jl (glwagner)
14ee3f3  Merge branch 'main' into ms/ka (glwagner)
26dc29b  Update ext/OceananigansCUDAExt.jl (glwagner)
129b0dd  Update ext/OceananigansAMDGPUExt.jl (glwagner)
5ac15cf  Update ext/OceananigansAMDGPUExt.jl (glwagner)
8b9923d  Update src/Fields/set!.jl (glwagner)
83951f5  Merge branch 'main' into ms/ka (glwagner)
fb37d42  Merge branch 'main' into ms/ka (glwagner)
1d5f5b7  Merge branch 'main' into ms/ka (glwagner)
cdb70da  Merge branch 'main' into ms/ka (glwagner)
ad377b4  Merge branch 'main' into ms/ka (navidcy)
56c3c72  Merge remote-tracking branch 'upstream/main' (navidcy)
d2c19fa  Merge branch 'main' into ms/ka (navidcy)
22d62c2  leave empty line (navidcy)
4f79474  Delete .gitlab-ci.yml (navidcy)
492c259  load CUDA + disambiguate record method (navidcy)
97674d3  bump minor release (navidcy)
1b51351  Merge branch 'main' into ms/ka (navidcy)
0a73212  install CUDA (navidcy)
f065e2a  delete stray empty line (navidcy)
f960b63  add backend when creating MultiRegionObject (navidcy)
63539ed  missed = (navidcy)
05b1d99  remove some duplicate defs and gather tests together (navidcy)
609e5a7  convert_output(mo::MultiRegionObject, model) always on CPU? (navidcy)
984eb19  MultiRegionOuputWriter fix (michel2323)
3209c40  let go of arch_array (navidcy)
1c6fa73  reorganize imports (navidcy)
ff957c8  add method for architecture(::Type{<:AbstractArray}) (navidcy)
93a75d7  Merge branch 'main' into ms/ka (navidcy)
3696c43  Merge branch 'main' into ms/ka (simone-silvestri)
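For context on what "move CUDA stuff to an extension" means mechanically: Julia's package extensions are declared in `Project.toml` via `[weakdeps]` and `[extensions]`, so the extension module compiles only when the user also loads the weak dependency. A minimal, illustrative sketch of the wiring this PR relies on (the exact `Project.toml` contents are not shown in this diff; the CUDA UUID below is the registered one for CUDA.jl):

```toml
# Project.toml (sketch, not the PR's actual file)
# CUDA is a weak dependency: Oceananigans does not load it itself.
[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"

# ext/OceananigansCUDAExt.jl is loaded automatically once both
# Oceananigans and CUDA are in the session.
[extensions]
OceananigansCUDAExt = "CUDA"
```

This is what lets the methods in the file below (e.g. `AC.GPU()`, `AC.on_architecture`) exist only when CUDA is actually present.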
ext/OceananigansCUDAExt.jl (new file, +138 lines)
```julia
module OceananigansCUDAExt

using Oceananigans
using InteractiveUtils
using CUDA, CUDA.CUSPARSE, CUDA.CUFFT
using Oceananigans.Utils: linear_expand, __linear_ndrange, MappedCompilerMetadata
using KernelAbstractions: __dynamic_checkbounds, __iterspace
using KernelAbstractions

import Oceananigans.Architectures as AC
import Oceananigans.BoundaryConditions as BC
import Oceananigans.DistributedComputations as DC
import Oceananigans.Fields as FD
import Oceananigans.Grids as GD
import Oceananigans.Solvers as SO
import Oceananigans.Utils as UT
import SparseArrays: SparseMatrixCSC
import KernelAbstractions: __iterspace, __groupindex, __dynamic_checkbounds,
                           __validindex, CompilerMetadata
import Oceananigans.DistributedComputations: Distributed

const GPUVar = Union{CuArray, CuContext, CuPtr, Ptr}

function __init__()
    if CUDA.functional()
        @debug "CUDA-enabled GPU(s) detected:"
        for (gpu, dev) in enumerate(CUDA.devices())
            @debug "$dev: $(CUDA.name(dev))"
        end

        CUDA.allowscalar(false)
    end
end

const CUDAGPU = AC.GPU{<:CUDABackend}
CUDAGPU() = AC.GPU(CUDABackend(always_inline=true))

# Keep default CUDA backend
function AC.GPU()
    if CUDA.has_cuda_gpu()
        return CUDAGPU()
    else
        msg = """We cannot make a GPU with the CUDA backend:
                 a CUDA GPU was not found!"""
        throw(ArgumentError(msg))
    end
end

function UT.versioninfo_with_gpu(::CUDAGPU)
    s = sprint(versioninfo)
    gpu_name = CUDA.CuDevice(0) |> CUDA.name
    return "CUDA GPU: $gpu_name"
end

Base.summary(::CUDAGPU) = "CUDAGPU"

AC.architecture(::CuArray) = CUDAGPU()
AC.architecture(::CuSparseMatrixCSC) = CUDAGPU()
AC.array_type(::AC.GPU{CUDABackend}) = CuArray

AC.on_architecture(::CUDAGPU, a::Number) = a
AC.on_architecture(::AC.CPU, a::CuArray) = Array(a)
AC.on_architecture(::CUDAGPU, a::Array) = CuArray(a)
AC.on_architecture(::CUDAGPU, a::CuArray) = a
AC.on_architecture(::CUDAGPU, a::BitArray) = CuArray(a)
AC.on_architecture(::CUDAGPU, a::SubArray{<:Any, <:Any, <:CuArray}) = a
AC.on_architecture(::CUDAGPU, a::SubArray{<:Any, <:Any, <:Array}) = CuArray(a)
AC.on_architecture(::AC.CPU, a::SubArray{<:Any, <:Any, <:CuArray}) = Array(a)
AC.on_architecture(::CUDAGPU, a::StepRangeLen) = a
AC.on_architecture(arch::Distributed, a::CuArray) = AC.on_architecture(AC.child_architecture(arch), a)
AC.on_architecture(arch::Distributed, a::SubArray{<:Any, <:Any, <:CuArray}) = AC.on_architecture(AC.child_architecture(arch), a)

# cu alters the type of `a`, so we convert it back to the correct type
AC.unified_array(::CUDAGPU, a::AbstractArray) = map(eltype(a), cu(a; unified = true))

## GPU to GPU copy of contiguous data
@inline function AC.device_copy_to!(dst::CuArray, src::CuArray; async::Bool = false)
    n = length(src)
    context!(context(src)) do
        GC.@preserve src dst begin
            unsafe_copyto!(pointer(dst, 1), pointer(src, 1), n; async)
        end
    end
    return dst
end

@inline AC.unsafe_free!(a::CuArray) = CUDA.unsafe_free!(a)

@inline AC.constructors(::AC.GPU{CUDABackend}, A::SparseMatrixCSC) = (CuArray(A.colptr), CuArray(A.rowval), CuArray(A.nzval), (A.m, A.n))
@inline AC.constructors(::AC.CPU, A::CuSparseMatrixCSC) = (A.dims[1], A.dims[2], Int64.(Array(A.colPtr)), Int64.(Array(A.rowVal)), Array(A.nzVal))
@inline AC.constructors(::AC.GPU{CUDABackend}, A::CuSparseMatrixCSC) = (A.colPtr, A.rowVal, A.nzVal, A.dims)

@inline AC.arch_sparse_matrix(::AC.GPU{CUDABackend}, constr::Tuple) = CuSparseMatrixCSC(constr...)
@inline AC.arch_sparse_matrix(::AC.CPU, A::CuSparseMatrixCSC) = SparseMatrixCSC(AC.constructors(AC.CPU(), A)...)
@inline AC.arch_sparse_matrix(::AC.GPU{CUDABackend}, A::SparseMatrixCSC) = CuSparseMatrixCSC(AC.constructors(AC.GPU(), A)...)
@inline AC.arch_sparse_matrix(::AC.GPU{CUDABackend}, A::CuSparseMatrixCSC) = A

@inline AC.convert_to_device(::CUDAGPU, args) = CUDA.cudaconvert(args)
@inline AC.convert_to_device(::CUDAGPU, args::Tuple) = map(CUDA.cudaconvert, args)

BC.validate_boundary_condition_architecture(::CuArray, ::AC.GPU, bc, side) = nothing

BC.validate_boundary_condition_architecture(::CuArray, ::AC.CPU, bc, side) =
    throw(ArgumentError("$side $bc must use `Array` rather than `CuArray` on CPU architectures!"))

function SO.plan_forward_transform(A::CuArray, ::Union{GD.Bounded, GD.Periodic}, dims, planner_flag)
    length(dims) == 0 && return nothing
    return CUDA.CUFFT.plan_fft!(A, dims)
end

FD.set!(v::Field, a::CuArray) = FD._set!(v, a)
DC.set!(v::DC.DistributedField, a::CuArray) = DC._set!(v, a)

function SO.plan_backward_transform(A::CuArray, ::Union{GD.Bounded, GD.Periodic}, dims, planner_flag)
    length(dims) == 0 && return nothing
    return CUDA.CUFFT.plan_ifft!(A, dims)
end

# CUDA version, the indices are passed implicitly
# You must not use KA here as this code is executed in another scope
CUDA.@device_override @inline function __validindex(ctx::MappedCompilerMetadata)
    if __dynamic_checkbounds(ctx)
        index = @inbounds linear_expand(__iterspace(ctx), CUDA.blockIdx().x, CUDA.threadIdx().x)
        return index ≤ __linear_ndrange(ctx)
    else
        return true
    end
end

@inline UT.sync_device!(::CuDevice) = CUDA.synchronize()
@inline UT.getdevice(cu::GPUVar, i) = device(cu)
@inline UT.getdevice(cu::GPUVar) = device(cu)
@inline UT.switch_device!(dev::CuDevice) = device!(dev)
@inline UT.sync_device!(::CUDAGPU) = CUDA.synchronize()
@inline UT.sync_device!(::CUDABackend) = CUDA.synchronize()

end # module OceananigansCUDAExt
```
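To see what the extension changes for users: once CUDA.jl is loaded alongside Oceananigans, the methods above are activated and `GPU()` dispatches to the CUDA backend. A hedged usage sketch, assuming a CUDA-capable machine (without one, `AC.GPU()` above throws the `ArgumentError`); `RectilinearGrid` and `CenterField` are standard Oceananigans constructors:

```julia
# Sketch: loading CUDA activates OceananigansCUDAExt automatically.
using Oceananigans
using CUDA                 # triggers the extension via [weakdeps]/[extensions]

arch = GPU()               # from the extension: GPU(CUDABackend(always_inline=true))
grid = RectilinearGrid(arch, size=(16, 16, 16), extent=(1, 1, 1))
c = CenterField(grid)      # field data now lives in a CuArray on the device
```

The point of the PR is that a CPU-only user never pays for (or needs to install) CUDA.jl: without `using CUDA`, none of this code is loaded.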