A colleague wants to use aospy on 0.1 degree ocean data; see /archive/hmz/CM2.6/ocean/ on GFDL's filesystem. This is GFDL 'av' data organized as annual means, one year per file, for 200 years: ocean.0001.ann.nc, ..., ocean.0200.ann.nc. Each file is ~14 GB, and in total it's ~3.1 TB.
While we generally use xarray.open_mfdataset and hence load lazily, there are three places where data explicitly gets loaded into memory via load():
- DataLoader.load_variable: https://github.com/spencerahill/aospy/blob/develop/aospy/data_loader.py#L212
- Calc._add_grid_attributes: https://github.com/spencerahill/aospy/blob/develop/aospy/calc.py#L330
- Model._get_grid_files: https://github.com/spencerahill/aospy/blob/develop/aospy/model.py#L221. This was implemented to prevent bugs that occurred when the grid attributes were dask arrays. Also, we sometimes call xarray.open_dataset here instead, without passing a chunks={} option that would make it load lazily (see the sketch after this list).
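For reference, here's a minimal sketch of the distinction in that last case, assuming a single CM2.6 annual-mean file (the path is a placeholder):

```python
import xarray as xr

# Placeholder path standing in for one of the CM2.6 annual-mean files.
path = "ocean.0001.ann.nc"

# Without a chunks argument, downstream operations or an explicit .load()
# pull the full ~14 GB file into memory at once.
ds_eager = xr.open_dataset(path)

# With chunks={}, each variable is backed by a dask array, so subsetting and
# reductions stay lazy until we call .load() or .compute().
ds_lazy = xr.open_dataset(path, chunks={})
```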
In this particular case, the grid attributes can come from the smaller /archive/hmz/CM2.6/ocean.static.nc file, but that itself isn't trivially small, at 371 MB.
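One possible mitigation, sketched below, would be to open the static file lazily and load only the grid fields we actually use. The variable names here ('area_t', 'geolon_t', 'geolat_t') are illustrative placeholders, not necessarily what aospy requests:

```python
import xarray as xr

# Open the static grid file lazily, then load only the 2D grid fields we need.
# The variable names below are placeholders for illustration.
static = xr.open_dataset("/archive/hmz/CM2.6/ocean.static.nc", chunks={})
grid = static[["area_t", "geolon_t", "geolat_t"]].load()
```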
@spencerkclark, do you recall the nature of the bugs when we didn't force loading? Any thoughts more generally about making all of the above logic more performant with large datasets? Ideally we never call load() on a full dataset; rather we take individual variables, reduce them as much as possible (in space and time), and then load.
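Something roughly like the following, where the variable name, dimension name, and slice bounds are all placeholders rather than the actual aospy machinery:

```python
import xarray as xr

# Open the full 200-file record lazily, one year per file.
ds = xr.open_mfdataset("ocean.*.ann.nc", chunks={"time": 1},
                       combine="by_coords")

# Take a single variable ('temp' is a placeholder), subset it in time and
# space ('yt_ocean' and the bounds are also placeholders), reduce, and only
# then load the (now small) result.
da = ds["temp"].isel(time=slice(0, 20)).sel(yt_ocean=slice(-30, 30))
result = da.mean("time").load()
```

That pattern should keep peak memory bounded by the chunk size rather than by the size of any single ~14 GB file.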