Writing a netCDF file is unexpectedly slow #2912
Comments
You're using dask, so the Dataset is being lazily computed. If one part of your pipeline is very expensive (perhaps reading the original data from disk?) then the process of saving can be very slow. I would suggest doing some profiling, e.g., as shown in this example: http://docs.dask.org/en/latest/diagnostics-local.html#example
Once we know what the slow part is, that will hopefully make opportunities for improvement more obvious.
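A minimal sketch of that profiling approach using dask's local diagnostics, assuming ncdat is the dask-backed Dataset from this issue, "out.nc" is a placeholder output path, and bokeh is installed for the visualization:

```python
# Sketch only: profile where the time goes when to_netcdf() triggers the
# lazy dask pipeline.  `ncdat` and "out.nc" are placeholders.
from dask.diagnostics import Profiler, ResourceProfiler, ProgressBar, visualize

with Profiler() as prof, ResourceProfiler(dt=0.25) as rprof, ProgressBar():
    ncdat.to_netcdf("out.nc")   # the computation happens here, not earlier

# Renders a timeline of tasks vs. CPU/memory usage, showing whether reading,
# computing, or writing dominates the runtime.
visualize([prof, rprof])
```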
It really depends on the underlying cause. In most cases, writing a file to disk is not the slow part, only the place where the slow-down is manifested.
Since the final dataset size is quite manageable, I would start by forcing computation before the write step: ncdat.load().to_netcdf(...)
While writing xarray datasets backed by dask is possible, it's a poorly optimized operation. Most of this comes from constraints in netCDF4/HDF5. There are ways to sidestep some of these challenges.
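A sketch of that workaround, assuming the computed result fits comfortably in memory and "out.nc" is a placeholder output path:

```python
# Compute the lazy dask pipeline into memory first, then do a plain write.
ncdat.load()               # replaces dask arrays with in-memory numpy arrays
ncdat.to_netcdf("out.nc")  # the write itself is now a simple serial dump
```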
Thank you very much! I found this. For now, I will use the load() option.

Diagnosis
- Loading netCDFs
- Slower export
- Faster export
@jhamman Could you elaborate on these ways? I am having severe slow-downs when writing Datasets by blocks (backed by dask). I have also noticed that the slowdowns do not occur when writing to ramdisk. Here are the timings of

The workaround suggested here works, but the datasets may not always fit in memory, and it defeats the essential purpose of dask... Note: I am using dask 2.3.0 and xarray 0.12.3
@fsteinmetz - in my experience, the main thing to consider here is how and when xarray's backends lock/block for certain operations. The HDF5 library is not thread safe, so we implement a global lock around all HDF5 read/write operations. In most cases, this means we can only do one read or one write at a time per process.

We have found that using dask's distributed (or multiprocessing) scheduler allows us to bypass the thread locks required by HDF5 by using multiple processes. We also need a per-file lock when writing, so using multiple output datasets theoretically allows for concurrent writes (provided your filesystem and OS support this).

Finally, it's best not to jump to the complicated explanations first. If you have many small dask chunks in your dataset, both reading and writing will be quite inefficient. This is simply because there is some non-trivial overhead when accessing partial datasets, and it is even worse when the dataset is chunked/compressed.

Hope that helps.
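For illustration, one way to combine those points is to split the output across several files with xarray.save_mfdataset under the distributed scheduler. This is only a sketch, not an official recipe; the "time.year" grouping and output file names are assumptions about the dataset:

```python
# Sketch: concurrent writes to multiple files, each holding its own per-file
# HDF5 lock.  Grouping by "time.year" and the paths are assumptions.
import xarray as xr
from dask.distributed import Client

client = Client()  # process-based workers sidestep the hdf5 thread lock

years, groups = zip(*ncdat.groupby("time.year"))
paths = [f"slow_write_{year}.nc" for year in years]
xr.save_mfdataset(groups, paths)  # writes to separate files can proceed concurrently
```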
I suspect it could work pretty well to explicitly rechunk your dataset into larger chunks (e.g., with the Dataset.chunk() method).
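A sketch of that rechunking step; the dimension name "time", the chunk choice, and the "out.nc" path are assumptions about the dataset:

```python
# Sketch: merge many small dask chunks into fewer, larger ones before writing.
rechunked = ncdat.chunk({"time": -1})  # -1 means a single chunk along "time"
rechunked.to_netcdf("out.nc")          # fewer, larger writes per variable
```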
I am trying to perform a fairly simple operation on a dataset: editing variable and global attributes on individual netCDF files of 3.5 GB each. The files load instantly using
I had a similar issue. I am trying to save a big xarray dataset (~2 GB) using to_netcdf. I tried three approaches, and all of them failed to write the file, causing the Python kernel to hang indefinitely or die. Any suggestions?
Closing as stale.
Problem description
After some processing, I am left with an xarray dataset, ncdat, which I want to export to a netCDF file. But the problem is that it takes an inordinately long time to export: almost 10 minutes for this particular file, which is only 35 MB. How can I expedite this process? Is there anything wrong with the structure of ncdat?

Expected Output
A netCDF file
Output of xr.show_versions()
xarray: 0.12.1
pandas: 0.24.2
numpy: 1.16.2
scipy: 1.2.1
netCDF4: 1.5.0.1
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 1.2.0
distributed: 1.27.0
matplotlib: 3.0.3
cartopy: 0.17.0
seaborn: 0.9.0
setuptools: 41.0.0
pip: 19.0.3
conda: None
pytest: None
IPython: 7.4.0
sphinx: None