
Fix "Chunksize cannot exceed dimension size" #1707

Merged: 3 commits, Nov 13, 2017
doc/whats-new.rst (2 additions, 0 deletions)

@@ -49,6 +49,8 @@ Bug fixes

 - Fixed ``apply_ufunc`` with ``dask='parallelized'`` for scalar arguments
   (:issue:`1697`).
+- Fix "Chunksize cannot exceed dimension size" error when writing netCDF4 files
+  loaded from disk (:issue:`1225`).
   By `Stephan Hoyer <https://github.com/shoyer>`_.

Testing
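For context, the failure this changelog entry fixes can be sketched as follows (a minimal sketch; the file and dimension names are illustrative, not taken from the PR):

```python
import xarray as xr

# A chunked netCDF4 file records its on-disk chunk sizes in .encoding.
ds = xr.open_dataset('chunked.nc')

# Selecting a subset can make a dimension smaller than the stored chunks.
subset = ds.isel(time=slice(0, 2))

# Before this fix, writing the subset re-used the stale 'chunksizes'
# encoding and netCDF4 raised "chunksize cannot exceed dimension size".
subset.to_netcdf('subset.nc')
```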
xarray/backends/netCDF4_.py (6 additions, 4 deletions)

@@ -156,10 +156,12 @@ def _extract_nc4_variable_encoding(variable, raise_on_invalid=False,
     if lsd_okay:
         valid_encodings.add('least_significant_digit')

-    if (encoding.get('chunksizes') is not None and
-            (encoding.get('original_shape', variable.shape) !=
-             variable.shape) and not raise_on_invalid):
-        del encoding['chunksizes']
+    if not raise_on_invalid and 'chunksizes' in encoding:
+        chunks_too_big = any(
+            c > d for c, d in zip(encoding['chunksizes'], variable.shape))
Review comment (Member): It looks like you'll need to make sure
`encoding['chunksizes']` is an iterable of length `variable.ndim`.
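A minimal sketch of the guard being suggested; the helper and its name are hypothetical, not code from this PR:

```python
def _chunks_too_big(encoding, variable):
    # Hypothetical helper: only compare chunk sizes element-wise when
    # 'chunksizes' is a sized iterable matching the variable's ndim.
    chunksizes = encoding.get('chunksizes')
    if chunksizes is None or not hasattr(chunksizes, '__len__'):
        return False
    if len(chunksizes) != variable.ndim:
        return False
    return any(c > d for c, d in zip(chunksizes, variable.shape))
```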

+        changed_shape = encoding.get('original_shape') != variable.shape
+        if chunks_too_big or changed_shape:
+            del encoding['chunksizes']
Review comment (Member): This looks fine. Can you add a comment here that
explains that we are dropping the encoding's chunksizes so that
netCDF4-python can write this dataset?
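One way the requested comment might read (the wording is an assumption, not the PR's final text):

```python
        if chunks_too_big or changed_shape:
            # netCDF4 refuses to create a variable whose chunks exceed its
            # dimension sizes, so drop the stale 'chunksizes' encoding and
            # let netCDF4-python choose valid chunk sizes on write.
            del encoding['chunksizes']
```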


     for k in safe_to_drop:
         if k in encoding:
xarray/tests/test_backends.py (16 additions, 0 deletions)

@@ -909,6 +909,21 @@ def test_compression_encoding(self):
         with self.roundtrip(expected) as actual:
             self.assertDatasetEqual(expected, actual)

+    def test_encoding_chunksizes_unlimited(self):
+        # regression test for GH1225
+        ds = Dataset({'x': [1, 2, 3], 'y': ('x', [2, 3, 4])})
+        ds.variables['x'].encoding = {
+            'zlib': False,
+            'shuffle': False,
+            'complevel': 0,
+            'fletcher32': False,
+            'contiguous': False,
+            'chunksizes': (2 ** 20,),
+            'original_shape': (3,),
+        }
+        with self.roundtrip(ds) as actual:
+            self.assertDatasetEqual(ds, actual)
+
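Outside the test harness, the same round trip might look like this (a sketch assuming the netCDF4 backend is available; the output path is illustrative):

```python
import xarray as xr

ds = xr.Dataset({'x': [1, 2, 3], 'y': ('x', [2, 3, 4])})
# Simulate stale encoding left over from a larger on-disk file.
ds['x'].encoding.update({'contiguous': False,
                         'chunksizes': (2 ** 20,),
                         'original_shape': (3,)})
# With the fix, the oversized chunk sizes are dropped instead of raising
# "chunksize cannot exceed dimension size".
ds.to_netcdf('roundtrip.nc')
print(xr.open_dataset('roundtrip.nc'))
```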

     def test_mask_and_scale(self):
         with create_tmp_file() as tmp_file:
             with nc4.Dataset(tmp_file, mode='w') as nc:

@@ -1230,6 +1245,7 @@ def test_encoding_unlimited_dims(self):
                             save_kwargs=dict(unlimited_dims=['y'])) as actual:
             self.assertEqual(actual.encoding['unlimited_dims'], set('y'))
             self.assertDatasetEqual(ds, actual)
+
         ds.encoding = {'unlimited_dims': ['y']}
         with self.roundtrip(ds) as actual:
             self.assertEqual(actual.encoding['unlimited_dims'], set('y'))