
open_mfdataset() memory error in v0.10 #1745

Closed
@nick-weber

Description


Code Sample

import xarray

ncfiles = '/example/path/to/wrf/netcdfs/*'
dropvars = ['list', 'of', 'many', 'vars', 'to', 'drop']

dset = xarray.open_mfdataset(ncfiles, drop_variables=dropvars, concat_dim='Time',
                             autoclose=True, decode_cf=False)

Problem description

I am trying to load 73 model (WRF) output files using open_mfdataset() ('Time' is therefore a new dimension, not one already present in the files). Each netCDF file has dimensions {'x': 405, 'y': 282, 'z': 37} and roughly 20 variables (excluding the other ~20 listed in dropvars).
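For reference, a minimal per-file check (a sketch; the file name below is hypothetical, standing in for any one of the 73 outputs) confirms that a single file opens cleanly with the same options:

import xarray

dropvars = ['list', 'of', 'many', 'vars', 'to', 'drop']

# Hypothetical single file from the set of 73; open it on its own with the
# same options to confirm the per-file structure is as expected.
single = xarray.open_dataset('/example/path/to/wrf/netcdfs/wrfout_first.nc',
                             drop_variables=dropvars, decode_cf=False)
print(single.dims)            # roughly {'x': 405, 'y': 282, 'z': 37}
print(len(single.data_vars))  # roughly 20 variables after dropping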

When I run the above code with v0.9.6, it completes in roughly 7 seconds. But with v0.10, it crashes with the following error:

*** Error in `~/anaconda3/bin/python': corrupted size vs. prev_size: 0x0000560e9b6ca7b0 ***

which, as I understand it, indicates heap memory corruption, i.e. something is writing past its allocation. Any thoughts on what could be the source of this issue?
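One way to narrow this down (a sketch, reusing the same paths and drop list as above) is to bypass open_mfdataset() entirely and build the combined dataset by hand; if this succeeds, the regression is likely in the multi-file open/concat machinery rather than in per-file decoding:

import glob
import xarray

ncfiles = sorted(glob.glob('/example/path/to/wrf/netcdfs/*'))
dropvars = ['list', 'of', 'many', 'vars', 'to', 'drop']

# Open each file individually, then concatenate along the new 'Time' dimension.
datasets = [xarray.open_dataset(f, drop_variables=dropvars, decode_cf=False)
            for f in ncfiles]
dset = xarray.concat(datasets, dim='Time')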

Output of xr.show_versions()

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-3-amd64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C
LANG: C
LOCALE: None.None

xarray: 0.10.0
pandas: 0.20.3
numpy: 1.13.1
scipy: 0.19.1
netCDF4: 1.2.4
h5netcdf: 0.5.0
Nio: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.16.0
matplotlib: 2.0.2
cartopy: None
seaborn: 0.8.0
setuptools: 27.2.0
pip: 9.0.1
conda: 4.3.29
pytest: 3.1.3
IPython: 6.1.0
sphinx: 1.6.2
