1 file changed: +3 −3

@@ -423,7 +423,7 @@ library::
     combined = xr.concat(datasets, dim)
     return combined
 
-read_netcdfs('/all/my/files/*.nc', dim='time')
+combined = read_netcdfs('/all/my/files/*.nc', dim='time')
 
 This function will work in many cases, but it's not very robust. First, it
 never closes files, which means it will fail if you need to load more than

@@ -454,8 +454,8 @@ deficiencies::
 
 # here we suppose we only care about the combined mean of each file;
 # you might also use indexing operations like .sel to subset datasets
-read_netcdfs('/all/my/files/*.nc', dim='time',
-             transform_func=lambda ds: ds.mean())
+combined = read_netcdfs('/all/my/files/*.nc', dim='time',
+                        transform_func=lambda ds: ds.mean())
 
 This pattern works well and is very robust. We've used similar code to process
 tens of thousands of files constituting 100s of GB of data.
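The pattern behind this diff (open each file in a `with` block so it is closed promptly, optionally reduce it with `transform_func` while it is open, then combine the now-small results at the end) can be sketched without any netCDF dependencies. The sketch below is a hypothetical plain-text analogue, not xarray's implementation: `read_files`, the sample files, and the list-based combine step stand in for `read_netcdfs`, `xr.open_dataset`, and `xr.concat`.

```python
import glob
import os
import tempfile

def read_files(pattern, transform_func=None):
    # Same structure as read_netcdfs: open one file at a time with a
    # context manager (so it is closed before the next one is opened),
    # optionally reduce it with transform_func, combine at the end.
    def process_one_path(path):
        with open(path) as f:           # closed as soon as the block exits
            data = f.read().split()     # eager load, analogous to ds.load()
        if transform_func is not None:
            data = transform_func(data)
        return data

    paths = sorted(glob.glob(pattern))
    results = [process_one_path(p) for p in paths]
    combined = [item for r in results for item in r]  # stand-in for xr.concat
    return combined

# usage: keep only a per-file mean, mirroring transform_func=lambda ds: ds.mean()
tmpdir = tempfile.mkdtemp()
for i, nums in enumerate([[1, 2, 3], [4, 5, 6]]):
    with open(os.path.join(tmpdir, f"{i}.txt"), "w") as f:
        f.write(" ".join(str(n) for n in nums))

means = read_files(os.path.join(tmpdir, "*.txt"),
                   transform_func=lambda xs: [sum(map(float, xs)) / len(xs)])
print(means)  # [2.0, 5.0]
```

Because each file's data is reduced before the next file is opened, memory use stays proportional to one file plus the accumulated results, which is what lets this pattern scale to thousands of files.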