Output filtered data to a netCDF file #14
While I believe I have the core functionality and some unit tests in place, I don't know whether this is quite ready to merge. However, it does deserve some real-world testing. One issue I suspect will become more apparent is #27 (warn on existing output), since this will wipe existing filtered output from an experiment of the same name! I've tagged this as
Using the source files and the variable/dimension dictionaries, a netCDF dataset is created to hold the filtered data. The approach I've taken here is pretty ugly and makes too many assumptions. Ideally, I'd like to use xarray to do most of the work, but incremental writes to a dataset are not supported there.
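The pattern described above can be sketched as follows. This is a minimal illustration using plain dictionaries in place of the real netCDF API; the names `create_out_plan`, `dimensions`, and `variables` are hypothetical, not the project's actual interface.

```python
# Sketch: declare every dimension and variable up front, so filtered
# data can later be written slice-by-slice along the record dimension.
# All names here are hypothetical stand-ins for the real netCDF setup.

def create_out_plan(dimensions, variables):
    """Build a plan for the output dataset from dimension sizes and
    per-variable dimension tuples."""
    plan = {"dimensions": {}, "variables": {}}
    for name, size in dimensions.items():
        # Leave the record (time) dimension unlimited (size None) so
        # filtered timesteps can be appended incrementally.
        plan["dimensions"][name] = None if name == "time" else size
    for name, dims in variables.items():
        missing = [d for d in dims if d not in dimensions]
        if missing:
            raise ValueError(f"variable {name!r} uses undeclared dims {missing}")
        plan["variables"][name] = {"dims": dims, "written_records": 0}
    return plan

plan = create_out_plan(
    {"time": 10, "lat": 4, "lon": 5},
    {"u": ("time", "lat", "lon"), "v": ("time", "lat", "lon")},
)
```

With the real netCDF4 library the same shape of code would call `createDimension` and `createVariable` on a `Dataset`, then assign slices such as `var[i, ...]` per timestep.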
This is really just the most basic case, and doesn't cover any tricky corner cases.
This adds a few more test cases. In particular, the staggered test case caught missing behaviour when variables have their own separate dimensions (as is the case for staggered variables).
I had been testing wildcard filenames before, and in those cases it seems like parcels expanded the wildcard when creating the FieldSet. However, with two wildcards, this wasn't happening. We manually expand wildcards when querying the input datasets for the metadata needed to create the output file. Closes #34.
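The manual expansion can be sketched with the standard library's `glob` module. The helper name `expand_wildcards` is hypothetical; the project's actual expansion point may differ.

```python
import glob
import os
import tempfile

def expand_wildcards(filenames):
    """Expand any glob patterns in a list of filenames, leaving
    literal paths untouched. (Hypothetical helper; shown for
    illustration only.)"""
    expanded = []
    for name in filenames:
        matches = sorted(glob.glob(name))
        # Keep an unmatched pattern as-is so the downstream open call
        # can raise a meaningful error instead of silently skipping it.
        expanded.extend(matches if matches else [name])
    return expanded

# Example: expand a wildcard pattern over a temporary directory.
with tempfile.TemporaryDirectory() as d:
    for fn in ["u_000.nc", "u_001.nc", "v_000.nc"]:
        open(os.path.join(d, fn), "w").close()
    u_files = expand_wildcards([os.path.join(d, "u_*.nc")])
```

Sorting the matches keeps the file order deterministic, which matters when the metadata is read from the first file in the list.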
This adds a flag to filtering and create_out to control whether an existing output file should be clobbered or not. Closes #27.
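A minimal sketch of the clobber guard, assuming a simplified `create_out` signature (the real function also sets up the netCDF dimensions and variables):

```python
import os
import tempfile

def create_out(path, clobber=False):
    """Create the output file at `path`, refusing to overwrite an
    existing file unless clobber=True. Sketch only; file creation here
    is a stand-in for the actual netCDF dataset creation."""
    if os.path.exists(path) and not clobber:
        raise FileExistsError(f"{path} exists; pass clobber=True to overwrite")
    with open(path, "w"):
        pass
    return path

# First call creates the file; a second call must opt in to clobbering.
with tempfile.TemporaryDirectory() as d:
    out = os.path.join(d, "filtered.nc")
    create_out(out)
    try:
        create_out(out)
        raised = False
    except FileExistsError:
        raised = True
    create_out(out, clobber=True)
```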
For example, a data variable might only have dims ("time", "lat", "lon"), but a "depth" dimension was specified in indices.
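That mismatch can be handled by restricting the requested indices to the dimensions a variable actually has. A sketch, with the hypothetical helper `restrict_indices`:

```python
def restrict_indices(var_dims, indices):
    """Drop index selections that refer to dimensions the variable
    doesn't have. (Hypothetical helper illustrating the fix; the
    project's actual handling may differ.)"""
    return {dim: sel for dim, sel in indices.items() if dim in var_dims}

# "depth" was specified in indices, but this variable only has
# ("time", "lat", "lon"), so the depth selection is dropped.
restricted = restrict_indices(
    ("time", "lat", "lon"),
    {"depth": [0], "lat": range(10), "lon": range(20)},
)
```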
See #1.