FSStore not handling .zmetadata correctly when dimension_separator is set to / at store level. #1121
Comments
I have also encountered this bug, I am passing […]
[…] points to the key lookup itself. Still investigating, but one thought: add a call to […]
In FSStore, adding this hard-coding corrects the problem, though it has very, very strong code smells.
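(The snippet itself wasn't captured in this thread; below is a sketch of the kind of hard-coding being described, modeled on FSStore._normalize_key as of zarr v2.12. The real patch may differ.)

```python
# Sketch only: FSStore._normalize_key with ".zmetadata" hard-coded into
# the tuple of well-known names, so the consolidated-metadata key is not
# rewritten to "/zmetadata".
def _normalize_key(self, key):
    key = normalize_storage_path(key).lstrip("/")
    if key:
        *bits, end = key.split("/")
        # ".zmetadata" added alongside the spec-defined names:
        if end not in (self._array_meta_key, self._group_meta_key,
                       self._attrs_key, ".zmetadata"):
            end = end.replace(".", self.key_separator)
            key = "/".join(bits + [end])
    return key.lower() if self.normalize_keys else key
```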
The normalization code in FSStore uses a list of well-known files to prevent incorrectly re-writing keys when dimension separator is "/" rather than ".". This was initialized only with the names specified in the spec, which left out ".zmetadata" from consolidated metadata. see: zarr-developers#1121
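(A sketch of what that commit message describes, with assumed identifier names; see the linked issue for the actual change.)

```python
from zarr.storage import array_meta_key, group_meta_key, attrs_key

# Well-known file names whose "." must never be rewritten to the
# dimension separator. The spec-defined names alone omit ".zmetadata",
# the consolidated-metadata key, which is what caused this bug.
_META_KEYS = (attrs_key, group_meta_key, array_meta_key, ".zmetadata")
```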
Now that dimension separator is not part of the store API, this is no longer an issue.
Zarr version: v2.12.0
Numcodecs version: 0.10.2
Python Version: 3.10
Operating System: Linux
Installation: using pip into virtual environment
Description
When creating an FSStore with dimension separator "/", the .zmetadata file also gets renamed to /zmetadata. I have tested this on Google Cloud Storage.
This causes inconsistent behavior in local vs. cloud stores. The local files get structured like this:
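(The original listing wasn't captured; the sketch below reflects the described local layout. The store path and array name are assumptions.)

```
example.zarr/
├── .zgroup
├── zmetadata        <- consolidated metadata, leading "." lost
└── my_array/
    ├── .zarray
    └── 0/0          <- chunks nested, as expected with "/"
```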
Note the missing "." from ".zmetadata".
.And on the cloud storage, this is what happens:
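(Again a sketch, with a hypothetical bucket name; the exact keys weren't captured.)

```
gs://my-bucket/example.zarr/.zgroup
gs://my-bucket/example.zarr//zmetadata    <- extra "/" prefix, missing "."
gs://my-bucket/example.zarr/my_array/.zarray
gs://my-bucket/example.zarr/my_array/0/0
```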
Note the extra prefix "/" before "zmetadata" and the missing ".".
It appears that the local FSStore (or path handler) removes one of the slashes from path_to_root//zmetadata, whereas on the cloud stores it doesn't.

When we use zarr.open_consolidated, this fails in some edge cases, such as: creating a Zarr on-prem and copying it to Google Cloud doesn't work, because the file structures are different.

The ideal behavior would be the local layout above, but with .zmetadata intact: as usual, my_array has a dimension separator set to "/", and Zarr should parse that properly.

One possible workaround is NOT to use the dimension_separator at the store level, but to use it when creating arrays. However, this is prone to error, since we would have to specify it every time we create an array, or else there could be inconsistent "." or "/" arrays within the store.

This works as expected, on both cloud and local:
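(The original snippet wasn't captured; a minimal sketch of the array-level approach on zarr v2.12. The store path, shapes, and names are assumptions.)

```python
import zarr

# No dimension_separator on the store itself.
store = zarr.storage.FSStore("example.zarr", auto_mkdir=True)
root = zarr.group(store=store)

# The separator is set per array instead.
root.create_dataset(
    "my_array",
    shape=(100, 100),
    chunks=(10, 10),
    dtype="f4",
    dimension_separator="/",
)

zarr.consolidate_metadata(store)  # ".zmetadata" keeps its name at the root
```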
Steps to reproduce
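(The reproduction code wasn't captured; the sketch below triggers the renaming locally on zarr v2.12. The path and array parameters are arbitrary.)

```python
import os
import zarr

# dimension_separator set at the STORE level: this is what triggers the bug.
store = zarr.storage.FSStore("example.zarr", dimension_separator="/",
                             auto_mkdir=True)
root = zarr.group(store=store)
root.create_dataset("my_array", shape=(100, 100), chunks=(10, 10), dtype="f4")
zarr.consolidate_metadata(store)

print(sorted(os.listdir("example.zarr")))
# expected: ['.zgroup', '.zmetadata', 'my_array']
# observed: ['.zgroup', 'my_array', 'zmetadata']  <- "." stripped from ".zmetadata"
```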
Additional output
No response