json_normalize gives KeyError in 0.23 #21158
Comments
I couldn't reproduce your error with the information provided (I was getting other errors). Can you please update it so the example can be fully copy/pasted to reproduce?
I'm not sure what errors you are getting. Here's a version with the JSON contents directly in the Python file as a dict:

from pandas import show_versions
from pandas.io.json import json_normalize
print(show_versions())
d = {
"subject": {
"pairs": {
"A1-A2": {
"atlases": {
"avg.corrected": {
"region": None,
"x": 49.151580810546875,
"y": -33.148521423339844,
"z": 27.572303771972656
}
}
}
}
}
}
normed = json_normalize(d)
print(normed)

This results in the same error.
Running his code, I get the same error as well. https://github.com/pandas-dev/pandas/blob/master/pandas/io/json/normalize.py#L79 I think the problem is here. If I add two print statements, I get the following printout:

cond1: True

So new_d is getting popped twice, which causes an error the second time? I added a continue in the code at

if level != 0:  # so we skip copying for top level, common case

and the code looks to run fine.
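For context, here is a paraphrase of the block being discussed inside nested_to_record (pandas/io/json/normalize.py as of 0.23.0); treat it as an illustrative sketch rather than the verbatim source. The key k is popped once to be renamed, and then popped again when the value is None, which fails because the original key no longer exists:

# inside nested_to_record's loop over d.items() (paraphrased sketch)
if not isinstance(v, dict):
    if level != 0:  # so we skip copying for top level, common case
        v = new_d.pop(k)   # k is removed here...
        new_d[newkey] = v  # ...and stored again under the prefixed name
    if v is None:          # pop the key if the value is None
        new_d.pop(k)       # KeyError when level != 0, since k was already popped
    continue

Adding a continue at the end of the level != 0 branch skips the second pop, which matches the observation above that the example then runs.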
Also, as mivade pointed out, these lines of code were changed in #20399.
I ran into this same error: when level != 0 and v is None, the code tries to pop k twice. @ssikdar1 Do you know what the purpose of popping k in the first place is? If the intention was to not include keys whose value is None, then your update undermines that intention: it will only remove keys with None values at the first level of the dictionary. That inconsistency may be confusing to users. I would either make sure to exclude all keys whose value is None, e.g.:
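A minimal sketch of that first option, written against the same nested_to_record loop (an illustrative paraphrase, not the snippet from the original comment):

# inside nested_to_record's loop over d.items() (illustrative sketch)
if not isinstance(v, dict):
    if v is None:      # drop None-valued keys consistently at every level
        new_d.pop(k)
        continue
    if level != 0:     # so we skip copying for top level, common case
        new_d[newkey] = new_d.pop(k)
    continue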
or perhaps not bother popping k when v is None in the first place. I don't know what the intention behind removing keys with None values was, so this is my guess.
@lauraathena I see your point. Based on the test cases the previous commit uses, it looks like they were only considering None values at the first level of the dictionary.
#20030 But I'm also not sure what they wanted the expected behavior to be when level != 0.
@lauraathena @ssikdar1
I would strongly like to keep an option to preserve the 0.22.0 behavior (for level = 0 and level != 0).
My use case involves processing batches of JSON documents from an API and then storing them alongside others. Handling exceptions for when a given batch happens to have None everywhere and is suddenly missing one or more columns would be a large headache. I imagine this is relatively common.
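To illustrate the concern with a sketch (not from this thread): if None-valued keys are dropped during normalization, a batch where some field is always None comes back without that column. One generic way to keep a stable schema before storing, using only json_normalize and DataFrame.reindex (the helper name and column list are purely for illustration, based on the example document above):

from pandas.io.json import json_normalize

# Columns every stored batch is expected to have (taken from the example above).
EXPECTED_COLUMNS = [
    "subject.pairs.A1-A2.atlases.avg.corrected.region",
    "subject.pairs.A1-A2.atlases.avg.corrected.x",
    "subject.pairs.A1-A2.atlases.avg.corrected.y",
    "subject.pairs.A1-A2.atlases.avg.corrected.z",
]

def normalize_batch(records):
    # reindex guarantees the full column set, filling any missing columns with NaN
    return json_normalize(records).reindex(columns=EXPECTED_COLUMNS)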
Code Sample, a copy-pastable example if possible
The test.json file is rather lengthy, with a structure similar to the minimal nested dict reproduced in the comments above; that minimal version is enough to show the error below.
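A copy-pastable reproduction of this shape, assuming test.json holds the minimal nested structure from the comments above (this script is an assumed illustration, not the reporter's original):

import json

from pandas.io.json import json_normalize

# Load the minimal test.json and flatten it. On pandas 0.23.0 this raises a
# KeyError inside nested_to_record; on 0.22.0 it flattens without error.
with open("test.json") as f:
    data = json.load(f)

df = json_normalize(data)
print(df)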
Problem description
This problem is new in pandas 0.23: calling json_normalize on this structure raises a KeyError, with the traceback pointing into pandas/io/json/normalize.py.
Note that running the same code on pandas 0.22 does not result in any errors. I suspect this could be related to #20399.
Expected Output
Expected output is a flattened DataFrame without any errors.
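Concretely, for the minimal dict d defined in the comments above, the pre-0.23 behavior was along these lines (a sketch; exact column order and the rendering of the None value may differ):

# Continuing from the reproduction in the comments above, where d and
# json_normalize are already defined/imported.
normed = json_normalize(d)
print(list(normed.columns))
# Expected: one row, with the dotted key paths as columns, roughly:
# ['subject.pairs.A1-A2.atlases.avg.corrected.region',
#  'subject.pairs.A1-A2.atlases.avg.corrected.x',
#  'subject.pairs.A1-A2.atlases.avg.corrected.y',
#  'subject.pairs.A1-A2.atlases.avg.corrected.z']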
Output of pd.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Darwin
OS-release: 16.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.0
pytest: 3.5.1
pip: 9.0.1
setuptools: 38.4.0
Cython: None
numpy: 1.14.2
scipy: None
pyarrow: None
xarray: 0.10.3
IPython: 6.3.1
sphinx: 1.7.2
patsy: None
dateutil: 2.7.2
pytz: 2018.3
blosc: None
bottleneck: None
tables: 3.4.2
numexpr: 2.6.4
feather: None
matplotlib: 2.2.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.2.7
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None