BUG: Reading csv files with numbers with multiple leading zeros loses a lot of precision #39514
Comments
Hi, thanks for your report. Could you trim your example down a bit to only the relevant information? Additionally, it would be great if you could provide something that does not need a CSV file as input.
Hi Patrick! A minimal reproducible example can indeed be written without the file input by using the StringIO class. Here is the code doing that. Please let me know if there is anything I can do to help.
Sincerely,
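The snippet referenced in this comment was not preserved in the thread; a minimal sketch of such a StringIO-based reproduction might look like the following (the column name `a` and the single data row are illustrative assumptions):

```python
from io import StringIO

import pandas as pd

# A value whose trailing digit is lost by the affected high-precision parser.
csv_data = "a\n0.00000000000001953\n"

# float_precision="high" is the C-engine default as of pandas 1.2.0.
df = pd.read_csv(StringIO(csv_data), float_precision="high")

# On affected versions (pandas 1.2.0-1.2.2) the trailing 3 is lost
# and the value comes back as roughly 1.95e-14.
print(df["a"].iloc[0])
```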
Moving to 1.2.3.
@simonjayhawkins Looks like @CalebBell created a PR to address this.
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
(optional) I have confirmed this bug exists on the master branch of pandas.
Code Sample, a copy-pastable example
Problem description
Pandas no longer reads a CSV file containing numbers such as "0.00000000000001953" correctly. In this case, pandas parses that value as 1.950000e-14, a clear loss of precision. This appears to be a bug in the new "high precision" floating-point parsing engine that was made the default in pandas 1.2.0.
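For reference, Python's own float parser round-trips this value exactly, which shows that the digits are representable in a 64-bit float and that the loss happens inside pandas' parser rather than in the float format itself:

```python
# Python's built-in parser preserves all the significant digits here.
x = float("0.00000000000001953")
print(repr(x))  # 1.953e-14
```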
Expected Output
1.953e-14 (the value exactly as written in the file)
Output of pd.show_versions():
INSTALLED VERSIONS
commit : 9d598a5
python : 3.8.7.final.0
python-bits : 64
OS : Linux
OS-release : 5.7.0-2-amd64
Version : #1 SMP Debian 5.7.10-1 (2020-07-26)
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_CA.UTF-8
LOCALE : en_CA.UTF-8
pandas : 1.2.1
numpy : 1.19.2
pytz : 2020.5
dateutil : 2.8.1
pip : 20.1.1
setuptools : 51.3.3
Cython : 0.29.21
pytest : 6.1.2
hypothesis : None
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.6.1
html5lib : 1.1
pymysql : None
psycopg2 : 2.8.5 (dt dec pq3 ext lo64)
jinja2 : 2.11.2
IPython : 7.17.0
pandas_datareader: None
bs4 : 4.9.3
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.3.1
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.22
tables : 3.6.1
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.52.0
I was able to work around this with the {'float_precision': 'legacy'} option, but this is not great behavior, and old versions of the library I wrote that are affected by this issue will silently break.
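For completeness, the workaround mentioned above is passed as a keyword argument to read_csv; a sketch (again using StringIO and an illustrative column name rather than the original CSV file) might look like this:

```python
from io import StringIO

import pandas as pd

csv_data = "a\n0.00000000000001953\n"

# "legacy" selects the pre-1.2.0 float parser, which avoids this
# particular precision loss in the "high" precision parser.
df = pd.read_csv(StringIO(csv_data), float_precision="legacy")
print(df["a"].iloc[0])
```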