With jagged CSVs, we risk freeing memory too eagerly: earlier chunks
may contain much larger rows than we can anticipate from subsequent
chunks, so buffer capacity sized for those earlier chunks is still
needed and must not be reclaimed prematurely.

Closes gh-23509.
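The scenario described above can be sketched as follows. This is a minimal illustration, not the reproduction from the issue: the data is hypothetical, with one very wide row early in the file followed by many narrow rows, read in chunks through the C engine so the parser must carry its buffer sizing across chunk boundaries.

```python
import io

import pandas as pd

# Hypothetical "jagged" CSV: the first data row is far wider than the rest.
# When read in chunks, the C parser sizes its buffers from early chunks and
# must not discard capacity that later chunks still rely on.
data = (
    "x,y\n"
    + "a" * 1000 + ",1\n"                          # one very wide row
    + "\n".join(f"b,{i}" for i in range(10))       # ten narrow rows
    + "\n"
)

# Read in small chunks with the C engine; all 11 rows should survive.
chunks = pd.read_csv(io.StringIO(data), engine="c", chunksize=4)
total = sum(len(chunk) for chunk in chunks)
print(total)  # 11
```

Any chunk size works here; 4 is chosen only so that the wide row and the narrow rows land in different chunks.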
doc/source/whatsnew/v0.24.0.txt
@@ -1298,6 +1298,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form
- :func:`read_excel()` will correctly show the deprecation warning for previously deprecated ``sheetname`` (:issue:`17994`)
- :func:`read_csv()` and :func:`read_table()` will throw ``UnicodeError`` and not core dump on badly encoded strings (:issue:`22748`)
- :func:`read_csv()` will correctly parse timezone-aware datetimes (:issue:`22256`)
+ - Bug in :func:`read_csv()` in which memory management was prematurely optimized for the C engine when the data was being read in chunks (:issue:`23509`)
- :func:`read_sas()` will correctly parse numbers in sas7bdat files that have width less than 8 bytes (:issue:`21616`)
- :func:`read_sas()` will correctly parse sas7bdat files with many columns (:issue:`22628`)
- :func:`read_sas()` will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (:issue:`16615`)
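The timezone-aware parsing entry above (:issue:`22256`) can be illustrated with a small sketch. This is an assumed, minimal example rather than the test case from the issue: a CSV column with a consistent UTC offset, parsed via ``parse_dates``, which should come back as a timezone-aware datetime column rather than plain strings.

```python
import io

import pandas as pd

# Hypothetical data: timestamps carrying a consistent +09:00 offset.
data = (
    "ts,val\n"
    "2018-01-01 00:00:00+09:00,1\n"
    "2018-01-02 00:00:00+09:00,2\n"
)

df = pd.read_csv(io.StringIO(data), parse_dates=["ts"])
print(df["ts"].dtype)  # a timezone-aware datetime64 dtype
```

Because all rows share the same offset, the column can be represented with a single fixed-offset timezone; mixed offsets would behave differently.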