Use table.iter API with a readable stream for loading large data files #50
Might need to use worker threads for larger data files too ... https://nodejs.org/api/worker_threads.html
Tried it with the Vizgen mouse genome data: https://vizgen.com/data-release-program/ One of their data files is ~207 MB, with 650 columns and over 78K rows. That's about 50 million dense, wide-column, mostly numeric data points to parse. It took a while to load, and the Tabulator table is not very responsive on scrolling. I might need to move the reading of large data files to a worker thread because of all the CSV line parsing. Docs on Node.js worker threads: https://nodejs.org/api/worker_threads.html Even with the added |
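A minimal sketch of what moving the CSV parsing to a worker thread could look like. The worker file name (`csv-worker.js`), the batch size, and the naive split-based line parsing are placeholders for illustration, not anything already in this repo:

```js
// main thread: spawn a worker to parse the file, collect row batches
// as they arrive so the UI stays responsive
const { Worker } = require('worker_threads')

function parseCsvInWorker (filePath) {
  return new Promise((resolve, reject) => {
    // hypothetical worker script, see sketch below
    const worker = new Worker('./csv-worker.js', { workerData: { filePath } })
    const rows = []
    worker.on('message', (batch) => rows.push(...batch))
    worker.on('error', reject)
    worker.on('exit', (code) => {
      if (code !== 0) reject(new Error(`worker stopped with exit code ${code}`))
      else resolve(rows)
    })
  })
}
```

```js
// csv-worker.js: reads and parses lines off the main thread,
// posting batches of rows back to the parent
const { parentPort, workerData } = require('worker_threads')
const fs = require('fs')
const readline = require('readline')

async function run () {
  const rl = readline.createInterface({
    input: fs.createReadStream(workerData.filePath),
    crlfDelay: Infinity
  })
  let batch = []
  for await (const line of rl) {
    // placeholder parsing: a real CSV parser is needed to handle
    // quoting and escaped delimiters
    batch.push(line.split(','))
    if (batch.length >= 5000) {
      parentPort.postMessage(batch)
      batch = []
    }
  }
  if (batch.length > 0) parentPort.postMessage(batch)
}

run()
```

Posting rows in batches rather than one at a time keeps the structured-clone copying done by postMessage from dominating the parsing work.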
Let the Tabulator table handle it for now
…to table view on init and add data requests (#50)
…ests after initial table is created (#50)
See the changes in #49 and the Table Schema docs: https://github.com/frictionlessdata/tableschema-js#tableiterkeyed-extended-cast-forcecast-relations-stream--asynciterator--stream
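For reference, a minimal sketch of the stream-based iteration those docs describe, assuming the tableschema package; the file path and row handler are placeholders:

```js
const { Table } = require('tableschema')

async function loadLargeCsv (path) {
  const table = await Table.load(path)
  // stream: true yields a Node readable stream of rows instead of
  // buffering the whole file in memory at once
  const rowStream = await table.iter({ keyed: true, stream: true })
  rowStream.on('data', (row) => {
    // e.g. append the row to the Tabulator view incrementally
  })
  return new Promise((resolve, reject) => {
    rowStream.on('end', resolve)
    rowStream.on('error', reject)
  })
}
```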