Large http file download fails in 2.4.0 rc1 #3273

Closed
bbx10 opened this issue May 22, 2017 · 10 comments

@bbx10
Contributor

bbx10 commented May 22, 2017

Hardware

Hardware: ESP12
Core Version: 2.4.0 rc1

Description

Run the FSBrowser example, then upload a 1 MB file to SPIFFS. The ESP12 has 4 MB flash with 3 MB reserved for SPIFFS. The upload works fine.

The problem occurs when downloading the large file using curl or wget: the transfer aborts after a few hundred kilobytes.

$ curl http://192.168.x.y/1MB.DAT >/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 36 1024k   36  376k    0     0  53976      0  0:00:19  0:00:07  0:00:12 45560
curl: (18) transfer closed with 663240 bytes remaining to read

The last commit where the large file download works is 2aeac91. The commits that follow modify WiFiClient.cpp and WiFiClient.h, but the code is way over my head.

@igrr
Member

igrr commented May 22, 2017

Thanks for pointing this out. WiFiClient::write now times out after _timeout milliseconds (which defaults to 5000), and this doesn't depend on the amount of data written. I suppose in ESP8266WebServer we need to increase this timeout depending on data length.

@JamesGKent

I have to admit I haven't looked at the code (it would probably be over my head), but assuming this writes in chunks, wouldn't it make more sense to reset the timeout each time a chunk is written successfully? That way the timeout setting would still have meaning, but normal writes would be allowed to succeed. Just a thought.

@igrr
Member

igrr commented May 22, 2017

Splitting into chunks currently happens inside WiFiClient::write, but the timeout is applied to the whole write operation. Turning it into a per-chunk timeout does make sense, I guess.

@bbx10
Contributor Author

bbx10 commented May 22, 2017

WiFiClient::write(Stream& stream) is passed the 1 MB file stream, so the write() timeout has to cover the entire file transfer.

@JamesGKent

@bbx10 You say it has to, but that doesn't make sense: with a large file it will always time out, given the ESP's throughput limitations. It makes more sense to time out after a period of inactivity, and I think it is more logical to treat each chunk as an activity rather than the whole write. That way, if the connection is terminated early, the write will time out 5 seconds after the last successful chunk.

@igrr
Member

igrr commented May 22, 2017

Unfortunately the other WiFi libraries (the WiFi shield library and the WiFi101 library) don't help answer the question of how Stream::_timeout should be applied to write operations. Both delegate timeout handling to the underlying firmware, so _timeout isn't actually used for writes.

Personally I think that being able to specify timeouts for writes is useful. The proper solution would probably be to define a per-chunk (or per-byte) timeout plus a total timeout, similar to the way serial timeouts are defined in WinAPI. But that wouldn't fit the Stream::setTimeout idiom, which exposes only a single value.

@bbx10
Contributor Author

bbx10 commented May 22, 2017

@JamesGKent I am describing how the code works right now. The default 5 second timeout IS terminating the file transfer. I confirmed this by increasing it to 20 seconds and observing that the transfer finished. I will let @igrr decide how to fix it.

igrr added a commit that referenced this issue May 22, 2017
WiFiClient write timeouts introduced in #3257 were applied to the whole write
operation, which could take a long time if the data size was large. This
change makes the timeout apply per chunk: it now fires only if no
data has been delivered within the given interval.
@igrr
Member

igrr commented May 22, 2017

Fixed in d6f1f0d. Timeout now happens if no data was sent for _timeout milliseconds.

@bbx10
Contributor Author

bbx10 commented May 22, 2017

I confirmed d6f1f0d fixes the reported bug.

@bbx10 bbx10 closed this as completed May 22, 2017
@igrr
Member

igrr commented May 23, 2017

Will keep this open for reference, until the fix is released.

@igrr igrr reopened this May 23, 2017
@igrr igrr closed this as completed Jan 2, 2018