* Added client caching to reuse an existing S3 client rather than creating a new one for each upload. Fixes #6
* Updated the maxPartSize to be a hard limit instead of a soft one so that generated ETags are consistent due to the reliable size of the uploaded parts. Fixes #7
* Added this file. Fixes #8
* New feature: concurrent part uploads. You can now optionally enable concurrent part uploads if you wish to allow your application to drain the source stream more quickly and absorb some of the backpressure from a fast incoming stream when uploading to S3.
### 0.4.0 (2014-06-23)
* Now with better error handling. If an error occurs while uploading a part to S3 or while completing a multipart upload, the in-progress multipart upload will be aborted (to delete the already uploaded parts from S3) and a more descriptive error message will be emitted instead of the raw error response from S3.
### 0.3.0 (2014-05-06)
* Added tests using a stubbed out version of the Amazon S3 client. These tests will ensure that the upload stream behaves properly, calls S3 correctly, and emits the proper events.
* Added Travis integration
* Also fixed a bug in the functionality for dynamically adjusting the part size.
### 0.2.0 (2014-04-25)
* Fixed a race condition bug that occurred occasionally with streams very close to the 5 MB size threshold, where the multipart upload would be finalized on S3 prior to the last data buffer being flushed, cutting off the last part of the stream in the resulting S3 file. (Notice: If you are using an older version of this module I highly recommend upgrading to get this latest bugfix.)
* Added a method for adjusting the part size dynamically.
### 0.1.0 (2014-04-17)
* Code cleanups and stylistic goodness.
* Made the connection parameters optional for those who are following Amazon's best practices of allowing the SDK to get AWS credentials from environment variables or IAM roles.
### 0.0.3 (2013-12-25)
* Merged pull request #2 to fix an issue where the latest version of the AWS SDK required a strict type on the part number.
README.md
### Changelog
## 0.5.0 (2014-08-11)
* Added client caching to reuse an existing S3 client rather than creating a new one for each upload. Fixes #6
* Updated the maxPartSize to be a hard limit instead of a soft one so that generated ETags are consistent due to the reliable size of the uploaded parts. Fixes #7
* Added a changelog.md file. Fixes #8
* New feature: concurrent part uploads. Now you can optionally enable concurrent part uploads if you wish to allow your application to drain the source stream more quickly and absorb some of the backpressure from a fast incoming stream when uploading to S3.
[Historical Changelogs](CHANGELOG.md)
### Why use this stream?
### stream.concurrentParts(numberOfParts)
Used to adjust the number of parts that are uploaded to S3 concurrently. By default this is just one at a time, to keep memory usage low and allow the upstream to deal with backpressure. However, in some cases you may wish to drain the incoming stream quickly and then issue concurrent upload requests to upload multiple parts at once.
Keep in mind that total memory usage will be at least `maxPartSize` * `concurrentParts`, since each concurrent part buffers up to `maxPartSize` bytes; for example, a 20 MB `maxPartSize` with 5 concurrent parts can hold at least 100 MB in memory at once. It is therefore not recommended to set both `maxPartSize` and `concurrentParts` to high values, or your process will buffer large amounts of data in memory.
```js
var UploadStreamObject = new Uploader(
  {
    "Bucket": "your-bucket-name",
    "Key": "uploaded-file-name " + new Date()
  },
  function (err, uploadStream)
  {
    uploadStream.concurrentParts(5);

    uploadStream.on('uploaded', function (data) {
      console.log('done');
    });

    read.pipe(uploadStream);
  }
);
```
### Tuning configuration of the AWS SDK
The following configuration tuning can help prevent errors when using less reliable internet connections (such as 3G data if you are using Node.js on the Tessel) by causing the AWS SDK to detect upload timeouts and retry.
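As a minimal sketch, assuming the standard `aws-sdk` v2 global configuration options, such tuning might look like the following; the specific timeout and retry values are illustrative assumptions rather than recommendations from this module:

```js
var AWS = require('aws-sdk');

// Illustrative values (assumptions): give up on a slow request after
// 5 seconds and let the SDK retry failed requests up to 10 times.
AWS.config.update({
  maxRetries: 10,
  httpOptions: { timeout: 5000 }
});
```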