Commit

Merge pull request #106 from yaxia/master
Storage Client Library - 0.7.0
vinjiang committed Dec 18, 2015
2 parents fade01a + 17dce1a commit f548c06
Showing 63 changed files with 31,382 additions and 13,016 deletions.
Empty file added .gitattributes
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -13,4 +13,5 @@ results

npm-debug.log
node_modules
coverage
docs
8 changes: 7 additions & 1 deletion .travis.yml
@@ -1,11 +1,17 @@
language: node_js
node_js:
- "4.1"
- "4.0"
- "0.12"
- "0.10"
- "0.8"

after_script:
- npm run coveralls

install:
- npm install -g [email protected]
- npm --version
- npm install


sudo: false
9 changes: 9 additions & 0 deletions BreakingChanges.txt
@@ -1,3 +1,12 @@
Tracking Breaking Changes in 0.7.0
ALL
* The generateDevelopmentStorageCredendentials function in azure-storage.js is renamed to generateDevelopmentStorageCredentials.

BLOB
* The AppendFromLocalFile function in blobservice.js is renamed to appendFromLocalFile.
* The AppendFromStream function in blobservice.js is renamed to appendFromStream.
* The AppendFromText function in blobservice.js is renamed to appendFromText.
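For callers upgrading from 0.6.x, the PascalCase append methods are simply gone in 0.7.0. A small migration shim can re-alias the old names onto a blob service while call sites are being updated; this helper is hypothetical and not part of the SDK:

```javascript
// Hypothetical migration shim (not part of azure-storage): re-adds the
// pre-0.7.0 PascalCase aliases on top of the renamed camelCase methods.
function addLegacyAppendAliases(blobService) {
  ['appendFromLocalFile', 'appendFromStream', 'appendFromText'].forEach(function (name) {
    // AppendFromText <- appendFromText, etc.
    var legacyName = name.charAt(0).toUpperCase() + name.slice(1);
    if (!blobService[legacyName] && typeof blobService[name] === 'function') {
      blobService[legacyName] = blobService[name].bind(blobService);
    }
  });
  return blobService;
}
```

New code should call the camelCase names directly; the shim only eases a staged migration.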

Tracking Breaking Changes in 0.5.0
ALL
* The suffix "_HEADER" is removed from all the http header constants.
23 changes: 23 additions & 0 deletions ChangeLog.txt
@@ -1,6 +1,29 @@
Note: This is an Azure Storage only package. The full Azure node SDK still contains the old storage bits. In a future release, those storage bits will be removed and replaced with an npm dependency on this storage package. This is a CTP v1 release, and the changes described below are relative to the Azure node SDK 0.9.8 available here - https://github.com/Azure/azure-sdk-for-node.

2015.12 Version 0.7.0

ALL
* Fixed the typo in the function name generateDevelopmentStorageCredentials.
* Fixed the issue that parallel uploading and downloading changed the HTTP global agent settings and impacted other Node.js applications.
* Fixed the issue that the chunked stream writing methods do not accept strings.
* Fixed the issue that the request fails when the content-length is set to the string '0' in the 'sendingRequestEvent' event handler.
* Supported retry on XML parsing errors when the XML in the response body is corrupted.
* Replaced the dependency "mime" with "browserify-mime" to work with Browserify.

BLOB
* Added an option to skip the blob or file size checking prior to the actual downloading.
* Fixed the issue that the callback is not invoked when the internet connection is lost during uploading/downloading.
* Fixed the issue that the local file cannot be removed in the callback when uploading a blob from a local file.
* Fixed the issue that the stream length doesn't work when it is larger than 32MB in the createBlockBlobFromStream, createPageBlobFromStream, createAppendBlobFromStream and appendFromStream functions.
* Fixed the issue that no error is returned in the page range validation when the size exceeds the limit.
* Renamed the function AppendFromLocalFile to appendFromLocalFile.
* Renamed the function AppendFromStream to appendFromStream.
* Renamed the function AppendFromText to appendFromText.

TABLE
* Fixed the issue that listTablesSegmentedWithPrefix with the maxResult option throws an exception.

2015.09 Version 0.6.0

ALL
1 change: 1 addition & 0 deletions README.md
@@ -1,6 +1,7 @@
# Microsoft Azure Storage SDK for Node.js

[![NPM version](https://badge.fury.io/js/azure-storage.svg)](http://badge.fury.io/js/azure-storage) [![Build Status](https://travis-ci.org/Azure/azure-storage-node.svg?branch=master)](https://travis-ci.org/Azure/azure-storage-node)
[![Coverage Status](https://coveralls.io/repos/Azure/azure-storage-node/badge.svg?branch=master&service=github)](https://coveralls.io/github/Azure/azure-storage-node?branch=master)

This project provides a Node.js package that makes it easy to consume and manage Microsoft Azure Storage Services.

4 changes: 2 additions & 2 deletions lib/azure-storage.js
@@ -23,10 +23,10 @@ var exports = module.exports;
* @return {string} A connection string representing the development storage credentials.
* @example
* var azure = require('azure-storage');
* var devStoreCreds = azure.generateDevelopmentStorageCredendentials();
* var devStoreCreds = azure.generateDevelopmentStorageCredentials();
* var blobService = azure.createBlobService(devStoreCreds);
*/
exports.generateDevelopmentStorageCredendentials = function (proxyUri) {
exports.generateDevelopmentStorageCredentials = function (proxyUri) {
var devStore = 'UseDevelopmentStorage=true;';
if(proxyUri){
devStore += 'DevelopmentStorageProxyUri=' + proxyUri;
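The renamed helper just builds a development-storage connection string. A condensed, self-contained sketch of the logic shown in the diff above:

```javascript
// Condensed sketch of generateDevelopmentStorageCredentials (logic copied
// from the diff above): builds a development storage connection string,
// optionally pointing at a storage emulator proxy.
function generateDevelopmentStorageCredentials(proxyUri) {
  var devStore = 'UseDevelopmentStorage=true;';
  if (proxyUri) {
    devStore += 'DevelopmentStorageProxyUri=' + proxyUri;
  }
  return devStore;
}

console.log(generateDevelopmentStorageCredentials('http://127.0.0.1:10000'));
// UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1:10000
```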
2 changes: 1 addition & 1 deletion lib/common/common.js
@@ -65,4 +65,4 @@ exports.ISO8061Date = require('./util/iso8061date');
exports.util = require('./util/util');
exports.validate = require('./util/validate');
exports.StorageUtilities = require('./util/storageutilities');
exports.AccessCondition = require('./util/accesscondition');
exports.AccessCondition = require('./util/accesscondition');
2 changes: 1 addition & 1 deletion lib/common/filters/exponentialretrypolicyfilter.js
@@ -80,7 +80,7 @@ ExponentialRetryPolicyFilter.prototype.shouldRetry = function (statusCode, reque
retryData.retryInterval = Math.min(this.minRetryInterval + incrementDelta, this.maxRetryInterval);
retryData.retryable = retryData.retryCount ? retryData.retryCount < this.retryCount : true;

return RetryPolicyFilter._shouldAbsorbConditionalError(statusCode, requestOptions);
return RetryPolicyFilter._shouldRetryOnError(statusCode, requestOptions);
};

/**
2 changes: 1 addition & 1 deletion lib/common/filters/linearretrypolicyfilter.js
@@ -60,7 +60,7 @@ LinearRetryPolicyFilter.prototype.shouldRetry = function (statusCode, requestOpt
retryData.retryInterval = this.retryInterval;
retryData.retryable = retryData.retryCount ? retryData.retryCount < this.retryCount : true;

return RetryPolicyFilter._shouldAbsorbConditionalError(statusCode, requestOptions);
return RetryPolicyFilter._shouldRetryOnError(statusCode, requestOptions);
};

/**
45 changes: 26 additions & 19 deletions lib/common/filters/retrypolicyfilter.js
@@ -126,11 +126,11 @@ RetryPolicyFilter._handle = function (self, requestOptions, next) {
retryInfo.retryInterval = self.retryInterval;
}

// Only in the case of success from server but client side failure like MD5 or length mismatch, returnObject.retryable has a value(we explicitly set it to false). In this case, we should not retry
// the request.
// Only in the case of success from server but client side failure like MD5 or length mismatch, returnObject.retryable has a value (we explicitly set it to false).
// In this case, we should not retry the request.
if (returnObject.error && azureutil.objectIsNull(returnObject.retryable) &&
((!azureutil.objectIsNull(returnObject.response) &&
retryInfo.retryable) || (returnObject.error.code === 'ETIMEDOUT' || returnObject.error.code === 'ESOCKETTIMEDOUT' || returnObject.error.code === 'ECONNRESET'))) {
((!azureutil.objectIsNull(returnObject.response) && retryInfo.retryable) ||
(returnObject.error.code === 'ETIMEDOUT' || returnObject.error.code === 'ESOCKETTIMEDOUT' || returnObject.error.code === 'ECONNRESET'))) {

if (retryRequestOptions.currentLocation === Constants.StorageLocation.PRIMARY) {
lastPrimaryAttempt = returnObject.operationEndTime;
@@ -185,28 +185,35 @@ RetryPolicyFilter._handle = function (self, requestOptions, next) {
operation();
};

RetryPolicyFilter._shouldAbsorbConditionalError = function (statusCode, requestOptions) {
RetryPolicyFilter._shouldRetryOnError = function (statusCode, requestOptions) {
var retryInfo = (requestOptions && requestOptions.retryContext) ? requestOptions.retryContext : {};

if (statusCode >= 300) {
if (requestOptions && !requestOptions.absorbConditionalErrorsOnRetry) {
// Non-timeout Cases
if (statusCode >= 300 && statusCode != 408) {
// Always no retry on "not implemented" and "version not supported"
if (statusCode == 501 || statusCode == 505) {
retryInfo.retryable = false;
return retryInfo;
}

if (statusCode == 501 || statusCode == 505) {
retryInfo.retryable = false;
} else if (statusCode == 412) {
// When appending block with precondition failure and there was a server error before, we ignore the error.
if (retryInfo.lastServerError) {
retryInfo.ignore = true;

// When absorbConditionalErrorsOnRetry is set (for append blob)
if (requestOptions && requestOptions.absorbConditionalErrorsOnRetry) {
if (statusCode == 412) {
// When appending block with precondition failure and there was a server error before, we ignore the error.
if (retryInfo.lastServerError) {
retryInfo.ignore = true;
retryInfo.retryable = true;
} else {
retryInfo.retryable = false;
}
} else if (retryInfo.retryable && statusCode >= 500 && statusCode < 600) {
// Retry on the server error
retryInfo.retryable = true;
} else {
retryInfo.retryable = false;
retryInfo.lastServerError = true;
}
} else if (retryInfo.retryable && statusCode >= 500 && statusCode < 600) {
retryInfo.retryable = true;
retryInfo.lastServerError = true;
} else if (statusCode < 500) {
// No retry on the client error
retryInfo.retryable = false;
}
}

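The decision flow of the renamed _shouldRetryOnError can be condensed as follows. This is a simplified sketch following the shape of the diff above, not the library's full implementation (the real function also returns secondary-location hints in retryInfo):

```javascript
// Simplified sketch of the retry decision in _shouldRetryOnError.
function shouldRetryOnError(statusCode, requestOptions) {
  var retryInfo = (requestOptions && requestOptions.retryContext) ? requestOptions.retryContext : {};

  // Non-timeout cases only (408 request timeout is handled elsewhere).
  if (statusCode >= 300 && statusCode !== 408) {
    // Never retry "not implemented" / "HTTP version not supported".
    if (statusCode === 501 || statusCode === 505) {
      retryInfo.retryable = false;
      return retryInfo;
    }

    if (requestOptions && requestOptions.absorbConditionalErrorsOnRetry && statusCode === 412) {
      // Append blob case: a precondition failure right after a server error is absorbed.
      retryInfo.ignore = !!retryInfo.lastServerError;
      retryInfo.retryable = !!retryInfo.lastServerError;
    } else if (retryInfo.retryable && statusCode >= 500 && statusCode < 600) {
      // Retry server errors, and remember one happened.
      retryInfo.retryable = true;
      retryInfo.lastServerError = true;
    } else if (statusCode < 500) {
      // Never retry client errors.
      retryInfo.retryable = false;
    }
  }

  return retryInfo;
}
```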
12 changes: 8 additions & 4 deletions lib/common/services/storageserviceclient.js
@@ -575,8 +575,7 @@ StorageServiceClient.prototype._processResponse = function (webResource, respons

if (validResponse && webResource.rawResponse) {
responseObject = { error: null, response: rsp };
}
else {
} else {
// attempt to parse the response body, errors will be returned in rsp.error without modifying the body
rsp = StorageServiceClient._parseResponse(rsp, self.xml2jsSettings);

@@ -707,8 +706,11 @@ StorageServiceClient._parseResponse = function (response, xml2jsSettings) {
var parsed;
var parser = new xml2js.Parser(xml2jsSettings);
parser.parseString(azureutil.removeBOM(body.toString()), function (err, parsedBody) {
if (err) { throw err; }
else { parsed = parsedBody; }
if (err) {
var xmlError = new Error('EXMLFORMAT');
xmlError.innerError = err;
throw xmlError;
} else { parsed = parsedBody; }
});

return parsed;
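The change above wraps xml2js parse failures in an 'EXMLFORMAT' error so the retry filters can recognize a corrupted XML response body and retry it. A self-contained sketch of that wrapping pattern, where `parse` stands in for xml2js's synchronous parseString callback style:

```javascript
// Sketch of the error-wrapping pattern from the diff above: a parser error
// is wrapped in an 'EXMLFORMAT' error, with the original error preserved on
// innerError so retry logic can match on the outer code.
function parseWithXmlFormatError(parse, body) {
  var parsed;
  parse(body, function (err, result) {
    if (err) {
      var xmlError = new Error('EXMLFORMAT');
      xmlError.innerError = err;
      throw xmlError;
    }
    parsed = result;
  });
  return parsed;
}
```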
@@ -982,6 +984,8 @@ StorageServiceClient._normalizeError = function (error, response) {

// blob/queue errors should have error.Error, table errors should have error['odata.error']
var errorProperties = error.Error || error.error || error['odata.error'] || error;
normalizedError.code = errorProperties.message; // The message exists when there is error.Error.

for (var property in errorProperties) {
if (errorProperties.hasOwnProperty(property)) {
var key = property.toLowerCase();
2 changes: 1 addition & 1 deletion lib/common/services/storageservicesettings.js
@@ -401,4 +401,4 @@ StorageServiceSettings._createStorageServiceSettings = function (settings) {

StorageServiceSettings.validKeys = validKeys;

exports = module.exports = StorageServiceSettings;
exports = module.exports = StorageServiceSettings;
2 changes: 1 addition & 1 deletion lib/common/signing/sharedkey.js
@@ -53,7 +53,7 @@ function SharedKey(storageAccount, storageAccessKey, usePathStyleUri) {
SharedKey.prototype.signRequest = function (webResource, callback) {
var getvalueToAppend = function (value, headerName) {
// Do not sign content-length 0 in 2014-08-16 and later
if (headerName === HeaderConstants.CONTENT_LENGTH && (azureutil.objectIsNull(value[headerName]) || value[headerName] === 0)) {
if (headerName === HeaderConstants.CONTENT_LENGTH && (azureutil.objectIsNull(value[headerName]) || value[headerName].toString() === '0')) {
return '\n';
} else if (azureutil.objectIsNull(value) || azureutil.objectIsNull(value[headerName])) {
return '\n';
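The signing change above matters because a 'sendingRequestEvent' handler may set content-length to the string '0' rather than the number 0; comparing `value[headerName].toString() === '0'` treats both the same. A hypothetical illustration of the predicate (not SDK code; the helper name is invented):

```javascript
// Hypothetical illustration of the content-length signing check from the
// diff above: skip signing when the header is absent or equals zero,
// whether it is the number 0 or the string '0'.
function shouldSkipContentLengthInSignature(contentLength) {
  return contentLength === null || contentLength === undefined ||
         contentLength.toString() === '0';
}
```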
6 changes: 3 additions & 3 deletions lib/common/streams/batchoperation.js
@@ -93,8 +93,8 @@ BatchOperation.OperationState = OperationState;
BatchOperation.prototype.setConcurrency = function(concurrency) {
if (concurrency) {
this.concurrency = concurrency;
http.globalAgent.maxSockets = this.concurrency;
https.globalAgent.maxSockets = this.concurrency;
http.Agent.maxSockets = this.concurrency;
https.Agent.maxSockets = this.concurrency;
}
};

@@ -108,7 +108,7 @@ BatchOperation.prototype.IsWorkloadHeavy = function() {
// RestOperations start to run in order of id
var sharedRequest = 1;
if(enableReuseSocket && !this.callInOrder) {
sharedRequest = 5;
sharedRequest = 10;
}
return this._activeOperation >= sharedRequest * this.concurrency ||
this._isLowMemory() ||
35 changes: 30 additions & 5 deletions lib/common/streams/chunkstream.js
@@ -48,6 +48,7 @@ function ChunkStream(options) {
this._md5hash = null;
this._buffer = null;
this._internalBufferSize = 0;
this._outputLengthLimit = 0;
this._md5sum = undefined;

if (options.calcContentMd5) {
@@ -64,6 +65,15 @@ ChunkStream.prototype.setMemoryAllocator = function(allocator) {
this._allocator = allocator;
};

/**
* Set the output length.
*/
ChunkStream.prototype.setOutputLength = function(length) {
if (length) {
this._outputLengthLimit = length;
}
};

/**
* Internal stream ended
*/
@@ -87,6 +97,7 @@ ChunkStream.prototype.end = function (chunk, encoding, cb) {
if (cb) {
this.once('end', cb);
}

this.emit('end');
};

@@ -124,10 +135,9 @@ ChunkStream.prototype.write = function (chunk, encoding) {
* Buffer the data into a chunk and emit it
*/
ChunkStream.prototype._buildChunk = function (data) {
if(this._md5hash) {
this._md5hash.update(data);
if (typeof data == 'string') {
data = new Buffer(data);
}

var dataSize = data.length;
var dataOffset = 0;
do {
@@ -157,12 +167,10 @@
dataOffset += copySize;
buffer = this._popInternalBuffer();
}

this._emitBufferData(buffer);
} while(dataSize > 0);
};


/**
* Emit the buffer
*/
@@ -175,6 +183,23 @@ ChunkStream.prototype._emitBufferData = function(buffer) {
};

this._offset = newOffset;

if (this._outputLengthLimit > 0) {
// When the start position is larger than the limit, no data will be consumed even though there is an event to be emitted,
// so the buffer should not be included in the calculation.
if (range.start <= this._outputLengthLimit) {
if (this._offset > this._outputLengthLimit) {
// Don't use negative end parameter which means the index starting from the end of the buffer
// to be compatible with node 0.8.
buffer = buffer.slice(0, buffer.length - (this._offset - this._outputLengthLimit));
}
if (this._md5hash) {
this._md5hash.update(buffer);
}
}
} else if (this._md5hash) {
this._md5hash.update(buffer);
}

this.emit('data', buffer, range);
};
13 changes: 13 additions & 0 deletions lib/common/streams/chunkstreamwithstream.js
@@ -17,6 +17,7 @@
var ChunkStream = require('./chunkstream');
var EventEmitter = require('events').EventEmitter;
var util = require('util');
var azureutil = require('./../util/util');

/**
* Chunk stream
@@ -59,6 +60,18 @@ ChunkStreamWithStream.prototype.on = function(event, listener) {
return this;
};

/**
* Stop stream from external
*/
ChunkStreamWithStream.prototype.stop = function (chunk, encoding, cb) {
if (azureutil.objectIsFunction(this._stream.destroy)) {
this._stream.destroy();
} else {
this.pause();
}
ChunkStream.prototype.end.call(this, chunk, encoding, cb);
};

/**
* Pause chunk stream
*/
