So a couple of weeks ago I posted this blog post on how to upload files to blob storage through Mobile Services. In it, I described how one could do a Base64 encoded string upload of the file, and then let the mobile service endpoint convert it and send it to blob storage.
The upside to this is that the client doesn’t have to know anything about where the files are actually stored, and it doesn’t need any blob storage specific code. Instead, it can go on happily knowing nothing about Azure except Mobile Services. It also means that you don’t have to distribute the access keys to your storage together with the application.
I did however mention that there was another way, using shared access signatures (SAS). Unfortunately, these have to be generated by some form of service that has knowledge of the storage keys, something like an Azure compute instance. However, paying for a compute instance just to generate SASes (plural of SAS…?) seems unnecessary, which is why I opted to go with the other solution.
However, Ryan CrawCour, a dear friend of mine, just had to say that he wasn’t convinced, which has been nagging me for a while. So to solve that, I have devised another way to use SAS while using only Mobile Services. And even though he is likely to have some opinion about this as well, it at least made the nagging feeling go away for a while.
DISCLAIMER: This is somewhat of a hack. I assume that there will be better ways to do this in the future, but for now it works, even if it might not be my finest solution to date. My biggest issue with it is a part of the JavaScript that I will point out later. But don’t blame me if it causes Azure to explode and tears down the internet when you use it…
Ok, let’s go! Like everything else in the current version of Mobile Services, we need a table to get an endpoint to play with. In this case, I have created a table called “sas”. The table itself will not be used; it is only there to enable me to execute my server-side scripts… Because of this, I have restricted access to everything but “read”, as that is the only operation that will be used…
The next part is to create an entity to be used to send and receive data from the service. I called it SAS and it looks like this
[DataTable(Name = "sas")]
public class SAS
{
    public int Id { get; set; }

    [DataMember(Name = "container")]
    public string Container { get; set; }

    [DataMember(Name = "filename")]
    public string FileName { get; set; }

    [DataMember(Name = "url")]
    public string Url { get; set; }
}
As you can see, it includes a Container and a FileName property as well as the mandatory Id property. These properties will be used to pass the required information to the endpoint. The Url property will be used for returning the signed URL.
(You could get away with removing the SAS entity and writing an OData query instead, but I prefer LINQ…)
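For completeness, the raw OData request that ends up hitting the endpoint would look something like this (hypothetical service URL, and unencoded for readability):

GET https://yourservice.azure-mobile.net/tables/sas?$filter=(filename eq 'myfile.txt') and (container eq 'mycontainer')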
Using the entity looks like this
var sas = (await App.MobileService.GetTable<SAS>()
                    .Where(x => x.FileName == "myfile.txt" && x.Container == "mycontainer")
                    .ToEnumerableAsync()).Single();
var url = sas.Url;
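Once the client has the URL, the actual upload is just an HTTP PUT straight to blob storage. Here is a minimal sketch of what that could look like, in JavaScript for brevity (the header and status code come from the blob storage REST API, and any HTTP client, managed or not, can do the same thing):

// Uploads data to blob storage using a SAS URL
function uploadWithSas(sasUrl, fileContents, done) {
    var xhr = new XMLHttpRequest();
    xhr.open('PUT', sasUrl, true);
    // Required header when creating a block blob with Put Blob
    xhr.setRequestHeader('x-ms-blob-type', 'BlockBlob');
    xhr.onload = function () {
        // Put Blob returns 201 Created on success
        done(xhr.status === 201);
    };
    xhr.send(fileContents);
}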
The real functionality is obviously at the other end, at the server. Here, I have created a “read script” for the table. This script will take the query and use the information in it to create a signed URL.
The read() method looks like this
function read(query, user, request) {
    // Dig the filename and container name out of the parsed query tree
    var filename = query._parsed.filter.left.right.value;
    var containername = query._parsed.filter.right.right.value;

    // Create a SAS URL that is valid for 60 minutes
    var url = getSignedBlobUrl(60, "teched", containername, filename);
    console.log('Created new SAS: ' + url);

    // Respond with the URL instead of actually executing the query
    request.respond(statusCodes.OK, [{ url: url }]);
}
As you can see, it populates the filename and containername variables using the query’s _parsed member. I know that JavaScript members starting with an underscore are supposed to be private, and using the _parsed member is really not a good practice, but it was the only way I could find to easily get hold of the data sent to the server. There might be better ways to solve this, and I will look into it, but for now, this works…
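If there is a better way, my best guess would be custom query-string parameters. The server script reference lists a request.parameters member that contains the query-string parameters of the request, so assuming the client can be convinced to append something like ?filename=…&container=… to the request, the script could be rewritten along these lines (a sketch I have not battle-tested):

// Hypothetical alternative: reads the values from the query-string
// instead of digging through the parsed query tree
function read(query, user, request) {
    var filename = request.parameters.filename;
    var containername = request.parameters.container;

    if (!filename || !containername) {
        request.respond(statusCodes.BAD_REQUEST,
            { error: 'filename and container are required' });
        return;
    }

    var url = getSignedBlobUrl(60, "teched", containername, filename);
    request.respond(statusCodes.OK, [{ url: url }]);
}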
Either way, the script then uses a method called getSignedBlobUrl(), which I will talk about in just a minute. Once the signed URL has been generated, it is returned to the client using request.respond() instead of actually executing the query.
Ok, so what does getSignedBlobUrl() do? Well, it just creates a well-formed signed URL to the specified blob. Like this
function getSignedBlobUrl(expiryTimeout, accountName, containerName, blobName)
{
    // The signature is valid from now until expiryTimeout minutes from now
    var start = new Date();
    var end = new Date(start.getTime() + (1000 * 60 * expiryTimeout));

    var signature = generateSignature(start, end, accountName, containerName, blobName);

    var queryString = "?st=" + encodeURIComponent(start.toIsoString()) +
                      "&se=" + encodeURIComponent(end.toIsoString()) +
                      "&sr=b&sp=w&sig=" + encodeURIComponent(signature);

    return "http://" + accountName + ".blob.core.windows.net/" + containerName + "/" + blobName + queryString;
}
First it creates a timespan, within which the signature is valid, using two Date objects. Azure limits the lifetime of a SAS that isn’t tied to a container-level access policy to one hour, but that should be more than enough.
As you can see, it uses a method called generateSignature() to generate the actual signature: an HMAC-SHA256 hash of a predefined string representation of the parameters used in the querystring that is passed to the blob storage, keyed with the blob storage access key.
The actual URL is then created by combining the path to the blob with a very funky querystring. The querystring includes a bunch of parameters: the start (st) and end (se) time for the access, what type of resource (sr) it covers (blob or container), what access rights (sp) it grants (read or write), and finally the newly generated signature (sig).
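Put together, the finished URL ends up looking something like this (hypothetical times, signature shortened):

http://teched.blob.core.windows.net/mycontainer/myfile.txt?st=2012-11-27T12%3A00%3A00Z&se=2012-11-27T13%3A00%3A00Z&sr=b&sp=w&sig=AbC123…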
The signature generation looks like this
var crypto = require('crypto');

// The base64 encoded access key for the storage account
var key = new Buffer('XXXXX', 'base64');

function generateSignature(startTime, endTime, account, container, blobName) {
    var stringToSign = "w\n" +
                       startTime.toIsoString() + "\n" +
                       endTime.toIsoString() + "\n/" +
                       account + "/" + container + "/" + blobName + "\n";

    var hash = crypto.createHmac('sha256', key).update(stringToSign).digest('base64');
    return hash;
}
It isn’t very complicated. It concatenates a string using a predefined format and then uses the crypto package to create the signature.
The “w” at the start of the stringToSign string defines the access, in this case write access, followed by the start and end time of the SAS in the correct format, and finally the path to the blob to access.
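To make that concrete, here is what stringToSign would contain for a one hour write SAS for the blob “myfile.txt” in the container “mycontainer” on the account “teched” (hypothetical times again):

var stringToSign = "w\n" +                             // write access
                   "2012-11-27T12:00:00Z\n" +          // start time
                   "2012-11-27T13:00:00Z\n" +          // end time
                   "/teched/mycontainer/myfile.txt\n"; // the path, with a trailing newline
                                                       // where the (empty) signed identifier goes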
Ok, that’s about it! The only thing that the very observant will have noticed is that JavaScript does not include a toIsoString() method on the Date object (there is a built-in toISOString(), but it includes milliseconds, which is not one of the date formats the SAS parameters accept). toIsoString() is a separate method I have declared on the Date object’s prototype as follows
Date.prototype.toIsoString = function() {
    var d = this;
    // Zero-pads a number to two digits
    function p(i) { return ("0" + i).slice(-2); }
    return "yyyy-MM-ddTHH:mm:ssZ"
        .replace("yyyy", d.getUTCFullYear())
        .replace("MM", p(d.getUTCMonth() + 1))
        .replace("dd", p(d.getUTCDate()))
        .replace("HH", p(d.getUTCHours()))
        .replace("mm", p(d.getUTCMinutes()))
        .replace("ss", p(d.getUTCSeconds()));
};
It is just a helper to get the date string in a format that works for the call…
Ok, that’s it! For real this time!
Except for the somewhat annoying use of the _parsed member in the JavaScript and the slightly odd way of executing the query on the client, it is actually quite a neat solution. Being able to generate SAS URLs without a compute instance is actually quite useful in some cases. And even though I prefer uploading files the other way, this could still be really useful. And cheaper… Incoming data is free in Azure, so uploading the file is free either way, but if your storage is not in the same datacenter as the Mobile Service instance, then doing it the other way would incur charges when passing the file from the Mobile Service to the blob storage, something that this solution avoids.
Well, I guess it is better that I end this post before I get into talking about all the pros and cons of the 2 different solutions. They both do the job, so it is up to you to decide…
And no…there is no code for download this time. I have already shown it all, and it wasn’t that much…