In my last post, I wrote about how to run Gulp as part of your deployment process when doing continuous deployment from GitHub to an Azure Web App using Kudu. As part of that post, I used Gulp to generate bundled and minified JavaScript and CSS files that were to be served to the client.
The files were generated using Gulp and included in the deployment under a directory called dist. However, they were still part of the website, which means they still take up resources on the webserver that has to serve them, and they use up precious connections from the browser to that server… By offloading them to Azure Blob Storage, we can decrease the number of requests the webserver gets, and increase the number of connections the browser can use to retrieve resources. And it isn’t that hard to do…
Modifying the deployment.cmd file
I’m not going to go through all the steps of setting the deployment up. All of that was already done in the previous post, so if you haven’t read that, I suggest you do that first…
The first thing I need to do is add some more functionality to my deployment.cmd file. Right after I “pop” back to the original directory, I add the following code
echo Pushing dist folder to blobstorage
call :ExecuteCmd npm install azure-storage
call :ExecuteCmd "node" blobupload.js "dist" "%STORAGE_ACCOUNT%" "%STORAGE_KEY%"
echo Done pushing dist folder to blobstorage
Ok, so I start by echoing out that I am about to upload the dist folder to blob storage. Next, I use npm to install the azure-storage package, which includes code that helps out when working with Azure Storage in Node.
Next, I execute a script called blobupload.js using Node, passing in 3 parameters: the name of the folder to upload, a storage account name, and a storage account key.
And finally I echo out that I’m done.
So what is in that blobupload.js file? Well, it is “just” a node script to upload the files in the specified folder. It starts out like this
var azure = require('azure-storage');
var fs = require('fs');
var distDirName = process.argv[2];
var accountName = process.argv[3];
var accountKey = process.argv[4];
var sourceDir = 'Src\\DeploymentDemo\\dist\\';
It “requires” the newly installed azure-storage package, as well as fs, which is what one uses to work with the file system in Node.
Next, it pulls out the arguments that were passed in. They are available in the process.argv array, from index 2 and forward (index 0 is the node executable and index 1 is the path to blobupload.js, neither of which I need). And finally, it defines the source directory to copy from.
Note: The source directory should probably be passed in as a parameter as well, making the script more generic. But it’s just a demo…
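If you did want to make it more generic, a minimal sketch could look like the following, assuming the source directory is passed as an optional fourth argument and falls back to the hard-coded demo path when it isn’t supplied:

// Hypothetical tweak: accept the source directory as an optional 4th argument
// (process.argv[5]), falling back to the hard-coded demo path when it isn't supplied.
var sourceDir = process.argv[5] || 'Src\\DeploymentDemo\\dist';

// Make sure the path ends with a backslash, since it is concatenated
// with the file names further down in the script.
if (sourceDir.slice(-1) !== '\\') {
    sourceDir += '\\';
}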
After all the important variables have been collected, it goes about doing the upload
var blobService = azure.createBlobService(accountName, accountKey);

blobService.createContainerIfNotExists(distDirName, { publicAccessLevel: 'blob' }, function(error, result, response) {
    if (error) {
        console.log(result);
        throw Error("Failed to create container");
    }

    // Upload every file in the source directory to the target container
    var files = fs.readdirSync(sourceDir);
    for (var i = 0; i < files.length; i++) {
        console.log("Uploading: " + files[i]);
        blobService.createBlockBlobFromLocalFile(distDirName, files[i], sourceDir + files[i], function(error, result, response) {
            if (error) {
                console.log(error);
                throw Error("Failed to upload file");
            }
        });
    }
});
It starts out by creating a blob service, which it uses to talk to blob storage. Next, it creates the target container if it doesn’t already exist. If that fails, it logs the result from the call and throws an error.
Once it knows that there is a target container, it uses fs to get the names of the files in the source directory. It then loops through those names, and uses the blob service’s createBlockBlobFromLocalFile method to upload each local file to blob storage.
That’s it… It isn’t harder than that…
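One thing worth knowing is that, depending on the SDK version, blobs uploaded like this may end up with a generic content type such as application/octet-stream, which can cause problems when serving CSS. A hedged sketch of how that could be handled, assuming the azure-storage 2.x options format with contentSettings (the exact option name differs between versions):

// Hypothetical addition: set an explicit content type per file so the blobs
// aren't served with a generic content type. Assumes azure-storage 2.x, where
// createBlockBlobFromLocalFile accepts an options object with contentSettings.
var path = require('path');

var contentTypes = {
    '.js': 'application/javascript',
    '.css': 'text/css'
};

function uploadFile(fileName) {
    var options = {
        contentSettings: {
            contentType: contentTypes[path.extname(fileName)] || 'application/octet-stream'
        }
    };

    blobService.createBlockBlobFromLocalFile(distDirName, fileName, sourceDir + fileName, options, function(error, result, response) {
        if (error) {
            console.log(error);
            throw Error("Failed to upload file");
        }
    });
}

The loop above would then call uploadFile(files[i]) instead of calling the blob service directly.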
Parameters
But wait a second! Where did those magical parameters %STORAGE_ACCOUNT% and %STORAGE_KEY% that I used in deployment.cmd come from? Well, since Kudu runs in a context that knows about the Web App it is deploying to, it is nice enough to expose every app setting configured for the target Web App as a variable that you can use in your script as %[AppSettingName]%.
So I just went to the Azure Portal, added 2 app settings to the target Web App, and put the values there. This makes it very easy to have different values for different deployment targets when using Kudu. It also means that you never have to check in your credentials.
Warning: You should NEVER EVER EVER EVER check in your credentials to places like GitHub. They should be kept VERY safe. Why? Well, read this, and you will understand.
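If the app settings aren’t configured on the Web App, the script just receives whatever cmd passes along (possibly the unexpanded variable reference), and the first real error shows up deep inside the storage SDK. A small, hypothetical guard near the top of blobupload.js, not part of the original script, could make that failure more obvious:

// Hypothetical guard: fail early with a readable message if the STORAGE_ACCOUNT
// and STORAGE_KEY app settings weren't configured on the target Web App.
// (cmd passes the variable reference through unexpanded when it isn't set.)
if (!accountName || accountName.indexOf('%') === 0 || !accountKey || accountKey.indexOf('%') === 0) {
    console.log('STORAGE_ACCOUNT and/or STORAGE_KEY app settings are missing - aborting upload');
    process.exit(1);
}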
Changing the website
Now that the resources are available in blob storage instead of in the local application, the web app needs to be modified to include them from there.
This could easily be done by changing
@if (HttpContext.Current.IsDebuggingEnabled)
{
    <link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
    <link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
}
else
{
    <link href="https://[StorageAccountName].blob.core.windows.net/dist/deploymentdemo.min.css" rel="stylesheet" />
}
However, that is a bit limiting to me… I prefer changing it into
@if (HttpContext.Current.IsDebuggingEnabled) {
    <link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
    <link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
} else {
    <link href="@ConfigurationManager.AppSettings["cdn.prefix"]/dist/deploymentdemo.min.css" rel="stylesheet" />
}
The difference, if it isn’t quite obvious, is that I am no longer hardcoding the storage account being used. Instead, I am reading it from the web app’s app settings. This gives me a few advantages. First of all, I can leave the value unset, in which case the site just defaults to serving the files from the webserver. I can also point different deployments at different storage accounts.

However, you might also have noticed that I called the setting cdn.prefix. The reason for this is that I can also just turn on the Azure CDN and configure the setting to point to that, instead of straight at the storage account. So using this little twist, I can use my local files, files from any storage account, or a CDN, whichever I want…
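To make that concrete with some made-up names: cdn.prefix could be left empty on a dev slot (local files), set to something like https://mydemostorage.blob.core.windows.net on a staging deployment, and to an Azure CDN endpoint such as https://mydemocdn.azureedge.net in production.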
This is a small twist compared to just using storage directly, but it offers a whole heap more flexibility, so why wouldn’t you…?
That’s actually all there is to it! Not very complicated at all!