Building a simple PicPaste replacement using Azure Web Apps and WebJobs

This post was supposed to be an introduction to Azure WebJobs, but it took a weird turn somewhere and became a guide to building a simple PicPaste replacement using just an Azure Web App and a WebJob.

As such, it might not be a really useful app, but it does show how simple it is to build quite powerful things using Azure.

So, what is the goal? Well, the goal is to build a website that you can upload images to, and then get a simple Url to use when sharing the image. This is not complicated, but as I want to resize the image and add a little overlay to it before giving the user the Url, I might run into performance issues if it becomes popular. So, instead, I want the web app to upload the image to blob storage, and then have a WebJob process it in the background. Doing it like this, I can limit the number of images that are processed at a time, and use a queue to handle any peaks.

Note: Is this a serious project? No, not really. Does it have some performance issues if it becomes popular? Yes. Did I build it as a way to try out background processing with WebJobs? Yes… So don’t take it too seriously.

The first thing I need is a web app that I can use to upload images through. So I create a new empty ASP.NET project, adding support for MVC. Next, I add a NuGet package called WindowsAzure.Storage. And to be able to work with Azure storage, I need a new storage account. Luckily, that is as easy as opening the “Server Explorer” window, right-clicking the “Storage” node, and selecting “Create Storage Account…”. After that, I am ready to start building my application.

Inside the application, I add a single controller called HomeController. However, I don’t want to use the default route configuration. Instead, I want a nice, short and simple route that looks like this

routes.MapRoute(
    name: "Image",
    url: "{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

Ok, now that that is done, I add my “Index” view, which is ridiculously simple, and looks like this

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>AzurePaste</title>
</head>
<body>
    @using (Html.BeginForm("Index", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        <label>Choose file: </label>
        <input type="file" name="file" accept="image/*"/><br/>
        <input type="submit" value="Upload" />
    }
</body>
</html>

As you can see, it contains a simple form that allows the user to post a single file to the server using the Index method on the HomeController.

The only problem with this is that there’s no controller action called Index that accepts HTTP POSTs. So let’s add one.

[HttpPost]
public ActionResult Index(HttpPostedFileBase file)
{
    if (file == null)
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
    }

    var id = GenerateRandomString(10);
    var blob = StorageHelper.GetUploadBlobReference(id);
    blob.UploadFromStream(file.InputStream);

    StorageHelper.AddImageAddedMessageToQueue(id);

    return View("Working", (object)id);
}

So what does this action do? Well, first of all it returns an HTTP 400 if you forgot to include the file… Next, it uses a quick and dirty helper method called GenerateRandomString(), which just generates a random string of the specified length… Then I use a helper class called StorageHelper, which I will return to shortly, to get a CloudBlockBlob instance to which I upload my file. The name of the blob is the random string I just retrieved, and the container for it is a predefined one that the WebJob knows about. The WebJob will then pick it up there, make the necessary transformations, and then save it to the root container.
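
The post never shows GenerateRandomString(), but a minimal sketch of it could look something like this (the character set and the shared Random instance are my assumptions):

private static readonly Random Random = new Random();

private static string GenerateRandomString(int length)
{
    // Lowercase letters and digits keep the generated Urls readable and easy to type.
    const string chars = "abcdefghijklmnopqrstuvwxyz0123456789";
    var result = new char[length];
    for (var i = 0; i < length; i++)
    {
        result[i] = chars[Random.Next(chars.Length)];
    }
    return new string(result);
}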

Note: By adding a container called $root to a blob storage account, you get a way to put files in the root of the Url. Otherwise you are constrained to a Url like https://[AccountName].blob.core.windows.net/[ContainerName]/[FileName]. But using $root, it can be reduced to https://[AccountName].blob.core.windows.net/[FileName].

Once the image is uploaded to Azure, I once again turn to my StorageHelper class to put a message on a storage queue.

Note: I chose to build a WebJob that listens to queue messages instead of blob creation, as it can take up to approximately 10 minutes before the WebJob is called after a blob is created. The reason for this is that the blob storage logs are buffered and only written approximately every 10 minutes. By using a queue, this delay is decreased quite a bit. But it is still not instantaneous… If you need to decrease it even further, you can switch to a ServiceBus queue instead.

Ok, if that is all I am doing, that StorageHelper class must be really complicated. Right? No, not really. It is just a little helper to keep my code a bit more DRY.

The StorageHelper has 4 public methods: EnsureStorageIsSetUp(), which I will come back to, GetUploadBlobReference(), GetRootBlobReference() and AddImageAddedMessageToQueue(). They are pretty self-explanatory. The two GetXXXBlobReference() methods are just helpers to get hold of a CloudBlockBlob reference. By keeping them in this helper class, I can keep the logic of where blobs are placed in one place… The AddImageAddedMessageToQueue() adds a simple CloudQueueMessage, containing the name of the added image, to a defined queue. And finally, EnsureStorageIsSetUp() will make sure that the required containers and queues are set up, and that the root container has read permission turned on for everyone.

public static void EnsureStorageIsSetUp()
{
    UploadContainer.CreateIfNotExists();
    ImageAddedQueue.CreateIfNotExists();
    RootContainer.CreateIfNotExists();
    RootContainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
}

public static CloudBlockBlob GetUploadBlobReference(string blobName)
{
    return UploadContainer.GetBlockBlobReference(blobName);
}

public static CloudBlockBlob GetRootBlobReference(string blobName)
{
    return RootContainer.GetBlockBlobReference(blobName);
}

public static void AddImageAddedMessageToQueue(string filename)
{
    ImageAddedQueue.AddMessage(new CloudQueueMessage(filename));
}

Kind of like that…
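
The private members backing those methods are not shown in the post, but based on the names the WebJob binds to further down, they would be something along these lines (the connection string name is my assumption):

private static readonly CloudStorageAccount Account = CloudStorageAccount.Parse(
    ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString);

// The container and queue names match the ones the WebJob binds to later on.
private static CloudBlobContainer UploadContainer
{
    get { return Account.CreateCloudBlobClient().GetContainerReference("upload"); }
}

private static CloudBlobContainer RootContainer
{
    get { return Account.CreateCloudBlobClient().GetContainerReference("$root"); }
}

private static CloudQueue ImageAddedQueue
{
    get { return Account.CreateCloudQueueClient().GetQueueReference("image-added"); }
}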

The returned view that the user gets after uploading the file looks like this

@model string

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Azure Paste</title>
    <script src="~/Content/jquery.min.js"></script>
    <script type="text/javascript">
        $(function () {
            function checkCompletion() {
                setTimeout(function () {
                    $.ajax({ url: "/@Model" })
                        .done(function (data, status, jqXhr) {
                            if (jqXhr.status === 204) {
                                checkCompletion();
                                return;
                            }
                            document.location.href = data;
                        })
                        .fail(function (error) {
                            alert("Error...sorry about that!");
                            console.log("error", error);
                        });
                }, 1000);
            }
            checkCompletion();
        });
    </script>
</head>
<body>
    <div>Working on it...</div>
</body>
</html>

As you can see, the bulk of it is JavaScript, while the actual content that the user sees is really tiny…

The JavaScript uses jQuery (no, not a fan, but it has some easy ways to do ajax calls) to poll the server every second. It calls the server at “/[FileName]”, which, as you might remember from my changed routing, will call the Index method on the HomeController.

If the call returns an HTTP 204, the script keeps on polling. If it returns HTTP 200, it redirects the user to a location specified by the returned content. If something else happens, it just alerts that something went wrong…

Ok, so this kind of indicates that my Index() method needs to be changed a bit. It needs to do something different if the id parameter is supplied. So I start by handling that case

public ActionResult Index(string id)
{
    if (string.IsNullOrEmpty(id))
    {
        return View();
    }

    ...
}

That’s pretty much the same as it is by default. But what if the id is supplied? Well, then I start by looking for the blob the user is after. If that blob exists, I return an HTTP 200 and the Url to the blob.

public ActionResult Index(string id)
{
    ...

    var blob = StorageHelper.GetRootBlobReference(id);
    if (blob.Exists())
    {
        return new ContentResult { Content = blob.Uri.ToString().Replace("/$root", "") };
    }

    ...
}

As you can see, I remove the “/$root” part of the Url before returning it. The Azure Storage SDK will include that container name in the Url even if it is a “special” container that isn’t needed in the Url. So by removing it I get this nicer Url.

If that blob does not exist, I look for the temporary blob in the upload container. If it exists, I return an HTTP 204. And if it doesn’t, the user is looking for a file that doesn’t exist, so I return a 404.

public ActionResult Index(string id)
{
    ...

    blob = StorageHelper.GetUploadBlobReference(id);
    if (blob.Exists())
    {
        return new HttpStatusCodeResult(HttpStatusCode.NoContent);
    }

    return new HttpNotFoundResult();
}
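
Putting those three fragments together, the complete GET version of the action ends up like this

public ActionResult Index(string id)
{
    if (string.IsNullOrEmpty(id))
    {
        return View();
    }

    var blob = StorageHelper.GetRootBlobReference(id);
    if (blob.Exists())
    {
        return new ContentResult { Content = blob.Uri.ToString().Replace("/$root", "") };
    }

    blob = StorageHelper.GetUploadBlobReference(id);
    if (blob.Exists())
    {
        return new HttpStatusCodeResult(HttpStatusCode.NoContent);
    }

    return new HttpNotFoundResult();
}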

Ok, that is all there is to the web app. Well…not quite. I still need to ensure that the storage stuff is set up properly. So I add a call to the StorageHelper.EnsureStorageIsSetUp() in the Application_Start() method in Global.asax.cs.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    StorageHelper.EnsureStorageIsSetUp();
}

Next up is the WebJob that will do the heavy lifting. So I add a WebJob project to my solution. This gives me a project with 2 C# files. One called Program.cs, which is the “entry point” for the job, and one called Functions.cs, which contains a sample WebJob.

The first thing I want to do is to make sure that I don’t overload my machine by running too many of these jobs in parallel. Image manipulation hogs resources, and I don’t want it to get too heavy for the machine.

I do this by setting the batch size for queues in a JobHostConfiguration inside the Program.cs file

static void Main()
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 2;

    var host = new JobHost(config);
    host.RunAndBlock();
}

Now that that is done, I can start focusing on my WebJob… I start out by deleting the existing sample job, and adding in a new one.

A WebJob is just a static method with a “trigger” as the first parameter. A trigger is simply a method parameter with an attribute on it, defining when the method should be run. In this case, I want to run it based on a queue, so I add a QueueTrigger attribute to my first parameter.

As my message contains a simple string with the name of the blob to work on, I can define my first parameter as a string. Had it been something more complicated, I could have used a custom type, which the calling code would have populated by deserializing the content of the message. Or I could have chosen to go with CloudQueueMessage, which gives me total control. But as I said, just a string will do fine. It will also help me with my next three parameters.
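
Just to illustrate the custom type option: the SDK populates the parameter by deserializing the JSON in the message, so a hypothetical message type (ImageAddedMessage is entirely made up here) could be bound like this

public class ImageAddedMessage
{
    public string BlobName { get; set; }
    public string UploadedBy { get; set; }
}

// The WebJobs SDK deserializes the queue message (JSON) into the parameter type.
public static void OnImageAdded([QueueTrigger("image-added")] ImageAddedMessage message, TextWriter log)
{
    log.WriteLine("Processing {0}, uploaded by {1}", message.BlobName, message.UploadedBy);
}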

As a lot of WebJobs will be working with Azure storage, the SDK includes helpers to make this easier. One such helper is an attribute called BlobAttribute. It makes it possible to get a blob reference, or its contents, passed into the method. In this case, getting references to the blobs I want to work with makes things a lot easier, as I don’t have to get hold of them on my own. All I have to do is add parameters of type ICloudBlob, and put a BlobAttribute on them. The attribute takes a name pattern as its first string. But in this case, the name of the blob will be coming from the queue message… Well, luckily, the SDK people have thought of this, and given us a way to access it by adding “{queueTrigger}” to the pattern. This will be replaced by the string in the message…

Ok, so the signature for my job method turns into this

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...
}

As you can see, I have also added a final parameter of type TextWriter, called log. This is a log supplied by the WebJob host, making it possible to log things in a nice and uniform way.

But wait a minute…why am I taking in 3 blobs? The first one is obviously the uploaded image. The second one is the target where I am going to put the transformed image. So what is the last one? Well, I am going to make it a little more complicated than just hosting the image… I am going to host the image under the name [UploadedBlobName].png. I am then going to add a very simple HTML file, showing the image, in a blob with the same name as the uploaded blob. That way, the Url to the page that views the image will be a nice and simple one, and it will show the image and a little text.

The first thing I need to do is get the content of the blob. This could have been done by requesting a Stream instead of an ICloudBlob, but as I want to be able to delete the blob at the end, that didn’t work…unless I used more parameters, which felt unnecessary…

Once I have my stream, I turn it into a Bitmap from the System.Drawing assembly. Next, I resize that image to a maximum width or height before adding a little watermark.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    var ms = new MemoryStream();
    blob.DownloadToStream(ms);

    var image = new Bitmap(ms);
    var newImage = image.Resize(int.Parse(ConfigurationManager.AppSettings["AzurePaste.MaxSize"]));
    newImage.AddWatermark("AzurePaste FTW");

    ...
}

Ok, now that I have my transformed image, it is time to add it to the “target blob”. I do this by saving the image to a MemoryStream and then uploading that. However, by default, all blobs get the content type “application/octet-stream”, which isn’t that good for images. So I update the blob’s content type to “image/png”.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    var outputStream = new MemoryStream();
    newImage.Save(outputStream, ImageFormat.Png);
    outputStream.Seek(0, SeekOrigin.Begin);
    outputImage.UploadFromStream(outputStream);
    outputImage.Properties.ContentType = "image/png";
    outputImage.SetProperties();

    ...
}

The last part is to add the fabled HTML page… In this case, I have just quickly hacked a simple HTML page into my assembly as an embedded resource. I could of course have used one stored in blob storage or something, making it easier to update, but I just wanted something simple… Before I add it to the blob, I make sure to insert the Url to the image, which is represented by “{0}” in the embedded HTML.
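
The actual embedded page is in the downloadable code, but a minimal version could be as simple as this (my guess at it; just remember that string.Format() will choke on any other curly braces in the file):

<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>AzurePaste</title>
</head>
<body>
    <img src="/{0}" alt="" />
</body>
</html>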

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    string html;
    using (var htmlStream = typeof(Functions).Assembly.GetManifestResourceStream("DarksideCookie.Azure.WebJobs.AzurePaste.WebJob.Resources.page.html"))
    using (var reader = new StreamReader(htmlStream))
    {
        html = string.Format(reader.ReadToEnd(), blobName + ".png");
    }
    htmlBlob.UploadFromStream(new MemoryStream(Encoding.UTF8.GetBytes(html)));
    htmlBlob.Properties.ContentType = "text/html";
    htmlBlob.SetProperties();

    blob.Delete();
}

As you can see, I also make sure to update the content type of this blob to “text/html”. And yeah…I also make sure to delete the original file.
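
Since the method has been shown in fragments, here it is assembled in one piece

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    // Download the uploaded image and run it through the image pipeline.
    var ms = new MemoryStream();
    blob.DownloadToStream(ms);

    var image = new Bitmap(ms);
    var newImage = image.Resize(int.Parse(ConfigurationManager.AppSettings["AzurePaste.MaxSize"]));
    newImage.AddWatermark("AzurePaste FTW");

    // Upload the transformed image and give it a proper content type.
    var outputStream = new MemoryStream();
    newImage.Save(outputStream, ImageFormat.Png);
    outputStream.Seek(0, SeekOrigin.Begin);
    outputImage.UploadFromStream(outputStream);
    outputImage.Properties.ContentType = "image/png";
    outputImage.SetProperties();

    // Upload the little HTML page that displays the image.
    string html;
    using (var htmlStream = typeof(Functions).Assembly.GetManifestResourceStream("DarksideCookie.Azure.WebJobs.AzurePaste.WebJob.Resources.page.html"))
    using (var reader = new StreamReader(htmlStream))
    {
        html = string.Format(reader.ReadToEnd(), blobName + ".png");
    }
    htmlBlob.UploadFromStream(new MemoryStream(Encoding.UTF8.GetBytes(html)));
    htmlBlob.Properties.ContentType = "text/html";
    htmlBlob.SetProperties();

    // And finally, delete the original upload.
    blob.Delete();
}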

That is about it… I guess I should mention that the Resize() and AddWatermark() methods on the Bitmap are extension methods I have added. They are not important for the topic, so I will leave them out here. But they are available in the downloadable code below.
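
That said, minimal sketches of them, using System.Drawing and System.Drawing.Drawing2D, might look something like this (the real ones in the download may well differ):

public static Bitmap Resize(this Bitmap image, int maxSize)
{
    // Scale down so that neither width nor height exceeds maxSize.
    var scale = Math.Min((double)maxSize / image.Width, (double)maxSize / image.Height);
    if (scale >= 1) return image;

    var resized = new Bitmap((int)(image.Width * scale), (int)(image.Height * scale));
    using (var g = Graphics.FromImage(resized))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(image, 0, 0, resized.Width, resized.Height);
    }
    return resized;
}

public static void AddWatermark(this Bitmap image, string text)
{
    // Draw a semi-transparent text overlay in the bottom left corner.
    using (var g = Graphics.FromImage(image))
    using (var font = new Font("Arial", 12))
    using (var brush = new SolidBrush(Color.FromArgb(128, Color.White)))
    {
        g.DrawString(text, font, brush, 5, image.Height - 25);
    }
}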

There is one more thing though… What happens if my code is borked, or someone uploads a file that isn’t an image? Well, in that case, the code will fail, and fail, and fail… Each time it fails, the message will be put back on the queue and retried at a later point. Unfortunately, this can turn ugly, and turn into what is known as a poison message. Luckily, this is handled by default. After a configurable number of retries, the message is considered a poison message and is discarded. When that happens, a new message is automatically added to a dynamically created queue to notify us about it. So it might be a good idea to add a quick little job to that queue as well, and log any poison messages.
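
The retry limit lives on the same JobHostConfiguration as the batch size. A sketch, assuming WebJobs SDK 1.x (5 is, as far as I know, the default):

static void Main()
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 2;
    // After this many failed dequeues, the message is moved to the poison queue.
    config.Queues.MaxDequeueCount = 5;

    var host = new JobHost(config);
    host.RunAndBlock();
}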

The name of the queue that is created is “[OriginalQueueName]-poison”, and the handler for it looks like any other WebJob. Just try not to add code in here that turns these messages into poison messages…

public static void LogPoisonBlob([QueueTrigger("image-added-poison")]string blobname, TextWriter logger)
{
    logger.WriteLine("WebJob failed: Failed to prep blob named {0}", blobname);
}

That’s it! Uploading an image through the web app will now place it in blob storage. The page then polls until the WebJob has picked the image up, transformed it, and stored it, together with a new HTML file, at the root of the storage account, giving the user a quick and easy address to share with his or her friends.

Note: If you want to run something like this in the real world, you probably want to add some form of clean-up solution as well. Maybe a WebJob on a schedule, that removes any images older than a certain age…

And as usual, there is a code sample to play with. Just remember that you need to set up a storage account for it, and set the correct connection strings in the web.config file, as well as in the WebJob project’s app.config if you want to run it locally.
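
For reference, the WebJobs host looks for two connection strings named AzureWebJobsStorage and AzureWebJobsDashboard by default, so the config should contain something along these lines (on top of whatever name your own code reads):

<connectionStrings>
    <add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=[AccountName];AccountKey=[AccountKey]" />
    <add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=[AccountName];AccountKey=[AccountKey]" />
</connectionStrings>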

Note: It might be good to know that logs and information about current WebJobs can be found at https://[ApplicationName].scm.azurewebsites.net/azurejobs/#/jobs

Source code: DarksideCookie.Azure.WebJobs.AzurePaste.zip (52.9KB)

Cheers!

Making X.509 authenticated HTTP requests in Windows 8 apps – a.k.a Calling the Azure Management API from Store apps

Recently I decided that I wanted to see how easy it would be to build a Windows 8 application that consumed the Windows Azure Management API. It seemed like it should be an easy thing, and something that could potentially end up in a nice management/overview/dashboard kind of application. However, it isn’t quite as simple as I thought, as Windows Azure uses certificates to authenticate the HTTP requests.

Using certificates for HTTP requests isn’t really that hard, at least not when working in .NET. But in Windows 8 apps, we are using WinRT, which is way more sandboxed and, to be honest, a bit more complicated…

More...

Getting started with Node.js or Hello World in JavaScript

Node.js has gotten a whole lot of attention for some time now, and I guess it is time for me to form an opinion on what it is, and why it is so cool. And while doing so, I will try to write some blog posts offering my opinions and learnings regarding the platform.

In this first post, I will walk through setting up the “environment” needed to build apps, and build an initial “Hello World” app to see that it all works. But let’s start with the first question you will ask if you have never worked with Node. What is Node? Well, it is basically an application runtime built on Chrome’s JavaScript engine, V8. And what does that mean? Well, it means that you get a way to run JavaScript efficiently outside of the browser.

More...

Building a simple custom STS using VS2012 & ASP.NET MVC

In my previous post, I walked through how “easily” one can take advantage of claims-based authentication in ASP.NET. In that post, I switched out the good old forms authentication stuff for the new FedAuth stuff. In this post, I want to take it a step further and actually federate my security, but instead of just using the Windows Azure ACS’s built-in identity providers, I want to build a very simple one of my own.

A lot of the solution is based on the STS project that we could get by using VS2010 and the WIF SDK. However, that project was a Web Site project using Web Forms, and I really wanted an MVC version for different reasons.

If you are fine with using VS2010 and the WIF SDK, adding a custom STS is really easy. Just create a new web project, right-click the project and choose “Add STS Reference…”. Walking through the wizard, there will be a step that lets you select an STS. In this step, you choose “Create a new STS project…”, which will generate a custom STS project that you can modify to your needs. Unfortunately, that option isn’t available in VS2012. Using the “Identity and Access” add-on, you are only allowed to connect to an existing STS, the ACS or a local test STS, not to generate an STS project.

More...

Compressing messages for the Windows Azure Service Bus

As a follow-up to my previous post about encrypting messages for the Service Bus, I thought I would re-use the concepts, but instead of encrypting the messages I would compress them.

As the Service Bus has limitations on how big messages are allowed to be, compressing the message body is actually something that can be really helpful. While I don’t think sending massive messages is the best thing in all cases, the 256kb limit can be a little low at times.

Anyhow… The basic idea is exactly the same as last time, no news there…but to be honest, I think this type of compression should be there by default, or at least be available as a feature of BrokeredMessage… However, as it isn’t, I will just make do with extension methods…

More...

Encrypting messages for the Windows Azure Service Bus

A week ago I ran into Alan Smith at Stockholm Central Station on the way to the Scandinavian Developer Conference. We were both doing talks about Windows Azure, and we started talking about different Windows Azure features and thoughts. At some point, Alan mentioned that he had heard a few people say that they would like to have their BrokeredMessages encrypted. For some reason this stuck with me, and I decided to give it a try…

My first thought was to inherit the BrokeredMessage class, and introduce encryption like that. Basically, pass in an encryption strategy in the constructor, and handle all encryption and decryption inside this subclass. However, about 2 seconds into my attempt, I realized that the BrokeredMessage class is sealed. An annoying, but somewhat understandable, decision made by Microsoft. Ok, so I couldn’t inherit the class. What can you do then? Well, there is no way to stop me from creating a couple of extension methods…

More...

SDC 2013 Service Bus Talk Demo Code

Yesterday I did a talk about the Windows Azure Service Bus at the Scandinavian Developer Conference in Gothenburg. As a part of that, I promised to make all the code I demoed available here on my blog, so here it is. The only thing you need to do to be able to run it is to set up a new Service Bus service in the Azure portal, and then copy the namespace and key into the App.config file available in the “Shared” folder.

The App.config in the “Shared” folder is shared throughout all the projects in the solution, so you only need to change that single file. The code will however default to using the “owner” account, which, as I made pretty clear during the talk, you shouldn’t use. But for a demo like this, it will have to do.

Code: GetOnTheBus - Demo Code.zip (314.08 kb)

File uploads through Windows Azure Mobile Services - take 2

So a couple of weeks ago I posted this blog post on how to upload files to blob storage through Mobile Services. In it, I described how one could do a Base64-encoded string upload of the file, and then let the mobile service endpoint convert it and send it to blob storage.

The upside to this is that the client doesn’t have to know anything about where the files are actually stored, and doesn’t need any blob-storage-specific code. Instead, it can go on happily knowing nothing about Azure except Mobile Services. It also means that you don’t have to distribute the access keys to your storage together with the application.

I did however mention that there was another way, using shared access signatures (SAS). Unfortunately, these have to be generated by some form of service that has knowledge of the storage keys. Something like an Azure compute instance. However, paying for a compute instance just to generate SASes (plural of SAS…?) seems unnecessary, which is why I opted to go with the other solution.

More...

A way to upload files to Windows Azure Mobile Services

Ok, so it is time for another Mobile Services post, I believe. My previous posts about the subject have covered the basics as well as authentication when it comes to Mobile Services. But so far, I have only been doing the simplest tasks, such as adding and reading data from a SQL Database. However, I have mentioned that Mobile Services is supposed to be sort of a layer on top of more of Microsoft’s cloud offerings, like the Service Bus, storage, etc. So in this post, I want to demo how you can utilize Mobile Services to upload files to blob storage.

There are probably a lot of different ways to do this, but two stood out for me: the one I am about to describe, using public containers, and using shared access signatures (SAS). So before going about it “my way”, I am going to explain SAS, and why I don’t like it even though it might be a “cleaner” way to do it.

More...

Authenticating users in Windows Azure Mobile Services

In my previous post about Mobile Services, I talked about how to get started with the service. I also promised that I would follow up with a post about how to authenticate the users, so that is what this post is going to be about.

You currently have 4 different options when it comes to authentication: Microsoft ID (previously Live ID), Facebook, Twitter and Google. They are all 3rd party services, and require your users to have accounts with one of the providers. Luckily, most users already do. And the neat thing about using 3rd party authentication is that you don’t have to care about handling sensitive data such as usernames and passwords. Leaving that to someone else makes your life a lot less complicated. Not to mention that having Mobile Services handle all of the actual interaction with them makes your life ridiculously simple, as you will see.

More...