My intro to Docker - Part 2 of something

In the previous post, I talked a bit about what Docker is, how it works, and so on. And I even got to the point of showing how you can create and start containers using existing images from Docker Hub. However, just downloading images and running containers like that is not very useful. Sure, as a Microsoft dev, it's kind of cool to start up a Linux container and try out some leet Linux commands in bash. But other than that it is a little limiting. So I thought I would have a look at the next steps involved in making this an actually useful thing…

Creating something to host in a container

The first step in using Docker is to have something that should run inside of our containers. And since I am a .NET developer, and .NET Core has Linux support, I thought I would write a small ASP.NET Core application to run in my container.

Note: Before you can run this, you need to install the .NET Core SDK

Step one is to create an ASP.NET Core application. So I'll open up a command line window and navigate somewhere where I want to store my files. And then I'll run

mkdir DockerApp
cd DockerApp
dotnet new web

This will create a new ASP.NET Core application for us. The next step is to set it up to do something. And to do that, you can use whatever editor or IDE you want, but I'll use VS Code, so I'll just go and type

code .

And since I just want to create the simplest possible ASP.NET MVC app, I'll go ahead and add MVC in the Startup

public void ConfigureServices(IServiceCollection services) {
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env) {
    if (env.IsDevelopment()) {
        app.UseDeveloperExceptionPage();
    }
    app.UseMvcWithDefaultRoute();
}

Then I'll add a new folder called Controllers, and a class inside of it called HomeController that looks like this

using Microsoft.AspNetCore.Mvc;

namespace DockerApp.Controllers {
    public class HomeController : Controller {
        public IActionResult Index() {
            return View();
        }
    }
}

And then I'll add the actual view by adding a folder called Views, and inside of that one called Home, and inside that a Razor view called Index.cshtml that looks like this

<!DOCTYPE html>
<html><body>
    <h1>Hello from Docker</h1>
</body></html>

All of that is stock standard ASP.NET stuff, so hopefully you already knew how to do this. But I thought I would cover it anyway…

To run this application from the command line, all you have to do is go to the DockerApp folder and type

dotnet run

This will automatically compile your application and start an HTTP server that hosts your application on port 5000.

Note: Port 5000 is the default port. This can be changed using an environment variable called ASPNETCORE_URLS
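For example, something like this would run the app on port 8080 instead. This is just a sketch for a bash-style shell; in PowerShell you would set $env:ASPNETCORE_URLS instead:

```shell
# Override the default port for this one run (assumes you are in the project folder)
ASPNETCORE_URLS=http://+:8080 dotnet run
```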

Creating your own images

Now that we have an app that we want to run as a container, we want to package it up in an image that we can then use to start containers from. There are 2 ways to do this.

Option 1 is to do it interactively. This means that we start up a container based on the image we want to use as our base, passing in -it, to make it interactive. Then we configure and set everything up that we need inside that container. And finally, we call docker commit [container name] to create an image based on that container.
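As a sketch, with made-up container and image names, that interactive flow could look something like this:

```shell
# Start an interactive container from a base image
docker run -it --name mycontainer ubuntu bash

# ...set up whatever you need inside the container, then exit...

# Back on the host, snapshot the container's current state as a new image
docker commit mycontainer myimage
```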

Option 2 is to create a dockerfile, and use that to build an image. And since option 1 isn't recommended for several reasons, I'll focus on this instead.

So I'll go ahead and create a file called dockerfile. And being a Windows person, I normally just create it using Explorer, but if you want to do it in the command line like a "real developer", you can run the following command

touch dockerfile

Note: This requires that touch is in your path. The easiest way is to have Git installed, and add C:\Program Files\Git\usr\bin to your path… Or just use Explorer…

Ok, so what are we doing with this dockerfile? Well, this file defines how to set up the image that you want. It starts out by saying what image it should use as a base, and then includes all the commands that need to be run, files that need to be copied and so on, to create the state that you want in your image.

So the first thing we need to do is to define what image we want to use as our base, by using the FROM keyword. And since I am doing an ASP.NET Core application, that is run using "dotnet run", I need an image that has the .NET Core SDK on it. Luckily, Microsoft has one of those for us called microsoft/aspnetcore-build. So I'll put this

FROM microsoft/aspnetcore-build

at the top of my file…

Then I need to tell it that I want to include everything inside the folder I am currently in. And I'll do that using the COPY keyword.

Note: There is also an ADD keyword which does kind of the same thing, but it will also expand tar files and a few other things. COPY just copies…

The COPY command takes 2 arguments. The source and the target inside the container. And remember, the image is Linux-based, so it doesn't have a drive letter and so on. Instead it just has "root folders". So I'll add

COPY . /app

To copy ".", which means current directory, to "/app" in the image.

Then I want to set this directory as the working directory, meaning that all commands executed will have this as the active directory. And this is done using the WORKDIR keyword like this

WORKDIR /app
And then I want to specify that my container will expose a port, using the EXPOSE keyword like this

EXPOSE 80
Wait…what!? The ASP.NET application used port 5000…didn't it? Well, it _did_. Inside the microsoft/aspnetcore-build image, the environment variable called ASPNETCORE_URLS is set to http://+:80, which tells ASP.NET Core to use that port instead of the default 5000. So that is what we want to expose…

The other option is to change the ASPNETCORE_URLS environment variable using the ENV keyword, and then expose whatever port you want. Like this

ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000
But I'll stick to using port 80.

Note: The fact that your container exposes port 80 doesn't mean that it will conflict with anything on port 80 on the host. Exposing a port just exposes it from the container, on the container's network interface. To have it exposed on the host requires us to specifically order Docker to do so, as you will see later on.

And finally, I add an entrypoint. Basically the command to run when starting a container based on this image. And I do that by adding

ENTRYPOINT ["dotnet", "run"]

There are 2 different syntaxes for this. One is the JSON array based syntax called exec form, which is preferred. The other one is called shell form, and is written just like you would write it on the command prompt… Like this

ENTRYPOINT dotnet run

Sidenote - ENTRYPOINT and CMD

Oh shiny! I’m just going to go into some detail about this, because there are some finer details that might be interesting to note with this…

The JSON array syntax has the executable as the first entry in the array, and then passes any parameters as individual entries in the array. The downside to this is that it doesn't go through a shell, so it won't do environment variable substitution for example. So running something like

ENTRYPOINT ["echo", "$ASPNETCORE_URLS"]

won't echo out the value of the ASPNETCORE_URLS environment variable. Instead it will print the literal text $ASPNETCORE_URLS.

And the second "but" is that when you run this container and pass in a parameter at the end of the "docker run" command, it will actually not change the executable that is being run. Instead, it adds the parameter as a parameter to the executable defined in the ENTRYPOINT.

On the other hand, running in shell form, you will run your command under /bin/sh -c. So it will do environment variable substitution etc. But…it won't handle any passed in parameters from the "docker run" command…
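The substitution difference is easy to demonstrate outside Docker, since it is really just the difference between passing a string to a command directly and running it through sh -c. A small sketch:

```shell
# A string containing a variable reference, kept literal with single quotes
literal='$HOME'

# No shell in between: the text is passed through as-is (exec form behaviour)
echo "$literal"

# Through sh -c: the shell substitutes the variable (shell form behaviour)
sh -c "echo $literal"
```

The first echo prints the literal text $HOME, while the second prints your actual home directory.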

There are more fine points in the choice between exec form and shell form. But you will have to read up on those in the Docker docs.

It's also worth mentioning a keyword called CMD now as well. I won't use it, but I still want to cover it, because it can be somewhat confusing…

CMD also has both exec form and shell form, and works pretty much just like ENTRYPOINT. BUT…it only provides defaults. Anything set using the CMD keyword will be the default parameters, overridden if parameters are passed in when using the "docker run" command.

CMD can be used in 2 ways (3 if you count exec and shell form as 2 different ways). It can be used instead of the ENTRYPOINT, telling the container what should happen when it starts, but still leaving it open to be changed when starting the container. For example

CMD ["dotnet", "run"]

or

CMD dotnet run

This will cause "dotnet run" to be executed at the start of the container, either on its own, or through "/bin/sh -c" depending on form. But it allows me to pass in a new startup executable when starting the container.

Remember: This requires you to NOT have an ENTRYPOINT in your file.

If you have an ENTRYPOINT as well, the CMD will end up just being parameters passed to the entrypoint executable by default. And any passed in parameters from "docker run" will replace the defaults and be passed to the entrypoint instead.
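To make the interplay concrete, here is a small sketch (not part of the image we are actually building) of an ENTRYPOINT combined with a default CMD:

```dockerfile
ENTRYPOINT ["dotnet"]
CMD ["run"]

# docker run myimage          executes: dotnet run
# docker run myimage --info   executes: dotnet --info
```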

This gives us a bit of flexibility for how our image allows the user to use it when starting the container.

I don't need any CMD in this case, but I found it reasonable to cover it here as well…

Note: The last little thing to mention is that it IS possible to change the entrypoint when running "docker run" by passing in the --entrypoint flag.
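For example, something like this (assuming an image named myimage) would ignore the entrypoint defined in the dockerfile and drop you into a shell inside the container instead:

```shell
docker run --rm -it --entrypoint bash myimage
```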

Creating your own image - back on track

Ok…after that side note on ENTRYPOINT and CMD and stuff, it is time to move on…

Now that we have a dockerfile that looks like this

FROM microsoft/aspnetcore-build
COPY . /app
WORKDIR /app
EXPOSE 80
ENTRYPOINT ["dotnet", "run"]

we can go and build an image. To do this, you go back to the terminal and run

docker build -t myimage .

Hint: If you get a weird exception about unexpected characters, make sure that your file is in UTF-8. If you create your dockerfile using the command line, or PowerShell, the file will be in UTF-16 and will need re-saving in UTF-8…

This will tell Docker to build an image based on your file, and name it myimage. So, after running that command, if you run

docker images

you will see an image called myimage in the list.

Extra: What this really does is automatically spin up the base image and execute the first command. It then stops that container and saves that storage layer as its own layer. It then starts a new container based on that layer, executes the next command, and then stops and saves. It keeps doing that until all commands have executed. At least that is the way I understand it. But the technical details aren't that important to be honest. The important part is just knowing that in the end, you will have an image that looks just like you want, every time you build it using that dockerfile. But…if the build fails, it will leave the image at the last successful layer, making it possible for you to start it up and attach to it, and figure out what went wrong, which can be really helpful…
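You can actually see those layers for yourself. After a build, docker history lists the layers of an image along with the command that created each one:

```shell
docker history myimage
```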

And with our new image in place, we can go ahead and run

docker run --rm -d -p 8080:80 --name web myimage

This will start a container based on the newly created image, mapping the host's port 8080 to the container's port 80, naming it web, and making sure the container is removed when it is stopped. So opening up a browser and browsing to http://localhost:8080 allows you to browse the application running inside the container.
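If you prefer the terminal over a browser, curl works just as well (assuming curl is installed and the container is running):

```shell
# Should return the HTML rendered by the Index view
curl http://localhost:8080
```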

And to stop it when you are done, you call

docker stop web

which will also remove the container since we added --rm.

That’s it for this time. The next post covers how you can set up multiple containers and have them work together to create more complex applications.
