Earlier this year, I spoke at a Sitecore User Group meeting in the Netherlands about running Sitecore in a Docker container for testing purposes, together with my colleagues Raymond Clemens and Roel Snetselaar. We formed a team at the Colours Hackathon 2016, spending almost 24 hours figuring out Docker and building out our very cool use case. We learned a lot, but also hit a lot of challenges due to the early stages that Windows Server 2016, Docker on Windows and Sitecore on Windows Server 2016 were in. Since Microsoft released Windows Server 2016 last week, with Docker containers as one of its coolest new features, I think it’s time to share our use case and pick up where we left off!
Check out this informative blog post about Docker for Windows by Michael Friis.
So what is Docker?
For those who are not already familiar with the concept, in short, Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
How is that different from Virtual Machines?
Well, each virtual machine includes the application, binaries and libraries and an entire guest operating system, whereas containers include the application and all of its dependencies, but share the kernel with other containers. They run as an isolated process in user space on the host operating system.
Why do we want Docker to meet Sitecore?
Our environments and team structures are getting more demanding, so a single QA instance doesn’t fulfill our requirements anymore. We require more flexibility for our ultimate CI/CD process, and the classic DTAP model is too static for our DevOps teams. And it’s just plain cool!
How cool would it be if we could spin up a new test instance for each newly developed feature, so we can test it fully isolated before filing the pull request and merging it back to the trunk?
We want faster, more reliable, production-like testing of single features or changes in a Sitecore implementation, saving time for developers & facilitating a clean branching model.
Better, time-saving workflow
We can test User Stories earlier in the release cycle, before even filing the pull request and merging it back into the trunk if you like. Continuous Integration principles dictate that you fail fast; in other words, the sooner you know there’s an issue, the cheaper it is to fix. We don’t want to merge multiple features or User Stories into a release and deploy that to our QA instance before we are able to test these new features.
Technical independence & live content
Because we can isolate features, there are no technical dependencies or integration issues between multiple changes on the trunk. You are testing a copy of the production environment with only one change to it: the new feature or User Story you are testing.
Clean & volatile test environment
No other tests have been run on the instance you will be testing on, and destructive tests are allowed. Want to test what happens if you delete a language? Or the home item? Be my guest. We’ll kick off a new container if you break this one!
This mechanism scales to any number of Scrum teams or ongoing User Stories, because you are only running the instances that are currently being tested, and interference is never an issue.
Steps to take
So what needs to be done? We have built a working proof of concept that shows how this could work:
- Create two Docker containers on Windows Server 2016: one IIS & one SQL instance.
- The SQL image contains a nightly restored copy of production data.
- Our controller downloads a specific feature branch from the VCS.
- A change is made by the controller to configure the domain name of the specific feature.
- Our controller (built in C#, but could be done via TeamCity as well, which is on our roadmap) deploys the Sitecore solution into the IIS container, via volume mapping.
- Unicorn runs to synchronize any changes from the feature branch to the test instance.
- A dashboard shows links and the status of all running containers (per User Story).
- Wish: build a Jira plugin with a direct link per feature, directly to the test container.
- Dream: use Docker images as a source for the release to production.
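The spin-up part of the steps above can be sketched in code. This is a minimal illustration under assumptions, not our actual controller: the helper name and checkout path layout are hypothetical, and the flags mirror the `docker run` command explained in detail further on.

```python
# Sketch: compose the `docker run` invocation for a feature branch.
# Helper name and paths are hypothetical illustrations.

def build_docker_run_command(branch: str, host_port: int,
                             checkout_root: str = "c:/consoleboard/checkoutdir2/_publish") -> list:
    """Return the docker CLI arguments to spin up a test container
    for the given feature branch."""
    # Mount the checked-out branch (host) into the container's working dir
    volume = f"{checkout_root}/{branch}:c:/workdir"
    return [
        "docker", "run",
        "-v", volume,
        "--name", branch,          # container name matches the branch name
        "--rm",                    # throw the container away after use
        "-it",                     # interactive shell keeps the container alive
        "-p", f"{host_port}:80",   # one external port per feature branch
        "iisbasecontainer",        # our custom IIS + Sitecore image
        "cmd",
    ]

print(" ".join(build_docker_run_command("master", 1005)))
```

A real controller would hand this argument list to a process runner (or skip the CLI entirely and use the Remote API, as we did).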
We have two repositories running for this demo: a Controller repository that does the downloading of the feature branches and the displaying of an overview of all the running containers aka feature branch test websites (the Dashboard), and a repository with the actual website. For now, this is a vanilla Sitecore 8 instance with some custom configurations (like the database etc., but no special changes for running within a Docker container).
We have also created a new image based on the “microsoft/iis” image, exposing port 80 and running ASP.NET 4.5:
RUN dism /online /enable-feature /all /featurename:IIS-ASPNET45 /NoRestart
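For illustration, that image definition could be expressed as a small Dockerfile. This is a hedged sketch reconstructed from the steps described here, not our exact file:

```dockerfile
# Sketch of the base image (hypothetical, based on the steps above)
FROM microsoft/iis
# Enable ASP.NET 4.5 inside the container
RUN dism /online /enable-feature /all /featurename:IIS-ASPNET45 /NoRestart
# Make IIS reachable from the host
EXPOSE 80
```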
Then, we installed Sitecore 8 (the same version as the one in our VCS) on this image by committing changes to it. Finally, we created our own image from this one, called “iisbasecontainer”.
Spinning up a new Sitecore instance
Well, in the end, it all boils down to starting the PowerShell CLI and running the following one line command:
docker run -v c:/consoleboard/checkoutdir2/_publish/master:c:/workdir --name master --rm -it -p 1005:80 iisbasecontainer cmd
What does this do?
- “run” tells Docker to run a command in a new container, so it creates a new container to execute all of the following commands in;
- “-v source:target” mounts a volume of the host (source) to the container (target), which in our case mounts the webroot of the newly created container to the checked out working directory from the VCS with the specific feature branch code in it (on the host);
- “--name master” names the new container “master”; we’re matching the feature branch name here;
- “--rm” automatically removes the container after exiting it. This is actually very important: otherwise Docker keeps the stopped container around, which locks up the name and gradually fills up your host’s hard drive partition. We only want the container to exist while we’re using it;
- “-it” keeps the container’s “stdin” open and allocates a pseudo-TTY, giving us an interactive command shell in the container, which is how we’re going to communicate with it;
- “-p port:port” binds external port 1005 to the container’s port 80, so we can distinguish feature branches by port numbers;
- “iisbasecontainer” is the image we have created and the one we’re using to spin up new Sitecore containers;
- “cmd” starts the CLI in the newly created container; as long as this shell is open, the container exists.
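Since each feature branch gets its own external port, the controller needs a way to pick one. One simple approach (our controller may do this differently; this helper is purely illustrative) is to derive a deterministic port from the branch name:

```python
import zlib

def port_for_branch(branch: str, base: int = 1000, span: int = 1000) -> int:
    """Map a feature branch name to a stable host port in [base, base + span).
    CRC32 is deterministic across runs, unlike Python's built-in hash()."""
    return base + zlib.crc32(branch.encode("utf-8")) % span

# The same branch always yields the same port
print(port_for_branch("master"))
```

Note that two branches can collide on the same port, so a real controller should also keep a registry of ports currently in use.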
In our controller, we did this using the Docker Remote API. We can now visit the new container via http://dockerdemo.westeurope.cloudapp.azure.com:1005 (not a real link, because it isn’t actually running at the moment) and log in to Sitecore. Minutes after creating the container!
The finishing touch
But we’re not there yet. We still need our controller application (or TeamCity, or whatever automation tool you’re using) to copy the Unicorn item sync directory to the correct location within the container (let’s assume it’s temporarily in the working directory we mounted before) and kick off Unicorn to synchronize the Sitecore databases to the latest state we’re going to test the site on. At the moment we synchronize all changes, because our demo isn’t that big, but we are planning to create a daily backup of the production database and build a SQL Docker image from that, so we only have to synchronize the latest changes from the specific feature branch.
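That step could look roughly like this. Treat it as a hypothetical sketch: the relative `App_Data/unicorn` location and Unicorn’s control URL and authentication scheme depend on your configuration and Unicorn version, so verify them against your own setup before copying anything.

```python
import shutil

def trigger_unicorn_sync(workdir: str, sync_source: str, host: str, port: int) -> str:
    """Copy the serialized items into the mounted working directory and
    build the Unicorn sync URL (sketch; authentication omitted)."""
    # Copy the feature branch's serialized items into the volume-mapped webroot
    # (the target subfolder here is an assumption, not our actual layout)
    shutil.copytree(sync_source, f"{workdir}/App_Data/unicorn", dirs_exist_ok=True)
    # Unicorn's control page; verb=Sync triggers a synchronization run
    return f"http://{host}:{port}/unicorn.aspx?verb=Sync"

# A real controller would then issue the request, e.g.:
# urllib.request.urlopen(trigger_unicorn_sync(...))
```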
Do’s and don’ts
So what are the most important learnings from this exercise?
- Use volume mapping instead of a published solution in a container: at first, we created an image per feature, but this is very slow compared to the volume mapping principle shown above.
- Create a basic container and commit changes to it instead of trying to stuff everything into a single dockerfile: at first, we tried to create a dockerfile that contained everything needed to initiate a new container, but along the way it turned out to be far more convenient to start up a basic container and commit all additional changes to it, building your new image gradually.
- Use the Remote API instead of PowerShell for advanced operations; it saves time. But PowerShell is very powerful and teaches you a lot along the way, so if you want a good understanding of what’s under the hood, PowerShell could be your friend. Switch to the API afterwards.
There’s far more to it than I can go over in this single blog post. We learned a lot and are eager to take the concept to the next level. Let me know if you’re interested in one or more details around this topic and I’ll share them. I’m also interested in hearing about similar ideas in the community or on the market right now!
Resources
- Container host deployment (to manually install Container feature on Windows Server 2016 TP5)
- Create IIS container (introduction to using dockerfile and Docker images)
- Dockerize SQL Server using Windows Server 2016 Containers (to create a SQL Server Express 2014 Docker image)
- Enable Docker insecure (to be able to easily test Docker Remote API, do not use in production environment!)
- Docker for Windows (beta)
- LibGit2Sharp (use Git programmatically; used for our controller)
- Docker.DotNet (use the Docker Remote API programmatically, including authentication options)
- Docker cheat sheet