running docker for windows with rancher on hyper-v

Posted : Monday, 07 August 2017 17:43:39

At Croud we are currently in the process of moving from EC2 hosted monolithic applications to Docker hosted microservices. The reasons for doing this are many and varied but include productivity, support, scalability, availability and so on.

The platform that we are using to manage our containers is an open source product called Rancher. Using Rancher delivers a number of benefits such as load balancing, networking, great integration with Docker Cloud and many others. Anyway, I wanted to get our stack running on my local machine (a Surface Book running Windows 10), but as I’m the only one in the team running Windows (6 Macs, 1 Linux), if I wanted to do this I was going to have to do it on my own! Looking at the Rancher docs I hit my first bump:


Never one to pay attention to docs, I decided to try it out. Downloading and installing the latest version of Docker for Windows was a breeze and after firing up Hyper-V, I could see my brand new Docker Host Virtual Machine had been created:


As you can see, the new Virtual Machine is called MobyLinuxVM. Moby is a new(ish) open-source initiative to make the Docker container ecosystem flexible, modular and extensible. Docker for Windows also creates an internal virtual network that the containers can use to communicate.


A quick test shows Docker for Windows is up and running:


So with Docker for Windows installed, I read the Rancher docs and worked out the desired state I wanted, which is shown below:


At this point I should point out the following…


Anyway, the first step was to install the Rancher Server. Working through the instructions, this is a one-line command; I ran it and voila…


Further inspection shows the Rancher server running:


Browsing http://localhost:8080 returns the Rancher homepage:


So the next step is to add a host, and this is where it gets a bit meta! What I wanted to do was add the Docker host running the Rancher Server (MobyLinuxVM), as a Rancher Host, to the Rancher Server. Essentially, to do this you need to run a command on the Docker host, and this is where I hit a major roadblock. I could not find a way to connect to the MobyLinuxVM – I tried the Hyper-V console, SSH and PuTTY but could not make a successful connection. I Googled, Binged and trawled forums but got nowhere. Then, nearly at the point of giving up, I was watching this Docker video and came across the following gem:

$> docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -i sh

What this essentially does is open a root shell in a privileged container that enters the namespaces of the init process (PID 1, which always runs on the Docker host), thereby enabling you to run commands on the Docker host itself. So pasting the command shown below into the host shell should download the image and then launch a Rancher Agent container (NB the sudo directive is not required in this case):


Like so:


I had to fiddle about with this to get it to work. The key is to provide a Rancher Server endpoint that is accessible from within the Rancher Agent container. If you are doing this on a desktop computer where your IP is typically unchanging, then it’s fine to just grab your local IP and set this (and port 8080) in the Rancher admin page. However, if you’re installing on a laptop (where your IP often changes) then I would recommend using the internal IP of the Docker host on the internal network created when installing Docker for Windows, as this typically stays fixed. Anyway, once the host is registered it will show up as follows:


One thing to note is that my desired state diagram above does not tell the whole story; as you can see from the screenshot above, there are a bunch of other services running (all in containers, obviously) that deliver DNS, load balancing, scheduling and so on:


Once installed it’s trivial – and I do mean trivial – to launch containers using a few lines of YAML. For example, this will install a WordPress container locally:

version: '2'
services:
  wordpress:
    image: wordpress

Now while this is a simple example that could be done much more easily using the docker or docker-compose CLI tools, when you consider that much more complex stacks can be defined, configured, upgraded and scaled just as easily, the argument for Rancher becomes more compelling. Layer in the infrastructure services (DNS, load balancing etc), a Rancher CLI tool and a Rancher REST API and it looks even more attractive. The icing on the cake is the support for orchestration schemes including Kubernetes and Swarm, which is what we are currently exploring. Rancher has been designed to be complementary to the Docker platform; rather than get in the way, it simply runs alongside your containers and makes life easier. Personally, I have been blown away by how easy it is to use (once the setup issues are out of the way) and am really looking forward to exploring the potential even further.


Augmenting 3rd party employee leave system using the Microsoft Graph API

Posted : Saturday, 11 February 2017 10:34:00

At TruRating we’ve been using the Appogee HR system for leave booking for a while now. It provides a pretty good employee absence management system that allows for approval flows for booking time off, sickness requests and good cross-department absence reporting. It also supports integration with Office365, meaning that an employee can request the absence through Appogee (signing in using O365 creds) and their manager will receive a notification of the request. Once the manager approves the request, the employee’s Office365 calendar is updated with the approved leave event. The same applies for sickness.

So the management view of Appogee looks as follows:


And each employee’s calendar looks as below, meaning colleagues can easily see when people are off directly from within Outlook (or through Office365 in the browser, on mobile etc):


This all works great and has been in operation company wide for over a year.

I’m all for WFH (Working From Home) and have been doing it myself on and off for years; it undoubtedly brings benefits but needs to be managed properly to keep the team functioning well. In my current role we allow everyone in the Development team to take a day a week to work from home. When a team is small, a set of simple rules that everyone follows is appropriate, such as:

    • Days WFH are arranged as far in advance as possible  (ideally 1wk with the appreciation this is not always possible) 
    • Key people should not be WFH (unless exceptional circumstances dictate)
    • Any more than 1 day per week WFH is not standard policy and needs to be requested in advance with justification
    • Any days WFH must be marked clearly on the employee’s calendar
    • Mondays should ideally be avoided as WFH days
    • Throughout any WFH days you must be signed into Skype For Business/Slack/etc

However, as the team grows this informal framework starts to break down for a number of reasons: people don’t stick to it, new starters forget the rules, and clashes happen where no one is in the office – the last one can really bite when the CEO comes over because their Mac isn’t printing and no one is around to reboot it for them. Essentially, an informal set of rules is not a scalable way to manage WFH for a team. I started to look at Appogee to see if we could use it to manage WFH days in the same way as leave and sickness. Appogee natively supports other leave types, including non-deducted, so we added Working From Home as a custom leave type. This works fine except that when the leave is approved, Appogee updates the employee’s calendar in exactly the same way as for any other leave type. So looking in Outlook there is no way to discern if someone is off or working from home:


It should be said that drilling into the calendar event will reveal the leave type:


But it’s not realistic to expect people to look at their colleagues’ calendars and click into the event details to see if a person is absent or just working from home. I asked Appogee if it was possible to augment their system to allow a “subject” override to be added for each leave type. That way, when the employee’s calendar is updated on approval, if an override is present for a given leave type then that would be used for the meeting subject. This would allow colleagues to know at a glance whether the employee in question was actually absent or in fact working from home and therefore available for calls etc. The response was that ‘yes, this sounded like a great feature’ but they had a lot of feature requests, so I should join the forum and see how popular it might be – with sufficient demand etc. Not what I wanted to hear, but I understand the demands on software vendors all too well, so I started to investigate other options.

In essence, all I wanted was this: when a particular event is created in Office365 (by Appogee), update that event and change the meeting subject – surely not too difficult. As it turns out it was pretty straightforward, although a bit fiddly to set up due to a lack of documentation. The system I developed consists of two main subsystems: a service that manages subscriptions to each employee’s calendar, leveraging the Microsoft Graph API to subscribe to updates on selected users’ calendars, and a notification service that receives those update notifications and triggers a subsequent update of each matching calendar event to reset the meeting subject. The process flow is shown below from the point at which the WFH request is approved in Appogee.


The solution makes use of Office365 Notifications and Subscriptions; please see this link for more details.

In order to receive event update notifications from the Microsoft Graph, an appropriate subscription must be in place for each user; these subscriptions are managed via the Microsoft Graph API. There is a SQL Azure database in the solution which maintains a list of all users, stores the event subscription for each user, and stores notifications received via the API App for later processing by the WebJob.
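As a rough sketch of what one of those per-user subscriptions looks like on the wire: the field names below come from the Graph subscriptions API, but the endpoint, user id, notification URL and lifetime are illustrative only – the real solution is .NET, not Python.

```python
import json
from datetime import datetime, timedelta, timezone

# Illustrative endpoint; subscriptions are created by POSTing a body like this
GRAPH_SUBSCRIPTIONS_URL = "https://graph.microsoft.com/v1.0/subscriptions"

def build_event_subscription(user_id, notification_url, lifetime_hours=48):
    """Build the request body for creating a calendar-event subscription."""
    expiry = datetime.now(timezone.utc) + timedelta(hours=lifetime_hours)
    return {
        "changeType": "updated",                        # we only care about event updates
        "notificationUrl": notification_url,            # the API App callback endpoint
        "resource": "users/{}/events".format(user_id),  # this user's calendar events
        "expirationDateTime": expiry.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "clientState": "shared-secret",                 # echoed back in each notification
    }

payload = build_event_subscription(
    "alice@example.com", "https://example.azurewebsites.net/api/notifications")
print(json.dumps(payload, indent=2))
```

Subscriptions are deliberately short-lived, which is why the WebJob described below has to keep renewing them.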

There are two custom applications in the solution, an API App hosted in Azure App Service and an Azure WebJob – each is described below. I could have run the custom calendar update inline within the API App (e.g. as an async task); however, during development it quickly became clear that keeping the API as thin as possible made the solution easier to test and much faster to iterate on.

API App:

    • Receive HTTP subscription validation requests and respond appropriately – this is a requirement of creating a Graph notification subscription
    • Receive HTTP update notifications and store for later processing
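The validation handshake is simple: when you create a subscription, Graph calls your notification URL with a validationToken query parameter and expects that token echoed straight back as plain text, while real notifications should just be persisted and acknowledged quickly. A framework-agnostic sketch of that dispatch logic (function and parameter names are mine, not from the solution):

```python
def handle_graph_callback(query_params, body, store):
    """Dispatch one incoming call from Microsoft Graph.

    Returns (status_code, content_type, response_body). Validation requests
    carry a 'validationToken' query parameter which must be echoed back as
    text/plain; anything else is a change notification, which we persist for
    the WebJob and acknowledge with 202 so Graph doesn't keep retrying.
    """
    token = query_params.get("validationToken")
    if token is not None:
        return 200, "text/plain", token
    store.append(body)  # stored in the SQL Azure database in the real solution
    return 202, None, None

pending = []
print(handle_graph_callback({"validationToken": "abc123"}, None, pending))
# -> (200, 'text/plain', 'abc123')
```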

Azure WebJob:

(The Azure WebJob runs every 5 minutes and on each invocation performs the tasks listed below)

    • Check that each user has a valid subscription; if not, a new subscription is created
    • If the user has a valid subscription that is due to expire within the next 10 hours, the subscription is renewed
    • Process any pending notifications received via the API App and make calendar updates as appropriate
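The create-or-renew decision above boils down to a simple window check. As a sketch (the 10-hour window comes from the description above; the function name and structure are mine):

```python
from datetime import datetime, timedelta, timezone

RENEWAL_WINDOW = timedelta(hours=10)  # renew anything expiring within the next 10 hours

def needs_renewal(expires_at, now=None):
    """True if the user has no valid subscription, or if the one they
    have expires inside the renewal window."""
    if expires_at is None:      # no subscription on record -> create one
        return True
    if now is None:
        now = datetime.now(timezone.utc)
    return expires_at - now < RENEWAL_WINDOW
```

Running this every 5 minutes with a 10-hour window gives plenty of slack, so a few failed WebJob runs won’t let a subscription lapse.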


With the WebJob and API App in place, I create a pending WFH request via Appogee; my manager (hopefully) approves it, which causes Appogee to update my calendar. That update in turn triggers a notification to the API App, which is stored in the SQL Azure database. On the next invocation of the WebJob, the pending notification is loaded and the details of the event are retrieved using the Graph API; if the event subject is “Approved Leave” and the event body contains “Working From Home” then the subject is updated to “Approved WFH” and the status set to “Working Elsewhere”. Finally, the notification is marked as processed.
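That matching rule is the heart of the whole workaround, so here it is pinned down as a sketch. The string constants come from the description above; I’m assuming the status maps onto the Graph calendar event’s `showAs` property, whose free/busy enum includes `workingElsewhere`:

```python
def wfh_patch_for(subject, body_text):
    """Return the update payload for an approved-WFH event, or None to skip.

    Matches the rule described above: the subject must be exactly
    "Approved Leave" and the event body must mention "Working From Home".
    """
    if subject == "Approved Leave" and "Working From Home" in body_text:
        return {"subject": "Approved WFH", "showAs": "workingElsewhere"}
    return None
```

Keeping the rule this narrow means ordinary approved leave (same subject, different body) passes through untouched.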


The Microsoft Graph API is pretty easy to use and exposes a load of really great functionality. Creating a subscription uses a credential-based OAuth token, but updating employee calendars requires a certificate-based OAuth token, which is much more time consuming to set up – it can all be done through the Azure portal but took me a bit of trial and error to get right. I will blog about the setup in a future post.

The solution itself is written in .NET 4.5 and is available on GitHub.


hosting a minecraft server in windows azure using resource manager templates

Posted : Saturday, 19 March 2016 21:10:00

My eldest child is 5 and, like so many other parents of children that age, I have found that Minecraft has become part of my life (by proxy at least). While up until this point it was purely a conversational presence with minimal technical input required, my son recently asked if he could play online with his school friends. Reading up on this, there are plenty of service providers out there as well as plenty of guides on how to self-host… I figured that it couldn’t be that hard, so I’d give self-hosting a go.

Originally I planned to run it on my home server but after a bit of thought I decided that a cloud based approach would make the most sense (reliability, availability, support etc.) especially as I get regular monthly Azure credits by virtue of my employer being part of the BizSpark program. So how to go about this...

I found a VM Image on the Marketplace:


Going through the setup process I noted that this image relies on the Classic deployment model so might possibly be a bit old...



I ran through the setup process, clicked create, and after about 10 minutes the VM was up and running. I fired up the Minecraft client and tried to connect to the server via the DNS name on the settings page of the VM in the Azure portal...


Rats! The Minecraft client on my PC is 1.9 and the server in the VM image is 1.8 – based on that message, 1.9 is not backwards compatible. I considered setting up a VNC client and trying to remote in, or using SSH to upgrade the server, both of which sounded a bit fiddly... after a bit of Googling I found this post:


This post uses a different approach, namely scripting the Minecraft server via an Azure Resource Manager template. This approach is preferable for a number of reasons including the fact that it uses the currently favoured Azure Resource Management API. I followed the blog post which said to go to this GitHub project:


and hit this button:


This button took me to the Azure portal where I filled in a few details as per the post, accepted some Ts+Cs and bosch! I was done – RIDICULOUSLY easy!

10 or so minutes later and my server was up and running, as before I grabbed the DNS name for the VM and connected my Minecraft client to it:


Slightly disappointing but I figured this would be easier to upgrade than the VM image I had used before. I went back to the GitHub repo and of the five files listed:


I figured that “install_minecraft.sh” would probably be the place to look...


Sure enough, lines 50, 52 and 74 looked like good candidates to update. I made sure that the 1.9 server jar was located where I guessed it should be, https://s3.amazonaws.com/Minecraft.Download/versions/1.9/minecraft_server.1.9.jar, which it was. I forked the repo, edited the file via the GitHub user interface and committed. I had a quick review of the other files and noted a couple of additional required changes:

1) README.md – this is the page that contains the magical “Deploy to Azure” button; I had to update this to use the template from my fork


2) azuredeploy.json – I had to alter this file to use the updated “install_minecraft.sh” script from my fork


I committed both these changes and then clicked the “Deploy to Azure” button on the home page of my fork (the one I updated in 1) above). I filled out the fields as before, hit create, and 10 minutes later my updated server was up and running. I grabbed the DNS name for the VM from the Azure portal:


Connected to it from my Minecraft client (fingers crossed)…



TA DA!!! Ace! – my son, his friends and my nephews are all able to connect and play.

For the record, I knew nothing about Azure Resource Manager templates before this post and, without any prior reading, was able to jump straight into modifying them (thanks to GitHub) and see the results pretty much instantly. Azure just keeps getting better!

The only other thing I did was add the Minecraft version as a parameter to the script, which allows the version to be chosen via the template, so future upgrades should now be easier to support. The whole thing was really easy, really satisfying, and as a bonus I have submitted my first pull request too – sweet!
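For anyone unfamiliar with ARM templates, a version parameter looks something like the sketch below – the parameter name here is my guess, not necessarily what ended up in the pull request:

```json
"parameters": {
  "minecraftVersion": {
    "type": "string",
    "defaultValue": "1.9",
    "metadata": {
      "description": "Version of the Minecraft server jar to install"
    }
  }
}
```

Elsewhere in the template the value can be referenced with `[parameters('minecraftVersion')]` and passed through to install_minecraft.sh.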

