
Augmenting 3rd party employee leave system using the Microsoft Graph API

Posted : Saturday, 11 February 2017 10:34:00

At TruRating we’ve been using the Appogee HR system for leave booking for a while now. It provides a pretty good employee absence management system, with approval flows for booking time off, sickness requests and good cross-department absence reporting. It also supports integration with Office365, meaning an employee can request the absence through Appogee (signing in using O365 creds) and their manager will receive a notification of the request. Once the manager approves the request, the employee’s Office365 calendar is updated with the approved leave event. The same applies for sickness.

So the management view of Appogee looks as follows:

image

And each employee’s calendar looks as below, meaning colleagues can easily see when people are off directly from within Outlook (or through Office365 in the browser, on mobile, etc.):

image

This all works great and has been in operation company wide for over a year.

I’m all for WFH (Working From Home) and have been doing it myself on and off for years; it undoubtedly brings benefits, but it needs to be managed properly to keep the team functioning well. In my current role we allow all of the development team to take a day a week to work from home. When a team is small, a set of simple rules that everyone follows is appropriate, such as:

    • Days WFH are arranged as far in advance as possible (ideally 1wk, with the appreciation this is not always possible)
    • Key people should not be WFH (unless exceptional circumstances dictate)
    • Any more than 1 day per week WFH is not standard policy and needs to be requested in advance with justification
    • Any days WFH must be marked clearly on the employee’s calendar
    • Mondays should ideally be avoided as WFH days
    • Throughout any WFH days you must be signed into Skype for Business/Slack/etc.

However, as the team grows this informal framework starts to break down for a number of reasons: people don’t stick to it, new starters forget the rules, and clashes happen where no one is in the office – the last one can really bite when the CEO comes over because their Mac isn’t printing and no one is around to reboot it for them. Essentially, an informal set of rules is not a scalable way to manage WFH for a team. I started to look at Appogee to see if we could use it to manage WFH days in the same way as leave and sickness. Appogee natively supports other leave types, including non-deducted ones, so we added Working From Home as a custom leave type. This works fine, except that when the leave is approved Appogee updates the employee’s calendar in exactly the same way as any other leave type. So looking in Outlook there is no way to discern whether someone is off or working from home:

image

It should be said that drilling into the calendar event will reveal the leave type:

image

But it’s not realistic to expect people to look at a colleague’s calendar and click into the event details to see whether a person is absent or just working from home. I asked Appogee if it was possible to augment their system to allow a “subject” override to be added for each leave type; that way, when the employee’s calendar is updated on approval, if an override is present for the given leave type it would be used as the meeting subject. This would allow colleagues to know at a glance whether the employee in question was actually absent or in fact working from home and therefore available for calls etc. The response was that yes, this sounded like a great feature, but they had a lot of feature requests, so I should join the forum and see how popular it might be – with sufficient demand etc. Not what I wanted to hear, but I understand the demands on software vendors all too well, so I started to investigate other options.

In essence, all I wanted was as follows: when a particular event is created in Office365 (by Appogee), update that event and change the meeting subject – surely not too difficult. As it turns out it was pretty straightforward, although a bit fiddly to set up due to the lack of documentation. The system I developed consists of two main subsystems: a service that manages subscriptions to each employee’s calendar, leveraging the Microsoft Graph API to subscribe to updates on selected users’ calendars, and a notification service that receives those update notifications and triggers a subsequent update of each matching calendar event to reset the meeting subject. The process flow is shown below from the point at which the WFH request is approved in Appogee.

image

The solution makes use of Office365 notifications and subscriptions – please see this link for more details.

In order to receive event update notifications from the Microsoft Graph, an appropriate subscription must be in place for each user; these subscriptions are managed via the Microsoft Graph API. There is a SQL Azure database in the solution which maintains a list of all users, stores the event subscription for each user, and stores notifications received via the API App for later processing by the WebJob.

There are two custom applications in the solution, an API App hosted in Azure App Service and an Azure WebJob – each is described below. I could have run the custom calendar update inline within the API App (e.g. as an async task), however during development it quickly became clear that keeping the API as thin as possible made the solution easier to test and much faster to iterate on.

API App:

    • Receives HTTP subscription validation requests and responds appropriately – this is a requirement of creating a Graph notification subscription
    • Receives HTTP update notifications and stores them for later processing (both behaviours are sketched below)
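A minimal sketch of that endpoint, assuming ASP.NET Web API 2 – the controller and store names here are illustrative rather than taken from the actual solution:

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;

// Illustrative wrapper around the SQL Azure database described above.
public interface INotificationStore
{
    Task SaveAsync(string payload);
}

public class NotificationsController : ApiController
{
    private readonly INotificationStore _store;

    public NotificationsController(INotificationStore store)
    {
        _store = store;
    }

    [HttpPost]
    public async Task<HttpResponseMessage> Post()
    {
        // Validation handshake: when a subscription is created, the Graph POSTs
        // ?validationToken=... and expects the token echoed back as text/plain.
        var token = Request.GetQueryNameValuePairs()
                           .FirstOrDefault(kv => kv.Key == "validationToken").Value;
        if (!string.IsNullOrEmpty(token))
        {
            var ok = Request.CreateResponse(HttpStatusCode.OK);
            ok.Content = new StringContent(token, Encoding.UTF8, "text/plain");
            return ok;
        }

        // Update notification: store the raw payload for the WebJob and return
        // quickly so the Graph doesn't consider the endpoint unresponsive.
        var payload = await Request.Content.ReadAsStringAsync();
        await _store.SaveAsync(payload);
        return Request.CreateResponse(HttpStatusCode.Accepted);
    }
}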

Azure WebJob (runs every 5 minutes; each invocation performs the tasks below):

    • Checks that each user has a valid subscription; if not, a new subscription is created
    • If a user’s subscription is valid but due to expire within the next 10 hours, the subscription is renewed (creation and renewal are sketched below)
    • Processes any pending notifications received via the API App and makes calendar updates as appropriate
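Creation and renewal are plain REST calls. Below is a sketch assuming raw HttpClient requests with Json.NET for serialisation and an HttpClient that already carries an app-only bearer token; the notificationUrl is a placeholder for the API App’s endpoint, and the actual repo may structure this differently:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

static class SubscriptionManager
{
    const string GraphBase = "https://graph.microsoft.com/v1.0";

    // POST /subscriptions - watch one user's calendar for updated events.
    public static async Task CreateAsync(HttpClient http, string userId)
    {
        var body = new
        {
            changeType = "updated",
            notificationUrl = "https://contoso-wfh.azurewebsites.net/api/notifications", // the API App
            resource = $"users/{userId}/events",
            // Calendar subscriptions are short-lived, hence the renewal pass on every run.
            expirationDateTime = DateTime.UtcNow.AddDays(2).ToString("o")
        };
        var response = await http.PostAsync($"{GraphBase}/subscriptions",
            new StringContent(JsonConvert.SerializeObject(body), Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }

    // PATCH /subscriptions/{id} - push the expiry out for a subscription close to lapsing.
    public static async Task RenewAsync(HttpClient http, string subscriptionId)
    {
        var body = new { expirationDateTime = DateTime.UtcNow.AddDays(2).ToString("o") };
        var request = new HttpRequestMessage(new HttpMethod("PATCH"), $"{GraphBase}/subscriptions/{subscriptionId}")
        {
            Content = new StringContent(JsonConvert.SerializeObject(body), Encoding.UTF8, "application/json")
        };
        (await http.SendAsync(request)).EnsureSuccessStatusCode();
    }
}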

 

With the WebJob and API App in place, I create a pending WFH request via Appogee, my manager (hopefully) approves it, and this causes Appogee to update my calendar. That update in turn triggers a notification to the API App, which is stored in the SQL Azure database. On the next invocation of the WebJob the pending notification is loaded, details of the event are retrieved using the Graph API, and if the event subject is “Approved Leave” and the event body contains “Working From Home” then the subject is updated to “Approved WFH” and the status is set to “Working Elsewhere”. Finally, the notification is marked as processed.

image
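Under the hood, that per-notification step boils down to a fetch, a check and a patch. A sketch under the same assumptions as above (parsing the user and event IDs out of the stored notification payload is omitted):

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

static class EventProcessor
{
    public static async Task ProcessAsync(HttpClient http, string userId, string eventId)
    {
        var url = $"https://graph.microsoft.com/v1.0/users/{userId}/events/{eventId}";

        // Fetch the event the notification pointed at.
        var ev = JObject.Parse(await http.GetStringAsync(url));
        var subject = (string)ev["subject"];
        var body = (string)ev.SelectToken("body.content") ?? "";

        // Only rewrite Appogee-created leave events that are really WFH requests.
        if (subject == "Approved Leave" && body.Contains("Working From Home"))
        {
            // "workingElsewhere" is the Graph free/busy value shown as "Working Elsewhere".
            var patch = new { subject = "Approved WFH", showAs = "workingElsewhere" };
            var request = new HttpRequestMessage(new HttpMethod("PATCH"), url)
            {
                Content = new StringContent(JsonConvert.SerializeObject(patch), Encoding.UTF8, "application/json")
            };
            (await http.SendAsync(request)).EnsureSuccessStatusCode();
        }
    }
}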

The Microsoft Graph API is pretty easy to use and exposes a load of really great functionality. Creating a subscription uses a credential-based OAuth token, whereas updating employee calendars requires a certificate-based OAuth token, which is much more time consuming to set up – it can all be done through the Azure portal, but it took me a bit of trial and error to get it configured correctly. I will blog about the setup in a future post.
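For reference, acquiring the two app-only tokens with ADAL (the Microsoft.IdentityModel.Clients.ActiveDirectory package) looks roughly like the sketch below – the tenant, client ID, secret and certificate details are all placeholders:

using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

static class GraphTokens
{
    const string Authority = "https://login.microsoftonline.com/mytenant.onmicrosoft.com";
    const string Resource = "https://graph.microsoft.com";

    // Client secret credential - sufficient for the subscription calls.
    public static async Task<string> FromSecretAsync()
    {
        var context = new AuthenticationContext(Authority);
        var result = await context.AcquireTokenAsync(
            Resource, new ClientCredential("my-client-id", "my-client-secret"));
        return result.AccessToken;
    }

    // Certificate credential - needed for the calendar updates.
    public static async Task<string> FromCertificateAsync()
    {
        var context = new AuthenticationContext(Authority);
        var cert = new X509Certificate2("graph-app.pfx", "pfx-password");
        var result = await context.AcquireTokenAsync(
            Resource, new ClientAssertionCertificate("my-client-id", cert));
        return result.AccessToken;
    }
}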

The solution itself is written in .Net 4.5 and is available on GitHub.


Hosting a Minecraft server in Windows Azure using Resource Manager templates

Posted : Saturday, 19 March 2016 21:10:00

My eldest child is 5 and, like so many other parents of children that age, I’ve found Minecraft has become part of my life (by proxy at least). While up until this point it was purely a conversational presence with minimal technical input required, my son recently asked if he could play online with his school friends. Reading up on this, there are plenty of service providers out there as well as plenty of guides on how to self-host… I figured it couldn’t be that hard, so I’d give self-hosting a go.

Originally I planned to run it on my home server but after a bit of thought I decided that a cloud based approach would make the most sense (reliability, availability, support etc.) especially as I get regular monthly Azure credits by virtue of my employer being part of the BizSpark program. So how to go about this...

I found a VM Image on the Marketplace:

https://azure.microsoft.com/en-us/marketplace/partners/microsoft/minecraftserver/

Going through the setup process I noted that this image relies on the Classic deployment model so might possibly be a bit old...

image

 

I ran through the setup process, clicked create, and after about 10 minutes the VM was up and running. I fired up the Minecraft client and tried to connect to the server via the DNS name on the settings page of the VM in the Azure portal...

image

Rats! The Minecraft client on my PC is 1.9 and the server in the VM image is 1.8 – based on that message, 1.9 is not backwards compatible. I considered setting up a VNC client and trying to remote in, or using SSH to upgrade the server, both of which sounded a bit fiddly... after a bit of Googling I found this post:

https://msftstack.wordpress.com/2015/09/05/creating-a-minecraft-server-using-an-azure-resource-manager-template/

This post takes a different approach, namely scripting the Minecraft server setup via an Azure Resource Manager template. This approach is preferable for a number of reasons, including the fact that it uses the currently favoured Azure Resource Manager API. I followed the blog post, which said to go to this GitHub project:

https://github.com/Azure/azure-quickstart-templates/tree/master/minecraft-on-ubuntu

and hit this button:

image

This button took me to the Azure portal where I filled in a few details as per the post, accepted some Ts+Cs and bosh! I was done – RIDICULOUSLY easy!

10 or so minutes later my server was up and running; as before, I grabbed the DNS name for the VM and connected my Minecraft client to it:

image

Slightly disappointing, but I figured this would be easier to upgrade than the VM image I had used before. I went back to the GitHub repo and, of the five files listed:

image

I figured that “install_minecraft.sh” would probably be the place to look...

image

Sure enough, lines 50, 52 and 74 looked like good candidates to update. I made sure that the 1.9 server jar was located where I guessed it should be (https://s3.amazonaws.com/Minecraft.Download/versions/1.9/minecraft_server.1.9.jar), which it was. I forked the repo, edited the file via the GitHub user interface and committed. A quick review of the other files turned up a couple of additional required changes:

1) README.md – this is the page that contains the magical “Deploy to Azure” button; I had to update this to use the template from my fork (sketched after these two items)

image

2) azuredeploy.json – I had to alter this file to use the updated “install_minecraft.sh” script from my fork (also sketched below)

image
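For reference, the two edits look roughly like this – “myuser” stands in for the GitHub account holding the fork, and the exact paths may differ:

README.md – the button is a markdown image link that sends the portal the URL-encoded address of the fork’s raw azuredeploy.json:

[![Deploy to Azure](http://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fmyuser%2Fazure-quickstart-templates%2Fmaster%2Fminecraft-on-ubuntu%2Fazuredeploy.json)

azuredeploy.json – the CustomScript extension needs to download the fork’s copy of the install script:

"fileUris": [
  "https://raw.githubusercontent.com/myuser/azure-quickstart-templates/master/minecraft-on-ubuntu/install_minecraft.sh"
]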

I committed both these changes and then clicked the “Deploy to Azure” button on the home page of my fork (the one I updated in step 1 above). I filled out the fields as before, hit create, and 10 minutes later my updated server was up and running. I grabbed the DNS name for the VM from the Azure portal:

image

Connected to it from my Minecraft client (fingers crossed)…

image

 

TA DA!!! Ace! – my son, his friends and my nephews are all able to connect and play :-)

For the record, I knew nothing about Azure Resource Manager templates before this post and, without any prior reading, was able to jump straight into modifying them (thanks to GitHub) and see the results pretty much instantly. Azure just keeps getting better!

The only other thing I did was add the Minecraft version as a parameter to the script, allowing the choice of Minecraft version via the template, so future upgrades should now be easier to support. The whole thing was really easy, really satisfying and, as a bonus, I have submitted my first pull request too – sweet!
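The parameter change is sketched below; the property names are from memory and the merged template may differ slightly:

"parameters": {
  "minecraftVersion": {
    "type": "string",
    "defaultValue": "1.9",
    "metadata": { "description": "Version of the Minecraft server jar to install" }
  }
}

with the value passed through to the install script by the CustomScript extension, along the lines of:

"commandToExecute": "[concat('bash install_minecraft.sh ', parameters('minecraftVersion'))]"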


Configuring New Relic in Azure Cloud Services

Posted : Wednesday, 20 August 2014 21:58:00

With TruRating so close to launch we have just put our monitoring service in place – we looked at the options and New Relic was the stand-out choice. One problem we needed to solve that wasn’t supported natively by New Relic was the ability to dynamically configure the application name at deploy time. The basic New Relic portal screen looks as follows, showing a list of all applications:

 

image

 

I have none :-(

IIS application names (Cloud Service web roles) are registered via an appSetting:

<appSettings>
  <add key="NewRelic.AppName" value="LOCAL_APP" />
</appSettings>

If you want to “promote” this to a cloud setting it’s a bit fiddly, as you need to get at the cloud package which contains the transforms that set the web.config value. To do this you need to manually alter the ServiceName.*.cscfg files (the service configuration file for each environment) and the ServiceName.csdef file (the service definition). It looks like Visual Studio doesn’t reliably support much beyond vanilla configuration with these projects, but this customisation is fairly niche so I wouldn’t expect it to.

 

Configuring the application name to be set from a cloud configuration setting is a three-step process.

First, the service definition file (.csdef).

After installing the New Relic NuGet package, the service definition file should include a startup task for each role in the cloud project:

<Startup>
  <Task commandLine="newrelic.cmd" executionContext="elevated" taskType="simple">
    <Environment>
      <Variable name="EMULATED">
        <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
      </Variable>
      <Variable name="IsWorkerRole" value="false" />
      <Variable name="LICENSE_KEY">
        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='NewRelic.LicenseKey']/@value" />
      </Variable>
    </Environment>
  </Task>
</Startup>

This startup task will execute as the role initializes, running the specified newrelic.cmd file with the supplied environment variables (more on that later).

Next, define another environment variable, this one populated from a cloud configuration value:

<Startup>
  <Task commandLine="newrelic.cmd" executionContext="elevated" taskType="simple">
    <Environment>
      <Variable name="EMULATED">
        <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
      </Variable>
      <Variable name="IsWorkerRole" value="false" />
      <Variable name="LICENSE_KEY">
        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='NewRelic.LicenseKey']/@value" />
      </Variable>
      <!-- environment variable holding the name of the New Relic application as it will appear on the New Relic dashboard -->
      <Variable name="newrelic_APPLICATION">
        <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='newrelic_APPLICATION']/@value" />
      </Variable>
    </Environment>
  </Task>
</Startup>

NB: Ideally, for consistency, I would have used NewRelic.AppName, but the xpath query was failing – I think due to the “.” character.

Next, scroll to the ConfigurationSettings section and add the following:

<ConfigurationSettings>
  <Setting name="TableStorageConnectionString" />
  <Setting name="DbConnection" />
  <Setting name="RandomSettingOne" />
  <Setting name="NewRelic.LicenseKey" />
  <Setting name="newrelic_APPLICATION" />
  <Setting name="RandomSettingTwo" />
</ConfigurationSettings>

That is all the configuration required in the service definition file.

Second, the service configuration files. For each environment, add/update the ServiceName.<env>.cscfg configuration file:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyRoleService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="3" osVersion="*" schemaVersion="2014-01.2.3">
  <Role name="MyWebRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=false" />
      <Setting name="TableStorageConnectionString" value="UseDevelopmentStorage=false" />
      <Setting name="DbConnection" value="Server=DBSERVER; Database=Database; Integrated Security=SSPI" />
      <Setting name="RandomSettingOne" value="1" />
      <Setting name="NewRelic.LicenseKey" value="xxxxxxxxxxxxxxxxxxxxxxxxxx" />
      <Setting name="newrelic_APPLICATION" value="LOCAL_APP" />
      <Setting name="RandomSettingTwo" value="2" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Create/Update the configuration for each environment and you’re done.

Last, the newrelic.cmd file.

This step is optional if you don’t mind having a New Relic-related appSetting in your application configuration file, but anything in developer code that can get changed accidentally eventually will be (Sod’s Law), so putting this in the cloud service projects only (and not in the target project itself) is worth the effort for the errors it prevents.

The following files get added to the .csproj (presumably .vbproj also) as “Copy always” resources:

image

NB: the versions will differ based on the local environment.

At role startup the batch file runs and installs the monitoring software that New Relic uses to capture performance metrics.

The top of the file should look something like this:

----------
SETLOCAL EnableExtensions

for /F "usebackq tokens=1,2 delims==" %%i in (`wmic os get LocalDateTime /VALUE 2^>NUL`) do if '.%%i.'=='.LocalDateTime.' set ldt=%%j
set ldt=%ldt:~0,4%-%ldt:~4,2%-%ldt:~6,2% %ldt:~8,2%:%ldt:~10,2%:%ldt:~12,6%

SET NR_ERROR_LEVEL=0

:: Comment out the line below if you do not want to install the New Relic Agent
CALL:INSTALL_NEWRELIC_AGENT

:: Comment out the line below if you do not want to install the New Relic Windows Server Monitor
CALL:INSTALL_NEWRELIC_SERVER_MONITOR
----------

To dynamically set the appSetting value I opted for a task that executes appcmd.exe from the command file and sets the value as a machine-level appSetting – the additions are shown below:

----------
SETLOCAL EnableExtensions

for /F "usebackq tokens=1,2 delims==" %%i in (`wmic os get LocalDateTime /VALUE 2^>NUL`) do if '.%%i.'=='.LocalDateTime.' set ldt=%%j
set ldt=%ldt:~0,4%-%ldt:~4,2%-%ldt:~6,2% %ldt:~8,2%:%ldt:~10,2%:%ldt:~12,6%

SET NR_ERROR_LEVEL=0

:: Custom cmd function - execute first
CALL:SET_APP_NAME

:: Comment out the line below if you do not want to install the New Relic Agent
CALL:INSTALL_NEWRELIC_AGENT

:: Comment out the line below if you do not want to install the New Relic Windows Server Monitor
CALL:INSTALL_NEWRELIC_SERVER_MONITOR

IF %NR_ERROR_LEVEL% EQU 0 (
    EXIT /B 0
) ELSE (
    EXIT %NR_ERROR_LEVEL%
)

:: --------------
:: Functions
:: --------------
:SET_APP_NAME

IF [%newrelic_APPLICATION%] == [] (
    ECHO no value for newrelic_APPLICATION - skipping step. >> "d:\tr.log" 2>&1
    GOTO:EOF
)
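
:: Illustrative reconstruction of the appcmd call described in the text - the
:: exact arguments may differ. It writes the app name into machine.config as a
:: machine-level appSetting so the agent can read it wherever the site root lives.
%windir%\system32\inetsrv\appcmd.exe set config -section:appSettings /+"[key='NewRelic.AppName',value='%newrelic_APPLICATION%']" /commit:MACHINE >> "d:\tr.log" 2>&1

GOTO:EOF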

----------

The custom function first checks whether the environment variable newrelic_APPLICATION is set; if not, the step is skipped. If the variable is declared then the appSetting is added to the machine config file. Whizz a few requests through, wait a few minutes and bada-bing…

image

N.B. machine.config is used because the structure of the local filesystem in the cloud package has a predictable (but, I suspect, changeable) path to the virtual root of each application, and using appcmd to set application-level configuration files requires prior knowledge of the local filesystem. Applying the setting at machine.config level means the value will consistently be available wherever the site root is located on the filesystem.

In summary, it’s a bit of a convoluted setup, but what happens is as follows: when the startup task runs, it passes the variables configured for each environment into the newrelic.cmd process. The supplied appSetting value is written to machine.config on the Cloud Service node, and this is then read by the New Relic agent on the first web request. It is also compatible with the loveliness of Azure autoscale – because the configuration is baked into the service package, any new nodes will automatically pull in the correct settings.

NB: this does not work in the emulator, so you need to deploy to “real” Azure to test it out – or run locally in IIS, having first installed the .NET agent and restarted IIS as per the documentation.

New Relic is great – their support isn’t the quickest, but it’s a really good, low-maintenance, easy-to-install monitoring option that I would recommend to anyone wanting an affordable enterprise monitoring solution. I set up a billing account but, due to a clerical error, it’s the account managed through the Azure portal, so we’re not 100% sure we’ll go with it for launch – we are only at the trial stage though, so provided it’s resolved soon I have no doubt we’ll use New Relic!

Update 2014-08-20: The problem previously mentioned is, IMHO, down to shortcomings in the Azure portal AddOn system – we don’t have Azure portal integration, but we do have New Relic for launch, and that’s a good place to be :-D

