
Glyn Darkin Posts

Visualising your AWS infrastructure

Building out an AWS infrastructure can get pretty complicated, as there are many different elements and dependencies that need to be considered. Visualising the relationships is a great way to understand all the dependencies involved, although the existing tools may not produce quite the results you expect. The visualisations I have been using are based on my CloudFormation scripts: static JSON files that AWS uses to build out the architecture. The first tool I used to help visualise a script was Cloud Formation Viz. It's a simple command-line tool that can draw out the different components and relationships.
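The core idea behind a visualiser like this is small enough to sketch. The snippet below is a hypothetical, simplified take (not Cloud Formation Viz itself): it walks a template's Resources and emits a Graphviz dot graph from the DependsOn entries and Ref references, using an invented two-resource template as input.

```python
def template_to_dot(template: dict) -> str:
    """Render a CloudFormation template's resources and their
    DependsOn/Ref relationships as a Graphviz dot graph."""
    lines = ["digraph cfn {"]
    resources = template.get("Resources", {})
    for name, resource in resources.items():
        lines.append(f'  "{name}" [label="{name}\\n{resource["Type"]}"];')
        # Explicit DependsOn entries (string or list) become edges.
        depends = resource.get("DependsOn", [])
        if isinstance(depends, str):
            depends = [depends]
        targets = set(depends)

        # Implicit dependencies via {"Ref": "..."} anywhere in Properties.
        def walk(node):
            if isinstance(node, dict):
                ref = node.get("Ref")
                if isinstance(ref, str) and ref in resources:
                    targets.add(ref)
                for value in node.values():
                    walk(value)
            elif isinstance(node, list):
                for value in node:
                    walk(value)

        walk(resource.get("Properties", {}))
        for target in sorted(targets):
            lines.append(f'  "{name}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)

# A made-up minimal template: one EC2 instance referencing a security group.
template = {
    "Resources": {
        "WebServer": {"Type": "AWS::EC2::Instance",
                      "Properties": {"SecurityGroupIds": [{"Ref": "WebSG"}]}},
        "WebSG": {"Type": "AWS::EC2::SecurityGroup", "Properties": {}},
    }
}
print(template_to_dot(template))
```

Piping the output through `dot -Tpng` would give you a diagram much like the ones below.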
The below diagram is of a single EC2 instance.

This was a pretty simple CloudFormation script, but already you can see how complicated it's starting to get!
The second image is of a multi-EC2-instance VPC.

Now we really start to see how complex the architecture is! In reality, though, you need something like this to keep track of what you are building and how it all hangs together.

AWS CloudFormation has a built-in designer which visualises your CloudFormation scripts; however, this requires you to edit your templates in the online designer. The benefit is that you get template validation, but it's difficult to tie the designer into a proper DevOps-style workflow.

The below is the AWS visualisation of the single EC2 instance VPC, taken as a screenshot.

The following is of the more complex multi-instance VPC.

What both of these solutions have in common is that they represent the architecture as a graph diagram rather than what we would expect of a traditional architecture diagram. Typically people expect something like the below. (I used Lucid Chart.)

The above is much more easily recognised as an architecture diagram, but in reality it's an abstraction and a point of view of the architecture. It's a much more digestible format for communicating your architecture.


Tooling is not great in this area, and depending on your audience you will get very different results from handcrafting versus auto-generating. For now I will be using both methods: one for tracking change and one for communicating my designs.


Setting up Adobe AEM on Amazon Web Services (AWS)

One of the AWS concepts that I love is the idea of a Stack: a grouping of related resources managed as a single unit. I guess one of the early incarnations of the stack was the virtual appliance that VMware used to market. The conceptual difference between the virtual appliance and the AWS Stack is that the Stack includes everything above the hypervisor (networks, security groups, network ACLs) packaged into a single Virtual Private Cloud.

From an architectural perspective the Stack enables us to move away from documenting best practices and target architectures in documents and Visio diagrams, and instead encode them in machine-readable artefacts that can be used as templates for distribution and implementation.

In AWS, the CloudFormation service takes a JSON template file as its input and constructs the AWS environment from the metadata in the template. Azure has an equivalent concept called Resource Manager. Hand-coding these JSON templates is a little challenging, as tooling support for large JSON documents is very limited; JSON is ideal as a machine-readable data exchange format but not great as a human-readable one. The design, deploy, debug/test cycle is slow, and environment provisioning has many dependencies and options that are not clear when hand-coding the estate.

Both AWS and Azure provide some basic integration into IDEs. I have tried the AWS tooling in Eclipse on a Mac, which was better than nothing but not exactly a nice experience. Microsoft's Visual Studio has much better integration with Resource Manager; however, this only applies to Windows. The alternative method for building out the stack is to use the AWS GUI to build out your resources and then reverse engineer the JSON templates. I have used this technique multiple times to get my head around how things work, but you will end up having to clean up the final scripts, and the templates contain a lot of autogenerated metadata which is not that helpful to the human eye.

AWS have a number of sample templates that can be used as a basis for building out your own stacks. I am currently working with Adobe AEM, so I decided to build out a simple single-instance EC2 stack with Adobe AEM installed on it. It's easy to provision and is a self-contained instance, ideal for development, prototypes, etc.

You can use the link below to launch the stack into your own AWS account, or you can visualise the stack using the AWS visualisation tools.

Launch Stack | View the Stack

The next step was to build out a more realistic environment that could be used for testing. This stack deploys AEM across three instances: an author, a publisher and a dispatcher instance.

Launch Stack | View the Stack

When you launch either of the above stacks you will be asked to pass in a number of parameters. These include:

AEMDownloadUrl: Each of the scripts requires a copy of the AEM Quick Start that can be downloaded onto the instance. You can pass the URL into the CloudFormation script. I have my Quick Start uploaded to S3 and it gets pulled down when the EC2 instance initialises.

InstanceType: It is recommended to run AEM on a fairly significant instance to ensure that it is performant; however, as costs apply, I have left the instance type for you to select.

Keyname: This is required so that you can SSH into your EC2 instances.

SSHLocation: This is a firewall rule that enables you to lock down your SSH access to only an IP that you specify.
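If you prefer launching from code rather than the console (handy for a DevOps workflow), here is a sketch of passing the same parameters programmatically with Python and boto3. The bucket, key pair and IP values are placeholders; only the parameter keys come from the templates above.

```python
def to_cfn_parameters(params: dict) -> list:
    """Build the Parameters structure that CloudFormation's
    create_stack API expects from a plain dict."""
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()]

# Placeholder values; the parameter keys are the ones described above.
stack_parameters = to_cfn_parameters({
    "AEMDownloadUrl": "https://s3.amazonaws.com/my-bucket/aem-quickstart.jar",
    "InstanceType": "m4.xlarge",
    "Keyname": "my-keypair",           # an existing EC2 key pair
    "SSHLocation": "203.0.113.10/32",  # lock SSH down to a single IP
})

# With boto3 installed and AWS credentials configured, the launch itself
# would look something like this (the template URL is a placeholder):
# import boto3
# boto3.client("cloudformation").create_stack(
#     StackName="aem-single-instance",
#     TemplateURL="https://s3.amazonaws.com/my-bucket/aem-single.template",
#     Parameters=stack_parameters,
# )
print(stack_parameters)
```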

There is still a lot of work to do on these scripts to turn them into production-ready stacks; however, for anybody starting on the journey, I hope that they are helpful.

Further work and consideration is needed to include:

  1. Setting up private subnets & VPN access for author & publisher nodes
  2. Setting up IAM users
  3. CDN configuration
  4. Multi-AZ setup for HA and DR scenarios
  5. Shared content stores on S3, MongoDB & TarMK
  6. Auditing, logging, etc.
  7. Adobe AEM configuration for content replication

The scripts are also available on GitHub so please fork the templates, make changes and let me know how you get on.


Digital Transformation Talk @Version 1 Customer Expo

On September 17th 2015, Version 1 hosted the annual customer conference at The Guinness Storehouse in Dublin. It was an excellent day, with talks focused around the theme of Digital Transformation. It was both a networking and a learning opportunity for our customers, with a series of panel discussions and lightning talks.

I was lucky enough to be invited to talk about Digital Transformation, in particular the technology that is enabling it.

Digital Transformation Goes Mainstream

Social, Mobile, Analytics, Cloud & Internet of Things – the technology trends driving business transformation

Industries such as travel and media have been completely disrupted by digital technologies, and other industries are not far behind, with SMACIT technologies having a huge impact on how products and services are designed and delivered. Digital is no longer a competitive advantage; it's a prerequisite to stay in business. Speakers will give their perspectives and practical lessons learned on how to recognise and realise value through these key technology trends. This is an interactive session featuring a live Q&A and audience polling.

Panelists include:

  • Ronan Brady, Head of Marketing & Digital, SSE Airtricity
  • John Barron, Chief Information Officer, The Office of the Revenue Commissioners
  • Dervilla Mullan, Chief Product Officer, Brandtone
  • Debbie Hand, Head of ICT, FBD Insurance
  • Glyn Darkin, Solution Architect, Digital Transformation, Version 1

Here is a YouTube recording of the talk and a link to the original high-definition version.

And here is a link to the slide deck.

I hope you enjoy it and find some insight. Dervilla Mullan has some particularly excellent examples.


SEAI and Microsoft Azure Case Study

Microsoft have put together a nice little case study with SEAI on a project that we recently built. It was a full PaaS solution built on Azure Websites, using a cloud-style architecture incorporating queues and WebJobs for processing carbon credits and managing load on the site.


Building a Microservices Architecture on Azure with .NET – Part 2

Decomposing an ecommerce solution into an Azure Microservice Architecture

In the previous article I described the high-level requirements of a typical enterprise architecture that needs to expose its services over more touch points. I also provided some background on microservice architectures, as it is with this type of design that we hope to tackle the problem. In this article we will expand on the discussion, moving to a more concrete example using an ecommerce solution.

One of the greatest challenges in any distributed architecture is defining the boundaries that you will use to partition your application. Domain-Driven Design encourages us to separate our solutions into “bounded contexts”, with each of our models living within one.

Using our example it is easy to identify Orders, Products, Inventory and so on, but there are also functional requirements that our solution has to provide which do not really sit within a business domain; image processing and full-text search are examples.


The above diagram lists all the functional components of a full multichannel ecommerce solution. You could deliver most of these capabilities within a single .NET solution like Nopcommerce, but as retailers become omnichannel, the demands on their ecommerce systems have grown to the point that they cannot service all the touch points from a single solution whilst maintaining agility.

Creating a solution map of your system as above provides a good starting point for driving out all the different components of your Architecture.

The Product Page

The first step in a customer's journey usually starts with a product, so it makes a pretty good starting point for us.

If we take a look at an Amazon product page we can quickly see that there are at least 18 different domains being queried for data to build out the page.  Inventory and price may be mastered in a core back end system but product metadata, reviews and product images are probably provided by a 3rd party service.

As we start to decompose the page into its individual elements we can start to draw boundary lines to separate out the architecture. Business domains are typically a good starting point for boundary identification, but data life-cycle should also be considered: a product packshot is unlikely to change during the lifetime of the product, whereas the price and inventory will change regularly.

A few years ago I worked with a team where we had large product pages, just like the Amazon one; however, all product information was stored in a single datastore. This included everything from large slices of product metadata to price. As the website became more successful the business wanted to be more price competitive, which required us to deliver multiple price changes a day across the whole product catalogue. The net result was that every time a price change for a product was delivered, we had to remove the product from the different caches and reload it, which would cause the product page to reload the entire product graph from the database; when the caches were cold this could almost take the website down.

We solved the problem by separating out the data. Prices ended up becoming their own entity with their own life-cycle, and a price change just required the publication of a new price which would automatically get picked up. Consistency was not a huge issue for us, as the price was always checked at the last reasonable point with a direct read from the database before the order was processed. This made sure that we did not have a zombie price on the order, and due to the shape of the ecommerce funnel this final check only happened for 5% of the overall traffic, a much more manageable number of users.

Another important point when looking at your system for natural boundaries is service availability. If the product page could not render product reviews it might impact sales, but only marginally; however, if there is no price, product title or availability then it's unlikely that you would sell the product. Therefore “service level” makes another logical boundary for separation.

We will also want to expose this data to a number of other interfaces, both user and system interfaces, including our search page, basket page, order history and screens within a mobile app. Additionally we may even expose this data as an external data feed for 3rd-party affiliates and aggregators like Google Product Search.

In the traditional n-tier application we would have built a service layer that would probably have included a catalogue, basket and user service, each implemented as a class. To keep things clean we may have put our services into a separate DLL. This type of structure can be seen within the Nopcommerce solution **.


The diagram below articulates the different capabilities working together to build the product page. It's not an exhaustive list but should illustrate the point.


Following the microservices approach we would build out each of these capabilities as a simple REST service. The product controller of our MVC application can then make an async call to each service to build out the view model and render the page.
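To make that fan-out concrete, here is a minimal sketch of the aggregation step. It is written in Python with stub coroutines standing in for the HTTP calls to each service (the actual stack discussed here is .NET MVC); the service names and data are invented for illustration.

```python
import asyncio

# Stub service calls standing in for HTTP GETs to each microservice's REST API.
async def get_product(product_id):
    return {"title": "Pocket Drone", "description": "A drone that fits in your pocket"}

async def get_price(product_id):
    return {"amount": 149.99, "currency": "EUR"}

async def get_reviews(product_id):
    return [{"rating": 4, "text": "Flies... eventually"}]

async def build_product_view_model(product_id):
    # Query the services concurrently rather than one after another,
    # then assemble the results into a single view model for the page.
    product, price, reviews = await asyncio.gather(
        get_product(product_id), get_price(product_id), get_reviews(product_id)
    )
    return {"product": product, "price": price, "reviews": reviews}

view_model = asyncio.run(build_product_view_model("B00X123"))
print(view_model["product"]["title"])
```

The important property is that a slow or failed non-critical service (reviews, say) can be given a timeout and a fallback without blocking the critical price and title calls.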

For further scalability and optimisation, some of these calls can be pushed out to AJAX requests made once the page has loaded or when the user starts to scroll, creating a snappier user experience and supporting edge caching services like Akamai, but I will leave this for another post.

In this example the data within each service can change; however, it is unlikely to be changed by the user. Instead there are external processes that will change this data: new recommendations, price changes, product reviews and so on. These changes will be driven by external systems or feeds. The orange boxes in the diagram represent interfaces that would change the data stored in each microservice.


So, how would we build this using Azure?

We have a number of Architectural options available to build out our Microservice Architecture.

  1. Build out the services using Virtual Machines, aka IaaS
  2. Build out each service as a web role within Cloud Services, aka PaaS version 1
  3. Use the Azure App Service to model the architecture with Web Apps, API Apps and possibly Logic Apps, aka PaaS version 2
  4. Use Docker and ASP.NET vNext to build, host and manage the microservices

Option 1: Building out the services using Virtual Machines, aka IaaS

One of the greatest challenges of building microservice architectures is the provisioning, deployment, management and monitoring of the services. Building our architecture on IaaS would result in us having to build out a solution for all of these ourselves: Azure would only provide an SLA at the Virtual Machine level, and everything else would be up to us. Azure offers better services for deploying and managing an application architecture, so I won't focus on IaaS here, but may come back to it at a later date.

Option 2: Building Microservices using Cloud Services

As our architecture landscape grows to many services, it becomes important that we manage our running costs by optimising our utilisation of cloud resources. If we were to build out the above architecture using Cloud Services, each component would have to be built as an individual WebRole.

Each Cloud Service is provisioned as a single Virtual Machine, and to have a highly available service two instances of each Virtual Machine have to be deployed. The above architecture would therefore require 42 virtual machines (21 components × 2 instances): not a very cost-effective use of resources. It is possible to host multiple WebRoles on a single virtual machine; however, deployment happens at the Virtual Machine level, which would require a complete redeployment of a virtual machine to change a single service. This does not really subscribe to the principles of a microservice architecture, and for this reason alone Azure Cloud Services is not the recommended approach.

In my next article I will explore how we can build our product page architecture using Option 3, the Azure App Service.

** I have used NopCommerce as an example of a particular architectural style. My observations in no way imply that NopCommerce is an inferior product or that they should have tackled the design differently.


Building a Microservice Architecture on Azure with .NET – Part 1

Microservices architectures have become a bit of a trend lately. As with all trends there are lovers and haters; however, from my perspective, microservices as defined in the excellent book by Sam Newman describe the next evolution of application development. From my own experience building large-scale ecommerce solutions, Sam's book was like a shot of clarity.

Martin Fowler has written an excellent summary of microservices which is required reading for anybody thinking about venturing down this type of architecture. However, ThoughtWorks are still hesitant to promote microservices architectures due to the inherent complexity created by having an application composed of many granular parts. This complexity comes from having to provision, deploy and manage multiple applications within your ecosystem.

Here is a definition from Martin Fowler's article:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

Microservices have gained increasing popularity in the startup/massively scalable space, with Netflix leading the charge. The Gilt Group is a great example of a company that has capitalised on a microservices architecture. They are a flash-sale clothing retailer who started off with a Ruby on Rails monolith. After the initial success of the company, they have spent the last few years moving away from the monolith and now have a system comprised of over 300 independent services. Netflix is said to have over 600 services, and Hailo about 160.

The microservices label encompasses more than just the architecture; it also covers the processes and practices that are required to build these architectures. Many of the elements in Sam's book will be familiar to .NET developers who have been following best practices over the years, especially those who went down the CQRS route implementing service buses like NServiceBus.

But microservices are not just for startups; they provide an excellent architectural model for enterprises who now need to:

  1. expose their data and services to an ever-increasing number of touch points
  2. scale individual functions of their architecture
  3. change individual functions of their architecture

The image below of the Windows 10 platform displays both the opportunity and the challenge in building digital services. How do you build a service that can provide an experience on every device from a smartphone to a large screen, or integrate data from an IoT device, yet deliver a user experience on a HoloLens?


Traditional architectures relied on us building a single monolithic application that probably serviced only a single touch point. As we have to service more touch points, we will want to project our digital service onto each touch point, exposing only the user experience that is appropriate to the device. We will also need to change touch points independently and often.

An architectural vision for a modern enterprise would be an architecture that can provide an excellent customer/user experience through focused touch points, supported by a low-cost, scalable, highly available, redundant infrastructure that can support a low-cost, high frequency of change with low risk.


The below diagram articulates a high-level logical architecture of a typical enterprise scenario, probably the as-is of your enterprise application.


The architecture is broken up into 5 layers.

  1. Touch points – the external devices that your users interact with. They sit outside your firewall and may have an intermittent connection to the internet.
  2. Presentation Services – the services that your devices interact with. They could be finished interfaces exposed as websites, or REST services exposing data for apps. They will most likely be HTTP endpoints.
  3. Frontend Services – these services provide the logic and processing for the Presentation Services. There may be a common set of services here that provide functionality to a number of Presentation Services, generally separated by their domains. These services will have their own databases.
  4. Integration & Orchestration Services – the bridge between the modern frontend world and the legacy backend services. The orchestration layer is typically provided by an enterprise service bus such as BizTalk Server, Mule or similar.
  5. Backend Services – the core backend services that drive and support your enterprise.

There are two key characteristics that separate the infrastructure requirements of the top and bottom of this architecture. The top layers need to be available 24/7, will change frequently and are open to the public, and therefore need to support traffic spikes and scale. The bottom layers may only require low availability, sometimes on a 9-to-5 schedule; they change rarely and don't need to scale, as they operate on a predominantly batch-based life cycle. This is an important point when reviewing and selecting infrastructure to support the 24/7, frequently changing environments, as this can only be achieved with high levels of automation, and the selected components must therefore support automation.

As you can see from the diagram below we can use Azure services to deliver this architecture.


In the next article we will decompose an ecommerce solution into the above architecture.


Crowd funding, Drones, 3D Printing and the Digital Economy

As software is eating our world; how we buy, build and consume products is changing. This is a story about the Digital Economy and my experience dabbling in it.

Crowd funding

It started about a year ago when I backed a new project on KickStarter. They had a nice idea for a drone that could collapse down and fit in your pocket (a large pocket). They had a slick video and got some good media attention. The price looked good, and I had had a pretty good experience with a few other KickStarter projects, so I thought I would go for it.


Over the course of a year we got regular updates: pictures from factories, drones being assembled and tested. Everything looked good, albeit the project was taking a lot longer than expected. Finally, at the beginning of this year, I got an email confirming shipment. To say I was excited was an understatement. Then it went quiet. No delivery, no updates, nothing. About a month later I checked the KickStarter page to see if there were any comments:


It did not take long to find out that something had gone very wrong with the campaign. Lots of people had received the wrong pledge, other people had not received anything, and worst of all nobody could get their drone to fly.

Luckily, a couple of days later a parcel turned up and my drone had arrived. The box looked good, and I opened it up to find everything there. I followed the instructions, charged it all up and went outside for my maiden voyage. But just as many others had found out, this drone ain't going to fly in a hurry.


I later got a message from AirDroids, the gang behind the campaign, who claimed that they ran out of money and had to take personal loans just to get the campaign to the end. I'm not sure I entirely believe it; however, I am pretty sure that their intentions were genuine, just that their execution was poor.


Making products is difficult: product development, manufacturing, logistics and marketing are all hard. Just because a KickStarter campaign gets noticed does not mean that it's going to be a success, especially when it comes to hardware.

Social Support

Back on the KickStarter page, I found a link to a Facebook group that had been set up by people who had received the drone and knew what they were doing.

An amazing guy called Rolf van Vliet, who obviously knows what he is doing, put together a fully comprehensive guide to the Pocket Drone, including all the modifications that you have to make to get it to fly.

Another couple of hours of calibration, modifications and playing about, and I finally managed to get the little drone into the air! But its flying behaviour was still poor: it would fly for about 20-30 seconds and then crash, slowly destroying the airframe.


People are amazing. The number of people that came together on Facebook to help each other out is fantastic, and a big thanks for the time that Rolf van Vliet spent putting together the support guide.

Just because somebody can build it does not mean it's any good. The design of the Pocket Drone is very poor, breaking on crashes and unable to maintain calibration for flying. These guys were not engineers!

Social design & 3D Printing

With all my crashes the drone was not faring well. Its landing gear had taken a battering and was destroyed. But this is the Digital Economy; that's not going to stop me! Somebody had designed a set of new legs and uploaded the designs to Facebook for others to download and get printed up on a 3D printer.


Not to be beaten, I used a service called 3D Hubs, a broker for people with 3D printers. You upload your design and they will calculate the printing cost, suggest people who can print your component in your area, and manage the payment and transaction. Within a couple of hours I had a local guy here in Dublin printing me a new set of legs for the drone.


So back out with the drone, and a few more test flights later I am back to three broken legs.


Again, the fact that people have taken the time to model up these legs and distribute them freely is amazing, but just because a person can model something in 3D does not make them an engineer, and these legs did not last a single crash.

Shipping from China

Rather than waste any more time with a poor design, I chose to go with a new carbon fibre airframe that I found on Amazon.


Unfortunately it never turned up! So now I have tried buying it from a dealer in the UK on eBay.


There is so much technology wrapped up in these little drones, and it's amazing how many people are prepared to invest their time in building drones and helping others to have fun. But the number one thing I have learned is that just because you can build it does not make you an engineer. This got me thinking: just as the ubiquity of cheap PCs and accessible programming languages turned programmers into the new cool, maybe the next big thing will be for engineers! Maybe it's time for engineering to become a great profession again, as demand grows for decent engineers who can design proper hardware that they can now manufacture on their bench.

Time to dust off my old engineering books!!!


Winning with Azure in the Public Sector

Really proud to win Public Sector Project of the Year with the Energy Credits Management System, an energy measures platform built on Azure Platform as a Service for SEAI. A great solution delivered for a great client. I will come back soon with more details of the project, its architecture and how we built it.



ASP.NET vNext – CTP4, Package Managers (Notes)

Naming for ASP.NET is now 5.0

Middleware = IApplicationBuilder  *there is a lot of contention around this

The design-time host now compiles the code as you write, which provides better IntelliSense. Compiled bytecode is pulled from the design-time host into IIS Express in real time, in memory. This is the metaprogramming support; there are hooks so that you can do funky stuff on compilation.

KVM is the tool that is used to manage the KRE (K Runtime Environment) on the machine.

Web.config transforms

The app knows what the environment is via an environment variable. The variable is set to Production by default; installing VS sets it to Dev. Use Startup.cs to evaluate the environment variable and change the system configuration.

SemVer 2 is supported as part of NuGet 3.

NuGet has poor support for content files, which is a challenge for vNext as content files need to be moved as part of the build process. Bower is the package manager for content files.


How to get a txt/sms message when an Azure service goes down

Over the last few weeks a number of services I have deployed to Azure have experienced difficulty. Luckily I have pretty good monitoring on my apps that notified me of the problem.

What I really wanted was a way to correlate my apps having issues with any known problems in Azure. Luckily, Azure has a number of excellent status feeds available for monitoring its services. With a little help from IFTTT, we can set up an alerting system so that you get a txt/SMS message when a monitoring feed changes.

Here are my shared recipes so that you can set them up for yourselves.

If you use different services then you need to get the feed URL for each service and add it into your recipe. The full status board below has all the different services listed; if you click on the orange feed icon, it will take you to the feed. Just copy the URL from your browser into the URL box in the IFTTT recipe.
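For anyone curious what the feed-watching half of such a recipe does under the hood, here is a small Python sketch. The feed XML is an invented sample shaped like a typical RSS 2.0 status feed; in practice you would fetch the real feed URL and hand any new items to an SMS action, which is the part the recipe service takes care of for you.

```python
import xml.etree.ElementTree as ET

# An invented sample, structured like a typical RSS 2.0 status feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Azure Websites Status</title>
  <item><title>Service degradation - West Europe</title>
        <pubDate>Mon, 15 Jun 2015 09:00:00 GMT</pubDate></item>
  <item><title>Resolved - West Europe</title>
        <pubDate>Mon, 15 Jun 2015 11:00:00 GMT</pubDate></item>
</channel></rss>"""

def feed_items(feed_xml: str) -> list:
    """Return the item titles from an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

def new_items(feed_xml: str, seen: set) -> list:
    """Titles that were not present the last time we polled the feed."""
    return [title for title in feed_items(feed_xml) if title not in seen]

# Pretend we had already seen the "Resolved" item on the previous poll.
alerts = new_items(SAMPLE_FEED, seen={"Resolved - West Europe"})
print(alerts)  # these are the items that would trigger a txt/SMS
```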
