
Category: Architecture

Visualising your AWS infrastructure

Building out an AWS infrastructure can get pretty complicated, as there are many different elements and dependencies that need to be considered. Visualising the relationships is a great way to understand all the dependencies involved; however, the existing tools may produce results you don't expect. The visualisations I have been using are based on my CloudFormation scripts: static JSON files that AWS uses to build out the architecture. The first tool I used to help visualise the scripts was CloudFormation Viz. It's a simple command line tool that can draw out the different components and relationships.
The below diagram is of a single EC2 instance.

This was a pretty simple CloudFormation script, but already you can see how complicated it's starting to get!
The second image is of a multi-instance EC2 VPC.

Now we really start to see how complex the architecture is! In reality, though, you need something like this to keep track of what you are building and how it all hangs together.

AWS CloudFormation has a built-in designer which visualises your CloudFormation scripts; however, this requires you to edit your templates in the online designer. The benefit is that you get template validation, but it's difficult to tie the designer into a proper DevOps-style workflow.

The below is a screenshot of the AWS visualisation of the single EC2 instance VPC.

The following is of the more complex multi-instance VPC.

What both of these solutions have in common is that they represent the architecture as a graph diagram rather than what we would expect of a traditional architecture diagram. Typically people expect something like the diagram below (I used Lucidchart).

The above is much more easily recognised as an architecture diagram, but the reality is that it's an abstraction, a point of view of the architecture. It's a much more digestible format for communicating your architecture.

Summary

Tooling is not great in this area, and depending on your audience you will get very different results from hand-crafting versus auto-generating. For now I will be using both methods, for tracking change and for communicating my designs.


Setting up Adobe AEM on Amazon Web Services (AWS)

One of the AWS concepts that I love is the idea of a Stack: a grouping of related resources managed as a single unit. I guess one of the early incarnations of the stack was the idea of a virtual appliance that VMware used to market. The conceptual difference between the virtual appliance and the AWS Stack is that the Stack includes everything above the hypervisor, including networks, security groups and network ACLs, packaged into a single Virtual Private Cloud.

From an architectural perspective the Stack enables us to move away from documenting best practices and target architectures in documents and Visio diagrams, and instead encode them in machine-readable artefacts that can be used as templates for distribution and implementation.

In AWS the CloudFormation service takes a JSON template file as its input and constructs the AWS environment from the metadata in the template. Azure has an equivalent concept called Resource Manager. Hand-coding these JSON templates is a little challenging, as tooling support for large JSON documents is very limited; JSON is ideal as a machine-readable data exchange format but not great as a human-readable data construct. The design, deploy, debug/test cycle is slow, and the environment provisioning has many dependencies and options that are not clear when hand-coding the estate. Both AWS and Azure provide some basic integration into IDEs. I have tried the AWS Toolkit for Eclipse on a Mac, which was better than nothing but not exactly a nice experience. Microsoft's Visual Studio has much better integration with Resource Manager, however this only applies to Windows. The alternative method for building out the stack is to use the AWS GUI to build out your resources and then reverse-engineer the JSON templates. I have used this technique multiple times to get my head around how things work, but you will end up having to clean the final scripts up, and the templates contain a lot of auto-generated metadata which is not that helpful to the human eye.
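To give a flavour of the format, here is a minimal sketch of what a hand-coded CloudFormation template looks like. The AMI ID and instance type below are placeholder values for illustration, not taken from my actual scripts.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal sketch: a single EC2 instance (placeholder values)",
  "Parameters": {
    "KeyName": {
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Description": "Existing key pair for SSH access"
    }
  },
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-00000000",
        "InstanceType": "t2.micro",
        "KeyName": { "Ref": "KeyName" }
      }
    }
  },
  "Outputs": {
    "PublicIp": {
      "Description": "Public IP of the instance",
      "Value": { "Fn::GetAtt": ["WebServer", "PublicIp"] }
    }
  }
}
```

Even this trivial example hints at the problem: a realistic stack with a VPC, subnets, security groups and instance bootstrapping quickly runs to hundreds of lines of JSON with no comments allowed.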

AWS has a number of sample templates that can be used as a basis for building out your own stacks. I am currently working with Adobe AEM, so I decided to build out a simple single-instance EC2 stack with Adobe AEM installed on it. It's easy to provision, and the self-contained instance is ideal for development, prototypes etc.

You can use the link below to launch the stack into your own AWS account, or you can visualise the stack using the AWS visualisation tools.

Launch Stack | View the Stack

The next step was to build out a more realistic environment that could be used for testing. This stack deploys AEM across three instances: an author, a publisher and a dispatcher.

Launch Stack | View the Stack

When you launch either of the above stacks you will be asked to pass in a number of parameters. These include the following (a sketch of how they might appear in the template follows the list):

AEMDownloadUrl: Each of the scripts requires a copy of the AEM Quickstart that can be downloaded onto the instance. You pass the URL into the CloudFormation script. I have my Quickstart uploaded to S3, and it gets pulled down when the EC2 instance initialises.

InstanceType: It is recommended to run AEM on a fairly significant instance to ensure that it is performant; however, as costs apply, I have left the instance type for you to select.

Keyname: This is required so that you can SSH into your EC2 instances.

SSHLocation: This is a firewall rule that enables you to lock SSH access down to only an IP range that you specify.
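As a rough sketch, the Parameters section of such a template might look like the following. The types, defaults and allowed values here are my assumptions for illustration; check the actual templates on GitHub for the authoritative definitions.

```json
{
  "Parameters": {
    "AEMDownloadUrl": {
      "Type": "String",
      "Description": "URL of the AEM Quickstart, e.g. a pre-signed S3 link"
    },
    "InstanceType": {
      "Type": "String",
      "Default": "m3.large",
      "AllowedValues": ["m3.large", "m3.xlarge", "m3.2xlarge"],
      "Description": "EC2 instance type to run AEM on"
    },
    "Keyname": {
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Description": "Existing key pair for SSH access to the instances"
    },
    "SSHLocation": {
      "Type": "String",
      "Default": "0.0.0.0/0",
      "Description": "CIDR range allowed to SSH into the instances"
    }
  }
}
```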

There is still a lot of work to do on these scripts to turn them into production-ready stacks; however, for anybody starting on the journey, I hope that they are helpful.

Further work and consideration is needed, including:

  1. Setting up private subnets and VPN access for the author and publisher nodes
  2. Setting up IAM users
  3. CDN configuration
  4. Multi-AZ setup for HA and DR scenarios
  5. Shared content stores on S3, MongoDB and TarMK
  6. Auditing, logging etc.
  7. Adobe AEM configuration for content replication

The scripts are also available on GitHub so please fork the templates, make changes and let me know how you get on.


Building a Microservices Architecture on Azure with .NET – Part 2

Decomposing an ecommerce solution into an Azure Microservice Architecture

In the previous article I described the high-level requirements of a typical enterprise architecture that needs to expose its services over more touch points. I also provided a background on microservice architectures, as it is with this type of design that we hope to tackle the problem. In this article we will expand on the discussion, moving to a more concrete example using an ecommerce solution.

One of the greatest challenges in any distributed architecture is defining the boundaries that you will use to partition your application. Domain-Driven Design encourages us to separate our solutions into "bounded contexts", with each of our models belonging to one.

Using our example it is easy to identify Orders, Products, Inventory etc., but there are also functional requirements that our solution has to provide which do not really sit within a business domain; image processing and full-text search are examples.
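As a minimal illustration of the bounded-context idea (the namespaces and class names here are hypothetical, not from any of the solutions discussed), the same real-world product can be modelled quite differently in each context:

```csharp
using System.Collections.Generic;

// Catalogue context: rich product metadata for rendering pages.
namespace Shop.Catalogue
{
    public class Product
    {
        public string Sku { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public List<string> ImageUrls { get; set; }
    }
}

// Inventory context: the same SKU, but only what stock management cares about.
namespace Shop.Inventory
{
    public class Product
    {
        public string Sku { get; set; }
        public int QuantityOnHand { get; set; }
        public int ReorderThreshold { get; set; }
    }
}
```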

[Diagram: functional components of a full multichannel ecommerce solution]

The above diagram lists all the functional components of a full multichannel ecommerce solution. You could deliver most of these capabilities within a single .NET solution like nopCommerce, but as retailers become omnichannel the demands on their ecommerce systems have grown to the point that they cannot service all the touch points from a single solution whilst maintaining agility.

Creating a solution map of your system as above provides a good starting point for driving out all the different components of your architecture.

The Product Page

The first step in a customer's journey usually starts with a product, so it makes a pretty good starting point for us.

If we take a look at an Amazon product page we can quickly see that there are at least 18 different domains being queried for data to build out the page. Inventory and price may be mastered in a core back-end system, but product metadata, reviews and product images are probably provided by third-party services.

As we start to decompose the page into its individual elements we can start to draw boundary lines to separate out the architecture. Business domains are typically a good starting point for boundary identification, but data life-cycle should also be considered: a product packshot is unlikely to change during the lifetime of the product, whereas the price and inventory will change regularly.

A few years ago I worked with a team where we had large product pages, just like the Amazon one, however all product information was stored in a single datastore. This included everything from large slices of product metadata to price. As the website became more successful the business wanted to be more price competitive, which required us to deliver multiple price changes a day across the whole product catalogue. The net result was that every time a price change for a product was delivered we had to remove the product from the different caches and reload it, which would cause the product page to reload the entire product graph from the database; when the caches were cold this could almost take the website down.

We solved the problem by separating out the data. Prices became their own entity with their own life-cycle, and a price change just required the publication of a new price which would automatically get picked up. Consistency was not a huge issue for us, as the price was always checked at the last reasonable point with a direct read from the database before the order was processed. This made sure that we did not have a zombie price in the order, and because of the ecommerce funnel this final check only happened for 5% of the overall traffic, a much more manageable number of users.
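A minimal sketch of that final check might look like the following. The names here are hypothetical stand-ins, not the actual production code; the point is simply to re-read the authoritative price just before the order is accepted.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the final price check before order processing.
public interface IPriceRepository
{
    // A direct database read that deliberately bypasses every cache layer.
    decimal GetCurrentPrice(string sku);
}

public class OrderLine
{
    public string Sku { get; set; }
    public decimal QuotedPrice { get; set; }
}

public class CheckoutPriceGuard
{
    private readonly IPriceRepository _prices;

    public CheckoutPriceGuard(IPriceRepository prices)
    {
        _prices = prices;
    }

    // Runs once per order, i.e. for roughly 5% of overall traffic,
    // so the uncached database reads stay cheap.
    public IList<string> FindStalePrices(IEnumerable<OrderLine> lines)
    {
        return lines
            .Where(line => _prices.GetCurrentPrice(line.Sku) != line.QuotedPrice)
            .Select(line => line.Sku)
            .ToList(); // any SKU returned must be re-quoted before the order is accepted
    }
}
```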

Another important point when looking at your system for natural boundaries is to consider service availability. If the product page could not render product reviews it might impact sales, but only marginally; however, if there is no price, product title or availability then it's unlikely that you would sell the product. Therefore "service level" makes another logical boundary for separation.

We will also want to expose this data to a number of other interfaces, both user and system interfaces, including our search page, basket page, order history and screens within a mobile app. Additionally we may even expose this data as an external data feed for third-party affiliates and aggregators like Google Product Search to use.

In the traditional n-tier application we would have built a service layer that would probably have included a catalogue, basket and user service, each implemented as a class. To keep things clean we might have put our services into a separate DLL. This type of structure can be seen within the nopCommerce solution**.

[Diagram: the nopCommerce solution architecture]

The diagram below articulates the different capabilities working together to build the product page. It's not an exhaustive list, but it should illustrate the point.

[Diagram: ecommerce capabilities composing the product page]

Following the microservices approach we would build out each of these capabilities as a simple REST service. The product controller of our MVC application can then make an async call to each service to build out the view model and render the page.
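A minimal sketch of that controller might look like the following. The typed REST clients and the view model are hypothetical stand-ins for whatever clients you would actually generate against each service.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Web.Mvc; // ASP.NET MVC 5

// Hypothetical typed REST clients, one per microservice.
public interface IProductMetadataClient { Task<ProductMetadata> GetAsync(string sku); }
public interface IPricingClient { Task<decimal> GetAsync(string sku); }
public interface IReviewsClient { Task<IList<string>> GetForProductAsync(string sku); }

public class ProductMetadata
{
    public string Title { get; set; }
    public string Description { get; set; }
}

public class ProductPageViewModel
{
    public ProductMetadata Metadata { get; set; }
    public decimal Price { get; set; }
    public IList<string> Reviews { get; set; }
}

public class ProductController : Controller
{
    private readonly IProductMetadataClient _metadata;
    private readonly IPricingClient _pricing;
    private readonly IReviewsClient _reviews;

    public ProductController(IProductMetadataClient metadata,
                             IPricingClient pricing,
                             IReviewsClient reviews)
    {
        _metadata = metadata;
        _pricing = pricing;
        _reviews = reviews;
    }

    public async Task<ActionResult> Details(string sku)
    {
        // Fire the three REST calls concurrently rather than sequentially.
        var metadataTask = _metadata.GetAsync(sku);
        var priceTask = _pricing.GetAsync(sku);
        var reviewsTask = _reviews.GetForProductAsync(sku);

        await Task.WhenAll(metadataTask, priceTask, reviewsTask);

        return View(new ProductPageViewModel
        {
            Metadata = metadataTask.Result,
            Price = priceTask.Result,
            Reviews = reviewsTask.Result
        });
    }
}
```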

For further scalability and optimisation, some of these calls can be pushed out to Ajax requests that are made once the page has loaded, or when the user starts to scroll the page, creating a snappier user experience and supporting edge caching services like Akamai, but I will leave this for another post.

In this example the data within each service can change; however, it is unlikely to be changed by the user. Instead there are external processes that will change this data; examples include new recommendations, price changes, product reviews etc. These changes will be driven by external systems or feeds. The orange boxes in the diagram represent interfaces that would be changing the data stored in each microservice.

[Diagram: external interfaces feeding data into each microservice]

So, how would we build this using Azure?

We have a number of architectural options available to build out our microservice architecture.

  1. Build out the services using Virtual Machines, aka IaaS
  2. Build out each service as a web role within Cloud Services, aka PaaS version 1
  3. Use the Azure App Service to model the architecture with Web Apps, API Apps and possibly Logic Apps, aka PaaS version 2
  4. Use Docker and ASP.NET vNext to build, host and manage the microservices

Option 1: Building out the services using Virtual Machines, aka IaaS

One of the greatest challenges of building microservice architectures is the provisioning, deployment, management and monitoring of the services. Building our architecture on IaaS would mean building out a solution for provisioning, deployment, management and monitoring ourselves. Azure would only provide an SLA at the virtual machine level; everything else would be up to us. Azure offers better services for deploying and managing an application architecture, so I won't focus on IaaS here, but I may come back to it at a later date.

Option 2: Building Microservices using Cloud Services

As our architecture landscape grows to include many services, it becomes important that we manage our running costs by optimising our utilisation of cloud resources. If we were to build out the above architecture using Cloud Services, each component would have to be built as an individual WebRole.

Each Cloud Service is provisioned as a single virtual machine, and to have a highly available service two instances of each virtual machine have to be deployed. The 21 components in the diagram above would therefore require 21 × 2 = 42 virtual machines, which is not a very cost-effective use of resources. It is possible to host multiple WebRoles on a single virtual machine; however, deployment happens at the virtual machine level, which would require a complete redeployment of the virtual machine to change a single service. This does not really subscribe to the principles of a microservice architecture, and for this reason alone Azure Cloud Services is not the recommended approach.

In my next article I will explore how we can build our product page architecture using Option 3, the Azure App Service.

** I have used nopCommerce as an example of a particular architectural style. My observations in no way imply that nopCommerce is an inferior product or that they should have tackled the design differently.


Building a Microservice Architecture on Azure with .NET – Part 1

Microservices architectures have become a bit of a trend lately. As with all trends there are the lovers and the haters; however, from my perspective microservices, as defined in the excellent book by Sam Newman, describe the next evolution of application development. From my own experience building large-scale ecommerce solutions, Sam's book was like a shot of clarity.

Martin Fowler has written an excellent summary of microservices which is required reading for anybody thinking about venturing down this type of architecture. However, ThoughtWorks are still hesitant to promote microservices architectures due to the inherent complexity created by having an application composed of many granular parts. This complexity comes from having to provision, deploy and manage multiple applications within your ecosystem.

Here is a definition from Martin Fowler's article:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

Microservices have gained increasing popularity in the startup/massively-scalable space, with Netflix leading the charge. The Gilt Groupe is a great example of a company capitalising on a microservices architecture. They are a flash-sale clothing retailer who started off with a Ruby on Rails monolith; after the initial success of the company they have spent the last few years moving away from the monolith and now have a system comprised of over 300 independent services. Netflix is reported to have over 600 services, and Hailo has about 160.

The microservices label encompasses more than just the architecture; it also covers the processes and practices required to build these architectures. Many of the elements in Sam's book will be familiar to .NET developers who have been following best practices over the years, especially those who went down the CQRS route implementing service buses like NServiceBus.

But microservices are not just for startups; they provide an excellent architectural model for enterprises that now need to:

  1. expose their data and services to an ever-increasing number of touch points
  2. scale individual functions of their architecture
  3. change individual functions of their architecture

The image below of the Windows 10 platform displays both the opportunity and the challenge in building digital services: how do you build a service that can provide an experience on every device from a smartphone to a large screen, or integrate data from an IoT device, yet deliver a user experience on a HoloLens?

[Image: the Windows 10 product family]

Traditional architectures relied on us building a single monolithic application that probably serviced only a single touch point. As we have to service more touch points we will want to project our digital service onto each touch point, exposing only the user experience that is appropriate to the device. We will also need to change touch points independently and often.

An architectural vision for a modern enterprise would be an architecture that provides an excellent customer/user experience through focused touch points, supported by a low-cost, scalable, highly available, redundant infrastructure that can sustain a high frequency of change at low cost and low risk.

Easy!

The diagram below articulates a high-level logical architecture of a typical enterprise scenario, or the probable as-is of your enterprise application.

[Diagram: high-level logical architecture of a typical enterprise]

The architecture is broken up into five layers:

  1. Touch points – These are the external devices that your users interact with. They sit outside your firewall and may have only an intermittent connection to the internet.
  2. Presentation Services – These are the services that your devices interact with; they could be finished interfaces exposed as websites, or REST services exposing data for apps. They will most likely be HTTP endpoints.
  3. Frontend Services – These services provide the logic and processing for the Presentation Services. There may be a common set of services here that provide functionality to a number of Presentation Services, and they are generally separated by their domains. These services will have their own databases.
  4. Integration & Orchestration Services – This is the bridge between the modern front-end world and the legacy back-end services. The orchestration layer is typically provided by an enterprise service bus such as BizTalk Server, Mule or similar.
  5. Backend Services – These are the core back-end services that drive and support your enterprise.

There are two key characteristics that separate the infrastructure requirements of the top and bottom of this architecture. The top layers need to be available 24/7, will change frequently and are open to the public, and therefore need to support traffic spikes and scale. The bottom layers may only require low availability, sometimes on a 9-to-5 schedule, change rarely and don't need to scale, as they operate on a predominantly batch-based life cycle. This is an important point when reviewing and selecting infrastructure for the 24/7, frequently changing environments, as that profile can only be achieved with high levels of automation, and so the selected components must support automation.

As you can see from the diagram below, we can use Azure services to deliver this architecture.

[Diagram: the logical architecture mapped onto Azure services]

In the next article we will decompose an ecommerce solution into the above architecture.


Designing a Target Architecture for a large scale website built on Amazon Web Services

I am currently helping out a friend of mine with their startup. They wanted me to design an architecture and technology roadmap for their platform so that they can look to get funding to move things forward. They are already hosting their minimum viable product on Amazon Web Services, so that seemed like the obvious place to start.

To build a tech roadmap you need to start with the target architecture. I used the Amazon Visio stencils to draw up what I consider to be a pretty simple but scalable architecture for the site. The site is predominantly read-only, so there is not a large requirement for a scalable async workflow; as a result, scaling will be provided by adding more web and search nodes and by caching, caching, caching.

[Diagram: target AWS architecture for the site]

There is a requirement to process a number of feeds, which would be handled using Elastic MapReduce workflows rather than traditional ETLs, pushing the results out to Amazon DynamoDB for fast, scalable read access.

I have also added Solr as the search platform; however, this could easily be replaced with Elasticsearch or even the new Amazon CloudSearch (still in beta).

I then plugged all the details into the Amazon calculator to get a monthly run cost of $3617.73.

I find that cloud architectures tend to be quite prescriptive, and thus the above architecture could be considered rather generic for a large-scale web app.

I would be really interested to hear what people think of it and what could be improved.


A working app in production in a week?

I strongly believe that the only way to build successful products is to get them in front of your customers. This can often be rather difficult at the start of a project, as you have to buy production hardware, set up a delivery process and so on, something that is often ignored in the early stages of a project; as a result, the architecture that gets built becomes expensive to manage and the cost of change starts to increase. To prevent this from happening I start with a delivery runway.

You could consider the runway the equivalent of a production line in a car factory. You could build the car in your shed using an ad-hoc process, but this will not scale as you want to build more. So I look for and adopt technologies that make sure the first line of code written is deployed into a working production environment. Further to this, I like to deploy to production every day, or even multiple times a day. By reducing the amount of change that's deployed you reduce the risk associated with it, and reducing risk reduces cost. That means more money to spend on the features of your product that will make a difference.

Having run many enterprise-scale solutions in the past, it is incredible how much time and cost can be consumed compiling a release when a delivery process is absent. Doing this at the end of a long development cycle creates such a barrier to delivery that it makes many software products uncompetitive. By embracing change and building an infrastructure to deliver that change, we can adapt to market shifts.

I am definitely not the first to implement a continuous delivery process, and the presentation below from Chad Dickerson, CEO of Etsy, explains the process and its value, with a few graphs thrown in.


Business Architecture and the Business Model Canvas

One of the challenges when modelling a business/startup/project (a proposition) is to make sure that you capture all the inputs and outputs that can affect your decisions and architectural design. There are often so many unknown factors affecting a proposition that it is all too easy to focus on the visible, happy-path aspects rather than digging into the deeper elements that could reveal unhappy truths. Stakeholders can unconsciously hold back important and relevant information that can be critical to your decision-making process. Effectively teasing out requirements from stakeholders is a skill set in its own right, as people tend to create solutions in their minds rather than focusing on the problem. It is our job to get to the problem, and that can often be rather hard when the stakeholder already has a predetermined solution in mind.

Over the last 6 months I have been using the Business Model Canvas as an effective communication tool to document, validate and brainstorm my clients' ideas.

[Image: the Business Model Canvas]

It's a fantastic tool because it is structured in a way that poses questions across the whole business model that may not have been thought of, and because it's a visual tool it becomes clear very quickly where there are gaps in the vision.

You start by getting your clients to document all their assumptions onto post-it notes and stick them into the relevant cells on the canvas. I have seen many people print off large versions of the canvas to hang on the wall; however, I tend to draw mine onto a whiteboard and then stick the post-it notes onto that.

[Photo: a Business Model Canvas sketched on a whiteboard]

The original Business Model Canvas was developed by Alex Osterwalder and is documented in his excellent book Business Model Generation. The book is a great introduction to business models and provides a number of default models based on well-known, established businesses. My only complaint would be that it is targeted at the enterprise; as a result the models are rather too "big picture" for me. I like my models to be granular, as this enables me to define clear, actionable next steps. So I tend to blend the Business Model Canvas with elements from the Lean Canvas developed by Ash Maurya, author of Running Lean. The Lean Canvas is more product-centric, which I find more helpful as my clients tend to think in products and services rather than businesses.

Once you have gone through the process of documenting the idea onto the canvas, I tend to draw it up in a PowerPoint slide deck so that I can capture the business model as it develops. The slide deck also makes a convenient method for distributing the model to your clients or stakeholders.


Architecture Principles

In an attempt to put some structure to my learnings around IT architecture, I decided to get certified as a TOGAF Enterprise Architect. One of the artefacts that the TOGAF Architecture Development Method (ADM) says the architecture team should provide is referred to as "Architecture Principles": a document listing the principles that the architecture team are bound by. These should act as a guide to the decision-making process as the enterprise architecture is developed.

I used to call these "aspects" and would document them on our internal development wiki, but upon reflection I think these principles play an even more important role than I previously thought. They not only define the foundations of the decision-making process that your development team will subscribe to, but they also form the basis of the team's culture.

Enterprise architecture is often regarded as unsexy and can be viewed negatively due to its ivory-tower connotations. So do sexy companies do architecture? I have heard that Facebook have posters on their walls that say:

Move Fast & Break Things

The Facebook meme has massive implications for architecture, delivery process, testing and ultimately you, the user. It is not suitable for every company, but according to the TOGAF definition of a principle it definitely qualifies:

A fundamental statement of belief which guides the future direction of the architecture and supports the decision-making process.

A good TOGAF principle is more than a short statement of belief: it should also have a name so that it can be identified, a rationale that highlights the business benefits, and implications that highlight the impact of carrying out the principle in terms of resources, costs and activities for both the business and IT.

To make sure that your principles don't become shelf-ware and actually become part of your company's ethos, they need to be marketed both internally and externally. They need to be reviewed by your team members, discussed and updated. Your Architecture Principles are an asset to your organisation and should be leveraged accordingly.

So here are my Architecture Principles; they are the basis for how I run my personal projects and how I like to do things when I have no boundaries. They are a work in progress, so please provide your feedback.

Let me know what you think.
