Two steps forward with one shift left

Limitation of the linear development process 


Source: Intellectsoft

Spend enough time in the industry and you’re sure to have encountered your fair share of development processes. Often, they’ll take a form similar to the waterfall design image above, starting with Design, moving through Coding and Testing, and finally ending in Acceptance and Deployment. What the image doesn’t fully express is how much time and how many resources are dedicated to fixing the problems that crop up along the way and, critically, how draining and arduous the process of referring back to previous stages can be.

Why testing in the testing stage is too late 

By virtue of its purpose, the testing stage is where the majority of project problems and issues are caught. Here, mountains of error logs are generated, and a proportionally high volume of resources is required to identify and eliminate bugs.

By now, the code will have been weeks, more likely months, in development. With each new dependency introduced and each script developed, the code becomes exponentially harder to debug. Developers will often spend long periods re-familiarising themselves with solutions they built themselves, simply because they have forgotten how they work.

Security and compliance, whether included in the testing stage or in a stage of their own, bring their own set of headaches as well. Vulnerable libraries, outdated software and a whole list of similar issues caught here will need addressing, often with regressive effects on other tests and the overall solution.

All this effort leaves organisations unable to adapt to market demands and industry competition because their processes are slow and unresponsive. Time that could be spent performing actual development work and improving processes is instead spent trudging through the same work again, simply to understand how to solve problems. Mitigating these disruptions is where the Shift Left philosophy comes in, allowing you to move forward with greater agility.

What is Shift Left? 


Source: Checkmarx


As the name implies, the Shift Left philosophy is all about moving stages to the left in the sequence: in this case (image above), Security and Testing. Immediate efficiencies can be derived by including testers in the Development and Design phases, while also ensuring that developers practise Test-Driven Development. All of these actions measurably reduce unforeseen delays and disruptions, while ensuring enough time and resources are dedicated to fixing issues.
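To make Test-Driven Development concrete: the test is written first and fails, and only then is code written to make it pass. A minimal sketch in Python, where the `slugify` function and its expected behaviour are invented purely for illustration:

```python
import re

def test_slugify():
    # Written first, before slugify exists: these assertions define
    # what "done" means, and initially they fail.
    assert slugify("Shift Left") == "shift-left"
    assert slugify("Two steps forward, one shift left!") == \
        "two-steps-forward-one-shift-left"

def slugify(title):
    """Turn an article title into a URL-friendly slug.
    Written second, grown just enough to make test_slugify pass."""
    # Lowercase, collapse runs of non-alphanumerics into hyphens,
    # then strip stray hyphens at either end.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

test_slugify()  # passes once the implementation is complete
```

The point is the ordering: the failing test pins down the expected behaviour before any implementation exists, so a bug surfaces minutes after the code is written rather than months later in a dedicated testing stage.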

Debugging solutions that were built only yesterday is a far simpler task than poring over logs of processes developed months ago and integrated with multiple other projects. Keeping teams lean and flexible makes organisations better equipped to react to market forces and cuts project costs.

Shifting Left is the DevOps way 

Having established what Shifting Left is, the question now becomes: ‘How do we go about Shifting Left?’ Here is where DevOps culture and processes come into play. As a quick refresher, DevOps comes from the idea of combining Development and Operations, and is about continuous feedback and the dissolution of programming silos (groups) to encourage cooperation and quicker responses.

This culture of cooperation, continually gathering feedback and iterating on solutions, allows you to be flexible and adaptable to market changes. Testing earlier also prevents massive, fatal errors from taking the entire system down, instead keeping issues small and manageable.

Adopting DevOps also entails embracing automation, and where there is automation, there are resources to be freed. These automated processes keep costs low while permitting scalability. With a sufficiently rigorous infrastructure in place, future projects can benefit from previously generated solutions and processes, thereby improving business gains.

How to begin 

So, how do you begin to embrace DevOps and start Shifting Left? What tools are available to do so? How much of the process should be moved left, and when? Let us at ECS Digital bring you up to speed using some of the best practices we’ve established across multiple industries. After this course, you will not only have a better understanding of how DevOps will improve your processes, but also be in a far better position to apply these learnings to your organisation. Step forward and we’ll show you how to Shift Left.

Register for our Adopting DevOps training course in Singapore from 7th to 9th May 2019.

Matthew Song
DevOps Playground: CI with Blue Ocean

The Speaker: Matthew Song

Cloudbees Jenkins is the most popular open source orchestration tool on the market, thanks to its wealth of plugins and easy set-up of infrastructure as code. Yet where does one begin with the Jenkinsfile when setting up a new project and DevOps pipeline?

Let Blue Ocean take the hassle out of setting up a Jenkinsfile from scratch by providing a modern coat of paint on the Jenkins user interface. With its clean design and intuitive features, Blue Ocean facilitates a quick and easy setup of a new Jenkins pipeline with minimal fuss.
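For reference, the declarative Jenkinsfile that Blue Ocean assembles through its visual editor looks something like the sketch below; the stage names and Maven goals are illustrative assumptions rather than the plugin’s exact output:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile and package the project with Maven
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                // Run the unit tests
                sh 'mvn test'
            }
        }
    }
}
```

Blue Ocean writes and commits this file for you as you add stages and steps in its editor, so you get a version-controlled pipeline definition without hand-crafting the syntax.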

Following the video below, you’ll begin to see how easy it is to set up a new Jenkins Maven job using the Blue Ocean plugin, and the intuitive feedback it provides through its modern design.

I’ve also written a step-by-step guide to help you through it all.

If you’re interested in attending more hands-on sessions, DevOps Playgrounds are held once a month in four locations.

You can also find all the information and resources you need about DevOps Playground sessions, upcoming events and past events on our website.

DevOps Playground Singapore – CI with Blue Ocean

We at ECS Digital decided to kick off our first DevOps Playground Singapore of 2019 with the building of a Continuous Integration (CI) pipeline using Cloudbees Blue Ocean. The event was hosted at the Sandcrawler Building with the help of GovTech.

After forking the open source JPetStore repository, we proceeded to set up a CI pipeline running a Maven build with SonarQube testing, as well as a push to JFrog Artifactory, stopping just shy of a full deployment due to time constraints.
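In outline, the pipeline looked something like the following declarative Jenkinsfile sketch; the SonarQube installation name and Maven goals here are placeholders rather than the exact configuration we used on the night:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Maven build of the forked JPetStore project
                sh 'mvn clean package'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                // Static analysis; 'sonar' is a placeholder for the
                // SonarQube server configured in Jenkins
                withSonarQubeEnv('sonar') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
        stage('Publish to Artifactory') {
            steps {
                // Push the built artefact to the configured repository
                sh 'mvn deploy'
            }
        }
    }
}
```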

Beyond minor hiccups brought about by formatting and updates beyond our control, the playground proceeded smoothly and was well received by attendees.

We were successful in building the CI pipeline, showcasing the strengths of Blue Ocean’s intuitive UI and low barrier to entry. Attendees were shown the steps needed to inject Jenkins’ environment variables and given explanations as to why each step and tool was defined as it was. The Continuous Integration process completed without a hitch, with users able to view their generated artefacts in JFrog as well as scan results in SonarQube.

We had a large turnout, and we hope to see everyone again at our next Playground.

Interested in attending our next DevOps Playground in Singapore? Follow us on Meetup to receive a notification about the next event. Coming from the UK? We have Meetups in London and Edinburgh too! 

Top 5 AWS Technologies to keep an eye on in 2019

AWS re:Invent is a learning conference hosted by Amazon Web Services for the global cloud computing community. The event features more than 2,000 technical sessions, a partner expo, after-hours events, training and much more. It’s the main event for finding out the latest on AWS products and new releases.

With several dozen new products announced at the most recent AWS re:Invent, it’s certainly challenging to decide what to follow. So we’ve filtered it down to the top 5 technologies to pay attention to in 2019.

1. Lambda Layers

First up, we have Lambda Layers. The quick and dirty description of Lambda itself: serverless code that is easily scalable, where you only pay for what you use when you use it, taking much of the hassle out of managing servers.

Lambda Layers builds on this by offering a simple way to manage software and data across multiple Lambda functions. No longer do you need to deploy shared code with every function that uses it; now you just package the components in a zip file within a single Lambda layer and have the function reference it, just as it would normally.
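As a rough sketch of that workflow using the AWS CLI, where the layer name, function name, account ID and runtime are all made-up placeholders:

```shell
# Package the shared code; for a Python runtime, Lambda expects it
# under a top-level python/ directory inside the zip
zip -r shared-layer.zip python/

# Publish the zip as a new layer version
aws lambda publish-layer-version \
    --layer-name shared-utils \
    --zip-file fileb://shared-layer.zip \
    --compatible-runtimes python3.9

# Attach the layer to an existing function by its version ARN
aws lambda update-function-configuration \
    --function-name my-function \
    --layers arn:aws:lambda:eu-west-1:123456789012:layer:shared-utils:1
```

Once attached, the layer’s contents are available to the function at runtime exactly as if they had been deployed in the function’s own package.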

2. AWS Transit Gateway

Next comes AWS Transit Gateway, a new service with the goal of simplifying the management of network architecture and easing scalability. AWS Transit Gateway acts as a hub, managing and routing traffic between all connected networks, whether VPCs, on-premises data centres or remote offices, through a single connection.

While easing scalability is a major concern for many, the most immediate benefit of AWS Transit Gateway is operational cost reduction, brought about by each network only needing to connect to the single Transit Gateway rather than to every other network.

3. AWS Control Tower

Next up is AWS Control Tower, a technology so new it is only available in preview at the moment. Its core purpose is to automate the set-up of your multi-account AWS environment in just a few clicks, whilst providing options to enforce policies using service controls and to detect policy violations.

On top of automating these processes, Control Tower provides an integrated dashboard for a top-level summary of the environment, facilitating easy monitoring and enforcement of policies as well as providing intuitive feedback. It’s a great feature for any company attempting to embrace and practise DevOps.

4. AWS Marketplace for Containers

With the industry making significant moves towards microservice architectures, utilising containers for many services and processes, Amazon now offers more than 180 curated and trusted container products in AWS Marketplace and through the Amazon Elastic Container Service (Amazon ECS) console. This makes it easy to take advantage of the rest of Amazon’s services, such as Amazon Elastic Container Service for Kubernetes (Amazon EKS) and AWS Fargate, leveraging current knowledge and skills without friction or the need to change.

5. AWS Security Hub

Another important new technology available in preview is AWS Security Hub, which aims to keep security monitoring as agile as possible. Currently, with the vast number of security tools available, ranging from firewalls to compliance scanners, processing all the data and alerts can be difficult. AWS Security Hub provides a single place to aggregate everything. Integrating with other AWS services like Amazon GuardDuty, Inspector and Macie, Security Hub aims to provide a strong visual summary of the information while enforcing best practices and compliance. In other words, it is focused on giving a comprehensive view of high-priority security alerts and compliance status across AWS accounts.

While there is not yet a feature for custom rule sets, Amazon has indicated that it is looking to open up more options for policies and standards of best practice. This will hopefully give companies more flexibility to choose and adopt the policies most relevant to them.



Can a new hire benefit from Terraform?

The short answer? Yes. For that matter, any programmer, new or old, can benefit from Terraform. I’m only a few months into my journey with Terraform and it’s already proving extremely beneficial.

Before describing how Terraform can benefit you, it would probably be best to explain what Terraform is and get everyone on the same page. Terraform is software created by our partner Hashicorp that helps us implement infrastructure as code. With the industry’s bid to turn everything into code, it seemed only natural for Hashicorp to provide a solution for spinning up resources on service providers whilst enabling easy versioning and replication. In a single sentence: Terraform allows us to turn the entire process of setting up cloud providers into code that can be automated and version controlled.

The main benefits of using Terraform are how replicable everything becomes and how easy it is to make changes and track them. Like any good code, the resources that Terraform creates are easily transferable.

Problem Scenario

Imagine that, for some internal testing, you set up a group of resources to test a Continuous Integration pipeline on AWS. The plan is to use an orchestration tool like Jenkins, testing software like SonarQube and a binary repository like Nexus. You set up the resources, the security groups, the subnet and so on. You configure the ports the software is expected to run on, along with various other requirements, like Java versions.

Four months later, after the testing is complete, you need to set up the same solution for a client, perhaps with a different binary repository like JFrog. Only the resources are gone. There was no reason to keep resources you weren’t using for four months (or, if you did keep them, that was four months of paying for resources that weren’t being used: a loss either way). Now you have to go through the entire process of setting everything up again. In the best-case scenario, with perfect documentation, you’d still have to go through each and every step manually, provisioning each resource and configuring each software package. More likely, you’ll find a knowledge gap somewhere and have to fumble around trying to get it all working again.

On top of this, there are the minor changes and updates. For example, a new port needs to be opened, the keys need changing for security reasons, or a value was misnamed and needs correction. How can you ensure that the changes won’t impact the setup of the service? And for every change and update, the documentation needs updating as well.

Terraform to the Rescue

With Terraform, most, if not all, of the hassle can be removed from that messy situation. Code can be reused indefinitely, with perhaps some minor changes and updates to suit the new scenario.

Want to pre-install software like Jenkins and Nexus without having to manually download them and their dependencies? Turn it into a script and automate it.

Need to configure ports and environment variables for said software? Put those in the script too.
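Putting those answers together, a Terraform sketch for the Jenkins piece of the scenario might look like the following; the AMI ID, instance type, port and install commands are placeholder assumptions, not a drop-in configuration:

```hcl
# Security group opening the port Jenkins is expected to run on
resource "aws_security_group" "jenkins" {
  name = "jenkins-sg"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The instance itself, with software installed at boot via user_data
resource "aws_instance" "jenkins" {
  ami                    = "ami-12345678"   # placeholder AMI ID
  instance_type          = "t2.medium"
  vpc_security_group_ids = ["${aws_security_group.jenkins.id}"]

  user_data = <<-EOF
    #!/bin/bash
    yum install -y java-1.8.0-openjdk
    # ...download and start Jenkins here...
  EOF
}
```

Because the ports, dependencies and install steps all live in this file, re-creating the environment four months later is a `terraform apply` rather than a day of manual provisioning.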

A year has passed, and maybe the original programmer for the solution has moved to another project. Don’t worry: the Terraform script remains.

Made a change to security groups and unsure if it works? Copy the code, make the change and run it to verify its functionality. With some refactoring, it even becomes possible to copy only the parts relevant to you. As in the scenario above, simply remove the script and associated config files, if any, for installing Nexus and prepare one for JFrog. Now you have Terraform code for installing a CI pipeline that supports either Nexus or JFrog, which can also easily be modified to work on other operating systems and even install other tools.

Not For Everyone

That being said, an absolute newcomer should not start using Terraform immediately. Terraform knowledge in no way supersedes knowledge of the actual provider. Knowing that a security group exists and is needed for Terraform to create an AWS instance is not a suitable replacement for understanding WHY the security group is necessary or WHAT it does. Some hands-on experience with what goes on “under the hood” still goes a long way.

My Short Terraform Journey So Far…

As infrastructure as code, Terraform provides easily modifiable code that can be version controlled when integrated with services like Git, and, being code, it is easily repurposed for other projects and uses. As a new hire at ECS Digital with only a few months of using Terraform, I’ve already benefited tremendously from this incredible tool, from repurposing the code and scripts of others in my own setup to easily provisioning fresh instances to test against, without all the manual work that goes into setting them up.

Just as excitingly, Hashicorp is far from done with Terraform. As of this article, Terraform is still at version 0.11, which means there is vast room for iteration and improvement, such as better importing of pre-existing resources into Terraform’s set of managed resources.

Having already benefited so much from Terraform this early in its life cycle, I am certainly keen to see what more can be done with it. If you’re interested in starting your Terraform journey with us, feel free to contact us or check out the Hashicorp training we provide as official Hashicorp partners.

ECS Digital returns to Jenkins World 2018

ECS Digital returned once again to Jenkins World in San Francisco, hosted by our partner Cloudbees. This year we had the opportunity to listen to a whole host of talks delivered by various industry leaders. We also conducted the ‘Jenkins Pipeline Fundamentals’ training, teaching over 35 students with a wide range of backgrounds and experience in Jenkins.

Our very own Ivan Audisio led the training, covering the essential best practices and the nature of declarative and scripted pipelines. The real-world experience shared by both him and the students made for a stimulating and enlightening session for all. Alongside the theory, there were practical labs providing immediate application of what was learned.

In tandem with the training, there were a variety of courses available during the convention, including Jenkins Pipeline Intermediate, Jenkins Fundamentals and CloudBees Core on Kubernetes – Intermediate.

These full-day training sessions were held over two days to give those interested a chance to expand their knowledge and familiarity with the Jenkins tools and concepts. These ranged from the basic configuration of projects to end-to-end automation.

During the event, Cloudbees hosted their second annual DevOps World Awards Program which aimed to honour all the Jenkins contributors and DevOps innovators. ECS Digital received the award for ‘Service Delivery Partner of the Year’ in recognition of our contributions to the Cloudbees and Jenkins community. We are extremely grateful for this award, thank you to the Cloudbees and Jenkins team!

The Keynotes

Following the conclusion of the training, the rest of the convention was dedicated to hosting talks, demonstrations and presentations of Jenkins and other related Continuous Integration (CI) technologies and concepts.

During one such keynote presentation, Kohsuke Kawaguchi, Cloudbees CTO and creator of Jenkins, introduced the exciting new technologies they have been working on and discussed their vision of the future of CI. The five technologies discussed were:

  1. Jenkins Pipeline
  2. Jenkins Evergreen
  3. Configuration as Code
  4. Cloud Native Jenkins
  5. Jenkins X

Here is a closer look at some of the announcements that caught our attention:

Jenkins Pipeline

As before, Cloudbees continues to push forward with improving the Jenkins Pipeline, with updates to the Blue Ocean interface they have been developing since last year. One development Kawaguchi was particularly excited about was the extensibility that lets the Jenkins community contribute to the project, similar to the wealth of plugins the community has developed for Jenkins. He also believes it is time to move away from the old Jenkins user interface (UI) and begin to fully integrate Blue Ocean as the go-to UI for Jenkins.

Configuration as Code

While only touched on briefly, the idea of having Jenkins’ configuration as a file that can be version controlled and tracked is an exciting one. Rather than users manually making modifications with no means to track changes, which may break builds and functions, support is being developed to allow exactly this kind of version control. By capturing the configuration in a single config file that can be stored in a repository, it becomes possible to implement easy rollbacks in the event of failures, as well as easy replication. Being able to replicate a Jenkins setup by simply copying a single file is one step closer to the final goal of turning everything into code.
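The plugin being developed for this, Jenkins Configuration as Code (JCasC), reads a plain YAML file along these lines; the fragment below is illustrative rather than a complete, verified configuration:

```yaml
# jenkins.yaml – checked into version control alongside the rest of the code
jenkins:
  systemMessage: "Configured entirely from code"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"   # injected from the environment
```

A fresh Jenkins master pointed at this file comes up with the same settings every time, which is what makes the rollback-by-revert and copy-one-file replication described above possible.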

Cloudbees Suite

Recognising the increasing desire for better tools and support for their software, Christina Noren, Cloudbees’ Chief Product Officer, delivered a keynote introducing the Cloudbees Suite: a software package consisting of Cloudbees DevOptics, Cloudbees Codeship and Cloudbees Core.

Acknowledging the confusion caused by their rapid development of new software and improvements, Christina elaborated on their desire to rebrand their tools. This rebranding will help to alleviate the issue, as well as highlight their continued dedication to improving the tools available and creating more for the community.

DevOptics continues to deliver a means to accurately monitor performance and provide metrics of improvement, a key concept in Continuous Integration and Delivery of providing feedback to users. Working together with Core for easy deployment and Codeship for operational maintenance, the suite provides a strong collection of tools for furthering any company’s digital transformation.

The Conference

The conference served as a good place for networking, on top of providing a venue for talks and technical demonstrations from industry leaders and commentators, ranging from personal insights to experiences with Jenkins deployments.

Our gratitude goes out to Cloudbees for hosting the conference, as well as to everyone who took the time to speak with us and attend our training session.

For Jenkins or other DevOps-related consultations, please contact us here.
