Getting Hands-On with Jenkins X

July 25th was a big day for the DevOps Playground. Not only was it an opportunity for ECS Digital to work closely with its partner CloudBees, but the Playground and its members also had the privilege of welcoming Gareth Evans, who showcased CloudBees’ new tool, Jenkins X.

Through the session, Gareth uncovered what Jenkins X is and the challenges it can solve. We’ve summarised his talk below:

Jenkins X is an open source platform offering software developers automated testing, continuous integration (CI), and continuous delivery (CD) specifically in Kubernetes. By managing projects within Jenkins X, users get a complete CI/CD process with a Jenkins pipeline that builds and packages project code for deployment to Kubernetes containers. Users also gain access to pipelines for promoting projects to staging and production environments.

Running the “classic” open source Jenkins and CloudBees’ version of Jenkins on Kubernetes already has its benefits, thanks in part to the Jenkins Kubernetes plugin. This plugin allows users to dynamically spin up Kubernetes pods to run Jenkins build agents, and helps streamline the process of working with containers. Jenkins X adds what’s missing from Jenkins: comprehensive support for CD and the management of promoting projects to preview, staging, and production environments.

As many of you can attest to, Kubernetes is hard! Jenkins X aims to simplify this by getting you up and running at pace, and keeping you going quickly using some of the industry’s best practices.

In the Playground we learnt how to get up and running with Jenkins X in no time at all, using the CLI to create new applications and promote them to staging and production environments. Gareth also demonstrated CloudBees’ use of GitOps and ChatOps to interact with Jenkins X and how to utilise Preview Environments to get faster feedback to the developer.

The key takeaways from the Playground were:

  • Use the jx CLI to create a Jenkins X cluster on GKE
  • Create an application based on a set of templates
  • Push the application to a staging environment using GitOps
  • Change the application and interact with the PR using ChatOps
  • Learn how Preview Environments can speed up developer feedback
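In jx CLI terms, that walkthrough boils down to a handful of commands (a sketch using the jx syntax of the time – the application name and version here are illustrative):

```
$ jx create cluster gke                # provision a Jenkins X cluster on GKE
$ jx create quickstart                 # scaffold a new application from a template
$ git commit -am "change" && git push  # GitOps: pushing triggers a build and deploy to staging
$ jx get applications                  # see which version is running in each environment
$ jx promote myapp --env production --version 1.0.1
```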

If you’re interested in learning more about how Jenkins X works, you can explore more in this blog.

The Team



This is a community event for the people, run by the people, and we had some pretty amazing ECS Digital team members to help out during the London DevOps Playground – which was a good thing, considering the Playground was just shy of hitting full numbers again!



This was definitely one of the most successful nights we’ve had at the DevOps Playground London, with over 70% of the attendees being first timers. This influx of newbies is amazing, as we not only love to welcome new people into our community, but we also opened up the world of Jenkins to a new audience – which was pretty cool!

Take Some Home

If you were there on the night, or didn’t quite catch something from the Playground, please find all the details below (including a link to the recording from the day):

🐼 Hands-On with Jenkins X Playground – official recording:

Github repo – DevOpsPlayground/Hands-On-With-Jenkins-X

Gareth Evans – Lead speaker and a keen technologist, developer, open-source contributor and cloud advocate engineer at CloudBees. Currently working on the Jenkins X project.

Jenkins X is a CI/CD platform for Kubernetes

🐼 DevOps Playground website

🐼 DevOps Playground London Meetup Page


Benjamin Shonubi
Plotting a Container-Centric Future. Part One

Containers are unlocking new and innovative ways of developing and running software. With containerisation, the potential of hybrid cloud computing is finally becoming a reality. The evolution of containers is much akin to that of Virtual Machines (VMs) 15 years ago – eyed with suspicion in the early days, but now a de facto part of every IT infrastructure. Likewise, containers are becoming the default plan for organisations in all sectors and of all shapes and sizes.

Why? For those not yet familiar, containers are lightweight, portable, virtualised, software-defined environments. Their growing popularity is due to the fact they facilitate modularity, portability and simplicity when provisioning virtual infrastructure. They represent, in many ways, a step-change in how IT functions deliver applications; reduced boot times, improved resource utilisation and a lack of infrastructure dependencies facilitating swift deployment and iterative development and test cycles.

ECS Digital’s approach to containers is simple: it’s all about choice. Tooling agnostic – everything from cloud solutions to automation and edge – we’re led by the needs of our customers. Whilst there are many commercial container distributions available today, we choose to work with two main partners: Docker and Kubernetes (specifically, Rancher). Naturally, many organisations have a few requirements when selecting a platform to host their applications. By far the most common one is the desire to attain and retain agility by not being locked into a particular offering that prevents easy migration to other cloud platforms. In reality, this means selecting a platform based on Kubernetes, as this has been proven to be the standard by which other orchestrators are judged.

In this three-part series, we will take a look at the features of Rancher, highlight those that other container orchestration management tools don’t offer out of the box, and help you find the perfect deployment partner. Let’s start with Rancher’s pivotal features…

Rancher – Extra rBACtteries Included

Rancher is widely regarded as the #1 choice for running enterprise-scale containers and Kubernetes in production. It’s the only distro that can manage all Kubernetes clusters on all Clouds. It also accelerates the adoption of open source Kubernetes while complying with corporate security and availability standards.

100% Open Source

All Rancher products are 100% open source and free to use. Rancher deploys upstream, open-source Kubernetes, so the latest features in each Kubernetes release are always available for users. Rancher has also successfully shaped Kubernetes into an enterprise offering by putting security first and making it easy for businesses to control and interact with all of their clusters from a single interface.

No Vendor Lock-In

Rancher remains agnostic about which provider to use. It gives you, the user, the freedom to quickly deploy Kubernetes anywhere, with the configuration that you want. It also abstracts vendor differences so that users can interact with each cluster in the same way. Rancher makes it possible to run multiple clusters whilst enabling you to manage each cluster independently. And if you ever decide to stop using Rancher, you can quickly and cleanly uninstall the platform as if it was never there.

Multi-Cluster Management

Rancher was built to manage Kubernetes everywhere it runs. It can easily deploy new clusters from scratch, launch EKS, GKE and AKS clusters, or even import existing Kubernetes clusters. This month, Rancher went as far as to launch RIO, a MicroPaaS that can be layered on any standard Kubernetes cluster. And the best part? It’s free! Try it out for yourself today.

In short, Rancher is a complete container management platform, with a few added bells and whistles to make using the tool both practical and able to integrate with other applications. This ease of use makes Rancher an ideal partner for businesses scaling change initiatives using containerisation technology. And we should know. After a 14-month engagement with an industry-leading asset tracking client, ECS Digital has been instrumental in delivering and operating globally deployed container applications on Rancher that will revolutionise the industry.

In part two of the series, we’ll explore what you should look for in a partner, and how choosing the right partner can help drive a successful transformation for you and your business.


About the Author:

Morgan Atkins is the container technology lead at ECS Digital and is one of the leading consultants for containerised applications in the UK. Not only is Morgan a certified Docker trainer and consultant, but he also takes great pride working alongside and upskilling customers in the adoption of container products such as Rancher, Docker and Kubernetes.

About ECS Digital

ECS Digital is a leading DevOps and Digital Transformation consultancy based in London, Singapore and Edinburgh. Being deeply embedded in the world of DevOps and the tooling that this movement is driving, ECS Digital is proud to partner with the leading software vendors in this space, including Rancher, Docker, CloudBees, Aqua, Sonatype, HashiCorp, New Relic and ServiceNow.

Want to adopt Rancher in your business? Talk to the team today about how you can get started.

Morgan Atkins
DevOps Playground: more than just another lecture

As the DevOps Playground enters its fourth year, we take the opportunity to look back at how the DPG was initially formed and its subsequent success.

Why ECS Digital started the DevOps Playground:

Meetups are a great way to meet like-minded people, learn something new and eat as much pizza as is humanly possible. Technology-focused meetups, however, often leave one excited and hopeful about a new product or technology with no easy way to explore it. Couple that with our busy lives and these new technologies will only ever be added to the long list of “Tools I will definitely try one day soon!”

As a result, we at ECS Digital decided that we could satisfy the tech industry’s insatiable desire for pizza while allowing people to really experience new tooling without impacting their ever-shrinking social calendars.

In addition to showcasing new technologies and allowing people to get hands-on experience with those tools, the DevOps Playground acts as a platform for ECS Digital’s own talent to build a name for themselves and demonstrate the breadth and depth of knowledge ECS Digital wield within a number of different technology areas.

Attendees can expect to follow along with a structured and comprehensive exercise, designed to jumpstart new users with unfamiliar technologies and to highlight the best ways to use the technology going forward.


What happens at a DevOps Playground?

Each month, you are welcome to join us as we explore new technology / tools in one of our four locations – London, Singapore, Pune and Edinburgh. Each Playground lasts for around 2.5 hours, with a chunk of that time set aside for you to run and use the chosen tech / tools on your own laptop.

Our engineers will be on hand throughout the Playground to help you navigate your way round the technology, with the hope that you leave feeling more confident than you did when you arrived. Open to all tech enthusiasts, this is the perfect environment to learn, network and play – and there’s usually free pizza. Pizza AND tech, what’s not to love!

How the Playground has evolved:

Our environments:

With the success of the Playground’s brand and the ever-increasing number of global members, we have had to innovate in order to keep up with demand. During the Playground’s infancy, the standard method for distributing slide decks, resources and the all-important technology environment was a chunky VDI. Due to its size, we would have to load these onto 8GB USB sticks and physically hand them to attendees at the door. This obviously meant that we would spend the first 15-20 minutes of every meetup waiting for people to copy massive files onto their personal computers and then load up VMs – and that was before we had even started the technical part of the evening.

Realising that this method of distribution was not going to scale, we looked internally to our engineers for a solution that would work for attendees with a wide variety of skill levels.

In true DevOps fashion, after a few iterations we settled on a dynamic cloud instance for every attendee with a web-based terminal (wetty). This allows us to spin up exactly the number of instances required for an individual event and bring them down once the event has concluded, reducing not only the cost but the potential risk associated with having 80 cloud instances running publicly.
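As a sketch of that pattern (assuming AWS – the AMI ID, instance type and tag values here are purely illustrative):

```
$ # spin up one small instance per attendee, each bootstrapped with wetty
$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 --instance-type t3.micro --count 80 \
    --user-data file://install-wetty.sh \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=event,Value=playground}]'

$ # ...and tear every tagged instance down once the event has concluded
$ aws ec2 terminate-instances --instance-ids $(aws ec2 describe-instances \
    --filters Name=tag:event,Values=playground \
    --query 'Reservations[].Instances[].InstanceId' --output text)
```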

As the success of our London-based meetup continues to grow from strength to strength, in 2018 we took the DevOps Playground brand global, setting up three additional meetup events in Singapore, Pune and Edinburgh. This new global reach has helped us spread the ECS Digital message and introduce new technologies and concepts to even more people.

A powerful recruiting tool:

The DevOps Playground has been a strategic tool during our recruitment process, with many candidates being identified and subsequently hired as a direct result of attending our events. These new additions were afforded the opportunity to meet the ECS Digital team in a relaxed setting with no obligations – in fact, in most cases, individuals were not actively seeking new employment opportunities.

What the future looks like:

World domination! Maybe not… but we do want to continue building our reach and contributing to the wider DevOps community. Due to the popularity of our London events, our current location is hitting capacity on a regular basis. We’d love to work with other tech enthusiasts who have access to bigger spaces so we can open the Playgrounds up to more of our community. If you happen to have a large space and want to support the DevOps Playground by letting us borrow it for an evening, we’d love to hear from you!

We would also love the opportunity to collaborate with other meetup groups. If you have an idea of how we can better serve our communities, get in touch and let’s talk over how we can turn those ideas into value for our members.

And last but certainly not least, technology is genderless and we want to continue promoting its application to as diverse a group as possible – starting with hosting more Women In Tech DevOps Playgrounds following the success of our WIT event last year. Whilst men are welcome to attend, these events are super important for creating an environment where women feel comfortable learning about new technology in what is traditionally a male-dominated industry.

How to get involved:

As mentioned above, we host our DevOps Playgrounds once a month in four locations. These are all publicised on Meetup as soon as the team have the details available:

You can also find all the information you need about DevOps Playground, upcoming events, past events and the Playground Panda on our website:

What next?

Hopefully the above has tempted you to come and say hello to the DevOps Playground team in person! Our next events are live on the website / meetup groups (links above) so pick the one most local to you, grab your laptop and follow the smell of pizza. Go on, you’ve got nothing to lose but maybe lots to gain!

Morgan Atkins
Open source. Are you part of the community?

Open source is a type of licensing agreement – not very exciting. The exciting bit is that it allows users to create and publish work that can be freely used, modified, integrated into larger projects or derived into new work based on the original by other users.

In an age of trade secrets and profit-driven professions, this is a unique platform that actively promotes a global exchange of innovation. It has been specifically designed to encourage contributions so that the software doesn’t stand still. The collective goal of this barrier-free community is the advancement of creative, scientific and technological tools and applications – which for many is more important than a price tag.

Who uses open source?

Although it is most commonly used in the software industry, professionals adopt open source licenses in many industries including biotech, fashion, robotics and teaching. This article will focus solely on software applications.

What’s interesting is that more and more businesses are contributing their own source code to the community – Facebook, Airbnb, Cyprus are leading examples. According to a 2018 Tidelift Professional Open Source Survey, 92% of projects amongst European respondents contain open source libraries. Whilst on the surface this contradicts conventional commercial instinct, businesses gain a lot by giving away a little. Whilst the benefits are vast, we are going to focus on five:

  1. Competition:

Since the late 90’s and the advancement of the digital age, competition no longer resides simply between two rival companies. Businesses today also find themselves competing with open source software projects that are free, open to the public and constantly evolving.

Due to the current scale of open source contribution, even the giants in the tech industry are struggling to devote the resources or teams large enough to compete with their community counterparts.

Turning to the open source community enables businesses to outsource resource rich projects to a bottomless sea of innovative capabilities. This potentially reduces cost, pressure and speeds up the feedback loop considerably.

  2. Reputation:

In the same way the Big Bang Theory made traditional science nerds cool, the open source community can boost a business’ profile on the cool/not-cool spectrum.

Not only do businesses become more attractive to potential employees, but by initiating an open source software project, or contributing to an existing one, they make their mark on an additional and powerful channel popular within IT circles. If done well, this has the potential to establish, maintain or improve a brand’s image, as well as attract new business.

  3. Advancement:

Helping to advance something as big as the technology industry isn’t something to turn your nose up at. In fact, businesses revel in the idea of having their name against a leading piece of software that has the potential to make history.

But history moves fast. And building software in-house can be stifled by other business priorities, resource restrictions and competitors beating you to the finish line.

Rather than building behind closed doors and waiting until your software is perfect, opening your source code to the community in its early stages has two benefits:

  1. You can plant your flag earlier
  2. You invite an endless list of innovative capability to help advance your idea at a rate unlikely to be attainable behind closed doors

It also acts as an incentive for individuals to feel part of a project that extends far beyond the business they work for.

  4. Trust:

Fake news, data breaches, shady deals – all of these have encouraged people to lose trust in businesses. Including open source projects in company policy encourages a business to be more transparent with its consumers. Whilst it is naive to believe a company will lay down all of its cards, companies such as Facebook made 15,682 open source contributions in 2016, Automattic created WordPress as an open source project that currently powers 31% of the internet, and Netflix frequently open sources the tools it develops in-house.

Not only are they strengthening their brand; sharing shows the world they have nothing to hide – which is a proven way to start winning back trust.

A great example of building this trust through transparency is the cryptocurrency space where many projects including Bitcoin allow you to browse through the project’s source. A very different approach to their corporate counterparts.

  5. Speed:

Many companies face the same problems, and sometimes a company is kind enough to share its solution. If a problem has been solved before and the existing solution provides business value in a fraction of the time and with half the manpower, everybody wins.

Contributing to the community also gives you the ability to ask the project’s contributors questions directly, request features or raise issues – fast feedback that keeps your project moving.

How does open source work? 

Contributors create a project and solve a problem. They realise that other people might benefit from this project to solve their own problems. The project is shared on an open platform such as GitHub, where it can be downloaded and used by other users interested in the project.

If users wish to contribute, they can do this by downloading the project, creating a fork (an exact copy of the repository under their own account) and editing the code until they are happy with the changes. Users can then open a pull request, which notifies the authors that a change is being suggested.

It is up to the author to review the change and decide whether they want to include it. If they do, it usually becomes part of the next version, which is released at the author’s discretion.

The problem is, this could take some time. The author is under no obligation to release new versions or accept proposed changes. In fact, this is one of the limitations of the open source community. People will only give up as much information as they want to / their projects need. Authors are not there to solve specific problems, and often release software that focuses on their needs rather than trying to create something too generic.

This can be frustrating if an open source project only solves half your problem, however, the community can help bridge knowledge gaps. Users also have the option to download, build and run the project locally in the interim whilst waiting for the official new version – meaning they don’t need to wait for the software to be released with the changes they need.
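The fork-and-pull-request mechanics described above can be simulated with plain git (a sketch – a local bare repository stands in for the author’s copy on GitHub, and all names are illustrative):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d) && cd "$dir"

# The author's original project ("upstream")
git init -q --bare upstream.git
git clone -q upstream.git author && cd author
git config user.email author@example.com && git config user.name Author
echo "hello" > README.md
git add README.md && git commit -qm "initial project"
git push -q origin HEAD
cd ..

# A contributor forks (copies) the project and edits on their own branch
git clone -q upstream.git fork && cd fork
git config user.email contrib@example.com && git config user.name Contributor
git checkout -qb fix-readme
echo "hello, world" > README.md
git commit -qam "improve README"

# Pushing the branch is the point at which you would open a pull request
git push -q origin fix-readme
echo "upstream now has $(git ls-remote --heads origin | wc -l | tr -d ' ') branches"
```

On GitHub the fork and the pull request live on the server side, but the underlying branch-and-merge mechanics are exactly these.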

How is it viable?

Whilst it doesn’t make economic sense on the surface, the community have found a way to make open source viable from a business and individual perspective. Some have capitalised on their projects, making basic versions available at no cost to the user, but adding a price tag to different versions or ‘add-ons’.

Other businesses or individuals actively contributing to the platform have benefited from angel investments, as well as new business after demonstrating successful projects.

It is also often a side project for businesses and individuals. Due to the legal freedom attributed to an open source platform, you’re able to modify the code of the product you’re using endlessly, for free, at no risk of breaching privacy policies or user agreements. This makes it the opportune ‘playground’ for those looking to get into the industry or develop new skills. According to LinkedIn:

“We believe that open sourcing projects makes our engineers better at what they do best. Engineers grow in their craft by having their work shared with the entire community.”


With all open platforms, there is a risk of abuse. Open source communities are no different and have certainly experienced their fair share of malicious activity. However, it is the open source approach that significantly increases the reliability of the projects available to the public.

By establishing a community who believe in the future potential of the projects produced, you immediately have a security indicator in place. Many of them, in fact. And with so many eyes looking at projects, malicious activity is quick to be spotted and remedied. This is because open source platforms embody an agile mentality, applied in a community-wide approach. Rather than make one big change and focus on ensuring it is okay for the next six months, contributors and authors are interested in making changes quickly, so things get fixed and evolve just as quickly.


ECS Digital love to find value for our clients and give it back to the wider community, which is why we make tools available on open source platforms such as GitHub and NPM.

We will also be hosting a hands-on session and demonstration of AyeSpy – a visual regression testing tool – at an upcoming DevOps Playground on the 29th of November. Come along to learn more about what AyeSpy has to offer!

Matt Lowry
Running Hubot in Production

In a previous blog post, we spoke about the basics of ChatOps and Hubot, and how they can be used to make your workflow more efficient. In this blog, we’ll take a slightly more in-depth look at running Hubot in production. If you’d like to know more about setting up and basic scripting of a ChatOps bot, please read the previous part of this blog.

In this post we’ll share some of our experiences and thoughts on running Hubot in a production environment, and go through some practical examples of how to achieve this. Please note that this post is focussed primarily on Linux systems.

This guide starts with the assumption that you’ve already created a Hubot instance using Yeoman. If you’re unsure of how to do this, read the instructions here. All the files mentioned in this post can be found on Github here.

Version Control

Once your Hubot instance has been created, you should commit your changes to a version control system of your choosing. Any changes to Hubot should be committed and updated from version control. See this link for some useful information from the Hubot documentation.

Run Hubot as its own user

From a security standpoint, we advise that you run Hubot as its own user. In Linux you can create a system user with the following command:

$ useradd -r hubot

Creating a system user is also good practice, since system users aren’t able to log in and don’t have home directories, which has some security benefits.

Updating Hubot

At ECS Digital, we don’t update our Hubot instances all that often, so we don’t use an automated deployment process. To update the code-base on the production Hubot server, we do the following:


Of course, this process could be automated, but as we don’t update Hubot too often, we’re happy with this method for now. Ideally, though, we would write a script for Hubot to update itself!
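As a rough sketch, a manual update along those lines might look like this (assuming the instance is deployed from version control and managed by a process supervisor – the path and program name are illustrative):

```
$ cd /opt/hubot                  # directory the production instance runs from
$ sudo -u hubot git pull         # pull the latest committed changes
$ sudo -u hubot npm install      # pick up any new or updated dependencies
$ sudo supervisorctl restart my-hubot
```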

Ensuring the Hubot process is run at startup (and kept running)

I’m personally a big fan of Supervisord. Supervisord is an excellent project which can control processes for you.

Some of the benefits you get from Supervisor are:

  • Log handling for stderr and stdout – this includes log rotation options.
  • Automatic restarts when a process dies.
  • Remote web interface and XML-RPC API for remote controlling processes.
  • Config is much easier to deal with than init or upstart scripts.

Supervisor is available as an .rpm package for Redhat Linux variants and a .deb package for Debian Linux variants. It can also be installed via the Python pip package manager.

As we’re running Hubot on an Ubuntu 14.04 AWS instance, the supervisor package is available in the standard repos and can be installed with the following command:

$ sudo apt-get install supervisor

Supervisor can also be installed via pip, which will ensure a more up-to-date package. You may have to install Python 2.7 and pip if your distribution doesn’t come with Python installed already. You may need to run this command as root:

$ pip install supervisor

Config files for supervisor generally reside in /etc/supervisor. Here is an example config for running Hubot via supervisor:

[program:my-hubot]
command=bin/hubot --adapter slack ; command to execute
directory=DIR/WHERE/HUBOT/IS ; cwd for program
; Log file handling
stdout_logfile=/var/log/hubot/%(program_name)s-stdout.log
stderr_logfile=/var/log/hubot/%(program_name)s-stderr.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
user=hubot ; user to run hubot as
autorestart=true ; restart hubot if the process dies
; Add any environment vars needed below
environment=HUBOT_AUTH_ADMIN="<your-slack-userid>"

As you can see, in the example above we are doing the following:

  • Defining a command
  • Defining a working directory
  • Logfile handling for stdout and stderr output, plus logfile rotation. Note the %(program_name)s Python variable expansion in the log names.
  • Telling supervisor to run the process as the hubot user
  • Telling supervisor to restart Hubot upon death.
  • Defining a few environment variables to pass to the process.

Once you’ve created or updated config for a program, run the following command:

$ sudo supervisorctl update

Then run this command to verify that Hubot has started:

$ sudo supervisorctl status

To restart Hubot after updating it, run the following command, replacing my-hubot with the name you’ve chosen for your program:

$ sudo supervisorctl restart my-hubot

See here for more information on supervisor config options.

For our production instance, we commit the supervisor config to the Hubot repo and then simply symlink the file into /etc/supervisor/conf.d/my-hubot.conf. That way, our supervisor config is nicely versioned and can easily be rolled back if something breaks.

Handling role-based permissions with hubot-auth

Sometimes you want to lock certain Hubot functionality to a particular group of users. Although Hubot has no support for this by default, we can add it with the hubot-auth plugin. The hubot-auth plugin uses Hubot’s “brain”, so if you’re using this plugin, you’re going to want to make sure that you’ve connected Hubot up to Redis so the “brain” is persistent. Install instructions are on the GitHub page.

You may have noticed the HUBOT_AUTH_ADMIN environment variable in the supervisor configs. This defines which administrators have permission to add or remove users from roles. If you’re using Slack, you’ll need to get the userid – not the username. See here for a more detailed summary.

Once you’ve installed the plugin and started Hubot again, you’ll be able to do things like this:
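With the plugin loaded, a session looks roughly like this (responses paraphrased; usernames illustrative):

```
angus> hubot what roles does angus have
hubot> angus has the following roles: admin.
angus> hubot angus has new-role role
hubot> OK, angus has the 'new-role' role.
angus> hubot what roles does slackbot have
hubot> slackbot has no roles.
```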

So, as you can see, I have the ‘admin’ role which allows me to set and remove roles from users. Next, I added myself to the role ‘new-role’. I now have two roles: admin and new-role. Slackbot has none.

To use these roles, we have to create some logic when we are using Hubot scripts. Here’s an example script:

# Description:
#   Hubot auth example
module.exports = (robot) ->
  robot.respond /am [iI] authed/i, (res) ->
    user = res.envelope.user
    if robot.auth.hasRole(user, "a-role")
      res.reply "You sure are #{user.name}!"
    else if robot.auth.hasRole(user, "admin")
      res.reply "Nope, but you are an admin. Add yourself!"
    else
      res.reply "NO! Get outta here"

And here’s the script in action:

You may have noticed the slight caveat here: you are going to have to retrofit authorisation logic into any script which requires some form of authentication. Unfortunately, we’ve yet to find a better solution for user authentication with Hubot.

Handling end-to-end testing of Hubot

Note: This section focuses only on using Hubot with slack.

If you need to make sure your Hubot is up and responding with a tool like Sensu, Nagios or Icinga, you can use the following workflow:


The basic premise is that we create a private slack channel which consists of you, Hubot and Slackbot. Next, we use the Slack remote response API to trigger Hubot using the echo command:

We then access the Slack message APIs using Hubot’s API token and retrieve the last message from the API to ensure that it matches the message we sent.
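Under the hood, a monitoring check along those lines can be sketched with curl (assuming the Slack Web API’s chat.postMessage and conversations.history methods – the channel ID, token variable and probe text are illustrative):

```
$ # 1. ask Hubot, via its echo command, to repeat a unique probe string
$ curl -s https://slack.com/api/chat.postMessage \
    -H "Authorization: Bearer $MONITOR_TOKEN" \
    -d channel=C0XXXXXXX -d text="hubot echo probe-1234"

$ # 2. after a short pause, read the latest channel message back
$ curl -s "https://slack.com/api/conversations.history?channel=C0XXXXXXX&limit=1" \
    -H "Authorization: Bearer $MONITOR_TOKEN" | jq -r '.messages[0].text'
```

If Hubot is healthy, the second call should print the probe string; seeing the original “hubot echo …” message instead indicates the bot did not respond.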

Once you’re happy the test is working correctly, you can leave the Slack channel to avoid being notified about the test every time it runs.

You can find a Sensu plugin in the Github repo for this blog. I’m not a coder by trade so please don’t hold my terrible ruby code against me! If you have any suggestions on how it can be improved, feel free to contact me with your ideas.

To find out more about ECS Digital, and our unique take on DevOps, check out the training courses that we offer on our website.

Angus Williams
How ChatOps is redefining enterprise and open source DevOps

The concept of ChatOps isn’t new, by any means. People have been using chatbots on IRC and other messaging channels for a long time. The term ChatOps was only coined more recently, by GitHub, and is often placed under the DevOps umbrella. This isn’t surprising, since example use cases and stories often describe deploying an entire app to production by simply sending a message to a bot. It really can be as easy as “@mybot deploy myapp to production”.

So, what is ChatOps all about then? In this blog, we’ll take a look at some of its defining features, and how it can be an essential and valuable component of a DevOps implementation.

What is ChatOps, and how does it work?

Essentially, ChatOps begins with you registering a scriptable bot in your chat channel – be it Slack, HipChat or whichever chat service you prefer – and then asking it to perform routine tasks for you. There are several ChatOps frameworks available, the most notable being Hubot from the folks at GitHub. Hubot is written in Node.js, with scripts written in either CoffeeScript or JavaScript. The other two notable chatbot frameworks are Err, which is written and scripted in Python, and Lita, which is written and scripted in Ruby. For the purposes of this blog, we’re going to focus on Hubot.

What makes ChatOps worthwhile? 

Some of the benefits of ChatOps include:

  • Automating common tasks

Chatbots are excellent for automating common tasks that are hard to trigger automatically or can’t be run on a schedule. Basically, any task that needs some human consideration to begin execution is perfectly suited to chatbot automation.

  • Shared history and operational visibility

A shared history means that anyone on the chat channel can see what others have already asked the bot, reducing the chance of wasted time due to repeat work.

A look at ChatOps in practice

Let’s take a look at an example chatbot. Installation is pretty straightforward: the only dependencies are Node.js and Yeoman, a scaffolding tool. You can find the official install instructions here. Install the bot with the default Campfire adapter; any other adapters you want to use (e.g. Slack or HipChat) can be installed later.

To give him a bit of personality, I called my bot Garry. Let’s start Garry up in shell mode. You might see some Heroku or Redis warnings – they’re safe to ignore for now. The robot has a “brain”, which is essentially a persistent store for certain details, backed by Redis. It isn’t essential for the robot to operate.

$ bin/hubot

We’ll need a prompt to be able to talk to Garry in shell mode, so typing ‘garry help’ will trigger Hubot to list all the commands it knows about. Hubot comes with a bunch of built-in plugins, but none of them do anything particularly useful, and they mostly serve as examples. You can turn them off by removing them from the file ‘external-scripts.json’.

garry> garry help
garry adapter - Reply with the adapter
garry animate me <query> - The same thing as `image me`, except adds a few parameters to try to return an animated GIF instead.
garry echo <text> - Reply back with <text>
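For reference, a freshly generated external-scripts.json looks roughly like the following (the exact package names depend on your generator version, so treat these as illustrative); deleting an entry disables that plugin:

```json
[
  "hubot-diagnostics",
  "hubot-help",
  "hubot-redis-brain",
  "hubot-pugme",
  "hubot-rules"
]
```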

Now, let’s run some of the included example scripts:

garry> garry ping
PONG

garry> garry echo I am a chatbot!
I am a chatbot!

Chatbots allow custom scripts for total customisation.

Hubot has a large base of custom scripts for performing certain actions, and you can view them here or search through npm (npm search hubot-scripts <query>). For now, let’s create our own custom script. Hubot scripts can be written in either pure JavaScript or CoffeeScript.

Let’s create a fictional company directory so we can ask Hubot to give us details on particular people. I’ll place my company directory script in scripts/. I’m defining the users as an array of objects for this example, but you can easily replace this with some logic to query a user directory or database as necessary.

I’m using a Node.js module from the atom text editor (also by Github) called fuzzaldrin to take care of the search logic.


# Description:
#   Lookup user info from company directory
# Dependencies:
#   "fuzzaldrin": "^2.1.0"
# Commands:
#   hubot phone of <user query> - Return phone details for <user query>
#   hubot email of <user query> - Return email details for <user query>
#   hubot details of <user query> - Return all details for <user query>
# Author:
#   Angus Williams <>
{filter} = require 'fuzzaldrin'
# Define a list of users
directory = [
  {
    firstName: "John",
    lastName: "Lennon",
    fullName: "John Lennon",
    email: "",
    phone: "+44 700 700 700"
  },
  {
    firstName: "Paul",
    lastName: "McCartney",
    fullName: "Paul McCartney",
    email: "",
    phone: "+44 700 700 701"
  },
  {
    firstName: "George",
    lastName: "Harrison",
    fullName: "George Harrison",
    email: "",
    phone: "+44 700 700 703"
  },
  {
    firstName: "Ringo",
    lastName: "Starr",
    fullName: "Ringo Starr",
    email: "",
    phone: "+44 700 700 704"
  }
]

module.exports = (robot) ->
  robot.respond /phone of ([\w .\-]+)\?*$/i, (res) ->
    # Get user query from capture group and remove whitespace
    query = res.match[1].trim()
    # Fuzzy search the directory list for the query
    results = filter(directory, query, key: 'fullName')
    # Reply with results
    res.send "Found #{results.length} results for query '#{query}'"
    for person in results
      res.send "#{person.fullName}: #{person.phone}"

  robot.respond /email of ([\w .\-]+)\?*$/i, (res) ->
    # Get user query from capture group and remove whitespace
    query = res.match[1].trim()

    # Fuzzy search the directory list for the query
    results = filter(directory, query, key: 'fullName')

    # Reply with results
    res.send "Found #{results.length} results for query '#{query}'"
    for person in results
      res.send "#{person.fullName}: #{person.email}"

  robot.respond /details of ([\w .\-]+)\?*$/i, (res) ->
    # Get user query from capture group and remove whitespace
    query = res.match[1].trim()

    # Fuzzy search the directory list for the query
    results = filter(directory, query, key: 'fullName')

    # Reply with results
    res.send "Found #{results.length} results for query '#{query}'"
    for person in results
      res.send "#{person.fullName}: #{person.email}, #{person.phone}"

A closer look at the script in question.

Let’s dissect the script a little. The comments at the start of the script are reasonably important here, and follow a certain format so that information can be pulled out. The Commands section is particularly of interest:

# Commands:
#   hubot phone of <user query> - Return phone details for <user query>
#   hubot email of <user query> - Return email details for <user query>
#   hubot details of <user query> - Return all details for <user query>

This section is used by hubot’s help module. Documenting available commands here is important, as it will allow users to see the functionality provided by your script using the ‘hubot help’ command. More info about script documentation can be found here.

The next important section is further down the script, under the robot module export.

robot.respond /phone of ([\w .\-]+)\?*$/i, (res) ->
  name = res.match[1].trim()
  results = filter(directory, name, key: 'fullName')
  res.send "Found #{results.length} results for query '#{name}'"
  for person in results
    res.send "#{person.fullName}: #{person.phone}"

Here we are telling Hubot to respond to anything that matches the regex described ('/phone of ([\w .\-]+)\?*$/i'). Regex capture groups ('([\w .\-]+)') are used to pull the name from the command.

Another important thing to note is that we’re using the robot.respond invocation. This means that Hubot will only respond to commands that directly address him, i.e. ‘garry do something’. We can also use robot.hear, which will match any message sent to a particular chat room. Similarly, we are addressing everyone with the res.send method to send a reply, but we could just as easily address the user that invoked the script by using the res.reply method instead.

Let’s restart hubot and test out our script.

garry> garry phone of ringo
Found 1 results for query 'ringo'
Ringo Starr: +44 700 700 704

garry> garry details of in
Found 2 results for query 'in'
Ringo Starr:, +44 700 700 704
George Harrison:, +44 700 700 703 

garry> garry email of paul
Found 1 results for query 'paul'
Paul McCartney:

That’s all good and well, but how do we actually use this with a proper chat client?

Let’s use Slack as an example. First, you’re going to need to sign up for a Slack account if you don’t already have one. Once you’ve got a Slack team set up, you’ll need to enable the Hubot integration. You’ll find step-by-step instructions here. Once complete, you should be presented with an API token.

Let’s install the Slack adapter in the root of the Hubot repository and start Hubot with the Slack adapter, using the API token from the Slack Hubot integration setup.

$ npm install hubot-slack --save
$ env HUBOT_SLACK_TOKEN=<SLACK API TOKEN> ./bin/hubot --adapter slack
[Wed Nov 18 2015 17:31:16 GMT+0000 (GMT)] INFO Connecting...
[Wed Nov 18 2015 17:31:20 GMT+0000 (GMT)] INFO Logged in as garry of Dummy Corp, but not yet connected
[Wed Nov 18 2015 17:31:21 GMT+0000 (GMT)] INFO Slack client now connected

Now all that’s left for us to do is to talk to Garry in Slack!


As you can see, Hubot listens for both @-mentions and the bot’s name. You can find the example chatbot on the Forest Technologies GitHub account here.

Useful Links

If you’d like to find out more about us and our services, including consultation, training and our extensive experience with enterprise and open source DevOps tools, please don’t hesitate to get in touch.

Image credit: Jason Hand

Angus Williams – How ChatOps is redefining enterprise and open source DevOps
What Open Source DevOps means for the future of Enterprise Infrastructure

A change, they say, is as good as a holiday. That might have been true in simpler times, but with change being the overarching constant in the IT world today, it very rarely seems like that. Change in today’s IT world is not only ever-present, it’s something that is essential to get to grips with if you want any hope of surviving – let alone excelling – in your respective field. One of the most significant transformations happening in the IT world today is the increasing shift away from on-premise infrastructure management to hybrid and cloud solutions. While it’s by no means unequivocal among IT professionals that on-premise enterprise infrastructure is in its twilight years, it’s hard to argue with the facts: cloud and hybrid infrastructure solutions are disrupting traditional infrastructure models in enterprises. In this blog, we’ll take a look at the contributing factors to this fundamental shift, including the role of open source DevOps and the increasingly common use of virtualisation.

Where are we on a timeline of the on- versus off-premise infrastructure debate?

Although the shift away from traditional infrastructures and the increasing feasibility of cloud architecture has been a long time coming, we’ve only now reached the tipping point of enterprise adoption. Until just two years ago, C-level executives were voting pretty unanimously against the cloud’s ability to replace on-premise applications – their main reason being the security and stability benefits of in-house infrastructure. With high-profile hacks seemingly escalating in regularity and intensity each year, it’s understandable that security is a primary concern – however, no business today operates in isolation from the internet, and it’s an unfortunate truth that until we have a true solution to locking down online security, some element of risk to sensitive data is unavoidable whether it is stored on-premise or in the cloud. The question of stability has also been addressed as virtual architectures and Infrastructure as a Service (IaaS) technology have matured alongside increasing broadband capabilities around the world. The levels of stability that can be achieved at scale and across multiple geographies are far beyond what was economically achievable with an on-premise model.

Granular virtualisation is breaking down the barriers between on- and off-premise infrastructure.

Before the turn of the millennium, virtualisation was something that only massive data centres were likely to have anything to do with, but after the release of VMware and ESX, virtualisation became feasible for commercial and personal use. In 2006, the world’s largest bookseller entered the cloud market with Amazon Web Services (AWS). Just ten years later, AWS is a $7 billion business servicing 5 million customers in the UK alone. A key enabler for this explosive growth of virtualisation and cloud has been infrastructure automation. Multiplying the size of your server estate multiplies the overhead of configuration management, and the ability to provision and de-provision quickly and consistently is critical to ensuring that cost is not also multiplied unnecessarily. Today, virtualisation technology has matured to the extent that it’s possible to virtualise not only systems, but also granular processes. In fact, two of the hottest trends in technology are microservices and containerisation. It remains to be seen if containerisation will ultimately replace virtualisation, but there is a clear drive towards more granular application services and infrastructure to support them. Infrastructure automation has made it possible to provide businesses with services that would otherwise be astronomically expensive in hardware terms. Combine infrastructure as code with the surge in popularity of Software Defined Networking (SDN) and enterprise architecture of the near future will look quite different from that of today – the decoupling of network control from the hardware layer not only means less reliance on in-house hardware and fewer constraints on physical space, it also gives IT professionals an unprecedented level of control over their environment. Ultimately, this enables organisations to deliver faster without infrastructure bottlenecking the process.

Open Source DevOps tools are perfect for hacking out and experimenting with new infrastructure concepts.

DevOps is changing the face of enterprise architecture because it brings these game-changing technologies together under one roof. With open source DevOps tools, it’s unbelievably easy to create a completely new prototype or Minimum Viable Product (MVP) without disrupting the way things get done in your organisation: infrastructure automation lets you create virtualised systems that you can then tweak in whichever way you please, without fear of failure. Open source DevOps software also requires little to no capital investment, so at worst you may end up wasting a few hours of staff time. Simply put, open source DevOps software lets your entire organisation come together and work out where and how your infrastructure and processes could be improved. This means you can try out new technologies or concepts as soon as you hear about them, and instantly roll back to your stable system should anything go awry. Keeping up with the latest infrastructure technology through open source DevOps allows you to keep tabs on the latest trends, while keeping enough distance to invest only in the ones that truly benefit your organisation.

ECS Digital is a DevOps consultancy with 12 years’ experience in implementing DevOps in businesses of all kinds, all around the world. Our team has a combined wealth of knowledge on infrastructure automation and open source DevOps. To find out more about what DevOps could mean for your business, don’t hesitate to contact us.

Andy Cureton – What Open Source DevOps means for the future of Enterprise Infrastructure
Why building a POC is easy with open source DevOps tools

Working in a corporate environment with enterprise tools doesn’t often provide opportunities for innovation or experimentation. Any developer with experience in a large organisation or corporate environment knows that the amount of work that needs to get done on a daily basis and the particular processes that need to be followed don’t leave much time for experimenting with or optimising systems. However, the increasing ubiquity of open source software in corporate environments has given organisations first-hand experience of the benefits it can provide, not only to developers, but the company as a whole. As a result, many large corporates are not only advocating the use of open source software, they’re doing everything they can to leverage its potential to create new levels of efficiency and improve the performance of their staff. In this blog, we’ll take a look at what makes open source DevOps software the perfect platform for easily creating Proof of Concepts (POCs) and Minimum Viable Products (MVPs), and how some of the major figures in international business are using similar platforms to facilitate workplace innovation.

Hackdays and hackathons are becoming an intrinsic part of tech culture – and beyond.

As open source tools have become more frequently adopted in enterprises, a growing number of organisations are beginning to leverage the business benefits of letting their employees break free from the usual restrictions that come with using proprietary software and experiment on passion projects that might have little or nothing to do with the organisation itself. The scope and scale of how this is implemented can vary from place to place, from simply encouraging developers to spend time optimising code to improve processes within the business, to hosting a week-long event in which employees are given free rein to work on anything they choose. Atlassian, for example, has “ShipIt” days where employees can work on anything for 24 hours, which it describes as 20% time on steroids – 20% time being the Google initiative to encourage employees to spend 20% of their time working on what they think will most benefit Google. Gmail, for example, started life as a 20% time project!

While there’s a trend towards businesses organising their own hacking events, there are also a huge number of popular public hackathons ranging from small meetups to fully-fledged events complete with catering services and sponsors. TechCrunch’s Disrupt Hackathon, which will be taking place in London on the 5th and 6th of December, has become so popular that spectator tickets are now being sold for people who aren’t taking part, but want to experience the hackathon.

How do hackathons benefit your developers and your business?

There are many benefits to hosting a hackathon, whether you’re part of a start-up or an already-established organisation. Start-ups can leverage them to meet developers in their local community, and larger organisations can use them to scout for new talent or get help outside the organisation for innovating or improving their services. Open source DevOps tools make it easy for individuals or teams hack out POCs and MVPs by providing an automatable framework that ties every team member into the project in equal measure. On tight timeframes and even tighter budgets, any way to maximise speed and quality whilst removing errors is invaluable, and open source DevOps tools are the logical way to achieve this.

What are the big players in tech doing to embrace the hackathon trend?

Some of the biggest names in tech, including Dropbox, Twitter, Google and Facebook, periodically host hackathons and hackdays of their own. Google’s DevFest is a community-run combination between a conference and a hackathon, featuring full-day hack days as well as speakers across multiple product areas. DevFest operates on the shared idea that great things happen when developers come together, but the specifics are tailored to the local community organising each particular event, meaning no two DevFests are ever the same.

Dropbox runs an annual Hack Week, during which its 800 employees are given carte blanche to innovate and create anything at all – whether or not it’s related to their job title, or even to Dropbox itself. “We don’t actually set any restrictions,” says Max Belanger, one of the organisers behind Hack Week. “A lot of people are actually going to work on projects that are completely unrelated to Dropbox itself.”

While there’s no pressure to deliver something that will push the company forward, some employees take the opportunity to work on problems that pertain specifically to the business. Dropbox’s multi-account feature, for example, was first conceived in a Hack Week project, and went on to be integrated into the product’s core offering. But the spirit of hackathons goes beyond this – as Alicia Chen said in an article on The Verge, “Part of the spirit of Hack Week is getting out of your comfort zone, learning something new, doing something unusual.”

Hackathons aren’t only beneficial for developers – they’re a great place for organisations to source talent and build relationships

Part of the reason that hackathons are so successful is that everybody takes something away from attending. For the organisation hosting, it’s a great opportunity to source talented developers and build relationships with your local development community. For developers both inside and outside the organisation, it’s an invaluable chance to hone your skills and meet and engage with like-minded professionals who share a passion for coding and innovation. It’s also an excellent opportunity for tech companies to build a name for themselves by supporting the event through sponsorships and prizes. For example, CircleCI, an open source DevOps software provider, sponsored prizes for the recent TechCrunch Disrupt in San Francisco. The cultural component of hackathons should also not be underestimated – many companies use them as a way to advertise their internal culture and source developers and other staff who identify with the way they work and their company values.

ECS Digital has a wealth of experience in open source DevOps tools and offers a variety of services for their implementation including consultation and training.  As you would expect given the ethos of tight timelines and budgets around POCs and MVPs, we offer a variety of “quick starts” enabling organisations to get where they want to be as fast as possible.  If you’d like to find out more about us, including our comprehensive training and enablement programmes, please don’t hesitate to get in touch.

Andy Cureton – Why building a POC is easy with open source DevOps tools