DevOps Playground Meetup #2: Hands on with the ELK Stack


We’re back! Following on from the success of our first meetup, “Hands on with ChatOps”, we ran a meetup called “Hands on with ELK Stack”, presented by our very own Angus Williams. The event generated a lot of interest, with over 50 members still on the waitlist. #devopsplayground was set up to give the London #DevOps community an opportunity to go hands-on with some of the latest DevOps tools and provide an environment in which we can collectively share our experiences and knowledge.

To ensure the night ran smoothly, we provided everyone with Docker Compose and Vagrant files to get their stack up and running – as we all know, setting this up on the night would take longer than 45 minutes!

Our automated setup looked something like this:

DevOps-Playground_1.png
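For anyone recreating the setup at home, a minimal Docker Compose file for an ELK stack looks something like the sketch below. This is an illustration only, not the exact file we handed out; the image tags, port mappings and config path are assumptions.

docker-compose.yml
version: "2"
services:
  elasticsearch:
    image: elasticsearch:2.4
    ports:
      - "9200:9200"
  logstash:
    image: logstash:2.4
    # run Logstash against a config file mounted in from the host
    command: logstash -f /config/logstash.conf
    volumes:
      - ./logstash.conf:/config/logstash.conf
    links:
      - elasticsearch
  kibana:
    image: kibana:4.6
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch

With something like this in place, docker-compose up brings up Elasticsearch, Logstash and Kibana together, and Kibana is reachable on port 5601.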

Angus went on to explain some of his previous experiences with the ELK stack and demonstrated how you would set up log aggregation and display it in Kibana. Unfortunately, we didn’t have a large enough environment on hand to get some real data, so we made do by creating a script that spat out some random log entries to give us something to display.
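The script we used isn’t reproduced here, but a rough shell equivalent that emits a fake Apache-style access log line every second (random client IPs, fixed request) would be along these lines:

#!/bin/bash
# Sketch only: generate fake access-log lines so Kibana has something to show.
while true; do
  ip="$((RANDOM % 223 + 1)).$((RANDOM % 256)).$((RANDOM % 256)).$((RANDOM % 256))"
  echo "$ip - - [$(date '+%d/%b/%Y:%H:%M:%S %z')] \"GET /index.html HTTP/1.1\" 200 1234"
  sleep 1
done

Piping this into a file that Logstash tails (or pointing a Logstash file input at it) is enough to get documents flowing into Elasticsearch.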

After working through some examples together, we created a map showing the geolocation of the sources of our log data:

DevOps-Playground_3.png
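If you want to reproduce the map, the key ingredient is Logstash’s geoip filter, which adds location fields that Kibana’s tile map visualisation can plot. A minimal filter block (the clientip field name assumes Apache-style logs parsed by grok) is roughly:

logstash.conf (filter section)
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}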

DevOps-Playground_2.jpg

Dashboards were provided to all attendees, and the enthusiasm in the room was almost palpable. Nobody could wait to use this in their own environment – the creative juices (possibly helped by the beer) flowed, and everyone started discussing the different metrics and data they could gather!

Everything we ran through is on our GitHub page, and you should be able to re-run this in your own environment by clicking here.

 

We’d like to extend our thanks to everyone who came along and hope that you all enjoyed the experience. And, of course, we ended in time-honoured meetup tradition!

Dev-Ops_Playground.jpg

#devopsplayground members, please suggest topics for the next meetup. Anyone can present, and the location can be anywhere in London. Offers are greatly appreciated!

Andy Cureton
How ChatOps drives innovation, transparency & collaboration in enterprise DevOps


Since the dawn of the digital age, and even long before that, our culture has been fascinated by the prospect of being able to talk to computers. There’s no better evidence of this than in film and literature – indeed, just about every sci-fi universe is bound to feature at least one form of artificial intelligence (AI) in a central role: without KITT, Knight Rider would have just been a guy with a fancy car. Without HAL 9000, the crew of the Discovery One in 2001: A Space Odyssey might have fared dramatically better. And without R2D2’s help, the Jedi prophecy would never have been kick-started and Luke Skywalker might have lived out his days as a simple farmer on Tatooine.

In any event, a development that has been taking the DevOps world by storm in recent months is ChatOps – the practice of integrating ChatBots into a DevOps workflow. While it may still be a couple of years before your ChatBot becomes sentient, there’s a lot to be said for implementing ChatOps in your delivery pipelines. In this blog, we’ll look at how ChatOps drives innovation, collaboration and transparency in the enterprise, and how this facilitates good DevOps practice.

ChatOps puts a human face on automation.

ChatOps centres on conversation-driven automation. What this boils down to is that any command can be handled via an English-language ‘conversation’ with a ChatBot of your choice: from monitoring, to provisioning, to deploying code, to responding to security alerts and even making you coffee! There are several freely available ChatBot frameworks – the most popular being Hubot (JavaScript), Lita (Ruby), and Err (Python), all of which are open source – and each can be extended with plug-ins and scripts of your own, making it easy to customise your ChatBot to suit the purposes of your organisation, or even a particular project. Ultimately, ChatOps abstracts the complexity of the process and allows complex automation tasks to be carried out with a simple, easily typed command. The upshot of this is that a single message sent to your ChatBot can accomplish what might otherwise take a significant amount of time – and, consequently, money – to carry out. This is also a bonus for non-technical teams, giving them the ability to execute complex processes they might previously not have had the technical skills to carry out.
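As a flavour of what conversation-driven automation looks like in practice, here is a hypothetical Hubot script. The “deploy &lt;app&gt; to &lt;env&gt;” command and its behaviour are made up for illustration, but the robot.respond pattern is how real Hubot scripts hook into chat:

# Description:
#   Hypothetical example only: this deploy command does not exist out of the box.
module.exports = (robot) ->
  robot.respond /deploy (\S+) to (\S+)/i, (res) ->
    app = res.match[1]
    env = res.match[2]
    # A real script would call your CI/CD or cloud API here.
    res.send "Deploying #{app} to #{env}..."

Typing “hubot deploy website to staging” in the chat channel is then all it takes to kick the process off, in full view of the rest of the team.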

ChatOps brings everyone’s work to one central location.

With ChatOps, wasting time trying to figure out which of your co-workers ran a particular command, or whether the command was even run, is a thing of the past – with a chat client, everyone’s work exists in one central place that is visible and accessible to the whole team. This encourages collaboration among your team members, and the inherent transparency ensures that everyone is working towards the same goals. The benefits to the overall quality of work and the working environment are huge here – by bringing your entire team’s work together, there are almost limitless opportunities for cross-pollination of ideas across departments that might not happen if they worked in isolation. ChatBots also facilitate innovation in their own right: firstly, by freeing up time for your team to spend on developing new and innovative projects, and secondly, by providing a framework for innovation through plug-ins for the ChatBot itself. The only limit to how innovative you can be is how far you’re willing to go in customising your ChatBot to suit your needs.

Don’t stagnate by taking ChatOps for granted.

It’s (hopefully) pretty clear from this article that ChatOps provides great opportunities for collaboration, innovation and transparency, but taking your ChatBot for granted could have the opposite effect. Remember that behind the ChatBot are complex processes that have been automated. Encouraging all members of your team to maintain the code and scripts that are in place, as well as to develop enhancements that make new processes accessible from the chat client, will go a long way towards staving off complacency. Without this, you risk creating a sub-team of people who can only execute ChatOps commands, not create or maintain them.

At the same time, new starters in your organisation will benefit from first understanding how the nuts and bolts of your processes work before moving on to using a ChatBot to execute those processes. Once again, this comes down in large part to the culture in your workplace, but bear in mind that using ChatOps should encourage the transparency and collaboration that are key elements of a DevOps culture, which ultimately helps to deliver better software faster.

ECS Digital is a DevOps consultancy with 12 years’ experience implementing DevOps solutions for companies all around the world. If you’re interested in finding out more about our approach and the unique insights we can offer into how to transform your business with DevOps, contact us to request a free DevOps Maturity Assessment.

Image credit: www.phoenix.k12.or.us

Andy Cureton
Running Hubot in Production


In a previous blog post, we spoke about the basics of ChatOps and Hubot, and how they can be used to make your workflow more efficient. In this blog, we’ll take a slightly more in-depth look at running Hubot in production. If you’d like to know more about setting up and basic scripting of a ChatOps bot, please read that previous post first.

In this post we’ll share some of our experiences and thoughts on running Hubot in a production environment, and go through some practical examples of how to achieve this. Please note that this post is focussed primarily on Linux systems.

This guide starts with the assumption that you’ve already created a Hubot instance using Yeoman. If you’re unsure of how to do this, read the instructions here. All the files mentioned in this post can be found on GitHub here.

Version Control

Once your Hubot instance has been created, you should commit it to a version control system of your choosing. Any further changes to Hubot should be committed to, and deployed from, version control. See this link for some useful information from the Hubot documentation.
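As a rough example (the remote URL is a placeholder), getting a freshly generated instance into Git looks like this:

$ cd my-hubot
$ git init
$ git add .
$ git commit -m "Initial Hubot instance"
$ git remote add origin git@your-git-server:your-org/my-hubot.git
$ git push -u origin master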

Run Hubot as its own user

From a security standpoint, we advise that you run Hubot as its own user. In Linux, you can create a system user with the following command:

$ useradd -r hubot

Creating a system user is also good practice: system users aren’t able to log in and don’t have home directories, both of which have security benefits.

Updating Hubot

At ECS Digital, we don’t update our Hubot instances all that often, so we don’t use an automated deployment process. To update the code-base on the production Hubot server, we do the following:

hubot1.png
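For readers who can’t see the screenshot, a manual update broadly amounts to something like the following; the paths and the supervisor program name are placeholders:

$ cd /path/to/hubot
$ sudo -u hubot git pull
$ sudo -u hubot npm install
$ sudo supervisorctl restart my-hubot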

Of course, this process could be automated, but as we don’t update Hubot very often, we’re happy with this method for now. Ideally, though, we would write a script for Hubot to update itself!

Ensuring the Hubot process is run at startup (and kept running)

I’m personally a big fan of Supervisord, an excellent project that can control and monitor your processes for you.

Some of the benefits you get from Supervisor are:

  • Log handling for stderr and stdout – this includes log rotation options.
  • Automatic restarts when a process dies.
  • A web interface and XML-RPC API for controlling processes remotely.
  • Config is much easier to deal with than init or upstart scripts.

Supervisor is available as an .rpm package for Red Hat Linux variants and a .deb package for Debian Linux variants. It can also be installed via the Python pip package manager.

As we’re running Hubot on an Ubuntu 14.04 AWS instance, the supervisor package is available in the standard repos and can be installed with the following command:

$ sudo apt-get install supervisor

Supervisor can also be installed via pip, which will ensure a more up-to-date package. You may have to install Python 2.7 and pip if your distribution doesn’t come with Python installed already. You may need to run this command as root:

$ pip install supervisor

Config files for supervisor generally reside in /etc/supervisor. Here is an example config for running Hubot via supervisor:

/etc/supervisor/conf.d/my-hubot.conf
[program:my-hubot]
command=bin/hubot --adapter slack ; command to execute
directory=DIR/WHERE/HUBOT/IS ; cwd for program
; Log file handling
stdout_logfile=/var/log/%(program_name)s.log
stderr_logfile=/var/log/%(program_name)s-stderr.log
stdout_logfile_backups=10
stderr_logfile_backups=10
user=hubot ; user to run hubot as
startsecs=10
autorestart=true
; Add any environment vars needed below
environment =
    HUBOT_SLACK_TOKEN="SLACK-TOKEN-HERE",
    HUBOT_AUTH_ADMIN="AUTH,TOKENS,HERE",

As you can see, in the example above we are doing the following:

  • Defining a command
  • Defining a working directory
  • Handling logfiles for stdout and stderr output, including logfile rotation. Note the %(program_name)s Python variable expansion in the log names.
  • Telling supervisor to run the process as the hubot user.
  • Telling supervisor to restart Hubot if the process dies.
  • Defining a few environment variables to pass to the process.

Once you’ve created or updated config for a program, run the following command:

$ sudo supervisorctl update

Then run this command to check that Hubot has started:

$ sudo supervisorctl status

To restart Hubot after updating it, run the following command, replacing my-hubot with the name you’ve chosen for your program:

$ sudo supervisorctl restart my-hubot

See here for more information on supervisor config options.

For our production instance, we commit the supervisor config to the Hubot repo and then simply symlink the file into /etc/supervisor/conf.d/my-hubot.conf. That way, our supervisor config is nicely versioned and can easily be rolled back if something breaks.
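Assuming the config file sits at the root of the Hubot repo, that symlink is along these lines (paths are placeholders):

$ sudo ln -s /path/to/hubot/my-hubot.conf /etc/supervisor/conf.d/my-hubot.conf
$ sudo supervisorctl update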

Handling role-based permissions with hubot-auth

Sometimes you want to restrict certain Hubot functionality to a particular group of users. Although Hubot has no support for this by default, we can add it with the hubot-auth plugin. The hubot-auth plugin uses Hubot’s “brain”, so if you’re using this plugin, you’ll want to make sure that you’ve connected Hubot up to Redis so that the brain is persistent. Install instructions are on the GitHub page.
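If you haven’t wired up the brain yet, the usual pattern (assuming the standard hubot-auth and hubot-redis-brain npm packages, and a Redis instance reachable via REDIS_URL) is roughly:

$ npm install hubot-auth hubot-redis-brain --save
$ export REDIS_URL=redis://localhost:6379

You then add "hubot-auth" and "hubot-redis-brain" to external-scripts.json and restart Hubot.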

You may have noticed the HUBOT_AUTH_ADMIN environment variable in the supervisor configs. This defines which administrators have permission to add or remove users from roles. If you’re using Slack, you’ll need to get the userid – not the username. See here for a more detailed summary.

Once you’ve installed the plugin and started Hubot again, you’ll be able to do things like this:

hubot2.png

So, as you can see, I have the ‘admin’ role which allows me to set and remove roles from users. Next, I added myself to the role ‘new-role’. I now have two roles: admin and new-role. Slackbot has none.

To use these roles, we have to add some logic to our Hubot scripts. Here’s an example script:

# Description:
# Hubot auth example
 
module.exports = (robot) ->
  robot.respond /am [iI] authed/i, (res) ->
    user = res.envelope.user
    if robot.auth.hasRole(user, "a-role")
      res.reply "You sure are #{res.message.user.name}!"
    else if robot.auth.hasRole(user, "admin")
      res.reply "Nope, but you are an admin. Add yourself!"
    else
      res.reply "NO! Get outta here"

And here’s the script in action:

hubot3.png

You may have noticed the slight caveat here: you are going to have to retrofit authorisation logic into any script which requires some form of authentication. Unfortunately, we’ve yet to find a better solution for user authentication with Hubot.

Handling end-to-end testing of Hubot

Note: This section focuses only on using Hubot with Slack.

If you need to make sure your Hubot is up and responding with a tool like Sensu, Nagios or Icinga, you can use the following workflow:

Requirements:

The basic premise is that we create a private Slack channel containing only Hubot, Slackbot and ourselves. Next, we use the Slack remote response API to trigger Hubot using the echo command:

hubot4.png

We then access the Slack message APIs using Hubot’s API token and retrieve the last message from the API to ensure that it matches the message we sent.
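As a rough sketch of that check in shell form (the channel ID, tokens, bot name and the use of curl and jq are assumptions, not the actual Sensu plugin):

$ curl -s https://slack.com/api/chat.postMessage \
    -H "Authorization: Bearer $MONITOR_TOKEN" \
    -d channel="$TEST_CHANNEL_ID" -d text="hubot echo healthcheck"
$ sleep 5
$ curl -s "https://slack.com/api/conversations.history?channel=$TEST_CHANNEL_ID&limit=1" \
    -H "Authorization: Bearer $HUBOT_TOKEN" | jq -r '.messages[0].text'

If the last line printed is “healthcheck”, Hubot received the message and echoed it back, so the check passes.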

Once you’re happy the test is working correctly, you can leave the Slack channel to avoid being notified about the test every time it runs.

You can find a Sensu plugin in the GitHub repo for this blog. I’m not a coder by trade, so please don’t hold my terrible Ruby code against me! If you have any suggestions on how it can be improved, feel free to contact me with your ideas.

To find out more about ECS Digital, and our unique take on DevOps, check out the training courses that we offer on our website.

Angus Williams