Open source. Are you part of the community?


Open source is a type of licensing agreement – not very exciting in itself. The exciting bit is that it allows people to create and publish work that others can freely use, modify, integrate into larger projects or build upon to create new work.

In an age of trade secrets and profit-driven professions, this is a unique platform that actively promotes a global exchange of innovation. It has been specifically designed to encourage contributions so that the software doesn’t stand still. The collective goal of this barrier-free community is the advancement of creative, scientific and technological tools and applications – which for many is more important than a price tag.

Who uses open source?

Although it is most commonly associated with the software industry, professionals adopt open source licenses in many fields, including biotech, fashion, robotics and teaching. This article will focus solely on software applications.

What’s interesting is that more and more businesses are contributing their own source code to the community – Facebook, Airbnb and Cypress are leading examples. According to the 2018 Tidelift Professional Open Source Survey, 92% of projects amongst European respondents contain open source libraries. Whilst on the surface this contradicts conventional commercial instinct, businesses gain a lot by giving away a little. The benefits are vast, but we are going to focus on five:

  1. Competition:

Since the late 90’s and the advancement of the digital age, competition no longer resides simply between two rival companies. Businesses today also find themselves competing with open source software projects that are free, open to the public and constantly evolving.

Due to the current scale of open source contribution, even the giants of the tech industry struggle to devote resources or assemble teams large enough to compete with their community counterparts.

Turning to the open source community enables businesses to outsource resource-intensive projects to a bottomless sea of innovative capability. This potentially reduces cost and pressure, and speeds up the feedback loop considerably.

  2. Reputation:

In the same way the Big Bang Theory made traditional science nerds cool, the open source community can boost a business’ profile on the cool/not-cool spectrum.

By initiating an open source software project, or contributing to an existing one, businesses not only become more attractive to potential employees, they also make their mark on an additional and powerful channel popular within IT circles. If done well, this has the potential to establish, maintain or improve a brand’s image, as well as attract new business.

  3. Advancement:

Helping to advance something as big as the technology industry isn’t something to turn your nose up at. In fact, businesses revel in the idea of having their name against a leading piece of software that has the potential to make history.

But history moves fast. And building software in-house can be stifled by other business priorities, resource restrictions and competitors beating you to the finish line.

Rather than building behind closed doors and waiting until your software is perfect, opening your source code to the community at an earlier stage has two benefits:

  1. You can plant your flag earlier
  2. You invite an endless pool of innovative capability to help advance your idea at a rate unlikely to be attainable behind closed doors

It also acts as an incentive for individuals to feel part of a project that extends far beyond the business they work for.

  4. Trust:

Fake news, data breaches, shady deals – all of these have caused people to lose trust in businesses. Including open source projects in company policy encourages a business to be more transparent with its consumers. Whilst it is naive to believe a company will lay down all of its cards, some go a long way: Facebook made 15,682 open source contributions in 2016, Automattic built its business on WordPress – an open source project that currently powers around 31% of the web – and Netflix frequently open sources the tools it develops in-house.

Not only does this strengthen their brand, sharing shows the world they have nothing to hide – which is a proven way to start winning back trust.

A great example of building trust through transparency is the cryptocurrency space, where many projects, including Bitcoin, allow you to browse the project’s source – a very different approach to that of their corporate counterparts.

  5. Speed:

Many companies face the same problems, and sometimes a company is kind enough to share its solution. If a problem has already been solved elsewhere, adopting that solution delivers business value in a fraction of the time and with half the manpower – everybody wins.

Contributing to the community also allows you to ask a project’s contributors questions directly, request features or raise issues, giving you fast feedback that keeps your project moving.

How does open source work? 

Contributors create a project and solve a problem. They realise that other people might benefit from this project to solve their problems. The project is shared on an open platform such as GitHub which can be downloaded and used by other users interested in the project.

If users wish to contribute, they can do this by creating a fork (their own copy of the repository), downloading it and editing the code until they are happy with the changes. They can then open a pull request, which notifies the authors that a change is being proposed.

It is then up to the authors to review the change and decide whether they want to include it. If they do, it usually becomes part of the next version, which is released at the authors’ discretion.
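For illustration, the contributor side of that workflow on a platform like GitHub typically looks something like the sketch below (the repository URL and branch name are placeholders, not taken from any real project):

$ git clone https://github.com/your-username/some-project.git   # clone your fork of the project
$ cd some-project
$ git checkout -b my-improvement       # create a branch for the change
# ... edit the code until you are happy with it ...
$ git commit -am "Describe the change"
$ git push origin my-improvement       # push the branch to your fork
# then open a pull request against the original repository from the web interface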

The problem is, this could take some time. The author is under no obligation to release new versions or accept proposed changes. In fact, this is one of the limitations of the open source community: people will only give as much as they want to, or as much as their own projects need. Authors are not there to solve your specific problems, and often release software that focuses on their own needs rather than trying to create something overly generic.

This can be frustrating if an open source project only solves half your problem, however, the community can help bridge knowledge gaps. Users also have the option to download, build and run the project locally in the interim whilst waiting for the official new version – meaning they don’t need to wait for the software to be released with the changes they need.

How is it viable?

Whilst it doesn’t make economic sense on the surface, the community has found ways to make open source viable from both a business and an individual perspective. Some have capitalised on their projects, making basic versions available at no cost to the user but adding a price tag to premium versions or ‘add-ons’.

Other businesses or individuals actively contributing to the platform have benefited from angel investments, as well as new business after demonstrating successful projects.

It is also often a side project for businesses and individuals. Due to the legal freedom granted by open source licenses, you’re able to modify the code of the product you’re using endlessly, for free, with no risk of breaching privacy policies or user agreements. This makes it the opportune ‘playground’ for those looking to get into the industry or develop new skills. According to LinkedIn:

“We believe that open sourcing projects makes our engineers better at what they do best. Engineers grow in their craft by having their work shared with the entire community.”

Risks:

With all open platforms, there is a risk of abuse. Open source communities are no different and have certainly experienced their fair share of malicious activity. However, it is the open source approach that significantly increases the reliability of the projects available to the public.

By establishing a community who believe in the future potential of the projects produced, you immediately have a security indicator in place – many of them, in fact. And with so many eyes on a project, malicious activity is quickly spotted and remedied. This is because open source platforms embody an agile mentality applied community-wide: rather than making one big change and ensuring it is okay for the next six months, contributors and authors are interested in making changes quickly, so things get fixed and evolve just as quickly.

******

ECS Digital love to find value for our clients and give it back to the wider community, which is why we make tools available on open source platforms such as GitHub and NPM.

We will also be hosting a hands-on session and demonstration of AyeSpy – a visual regression testing tool – at an upcoming DevOps Playground on the 29th of November. Come along to learn more about what AyeSpy has to offer!

Matt Lowry

Running Hubot in Production


In a previous blog post, we spoke about the basics of ChatOps and Hubot, and how they can be used to make your workflow more efficient. In this blog, we’ll take a slightly more in-depth look at running Hubot in production. If you’d like to know more about setting up and basic scripting of a ChatOps bot, please read the previous part of this blog.

In this post we’ll share some of our experiences and thoughts on running Hubot in a production environment, and go through some practical examples of how to achieve this. Please note that this post is focussed primarily on Linux systems.

This guide starts with the assumption that you’ve already created a Hubot instance using Yeoman. If you’re unsure of how to do this, read the instructions here. All the files mentioned in this post can be found on Github here.

Version Control

Once your Hubot instance has been created, you should commit your changes to a version control system of your choosing. Any changes to Hubot should be committed and updated from version control. See this link for some useful information from the Hubot documentation.
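If your instance isn’t under version control yet, a minimal first commit looks something like this sketch (the remote URL is a placeholder):

$ cd my-hubot
$ git init
$ git add .
$ git commit -m "Initial Hubot instance"
$ git remote add origin git@github.com:your-org/my-hubot.git
$ git push -u origin master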

Run Hubot as its own user

From a security standpoint, we advise that you run Hubot as its own user. In Linux you can create a system user with the following command:

$ useradd -r hubot

Creating a system user is also good practice: system users aren’t able to log in and don’t have home directories, which has some security benefits.

Updating Hubot

At ECS Digital, we don’t update our Hubot instances all that often, so we don’t use an automated deployment process. To update the code-base on the production Hubot server, we do the following:

[Image: the manual update steps we follow on the production Hubot server]
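A rough sketch of what such a manual update might look like is below – this is an assumption based on the setup described in this post (a git checkout run under supervisor), not the exact steps from the screenshot:

$ cd /path/to/hubot                     # directory where Hubot is checked out
$ sudo -u hubot git pull                # pull the latest changes from version control
$ sudo -u hubot npm install             # install any new or updated dependencies
$ sudo supervisorctl restart my-hubot   # restart the process so the changes take effect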

Of course, this process could be automated, but as we don’t update Hubot too often, we’re happy with this method for now. Ideally, though, we would write a script for Hubot to update itself!

Ensuring the Hubot process is run at startup (and kept running)

I’m personally a big fan of Supervisord, an excellent project which can control processes for you.

Some of the benefits you get from Supervisor are:

  • Log handling for stderr and stdout – this includes log rotation options.
  • Automatic restarts when a process dies.
  • Remote web interface and XML-RPC API for remote controlling processes.
  • Config is much easier to deal with than init or upstart scripts.

Supervisor is available as an .rpm package for Redhat Linux variants and a .deb package for Debian Linux variants. It can also be installed via the Python pip package manager.

As we’re running Hubot on an Ubuntu 14.04 AWS instance, the supervisor package is available in the standard repos and can be installed with the following command:

$ sudo apt-get install supervisor

Supervisor can also be installed via pip, which will ensure a more up-to-date package. You may have to install Python 2.7 and pip if your distribution doesn’t come with Python installed already. You may need to run this command as root:

$ pip install supervisor

Config files for supervisor generally reside in /etc/supervisor. Here is an example config for running Hubot via supervisor:

/etc/supervisor/conf.d/my-hubot.conf
[program:my-hubot]
command=bin/hubot --adapter slack ; command to execute
directory=DIR/WHERE/HUBOT/IS ; cwd for program
; Log file handling
stdout_logfile=/var/log/%(program_name)s.log
stderr_logfile=/var/log/%(program_name)s-stderr.log
stdout_logfile_backups=10
stderr_logfile_backups=10
user=hubot ; user to run hubot as
startsecs=10
autorestart=true
; Add any environment vars needed below
environment =
    HUBOT_SLACK_TOKEN="SLACK-TOKEN-HERE",
    HUBOT_AUTH_ADMIN="AUTH,TOKENS,HERE"

As you can see, in the example above we are doing the following:

  • Defining a command
  • Defining a working directory
  • Handling logfiles for stdout and stderr output, including logfile rotation. Note the %(program_name)s Python variable expansion in the log names.
  • Telling supervisor to run the process as the hubot user
  • Telling supervisor to restart Hubot if the process dies
  • Defining a few environment variables to pass to the process

Once you’ve created or updated config for a program, run the following command:

$ sudo supervisorctl update

Then run this command to check that Hubot has started:

$ sudo supervisorctl status

To restart Hubot after updating it, run the following command, replacing my-hubot with the name you’ve chosen for your program:

$ sudo supervisorctl restart my-hubot

See here for more information on supervisor config options.

For our production instance, we commit the supervisor config to the Hubot repo and then simply symlink the file into /etc/supervisor/conf.d/my-hubot.conf. That way, our supervisor config is nicely versioned and can easily be rolled back if something breaks.
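For reference, that symlink step amounts to something like the following (the repository path is illustrative):

$ sudo ln -s /path/to/hubot/supervisor/my-hubot.conf /etc/supervisor/conf.d/my-hubot.conf
$ sudo supervisorctl update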

Handling role based permissions with hubot-auth

Sometimes you want to lock certain Hubot functionality to a particular group of users. Although Hubot has no support for this by default, we can add this functionality with the hubot-auth plugin. The hubot-auth plugin uses Hubot’s “brain”. If you’re using this plugin, you’re going to want to make sure that you’ve connected Hubot up to redis so the “brain” is persistent. Install instructions are on the github page.

You may have noticed the HUBOT_AUTH_ADMIN environment variable in the supervisor configs. This defines which administrators have permission to add or remove users from roles. If you’re using Slack, you’ll need to get the userid – not the username. See here for a more detailed summary.
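Setting this up follows the usual Hubot plugin pattern. A minimal sketch is below; the user ID is a made-up example, and we’re assuming the persistent brain comes from the hubot-redis-brain package:

$ npm install hubot-auth hubot-redis-brain --save
# add "hubot-auth" and "hubot-redis-brain" to external-scripts.json, then export the admin user ID(s)
$ export HUBOT_AUTH_ADMIN="U024BE7LH"
$ export REDIS_URL="redis://localhost:6379"   # point the brain at your redis instance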

Once you’ve installed the plugin and started Hubot again, you’ll be able to do things like this:

[Image: assigning and listing roles with hubot-auth in Slack]

So, as you can see, I have the ‘admin’ role which allows me to set and remove roles from users. Next, I added myself to the role ‘new-role’. I now have two roles: admin and new-role. Slackbot has none.

To use these roles, we have to create some logic when we are using Hubot scripts. Here’s an example script:

# Description:
# Hubot auth example
 
module.exports = (robot) ->
  robot.respond /am [iI] authed/i, (res) ->
    user = res.envelope.user
    if robot.auth.hasRole(user, "a-role")
      res.reply "You sure are #{res.message.user.name}!"
    else if robot.auth.hasRole(user, "admin")
      res.reply "Nope, but you are an admin. Add yourself!"
    else
      res.reply "NO! Get outta here"

And here’s the script in action:

[Image: the example auth script responding in chat]
You may have noticed the slight caveat here: you are going to have to retrofit authorisation logic into any script that requires some form of access control. Unfortunately, we’ve yet to find a better solution for user authorisation with Hubot.

Handling end-to-end testing of Hubot

Note: This section focuses only on using Hubot with Slack.

If you need to make sure your Hubot is up and responding with a tool like Sensu, Nagios or Icinga, you can use the following workflow:

Requirements:

The basic premise is that we create a private slack channel which consists of you, Hubot and Slackbot. Next, we use the Slack remote response API to trigger Hubot using the echo command:

[Image: triggering Hubot’s echo command in the private test channel]
We then access the Slack message APIs using Hubot’s API token and retrieve the last message from the API to ensure that it matches the message we sent.
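As a rough sketch of that check using the Slack Web API directly (the tokens, channel ID and bot name are placeholders, and the Sensu plugin in the repo may do this differently):

# post a known string to the private test channel, addressed to the bot
$ curl -s https://slack.com/api/chat.postMessage \
    -d token=$MONITOR_TOKEN -d channel=$TEST_CHANNEL -d as_user=true \
    -d text="garry echo healthcheck-$(date +%s)"
# wait a few seconds, then read back the most recent message using Hubot's token
# and check that it matches the string we asked the bot to echo
$ curl -s https://slack.com/api/conversations.history \
    -d token=$HUBOT_SLACK_TOKEN -d channel=$TEST_CHANNEL -d limit=1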

Once you’re happy the test is working correctly, you can leave the Slack channel to avoid being notified about the test every time it runs.

You can find a Sensu plugin in the Github repo for this blog. I’m not a coder by trade so please don’t hold my terrible ruby code against me! If you have any suggestions on how it can be improved, feel free to contact me with your ideas.

To find out more about ECS Digital, and our unique take on DevOps, check out the training courses that we offer on our website.

Angus Williams

How ChatOps is redefining enterprise and open source DevOps


The concept of ChatOps isn’t new, by any means. People have been using chatbots on IRC and other messaging channels for a long time. The term ChatOps was only coined more recently, by GitHub, and it is often placed under the DevOps umbrella. This isn’t surprising, since example use cases and stories often describe deploying an entire app to production by simply sending a message to a bot. It really can be as easy as “@mybot deploy myapp to production”.

So, what is ChatOps all about then? In this blog, we’ll take a look at some of its defining features, and how it can be an essential and valuable component of a DevOps implementation.

What is ChatOps, and how does it work?

Essentially, ChatOps begins with you registering a scriptable bot in your chat channel – be it Slack, Hipchat or whichever chat service you prefer – and then asking it to perform routine tasks for you. There are several ChatOps frameworks available, the most notable being Hubot from the folks at GitHub. Hubot is written in Node.js, with scripts written in either CoffeeScript or JavaScript. The other two notable chatbot frameworks are Err, which is written and scripted in Python, and Lita, which is written and scripted in Ruby. For the purposes of this blog, we’re going to focus on Hubot.

What makes ChatOps worthwhile? 

Some of the benefits of ChatOps include:

  • Automating common tasks

Chatbots are excellent for automating common tasks that are hard to trigger automatically or can’t be run by a scheduler. Basically, any tasks that need some human consideration to begin execution are perfectly suited to Chatbot automation.

  • Shared history and operational visibility

A shared history means that anyone in the chat channel can see what others have already asked the bot, which reduces the chance of time wasted on repeat work.

A look at ChatOps in practice

Let’s take a look at an example chatbot. Installation is pretty straightforward: the only dependencies are Node.js and yeoman, a scaffolding tool. You can find the official install instructions here. Install the bot with the default campfire adapter; any other adapters you want to use (e.g. Slack or Hipchat) can be installed later.
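For reference, generating a bot with yeoman is just a few commands (the directory name is a placeholder):

$ npm install -g yo generator-hubot   # install yeoman and the hubot generator
$ mkdir my-bot && cd my-bot
$ yo hubot                            # answer the prompts and accept the default campfire adapter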

To give him a bit of personality, I called my bot Garry. Let’s start Garry up in shell mode. You might see some heroku or redis warnings – they’re safe to ignore for now. The robot has a “brain”, which is essentially a persistent store for certain details backed by redis. It isn’t essential in order for the robot to operate.

$ bin/hubot

In shell mode we get a prompt to talk to Garry directly; typing 'garry help' will trigger Hubot to list all the commands it knows about. Hubot comes with a bunch of built-in plugins, but none of them do anything particularly useful and they mostly serve as examples. You can turn them off by removing them from the file 'external-scripts.json'.

garry> garry help
garry adapter - Reply with the adapter
garry animate me <query> - The same thing as `image me`, except adds a few parameters to try to return an animated GIF instead.
garry echo <text> - Reply back with <text>
...

Now, let’s run some of the included example scripts:

garry> garry ping
PONG

garry> garry echo I am a chatbot!
I am a chatbot!

Chatbots allow custom scripts for total customisation.

Hubot has a large base of custom scripts for performing certain actions, and you can view them here or search through npm (npm search hubot-scripts <query>). For now, let’s create our own custom script. Hubot scripts can be written in either pure JavaScript or CoffeeScript.

Let’s create a fictional company directory so we can ask hubot to give us details on particular people.  I’ll place my company directory script in scripts/company-directory.coffee. I’m defining the users as an array of objects for this example, but you can easily replace this with some logic to query a user directory or database as necessary.

I’m using a Node.js module from the atom text editor (also by Github) called fuzzaldrin to take care of the search logic.

scripts/company-directory.coffee

# Description:
#   Lookup user info from company directory
#
# Dependencies:
#   "fuzzaldrin": "^2.1.0"
#
# Commands:
#   hubot phone of <user query> - Return phone details for <user query>
#   hubot email of <user query> - Return email details for <user query>
#   hubot details of <user query> - Return all details for <user query>
#
# Author:
#   Angus Williams <angus@forest-technologies.co.uk>
{filter} = require 'fuzzaldrin'
 
# Define a list of users
directory = [
  {
    firstName: "John",
    lastName: "Lennon",
    fullName: "John Lennon",
    email: "johnl@example.com",
    phone: "+44 700 700 700"
  },
  {
    firstName: "Paul",
    lastName: "McCartney",
    fullName: "Paul McCartney",
    email: "paulm@example.com",
    phone: "+44 700 700 701"
  },
  {
    firstName: "George",
    lastName: "Harrison",
    fullName: "George Harrison",
    email: "georgeh@example.com",
    phone: "+44 700 700 703"
  },
  {
    firstName: "Ringo",
    lastName: "Starr",
    fullName: "Ringo Starr",
    email: "ringos@example.com",
    phone: "+44 700 700 704"
  }
]

module.exports = (robot) ->
  robot.respond /phone of ([\w .\-]+)\?*$/i, (res) ->
    # Get user query from capture group and remove whitespace
    query = res.match[1].trim()
    
    # Fuzzy search the directory list for the query
    results = filter(directory, query, key: 'fullName')
    
    # Reply with results
    res.send "Found #{results.length} results for query '#{query}'"
    for person in results
      res.send "#{person.fullName}: #{person.phone}"

  robot.respond /email of ([\w .\-]+)\?*$/i, (res) ->
    # Get user query from capture group and remove whitespace
    query = res.match[1].trim()


    # Fuzzy search the directory list for the query
    results = filter(directory, query, key: 'fullName')


    # Reply with results
    res.send "Found #{results.length} results for query '#{query}'"
    for person in results
      res.send "#{person.fullName}: #{person.email}"


  robot.respond /details of ([\w .\-]+)\?*$/i, (res) ->
    # Get user query from capture group and remove whitespace
    query = res.match[1].trim()


    # Fuzzy search the directory list for the query
    results = filter(directory, query, key: 'fullName')


    # Reply with results
    res.send "Found #{results.length} results for query '#{query}'"
    for person in results
      res.send "#{person.fullName}: #{person.email}, #{person.phone}"

A closer look at the script in question.

Let’s dissect the script a little. The comments at the start of the script are reasonably important here, and follow a certain format so that information can be pulled out of them. The Commands section is of particular interest:

# Commands:
#   hubot phone of <user query> - Return phone details for <user query>
#   hubot email of <user query> - Return email details for <user query>
#   hubot details of <user query> - Return all details for <user query>

This section is used by hubot’s help module. Documenting available commands here is important, as it will allow users to see the functionality provided by your script using the ‘hubot help’ command. More info about script documentation can be found here.

The next important section is further down the script, under the robot module export.

robot.respond /phone of ([\w .\-]+)\?*$/i, (res) ->
  name = res.match[1].trim()
  results = filter(directory, name, key: 'fullName')
  res.send "Found #{results.length} results for query '#{name}'"
  for person in results
    res.send "#{person.fullName}: #{person.phone}"

Here we are telling hubot to respond to anything that matches the regex described ('/phone of ([\w .\-]+)\?*$/i'). Regex capture groups ('([\w .\-]+)') are used to pull the name from the command.

Another important thing to note is that we’re using the robot.respond invocation. This means that hubot will only respond to commands that directly address him, i.e. 'garry do something'. We can also use hear, which will match any messages sent to a particular chat room. Similarly, we are addressing everyone with the res.send method to send a reply, but we could just as easily address only the user that invoked the script by using the res.reply method instead.

Let’s restart hubot and test out our script.

garry> garry phone of ringo
Found 1 results for query 'ringo'
Ringo Starr: +44 700 700 704


garry> garry details of in
Found 2 results for query 'in'
Ringo Starr: ringos@example.com, +44 700 700 704
George Harrison: georgeh@example.com, +44 700 700 703 

garry> garry email of paul
Found 1 results for query 'paul'
Paul McCartney: paulm@example.com

That’s all well and good, but how do we actually use this with a proper chat client?

Let’s use Slack as an example. First, you’ll need to sign up for a Slack account if you don’t already have one. Once you’ve got a Slack team set up, you’ll need to enable the Hubot integration. You’ll find step-by-step instructions here. Once complete, you should be presented with an API token.

Let’s install the Slack adapter in the root of the hubot repository and start hubot with the slack adapter, using the API token from the Slack hubot integration setup.

$ npm install hubot-slack --save
$ env HUBOT_SLACK_TOKEN=<SLACK API TOKEN> ./bin/hubot --adapter slack
[Wed Nov 18 2015 17:31:16 GMT+0000 (GMT)] INFO Connecting...
[Wed Nov 18 2015 17:31:20 GMT+0000 (GMT)] INFO Logged in as garry of Dummy Corp, but not yet connected
[Wed Nov 18 2015 17:31:21 GMT+0000 (GMT)] INFO Slack client now connected

Now all that’s left for us to do is to talk to Garry in Slack!

[Image: talking to Garry in Slack]

As you can see, hubot listens for both @ mentions and the bot’s name. You can find the example chatbot on the Forest Technologies GitHub account here.

Useful Links

https://github.com/github/hubot/blob/master/docs/scripting.md

https://hubot.github.com/docs/

https://github.com/slackhq/hubot-slack

If you’d like to find out more about us and our services, including consultation, training and our extensive experience with enterprise and open source DevOps tools, please don’t hesitate to get in touch.

Image credit: Jason Hand

Angus Williams

What Open Source DevOps means for the future of Enterprise Infrastructure


A change, they say, is as good as a holiday. That might have been true in simpler times, but with change being the overarching constant in the IT world today, it very rarely seems like that. Change in today’s IT world is not only ever-present, it’s something that is essential to get to grips with if you want any hope of surviving – let alone excelling – in your respective field. One of the most significant transformations happening in the IT world today is the increasing shift away from on-premise infrastructure management to hybrid and cloud solutions. While it’s by no means unequivocal among IT professionals that on-premise enterprise infrastructure is in its twilight years, it’s hard to argue with the facts: cloud and hybrid infrastructure solutions are disrupting traditional infrastructure models in enterprises. In this blog, we’ll take a look at the contributing factors to this fundamental shift, including the role of open source DevOps and the increasingly common use of virtualisation.

Where are we on a timeline of the on- versus off-premise infrastructure debate?

Although the shift away from traditional infrastructures and the increasing feasibility of cloud architecture have been a long time coming, we’ve only now reached the tipping point of enterprise adoption. Until just two years ago, C-level executives were voting pretty unanimously against the cloud’s ability to replace on-premise applications – their main reason being the security and stability benefits of in-house infrastructure. With high-profile hacks seemingly escalating in regularity and intensity with each year, it’s understandable that security is a primary concern – however, no business today operates in isolation from the internet, and it’s an unfortunate truth that until such a time as we have a true solution to locking down online security, some element of risk to sensitive data is unavoidable whether it is stored on-premise or in the cloud. The question of stability has also been addressed as virtual architectures and Infrastructure as a Service (IaaS) technology have matured alongside increasing broadband capabilities around the world. The levels of stability that can be achieved at scale and across multiple geographies are far beyond what was economically achievable with an on-premise model.

Granular virtualisation is breaking down the barriers between on- and off-premise infrastructure.

Before the turn of the millennium, virtualisation was something that only massive data-centres were likely to have anything to do with, but after the release of VMware ESX, virtualisation became feasible for commercial and personal use. In 2006, the world’s largest bookseller entered the cloud market with Amazon Web Services (AWS). Just ten years later, AWS is a $7 billion business servicing 5 million customers in the UK alone. A key enabler for this explosive growth of virtualisation and cloud has been infrastructure automation. Multiplying the size of your server estate multiplies the overhead of configuration management, and the ability to provision and de-provision quickly and consistently is critical to ensuring that cost is not also multiplied unnecessarily. Today, virtualisation technology has matured to the extent that it’s possible to not only virtualise systems, but also granular processes. In fact, two of the hottest trends in technology are microservices and containerisation. It remains to be seen if containerisation will ultimately replace virtualisation, but there is a clear drive towards more granular application services and infrastructure to support them. Infrastructure automation has made it possible to provide businesses with services that would otherwise be astronomically expensive in hardware terms. Combine infrastructure as code with the surge in popularity of Software Defined Networking (SDN) and enterprise architecture of the near future will look quite different from that of today – the decoupling of network control from the hardware layer not only means less reliance on in-house hardware and fewer constraints on physical space, it also gives IT professionals an unprecedented level of control over their environment. Ultimately, this enables organisations to deliver faster without infrastructure bottlenecking the process.

Open Source DevOps tools are perfect for hacking out and experimenting with new infrastructure concepts.

DevOps is changing the face of enterprise architecture because it brings these game-changing technologies together under one roof. With open source DevOps tools, it’s unbelievably easy to create a completely new prototype or Minimum Viable Product (MVP) without disrupting the way things get done in your organisation, by using infrastructure automation to create virtualised systems that you can then tweak in whichever way you please without fear of failure. Open source DevOps software also requires little to no capital investment, so at worst you may end up wasting a few hours of staff time. Simply put, open source DevOps software lets your entire organisation come together and work out where and how your infrastructure and processes could be improved. This means you can have the benefit of trying out new technologies or concepts as soon as you hear about them, and instantly roll back to your stable system should anything go awry. Keeping up with the latest infrastructure technology through open source DevOps allows you to keep tabs on the latest trends, while keeping enough distance to invest only in the ones that truly benefit your organisation.

ECS Digital is a DevOps consultancy with 12 years’ experience in implementing DevOps in businesses of all kinds, all around the world. Our team has a combined wealth of knowledge on infrastructure automation and open source DevOps. To find out more about what DevOps could mean for your business, don’t hesitate to contact us.

Andy Cureton

Why building a POC is easy with open source DevOps tools


Working in a corporate environment with enterprise tools doesn’t often provide opportunities for innovation or experimentation. Any developer with experience in a large organisation or corporate environment knows that the amount of work that needs to get done on a daily basis and the particular processes that need to be followed don’t leave much time for experimenting with or optimising systems. However, the increasing ubiquity of open source software in corporate environments has given organisations first-hand experience of the benefits it can provide, not only to developers, but the company as a whole. As a result, many large corporates are not only advocating the use of open source software, they’re doing everything they can to leverage its potential to create new levels of efficiency and improve the performance of their staff. In this blog, we’ll take a look at what makes open source DevOps software the perfect platform for easily creating Proof of Concepts (POCs) and Minimum Viable Products (MVPs), and how some of the major figures in international business are using similar platforms to facilitate workplace innovation.

Hackdays and hackathons are becoming an intrinsic part of tech culture – and beyond.

As open source tools have become more frequently adopted in enterprises, a growing number of organisations are beginning to leverage the business benefits of letting their employees break free from the usual restrictions that come with using proprietary software and experiment on passion projects that might have little or nothing to do with the organisation itself. The scope and scale of how this is implemented can vary from place to place, from simply encouraging developers to spend time optimising code to improve processes within the business, to hosting a week-long event in which employees are given free rein to work on anything they choose. Atlassian, for example, have “ShipIt” days, where employees can work on anything for 24 hours – something they describe as 20% time on steroids. 20% time is the Google initiative that encourages employees to spend 20% of their time working on whatever they think will most benefit Google; Gmail, for example, started life as a 20% time project!

While there’s a trend towards businesses organising their own hacking events, there are also a huge number of popular public hackathons ranging from small meetups to fully-fledged events complete with catering services and sponsors. TechCrunch’s Disrupt Hackathon, which will be taking place in London on the 5th and 6th of December, has become so popular that spectator tickets are now being sold for people who aren’t taking part, but want to experience the hackathon.

How do hackathons benefit your developers and your business?

There are many benefits to hosting a hackathon, whether you’re part of a start-up or an already-established organisation. Start-ups can leverage them to meet developers in their local community, and larger organisations can use them to scout for new talent or get help outside the organisation for innovating or improving their services. Open source DevOps tools make it easy for individuals or teams to hack out POCs and MVPs by providing an automatable framework that ties every team member into the project in equal measure. On tight timeframes and even tighter budgets, any way to maximise speed and quality whilst removing errors is invaluable, and open source DevOps tools are the logical way to achieve this.

What are the big players in tech doing to embrace the hackathon trend?

Some of the biggest names in tech, including Dropbox, Twitter, Google and Facebook, periodically host hackathons and hackdays of their own. Google’s DevFest is a community-run combination of a conference and a hackathon, featuring full-day hack days as well as speakers across multiple product areas. DevFest operates on the shared idea that great things happen when developers come together, but the specifics are tailored to the local community organising each particular event, meaning no two DevFests are ever the same.

Dropbox runs an annual Hack Week, during which its 800 employees are given carte blanche to innovate and create anything at all – whether or not it’s related to their job title, or even to Dropbox itself. “We don’t actually set any restrictions,” says Max Belanger, one of the organisers behind Hack Week. “A lot of people are actually going to work on projects that are completely unrelated to Dropbox itself.”

While there’s no pressure to deliver something that will push the company forward, some employees take the opportunity to work on problems that pertain specifically to the business. Dropbox’s multi-account feature, for example, was first conceived in a Hack Week project, and went on to be integrated into the product’s core offering. But the spirit of hackathons goes beyond this – as Alicia Chen said in an article on The Verge, “Part of the spirit of Hack Week is getting out of your comfort zone, learning something new, doing something unusual.”

Hackathons aren’t only beneficial for developers – they’re a great place for organisations to source talent and build relationships

Part of the reason that hackathons are so successful is that everybody takes something away from attending. For the organisation hosting, it’s a great opportunity to source talented developers and build relationships with your local development community. For developers both inside and outside the organisation, it’s an invaluable chance to hone your skills and meet and engage with like-minded professionals who share a passion for coding and innovation. It’s also an excellent opportunity for tech companies to build a name for themselves by supporting the event through sponsorships and prizes. For example, CircleCI, an open source DevOps software provider, sponsored prizes for the recent TechCrunch Disrupt in San Francisco. The cultural component of hackathons should also not be underestimated – many companies use them as a way to advertise their internal culture and source developers and other staff who identify with the way they work and their company values.

ECS Digital has a wealth of experience in open source DevOps tools and offers a variety of services for their implementation including consultation and training.  As you would expect given the ethos of tight timelines and budgets around POCs and MVPs, we offer a variety of “quick starts” enabling organisations to get where they want to be as fast as possible.  If you’d like to find out more about us, including our comprehensive training and enablement programmes, please don’t hesitate to get in touch.

Andy Cureton