30 DevOps Tools You Could Be Using

As a DevOps consultancy, we spend a lot of time thinking about and evaluating DevOps tools.

There are a number of different tools that form part of our DevOps workbench, and we base our evaluation on years of experience in IT, working with complex, heterogeneous technology stacks.

We’ve found that DevOps tooling has become a key part of our tech and operations. We take a lot of time to select and improve our DevOps toolset. The vast majority of tools that we use are open source. By sharing the tools that we use and like, we hope to start a discussion within the DevOps community about what further improvements can be made.

We hope that you enjoy browsing through the list below.

You may already be well acquainted with some of the tools, and some may be newer to you.

1. Puppet

What is it? Puppet is designed to provide a standard way of delivering and operating software, no matter where it runs. Puppet has been around since 2005 and has a large and mature ecosystem, which has evolved into one of the best-of-breed infrastructure automation tools that can scale. It is backed and supported by a highly active open source community.

Why use Puppet? Planning ahead and using config management tools like Puppet can cut down on the amount of time you spend repeating basic tasks, and help ensure that your configurations are consistent, accurate and repeatable across your infrastructure. Puppet is one of the most mature tools in this area and has an excellent support backbone.

What are the problems with Puppet? The learning curve is quite steep for those who are unfamiliar with Puppet, and the Ruby-based DSL may seem unfamiliar to users who have no development experience.
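
To give a flavour of that DSL, here is a hedged, minimal sketch (the package and service names are just examples): the standalone puppet apply command can converge a single resource without needing a master.

    puppet apply -e 'package { "ntp": ensure => installed }'   # declare the desired state; Puppet works out the steps
    puppet resource service ntp                                 # inspect the current state of a resource as Puppet sees it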

2. Vagrant

What is it? Vagrant – another tool from HashiCorp – provides easy-to-configure, reproducible and portable work environments that are built on top of industry-standard technology. Vagrant helps enforce a single consistent workflow to maximise the flexibility of you and your team.

Why use Vagrant? Vagrant provides operations engineers with a disposable environment and consistent workflow for developing and testing infrastructure management scripts. Vagrant can be downloaded and installed within minutes on Mac OS X, Linux and Windows.

Vagrant allows you to create a single file for your project, to define the kind of machine you want to create, the software that needs to be installed, and the way you want to access the machine.
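For a flavour of the workflow (the box name here is purely illustrative), a disposable environment is only a few commands away:

    vagrant init ubuntu/trusty64   # writes a Vagrantfile into the current directory
    vagrant up                     # downloads the box if needed and boots the VM
    vagrant ssh                    # log in to the running machine
    vagrant destroy -f             # throw the environment away when you are done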

Are there any problems with Vagrant? Vagrant has been criticised for being painfully slow at times.

3. ELK Stack

What is ELK? The ELK stack actually refers to three technologies – Elasticsearch, Logstash and Kibana. Elasticsearch is a NoSQL database that is based on the Lucene search engine, Logstash is a log pipeline tool that accepts inputs from different sources and exports the data to various targets, and Kibana is a visualisation layer for Elasticsearch. And they work very well together.

What are its use cases? Together they’re often used for log analysis in IT environments (although the ELK stack is also used for BI, security, compliance and analytics).
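
As a small sketch of the kind of ad-hoc query this enables (assuming a default local Elasticsearch on port 9200, Logstash-style indices, and a field name that depends entirely on your Logstash filters):

    curl -s 'http://localhost:9200/logstash-*/_search?q=response:500&size=5&pretty'   # find recent HTTP 500s across all shipped logs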

Why is it popular? ELK is incredibly popular. The stack is downloaded 500,000 times every month. This makes it the world’s most popular log management platform. SaaS and web startups in particular are not overly keen to stump up for enterprise products such as Splunk. In fact, there’s an increasing amount of discussion as to whether open source products are overtaking Splunk, with many seeing 2014 as a tipping point.

4. Consul.io

What is Consul.io? Consul is a tool for discovering and configuring services in your infrastructure. It can be used to present nodes and services in a flexible interface, allowing clients to have an up-to-date view of the infrastructure they’re part of.

Why use Consul.io? Consul.io comes with a number of features for providing consistent information about your infrastructure. Consul provides service and node discovery, tagging, health checks, consensus-based election routines, key-value storage and more. Consul allows you to build awareness into your applications and services.
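
A minimal local sketch of that service discovery in action (the service name "web" is an assumption, and dev mode is not production-grade):

    consul agent -dev &                        # single-node development mode
    consul members                             # list the nodes in the cluster
    dig @127.0.0.1 -p 8600 web.service.consul  # resolve a registered service over Consul's DNS interface
    curl -s http://127.0.0.1:8500/v1/catalog/services   # the same catalogue over the HTTP API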

Anything else I should know? HashiCorp has a really strong reputation within the developer community for releasing strong documentation with its products, and Consul.io is no exception. Consul is distributed, highly available, and datacentre-aware.

5. Jenkins

What is Jenkins? Everyone loves Jenkins! Jenkins is an open source CI tool, written in Java. CI is the practice of automatically running tests on a non-developer machine every time someone pushes code into a source repo. For many teams, Jenkins is the default choice for Continuous Integration.

Why would I want to use Jenkins? Jenkins helps automate much of the work of frequent builds, allows you to detect and resolve issues quickly, and reduces integration costs because serious integration issues become less likely.

Any problems with Jenkins? Jenkins configuration can be tricky. The Jenkins UI has evolved over many years without a guiding vision – and it has arguably become more complex. It has been compared unfavourably to more modern tools such as Travis CI (which of course isn’t open source).

6. Docker

What is it? There was a time last year when it seemed that all anyone wanted to talk about was Docker. Docker provides a portable application environment, enabling you to package an application and its dependencies into a single unit that can be moved between development, test and production.

Should I use it? Depending on who you ask, Docker is either the next big thing in software development or a case of the emperor’s new clothes. Docker has some neat features, including Docker Hub, a public repository of Docker images, and docker-compose, a tool for managing multiple containers as a unit on a single machine.
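
A quick, illustrative session (the image and port mapping are arbitrary examples) shows why developers like it:

    docker run -d -p 8080:80 --name web nginx   # pull nginx from Docker Hub and run it, mapping host port 8080 to container port 80
    docker ps                                   # list running containers
    docker logs web                             # inspect the container's output
    docker rm -f web                            # stop and remove it again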

It’s been suggested that Docker can reduce server footprint by packing containers onto physical hardware without the overhead of running a separate guest kernel per workload – but equally, Docker’s security story is a hot topic. Docker’s UI also continues to improve – Docker has just released a new Mac and Windows client.

What’s the verdict? Docker can be a very useful technology – particularly in development and QA – but you should think carefully about whether you need or want to run it in production. Not everyone needs to operate at Google scale.

7. Ansible

What is it? Ansible is a free platform for configuring and managing servers. It combines multi-node software deployment, task execution and configuration management.

Why use Ansible? Configuration management tools such as Ansible are designed to automate away much of the work of configuring machines.

Manually configuring machines via SSH, running the commands you need to install your application stack, editing config files, and copying application code can be tedious work, and can lead to each machine becoming its own ‘special snowflake’ depending on who configured it. The problem compounds when you are setting up tens, hundreds or thousands of machines.
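
A hedged sketch of what that automation looks like in practice (the inventory file "hosts" and the group name "web" are assumptions):

    ansible web -i hosts -m ping                                   # check SSH connectivity to every host in the "web" group
    ansible web -i hosts -b -m apt -a "name=nginx state=present"   # ad-hoc package install with privilege escalation
    ansible-playbook -i hosts site.yml                             # run a full playbook (site.yml assumed to exist)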

What are the problems with using Ansible? Ansible is considered to have a fairly weak UI. Tools such as Ansible Tower exist, but many consider them a work in progress, and using Ansible Tower drives up the TCO of using Ansible.

Ansible also has no notion of state – it simply executes a series of tasks, stopping when it finishes, fails or encounters an error. Ansible has also been around for less time than Chef and Puppet, so it has a smaller developer community than some of its more mature competitors.

8. Saltstack

What is it? Saltstack, much like Ansible, is a configuration management tool and remote execution engine. It is primarily designed to allow the management of infrastructure in a predictable and repeatable way. Saltstack was designed to manage large infrastructures with thousands of servers – the kind seen at LinkedIn, Wikipedia and Google.

What are the benefits of using Salt? Because Salt uses the ZeroMQ framework and serialises messages using msgpack, it achieves significant speed and bandwidth gains over traditional transport layers, and can push far more data through a given pipe far more quickly. Getting set up is very simple, and someone new to configuration management can be productive before lunchtime.
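
For illustration only (run from the Salt master once minion keys have been accepted; the target patterns are examples):

    salt '*' test.ping              # check that every minion responds
    salt 'web*' pkg.install nginx   # install a package on minions matching "web*"
    salt '*' state.apply            # apply the states defined in your state tree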

Any problems with using Saltstack? Saltstack is considered to have a weaker web UI and shallower reporting capabilities than some of its more mature competitors. Some of these issues have been addressed in Saltstack Enterprise, but this may be out of budget for you.

9. Kubernetes

What is it? Kubernetes is an open-source container cluster manager by Google. It aims to provide a platform for automating deployment, scaling and operations of container clusters across hosts.

Why should I use it? Kubernetes is a system for managing containerised applications across a cluster of nodes. It was designed to address the disconnect between the way modern, clustered infrastructure works and the assumptions that many applications make about their environments.

On the one hand, users shouldn’t have to care too much about where work is scheduled: work is presented at the service level, and can be carried out by any of the member nodes. On the other hand, placement does matter, because a sysadmin will want to make sure that not all instances of a service end up on the same host. Kubernetes is designed to make these scheduling decisions easier.
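
A rough kubectl sketch of those scheduling decisions being handled for you (the image and replica counts are arbitrary, and flags vary between kubectl versions):

    kubectl run nginx --image=nginx --replicas=3    # ask for three nginx pods; the scheduler places them
    kubectl get pods -o wide                        # see which node each pod actually landed on
    kubectl scale deployment nginx --replicas=5     # let the scheduler place two more
    kubectl expose deployment nginx --port=80       # put a service in front of the pods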

10. Collectd

What is it? Collectd is a daemon that collects statistics on system performance, and provides mechanisms to store the values in different ways.

Why should I use collectd? Collectd helps you collect and visualise data about your servers, and thus make informed decisions. It’s useful for working with tools like Graphite, which can render the data that collectd collects.

Collectd is an incredibly simple tool, and requires very few resources. It can even run on a Raspberry Pi! It’s also popular because of its pervasive modularity. It’s written in C, contains almost no code that would be specific to any operating system, and will therefore run on any Unix-like operating system.

11. Git

What is Git? Git is the most widely used version control system in the world today.

An incredibly large number of products use Git for version control: from hobbyist projects to large enterprises, from commercial products to open source. Git is designed with speed, flexibility and security in mind, and is an example of a distributed version control system.

Should I use Git? Git is an incredibly impressive tool – combining speed, functionality, performance and security. When compared side by side to other SCM tools, Git often comes out ahead. Git has also emerged as a de facto standard, meaning that vast numbers of developers already have Git experience.

Why shouldn’t I use Git? Git has an initially steep learning curve. Its terminology can seem a little arcane to novices: ‘revert’, for instance, means something quite different in Git than it does in Subversion or CVS. However, Git rewards that initial investment with increased development speed once mastered.
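
To make the ‘revert’ point concrete (the commit hash below is a placeholder), Git undoes a change by recording a new commit rather than rewriting history:

    git log --oneline -3    # find the commit you want to undo
    git revert a1b2c3d      # creates a new commit that reverses a1b2c3d
    git push origin master  # the undo is shared as ordinary history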

12. Rudder

What is Rudder? Rudder is (yet another!) open source audit and configuration management tool that’s designed to help automate system config across large IT infrastructures.

What are the benefits of Rudder? Rudder allows users (even non-experts) to define parameters in a single console, and check that IT services are installed, running and in good health. Rudder is useful for keeping configuration drift low. Managers are also able to access compliance reports and audit logs. Rudder is built in Scala.

13. Gradle

What is it? Gradle is an open source build automation tool that builds upon the concepts of Apache Ant and Apache Maven and introduces a Groovy-based DSL instead of the XML form used by Maven.

Why use Gradle instead of Ant or Maven? For many years, build tools were simply about compiling and packaging software. Today, projects tend to involve larger and more complex software stacks, have multiple programming languages, and incorporate many different testing strategies. It’s now really important (particularly with the rise of Agile) that build tools support early integration of code as well as easy delivery to test and prod.

Gradle allows you to map out your problem domain using a domain-specific language, which is implemented in Groovy rather than XML. Writing build logic in Groovy rather than XML cuts down on the size of a build script, and is far more readable.

14. Chef

What is Chef? Chef is a config management tool designed to automate machine setup on physical servers, VMs and in the cloud. Many companies use Chef software to manage and control their infrastructure – including Facebook, Etsy and Indiegogo. Chef is designed to define Infrastructure as Code.

What is infrastructure as code? Infrastructure as Code means that, rather than manually changing and setting up machines, the machine setup is defined in a Chef recipe. Leveraging Chef allows you to easily recreate your environment in a predictable manner by automating the entire system configuration.

What are the next steps for Chef? Chef has released Chef Delivery, a tool for creating automated workflows around enterprise software development and establishing a pipeline from creation to production. Chef Delivery establishes a pipeline that every new piece of software should go through in order to prepare it for production use. Chef Delivery works in a similar way to Jenkins, but offers greater reporting and auditing capabilities.

15. Cobbler

What is it? Cobbler is a Linux provisioning server that facilitates a network-based system installation of multiple OSes from a central point using services such as DHCP, TFTP and DNS.

Cobbler can be configured for PXE booting, reinstallations and virtualised guests using Xen, KVM and VMware. Cobbler also comes with a lightweight configuration management system, as well as support for integrating with Puppet.

16. SimianArmy

What is it? SimianArmy is a suite of tools designed by Netflix to support cloud operations. Chaos Monkey is part of SimianArmy, and is described as a ‘resiliency tool that helps applications tolerate random instance failures.’

What does it do? The SimianArmy suite of tools are designed to help engineers test the reliability, resiliency and recoverability of their cloud services running on AWS.

Netflix began the process of creating the SimianArmy suite of tools soon after they moved to AWS. Each ‘monkey’ is designed to help Netflix make its service less fragile, and better able to support continuous service.

The SimianArmy includes:

  • Chaos Monkey – randomly shuts down virtual machines (VMs) to ensure that small disruptions will not affect the overall service.
  • Latency Monkey – simulates a degradation of service and checks to make sure that upstream services react appropriately.
  • Conformity Monkey – detects instances that aren’t coded to best-practices and shuts them down, giving the service owner the opportunity to re-launch them properly.
  • Security Monkey – searches out security weaknesses, and ends the offending instances. It also ensures that SSL and DRM certificates are not expired or close to expiration.
  • Doctor Monkey – performs health checks on each instance and monitors other external signs of process health such as CPU and memory usage.
  • Janitor Monkey – searches for unused resources and discards them.

Why use SimianArmy? SimianArmy is designed to make cloud services less fragile and more capable of supporting continuous service when parts of those services run into problems. By deliberately introducing failures, potential problems can be detected and addressed early.

17. AWS

What is it? AWS is a secure cloud services platform, which offers compute, database storage, content delivery and other functionality to help businesses scale and grow.

Why use AWS? EC2 is the most popular AWS service, and provides a very easy way for DevOps teams to run tests. Whenever you need one, you can spin up an EC2 instance from a machine image and have it up and running almost immediately.

EC2 is also great for scaling out systems. You can set up bundles of servers for different services, and when load increases, scripts can be configured to spin up additional servers. You can also handle this automatically through Amazon’s auto-scaling.
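
As a hedged sketch of how scriptable this is with the AWS CLI (every value here, from the AMI ID to the key and security group names, is hypothetical):

    aws ec2 run-instances --image-id ami-12345678 --count 2 \
        --instance-type t2.micro --key-name my-key --security-groups my-sg
    aws ec2 describe-instances --query 'Reservations[].Instances[].PublicDnsName'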

What are the downsides of AWS? The main downside of AWS is that all of your servers are virtual. There are options available on AWS for single tenant access, and different instance types exist, but performance will vary and never be as stable as physical infrastructure.

If you don’t need elasticity, EC2 can also be expensive at on-demand rates.

18. CoreOS

What is it? CoreOS is a Linux distribution that is designed specifically to solve the problem of making large, scalable deployments on varied infrastructure easy to manage. It maintains a lightweight host system, and uses containers to provide isolation.

Why use CoreOS? CoreOS is a barebones Linux distro. It’s known for having a very small footprint, built for “automated updates” and geared specifically for clustering.

If you’ve installed CoreOS on disk, it updates itself using two system partitions: one ‘known good’ partition that the system booted from, and a second partition that updates are downloaded to. CoreOS then automatically reboots and switches to the updated partition.

CoreOS gives you a stack of systemd, etcd, Fleet, Docker and rkt with very little else. It’s useful for spinning up a large cluster where everything is going to run in Docker containers.

What are the alternatives? Snappy Ubuntu and Project Atomic offer similar solutions.

19. Grafana

What is Grafana? Grafana is a neat open source dashboard tool. Grafana is useful because it displays various metrics from Graphite through a web browser.

What are the advantages of Grafana? Grafana is very simple to set up and maintain, and displays metrics in a simple, Kibana-like display style. In 2015, Grafana also released a SaaS component, Grafana.net.

You might wonder how Grafana differs from the ELK stack. While ELK is about log analytics, Grafana is more about time-series monitoring.

Grafana helps you maximise the power and ease of use of your existing time-series store, so you can focus on building nice looking and informative dashboards. It also lets you define generic dashboards through variables that can be used in metrics queries. This allows you to reuse the same dashboards for different servers, apps and experiments.

20. Chocolatey

What is Chocolatey? Chocolatey is apt-get for Windows. Once installed, you can install Windows applications quickly and easily using the command line. You could install Git, 7-Zip, Ruby, or even Microsoft Office! The catalogue is now incredibly complete – you really can install a wide array of apps using Chocolatey.

Why should I use Chocolatey? Because manual installs are slow and inefficient. Chocolatey promises that you can install a program (including dependencies, such as the .NET framework) without user intervention.
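
For example (the package names are illustrative; run from an elevated prompt):

    choco install git 7zip googlechrome -y   # -y answers the confirmation prompts automatically
    choco upgrade all -y                     # keep everything up to date later on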

You could use Chocolatey on a new PC to run a single command and have a fully functioning dev environment downloaded and installed within a few hours. It’s really cool.

21. Zookeeper

What is it? Zookeeper is a centralised service for maintaining configuration information, naming, providing distributed synchronisation, and providing group services. All of these services are used in one form or another by distributed applications.

Why use Zookeeper? Zookeeper is a co-ordination system for maintaining distributed services. It’s best to see Zookeeper as a giant properties file for different processes, telling them which services are available and where they are located. This post from the Engineering team at Pinterest outlines some possible use cases for Zookeeper.
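
A tiny sketch using the CLI that ships with ZooKeeper (the znode paths and values are invented for illustration):

    zkCli.sh -server 127.0.0.1:2181
    create /services ""                    # parent znode for our registry
    create /services/db "10.0.0.5:3306"    # publish where the database lives
    get /services/db                       # any client can now read it
    ls /services                           # list everything registered under /services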

Where can I read more? Aside from Zookeeper’s documentation, which is pretty good, chapter 14 of “Hadoop: The Definitive Guide” devotes around 35 pages to describing in some detail what Zookeeper does.

22. GitHub

What is GitHub? GitHub is a web-based repository hosting service. It provides distributed revision control and source code management functionality.

At the heart of GitHub is Git, the version control system designed and developed by Linus Torvalds. Git, like any other version control system, is designed to track, manage and store revisions of projects.

GitHub is a centralised hosting service for Git repositories, which adds a web-based graphical user interface and several collaboration features, such as wikis and basic task management tools.

One of GitHub’s coolest features is “forking” – copying a repo from one user’s account to another. This allows you to take a project that you don’t have write access to, and modify it under your own account. If you make changes, you can send a notification called a “pull request” to the original owner. The user can then merge your changes with the original repo.
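
The usual fork-and-pull-request flow looks something like this from the command line (the repository and branch names are placeholders):

    git clone git@github.com:yourname/project.git                  # clone your fork
    cd project
    git remote add upstream git@github.com:original/project.git    # keep a link to the original repo
    git checkout -b my-fix                                         # do your work on a branch
    git push origin my-fix                                         # push to your fork, then open the pull request on github.com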

23. Drone

What is it? Drone is a continuous integration platform, based on Docker and written in Go. Drone uses Docker to run tests, and works with GitHub, GitLab and Bitbucket.

Why use Drone? The use case for Drone is much the same as for any other continuous integration solution. CI is the practice of integrating code into a shared code base frequently; because you end up building and testing your code more often, the development process is sped up. Drone automates this building and testing.

How does it work? Drone pulls code from a Git repository, and then runs scripts that you define. Drone allows you to run any test suite, and will report back to you via email or indicate the status with a badge on your profile. Because Drone is integrated with Docker, it can support a huge number of languages including PHP, Go, Ruby and Python, to name just a few.

24. PagerDuty

What is it? PagerDuty is an alarm aggregation and dispatching system that is used predominantly by support and sysadmin teams.

How does it work? PagerDuty allows support teams to pull all of their incident reporting tools into a single place, and receive an alert when an incident occurs. Before PagerDuty came along, companies used to cobble together their own incident management solutions. PagerDuty is designed to plug into whatever monitoring systems a team is already using, and manage incident reporting from one place.

Anything else? PagerDuty provides detailed metrics on response and resolution times too.

25. Dokku

What is it? Dokku is a mini-Heroku, running on Docker.

Why should I use it? If you’re already deploying apps the Heroku way, but don’t like the way that Heroku is getting more expensive for hobbyists, running Dokku on a provider such as DigitalOcean could be a great solution.

Being able to push to a Git remote and have your site deployed immediately is a huge boon. Here’s a tutorial for getting it up and running.
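
Once it is running, a minimal sketch of the deploy workflow (assuming Dokku is installed on a host you control, dokku.example.com, and an app has been created there with dokku apps:create myapp) looks like:

    git remote add dokku dokku@dokku.example.com:myapp
    git push dokku master    # Dokku builds and deploys the app, Heroku-style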

26. OpenStack

What is it? OpenStack is free and open source software for cloud computing, which is mostly deployed as Infrastructure as a Service.

What are the aims of OpenStack? OpenStack is designed to help businesses build Amazon-like cloud services in their own data centres.

OpenStack is a cloud operating system designed to control large pools of compute, storage and networking resources throughout a datacentre, managed through a dashboard that gives administrators control while also empowering users to provision resources themselves.

27. Sublime Text

What is it? Sublime Text is a cross-platform source code editor with a Python API. It supports many different programming and markup languages, and has extensive syntax highlighting functionality.

What’s good about it? Sublime Text is feature-rich, it’s stable, and it’s being continuously developed. It is also built from the ground up to be extremely customisable (with a great plugin architecture, too).

28. Nagios

What is it? Nagios is an open source tool for monitoring systems, networks and infrastructure. Nagios provides alerting and monitoring services for servers, switches, applications and services.

Why use Nagios? Nagios’ main strengths are that it is open source, relatively robust and reliable, and highly configurable. It has an active development community, and runs on many different kinds of operating system. You can use Nagios to monitor services such as DHCP, DNS, FTP, SSH, Telnet, HTTP, NTP, POP3, IMAP, SMTP and more. It can also be used to monitor database servers such as MySQL, Postgres, Oracle and SQL Server.

Has it had any criticism? Nagios has been criticised as lacking scalability and usability. However, Nagios is stable and its limitations and problems are well-known and understood. And certainly some, including Etsy, are happy to see Nagios live on a little longer.

29. Spinnaker

What is it? Spinnaker is an open-source, multi-cloud CD platform for releasing software changes with high velocity and confidence.

What’s it designed to do? Spinnaker was designed by Netflix as the successor to its “Asgard” project. Spinnaker is designed to allow companies to hook into and deploy assets across more than one cloud provider at the same time.

What’s good about it? It’s battle-tested on Netflix’s infrastructure, and allows you to define pipelines that begin with a deployable asset (say a Docker image or a jar file) and end with a deployment. Spinnaker works out of the box, and engineers can build and re-use pipelines across different workflows.

30. Flynn

What is it? Flynn is one of the most popular open source Docker-based PaaS solutions. Flynn aims to provide a single platform that Ops can provide to developers to power production, testing and development, freeing developers to focus on their applications.

Why should you use Flynn? Flynn is an open source PaaS built from pluggable components that you can mix and match however you want. Out of the box, it works in a very similar way to Heroku, but you are able to replace pieces and put whatever you need into Flynn.

Is Flynn production-ready? The Flynn team correctly point out that “production ready” means different things to different people. As with many of the tools in this list, the best way to find out whether it’s a fit for you is to try it!

If you’re interested in learning more about DevOps or specific DevOps tools, why not take a look at our Training pages?

We offer regular Introduction to DevOps courses, and have a number of upcoming Jenkins training courses.

New features in Puppet Enterprise 2017.3

Today at PuppetConf, Puppet announced the release of Puppet Enterprise 2017.3.

The last release, Puppet Enterprise 2017.2, gave users the ability to:

  • Deploy using the Console, with visual tools
  • Scale up brokers, deploying them into Compile Masters
  • Inspect software packages to know what you are running, identify security issues, and see which packages are managed and which are unmanaged
  • Run PE in the cloud with marketplace images

What’s new for Puppet Enterprise 2017.3?

Here is a summary of Puppet Enterprise 2017.3, released today:

  • Puppet Pipelines, which lets you ship software faster to any endpoint, built on Distelli, which Puppet acquired earlier this year. These automated deployment pipelines are technology agnostic, and will integrate easily with the technologies you use.
  • Inspect known vulnerabilities affecting packages that you are using
  • Puppet has switched to two releases per year for its Enterprise offering, so this is the last release of the year; the next release will arrive in the first half of next year. Standard versions will be supported for 9 months, while LTS versions get 18 months of support plus 6 more months of extended support
  • Puppet Tasks, which you can read more about in this blog post
  • Improved Marketplace images
  • Helm and Kubernetes support
  • Enhancements to the language and supported modules
  • A new Visual Studio Code plugin
  • Of course, the release of the Puppet 5 Platform
  • Puppet Discovery, which you can also read more about in this blog post
  • Improvements to Puppet on Windows

Read more about Puppet Tasks and Puppet Discovery in our two related blog posts.

Puppet Feature highlight: Puppet Tasks using Puppet Bolt

Today at PuppetConf, Puppet announced a new feature for Puppet Enterprise 2017.3: Puppet Tasks.

Using Bolt, an agentless open source task runner, you will now have access to Puppet Enterprise Task Management in the new version of Puppet Enterprise.

Puppet Bolt allows you to run ad-hoc commands without the need to install any agent, simply using WinRM or SSH.
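As a hedged illustration of what an ad-hoc run looks like (the node names are placeholders, and flags vary between Bolt releases):

    bolt command run 'uptime' --nodes web1.example.com,db1.example.com --user admin   # run a shell command over SSH/WinRM
    bolt script run ./cleanup.sh --nodes web1.example.com                             # push and execute a local script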

Puppet Enterprise Task Management allows you to use Puppet Bolt with the benefits of PE: automatic scaling, RBAC, auditing, and so on. It is the easiest way to run tasks across tens of thousands of servers.

You can read more about Puppet Tasks here.

New features in Puppet Enterprise 2016.4

Above: Sanjay Mirchandani, Puppet CEO, during the opening keynote for PuppetConf 2016.

This year’s PuppetConf is currently underway. 

As Puppet partners, we currently have a team in San Diego for PuppetConf 2016: an opportunity to meet fellow Puppet users and explore shared problems, solutions and experiences.

We’re bringing you updates as and when they happen.

What’s new for Puppet Enterprise 2016.4?

The latest version of Puppet Enterprise will bring these improvements to the table.

• Puppet now natively supports building Docker containers automatically, on top of being able to install and manage containers, using Docker, Kubernetes, Mesos, etc.

• The Puppet Orchestrator now allows you to target the specific servers or groups of servers on which new configuration will be deployed, using PQL (Puppet Query Language).

• Puppet Enterprise now makes the distinction between intended changes (e.g. a change in a Puppet manifest) and corrective changes (e.g. a change made by another user that Puppet corrects).

• Self service will be made easy with a plugin for vRealize Automation next month.

• Puppet and CloudBees have been working together to roll out a Jenkins integration for Puppet Enterprise. This means that Puppet can now be integrated into your CD pipeline in Jenkins.

• New native CLI tools for Windows and Mac mean you no longer have to log in to a server to direct changes to your infrastructure.

• You can now hide or redact sensitive configuration data contained in Hiera from PuppetDB, logs and change reports.

• The Microsoft Azure module has been improved to support more resources that can be provisioned, and will be released next month.

 

You can watch the Product Announcement for yourself, here. 

How to set-up a simple web development environment (web & database server) with Puppet

In this step-by-step guide, we will see how to set-up a Puppet Master in Amazon Web Services, and how to use it to create two other AWS instances.

We will then use Puppet to configure these two instances: one will be a MySQL database server and the other an Apache web server.

Summary

  1. AWS Setup
  2. Set up the Master
    1. Create the AWS Master Instance
    2. Install Puppet Enterprise
  3. Configure the Agent Nodes
    1. Launch the agent nodes with Puppet
    2. Configure Apache and MySQL using Roles and Profiles
      1. Create the Database and the Webserver Roles
      2. Create the Apache and MySQL Profiles
    3. Classify our Nodes
    4. Manually run Puppet on each Agent Node
  4. Related Articles

AWS Setup

Note that in this Guide, we use the eu-central-1 / Frankfurt zone.

If you intend to use a different zone, you will have to change the ami-id in the appropriate places in the scripts.

  1. Login to your AWS EC2 Console.
  2. Select the zone mentioned above.
  3. Create an AWS Access Key and a Secret Key in Security Credentials > Users > Your User Name > Create Access Key, and keep them handy so you can refer to them later.
  4. Go to the EC2 console.
  5. Create a Key Pair in Network & Security > Key Pairs > Create Key Pairs called Webdev-forest, and save the .pem file to an accessible location. (You will need this in order to access the Master.)

Set up the Master

Create the AWS Master Instance

  1. In order to create the Master Instance, select EC2 Console > Instances > Launch Instance, and configure it as follows:
    1. Choose the Ubuntu Server 14.04 LTS (HVM), SSD Volume Type AMI.
    2. Choose the t2.large type.
    3. Use the default instance details settings.
    4. Use the default storage, 20 GB SSD.
    5. Give it a recognizable name, e.g. master_of_puppets.
    6. Create a security group with ports 22 for SSH only from your IP, and 3000, 8140, 443, 61613, 8142 for puppet services from anywhere.
    7. Review and launch.
    8. Use the keypair that you just created.
    9. Launch!
    10. Wait until the instance has finished initializing.

Install Puppet Enterprise

  1. Now, using the key created before and the public hostname of your instance (which you can find in the EC2 description of your instance under the Public DNS section):
    1. chmod 400 Webdev-forest.pem
    2. ssh -i Webdev-forest.pem ubuntu@[public hostname]
    3. accept the connection 
  2. Become root
    1. sudo su 
  3. Edit your /etc/hosts and add the following line at the top:
    1. vim /etc/hosts
    2. "127.0.0.1   localhost master.puppet.vm master puppet" 
  4. Change the hostname to “master.puppet.vm”
    1. hostname master.puppet.vm 

  5. Download the pe master installer
    1. wget -O puppet-installer.tar.gz "https://pm.puppetlabs.com/cgi-bin/download.cgi?dist=ubuntu&rel=14.04&arch=amd64&ver=latest"
      
      
  6. Unpack the installer
    1. tar -xf puppet-installer.tar.gz
  7. Install puppet master
    1.  ./puppet-enterprise-<version>-ubuntu-14.04-amd64/puppet-enterprise-installer
    2. Select the [1] option to perform a guided installation
    3. Copy the public hostname of your ec2 instance, and go to https://<public-hostname>:3000
    4. There will be an error displayed by your browser; add an exception in Firefox, or click on Advanced and then Proceed in Chrome. For a more detailed guide to accessing the console, go to https://docs.puppetlabs.com/pe/latest/console_accessing.html.
    5. Click on Let’s get started!
    6. Select a monolithic installation
    7. Type in the Puppet master FQDN: master.puppet.vm
    8. Type in the Puppet master DNS aliases: puppet
    9. Type in a Console Administrator password. Later on you will use it to log in as the admin user.
    10. Click on Submit and then Continue
    11. Now the Puppet installer will run some checks before the installation, and will probably show some warnings, which can be skipped.
    12. Click Deploy Now
    13. This step will take around 10 minutes, which is normal. You will then see a screen indicating that all went well.
  8. access the console at https://<public-hostname>
    1. The user is “admin” and the password is the one that you chose in the step before.
    2. You will then see the console.

The puppet master is now all set, so let’s take care of the agents.

Configure the Agent Nodes

Launch the agent nodes with Puppet

  1. On the master, create a new directory called create_instances in root’s home directory.
    1. mkdir ~/create_instances 
  2. Create a new file create.pp that will create the instances
    1. vim ~/create_instances/create.pp
    2. Paste the following code:
      $pe_master_hostname = $facts['ec2_metadata']['hostname']   # Get the hostname of the master
      $pe_master_ip       = $facts['ec2_metadata']['local-ipv4'] # Get the ip of the master
      $pe_master_fqdn     = $::fqdn                              # Get the master's fqdn
      # Set the default for the security groups
      Ec2_securitygroup {
        region => 'eu-central-1', # Replace by the region in which your puppet master is
        ensure => present,
        vpc    => 'My VPC', # Replace by the name of your VPC
      }
      
      # Set the default for the instances
      Ec2_instance {
        region        => 'eu-central-1', # Replace by the region in which your puppet master is
        key_name      => 'Webdev-forest', # Replace by the name of your key if you chose something else
        ensure        => 'running',
        image_id      => 'ami-87564feb', # ubuntu-trusty-14.04-amd64-server-20160114.5 (ami-87564feb)
        instance_type => 't2.micro',
        tags          => {
          'OS'    => 'Ubuntu Server 14.04 LTS',
          'Owner' => 'Michel Lebeau' # Replace by your name
        },
        subnet        => 'My Subnet', # Replace by the name of your Subnet
      }
      
      # Set up the security group for the webserver
      ec2_securitygroup { 'web-sg':
        description => 'Security group for web servers',
        ingress     => [{ 
          # Open the port 22 to be able to SSH into, replace by your.ip/32 to secure it better 
          protocol => 'tcp',
          port     => 22,
          cidr     => '0.0.0.0/0'
        },{ 
          # Open the port 80 for HTTP
          protocol => 'tcp',
          port     => 80,
          cidr     => '0.0.0.0/0'
          },
        ],
      }
      
      # Set up the security group for the database server
      ec2_securitygroup { 'db-sg':
        description => 'Security group for database servers',
        ingress     => [{ 
          # Open the port 22 to be able to SSH into, replace by your.ip/32 to secure it better 
          protocol => 'tcp',
          port     => 22,
          cidr     => '0.0.0.0/0'
        },{ 
          # Open the port 3306 to be able to access mysql
          protocol => 'tcp',
          port     => 3306,
          cidr     => '0.0.0.0/0'
          },
        ],
      }
       
      # Set up the instances, assign the security groups and provide user data that will be executed at the end of the initialization 
      ec2_instance { 'webserver':
        security_groups => ['web-sg'],
        user_data       => template('/root/create_instances/templates/webserver.sh.erb'),
      }
      ec2_instance { 'dbserver':
        security_groups => ['db-sg'],
        user_data       => template('/root/create_instances/templates/dbserver.sh.erb'),
      }

      You can find the VPC and subnet names in the VPC section of AWS. Please note that Puppet expects the name of the VPC and subnet; the ID will not work.

    3. If you are using a different region than eu-central-1, change the region and the image_id accordingly.
  3. Create 2 templates
    1. Create a directory called “templates” inside the ~/create_instances directory
      1. mkdir ~/create_instances/templates
    2. Create the webserver template
      1. vim ~/create_instances/templates/webserver.sh.erb
        #!/bin/bash
        PE_MASTER='<%= @pe_master_hostname %>'
        echo "<%= @pe_master_ip %> <%= @pe_master_fqdn %>" >> /etc/hosts
        # Download the installation script from the master and execute it
        curl -sk https://$PE_MASTER:8140/packages/current/install.bash | /bin/bash -s agent:certname=webserver
    3. Create the dbserver template
      1. vim ~/create_instances/templates/dbserver.sh.erb 
        #!/bin/bash
        PE_MASTER='<%= @pe_master_hostname %>'
        echo "<%= @pe_master_ip %> <%= @pe_master_fqdn %>" >> /etc/hosts
        # Download the installation script from the master and execute it
        curl -sk https://$PE_MASTER:8140/packages/current/install.bash | /bin/bash -s agent:certname=dbserver
    4. Now let’s create the instances:
      1. Install the retries gem and the Amazon AWS Ruby SDK gem
        1. /opt/puppetlabs/puppet/bin/gem install aws-sdk-core retries
      2. Configure your AWS credentials; here is a very small guide on where to find them: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html
        1. mkdir ~/.aws/
        2. vim ~/.aws/credentials
          [default]
          aws_access_key_id =          # Paste here your Access Key ID
          aws_secret_access_key =   # Paste here your Secret Access Key ID
          region =                                # Specify your region, optional
      3. install puppet’s AWS module
        1. puppet module install puppetlabs-aws
      4. finally apply the create script
        1. puppet apply /root/create_instances/create.pp
          [root@master ~]# puppet apply /root/create_instances/create.pp
          Notice: Compiled catalog for master.puppet.vm in environment production in 0.11 seconds
          Notice: /Stage[main]/Main/Ec2_instance[webserver]/ensure: changed absent to running
          Notice: /Stage[main]/Main/Ec2_instance[dbserver]/ensure: changed absent to running
          Notice: Applied catalog in 25.15 seconds
      5. Wait for the instances to be started and initialized. Once this process is finished, Puppet will run and you will have to accept their certificates before they can communicate with the master.
      6. In the Puppet Enterprise Console, go to Nodes > Unsigned certificates

         

      7. Accept all so the nodes will be able to get their latest configuration from the master.

Configure Apache and MySQL using Roles and Profiles

Now, we have two running Puppet Agent nodes communicating with our Puppet Enterprise Master. Only a few steps more and we will enjoy our new website!

Create the Database and the Webserver Roles

The Roles define the business logic of our applications, and are composed of one or more Profiles.

  1. In the master, navigate to the production environment:
    1. cd /etc/puppetlabs/code/environments/production/ 
  2. create the modules/roles/manifests directory
    1. mkdir -p modules/roles/manifests 
  3. create the dbserver role
    1. vim modules/roles/manifests/dbserver.pp
      # Role for a Database Server
      class roles::dbserver {
        # Include the mysql profile
        include profiles::mysql
      }
      
  4. create the webserver role
    1. vim modules/roles/manifests/webserver.pp
      # Role for a Web Server
      class roles::webserver {
        # Include the apache profile
        include profiles::apache
      }

Create the Apache and MySQL Profiles

 

Now, we will create our Profiles, which define the application stack for Apache and MySQL.

  1. create the modules/profiles/manifests directory
    1. mkdir -p modules/profiles/manifests 
  2. create the apache profile
    1. vim modules/profiles/manifests/apache.pp
      # Install and configure an Apache server
      class profiles::apache {
        # Install Apache and configure it
        class { 'apache':
          mpm_module => 'prefork',
          docroot    => '/var/www',
        }
        # Install the PHP mod
        include apache::mod::php
      
        # Install php5-mysql for PDO mysql in PHP
        package { 'php5-mysql':
          ensure => installed,
        }
      
        # Get the index.php file from the master and place it in the document root
        file { '/var/www/index.php':
          ensure => file,
          source => 'puppet:///modules/profiles/index.php',
          owner  => 'root',
          group  => 'root',
          mode   => '0755',
        }
      
        # Declare the exported resource
        @@host { 'webserver':
          ip           => $::ipaddress,
          host_aliases => [$::hostname, $::fqdn],
        }
      
        # Collect the exported resources
        Host <<||>>
      }
  3. create the mysql profile
    1. vim modules/profiles/manifests/mysql.pp
      # Install and configure a MySQL server
      class profiles::mysql {
      
        # Install MySQL Server and configure it
        class {'mysql::server':
          root_password           => 'p4ssw0rd',
          remove_default_accounts => true,
          restart                 => true,
          override_options        => {
            mysqld => {
              bind_address            => '0.0.0.0',
              'lower_case_table_names' => 1,
            }
          }
        }
        # Copy the sql script from the puppet master to the /tmp directory
        file { 'mysql_populate':
          ensure => file,
          path   => '/tmp/populate.sql',
          source => 'puppet:///modules/profiles/populate.sql',
        } ->
        # Only once the file has been copied, use it to populate a new database
        mysql::db { 'cats':
          user     => 'forest',
          password => 'p4ssw0rd2',
          grant    => ['SELECT', 'UPDATE', 'INSERT', 'DELETE'],
          host     => '%', # You can replace by 'webserver' to make it more secure,
          # but you might have to flush your hosts in mysql for it
          # to be taken into account
          sql      => '/tmp/populate.sql',
        }
      
        # Declare the exported resources
        @@host { $::hostname:
          ip           => $::ipaddress,
          host_aliases => [$::fqdn, 'database'] ,
        }
      
        # Collect the exported resources
        Host <<||>>
      }
  4. Create the files that will be used to pre-populate the MySQL database with some sample data, and the webpage that will consume that information
    1. mkdir modules/profiles/files
    2. vim modules/profiles/files/populate.sql

      1. USE `cats`;
        
        CREATE TABLE `family` (
          `id` mediumint(8) unsigned NOT NULL auto_increment,
          `Name` varchar(255) default NULL,
          `Age` mediumint default NULL,
          PRIMARY KEY (`id`)
        ) AUTO_INCREMENT=1;
        
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Hasad",6),("Uma",5),("Breanna",17),("Macaulay",14),("Colton",11),("Serina",16),("Emery",13),("Christian",7),("Vladimir",16),("Wang",13);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Hermione",12),("Yoshio",9),("Hilel",10),("Autumn",6),("Solomon",7),("Briar",6),("Armand",9),("Alyssa",1),("Shelby",1),("Yasir",15);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Wallace",1),("Yoshio",5),("Pascale",6),("Dalton",17),("Trevor",9),("Joan",10),("Zephr",14),("Neville",3),("Nicole",4),("Halee",14);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Wayne",15),("Maile",8),("Alfonso",9),("Neve",6),("Heidi",16),("Mona",11),("Mollie",16),("Audra",16),("Karyn",12),("Acton",17);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Xyla",1),("Cole",6),("Blossom",9),("Sybill",4),("Lavinia",4),("Keely",14),("Gwendolyn",15),("Trevor",10),("Acton",12),("Christine",10);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Stone",17),("Erich",12),("Elijah",10),("Emerson",14),("Rafael",8),("Scott",17),("Olympia",13),("Nehru",14),("Casey",8),("Michael",3);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Montana",8),("Heidi",11),("Edward",13),("Xenos",1),("Venus",9),("Malik",5),("Madeline",2),("Sacha",8),("Whitney",13),("Eagan",8);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Lewis",2),("Guinevere",17),("Oliver",6),("Jana",7),("Rachel",2),("Ariel",7),("Pamela",6),("Medge",11),("Clare",10),("Meghan",8);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Stone",10),("Chase",4),("Vladimir",17),("Grace",11),("Damon",15),("Ferdinand",11),("Veronica",14),("Wesley",13),("Zelda",15),("Eugenia",6);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Carlos",9),("Cherokee",14),("Theodore",3),("Tanisha",11),("Grant",7),("Xyla",6),("Austin",11),("Madison",4),("Kasper",7),("Andrew",10);
        
    3. vim modules/profiles/files/index.php
      <?php
      echo "<h1>Our small cat family</h1>";
      echo "<table style='border: solid 1px black;'>";
      echo "<tr><th>Id</th><th>Name</th><th>Age</th></tr>";
      
      class TableRows extends RecursiveIteratorIterator {
          function __construct($it) {
              parent::__construct($it, self::LEAVES_ONLY);
          }
      
          function current() {
              return "<td style='width:150px;border:1px solid black;'>" . parent::current(). "</td>";
          }
      
          function beginChildren() {
              echo "<tr>";
          }
      
          function endChildren() {
              echo "</tr>" . "\n";
          }
      }
      
      $host = "database";
      $port = "3306";
      $username = "forest";
      $password = "p4ssw0rd2";
      $dbname = "cats";
      
      try {
          $conn = new PDO("mysql:host=$host;port=$port;dbname=$dbname", $username, $password);
          $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
          $stmt = $conn->prepare("SELECT id, Name, Age FROM family");
          $stmt->execute();
      
          // set the resulting array to associative
          $result = $stmt->setFetchMode(PDO::FETCH_ASSOC);
          foreach(new TableRows(new RecursiveArrayIterator($stmt->fetchAll())) as $k=>$v) {
              echo $v;
          }
      }
      catch(PDOException $e) {
          echo "Error: " . $e->getMessage();
      }
      $conn = null;
      echo "</table>";
      ?>
  5. install the apache and mysql modules
    1. puppet module install puppetlabs-apache
    2. puppet module install puppetlabs-mysql

 

Classify our Nodes

  1. Edit the manifest/site.pp
    1. vim manifests/site.pp
      node 'dbserver'{
        include roles::dbserver
      }
      
      node 'webserver'{
        include roles::webserver
      }
      
      node default {
      }

 

Manually run Puppet on each Agent Node

Puppet can be run in various ways: from the CLI, using MCollective, or via the Web Console, for example. In this case we are going to use MCollective:

root@master:~# su - peadmin

peadmin@master:~$ mco puppet runonce -v -I webserver -I dbserver 

 * [ ============================================================> ] 2 / 2


webserver                               : OK
    {:summary=>      "Started a Puppet run using the '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --color=false --show_diff --verbose --splay --splaylimit 120' command",     :initiated_at=>1471353250}

dbserver                                : OK
    {:summary=>      "Started a Puppet run using the '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --color=false --show_diff --verbose --splay --splaylimit 120' command",     :initiated_at=>1471353250}



---- rpc stats ----
           Nodes: 2 / 2
     Pass / Fail: 2 / 0
      Start Time: 2016-08-16 13:14:11 +0000
  Discovery Time: 0.00ms
      Agent Time: 142.88ms
      Total Time: 142.88ms
peadmin@master:~$

To check whether Puppet ran successfully on the nodes, and to see the changes that were applied to them, log in to the Web Console and go to Configuration > Overview.

 


Now paste the public address of your webserver into your favourite browser and voilà, you are done! Note that if you get an Error: SQLSTATE[HY000] [2005] Unknown MySQL server host ‘database’ (2), you should run Puppet with mco one more time, as the exported resources haven’t been collected yet. This happens if the dbserver’s IP has not been exported by the time the webserver collects its resources. Running Puppet again will collect it.

Please note that if you terminate an AWS instance and start another with the create.pp script, it will have the same certname as the one that was terminated, but its IP will differ. In order for Puppet to run correctly in this case, execute the following on the master:

puppet cert clean <certname>
where <certname> is either dbserver or webserver.

Related articles

  1. https://forge.puppetlabs.com/puppetlabs/aws
  2. https://forge.puppetlabs.com/puppetlabs/mysql
  3. https://forge.puppetlabs.com/puppetlabs/apache
  4. https://docs.puppetlabs.com/puppet/latest/reference/modules_fundamentals.html

Puppet Enterprise is one of the leading continuous delivery technologies, building on its heritage in infrastructure automation with the addition of Puppet Application Orchestration. Forest Technologies are proud partners of Puppet Labs and experts in delivering rapid value to our customers’ digital transformation initiatives using Puppet Enterprise.

The Puppet State of DevOps Report 2016: What’s new?

If you’re aware of what’s been going on in the DevOps space, chances are you’ll know that Puppet have just released this year’s State of DevOps Report.

There’s been a lot going on in the past year, and some of the statistics that we’ve all been faithfully reciting have changed (for the better), as high-performing organisations continue to reap the benefits of adopting DevOps. We’ve also learnt some new things.

If you’ve been busy, chances are you haven’t had a chance to fully read the report (it is a whole 18 pages longer this year!).

Here’s what we learned this year:

High-performing organisations continue to decisively outperform their lower-performing peers in terms of throughput. 

What’s up? Last year, high-performers deployed 30 times more frequently than low-performers, with 200 times faster lead times. This year, they’re deploying an incredible 200 times more frequently, with 2,555 times faster lead times.

Anything down? According to last year’s report, high-performers were beating low-performers with recovery times that were 186 times faster. This year, the high-performers still come out ahead, but the low-performers have caught up significantly: mean time to recovery for high-performers is now 24 times faster than for low-performers.

What’s new? This year’s report chooses to focus not on change success rate, but on change failure rate, which is three times lower for high-performers in 2016. What was the reason for this? Have low-performers caught up with the high-performers? Or are we now more interested in avoiding issues in the first place? That is, after all, kind of the whole point of DevOps!

High performers have better employee loyalty, as measured by employee Net Promoter Score (eNPS).

This is completely new for the 2016 report. Whilst last year’s report looked into employee burnout, this year’s goes a step further to uncover the happiness of DevOps employees, measured by their loyalty to their organisations.

Apparently, employees in high-performing organisations are 2.2 times more likely to recommend their organisation to a friend as a great place to work, and 1.8 times more likely to recommend their team to a friend as a great working environment.

Why is this important? As the report notes, a number of other studies have shown how high employee loyalty is correlated with better business outcomes. We have, in fact, presented multiple times the fact that a happy employee is an engaged, loyal and more productive employee.

This finding brings Chef CTO Adam Jacob’s quote, “happy people make happy products”, full circle. As happy people make happy products, high-performing organisations are producing happy employees. It all comes down to employees feeling the benefit of the work that they are doing. As Simon Sinek said: “Working hard for something we don’t care about is called stress; working hard for something we love is called passion”.

Improving quality is everyone’s job.

What did we know? The 2015 report highlighted the fact that within high-performing organisations, quality control and testing were shifted further to the left in the development cycle, becoming the responsibility of everyone in the team and improving speed, reliability and quality.

What do we know now? This year, the report takes this one step further, providing us with tangible outcomes of building quality into production. High-performing organisations are spending 22% less time on unplanned work and rework, and as a result are able to spend 29% more time on new work, such as new features or code.

Why is this important? This is hugely significant in the drive for agility: it enables companies to deliver new products, services or enhancements to customers more quickly. We all know that planned work is often hindered by unplanned work. Saving time on unplanned work and rework means that companies have more time to deliver planned work effectively, making the organisation more agile.

High performers spend 50% less time remediating security issues than low performers.

We like this one. We know at Forest Technologies that DevOps can help improve and tackle security issues, but it’s nice to put a stat to it.

How does this work? The general gist is that by better integrating information security objectives into daily work, teams achieve higher levels of IT performance and build more secure systems. What’s more, they save significant time retrofitting security at the end, and addressing security issues.

According to the report, the integration of security objectives is just as important as the integration of other business objectives. Security must be integrated into the daily work of delivery teams to see improvements. We actually go into this in more detail in our recent whitepaper, which you can download here.

Taking an experimental approach to product development can improve IT and organisational performance.

What did we know? Last year’s report took a deep dive into the employee culture needed for successful DevOps implementation. The outcome was that a collaborative DevOps culture improves employee performance through cross-functional teams, blameless post-mortems, shared responsibilities, breaking down silos and time for experimentation.

What’s new? This year, the report strives to find out the extent to which employees identified with the organisations they worked for. The results: that the use of continuous delivery and lean product management increases the extent to which employees identify with their organisation and, in turn, perform higher. Employees that are less scared of failure are more likely to innovate.

Should this surprise us? Honestly, no. People are a company’s greatest asset, and having employees who strongly identify with a company provides a competitive advantage.

Undertaking a technology transformation initiative can produce sizeable cost savings for any organisation.

As a DevOps consultancy, we can verify that EVERY technology leader wants to know exactly what return to expect on investing in a technology transformation.

The 2016 report becomes the first State of DevOps Report to take organisations part of the way towards understanding potential return from adopting DevOps practices.

Using key metrics from the report, as well as industry benchmarks, they’ve provided formulas that organisations can use to quantify potential cost savings using metrics from their own environment.

What can we expect? There are a lot of stats listed, depending on what you’re looking to find out. The report estimates yearly cost savings from the cost of excess rework and from reducing downtime for high, medium and low-performers at organisations of various sizes. The general takeaway is that DevOps saves organisations a lot of money (more often than not, in the millions).

If you’re interested in finding out more about the latest Puppet research, you can download the full report here. 
