DevOps Playground – Hands On with CloudBuild

Led by our very own Michel Lebeau, this Playground explores the fundamentals of a CI/CD pipeline using CloudBuild.

Over the session, we walk guests through how to create a basic build config file that defines the steps and parameters needed for CloudBuild to perform your tasks.

We also look at how to build and test a Go application, and finish off the Playground by deploying it using Google Cloud App Engine. We have also prepared this video so you can give it a go from the comfort of your home!
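For reference, a build config for a flow like this might look roughly as follows; the step images are Google's standard builders, but the app name and paths are illustrative:

```yaml
# cloudbuild.yaml - illustrative sketch; adjust paths and project settings to your app
steps:
  # Run the Go tests
  - name: 'gcr.io/cloud-builders/go'
    args: ['test', './...']
    env: ['PROJECT_ROOT=hello-app']
  # Build the Go binary
  - name: 'gcr.io/cloud-builders/go'
    args: ['install', '.']
    env: ['PROJECT_ROOT=hello-app']
  # Deploy to App Engine
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
```

Submitting this with `gcloud builds submit` runs the steps in order, each in its own builder container.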


Interested in attending our next DevOps Playground in London? Follow us on Meetup to receive a notification about the next event.

Check out the Meetups we have at our other global locations:

You can also find all the information and resources you need about DevOps Playground sessions, upcoming events and past events on our website:

Michel Lebeau
New announcements from HashiConf 2018!

We are writing from San Francisco at the Fairmont Hotel where HashiCorp has just kicked off HashiConf 2018.

Since the company’s inception in 2012, it has seen huge growth, and each of HashiCorp’s tools has become incredibly valuable to the industry, in particular Terraform, Vault, Consul and Nomad.

Terraform is currently used in most Fortune 500 companies. It also serves an incredible number of small and medium-sized companies and plays an important part in the individual developer’s toolkit, thanks to growth in the adoption of the Cloud. Vault, Consul and Nomad are also being heavily utilised by the industry.

We’ve just kicked things off and HashiConf 2018 has a packed agenda of exciting talks, which is leading to some tough choices on our part!

Ready? Set. Go!

At ECS Digital, we’ve been working with the entire suite of products that HashiCorp has created.

Meet Michel Lebeau, DevOps and Continuous Delivery Consultant at ECSD. Michel has been heavily involved in projects using HashiCorp tools and runs HashiCorp training courses. Here’s what he has to say about the product announcements at HashiConf 2018:

“I’m personally very excited about the free remote state feature that Terraform Enterprise is going to offer to everyone. This will allow teams to work together and manage the same resources much more easily. This is a feature that Enterprise customers have enjoyed for a while now, and I’m extremely pleased to see that the general public will be able to benefit from it too.”

Nice one HashiCorp! See here for more details.

“I’m also looking forward to Terraform 0.12, as I’m sure many others are, with the new for loop, conditional expressions, dynamic blocks, etc. However, I am not looking forward to the breaking changes!

Vault 1.0 is of course another big one: it’s an awesome security tool being adopted by more companies by the day, and seeing HashiCorp give it its 1.0 seal of approval is very exciting. Auto Unseal for the open source community will help smaller companies sort out their unseal-key headaches, which is a welcome addition.

Consul Connect and first-class support for Kubernetes are other announcements that have me unreasonably joyful for a Tuesday morning!”
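As a taste of the Terraform 0.12 syntax Michel mentions, here is an illustrative sketch; the resource names and values are made up:

```hcl
# Illustrative Terraform 0.12 syntax - resource names and values are invented
variable "ports" {
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # A dynamic block replaces copy-pasted ingress stanzas
  dynamic "ingress" {
    for_each = var.ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}

# A for expression builds a new collection from an old one
output "port_names" {
  value = [for p in var.ports : "port-${p}"]
}
```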

Now meet Daniel Meszaros, also a DevOps and Continuous Delivery Consultant at ECS Digital. Daniel’s been working with open-source versions of the HashiStack for about 2 years. Here’s his take on the announcements at HashiConf 2018:

“There are a lot of exciting announcements happening this morning at HashiConf and let me tell you what my favourites are:

Terraform: HashiCorp is starting a new service to enable every Terraform user to collaborate better. Remote state-file storage, with no limits on users or workspaces, and data encrypted with Vault. The service also offers a shared module registry and centralised plans and applies. The brand new version of HCL is also something a lot of people have been waiting for, and it’s finally here. They’ve made the language more flexible and introduced features (like loops and dynamic blocks) that will make writing .tf files better.

Vault: 1.0 Preview Release. The entire community has been waiting for 1.0 for a long time. Vault is and has been a very mature product for years now, but the company didn’t want to release the first major version until they were sure everything was just right: stable enough and supportable. New features include auto-unseal in the open source version, working with all the major public cloud providers: AWS, GCP, Azure and Alibaba Cloud.

Consul: Preview Release of v1.4. Connect is now Generally Available, with native integration with Envoy, the most commonly used service-mesh proxy. With the Kubernetes integration announced earlier this year, Consul is now capable of discovering, securing and connecting services inside and outside a Kubernetes cluster.

Nomad: 0.9. I love the idea of Nomad. I love that HashiCorp is not trying to make yet another container-only platform that focuses on the benefits of using container images; besides being a container scheduler, Nomad also helps companies with legacy applications start segregating and automating the deployment of their software in its current form. The raw stats show that the effort is worth it: Nomad is currently the fastest-growing HashiCorp product in terms of downloads. In the new version coming in November, we’ll have a new, improved UI and lots of new features, like Nvidia GPU support, affinity-type constraints, and a new type of scheduling, spreading.

Learn: HashiCorp announced a new learning platform that helps everyone get started with their products, starting with Vault, with Consul and Nomad coming later this year.”

The official announcement by Armon Dadgar, co-founder of HashiCorp, can be found here.

Watch this space and follow us on Twitter for follow-up blog posts and other specific announcements from Michel and Daniel at the conference!

Quick shameless plug: We offer Official HashiCorp Training in London and Singapore; get in touch if that’s something your company is looking for.

DevOps Playground #21 – Google Kubernetes Engine on Google Cloud Platform

Our 21st DevOps Playground took place last evening in Edinburgh, in our ECS office. We explored Google Kubernetes Engine on Google Cloud Platform.

We created a GKE cluster, using preemptible instances, then created an application and built a Docker image from it, which we pushed to the Google Container Registry. We then ran this image on our GKE cluster.

We did run into some issues with the quotas on Google Cloud. Be warned: you need to upgrade your account, or you will be limited to one Kubernetes cluster. Lesson learned! 😊

The remainder of the meetup was spent running some load testing with Locust and playing around with the performance of the application, allowing everyone to see how GKE auto-scaled the cluster and the deployment. Lowering the performance of the app led to pods being created by the deployment, and as the number of pods became too large for the cluster, the cluster itself scaled, growing from 3 to 7 instances. Improving the performance of the app then led to the cluster being able to scale back to only 3 instances.
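The pod-level part of that behaviour is driven by a HorizontalPodAutoscaler; a minimal sketch, assuming a Deployment named `sample-app` (the name and thresholds are illustrative):

```yaml
# hpa.yaml - illustrative sketch; the Deployment name and thresholds are assumptions
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 50
```

When average CPU rises above the target, the autoscaler adds pods; when the node pool can no longer fit them, the GKE cluster autoscaler adds instances, which is the cluster-level scaling we saw.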

All the changes to the application were made using rolling updates, minimising the negative effects of changing an application version while serving live traffic.

This was our busiest meetup so far in Edinburgh, and we are looking forward to meeting everyone again next month, in our ECS office near Haymarket.

Register for our next DevOps Playground in Edinburgh here, and in London here if you are around.

Interested in attending one of our DevOps Playground events? Follow us on Meetup to receive a notification about the next event – Join us!


DevOps Playground #15 – Consul

We ran our 15th DevOps Playground at the Velocity conference venue – it was focused on Consul.

We used Docker to spin up two Nginx web server containers and a third Nginx container acting as a load balancer, using Consul to then register the two web servers and dynamically configure the load balancer’s upstreams.
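For illustration, registering one of the web servers with Consul can be done with a service definition along these lines; the service name and health-check URL here are assumptions:

```json
{
  "service": {
    "name": "web",
    "port": 80,
    "check": {
      "http": "http://localhost:80/",
      "interval": "10s"
    }
  }
}
```

A consul-template process on the load balancer can then render every healthy `web` instance into the Nginx upstream block and reload Nginx whenever the list changes.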

We will publish a follow-up post soon so you are able to run through the Playground from the comfort of your home!

Our new meetup will soon be announced here, we look forward to seeing you all there 😁

Interested in attending one of our DevOps Playground events? Follow us on Meetup to receive a notification about the next event – Join us!


Puppet Feature highlight: Puppet Discovery

In May, Puppet released Lumogon, a tool used to help discover your infrastructure.

Today at PuppetConf, Puppet announced Puppet Discovery, which allows you to quickly and easily discover your resources, whatever they are: traditional, cloud and container resources alike. It is built on top of Lumogon. From PuppetConf, here are a few key points about Puppet Discovery:

  • Agentless service discovery for AWS EC2, containers, and physical hosts
  • Actionable intuitive views across your hybrid landscape
  • The ability to instantly bring your unmanaged resources under Puppet management
  • Delivered as a turnkey and auto-updating experience

Read here for more information about Puppet Discovery.


DevOps Playground #14 – Nomad by HashiCorp

In September, we hosted our 14th DevOps Playground, where DevOps Consultant at ECS Digital, Daniel Monteiro, presented Nomad by Hashicorp.

It was our very first meetup that was hosted in our brand new office!

Nomad is HashiCorp’s open source scheduler. It uses a declarative job file for scheduling virtualized, containerized, and standalone applications.
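A declarative job file of the kind Nomad uses is a short HCL document; a minimal sketch, where the job name, image and resource figures are illustrative:

```hcl
# example.nomad - illustrative sketch; job name, image and figures are made up
job "web" {
  datacenters = ["dc1"]
  type        = "service"

  group "frontend" {
    # Run two instances of the task
    count = 2

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.15"
      }

      resources {
        cpu    = 100 # MHz
        memory = 128 # MB
      }
    }
  }
}
```

`nomad run example.nomad` submits the job, and the scheduler places the instances across the cluster.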

Daniel’s hands-on part was split into three parts:

  1. Setting up the environment
  2. Generating and running a job file
  3. Creating a cluster and running a job across the cluster.

All the steps that Daniel spoke about are available on our GitHub profile.

Many questions were asked, beers were drunk and pizza was eaten 🍕🍻

For those who attended, we hope that you enjoyed getting hands-on with Nomad. Any feedback or questions, please let us know.

Our next Meetup will run during the Velocity conference on the 18th October – we will be getting Hands on with HashiCorp Consul.

Interested in attending one of our DevOps Playground events? Follow us on Meetup to receive a notification about the next event – Join us!

All Hands on DevOps #10

Yesterday our 10th All Hands On DevOps Meetup, co-organised with Third Republic, was hosted at Shazam’s office.

Our first talk was given by Dawn James, Portal Architect at Kobalt Music.

Dawn went through the issues he faced previously when Kobalt had a monolithic application. With monolithic applications usually comes the same set of issues: regular downtime, difficulty delivering changes quickly, and so on.

Over the course of a year, he was able to transition from on-premise to AWS, from zip files to Docker, and from PHP endpoints to JSON APIs. They now use microservices, Terraform and Fabric.

The change was progressive, one piece of the monolith at a time, but the result is no regular downtime (and hopefully no irregular downtime either!), better performance (triple the API speed), quicker deployments, and more independent developers who can self-serve.

Dawn mentioned he was a big fan of HashiCorp, with the biggest blocker to the adoption of Terraform being the difficulty for developers to pick up the tool.

ECS Digital offers a range of HashiCorp training sessions, covering Consul, Terraform and Vault – register for our courses here.

Our second talk was split between Ben Belchak, Head of SRE, and Jesús Roncero, Site Reliability Engineer, both at Shazam.

Ben talked about Shazam’s journey to containers over the past three years. When Shazam started, 20-odd years ago, it was a monolithic application with unpatched OSes. Progressively, microservices crept in, in an ill-conceived way, as teams tried to meet business requirements and deadlines while putting out fires. A lack of good communication across offices spread around the globe also played a role in creating silos. At that point, there was a large amount of technical debt that needed to be addressed.

Enter Ben.

He started by defining targets: a happy team, a stable infrastructure, a good relationship between SREs and software engineers, and monitoring systems that could be trusted.

He took steps to move towards these targets. He demolished the silos that each SRE in the company had created, with the support of all levels of management, CTO and CEO included.

He started addressing each and every alert, deleting the useless ones, and properly annotating the useful ones.

He addressed recurring issues, which snowballed and freed up more time to fix more issues.

He then worked on automating deployments, going from taking more than an hour of an SRE’s time for a single deployment to no time at all after pressing a button.

He collected extensive metrics and incident tickets over the entire stack to understand exactly what was going on across the board, going as far as doing pre-mortems to try predicting future issues.

This led to much more breathing room and a much more stable environment, with developers happy to focus on their work.

Later on, Ben walked us through how Shazam went from baremetal to running Kubernetes on Google Cloud.

Shazam’s servers were provisioned for large events, like the Super Bowl or the Grammys, which meant that most of the time a lot of hardware was sitting unused.

At the beginning of 2017, they started migrating to Google Cloud. They now have almost all their clusters on Google Cloud, with only a few services left on premise.

The adoption of Kubernetes came after, and it emerged from these wants: self-healing services, auto-scaling based on metrics, self-sufficient developers, rolling deployments and rollbacks, dynamic monitoring based on SLOs, and the ability to create several environments from the same Docker image.

This has all been achieved using Kubernetes on Google Cloud, with the help of Helm (the Kubernetes package manager).

This gives Shazam the right amount of processing power at the right time, and the ability to deploy changes safely, quickly and reliably.


Thanks to everyone who came along. We love hearing about people’s knowledge, whilst consuming beer! We hope everyone had a great time and learned something new.


As always, we’d love to hear any ideas and suggestions you might have for our next event. 

New features in Puppet Enterprise 2016.4

Above: Sanjay Mirchandani, Puppet CEO, during the opening keynote for PuppetConf 2016.

This year’s PuppetConf is currently underway. 

As Puppet partners, we currently have a team in San Diego for PuppetConf 2016: an opportunity to meet fellow Puppet users and explore shared problems, solutions and experiences.

We’re bringing you updates as and when they happen.

What’s new for Puppet Enterprise 2016.4?

The latest version of Puppet Enterprise will bring these improvements to the table.

• Puppet now natively supports building Docker containers automatically, on top of being able to install and manage containers, using Docker, Kubernetes, Mesos, etc.

• The Puppet Orchestrator now allows you to use PQL (Puppet Query Language) to target the specific servers or groups of servers on which new configuration will be deployed.

• Puppet Enterprise now makes the distinction between intended changes (e.g. a change in a Puppet manifest) and corrective changes (e.g. a change made by another user that Puppet corrects).

• Self service will be made easy with a plugin for vRealize Automation next month.

• Puppet and CloudBees have been working together to roll out a Jenkins integration for Puppet Enterprise. This means that Puppet can now be integrated into your CD pipeline in Jenkins.

• New native CLI tools for Windows and Mac, to avoid having to log in to a server to drive changes on your infrastructure.

• You can now hide or redact sensitive configuration data contained in Hiera from PuppetDB, logs and change reports.

• Microsoft Azure’s module has been improved to support more resources that can be provisioned, and will be released next month.
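To illustrate the Orchestrator point above, targeting nodes with a PQL query looks roughly like this; the exact flags vary between Puppet Enterprise versions, so treat this as a sketch rather than an exact invocation:

```
# Illustrative only: deploy the production environment to every node
# whose operating system fact is Ubuntu, selected via a PQL query
puppet job run --query 'inventory { facts.os.name = "Ubuntu" }' --environment production
```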


You can watch the Product Announcement for yourself, here. 

DevOps Playground Meetup #6: Hands on with HashiCorp’s Terraform

A successful sixth meetup!

This Tuesday, we hosted our sixth monthly #DevOpsPlayground meetup. It was a successful evening, attended by many.

These meetups allow us to explore and present DevOps tools – as well as providing others with the opportunity to give them a try.

This month, Mourad Trabelsi talked about HashiCorp’s Terraform.


HashiCorp’s Terraform allows you to write your infrastructure as code.

Writing configuration files and then running Terraform apply allows you to easily spin up new infrastructure. You can do this using multiple providers, including AWS, DigitalOcean, Docker and many more.

You can then provision them if needed.
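As a minimal sketch of such a configuration file, assuming the AWS provider (the region and AMI ID here are placeholders to replace with real values):

```hcl
# main.tf - illustrative; replace the region and AMI with real values
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "web" {
  count         = 2
  ami           = "ami-00000000"
  instance_type = "t2.micro"

  tags {
    Name = "webserver-${count.index}"
  }
}
```

`terraform plan` previews the changes, and `terraform apply` then creates the two instances.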

Hands on!

During this meetup, Mourad guided us through creating a configuration file to create two webservers using one security group, then a load balancer in front of these two webservers using its own security group, all in AWS.

Schema of the final infrastructure:


You can find a walkthrough of the technical steps on our GitHub page, here.


A big thank you to everyone who participated in this meetup.

We hope to see you all again in the next one!

How to set up a simple web development environment (web & database server) with Puppet

In this step-by-step guide, we will see how to set up a Puppet Master in Amazon Web Services, and how to use it to create two other AWS instances.

We will then use Puppet to configure these two instances, one will be a MySQL Database Server and the other an Apache Web Server.


  1. AWS Setup
  2. Set up the Master
    1. Create the AWS Master Instance
    2. Install Puppet Enterprise
  3. Configure the Agent Nodes
    1. Launch the agent nodes with Puppet
    2. Configure Apache and MySQL using Roles and Profiles
      1. Create the Database and Webserver Roles
      2. Create the Apache and MySQL Profiles
    3. Classify our Nodes
    4. Manually run Puppet on each Agent Node
  4. Related Articles

AWS Setup

Note that in this Guide, we use the eu-central-1 / Frankfurt zone.

If you intend to use a different zone, you will have to change the ami-id in the appropriate places in the scripts.

  1. Login to your AWS EC2 Console.
  2. Select the zone mentioned above.
  3. Create an AWS Access Key and a Secret Key in Security Credentials > Users > Your User Name > Create Access Key, and keep them handy so you can refer to them later.
  4. Go to the EC2 console.
  5. Create a Key Pair in Network & Security > Key Pairs > Create Key Pairs called Webdev-forest, and save the pem file to an accessible location. (You will need this in order to access the Master.)

Set up the Master

Create the AWS Master Instance

  1. In order to create the Master Instance, select EC2 Console > Instances > Launch Instance, and configure it as follows:
    1. Choose the Ubuntu Server 14.04 LTS (HVM), SSD Volume Type AMI.
    2. Choose the t2.large type.
    3. Use the default instance details settings.
    4. Use the default storage, 20 GB SSD.
    5. Give it a recognizable name, e.g. master_of_puppets.
    6. Create a security group with ports 22 for SSH only from your IP, and 3000, 8140, 443, 61613, 8142 for puppet services from anywhere.
    7. Review and launch.
    8. Use the keypair that you just created.
    9. Launch!
    10. Wait until the instance has finished initializing.

Install Puppet Enterprise

  1. Now, connect using the key created before and the public hostname of your instance, which you can find in the EC2 description of your instance under the Public DNS section:
    1. chmod 400 Webdev-forest.pem
    2. ssh -i Webdev-forest.pem ubuntu@[public hostname]
    3. accept the connection 
  2. Become root
    1. sudo su 
  3. Edit your /etc/hosts and add the following line at the top:
    1. vim /etc/hosts
    2. "   localhost master.puppet.vm master puppet" 
  4. Change the hostname to “master.puppet.vm”
    1. hostname master.puppet.vm 

  5. Download the pe master installer
    1. wget -O puppet-installer.tar.gz ""
  6. Unpack the installer
    1. tar -xf puppet-installer.tar.gz
  7. Install puppet master
    1.  ./puppet-enterprise-<version>-ubuntu-14.04-amd64/puppet-enterprise-installer
    2. Select the [1] option to perform a guided installation
    3. Copy the public hostname of your ec2 instance, and go to https://<public-hostname>:3000
    4. Your browser will display a certificate error; add an exception in Firefox, or click on Advanced and then Proceed in Chrome, to access the console.
    5. Click on Let’s get started!
    6. Select a monolithic installation
    7. Type in the Puppet master FQDN: master.puppet.vm
    8. Type in the Puppet master DNS aliases: puppet
    9. Type in a Console Administrator password. Later on you will use it to log in as the admin user.
    10. Click on Submit and then Continue
    11. Now the Puppet Installer will do some checks before the installation, and will probably prompt some warnings which can be skipped.
    12. Click Deploy Now
    13. This step will take around 10 minutes, which is normal, and you will then see a screen indicating that all went well.
  8. access the console at https://<public-hostname>
    1. The user is “admin” and the password is the one that you chose in the step before.
    2. You will then see the console.

The puppet master is now all set, so let’s take care of the agents.

Configure the Agent Nodes

Launch the agent nodes with Puppet

  1. On the master, create a new directory called create_instances in root’s home directory.
    1. mkdir ~/create_instances 
  2. Create a new file create.pp that will create the instances
    1. vim ~/create_instances/create.pp
    2. Paste the following code:
      # Get the hostname of the master
      $pe_master_hostname = $facts['ec2_metadata']['hostname']
      # Get the ip of the master
      $pe_master_ip = $facts['ec2_metadata']['local-ipv4']
      # Get the master's fqdn
      $pe_master_fqdn = $::fqdn

      # Set the defaults for the security groups
      Ec2_securitygroup {
        region => 'eu-central-1', # Replace by the region in which your puppet master is
        ensure => present,
        vpc    => 'My VPC', # Replace by the name of your VPC
      }

      # Set the defaults for the instances
      Ec2_instance {
        region        => 'eu-central-1', # Replace by the region in which your puppet master is
        key_name      => 'Webdev-forest', # Replace by the name of your key if you chose something else
        ensure        => 'running',
        image_id      => 'ami-87564feb', # ubuntu-trusty-14.04-amd64-server-20160114.5 (ami-87564feb)
        instance_type => 't2.micro',
        tags          => {
          'OS'    => 'Ubuntu Server 14.04 LTS',
          'Owner' => 'Michel Lebeau', # Replace by your name
        },
        subnet        => 'My Subnet', # Replace by the name of your Subnet
      }

      # Set up the security group for the webserver
      ec2_securitygroup { 'web-sg':
        description => 'Security group for web servers',
        ingress     => [{
          # Open the port 22 to be able to SSH into, replace by your.ip/32 to secure it better
          protocol => 'tcp',
          port     => 22,
          cidr     => ''
        },{
          # Open the port 80 for HTTP
          protocol => 'tcp',
          port     => 80,
          cidr     => ''
        }],
      }

      # Set up the security group for the database server
      ec2_securitygroup { 'db-sg':
        description => 'Security group for database servers',
        ingress     => [{
          # Open the port 22 to be able to SSH into, replace by your.ip/32 to secure it better
          protocol => 'tcp',
          port     => 22,
          cidr     => ''
        },{
          # Open the port 3306 to be able to access mysql
          protocol => 'tcp',
          port     => 3306,
          cidr     => ''
        }],
      }

      # Set up the instances, assign the security groups and provide user data
      # that will be executed at the end of the initialization
      ec2_instance { 'webserver':
        security_groups => ['web-sg'],
        user_data       => template('/root/create_instances/templates/'),
      }
      ec2_instance { 'dbserver':
        security_groups => ['db-sg'],
        user_data       => template('/root/create_instances/templates/'),
      }
      You can find the VPC and subnet in the VPC section of AWS; please note that Puppet expects the names of the VPC and subnet, the IDs will not work.

    3. If you are using a different region than eu-central-1, change the region and the image_id accordingly.
  3. Create 2 templates
    1. Create a directory called “templates” inside the create_instances directory
      1. mkdir ~/create_instances/templates
    2. Create the webserver template
      1. vim ~/create_instances/templates/
        PE_MASTER='<%= @pe_master_hostname %>'
        echo "<%= @pe_master_ip %> <%= @pe_master_fqdn %>" >> /etc/hosts
        # Download the installation script from the master and execute it
        curl -sk https://$PE_MASTER:8140/packages/current/install.bash | /bin/bash -s agent:certname=webserver
    3. Create the dbserver template
      1. vim ~/create_instances/templates/ 
        PE_MASTER='<%= @pe_master_hostname %>'
        echo "<%= @pe_master_ip %> <%= @pe_master_fqdn %>" >> /etc/hosts
        # Download the installation script from the master and execute it
        curl -sk https://$PE_MASTER:8140/packages/current/install.bash | /bin/bash -s agent:certname=dbserver
    4. Now let’s create the instances:
      1. Install the retries gem and the Amazon AWS Ruby SDK gem
        1. /opt/puppetlabs/puppet/bin/gem install aws-sdk-core retries
      2. Export your AWS credentials (the Access Key and Secret Key you created during the AWS Setup):
        1. mkdir ~/.aws/
        2. vim ~/.aws/credentials
          aws_access_key_id     = # Paste here your Access Key ID
          aws_secret_access_key = # Paste here your Secret Access Key ID
          region                = # Specify your region, optional
      3. install puppet’s AWS module
        1. puppet module install puppetlabs-aws
      4. finally apply the create script
        1. puppet apply /root/create_instances/create.pp
          [root@master ~]# puppet apply /root/create_instances/create.pp
          Notice: Compiled catalog for master.puppet.vm in environment production in 0.11 seconds
          Notice: /Stage[main]/Main/Ec2_instance[webserver]/ensure: changed absent to running
          Notice: /Stage[main]/Main/Ec2_instance[dbserver]/ensure: changed absent to running
          Notice: Applied catalog in 25.15 seconds
      5. Wait for the instances to be started and initialized. Once this process is finished, puppet will run and you will have to accept their certificates before they can communicate with the master.
      6. In the Puppet Enterprise Console, go to Nodes > Unsigned certificates


      7. Accept all so the nodes will be able to get their latest configuration from the master.

Configure Apache and MySQL using Roles and Profiles

Now, we have two running Puppet Agent nodes communicating with our Puppet Enterprise Master. Only a few steps more and we will enjoy our new website!

Create the Database and Webserver Roles

The Roles will define the business logic of our applications, and will be composed of one or more Profiles.

  1. In the master, navigate to the production environment:
    1. cd /etc/puppetlabs/code/environments/production/ 
  2. create the modules/roles/manifests directory
    1. mkdir -p modules/roles/manifests 
  3. create the dbserver role
    1. vim modules/roles/manifests/dbserver.pp
      # Role for a Database Server
      class roles::dbserver {
        # Include the mysql profile
        include profiles::mysql
      }
  4. create the webserver role
    1. vim modules/roles/manifests/webserver.pp
      # Role for a Web Server
      class roles::webserver {
        # Include the apache profile
        include profiles::apache
      }

Create the Apache and MySQL Profiles


Now, we will create our Profiles, which will define the application stack for Apache and MySQL.

  1. create the modules/profiles/manifests directory
    1. mkdir -p modules/profiles/manifests 
  2. create the apache profile
    1. vim modules/profiles/manifests/apache.pp
      # Install and configure an Apache server
      class profiles::apache {
        # Install Apache and configure it
        class { 'apache':
          mpm_module => 'prefork',
          docroot    => '/var/www',
        }
        # Install the PHP mod
        include apache::mod::php
        # Install php5-mysql for PDO mysql in PHP
        package { 'php5-mysql':
          ensure => installed,
        }
        # Get the index.php file from the master and place it in the document root
        file { '/var/www/index.php':
          ensure => file,
          source => 'puppet:///modules/profiles/index.php',
          owner  => 'root',
          group  => 'root',
          mode   => '0755',
        }
        # Declare the exported resource
        @@host { 'webserver':
          ip           => $::ipaddress,
          host_aliases => [$::hostname, $::fqdn],
        }
        # Collect the exported resources
        Host <<||>>
      }
  3. create the mysql profile
    1. vim modules/profiles/manifests/mysql.pp
      # Install and configure a MySQL server
      class profiles::mysql {
        # Install MySQL Server and configure it
        class { 'mysql::server':
          root_password           => 'p4ssw0rd',
          remove_default_accounts => true,
          restart                 => true,
          override_options        => {
            mysqld => {
              bind_address            => '',
              'lower_case_table_name' => 1,
            },
          },
        }
        # Copy the sql script from the puppet master to the /tmp directory
        file { 'mysql_populate':
          ensure => file,
          path   => '/tmp/populate.sql',
          source => 'puppet:///modules/profiles/populate.sql',
        } ->
        # Only once the file has been copied, use it to populate a new database
        mysql::db { 'cats':
          user     => 'forest',
          password => 'p4ssw0rd2',
          grant    => ['SELECT', 'UPDATE', 'INSERT', 'DELETE'],
          host     => '%', # You can replace by 'webserver' to make it more secure,
                           # but you might have to flush your hosts in mysql for it
                           # to be taken into account
          sql      => '/tmp/populate.sql',
        }
        # Declare the exported resources
        @@host { $::hostname:
          ip           => $::ipaddress,
          host_aliases => [$::fqdn, 'database'],
        }
        # Collect the exported resources
        Host <<||>>
      }
  4. Create the files that will be used to pre-populate the MySQL database with some sample data, as well as the webpage that will consume that information
    1. mkdir modules/profiles/files
    2. vim modules/profiles/files/populate.sql

        USE `cats`;
        CREATE TABLE `family` (
          `id` mediumint(8) unsigned NOT NULL auto_increment,
          `Name` varchar(255) default NULL,
          `Age` mediumint default NULL,
          PRIMARY KEY (`id`)
        ) AUTO_INCREMENT=1;
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Hasad",6),("Uma",5),("Breanna",17),("Macaulay",14),("Colton",11),("Serina",16),("Emery",13),("Christian",7),("Vladimir",16),("Wang",13);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Hermione",12),("Yoshio",9),("Hilel",10),("Autumn",6),("Solomon",7),("Briar",6),("Armand",9),("Alyssa",1),("Shelby",1),("Yasir",15);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Wallace",1),("Yoshio",5),("Pascale",6),("Dalton",17),("Trevor",9),("Joan",10),("Zephr",14),("Neville",3),("Nicole",4),("Halee",14);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Wayne",15),("Maile",8),("Alfonso",9),("Neve",6),("Heidi",16),("Mona",11),("Mollie",16),("Audra",16),("Karyn",12),("Acton",17);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Xyla",1),("Cole",6),("Blossom",9),("Sybill",4),("Lavinia",4),("Keely",14),("Gwendolyn",15),("Trevor",10),("Acton",12),("Christine",10);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Stone",17),("Erich",12),("Elijah",10),("Emerson",14),("Rafael",8),("Scott",17),("Olympia",13),("Nehru",14),("Casey",8),("Michael",3);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Montana",8),("Heidi",11),("Edward",13),("Xenos",1),("Venus",9),("Malik",5),("Madeline",2),("Sacha",8),("Whitney",13),("Eagan",8);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Lewis",2),("Guinevere",17),("Oliver",6),("Jana",7),("Rachel",2),("Ariel",7),("Pamela",6),("Medge",11),("Clare",10),("Meghan",8);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Stone",10),("Chase",4),("Vladimir",17),("Grace",11),("Damon",15),("Ferdinand",11),("Veronica",14),("Wesley",13),("Zelda",15),("Eugenia",6);
        INSERT INTO `family` (`Name`,`Age`) VALUES ("Carlos",9),("Cherokee",14),("Theodore",3),("Tanisha",11),("Grant",7),("Xyla",6),("Austin",11),("Madison",4),("Kasper",7),("Andrew",10);
    3. vim modules/profiles/files/index.php
      <?php
      echo "<h1>Our small cat family</h1>";
      echo "<table style='border: solid 1px black;'>";
      echo "<tr><th>Id</th><th>Name</th><th>Age</th></tr>";

      class TableRows extends RecursiveIteratorIterator {
          function __construct($it) {
              parent::__construct($it, self::LEAVES_ONLY);
          }
          function current() {
              return "<td style='width:150px;border:1px solid black;'>" . parent::current() . "</td>";
          }
          function beginChildren() {
              echo "<tr>";
          }
          function endChildren() {
              echo "</tr>" . "\n";
          }
      }

      $host = "database";
      $port = "3306";
      $username = "forest";
      $password = "p4ssw0rd2";
      $dbname = "cats";

      try {
          $conn = new PDO("mysql:host=$host;port=$port;dbname=$dbname", $username, $password);
          $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
          $stmt = $conn->prepare("SELECT id, Name, Age FROM family");
          // Run the query before fetching its results
          $stmt->execute();
          // set the resulting array to associative
          $result = $stmt->setFetchMode(PDO::FETCH_ASSOC);
          foreach (new TableRows(new RecursiveArrayIterator($stmt->fetchAll())) as $k => $v) {
              echo $v;
          }
      } catch (PDOException $e) {
          echo "Error: " . $e->getMessage();
      }
      $conn = null;
      echo "</table>";
      ?>
  5. Install the Apache and MySQL modules
    1. puppet module install puppetlabs-apache
    2. puppet module install puppetlabs-mysql
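As written, the bare `Host <<||>>` collectors import every exported Host resource, which is fine for this two-node setup. If you later add more nodes, the collector also accepts a search expression to narrow what gets imported; a hypothetical refinement, not required for this walkthrough:

```puppet
# Collect only the Host entry that advertises the 'database' alias
Host <<| host_aliases == 'database' |>>
```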


Classify our Nodes

  1. Edit the manifests/site.pp
    1. vim manifests/site.pp
      node 'dbserver' {
        include roles::dbserver
      }
      node 'webserver' {
        include roles::webserver
      }
      node default {}
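Node definitions can also match on regular expressions, which saves repetition if you later scale out to several web servers; a hypothetical variant, not needed for this two-node setup:

```puppet
# A single definition matching webserver1, webserver2, ...
node /^webserver\d+$/ {
  include roles::webserver
}
```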


Manually run Puppet on each Agent Node

Puppet can be run in several ways: from the CLI, via MCollective, or through the Web Console, for example. In this case we are going to use MCollective:

root@master:~# su - peadmin

peadmin@master:~$ mco puppet runonce -v -I webserver -I dbserver 

 * [ ============================================================> ] 2 / 2

webserver                               : OK
    {:summary=>      "Started a Puppet run using the '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --color=false --show_diff --verbose --splay --splaylimit 120' command",     :initiated_at=>1471353250}

dbserver                                : OK
    {:summary=>      "Started a Puppet run using the '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --color=false --show_diff --verbose --splay --splaylimit 120' command",     :initiated_at=>1471353250}

---- rpc stats ----
           Nodes: 2 / 2
     Pass / Fail: 2 / 0
      Start Time: 2016-08-16 13:14:11 +0000
  Discovery Time: 0.00ms
      Agent Time: 142.88ms
      Total Time: 142.88ms

To check whether Puppet ran successfully on the nodes, and to see the changes that were applied to them, log in to the Web Console and go to Configuration > Overview.



Now paste the public address of your webserver into your favourite browser and voilà, you are done! Note that if you get an Error: SQLSTATE[HY000] [2005] Unknown MySQL server host 'database' (2), run Puppet with mco one more time, as the exported resources have not been collected yet. This happens when the webserver collects its resources before the dbserver has exported its IP; running Puppet again will pick it up.

Please note that if you terminate an AWS instance and start another with the create.pp script, the new instance will have the same certname as the one that was terminated, but a different IP. For Puppet to run correctly in this case, execute the following on the master:

puppet cert clean <certname>
with <certname> being either dbserver or webserver.

Puppet Enterprise is one of the leading continuous delivery technologies, building on its heritage in infrastructure automation with the addition of Puppet Application Orchestration. Forest Technologies are proud partners of Puppet Labs and experts in delivering rapid value to our customers’ digital transformation initiatives using Puppet Enterprise.

Michel Lebeau, How to set-up a simple web development environment (web & database server) with Puppet