Jenkins World travels to Europe!

We’re back at DevOps World | Jenkins World, and this time we’re in Europe! The first location for Jenkins World Europe is Nice, France, and we’re really excited to be in such an amazing city. We’re here to provide official CloudBees training to attendees, as well as to network at the expo and hear stories from individuals about their DevOps successes and #DevOps moments.

Jenkins World

Billy Michael & Abhaya Ghatkar provided Jenkins Pipeline Intermediate training to over 40 attendees during the first two days of the event. This day-long course was designed to build upon the Jenkins Pipeline Fundamentals course. It focused on Declarative Pipeline using Blue Ocean, and on how Jenkins can be further enhanced with shared libraries. We also discussed new features and best practices, and how students can make the most of their pipelines. The course included interactive labs for the students to complete.

These training courses also allowed us to interact with students from companies of all sizes from all over the globe. This provided us with a way to hear the unique problems users are encountering on a day-to-day basis and provide advice on how the knowledge from the course can help them to resolve these issues.

The theme of this year’s Jenkins World remains the same as last year: Transform. This shows CloudBees clearly have a commitment to building a better product for the future and feel that their transformation is still ongoing. This was showcased in the keynote presentation by Kohsuke Kawaguchi, in which he referred to the “Superpowers” CloudBees and the community have been working hard to provide for their users.

These superpowers include:

  1. Jenkins Pipeline
  2. Jenkins Evergreen
  3. Configuration as Code
  4. Cloud Native Jenkins
  5. Jenkins X

For more information about some of these topics, please refer to our blog post from Jenkins World San Francisco, where we have covered the keynote in more detail.

In addition to the keynotes, there are a number of talks spread throughout the two days, provided by CloudBees and its partners. Some of the topics which interest us are:

  • Jenkins Configuration as Code
  • DevOps performance management with DevOptics
  • DevOps at scale within the Enterprise
  • Jenkins X: Continuous Delivery for Kubernetes
  • AWS Keynote
  • 10 things we all do, but shouldn’t do with Jenkins

We’re pleased to have been a part of Jenkins World’s first trip to Europe. So far it has provided a great platform for networking and interesting talks, and the chance to meet other companies who are here showcasing their latest offerings.

We look forward to enjoying the rest of the event and hope to see you all again next year in Lisbon, Portugal!

Billy Michael
ECS Digital returns to Jenkins World 2018

ECS Digital returned once again to Jenkins World in San Francisco, hosted by our partner CloudBees. This year we had the opportunity to listen to a whole host of talks delivered by various industry leaders. We also delivered ‘Jenkins Pipeline Fundamentals’ training to over 35 students with a wide range of backgrounds and levels of Jenkins experience.

Our very own Ivan Audisio led the training, covering the essential best practices and nature of declarative and scripted pipelines. The real-world experience shared by both him and the various students made for a stimulating and enlightening experience for all. Alongside the theory, there were practical labs to provide an immediate application of the theory learned.

In tandem with the training, there were a variety of courses available during the convention, including Jenkins Pipeline Intermediate, Jenkins Fundamentals and CloudBees Core on Kubernetes – Intermediate.

These full-day training sessions were held over two days to give those interested a chance to expand their knowledge and familiarity with the Jenkins tools and concepts. These ranged from the basic configuration of projects to end-to-end automation.

During the event, CloudBees hosted their second annual DevOps World Awards Program, which honours Jenkins contributors and DevOps innovators. ECS Digital received the award for ‘Service Delivery Partner of the Year’ in recognition of our contributions to the CloudBees and Jenkins community. We are extremely grateful for this award – thank you to the CloudBees and Jenkins team!

The Keynotes

Following the conclusion of the training, the rest of the convention was dedicated to hosting talks, demonstrations and presentations of Jenkins and other related Continuous Integration (CI) technologies and concepts.

During one such keynote presentation, Kohsuke Kawaguchi, CloudBees CTO and creator of Jenkins, introduced the exciting new technologies they have been working on and discussed their vision of the future of CI. The five technologies discussed were:

  1. Jenkins Pipeline
  2. Jenkins Evergreen
  3. Configuration as Code
  4. Cloud Native Jenkins
  5. Jenkins X

Here were some other event announcements that caught our attention:

Jenkins Pipeline

As before, CloudBees continues to push forward with improving Jenkins Pipeline, with updates to the Blue Ocean interface they have been developing since last year. One development Kawaguchi was particularly excited about was its extensibility, which lets the Jenkins community contribute to the project – similar to the wealth of plugins the community has developed for Jenkins. He also believes it is time to move away from the old Jenkins User Interface (UI) and begin to fully adopt Blue Ocean as the go-to UI for Jenkins.

Configuration as Code

While only touched on briefly, the idea of keeping Jenkins’ configuration in a file that can be version-controlled and tracked is an exciting one. Rather than users manually making modifications with no means to track changes – which may break builds and functions – support is being developed to allow such version control. By capturing the configuration in a single config file that can be stored in a repository, it becomes possible to roll back easily in the event of failure and to replicate a setup with ease. Replicating a Jenkins installation by simply copying a single file is one step closer to the final goal of turning everything into code.
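As a flavour of what this looks like in practice, the Configuration as Code plugin reads a YAML file along these lines – a minimal sketch in which the system message, executor count, admin user and URL are all illustrative values, not from the talk:

```yaml
# jenkins.yaml – hypothetical Configuration as Code file
jenkins:
  systemMessage: "Configured entirely from version control"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"   # injected from the environment, not hard-coded
unclassified:
  location:
    url: "https://jenkins.example.com/"   # hypothetical URL
```

Checking a file like this into a repository is what gives you the rollback and replication properties described above.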

CloudBees Suite

Recognising the increasing desire for greater tools and support for their software, Christina Noren, CloudBees’ Chief Product Officer, delivered the keynote introducing the CloudBees Suite – a software package consisting of CloudBees DevOptics, CloudBees Codeship and CloudBees Core.

Acknowledging the confusion caused by the rapid development of new software and improvements, Noren elaborated on their plan to rebrand their tools. This rebranding will help to alleviate the issue, as well as highlight their continued dedication to improving the tools available and creating more for the community.

DevOptics continues to deliver a means to accurately monitor performance and provide metrics for improvement – feedback to users being a key concept in Continuous Integration and Delivery. Working together with Core for easy deployment and Codeship for operational maintenance, the suite provides a strong collection of tools for furthering any company’s digital transformation.

The Conference

The conference served as a good place for networking, on top of providing a place for various talks and technical demonstrations from industry leaders and commentators. These talks and demonstrations ranged from personal insights to experiences with Jenkins deployment.

Our gratitude goes out to CloudBees for hosting the conference, as well as to everyone who took the time to come and speak with us or attend our training session.

If you’re interested in Jenkins or other DevOps-related consultancy, please contact us here.

Matthew Song
Latest Enablement Pod offering unveiled…

ECS Digital announced the official unveiling of their Enablement Pod offering yesterday at DevOps World | Jenkins World, the annual gathering of DevOps practitioners using Jenkins for continuous delivery.

Understanding that business-wide transformations take time and involve multi-year programmes, ECS Digital have designed Enablement Pods to help clients effect change and realise value in the short and long term.

Enablement Pods are a collection of outcome-focused sprints, delivered by handpicked specialist teams, that provide the people, resources and capabilities clients need, when they need them. These Pods help enterprises transform at scale by embedding – for short periods – in existing engineering teams to enable new ways of working, tooling and technology.

The unique feature of ECS Digital’s Enablement Pods is that they – and ECS Digital’s success – are measured against KPIs defined in Sprint Zero. By tying success to business outcomes, clients are guaranteed a real return on investment. And if ECS Digital don’t hit the agreed outcomes, customers get a return on the revenue invested.

Each sprint after Sprint Zero provides an opportunity to showcase and review progress, ensuring maximum value from all activities. Sprints last two weeks, and resourcing depends on the specific project and sprint KPIs. Another unique feature of ECS Digital’s Enablement Pods is that their resource profile remains dynamic to satisfy the different skills requirements of each sprint’s KPIs.

ECS Digital have begun using Enablement Pods as an essential tool to deliver transformation at scale for their customers. In addition to exceeding project KPIs, ECS Digital have enhanced value by enabling internal teams to become self-sufficient and to architect solutions designed to survive tomorrow’s challenges, not just today’s.

 

“ECS Digital’s input has added an extra level of intelligence which has enabled us to build on the capacity under their guidance. We have grown in our capabilities over these past 12 months and developed the skillsets of our internal team through additional training. If we have any DevOps or automation or platform requirements in the future, we won’t bother going to tender, we will go straight to ECS Digital.” Matthew Bates, IT Director at ThinkSmart

Enablement Pod outcomes:

  • For each £1 invested in us, we have delivered £3 of annualised savings in the development lifecycle of a Retail Bank core application
  • A 99% reduction in application environment configuration delivery timescales (from 7,200 minutes to 3 minutes)
  • Increased testing quality through automation, with test-cycle timescales cut by over 50%
  • A 12x reduction in the application delivery cycle

About ECS Digital:

ECS Digital is an experienced digital transformation consultancy, helping clients deliver better products faster through the adoption of DevOps practices.

They are the digital practice of the ECS Group and have been leaders in digital transformation since 2003, adapting their offerings to support their customers’ evolving needs. They believe in a better way to adopt and deliver new ways of working, processes and technology – a more valuable, outcome-focused way of leveraging Enterprise DevOps and Agile testing to help build tomorrow’s enterprises today.

They’ve helped over 100 customers – including Lloyds Banking Group, ASOS, BP plc and Sky – realise the benefits of Enterprise DevOps and Agile Testing and have proactively remained relevant in the face of increasing challenges of customer expectation and market disruption. You can follow the ECS Digital community on LinkedIn and Twitter (@ECS_Digi).

Andy Cureton
ECS Digital heads to Jenkins World 2018

We’re excited to be heading back to San Francisco for this year’s DevOps World | Jenkins World. ECS Digital are exhibiting and offering training sessions throughout this 4-day event, where we can meet like-minded individuals and help them identify the innovative solutions that can assist them in reaching their business objectives.

We had such a great time exhibiting and offering Jenkins certification training last year that ECS Digital has been asked back again in 2018 as silver sponsors for both the San Francisco and Nice events.

Jenkins World is the largest gathering of Jenkins users in the world, providing technical demonstrations of innovative technology and solutions. It also showcases what other companies have to offer, which has the potential to benefit your organisation, and is a perfect opportunity for tech-savvy people to network and talk all things DevOps and Jenkins. ECS Digital has been a technology partner of Jenkins and CloudBees for 4 years, allowing us to develop a strong relationship with them and their team. We are keen to see first-hand the latest developments in the Jenkins ecosystem.

Jenkins World 2017

Last year’s event was one to remember: the opening keynote saw Kohsuke Kawaguchi (CTO, CloudBees) and Sacha Labourey (CEO, CloudBees) focus on the move from Jobs to Declarative Pipelines, as well as Blue Ocean – the (then) new Jenkins UX.

They delved into the details of how their technology can change the DevOps landscape through the development of CloudBees DevOptics – a service that provides organisations with a consolidated overview of their end-to-end application delivery stream – and the CloudBees Jenkins Advisor service, which helps companies identify and fix issues before they have a major impact on business-critical pipelines.

Read our blog about Jenkins World 2017 here.

What you can expect from Jenkins World this year

Throughout the event, there is a combined total of 120+ sessions and workshops covering diverse DevOps and Jenkins topics, including Automated Testing, DevSecOps and AI-powered Visual Testing. CloudBees have published the schedule for the whole conference here – make sure you pick your talks wisely! The workshops run by CloudBees will walk you through getting Jenkins X running on a public cloud provider.

We are particularly looking forward to the keynote on Tuesday morning, which will discuss the latest developments in Jenkins, how far it has evolved since its inception, and what CloudBees has planned for its future.

On top of this, Jenkins World is offering two full days of additional training on the Sunday and Monday prior to the event, covering topics such as CloudBees Core – Fundamentals, Jenkins Fundamentals and Value Stream Mapping for DevOps.

ECS Digital is excited to be providing the training for the DevOps Leader Certification Training on both days (16th and 17th) for anyone who wants to gain a practical understanding of:

  • DevOps and time to market
  • The key difference between DevOps IT and traditional IT
  • Ideas for organising workflows
  • Managing culture change
  • Popular tools and key practices

If you would like to sign up for our training sessions, please head to this website – as an extra bonus, ECS Digital friends can receive 20% off their registration fee using the code JWECSCUST. Register now!

DevOps World | Jenkins World will be a highly educational and engaging conference providing a perfect opportunity for you to meet like-minded individuals and hear about their experiences with Jenkins.

If you are interested in meeting the ECS Digital team at Jenkins World, find us at booth 608 in the exhibition room on the 18th and 19th September – you can follow our journey on Twitter at @ECS_Digi.

Stay tuned to read our round-up of the latest announcements and updates from Jenkins World next week!

Ivan Audisio
30 DevOps Tools You Could Be Using

As a DevOps consultancy, we spend a lot of time thinking about and evaluating DevOps tools.

There are a number of different tools that form part of our DevOps workbench, and we base our evaluation on years of experience in IT, working with complex, heterogeneous technology stacks.

We’ve found that DevOps tooling has become a key part of our tech and operations. We take a lot of time to select and improve our DevOps toolset. The vast majority of tools that we use are open source. By sharing the tools that we use and like, we hope to start a discussion within the DevOps community about what further improvements can be made.

We hope that you enjoy browsing through the list below.

You may already be well acquainted with some of the tools, and some may be newer to you.

1. Puppet

What is it? Puppet is designed to provide a standard way of delivering and operating software, no matter where it runs. Puppet has been around since 2005 and has a large and mature ecosystem, evolving into one of the best-in-breed infrastructure automation tools that can scale. It is backed and supported by a highly active open source community.

Why use Puppet? Planning ahead and using config management tools like Puppet can cut down on the amount of time you spend repeating basic tasks, and help ensure that your configurations are consistent, accurate and repeatable across your infrastructure. Puppet is one of the most mature tools in this area and has an excellent support backbone.

What are the problems with Puppet? The learning curve is quite steep for those who are unfamiliar with Puppet, and its Ruby-based DSL may seem unfamiliar to users who have no development experience.
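To give a flavour of that DSL, here is a minimal, illustrative manifest that declares a package and a service (the choice of nginx is purely an example):

```puppet
# site.pp – minimal illustrative manifest
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],   # manage the package before starting the service
}
```

You declare the state you want rather than the steps to get there, so applying the same manifest repeatedly is safe: Puppet only acts when actual state drifts from the declared state.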

2. Vagrant

What is it? Vagrant – a tool from HashiCorp – provides easy-to-configure, easily reproducible and portable work environments built on top of industry-standard technology. Vagrant helps enforce a single consistent workflow to maximise the flexibility of you and your team.

Why use Vagrant? Vagrant provides operations engineers with a disposable environment and consistent workflow for developing and testing infrastructure management scripts. Vagrant can be downloaded and installed within minutes on Mac OS X, Linux and Windows.

Vagrant allows you to create a single file for your project, to define the kind of machine you want to create, the software that needs to be installed, and the way you want to access the machine.

Are there any problems with Vagrant? Vagrant has been criticised for being painfully slow.
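That single project file mentioned above is the `Vagrantfile`; a short sketch (the box name and provisioning command are illustrative choices) might look like this:

```ruby
# Vagrantfile – illustrative example
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                        # a publicly available base box
  config.vm.network "private_network", ip: "192.168.33.10" # how you reach the machine
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx                               # example provisioning step
  SHELL
end
```

`vagrant up` builds the machine from this definition, and `vagrant destroy` throws it away, which is what makes the environments disposable.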

3. ELK Stack

What is ELK? The ELK stack actually refers to three technologies – Elasticsearch, Logstash and Kibana. Elasticsearch is a NoSQL database based on the Lucene search engine, Logstash is a log pipeline tool that accepts inputs from different sources and exports the data to various targets, and Kibana is a visualisation layer for Elasticsearch. They work very well together.

What are its use cases? Together they’re often used for log analysis in IT environments (although you can also use the ELK stack for BI, security, compliance and analytics).

Why is it popular? ELK is incredibly popular. The stack is downloaded 500,000 times every month. This makes it the world’s most popular log management platform. SaaS and web startups in particular are not overly keen to stump up for enterprise products such as Splunk. In fact, there’s an increasing amount of discussion as to whether open source products are overtaking Splunk, with many seeing 2014 as a tipping point.
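A typical log-analysis setup wires the three together with a Logstash pipeline config along these lines (the log path and host are illustrative):

```conf
# logstash.conf – illustrative pipeline
input {
  file {
    path => "/var/log/nginx/access.log"   # hypothetical log source
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }   # parse access-log lines into fields
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]           # index the events into Elasticsearch
  }
}
```

Kibana is then pointed at the resulting Elasticsearch index to search and chart the events.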

4. Consul.io

What is Consul.io? Consul is a tool for discovering and configuring services in your infrastructure. It can be used to present nodes and services in a flexible interface, allowing clients to have an up-to-date view of the infrastructure they’re part of.

Why use Consul.io? Consul.io comes with a number of features for providing consistent information about your infrastructure. Consul provides service and node discovery, tagging, health checks, consensus based election routines, key value storage and more. Consul allows you to build awareness into your applications and services.

Anything else I should know? HashiCorp have a really strong reputation within the developer community for releasing strong documentation with their products, and Consul is no exception. Consul is distributed, highly available and datacentre-aware.
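Services register with the local Consul agent through small JSON definitions; a sketch for a hypothetical web service with an HTTP health check looks like this:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "tags": ["primary"],
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

Other clients can then discover healthy instances of the service through Consul’s DNS or HTTP interfaces, which is what keeps that view of the infrastructure up to date.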

5. Jenkins

What is Jenkins? Everyone loves Jenkins! Jenkins is an open source CI tool, written in Java. CI is the practice of automatically running tests on a non-developer machine every time someone pushes code into a source repo, and Jenkins is one of the most widely used tools for it.

Why would I want to use Jenkins? Jenkins helps automate a lot of the work of frequent builds, allows you to resolve and detect issues quickly, and also reduce integration costs because serious integration issues become less likely.

Any problems with Jenkins? Jenkins configuration can be tricky. The Jenkins UI has evolved over many years without a guiding vision – and it has arguably become more complex. It has been compared unfavourably to more modern tools such as Travis CI (which, of course, isn’t open source).
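Much of the configuration pain can be sidestepped by describing builds as code in a `Jenkinsfile`. A minimal Declarative Pipeline sketch – the `make` targets here are placeholders for your project’s real build and test commands – looks like this:

```groovy
// Jenkinsfile – minimal Declarative Pipeline sketch
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build step
            }
        }
        stage('Test') {
            steps {
                sh 'make test'    // placeholder test step
            }
        }
    }
}
```

Because this file lives in the repo alongside the code, the build definition is versioned and reviewed like everything else.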

6. Docker

What is it? There was a time last year when it seemed that all anyone wanted to talk about was Docker. Docker provides a portable application environment, enabling you to package an application and its dependencies into a single unit that runs the same way everywhere.

Should I use it? Depending on who you ask, Docker is either the next big thing in software development or a case of the emperor’s new clothes. Docker has some neat features, including DockerHub, a public repository of Docker containers, and docker-compose, a tool for managing multiple containers as a unit on a single machine.

It’s been suggested that Docker can be a way of reducing server footprint, by packing containers onto physical tin without running a separate kernel per workload – but equally, Docker’s security story is a hot topic. Docker’s UI also continues to improve – Docker has just released a new Mac and Windows client.

What’s the verdict? Docker can be a very useful technology – particularly in development and QA – but you should think carefully about whether you need or want to run it in production. Not everyone needs to operate at Google scale.
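Packaging is driven by a `Dockerfile`; a short, illustrative one for a Python web app (the file names and entry point are assumptions) looks like this:

```dockerfile
# Dockerfile – illustrative Python application image
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]              # hypothetical entry point
```

`docker build -t myapp .` then produces an image that behaves the same in development, QA and (if you choose to go that far) production.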

7. Ansible

What is it? Ansible is a free platform for configuring and managing servers. It combines multi-node software deployment, task execution and configuration management.

Why use Ansible? Configuration management tools such as Ansible are designed to automate away much of the work of configuring machines.

Manually configuring machines via SSH, and running the commands you need to install your application stack, editing config files, and copying application code can be tedious work, and can lead to each machine being its own ‘special snowflake’ depending on who configured it. This can compound if you are setting up tens, or thousands of machines.

What are the problems with using Ansible? Ansible is considered to have a fairly weak UI. Tools such as Ansible Tower exist, but many consider them a work in progress, and using Ansible Tower drives up the TCO of using Ansible.

Ansible also has no notion of state – it just executes a series of tasks, stopping when it finishes, fails or encounters an error. Ansible has also been around for less time than Chef and Puppet, meaning that it has a smaller developer community than some of its more mature competitors.
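The manual SSH steps described above collapse into a short playbook; this sketch (the `webservers` group and the nginx package are illustrative) installs and starts a web server on every targeted machine:

```yaml
# playbook.yml – illustrative Ansible playbook
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory playbook.yml` applies the same steps to ten machines as easily as one, which is what eliminates the ‘special snowflake’ problem.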

8. Saltstack

What is it? Saltstack, much like Ansible, is a configuration management tool and remote execution engine. It is primarily designed to allow the management of infrastructure in a predictable and repeatable way. Saltstack was designed to manage large infrastructures with thousands of servers – the kind seen at LinkedIn, Wikipedia and Google.

What are the benefits of using Salt? Because Salt uses the ZeroMQ framework and serialises messages using msgpack, it achieves significant speed and bandwidth gains over traditional transport layers, and is thus able to push far more data more quickly through a given pipe. Getting set up is very simple, and someone new to configuration management can be productive before lunchtime.

Any problems with using Saltstack? Saltstack is considered to have weaker Web UI and reporting capabilities than some of its more mature competitors. It also lacks deep reporting capabilities. Some of these issues have been addressed in Saltstack Enterprise, but this may be out of budget for you.
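Salt describes desired state in YAML ‘state’ files; a minimal sketch (again using nginx purely as an example) looks like this:

```yaml
# /srv/salt/nginx.sls – illustrative Salt state
nginx:
  pkg.installed: []        # make sure the package is present
  service.running:
    - enable: True         # start the service at boot
    - require:
      - pkg: nginx         # package first, then service
```

Applying it with `salt '*' state.apply nginx` converges every targeted minion to that state at once, which is where the ZeroMQ transport’s speed pays off.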

9. Kubernetes

What is it? Kubernetes is an open-source container cluster manager by Google. It aims to provide a platform for automating deployment, scaling and operations of container clusters across hosts.

Why should I use it? Kubernetes is a system for managing containerised applications across a cluster of nodes. Kubernetes was designed to address some of the disconnect between the way that modern, clustered applications work, and the assumptions they make about some of their environments.

On the one hand, users shouldn’t have to care too much about where work is scheduled: work is presented at the service level and can be carried out by any of the member nodes. On the other hand, placement still matters, because a sysadmin will want to make sure that not all instances of a service are assigned to the same host. Kubernetes is designed to make these scheduling decisions easier.
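These concerns are expressed declaratively: you ask for a number of replicas and let the scheduler place them. A sketch of a Deployment (the names and image are illustrative) looks like this:

```yaml
# deployment.yaml – illustrative Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the scheduler spreads these across nodes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21   # illustrative container image
          ports:
            - containerPort: 80
```

If a node dies, Kubernetes reschedules the lost replicas elsewhere to keep the declared count, which is exactly the service-level view described above.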

10. Collectd

What is it? Collectd is a daemon that collects statistics on system performance, and provides mechanisms to store the values in different ways.

Why should I use collectd? Collectd helps you collect data about your servers and so make informed decisions. It’s often paired with tools like Graphite, which can render the data that collectd collects.

Collectd is an incredibly simple tool, and requires very few resources. It can even run on a Raspberry Pi! It’s also popular because of its pervasive modularity. It’s written in C, contains almost no code that would be specific to any operating system, and will therefore run on any Unix-like operating system.
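That modularity shows in the configuration, which is mostly a matter of loading plugins. A sketch of a `collectd.conf` fragment that gathers CPU and memory stats and ships them to a Graphite host (the hostname is illustrative) looks like this:

```conf
# collectd.conf – illustrative fragment
LoadPlugin cpu
LoadPlugin memory
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "example">
    Host "graphite.example.com"   # hypothetical Graphite server
    Port "2003"
    Protocol "tcp"
  </Node>
</Plugin>
```

Swapping the output plugin is all it takes to send the same metrics somewhere else.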

11. Git

What is Git? Git is the most widely used version control system in the world today.

An incredibly large number of products use Git for version control: from hobbyist projects to large enterprises, from commercial products to open source. Git is designed with speed, flexibility and security in mind, and is an example of a distributed version control system.

Should I use Git? Git is an incredibly impressive tool – combining speed, functionality, performance and security. When compared side by side to other SCM tools, Git often comes out ahead. Git has also emerged as a de facto standard, meaning that vast numbers of developers already have Git experience.

Why shouldn’t I use Git? Git has an initially steep learning curve. Its terminology can seem a little arcane to novices: revert, for instance, has a very different meaning in Git than it does in SVN and CVS. However, it rewards that investment with increased development speed once mastered.
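As a taste of the day-to-day workflow – including Git’s take on `revert`, which adds a new commit that undoes an earlier one rather than rewriting history – a self-contained session might look like this (the identity and file are purely illustrative):

```shell
set -e
repo=$(mktemp -d)                        # throwaway directory for the demo
cd "$repo"
git init --quiet
git config user.email "dev@example.com"  # hypothetical identity for the demo
git config user.name "Demo User"
echo "hello" > README.md
git add README.md
git commit --quiet -m "initial commit"
git revert --no-edit HEAD                # undo the last commit with a *new* commit
git log --oneline                        # history now shows both commits
```

Note that the original commit is still in the history after the revert, which is the behaviour that surprises users coming from other version control systems.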

12. Rudder

What is Rudder? Rudder is (yet another!) open source audit and configuration management tool that’s designed to help automate system config across large IT infrastructures.

What are the benefits of Rudder? Rudder allows users (even non-experts) to define parameters in a single console, and to check that IT services are installed, running and in good health. Rudder is useful for keeping configuration drift low, and managers are able to access compliance reports and audit logs. Rudder is built in Scala.

13. Gradle

What is it? Gradle is an open source build automation tool that builds upon the concepts of Apache Ant and Apache Maven and introduces a Groovy-based DSL instead of the XML form used by Maven.

Why use Gradle instead of Ant or Maven? For many years, build tools were simply about compiling and packaging software. Today, projects tend to involve larger and more complex software stacks, have multiple programming languages, and incorporate many different testing strategies. It’s now really important (particularly with the rise of Agile) that build tools support early integration of code as well as easy delivery to test and prod.

Gradle allows you to map out your problem domain using a domain specific language, which is implemented in Groovy rather than XML. Writing code in Groovy rather than XML cuts down on the size of a build, and is far more readable.
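A minimal `build.gradle` for a Java project shows how compact the Groovy DSL is next to the equivalent Maven XML (the JUnit version here is an illustrative choice):

```groovy
// build.gradle – minimal Java build, illustrative
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'junit:junit:4.12'   // example test dependency
}
```

Running `gradle build` then compiles, tests and packages the project with no further configuration.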

14. Chef

What is Chef? Chef is a config management tool designed to automate machine setup on physical servers, VMs and in the cloud. Many companies use Chef software to manage and control their infrastructure – including Facebook, Etsy and Indiegogo. Chef is designed to define Infrastructure as Code.

What is infrastructure as code? Infrastructure as Code means that, rather than manually changing and setting up machines, the machine setup is defined in a Chef recipe. Leveraging Chef allows you to easily recreate your environment in a predictable manner by automating the entire system configuration.

What are the next steps for Chef? Chef has released Chef Delivery, a tool for creating automated workflows around enterprise software development and establishing a pipeline from creation to production. Chef Delivery establishes a pipeline that every new piece of software should go through in order to prepare it for production use. Chef Delivery works in a similar way to Jenkins, but offers greater reporting and auditing capabilities.
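Recipes are plain Ruby; a sketch of one that converges a machine into a running web server (the nginx choice and the template name are illustrative) looks like this:

```ruby
# recipes/default.rb – illustrative Chef recipe
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'              # hypothetical template shipped with the cookbook
  notifies :reload, 'service[nginx]'   # reload only when the rendered file changes
end

service 'nginx' do
  action [:enable, :start]
end
```

Because the recipe declares end state rather than steps, running it against a fresh machine reproduces the environment predictably, which is the Infrastructure as Code idea in practice.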

15. Cobbler

What is it? Cobbler is a Linux provisioning server that facilitates a network-based system installation of multiple OSes from a central point using services such as DHCP, TFTP and DNS.

Cobbler can be configured for PXE, reinstallations and virtualised guests using Xen, KVM and Xenware. Cobbler also comes with a lightweight configuration management system, as well as support for integrating with Puppet.

16. SimianArmy

What is it? SimianArmy is a suite of tools designed by Netflix to support cloud operations. ChaosMonkey is part of SimianArmy, and is described as a ‘resiliency tool that helps applications tolerate random instance failures.’

What does it do? The SimianArmy suite of tools are designed to help engineers test the reliability, resiliency and recoverability of their cloud services running on AWS.

Netflix began the process of creating the SimianArmy suite of tools soon after they moved to AWS. Each ‘monkey’ is designed to help Netflix make its service less fragile and better able to support continuous service.

The SimianArmy includes:

  • Chaos Monkey – randomly shuts down virtual machines (VMs) to ensure that small disruptions will not affect the overall service.
  • Latency Monkey – simulates a degradation of service and checks to make sure that upstream services react appropriately.
  • Conformity Monkey – detects instances that don’t conform to best practices and shuts them down, giving the service owner the opportunity to re-launch them properly.
  • Security Monkey – searches out security weaknesses, and ends the offending instances. It also ensures that SSL and DRM certificates are not expired or close to expiration.
  • Doctor Monkey – performs health checks on each instance and monitors other external signs of process health such as CPU and memory usage.
  • Janitor Monkey – searches for unused resources and discards them.

Why use SimianArmy? SimianArmy is designed to make cloud services less fragile and more capable of maintaining continuous service when parts of them hit problems. By deliberately introducing failures, potential problems can be detected and addressed early.

17. AWS

What is it? AWS is a secure cloud services platform, which offers compute, database storage, content delivery and other functionality to help businesses scale and grow.

Why use AWS? EC2 is the most popular AWS service, and provides a very easy way for DevOps teams to run tests. Whenever you need one, you can spin up an EC2 server from a machine image and have it running in seconds.

EC2 is also great for scaling out systems. You can set up bundles of servers for different services, and when there is additional load on servers, scripts can be configured to spin up additional servers. You can also handle this automatically through Amazon auto-scaling.
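A scale-out script boils down to simple arithmetic: compare the load you are seeing against what your current fleet can handle. The capacity figure below is an assumption for illustration; the actual `aws` call is only hinted at in a comment:

```shell
# Decide how many instances are needed for the current load, using an
# assumed per-instance capacity, and report how many more to launch.
current_load=950   # requests per second arriving at the service
per_server=200     # assumed capacity of one EC2 instance
running=3          # instances currently running
needed=$(( (current_load + per_server - 1) / per_server ))  # ceiling division
if [ "$needed" -gt "$running" ]; then
  extra=$(( needed - running ))
  echo "Launch $extra more instance(s)"   # e.g. aws ec2 run-instances ...
fi
```

Amazon auto-scaling performs essentially this calculation for you, driven by CloudWatch metrics rather than hard-coded numbers.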

What are the downsides of AWS? The main downside of AWS is that all of your servers are virtual. There are options available on AWS for single tenant access, and different instance types exist, but performance will vary and never be as stable as physical infrastructure.

If you don’t need elasticity, EC2 can also be expensive at on-demand rates.

18. CoreOS

What is it? CoreOS is a Linux distribution that is designed specifically to solve the problem of making large, scalable deployments on varied infrastructure easy to manage. It maintains a lightweight host system, and uses containers to provide isolation.

Why use CoreOS? CoreOS is a barebones Linux distro. It’s known for having a very small footprint, built for “automated updates” and geared specifically for clustering.

If you’ve installed CoreOS on disk, it updates using two system partitions – one “known good” partition that you booted from, and another to which updates are downloaded. It then automatically reboots and switches to the updated partition.

CoreOS gives you a stack of systemd, etcd, Fleet, Docker and rkt with very little else. It’s useful for spinning up a large cluster where everything is going to run in Docker containers.

What are the alternatives? Snappy Ubuntu and Project Atomic offer similar solutions.

19. Grafana

What is Grafana? Grafana is a neat open source dashboard tool. It’s useful because it displays various metrics from Graphite through a web browser.

What are the advantages of Grafana? Grafana is very simple to set up and maintain, and displays metrics in a simple, Kibana-like style. In 2015, Grafana also released a SaaS component, Grafana.net.

You might wonder how Grafana differs from the ELK stack. While ELK is about log analytics, Grafana is more about time-series monitoring.

Grafana helps you maximise the power and ease of use of your existing time-series store, so you can focus on building nice looking and informative dashboards. It also lets you define generic dashboards through variables that can be used in metrics queries. This allows you to reuse the same dashboards for different servers, apps and experiments.

20. Chocolatey

What is Chocolatey? Chocolatey is apt-get for Windows. Once installed, you can install Windows applications quickly and easily using the command line. You could install Git, 7-Zip, Ruby, or even Microsoft Office! The catalogue is now incredibly complete – you really can install a wide array of apps using Chocolatey.

Why should I use Chocolatey? Because manual installs are slow and inefficient. Chocolatey promises that you can install a program (including dependencies, such as the .NET framework) without user intervention.

You could use Chocolatey on a new PC to run a simple command, and download and install a fully functioning dev environment in a few hours. It’s really cool.
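For instance, once Chocolatey itself is installed, setting up a machine from an elevated command prompt looks like this (the package names come from the public Chocolatey catalogue):

```shell
# Install individual tools, accepting any prompts automatically
choco install git -y
choco install 7zip -y
choco install ruby -y

# Or install several packages in one command
choco install git 7zip ruby -y
```

Put a list like this in a script and a fresh machine can be rebuilt the same way every time.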

21. Zookeeper

What is it? Zookeeper is a centralised service for maintaining configuration information, naming, providing distributed synchronisation, and providing group services. All of these services are used in one form or another by distributed applications.

Why use Zookeeper? Zookeeper is a co-ordination system for maintaining distributed services. It’s best to see Zookeeper as a giant properties file for different processes, telling them which services are available and where they are located. This post from the Engineering team at Pinterest outlines some possible use cases for Zookeeper.
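As a toy illustration of that “giant properties file” idea, here is a flat-file stand-in for a service registry. Real ZooKeeper stores this data in a replicated tree of znodes and notifies clients when entries change, which a flat file cannot do; the names and addresses below are invented:

```shell
# A flat-file stand-in for a service registry: name -> location
registry=$(mktemp)
cat > "$registry" <<'EOF'
search=10.0.0.12:9200
cache=10.0.0.7:6379
queue=10.0.0.9:5672
EOF

# Processes "ask" the registry where a service lives
lookup() { grep "^$1=" "$registry" | cut -d= -f2; }
lookup cache    # prints 10.0.0.7:6379
```

ZooKeeper’s value is doing this reliably across many machines: the registry stays consistent even when individual nodes fail.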

Where can I read more? Aside from Zookeeper’s documentation, which is pretty good, chapter 14 of “Hadoop: The Definitive Guide” devotes around 35 pages to describing in some detail what Zookeeper does.

22. GitHub

What is GitHub? GitHub is a web-based repository hosting service. It provides distributed revision control and source code management functionality.

At the heart of GitHub is Git, the version control system designed and developed by Linus Torvalds. Git, like any other version control system, is designed to track, manage and store revisions of products.

GitHub is a centralised repository system for Git, which adds a Web-based graphical user interface and several collaboration features, such as wiki and basic task management tools.

One of GitHub’s coolest features is “forking” – copying a repo from one user’s account to another. This allows you to take a project that you don’t have write access to, and modify it under your own account. If you make changes, you can send a notification called a “pull request” to the original owner. The user can then merge your changes with the original repo.
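The mechanics of that flow can be sketched locally with two ordinary Git repositories standing in for the upstream project and your fork; GitHub’s fork button and pull requests are a web front end over this same clone-and-merge pattern (paths and author details below are placeholders):

```shell
# "Upstream": the project you don't have write access to
cd "$(mktemp -d)"
git init -q upstream
git -C upstream -c user.email=you@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# "Fork": your own copy, which you are free to modify
git clone -q upstream fork
echo "my fix" > fork/patch.txt
git -C fork add patch.txt
git -C fork -c user.email=you@example.com -c user.name=demo \
    commit -qm "add a fix"

# The upstream owner "merges the pull request" by pulling your branch
branch=$(git -C fork symbolic-ref --short HEAD)
git -C upstream -c user.email=you@example.com -c user.name=demo \
    pull -q ../fork "$branch"
```

After the pull, the upstream repository contains your change – which is exactly what clicking “Merge pull request” does on GitHub.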

23. Drone

What is it? Drone is a continuous integration platform, based on Docker and built in Go. Drone works with Docker to run tests, and also works with GitHub, GitLab and Bitbucket.

Why use Drone? The use case for Drone is much the same as any other continuous integration solution. CI is the practice of making regular commits to your code base, which means you build and test your code more frequently, speeding up the development process. Drone automates that building and testing for you.

How does it work? Drone pulls code from a Git repository, and then runs scripts that you define. Drone allows you to run any test suite, and will report back to you via email or indicate the status with a badge on your profile. Because Drone is integrated with Docker, it can support a huge number of languages including PHP, Go, Ruby and Python, to name just a few.
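Those scripts live in a `.drone.yml` file at the root of your repository. As a rough sketch, assuming the Drone 1.x pipeline format (the image and commands here are illustrative):

```yaml
kind: pipeline
type: docker
name: default

steps:
- name: test
  image: golang:1.12
  commands:
  - go vet ./...
  - go test ./...
```

Because each step runs inside the Docker image you name, switching language is just a matter of switching the image.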

24. PagerDuty

What is it? PagerDuty is an alarm aggregation and monitoring system used predominantly by support and sysadmin teams.

How does it work? PagerDuty allows support teams to pull all of their incident reporting tools into a single place, and receive an alert when an incident occurs. Before PagerDuty came along, companies used to cobble together their own incident management solutions. PagerDuty is designed to plug into whatever monitoring systems a company already uses, and manage incident reporting from one place.

Anything else? PagerDuty provides detailed metrics on response and resolution times too.

25. Dokku

What is it? Dokku is a mini-Heroku, running on Docker.

Why should I use it? If you’re already deploying apps the Heroku way, but don’t like the way that Heroku is getting more expensive for hobbyists, running Dokku on a provider such as DigitalOcean could be a great solution.

Having the ability to push a site to a remote with Git and have it deployed immediately is a huge boon. Here’s a tutorial for getting it up and running.
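The deploy itself follows the familiar Heroku pattern of pushing to a Git remote. Assuming a Dokku host at `dokku.example.com` with an app named `myapp` already created (both placeholders here):

```shell
# Add the Dokku server as a Git remote for your project
git remote add dokku dokku@dokku.example.com:myapp

# Pushing triggers the build and deploy on the server
git push dokku master
```

Dokku detects the app type, builds it in a container, and swaps it into service, much as Heroku would.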

26. OpenStack

What is it? OpenStack is free and open source software for cloud computing, which is mostly deployed as Infrastructure as a Service.

What are the aims of OpenStack? OpenStack is designed to help businesses build Amazon-like cloud services in their own data centres.

OpenStack is a cloud OS designed to control large pools of compute, storage and networking resources throughout a datacentre. Everything is managed through a dashboard that gives administrators control while also empowering users to provision resources themselves.

27. Sublime Text

What is it? Sublime Text is a cross-platform source code editor with a Python API. It supports many different programming languages and markup languages, and has extensive code highlighting functionality.

What’s good about it? Sublime Text is full-featured, stable, and continuously developed. It is also built from the ground up to be extremely customisable (with a great plugin architecture, too).

28. Nagios

What is it? Nagios is an open source tool for monitoring systems, networks and infrastructure. Nagios provides alerting and monitoring services for servers, switches, applications and services.

Why use Nagios? Nagios’ main strengths are that it is open source, relatively robust and reliable, and highly configurable. It has an active development community, and runs on many different kinds of operating systems. You can use Nagios to monitor services such as DHCP, DNS, FTP, SSH, Telnet, HTTP, NTP, POP3, IMAP, SMTP and more. It can also be used to monitor database servers such as MySQL, Postgres, Oracle and SQL Server.
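Checks are declared in plain-text object definitions. A minimal sketch of a service check looks something like the following; the host name is a placeholder, and real configurations inherit most settings from templates:

```
# Check that HTTP responds on an already-defined host
define service {
    use                   generic-service   ; inherit defaults from a template
    host_name             web1
    service_description   HTTP
    check_command         check_http
}
```

Nagios ships with a large library of such check commands, and the plugin ecosystem adds hundreds more.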

Has it had any criticism? Nagios has been criticised as lacking scalability and usability. However, Nagios is stable and its limitations and problems are well-known and understood. And certainly some, including Etsy, are happy to see Nagios live on a little longer.

29. Spinnaker

What is it? Spinnaker is an open-source, multi-cloud CD platform for releasing software changes with high velocity and confidence.

What’s it designed to do? Spinnaker was designed by Netflix as the successor to its “Asgard” project. Spinnaker is designed to allow companies to hook into and deploy assets across multiple cloud providers at the same time.

What’s good about it? It’s battle-tested on Netflix’s infrastructure, and allows the creation of pipelines that begin with a deployable asset (say a Docker image or a jar file) and end with a deployment. Spinnaker works out of the box, and engineers can create and re-use pipelines across different workflows.

30. Flynn

What is it? Flynn is one of the most popular open source Docker PaaS solutions. Flynn aims to provide a single platform that Ops can provide to developers to power production, testing and development, freeing developers to focus on their applications.

Why should you use Flynn? Flynn is an open source PaaS built from pluggable components that you can mix and match however you want. Out of the box, it works in a very similar way to Heroku, but you are able to replace pieces and put whatever you need into Flynn.

Is Flynn production-ready? The Flynn team correctly point out that “production ready” means different things to different people. As with many of the tools in this list, the best way to find out whether it’s a fit for you is to try it!

If you’re interested in learning more about DevOps or specific DevOps tools, why not take a look at our Training pages?

We offer regular Introduction to DevOps courses, and have a number of upcoming Jenkins training courses.

Jason Man
ECS Digital attends Jenkins World


Last week, ECS Digital attended Jenkins World 2017, providing certification training. While there, we also attended several informative talks, demonstrations and the expo. 

The first two days at Jenkins World provided attendees with many courses to help improve their Jenkins knowledge. These courses included:

  • Jenkins Certification training
  • Fundamentals of Jenkins Pipeline and Docker training
  • Fundamentals of CloudBees Jenkins Enterprise
  • And many more…

ECS Digital provided Jenkins Certification Training to 30 students with backgrounds ranging from Development to Senior Management. The course comprised one of our instructors teaching the fundamental knowledge needed for the certification, plus labs to give students a hands-on way of learning Jenkins. These labs covered configuring agent nodes, pipelines, CloudBees functionality and more.

Jenkins Certification Training

The Keynotes

The following two days were dedicated to keynotes, presentations, demos and the expo. The opening keynote featured Kohsuke Kawaguchi, who built on this year’s theme of “Transform” by explaining how Jenkins has evolved since its inception to where it is today, focusing on the move from jobs to Declarative Pipelines, as well as Blue Ocean, the new Jenkins UX currently in beta. Sacha Labourey was then welcomed on stage to discuss future updates to Jenkins and how Jenkins is changing the DevOps landscape.

Multiple new features and services were unveiled as part of the keynote including:

  • CloudBees DevOptics
  • CloudBees Jenkins Advisor Service
  • New UI for CloudBees Jenkins Enterprise

Credit: https://twitter.com/kohsukekawa

CloudBees DevOptics

One of the biggest announcements at Jenkins World was CloudBees DevOptics. This solution aims to provide organisations with a consolidated overview of their end-to-end application delivery stream. It achieves this by collecting data from all pipelines associated with a project or team and displaying it as a live view. Some benefits of this are:

  • Allows users to see where issues are stuck and identify bottlenecks, which decreases time to delivery
  • Provides teams with the ability to see the who, what, where and when for each issue, which also decreases the time needed to progress stuck issues
  • Metrics for the entire delivery process can be collected, allowing users to identify where resource usage could be improved.

You can find out more about CloudBees DevOptics here.

CloudBees Jenkins Advisor Service

Another new service unveiled this year was CloudBees Jenkins Advisor. This product continuously analyses your Jenkins environment, identifies potential issues and provides advice on how to fix them, helping companies catch and fix issues before they have a major impact on business-critical pipelines.

More information about this new product can be found here.

The Conference

Jenkins World provided attendees with a place to network and see what other companies have to offer that could benefit their organisation. In addition to the booths, Jenkins World provided many technical demonstrations at the sponsor theatre and the Jenkins project booth. These demonstrations included:

  • Delivery Pipelines with Jenkins
  • Securing a Jenkins Instance
  • Docker Based Build Executor Agents

We’d like to thank everyone who came and spoke to us at the event and attended our training sessions. We hope you had a great time.

Jenkins World 2018 will be returning to the Marriott Marquis in San Francisco from September 16 – 19, 2018 and ECS Digital will be attending. We look forward to seeing you there! 

If you’re interested in our Jenkins training courses, follow this link. We offer regular User and Admin Jenkins training.

Billy Michael
Jenkins: The issues around selecting, not investing, in a tool


Over recent years, Jenkins has built a reputation as a very reliable scheduling build system. With 1360 plugins at the time of this writing, it’s currently one of the most popular build automation tools. However, its increasing popularity has seen a broader range of teams, with varying degrees of technical skill, adopting Jenkins as their driver to Continuous Delivery.

Due to their lack of technical knowledge, these teams require a more bespoke user interface. Enter Blue Ocean, a plugin that introduces a new user interface that makes Continuous Delivery more accessible to this new audience, without sacrificing any of the power of Jenkins.

However, Jenkins is used very differently in each organisation and many businesses find that they are unable to get the results they expect Jenkins to deliver.

Identifying the challenges

ECS Digital has implemented and optimised Jenkins in many leading organisations. In our experience, issues appear when businesses fail to invest in the processes that enable them to get the most out of the tool.

“The tool selection process can be long and protracted. So, when a business settles on a tool, the temptation is to begin using it as soon as possible” says ECS Digital Founder and Managing Director, Andy Cureton.

By neglecting to invest in the infrastructure to support Jenkins and failing to implement best practices for usage and training, businesses are diminishing their chances of successful implementation.

Examples of challenges that businesses could face when selecting to implement Jenkins, rather than investing in it as a whole, are as follows:

  • Updating and upgrading – Challenges with routine updates as well as major ones, like migrating from Jenkins 1.x to Jenkins 2.0. A common issue is the reliance on many plugins: if a plugin isn’t compatible with the new version, it can break entire pipelines.
  • Scaling – Issues with scaling or availability. A master with 4 build agents might cope with 100 builds a day, but might not be adequate for 1,000 builds a day.
  • Pipelines – Challenges with implementing pipeline as code. It is easy to overload pipelines, which puts all the load on the masters.
  • Integrations – Connecting other tools and services in a way that matches company policies, and enables controlled access to sensitive environments.
  • Performance – Hitting an internal roadblock, or performance that falls short of long-term business goals.
  • Personalisation – How to build a CI pipeline tailored to your system.
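Many of the pipeline challenges above start with how the Jenkinsfile itself is written. As a minimal sketch of declarative pipeline as code (the stage names and shell steps are illustrative, not from any particular client engagement):

```groovy
// A minimal declarative Jenkinsfile: each stage runs on an agent,
// keeping build work off the master.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
}
```

Keeping heavy steps inside `sh` calls that execute on agents, rather than in Groovy logic evaluated on the master, is one of the simplest ways to avoid the overload problem described above.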

Fixing the problem

Making sure that your implementation is architected to support your business from the outset is a key way to avoid the above issues. Here are two real examples that illustrate not only the businesses’ challenges, but the solutions we implemented to overcome them:

Example 1: We were engaged to help alleviate the growing pains of a Developer Services Solutions team within a global, UK based financial institution. Due to massive growth in the number of Jenkins users, the team struggled to deliver more Masters and Slaves, a manual process, while still working on day-to-day tasks.

To solve this problem, we automated the installation and configuration of CloudBees Jenkins Masters and Slaves, so that the Jenkins service could be built end-to-end in a matter of minutes. This meant that new teams and users could be on-boarded much quicker, while also ensuring the same configuration would be used on the entire Jenkins estate. This delivered a more stable and predictable environment for the consumers and the maintainers of the service.

Example 2: We were engaged by another leading UK financial institution to assist with an issue around the creation of jobs in Jenkins. Upon arrival, we found a monolithic Jenkins instance running around 6,000 jobs. After closer investigation, we discovered that over 5,000 of these had been created by a single individual, an issue caused by a lack of understanding of up-to-date best practice use of features and processes.

To solve the issue, we identified configuration and installation practices that were outdated, and introduced a new roadmap to the team, showcasing best practices and new processes to follow. We also introduced a new feature to assist with increasing workflow output. This allowed the tech team lead to create seven templated jobs, generated from two initial seed jobs, replacing the 5,000+ existing jobs.

Coming to a conclusion

You get out of a tool what you put into it, so it’s important to take the time to secure the processes around Jenkins. As Andy Cureton says, “a process that works for 10 people may also work for 100, start creaking for 1,000 and completely fail for 10,000”.

As part of our Jenkins Health check service, we spend time on site to understand your system to help you fix and optimise Jenkins, so you can get the results you expected.


Enjoyed reading this? Get in touch with our team today.

Jason Man
How to go from good to great with Jenkins CI and ECS Digital


Jenkins CI is probably the most widely-used Continuous Integration platform in use today. Started in 2004 under the name Hudson, the platform quickly grew into one of the world’s most-loved Open Source build servers, and was renamed Jenkins after the community forked the project away from Oracle. Today, Jenkins enjoys one of the most dedicated and active Open Source communities, with contributors from all around the world consistently adding new features, plugins and capabilities to an already robust software platform.

But for all the richness of features that Jenkins CI provides, many users stick to the bare minimum and don’t get as much value as they could – overlooking features such as Jenkins’ open source Pipeline plugin. In this blog, we’ll look at the difference between good and great use of Jenkins CI, and how ECS Digital can help you get the most out of your Continuous Integration software.

Jenkins CI provides an intelligent CI platform – are you making use of it?

First off, it’s worth mentioning that not every CI pipeline needs all the bells and whistles attached – if a basic pipeline is all you need to ensure your software service is delivered on time and to your users’ expectations, you’re already making good use of your software. That being said, virtually every average CI pipeline stands to benefit from a more intelligent CI, even if it’s largely a means of shortening development windows or running more reliable tests. Many organisations use Jenkins as a glorified cron job that runs static commands at predefined times, rather than making the most of the thousands of available plugins and features. The real power of Jenkins CI is its ability to act as an intelligent platform that understands how your software development journey fits together, ensures the output is of the highest quality, and keeps the necessary tasks ticking over in the way that works best for your organisation.

Developers all around the world – including some members of ECS Digital – contribute plugins that make it easy to customise and optimise Jenkins CI for particular needs. There are also a number of plugins that add cross-software support, such as the Docker/Jenkins plugins released in 2015. In this sense, Jenkins becomes much more than a CI tool – by centralising parts of the delivery and deployment pipelines, Jenkins becomes the roadmap and orchestrator for your entire software development journey.

What is the best way to become a Jenkins Jedi?

There’s only so much that you can read about getting the most out of Jenkins CI – for an in-depth understanding of the way the software works, and how to use its advanced features and plugins, it’s essential to have practical, real-world experience. There are a number of platforms for online Jenkins training, as well as some substantial forums, videos and podcasts that discuss best practices for creating Jenkins pipelines. But being walked through a practical example, with the opportunity to question one of our Certified CloudBees Jenkins Platform Engineers should you have any difficulty, makes Jenkins training courses a far more beneficial option. For more about the benefits of hands-on DevOps training, read our previous blog on the subject.

ECS Digital offers regular Jenkins training courses, ranging from basic introductory classes and general Jenkins best practices in our User course, to managing complex workflows and using Jenkins’ more advanced features in our Admin course. Our courses are a 50/50 split between theory and practical skills, which gives attendees a holistic understanding of how to build a good CI pipeline, and our experienced course instructors work on a one-on-one basis to ensure you get the value you need.

With over 12 years’ experience helping enterprises around the world deliver software faster and at a lower cost through the adoption of DevOps and Continuous Delivery practices, ECS Digital is the perfect DevOps training partner for anybody looking to grow their understanding of DevOps and develop their skills in a variety of software platforms. For more information, or to book a training course, view our upcoming courses by following the link below.

Image credit: tamaramccleary.com

Andy Cureton