30 DevOps Tools You Could Be Using


As a DevOps consultancy, we spend a lot of time thinking about and evaluating DevOps tools.

There are a number of different tools that form part of our DevOps workbench, and we base our evaluation on years of experience in IT, working with complex, heterogeneous technology stacks.

We’ve found that DevOps tooling has become a key part of our tech and operations. We take a lot of time to select and improve our DevOps toolset. The vast majority of tools that we use are open source. By sharing the tools that we use and like, we hope to start a discussion within the DevOps community about what further improvements can be made.

We hope that you enjoy browsing through the list below.

You may already be well acquainted with some of the tools, and some may be newer to you.

1. Puppet

What is it? Puppet is designed to provide a standard way of delivering and operating software, no matter where it runs. Puppet has been around since 2005 and has a large, mature ecosystem, which has evolved to make it one of the best-in-breed infrastructure automation tools that can scale. It is backed and supported by a highly active open source community.

Why use Puppet? Planning ahead and using config management tools like Puppet can cut down on the amount of time you spend repeating basic tasks, and help ensure that your configurations are consistent, accurate and repeatable across your infrastructure. Puppet is one of the most mature tools in this area and has an excellent support backbone.

What are the problems with Puppet? The learning curve is quite steep for those who are unfamiliar with Puppet, and its Ruby-based DSL may seem unfamiliar to users who have no development experience.

2. Vagrant

What is it? Vagrant – a tool from HashiCorp – provides easy-to-configure, easily reproducible and portable work environments that are built on top of industry-standard technology. Vagrant helps enforce a single consistent workflow, maximising flexibility for you and your team.

Why use Vagrant? Vagrant provides operations engineers with a disposable environment and consistent workflow for developing and testing infrastructure management scripts. Vagrant can be downloaded and installed within minutes on Mac OS X, Linux and Windows.

Vagrant allows you to create a single file for your project – the Vagrantfile – to define the kind of machine you want to create, the software that needs to be installed, and the way you want to access the machine.

Are there any problems with Vagrant? Vagrant has been criticised as being painfully, troublingly slow.

3. ELK Stack

What is ELK? The ELK stack actually refers to three technologies – Elasticsearch, Logstash and Kibana. Elasticsearch is a NoSQL database that is based on the Lucene search engine, Logstash is a log pipeline tool that accepts inputs from different sources and exports the data to various targets, and Kibana is a visualisation layer for Elasticsearch. And they work very well together.
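To make the pipeline idea concrete, here is a rough Python sketch of the kind of transformation Logstash performs – parsing a raw log line into a structured event that could then be indexed by Elasticsearch. The log line and field names are invented for illustration; real Logstash uses grok patterns and a rich plugin system.

```python
import json
import re

# A hypothetical Apache-style access log line, as Logstash might receive it.
line = '203.0.113.9 - - [12/May/2016:10:05:03 +0000] "GET /index.html HTTP/1.1" 200 2326'

# Grok-style parsing, reduced here to a plain regular expression.
pattern = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)'
)

event = pattern.match(line).groupdict()
event["status"] = int(event["status"])
event["bytes"] = int(event["bytes"])

# The structured event is what would be shipped on to Elasticsearch.
print(json.dumps(event, indent=2))
```

Once events are structured like this, Kibana can aggregate and visualise them by any field.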

What are its use cases? Together they’re often used for log analysis in IT environments (although you can also use the ELK stack for BI, security, compliance and analytics.)

Why is it popular? ELK is incredibly popular. The stack is downloaded 500,000 times every month. This makes it the world’s most popular log management platform. SaaS and web startups in particular are not overly keen to stump up for enterprise products such as Splunk. In fact, there’s an increasing amount of discussion as to whether open source products are overtaking Splunk, with many seeing 2014 as a tipping point.

4. Consul.io

What is Consul.io? Consul is a tool for discovering and configuring services in your infrastructure. It can be used to present nodes and services in a flexible interface, allowing clients to have an up-to-date view of the infrastructure they’re part of.

Why use Consul.io? Consul.io comes with a number of features for providing consistent information about your infrastructure. Consul provides service and node discovery, tagging, health checks, consensus based election routines, key value storage and more. Consul allows you to build awareness into your applications and services.

Anything else I should know? HashiCorp has a really strong reputation within the developer community for releasing excellent documentation with its products, and Consul.io is no exception. Consul is distributed, highly available, and datacentre aware.

5. Jenkins

What is Jenkins? Everyone loves Jenkins! Jenkins is an open source CI tool, written in Java. CI is the practice of automatically running tests on a non-developer machine every time someone pushes code into a source repo, and a tool like Jenkins is generally considered a prerequisite for it.

Why would I want to use Jenkins? Jenkins helps automate a lot of the work of frequent builds, allows you to resolve and detect issues quickly, and also reduce integration costs because serious integration issues become less likely.
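The core loop of any CI server can be sketched in a few lines of Python. This is not how Jenkins is implemented – it is just an illustration of the "run each stage, stop at the first failure" behaviour that a build pipeline automates. The stage names and commands are made up.

```python
import subprocess
import sys

# A hypothetical build pipeline: each stage is a command to run.
stages = [
    ("unit tests", [sys.executable, "-c", "assert 1 + 1 == 2"]),
    ("lint",       [sys.executable, "-c", "print('lint ok')"]),
]

def run_pipeline(stages):
    """Run each stage in order; report the first failure, if any."""
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"FAILED at {name}"
    return "SUCCESS"

print(run_pipeline(stages))  # SUCCESS when every stage exits cleanly
```

A real CI server adds triggers (on every push), workspaces, history, and notifications around this loop.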

Any problems with Jenkins? Jenkins configuration can be tricky. The Jenkins UI has evolved over many years without a guiding vision – and it has arguably become more complex. It has been compared unfavourably to more modern tools such as Travis CI (which of course isn’t open source).

6. Docker

What is it? There was a time last year when it seemed that all anyone wanted to talk about was Docker. Docker provides a portable application environment which enables you to package an application, together with its dependencies, into a single unit – a container – for development and deployment.

Should I use it? Depending on who you ask, Docker is either the next big thing in software development or a case of the emperor’s new clothes. Docker has some neat features, including DockerHub, a public repository of Docker containers, and docker-compose, a tool for managing multiple containers as a unit on a single machine.

It’s been suggested that Docker can be a way of reducing server footprint by packing containers onto physical tin without running a separate kernel per workload – but equally Docker’s security story is a hot topic. Docker’s UI also continues to improve – Docker has just released new Mac and Windows clients.

What’s the verdict? Docker can be a very useful technology – particularly in development and QA – but you should think carefully about whether you need or want to run it in production. Not everyone needs to operate at Google scale.

7. Ansible

What is it? Ansible is a free platform for configuring and managing servers. It combines multi-node software deployment, task execution and configuration management.

Why use Ansible? Configuration management tools such as Ansible are designed to automate away much of the work of configuring machines.

Manually configuring machines via SSH, and running the commands you need to install your application stack, editing config files, and copying application code can be tedious work, and can lead to each machine being its own ‘special snowflake’ depending on who configured it. This can compound if you are setting up tens, or thousands of machines.
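The key idea config management tools use to avoid "special snowflakes" is idempotent, desired-state execution: declare what a file should contain, and only touch it if reality differs. Here is a minimal Python sketch of that pattern – the file name and setting are illustrative, and real Ansible modules are far more capable.

```python
import tempfile
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Make sure the file contains the line; return True only if a change was made."""
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False            # already compliant - running again is a no-op
    path.write_text("\n".join(existing + [line]) + "\n")
    return True

config = Path(tempfile.mkdtemp()) / "sshd_config"
print(ensure_line(config, "PermitRootLogin no"))  # True  - drift corrected
print(ensure_line(config, "PermitRootLogin no"))  # False - nothing to do
```

Because every run converges on the same declared state, it no longer matters who configured the machine or how many times the playbook has run.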

What are the problems with using Ansible? Ansible is considered to have a fairly weak UI. Tools such as Ansible Tower exist, but many consider them a work in progress, and using Ansible Tower drives up the TCO of using Ansible.

Ansible also has no notion of state – it just executes a series of tasks in order, stopping when it finishes, fails, or encounters an error. Ansible has also been around for less time than Chef and Puppet, meaning that it has a smaller developer community than some of its more mature competitors.

8. SaltStack

What is it? SaltStack, much like Ansible, is a configuration management tool and remote execution engine. It is primarily designed to allow the management of infrastructure in a predictable and repeatable way. SaltStack was designed to manage large infrastructures with thousands of servers – the kind seen at LinkedIn, Wikipedia and Google.

What are the benefits of using Salt? Because Salt uses the ZeroMQ framework and serialises messages using msgpack, it is able to achieve significant speed and bandwidth gains over traditional transport layers, and is thus able to fit far more data more quickly through a given pipe. Getting set up is very simple, and someone new to configuration management can be productive before lunchtime.

Any problems with using SaltStack? SaltStack is considered to have a weaker web UI and weaker deep reporting capabilities than some of its more mature competitors. Some of these issues have been addressed in SaltStack Enterprise, but this may be out of budget for you.

9. Kubernetes

What is it? Kubernetes is an open-source container cluster manager by Google. It aims to provide a platform for automating deployment, scaling and operations of container clusters across hosts.

Why should I use it? Kubernetes is a system for managing containerised applications across a cluster of nodes. Kubernetes was designed to address the disconnect between the way that modern, clustered applications work and the assumptions those applications make about their environments.

On the one hand, users shouldn’t have to care too much about where work is scheduled – work is presented at the service level and can be carried out by any of the member nodes. On the other hand, placement does matter: a sysadmin will want to make sure that not all instances of a service are assigned to the same host. Kubernetes is designed to make these scheduling decisions easier.
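As a rough illustration of that scheduling decision, here is a toy Python scheduler that places each new instance on the node currently running the fewest instances, so they spread across hosts rather than piling up on one. All names are invented, and real Kubernetes scheduling weighs many more constraints (resources, affinity rules, taints, and so on).

```python
from collections import Counter

def schedule(pods, nodes):
    """Assign each pod to the least-loaded node (a naive spreading strategy)."""
    placement = {}
    per_node = Counter()
    for pod in pods:
        target = min(nodes, key=lambda n: per_node[n])  # node with fewest pods so far
        placement[pod] = target
        per_node[target] += 1
    return placement

pods = ["web-1", "web-2", "web-3", "web-4"]
nodes = ["node-a", "node-b"]
print(schedule(pods, nodes))  # the four instances end up spread 2 and 2
```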

10. Collectd

What is it? Collectd is a daemon that collects statistics on system performance, and provides mechanisms to store the values in different ways.

Why should I use collectd? Collectd helps you collect and visualise data about your servers, and thus make informed decisions. It’s useful for working with tools like Graphite, which can render the data that collectd collects.

Collectd is an incredibly simple tool, and requires very few resources. It can even run on a Raspberry Pi! It’s also popular because of its pervasive modularity. It’s written in C, contains almost no code that would be specific to any operating system, and will therefore run on any Unix-like operating system.
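Collectd’s core job can be sketched in a few lines of Python: periodically sample a system metric and hand the reading to a writer (here just a list in memory; real collectd would forward it to a backend such as Graphite). This is an illustration of the pattern only, not collectd’s actual implementation.

```python
import os
import time

samples = []
for _ in range(3):
    load1, load5, load15 = os.getloadavg()   # 1/5/15-minute load averages (Unix only)
    samples.append((time.time(), load1))     # timestamped reading for the "writer"
    time.sleep(0.1)                          # a real daemon polls on a longer interval

print(len(samples), "samples collected")
```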

11. Git

What is Git? Git is the most widely used version control system in the world today.

An incredibly large number of products use Git for version control: from hobbyist projects to large enterprises, from commercial products to open source. Git is designed with speed, flexibility and security in mind, and is an example of a distributed version control system.

Should I use Git? Git is an incredibly impressive tool – combining speed, functionality, performance and security. When compared side by side to other SCM tools, Git often comes out ahead. Git has also emerged as a de facto standard, meaning that vast numbers of developers already have Git experience.

Why shouldn’t I use Git? Git has an initially steep learning curve. Its terminology can seem a little arcane to novices. Revert, for instance, has a very different meaning in Git than it does in SVN and CVS. However, Git rewards that investment with increased development speed once mastered.

12. Rudder

What is Rudder? Rudder is (yet another!) open source audit and configuration management tool that’s designed to help automate system config across large IT infrastructures.

What are the benefits of Rudder? Rudder allows users (even non-experts) to define parameters in a single console, and check that IT services are installed, running and in good health. Rudder is useful for keeping configuration drift low. Managers are also able to access compliance reports and access audit logs.  Rudder is built in Scala.

13. Gradle

What is it? Gradle is an open source build automation tool that builds upon the concepts of Apache Ant and Apache Maven and introduces a Groovy-based DSL instead of the XML form used by Maven.

Why use Gradle instead of Ant or Maven? For many years, build tools were simply about compiling and packaging software. Today, projects tend to involve larger and more complex software stacks, have multiple programming languages, and incorporate many different testing strategies. It’s now really important (particularly with the rise of Agile) that build tools support early integration of code as well as easy delivery to test and prod.

Gradle allows you to map out your problem domain using a domain specific language, which is implemented in Groovy rather than XML. Writing code in Groovy rather than XML cuts down on the size of a build, and is far more readable.

14. Chef

What is Chef? Chef is a config management tool designed to automate machine setup on physical servers, VMs and in the cloud. Many companies use Chef software to manage and control their infrastructure – including Facebook, Etsy and Indiegogo. Chef is designed to define Infrastructure as Code.

What is infrastructure as code? Infrastructure as Code means that, rather than manually changing and setting up machines, the machine setup is defined in a Chef recipe. Leveraging Chef allows you to easily recreate your environment in a predictable manner by automating the entire system configuration.

What are the next steps for Chef? Chef has released Chef Delivery, a tool for creating automated workflows around enterprise software development and establishing a pipeline from creation to production. Chef Delivery establishes a pipeline that every new piece of software should go through in order to prepare it for production use. Chef Delivery works in a similar way to Jenkins, but offers greater reporting and auditing capabilities.

15. Cobbler

What is it? Cobbler is a Linux provisioning server that facilitates a network-based system installation of multiple OSes from a central point using services such as DHCP, TFTP and DNS.

Cobbler can be configured for PXE, reinstallations and virtualised guests using Xen, KVM and VMware. Cobbler also comes with a lightweight configuration management system, as well as support for integrating with Puppet.

16. SimianArmy

What is it? SimianArmy is a suite of tools designed by Netflix to support cloud operations. Chaos Monkey is part of SimianArmy, and is described as a ‘resiliency tool that helps applications tolerate random instance failures.’

What does it do? The SimianArmy suite of tools are designed to help engineers test the reliability, resiliency and recoverability of their cloud services running on AWS.

Netflix began the process of creating the SimianArmy suite of tools soon after they moved to AWS. Each ‘monkey’ is designed to help Netflix make its service less fragile, and better able to support continuous service.

The SimianArmy includes:

  • Chaos Monkey – randomly shuts down virtual machines (VMs) to ensure that small disruptions will not affect the overall service.
  • Latency Monkey – simulates a degradation of service and checks to make sure that upstream services react appropriately.
  • Conformity Monkey – detects instances that aren’t coded to best-practices and shuts them down, giving the service owner the opportunity to re-launch them properly.
  • Security Monkey – searches out security weaknesses, and ends the offending instances. It also ensures that SSL and DRM certificates are not expired or close to expiration.
  • Doctor Monkey – performs health checks on each instance and monitors other external signs of process health such as CPU and memory usage.
  • Janitor Monkey – searches for unused resources and discards them.

Why use SimianArmy? SimianArmy is designed to make cloud services less fragile and more capable of supporting continuous service when parts of them encounter problems. By deliberately introducing failures, potential problems can be detected and addressed before they affect users.
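Chaos Monkey’s essential behaviour – terminate a random instance and confirm the service survives – can be sketched in Python. The instance names are invented, and "termination" here is just removal from a set; the real tool acts on live AWS instances.

```python
import random

random.seed(42)  # deterministic, for the sake of the example

instances = {"i-01", "i-02", "i-03", "i-04"}  # a hypothetical service's fleet

def unleash_monkey(pool):
    """Pick a random instance and 'terminate' it."""
    victim = random.choice(sorted(pool))
    pool.discard(victim)
    return victim

victim = unleash_monkey(instances)
assert len(instances) == 3      # the service should survive losing one instance
print(f"terminated {victim}, {len(instances)} instances remain")
```

Running exercises like this continuously is what forces a service to be designed for failure rather than merely hoping to avoid it.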

17. AWS

What is it? AWS is a secure cloud services platform, which offers compute, database storage, content delivery and other functionality to help businesses scale and grow.

Why use AWS? EC2 is the most popular AWS service, and provides a very easy way for DevOps teams to run tests. Whenever you need one, you can have an EC2 server built from a machine image up and running in minutes.

EC2 is also great for scaling out systems. You can set up bundles of servers for different services, and when there is additional load on servers, scripts can be configured to spin up additional servers. You can also handle this automatically through Amazon auto-scaling.
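The scale-out rule described above can be sketched as a simple decision function: when average load across the fleet crosses a threshold, add capacity; when it drops well below, remove it. The thresholds here are arbitrary illustrations, not AWS defaults, and real auto-scaling policies add cooldowns and minimum/maximum bounds.

```python
def desired_capacity(current: int, avg_cpu: float,
                     scale_up_at: float = 70.0, scale_down_at: float = 30.0) -> int:
    """Return how many servers the fleet should have, given average CPU load."""
    if avg_cpu > scale_up_at:
        return current + 1                     # load is high: scale out
    if avg_cpu < scale_down_at and current > 1:
        return current - 1                     # load is low: scale in, keep at least one
    return current                             # within the band: no change

print(desired_capacity(4, avg_cpu=85.0))  # 5
print(desired_capacity(4, avg_cpu=20.0))  # 3
print(desired_capacity(4, avg_cpu=50.0))  # 4
```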

What are the downsides of AWS? The main downside of AWS is that all of your servers are virtual. There are options available on AWS for single tenant access, and different instance types exist, but performance will vary and never be as stable as physical infrastructure.

If you don’t need elasticity, EC2 can also be expensive at on-demand rates.

18. CoreOS

What is it? CoreOS is a Linux distribution that is designed specifically to solve the problem of making large, scalable deployments on varied infrastructure easy to manage. It maintains a lightweight host system, and uses containers to provide isolation.

Why use CoreOS? CoreOS is a barebones Linux distro. It’s known for having a very small footprint, built for “automated updates” and geared specifically for clustering.

If you’ve installed CoreOS on disk, it updates by maintaining two system partitions – one “known good” because you last booted from it, and another that updates are downloaded to. It then automatically reboots and switches to the updated partition.

CoreOS gives you a stack of systemd, etcd, Fleet, Docker and rkt with very little else. It’s useful for spinning up a large cluster where everything is going to run in Docker containers.

What are the alternatives? Snappy Ubuntu and Project Atomic offer similar solutions.

19. Grafana

What is Grafana? Grafana is a neat open source dashboard tool. Grafana is useful because it displays various metrics from Graphite through a web browser.

What are the advantages of Grafana? Grafana is very simple to set up and maintain, and displays metrics in a simple, Kibana-like display style. In 2015, Grafana also released a SaaS component, Grafana.net.

You might wonder how Grafana differs from the ELK stack. While ELK is about log analytics, Grafana is more about time-series monitoring.

Grafana helps you maximise the power and ease of use of your existing time-series store, so you can focus on building nice looking and informative dashboards. It also lets you define generic dashboards through variables that can be used in metrics queries. This allows you to reuse the same dashboards for different servers, apps and experiments.
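The variable mechanism can be illustrated with ordinary string templating: one query definition, substituted per server. The Graphite-style metric path below is invented for the sketch; Grafana’s own templating is richer, but the idea is the same.

```python
from string import Template

# One templated panel query, reusable for any server.
query = Template("servers.$server.cpu.load")

for server in ["web-01", "web-02"]:
    print(query.substitute(server=server))
# servers.web-01.cpu.load
# servers.web-02.cpu.load
```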

20. Chocolatey

What is Chocolatey? Chocolatey is apt-get for Windows. Once installed, you can install Windows applications quickly and easily using the command line. You could install Git, 7-Zip, Ruby, or even Microsoft Office! The catalogue is now incredibly complete – you really can install a wide array of apps using Chocolatey.

Why should I use Chocolatey? Because manual installs are slow and inefficient. Chocolatey promises that you can install a program (including dependencies, such as the .NET framework) without user intervention.

You could use Chocolatey on a new PC to run a single command, and download and install a fully functioning dev environment in a few hours. It’s really cool.

21. Zookeeper

What is it? Zookeeper is a centralised service for maintaining configuration information, naming, providing distributed synchronisation, and providing group services. All of these services are used in one form or another by distributed applications.

Why use Zookeeper? Zookeeper is a co-ordination system for maintaining distributed services. It’s best to see Zookeeper as a giant properties file for different processes, telling them which services are available and where they are located. This post from the Engineering team at Pinterest outlines some possible use cases for Zookeeper.
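The "giant properties file" idea – registration, lookup, and notification when an entry changes – can be sketched with a tiny in-memory registry in Python. This mimics the pattern only; it is not ZooKeeper’s API, which additionally handles distribution, sessions and consensus across servers.

```python
class Registry:
    """A toy, in-process stand-in for a coordination service's key space."""

    def __init__(self):
        self._data, self._watchers = {}, {}

    def set(self, path, value):
        self._data[path] = value
        for callback in self._watchers.get(path, []):
            callback(path, value)              # notify interested processes

    def get(self, path):
        return self._data.get(path)

    def watch(self, path, callback):
        self._watchers.setdefault(path, []).append(callback)

reg = Registry()
events = []
reg.watch("/services/db", lambda p, v: events.append(v))
reg.set("/services/db", "10.0.0.5:5432")       # a service registers its address
print(reg.get("/services/db"), events)         # 10.0.0.5:5432 ['10.0.0.5:5432']
```

A client process would watch the paths it depends on and reconfigure itself whenever a service moves.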

Where can I read more? Aside from Zookeeper’s documentation, which is pretty good, chapter 14 of “Hadoop: The Definitive Guide” devotes around 35 pages to describing in some detail what Zookeeper does.

22. GitHub

What is GitHub? GitHub is a web based repository service. It provides distributed revision control and source control management functionality.

At the heart of GitHub is Git, the version control system designed and developed by Linus Torvalds. Git, like any other version control system, is designed to track, manage and store revisions of projects.

GitHub is a centralised repository system for Git, which adds a Web-based graphical user interface and several collaboration features, such as wikis and basic task management tools.

One of GitHub’s coolest features is “forking” – copying a repo from one user’s account to another. This allows you to take a project that you don’t have write access to, and modify it under your own account. If you make changes, you can send a notification called a “pull request” to the original owner. The user can then merge your changes with the original repo.

23. Drone

What is it? Drone is a continuous integration platform, based on Docker and built in Go. Drone works with Docker to run tests, and also works with GitHub, GitLab and Bitbucket.

Why use Drone? The use case for Drone is much the same as for any other continuous integration solution. CI is the practice of making regular commits to your code base, which means building and testing your code more frequently – and that in turn speeds up the development process. Drone does exactly this: it speeds up the process of building and testing.

How does it work? Drone pulls code from a Git repository, and then runs scripts that you define. Drone allows you to run any test suite, and will report back to you via email or indicate the status with a badge on your profile. Because Drone is integrated with Docker, it can support a huge number of languages including PHP, Go, Ruby and Python, to name just a few.

24. PagerDuty

What is it? PagerDuty is an alarm aggregation and monitoring system that is used predominantly by support and sysadmin teams.

How does it work? PagerDuty allows support teams to pull all of their incident reporting tools into a single place, and receive an alert when an incident occurs. Before PagerDuty came along, companies used to cobble together their own incident management solutions. PagerDuty is designed to plug into whatever monitoring systems a team is already using, and manage the incident reporting from one place.

Anything else? PagerDuty provides detailed metrics on response and resolution times too.

25. Dokku

What is it? Dokku is a mini-Heroku, running on Docker.

Why should I use it? If you’re already deploying apps the Heroku way, but don’t like the way that Heroku is getting more expensive for hobbyists, running Dokku on a provider such as DigitalOcean could be a great solution.

Being able to push a site to a remote, Heroku-style, and have it deployed immediately is a huge boon. Here’s a tutorial for getting it up and running.

26. OpenStack

What is it? OpenStack is free and open source software for cloud computing, which is mostly deployed as Infrastructure as a Service.

What are the aims of OpenStack? OpenStack is designed to help businesses build Amazon-like cloud services in their own data centres.

OpenStack is a cloud OS designed to control large pools of compute, storage and networking resources throughout a datacentre, managed through a dashboard that gives administrators control while also empowering users to provision resources.

27. Sublime Text

What is it? Sublime Text is a cross-platform source code editor with a Python API. It supports many different programming languages and markup languages, and has extensive code highlighting functionality.

What’s good about it? Sublime Text is feature-ful, it’s stable, and it’s being continuously developed. It is also built from the ground up to be extremely customisable (with a great plugin architecture, too).

28. Nagios

What is it? Nagios is an open source tool for monitoring systems, networks and infrastructure. Nagios provides alerting and monitoring services for servers, switches, applications and services.

Why use Nagios? Nagios’ main strengths are that it is open source, relatively robust and reliable, and highly configurable. It has an active development community, and runs on many different kinds of operating systems. You can use Nagios to monitor services such as DHCP, DNS, FTP, SSH, Telnet, HTTP, NTP, POP3, IMAP, SMTP and more. It can also be used to monitor database servers such as MySQL, Postgres, Oracle and SQL Server.

Has it had any criticism? Nagios has been criticised as lacking scalability and usability. However, Nagios is stable and its limitations and problems are well-known and understood. And certainly some, including Etsy, are happy to see Nagios live on a little longer.

29. Spinnaker

What is it? Spinnaker is an open-source, multi-cloud CD platform for releasing software changes with high velocity and confidence.

What’s it designed to do? Spinnaker was designed by Netflix as the successor to its “Asgard” project. Spinnaker is designed to allow companies to hook into and deploy assets across multiple cloud providers at the same time.

What’s good about it? It’s battle-tested on Netflix’s infrastructure, and allows the creation of pipelines that begin with the creation of some deployable asset (say a Docker image or a jar file), and end with a deployment. Spinnaker offers an out of the box setup, and engineers can make and re-use pipelines on different workflows.

30. Flynn

What is it? Flynn is one of the most popular open source Docker PaaS solutions. Flynn aims to provide a single platform that Ops can provide to developers to power production, testing and development, freeing developers to focus on their applications.

Why should you use Flynn? Flynn is an open source PaaS built from pluggable components that you can mix and match however you want. Out of the box, it works in a very similar way to Heroku, but you are able to replace pieces and put whatever you need into Flynn.

Is Flynn production-ready? The Flynn team correctly point out that “production ready” means different things to different people. As with many of the tools in this list, the best way to find out if it’s a fit for you is to try it!

If you’re interested in learning more about DevOps or specific DevOps tools, why not take a look at our Training pages?

We offer regular Introduction to DevOps courses, and have a number of upcoming Jenkins training courses.

Jason Man
Why you should invest in AWS Big Data & 8 steps to becoming certified


A decision that many engineers face at some point of their career is deciding what to focus their attention on next. One of the amazing advantages of working in a consultancy is being exposed to many different technologies, providing you the opportunity to explore any emerging trends you might be interested in. I’ve been lucky enough to work with a huge variety of clients ranging from industry leaders in the FTSE 100 to smaller start-ups disrupting the same technology space.

So why did I pick Big Data?

A common pattern I’ve noticed is that everyone has access to data – large amounts of raw, unstructured data. Business and technology leaders all recognise the importance of it, and the value and insight that it can deliver. Processes have been established to extract, transform and store this large amount of information, but the architecture is usually inefficient and incomplete.

Years ago these steps may have equated to an efficient data pipeline, but now, with emerging technologies such as Kinesis Streams, Redshift and even serverless databases, there is another way. We now have the possibility of a real-time, cost-efficient and low-operational-overhead solution.

Alongside this, companies are setting their sights on creating a data lake in the cloud. In doing so, they take advantage of a whole suite of technologies to store information in formats they currently leverage, and also in configurations they may harness in the future. These are all clear steps in the journey towards digital transformation, and with the current pace of development in AWS technologies it is the perfect time to become more acquainted with Big Data.

 

But why is the certification necessary?

The AWS Certified Big Data Speciality exam introduces and validates several key big data fundamentals. The exam itself is not just limited to AWS specific technologies but also explores the big data community. Taken straight from the exam guide we can see that the domains cover:

  1. Collection
  2. Storage
  3. Processing
  4. Analysis
  5. Visualization
  6. Data Security

These domains involve a broad range of technical roles ranging from data engineers and data scientists to individuals in SecOps. Personally, I’ve had some exposure to collection and storage of data but much less with regards to visualisation and security. You certainly have to be comfortable with wearing many different hats when tackling this exam as it tests not only your technical understanding of the solutions but also the business value created from the implementation. It’s equally important to consider the costs involved including any forecasts as the solution scales.

Having already completed several associate exams, I found this certification much more difficult, because you are required to deep-dive into Big Data concepts and the relevant technologies. One of the benefits of this certification is that its scope extends to how these technologies are applied to Big Data, so be prepared to dive into Machine Learning and popular frameworks like Spark & Presto.

 

Okay so how do I pass the exam?

1. A Cloud Guru’s certified big data specialty course provides an excellent introduction and overview.

2. Get some practical experience of Big Data in AWS – theoretical knowledge is not enough to pass this exam.

  1. Practice architecting data pipelines, consider when Kinesis Streams vs Firehose would be appropriate.
  2. Think about how the solution would differ according to the size of the data transfer, sometimes even Snowmobile can become efficient.

3. Understand the different storage options on AWS – S3, DynamoDB, RDS, Redshift, HDFS vs EMRFS, HBase…

4. Understand the differences and use cases of popular Big Data frameworks e.g. Presto, Hive, Spark. 

5. Data Security contributes the most to your overall exam score at 20%, and it features in every single AWS service. There are always options for making the solution more secure, and sometimes they’re enabled by default.

  1. Understand how to enable encryption at rest or in transit, whether to use KMS-managed or S3-managed keys, and client-side vs server-side encryption.
  2. How to grant privileged access to data e.g. IAM, Redshift Views.
  3. Authentication flows with Cognito and integrations with external identity providers.

6. Performance is a key theme

  1. Have a sound understanding of what GSIs (Global Secondary Indexes) and LSIs (Local Secondary Indexes) are in DynamoDB.
  2. Consider primary and sort keys, and distribution styles, in all of the database services.
  3. Different compression types and speed of compressing/decompressing.

7.  Dive into Machine learning (ML)

  1. The Cloud Guru course mentioned above gives a good overview of the different ML models.
  2. If you have time I would recommend this machine learning course by Andrew Ng on Coursera. The technical depth is lower-level than you will need for the exam, but it provides a very good introduction for a novice to the whole machine learning landscape.

8. Dive into Visualisation

  1. The A Cloud Guru course provides more than enough knowledge to tackle any questions here.
  2. Again if you have the time there’s an excellent data science course on Udemy which has a data visualisation chapter that would prove useful here.
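To make point 6 concrete: DynamoDB fetches items by their primary key, and a Global Secondary Index effectively maintains a second lookup keyed on a different attribute, so you can query the same table another way. Here is a plain-Python sketch of the idea – the table contents are invented, and a real GSI is maintained automatically by DynamoDB, with its own throughput and projection settings.

```python
# A hypothetical orders "table", keyed by its primary key (the order id).
orders = {
    "order-1": {"customer": "alice", "total": 40},
    "order-2": {"customer": "bob",   "total": 15},
    "order-3": {"customer": "alice", "total": 25},
}

# "Creating a GSI": build a second lookup keyed on another attribute,
# so queries like "all orders for alice" don't require a full scan.
by_customer = {}
for order_id, item in orders.items():
    by_customer.setdefault(item["customer"], []).append(order_id)

print(sorted(by_customer["alice"]))  # ['order-1', 'order-3']
```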

 

Exam prep

It can’t be emphasised enough that AWS themselves provide amazing resources for learning. As preparation for the exam, definitely watch re:Invent videos and read AWS blogs and case studies.

 

Watch these videos:

  1. AWS re:Invent 2017: Big Data Architectural Patterns and Best Practices on AWS 
  2. AWS re:Invent 2017: Best Practices for Building a Data Lake in Amazon S3 and Amazon
  3. AWS re:Invent 2016: Deep Dive: Amazon EMR Best Practices & Design Patterns  
  4. AWS Summit Series 2016 | Chicago – Deep Dive + Best Practices for Real-Time Streaming Applications 

 

Read these AWS blogs:

  1. Secure Amazon EMR with Encryption 
  2. Building a Near Real-Time Discovery Platform with AWS 

 

Whitepapers

  1. Streaming Data Solutions on AWS with Amazon Kinesis
  2. Big Data Analytics Options on AWS 
  3. Lambda Architecture for Batch and Real-Time Processing on AWS with Spark Streaming and Spark SQL 

 

Also read the developer guides for all of the Big Data services.

 

One last note…

This exam will expect you to consider each question from many different perspectives. You’ll need to think about not just the technical feasibility of the solution presented, but also the business value it can create. The majority of questions are scenario-specific, and often there is more than one valid answer; look for subtle clues to determine which solution is more ‘correct’ than the others, e.g. whether speed is a factor, or whether the question expects you to answer from a cost perspective.

Finally, this exam is very long (3 hours) and requires a lot of reading. I found that the time given was more than enough, but remember to pace yourself, otherwise you can get burnt out quite easily.

Hopefully my experience and tips will help you prepare for the exam. Let us know if they helped you.

Good Luck!!!

Visit our services to explore how we enable organisations to transform their internal cultures, to make it easier for teams to collaborate, and adopt practices such as Continuous Integration, Continuous Delivery, and Continuous Testing. 

ECS Digital

ECS Digital win twice at this year’s Computing DevOps Excellence awards

The ECS Digital team is extremely proud to have taken home not one, but two awards from last night’s Computing DevOps Excellence awards.

Voted ‘Best DevOps Consulting Firm’, the panel of judges recognised our contribution within the DevOps space, with over a decade of delivering successful projects across multiple industries, territories and technologies.

But the fun didn’t stop there. Our very own Michel Lebeau was named ‘Young DevOps Engineer of the Year’. This award is a tribute to his continued commitment to exceeding customers’ expectations, no matter how much effort and self-sacrifice is necessary.

Our diverse and highly-skilled team is the reason we maintain a leading position helping enterprises transform through the adoption of DevOps. These awards are testament to the team’s singular focus on helping our customers meet and exceed their goals through the adoption of modern ways of working and technology. Every customer is unique, and each project has challenges that require partnering in the true sense of the word.

I would like to congratulate everyone at ECS Digital on their win last night, and to thank both our customers and partners for making it possible!

Get in touch to find out how ECS Digital can help you.  

Andy Cureton

Alexa: Building Skills for the World of Tomorrow

We have all seen the TV ads where someone asks Alexa (Amazon’s personal assistant AI) to dim the lights or start playing ‘The Grand Tour’ on Prime Video, and this technology is growing larger and faster every day.

Most commercial technologies, like computers and the internet, started their lives in the hands of big businesses and large institutes that could afford the high initial R&D costs. The Amazon team have taken the opposite approach, opting for a small-scale, iterative expansion of the product.

By providing developers access to the Alexa development kit and opening the voice service to the public, Amazon have made Alexa development a straightforward, painless and rewarding process.

Amazon incentivises its cult following of open source developers by rewarding those who create great skills that others want to use. Amazon announced:

“Publish a new skill this month and get an Alexa water bottle to help you stay hydrated during your coding sessions. If more than 75 customers use your skill in its first 30 days in the Alexa Skills Store, you can also qualify to receive an Echo Dot to help you make Alexa even smarter. The skill with the most unique users within its first 30 days after publishing in February will also earn an Echo Spot.”

Vocal Skills Revolution

We should all remember the mobile app revolution, and the tremendous increase in the number of smartphone users experienced in global mobile app markets. A massive increase in the user base drove innovation, producing better mobile phones. Organised marketplaces for app downloads, timely updates and advanced app development platforms became the norm. Most significantly, some very useful and revolutionary apps have become part of our everyday lives. With the number of users almost doubling over the last five years, mobile app developers can reach more consumers than ever.

At ECS Digital, we believe Voice will experience the same type of growth as mobile applications did.

As consumers command more of their day-to-day lives using voice-controlled technologies, from smart TVs to Alexa-enabled electric cars, we can be safe in the knowledge that the voice revolution is coming and will change the way future generations interact with technology.

Alexa for Business

What is Alexa for Business?

Alexa for Business makes it easy for you to use Alexa in your organisation. Alexa for Business provides tools to manage Alexa devices, enrol users and configure skills across those devices. You can build your own context-aware voice skills using the Alexa Skills Kit (ASK) and conferencing device APIs, and you can make them available as private skills for your organisation.

What is an Alexa Skill?

Alexa is Amazon’s voice service and the brain behind tens of millions of devices like the Amazon Echo, Echo Dot, and Echo Show. It provides capabilities, or skills, that enable customers to create a more personalised experience. There are now tens of thousands of skills from companies like Starbucks, Uber, and Capital One as well as other innovative designers and developers.

Alexa Voice Service

The Alexa Voice Service (AVS) enables you to integrate Alexa directly into your products. We provide you with access to a suite of resources to quickly and easily build Alexa-enabled products, including APIs, hardware and software development tools, and documentation. With AVS, you can add a new intelligent interface to your products and offer your customers access to a growing number of Alexa features, smart home integrations, and skills.

What is the Alexa Skills Kit?

The Alexa Skills Kit (ASK) is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills. ASK enables designers, developers, and brands to build engaging skills and reach customers through tens of millions of Alexa-enabled devices. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.
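As a minimal sketch of what sits under an ASK-based skill (assuming a plain AWS Lambda handler and a hypothetical intent name, rather than the official SDK), a custom skill receives a JSON request and returns a JSON response in the Alexa format:

```python
def handle_alexa_request(event, context=None):
    """Minimal AWS Lambda handler for a custom Alexa skill, using the raw
    Alexa Skills Kit JSON request/response format (no SDK)."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        text = "Welcome. Ask me which meeting room is free."
    elif request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "RoomAvailabilityIntent":  # hypothetical intent name
            text = "The boardroom is free for the next hour."
        else:
            text = "Sorry, I don't know that one yet."
    else:  # e.g. SessionEndedRequest
        text = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```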

ECS Digital and Amazon Alexa

With Alexa for Business released in the US and coming to the rest of the world soon, we at ECS Digital have been using Alexa to increase productivity and enable innovation within the office. We have been working on a few different initiatives, for which we have coined the term OfficeOps.

Here are some of them:

Booking a meeting room

In a large consultancy, it can be difficult to know if a meeting room is free. Moreover, booking said room can be a complicated and confusing process. The answer: create an internal (dev) skill to track the availability of a room, who has it and for how long. This skill also allows users to book a room on the spot, letting our colleagues interact with the booking process by literally asking the room for a booking slot.

Interactive Training

As a fast-moving DevOps consultancy, ECS Digital are always looking for innovative ways to improve our skills. For a long time now, we have been using Alexa to learn new skills and brush up on existing ones by using her as a pop quiz master. Colleagues located in our London Bridge office can ask Alexa to test their knowledge about a technology, helping them to maintain a high level of competency.

Summary

All evidence suggests that voice is here to stay and will drive the next wave of technical innovation, both in business and at home, making those laborious everyday tasks a little easier and more futuristic. However, our assessment comes with a note: work still needs to be done in order to make voice the standard, but we are confident that changes will come swiftly.

Visit our services to explore how we enable organisations to transform their internal cultures, to make it easier for teams to collaborate, and adopt practices such as Continuous Integration, Continuous Delivery, and Continuous Testing. 

Morgan Atkins

The Psychology Behind Agile Practice and Better Communication

As the number of unfilled technology-based jobs increases, companies can no longer rely only on computer science graduates. They need to seek new sources of human capital – one being career changers. Employers of career changers, particularly in junior positions, often benefit from a number of transferable skills that might traditionally fall outside the average developer’s repertoire.

I am a career changer. The main transferable skill I bring with me from my previous life in sales and teaching is effective communication. This is a skill I have now honed and adapted to fit within the realms of the tech world – a world where the only common language is code.

But when I entered the world of software, there was a notable difference to my previous jobs. Although people were willing to talk about technical concepts to a newbie developer like me, there was no willingness to ensure I could understand what had just been said, which was hardly supportive or empowering.

It’s strikingly obvious to anyone looking from the outside that there is a massive communication issue within technology today. Developers need to be able to explain technical concepts in a way that is accessible and comprehendible, not just for the sake of career changers, but also for key stakeholders and other departments. Using layman’s terms will lead to greater transparency, clearer communication and a better understanding of technical issues – both within project teams, as well as at a business level.

The most successful organisations I’ve observed and worked with tend to think a bit deeper about how they can address these issues beyond the obvious agile processes and ceremonies. Their leaders are willing to innovate and even change their own behaviour with employees. These organisations also have certain processes in place which facilitate better communication and agile practice.

Below are some of their tried and tested approaches that you might want to use or encourage at your own organisation:

        1. Questions can take your organisation a mile

Often, those in technical roles can be so absorbed in their own work that they unintentionally encourage their peers to not ask questions. There is an assumption that the rest of us in similar roles are well versed in all technology and technical concepts – an unrealistic assumption given the changing nature of the industry. Differences in skill set and experience levels make it probable that your colleagues don’t fully understand everything that you’re trying to communicate. This assumed knowledge can also become an issue, especially for new employees who are trying to prove their worth in a new workplace. 

What’s worse (and I’m sure you’ll all agree) is when people respond by ‘filling in the gaps’ or just nodding emptily through the jargon. This will always affect the end product/performance of a team, which makes it so important to encourage questions – realising that individuals learn in different ways. While some people like reading independently, others might need somebody to break it down and explain concepts to them. 

This culture of blagging is actually advocated in some companies. New employees are forced to “fake it till they make it.” But why? Sure, you should expect someone working in tech to be a quick learner and know how to research. However, instead of assuming they know everything or have gained a perfect understanding from their own research, why not assign someone to clarify and confirm whether their current understanding is correct, explain the areas they’re unsure of, and generally just be helpful? The consequences of not encouraging questions are far more costly for the industry in the long run.

TLDR: Don’t make it difficult to ask questions and don’t encourage a culture of blagging. 

        2. Leave unwanted attitudes at the door

In technology-focused roles it’s not uncommon for the staff to be highly opinionated. Having a unique perspective or understanding should be encouraged – as long as it doesn’t turn into a way for you to look down on others.

If a new developer is paired with someone who thinks they’re right all the time and doesn’t invest time bringing the newcomer up to speed, it can be detrimental. It can lead to frustration from both sides and hinder the learning process significantly. This attitude can also lead to poor productivity on a particular project – leaving both employees with low morale or blaming each other for the end result.

Ultimately, developers shouldn’t look down on each other or non-technical individuals for not knowing. Instead, they should be concerned when someone doesn’t understand a certain concept and try their best to explain. Technology affects all areas of business and developers need to practice explaining even the simplest of concepts. Although difficult and seemingly arbitrary, the knock-on effect for business growth is huge. The more others understand conceptually how things work, the smarter questions they’ll ask and the more empowered they’ll feel to tackle issues on their own – helping to foster a culture of innovation and agile practice.

TLDR: Don’t look down on people that don’t know and don’t encourage a culture of snobbery.

        3. Two heads are better than one

Employees should be encouraged to learn from, teach and mentor one another. There are a number of ways to do this and facilitate communication. Many companies don’t understand the benefits of pairing and believe it can slow teams down. From my experience at Makers Academy, where we paired 100% of the time, I learnt much faster and the productivity levels of both parties increased dramatically.

I believe this is because it’s no longer a solo mission; to get to the other side successfully, communication becomes necessary. Discussing ideas and bringing your pairing partner on the journey can help cement your own understanding, as well as iron out any creases along the way. As we know, two pairs of eyes are better than one – errors are resolved faster and code is cleaner.

Another advantage to pairing is that distractions are minimised – because your very presence prompts your partner to keep working and vice versa.

Experienced developers who have been coding alone for a long time may not want to pair. The question they need to ask themselves is “why”. Does it relate to looking down on people? Are they slowing you down? What these developers don’t realise is that they are missing out on a great opportunity for their own learning and progress, whether it be another technical perspective revealing a blind spot or an improvement in their soft skills. Everyone has something to learn from someone, and pairing facilitates that.

TLDR: Make pairing mandatory – it helps bring people out of their comfort zones and creates basic rapport.

        4. We’re not all bots

During my first role, I noticed that developers preferred to communicate with each other over Slack. As great as having a communication tool like Slack is, it’s hardly a substantial replacement for face-to-face interaction. A number of studies have been conducted on the limitations of virtual text-based communication. These seem to indicate that no matter how many emojis or gifs are used, text-based communication still cannot accurately convey the messages carried by facial expressions, gestures, body language and eye contact. Text-based communication is a comfort zone for many highly competent software developers, but companies need to encourage other methods of communication as well.

With reports coming out about mental health issues across the tech industry, stemming from isolated lives in high-pressure environments, this could be one of the most important things to consider and change. A simple smile, a hello or even a ‘how was your weekend?’ can do the trick. Developers need to speak to each other in person and use the interaction muscle on a regular basis. Discussing work in person not only pulls us out of isolation and away from being in front of a screen all day, but creates better working teams; teams comprised of actual people to connect with, not eating-sleeping-coding machines.

TLDR: Don’t communicate using only technology – talk to your colleagues face to face!

        5. Lunch ’n’ learn

Everyone loves to have a conversation over lunch. So why not take advantage of it? Lunch and learns are a great way to facilitate better communication and create a communal culture of learning. For example, internal and external individuals could speak regularly about a variety of topics. Facilitate this with a free lunch for those attending and you’re well on your way to communal culture of learning…

TLDR: Create a communal culture of learning – by providing lunchtime activities.

        6. Wellness matters

The wellbeing of your people is a key success factor in business today. Activities such as meditation have been shown to increase productivity. The de-stress effect leads to better work and can even motivate people. If the mind is in the right place, clean and organised, then code will follow. It also helps people communicate and talk to each other, because they are relaxed and less anxious or stressed. Encourage this by promoting Calm or Headspace subscriptions; another idea might be communal meditation sessions after lunch.

Wellbeing can also mean providing access to counselling, discounts at gyms and generally encouraging a healthy work/life balance through flexibility where possible.

If you’re not convinced why addressing mental health in the workplace is important, see this Deloitte article. 

TLDR: Do not neglect mental health, physical health, work/life balance and the important role they play in technology today. 

We know this kind of change is not going to happen overnight and will require some serious cultural and psychological shifts. There are gatekeepers, financial concerns and the general but ubiquitous ‘fear of change’. People can get very defensive about keeping things as they are – often to their own detriment. But even implementing one of these approaches within your place of work will help massively in truly being agile and improving communication.

Visit our services to explore how we enable organisations to transform their internal cultures, to make it easier for teams to collaborate, and adopt practices such as Continuous Integration, Continuous Delivery, and Continuous Testing. 

Asif Hafeez

The rise of Artificial Intelligence at the AIBE Summit 2018

A couple of weeks ago, we attended the annual Artificial Intelligence in Business & Entrepreneurship Summit (AIBE). The summit boasted more than 700 delegates and took place at the QEII Centre in Westminster. The organisation behind AIBE wanted to create an event that would attract all education levels. Although a relatively new event, AIBE has managed to establish itself over the past two years as a great way to engage diverse audiences and share ideas that further progress AI initiatives.

The selection of speakers varied significantly, from a highly technical IBM engineer to academics who focused more on theoretical aspects and the future of technology. Several topics were debated around what exactly an AI-driven world would look like; examples included how AI will influence society in terms of social interaction, career choices and job interaction, and the potential harm that AI can inherently cause. One speaker suggested that we should be proactive in developing regulatory systems around how this technology should be used.

Democratisation of AI

Danilo Poccia, Amazon’s Evangelist, led a discussion on the benefits of democratisation. We were introduced to Amazon’s products and services that are already widely available in the AWS ecosystem. The potential for AI to grow within this industry is huge, and the more widely available AI technology becomes to the masses, the greater the opportunity for anyone and everyone to build the AI systems they need.

How is blockchain and AI influencing Fintech?

In a panel discussion, there was an interesting debate regarding how blockchain is influencing Fintech companies and how, in turn, this is disrupting business. The ultimate question was: what is the future of digital currencies based on blockchain and AI?

Only time will tell, as there was no consensus on whether cryptocurrencies would replace legacy currencies or how exactly AI would influence monetary systems. Several advantages were highlighted, especially the fact that rules are set from the outset, potentially making these currencies less volatile. Cryptocurrencies are still in their infancy, and they will need to go through several iterations of improvement towards faster and more secure platforms.

Expo

Another area of the AIBE Summit was an Expo, where attendees could hold discussions with companies about their particular AI-driven software. People were happy to discuss the technical details, and it was a good chance to gauge the diverse opinions of the attendees.

So what has AI got to do with DevOps and Continuous Delivery?

AI contributes a great deal when you’re in the business of automation, testing and performance optimisation. In fact, we’re currently developing several tools for our clients that make automated testing and performance optimisation more autonomous. We will be sharing more about these in future posts.

If you would like some more information about Artificial Intelligence and DevOps, or if you have any questions, please get in touch.

Marian Knotek

AWS reveals Managed Kubernetes: EKS

There were many product announcements at the AWS re:Invent 2017 conference in November that have got the team at ECS Digital excited, particularly in the compute space.

As Andy Jassy announced during his re:Invent keynote, the goal for AWS is to create a platform that provides everything builders require: services, platforms and tooling that can be utilised effectively and securely within an enterprise environment. Werner Vogels, CTO at Amazon, expanded on this concept in his keynote speech a day later, discussing building a platform that not only helps businesses achieve their goals today, but enables them to build for 2020.

With that in mind, AWS launched Elastic Container Service for Kubernetes (EKS), “a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes clusters“.

This long-awaited move realigns the cloud colossus with other service providers (Azure and GCP) that already offer native support for this technology. EKS is fully compatible with the existing AWS ecosystem, offering:

  • Fully managed user authentication to the K8S masters through IAM
  • Restricted access through the newly revealed PrivateLink
  • Native AZ Cluster distribution to provide High Availability

You can read more about how to use this service in the following blog post, produced by Jeff Barr, Chief Evangelist at AWS.

So what do these new developments mean for our customers? Why was this solution sought after, even when AWS launched its own container solution ECS?

To answer this question, we need to step back and understand what Kubernetes is and its role in the modern containerisation scene.

Kubernetes takes its name from the Greek word for “helmsman”, and is a container scheduler that is perhaps best described as an “operating system for clusters”.

It was first released in 2014, when Google open-sourced a system inspired by their own internal scheduler, Borg. In the past three years it has gained huge momentum thanks to an active community directly involved in its roadmap. It’s designed with stability and high availability in mind, hiding the complexity of managing an entire cluster behind a single endpoint.

Even with all this support, managing large clusters can be complex and challenging. Problems like missing system containers and incorrect scheduling are real, and can introduce failures into a critical microservice, which in turn can cause downtime across the entire service.

On this matter, AWS recognised that managing production workloads “is not for the faint of heart”, with many moving pieces contributing to its unpredictability.

EKS is a fully managed solution: you decide the number of nodes, autoscaling rules, instance types and access policies, and AWS takes care of the rest. No need to worry about scalability or accessibility. Want more machines? Just add them to the cluster! Want to access them via the command line? Just use kubectl!
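As a small illustration of what “AWS takes care of the rest” looks like in practice (a sketch only; the cluster name in the usage comment is hypothetical), you might poll boto3’s EKS `describe_cluster` response to check a cluster is ready before pointing kubectl at it:

```python
def cluster_ready(describe_cluster_response):
    """Check whether an EKS cluster is ready to use, given the response
    dict returned by boto3's eks describe_cluster call."""
    cluster = describe_cluster_response["cluster"]
    # A usable cluster is ACTIVE and exposes an API server endpoint.
    return cluster["status"] == "ACTIVE" and bool(cluster.get("endpoint"))

# Against a real account you would call (cluster name is hypothetical):
# resp = boto3.client("eks").describe_cluster(name="demo-cluster")
# if cluster_ready(resp): ...point kubectl at resp["cluster"]["endpoint"]
```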

Kubernetes in the Financial Sector

Amazon calculated that approximately 66% of the world’s Kubernetes workloads run on AWS. Amongst them are new banking companies like Monzo, who are using and massively contributing to this technology, enabling them to scale and grow much faster than the competition.

Bearing in mind the successes that the challenger banks have had with microservices and containerisation, Fintech companies will have enormous benefits leveraging the structured and resilient architecture of Kubernetes, paired with the ease of management and scalability offered by EKS.

If you’d like to find out more about how you can leverage these services in the Cloud please contact our experts today.

ECS Digital

Why Continuous Testing is crucial to DevOps

Getting testing right – or wrong – can have enormous consequences for businesses in all walks of life, from both reputational and financial perspectives. Take British Airways, who suffered a disastrous datacenter outage in May 2017 that led to flights from Heathrow and Gatwick being grounded for almost 48 hours. Or market-making firm Knight Capital Group, who lost $440 million in 30 minutes in August 2012, owing to a bug in its trading software.

While most software testing goes unnoticed by consumers unless something goes wrong, there are companies who proactively enhance their reputations by sharing what they do. Netflix’s Tech Blog contains a remarkable amount of detail on the streaming giant’s continuous testing practices.

Continuous testing and automation is a crucial piece of the DevOps jigsaw, where the full benefits can only be realised if everything is in place, with automation and monitoring at all stages of software development and operations.

Worldwide, more and more companies are trying to implement DevOps across their software development and operations – the State of Testing Report 2017 saw a 12% increase in DevOps use compared to 2015.

This is a significant rise but, from our experience, problems often occur when DevOps is implemented but testing is left behind. Continuous testing and automation should be seen as a precursor for a DevOps implementation, rather than something to fit in as and when.

Automated testing

If the ultimate aim of DevOps is to have the confidence to release at any given moment, knowing that neither your infrastructure nor application will fall apart, then testing based on old working practices just won’t cut the mustard.

Ideally, all the required elements for DevOps are ready before any kind of development begins but businesses usually need to implement DevOps on to an existing organisation, full of processes and tools that are at different stages of readiness. DevOps is more often an upgrade, not a clean install.

For release on a regular basis, whether that’s daily or another timescale, you need a set of tests you can automate and have confidence in. An old-fashioned testing cycle of two weeks, say, ties your hands; you can either release quickly or be confident about the quality of the release, but not both.

It’s also important to remember that ensuring things are working is only part of a good testing model. A major aspect often overlooked by methodologies outside of continuous testing is the role that testing plays in helping to communicate, define and deliver the original business objectives using techniques such as Behaviour Driven Development (BDD).

Getting it right 

Good infrastructure and platforms are integral to successful testing and a DevOps mindset can help make this happen. A good example of where these worlds come together to enable more effective and quicker testing is containerisation. One of the many benefits is that you can have a production-like environment that you can start up and bring down quickly and easily. You also have complete control over that environment, so you can change the data, simulate network interruptions, simulate load and so on, with complete safety.

Many organisations have tried to adopt Test Automation with varying degrees of success. According to the State of Testing Report 2017, 85% of businesses are using automation to some extent in their testing processes but under a quarter of those are applying it to the majority of their test cases.

Ultimately, implementing a DevOps process is futile without backing it up with good continuous testing and automation. The rewards are there to be claimed. Getting testing right is the key to achieving the full benefits of DevOps and actualising business value.

Over the coming months we will be posting more articles where we delve deeper into the relationship between DevOps and continuous testing, and the benefits it can bring to your business.

Here at ECS Digital we’re always happy to talk about what we do, why and how. If you’re interested in finding out how we can help you, please do get in touch.

Kouros Aliabadi

The best team structures for DevOps success

DevOps Team Structures

Agile is taking the world by storm. Businesses of all shapes and sizes are seeing the benefits of embracing DevOps and moving to adopt a more agile culture. A number of high-profile companies have had great success in applying DevOps, including streaming giants Netflix and Spotify.

We’d like to tell you that everything is rosy but, while some are celebrating, others are struggling. The reason? It’s often the way they’re working.

Getting the formation of teams and structures right in order to implement DevOps efficiently isn’t easy but it’s absolutely key to a successful DevOps adoption.

So, what exactly is the best way to work?

To answer that, we need to look at what companies are doing. Whilst there are many ways you could structure your teams, we typically see three types in our work with clients:

1. Platform Engineering

In this scenario, the Platform Operations Team sits inside the different business units (e.g. mortgages or payments) to help the teams in that unit adopt different working practices. This provides a wider holistic view to the different business units.

The team comprises developers, QAs and release engineers who are responsible for platform availability, upgrades and providing new services. There would be an overarching Platform Engineering team to ensure consistency across business units. Its members may sometimes be referred to as SREs (Site Reliability Engineers), but their responsibility is far wider in reach, as they need to enable the business units as well.

Pros

  • Unified and consistent – a singular message across the board
  • Maintains control while giving individual teams the flexibility to work in different ways

Cons

  • Takes time to adopt and build
  • Resources are required to learn about the different business units – gathering their different requirements and building a platform that supports everyone
  • If implemented incorrectly the “enabler” becomes a single point of failure – if they can’t enable others, they have to do the work personally and become a bottleneck

2. Virtual teams

The idea here is to take people from each of the business units and form a single virtual team where the high-level strategic and tactical decisions are made.

The virtual team can then take information from the business units to bring out a holistic view of what each unit needs and wants, using that to build out the practice.

Rather than being a dedicated platform team, it is designed to leverage existing knowledge within the teams themselves.

Pros

  • People are empowered to make their own decisions
  • They have free rein over how they go about tasks because they know the business units
  • It can sometimes be faster as they already understand what they want to do in their business units

Cons

  • Risk of individual business units implementing strategic decisions made by the virtual team differently
  • Uneven balance of transformation within the company as teams work at different paces
  • Can create anarchy between teams as there is little to no collaboration. This is particularly problematic with business units that need to integrate with each other
  • Possible duplicated effort – if one team has already done something, another team may not know because the virtual team isn’t communicating effectively

3. Teams based on functions

This is a much more traditional way of working. Instead of being formed by business units, the process is built on different specialised functions such as developers and sys admins.

Rather than attempting to create a collaborative model, this method is extremely linear – you start with the developer to build out the practice, following which it is pushed out to the different functions. All the teams have their own champions and essentially do their own thing independently of other teams.

Pros

  • At a granular level, adoption can be faster within the smaller practices, but this isn’t necessarily reflected in the overall speed of adoption
  • Individuals own their own responsibilities

Cons

  • Miss out on a holistic view of what’s going on end-to-end – teams are only concerned with their own area
  • Linear organisation means you can do all your development work, move into the QA stage and only then realise that there are issues that need to be solved – and that you need to go back to the development phase.

Which structure should you use?

All businesses need to adopt the strategy that works for them. The three we’ve outlined above are some of the most common ways of working that we’ve seen our clients use and that we’ve used to help transform our clients’ organisations.

In our experience, the method that’s both most successful and quickly adopted is the first – Platform Engineering. It has a number of advantages: with this team structure, people can jump teams or manage multiple business units depending on the resources and requirements of the businesses.

In addition, this structure provides the most consistency thanks to its dedicated team. In more regulated environments where governance and regulation compliance is key, a central team can ensure compliance across the organisation.

Here at ECS Digital we’re always happy to talk about what we do, why and how. If you’re interested in finding out how we can help you, please do get in touch.

Jason Man
Deep Dive into DevOps at the DevOps Enterprise Summit 2017

We’ve just arrived back from the DevOps Enterprise Summit (DOES), and to say we’re a little enthused is an understatement. This was the 4th year DOES has run and, each year, the event has completely sold out – this time attracting around 1,400 attendees to hear the successes (and failures) of the enterprise organisations that are adopting DevOps.

Gene Kim kicked off the whole event with some opening remarks which really outlined why we are here:

  • We believe DevOps is important
  • We believe DevOps creates business value
  • We believe DevOps makes our work humane

This is borne out by the State of DevOps Report, which has run for multiple years and shows that the benefits of adopting DevOps practices can include:

  • 46x more frequent deployments
  • 440x shorter change deployment times
  • 96x lower MTTR.

The panel at DOES did a really great job of selecting speakers this year, and each talk was asked to cover the following:

  • The organisation and the industry they compete in
  • Their role and where they fit
  • The business problem to be solved
  • Where they started and why
  • What they did including tools and techniques
  • The outcomes
  • The remaining challenges
  • What they don’t know how to do
  • What they’re looking for help with

Following these objectives there were some superb talks from a range of great companies including the likes of Disney, Barclays, Capital One, Starbucks, Nike and many more.

A few talks we’d suggest going back and watching include:

How Your Systems Keep Running Day After Day – John Allspaw 
Augmenting the Org for DevOps – Stephanie Gillespie & John Rzeszotarski

Many other interesting talks took place over the course of the event; you can find the videos and slides here:

YouTube: http://bit.ly/itrevvideos

Flickr: http://bit.ly/DOES16SFOphotos

Dropbox: http://bit.ly/DOES17SFOslides

The Sponsors

There was a major turnout of sponsors this year and some really interesting conversations were had – one organisation even flew in 40 staff to visit the conference! The conversations varied, from some exploring the start of their DevOps journey all the way to those looking at the next evolution, using practices like machine learning and greater use of containers.

Finally, something that really caught our eye: Christopher Fuller of griotseye.com created some brilliant illustrations of the lessons learnt and practices organisations have adopted. We’ll leave you to have a look at a selection of them below.

That’s a wrap on this, and we will hopefully see you all again next year.

 

Jason Man