Q&A: The Evolution of the term ‘DevOps’

Use of DevOps practices has soared in recent years. Commonly, this is the result of an increased number of organisations seeking to respond more effectively to their business challenges with agile methodologies and ways of working. And yet, the term ‘DevOps’ seems to be diluting at a similar pace.

People appear to be referring to their own digital transformations by referencing DevOps practices without necessarily having these in place – certainly not in the traditional sense. Is the term ‘DevOps’ simply losing its specificity, or is it becoming altogether redundant? Or should the term DevOps embrace a widened context in the wake of changing industry trends?

We sat down with both ECS Digital’s Founder & Managing Director, Andy Cureton, and Head of DevOps, Jason Man, to discuss the evolution of DevOps as a practice, and how the term ‘DevOps’ looks to be changing.

Here’s what they had to say:

Q: How would you summarise DevOps in a sentence or two?

Jason: “DevOps is about delivering speed, quality and business value. It’s not about the technology out there or using the right tech to captivate your audience, but actually about what business value it’s bringing.”

Andy: “DevOps is about aligning all areas of an organisation to leverage modern ways of working and technology to deliver the target business outcomes.”

 

Q: Have you heard customers, or people from within the industry describe DevOps in other ways?

Jason: “People tend to use terms like digital transformation, engineering capability and platform engineering as a way to describe the DevOps methodology as a whole, broadening the term far beyond its traditional meaning. DevOps is the overall encapsulating term for all the different practices. One term that has risen, and which I see as possibly the next term for this, is Customer Experience (CX). CX is on the rise because it is ultimately what organisations look to improve; the way they achieve it is by implementing DevOps practices, adopting agile methodologies and so forth.”

Andy: “Engineering or Digital Transformation are more commonly used to describe programmes of work to adopt DevOps. One of the reasons for this is the overuse of the term ‘DevOps’. There is also a challenge in the breadth of things that the term DevOps is being used to describe. I believe this reflects the broader adoption in the industry, where there are organisations well advanced on their journey and those at or towards the beginning, with comparatively few in between. The early adopters have provided the hard data around DevOps that has led to the conclusion that it is essential to the success and survival of businesses. At one end of the spectrum you can find people referring to DevOps practices to describe the introduction of source code management or continuous integration; at the other end, the same term is used to refer to continuous deployment to dynamic serverless production infrastructure tens of times a day, yet there is no distinction in how the term is being used. For this reason, people tend to refer to the specific technology or practice, for example continuous delivery or continuous integration, rather than the overarching term DevOps.”

 

Q: Is there an element of the ‘Cloud-wash’ effect happening?

Andy: “Yes. People attach the word DevOps to everything in the same way they attach the word Cloud to everything, as a way of implying modern, cool, agile or technologically advanced. In both instances, it betrays what true DevOps or Cloud is and creates a negative stigma around the terms. A CIO told me over a year ago that he ‘would be shot if he went to the board to ask for money to do DevOps’, and that the conversation to have would be about investment to reduce lead time to production, increase service availability and so on.”

Jason: “The DevOps term is being overused. Unlike the Agile Manifesto, there is no definitive way to tell whether you have adopted DevOps or not – it could be as simple as adopting a CI server or as involved as going full-blown immutable infrastructure with every part of your pipeline provided ‘as a service’. DevOps is a bit like a New Year’s resolution, in the sense that at the beginning of the year everyone sets out with good intentions. It’s almost like everybody feels they need to have one, and most will set out to stick to it. But then after a couple of months the novelty wears off, they lose their discipline and go back to how things were originally.”

 

Q: How have you seen the DevOps methodology evolving, and do you think the term should evolve too?

Jason: “DevOps is a continually evolving term, as the whole concept is constantly improving. Since 2009 – when the term came into being – there has been quite a lot of improvement. DevOps didn’t use to involve containers, but they’ve come in recently; serverless is coming in, and now people are talking about Machine Learning and AI being introduced too. Whether or not it will remain being called DevOps is something to look out for, but ultimately it continues to evolve. We do see that the traditional practices are now being adopted at scale across all sectors, including finance and the public sector. The newer, more niche practices are finding their early adopters and proving their value, which hopefully will end up with scalable enterprise adoption.”

Andy: “As discussed earlier, the use of DevOps practices has exploded as their benefits have been increasingly documented. The practices themselves have not evolved, but with increased adoption and the advancement of technology they have been applied to different use cases and technologies – for example, using AI/ML to perform previously manual exploratory testing, or container technologies being applied to use cases ranging from technology currency to cloud migration.”

 

Q: What do you see in the future for DevOps? Is there a risk the term will die out as its scope widens?

Andy: “DevOps practices have become a critical element in the success and survival of companies in this increasingly software-driven world. The term will die out for two reasons. Firstly, because it is overused and attached to things incorrectly, it has diminished in value. Secondly, the term will die because the practices that DevOps covers are now becoming the new normal. These practices, including continuous integration and continuous delivery, will however continue to be referred to. As mentioned before, the benefits seen by organisations that have adopted DevOps are well documented and transformational to the fortunes of those companies. As IDC says, DevOps is no longer optional – it’s mandatory. It is therefore becoming the new normal.”

Jason: “The term DevOps will die, and I would almost say that it has died. It will be termed under a different methodology due to its overuse. IT has gone through this change many, many times. I have only been in the industry for 10 years and I’ve seen three different methodologies that cover the same thing.

With regards to its scope, in the past organisations used to outsource engineering capability because it was seen as a cheaper way to operate. But more recently, people are bringing this back in house as they have the talent available. I can see people in the future saying the cost is too high and that they should outsource again, especially if they are not seeing the results the market is promising. It is a continually evolving methodology – every organisation is a software company, hopefully with the ultimate goal of improving customer experience.”

 

Q: Where are the areas of DevOps that need additional tools or support to help optimise its capabilities?

Andy: “DevOps isn’t about tools; DevOps refers to a group of ways of working and practices. These practices can be and are being applied to new technologies and use cases that will see the use of ‘DevOps’ evolve and grow. The question should therefore be: what are the technologies and use cases that need DevOps practices to optimise them? These will continue to be uncovered as new technologies, or new use cases for existing technologies, are discovered.”

Jason: “An area that is still underplayed or underutilised is the data side of things – people are talking about collating data and baseline metrics, but I feel there is room to improve how this data is managed. Everything flows through systems and computers, and we need to look at how we can analyse this data in a better form because, in order to continually improve, you can’t always be looking, discovering or finding out what it is that you need to improve. Whereas if you have a proper data metric system, you immediately know what’s needed. This space is already crowded, but I would go as far as saying there is no outright leader in the space.”

 

With the increase in businesses undergoing Digital Transformation, DevOps has become an industry buzzword – a way for businesses to feel, and project externally, that they are achieving the same as others, without fully understanding what it means to adopt DevOps. As we’ve seen with terms such as Cloud and agile, the more frequently a term is used, the murkier its meaning becomes.

Puppet’s VP of Ecosystem Engineering, Nigel Kersten, states that an increasing number of people will claim that DevOps is ‘dead’ – not because the practice is dead, but because the “lessons from the DevOps movement [will] become increasingly internalised in new companies and projects, [where] we’ll stop seeing the cool kids talk about it at all.” Andy made a similar prediction some time ago in a 2016 DevOps Online article, stating that DevOps will no longer be called DevOps as it becomes the new normal – an integral part of all companies, no questions asked.

DevOps Playground Singapore #2: Hashicorp Consul & Smashing

Following the success of our last DevOps Playground in Singapore, the team were at it again with another playground showing the power of service discovery and monitoring using Hashicorp Consul.

A typical DevOps Engineer could be responsible for the maintenance of many services in a DevOps pipeline, such as a build server, a binary storage solution, a code repository service, a wiki and a ticket tracking service.

High agility and a mature DevOps capability require these services to be running 100% of the time. An outage of any service will impact the route to live of any change. People will be walking up to your desk, wondering what is going on!

Using Hashicorp’s Consul, we can intelligently monitor the status of these services and react in real time to unpredicted behaviour. You can therefore identify the issue before the rest of your end users do.

However, there will be occasions when outages require time to fix. How do you keep your end users informed without having to update them on a continual basis? Display this information from Consul in an easy to view dashboard on TVs around your office using Smashing.

It looks smart, tidy, and provides everyone the information they need so you can be left to bring the services back on line in the quickest time possible.

At the event, we went through the steps to install and run Hashicorp Consul and registered both our Jenkins and Artifactory services by configuring a service definition file and loading it into Consul. We validated the health check feature of Consul by taking Jenkins offline and seeing the outage reflected in the user interface.
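
If you want to try this at home, the same registration can also be done against the local Consul agent’s HTTP API instead of a service definition file. Below is a minimal Python sketch; the service name, port and health-check URL are illustrative assumptions rather than the exact values used at the event:

```python
import requests

CONSUL_AGENT = "http://localhost:8500"  # assumes a local Consul agent running in dev mode

# Equivalent to a service definition file: register Jenkins with an HTTP health check.
# The port and health-check path are assumptions for a default Jenkins install.
service = {
    "Name": "jenkins",
    "Port": 8080,
    "Check": {
        "HTTP": "http://localhost:8080/login",
        "Interval": "10s",
    },
}

resp = requests.put(f"{CONSUL_AGENT}/v1/agent/service/register", json=service)
resp.raise_for_status()

# Confirm the health check status Consul reports (passing / warning / critical).
checks = requests.get(f"{CONSUL_AGENT}/v1/health/checks/jenkins").json()
for check in checks:
    print(check["Name"], check["Status"])
```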

We then installed a Smashing dashboard and saw how easy it was to post updates to Smashing by executing a simple curl command. With all the core pieces in place, we implemented a watch that pushed an update to Smashing whenever a service went offline. This resulted in an eye-pleasing dashboard that instantly highlighted the status of the pipeline in real time.
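
As a rough illustration of that watch logic, here is a hedged Python sketch that polls Consul for services in a critical state and pushes the result to a Smashing widget over its HTTP API. The widget name, auth token and poll interval are assumptions for illustration, not the exact setup from the event:

```python
import time
import requests

CONSUL_AGENT = "http://localhost:8500"
SMASHING_URL = "http://localhost:3030/widgets/pipeline_status"  # hypothetical widget id
AUTH_TOKEN = "YOUR_AUTH_TOKEN"  # must match the auth_token configured in Smashing

while True:
    # Ask Consul for every health check currently in the 'critical' state.
    critical = requests.get(f"{CONSUL_AGENT}/v1/health/state/critical").json()
    down = sorted({check["ServiceName"] for check in critical if check["ServiceName"]})

    text = "All services healthy" if not down else "DOWN: " + ", ".join(down)

    # Push the status text to the Smashing dashboard widget.
    requests.post(SMASHING_URL, json={"auth_token": AUTH_TOKEN, "text": text})

    time.sleep(10)  # simple polling loop standing in for a real 'consul watch' handler
```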

By following this video, you should be able to run through this playground from the comfort of your own home.

Thank you to everyone who attended. We look forward to seeing you again at our next DevOps Playground – keep an eye out for the next event!

DevOps Deadlock – moving from distracted to determined

60% of European organisations now utilise DevOps*. But as this number increases, so does the performance gap between those “stuck” at the experimental stage and those that have been able to successfully adopt DevOps to achieve scale.

To lead and excel in today’s digital economy, companies must embrace a business-centric collaboration – AKA DevOps. It’s no longer a choice to adopt or not. The decision is about how to get DevOps right, so it can scale across the business.

Delivering greater security, speed and quality of output whilst driving a truly digital-native experience is what DevOps looks to advance. And once adopted, business innovation is the recognised principal benefit – surpassing the allure of enhanced developer productivity.

But scaling requires going one step further. It’s not enough to have DevOps in place, you need to understand how to enable it to scale.

At this year’s IDC DevOps Conference, Jen Thomson – IDC’s Research Director – took to the stage to discuss the following three key areas:

  • Where European organisations are on their journeys to enterprise-scale DevOps; looking specifically at how they are becoming ‘unstuck’
  • New emergent challenges faced as DevOps journeys progress
  • DevOps for business agility and the KPIs for success

This talk was bookended by a presentation from Andy Cureton – Managing Director and Founder at ECS Digital – on lessons from a year of Enterprise DevOps. Cureton supported Thomson’s observations, reiterating that moving past the challenges of transformation at scale requires taking a more holistic view of technology and the business – especially when embracing technologies that are still emerging. Having a laser focus on security and architecting for future outcomes is also vital for securing long-term success.

Scaling DevOps requires progress across multiple segments of the software delivery pipeline, including planning, developing, testing, delivering, deploying, securing, operating and managing. And whilst these would traditionally have sat in one department, for DevOps to scale successfully, security and software-driven innovation need to become everyone’s responsibility.

Determined organisations are paving the way when it comes to moving past the “DevOps Deadlock” because they are modernising their infrastructures specifically so they can propel DevOps further within the organisation.

Lloyds Banking Group are one such organisation. And on the 8th November, Dave Gore – Engineering Transformation Lead at Lloyds Banking Group – will be joining Jen Thomson and Andy Cureton to reveal how the bank is supporting their own transformation journey.

As well as this deep-dive into a customer experience of utilising DevOps to secure business innovation, this webinar looks to continue the core conversations that arose at IDC DevOps Conference, tapping into the following:

  • How to differentiate between DevOps Distracted and DevOps Determined
  • Different paths to take to getting unstuck
  • Supporting a transformation to a customer-centric software-defined business

Free to attend and with a live Q&A to follow the discussion, this is your chance to hear from the experts and have your say!

Registrations for the webinar are open now. Save your spot here.

*IDC DevOps Conference 2018


New announcements at HashiConf 2018

We are writing from San Francisco at the Fairmont Hotel where HashiCorp has just kicked off HashiConf 2018.

Since the company’s inception in 2012, it has seen huge growth and each of HashiCorp’s tools has become incredibly valuable to the industry. In particular, Terraform, Vault, Consul and Nomad.

Terraform is currently used in most Fortune 500 companies. It also serves an incredible number of small and medium-size companies and plays an important part in the individual developer’s toolkit, thanks to growth in the adoption of the Cloud. Vault, Consul and Nomad are also being heavily utilised by the industry.

We’ve just kicked things off and HashiConf 2018 has a packed agenda of exciting talks, which is leading to some tough choices on our part!

Ready? Set. Go!

At ECS Digital, we’ve been working with the entire suite of products that HashiCorp has created. Meet Michel Lebeau, DevOps and Continuous Delivery Consultant at ECSD. Michel has been heavily involved in projects that involve Hashicorp tools and runs  Hashicorp Training courses. Here’s what he has to say about the product announcements at HashiConf 2018:

“I’m personally very excited about the free remote state feature that Terraform Enterprise is going to offer to everyone. This will allow teams to work together and manage the same resources much easier. This is a feature that Enterprise customers have enjoyed for a while now, and I’m extremely pleased to see that the general public will be able to benefit from it too.”

Nice one HashiCorp! See here for more details

“I’m also looking forward to Terraform 0.12, as I’m sure many others are, with the new for loop, conditional expressions, dynamic blocks, etc. However, I am not looking forward to the breaking changes!

Vault 1.0 is of course another big one – it’s an awesome security tool that is being adopted by more companies by the day, and seeing HashiCorp give it its 1.0 seal of approval is very exciting. Auto Unseal for the open source community will help smaller companies sort out their unseal-key headaches, which is a welcome addition.

Consul Connect and first-class support for Kubernetes are other announcements that have me unreasonably joyful for a Tuesday morning!”

Now meet Daniel Meszaros, also a DevOps and Continuous Delivery Consultant at ECS Digital. Daniel’s been working with open-source versions of the HashiStack for about 2 years. Here’s his take on the announcements at HashiConf 2018:

“There are a lot of exciting announcements happening this morning at HashiConf and let me tell you what my favourites are:

Terraform: HashiCorp is starting a new service to enable every Terraform user to collaborate better – remote state-file storage with no limits on users or workspaces, and data encrypted with Vault. The service also offers a shared module registry, and centralised plans and applies. The brand-new version of HCL is also something a lot of people have been waiting for, and it’s finally here. They’ve made the language more flexible and introduced features (like loops and dynamic blocks) that will make writing .tf files better.

Vault: 1.0 Preview Release. The entire community has been waiting for 1.0 for a long time. Vault has been a very mature product for years now, but the company didn’t want to release the first major version until they were sure everything was just right, stable enough and supportable. New features include auto-unseal in the open source version, working with all the major public cloud providers: AWS, GCP, Azure and Alibaba Cloud.

Consul: Preview Release of v1.4. Connect is now Generally Available, with native integration with Envoy, the most commonly used service-mesh proxy. Combined with the Kubernetes integration announced earlier this year, Consul is now capable of discovering, securing and connecting services both inside and outside a Kubernetes cluster.

Nomad: 0.9. I love the idea of Nomad. I love that HashiCorp is not trying to make yet another container-only platform that focuses on the benefits of using container images; besides being a container scheduler, it is also trying to help companies with legacy applications start segregating and automating the deployments of their software in its current form. The raw stats show that the effort is worth it: Nomad is currently the fastest-growing HashiCorp product in terms of downloads. In the new version coming in November, we’ll have a new, improved UI and lots of new features, like Nvidia GPU support, affinity-type constraints, and a new type of scheduling, spreading.

Learn: HashiCorp announced a new learning platform that helps everyone get started with their products, beginning with Vault, with Consul and Nomad coming later this year.”

The official announcement by Armon Dadgar, co-founder of HashiCorp, can be found here.

Watch this space and follow us on Twitter for follow up blog posts and other specific announcements from Michel and Daniel at the conference!

Quick shameless plug: We offer Official HashiCorp Training in London and Singapore, get in touch if that’s something that your company is looking for.

Year in the life at ECS Digital

  • Ever wondered what our consultants do?
  • Do you have an interest in engineering practices, culture, automation, coding or testing?
  • Are you ready to join our family?

At ECS Digital, we are always looking to grow and diversify our talented team of consultants. We pride ourselves on creating the optimal environment for our team to succeed. This means investing in our people when it matters most to them on their journey.

We also make sure that each day is a new and exciting opportunity for learning – because doing the same thing day in day out just isn’t fun for anyone.

So, if you’re looking to break into the world of Agile, BDD/ATDD, coding, CI and CD, Continuous Testing or DevOps, here’s what you can expect from your first year at ECS Digital:

Month 2

By month two, you should be settling into your new role and starting to learn how ECS Digital really works. You’ll start working towards your first certifications and shadowing on customer sites – which means lots of new faces and names to remember.

To help sharpen your consulting skills, you’ll be asked to conduct a few internal presentations and, seeing as we’re a social bunch, it’s likely you’ll have been invited to attend one of our frequent dinners or seasonal parties too! In addition to food and drinks, we plan regular events such as yoga and team-building sessions.

“ECS Digital has an amazing culture which promotes a good working environment for everyone with plenty of opportunity to progress if you wish to. ECS Digital will support this progression and ensure you have the tools required, but they also understand that sometimes the real world can get in the way and provide you with the flexibility needed too”

“While ECS Digital ensures you have people supporting you so you are not overwhelmed, they also provide you the opportunity to take responsibility if you want it, running an event like DevOps Playground provides a great stepping stone.”

Months 9-12

You should be feeling pretty great about what you have achieved and feeling confident in being responsible for delivering a whole host of tasks, as well as where you see yourself progressing in the coming months with us.

You’ll be encouraged to work towards additional certifications or attend one of a wide variety of courses, all while receiving the support and encouragement you need to succeed. It’s an exciting time, so be open to every opportunity that comes your way and continue to sharpen your skills as much as possible. Training is always available to employees but staying curious as you work towards leading your first client project is hugely important.

“After 12 months, I led my first client project involving two ECS Digital resources to work with the customer, a large UK based bank, on building up its Cloud capability. This involved a wide range of areas, control, security, infrastructure, networking, etc. and my personal focus has been on making all of that align so that project teams can consume AWS in a controlled and secured manner.”

When the time comes, usually after around a year with us, you’ll become responsible for leading a client project – with all of the support of your ECS Digital family around you, of course! Awards dinners and events will also guarantee you celebrate your first 12 months as an ECS Digital Consultant in style.

And that’s it, your first year with ECS Digital! We’re excited for you to start your consulting journey with us. If we’ve piqued your interest, take a look at our vacancies now.

If you missed our recent Year in the Life infographic, you can check it out here.

Why you should invest in AWS Big Data & 8 steps to becoming certified

A decision that many engineers face at some point of their career is deciding what to focus their attention on next. One of the amazing advantages of working in a consultancy is being exposed to many different technologies, providing you the opportunity to explore any emerging trends you might be interested in. I’ve been lucky enough to work with a huge variety of clients ranging from industry leaders in the FTSE 100 to smaller start-ups disrupting the same technology space.

So why did I pick Big Data?

A common pattern I’ve noticed is that everyone has access to data – large amounts of raw, unstructured data. Business and technology leaders all recognise the importance of it, and the value and insight that it can deliver. Processes have been established to extract, transform and store this large amount of information, but the architecture is usually inefficient and incomplete.

Years ago, these steps may have equated to the definition of an efficient data pipeline, but now, with emerging technologies such as Kinesis Streams, Redshift and even serverless databases, there is another way. We now have the possibility of a real-time, cost-efficient and low-operational-overhead solution.
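
To give a flavour of what the real-time end of such a pipeline looks like in practice, here is a minimal, hypothetical Python sketch that pushes events onto a Kinesis stream for downstream consumers (for example a Firehose delivery stream loading into Redshift). The stream name, region and event shape are assumptions for illustration:

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")  # region is an assumption

def publish_event(event: dict, stream_name: str = "clickstream-events") -> None:
    """Push a single event onto a Kinesis stream (hypothetical stream name)."""
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        # The partition key controls shard distribution; using a user id keeps
        # one user's events ordered within a single shard.
        PartitionKey=str(event["user_id"]),
    )

# Example usage: a raw, unstructured click event entering the pipeline in real time.
publish_event({"user_id": 42, "action": "add_to_basket", "sku": "ABC-123"})
```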

Alongside this, companies are setting their sights on creating a data lake in the cloud. In doing so, they take advantage of a whole suite of technologies to store information in formats they currently leverage, and in configurations they may harness in the future. These are all clear steps on the journey towards digital transformation, and with the current pace of development in AWS technologies, it is the perfect time to become more acquainted with Big Data.

 

But why is the certification necessary?

The AWS Certified Big Data Speciality exam introduces and validates several key big data fundamentals. The exam itself is not just limited to AWS specific technologies but also explores the big data community. Taken straight from the exam guide we can see that the domains cover:

  1. Collection
  2. Storage
  3. Processing
  4. Analysis
  5. Visualization
  6. Data Security

These domains involve a broad range of technical roles ranging from data engineers and data scientists to individuals in SecOps. Personally, I’ve had some exposure to collection and storage of data but much less with regards to visualisation and security. You certainly have to be comfortable with wearing many different hats when tackling this exam as it tests not only your technical understanding of the solutions but also the business value created from the implementation. It’s equally important to consider the costs involved including any forecasts as the solution scales.

Having already completed several associate exams, I found this certification much more difficult because you are required to deep-dive into Big Data concepts and the relevant technologies. One of the benefits of this certification is that its scope extends to how these technologies are applied to Big Data, so be prepared to dive into Machine Learning and popular frameworks like Spark and Presto.

 

Okay so how do I pass the exam?

1. A Cloud Guru’s certified big data specialty course provides an excellent introduction and overview.

2. Have some practical experience of Big Data in AWS – theoretical knowledge is not enough to pass this exam…

  1. Practice architecting data pipelines, consider when Kinesis Streams vs Firehose would be appropriate.
  2. Think about how the solution would differ according to the size of the data transfer, sometimes even Snowmobile can become efficient.

3. Understand the different storage options on AWS – S3, DynamoDB, RDS, Redshift, HDFS vs EMRFS, HBase…

4. Understand the differences and use cases of popular Big Data frameworks e.g. Presto, Hive, Spark. 

5. Data Security contributes the most to your overall exam score at 20% and is involved in every single AWS service. There are always options for making the solution more secure and sometimes they’re enabled by default.

  1. Understand how to enable encryption at rest or in transit, whether to use KMS or S3-managed keys, and client-side vs server-side encryption (see the sketch after this list).
  2. How to grant privileged access to data e.g. IAM, Redshift Views.
  3. Authentication flows with Cognito and integrations with external identity providers.

6. Performance is a key trend

  1. Have a sound understanding of what GSIs and LSIs are in DynamoDB.
  2. Consider primary and sort keys, and distribution styles, in all of the database services.
  3. Different compression types and speed of compressing/decompressing.

7.  Dive into Machine learning (ML)

  1. The Cloud Guru course mentioned above gives a good overview of the different ML models.
  2. If you have time I would recommend this machine learning course by Andrew Ng on Coursera. The technical depth is lower level than you will need for the exam, but it provides a novice with a very good introduction to the whole machine learning landscape.

8. Dive into Visualisation

  1. The A Cloud Guru course provides more than enough knowledge to tackle any questions here.
  2. Again if you have the time there’s an excellent data science course on Udemy which has a data visualisation chapter that would prove useful here.
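
To make point 5 above concrete, here is a small, hedged Python sketch showing server-side encryption options when writing objects to S3 – one object encrypted with a customer-managed KMS key (SSE-KMS) and one with S3-managed keys (SSE-S3). The bucket name, object keys and KMS alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-data-lake-bucket"  # placeholder bucket name

# Encryption at rest with a customer-managed KMS key (SSE-KMS).
s3.put_object(
    Bucket=BUCKET,
    Key="raw/events/2018-10-23.json",
    Body=b'{"user_id": 42, "action": "add_to_basket"}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-data-key",  # placeholder KMS key alias
)

# Encryption at rest with S3-managed keys (SSE-S3).
s3.put_object(
    Bucket=BUCKET,
    Key="raw/events/2018-10-24.json",
    Body=b'{"user_id": 7, "action": "checkout"}',
    ServerSideEncryption="AES256",
)

# Encryption in transit is handled by boto3, which talks to the HTTPS endpoints by default.
```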

 

Exam prep

It can’t be emphasised enough that AWS themselves provide amazing resources for learning. As preparation for the exam, definitely watch re:Invent videos and read AWS blogs and case studies.

 

Watch these videos:

  1. AWS re:Invent 2017: Big Data Architectural Patterns and Best Practices on AWS 
  2. AWS re:Invent 2017: Best Practices for Building a Data Lake in Amazon S3 and Amazon
  3. AWS re:Invent 2016: Deep Dive: Amazon EMR Best Practices & Design Patterns  
  4. AWS Summit Series 2016 | Chicago – Deep Dive + Best Practices for Real-Time Streaming Applications 

 

Read these AWS blogs:

  1. Secure Amazon EMR with Encryption 
  2. Building a Near Real-Time Discovery Platform with AWS 

 

Whitepapers

  1. Streaming Data Solutions on AWS with Amazon Kinesis
  2. Big Data Analytics Options on AWS 
  3. Lambda Architecture for Batch and Real-Time Processing on AWS with Spark Streaming and Spark SQL 

 

All of the Big Data services developer guides.

 

One last note….

This exam will expect you to consider the question from many different perspectives. You’ll need to think about not just the technical feasibility of the solution presented but also the business value that can be created. The majority of questions are scenario-specific and often there is more than one valid answer; look for subtle clues to determine which solution is more ‘correct’ than the others, e.g. whether speed is a factor or if the question expects you to answer from a cost perspective.

Finally, this exam is very long (3 hours) and requires a lot of reading. I found that the time given was more than enough but remember to pace yourself otherwise you can get burned out quite easily.

Hopefully my experience and tips will have helped in preparation for the exam. Let us know if they helped you. 

Good Luck!!!

Visit our services to explore how we enable organisations to transform their internal cultures, to make it easier for teams to collaborate, and adopt practices such as Continuous Integration, Continuous Delivery, and Continuous Testing. 

ECS are attending FOSDEM 2018

This year ECSD are proud to be attending FOSDEM.

FOSDEM (the Free and Open source Software Developers’ European Meeting) is a massive, free-to-attend open source event held annually at the ULB Solbosch Campus in Brussels, Belgium.

For those of you not familiar with the Open Source software concept, the fundamental principle is the practice of openly developing software in such a way that the source code is publicly available and maintained by a moderated community of developers.

Open Source software projects form the backbone of many supporting technologies of the DevOps toolchain. Interacting with Open Source projects will give a better insight into both the tools themselves and the way in which they function and behave in the background. You may already recognise some of the tools appearing including Docker, Kubernetes and AWS.

An all star line-up

The event’s sponsors include some big names – Google, Red Hat, AWS and Cisco, as well as many others. This year’s itinerary promises to be as fulfilling as previous conferences, with 653 speakers, 685 events and 57 tracks – something that is guaranteed to appeal.

Particularly exciting tracks for DevOps-inclined individuals include ‘Identity and Access Management’, ‘Containers’, ‘Monitoring and Cloud’ and ‘Testing and Automation’.

Alongside thousands of other Developers, we will be taking advantage of the DevRooms (and beer rooms), seeking to understand the focus and drive of the developers behind some of the tools which we use and gain insight into potential emerging industry trends.

The shift to DevOps practices

One thing we’ve witnessed over the years at FOSDEM is the shift into DevOps practices from the practices of old. There are a variety of stands and workshops enabling you to have one-to-one conversations with the developers themselves in order to gain further insight into the background behind the decisions that have led to the tool’s functionality and intended usage.

After this year’s FOSDEM conference has come to a close, we will be reporting back with our thoughts and insights into what we have seen and learnt over this weekend.

Stay tuned!

AWS reveals Managed Kubernetes: EKS

There were many product announcements at the AWS re:Invent 2017 conference in November that have got the team at ECS excited, particularly in the compute space.

As announced by Andy Jassy, during his re:Invent keynote, the goal for AWS is to create a platform that provides everything builders require. Enabling services, platforms and tooling that can be utilised effectively and securely within an enterprise environment. Werner Vogels, CTO at Amazon, expanded on this concept in his keynote speech a day later, when discussing building a platform that not only helps businesses achieve their goals today, but enables them to build for 2020.

With that in mind, AWS launched Elastic Container Service for Kubernetes (EKS), “a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes clusters”.

This long-awaited move realigns the cloud colossus with other service providers (Azure and GCP) who already provide native support for this technology. EKS is fully compatible with the existing AWS ecosystem, offering:

  • Fully managed user authentication to the K8S masters through IAM
  • Restricted access through the newly revealed PrivateLink
  • Native AZ Cluster distribution to provide High Availability

You can read more about how to use this service in the following blog post, produced by Jeff Barr, Chief Evangelist at AWS.

So what do these new developments mean for our customers? Why was this solution sought after, even though AWS had already launched its own container solution, ECS?

To answer this question, we need to step back and understand what Kubernetes is and its role in the modern containerisation scene.

Kubernetes takes its name from a Greek word meaning “helmsman”, and is a container scheduler that can be better described as an “operating system for clusters”.

It was first released in 2014, when Google open-sourced a version of their own internal scheduler, Borg. In the past three years it has gained huge momentum thanks to an active community directly involved in its roadmap. It’s designed with stability and high availability in mind, hiding the complexity of managing an entire cluster behind a single endpoint.

Even with all this support, managing large clusters can be complex and challenging. Problems like missing system containers and incorrect scheduling are real, and can introduce failures into a critical microservice, which in turn can cause downtime across the entire service.

On this matter, AWS recognised that managing production workloads “is not for the faint of heart”, with many moving pieces contributing to its unpredictability.

EKS is a totally managed solution: you decide the number of nodes, autoscaling rules, instance types and access policies, and AWS takes care of the rest. No need to worry about scalability or accessibility. Want more machines? Just add them to the cluster! Want to access them via the command line? Just use kubectl!

Kubernetes in the Financial Sector

Amazon calculated that approximately 66% of the world’s Kubernetes workload runs on AWS. Amongst them are new banking companies like Monzo, who are using and contributing massively to this technology, enabling them to scale and grow much faster than the competition.

Bearing in mind the successes that the challenger banks have had with microservices and containerisation, Fintech companies will have enormous benefits leveraging the structured and resilient architecture of Kubernetes, paired with the ease of management and scalability offered by EKS.

If you’d like to find out more about how you can leverage these services in the Cloud please contact our experts today.

Applying Machine Learning to DevOps

This is a guest blog written by:

Andi Mann, Chief Technology Advocate, Splunk

With contributions by: Jeff Spencer, Senior Engineer, Splunk

There is powerful synergy between DevOps and Machine Learning (ML) – and related capabilities, like Predictive Analytics, IT Operations Analytics (ITOA), Algorithmic IT Operations (AIOps), and Artificial Intelligence (AI).

Conceptually, ML represents codification and acceleration of Gene Kim’s “Culture of Continuous Learning”. With ML, DevOps teams can mine massive, complex datasets, detect patterns and antipatterns, uncover new insights, iterate and refine queries, and repeat continuously – all at ‘computer speed’.

Similarly, ML is in many ways the next-generation of Automation, building on John Willis’ and Damon Edwards’ prescription for ‘CAMS’. With automation, DevOps enables a much faster SDLC, but one that is too opaque, distributed, dynamic, and ephemeral for normal human comprehension. But like automation, ML uniquely handles the velocity, volume, and variety of data generated by new delivery processes and the next-generation of composable, atomized, and scaled out applications.

In practice, some key examples of applying ML to DevOps include:

Tracking application delivery

Activity data from ‘DevOps tools’ (like Jira, Git, Jenkins, SonarQube, Puppet, Ansible, etc.) provides visibility into the delivery process. Applying ML can uncover anomalies in that data – large code volumes, long build times, slow release rates, late code check-ins – to identify many of the ‘wastes’ of software development, including gold-plating, partial work, inefficient resourcing, excessive task switching, or process slowdowns.
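
As a rough, hypothetical illustration of this idea (not Splunk’s implementation), the sketch below uses a simple isolation forest to flag anomalous builds from Jenkins-style activity data such as build duration and change size. The feature set and thresholds are assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-build features pulled from delivery tooling:
# [build duration (s), lines changed, files touched]
builds = np.array([
    [310, 120, 8],
    [295, 90, 5],
    [330, 150, 11],
    [305, 100, 7],
    [1900, 4200, 160],   # an unusually large, slow build
    [315, 110, 9],
])

# Fit an unsupervised anomaly detector on the activity data.
model = IsolationForest(contamination=0.2, random_state=0).fit(builds)

# predict() returns -1 for anomalies and 1 for normal observations.
for features, label in zip(builds, model.predict(builds)):
    if label == -1:
        print("Anomalous build:", features)
```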

Ensuring application quality

By analyzing output from testing tools, ML can intelligently review QA results, detect novel errors, and effectively build a test pattern library based on discovery. This machine-driven understanding of a ‘known good release’ helps to ensure comprehensive testing on every release, even for novel defects, raising the quality of delivered applications.

Securing application delivery

Patterns of user behavior can be as unique as fingerprints. Applying ML to Dev and Ops user behaviors can help to identify anomalies that may represent malicious activity. For example, anomalous patterns of access to sensitive repos, automation routines, deployment activity, test execution, system provisioning, and more can quickly highlight users exercising ‘known bad’ patterns – whether intentionally or accidentally – such as coding back doors, deploying unauthorized code, or stealing intellectual property.

Managing production

Analyzing an application in production is where machine learning really comes into its own, because of the greater data volumes, user counts, transactions etc. that occur in prod, compared to dev or test. DevOps teams can use ML to analyze ‘normal’ patterns – user volumes, resource utilization, transaction throughput, etc. – and subsequently to detect ‘abnormal’ patterns (e.g. DDOS conditions, memory leaks, race conditions, etc.).

Managing alert storms

A simple, practical, high-value use of ML is in managing the massive flood of alerts that occur in production systems. This can be as simple as ML grouping related alerts (e.g. by a common transaction ID; a common set of servers; or a common subnet). Or it can be more complex, such as ‘training’ systems over time to recognize ‘known good’ and ‘known bad’ alerts. This enables filtering to reduce alert storms and alert fatigue.
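
A minimal, hypothetical sketch of the simpler case – grouping related alerts by a shared transaction ID so they surface as one incident rather than a storm – might look like this (alert fields are illustrative):

```python
from collections import defaultdict

# Hypothetical raw alerts as they arrive from production monitoring.
alerts = [
    {"transaction_id": "tx-42", "host": "web-01", "message": "HTTP 500 rate high"},
    {"transaction_id": "tx-42", "host": "db-03", "message": "slow query"},
    {"transaction_id": "tx-42", "host": "web-02", "message": "HTTP 500 rate high"},
    {"transaction_id": "tx-77", "host": "cache-01", "message": "evictions spiking"},
]

# Group by the common transaction ID so one incident is raised per transaction.
incidents = defaultdict(list)
for alert in alerts:
    incidents[alert["transaction_id"]].append(alert)

for tx, related in incidents.items():
    hosts = ", ".join(sorted({a["host"] for a in related}))
    print(f"Incident {tx}: {len(related)} related alerts across {hosts}")
```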

Troubleshooting and triage analytics

This is another area where today’s ML technologies shine. ML can automatically detect and even start to intelligently triage ‘known issues’, and even some unknown ones. For example, ML tools can detect anomalies in ‘normal’ processing, and then further analyze release logs to correlate this issue with a new configuration or deployment. Other automation tools can use ML to alert operations, open a ticket (or a chat window), and assign it to the right resource. Over time, ML may even be able to suggest the best fix!

Preventing production failures

ML can go well beyond straight-line capacity planning in preventing failures. ML can map known good patterns of utilization to predict, for example, the best configuration for a desired level of performance; how many customers will use a new feature; infrastructure requirements for a new promotion; or how an outage will impact customer engagement. ML sees otherwise opaque ‘early indicators’ in systems and applications, allowing Ops to start remediation or avoid problems, much faster than typical response times.

Analyzing business impact

Understanding the impact of code release on business goals is critical to success in DevOps. By synthesizing and analyzing real user metrics, ML systems can detect good and bad patterns to provide an ‘early warning system’ to coders and business teams alike when applications are having problems (e.g. through early reporting of increased cart abandonment or foreshortened buyer journeys); or being wildly successful (e.g. through early detection of high user registrations or click-through rates).

Of course, there is no easy button for ML, yet. There is no substitute for intelligence, experience, creativity, and hard work. But we are already seeing much of this applied today and, as we continue to push the boundaries, the sky is the limit.

The Future of DevOps

What is DevOps, and what changes will we see to DevOps over the next couple of years?

Our founder, Andy Cureton, takes a position on the Future of DevOps, in his article “How Will DevOps Change“, published in DevOps Online.

View the full article, here.
