DOES 2018 – bigger, better, brighter

We’re back sponsoring the DevOps Enterprise Summit (DOES) 2018 in London – the fifth event in the series we’ve sponsored.

DOES primarily looks at how large enterprises are adopting DevOps and the challenges that come with doing so at scale. This year was no different. Over the past couple of days, we heard talks from the likes of Jaguar Land Rover, Adidas, Nomura and Lloyds Bank – not to mention the usual suspects of Barclays, Hiscox and Disney. Each provided valuable insights into their transformation journeys.

It came as no surprise that the common theme amongst these speakers was that whilst adoption is growing, scaling across different areas of the business is proving the greater challenge. One of the key takeaways for adoption is that businesses need to stop looking at IT as “projects” or “programmes”. IT should instead become “long-lived products”, where the focus is on business outcomes.

The images below, drawn at DOES 2018, give an overview of the different talks that took place over the two-day event:

Two of the most impressive talks were given by Verizon and Disney. We’ve summarised the key takeaways from each below:

Verizon

‘DevOps is not a hobby but a new avenue to revenue’

Delivered by John Scott, Oliver Cantor, & Sanjeev Jain

This presentation focused heavily on how Verizon has enabled different parts of the business with new ways of working, as well as the adoption of new technology. Their talk touched on:

  • The creation of Immersion Centres, where teams focus on current challenges and look to improve them over a six-week period
  • The creation of MVP products
  • New ways of working and the coaching required
  • Using gamification – the “DevOps Cup” – to gather more momentum

Disney

‘Creating Digital Magic’

Delivered by Jason Cox & Jim Vanns

An incredibly powerful talk, with a spectacular cinematic view of some of Disney’s blockbusters. Fundamentally, all areas of the business are powered by technology, and Jim Vanns explained how Industrial Light and Magic (ILM) has used technology to change how it operates. Highlights from the talk included:

  • A technology stack that includes Docker, Ansible and Elasticsearch
  • A strong focus on microservices
  • Main challenges of scale, speed and stability
  • A DevOps transformation focused on leadership, technology and community

DOES 2018 had some amazing presentations, as well as memorable insights from some of the industry’s trailblazers. It is an event for bringing together innovative thinking, and as Gene Kim mentioned in one of the opening remarks: “business leaders who are driving organisations forward in the next 5-10 years will be in this room”.

We don’t believe any other conference brings this type of thought leadership and access to such an open community. We look forward to DOES 2019, which will be spread across three days!

Jason Man
Securing your transformation

At least 42% of CEOs have already begun a business digital transformation, with IT-related priorities at an all-time high (Gartner survey results). While CEOs are beginning to understand and set digital transformation agendas, the responsibility for delivering the promised benefits lies with the CIO. This means that CIOs are equally responsible for ensuring a company’s digital transformation has the processes in place to safeguard security measures and remain compliant with regulations.

73% of CIOs see cybersecurity as a key area of investment in 2018 and 2019. At the same time, digital transformation is seen as the highest-priority strategy to support organisational growth goals. Investing in DevOps is a highly recommended place to start.

This blog looks at how DevOps practices result in more secure systems by design, enabling CIOs to achieve their transformational targets whilst strengthening security.

 

Baking in security from the start

All too often, security has been seen as something to be bolted on to a project after the important features have been completed and tested. This approach was problematic even before agile, although with months or years between releases there was at least time to add security and test before going live.

Today, with an ever-growing cyber threat and organisations striving for continuous delivery with weekly or daily releases, leaving security to the last minute is simply not an option.

The answer lies in the way DevOps rewrites the old ways of working, shifting security left in the SDLC (Software Development Lifecycle) until it is present by default in every iteration.

It does this through a number of approaches, starting with culture. All teams and individuals involved need to understand not just the ‘how’ but the ‘why’. Buy-in to the idea of working toward one shared objective with security at its foundation is essential to success.

Developers should be educated about the importance of introducing security into the SDLC and its impact on delivery. Fostering a culture of care reduces workarounds, removing vulnerabilities and creating more secure systems from the outset.

Promoting a blame-free culture where people feel they can find new ways of working, fail fast and learn from each iteration is imperative – with guidance coming from an overall agile framework. Practitioners often do their best work when they are given the opportunity to exercise the very wealth of knowledge and experience they were hired for in the first place.

 

A practical approach to security 

Automated testing is key, and not just because it reduces human error. It ensures consistent quality gates throughout the SDLC, including security checks. This not only increases confidence in the software being delivered, it guarantees that everything passing through the lifecycle has been cleared by security.
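As a loose illustration of such a gate (the scan-report format, severity levels and threshold below are our own assumptions for the example, not any particular scanner’s real output), a pipeline step might refuse to promote a build like this:

```python
# Illustrative sketch of an automated security quality gate. The report
# structure and severity scale are hypothetical, for the example only.

def passes_security_gate(scan_report, max_severity="medium"):
    """Return True only if no finding exceeds the allowed severity."""
    severity_rank = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    allowed = severity_rank[max_severity]
    return all(severity_rank[finding["severity"]] <= allowed
               for finding in scan_report["findings"])

report = {"findings": [{"id": "CVE-0000-0001", "severity": "low"},
                       {"id": "CVE-0000-0002", "severity": "high"}]}
print(passes_security_gate(report))  # False – the "high" finding blocks the release
```

In a real pipeline this check would run on every commit, so nothing reaches production without having cleared the same security bar.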

DevOps also enables transparency across the SDLC. Using IaC (Infrastructure as Code), teams can put infrastructure through a similar SDLC to the applications it will eventually host. This allows security checks to be applied to these elements too, ensuring compliance, policies and security best practices have been adhered to.

Greater visibility promotes proactiveness, with configuration changes and issues monitored across systems in real time. This in turn offers the ability to identify and act on potential security breaches as they happen – for example, stopping an application before it becomes a threat, without interrupting other systems. This way of working wasn’t possible before DevOps’ holistic approach to software development.

These benefits of DevOps mean QA and security are built into the testing process, with software unable to move through the lifecycle if it does not comply with pre-agreed standards.

Harry McLaren, Managing Consultant at ECS Security, explains more about managing security in a DevOps environment: 

“DevOps and the corresponding tooling means you can respond faster in the development lifecycle. You can fail fast and fail safe. It’s not possible to remove 100% of risk, but it is possible to eliminate the vast majority of it. By using like-for-like code in a development environment, with mirrored dependencies and so on, we can safely fail without risk before the release goes anywhere near the live environment.

“It’s vital to get buy-in from your security team, involving them in the initial conversation when it comes to DevOps. Today’s consumers see security as a priority, they take it for granted. If you break that trust, there can be far-reaching reputational consequences as well as short-term practical ones.” 

 

The future of security 

We’re seeing a shift in how the big players respond to security breaches. There is a trend towards far more public ownership of the breach and transparency as to how the organisation intends to fix or mitigate risk in the future.  

Whilst traditional companies – including some in the banking sector – are more reluctant to take a public stance because of the severity of the reputational threat, modern companies are adopting a different tack.

Amazon and Reddit are two such companies, demonstrating an openness in sharing ideas around how to avoid or deal with security breaches. Netflix is another, going as far as to release Chaos Monkey – an open-source service which identifies groups of systems and randomly terminates one of the systems in each group. Whilst deliberate termination of a system seems illogical, failure happens, and being able to challenge your system’s architecture at a time that suits your business is invaluable.
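The idea is simple enough to sketch in a few lines of Python – the group names, instance IDs and terminate callback below are purely illustrative, not Chaos Monkey’s actual implementation:

```python
import random

# Minimal sketch of the Chaos Monkey idea: identify groups of systems
# and randomly terminate one member of each group, so failure handling
# is exercised on the business's own schedule.

def unleash_chaos(groups, terminate, rng=random):
    """For each named group of instances, terminate one at random."""
    for group_name, instances in groups.items():
        victim = rng.choice(instances)
        terminate(group_name, victim)

terminated = []
groups = {"checkout": ["i-01", "i-02", "i-03"], "search": ["i-10", "i-11"]}
unleash_chaos(groups, lambda group, instance: terminated.append((group, instance)))
print(terminated)  # one randomly chosen instance per group
```

The real service adds safeguards such as schedules and opt-outs; the point of the sketch is only the deliberate, randomised failure injection.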

This open sharing of information is not only bolstering these leaders’ business reputations, it is changing the digital landscape by enabling businesses to build fully resilient applications that can face modern problems.

McLaren agrees: “The general trend is that transparency is becoming a differentiator. Monitoring and early warning are hugely important in order to get insights into what’s occurring. My advice is to empower your developers with data and KPIs – and challenge them.”

 

Organisations with mature DevOps practices are able to build fully-resilient applications that can cope in the face of today’s threat landscape. They do this by building in security early and testing rigorously in a safe environment. 

Would you like to learn more about how DevOps can help to secure your digital transformation? Contact us today for more information. 

Morgan Atkins
DevOps growth is leading to a skills shortage

Now pilot projects are complete, DevOps demand is outstripping talent availability

Andy Cureton, Founder and Managing Director of ECS Digital – winner of the 2018 Best DevOps Consulting Firm award – thinks that the DevOps industry is struggling to meet demand, and will continue to do so in the future as enterprise-scale transformation begins.

Many organisations have now successfully completed their pilot DevOps projects, which are being – or have already been – replaced by larger transformation programmes across the organisation.

“This is driving a sharp increase in demand for support from partners and a challenge to the industry as a whole to satisfy,” said Cureton. “The larger consultancies and outsourcers are struggling to transform themselves and develop DevOps or agile capabilities at scale, and we predict that this will lead to consolidation in the sector as demand outstrips talent availability.”

ECS is leading some of these enterprise-wide transformation programmes itself, and is rolling out a new as-a-service concept that it calls ‘Enablement Pods‘, combining DevOps, agile testing and automation. Cureton added:

“Whilst all of our transformation programmes leverage enablement rather than long term staff augmentation, we will be actively pursuing ‘Innovation Pods’ going forward. These are in effect full-stack teams: from product owner through to architect, developer, QA and DevOps. This is a key strategic area where we feel we can bring additional benefit to our clients.”

The adoption of DevOps practices will differ from organisation to organisation, and that is where DevOps consultancies like ECS come in. These businesses can provide advice on tools, methodologies and people.

ECS Digital’s office is based in central London

Cureton thinks that these are some of the most important traits to have in employees and partners when it comes to DevOps:

  • Pragmatism and outcome-orientation;
  • Teamwork;
  • Empathy;
  • Strong problem-solving abilities;
  • Communication.

On the win itself, Cureton said that he and his team were “extremely proud” to have secured the top spot, along with ECS Digital’s Michel Lebeau, who was announced as the Young DevOps Engineer of the Year.

“We’ve continually innovated and evolved our services over the past 15 years to help organisations realise the benefits of adopting DevOps and are proud to be the only DevOps consultancy to offer specialist testing expertise as a foundation element of our offerings.”

He added, “Recognising the work of the team and making them feel part of something bigger has also seen a boost in team morale which is extremely important for our culture – and a good excuse to celebrate!”

The original article was published on Computing.co.uk on May 29th 2018, read the feature here.

At ECS Digital, we help customers deliver better products faster through the adoption of modern software delivery methods. We understand the pain of regulatory compliance, embracing new technology, disruptive competitors, people and skills shortages, and deliver business value through tailored Digital Transformation.

If you’re looking for help accelerating change within your business, get in touch with us here. 

Andy Cureton
Banking on DevOps

Andy Cureton, Founder and Managing Director, ECS Digital, looks at how, in a competitive environment, banks and other organisations can use the latest IT and business methodologies to modernise their IT systems to meet customer expectations and comply with regulations.

It wouldn’t be an exaggeration to say that the banking sector, like many industries, is now more competitive than ever before. There has never been a more difficult time for the big banks in particular, with the disruption from digital innovation hitting everyone hard. Time is running out and, to stay relevant, today’s big banks need to embrace agile methodologies across their entire organisation.

Digital transformation in the banking sector has a unique set of tough challenges, both external and internal. Along with regulatory changes such as the General Data Protection Regulation (GDPR) and Open Banking, there is increasing external pressure from FinTechs, challenger banks and Google, Apple, Facebook and Amazon (GAFA), who have innovation hardwired into their culture and are more customer centric by nature – exactly where many of the more traditional banks fail.

All this is set against a background of acquisitions, meaning there are now, in effect, four big banks in the UK. Customers may think they are banking with one bank but are in fact sitting on the systems of another. Take for example TSB, whose customers until recently were using Lloyds Banking Group’s core banking systems. This leaves banks with complex ecosystems full of legacy systems that, as of now, no bank has completely got to grips with. Add the issues of dealing with both structured and unstructured data, and it is no wonder that changing and updating systems is a complex problem to solve.

DevOps – an approach to IT where software developers and IT operations combine their skills and responsibilities to produce software and infrastructure services rapidly, frequently and reliably by standardising and automating processes – can help organisations such as banks to address the issues they face. These challenges include overhauling and modernising legacy systems without additional risk and addressing the thorny issue of testing. To remain relevant, organisations need to change their culture one step at a time. Challenger banks are leap-frogging old-fashioned ways of working in favour of agile practices that promote innovation. GAFA have high-performing digital, DevOps-native cultures with levels of innovation, efficiency and customer centricity that most organisations can only dream of. But the good news is that any organisation can incorporate these ways of working into their culture and harness the power of DevOps.

 

The Myth of DevOps

It’s a myth that legacy issues mean DevOps practices can’t be applied, and that the only solution is to rearchitect and replace. Technologies such as containerisation and data virtualisation, coupled with automation, can improve the speed and quality of change in existing systems, whilst reducing reliance on increasingly scarce and expensive specialists. The concept of containerisation essentially allows virtual instances to share a single host operating system and relevant binaries, libraries or drivers. Data virtualisation, on the other hand, provides the ability to create multiple virtual copies of a physical data set without the requirement for the same physical storage. These virtual copies can be created very quickly and then used independently by environments for testing and even production, with only the differences to the base data set being stored. Functionality such as bookmarking and data masking further enhance the performance and storage benefits.

Changing mind-sets, organisational culture and building confidence in new ways of working is essential to getting the most value from DevOps adoption. DevOps provides a structured way of working to improve management frameworks and reduce a product’s time to market, taking it from several months to perhaps weeks. Additionally, it can help to strengthen governance and regulatory compliance across the business whilst increasing innovation and agility.

While many banks intend to adopt new technologies, the execution is often mixed. The first step is accepting that the world of finance is changing and there is a better, different way of doing things. You are only as fast as your slowest link. Any system that is slow and process-heavy will hold back an organisation from moving at the pace their customers expect – and indeed demand – in today’s 24/7 world. If such systems are not improved they limit innovation and become a risk in themselves, as faster, more agile competitors are appearing across the finance sector.

 

A Better, Different Way

Testing has a very important role within the banking sector; ensuring continuous testing is taking place makes regulatory compliance easier to achieve and maintain. The introduction of automation in the testing process can actually reduce the risk of change by removing the opportunity for human error and increasing the achievable test coverage.

Getting the testing strategy right can make transformational changes more achievable by reducing both the cost and time taken to deliver quality software. Testing in banks is done thoroughly, but it is typically manual, time-consuming and error-prone, creating bottlenecks that slow the flow of change and deplete the time and resources available for innovation. It also needs to happen earlier in the Software Development Life Cycle – a concept known as “shifting left”.

Automation brings additional benefits. It speeds up the provisioning of environments and data, and also delivers cost savings. Inconsistent and over-provisioned environments can result in unpredictable outages. The cost in downtime and testers’ time to fix environments is considerable when calculated over the course of a year, with multiple instances each taking two to three days to fix. Configuration management tools such as Ansible and Puppet give businesses increased control over downtime costs by using automation to ensure environments are fit for purpose; containers provide the ability to instantly replace environments that are out of sync.
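Conceptually, what such tools do can be sketched as comparing a desired state declared in code against the actual state of an environment and reporting any drift – the setting names and values below are invented for illustration, not real Ansible or Puppet syntax:

```python
# Hedged sketch of the core idea behind configuration-management tools:
# declare the desired environment state in code, compare it with the
# actual state, and surface any drift for automated remediation.

def detect_drift(desired, actual):
    """Return settings whose actual value differs from the desired one."""
    return {key: {"desired": value, "actual": actual.get(key)}
            for key, value in desired.items()
            if actual.get(key) != value}

desired = {"java_version": "11", "max_heap": "4g", "tls": "enabled"}
actual = {"java_version": "8", "max_heap": "4g", "tls": "enabled"}
print(detect_drift(desired, actual))  # only java_version has drifted
```

The real tools go further by converging the environment back to the desired state idempotently; the drift report is the starting point.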

DevOps brings with it a licence to fail – but to fail fast – something essential for real innovation, with processes in place to spot, learn from and remedy failure quickly and early. In this way, teams are encouraged to be proactive, accepting and understanding of the impact of each decision or change in a blame-free environment. Failure demonstrates that boundaries are being tested, and is an opportunity to learn.

 

The Journey to Increase Innovation and Agility

Banks are slowly changing their organisational structures and operating models to bring the business and IT closer together – although such is the risk-averse nature of the industry that an aversion to quick change is almost built in. But it doesn’t have to be this way – when properly introduced, DevOps need not add risk. In fact, getting the culture and working methodologies right can help to strengthen governance and regulatory compliance across the business whilst increasing innovation and agility.

Ultimately, DevOps adoption is a journey. Many organisations don’t have a blank sheet of paper to start from like challenger banks, FinTechs and GAFA. So, unless they build separately on the side, they’ll always have a heritage challenge. That challenge does, however, come with tried and tested operational processes – which typically demonstrate greater resilience and availability than their nimbler competition. This approach is being pioneered by the Scandinavian bank Nordic Nous, which is using new technology to build a new customer bank alongside the existing one. Over the past decade, it has thrown away its legacy technologies and invested heavily in the right frameworks to adopt agile practices. Combining heritage with the agility, quality and compliance benefits of DevOps gives banks a formidable capability with which to compete in the digital era.

The original article was published on Acquisition International, read the feature here.

At ECS Digital, we help customers deliver better products faster through the adoption of modern software delivery methods. We understand the pain of regulatory compliance, embracing new technology, disruptive competitors, people and skills shortages, and deliver business value through tailored Digital Transformation.

If you’re looking for help accelerating change within your business, get in touch with us here.

Andy Cureton
Why traditional banks need DevOps to remain competitive

The banking landscape is changing at an accelerating rate, and competition in the sector has never been greater. Traditional banks are encountering threats from multiple sources, all of which need to be met and mitigated head on if these banking giants are to stay relevant and competitive.

On one side there are the nimble challenger banks, who boast smaller, easier-to-manage product sets. On another are the regulatory changes, including Open Banking and the EU’s General Data Protection Regulation (GDPR). And the digital unicorns of Google, Apple, Facebook and Amazon (GAFA) are already beginning to stake their own claims on the banking world with their innovation-driven culture and immense worldwide customer scale and data.

The internal threat to traditional banks is no less pressing; the majority are reliant on legacy systems that are slow, bulky and process-heavy. And it is these systems that will hold the banks back from moving at the pace their customers expect and demand. Tied into this is another issue – that of skills shortages. As time goes by, legacy skills are becoming less and less available, and can only be bought at a premium.

 

Unlocking the Value in Legacy Systems 

Time is running out for the traditional banks; if they are to stay relevant they need to embrace agile methodologies across their entire organization – and this is where DevOps can help. It’s true that, for most banks, re-engineering and replacing these bulky legacy systems with modern technology simply isn’t feasible. In most cases it would involve unpalatable levels of risk and would require a capital investment bigger than they could withstand.

A more viable solution is to work with the systems they have, using DevOps practices and tooling to bring them up to speed. DevOps is an approach to IT where software developers and IT operations combine their skills and responsibilities to produce software and infrastructure services rapidly, frequently and reliably by standardising and automating processes. Contrary to popular belief, it’s not purely for new, startup or unicorn companies. Adopting DevOps principles and practices allows companies to unlock value in the systems they already have. It allows them to move as fast as the rest of the marketplace – so maintaining their competitiveness, compliance and, ultimately, profitability.

 

 

Changing the Legacy Mindset

In the more traditional banks, it is common for people and teams to have very set ways of working, often within distinct siloes. To ease the cultural challenges associated with the adoption of new ways of working, it’s important to involve the teams that will be impacted, and help them to fully engage with the benefits both to the business and to their own professional development.

Creating small, interconnected teams, all working towards a common, achievable goal backed by a considered plan of how to get there, makes the transition much more palatable. The agility these integrated, task-focused teams create means they can find the optimal balance between speed, control and risk management, improving efficiency and reducing the time to market of new, fully compliant products.

The key to gaining the most benefit from the DevOps way of working for any business is to understand fully what they are trying to achieve, and which elements are best placed to be transformed to help meet those goals.

 

Regulation vs Innovation

Since the banking crisis of 2008, regulations have grown even tougher. Banks are being closely scrutinised by the Financial Conduct Authority (FCA) and the Prudential Regulation Authority. They also have to adhere to the data management requirements of GDPR and similar regulations in other countries and regions. At the same time, the Payment Services Directive II has given customers access to more innovative and flexible financial services through third-party internet and mobile banking solutions. Keeping up with consumer demands, whilst complying with these new regulations is a fine balancing act.

Banks must keep an eye on every regulatory change whilst at the same time innovating in order to stay competitive. A single mistake at any point in the development process, especially in core systems, could have serious repercussions.

The DevOps methodology of collaboration between business and IT teams can mitigate some of these risks. It ensures regulatory compliance is built into products from the start, and allows any subsequent changes to regulations to be easily and quickly trialled, tested and implemented. The focus on automation, which is part of DevOps, in turn provides the auditability and visibility needed to demonstrate compliance, and cuts down on the need for manual overheads – a huge financial drain on most of the major banks.

 

Automating to Rise to the Challenge

As well as the demands of data security and the new, stricter regulations, traditional banks are also facing competition from challenger banks and GAFA – many of whom have DevOps built into the core of their processes and systems. To rise to these demands, traditional banks need to achieve digital transformation at all levels.

DevOps brings people, processes and technology together, working more collaboratively in order to speed up and improve the quality of the development process, and take software to market faster. Paramount to this is the need to get testing practices right.

Traditionally, testing is a manual, time-consuming process that is prone to error. It commands huge amounts of resource, which then become unavailable for innovation. To test effectively, DevOps and agile testing processes use anonymised, production-like data – data that is consistent and quality assured, can be replicated in realistic production-like scenarios, and can be automated, ensuring consistency across the data used for each set of tests.
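A minimal sketch of that anonymisation step might look like the following – the field names and record shape are hypothetical, and real masking pipelines handle far more field types and referential integrity:

```python
import hashlib

# Illustrative sketch of anonymising production-like test data:
# deterministically mask sensitive fields so repeated test runs stay
# consistent without exposing real customer details.

def anonymise(record, sensitive_fields=("name", "email")):
    masked = dict(record)
    for field in sensitive_fields:
        if field in masked:
            digest = hashlib.sha256(masked[field].encode()).hexdigest()[:8]
            masked[field] = f"{field}-{digest}"
    return masked

row = {"name": "Jane Doe", "email": "jane@example.com", "balance": 120.50}
print(anonymise(row))  # same shape and balance, personal fields masked
```

Because the masking is deterministic, the same source record always yields the same test record, which keeps automated test runs repeatable.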

As well as speed and efficiency, automation also produces cost savings. It reduces the risk of human error and allows increased test coverage in a shorter period of time – which in turn reduces the number of unpredictable and costly outages, with their associated downtime while a fix is sought. The failure of TSB to migrate its customer data without serious operational and security incidents highlights some of the worst possible outcomes.

One top five UK bank is working towards a solution where all processes are automated or orchestrated, including testing. This would dramatically reduce lead times and delays in projects and provide considerable efficiencies in their end-to-end delivery model.

 

A Shift in Culture

The threats from challenger banks and GAFA to the more traditional banks are pressing. And the only way the traditional banks can meet those threats head on is to adapt their cultures to suit more modern ways of working – both at a leadership and a team level.

Leaders within these big banks need to embrace and promote a culture where it’s acceptable to fail – with the emphasis on identifying, learning from and, of course, remedying those failures as quickly and early as possible. They must recognise the need to constantly embrace new ways of doing things – instilling a culture of continuous improvement and growth. One way to achieve this is by running agile ‘experiments’ across multiple teams and locations. These experiments can be used to assess the benefits of new methodologies and tools whilst, at the same time, focusing on communication and collaboration across the teams.

Ultimately, DevOps adoption is a journey. For big banks the challenge is to make this journey from a cumbersome heritage system to a modern, agile way of working as seamless and efficient as possible. The rewards of doing so will be a formidable capability with which to compete in the digital era.

 

The original article was published on Global Banking & Finance review on May 10th 2018, read the feature here.

 

At ECS Digital, we help customers deliver better products faster through the adoption of modern software delivery methods. We understand the pain of regulatory compliance, embracing new technology, disruptive competitors, people and skills shortages, and deliver business value through tailored Digital Transformation.

If you’re looking for help accelerating change within your business, get in touch with us here. 

Andy Cureton
DevOpsDays Beijing 2018

The DevOpsDays conference took place on 5th May at the Empark Grand Hotel. It was the second run in Beijing, following the first in 2017. Attendance seemed to drop significantly compared to 2017, but the event still garnered a good crowd of an estimated 600+ professionals.

This year, the DevOpsDays China core organising committee plans to host the second DevOpsDays Shanghai event in August and the first DevOpsDays Shenzhen event in November. It looks like they will eventually reach more cities in the near future.

The morning session of the conference comprised talks like “Journey from Enterprise Architecture to DevOps” and “The dirty parts of DevOps”, delivered to the whole audience, while the afternoon session was made up of three tracks: Finance, Internet and DevOps Practices.

 

I attended the Finance track, which was made up of implementation stories and talks like “Digitalization: DevOps Design and Thinking” and “Release Fast or Die!”. The finale was anchored by Jez Humble with his talk “What I learned from 4 years sciencing the crap out of DevOps”.

Another highlight of DevOpsDays Beijing 2018 was the official launch of the Chinese edition of The DevOps Handbook, written by Gene Kim, Jez Humble, John Willis and Patrick Debois. Reportedly, the team of translators took about one and a half years to complete the translation – a commendable effort to benefit the Chinese community.

Through the use of WeChat, the organisers were able to connect with attendees through live updates of event information, sharing of official photos taken during the event, and lucky draws conducted via real-time games. Even the slides from all speakers and the video recordings were shared within 48 hours of the event closing. A truly effective use of WeChat to engage with people – and who knows, WeChat might one day become an integral part of China’s DevOps solutions.

From conversations with some of the participants at the conference, it appears that organisations in China generally face challenges similar to those faced by organisations in other countries in the region. In particular, smaller organisations and startups are more willing to experiment with new concepts, and hence they might be the ones spearheading the initial DevOps movement.


Large organisations, meanwhile, are adopting a sit-back-and-watch-first approach, lacking a strong mandate and support from higher management. Eventually, when success stories build up sufficiently, these big players will surely take DevOps more seriously.

Generally, China is catching up very fast in promoting DevOps adoption. As observed by one of the speakers, there has been a multi-fold increase in the number of job advertisements for DevOps-related positions over the past year. He even jokingly advised participants to start changing their job titles to increase their market value. In time, China may well become the “Big Brother” of DevOps in the APAC region.

 

We’d like to thank the organisers of DevOpsDays Beijing.  It was a great event and we hope to see everyone again in the upcoming meet ups and DevOps events.

 

If you’d like to get in touch with us about how we can help you implement DevOps in your business, just click the link below.

Kok Hoong Wai
30 DevOps Tools You Could Be Using

As a DevOps consultancy, we spend a lot of time thinking about, and evaluating DevOps tools.

There are a number of different tools that form part of our DevOps workbench, and we base our evaluation on years of experience in IT, working with complex, heterogeneous technology stacks.

We’ve found that DevOps tooling has become a key part of our tech and operations. We take a lot of time to select and improve our DevOps toolset. The vast majority of tools that we use are open source. By sharing the tools that we use and like, we hope to start a discussion within the DevOps community about what further improvements can be made.

We hope that you enjoy browsing through the list below.

You may already be well acquainted with some of the tools, and some may be newer to you.

1. Puppet

What is it? Puppet is designed to provide a standard way of delivering and operating software, no matter where it runs. Puppet has been around since 2005 and has a large, mature ecosystem, and it has evolved into one of the best-in-breed infrastructure automation tools that can scale. It is backed and supported by a highly active open source community.

Why use Puppet? Planning ahead and using config management tools like Puppet can cut down on the amount of time you spend repeating basic tasks, and help ensure that your configurations are consistent, accurate and repeatable across your infrastructure. Puppet is one of the most mature tools in this area and has an excellent support backbone.

What are the problems with Puppet? The learning curve is quite high for those who are unfamiliar with Puppet, and the Ruby-based DSL may seem unfamiliar to users who have no development experience.
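To give a flavour of that DSL, here is a minimal sketch of a manifest (the package and service names are illustrative) that installs nginx and keeps it running:

```puppet
# Install the nginx package...
package { 'nginx':
  ensure => installed,
}

# ...and make sure the service is enabled and running.
service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # don't start the service before the package exists
}
```

Applying the same manifest repeatedly is safe: Puppet only makes changes when the actual state drifts from the declared state.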

2. Vagrant

What is it? Vagrant – another tool from Hashicorp – provides easy-to-configure, reproducible and portable work environments that are built on top of industry-standard technology. Vagrant helps enforce a single consistent workflow while maximising flexibility for you and your team.

Why use Vagrant? Vagrant provides operations engineers with a disposable environment and consistent workflow for developing and testing infrastructure management scripts. Vagrant can be downloaded and installed within minutes on Mac OS X, Linux and Windows.

Vagrant allows you to create a single file for your project, to define the kind of machine you want to create, the software that needs to be installed, and the way you want to access the machine.
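As a sketch, a minimal Vagrantfile might look like this (the box name and provisioning commands are illustrative):

```ruby
# Vagrantfile: describes the machine, its network, and how it is provisioned.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"                       # base image to build from
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx                              # example provisioning step
  SHELL
end
```

`vagrant up` then builds the machine from this file, and `vagrant destroy` throws it away again.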

Are there any problems with Vagrant? Vagrant has been criticised as being painfully, troublingly slow.

3. ELK Stack

What is ELK? The ELK stack actually refers to three technologies – Elasticsearch, Logstash and Kibana. Elasticsearch is a NoSQL database that is based on the Lucene search engine, Logstash is a log pipeline tool that accepts inputs from different sources and exports the data to various targets, and Kibana is a visualisation layer for Elasticsearch. And they work very well together.
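As an illustration of how the three fit together, a minimal Logstash pipeline (the path and host here are placeholders) might ship syslog entries into Elasticsearch, ready for Kibana to visualise:

```conf
input {
  file { path => "/var/log/syslog" }            # tail a log file
}
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }   # parse each line into fields
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```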

What are its use cases? Together they’re often used in log analysis in IT environments (although you can also use the ELK stack for BI, security and compliance & analytics.)

Why is it popular? ELK is incredibly popular. The stack is downloaded 500,000 times every month. This makes it the world’s most popular log management platform. SaaS and web startups in particular are not overly keen to stump up for enterprise products such as Splunk. In fact, there’s an increasing amount of discussion as to whether open source products are overtaking Splunk, with many seeing 2014 as a tipping point.

4. Consul.io

What is Consul.io? Consul is a tool for discovering and configuring services in your infrastructure. It can be used to present nodes and services in a flexible interface, allowing clients to have an up-to-date view of the infrastructure they’re part of.

Why use Consul.io? Consul.io comes with a number of features for providing consistent information about your infrastructure. Consul provides service and node discovery, tagging, health checks, consensus based election routines, key value storage and more. Consul allows you to build awareness into your applications and services.
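For example, a node can register a service with a health check via a small JSON definition (the name, port and health endpoint here are illustrative):

```json
{
  "service": {
    "name": "web",
    "port": 80,
    "tags": ["nginx"],
    "check": {
      "http": "http://localhost:80/health",
      "interval": "10s"
    }
  }
}
```

Consul then advertises the service through its DNS and HTTP interfaces, and stops advertising it if the health check fails.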

Anything else I should know? Hashicorp have a really strong reputation within the developer community for releasing strong documentation with their products, and Consul.io is no exception. Consul is distributed, highly available, and datacentre aware.

5. Jenkins

What is Jenkins? Everyone loves Jenkins! Jenkins is an open source CI tool, written in Java. CI is the practice of automatically running tests on a non-developer machine every time someone pushes code into a source repo, and a CI server such as Jenkins is widely considered a prerequisite for practising Continuous Integration.

Why would I want to use Jenkins? Jenkins helps automate a lot of the work of frequent builds, allows you to resolve and detect issues quickly, and also reduce integration costs because serious integration issues become less likely.

Any problems with Jenkins? Jenkins configuration can be tricky. Its UI has evolved over many years without a guiding vision, and has arguably become more complex. It has been compared unfavourably to more modern tools such as Travis CI (which of course isn’t open source).
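Modern Jenkins versions let you keep the pipeline definition in the repository itself as a Jenkinsfile. A minimal declarative sketch (the `make` targets are placeholders for your own build commands):

```groovy
pipeline {
    agent any                     // run on any available executor
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
}
```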

6. Docker

What is it? There was a time last year when it seemed that all anyone wanted to talk about was Docker. Docker provides a portable application environment which enables you to package an application and its dependencies into a single unit for development and deployment.

Should I use it? Depending on who you ask, Docker is either the next big thing in software development or a case of the emperor’s new clothes. Docker has some neat features, including DockerHub, a public repository of Docker containers, and docker-compose, a tool for managing multiple containers as a unit on a single machine.

It’s been suggested that Docker can be a way of reducing server footprint by packing containers on physical tin without running physical kernels – but equally Docker’s security story is a hot topic. Docker’s UI also continues to improve – Docker has just released a new Mac and Windows client.

What’s the verdict? Docker can be a very useful technology – particularly in development and QA – but you should think carefully about whether you need or want to run it in production. Not everyone needs to operate at Google scale.
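If you do want to experiment, packaging an application starts with a Dockerfile. A hypothetical example for a small Python app (the file names are illustrative):

```dockerfile
FROM python:3-slim                    # small base image keeps the container light
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]              # default process when the container starts
```

`docker build -t myapp .` then produces an image you can run anywhere Docker is installed.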

7. Ansible

What is it? Ansible is a free platform for configuring and managing servers. It combines multi-node software deployment, task execution and configuration management.

Why use Ansible? Configuration management tools such as Ansible are designed to automate away much of the work of configuring machines.

Manually configuring machines via SSH, and running the commands you need to install your application stack, editing config files, and copying application code can be tedious work, and can lead to each machine being its own ‘special snowflake’ depending on who configured it. This can compound if you are setting up tens, or thousands of machines.

What are the problems with using Ansible? Ansible is considered to have a fairly weak UI. Tools such as Ansible Tower exist, but many consider them a work in progress, and using Ansible Tower drives up the TCO of using Ansible.

Ansible also has no notion of state – it just executes a series of tasks, stopping when it finishes or when it encounters an error. Ansible has also been around for less time than Chef and Puppet, meaning that it has a smaller developer community than some of its more mature competitors.
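To show the flavour of the tool, a short playbook (the host group and package are illustrative) replacing the manual SSH routine described above might look like:

```yaml
# playbook.yml: configure every host in the 'webservers' group identically.
- hosts: webservers
  become: true                  # escalate to root for package/service work
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory playbook.yml` applies the same steps to every machine, removing the ‘special snowflake’ problem.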

8. Saltstack

What is it? Saltstack, much like Ansible, is a configuration management tool and remote execution engine. It is primarily designed to allow the management of infrastructure in a predictable and repeatable way. Saltstack was designed to manage large infrastructures with thousands of servers – the kind seen at LinkedIn, Wikipedia and Google.

What are the benefits of using Salt? Because Salt uses the ZeroMQ framework, and serialises messages using msgpack, Salt is able to achieve significant speed and bandwidth gains over traditional transport layers, and is thus able to push far more data more quickly through a given pipe. Getting set up is very simple, and someone new to configuration management can be productive before lunchtime.

Any problems with using Saltstack? Saltstack is considered to have weaker Web UI and reporting capabilities than some of its more mature competitors. It also lacks deep reporting capabilities. Some of these issues have been addressed in Saltstack Enterprise, but this may be out of budget for you.
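A Salt state file reads much like the Ansible example above. A minimal sketch (again with illustrative names):

```yaml
# /srv/salt/nginx/init.sls: declare the desired state of the nginx service.
nginx:
  pkg.installed: []             # the package must be present...
  service.running:              # ...and the service running
    - enable: True
    - require:
      - pkg: nginx              # don't start before the package is installed
```

`salt '*' state.apply nginx` then pushes this state out to every connected minion.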

9. Kubernetes

What is it? Kubernetes is an open-source container cluster manager by Google. It aims to provide a platform for automating deployment, scaling and operations of container clusters across hosts.

Why should I use it? Kubernetes is a system for managing containerised applications across a cluster of nodes. Kubernetes was designed to address some of the disconnect between the way that modern, clustered applications work, and the assumptions they make about some of their environments.

On the one hand, users shouldn’t have to care too much about where work is scheduled: work is presented at the service level and can be carried out by any of the member nodes. On the other hand, placement still matters, because a sysadmin will want to make sure that not all instances of a service end up on the same host. Kubernetes is designed to make these scheduling decisions easier.
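These decisions are expressed declaratively. A sketch of a Deployment (the image and names are illustrative, and the API version may differ with your cluster version) that asks for three replicas of a container, leaving placement to the scheduler:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                   # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.15     # illustrative image
          ports:
            - containerPort: 80
```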

10. Collectd

What is it? Collectd is a daemon that collects statistics on system performance, and provides mechanisms to store the values in different ways.

Why should I use collectd? Collectd helps you collect and visualise data about your servers, and thus make informed decisions. It’s useful for working with tools like Graphite, which can render the data that collectd collects.

Collectd is an incredibly simple tool, and requires very few resources. It can even run on a Raspberry Pi! It’s also popular because of its pervasive modularity. It’s written in C, contains almost no code that would be specific to any operating system, and will therefore run on any Unix-like operating system.
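A typical setup loads a handful of plugins and forwards the values to Graphite. A sketch of the relevant collectd.conf fragment (host and port are placeholders):

```conf
LoadPlugin cpu
LoadPlugin memory
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "graphite">
    Host "localhost"        # where Graphite's carbon listener runs
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>
```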

11. Git

What is Git? Git is the most widely used version control system in the world today.

An incredibly large number of products use Git for version control: from hobbyist projects to large enterprises, from commercial products to open source. Git is designed with speed, flexibility and security in mind, and is an example of a distributed version control system.

Should I use Git? Git is an incredibly impressive tool – combining speed, functionality, performance and security. When compared side by side to other SCM tools, Git often comes out ahead. Git has also emerged as a de facto standard, meaning that vast numbers of developers already have Git experience.

Why shouldn’t I use Git? Git has an initially steep learning curve. Its terminology can seem a little arcane to novices. Revert, for instance, has a very different meaning in Git than it does in SVN and CVS. However, Git rewards that investment with increased development speed once mastered.
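The day-to-day workflow is worth that investment. A short, self-contained session (names and messages are illustrative) showing the branch-and-merge cycle:

```shell
# Create a repository and make an initial commit.
git init demo
cd demo
git config user.email "dev@example.com"   # local identity just for this example
git config user.name "Dev"
echo "hello" > README.md
git add README.md
git commit -m "Initial commit"

# Branch off, commit, and merge back into the default branch.
default_branch=$(git symbolic-ref --short HEAD)   # 'master' or 'main' depending on Git version
git checkout -b feature/greeting
echo "hi" >> README.md
git commit -am "Add greeting"
git checkout "$default_branch"
git merge feature/greeting      # fast-forwards, since the default branch hasn't moved
```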

12. Rudder

What is Rudder? Rudder is (yet another!) open source audit and configuration management tool that’s designed to help automate system config across large IT infrastructures.

What are the benefits of Rudder? Rudder allows users (even non-experts) to define parameters in a single console, and check that IT services are installed, running and in good health. Rudder is useful for keeping configuration drift low. Managers are also able to access compliance reports and access audit logs.  Rudder is built in Scala.

13. Gradle

What is it? Gradle is an open source build automation tool that builds upon the concepts of Apache Ant and Apache Maven and introduces a Groovy-based DSL instead of the XML form used by Maven.

Why use Gradle instead of Ant or Maven? For many years, build tools were simply about compiling and packaging software. Today, projects tend to involve larger and more complex software stacks, have multiple programming languages, and incorporate many different testing strategies. It’s now really important (particularly with the rise of Agile) that build tools support early integration of code as well as easy delivery to test and prod.

Gradle allows you to map out your problem domain using a domain specific language, which is implemented in Groovy rather than XML. Writing code in Groovy rather than XML cuts down on the size of a build, and is far more readable.
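A minimal build.gradle for a Java project might look like this (the dependency version is illustrative, and newer Gradle releases prefer `testImplementation` over `testCompile`):

```groovy
apply plugin: 'java'            // adds compileJava, test, jar tasks, etc.

repositories {
    mavenCentral()              // where dependencies are resolved from
}

dependencies {
    testCompile 'junit:junit:4.12'
}
```

`gradle build` then compiles, tests and packages the project in one step.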

14. Chef

What is Chef? Chef is a config management tool designed to automate machine setup on physical servers, VMs and in the cloud. Many companies use Chef software to manage and control their infrastructure – including Facebook, Etsy and Indiegogo. Chef is designed to define Infrastructure as Code.

What is infrastructure as code? Infrastructure as Code means that, rather than manually changing and setting up machines, the machine setup is defined in a Chef recipe. Leveraging Chef allows you to easily recreate your environment in a predictable manner by automating the entire system configuration.
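A recipe is just Ruby with Chef's resource DSL. A minimal sketch (package and template names are illustrative):

```ruby
# recipes/default.rb: declare the desired state; Chef converges the node to it.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'                 # rendered from the cookbook's templates/
  notifies :reload, 'service[nginx]'      # reload only when the file actually changes
end

service 'nginx' do
  action [:enable, :start]
end
```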

What are the next steps for Chef? Chef has released Chef Delivery, a tool for creating automated workflows around enterprise software development and establishing a pipeline from creation to production. Chef Delivery establishes a pipeline that every new piece of software should go through in order to prepare it for production use. Chef Delivery works in a similar way to Jenkins, but offers greater reporting and auditing capabilities.

15. Cobbler

What is it? Cobbler is a Linux provisioning server that facilitates a network-based system installation of multiple OSes from a central point using services such as DHCP, TFTP and DNS.

Cobbler can be configured for PXE, reinstallations and virtualised guests using Xen, KVM and VMware. Cobbler also comes with a lightweight configuration management system, as well as support for integrating with Puppet.

16. SimianArmy

What is it? SimianArmy is a suite of tools designed by Netflix to support cloud operations. ChaosMonkey is part of SimianArmy, and is described as a ‘resiliency tool that helps applications tolerate random instance failures.’

What does it do? The SimianArmy suite of tools are designed to help engineers test the reliability, resiliency and recoverability of their cloud services running on AWS.

Netflix began the process of creating the SimianArmy suite of tools soon after they moved to AWS. Each ‘monkey’ is designed to help Netflix make its service less fragile, and better able to support continuous service.

The SimianArmy includes:

  • Chaos Monkey – randomly shuts down virtual machines (VMs) to ensure that small disruptions will not affect the overall service.
  • Latency Monkey – simulates a degradation of service and checks to make sure that upstream services react appropriately.
  • Conformity Monkey – detects instances that aren’t coded to best-practices and shuts them down, giving the service owner the opportunity to re-launch them properly.
  • Security Monkey – searches out security weaknesses, and ends the offending instances. It also ensures that SSL and DRM certificates are not expired or close to expiration.
  • Doctor Monkey – performs health checks on each instance and monitors other external signs of process health such as CPU and memory usage.
  • Janitor Monkey – searches for unused resources and discards them.

Why use SimianArmy? SimianArmy is designed to make cloud services less fragile and more capable of supporting continuous service when parts of a cloud service encounter a problem. By testing in this way, potential problems can be detected and addressed early.

17. AWS

What is it? AWS is a secure cloud services platform, which offers compute, database storage, content delivery and other functionality to help businesses scale and grow.

Why use AWS? EC2 is the most popular AWS service, and provides a very easy way for DevOps teams to run tests. Whenever you need them, you can set up an EC2 server with a machine image up and running in seconds.

EC2 is also great for scaling out systems. You can set up bundles of servers for different services, and when there is additional load on servers, scripts can be configured to spin up additional servers. You can also handle this automatically through Amazon auto-scaling.

What are the downsides of AWS? The main downside of AWS is that all of your servers are virtual. There are options available on AWS for single tenant access, and different instance types exist, but performance will vary and never be as stable as physical infrastructure.

If you don’t need elasticity, EC2 can also be expensive at on-demand rates.

18. CoreOS

What is it? CoreOS is a Linux distribution that is designed specifically to solve the problem of making large, scalable deployments on varied infrastructure easy to manage. It maintains a lightweight host system, and uses containers to provide isolation.

Why use CoreOS? CoreOS is a barebones Linux distro. It’s known for having a very small footprint, built for “automated updates” and geared specifically for clustering.

If you’ve installed CoreOS on disk, it updates by maintaining two system partitions: one “known good”, because you booted from it, and another that updates are downloaded to. It then automatically reboots and switches to the updated partition.

CoreOS gives you a stack of systemd, etcd, Fleet, Docker and rkt with very little else. It’s useful for spinning up a large cluster where everything is going to run in Docker containers.

What are the alternatives? Snappy Ubuntu and Project Atomic offer similar solutions.

19. Grafana

What is Grafana? Grafana is a neat open source dashboard tool. Grafana is useful because it displays various metrics from Graphite through a web browser.

What are the advantages of Grafana? Grafana is very simple to set up and maintain, and displays metrics in a simple, Kibana-like display style. In 2015, Grafana also released a SaaS component, Grafana.net.

You might wonder how Grafana differs from the ELK stack. While ELK is about log analytics, Grafana is more about time-series monitoring.

Grafana helps you maximise the power and ease of use of your existing time-series store, so you can focus on building nice looking and informative dashboards. It also lets you define generic dashboards through variables that can be used in metrics queries. This allows you to reuse the same dashboards for different servers, apps and experiments.

20. Chocolatey

What is Chocolatey? Chocolatey is apt-get for Windows. Once installed, you can install Windows applications quickly and easily using the command line. You could install Git, 7-Zip, Ruby, or even Microsoft Office! The catalogue is now incredibly complete – you really can install a wide array of apps using Chocolatey.

Why should I use Chocolatey? Because manual installs are slow and inefficient. Chocolatey promises that you can install a program (including dependencies, such as the .NET framework) without user intervention.

You could use Chocolatey on a new PC to write a simple command, and download and install a fully functioning dev environment in a few hours. It’s really cool.

21. Zookeeper

What is it? Zookeeper is a centralised service for maintaining configuration information, naming, providing distributed synchronisation, and providing group services. All of these services are used in one form or another by distributed applications.

Why use Zookeeper? Zookeeper is a co-ordination system for maintaining distributed services. It’s best to see Zookeeper as a giant properties file for different processes, telling them which services are available and where they are located. This post from the Engineering team at Pinterest outlines some possible use cases for Zookeeper.

Where can I read more? Aside from Zookeeper’s documentation, which is pretty good, chapter 14 of “Hadoop: The Definitive Guide” spends around 35 pages describing in some detail what Zookeeper does.

22. GitHub

What is GitHub? GitHub is a web based repository service. It provides distributed revision control and source control management functionality.

At the heart of GitHub is Git, the version control system designed and developed by Linus Torvalds. Git, like any other version control system, is designed to track, manage and store revisions of projects.

GitHub is a centralised hosting service for Git repositories, which adds a web-based graphical user interface and several collaboration features, such as wikis and basic task management tools.

One of GitHub’s coolest features is “forking” – copying a repo from one user’s account to another. This allows you to take a project that you don’t have write access to, and modify it under your own account. If you make changes, you can send a notification called a “pull request” to the original owner. The user can then merge your changes with the original repo.

23. Drone

What is it? Drone is a continuous integration platform, based on Docker and built in Go. Drone works with Docker to run tests, and integrates with GitHub, GitLab and Bitbucket.

Why use Drone? The use case for Drone is much the same as for any other continuous integration solution. CI is the practice of making regular commits to your code base; because you end up building and testing your code more frequently, integration problems surface earlier and the development process speeds up. Drone does exactly this – speeding up the process of building and testing.

How does it work? Drone pulls code from a Git repository, and then runs scripts that you define. Drone allows you to run any test suite, and will report back to you via email or indicate the status with a badge on your profile. Because Drone is integrated with Docker, it can support a huge number of languages including PHP, Go, Ruby and Python, to name just a few.
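Those scripts live in a .drone.yml at the root of the repository. A sketch in the 0.x format (the exact schema varies between Drone versions; the image and commands are illustrative):

```yaml
# .drone.yml: each step runs inside the named Docker image.
pipeline:
  test:
    image: golang:1.9
    commands:
      - go test ./...
```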

24. PagerDuty

What is it? PagerDuty is an alarm aggregation and monitoring system that is used predominantly by support and sysadmin teams.

How does it work? PagerDuty allows support teams to pull all of their incident reporting tools into a single place, and receive an alert when an incident occurs. Before PagerDuty came along, companies used to cobble together their own incident management solutions. PagerDuty is designed to plug into whatever monitoring systems a company is already using, and manage incident reporting from one place.

Anything else? PagerDuty provides detailed metrics on response and resolution times too.

25. Dokku

What is it? Dokku is a mini-Heroku, running on Docker.

Why should I use it? If you’re already deploying apps the Heroku way, but don’t like the way that Heroku is getting more expensive for hobbyists, running Dokku on a provider such as DigitalOcean could be a great solution.

Having the ability to deploy a site to a remote with a simple git push, just as you would push to GitHub, and have it live immediately is a huge boon. Here’s a tutorial for getting it up and running.

26. OpenStack

What is it? OpenStack is free and open source software for cloud computing, which is mostly deployed as Infrastructure as a Service.

What are the aims of OpenStack? OpenStack is designed to help businesses build Amazon-like cloud services in their own data centres.

OpenStack is a Cloud OS designed to control large pools of compute, storage and networking resources throughout a datacentre, managed through a dashboard that gives administrators control while also empowering users to provision resources.

27. Sublime-Text

What is it? Sublime-Text is a cross-platform source code editor with a Python API. It supports many different programming languages and markup languages, and has extensive code highlighting functionality.

What’s good about it? Sublime-Text is feature-ful, it’s stable, and it’s being continuously developed. It is also built from the ground up to be extremely customisable (with a great plugin architecture, too).

28. Nagios

What is it? Nagios is an open source tool for monitoring systems, networks and infrastructure. Nagios provides alerting and monitoring services for servers, switches, applications and services.

Why use Nagios? Nagios’ main strengths are that it is open source, relatively robust and reliable, and highly configurable. It has an active development community, and runs on many different kinds of operating systems. You can use Nagios to monitor services such as DHCP, DNS, FTP, SSH, Telnet, HTTP, NTP, POP3, IMAP, SMTP and more. It can also be used to monitor database servers such as MySQL, Postgres, Oracle and SQL Server.

Has it had any criticism? Nagios has been criticised as lacking scalability and usability. However, Nagios is stable and its limitations and problems are well-known and understood. And certainly some, including Etsy, are happy to see Nagios live on a little longer.
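Part of that configurability comes from check plugins: small programs whose exit code reports the state to Nagios (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal sketch of a disk-usage check (the thresholds are illustrative):

```shell
# Write and run a tiny Nagios-style check plugin for root-filesystem usage.
cat > check_disk.sh <<'EOF'
#!/bin/sh
USED=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$USED" -ge 95 ]; then
    echo "DISK CRITICAL - ${USED}% used"; exit 2
elif [ "$USED" -ge 85 ]; then
    echo "DISK WARNING - ${USED}% used"; exit 1
else
    echo "DISK OK - ${USED}% used"; exit 0
fi
EOF
chmod +x check_disk.sh
./check_disk.sh; echo "exit code: $?"
```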

29. Spinnaker

What is it? Spinnaker is an open-source, multi-cloud CD platform for releasing software changes with high velocity and confidence.

What’s it designed to do? Spinnaker was designed by Netflix as the successor to its “Asgard” project. Spinnaker is designed to allow companies to hook into and deploy assets across multiple cloud providers at the same time.

What’s good about it? It’s battle-tested on Netflix’s infrastructure, and allows the creation of pipelines that begin with the creation of some deployable asset (say a Docker image or a jar file), and end with a deployment. Spinnaker offers an out of the box setup, and engineers can make and re-use pipelines on different workflows.

30. Flynn

What is it? Flynn is one of the most popular open source Docker PaaS solutions. Flynn aims to provide a single platform that Ops can provide to developers to power production, testing and development, freeing developers to focus on their applications.

Why should you use Flynn? Flynn is an open source PaaS built from pluggable components that you can mix and match however you want. Out of the box, it works in a very similar way to Heroku, but you are able to replace pieces and put whatever you need into Flynn.

Is Flynn production-ready? The Flynn team correctly point out that “production ready” means different things to different people. As with many of the tools in this list, the best way to find out if it’s a fit for you is to try it!

If you’re interested in learning more about DevOps or specific DevOps tools, why not take a look at our Training pages. 

We offer regular Introduction to DevOps courses, and have a number of upcoming Jenkins training courses.

Jason Man
Why you should invest in AWS Big Data & 8 steps to becoming certified

A decision that many engineers face at some point of their career is deciding what to focus their attention on next. One of the amazing advantages of working in a consultancy is being exposed to many different technologies, providing you the opportunity to explore any emerging trends you might be interested in. I’ve been lucky enough to work with a huge variety of clients ranging from industry leaders in the FTSE 100 to smaller start-ups disrupting the same technology space.

So why did I pick Big Data?

A common pattern I’ve noticed is that everyone has access to data – large amounts of raw, unstructured data. Business and technology leaders all recognise the importance of it, and the value and insight that it can deliver. Processes have been established to extract, transform and store this large amount of information, but the architecture is usually inefficient and incomplete.

Years ago these steps may have equated to the definition of an efficient data pipeline, but now, with emerging technologies such as Kinesis Streams, Redshift and even serverless databases, there is another way. We now have the possibility of a real-time, cost-efficient, low-operational-overhead solution.

Alongside this, companies set their sights on creating a data lake in the cloud. In doing so, they take advantage of a whole suite of technologies to store information in formats that they currently leverage and also in a configuration they possibly may harness in the future. These are all clear steps in the journey towards digital transformation, and with the current pace of development in AWS technologies it is the perfect time to become more acquainted with Big Data.

 

But why is the certification necessary?

The AWS Certified Big Data – Specialty exam introduces and validates several key big data fundamentals. The exam itself is not limited to AWS-specific technologies, but also explores the wider big data ecosystem. Taken straight from the exam guide, the domains cover:

  1. Collection
  2. Storage
  3. Processing
  4. Analysis
  5. Visualization
  6. Data Security

These domains involve a broad range of technical roles ranging from data engineers and data scientists to individuals in SecOps. Personally, I’ve had some exposure to collection and storage of data but much less with regards to visualisation and security. You certainly have to be comfortable with wearing many different hats when tackling this exam as it tests not only your technical understanding of the solutions but also the business value created from the implementation. It’s equally important to consider the costs involved including any forecasts as the solution scales.

Having already completed several associate exams, I found this certification much more difficult, because you are required to deep-dive into Big Data concepts and the relevant technologies. One of the benefits of this certification is that the scope extends to how these technologies are applied to Big Data problems, so be prepared to dive into Machine Learning and popular frameworks like Spark & Presto.

 

Okay so how do I pass the exam?

1. A Cloud Guru’s certified big data specialty course provides an excellent introduction and overview.

2. Have some practical experience of Big Data in AWS; theoretical knowledge is not enough to pass this exam…

  1. Practice architecting data pipelines, consider when Kinesis Streams vs Firehose would be appropriate.
  2. Think about how the solution would differ according to the size of the data transfer, sometimes even Snowmobile can become efficient.

3. Understand the different storage options on AWS – S3, DynamoDB, RDS, Redshift, HDFS vs EMRFS, HBase…

4. Understand the differences and use cases of popular Big Data frameworks e.g. Presto, Hive, Spark. 

5. Data Security contributes the most to your overall exam score at 20%, and it touches every single AWS service. There are always options for making a solution more secure, and some are enabled by default.

  1. Understand how to enable encryption at rest and in transit, whether to use KMS-managed or S3-managed keys, and when to encrypt client side vs server side.
  2. Know how to grant privileged access to data, e.g. via IAM policies or Redshift views.
  3. Understand authentication flows with Cognito and integrations with external identity providers.

6. Performance is a recurring theme

  1. Have a sound understanding of what GSIs (Global Secondary Indexes) and LSIs (Local Secondary Indexes) are in DynamoDB.
  2. Consider partition and sort key design, and distribution styles in Redshift, across the database services.
  3. Know the different compression formats and their trade-offs between compression ratio and compression/decompression speed.
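For the compression point, the Python standard library is a convenient way to feel the ratio trade-off yourself (the gzip, bz2 and lzma modules standing in for the GZIP, BZIP2 and LZMA formats). Remember for the exam that splittability on Hadoop matters too: of these three formats, only BZIP2 is splittable.

```python
import bz2
import gzip
import lzma

# A deliberately repetitive sample so all three codecs compress well.
sample = b"user_id,event,timestamp\n" * 10000

# Compare how much each codec shrinks the same input. Timing the calls
# would show the speed side of the trade-off as well.
for name, codec in [("gzip", gzip), ("bz2", bz2), ("lzma", lzma)]:
    compressed = codec.compress(sample)
    print(f"{name}: {len(compressed) / len(sample):.4f} of original size")
```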

7. Dive into Machine Learning (ML)

  1. The Cloud Guru course mentioned above gives a good overview of the different ML models.
  2. If you have time, I would recommend the machine learning course by Andrew Ng on Coursera. It goes into more technical depth than you will need for the exam, but it gives a novice a very good introduction to the whole machine learning landscape.

8. Dive into Visualisation

  1. The A Cloud Guru course provides more than enough knowledge to tackle any questions here.
  2. Again, if you have the time, there’s an excellent data science course on Udemy with a data visualisation chapter that would prove useful here.

 

Exam prep

It can’t be emphasised enough that AWS themselves provide amazing learning resources. As preparation for the exam, definitely watch re:Invent videos and read AWS blogs and case studies.

 

Watch these videos:

  1. AWS re:Invent 2017: Big Data Architectural Patterns and Best Practices on AWS 
  2. AWS re:Invent 2017: Best Practices for Building a Data Lake in Amazon S3 and Amazon Glacier
  3. AWS re:Invent 2016: Deep Dive: Amazon EMR Best Practices & Design Patterns  
  4. AWS Summit Series 2016 | Chicago – Deep Dive + Best Practices for Real-Time Streaming Applications 

 

Read these AWS blogs:

  1. Secure Amazon EMR with Encryption 
  2. Building a Near Real-Time Discovery Platform with AWS 

 

Whitepapers

  1. Streaming Data Solutions on AWS with Amazon Kinesis
  2. Big Data Analytics Options on AWS 
  3. Lambda Architecture for Batch and Real-Time Processing on AWS with Spark Streaming and Spark SQL 

 

Also read the developer guides for all of the Big Data services.

 

One last note…

This exam will expect you to consider each question from many different perspectives. You’ll need to think about not just the technical feasibility of the solution presented but also the business value it can create. The majority of questions are scenario-specific, and often there is more than one valid answer – look for subtle clues to determine which solution is more ‘correct’ than the others, e.g. whether speed is a factor or whether the question expects you to answer from a cost perspective.

Finally, this exam is very long (three hours) and requires a lot of reading. I found that the time given was more than enough, but remember to pace yourself, otherwise you can burn out quite easily.

Hopefully my experience and tips will help you prepare for the exam. Let us know if they did.

Good Luck!!!

Visit our services to explore how we enable organisations to transform their internal cultures, to make it easier for teams to collaborate, and adopt practices such as Continuous Integration, Continuous Delivery, and Continuous Testing. 

ECS Digital win twice at this year’s Computing DevOps Excellence awards

The ECS Digital team is extremely proud to have taken home not one, but two awards from last night’s Computing DevOps Excellence awards.

In naming us ‘Best DevOps Consulting Firm’, the panel of judges recognised our contribution within the DevOps space, with over a decade of delivering successful projects across multiple industries, territories and technologies.

But the fun didn’t stop there. Our very own Michel Lebeau was named ‘Young DevOps Engineer of the Year’. This award is a tribute to his continued commitment to exceeding customers’ expectations, no matter the effort and self-sacrifice required.

Our diverse and highly skilled team is the reason we maintain a leading position helping enterprises transform through the adoption of DevOps. These awards are testament to the team’s singular focus on helping our customers meet and exceed their goals through the adoption of modern ways of working and technology. Every customer is unique, and each project has challenges that require partnering in the true sense of the word.

I would like to congratulate everyone at ECS Digital on their win last night, and to thank both our customers and partners for making it possible!

Get in touch to find out how ECS Digital can help you.  

Andy Cureton Michel Lebeau

Alexa: Building Skills for the World of Tomorrow

We have all seen the TV ads with someone asking Alexa (Amazon’s personal assistant AI) to dim the lights or start playing ‘The Grand Tour’ on Prime Video, and this technology is growing bigger and faster every day.

Most commercial technologies, like computers and the internet, started their lives in the hands of big businesses and large institutes that could afford the high initial R&D costs. Amazon, by contrast, has taken the reverse approach and employed a small-scale, iterative expansion of the product.

By providing developers access to the Alexa development kit and opening the voice service to the public, Amazon have made Alexa development a straightforward, painless and rewarding process.

Amazon incentivises its cult following of open source developers by rewarding those who create great skills that others want to use. Amazon announced:

“Publish a new skill this month and get an Alexa water bottle to help you stay hydrated during your coding sessions. If more than 75 customers use your skill in its first 30 days in the Alexa Skills Store, you can also qualify to receive an Echo Dot to help you make Alexa even smarter. The skill with the most unique users within its first 30 days after publishing in February will also earn an Echo Spot.”

Vocal Skills Revolution

We should all remember the mobile app revolution and the tremendous increase in the number of smartphone users experienced in global mobile app markets. A massive increase in the user base drove innovation, producing better mobile phones. An organised marketplace for app downloads, timely updates and advanced app development platforms became the norm. Most significantly, some very useful and revolutionary apps have become part of our everyday lives. With the number of users almost doubling over the last five years, mobile app developers can reach more consumers than ever.

At ECS Digital, we believe Voice will experience the same type of growth as mobile applications did.

As consumers command more of their day-to-day lives using voice-controlled technologies, from smart TVs to Alexa-enabled electric cars, we can be safe in the knowledge that the voice revolution is coming and will change the way future generations interact with technology.

Alexa for Business

What is Alexa for Business?

Alexa for Business makes it easy for you to use Alexa in your organisation. Alexa for Business provides tools to manage Alexa devices, enrol users and configure skills across those devices. You can build your own context-aware voice skills using the Alexa Skills Kit (ASK) and conferencing device APIs, and you can make them available as private skills for your organisation.

What is an Alexa Skill?

Alexa is Amazon’s voice service and the brain behind tens of millions of devices like the Amazon Echo, Echo Dot, and Echo Show. It provides capabilities, or skills, that enable customers to create a more personalised experience. There are now tens of thousands of skills from companies like Starbucks, Uber, and Capital One as well as other innovative designers and developers.

Alexa Voice Service

The Alexa Voice Service (AVS) enables you to integrate Alexa directly into your products. Amazon provides access to a suite of resources to quickly and easily build Alexa-enabled products, including APIs, hardware and software development tools, and documentation. With AVS, you can add a new intelligent interface to your products and offer your customers access to a growing number of Alexa features, smart home integrations, and skills.

What is the Alexa Skills Kit?

The Alexa Skills Kit (ASK) is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills. ASK enables designers, developers, and brands to build engaging skills and reach customers through tens of millions of Alexa-enabled devices. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.

ECS Digital and Amazon Alexa

With Alexa for Business released in the US and coming to the rest of the world soon, we at ECS Digital have been using her to increase productivity and enable innovation within the office. We have been working on a few different initiatives, coining the term OfficeOps.

Here are some of them:

Booking a meeting room

In a large consultancy, it can be difficult to know if a meeting room is free, and booking said room can be a complicated and confusing process. The answer: create an internal development skill to track the availability of a room, who has it and for how long. This skill also allows users to book a room on the spot, letting our colleagues interact with the booking process by literally asking the room for a booking slot.
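Under the hood, a skill backend like this can be a plain AWS Lambda handler. Below is a minimal, framework-free sketch: the intent name RoomStatusIntent and the canned reply are hypothetical, but the JSON envelope follows the Alexa Skills Kit request/response format.

```python
# Minimal Alexa skill backend as a bare Lambda handler. RoomStatusIntent
# and the reply text are made up for illustration.
def lambda_handler(event, context=None):
    request = event.get("request", {})
    intent = request.get("intent", {}).get("name")
    if request.get("type") == "IntentRequest" and intent == "RoomStatusIntent":
        # A real skill would look the answer up in a booking system here.
        speech = "The boardroom is free for the next hour."
    else:
        speech = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

Alexa POSTs a JSON request to the skill’s endpoint; the handler inspects the intent and returns the speech for the device to read out.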

Interactive Training

As a fast-moving DevOps consultancy, ECS Digital are always looking for innovative ways to improve our skills. For a long time now, we have been using Alexa to learn new skills and brush up on existing ones by using her as a pop quiz master. Colleagues located in our London Bridge office can ask Alexa to test their knowledge about a technology, helping them to maintain a high level of competency.

Summary

All evidence suggests that voice is here to stay and will drive the next wave of technical innovation, both in business and at home, making those laborious everyday tasks a little easier and more futuristic. However, our assessment comes with a note: work still needs to be done in order to make voice the standard, but we are confident that changes will come swiftly.


Morgan Atkins