Xin’s Story as a QA and Continuous Delivery Consultant

My name is Xin Wang and I am a QA and Continuous Delivery Consultant at ECS Digital. I recently published a blog explaining how I went from delivering presentations on Fashion Week 2017 fashion trends to writing functional tests as a software development engineer.

Working in a male-dominated industry has been very different to what I was used to – the approaches some male engineers take can differ from those a female engineer would take. But combined, these perspectives give you a much more valuable overview, which is why I really enjoy working on coding challenges with my colleagues.

Take a look at my video if you are interested in understanding why I switched careers and how I am continuing my journey as a software development engineer.

Day in the Life as a Technical Test Engineer

Hi there, my name is Marie Cruz, and I’m a Senior Technical Test Engineer at ECS Digital. I’m responsible for providing test services to various clients, with a focus on implementing BDD processes. I recently published a blog explaining how I balance being a mother and a woman in technology.

Because I have a family and an active career in tech, people often ask me how I manage to keep up with both. My answer is to make sure you understand what’s important, but also to ensure that you are happy with the choices you are making.

If you’ve ever wondered how a female can handle both a career in tech and a family life, feel free to take a look at my “Day in the Life as a Test Engineer” video. I hope it inspires you to take the leap into technology too!

Is your master branch production ready?

Delivering software in a continuous delivery capacity is something that nearly every project strives for. The problem is that not many projects are able to achieve continuous delivery, because they don’t have confidence in their application’s quality, their build pipelines, their branching strategy or, worst case, all of them.

A good indicator as to whether you fall into one of the above is to ask yourself: `can I confidently release the master branch right now?`

If your answer is no, then how do we start to break down and resolve these problems?

Building confidence in quality

A recent project I have been working on fell into a few of the above categories. Nearly all of their testing was done on a deployment to a long-living environment, after a merge commit to master, along with a lot of duplicated work throughout their pipeline.

This test strategy was for a simple front-end application that reads data from an external API.

To start, we identified areas of our application that we knew were unloved, or treacherous to develop. Once identified, we put in place appropriate test automation. When writing test automation it is so important that your tests are robust, fast and deterministic.

We pushed as much of our UI automation down into the application as possible. Ideally you want your application adhering to the testing pyramid principles. Verifying that elements have particular classes with tools such as Selenium is both time-costly and of little value. There are better, more appropriate tools to do this.
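To make the idea concrete, here is a minimal sketch (the function and class names are illustrative, not from the project) of what "pushing UI automation down" looks like: instead of driving a browser with Selenium to assert that an element carries a particular CSS class, you test the logic that chooses the class as a plain unit test.

```python
# Illustrative sketch: presentation logic extracted into a pure function,
# so the class-selection rule can be checked without a browser.

def badge_class(stock_level: int) -> str:
    """Decide which CSS class a stock badge should get."""
    if stock_level == 0:
        return "badge--out-of-stock"
    if stock_level < 5:
        return "badge--low-stock"
    return "badge--in-stock"

# Millisecond-fast, deterministic checks -- no Selenium session required.
assert badge_class(0) == "badge--out-of-stock"
assert badge_class(3) == "badge--low-stock"
assert badge_class(50) == "badge--in-stock"
```

The browser-level suite is then free to cover the handful of genuine end-to-end journeys, which is exactly the shape the testing pyramid asks for.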

Once our test scaffolding was in place, we started to feel more comfortable refactoring problem areas and reducing complexity.

We isolated our application by stubbing out external services or dependencies where necessary – we didn’t want to be testing services outside our scope. Where possible, we recommend agreeing a contract with your external dependencies and developing against that.
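As a rough sketch of that isolation (the `ArticleApiClient` name and its method are hypothetical stand-ins for the real external API), a stub can replace the dependency so tests never leave the application's own scope:

```python
from unittest.mock import Mock

# Hypothetical client for the external API the front-end reads from.
class ArticleApiClient:
    def fetch_headlines(self):
        raise NotImplementedError("real implementation calls the network")

def newest_headline(client) -> str:
    """Application logic under test: pick the newest headline, if any."""
    headlines = client.fetch_headlines()
    return headlines[0] if headlines else ""

# Stub the dependency -- the agreed contract is "returns a list of strings",
# and we develop and test against that contract, not the live service.
stub = Mock(spec=ArticleApiClient)
stub.fetch_headlines.return_value = ["Breaking news", "Old news"]

assert newest_headline(stub) == "Breaking news"
```

Because the stub encodes the agreed contract, the test stays fast and deterministic even when the real service is slow, flaky or unavailable.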

We also recommend containerising your app. Being able to deploy and run the same image of an application locally and in production is incredibly powerful. Long gone are the days of long-living application servers and the phrase ‘well, it works on my machine’.

Start failing fast 

Once we had confidence that when our tests all passed then the application could be deployed, we then looked to address where our tests were running.

Having tests run after a merge commit to master is too late in the process. Leaving it this long introduces the risk that someone pushes the ‘release to production’ button before the tests have run.

We need to run tests earlier in the process.

In the past, to solve this problem you may have adopted complicated branching strategies (dev, test, master) which on paper seem reasonable, but in practice introduce horrendously slow, unnecessary feedback loops and messy merges between multiple branches.

We decided to harness the power of pull request environments instead, to allow our tests to run on short-living infrastructure before we merge to master. With DevOps paradigms such as immutable infrastructure, infrastructure as code and containerisation, deploying a new environment becomes trivial.

This becomes even more powerful if you deploy your pull request environments in the same way as your production site, since you effectively test the deployment itself.

Having pull request environments spun up also caters for any testing requirements, such as exploratory testing or demos, and massively speeds up developer feedback loops.

The end result is much higher confidence in the quality of your master branch, which to any project is invaluable.

*******

This is the first of a two-part series, with the next article focusing on how we can start to deliver the master branch to production. Watch this space.

Understanding SAST and DAST Adoption

In order to achieve a software delivery lifecycle (SDLC) that is efficient and cost-effective, we strive to automate every step with as little human interaction as possible. We do this because the ability to hold a product to a high quality standard throughout its lifespan is essential to building a maintainable, resilient and secure solution.

This blog focuses on the tools and approaches that help us maintain a high level of code quality and application security while remaining relatively hands-off. We look at the benefits and problems of these tools and present our recommendations about which approach to take, and when. Whilst a little technical in places, if you’re interested in SAST and DAST adoption and understanding the difference between them, this is the blog for you.

Here are four core concepts we’ll be delving into:

  • Static Application Security Testing (SAST)

Can run on the development machine or be set up in your CI/CD pipeline, running on every code push to Git

  • Interactive Application Security Testing (IAST)

Conducted post-deployment, using a combination of techniques and tools to achieve the desired results (a security expert ‘interacts’ with the application under test by setting up attack vectors)

  • Dynamic Application Security Testing (DAST)

Tool-based security testing that is used on top of functional tests to check application communication channels.

  • Runtime Application Self Protection (RASP)

Production monitoring and risk assessment, which relies on tools and automated processes to counter application attacks.

SAST – Static Application Security Testing

SAST is used to identify possible improvements by analysing the source code or binaries without running the code. It is fast, and because the code does not need to be compiled, SAST tools can be integrated directly into the IDE (Integrated Development Environment). This gives developers immediate feedback on the code they write and how they can deliver a better software product. Projects that manage to integrate SAST into their SDLC will notice immediate benefits in code quality, because adherence to a more detailed DoD (Definition of Done) checklist is verified automatically rather than having to go through a PR (Pull Request) review.

The main benefit of SAST adoption is that developers have immediate feedback on their code and how to improve it; there is no need to deploy or compile the code.

The problem with SAST is that the application code written by developers is just a small part of the application under test. In many cases we rely on different languages, frameworks, interacting servers and many other systems that make up the ecosystem. If you are doing only static analysis, you are ignoring not only application execution and infrastructure but also the communication protocols. Hackers will usually use information kept in cookies and requests to penetrate your infrastructure or application and exploit the system’s flaws.
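To illustrate the kind of defect SAST catches from source alone, here is a minimal sketch (the table and function names are invented for the example) of the classic finding, a query built by string concatenation, alongside the parameterised form a SAST tool would recommend:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

# Flagged by static analysis: query assembled via string concatenation,
# an SQL injection risk visible without ever running the code.
def find_user_unsafe(name: str):
    return db.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

# The recommended fix: a parameterised query.
def find_user_safe(name: str):
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injected predicate matches every row in the unsafe version...
assert find_user_unsafe("' OR '1'='1") == [("alice",)]
# ...but is treated as a plain literal by the parameterised version.
assert find_user_safe("' OR '1'='1") == []
```

The point is that this flaw is evident in the text of the code itself, which is exactly the territory where SAST is strong; what happens on the wire once the app is deployed is where DAST takes over.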

SAST tools can be set up on developer machines or in their IDE, but can also be set up as a CI/CD tool.

IAST – Interactive Application Security Testing

IAST relies on agent tools placed inside the application at communication junction points. The tools need to be configured, and each agent will gather results to a certain degree of accuracy.

“IAST is a combination of SAST and Dynamic Application Security Testing (DAST). But this security solution has still not matured to a point that it can be defined precisely or measured accurately against rivalling solutions.” – Sharon Solomon

Due to these factors, the tools might not be a suitable solution for production environments. Their effectiveness is largely determined by the instrumentation and attack simulations.

The implication is that engineers and security professionals are responsible for setting up and running the analysis of results, thus requiring specialised personnel. Installing agents inside the infrastructure might also run up against constraints set by banking rules and regulations.

DAST – Dynamic Application Security Testing

DAST is great for developers, allowing them to run rapid health checks on their code. It should be mentioned, though, that this often creates a false sense of safety and security, which can be a very precarious position. Because of the nature of the systems under test, DAST can be run as a security proxy. The advantage of this approach is that you can use your existing tests (integration or E2E) and run them through the DAST proxy. On top of testing your application’s business flows and environment setup, you also get a nifty security report on all the requests that bounced during testing. Reports usually contain warnings for industry-standard OWASP security threats. The security assessment can be further refined by security experts in order to achieve a more comprehensive suite of checks.

A benefit of DAST adoption is that developers or security analysts can identify sensitive information exposed by the system through analysing the requests generated by the application.

On a day-to-day basis, developers can verify that changes in requests and sessions contain only the desired content. Security analysts also now have a tool that can see the underbelly of all business flows, so they can focus straight away on attack vectors and other security aspects. Policies can be verified and enforced (e.g. GDPR adherence, by identifying how sensitive user data is exposed within the application’s communication). Tools usually provide warnings on standard configurations, but the large array of tools requires fine-tuned configuration. Some tools provide community script repositories which can be used directly or customised to project needs.

The problem with DAST tools is that they generate a large number of warnings (and false positives) that need to be carefully investigated by developers and security professionals alike. DAST tools also require extensive configuration in order to achieve the targeted results.

DAST tools can be set up in the development infrastructure or as a CI/CD tool. Developers can use the DAST proxy tool from their local machine (by redirecting their tests through the proxy).
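As a hedged sketch of what "redirecting tests through the proxy" can look like in practice (the proxy address is hypothetical; OWASP ZAP is one commonly used proxy, but any DAST proxy fits the same shape), existing HTTP tests just route their traffic through the proxy instead of hitting the application directly:

```python
import urllib.request

# Hypothetical address where a DAST proxy (e.g. OWASP ZAP) is listening.
DAST_PROXY = "http://localhost:8080"

# Build an opener whose HTTP(S) traffic flows through the proxy, so the
# proxy records and analyses every request/response the tests generate.
proxied = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": DAST_PROXY, "https": DAST_PROXY})
)

# Integration/E2E tests would then call `proxied.open(url)` rather than
# contacting the application directly; the security report falls out of
# the normal test run as a side effect.
```

The appeal of this pattern is that the functional tests themselves do not change; only the transport is redirected, which is what makes DAST adoption relatively cheap once the tests already exist.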

RASP – Runtime Application Self-Protection

RASP tools are designed to monitor running applications and intercept all communication, making sure both web and non-web applications are protected. Depending on how the tool is set up, it can signal identified threats or take action itself. RASP can stop the execution of malicious code (e.g. when it identifies a SQL injection) or terminate a user session in order to stop an attack. How it stops attacks depends on how RASP and the application under test are integrated.

There is an API-based approach, in which the RASP tools use the application’s API to steer its behaviour. Developers will find that through such an approach they can handle the integration with RASP tools in a very specific way (e.g. a login app might define an extended API to cope with custom decisions from the RASP tools). There is also an approach where a wrapper is set up around the app and RASP sits on top, giving a more general integration with an existing app.
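To make the "signal or take action" decision concrete, here is a deliberately simplified, illustrative sketch of a RASP-style check (real products use far richer runtime context than a regex; the pattern and session shape here are invented for the example):

```python
import re

# Toy signature for the kind of payload a RASP agent might intercept.
SQLI_PATTERN = re.compile(r"('|--|;|\bOR\b\s+'1'='1)", re.IGNORECASE)

def inspect_request(session: dict, payload: str) -> str:
    """Runtime decision point: allow the request, or block it and
    terminate the offending session."""
    if SQLI_PATTERN.search(payload):
        session["active"] = False   # terminate the attacker's session
        return "blocked"
    return "allowed"

session = {"active": True}
assert inspect_request(session, "name=alice") == "allowed"
assert inspect_request(session, "name=' OR '1'='1") == "blocked"
assert session["active"] is False
```

The essential difference from DAST is visible even in this toy: the check runs inside the live application against real traffic, and it can act (kill the session) rather than merely report.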

A benefit of RASP is that it can distinguish between attacks and legitimate requests, and it can limit or block access for attackers who have already gained entry to the application. These capabilities will not safeguard against all threats; for this reason, security specialists recommend also building security mechanisms at the application level.

A drawback is that environments using RASP may notice their performance is affected by the adoption of active scanners. User experience may be impacted by the added loading latency.

Conclusions

Due to the nature and scope of each tool and how they fit in the SDLC, there is no single solution adoption to automate and safeguard delivery to production.

SAST is the most cost effective way of checking for code related defects and security threats, but its scope (static code) ignores the vast majority of interacting elements in a software solution.

IAST adoption is desirable but might take more time to integrate as a formal step in the SDLC, due to the requirement for specialised resources and tooling.

DAST complements SAST by checking in SAST’s blind spots (running against deployed code, checking environment configurations and communication protocols) and providing extended reports with security in mind.

Note that DAST tools can be used as part of CI/CD processes. Since IAST and DAST are similar in many respects, their capabilities are transferable.

RASP uses a combination of tools for network traffic monitoring. Due to the proactive nature of intrusion detection systems, network and security professionals set tripwires (e.g. honeypots) that need to be monitored. The response to identified threats can then be handled carefully and in a time-effective way.

We would recommend the use of a small set of tools and practices that make sense for your SDLC. Gradual adoption of tools and processes should always be made with development delivery in mind. SAST on its own will not safeguard against a number of threats and will signal only code related issues. Ultimately, adopting the right set of tools will help complement coverage and type checks performed on the system to the point where the code is production ready.

If you’d like more specific advice around the right tools and practices for your SDLC you can get in touch with us here.

On being a mum and a woman in tech

Like most people, I had a five-year plan after I graduated from university. Get a nice job and work for a great company, get married, start a family and buy a house. Fast forward five years and here I am, attempting to write a blog about how I balance being a mother and a woman in technology while listening to my daughter having a tantrum!

Being a first-time mum, I struggled a bit at the beginning, after my maternity leave, to get used to the idea of working again. I felt like I had forgotten how to code. Not to mention that I was given the responsibility of a Test Architect role at the client site where I am based. I had to familiarise myself with new tools I hadn’t used before and, somehow, I had to lead the team. It was daunting!

At the same time, I was worrying about my daughter all the time. It was hard to focus at work and it definitely wasn’t the best start (let’s just say that my stress hormones were through the roof!). But somehow, I managed to make it work in the end. It wasn’t easy and there were still some sleepless nights (teething is still a nightmare!) but I’m going to list the things that helped me balance my work and my responsibilities as a mum.

  1. Share the responsibility

This I feel is the most important. Don’t be afraid to ask for help and share the responsibility. You won’t be able to do everything by yourself! My husband is very hands-on with our daughter so during his days off, he looks after her. Ask family and friends to help out too. We’re lucky that my mother-in-law helps look after my daughter when my husband and I are both at work. There are also times when my parents pick up my daughter, so they can look after her. We pre-plan our schedule and check everyone’s availability so we know who will look after our daughter on what day.

  2. Flexible working is the way forward

If you can work from home or do flexible hours, ask for it. From time to time, I work from home if there is no available babysitter that day or if I need to take my daughter to hospital.

  3. Avoid working outside hours

You might be tempted to bring some of the work home with you if you have tight deadlines, but try to avoid doing this if possible. I used to bring work home with me to finish off some tasks, check Slack messages and reply to emails, but this meant that even when I was home, I was still thinking about work rather than just spending quality time with my daughter. This just made me more stressed in the end, so if I do have deadlines, I try to be more focused at work and time-box my tasks. If it’s something that your colleagues can definitely help with, share the responsibility. Again, you can’t do everything by yourself 🙂

  4. Stop overthinking about your children

It’s natural that we tend to worry about our little ones. I used to worry a lot about my daughter at work and text my husband or my mother-in-law to see how she was doing – had she eaten or drunk her milk, had she had her nap, was she crying, etc. – and I always got the same answer: she was doing fine. Rather than spending time worrying about things I couldn’t change, I now use that time to be focused at work so I can get home sooner and answer these questions myself.

  5. Find time to learn

Now this might be difficult for some of you, but if you can, still find time to learn something new every day. It doesn’t matter if it’s just an hour or 30 minutes. Especially in the tech industry, there are always new tools coming out. So, once my daughter is asleep, I make a habit of reading a book, reading tech blogs, or doing a little bit of coding.

  6. Find a company that appreciates you

I feel that this is as important as the first point. If you work for a company that micromanages and doesn’t give you room to improve, then this might be a red flag. It’s great that I work for a company that is appreciative of what I do and rewards those who have done a great job. Recently, I was nominated for an Outstanding People Award and it has given me a great boost to continue doing what it is I’m doing – I must be doing something right after all!

Achieving a work-life balance, especially if you are a mum, is a challenge, but it is doable. It was difficult at the beginning, but like everything else, it gets easier 🙂

Join our Women In Tech DevOps Playground on 8th November where we will be getting hands-on with Cypress!

Follow other stories from the ECS Digital team here.

AyeSpy, a new way to test for visual regressions

Bill Gates famously said, “I will always choose a lazy person to do a difficult job because a lazy person will find an easy way to do it.”

At The Times, there is an incredible amount of business value placed on the aesthetics of the site. There have also been past incidents where CSS bugs have caused rollbacks.

With this in mind, traditional `automated` functional testing with Selenium is ineffective at finding these defects – in addition to being slow and high-maintenance. To add to the problem, The Times releases far too often to make manual verification possible.

This is where visual regression tools shine through. Their sole purpose is to give confidence that the applications under test are visually correct.

So what is visual regression?

There are three main parts to understanding how visual regression works.

  1. Baseline

A set of images that define how the application should look, based on previous recordings.

  2. Latest

A set of images detailing how the application currently looks.

  3. The comparison

Once we have both the baseline and the latest, we are able to perform a comparison between how the application is supposed to look and how it looks now. If there are differences, the build will fail, and you will need to approve the changes to update the baseline images once more.
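The comparison step can be boiled down to a tiny sketch (this is an illustration of the principle, not AyeSpy's actual implementation): count how many pixels differ between the baseline and the latest screenshot, and fail the build if the ratio exceeds a threshold.

```python
# Minimal sketch of a pixel comparison; "images" here are flat lists of
# RGB tuples standing in for real screenshots.
def diff_ratio(baseline, latest):
    assert len(baseline) == len(latest), "images must share dimensions"
    changed = sum(1 for a, b in zip(baseline, latest) if a != b)
    return changed / len(baseline)

baseline = [(255, 255, 255)] * 4                   # all-white 2x2 "screenshot"
latest = [(255, 255, 255)] * 3 + [(255, 0, 0)]     # one pixel turned red

THRESHOLD = 0.01  # fail the build if more than 1% of pixels changed
assert diff_ratio(baseline, baseline) == 0.0       # identical -> build passes
assert diff_ratio(baseline, latest) > THRESHOLD    # 25% changed -> build fails
```

Real tools add fuzziness thresholds per pixel, highlighted diff images and baseline management on top, but the pass/fail decision is fundamentally this comparison.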

We have used a number of visual regression tools within the Times Tooling team at News UK and each proved to have limitations.

A core testing principle that we believe at ECS Digital is you need to be testing as close to production/end users as possible.

Headless browsers such as PhantomJS may give you a small performance increase when executing tests, but they are far from how your end users will interact with the application under test.

Our first visual regression tool only supported headless browsers. We had several instances where it allowed bugs through because they only occurred in Firefox and not PhantomJS. This loophole was the reason we decided to move on.

The second tool we tried was what we believed to be the open source industry favourite. After battling with it for well over a week we could not get it running stably or in under 30 minutes, which as a developer is an unacceptable feedback loop.

As you can imagine, these inefficiencies didn’t sit well with the Times Tooling team and we decided to address the problem head-on and create our own “hand-rolled” visual regression tool.

Based on our previous painful visual regression experience, we were determined to build a tool that was:

  • Super performant
  • Lightweight and,
  • Made it easy to interpret results

A proof of concept was put together before we fully refined the capabilities of the tool. We then waited for priorities to allow and created ‘AyeSpy’ in a single sprint.

Four months down the line and AyeSpy has been successfully implemented, gaining approval from our clients and users on GitHub. Whilst the Times Tooling Team engineered AyeSpy, The Sun and data teams within News UK have since adopted it and it’s not hard to see why – AyeSpy takes less than 90 seconds to run 44 webpage comparisons. Other benefits include:

  • Only requires a .json config file to run
  • Maintenance is low
  • Able to explicitly wait for elements before the screenshot is taken
  • Can interact with the DOM before the screenshot
  • Can drop cookies into the browser
  • Can remove dynamic elements from the DOM
  • Tests are farmed out to a containerised selenium grid for distributed testing and consistent state

When deciding to use visual regression, we have found in our experience that the tool works best on reasonably static sites that do not require a long user journey to be completed before the screenshot. For example, clicking through a checkout journey would introduce a high level of risk and take away value from the tool. Ideally, you want to load the page, remove all dynamic elements, and then snapshot.

Where can you find the tool?

ECS Digital loves to find value for our clients and give back to the wider community, which is why we make these tools available on open source platforms such as GitHub and NPM.

I will also be hosting a hands-on session and demonstration of AyeSpy at an upcoming DevOps Playground on the 29th of November. Come along to learn more about what AyeSpy has to offer!

Tooling and efficiency teams

ECS Digital has been operating in the DevOps space for over 20 years and this success is mostly down to our focus on self-improvement and innovating for the benefit of our clients. Our recent acquisition of QAWorks was largely initiated to support the continued efforts in the digital transformation sphere, focusing primarily on strengthening our expertise in software quality and delivery.

What we’ve seen since this coming-together is a greater offering for our clients – not to mention an increase in the number of smart-minds looking to evolve our existing tools and processes. This ‘fresh blood’ has a mix of experience – with some primarily working within big teams in large organisations where the division between development and test was not aligned to delivering business value.

As has been seen from successful adoptions of modern software delivery techniques, shifting left to a more agile methodology results in your development and operation teams working for each other. It also offers them more autonomy – resulting in smaller wait times and reduced feedback loops.

But what happens when you begin to scale this model within larger organisations?

For ECS Digital, the first step of any digital transformation is enabling you to successfully integrate an agile process. Part of this is helping you communicate and adopt a new culture, as well as introducing an engineering mindset to test – this can involve introducing SDETs to your development teams to ensure any feedback or strategies can be put in place more quickly. Quicker feedback means improved lead time and higher-quality applications.

Once you reach a level of confidence in your new process and are comfortable with the effectiveness of your teams and automated tools, our consultants begin to look at reusability – taking an in-depth view of your processes and offering recommendations of how to take them to the next level.

Focused primarily on larger organisations, our team has developed a quality assurance strategy that supports businesses with around 25 or more people working within the software delivery structure. Once you reach this magic number, an opportunity presents itself.

This opportunity looks to do the following:

  • Reduce duplicated efforts,
  • Improve efficiencies of individuals and teams,
  • Recognise issues that are affecting more than one team and create a reusable solution,
  • Remove the risk of gatekeeping behaviour by breaking down the silos and cultivating a culture of collaboration between teams 

Internally, this opportunity is known as introducing a ‘tooling and efficiency team’ (official name to be confirmed). Not only are these teams proving successful in current client work, they are a logical next step for those wishing to maximise their agile business model.

In short, this team consists of engineers with a broad skillset and sits within your business permanently. They are responsible for keeping a comprehensive eye over all your development and operation processes and specifically look for areas that are underperforming or no longer fit for purpose. Once identified, they create reusable solutions to combat individual and company-wide inefficiencies.

But if your agile methodology is already delivering on all your performance targets, why is this new team important?

Performance

By analogy, if you have a one-man operation and you invite an additional person to join the team, you are doubling your effectiveness. If work demands require a third or fourth member, you are again increasing efficiency – but as you scale, this maths only works up to a certain number. It is very much a balancing act, and what we’ve found whilst working with clients is that once you reach a large development team of around 25, each new member starts to become less efficient.

By creating a one-stop-shop in the form of a tooling and efficiency team who can afford to spend the time looking for and creating tools to keep your business adapting, you are maximising ROI because you are making the most of the staff you have. This can be seen in our recent client work with NewsUK.

A reoccurring long-term objective for our clients is to increase the speed of delivery whilst maintaining quality. Quality assurance and automated testing are essential to helping them achieve this – and is the reason why a tooling and efficiency team is working so well. We work alongside our clients’ principal engineers to maintain a clear direction for this new team to move towards, measuring against agreed targets periodically. The benefits have so far been a strengthening in DevOps capabilities, as well as a strong improvement in development efficiencies and overall quality.

“ECS Digital consistently provide intelligent, hard-working and professional individuals who always manage to work well together. Kouros provides a strong organisational and delivery-focused attitude that resonates through the team – who have made some invaluable and original open source products that will benefit us and others in the future. They are more than simply a QA team, but can-do developers who aren’t afraid of a challenge and putting the client first.”

Craig Bilner, Principal Developer at News UK.

The transition to this efficiency model requires a level of collaborative consultancy to help oversee the adoption of the new team and integrate them with others already in the structure. ECS Digital engineers have the capability to enable adoption by working alongside your current team or by operating autonomously / self-managed within your business.

Their ability to constantly inspect, improve and adapt aligns with the very nature of agile methodologies, making it an ideal structural change to invest in long term.

Whilst our tooling and efficiency teams are an additional offering to our DevOps consultancy, it is a necessary next step for those wishing to take their agile business model to the next level.

ECS Digital is an experienced digital transformation consultancy that helps clients deliver better products faster through the adoption of modern software delivery methods. Our recent acquisition of the UK’s leading technical software testing organisation, QAWorks, means we’re well placed to offer expert advice about how tooling and efficiency teams can bolster your digital environments.

If you’d like to know more about how the tooling and efficiency approach could benefit your business, drop us a message here.

Applying an engineering mind-set to test

How do you continually deliver software with limited time and resources? While manual testing has been an integral part of software development since the introduction of waterfall methodologies, it takes a lot of time. And unfortunately, time equals money.

DevOps and automation are two ways modern businesses have responded to the need for increased speed and agility, especially when it comes to application deployment. Both enable features to be released more quickly, giving businesses the chance to react to market changes in real time and keep ahead of their competitors.

Software Development Engineers in Test (SDETs) are an integral part of any agile transformation. They are the key to ensuring that software can be developed quickly and with confidence, building a bridge between development and testing.

Testing is a rapidly evolving field and key to good software development practices. Today, manual testing is too time consuming and resource hungry to be practical. Instead, automation enables organisations to release features and react to market changes faster.

SDETs have long been a feature of DevOps methodologies, helping to improve automation and give developers more ownership of their applications. They play a vital role in the process of shifting from waterfall models to agile methodologies and represent just how far application development has come over the years.

The problem with waterfall

Traditionally, software development used waterfall models, with progress on the project flowing steadily downward (like a waterfall) through several phases. The origins of this methodology lie in the industrial age. A factory production line has a number of stages to create the finished product, at the end of which it’s tested to make sure it works as designed.

This is fine for an assembly line but it has a number of serious problems as a system for releasing software, including:

  • Software releases are infrequent – potentially months apart – because each iteration needs to go through the entire waterfall process
  • It’s slow to deliver new features and react to requests from the business / customers for the same reason
  • There tend to be a lot of bugs in the finished product. Fixing them is often scheduled for the next release, months down the line

An agile solution

Agile methodologies attempt to solve these issues by bringing all the processes closer to the development team. This is known as ‘shift left’, instilling the idea of fail early, fail fast. The intention is to catch problems as early in the process as possible, rather than seeing testing as an entirely separate stage in the creation process.

It works. Today, software is being released faster and more regularly thanks to agile methodologies. However, this increased velocity means that organisations no longer have the capacity for a team of testers to manually verify every stage of the development process.

To cope with this, organisations need to be moving towards a modern testing strategy – with automation and SDETs as integral parts.

Why using SDETs is the way to transition successfully

Organisations already using agile methodologies such as DevOps often rely on SDETs to think differently, providing robust fast feedback to developers on how an application is behaving and writing automation code.

They are responsible for creating a shared understanding of the feature and thinking about potential edge cases or unhappy paths and asking questions that explore the features further. SDETs are also capable of looking at developer workflows to find inefficiencies. As a result of this, they will inspect and adapt the current quality pipeline.

This all-encompassing role has stemmed from a wider cultural shift, in which teams own the quality of an application and everyone in the team has a responsibility to deliver a feature from inception to production.

At ECS Digital, SDETs have skillsets that can deliver on modern approaches to testing for organisations undergoing digital transformation. This makes them an essential part of implementing the informed technical approach we take to testing and quality.

Our recent work with Global News Corporation and a world leader in the oil and gas industry demonstrates the importance of coupling SDETs with any transformation process. Global News Corporation, for example, optimised the role of an SDET whilst we helped deliver an engineering transformation. This included implementing a continuous delivery process and culture which saw their delivery teams fully build, test and deploy within 20 minutes. Regression test times also dropped from three hours to ten minutes, and an increase in their average app store rating, from 3 to 4.5 stars, delivered a healthy boost in sales.

Similarly, we successfully introduced a test-first approach throughout engineering, along with a full CI pipeline and culture, for our world-leading oil and gas industry client. This positively changed delivery across their oil trading platform, reducing regression times from four weeks to 30 minutes and providing a $50 million a year saving.

What follows a successful transformation?

Our approach recommends that once clients have an agile methodology in place and feel they have made progress with fulfilling their digital implementation objectives, they can start to look at making efficiencies elsewhere. This usually comes at a time when a client’s digital agility has matured enough that the teams supporting these efforts would benefit from the recommendations of a ‘tooling efficiency’ model.

At ECS Digital, this is just one of the new models we are recommending to clients well into their agile methodology transformation, since it removes SDETs from development teams with the following key changes:

  • Developers becoming wholly responsible for the quality and testing of applications, including writing automation testing code.
  • SDETs focus on creating tools to make writing tests for developers trivial, as well as looking at developer workflow in order to increase the efficiency and confidence of the application.
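As an illustration of the second point, the kind of tooling an SDET might build is a thin helper that strips the boilerplate out of everyday checks so developers can write a readable test in a single line. This is a minimal, hypothetical sketch; the helper name and the response shape are assumptions for illustration, not a real client's tooling:

```python
# A hypothetical test helper an SDET might provide, so developers can
# verify API responses without repeating assertion boilerplate in every test.
class ResponseCheck:
    def __init__(self, response):
        # response is a plain dict here, e.g. {"status": 200, "body": {...}}
        self.response = response

    def has_status(self, expected):
        assert self.response["status"] == expected, self.response["status"]
        return self  # chainable, which keeps developer tests to one line

    def body_contains(self, key, value):
        assert self.response["body"].get(key) == value, self.response["body"]
        return self


# Developer-side usage: one readable line instead of several assertions.
response = {"status": 200, "body": {"user": "demo", "active": True}}
ResponseCheck(response).has_status(200).body_contains("active", True)
```

Because the helper owns the assertion detail, the SDET can improve failure messages or add new checks in one place, and every developer's tests benefit immediately.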

We are successfully implementing this model in current client work. It's a step up for organisations who are already running agile confidently and want to move to the next level. It's also a proven process for those wanting to implement agile methodologies but who need a trusted partner offering a progressive business-first approach and a team with the know-how to elevate their business.

Transitioning to an SDET model is our recommended first step in agile testing and digital transformation. Creating communal ownership of quality is a key part of this, and bringing the traditional test and development roles closer together makes it a lot easier.

If you’re looking for help accelerating change within your business, get in touch with us here.


Kouros AliabadiApplying an engineering mind-set to test
read more
Behaviour Driven Development: It’s More Than Just Testing

Behaviour Driven Development: It’s More Than Just Testing

No comments

Behaviour Driven Development (BDD) is a business-first approach that puts an agreed definition of ‘success’ at the heart of development and testing.

It’s about proper communication from the outset, getting all stakeholders in the business to set out what ‘done’ really means. This then becomes the cornerstone of agile development: a vision of ‘success’ that all departments can work towards and test against continuously. When properly implemented, BDD should lead to better productivity, quality and rate of change, and ensure that accurately developed products reach the market fast.

BDD is a powerful technique, popularised by Dan North in the early 2000s, that grew from Test Driven Development (TDD). Developers were realising that TDD wasn’t the right fit for their agile development environment; it didn’t provide proper boundaries or structure for coders, because there was no explicit definition of success built in.

If anything, it asked more questions than it answered: When should we test? What are we testing for? How do we know a test has been successful?

The BDD Difference

BDD is often misinterpreted. It isn’t a testing framework that allows you to automate tests, although such techniques do form part of it. Rather, it’s a way of working that promotes testing for the right reasons, at the right time. It’s about communicating business value within integrated teams, and altering development priority from one that tends to favour implementation to one that also considers proper functionality at every step.

To do this, BDD encourages shared code repositories and consistent toolsets, as well as making everything accessible and readable to all stakeholders, whatever their technical skills, through the use of natural language.

It’s also about testing more than just code. Before development even begins, BDD leads us to test our assumptions about what the software should do and how it should be constructed. Working through hypothetical situations on use and construction leads to more accurate requirements specifications; these, in turn, create a common understanding and a more efficient and painless development cycle for all involved.

Project owners and BAs working under BDD-enabled Project Managers gain greater flexibility and create better requirements documents. Developers, when all levels are involved in its creation, become less sensitive about criticism and analysis of their code, and improve their focus on the reasons for its creation. Testers find themselves more able to appreciate the user perspective, and their tests become more efficient. At every level, BDD can make a huge difference to the business.

Recommended tools

Communication is the key tenet of BDD. There are a number of tools and techniques that can aid the implementation of BDD and the enhancement of agile communication in teams:

Gherkin

Similar to pseudo-code but even more abstract, Gherkin is a writing syntax that puts the ‘what’ in front of the ‘how’. Writing and reading Gherkin’s simple constructed language is an ideal way to structure acceptance criteria and example scenarios, which leads to more accurate software specifications and a much lower defect rate on the tail end.

It’s also inherently testable: Gherkin is an executable specification, created for use with automated testing tools such as Cucumber, SpecFlow and others. Check out some examples from the Government Digital Service to see what Gherkin is all about: https://relishapp.com/GDS/whitehall/docs
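To see how a Gherkin scenario becomes executable, here is a minimal sketch of the idea behind tools like Cucumber: each natural-language step is matched, via a pattern, to a step definition that automates it. The account-balance scenario and step functions are hypothetical examples, not taken from the GDS docs, and real frameworks add far more on top:

```python
import re

# A hypothetical Gherkin scenario, written in Given/When/Then style.
SCENARIO = """\
Given the account balance is 100
When the user withdraws 30
Then the account balance should be 70
"""

# Step registry: maps a step pattern to the code that automates it.
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

context = {}  # shared state between steps, as in real BDD frameworks

@step(r"Given the account balance is (\d+)")
def given_balance(amount):
    context["balance"] = int(amount)

@step(r"When the user withdraws (\d+)")
def when_withdraw(amount):
    context["balance"] -= int(amount)

@step(r"Then the account balance should be (\d+)")
def then_balance(expected):
    assert context["balance"] == int(expected), context["balance"]

def run(scenario):
    # Match each natural-language line to its step definition and execute it.
    for line in scenario.strip().splitlines():
        for pattern, fn in STEPS:
            match = pattern.fullmatch(line.strip())
            if match:
                fn(*match.groups())
                break
        else:
            raise ValueError(f"No step definition for: {line}")

run(SCENARIO)
```

The scenario text stays readable to non-technical stakeholders, while the step definitions give developers and testers a single place to automate it.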

The Three Amigos

Unity and communication are, in any agile team, priority number one. The Three Amigos (or Power of Three) concept pushes for pre-development consultation between developers, QA and business analysts. The business analyst presents the aim of the feature and its acceptance criteria, which are then discussed and refined between the three pillars of its development.

This helps solidify requirements, understanding, and the goal of each part of the development cycle right at its inception, as well as codifying the three elements into a single team. With a combined and co-developed goal, and communication firmly established from the outset, change is easier to handle and can happen much more rapidly.

Impact Mapping

Strategic planning isn’t easy, but impact mapping, a form of discussion-grown mind map, makes it easier to approach. It breaks down the task into its key goal, the actors who will interact with that goal, the impacts those individual actors will have on the goal, and only then are the deliverables which come from those impacts considered.

This results-first structure aids in doing away with the counterproductive feature shopping list, and instils greater understanding of the project up and down the chain.

Living Documentation

At every stage of a project, every cog in the machine needs to be aware of which way the others are turning. Living documentation and BDD go hand in hand; as examples are generated for use as acceptance criteria, they’re added to a living document as a matter of course.

This documentation forms the core of the project, and grows and develops alongside it. Written in natural, formulaic language like Gherkin, it is useful for everyone from stakeholders to testers to developers at every stage of the project’s completion, and trickles down to maintenance teams once development is complete.

Test automation frameworks

BDD helps to soften the nervousness around adoption of automated testing; living documentation means the inputs and outputs are available quickly and immediately quantifiable by all involved. Automated frameworks remove human error when properly applied, and drastically reduce workload.

BDD success

Many companies are employing BDD to great success. The Financial Times uses BDD techniques to structure its projects and ensure that ownership of testing isn’t restricted to developers – it’s a group effort. Says Platform Tech Lead Sarah Wells:

“I knew we were getting somewhere with BDD when our first response to questions about a story we were already working on was to grab the product owner and a tester and add new scenarios into the feature files right there. You can’t easily do that with a unit test.”

There are other major users too – from Paypal to BP to News UK – but the true success of BDD isn’t measured in numbers, it’s measured in less quantifiable metrics: team unity, communication and development efficiency, project success.

Eyes on the prize

It’s important to reiterate that BDD itself isn’t a tool or a testing framework; it’s a ruleset that produces order. It’s a subset of agile development that encourages working from the outside in, putting testing first, and collaborating and communicating efficiently between team members of all levels and roles.

With everyone on the same page, and that page being the correct one, more resources can be spent on progression rather than correction. Knowing what success really means helps to drive teams towards goals in a fast, united fashion. BDD means better products, better communication, and better teams.

If you would like some more information about Behaviour Driven Development, or if you have any questions, please get in touch.


Sarndeep NijjarBehaviour Driven Development: It’s More Than Just Testing
read more
Why Continuous Testing is crucial to DevOps

Why Continuous Testing is crucial to DevOps

No comments

Getting testing right – or wrong – can have enormous consequences for businesses in all walks of life, from both reputational and financial perspectives. Take British Airways, who suffered a disastrous data centre outage in May 2017 that led to flights from Heathrow and Gatwick being grounded for almost 48 hours. Or market-making firm Knight Capital Group, who lost $440 million in 30 minutes in August 2012, owing to a bug in its trading software.

While most software testing goes unnoticed by consumers unless something goes wrong, there are companies who proactively enhance their reputations by sharing what they do. Netflix’s Tech Blog contains a remarkable amount of detail on the streaming giant’s continuous testing practices.

Continuous testing and automation are a crucial piece of the DevOps jigsaw: the full benefits can only be realised when everything is in place, with automation and monitoring at every stage of software development and operations.

Worldwide, more and more companies are trying to implement DevOps across their software development and operations – the State of Testing Report 2017 saw a 12% increase in DevOps use compared to 2015.

This is a significant rise but, from our experience, problems often occur when DevOps is implemented but testing is left behind. Continuous testing and automation should be seen as a precursor for a DevOps implementation, rather than something to fit in as and when.

Automated testing

If the ultimate aim of DevOps is to have the confidence to release at any given moment, knowing that neither your infrastructure nor application will fall apart, then testing based on old working practices just won’t cut the mustard.

Ideally, all the required elements for DevOps are ready before any kind of development begins but businesses usually need to implement DevOps on to an existing organisation, full of processes and tools that are at different stages of readiness. DevOps is more often an upgrade, not a clean install.

To release on a regular basis, whether that’s daily or on another cadence, you need a set of tests you can automate and have confidence in. An old-fashioned testing cycle of, say, two weeks ties your hands; you can either release quickly or be confident about the quality of the release, but not both.

It’s also important to remember that ensuring things are working is only part of a good testing model. A major aspect often overlooked by methodologies outside of continuous testing is the role that testing plays in helping to communicate, define and deliver the original business objectives using techniques such as Behaviour Driven Development (BDD).

Getting it right 

Good infrastructure and platforms are integral to successful testing and a DevOps mindset can help make this happen. A good example of where these worlds come together to enable more effective and quicker testing is containerisation. One of the many benefits is that you can have a production-like environment that you can start up and bring down quickly and easily. You also have complete control over that environment, so you can change the data, simulate network interruptions, simulate load and so on, with complete safety.
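As a sketch of what such a throwaway, production-like environment might look like, here is a minimal Docker Compose file. The service names, images and ports are hypothetical placeholders, not a real project's configuration:

```yaml
# docker-compose.yml - a disposable, production-like test environment.
# Image names, ports and credentials below are illustrative only.
version: "3"
services:
  app:
    image: example/app:latest      # the application under test
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db
    depends_on:
      - db
  db:
    image: postgres:13             # throwaway test database
    environment:
      - POSTGRES_PASSWORD=test
```

Running `docker-compose up -d` brings the whole environment up in seconds and `docker-compose down` destroys it completely, so every test run starts from a clean, fully controlled state – which is what makes simulating failures and load safe.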

Many organisations have tried to adopt Test Automation with varying degrees of success. According to the State of Testing Report 2017, 85% of businesses are using automation to some extent in their testing processes but under a quarter of those are applying it to the majority of their test cases.

Ultimately, implementing a DevOps process is futile without backing it up with good continuous testing and automation. The rewards are there to be claimed. Getting testing right is the key to achieving the full benefits of DevOps and actualising business value.

Over the coming months we will be posting more articles where we delve deeper into the relationship between DevOps and continuous testing, and the benefits it can bring to your business.

Here at ECS Digital we’re always happy to talk about what we do, why and how. If you’re interested in finding out how we can help you, please do get in touch.

Kouros AliabadiWhy Continuous Testing is crucial to DevOps
read more