Behaviour Driven Development in a nutshell


If you’re new to Behaviour Driven Development (BDD) and don’t understand the jargon surrounding it, this is the article for you. This article will explain the most fundamental concepts as well as the basic implementation of this agile methodology.

Let’s start by clearing up the misconceptions. BDD is not limited to test automation and it is not about tools. Fundamentally, BDD is actually about great communication, where an application is specified and designed by describing in detail how it should behave to an outside observer. In other words, BDD is about working in collaboration to achieve a shared level of understanding where everyone is on the same page. That’s the basic understanding you need to know. Easy enough.

So, what does this ‘great communication’ mean for software development?

Great communication means:

  • A usable product first time round, which allows you to get your product to market faster
  • A lower defect rate and higher overall quality
  • A workflow that allows for rapid change to your software
  • A very efficient and highly productive team

How is it done?

Meet our key stakeholders/teams:

Developers • Testers/QA • Project Manager/Scrum Master • Product Owners/BA


To illustrate what happens when you implement BDD, here are the before and after scenarios:

Before implementing BDD

Traditionally, software is designed in a waterfall approach where each stage happens in isolation and is then passed along to the next team. Think conveyor belt factory style:

  1. First the Business Analyst defines requirements
  2. Then the development team work on these requirements and sends for testing
  3. Then testing discovers lots of bugs and sends back to the development team
  4. Things are miscommunicated in transit so repeat steps 2 and 3 back and forth until you run out of time or budget
  5. Release software

The problem here is that everyone is in isolation, interpreting the requirements differently along the way. By the time code is handed in for release, resources are drained, and people are frustrated as there are issues that could have been avoided had everyone just been working together initially.

After implementing BDD

  1. Business and PO/BA have a set of requirements ready to implement
  2. BA, Developers & QA work collaboratively to refine these requirements by defining the behaviour of the software together. Thinking from the point of view of the user, they create detailed user stories. Throughout this process they address the business value of each user story and potential issues relating to QA that may crop up
  3. Each story is given an estimate of how complex it would be to implement
  4. The whole team now has a strong shared understanding of the behaviour of the software and when it will be considered complete
  5. Begin Sprint: Developers & QA then work together or in parallel to produce a product that is ready for release
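A shared, executable specification of the kind such a session produces can be sketched in plain Python. The library domain and its loan-limit rule below are entirely hypothetical, invented for illustration; in practice teams usually capture these scenarios in Gherkin with a tool such as Cucumber or behave.

```python
# A minimal Given/When/Then scenario written as an executable
# specification. The "Library" domain and its loan-limit rule are
# hypothetical, invented purely for illustration.

class Library:
    """A toy domain object whose behaviour the team has agreed on."""

    def __init__(self):
        self.loans = {}

    def borrow(self, member, book):
        # Business rule agreed collaboratively:
        # a member may hold at most 3 books at a time.
        held = self.loans.setdefault(member, [])
        if len(held) >= 3:
            return "refused: loan limit reached"
        held.append(book)
        return "borrowed"


def test_member_cannot_exceed_loan_limit():
    # Given a member who already holds three books
    library = Library()
    for title in ("Book A", "Book B", "Book C"):
        library.borrow("alice", title)
    # When she tries to borrow a fourth
    outcome = library.borrow("alice", "Book D")
    # Then the loan is refused
    assert outcome == "refused: loan limit reached"


test_member_cannot_exceed_loan_limit()
```

Because the scenario reads as behaviour from the user's point of view, the BA, developers and QA can all review and challenge it before a line of production code is written.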




This process saves time and money and is incredibly efficient. The core element of this efficiency is the team’s clear understanding of scope and what the fundamental features and behaviours required are. Because of the collaborative nature of BDD, issues are brought to light that otherwise would be an afterthought. For example, how a feature might behave differently on mobile or how a feature might deal with a large number of users. These are considerations that should be addressed from the outset.

What is the best way to implement BDD?

Just because people are in the same room or present at the same meeting doesn’t mean they will collaborate effectively. Each of the stakeholders plays a crucial role, and some teams/individuals may need to change their way of doing things to make sure that collaboration actually happens. The image below outlines the key deliverables for everyone involved when adopting BDD:




An example of BDD in practice

BDD is a risk-based approach to software development; it mitigates the likelihood of running into issues at crucial times. Teams at ECS Digital have been using the BDD process effectively, including when implementing a website feature for a popular media client. The client wanted to create a swipe feature so that mobile users could swipe to see different articles and move through the sections easily. Everyone collaborated from the initial stages, and the team was able to ensure high quality on the website throughout the implementation.

With a clear and shared definition of what the completed website would look like, they were able to innovate further to mitigate the risk involved. They decided that during times of low traffic they would send users to the new website with the new swipe feature and gather feedback. During riskier times of high traffic, users would see the usual website without the new feature. This ensured that when the feature became a permanent part of the entire website, the team was taking as little risk as possible.
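The traffic-based rollout described above can be sketched as a simple feature gate. The threshold and request-rate figures here are hypothetical, purely for illustration of the idea.

```python
# A sketch of a traffic-based rollout: serve the experimental swipe
# feature only while site traffic is below a threshold. The numbers
# are hypothetical, chosen only to illustrate the gate.

LOW_TRAFFIC_THRESHOLD = 500  # requests per minute (illustrative)

def serve_swipe_feature(current_requests_per_minute: int) -> bool:
    """Enable the experimental feature only during quiet periods."""
    return current_requests_per_minute < LOW_TRAFFIC_THRESHOLD

# During a quiet period the new feature is shown...
assert serve_swipe_feature(120) is True
# ...but at peak times users get the proven experience.
assert serve_swipe_feature(4000) is False
```

In a real system the same gate might be driven by a feature-flag service and live traffic metrics rather than a hard-coded constant.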

If this team had not been utilising BDD techniques – defining the website’s behaviour in detail and involving each team in the development of requirements – they might have released the feature without such precautionary measures, or run into many issues when approaching the release date.

If you’re interested in understanding more about BDD and delving into some of the jargon surrounding it – “gherkin syntax”, “the three amigos”, “impact mapping” & “living documentation” – read our previous article here: Behaviour Driven Development: It’s More Than Just Testing

Kouros Aliabadi
Xin’s Story as a QA and Continuous Delivery Consultant


My name is Xin Wang, and I am a QA and Continuous Delivery Consultant at ECS Digital. I recently published a blog explaining how I went from delivering presentations on Fashion Week 2017 fashion trends to writing functional tests as a software developer engineer.

Working in a male-dominated industry has been very different to what I was used to – the approaches some male engineers take are sometimes very different to the approach a female engineer would take. But these perspectives combined give you a much more valuable overview, which is why I really enjoy working on coding challenges with my colleagues.

Take a look at my video if you are interested in understanding why I switched my career around and how I am continuing with my journey as a software developer engineer.

Xin Wang
Is your master branch production ready?


Delivering software in a continuous delivery capacity is something that nearly every project strives for. The problem is, not many projects are able to achieve continuous delivery because they don’t have confidence in their application’s quality, their build pipelines, their branching strategy or, worst case, all of them.

A good indicator as to whether you fall into one of the above is to ask yourself: `can I confidently release the master branch right now?`

If your answer is no, then how do we start to break down and resolve these problems?

Building confidence in quality

A recent project I have been working on fell into a few of the above categories. Nearly all their testing was done on a deployment to a long-living environment, after a merge commit to master – along with a lot of duplicated work throughout their pipeline.

The test strategy shown above was for a simple front-end application that reads data from an external API.

To start, we identified areas of our application that we knew were unloved, or treacherous to develop. Once identified, we put in place appropriate test automation. When writing test automation it is so important that your tests are robust, fast and deterministic.

We pushed as much of our UI automation as possible down into the application. Ideally you want your application adhering to testing-pyramid principles. Asserting that elements have particular classes with tools such as Selenium is both time-costly and of little value; there are better, more appropriate tools for that.
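Pushing a check down the pyramid often means testing the logic that produces the UI rather than the rendered page. The headline-formatting function below is hypothetical, but it shows the pattern: a fast, deterministic unit test with no browser involved.

```python
# Instead of asserting on a rendered element through Selenium, test
# the logic that produces it directly. The formatting function here
# is hypothetical; the pattern is what matters.

def format_headline(title: str, max_length: int = 40) -> str:
    """Truncate long article titles for a card layout."""
    title = title.strip()
    if len(title) <= max_length:
        return title
    return title[: max_length - 1].rstrip() + "…"

# Milliseconds to run, no browser, no flakiness:
assert format_headline("Short title") == "Short title"
assert format_headline("x" * 100).endswith("…")
assert len(format_headline("x" * 100)) <= 40
```

A single Selenium smoke test can still confirm the page renders; the hundreds of edge cases live down here where they are cheap.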

Once our test scaffolding was in place, we started to feel more comfortable refactoring problem areas and reducing complexity.

We isolated our application by stubbing out external services or dependencies where necessary – we didn’t want to be testing services outside our scope. Where possible, we recommend agreeing a contract with your external dependencies and using this to develop against.
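One common way to do this in Python is to replace the real HTTP client with a stub that honours the agreed contract. The `ArticleService` and its client interface below are hypothetical, sketched only to show the isolation technique.

```python
# Isolate the app from an external API by stubbing the client with
# the response shape the contract promises. ArticleService and the
# get_json interface are hypothetical.
from unittest import mock

class ArticleService:
    def __init__(self, client):
        self.client = client  # anything with a get_json(path) method

    def latest_titles(self):
        payload = self.client.get_json("/articles/latest")
        return [a["title"] for a in payload["articles"]]

# The stub returns the agreed contract shape -- no network needed.
stub_client = mock.Mock()
stub_client.get_json.return_value = {
    "articles": [{"title": "First"}, {"title": "Second"}]
}

service = ArticleService(stub_client)
assert service.latest_titles() == ["First", "Second"]
stub_client.get_json.assert_called_once_with("/articles/latest")
```

Because the stub encodes the contract, a change to the real API surfaces as a failing contract test rather than a mysterious UI failure.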

We also recommend containerising your app. Being able to deploy and run the same image of an application locally and on production is incredibly powerful. Long gone are the days of long-living application servers and the phrase ‘well, it works on my machine’.

Start failing fast 

Once we had confidence that when our tests all passed then the application could be deployed, we then looked to address where our tests were running.

Having tests run after a merge commit to master is too late in the process. Leaving it this long introduces the risk that someone pushes the release-to-production button before the tests have been run.

We need to run tests earlier in the process.

In the past, to solve this problem you may have adopted complicated branching strategies (dev, test, master) which on paper seem reasonable, but in practice introduce horrendously slow, unnecessary feedback loops and messy merges between multiple branches.

We decided to harness the power of pull request environments instead, allowing our tests to run on short-lived infrastructure before we merge to master. With DevOps paradigms such as immutable infrastructure, infrastructure as code and containerisation, deploying a new environment becomes trivial.

This becomes even more powerful if you deploy your pull request environments in the same way as your production site, since you effectively test the deployment itself.

Having pull request environments spun up also caters for any testing requirements, such as exploratory testing or demos, and massively speeds up developer feedback loops.

The end result is much higher confidence in the quality of your application’s master branch, which to any project is invaluable.


This is a two-part series, with the next article focusing on how we can start to deliver the master branch to production. Watch this space.

Matt Lowry
Raising the profile of performance!


Performance testing – typically the job of the Non Functional Test (NFT) team – should always be completed against a stable build of the system, in an environment that resembles the final production setting as closely as possible. Extensive functional testing is usually carried out beforehand, along with various other essential actions.

On paper the above looks fine and as expected, but by the time the system reaches the NFT team the underlying code has passed through various developers over many iterations. New code gets added to existing code and existing code is refactored, as different team members work their magic to create the system.

With the code receiving so much attention in the build up to performance testing, the NFT team has a hard time determining where any code-related performance issues originate.

The solution

If the above quandary is to be solved, projects must avoid vehemently segmenting each phase of a project. To reduce any wasted time in the performance testing phase, ‘performance profiling’ tests can be run during the functional test phase. Their purpose is to quickly identify any code-related performance issues. With performance profiling, a small number of virtual users is sufficient – say five or ten concurrent. These profiling tests should be run every time code is checked-in, making performance testing an essential part of the daily build process.

The key is to ensure these lightweight performance tests are run on a consistent and stable environment. Whilst this environment won’t resemble the final production architecture, it will quickly highlight any degradation in performance. Each test will be executed against the code in its most recent form, making it possible to highlight the root of the problem quickly. Once diagnosed, issues can be fixed and re-tested before the code is released to the NFT team for formal and extensive load testing.
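A lightweight profiling run in this spirit can be sketched in a few lines: a handful of concurrent “virtual users” time an operation, and the build fails if latency regresses past a threshold. The endpoint here is simulated locally and the budget figure is illustrative; real runs would drive the deployed application with JMeter or a similar tool.

```python
# A minimal profiling harness: a few concurrent "virtual users" time
# an operation and the run fails on a latency regression. The
# simulated request and the 50 ms budget are illustrative only.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

THRESHOLD_SECONDS = 0.05  # illustrative pass/fail budget

def simulated_request() -> float:
    """Stand-in for one HTTP request; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.005)  # pretend work
    return time.perf_counter() - start

def profile(virtual_users: int = 5, iterations: int = 10) -> float:
    """Run batches of concurrent requests; return median latency."""
    timings = []
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(iterations):
            timings.extend(
                pool.map(lambda _: simulated_request(), range(virtual_users))
            )
    return statistics.median(timings)

median = profile()
assert median < THRESHOLD_SECONDS, f"performance regression: {median:.3f}s"
```

Wired into a CI job, the failing assertion is exactly the threshold condition described in step 6 below: the build goes red the moment a check-in degrades the profile.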

A ten-step guide to performance profiling

The above form of lightweight performance testing can have a positive impact on the speed at which you can get a product to market. And by following the ten-step guide below, adopting such an approach doesn’t have to be difficult or costly.

  1. Source an environment – This doesn’t necessarily have to be the production environment; it could even be the test environment, as long as it’s not being used when the profiling is conducted. It might be possible, for example, to complete the process out of normal office hours. Be sure to log the specification of the machine used, so that it forms part of your baseline setup.
  2. Choose a tool – Ideally, this would be the same tool that’s used for full-scale performance testing (during the NFT phase), as the same scripts can be reused, just at a much lower load. Otherwise, open source tools like JMeter and WebPageTest are excellent for both profiling and full-scale testing.
  3. Start early – Behaviour-driven development and agile vertical-slice development make it pretty easy to create some client-facing functionality relatively quickly. Once at this stage, it’s time to start writing the profile test – even if it’s just calling a simple GET against a web page. Simply put, the earlier you test, the earlier you find bugs.
  4. Run a baseline – With the script in place, you are ready to run a baseline. Make sure you discuss the results with the team and the business to ensure everything fits with what was initially expected.
  5. Schedule your tests – For this, you can use a Continuous Integration (CI) server, or even a cron job. If you’re using Jenkins/JMeter, it’s worth using the JMeter plugin – this will not only run the tests but also report back some useful graphs. It makes sense to schedule these for every time new code is checked in.
  6. Monitor your results – Hooking your tests up to the CI server can help with this – just be sure to implement some kind of threshold pass and failure conditions. This way, you can sound the alarm if the performance profile build goes ‘RED’.
  7. Learn from the improvements – If it’s clear that the application is starting to perform better, determine the reasons for this and see if those improvements can be implemented elsewhere in the code. Negative changes should also be picked up on and investigated promptly.
  8. Maintain the tests – Whenever more functionality is completed, adjust the tests accordingly. It’s also important to retire tests if they become obsolete. This should be treated like any other testing or development task.
  9. Monitor your system – Monitoring should be installed around all parts of the system, including the database, the web server, CPU and memory usage. New Relic may come in handy here, and there are a number of open source tools on the market as well.
  10. Don’t forget the full load test – The goal of performance profiling is to make the process easier and more watertight – it’s not designed to replace full load testing. You should still stress/soak/load test your application as normal.

Why introduce performance profiling?

When performance profiling is used as part of the daily build process, it becomes easier to instantly highlight any small performance issues early on in the process – long before it even reaches the NFT team. In turn, this means that developers are able to optimize and fine-tune the code as they go, ensuring the best possible results. It can be used to lighten the load on the NFT team, who will benefit from being able to focus on serious load, performance and stress tests to identify bottlenecks that aren’t necessarily code-related.

Performance profiling is definitely not a replacement for traditional load and performance testing, but its benefits as a complementary tactic are too significant to ignore. Implemented in the correct way, it will significantly cut overall costs, and help transform your releases into NFT from functionally good to operationally great.

QAWorks Team