Behaviour Driven Development in a nutshell

If you’re new to Behaviour Driven Development (BDD) and don’t understand the jargon surrounding it, this is the article for you. This article will explain the most fundamental concepts as well as the basic implementation of this agile methodology.

Let’s start by clearing up the misconceptions. BDD is not limited to test automation, and it is not about tools. Fundamentally, BDD is about great communication: an application is specified and designed by describing in detail how it should behave to an outside observer. In other words, BDD is about working in collaboration to achieve a shared level of understanding where everyone is on the same page. That’s the foundation you need. Easy enough.

So, what does this ‘great communication’ mean for software development?

Great communication means:

  • A usable product first time round, which gets you to market faster
  • A lower defect rate and higher overall quality
  • A workflow that allows for rapid change to your software
  • A very efficient and highly productive team

How is it done?

Meet our key stakeholders/teams:

Developers • Testers/QA • Project Manager/Scrum Master • Product Owners/BA

 

To illustrate what happens when you implement BDD, here are the before and after scenarios:

Before implementing BDD

Traditionally, software is designed in a waterfall approach where each stage happens in isolation and is then passed along to the next team. Think conveyor belt factory style:

  1. First the Business Analyst defines requirements
  2. Then the development team works on these requirements and sends the result for testing
  3. Then testing discovers lots of bugs and sends the code back to the development team
  4. Things are miscommunicated in transit, so steps 2 and 3 repeat back and forth until you run out of time or budget
  5. Release software

The problem here is that everyone is in isolation, interpreting the requirements differently along the way. By the time code is handed in for release, resources are drained, and people are frustrated as there are issues that could have been avoided had everyone just been working together initially.

After implementing BDD

  1. Business and PO/BA have a set of requirements ready to implement
  2. BA, Developers & QA work collaboratively to refine these requirements by defining the behaviour of the software together. Thinking from the point of view of the user, they create detailed user stories – see the example scenario after this list. Throughout this process they address the business value of each user story and any potential QA issues that may crop up
  3. Each story is given an estimate of how complex it would be to implement
  4. The whole team now has a strong shared understanding of the behaviour of the software and when it will be considered complete
  5. Begin Sprint: Developers & QA then work together or in parallel to produce a product that is ready for release
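To make the idea of a detailed, behaviour-focused user story concrete, here is a hypothetical scenario for a swipe feature, written in the Given/When/Then style the whole team can read, together with a sketch of matching cucumber-js step definitions in TypeScript. The scenario wording and the helper methods on `this` are invented for illustration:

```typescript
// Hypothetical feature, readable by the whole team:
//
//   Scenario: Reader swipes to the next article
//     Given I am reading an article on a mobile device
//     When I swipe left
//     Then the next article in the section is displayed

import assert from 'assert';
import { Given, When, Then } from '@cucumber/cucumber';

Given('I am reading an article on a mobile device', async function () {
  await this.openArticle({ viewport: 'mobile' }); // hypothetical world helper
});

When('I swipe left', async function () {
  await this.swipe('left'); // hypothetical world helper
});

Then('the next article in the section is displayed', async function () {
  assert.strictEqual(await this.currentArticleIndex(), 1); // hypothetical world helper
});
```

The point is not the tooling: the plain-language scenario is the shared specification, and the automation hangs off it.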

 

[Image: the BDD workflow]

 

This process saves time and money and is incredibly efficient. The core element of this efficiency is the team’s clear understanding of scope and of the fundamental features and behaviours required. Because of the collaborative nature of BDD, issues are brought to light that would otherwise be an afterthought – for example, how a feature might behave differently on mobile, or how it might cope with a large number of users. These are considerations that should be addressed from the outset.

What is the best way to implement BDD?

Just because people are in the same room or present at the same meeting doesn’t mean they will collaborate effectively. Each of the stakeholders plays a crucial role, and some teams or individuals may need to change their way of doing things to make sure that collaboration actually happens. The image below outlines the key deliverables for everyone involved when adopting BDD:

 

[Image: key BDD deliverables for each stakeholder]

 

An example of BDD in practice

BDD is a risk-based approach to software development; it mitigates the likelihood of running into issues at crucial times. Teams at ECS Digital have been using the BDD process effectively, including when implementing a website feature for a popular media client. The client wanted to create a swipe feature where mobile users could swipe to see different articles and move through the sections easily. Everyone was collaborating from the initial stages, and the team was able to ensure high quality on the website throughout the implementation.

With a clear and shared definition of what the website would be like when completed, they were able to innovate further to mitigate the risk involved. They decided that during times of low traffic they would send users to the new website with the new swipe feature and gather feedback. Then, during riskier times of high traffic, users would see the usual website without the new feature. This allowed the team to ensure that when they made the feature a permanent part of the website they were taking as little risk as possible.

Had this team not been using BDD techniques – defining the website’s behaviour in detail and involving each team in the development of requirements – they might have released the feature without such precautionary measures, or run into many issues when approaching the release date.

If you’re interested in understanding more about BDD and delving into some of the jargon surrounding it – “gherkin syntax”, “the three amigos”, “impact mapping” & “living documentation” – read our previous article here: Behaviour Driven Development: It’s More Than Just Testing

Kouros Aliabadi
Helping Developers Become Testers

As software development practices evolve, the line between developer and tester has become increasingly blurred. As testers, we are now expected to know how to set up test automation frameworks, code different types of tests (e.g. integration, functional, performance) and even understand and contribute to the build and deploy pipeline process.

Traditionally, there has always been a clear distinction between development and testing. In older software lifecycle models such as Waterfall and V-Model, testing only starts when development work is finished, with few if any automated tests put in place.

Over the years, companies started to adopt a more collaborative and iterative way of working where the testing process is often championed to start as early as the requirement gathering stage.

Even though development practices have evolved throughout the years there is still, from my experience, this misconception that developers cannot write tests. This is why specialist roles such as SDET (Software Development Engineer in Test) were created – to bridge the gap between developers and manual testers. Developers are more than capable of writing tests – they already write most of the unit tests for their own code. So then, why do some developers not test?

From the different clients I have worked with, I have observed the following reasons why this might be the case:

1. No one asks them to test

If management don’t push for them to do this, they will think that automating tests is not part of their responsibility. This initiative has to come from the top. Test architects and SDETs who feel developers should help out with test automation will not be able to convince them on their own.

2. They don’t want to test

Most developers still assume that features should be automated solely by testers. Once their ticket passes peer review, they believe that their work is finished. Some developers hate writing end to end tests because they believe the process is slow and flaky. Those developers who have tried to help out find tools such as Selenium difficult to set up and work with.

3. They lack guidance on looking at their features from an end-to-end perspective

Most developers work on single components, so they can lack an understanding of how their components will integrate with the rest of the system. Also, the requirements provided to them often cover only the positive scenarios, leaving negative scenarios missed or neglected.

How do we then help our fellow developers become testers?

How do we bridge this gap and ensure that we maximise everyone’s potential?

1. Get support from management

Support needs to come from the top. Make sure that you communicate what the business benefits are if developers help the testers. Quality should be owned by everyone and not just by the testers.

2. Regular knowledge sharing with the business

Developers should be told how the application they’re working on is used by the business or its customers. A simple yet effective idea is to have regular knowledge sharing sessions with the business. Another good idea is to have these sessions documented on Confluence or something similar.

3. Documentation on how to contribute to the automation framework

There should be clear guidelines on how developers can help contribute to writing tests. If someone has not used tools like Cucumber and Selenium, make sure a “how to” guide is created.
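As an illustration of what such a guide might begin with, here is a minimal selenium-webdriver script in TypeScript; the URL and expected title are placeholders:

```typescript
import { Builder, until } from 'selenium-webdriver';

async function checkHomepageTitle(): Promise<void> {
  // A 'how to' guide would cover installing the browser driver first.
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com'); // placeholder URL
    await driver.wait(until.titleContains('Example'), 5000); // placeholder title
  } finally {
    await driver.quit();
  }
}

checkHomepageTitle().catch((err) => {
  console.error(err);
  process.exit(1);
});
```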

4. Introduce pair programming when writing tests

Pair programming is a common way of working amongst developers and this can also be used when writing tests. Experienced SDETs need to pair with developers to share this knowledge.

5. Be a teacher and educate

Point them to resources that will help them with writing tests and show them best practices. Guide them on how to do it but don’t write their tests on their behalf. Peer review their code. Be patient.

6. Modify your process to include testing as part of a developer’s workflow

Make it a habit to include automated tests before changing your ticket to done. This is especially useful for new features: rather than writing the tests after the feature is deployed, write them during the development stage.

7. Include automation tests as part of the CI/CD pipeline

The more diverse the tests added to the pipeline, the more visible the results will be to everyone. Utilise an effective test reporting dashboard so the results of all test runs can be easily displayed. By having these tests in the pipeline, developers will have visibility when they break existing features.

8. Evaluate testing tools effectively

To encourage developers to write tests, the testing tool should be somewhat familiar to them. If you work on a team where JavaScript is the language of choice, there is no point trying to implement the automation framework in Ruby or Python. Speak the same language as the developers. If you work in a company where you’re tasked with setting up the automation framework, ask everyone’s opinion on which tool to use. More and more testing tools are emerging these days, such as Cypress, which aims to provide an easy onboarding process for developers to start testing.
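For example, a first Cypress test reads almost like plain English, which lowers the barrier for developers picking up testing; the selectors and page below are invented:

```typescript
// Hypothetical spec file, e.g. cypress/integration/search.spec.ts
describe('site search', () => {
  it('shows results for a valid query', () => {
    cy.visit('/'); // baseUrl comes from the Cypress config
    cy.get('[data-test="search-input"]').type('devops{enter}'); // invented selector
    cy.get('[data-test="search-result"]').should('have.length.greaterThan', 0);
  });
});
```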

So, what happens when developers become testers?

I’ve seen the benefits first hand. At the client where I’m based, we’ve introduced this approach where developers write the automated tests for some of the new features. Not only are we releasing new features quickly, but knowledge sharing and collaboration between developers and QAs is better than ever before.

Developers should know about testing the same way automation testers know about coding. By getting developers involved with the testing process, we begin to utilise everyone’s knowledge and potential, as well as avoid scenarios where bottlenecks occur.

If you’d like more specific advice around how to help your developers become testers, you can get in touch with us here.

Marie Cruz
Solve your test data woes with GraphQL & schema introspection

While highly technical in places, this article goes through some solutions ECS Digital has been able to provide for a client to improve testing strategies and reduce costs. Although not for everyone, we hope that sharing our technical expertise in this area can benefit the community at large.

Tech stack: React | GraphQL | Apollo GraphQL | JavaScript | TypeScript | Cypress

One of our clients has been going through a change period where we are re-platforming their whole tech stack. As part of this process, we felt that now was a really great time to address an underlying issue that we have experienced with the old tech stack.

That problem is test data.

When I speak of test data, this applies not only to the unit and integration tests, but also to our functional UI tests.

We had two fundamental problems we wanted to solve:

Problem one

It was the responsibility of each developer to test their component with whatever unstructured data they saw fit. If a developer creates a component that expects data of shape A and creates a test with data of shape A, the test will pass. If, however, over time the real data passed to the component changes to shape B, we have no idea whether our component still works until quite late in the development process, which introduces a long feedback loop.
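A minimal sketch of the problem, with invented names: the component and its test agree on shape A, so the test keeps passing long after the real API has moved on to shape B:

```typescript
// Shape A: what the component (and its test) were written against.
type Article = { headline: string };

function renderHeadline(article: Article): string {
  return article.headline.toUpperCase();
}

// The developer's test supplies shape A, so it always passes...
console.assert(renderHeadline({ headline: 'all good' }) === 'ALL GOOD');

// ...but the real API now returns shape B, and nothing in the test
// suite flags it. The mismatch only surfaces at runtime, late in the
// development process.
const realDataFromApi: any = { title: 'the field was renamed' }; // shape B
renderHeadline(realDataFromApi); // compiles (any), crashes at runtime
```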

Problem two

Our functional UI tests ran on a long living full stack. There was a known data set that we could reset to, which was all stored as JSON in its own repository – completely separated from the rest of the stack and its tests. To update the fixture data on the full stack you would need to understand which test cases were already using the JSON, then manually change the JSON, create a PR, get it reviewed and merged, and then run the mechanism to reset the data on the long living stack.

At the start of the project this fixture data was very useful. It allowed our functional UI tests to be robust and repeatable. As a result, when all our tests passed we had high confidence our site was releasable.

Unfortunately, over time and as the software naturally adapted, our fixture data became harder to update and maintain. Some parts were updated inconsistently, and we had no clarity on which tests were tied to which fixture data and so shouldn’t be updated. Eventually our fixture data became unmaintainable, and updating it would break other tests.

We spent a lot of time thinking about how to solve both of these problems, and after quite some time and several approaches we finally achieved something that we felt was clean and maintainable.

Solution

Like a lot of the industry we are migrating to a GraphQL back end.

This opened an interesting opportunity as GraphQL uses types and fields to develop a `query language` for your API. You are only ever able to query for fields that exist on their corresponding types, otherwise GraphQL will complain.

GraphQL also supports something called schema introspection, which provides a mechanism to pull down the schema of any GraphQL server that has it enabled. This can be useful to see what queries your API will support.

https://graphql.org/learn/introspection/

Another tool, GraphQL code generator, can take a GraphQL schema as an input and will output all the type definitions of your GraphQL schema as a TypeScript file, along with any type descriptions present on your schema (shown below).

https://github.com/dotansimha/graphql-code-generator
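As an illustration of the kind of file it produces, generated definitions look roughly like this for a hypothetical Article type (real output mirrors whatever your schema defines):

```typescript
// Illustrative output of graphql-code-generator's `typescript` plugin.
export type Scalars = {
  ID: string;
  String: string;
  Boolean: boolean;
  Int: number;
  Float: number;
};

/** An article as served by the GraphQL API. */
export type Article = {
  __typename?: 'Article';
  id: Scalars['ID'];
  headline: Scalars['String'];
  relatedArticles: Array<Article>;
};
```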

Problem One Solved

Now that we had the capability to translate our production GraphQL server types into TypeScript definitions, we were satisfied that we could start to build a fixture generator package matching our production GraphQL server. A key part of building this package was to provide a consistent API for all clients of the fixture generator package. We also ensured that whenever the logic for building fixture data started to become complicated, unit and integration tests were baked in.

Once the generator package was in place, the workflow was as follows: any time a client of the fixture generator package runs, schema introspection and type-file generation happen first as a precursor. The whole process takes around a second, and once it has completed the fixture generator TypeScript package will build. If the schema has changed and the fixtures no longer adhere to the types, the build will fail and you are alerted straight away.

This provides a huge benefit to our tests, as it now means that our tests ask for the data they require. The complexity of managing test data is no longer the responsibility of the tests. We also know that the data will remain correct as per the production schema, even as it evolves over time. Finally, if the types do change, we only need to fix things in one place for all our tests to be updated.

You can see an example of how you would use the fixture generator below.
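A sketch of what that usage might look like; the package name and builder API here are illustrative assumptions rather than the real fixture-generator interface:

```typescript
import { buildArticle } from 'fixture-generator'; // hypothetical package and builder

// The test asks for exactly the data it cares about. Every other field
// is filled with schema-valid defaults, and the whole package fails to
// build if the fixtures drift from the introspected production schema.
const article = buildArticle({
  headline: 'Swipe feature goes live',
  relatedArticleCount: 3, // hypothetical field
});
```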

Problem Two Solved

The fixture generator brought us closer to solving problem two, but we still had no way to run our functional UI tests and somehow pass our fixture generator data to our front end. The front end was still querying the long living GraphQL environment.

Apollo GraphQL also provides some powerful tools around stubbing, whereby you can pass in your GraphQL schema, as well as overrides for type definitions in your resolver map. Once you have defined what data you want returned when you query a type, you can start a local GraphQL server.

https://www.apollographql.com/docs/apollo-server/features/mocking.html
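A minimal sketch of such a mock server with apollo-server, assuming the schema has already been pulled down via introspection; the Article type and its field are invented:

```typescript
import { ApolloServer } from 'apollo-server';
import { typeDefs } from './schema'; // assumed: schema obtained via introspection

// Resolver-map overrides: any type not listed here falls back to
// Apollo's default mocks.
const mocks = {
  Article: () => ({
    headline: 'Stubbed headline',
  }),
};

const server = new ApolloServer({ typeDefs, mocks });

server.listen(4000).then(({ url }) => {
  console.log(`Mock GraphQL server running at ${url}`);
});
```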

Once running, we could point our front end to our local stubbed GraphQL server.

The final piece was to have our tests once again define the data they required and then spin up the local GraphQL server.

We have also been using Cypress as our new functional UI test tool.

Cypress as a tool is groundbreaking and is revolutionising UI tests. It runs in the same run loop as your application in the browser and provides new features for UI testing such as playback mode. I’d really recommend taking a look if you haven’t already.

In our tests we run a Cypress task to start up our short living mock GraphQL server and provide the fixture data that we want GraphQL to run with straight from the test.

Once again this means that our tests explicitly ask for data. Previously, if we wanted a test to work with 4 related articles, we would have had to edit a separate repository: work out whether the data we wanted to edit was already being used by other tests, create a pull request, get it approved and merged, then run the reset-data mechanism.

Now it’s as simple as updating a variable inside the test; it is clear what data the test needs to run, and the previous feedback loop is practically removed.
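Putting it together, a test now reads something like the sketch below; the task name, builder and selectors are invented for illustration:

```typescript
import { buildArticle } from 'fixture-generator'; // hypothetical package

describe('related articles', () => {
  it('renders the requested number of related articles', () => {
    const relatedArticleCount = 4; // the only line to change per test

    // Hypothetical Cypress task: starts the short living mock GraphQL
    // server with exactly the fixture data this test needs.
    cy.task('startMockGraphQLServer', {
      article: buildArticle({ relatedArticleCount }),
    });

    cy.visit('/article/some-slug');
    cy.get('[data-test="related-article"]').should('have.length', relatedArticleCount);
  });
});
```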

If you want to understand at a deeper level how this works, it’s all open source.

Take a look at the repositories below:

https://github.com/newsuk/times-components/tree/master/packages/fixture-generator

https://github.com/newsuk/times-components/tree/master/packages/mock-tpa-server

Or alternatively, find out how ECS Digital can help improve your test strategy by contacting us.

Matt Lowry
Xin’s Story as a QA and Continuous Delivery Consultant

My name is Xin Wang, and I am a QA and Continuous Delivery Consultant at ECS Digital. I recently published a blog explaining how I went from delivering presentations on Fashion Week 2017 fashion trends to writing functional tests as a software developer engineer.

Working in a male dominated industry has been very different to what I was used to – the approaches that some male engineers take are sometimes very different to the approach that a female would take. But these perspectives combined give you a much more valuable overview, which is why I really enjoy working on coding challenges with my colleagues.

Take a look at my video if you are interested in understanding why I switched my career around and how I am continuing with my journey as a software developer engineer.

Xin Wang
Day In the life as a Technical Test Engineer

Hi there, my name is Marie Cruz, and I’m a Senior Technical Test Engineer at ECS Digital. I’m responsible for providing test services to various clients with the focus of implementing BDD processes. I recently published a blog explaining how I balance being a mother and a woman in technology.

Having a family and an active career in tech, people tend to ask me how I manage to keep up with both. My answer is making sure you understand what’s important, but also ensuring that you are happy with the choices that you are making.

If you’ve ever wondered how a female can handle both a career in tech and a family life, feel free to take a look at my “Day in the Life as a Test Engineer” video. I hope it inspires you to take the leap into technology too!

Marie Cruz
Is your master branch production ready?

Delivering software in a continuous delivery capacity is something that nearly every project strives for. The problem is, not many projects are able to achieve continuous delivery because they don’t have confidence in their application’s quality, their build pipelines, their branching strategy or, worst case, all of them.

A good indicator as to whether you fall into one of the above is to ask yourself: `can I confidently release the master branch right now?`

If your answer is no, then how do we start to break down and resolve these problems?

Building confidence in quality

A recent project I have been working on fell into a few of the above categories. Nearly all their testing was done on a deployment to a long living environment, after a merge commit to master, along with a lot of duplicated work throughout their pipeline.

Their test strategy was for a simple front-end application that reads data from an external API.

To start, we identified areas of our application that we knew were unloved, or treacherous to develop. Once identified, we put in place appropriate test automation. When writing test automation it is so important that your tests are robust, fast and deterministic.

We pushed as much of our UI automation down into the application as possible. Ideally you want your application adhering to testing pyramid principles. Using tools such as selenium to test that elements have particular classes is both time costly and of little value; there are better, more appropriate tools for this (see the sketch below).
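For instance, a class assertion can live in a plain unit test with Jest and Testing Library, running in milliseconds with no browser involved; the component and class name are invented, and `@testing-library/jest-dom` is assumed for `toHaveClass`:

```tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import '@testing-library/jest-dom';
import Banner from './Banner'; // hypothetical component

// The kind of check that is wasteful through selenium: asserted at the
// unit level in milliseconds rather than through a real browser session.
test('banner is highlighted when showing breaking news', () => {
  render(<Banner breaking />);
  expect(screen.getByRole('banner')).toHaveClass('breaking-news');
});
```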

Once our test scaffolding was in place, we started to feel more comfortable refactoring problem areas and reducing complexity.

We isolated our application by stubbing out external services or dependencies where necessary –  we didn’t want to be testing services outside our scope. Where possible, we recommend agreeing a contract with your external dependencies and using this to develop against.
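As a sketch, pinning an external dependency to an agreed contract with nock looks like this; the host, path and payload are placeholders:

```typescript
import nock from 'nock';

// Stub the external articles API to the agreed contract, so tests never
// depend on a service outside our scope.
nock('https://api.external-service.example')
  .get('/v1/articles')
  .reply(200, [{ id: '1', headline: 'Stubbed article' }]);
```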

We also recommend containerising your app. Being able to deploy and run the same image of an application locally and on production is incredibly powerful. Long gone are the days of long living application servers and the phrase ‘well, it works on my machine’.

Start failing fast 

Once we had confidence that when our tests all passed then the application could be deployed, we then looked to address where our tests were running.

Having tests run after a merge commit to master is too late in the process. Leaving it this long introduces a risk that someone pushes the release to production button before tests have been run.

We need to run tests earlier in the process.

In the past, to solve this problem you may have adopted complicated branching strategies (dev, test, master) which on paper seem reasonable, but in practice introduce horrendously slow, unnecessary feedback loops and messy merges between multiple branches.

We decided to harness the power of pull request environments instead, allowing our tests to run on short living infrastructure before we merge to master. With DevOps paradigms such as immutable infrastructure, infrastructure as code and containerisation, deploying a new environment becomes trivial.

This becomes even more powerful if you deploy your pull request environments in the same way as your production site, since you effectively test the deployment itself.

Having pull request environments spun up also caters for any testing requirements, such as exploratory testing or demos, and massively speeds up developer feedback loops.

The end result is much higher confidence in your application’s quality in the master branch, which to any project is invaluable.

*******

This is a two-part series, with the next article focusing on how we can start to deliver the master branch to production. Watch this space.

Matt Lowry
Understanding SAST and DAST Adoption

In order to achieve a software delivery lifecycle (SDLC) that is efficient and cost-effective, we strive to automate every step with as little human interaction as possible. We do this because the ability to hold a product to a high quality standard throughout its lifespan is essential in building a maintainable, resilient and secure solution.

This blog focuses on the tools and approaches that help us maintain a high level of code quality and application security, while remaining relatively hands-off. We look at the benefits and problems of these tools and present our recommendations about which approach to take, and when. Whilst a little technical in places, if you’re interested in SAST and DAST adoption and understanding the difference, this is the blog for you.

Here are four core concepts we’ll be delving into:

  • Static Application Security Testing (SAST)

Runs on the development machine, or set up in your CI/CD to run on every code push to Git

  • Interactive Application Security Testing (IAST)

Conducted post-deploy and uses a combination of techniques and tools to achieve the desired results (a security expert ‘interacts’ with the application under test by setting up attack vectors)

  • Dynamic Application Security Testing (DAST)

Tool-based security testing that is used on top of functional tests to check application communication channels.

  • Runtime Application Self Protection (RASP)

Production monitoring and risk assessment, which relies on tools and automated processes to counter application attacks.

SAST – Static Application Security Testing

SAST is used to identify possible improvements by analysing the source code or binaries without running the code. It is fast, and since you don’t need the code to be compiled, SAST tools can be integrated directly into the IDE (Integrated Development Environment). This gives developers immediate feedback about the code they write and how they can deliver a better software product. Projects that manage to integrate SAST into their SDLC will notice immediate benefits in code quality, as code can adhere to a more detailed DoD (Definition of Done) checklist without needing to go through a PR (Pull Request) review.

The main benefit of SAST adoption is that developers have immediate feedback on their code and how to improve it; there is no need to deploy or compile the code.

The problem with SAST is that the application code written by developers is just a small part of an application under test. In many cases we rely on different languages, frameworks, servers that interact and many other systems that make up the ecosystem. If you are doing only static analysis, you are ignoring not only application execution and infrastructure but also the communication protocols. Hackers will usually use information kept in cookies and requests to penetrate the system and exploit flaws in your infrastructure or application.

SAST tools can be set up on developer machines or in their IDE, but can also be set up as a CI/CD tool.

IAST – Interactive Application Security Testing

IAST relies on agent tools set inside the application at communication junction points. Tools need to be configured and each agent will gather results to a certain degree of accuracy.

“IAST is a combination of SAST and Dynamic Application Security Testing (DAST). But this security solution has still not matured to a point that it can be defined precisely or measured accurately against rivalling solutions.” – Sharon Solomon

Due to these factors, the tools might not be a fitting solution for production environments. The effectiveness of the tools is largely affected by the instrumentation and attack simulations.

The implication is that engineers and security professionals are responsible for setting up and running the analysis of results, thus requiring specialised personnel. Agent installation inside the infrastructure might also run into constraints set by banking rules and regulations.

DAST – Dynamic Application Security Testing

DAST is great for developers, allowing them to run rapid health checks on their code. It should be mentioned, though, that this often creates a false sense of safety and security, which can be a very precarious position. Because of the nature of the systems under test, DAST can be run as a security proxy. The advantage of such an approach is that you can take your existing tests (integration or E2E) and run them through the DAST proxy. On top of testing your application business flows and environment setup, you also get a nifty security report on all the requests that bounced during testing. Reports usually contain warnings for industry standard OWASP security threats. The security assessment can be further refined by security experts in order to achieve a more comprehensive suite of checks.
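As a sketch of the proxy idea, an existing API check can be routed through the DAST proxy so that every request it makes is recorded and analysed; the proxy address and endpoint are assumptions (OWASP ZAP, for example, commonly listens on localhost:8080):

```typescript
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Route the test's traffic through the DAST proxy so the proxy can
// inspect every request and response the test generates.
const agent = new HttpsProxyAgent('http://localhost:8080'); // assumed proxy address

async function checkArticlesEndpoint(): Promise<void> {
  const res = await fetch('https://staging.example.com/api/articles', { agent });
  console.log(`Status: ${res.status}`); // the security findings come from the proxy's report
}

checkArticlesEndpoint();
```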

A benefit of DAST adoption is that developers or security analysts can identify sensitive information exposed by the system through analysing the requests generated by the application.

On a day to day basis, developers can analyse changes in requests and verify that sessions contain only the desired content. Security analysts also now have a tool that can see the underbelly of all business flows, so they can focus straight away on attack vectors and other security aspects. Policies can be verified and enforced (e.g. GDPR adherence, by identifying how sensitive user data is exposed within the application communication). Tools usually provide warnings on standard configurations, but the large array of tools requires fine-tuned configuration. Some tools provide community script repositories which can be used directly or customised to project needs.

The problem faced with DAST tools is that they generate a large number of warnings (and false positives) that need to be carefully investigated by developers and security professionals alike. DAST tools require extensive configuration in order to achieve the targeted results.

DAST tools can be set up in the development infrastructure or as a CI/CD tool. Developers can use the DAST proxy tool from their local machine (by redirecting their tests through the proxy).

RASP – Runtime Application Self-Protection

RASP tools are designed to monitor running applications and intercept all communication, making sure both web and non-web applications are protected. Depending on how the tool is set up, it can signal identified threats or it can take action. RASP can stop the execution of malicious code (e.g. if it identifies an SQL injection) or it can terminate a user session in order to stop an attack. The way it stops attacks depends on how RASP and the application under test are integrated.

There is an API based approach, in which the RASP tools use the application API to steer the behaviour of the application. Developers will find that through such an approach they can handle the integration with RASP tools in a very specific way (e.g. a login app might define an extended API to cope with custom decisions from the RASP tools). There is also an approach where a wrapper is set up for the app and RASP sits on top, giving a more general integration with an existing app.

A benefit of RASP is that it can distinguish between attacks and legitimate requests, and it can limit or block access for attackers that have already gained access to the application. These capabilities will not safeguard against all threats; for this reason, security specialists recommend also building security mechanisms at the application level.

A drawback is that environments using RASP can see their performance affected by the adoption of active scanners. User experience may be impacted by the added latency.

Conclusions

Due to the nature and scope of each tool and how they fit into the SDLC, there is no single solution that will automate and safeguard delivery to production.

SAST is the most cost effective way of checking for code related defects and security threats, but its scope (static code) ignores the vast majority of interacting elements in a software solution.

IAST adoption is desirable but might take more time to integrate as a formal step in the SDLC, due to the specialised resources and tooling required.

DAST complements SAST by checking SAST’s blind spots (running against deployed code, checking environment configurations and communication protocols) and providing extended reports with security in mind.

Note that DAST tools can be used as part of CI/CD processes. Since IAST and DAST are quite similar in many aspects, their capabilities are transferable.

RASP uses a combination of tools for network traffic monitoring. Due to the proactive nature of intrusion detection systems, a set of tripwires (e.g. honeypots) is set by network and security professionals and needs to be monitored. The response to identified threats can then be handled carefully and in a time effective way.

We would recommend the use of a small set of tools and practices that make sense for your SDLC. Gradual adoption of tools and processes should always be made with development delivery in mind. SAST on its own will not safeguard against a number of threats and will signal only code related issues. Ultimately, adopting the right set of tools will help complement coverage and type checks performed on the system to the point where the code is production ready.

If you’d like more specific advice around the right tools and practices for your SDLC you can get in touch with us here.

Voicy Turcu
On being a mum and a woman in tech

Like most people, I had a five-year plan after I graduated from university. Get a nice job and work for a great company, get married, start a family and buy a house. Fast forward five years and here I am, attempting to write a blog about how I balance being a mother and a woman in technology while listening to my daughter having a tantrum!

Being a first-time mum, I struggled a bit after my maternity leave to get used to the idea of working again. I felt like I had forgotten how to code. Not to mention that I was given the responsibility of a Test Architect role at the client site where I am based. I had to familiarise myself with new tools that I hadn’t used before and somehow, I had to lead the team. It was daunting!

At the same time, I was worrying about my daughter all the time. It was hard to focus at work and it definitely wasn’t the best start (let’s just say that my stress hormones were through the roof!). But somehow, I managed to make it work in the end. It wasn’t easy and there were still some sleepless nights (teething is still a nightmare!) but I’m going to list the things that helped me balance my work and my responsibilities as a mum.

  1. Share the responsibility

This I feel is the most important. Don’t be afraid to ask for help and share the responsibility. You won’t be able to do everything by yourself! My husband is very hands-on with our daughter so during his days off, he looks after her. Ask family and friends to help out too. We’re lucky that my mother-in-law helps look after my daughter when my husband and I are both at work. There are also times when my parents pick up my daughter, so they can look after her. We pre-plan our schedule and check everyone’s availability so we know who will look after our daughter on which day.

  2. Flexible working is the way forward

If you can work from home or do flexible hours, ask for it. From time to time, I work from home if there is no available babysitter that day or if I need to take my daughter to hospital.

  3. Avoid working outside hours

You might be tempted to bring some of the work home with you if you have tight deadlines, but try to avoid doing this if possible. I used to bring work home with me to finish off some tasks, check Slack messages and reply to emails, but this meant that even when I was home, I was still thinking about work rather than just spending quality time with my daughter. This just made me more stressed in the end, so if I do have deadlines, I try to be more focused at work and time box my tasks. If it’s something that your colleagues can definitely help with, share the responsibility. Again, you can’t do everything by yourself 🙂

  4. Stop overthinking about your children

It’s natural that we tend to worry about our little ones. I used to worry a lot about my daughter while at work and text my husband or my mother-in-law to see how she was doing – had she eaten or drunk her milk, had she had her nap, was she crying, etc. – and I always got the same answer: that she was doing ok. Rather than spending time worrying about things I couldn’t change, I now use that time to be focused at work so I can get home sooner and answer these questions myself.

  5. Find time to learn

Now this might be difficult for some of you but if you can, still find time to learn something new every day. It doesn’t matter if it’s just an hour or 30 minutes. Especially in the tech industry, there are always new tools coming out. So, once my daughter is asleep, I make a habit of reading a book, reading tech blogs, or doing a little bit of coding.

  6. Find a company that appreciates you

I feel that this is as important as the first point. If you work for a company that micromanages and doesn’t give you room to improve, that might be a red flag. It’s great that I work for a company that is appreciative of what I do and rewards those who have done a great job. Recently, I was nominated for an Outstanding People Award and it has given me a great boost to continue doing what I’m doing – I must be doing something right after all!

Achieving a work-life balance, especially if you are a mum, is a challenge, but it is doable. It was difficult at the beginning, but like everything else, it gets easier 🙂

Join our Women In Tech DevOps Playground on 8th November where we will be getting hands-on with Cypress!

Follow other stories from the ECS Digital team here.

Marie Cruz
AyeSpy, a new way to test for visual regressions

Bill Gates famously said, “I will always choose a lazy person to do a difficult job because a lazy person will find an easy way to do it.”

At The Times, there is an incredible amount of business value placed on the aesthetics of the site. There have also been past incidents where CSS bugs have caused rollbacks.

With this in mind, traditional `automated` functional testing with selenium is ineffective at finding these defects – in addition to being slow and high maintenance. To add to the problem, The Times release far too often to make manual verification possible.

This is where visual regression tools shine through. Their sole purpose is to give confidence that the applications under test are visually correct.

So what is visual regression?

There are 3 main parts to understanding how visual regression works.

  1. Baseline

A set of images that define how the application should look, based on previous recordings.

  2. Latest

A set of images detailing how the application currently looks.

  3. The comparison

Once we have both the baseline and the latest, we are able to perform a comparison between how the application is supposed to look and how it looks now. If there are differences, the build will fail, and you will need to approve the changes to update the baseline images once more.
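The comparison step itself is conceptually simple. Here is a minimal sketch of the mechanics using pixelmatch and pngjs – illustrating the general idea rather than AyeSpy’s internals; file paths and threshold are placeholders:

```typescript
import fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

// Compare the latest screenshot against the baseline; any differing
// pixels fail the build until the new baseline is approved.
const baseline = PNG.sync.read(fs.readFileSync('baseline/home.png'));
const latest = PNG.sync.read(fs.readFileSync('latest/home.png'));
const diff = new PNG({ width: baseline.width, height: baseline.height });

const mismatchedPixels = pixelmatch(
  baseline.data,
  latest.data,
  diff.data,
  baseline.width,
  baseline.height,
  { threshold: 0.1 }, // placeholder sensitivity
);

fs.writeFileSync('diff/home.png', PNG.sync.write(diff)); // visual diff for review
process.exit(mismatchedPixels > 0 ? 1 : 0); // non-zero exit fails the build
```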

We have used a number of visual regression tools within the Times Tooling team at News UK and each proved to have limitations.

A core testing principle that we believe at ECS Digital is you need to be testing as close to production/end users as possible.

Headless browsers such as PhantomJS may give you a small performance increase when executing tests, but these browsers are far from how your end users will be interacting with the application under test.

Our first visual regression tool only supported headless browsers. We had several instances where it allowed bugs through that only manifested in Firefox and not PhantomJS. This loophole was the reason we decided to move on.

The second tool we tried was what we believed to be the industry’s open source favourite. After battling with it for well over a week we could not get it running stably or in under 30 minutes, which as a developer is an unacceptable feedback loop.

As you can imagine, these inefficiencies didn’t sit well with the Times Tooling team and we decided to address the problem head-on and create our own “hand-rolled” visual regression tool.

Based on our previous painful visual regression experience, we were determined to build a tool that was:

  • Super performant
  • Lightweight, and
  • Made it easy to interpret results

A proof of concept was put together before fully refining the capabilities of the tool. We then waited for priorities to allow before creating ‘AyeSpy’ in one sprint.

Four months down the line and AyeSpy has been successfully implemented, gaining approval from our clients and users on GitHub. Whilst the Times Tooling Team engineered AyeSpy, The Sun and data teams within News UK have since adopted it and it’s not hard to see why – AyeSpy takes less than 90 seconds to run 44 webpage comparisons. Other benefits include:

  • Only requires a .json config file to run
  • Maintenance is low
  • Able to explicitly wait for elements before the screenshot is taken
  • Can interact with the DOM before the screenshot
  • Drop cookies into the browser
  • Remove dynamic elements from the DOM
  • Tests are farmed out to a containerised selenium grid for distributed testing and a consistent state

When deciding to use visual regression, we have found in our experience that the tool works best on reasonably static sites that do not require a long user journey to be completed before the screenshot. For example, clicking through a checkout journey would introduce a high level of risk and take away value from the tool. Ideally, you want to load the page, remove all dynamic elements, and then snapshot.

Where can you find the tool?

ECS Digital love to find value for our clients and give it back to the wider community, which is why we make these tools available on open source platforms such as GitHub and NPM.

I will also be hosting a hands-on session and demonstration of AyeSpy at an upcoming DevOps Playground on the 29th of November. Come along to learn more about what the AyeSpy has to offer!

Matt Lowry
Tooling and efficiency teams

ECS Digital has been operating in the DevOps space for over 20 years and this success is mostly down to our focus on self-improvement and innovating for the benefit of our clients. Our recent acquisition of QAWorks was largely initiated to support the continued efforts in the digital transformation sphere, focusing primarily on strengthening our expertise in software quality and delivery.

What we’ve seen since this coming-together is a greater offering for our clients – not to mention an increase in the number of smart-minds looking to evolve our existing tools and processes. This ‘fresh blood’ has a mix of experience – with some primarily working within big teams in large organisations where the division between development and test was not aligned to delivering business value.

As has been seen from successful adoptions of modern software delivery techniques, shifting left to a more agile methodology results in your development and operation teams working for each other. It also offers them more autonomy – resulting in smaller wait times and reduced feedback loops.

But what happens when you begin to scale this model within larger organisations?

For ECS Digital, the first step of any digital transformation is enabling you to successfully integrate an agile process. Part of this is helping you communicate and adopt a new culture, as well as introducing an engineering mindset to test – this can involve introducing SDETs to your development teams to ensure any feedback or strategies can be put in place quicker. Quicker feedback means improved lead-time and higher quality applications.

Once you reach a level of confidence in your new process and are comfortable with the effectiveness of your teams and automated tools, our consultants begin to look at reusability – taking an in-depth view of your processes and offering recommendations of how to take them to the next level.

Focused primarily on larger organisations, our team has developed a quality assurance strategy that supports businesses with around 25 or more people working within the software delivery structure. Once you reach this magic number, an opportunity presents itself.

This opportunity looks to do the following:

  • Reduce duplicated efforts,
  • Improve efficiencies of individuals and teams,
  • Recognise issues that are affecting more than one team and create a reusable solution,
  • Remove the risk of gatekeeping behaviour by breaking down the silos and cultivating a culture of collaboration between teams 

Internally, this opportunity is known as introducing a ‘tooling and efficiency team’ (official name to be confirmed). Not only are these teams proving successful in current client work, they are a logical next step for those wishing to maximise their agile business model.

In short, this team consists of engineers with a broad skillset and sits within your business permanently. They are responsible for keeping a comprehensive eye over all your development and operation processes and specifically look for areas that are underperforming or no longer fit for purpose. Once identified, they create reusable solutions to combat individual and company-wide inefficiencies.

But if your agile methodology is already delivering on all your performance targets, why is this new team important?

Performance

By analogy, if you have a one-man operation and you invite an additional person to join the team, you are doubling your effectiveness. If work demands require a third or fourth member of the team, you are again increasing your efficiency – but as you scale, this maths only works up to a certain number. It is very much a balancing act, but what we’ve found whilst working with clients is that once you reach a large development team of around 25, each new member starts to become less efficient.

By creating a one-stop-shop in the form of a tooling and efficiency team who can afford to spend the time looking for and creating tools to keep your business adapting, you are maximising ROI because you are making the most of the staff you have. This can be seen in our recent client work with NewsUK.

A recurring long-term objective for our clients is to increase the speed of delivery whilst maintaining quality. Quality assurance and automated testing are essential to helping them achieve this – and is the reason why a tooling and efficiency team is working so well. We work alongside our clients’ principal engineers to maintain a clear direction for this new team to move towards, measuring against agreed targets periodically. The benefits have so far been a strengthening in DevOps capabilities, as well as a strong improvement in development efficiencies and overall quality.

“ECS Digital consistently provide intelligent, hard-working and professional individuals who always manage to work well together. Kouros provides a strong organisational and delivery focused attitude that resonates through the team – who have made some invaluable and original open source products that will benefit us and others in the future. They are more than simply a QA team, but can-do developers who aren’t afraid of a challenge and putting the client first”

Craig Bilner, Principal Developer at NewsUK.

The transition to this efficiency model requires a level of collaborative consultancy to help oversee the adoption of the new team and integrate them with others already in the structure. ECS Digital engineers have the capability to enable adoption by working alongside your current team or by operating autonomously / self-managed within your business.

Their ability to constantly inspect, improve and adapt aligns with the very nature of agile methodologies, making it an ideal structural change to invest in long term.

Whilst our tooling and efficiency teams are an additional offering to our DevOps consultancy, they are a necessary next step for those wishing to take their agile business model to the next level.

ECS Digital is an experienced digital transformation consultancy that helps clients deliver better products faster through the adoption of modern software delivery methods. Our recent acquisition of the UK’s leading technical software testing organisation, QAWorks, means we’re well placed to offer expert advice about how tooling and efficiency teams can bolster your digital environments.

If you’d like to know more about how the tooling and efficiency approach could benefit your business, drop us a message here.

Kouros Aliabadi