Top 5 takeaways from Nordic Testing Days 2019

Nordic Testing Days is the leading testing conference in the Nordic region, held in Tallinn, Estonia. This year, our very own Continuous Testing & Delivery Consultant, Ali Hill, was one of the speakers. Here’s his take on the experience:

The conference took place on May 30th–31st 2019 (and May 29th if you took part in the tutorial day). It was a truly great experience to both speak and attend over the two days. Here’s what happened:

Speaking Experience

Having arrived in Estonia on the afternoon of May 29th – and having spent a couple of hours exploring the beautiful city of Tallinn – it was time for the Speakers’ Dinner. The evening started with drinks by the sea before we were surprised with dinner on a boat as we sailed up and down Tallinn’s coastline.

This dinner really characterised how well Nordic Testing Days looks after its speakers. If you are accepted to speak, your travel costs and two nights’ accommodation are covered by the conference. The organisers and volunteers were great at replying to any questions I had in the lead-up to the event and I truly felt valued as a speaker.

The stage, mic and presentation equipment all made life very easy and the attendees were engaged and asked some really thought-provoking questions.

My talk was titled ‘Let’s Share the Testing’ and focused on a journey I went on with my previous Agile team after we identified that testing was a bottleneck in our attempts to continuously deliver software. I discussed how we removed the testing bottleneck by collaborating on the testing effort, and how sharing testing knowledge improved productivity and communication within the team. I also shared my ideas on how to involve non-test specialists in testing activities, in the hope they help others in their own projects!

Conference Sessions & Key Lessons

The conference format provided plenty of variety. Each day started and ended with a keynote attended by all delegates. In between the keynotes were two parallel talk tracks or a longer workshop.

As the name suggests, Nordic Testing Days is primarily a testing conference attended by software testers, but not every session focused on testing, and presenters and attendees came from a whole range of disciplines.

Key Lessons from the Conference

Below are five key sessions and takeaways from across the two days of the conference, in no particular order:

  1. Don’t Take It Personally

One of the most valuable sessions I attended was delivered by Bailey Hanna, whose aptly named workshop ‘Don’t Take It Personally’ taught me how to turn potentially negative comments into a positive conversation. The workshop covered a number of linguistic behaviours that a person acting negatively may exhibit. We practiced in groups, exhibiting these negative behaviours and turning the conversation into a positive one. As well as teaching me how to handle these situations, it also led me to reflect on how I should provide feedback to colleagues.

  2. Ask Questions About Accessibility

Ady Stokes’ presentation on accessibility was really interesting. Accessibility is, unfortunately, not an area I have spent much time focusing on in my career. Ady dispelled the myth that developing with accessibility in mind only benefits those with disabilities. He showed us a graphic, part of an Inclusive Design blog, which highlights the difference between permanent, temporary and situational accessibility issues.

My main takeaway was that it’s important for all members of the development team to ask questions about accessibility, and get the conversation started in their workplace.

  3. STRIDE, Elevation of Privilege, Threat Modelling…

Gwen Diagram’s energetic presentation – ‘Security by Stealth’ – was a late addition to the conference schedule, but an extremely valuable one. It covered two main themes:

  • How to organise well-attended workshops in your workplace (hint: provide food!)
  • The tools Gwen used to get her teams interested in developing with security in mind.

Gwen’s workshops used models such as STRIDE, activities such as Elevation of Privilege and Threat Modelling, and tools such as OWASP Juice Shop and ZAP.

Like accessibility, security is an area I haven’t explored in any great depth. All of the terms I’ve used above are areas I’m now interested in learning more about.

  4. Explain Exploratory Testing

Alex Schladebeck kicked off day two of the conference with an excellent keynote called ‘Why Should Exploratory Testing Even be the Subject of a Keynote?’. It’s an interesting title, and Alex explained why she believes exploratory testing is important (potentially the most important activity testers perform), and why testers need to be better at explaining what we’re doing when we explore our products.

Alex stated that testers often talk about ‘intuition’ and ‘experience’ when it comes to finding bugs, but this does little to explain what we are doing to developers or other members of our team. My main takeaway from this talk was that I need to pair and mob more with my team and explain what I’m looking for when I’m exploring the system under test.

  5. Cynefin

Towards the end of the second day (immediately after my talk, in fact) Lucian Adrian presented ‘Choose your Test Approach with Cynefin Help’. Cynefin is something I’ve seen come up quite frequently on Twitter and in blogs, but not something I’m overly familiar with. Lucian did a great job of introducing Cynefin as a sense-making framework consisting of five domains – obvious, complicated, complex, chaotic and disorder – and explaining how he uses this framework to create his test strategies.

I still find Cynefin difficult to fully understand, but it’s something I want to explore more, and I’ll definitely be watching Lucian’s talk back when the recordings are made available.

Post-Conference Activities

As well as a dinner for speakers, there was also a dinner and party after the first day for all speakers and attendees.

An area of the venue was transformed into a dancefloor, but there were also lightning talks and PowerPoint Karaoke for those who preferred a quieter night. If you’re ever at a conference that does PowerPoint Karaoke, I’d highly recommend attending. It’s extremely entertaining watching brave volunteers try to tie random slides into a randomly assigned topic.

After 10pm, those who wanted to continue the party could head into Tallinn’s Old Town until the small hours.

Venue

I couldn’t write about Nordic Testing Days without mentioning Kultuurikatel, the venue itself…

It was a power plant in its previous life but has been repurposed into an event centre. It was the perfect size for the conference’s 500+ attendees and only a five-minute walk from Tallinn’s Old Town.

There were two fantastic presentation rooms and a number of smaller areas for workshops and tutorials. There was also plenty of space to network during the breaks and a nice area outside to sit in the sun.

I think any conference would struggle to get a venue as great as this one.

Concluding thoughts

Overall, I thoroughly enjoyed my time at Nordic Testing Days and highly recommend the event for anyone in the testing and development space. It was great to meet so many other testers from around the world and discuss the challenges we are facing and the solutions we have created.

I’ve got plenty to reflect on over the coming weeks and I look forward to applying some of what I’ve learned in my day to day work.

Keep an eye on the Nordic Testing Days YouTube channel where the recordings of all talks will shortly be made available.

Ali Hill

Take your testing to the Cloud

There are lots of reasons why companies choose to make the transition to the Cloud, but it’s safe to say that improving the speed and accuracy of your testing is rarely one of them. In fact, the benefits to your testing after moving to the Cloud often go unrealised. This is not because getting to those benefits is hard (it isn’t), or because there are clear reasons for keeping testing on-premise (we would argue that in most circumstances there really aren’t any), but simply because the focus tends to be elsewhere.

In this piece we are going to play out some of the key benefits and address some of the misconceptions around potential barriers.

The benefits:

  1. It’s easy to set up and provision testing environments

Traditionally, getting an environment up and running takes days, potentially weeks. It is intensive in both time and resource (which equals money), and in some instances test environments may not be set up at all because the cost is seen as too high. Take, for example, testing your code changes once you have raised a pull request. Creating a test environment for your pull request can significantly speed up delivery and feedback, but on-premise it is highly unlikely anyone would spend the time setting one up. Testing would be done locally, with issues often missed, creating further problems down the line.

In the Cloud, an environment defined with Infrastructure as Code tools like CloudFormation or Terraform can go up in a matter of minutes. You can create a new test environment as you need it and simply tear it down when you are done.
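
To make this concrete, here is a minimal sketch of an ephemeral pull-request environment defined as code. The article doesn’t prescribe a tool, so this uses the AWS CDK in TypeScript (which synthesizes to CloudFormation); the stack contents and the PR_NUMBER variable are illustrative.

```typescript
// A minimal sketch, assuming AWS CDK v2: one disposable stack per pull
// request, created when the PR opens and destroyed when it closes.
import { App, Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

class PullRequestEnvStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // Whatever the app needs to run its tests; a bucket stands in here.
    new Bucket(this, 'TestAssets', {
      removalPolicy: RemovalPolicy.DESTROY, // tear down with the stack
      autoDeleteObjects: true,
    });
  }
}

const app = new App();
// e.g. `cdk deploy` with PR_NUMBER=123 on PR open, `cdk destroy` on close
new PullRequestEnvStack(app, `pr-env-${process.env.PR_NUMBER ?? 'local'}`);
```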

  2. Consistency

By using tools like Docker, you can get a far greater level of consistency between environments. This means you can be more confident you are testing like for like. On-premise there will almost always be small, but potentially important, discrepancies between environments.

  3. Data creation and manipulation

Poor quality data is always an issue when it comes to testing. The worse the data, the fewer issues you will be able to uncover. It is also hard to know whether the test data you have is any good until the testing is underway – and by then it’s too late. Tools like Docker can again be very useful, because the quality of data will be far more consistent.
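
As a sketch of that idea (the article names Docker generally; the Testcontainers library, image tag and seed data below are our own choices), each test run gets an identical, disposable database seeded with the same known data:

```typescript
// A minimal sketch using Testcontainers for Node and the pg client.
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { Client } from 'pg';

async function startSeededDb(): Promise<StartedPostgreSqlContainer> {
  // Same image, same schema, same rows on every run = consistent test data.
  const container = await new PostgreSqlContainer('postgres:16').start();
  const client = new Client({ connectionString: container.getConnectionUri() });
  await client.connect();
  await client.query('CREATE TABLE articles (id SERIAL PRIMARY KEY, headline TEXT NOT NULL)');
  await client.query("INSERT INTO articles (headline) VALUES ('First article'), ('Second article')");
  await client.end();
  return container; // tests read container.getConnectionUri(), then stop() it
}
```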

  4. Scaling

In an on-premise environment, running tests in parallel rather than sequentially is often close to impossible – there is rarely enough infrastructure to go around. This is where the Cloud comes in.

Because creating and provisioning environments is so much easier in the Cloud, parallel testing becomes much more doable. You can run the same test across multiple scenarios or run multiple test cases at the same time. Running tests in parallel not only saves a considerable amount of time, it also makes it far easier to validate different permutations, such as browser types and versions.
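
As one concrete (and entirely optional) way of doing this, a test runner such as Playwright can express browser permutations as parallel projects; the browser list and worker count below are illustrative.

```typescript
// playwright.config.ts – a sketch: the same suite runs against three
// browsers, with spec files executing in parallel across workers.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  workers: 4, // degree of parallelism
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});
```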

  5. Faster time to market with reduced risk

Not only does the Cloud enable you to move through test cycles faster, it also enables you to do it with less risk. Tools like Docker and Heroku enable you to release in much smaller chunks which means it is far easier, and faster, to deal with points of failure and then move forward. The fully automated release process also means less manual interference, which in itself reduces risk further.

The barriers:

While the benefits of testing in the Cloud are clear, there are still some concerns / perceived barriers that might hold people back:

  1. Cost

The cost of Cloud is something that is at the top of a lot of people’s minds. Some of those that have already made the transition are finding the cost is substantially higher than they imagined. A lot of this is down to how it is managed, and the same applies to the testing piece. Because environments can be spun up so quickly, there is a danger that they will just proliferate, and numbers will get out of hand. It is important that there is clear governance and process in place to ensure environments are taken down when they are no longer needed.

  2. Security

A few years ago, security was considered the biggest barrier to moving to the Cloud full stop. These days, most will admit that security is often better in the Cloud than on-premise, but that is not to say that security problems don’t exist. In essence, the same rules apply in both environments. Do a good job and follow the right processes and security shouldn’t be an issue. One clear benefit, however, is that in the Cloud environment, it is much easier to test just how good the security is.

  3. Disaster recovery

There have been a number of high-profile instances of Google and Amazon outages affecting customers. The most high-profile of these caused data to be rerouted to China – which is tricky, as Google doesn’t do business there!

The fact is, although these outages are high profile, they are usually pretty low impact, and actually far less likely than outages on-premise. And as with security, one major benefit of the Cloud is that it gives you the ability to test your disaster recovery far more effectively.

It is pretty clear that for almost everyone, moving their testing into the Cloud will deliver significant benefits. Although it might not be the thing that is driving Cloud adoption, it is certainly a substantial value add.

To find out how you can successfully move your testing to the Cloud, talk to a member of the ECS Digital team to discuss how you can start reaping the benefits mentioned above.

Kouros Aliabadi

Behaviour Driven Development in a nutshell

If you’re new to Behaviour Driven Development (BDD) and don’t understand the jargon surrounding it, this is the article for you. This article will explain the most fundamental concepts as well as the basic implementation of this agile methodology.

Let’s start by clearing up the misconceptions. BDD is not limited to test automation, and it is not about tools. Fundamentally, BDD is about great communication: an application is specified and designed by describing in detail how it should behave to an outside observer. In other words, BDD is about working in collaboration to achieve a shared level of understanding, where everyone is on the same page. That’s the basic understanding you need. Easy enough.

So, what does this ‘great communication’ mean for software development?

Great communication means:

  • A usable product first time round, which allows you to get your product to market faster
  • A lower defect rate and higher overall quality
  • A workflow that allows for rapid change to your software
  • A very efficient and highly productive team

How is it done?

Meet our key stakeholders/teams:

Developers • Testers/QA • Project Manager/Scrum Master • Product Owners/BA

To illustrate what happens when you implement BDD, here are the before and after scenarios:

Before implementing BDD

Traditionally, software is designed in a waterfall approach where each stage happens in isolation and is then passed along to the next team. Think conveyor belt factory style:

  1. First the Business Analyst defines requirements
  2. Then the development team works on these requirements and sends the result for testing
  3. Then testing discovers lots of bugs and sends it back to the development team
  4. Things are miscommunicated in transit so repeat steps 2 and 3 back and forth until you run out of time or budget
  5. Release software

The problem here is that everyone is working in isolation, interpreting the requirements differently along the way. By the time the code is handed over for release, resources are drained and people are frustrated by issues that could have been avoided had everyone been working together from the start.

After implementing BDD

  1. Business and PO/BA have a set of requirements ready to implement
  2. BA, Developers & QA work collaboratively to refine these requirements by defining the behaviour of the software together. Thinking from the point of view of the user, they create detailed user stories. Throughout this process they address the business value of each user story and potential issues relating to QA that may crop up
  3. Each story is given an estimate of how complex it would be to implement
  4. The whole team now has a strong shared understanding of the behaviour of the software and when it will be considered complete
  5. Begin Sprint: Developers & QA then work together or in parallel to produce a product that is ready for release

This process saves time and money and is incredibly efficient. The core element of this efficiency is the team’s clear understanding of scope and what the fundamental features and behaviours required are. Because of the collaborative nature of BDD, issues are brought to light that otherwise would be an afterthought. For example, how a feature might behave differently on mobile or how a feature might deal with a large number of users. These are considerations that should be addressed from the outset.

What is the best way to implement BDD?

Just because people are in the same room or present at the same meeting doesn’t mean they will collaborate effectively. Each of the stakeholders plays a crucial role, and some teams or individuals may need to change their way of doing things to make sure that collaboration actually happens. The image below outlines the key deliverables for everyone involved when adopting BDD:

[Image: key deliverables for each role when adopting BDD]

An example of BDD in practice

BDD is a risk-based approach to software development; it mitigates the likelihood of running into issues at crucial times. Teams at ECS Digital have been using the BDD process effectively, including when implementing a website feature for a popular media client. The client wanted a swipe feature so that mobile users could swipe to see different articles and move through the sections easily. Everyone collaborated from the initial stages, and the team was able to ensure high quality on the website throughout the implementation.

With a clear and shared definition of what the completed website would look like, they were able to innovate further to mitigate the risk involved. They decided that during times of low traffic they would send users to the new website with the new swipe feature and gather feedback, while during riskier times of high traffic users would see the usual website without the new feature. This allowed the team to ensure that when they made the feature a permanent part of the entire website, they were taking as little risk as possible.

Had this team not been utilising BDD techniques – defining the website’s behaviour in detail and involving each team in the development of requirements – they might have released the feature without such precautionary measures, or run into many issues when approaching the release date.
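
To make the collaboration tangible, here is a hedged sketch of how a behaviour agreed by the whole team might become an executable specification with cucumber-js; the feature wording and step logic are illustrative, not the client’s real scenario.

```typescript
// Feature (the plain-language spec everyone agreed on):
//   Scenario: Mobile user swipes to the next article
//     Given a mobile user is reading an article
//     When they swipe left
//     Then the next article in the section is displayed
import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert';

// Stand-in for the real application driver.
const app = { currentArticle: 1, swipeLeft() { this.currentArticle += 1; } };

Given('a mobile user is reading an article', function () {
  app.currentArticle = 1;
});

When('they swipe left', function () {
  app.swipeLeft();
});

Then('the next article in the section is displayed', function () {
  assert.strictEqual(app.currentArticle, 2);
});
```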

If you’re interested in understanding more about BDD and delving into some of the jargon surrounding it – “gherkin syntax”, “the three amigos”, “impact mapping” & “living documentation” – read our previous article here: Behaviour Driven Development: It’s More Than Just Testing

Kouros Aliabadi

Helping Developers Become Testers

As software development practices evolve, the line between developer and tester has become increasingly blurred. As testers, we are now expected to know how to set up test automation frameworks, code different types of test (e.g. integration, functional, performance) and even understand and contribute to the build and deploy pipeline.

Traditionally, there has always been a clear distinction between development and testing. In older software lifecycle models such as Waterfall and V-Model, testing only starts when development work is finished, with few if any automated tests put in place.

Over the years, companies started to adopt a more collaborative and iterative way of working where the testing process is often championed to start as early as the requirement gathering stage.

Even though development practices have evolved throughout the years, there is still, in my experience, a misconception that developers cannot write tests. This is why specialist roles such as SDET (Software Development Engineer in Test) were created – to bridge the gap between developers and manual testers. Developers are more than capable of writing tests – they already write most of the unit tests for their own code. So then… why do some developers not test?

From the different clients that I have worked with, I have observed the following reasons why this might be the case:

1. No one asks them to test

If management don’t push for them to do this, they will assume that automating tests is not part of their responsibility. This initiative has to come from the top. Test architects and SDETs who feel developers should help out with test automation will not be able to convince them on their own.

2. They don’t want to test

Most developers still assume that features should be automated solely by testers. Once their ticket passes peer review, they believe that their work is finished. Some developers hate writing end to end tests because they believe the process is slow and flaky. Those developers who have tried to help out find tools such as Selenium difficult to set up and work with.

3. They lack guidance on looking at their features from an end-to-end perspective

Most developers work on single components, so they can lack an understanding of how their components will integrate with the rest of the system. Also, the requirements provided to them tend to cover only the positive scenarios, leaving negative scenarios missed or neglected.

How do we then help our fellow developers become testers?

How do we bridge this gap and ensure that we maximise everyone’s potential?

1. Get support from management

Support needs to come from the top. Make sure that you communicate what the business benefits are if developers help the testers. Quality should be owned by everyone and not just by the testers.

2. Regular knowledge sharing with the business

Developers should be told how the application they’re working on is used by the business and its customers. A simple yet effective idea is to have regular knowledge-sharing sessions with the business. Another good idea is to have these sessions documented on Confluence or something similar.

3. Documentation on how to contribute to the automation framework

There should be clear guidelines on how developers can help contribute to writing tests. If someone has not used tools like Cucumber and Selenium, make sure a “how to” guide is created.

4. Introduce pair programming when writing tests

Pair programming is a common way of working amongst developers and this can also be used when writing tests. Experienced SDETs need to pair with developers to share this knowledge.

5. Be a teacher and educate

Point them to resources that will help them with writing tests and show them best practices. Guide them on how to do it but don’t write their tests on their behalf. Peer review their code. Be patient.

6. Modify your process to include testing as part of a developer’s workflow

Before moving your ticket to done, make it a habit to include automated tests. This is especially useful for new features. Rather than writing the tests after the feature is deployed, write them during the development stage.

7. Include automation tests as part of the CI/CD pipeline

The more diverse tests that are added to the pipeline, the more visible the results will be to everyone. Utilize an effective test reporting dashboard so results of all the test runs can be easily displayed. By having these tests in the pipeline, developers will have visibility if they break existing features.

8. Evaluate testing tools effectively

To encourage developers to write tests, the testing tool should be somewhat familiar to them. If you work on a team where JavaScript is the language of choice, there is no point trying to implement the automation framework in Ruby or Python. Speak the same language as the developers. If you work in a company where you’re tasked with setting up the automation framework, ask everyone’s opinion on which tool to use. More and more testing tools are emerging these days, such as Cypress, which aims to provide an easy onboarding process for developers to start testing.
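
For a flavour of why such tools lower the barrier, here is a minimal sketch of a Cypress test a developer could write alongside their feature; the URL, selectors and message are placeholders.

```typescript
// A negative scenario – exactly the kind that tends to be neglected.
describe('login form', () => {
  it('shows an error for invalid credentials', () => {
    cy.visit('https://example.com/login');
    cy.get('[data-test=username]').type('user@example.com');
    cy.get('[data-test=password]').type('wrong-password');
    cy.get('[data-test=submit]').click();
    cy.contains('Invalid username or password'); // fails if the message never appears
  });
});
```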

So, what happens when developers become testers?

I’ve seen the benefits first hand. At the client where I’m based, we’ve introduced this approach where developers write the automated tests for some of the new features. Not only are we releasing new features quickly, but knowledge sharing and collaboration between developers and QAs is better than ever before.

Developers should know about testing the same way automation testers know about coding. By getting developers involved in the testing process, we begin to utilise everyone’s knowledge and potential, as well as avoid scenarios where bottlenecks occur.

If you’d like more specific advice around how to help your developers become testers, you can get in touch with us here.

Marie Cruz

Solve your test data woes with GraphQL & schema introspection

While highly technical in places, this article goes through some solutions ECS Digital has been able to provide for a client to improve testing strategies and reduce costs. Although not for everyone, we hope that sharing our technical expertise in this area can benefit the community at large.

Tech stack: React | GraphQL | Apollo GraphQL | Javascript | Typescript | Cypress

One of our clients has been going through a change period where we are re-platforming their whole tech stack. As part of this process, we felt that now was a really great time to address an underlying issue that we have experienced with the old tech stack.

That problem is test data.

When I speak of test data, this applies to not only the unit and integration tests, but also our functional UI tests.

We had two fundamental problems we wanted to solve:

Problem one

It was the responsibility of each developer to test their component with any unstructured data they saw fit. If a developer creates a component that expects data of shape A and creates a test with data of shape A, the test will pass. If, however, over time the real data passed to the component changes to shape B, we have no idea whether our component will still work until quite late in the development process, which introduces a long feedback loop.

Problem two

Our functional UI tests ran on a long-living full stack. There was a known data set that we could reset to, all stored as JSON in its own repository – completely separated from the rest of the stack and its tests. To update the fixture data on the full stack you would need to understand which test cases were already using the JSON, then manually change the JSON, create a PR, get it reviewed and merged, and then run the mechanism to reset the data on the long-living stack.

At the start of the project this fixture data was very useful. It allowed our functional UI tests to be robust and repeatable. As a result, when all our tests passed we had high confidence our site was releasable.

Unfortunately, over time and as the software naturally adapted, our fixture data became harder to update and maintain. Some parts were updated inconsistently, and we had no clarity on which tests were tied to which fixture data and shouldn’t be updated. Eventually our fixture data became unmaintainable, or updating it would break other tests.

We spent a lot of time thinking about how to solve both of these problems, and after quite some time and several approaches we finally achieved something that we felt was clean and maintainable.

Solution

Like a lot of the industry we are migrating to a GraphQL back end.

This opened an interesting opportunity as GraphQL uses types and fields to develop a `query language` for your API. You are only ever able to query for fields that exist on their corresponding types, otherwise GraphQL will complain.

GraphQL also supports something called schema introspection, which provides a mechanism to pull down a schema for any enabled GraphQL server. This can be useful to see what queries your API will support.

https://graphql.org/learn/introspection/
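
As a sketch of what this looks like in practice (using the helpers that ship with graphql-js, and assuming a Node runtime with a global fetch; the endpoint is a placeholder):

```typescript
import { getIntrospectionQuery, buildClientSchema, printSchema } from 'graphql';

// POST the standard introspection query and rebuild a readable SDL schema.
async function fetchSchema(endpoint: string): Promise<string> {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: getIntrospectionQuery() }),
  });
  const { data } = await res.json();
  return printSchema(buildClientSchema(data));
}

// fetchSchema('https://example.com/graphql').then(console.log);
```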

Another tool, GraphQL Code Generator, can take a GraphQL schema as an input and output all the type definitions of your GraphQL schema as a TypeScript file, along with any type descriptions present on your schema.

https://github.com/dotansimha/graphql-code-generator

Problem One Solved

Now that we had the capability to translate our production GraphQL server types into TypeScript definitions, we were satisfied that we could start to build a fixture generator package matching our production GraphQL server. A key part of building this package was to provide a consistent API for all clients of the fixture generator package. We also ensured that whenever the logic for building fixture data started to become complicated, unit and integration tests were baked in.

Once the generator package was in place, the workflow was as follows: any time a client of the fixture generator package runs, schema introspection and type-file generation happen first as a precursor. The whole process takes around a second, and once it has completed, the fixture generator TypeScript package builds. If the schema has changed and the fixtures no longer adhere to the types, the build fails and you are alerted straight away.

This provides huge benefit to our tests as it now means that our tests ask for the data they require. The complexity around managing test data is no longer the responsibility of the tests. We also know that the data will be correct as per the production schema even over time. Finally, if the types do change, we only need to fix it in one place for all our tests to be updated.

You can see an example of how you would use the fixture generator below.
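
(The generator’s real API lives in the repositories linked at the end of this post; the import and field names in this sketch are illustrative.)

```typescript
// A sketch of the workflow: the test asks for exactly the data it
// needs, and the generated TypeScript types fail the build if the
// production schema moves on.
import { buildArticle } from 'fixture-generator'; // hypothetical import path

const article = buildArticle({
  headline: 'Swipe feature test article',
  // This test wants exactly four related articles.
  relatedArticles: Array.from({ length: 4 }, (_, i) =>
    buildArticle({ headline: `Related article ${i + 1}` })
  ),
});
```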

Problem Two Solved

The fixture generator brought us closer to solving problem two, but we still had no way to run our functional UI tests and somehow pass our fixture generator data to our front end. The front end was still querying the long living GraphQL environment.

Apollo GraphQL provides some powerful tools around stubbing, whereby you pass your GraphQL schema, along with overrides for type definitions, into a resolver map. Once you have defined what data you want to return when you query a type, you can start a local GraphQL server.

https://www.apollographql.com/docs/apollo-server/features/mocking.html
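
A hedged sketch of that setup (apollo-server’s mocking API; the schema and resolver overrides are illustrative):

```typescript
import { ApolloServer, gql } from 'apollo-server';

const typeDefs = gql`
  type Article { headline: String! }
  type Query { article: Article }
`;

// Defaults cover every type; tests override only what they care about.
const mocks = {
  Article: () => ({ headline: 'Stubbed headline from the test' }),
};

new ApolloServer({ typeDefs, mocks }).listen({ port: 4000 }).then(({ url }) => {
  console.log(`Mock GraphQL server ready at ${url}`); // point the front end here
});
```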

Once running we could point our front end to our local stubbed GraphQL server.

The final piece was to have our tests once again define the data they required and then spin up the local GraphQL server.

We have also been using Cypress as our new functional UI test tool.

Cypress as a tool is groundbreaking and is revolutionising UI tests. It runs in the same run loop as your application in the browser and provides new features for UI testing, such as playback mode. I’d really recommend looking at it if you haven’t already.

In our tests we run a Cypress task to start up our short living mock GraphQL server and provide the fixture data that we want GraphQL to run with straight from the test.

Once again this means that our tests explicitly ask for data. Previously, if we wanted a test to work with four related articles, we would have had to edit a separate repository, try to understand whether the data we wanted to edit was already being used by other tests, create a pull request, get it approved and merged, and then run the reset-data mechanism.

Now it’s as simple as updating a variable inside the test. It is clear what data the test needs to run, and the previous feedback loop is practically eliminated.
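
In sketch form (assuming a task registered in the Cypress plugins file to boot the mock server; the task name, route and selector are illustrative):

```typescript
describe('related articles', () => {
  it('renders the four related articles the test asked for', () => {
    // Hand the fixture request straight to the short-lived mock GraphQL server.
    cy.task('startMockGraphQL', { relatedArticleCount: 4 });
    cy.visit('/article/some-slug');
    cy.get('[data-test=related-article]').should('have.length', 4);
  });
});
```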

If you want to understand at a deeper level how this works, it’s all open source.

Take a look at the repositories below:

https://github.com/newsuk/times-components/tree/master/packages/fixture-generator

https://github.com/newsuk/times-components/tree/master/packages/mock-tpa-server

Or alternatively, find out how ECS Digital can help improve your test strategy by contacting us.

Matt Lowry

Xin’s Story as a QA and Continuous Delivery Consultant

My name is Xin Wang and I am a QA and Continuous Delivery Consultant at ECS Digital. I recently published a blog explaining how I went from delivering presentations on Fashion Week 2017 fashion trends to writing functional tests as a software developer engineer.

Working in a male-dominated industry has been very different to what I was used to – the approaches that some male engineers take are sometimes very different to the approach a female engineer would take. But these perspectives combined give you a much more valuable overview, which is why I really enjoy working on coding challenges with my colleagues.

Take a look at my video if you are interested in understanding why I switched my career around and how I am continuing with my journey as a software developer engineer.

Xin Wang

Day in the Life as a Technical Test Engineer

Hi there, my name is Marie Cruz, and I’m a Senior Technical Test Engineer at ECS Digital. I’m responsible for providing test services to various clients with the focus of implementing BDD processes. I recently published a blog explaining how I balance being a mother and a woman in technology.

Having a family and an active career in tech, people tend to ask me how I manage to keep up with both. My answer is making sure you understand what’s important, but also ensuring that you are happy with the choices that you are making.

If you’ve ever wondered how a woman can handle both a career in tech and a family life, feel free to take a look at my “Day in the Life as a Test Engineer” video. I hope it inspires you to take the leap into technology too!

Marie Cruz

Is your master branch production ready?

Delivering software in a continuous delivery capacity is something that nearly every project strives for. The problem is, not many projects are able to achieve continuous delivery because they don’t have confidence in their application’s quality, their build pipelines, their branching strategy or, worst case, all of them.

A good indicator as to whether you fall into one of the above is to ask yourself: `can I confidently release the master branch right now?`

If your answer is no, then how do we start to break down and resolve these problems?

Building confidence in quality

A recent project I have been working on fell into a few of the above categories. Nearly all of the testing was done on a deployment to a long-living environment, after a merge commit to master, alongside a lot of duplicated work throughout the pipeline.

The test strategy shown above was for a simple front-end application that reads data from an external API.

To start, we identified areas of our application that we knew were unloved, or treacherous to develop. Once identified, we put in place appropriate test automation. When writing test automation it is so important that your tests are robust, fast and deterministic.

We pushed as much of our UI automation down into the application as possible. Ideally you want your application to adhere to the testing pyramid principles. Checking that elements have particular classes with tools such as Selenium is both time-costly and of no value. There are better, more appropriate tools for this.
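
For example, a rendered-state check can live in a fast component test instead of a browser test; here is a minimal sketch with React Testing Library under Jest (the component and markup are illustrative):

```typescript
import React from 'react';
import { render, screen } from '@testing-library/react';

function StatusBanner({ error }: { error?: string }) {
  return <div role="alert">{error ?? 'All systems go'}</div>;
}

test('renders the error message when one is passed', () => {
  render(<StatusBanner error="Something went wrong" />);
  // getByText throws, failing the test, if the text never rendered.
  expect(screen.getByText('Something went wrong')).toBeTruthy();
});
```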

Once our test scaffolding was in place, we started to feel more comfortable refactoring problem areas and reducing complexity.

We isolated our application by stubbing out external services or dependencies where necessary –  we didn’t want to be testing services outside our scope. Where possible, we recommend agreeing a contract with your external dependencies and using this to develop against.

We also recommend containerizing your app. Being able to deploy and run the same image of an application locally and on production is incredibly powerful. Long gone are the days of having long living application servers and the phrase of ‘well it works on my machine’.

Start failing fast 

Once we had confidence that when our tests all passed then the application could be deployed, we then looked to address where our tests were running.

Having tests run after a merge commit to master is too late in the process. Leaving it this long introduces a risk that someone pushes the release to production button before tests have been run.

We need to run tests earlier in the process.

In the past, to solve this problem you may have adopted a complicated branching strategy (dev, test, master) which on paper seems reasonable, but in practice introduces horrendously slow, unnecessary feedback loops and messy merges between multiple branches.

We decided to harness the power of pull request environments instead, allowing our tests to run on short-living infrastructure before we merge to master. With DevOps paradigms such as immutable infrastructure, infrastructure as code and containerisation, deploying a new environment becomes trivial.

This becomes even more powerful if you deploy your pull request environments in the same way as your production site, since you effectively test the deployment itself.

Having pull request environments spun up also caters for any testing requirements, such as exploratory testing or demos, and massively speeds up developer feedback loops.

The end result is much higher confidence in the quality of your master branch, which to any project is invaluable.

*******

This is part one of a two-part series, with the next article focusing on how we can start to deliver the master branch to production. Watch this space.

Matt Lowry

Understanding SAST and DAST Adoption

In order to achieve a software delivery lifecycle (SDLC) that is efficient and cost-effective, we strive to automate every step with as little human interaction as possible. We do this because the ability to hold a product to a high quality standard throughout its lifespan is essential in building a maintainable, resilient and secure solution.

This blog focuses on the tools and approaches that help us maintain a high level of code quality and application security, while remaining relatively hands-off. We look at the benefits and problems of these tools and present our recommendations about which approach to take, and when. Whilst a little technical in places, if you’re interested in SAST and DAST adoption and understanding the difference between them, this is the blog for you.

Here are four core concepts we’ll be delving into:

  • Static Application Security Testing (SAST)

Runs on the development machine, or set up in your CI/CD so it runs on every code push to Git

  • Interactive Application Security Testing (IAST)

Conducted post-deploy, using a combination of techniques and tools to achieve the desired results (a security expert ‘interacts’ with the application under test by setting up attack vectors)

  • Dynamic Application Security Testing (DAST)

Tool-based security testing that runs on top of functional tests to check application communication channels

  • Runtime Application Self Protection (RASP)

Production monitoring and risk assessment, which relies on tools and automated processes to counter application attacks.

SAST – Static Application Security Testing

SAST is used to identify possible improvements by analysing the source code or binaries without running the code. It is fast, and the code does not need to be compiled, so SAST tools can be integrated directly into the IDE (Integrated Development Environment). This gives developers immediate feedback about the code they write and how they can deliver a better software product. Projects that manage to integrate SAST into their SDLC will notice immediate benefits in code quality, as checks are enforced through a more detailed DoD (Definition of Done) checklist rather than having to wait for a PR (Pull Request) review.

The main benefit of SAST adoption is that developers have immediate feedback on their code and how to improve it; there is no need to deploy or compile the code.

The problem with SAST is that the application code written by developers is just a small part of the application under test. In many cases we rely on different languages, frameworks and servers that interact, and many other systems that make up the ecosystem. If you are doing only static analysis, you are ignoring not only application execution and infrastructure but also the communication protocols. Hackers will usually use information kept in cookies and requests to penetrate the system and exploit flaws in your infrastructure or application.

SAST tools can be set up on developer machines or in their IDE, but can also be set up as a CI/CD tool.
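
As one lightweight example of the CI/CD flavour for a JavaScript/TypeScript codebase (eslint-plugin-security is a real plugin; the rule selection here is illustrative):

```typescript
// .eslintrc.js – static checks run on every push, no deployment needed.
module.exports = {
  parser: '@typescript-eslint/parser',
  plugins: ['security'],
  extends: ['plugin:security/recommended'],
  rules: {
    'security/detect-child-process': 'error', // flags shell-command injection risks
    'security/detect-unsafe-regex': 'warn',   // flags catastrophic-backtracking regexes
  },
};
```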

IAST – Interactive Application Security Testing

IAST relies on agent tools set inside the application at communication junction points. The tools need to be configured, and each agent gathers results to a certain degree of accuracy.

“IAST is a combination of SAST and Dynamic Application Security Testing (DAST). But this security solution has still not matured to a point that it can be defined precisely or measured accurately against rivalling solutions.” – Sharon Solomon

Due to these factors, the tools might not be a fit solution for production environments. The effectiveness of the tools is largely affected by the instrumentation and attack simulations.

The implication is that engineers and security professionals are responsible for setting up the tools and analysing the results, which requires specialised personnel. Installing agents inside the infrastructure might also touch on other constraints, such as bank rules and regulations.

DAST – Dynamic Application Security Testing

DAST is great for developers, allowing them to run rapid health checks on their code. It should be mentioned, though, that this often creates a false sense of safety and security, which can be a very precarious position. Because of the nature of the systems under test, DAST can be run as a security proxy. The advantage of such an approach is that you can take your existing tests (integration or E2E) and run them through the DAST proxy. On top of testing your application’s business flows and environment setup, you also get a nifty security report on all the requests that bounced during testing. Reports usually contain warnings for industry-standard OWASP security threats. The security assessment can be further refined by security experts in order to achieve a more comprehensive suite of checks.

A benefit of DAST adoption is that developers or security analysts can identify sensitive information exposed by the system through analysing the requests generated by the application.

On a day-to-day basis, developers can analyse changes in requests and check that sessions contain only the desired content. Security analysts also now have a tool that can see the underbelly of all business flows, so they can focus straight away on attack vectors and other security aspects. Policies can be verified and enforced (e.g. GDPR adherence, by identifying how sensitive user data is exposed within the application’s communication). Tools usually provide warnings based on a standard configuration, but the large array of tools requires fine-tuned configuration. Some tools provide community script repositories which can be used directly or customised to project needs.

The problem with DAST tools is that they generate a large number of warnings (and false positives) that need to be carefully investigated by developers and security professionals alike. DAST tools also require extensive configuration in order to achieve the targeted results.

DAST tools can be set up in the development infrastructure or as a CI/CD tool. Developers can use the DAST proxy tool from their local machine (by redirecting their tests through the proxy).
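
A sketch of the proxy pattern described above, driving an existing browser flow through OWASP ZAP’s default local proxy (host, port and URL are placeholders; Playwright stands in for whatever E2E tool you already use):

```typescript
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch({
    proxy: { server: 'http://localhost:8080' }, // ZAP listening here
  });
  // ZAP re-signs TLS, so ignore its self-signed certificates.
  const page = await browser.newPage({ ignoreHTTPSErrors: true });
  await page.goto('https://staging.example.com/login');
  // ...run the normal functional flow; ZAP passively scans all the
  // traffic and produces the security report afterwards.
  await browser.close();
})();
```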

RASP – Runtime Application Self-Protection

RASP tools are designed to monitor running applications and intercept all communication, making sure both web and non-web applications are protected. Depending on how the tool is set up, it can signal identified threats or take action itself. RASP can stop the execution of malicious code (e.g. when it identifies an SQL injection) or terminate a user session in order to stop an attack. How it stops attacks depends on how RASP and the application under test are integrated.

There is an API-based approach, in which the RASP tools use the application’s API to steer the behaviour of the application. Through such an approach, developers can handle the integration with RASP tools in a very specific way (e.g. a login app might define an extended API to cope with custom decisions from the RASP tools). There is also an approach where a wrapper is set up around the app and RASP sits on top, giving a more general integration with an existing app.

A benefit of RASP is that it can distinguish between attacks and legitimate requests, and it can limit or block access for attackers who have already gained access to the application. These capabilities will not safeguard against all threats; for this reason, security specialists recommend building security mechanisms at the application level as well.

A drawback is that environments using RASP can see their performance affected by the adoption of active scanners. User experience may be impacted by the added latency.

Conclusions

Due to the nature and scope of each tool and how they fit in the SDLC, there is no single solution adoption to automate and safeguard delivery to production.

SAST is the most cost effective way of checking for code related defects and security threats, but its scope (static code) ignores the vast majority of interacting elements in a software solution.

IAST adoption is desirable, but it might take more time to integrate as a formal step in the SDLC due to the requirement for specialised personnel and tooling.

DAST complements SAST by checking in SAST’s blind spots – running against deployed code and checking environment configurations and communication protocols – and provides extended reports with security in mind.

Note that DAST tools can be used as part of CI/CD processes. Since IAST and DAST are quite similar in many aspects, their capabilities are transferable.

RASP uses a combination of tools for network traffic monitoring. Due to the proactive nature of intrusion detection systems, a set of tripwires (e.g. honeypots) is set by network and security professionals and needs to be monitored. The response to identified threats can then be handled carefully and in a time-effective way.

We would recommend using a small set of tools and practices that make sense for your SDLC. Gradual adoption of tools and processes should always be made with development delivery in mind. SAST on its own will not safeguard against a number of threats and will only signal code-related issues. Ultimately, adopting the right set of tools will help complement the coverage and type of checks performed on the system, to the point where the code is production ready.

If you’d like more specific advice around the right tools and practices for your SDLC you can get in touch with us here.

Voicy Turcu

On being a mum and a woman in tech

Like most people, I had a five-year plan after I graduated from university. Get a nice job and work for a great company, get married, start a family and buy a house. Fast forward five years and here I am, attempting to write a blog about how I balance being a mother and a woman in technology while listening to my daughter having a tantrum!

Being a first-time mum, I struggled a bit in the beginning after my maternity leave to get used to the idea of working again. I felt like I had forgotten how to code. Not to mention that I was given the responsibility of a Test Architect role at the client site where I am based. I had to familiarise myself with new tools that I hadn’t used before and, somehow, I had to lead the team. It was daunting!

At the same time, I was worrying about my daughter all the time. It was hard to focus at work and it definitely wasn’t the best start (let’s just say that my stress hormones were through the roof!). But somehow, I managed to make it work in the end. It wasn’t easy and there were still some sleepless nights (teething is still a nightmare!), but below are the things that helped me balance my work and my responsibilities as a mum.

  1. Share the responsibility

This I feel is the most important. Don’t be afraid to ask for help and share the responsibility. You won’t be able to do everything by yourself! My husband is very hands-on with our daughter so during his days off, he looks after her. Ask families and friends to help out too. We’re lucky that my mother-in-law helps look after my daughter when my husband and I are both at work. There are also times when my parents pick up my daughter from work, so they can look after her. We pre-plan our schedule and check everyone’s availability so we know who will look after our daughter on what day.

  2. Flexible working is the way forward

If you can work from home or do flexible hours, ask for it. From time to time, I work from home if there is no available babysitter that day or if I need to take my daughter to hospital.

  3. Avoid working outside hours

You might be tempted to bring some of your work home with you if you have tight deadlines, but try to avoid doing this if possible. I used to bring work home with me to finish off tasks, check Slack messages and reply to emails, but this meant that even when I was home, I was still thinking about work rather than just spending quality time with my daughter. This just made me more stressed in the end, so if I do have deadlines, I try to be more focused at work and time-box my tasks. If it’s something your colleagues can help with, share the responsibility. Again, you can’t do everything by yourself 🙂

  4. Stop overthinking about your children

It’s natural that we tend to worry about our little ones. I used to worry a lot about my daughter while at work and text my husband or my mother-in-law to see how she was doing – whether she’d eaten or drunk her milk, had her nap, was crying, etc. – and I always got the same answer: she was doing OK. Rather than spending time worrying about things I couldn’t change, I now use that time to be focused at work so I can get home sooner and answer these questions myself.

  5. Find time to learn

Now, this might be difficult for some of you but, if you can, still find time to learn something new every day. It doesn’t matter if it’s just an hour or 30 minutes. Especially in the tech industry, there are always new tools coming out. So, once my daughter is asleep, I make a habit of reading a book, reading tech blogs, or doing a little bit of coding.

  6. Find a company that appreciates you

I feel that this is as important as the first point. If you work for a company that micromanages and doesn’t give you room to improve, then this might be a red flag. It’s great that I work for a company that is appreciative of what I do and rewards those who have done a great job. Recently, I was nominated for an Outstanding People Award and it has given me a great boost to continue doing what I’m doing – I must be doing something right after all!

Achieving a work-life balance, especially if you are a mum, is a challenge, but it is doable. It was difficult at the beginning, but like everything else, it gets easier 🙂

Join our Women In Tech DevOps Playground on 8th November where we will be getting hands-on with Cypress!

Follow other stories from the ECS Digital team here.

Marie Cruz