Xin’s Story as a QA and Continuous Delivery Consultant

My name is Xin Wang, and I am a QA and Continuous Delivery Consultant at ECS Digital. I recently published a blog explaining how I went from delivering presentations on Fashion Week 2017 fashion trends to writing functional tests as a software development engineer.

Working in a male-dominated industry has been very different from what I was used to – the approaches some male engineers take are often quite different from those a female engineer would take. Combined, these perspectives give you a much more valuable overview, which is why I really enjoy working on coding challenges with my colleagues.

Take a look at my video if you are interested in understanding why I switched careers and how I am continuing my journey as a software development engineer.

Is your master branch production ready?

Delivering software in a continuous delivery capacity is something nearly every project strives for. The problem is that not many projects are able to achieve continuous delivery, because they lack confidence in their application’s quality, their build pipelines, their branching strategy or, worst case, all of them.

A good indicator of whether you fall into one of the above is to ask yourself: `can I confidently release the master branch right now?`

If your answer is no, then how do we start to break down and resolve these problems?

Building confidence in quality

A recent project I have been working on fell into a few of the above categories. Nearly all of their testing was done against a deployment to a long-lived environment, after a merge commit to master, along with a lot of duplicated work throughout their pipeline.

The test strategy described above was for a simple front-end application that reads data from an external API.

To start, we identified areas of our application that we knew were unloved or treacherous to develop. Once identified, we put appropriate test automation in place. When writing test automation, it is vital that your tests are robust, fast and deterministic.

We pushed as much of our UI automation as possible down into the application. Ideally, you want your application to adhere to the testing pyramid principles. Checking that elements have particular classes with tools such as Selenium is both time-costly and of little value – there are better, more appropriate tools for that.
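
To illustrate the idea, here is a minimal sketch in Python/pytest (your actual stack may differ): a hypothetical `status_badge()` helper carries the presentation logic, so the behaviour is verified in milliseconds as a unit test rather than as a Selenium assertion against a CSS class.

```python
# A minimal sketch (pytest); status_badge() is a hypothetical helper standing in
# for whatever decides presentation state in the application under test.

def status_badge(stock_level: int) -> str:
    """Decide which CSS class a product badge should carry."""
    if stock_level == 0:
        return "badge--out-of-stock"
    if stock_level < 5:
        return "badge--low-stock"
    return "badge--in-stock"


def test_badge_reflects_stock_level():
    # The same checks written in Selenium would need a deployed app, a browser
    # and element lookups; here they run as plain unit tests on every commit.
    assert status_badge(0) == "badge--out-of-stock"
    assert status_badge(3) == "badge--low-stock"
    assert status_badge(20) == "badge--in-stock"
```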

Once our test scaffolding was in place, we started to feel more comfortable refactoring problem areas and reducing complexity.

We isolated our application by stubbing out external services or dependencies where necessary – we didn’t want to be testing services outside our scope. Where possible, we recommend agreeing a contract with your external dependencies and developing against it.
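
As a rough illustration, here is one way to stub a dependency at the network boundary using only the Python standard library. `ProductClient`, its endpoint and the contract payload are hypothetical stand-ins, not the project’s real code.

```python
# A minimal sketch: the stubbed payload mirrors the contract agreed with the
# external team, so the test never calls a service outside our scope.
import json
import urllib.request
from unittest.mock import patch


class ProductClient:
    BASE_URL = "https://api.example.com"  # hypothetical external dependency

    def _get_json(self, path: str) -> dict:
        with urllib.request.urlopen(f"{self.BASE_URL}{path}") as resp:
            return json.load(resp)

    def product_name(self, product_id: str) -> str:
        return self._get_json(f"/products/{product_id}")["name"]


# Example response copied from the agreed contract, not from a live call.
CONTRACT_RESPONSE = {"id": "42", "name": "Widget", "price_pence": 1999}


def test_product_name_against_the_agreed_contract():
    # Stub the network boundary so the test exercises only code we own.
    with patch.object(ProductClient, "_get_json", return_value=CONTRACT_RESPONSE):
        assert ProductClient().product_name("42") == "Widget"
```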

We also recommend containerising your app. Being able to deploy and run the same image of an application locally and in production is incredibly powerful. Long gone are the days of long-lived application servers and the phrase ‘well, it works on my machine’.
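
A minimal sketch of what this can look like in practice, assuming a hypothetical image tag and port: a pytest fixture starts the same container image the pipeline would deploy, runs the tests against it, and tears it down afterwards.

```python
# A minimal sketch; "myapp:1.2.3" and port 8080 are placeholders for whatever
# your registry and service actually expose.
import subprocess
import time
import urllib.request

import pytest

IMAGE = "myapp:1.2.3"  # hypothetical image tag, identical to the one deployed


@pytest.fixture(scope="session")
def app_container():
    # Start the container detached and capture its id for clean-up.
    container_id = subprocess.check_output(
        ["docker", "run", "--rm", "-d", "-p", "8080:8080", IMAGE], text=True
    ).strip()
    time.sleep(2)  # crude wait; a health-check poll is better in practice
    yield "http://localhost:8080"
    subprocess.run(["docker", "stop", container_id], check=False)


def test_homepage_is_served(app_container):
    assert urllib.request.urlopen(app_container).status == 200
```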

Start failing fast 

Once we were confident that the application could be deployed whenever all of our tests passed, we looked at where those tests were running.

Running tests only after a merge commit to master is too late in the process. Leaving it this long introduces the risk that someone pushes the ‘release to production’ button before the tests have run.

We need to run tests earlier in the process.

In the past, you may have solved this problem by adopting complicated branching strategies (dev, test, master) which on paper seem reasonable, but in practice introduce horrendously slow, unnecessary feedback loops and messy merges between multiple branches.

We decided to harness the power of pull request environments instead, allowing our tests to run on short-lived infrastructure before we merge to master. With DevOps paradigms such as immutable infrastructure, infrastructure as code and containerisation, deploying a new environment becomes trivial.

This becomes even more powerful if you deploy your pull request environments in the same way as your production site, since you effectively test the deployment itself.

Having pull request environments spun up also caters for any testing requirements, such as exploratory testing or demos, and massively speeds up developer feedback loops.

The end result is much higher confidence in the quality of your application’s master branch, which is invaluable to any project.

*******

This is a two-part series, with the next article focusing on how we can start to deliver the master branch to production. Watch this space.

Raising the profile of performance!

Performance testing – typically the job of the Non Functional Test (NFT) team – should always be completed against a stable build of the system, in an environment that resembles the final production setting as closely as possible. Extensive functional testing is usually carried out beforehand, along with various other essential actions.

On paper, the above looks fine and as expected, but by the time the system reaches the NFT team, the underlying code has passed through various developers’ hands over many iterations. New code gets added to existing code and existing code is refactored as different team members work their magic to create the system.

With the code receiving so much attention in the build up to performance testing, the NFT team has a hard time determining where any code-related performance issues originate.

The solution

If the above quandary is to be solved, projects must avoid rigidly segmenting each phase of a project. To reduce any wasted time in the performance testing phase, ‘performance profiling’ tests can be run during the functional test phase. Their purpose is to quickly identify any code-related performance issues. With performance profiling, a small number of virtual users is sufficient – say, five or ten concurrent users. These profiling tests should be run every time code is checked in, making performance testing an essential part of the daily build process.

The key is to ensure these lightweight performance tests are run on a consistent and stable environment. Whilst this environment won’t resemble the final production architecture, it will quickly highlight any degradation in performance. Each test will be executed against the code in its most recent form, making it possible to highlight the root of the problem quickly. Once diagnosed, issues can be fixed and re-tested before the code is released to the NFT team for formal and extensive load testing.
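
For illustration only, a lightweight profile run might look something like the Python sketch below – a handful of ‘virtual users’, a hypothetical endpoint and a baseline threshold agreed with the team. It is not a substitute for a proper tool such as JMeter or WebPageTest.

```python
# A minimal profiling sketch: five "virtual users" hit a hypothetical endpoint
# a few times each, and the run fails if the 95th percentile drifts past a
# baseline figure agreed with the team.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "https://test-env.example.com/products"  # hypothetical test environment
VIRTUAL_USERS = 5
REQUESTS_PER_USER = 10
BASELINE_P95_SECONDS = 0.8  # taken from the agreed baseline run


def one_user() -> list[float]:
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        urllib.request.urlopen(TARGET_URL).read()
        timings.append(time.perf_counter() - start)
    return timings


def main() -> None:
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = pool.map(lambda _: one_user(), range(VIRTUAL_USERS))
    timings = sorted(t for user in results for t in user)
    p95 = timings[int(len(timings) * 0.95) - 1]
    print(f"p95={p95:.3f}s median={statistics.median(timings):.3f}s")
    if p95 > BASELINE_P95_SECONDS:
        raise SystemExit("Performance profile regressed against the baseline")


if __name__ == "__main__":
    main()
```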

A ten-step guide to performance profiling

The above form of lightweight performance testing can have a positive impact on the speed at which you can get a product to market. And by following the ten-step guide below, adopting such an approach doesn’t have to be difficult or costly.

  1. Source an environment – This doesn’t necessarily have to be the production environment; it could even be the test environment, as long as it’s not being used when the profiling is conducted. It might be possible, for example, to complete the process out of normal office hours. Be sure to log the specification of the machine used, so that it forms part of your baseline setup.
  2. Choose a tool – Ideally, this would be the same tool that’s used for full-scale performance testing (during the NFT phase), as the same scripts can then be reused – obviously at much lower load. Otherwise, open source tools like JMeter and WebPageTest are excellent for both profiling and full-scale testing.
  3. Start early – Behaviour-driven development and agile vertical-slice development make it pretty easy to create some client-facing functionality relatively quickly. Once at this stage, it’s time to start writing the profile test – even if it’s just calling a simple GET against a web page. Simply put, the earlier you test, the earlier you find bugs.
  4. Run a baseline – With the script in place, you are ready to run a baseline. Make sure you discuss the results with the team and the business to ensure everything fits with what was initially expected.
  5. Schedule your tests – For this, you can use a Continuous Integration (CI) server, or even a cron job. If you’re using Jenkins/JMeter, it’s worth using the JMeter plugin – this will not only run the tests but also report back some useful graphs. It makes sense to schedule these for every time new code is checked in.
  6. Monitor your results – Hooking your tests up to the CI can help with this – just be sure to implement some kind of threshold pass and failure conditions (see the sketch after this list). This way, you can sound the alarm if the performance profile build goes ‘RED’.
  7. Learn from the improvements – If it’s clear that the application is starting to perform better, determine the reasons for this and see if those improvements can be implemented elsewhere in the code. Negative changes should also be picked up on and investigated promptly.
  8. Maintain the tests – Whenever more functionality is completed, adjust the tests accordingly. It’s also important to retire tests if they become obsolete. This should be treated like any other testing or development task.
  9. Monitor your system – Monitoring should be installed around all parts of the system, including the database, the web server, CPU and memory usage. New Relic may come in handy here, and there are a number of open source tools on the market as well.
  10. Don’t forget the full load test – The goal of performance profiling is to make the process easier and more watertight – it’s not designed to replace full load testing. You should still stress/soak/load test your application as normal.
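
As referenced in step 6, here is a minimal sketch of a threshold gate in Python. It assumes a JMeter results file written as CSV with the default ‘elapsed’ and ‘success’ columns (check your own JMeter output settings) and exits non-zero so the CI build goes ‘RED’ when a threshold is breached.

```python
# A minimal threshold gate; results.jtl and the threshold values are
# placeholders for whatever your scheduled profiling run produces and
# whatever limits your team has agreed.
import csv
import sys

RESULTS_FILE = "results.jtl"   # hypothetical output of the scheduled run
MAX_AVG_ELAPSED_MS = 500       # agreed average response time threshold
MAX_ERROR_RATE = 0.01          # agreed error rate threshold


def main() -> None:
    with open(RESULTS_FILE, newline="") as fh:
        rows = list(csv.DictReader(fh))
    if not rows:
        sys.exit("No samples recorded – failing the build")

    avg_elapsed = sum(int(r["elapsed"]) for r in rows) / len(rows)
    error_rate = sum(r["success"].lower() != "true" for r in rows) / len(rows)
    print(f"avg={avg_elapsed:.0f}ms errors={error_rate:.2%} over {len(rows)} samples")

    if avg_elapsed > MAX_AVG_ELAPSED_MS or error_rate > MAX_ERROR_RATE:
        sys.exit("Performance profile threshold breached")


if __name__ == "__main__":
    main()
```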

Why introduce performance profiling?

When performance profiling is used as part of the daily build process, it becomes easier to instantly highlight any small performance issues early on in the process – long before it even reaches the NFT team. In turn, this means that developers are able to optimize and fine-tune the code as they go, ensuring the best possible results. It can be used to lighten the load on the NFT team, who will benefit from being able to focus on serious load, performance and stress tests to identify bottlenecks that aren’t necessarily code-related.

Performance profiling is definitely not a replacement for traditional load and performance testing, but its benefits as a complementary tactic are too significant to ignore. Implemented in the correct way, it will significantly cut overall costs, and help transform your releases into NFT from functionally good to operationally great.
