7 common mistakes in Performance Testing

Performance testing is significant, so it is important to design the performance test strategy mindfully to make it effective and worthwhile. This post sheds light on a few common mistakes we make while designing a performance test strategy.

The first mistake we make is leaving performance testing for the last minute. Any performance issue discovered just before a release most likely goes to production or must be resolved in a hurry. Many companies practice this approach, and they end up with serious performance issues in production.

Another common practice is to run a performance test before every release, in parallel with regression testing or after it. The problem with this approach is that if performance was not taken into account while designing the application, it can be costly to fix an issue at the last moment, and the issue eventually slips into production. Sometimes the fix may demand significant design changes as well.

Developers and architects should consider the performance aspect of the system while designing its internals, and QA should carry out performance tests as early as possible in the development lifecycle.

As with functional testing, performance test scenarios should be close to the way real users use the system. In particular, testers should be mindful of the delay between actions. An effective test journey adds such delays between user actions to avoid putting excessive load on a single service. In the absence of sufficient delays, a particular service or entity will be overloaded, producing misleading performance figures.
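A small sketch makes the point concrete. Real load tools provide think-time helpers (JMeter timers, Locust's `between`); the functions below are illustrative, and the 0.2 s average response time is an assumed figure:

```python
import random

def think_time(min_s=1.0, max_s=5.0):
    """Return a randomized delay (seconds) to insert between user actions."""
    return random.uniform(min_s, max_s)

def requests_per_user_per_minute(avg_think_s, avg_response_s=0.2):
    """Approximate request rate one simulated user generates."""
    return 60.0 / (avg_think_s + avg_response_s)

# Without think time each simulated user hammers the service back-to-back;
# with a realistic 3 s pause the per-user rate drops dramatically.
no_delay = requests_per_user_per_minute(0.0)    # 300 req/min per user
with_delay = requests_per_user_per_minute(3.0)  # 18.75 req/min per user
```

The same total user count thus produces wildly different server load depending on think time, which is why omitting it skews the figures.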

Apart from the delays, it is important to design the load profile correctly. Spikes and ramp-ups need to be designed carefully to mimic real-world behaviour; otherwise the effectiveness of the test will be doubtful.
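As a minimal sketch, a linear ramp-up followed by a steady hold can be expressed as a series of (time, active users) points. Real tools let you configure this declaratively; the function and figures here are hypothetical:

```python
def load_profile(total_users, ramp_up_s, hold_s):
    """Return (time_s, active_users) points for a linear ramp-up
    to total_users over ramp_up_s seconds, then a steady hold."""
    points = []
    for t in range(0, ramp_up_s + hold_s + 1):
        if t <= ramp_up_s:
            users = round(total_users * t / ramp_up_s)
        else:
            users = total_users
        points.append((t, users))
    return points

# Ramp to 100 users over 60 s, then hold for 120 s.
profile = load_profile(total_users=100, ramp_up_s=60, hold_s=120)
```

Starting all users at once instead of ramping them up produces an artificial thundering-herd spike that few real launches resemble, unless a spike is exactly what you intend to test.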

Any test should be carried out with a clear objective, and the performance test is no different. The service level agreement (SLA) should be the key goal of the activity. Many times QA concludes performance testing after measuring only the response time. That is an incomplete strategy, as the system can fail in other respects under heavy load.

There are a few important KPIs which should be taken into account in a performance test strategy: system uptime, response time, scalability, network utilization, CPU utilization and memory utilization.
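Checking measured KPIs against SLA thresholds can be as simple as the sketch below. The threshold values and KPI names are hypothetical placeholders, not recommendations:

```python
# Hypothetical SLA thresholds; tune these to your own agreement.
SLA = {
    "p95_response_ms": 500,
    "error_rate_pct": 1.0,
    "cpu_utilization_pct": 80,
    "memory_utilization_pct": 75,
}

def sla_violations(measured):
    """Return the KPIs whose measured value exceeds the SLA threshold."""
    return {k: v for k, v in measured.items() if k in SLA and v > SLA[k]}

measured = {
    "p95_response_ms": 430,
    "error_rate_pct": 0.4,
    "cpu_utilization_pct": 92,
    "memory_utilization_pct": 61,
}
# Response time passes, but CPU is over budget - exactly the kind of
# failure a response-time-only test would miss.
violations = sla_violations(measured)
```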

Performance tests carried out on infrastructure that does not resemble production make little sense. For example, if test servers are hosted in a private network and the test traffic is routed through a VPN tunnel, the VPN adds latency to every measurement. Similarly, a test environment with server logs disabled will mask issues related to logging capacity under heavy load. The performance test environment should be a faithful replica of production, possibly with scaled-down resources.

It is important to have server monitors watching system resources such as CPU, network traffic and storage. A monitoring dashboard helps correlate these figures with the results generated by the test tool: KPIs like CPU and storage utilization can be tracked during the test execution. Another advantage of the dashboard is that the tester can verify that the load generated by the test actually reached the server.
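That verification step can be sketched as a simple ratio of requests sent by the load generator to requests the server actually received. The counts would come from the test tool's report and the server's access logs or dashboard; the numbers below are made up:

```python
def load_delivery_ratio(client_sent, server_received):
    """Fraction of generated requests that actually reached the server."""
    if client_sent == 0:
        return 0.0
    return server_received / client_sent

# Hypothetical counts: 100,000 sent by the tool, 98,500 seen by the server.
ratio = load_delivery_ratio(client_sent=100_000, server_received=98_500)
# A ratio well below 1.0 suggests the bottleneck is in the load generator,
# the network, or a load balancer - not the system under test.
```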

All of your performance strategy fails on launch day if you do not consider the popularity of your product. A highly advertised launch will attract a much larger crowd than a normal day, and your performance test should account for such unusual spikes; otherwise the servers will melt like ice on the very first day.

In performance testing, it is very important to ensure that the load generated by the test tool matches what was configured. Any shortcoming in load generation will yield misleading measurements. Tool selection should be driven by the actual need rather than by sticking to widely used open source tools. For example, if you plan to generate a load of millions of users, most tools will fail to do so on a single machine; you need a distributed setup, and that becomes your key criterion for deciding what to use.
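Planning a distributed run starts with dividing the target user count across worker machines. A minimal sketch, assuming interchangeable workers with equal capacity:

```python
def split_users(total_users, workers):
    """Evenly divide a target virtual-user count across distributed load
    generators, spreading any remainder over the first few workers."""
    base, extra = divmod(total_users, workers)
    return [base + (1 if i < extra else 0) for i in range(workers)]

# 1,000,000 virtual users across 8 worker machines.
plan = split_users(1_000_000, 8)
```

In practice a coordinator (such as a controller node in a distributed test setup) hands each worker its share and aggregates the results; the point is that the per-machine capacity, not the tool's popularity, dictates how many workers you need.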
