The digital transformation of companies often rests on the assumption that all software is quality software and that, in this sense, all technological developments are equivalent. Practice has proved the opposite. Lost time, security gaps, information leakage, or simply the interruption of the production chain are only some of the problems that can be caused by software that has not gone through a quality assurance process.

An increasing number of companies of all sizes and segments have come to understand the value of technology in everyday operations and, in particular, the need for technology to be available and functional whenever it is required. This is why software quality is a top priority on the agendas of IT organizations.

Delivering software that is guaranteed to work properly is the idea behind quality assurance (QA), a discipline that is not implemented only to detect errors in the product; its purpose is to balance testing efforts to ensure the highest possible quality and, consequently, the greatest customer satisfaction. The debate should now center on how to do this most efficiently, considering that resources are limited and everything is always urgent, a question also explored in academic work. One example is "Who Should Fix This Bug?", a frequently cited paper from the Department of Computer Science at the University of British Columbia, well worth reading for those who wish to explore the issue more technically.

Put simply, the parties to this conversation are those who trust the results of manual tests, on the one hand, and those who are drawn to the promise of automated tests, on the other. The objective, therefore, is to combine the advantages of both worlds.

There is a limit to what manual tests can do. In scenarios that must be run more than once, for example, each repetition is costly, both in the project budget and in the time of the people who do the work. Combining them with automated tests aims to eliminate that bottleneck and reserve manual testing for tasks that cannot be optimized through automation, such as exploratory work that surfaces bugs an automated suite would never look for.

The proposal that usually comes up immediately is to automate testing. A popular misconception is that automated tests replace manual tests; in reality, the two complement each other. Automated tests focus on verifying or invalidating the specific path for which the code was developed. With great efficiency and speed, they look for faults along that path, but they do not explore the surroundings to detect exceptions or infrequent input-output combinations. In those cases, manual tests do more exhaustive and specific work.
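A minimal sketch, in Python, of what that means in practice. The function and its test are hypothetical, invented for illustration: the automated test nails down the specified path quickly and repeatably, but it only ever checks the inputs someone thought to write down.

```python
# Hypothetical example: an automated check of the "happy path" of a
# discount-calculation function. All names here are illustrative.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_happy_path():
    # The automated test verifies the specified behaviour, instantly
    # and identically on every run...
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(59.99, 0) == 59.99
    # ...but it will never notice an odd rounding edge case or a
    # confusing error message nobody wrote an assertion for. That kind
    # of exploration is where manual testing earns its keep.

test_apply_discount_happy_path()
```

Running such a check on every commit costs essentially nothing, which is exactly why repeated scenarios belong in automation rather than in a tester's daily routine.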

The timely incorporation of automated tests is what keeps a project scalable. Suppose that, at the first sprint, the application has ten functionalities; if only manual tests are run, ten tests are executed. By the second sprint there are twenty, because the new features add their own tests while the old ones must still be re-run as regression checks. By the third, thirty. At some point, no matter how well prepared the QA team is, it will not be able to cope with the volume of cases, and the project will be delayed.

This is where the concept of full stack QA comes into play. Full stack combines manual and automated tests with load & performance tests to cover a broader spectrum. The purpose of load & performance tests is to make sure the application responds adequately as the load on the system increases. They analyze demand distribution, expected user growth, and potential traffic peaks, among other variables. Together, the three testing levels deliver quality, agility, efficiency, and product sustainability over time.
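As a rough illustration of the idea (not a production tool), a load test ramps up concurrent users and watches how response times behave. The sketch below simulates this against a stub handler; in a real project you would point a dedicated tool at the actual system, and the handler, names, and numbers here are assumptions made purely for the example.

```python
import concurrent.futures
import statistics
import time

def handle_request() -> float:
    """Stub standing in for a real HTTP call; returns latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated service work
    return time.perf_counter() - start

def run_load(concurrent_users: int, requests_per_user: int) -> dict:
    """Fire requests from N simulated users and summarise latencies."""
    total = concurrent_users * requests_per_user
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = sorted(pool.map(lambda _: handle_request(), range(total)))
    return {
        "requests": total,
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1] * 1000,
    }

# Ramp the load up and watch whether latency degrades gracefully.
for users in (1, 5, 10):
    print(users, "users:", run_load(users, requests_per_user=20))
```

The interesting signal is the trend: if the 95th-percentile latency grows sharply between load levels, the system will likely struggle at the traffic peaks the tests are meant to anticipate.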

A good example of where the full stack strategy delivers optimal results is the development of mobile applications, or of software that must run in multiple browsers.

We recently worked on a project that had to guarantee compatibility with Google Chrome, Microsoft Edge, and Apple Safari: the same tests had to be run on each platform. Thanks to automation, the development cost was paid only once, and the suite then ran on all three browsers without consuming additional time or resources, which could instead be allocated to higher-value activities. The result was a quality product that delivers a satisfactory experience, with a QA process that used to take two days and now requires only three hours.
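The pattern behind that saving is write-once, run-everywhere: one test body parametrized over browsers, which is roughly what frameworks such as Selenium or Playwright enable. The sketch below shows the shape of it with a stand-in driver class so it stays self-contained; the browser names match the project above, but every other name is an invented placeholder, not the project's actual code.

```python
# One test body, executed once per browser. FakeDriver is a stand-in
# for a real browser-automation driver object.

class FakeDriver:
    def __init__(self, browser: str):
        self.browser = browser

    def get_title(self, url: str) -> str:
        # A real driver would launch the browser, navigate to the URL,
        # and read the page title; here we just fake a plausible one.
        return f"Login - {url}"

BROWSERS = ["Google Chrome", "Microsoft Edge", "Apple Safari"]

def check_login_page(driver: FakeDriver) -> bool:
    """The single test body: identical logic regardless of browser."""
    return driver.get_title("https://example.com/login").startswith("Login")

# The test logic is written (and paid for) once; only the driver varies.
results = {b: check_login_page(FakeDriver(b)) for b in BROWSERS}
print(results)
```

Adding a fourth browser then costs one new entry in the list, not a fourth copy of the test suite, which is why the per-browser cost collapses the way the project's numbers show.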

The benefits of the full stack model are not limited to time savings. Load & performance tests verify that new developments will hold up no matter the load. The number of errors is significantly reduced, and the development team can be confident that the product will not create problems and will live up to the client's expectations. The model also generates considerable monetary savings and, because test periods are shorter, accelerates the delivery of new releases into production.

The three practices (manual testing, automated testing, and load & performance testing), combined in a full stack model, make up an efficient quality process that guarantees the product will be free of functional and non-functional problems.

So, the next time your company needs to develop software or applications, make sure you read the fine print on the quality process and do not miss the opportunity to ask whether full stack testing will be part of it. Your business may depend on it.