The Quality Assurance (QA) team’s responsibility is to make sure that the software we deliver to the client fulfills their needs, and to verify, in measurable ways, that it works as expected. We look at performance and at whether the product aligns with the business goals. We see our role as a way to get ahead of the competition by providing a better service, product, or experience to the users.

This concept also appears in the literature, as Glenford J. Myers stated in his book “The Art of Software Testing”:

However, more often than not, we tend to focus our testing on extreme edge cases that may trigger unexpected behavior, revealing bugs or unintended results. On its own, this is not a good quality assurance strategy.

What is a QA Strategy?

A strategy, as a general concept, is a long-term plan that guides us through the process of reaching our goals. This means that when we create a strategy, we are planning for the future so that we have the tools and the means to achieve those objectives.

A quality assurance strategy is no different. We should define the objectives, make plans to reach them, and execute tasks consistent with the level of quality we want to deliver to our clients (or customers), so that they have the best possible experience, product, or service.

This is a key step in the quality assurance process, as a good strategy points the development team in the right direction. Without it, the team may end up building unwanted, poor-quality software because the goal being pursued is never clear.

So these questions arise:

  • Should we take into account every possible scenario to make sure the software developed is errorless (or at least appears to be)?
  • Should we spend a lot of time and resources trying to “break” the software so we can be more confident about it?

There is no simple answer to either of these questions (nor will there be in this blog). However, I will try to give you my idea of how to handle these situations.

Set the objectives clearly

It may seem obvious, but first of all we should be 100% certain that what we are developing and delivering is what the client expects. There is no point in delivering something perfect that does not meet the client’s requirements, right? So the first step is to ensure that the application is within expectations and that plans are in place to guide us through the process.

If we fail to recognize the client’s expectations, the rest of the process is pointless and the overall quality of the product does not really matter. We would be delivering a perfectly functioning car to a client who wanted a plane.

To mitigate this risk, the quality assurance team should consider the details of the project, its scope, and the client’s ultimate goal. We need to understand, and act according to, the objectives that will drive the process toward a successful outcome and a satisfied client.

Finally, communicating properly with the client, as well as our teammates, and making sure that the objectives are clear and shared will help us achieve that level of success and quality.


The hunting begins

Once we are cleared to go and have our plans in hand, development (and testing) begins. It is well known that a bug-free application is something we all aspire to but will never fully achieve, and the same goes for complete, exhaustive testing. It is simply not possible, nor practical, for several reasons: lack of resources, the sheer number of scenarios, the volume of data required to verify every single case, and so on.

Effective test cases are those that bring to light the issues present within a program; this is widely accepted in the industry. Nonetheless, it does not mean that every test case should be built around the idea of, once again, “breaking” the code so we can “proudly”, if I may, tell our teammates what a fine job we are doing at finding (or pointing out) issues.

This means we should commit to a level of coverage (agreed with the client) and test as much as possible within it. Furthermore, we should always try to optimize the time spent designing and executing test cases.

For that matter, when designing test cases we should focus more on ensuring that the features comply with what is expected than on piling up edge cases, or tests created solely with the intent to, as said before, “break” the code.

Don’t get me wrong: we should still cover these scenarios and validate, for example, that the application will not fall to pieces if we type a letter into a numeric field. The question is how to strike a balance between the two.
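As a minimal sketch of that balance, suppose `parse_quantity` is a hypothetical validation routine standing in for a numeric form field in the application under test. A small suite can pair one compliance check (the feature does what the requirement says) with one targeted negative check (the letter-in-a-numeric-field case), without drowning in contrived breakage scenarios:

```python
def parse_quantity(raw: str) -> int:
    """Parse a quantity field, rejecting non-numeric or out-of-range input.

    Hypothetical example code; a real project would test the actual
    application boundary instead.
    """
    if not raw.isdigit():
        raise ValueError(f"not a number: {raw!r}")
    value = int(raw)
    if value < 1:
        raise ValueError("quantity must be at least 1")
    return value


def test_accepts_expected_input():
    # Compliance first: valid input behaves as the requirement specifies.
    assert parse_quantity("3") == 3


def test_rejects_letter_input():
    # One targeted negative case: a letter in a numeric field must raise
    # a clear, controlled error rather than crash the application.
    try:
        parse_quantity("abc")
    except ValueError:
        pass
    else:
        raise AssertionError("letter input should be rejected")


test_accepts_expected_input()
test_rejects_letter_input()
print("all checks passed")
```

The point is proportion: most tests assert the expected behavior, while a few deliberate negative cases confirm the software degrades gracefully.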

The “hunting down” of bugs is an analogy for the common testing practice of using software in ways it was not designed for, feeding it malicious or faulty data in order to provoke undesired results.

There must be a balance between testing the application and deliberately pushing the software to fail and crash. If we put more effort into the latter, we may lose focus on what is important to validate and what the application absolutely must do. As Thanos once said: “perfectly balanced, as all things should be”.


The answer is not even close to being easy. The ISTQB Foundation syllabus, in section “1.3 Seven Testing Principles”, states among other things:

  1. Testing shows the presence of defects, not their absence. Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness.
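A toy illustration of this principle, using a hypothetical `is_leap_year` function: every test below passes, yet a defect survives because no test ever exercises the buggy path. Green tests are evidence, not proof.

```python
def is_leap_year(year: int) -> bool:
    # Deliberately buggy for illustration: this ignores the rule that
    # years divisible by 400 (such as 2000) ARE leap years.
    return year % 4 == 0 and year % 100 != 0


def run_tests():
    assert is_leap_year(2024) is True    # divisible by 4
    assert is_leap_year(2023) is False   # not divisible by 4
    assert is_leap_year(1900) is False   # century, not divisible by 400
    print("all tests green")
    # ...and yet is_leap_year(2000) wrongly returns False:
    # the defect is present, just never shown.


run_tests()
```

Finding no defects here tells us only that these particular inputs behave; it says nothing about the inputs we never tried.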

Does that mean we should spend all our effort hunting intrinsic defects, defects that are hard to reproduce, or bugs that depend on a specific configuration or data input? Of course not. Defects will always be part of the software, no matter what we do or how thorough we are.

Nevertheless, it is better to deliver software that is efficient, performant, compliant, and that satisfies the client’s requirements than software that is bug-free but useless.

We, the quality assurance team, do not “break” the code; we simply highlight issues that are already present by using the software as a “specialized” user. As soon as we internalize that mindset, we can create test cases that are more efficient and effective, increasing confidence in the product’s stability, robustness, and, of course, overall quality.