We all make mistakes. We’re only human. But when you are a company bringing out software that will be widely used around the world you have to try to get things right the first time, whenever possible.

Bringing out deficient products can not only cause serious damage to your brand image and millions, or even billions, of dollars in losses from product recalls or lawsuits; some software errors can even be deadly.

A deadly mistake

One need look no further than the two fatal crashes of Boeing 737 MAX passenger airliners, in Indonesia in October 2018 and in Ethiopia in March 2019, which have been attributed to flaws in the automated flight-control software (MCAS) that prompted the planes to nosedive shortly after take-off.

Both tragedies serve as powerful examples of how things can go wrong when software fixes are applied to correct a hardware problem, and when insufficient testing and pilot training are carried out before a product is launched commercially.

The crashes led Boeing to ground more than 60 aircraft for 20 months to address the problem, costing the aerospace company an estimated US$20bn in fines, compensation and legal fees, plus indirect losses of more than US$60bn in the form of cancelled orders.

There are other widely publicized, though less lethal, examples where companies have got things wrong, not necessarily through software defects but by misjudging how consumers would react to a new design.

One such example was Microsoft’s 2012 launch of the Windows 8 operating system, which introduced major changes to the user interface to improve the experience on mobile devices. The operating system was criticized for being confusing and difficult to use, particularly with a mouse and keyboard, whose use the company had erroneously assumed would soon be phased out.

Microsoft was obliged to quickly bring out an update, Windows 8.1, to address some of these issues, while Windows 10 essentially returned to an interface focused on keyboard and mouse.

Articles written at the time claimed that Microsoft had missed the mark with Windows 8 because they forgot the basics: they didn’t know their customer, they didn’t solve a problem (rather they created one), their message of what they were trying to achieve was unclear, and they ignored user testing research.

Essentially, at that time there was not a ready market of touch-screen-only users. The touch interface seemed targeted at entertainment, while Microsoft’s main market was business users, so the new design did not solve existing users’ computing problems. And the poor discoverability of many features suggests the company ignored its own usability studies. The result was a Frankenstein’s monster of a hybrid desktop/tablet interface.

Nobody is perfect

Today, in a hyperconnected and fast-moving world where products must come out faster than ever, customer expectations are at an all-time high and bad press travels instantaneously on social media, software developers, designers and quality assurance (QA) testers must work more closely together than ever before.

“You can never guarantee that software will have no defects,” says Matías Caria, Making Sense’s QA manager. “There will always be defects, but you have to reduce them to a bare minimum.”

“We’re all human, we all make mistakes,” adds Making Sense QA Lead Patricio Cedano.

Developers tend to be very focused on coding and the back end, whereas testers have to see the bigger picture and consider how the software will work in practice for the end user on the front end.

“Our job is to ensure that all of the quality requirements are met and that users can access all functions of the applications,” Cedano said, adding, “We also have to be on the lookout that nothing will cause any legal problems for the company.”

The best time to start testing is at the beginning

According to Caria, ideally all testing should start at the beginning of the development process because things are easier and less costly to fix than at a later stage.

“The sooner you find a flaw, the less costly it is to fix,” is a common adage among testers, Caria said.

Sensitivity to criticism

In the same way that journalists are often sensitive to editors changing their stories, not all developers take kindly to the constant scrutiny of testers looking over their shoulders for errors in their code, nor do designers appreciate testers picking holes in their vision of a perfect user experience.

“Some people are more sensitive than others,” admits Cedano. “Essentially you are criticizing their work, but they have to be aware that there is always room for improvement.”

Caria underscores the importance of teamwork and doing things methodically, using agile techniques to test at the end of each cycle. Failing to do so can mean more work for the developer or designer anyway, if major changes have to be made after much of the development work has already been finalized.

Cedano adds that user acceptance testing (UAT) is crucial, especially if last-minute changes have to be made.

Those tests should be carried out directly with the target user to get their acceptance and feedback, for example, when testing the purchasing experience on a retail website.

“You have to put yourself in their shoes,” he said.

Under pressure

But testers also come under pressure, as all of them answer to superiors in the company hierarchy.

The Covid-19 pandemic has forced many companies to take major leaps forward in their digital transformation, and there is ever more time-to-market pressure in offering products and services.

“We’re the last people to check the product before it goes to market. If something goes wrong, everyone looks at us. Errors can lead to billions of dollars in losses and product recalls,” says Cedano.

What makes a good tester?

According to Caria, good QA testers are innately curious and have an almost obsessive attention to detail.

“We’re the sort of people who spot errors and anachronisms in movies and then make YouTube videos about them,” he said.

They also have to be able to see the big picture and anticipate unexpected consumer behavior, says Cedano.

“You have to think of potential user scenarios that other people haven’t thought of. We’re nitpickers, hairsplitters,” he said.

How will new technologies shape QA testing in the future?

Testing solutions are constantly evolving. QA is no longer just about finding bugs and errors; it also has to scrutinize product ideas, evaluate behavioral predictions and study opportunities and threats.

Consultancies and tech publications have predicted a series of trends that are likely to facilitate, speed up and improve the precision of QA testing in the future.

The following is a brief overview of some of the technologies expected to shape QA testing in the short term. Over the coming weeks we will address these trends in more detail on this blog.


Automation

Automation is not a new trend, but for the type of client Making Sense typically works with, it has brought about the biggest changes.

According to Caria, not only can testers run automated workloads overnight to save time during the day; automating regular checks also leads to fewer errors.

“If you’re testing the same thing every two weeks, it becomes monotonous and you can make mistakes. With automation you can now run checks two or three times a week and improve accuracy and quality,” he said.
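To illustrate the idea, a repetitive manual check can be captured once in code and then run unattended as often as needed, overnight or several times a week. The sketch below is a minimal, hypothetical example in Python; `apply_discount` and the checks themselves are made up to stand in for real application code:

```python
# Hypothetical pricing function standing in for real application code.
def apply_discount(price, percent):
    """Return the price reduced by the given percentage, rounded to cents."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount")
    return round(price * (1 - percent / 100), 2)


def run_regression_checks():
    """The same checks a tester would otherwise repeat by hand each cycle;
    in practice these run unattended, e.g. nightly in a CI pipeline."""
    assert apply_discount(100.0, 15) == 85.0       # typical discount
    assert apply_discount(49.99, 0) == 49.99       # zero discount is a no-op
    try:
        apply_discount(100.0, 150)                 # invalid input must be rejected
    except ValueError:
        pass
    else:
        raise AssertionError("invalid discount was accepted")
    return "all checks passed"


if __name__ == "__main__":
    print(run_regression_checks())
```

Once the checks live in a script like this, running them three times a week instead of once a fortnight costs nothing extra, which is exactly the accuracy gain Caria describes.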

According to Cedano, when faced with a consumer culture that wants everything immediately, “automation introduces new methodologies and reduces development cycles and that can lead to savings in time and money.”

Artificial intelligence (AI) and Machine Learning (ML)

AI and ML can revolutionize the way we test data and improve bug-detection algorithms. These technologies could take over manual testing tasks so that QA resources can be allocated elsewhere. AI can process far more data than can be handled manually and draw on multiple customer usage trends, which software designers and developers can then interpret and incorporate into their products.


Blockchain

Blockchain technology works as a secure and transparent record-keeping database, commonly used in supply chain management, logistics and financial services. However, for these complex applications to work efficiently they require next-generation testing solutions to debug the code. The types of testing that need to be run include performance, functional, node and API testing.
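As a minimal illustration of the API-testing side, the hypothetical sketch below checks a made-up node-status endpoint. To keep it self-contained it spins up a local stub server, so the endpoint, field names and values are all assumptions, not any real blockchain's API:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


# Hypothetical node stub so the example runs without a real blockchain node.
class StubNode(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"height": 1024, "synced": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


def check_node_status(url):
    """Functional API check: the status endpoint must answer 200 with a
    JSON body reporting a synced node at a non-negative block height."""
    with urllib.request.urlopen(url) as resp:
        assert resp.status == 200
        data = json.loads(resp.read())
    assert data["synced"] is True and data["height"] >= 0
    return data


# Start the stub on an ephemeral port, run the check, then shut it down.
server = HTTPServer(("127.0.0.1", 0), StubNode)
threading.Thread(target=server.serve_forever, daemon=True).start()
status = check_node_status(f"http://127.0.0.1:{server.server_port}/status")
server.shutdown()
```

In a real suite the same check would point at a test node, and performance and node-level tests would build on the same kind of scripted calls.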

Big Data

Applications that handle huge volumes of structured and unstructured data need to be analyzed and tested frequently. Sectors including healthcare, retail, finance and telecommunications deal with big data constantly. End-to-end big data testing tends to focus on accuracy, and effective testing can enhance decision-making, business strategy and market targeting.
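One common accuracy check in big data testing is reconciling the records that went into a pipeline against what came out the other end. The sketch below is a minimal, hypothetical version; the record layout and field names are assumptions for illustration:

```python
# Made-up source and target extracts; in practice these would be samples
# pulled from the systems on either side of an ETL pipeline.
source_rows = [
    {"id": 1, "amount": 120.0},
    {"id": 2, "amount": 75.5},
    {"id": 3, "amount": 12.25},
]
target_rows = [
    {"id": 3, "amount": 12.25},
    {"id": 1, "amount": 120.0},
    {"id": 2, "amount": 75.5},
]


def reconcile(source, target):
    """Compare row counts and per-record content, ignoring order.
    Returns a list of discrepancies; an empty list means the load is accurate."""
    issues = []
    if len(source) != len(target):
        issues.append(f"row count mismatch: {len(source)} vs {len(target)}")
    src = {r["id"]: r for r in source}
    tgt = {r["id"]: r for r in target}
    for key in src.keys() | tgt.keys():
        if src.get(key) != tgt.get(key):
            issues.append(f"record {key} differs or is missing")
    return issues


issues = reconcile(source_rows, target_rows)  # empty list: source and target agree
```

At real volumes the same comparison would be done with aggregates or a distributed framework rather than in-memory dictionaries, but the principle, proving the target faithfully reflects the source, is the same.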

IoT testing

With the number of interconnected devices expected to reach epic proportions over the next few years, revolutionizing industries from automotive to manufacturing, the Internet of Things represents an enormous need, and opportunity, for QA testing. IoT testing will focus not only on hardware and software but also on operating systems and on communications and security protocols.

Agile & DevOps

Agile & DevOps methodologies allow tech teams to develop high-quality IT products, receive feedback more quickly and enter the market faster. In other words, the challenge for QA testers is accuracy, speed and the capacity to respond quickly to changing trends. DevOps and Agile comprise practices, processes, rules and tools that help reduce the cycle from development to operation to delivery.

Always have someone check your work

In summary, we all need a third party to check our work, as we are often incapable of seeing our own mistakes. Proper QA testing helps build customer trust in a company as a reliable provider of products or services.

Failing to do proper QA testing can lead to fines and penalties for noncompliance with standards.

Higher-quality products have fewer defects and require less maintenance at lower cost.

And bugs that go undetected until late in the life cycle can be expensive to fix, requiring redesign, re-implementation and retesting.