According to various studies, such as the renowned Chaos Report, barely 29% of software development projects successfully meet the three fundamental constraints: time, cost, and scope. In other words, most projects fail on at least one vertex of that triangle. This happens for different reasons, ranging from how projects are managed to poor identification of the client's needs. It also happens when companies underestimate the testing effort, or when delivery dates are estimated "at a guess" from experience with similar projects.

Is it possible to apply scientific methods to correct these approximations, which are human and therefore fallible, and to incorporate predictive models that, based on the defects found in past developments, can estimate the deviations of a new project? The answer can be found in neural networks: models that learn through training and emulate how the human brain processes information. First, business intelligence techniques are used to collect the necessary data; then machine learning processes them once the model has been adequately trained.

The most interesting point is that companies already have the data needed to feed such a model; they simply do not use them. For instance, during quality assurance (QA) processes, every defect found is logged with its details: severity, impact, the module or functionality affected, and the project stage at which it was detected. All of this is already stored in the tools teams use to track defects and review code.
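As a minimal sketch, the logged fields above (severity, affected module, detection stage) can be flattened into numeric vectors a model can consume. The category lists and field names below are illustrative assumptions, not any particular tracker's schema:

```python
# Hypothetical illustration: encode a QA defect-log entry as a numeric
# feature vector. Categories and field names are assumptions for this sketch.

SEVERITIES = ["blocker", "critical", "major", "minor"]
STAGES = ["analysis", "development", "testing", "production"]
MODULES = ["auth", "billing", "reports"]

def one_hot(value, categories):
    """Return a one-hot list: 1.0 at the value's index, 0.0 elsewhere."""
    return [1.0 if c == value else 0.0 for c in categories]

def encode_defect(defect):
    """Flatten one logged defect (a dict of categorical fields) into a vector."""
    return (one_hot(defect["severity"], SEVERITIES)
            + one_hot(defect["stage"], STAGES)
            + one_hot(defect["module"], MODULES))

defect = {"severity": "critical", "stage": "testing", "module": "billing"}
vector = encode_defect(defect)
print(len(vector))  # 4 + 4 + 3 = 11 features
```

In practice the encoding would be derived from the tracker's real taxonomy, but the principle is the same: the data QA teams already log is one mapping step away from being model input.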

Going after critical defects

The purpose of QA is to deliver a product that is functional and meets users' expectations. To achieve that, blocking or critical defects (flaws that can significantly impact system functionality) must be detected during the development process. The use of neural networks aims at taking preventive action so that defects neither delay the delivery of the product nor, even worse, affect the release the client is already using.

The results of the first tests conducted by Making Sense researchers were promising. An algorithm trained with data from three complex releases, which had taken almost two years to develop, reached a prediction level close to 97% when detecting defects of severity 1 and 2 (that is, those that generate the most severe conflicts). This means the development team knows what to expect the following week and can take early measures to avoid deviations in costs and schedule, for example, by budgeting the hours needed to fix the defects expected at each severity level. The algorithm's output can also serve as a starting point for planning the following sprints or releases, designing tests, and even determining the cut-off point for testing.
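The kind of prediction described above can be sketched in miniature. The snippet trains a single logistic unit, the simplest neural building block, to flag work items likely to yield a severity-1 or severity-2 defect. The two features (change size and past defect count, both scaled to [0, 1]) and the tiny training set are invented for illustration; this is a toy stand-in, not Making Sense's actual model or data:

```python
import math

# Toy sketch: one logistic unit trained by gradient descent on an invented
# dataset. Features and labels are assumptions, not real project data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=2000, lr=0.5):
    """Fit weights and bias by stochastic gradient descent on log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of log-loss w.r.t. the pre-activation
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Invented pattern: large, historically defect-prone changes tend to
# produce critical defects (label 1); small, clean ones do not (label 0).
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.2, 0.1], [0.1, 0.2], [0.3, 0.2]]
y = [1, 1, 1, 0, 0, 0]

w, b = train(X, y)
risk = sigmoid(w[0] * 0.85 + w[1] * 0.75 + b)
print(f"predicted critical-defect risk: {risk:.2f}")
```

A production model would use many more features and a multi-layer network, but the training loop (predict, measure error, adjust weights) is the same idea at larger scale.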

Lower cost, more value

The use of this algorithm has a direct impact on project costs and on the value perceived by the end client. Detecting a problem in the analysis or development stage is not the same as detecting it during implementation, let alone in production: the later a defect is found, the more its cost grows, roughly exponentially. By contrast, with the present proposal, where problems are detected early, the cost of fixing them remains practically insignificant. The QA team also gains room to maneuver: QA activities generally come at the end of the chain, with little time to run validations before deadlines, which often forces teams to shorten testing.
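The "exponential cost" claim can be made concrete with back-of-the-envelope arithmetic. Both the base cost and the 10x per-phase multiplier below are assumptions chosen only to show the shape of the curve, not measured figures:

```python
# Illustrative only: assume fixing a defect costs 10x more at each later
# phase. Base cost and multiplier are made-up numbers for this sketch.

PHASES = ["analysis", "development", "testing", "production"]
BASE_COST = 100   # hypothetical cost to fix a defect found during analysis
MULTIPLIER = 10   # assumed cost growth factor per phase

def fix_cost(phase):
    """Cost of fixing a defect first detected at the given phase."""
    return BASE_COST * MULTIPLIER ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase:<12} {fix_cost(phase):>8,}")
```

Whatever the real multiplier is for a given organization, the compounding is what makes early detection pay for itself.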

In conclusion, the benefit of incorporating intelligence into the QA phase of the software development process is that delivery deadlines are met, the team is free to focus on activities with higher added value, and, of course, higher-quality products are delivered.