AI is a disruptive technology, not a magic wand that solves every problem. Knowing its limitations allows for more responsible use and better results.

Its responsiveness is striking, as are its natural language processing and its ability to solve an enormous range of problems in real time. But make no mistake: artificial intelligence (AI), including, of course, generative AI, is not an all-powerful magic tool. It is a very powerful technology, evolving at great speed, that still needs close human supervision to deliver the best possible results.

The limits of AI are tied to its “non-human” side. For example, its ability to analyze millions of medical images and anticipate diagnoses of certain types of cancer is proven, and the world’s most advanced clinics already use it for early detection and preventive treatment. However, unlike physicians, AI does not understand what it means for patients to have cancer or what impact that has on their families. This amorality and emotional deficit applies to every area of knowledge: AI can deliver intelligent answers but, simply put, it does not know what it is talking about.

Zombie intelligence

Another limit is what the University of Auckland calls “zombie intelligence”: AI cannot explain its actions or account for the solution it offers. Everything that happens between the prompt and the result takes place inside a black box, so the result may be correct, but it may also be a hallucination. The New Zealand university gives an example: asked who holds the world record for crossing the English Channel on foot, something impossible in reality, ChatGPT delivers an invented answer based on similar data it holds, because it cannot recognize that the question itself is meaningless.
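To make that failure mode concrete, here is a minimal sketch in Python, assuming the `openai` client library (v1+) and an API key in the environment; the model name is our assumption, not a recommendation. The point is that nothing in the request or the response flags the impossible premise:

```python
# Minimal sketch: posing an impossible question to a chat model.
# Assumes the `openai` Python package (v1+); OPENAI_API_KEY is read
# from the environment. The model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Who holds the world record for crossing "
                   "the English Channel entirely on foot?",
    }],
)

# The API returns fluent text either way: the model may blend real
# Channel-crossing records into an invented "on foot" record, and
# verifying the answer remains a human responsibility.
print(response.choices[0].message.content)
```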

A study by the universities of Cambridge and Oslo goes a step further. It argues that the paradox formulated in the first half of the 20th century by Gödel and Turing, namely that it is impossible to prove whether certain mathematical statements are true or false and that some computational problems cannot be tackled with algorithms, also applies to AI: for certain problems, the algorithms AI would need simply cannot exist.
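The computational half of that paradox can be sketched in a few lines. Below is an illustrative Python rendering of Turing’s classic diagonal argument (all names are ours): if a universal halting decider existed, we could build a program that contradicts it, so no such algorithm can exist.

```python
# Sketch of Turing's diagonal argument; all names here are ours.
# Suppose, for contradiction, that halts(f, x) could always decide
# whether the call f(x) eventually terminates.

def halts(f, x) -> bool:
    """Hypothetical universal decider - provably impossible to write."""
    raise NotImplementedError

def diagonal(f):
    # Do the opposite of whatever halts() predicts for f run on itself.
    if halts(f, f):
        while True:   # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# diagonal(diagonal) would halt if and only if it does not halt,
# a contradiction: no general halts() algorithm can exist. The
# Cambridge/Oslo study extends this kind of limit to AI itself.
```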

Only human after all

The ethical and responsible use of AI, a discipline that is gaining ground and that already has a specific regulatory framework recently presented by the European Union, aims precisely to align the objectives of AI with those of society, that is, to ensure it is not used in situations that generate risks. Analyzing responses to verify that they are correct and in line with expectations is the first step toward avoiding a catastrophe of any scale.
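In software terms, that first step can be as simple as refusing to act on unreviewed output. Here is a minimal human-in-the-loop sketch in Python, where every function name is hypothetical:

```python
# Minimal human-in-the-loop gate; all names here are hypothetical.

def generate_draft(prompt: str) -> str:
    """Stand-in for any generative AI call."""
    return f"AI-generated draft for: {prompt}"

def human_approves(draft: str) -> bool:
    """A professional reviews the output before it is used."""
    answer = input(f"Approve this draft?\n{draft}\n[y/N] ")
    return answer.strip().lower() == "y"

draft = generate_draft("Summarize the new EU regulatory framework")
if human_approves(draft):
    print("Approved: the draft moves downstream.")
else:
    print("Rejected: a human revises it before anything ships.")
```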

These premises apply to the universe of digital product development: at Making Sense we already use generative AI to enhance discovery activities, identify new features for digital products, strengthen our quality processes, support our development teams, and think strategically about our clients’ needs.

However, we never lose sight of the importance of the human factor within the project: the results we obtain from AI serve to speed up processes, automate repetitive or tedious tasks, and generate suggestions. But it is the knowledge and experience of our professionals that allows us to achieve excellence, guarantees correct operation, and ensures alignment with the requirements and needs of our customers and their users.

Humans are the ones who understand reality, who judiciously interpret what is needed, what a digital product should do, what a customer requires, and which user experience is best to deliver. We can use AI as a co-pilot, as an aid to accelerate work or improve the quality of pieces of code, but we cannot delegate to this technology something that remains unquestionably human: the creation of meaning.