Can artificial intelligence be compared to human intelligence?

The buzz around generative AI tools like ChatGPT has revived the debate about how close Artificial Intelligence (AI) is to outperforming human intelligence, and phrases like Artificial General Intelligence (AGI) and human-level AI have begun to enter the public dialogue. Before delving into this subject, we need to understand the different facets of intelligence, a complex concept that philosophers and psychologists have studied for centuries.

“Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.” – R. J. Sternberg, as quoted in a research paper by Shane Legg and Marcus Hutter.

The authors collected some 70 definitions of intelligence and categorized them into collective definitions, psychologists’ definitions, and AI researchers’ definitions. From all those definitions, we understand that intelligence is not a single ability but rather a composition of multiple abilities, such as reasoning, learning, interaction with the environment, and emotion.

In another paper, Yoshihiro Maruyama, a computer science researcher at the Australian National University, proposed conditions for Artificial General Intelligence encompassing logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness.

As for the logic and reasoning ability, AI systems available today are still far from AGI as defined in a recent paper by Sébastien Bubeck et al. from Microsoft Research: “systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level”. Even the most advanced AI systems, such as Large Language Models (LLMs) or computer vision systems, can perform complex tasks like question answering or object detection yet fail at simple tasks such as basic arithmetic, because they are not designed to step outside their learning context. In the same paper, the authors performed an early experiment with GPT-4, OpenAI’s latest LLM, and concluded that while GPT-4 shows sparks of AGI, it is still far from qualifying as a complete AGI.

The second important trait of intelligence still lacking in AI systems is autonomy. Autonomy means the faculty of spontaneously setting goals and actively working towards them, rather than merely passively processing information in response to external stimuli. It also implies the capacity for judgement and for choosing options based on internal wills and desires. Autonomy is also essential for creativity. Many people might argue that text- or image-generation AI systems are creative because they can produce original and unusual content, but this is a pseudo-creativity: it mimics the behavior of the humans who generated the data those systems learned from.

Another area where AI fails (at least for the time being) to be at or above human level is the ability to adapt to new environments. Take the example of self-driving cars: AI researchers have been trying to create entirely autonomous cars that can move in any environment, yet this goal has still not been fully achieved. We humans can drive in entirely new environments thanks to our faculty of inference and of learning from only a few examples. In AI research this is called few-shot learning, and it has been a hot topic in the field in recent years; a small sketch of the idea follows below.
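As a rough illustration of few-shot learning, here is a minimal Python sketch of a nearest-centroid (prototype-based) classifier, one simple approach in this family: each class is summarized by the mean of a handful of labeled embeddings, and a new sample is assigned to the closest prototype. The class names, dimensions, and random vectors are all invented for the example and stand in for the output of a real trained encoder.

```python
# Minimal few-shot classification sketch: nearest-centroid prototypes.
# All data below is synthetic; a real system would embed raw inputs
# (images, text, ...) with a trained encoder instead.
import numpy as np

rng = np.random.default_rng(0)

def build_prototypes(support: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Average each class's few labeled embeddings into one prototype."""
    return {label: embs.mean(axis=0) for label, embs in support.items()}

def classify(query: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Assign the query to the class with the nearest prototype."""
    return min(prototypes, key=lambda label: np.linalg.norm(query - prototypes[label]))

# Five examples per class ("5-shot"), 16-dimensional embeddings,
# drawn around different centers so the classes are separable.
support = {
    "cat": rng.normal(loc=0.0, size=(5, 16)),
    "dog": rng.normal(loc=3.0, size=(5, 16)),
}
prototypes = build_prototypes(support)
query = rng.normal(loc=2.7, size=16)  # a new sample near the "dog" cluster
print(classify(query, prototypes))    # expected: dog
```

The appeal of this scheme is that recognizing a brand-new class requires only a few labeled examples to form a new prototype, with no retraining; more elaborate few-shot methods such as prototypical networks and meta-learning build on the same intuition.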

Building systems that can explain their reasoning and inference as humans do is another hot topic in AI research. In some applications, such as medical diagnosis, what matters most is not just drawing conclusions but explaining how they were reached and how confident the system is in them. Some progress has been made in this area, but there is still room for improvement.
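As a toy illustration of the confidence half of this problem (not of any specific medical system), the sketch below turns a model’s raw scores into probabilities with a softmax, so the top probability can be reported as a rough confidence alongside the prediction; the labels and scores are invented for the example.

```python
# Toy sketch: report a confidence value alongside a prediction via softmax.
# The labels and logits are illustrative, not from a real model.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw scores into probabilities (shifted for numerical stability)."""
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

labels = ["benign", "malignant"]  # hypothetical diagnostic classes
logits = np.array([2.2, 0.4])     # hypothetical raw model scores
probs = softmax(logits)
prediction = labels[int(probs.argmax())]
print(f"prediction: {prediction}, confidence: {probs.max():.0%}")
# -> prediction: benign, confidence: 86%
```

Softmax probabilities are known to be over-confident in practice, which is part of why calibration techniques and richer explanation methods remain active research topics.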

Emotion is by far the area where artificial intelligence falls furthest short of human level. Emotion plays an important role in human judgment and decision-making. It is also closely related to morality, which allows us to live in society and to base our decisions not only on our own goals and desires but also on collective ones. AI morality comes from data corpora and rules that carry human biases, and it cannot be compared to human morality. An AI-based chatbot refrains from producing harmful content because we taught it not to, not because it believes it might hurt someone.

In a nutshell, to answer the question of whether artificial intelligence can be compared to human intelligence: breaking down the many definitions of intelligence shows that we are still a long way from that point. In a survey conducted in 2018, AI researchers from renowned institutions estimated that human-level AI has a 50 percent chance of arriving within 45 years and a 10 percent chance of arriving within 9 years.

