Heliyon. 2021 Feb; 7(2): e06268.
Published online 2021 Feb 17. doi: 10.1016/j.heliyon.2021.e06268
PMCID: PMC7902546
PMID: 33665435

Towards an interdisciplinary framework about intelligence

Abstract

In recent years, advances in science, technology, and the way in which we view our world have led to an increasingly broad use of the term “intelligence”. As we learn more about biological systems, we find more and more examples of complex and precise adaptive behavior in animals and plants. Similarly, as we build more complex computational systems, we recognize the emergence of highly sophisticated structures capable of solving increasingly complex problems. These behaviors show characteristics in common with the sort of complex behaviors and learning capabilities we find in humans, and therefore it is common to see them referred to as “intelligent”. These analogies are problematic as the term intelligence is inextricably associated with human-like capabilities. While these issues have been discussed by leading researchers of AI and renowned psychologists and biologists highlighting the commonalities and differences between AI and biological intelligence, there have been few rigorous attempts to create an interdisciplinary approach to the modern problem of intelligence. This article proposes a comparative framework to discuss what we call “purposeful behavior”, a characteristic shared by systems capable of gathering and processing information from their surroundings and modifying their actions in order to fulfill a series of implicit or explicit goals. Our aim is twofold: on the one hand, the term purposeful behavior allows us to describe the behavior of these systems without using the term “intelligence”, avoiding the comparison with human capabilities. On the other hand, we hope that our framework encourages interdisciplinary discussion to help advance our understanding of the relationships among different systems and their capabilities.

Keywords: Theoretical framework, Artificial intelligence, Philosophy, Non-human intelligence

1. Introduction

Since ancient times, the mind and its function have been a subject of great interest to philosophers, scientists, and intellectuals of all kinds. Throughout history, the way we think about intelligence and reason has changed several times as new schools of philosophy have developed, our knowledge about the world has increased, and technology has advanced. For most of this time, the definitions and discourse about intelligence have had humans at their center, and logically so, since humans are the most intelligent systems we know. Much effort has been dedicated in the field of psychology to the dissection of human intelligence, as well as to the creation of models, tests, and scales of measurement that can help us understand its workings (e.g. [1], [2], [3], [4], [5]). But as science advances, there have been several attempts to expand the study of intelligence to non-human systems.

Intelligence has traditionally been considered a biological phenomenon [6]. Novel discoveries about the brain, its structure, and its function have led to the recognition that there are no fundamental differences between the human brain and that of most other animals [7]. Although human brains have greater capabilities in many respects when compared to those of non-human animals, the biological, chemical, and physical differences among them are a question of scale and reorganization of common structures rather than the presence of any unique element [8], [9]. In some cases, similarities in aspects of the brain's function hint at deep similarities between phylogenetically distant groups such as vertebrates and mollusks [10]. Nowadays, it is commonly accepted that non-human animals such as apes [11], corvids like ravens or crows [12], odontocetes like sperm or killer whales [13], and even insects [14] may possess a certain degree of intelligence and self-awareness.

Moreover, the fact that so much research and thought into the nature and characteristics of intelligence has been centered on humans has led to anthropocentric methods of evaluating intelligence in animals. This means, in most cases, that animals are considered intelligent only when they show human-like behavior and capabilities, usually related to the ability to build and use tools or to the presence of social skills [15]. However, intelligent behavior can exist in animals without a direct counterpart in humans. Indeed, some argue that "lower" animals such as insects and cephalopods can be considered to have capabilities normally associated with more complex organisms, such as consciousness [16] and even self-awareness [17]. In some cases, definitions have been broadened enough to encompass vegetal life [18]. Advances in the study of complex plant signaling and communication have led to the creation of a community of scientists studying "plant neurobiology" [19], and to controversial arguments around the term (see [20] and responses). This discussion has recently been revitalized by the discovery of mycorrhizal networks that exchange information and nutrients among trees across entire forests [21].

One of the most outstanding advances in so-called "artificial intelligence" has been AlphaZero, a deep reinforcement learning computer program able to teach itself to play Go with little information apart from the basic game rules [22]. AlphaZero discovered several known Go strategies and invented clever new ones before beating the best human Go players. However, despite its impressive capability, the program is limited to a very narrow task and cannot transfer knowledge from the game to other contexts. Despite these advances, and the discussions around them, artificial intelligence is not a well-defined term [23], and although "intelligence" is in its name, it is not clear whether it should be characterized using the same concept of intelligence used in psychology, biology, or everyday language [24].

Considering these different ideas and manifestations of intelligence, how can we make sense, in a single framework, of animal intelligence, plant intelligence, government intelligence agencies, artificial intelligence, smart devices, or even smart cities?

The topic of intelligence has gained great popularity in recent years. New discoveries in brain and computer sciences are also starting to provide new insights into how the physical and informational substrate that drives and supports the emergence of intelligence operates [9], [25], [26], [27], [28], [29], [30], [31], [32].

A large part of the discussion about intelligence is terminological. In the case of animal and plant intelligence, as well as artificial intelligence, most of the arguments seem to stem from a tension between broader and narrower concepts of intelligence. On the one hand, there is a concept of intelligence that refers to human-only or human-like abilities such as self-awareness or reason; on the other hand, there is a concept that refers to more general problem-solving capabilities. Notably, definitions and concepts vary greatly between different fields of research, which can present problems for interdisciplinary collaborations.

The aim of this article is to propose a general framework that describes the properties of intelligent-like behavior across different disciplines. This effort arises from the intuition that no single field of study provides a complete interpretation or an appropriate framework to make sense of the different manifestations of intelligence and an interdisciplinary approach is needed. In this sense, we are taking a perspective similar to the general systems theory (GST) founded by Ludwig von Bertalanffy [33], [34], [35], [36], [37], [38]. The GST conceptualizes systems as a set of interrelated and interdependent agents. The rules and dynamics of interaction between those agents explain and predict emergent behaviors within the system. Moreover, some of these rules and dynamics can be generalized across very different systems such as living beings, machines or societies to generate theoretical models that can be applicable and useful across different fields. The set of concepts and ideas of the GST are broadly applicable, and therefore facilitate interdisciplinary communication. Certainly, there is merit in this “big picture” perspective. Here, we take a similar approach to study modern perspectives of intelligence. It is important to note that we do not aim to solve the deep philosophical problem of the nature of reason or intelligence, but to create an abstract, comparative framework that captures the commonalities that can be identified in systems that show intelligence or intelligent behavior in the broader sense of the term.

In the following section, we provide short overviews of the problem of intelligence in each of the authors' fields of research, and explain why our proposed framework could provide a useful contribution for each of these fields. In the section “proposal of a novel conceptual framework” we describe our general approach to the problem of intelligence. First, we propose the term “purposeful behavior” as a stand-in for the concept of intelligence in a colloquial, broad sense when applied to non-human systems. Second, we describe what we consider to be some common characteristics of systems with purposeful behavior, that can be used to take comparative approaches to the study of the behavior of non-human systems. Finally, in the “discussion” section, we give some closing arguments on why we consider our framework to be a useful contribution for multidisciplinary studies about intelligence, and propose some avenues for further research.

2. Intelligence from different perspectives

The differences between natural and artificial intelligence are a very popular topic in mass and social media. One can find a vast amount of material presenting the perspectives of leading AI researchers and renowned psychologists, highlighting the commonalities and differences between AI and human intelligence, and addressing the question of whether non-human animals, organisms, or machines can be considered intelligent. In academic environments, however, this issue has received very little thorough treatment.

Human intelligence has been the subject of much dissection and decomposition into its main factors [1], and even in this case there is no clear, agreed-upon definition, with concepts such as the g factor being the subject of vigorous debate [9], [39]. Outside of the human realm, the definition of the concept of intelligence is even more problematic. Several attempts to characterize versatile, adaptive, or autonomous behavior in non-human systems have led to conceptual and semantic disagreements [20]. Humans are invariably (and rightfully) used as a reference for intelligence, and due to their advanced and specialized capabilities, adaptive or purposeful behavior in other systems is invariably compared to and measured against human capabilities [15].

The fact that these discussions exist means that there is interest in a rigorous, academic treatment of the concept of complex, adaptive, goal-oriented behavior in non-human systems. Although some attempts have been made previously [23], [40], [41], [42], this problem remains largely unaddressed.

Before suggesting our framework, we provide a brief overview of what intelligence means in different disciplines that have addressed this concept. We focus here on philosophy, computer science, and biology. This selection is admittedly arbitrary, as there are certainly other disciplines that have contributed substantially to the dialogue about intelligence, such as psychology, neuroscience, biochemistry, or education, to mention just a few. In order to establish the background that led to the creation of the framework, and to better explain the problem that this paper addresses, the next section introduces the problem from the perspective of the discipline of each author.

2.1. Intelligence from philosophy

Philosophy has a very important role in interdisciplinary dialogue. All sciences can be placed in a hierarchy of abstraction, and philosophy is one of the most abstract disciplines. This is not to say that some disciplines are better than others in an absolute sense, or that philosophy is the best among them. Rather, thanks to its more abstract nature, philosophy can put aside accidental elements and focus on the essential. This makes philosophy not only helpful but necessary for building interdisciplinary bridges and helping more concrete disciplines enter into dialogue with one another.

Today, many disciplines use the word “intelligence” in a broad and inconsistent way. We have to distinguish between the ordinary use of the word “intelligence” and the scientific use. The first is blurry and flexible; the second should be rigorously defined. The problem arises when the two uses are mixed, and science starts to talk about “intelligence” in an ordinary way.

In philosophy, words are rigorously defined. This is the case with “intelligence”, one of the most important topics in philosophy. The ordinary use of the word “intelligence” is very recent. The word was created in a philosophical context to talk about a very specific facet of human reason. It is important to take a look at what philosophers have said on this issue.

Intelligence is an old philosophical topic that has been discussed for centuries. As words have history, a good starting point is to discuss the etymology of the word, in order to reveal its original meaning when it first appeared. We also discuss the way in which the meaning of the word has changed throughout history.

The Greeks did not have just one word to speak of intelligence. They had several. Plato differentiated between two ways of using reason: discursive reason or dianoia (διάνοια) and intuitive reason or nous (νοῦς or νόος) [43], [44]. Discursive reason is exercised in the confrontation of theses: thesis A is opposed to thesis B, and the confrontation concludes with thesis C. Plato called it a dialectic exercise. This is the type of reasoning used in mathematical proofs and in logic exercises. It is a slow use of reason, which makes all the steps explicit before reaching a conclusion. Intuitive reason, instead, is a quick use of reason. It starts from the premises and reaches the conclusion without going through the whole deductive process. Intuition is a simplified form of deductive reasoning. Plato considered intuitive reason the highest form of human intelligence. He spoke of intuition as a direct contemplation of the truth. These two faculties are two ways of using reason. As reason is only human, Plato related these two modes to human beings alone.

Aristotle distinguished between three types of souls in the natural world [45]. First, the vegetative soul was typical of plants; its functions were growth and nutrition. Second, the sensitive soul was typical of animals; along with the previous functions, it added locomotion and sense perception. While some may argue that plants can also "sense", it must be clear that Aristotle was referring to the ability to receive information via complex sense organs. Third, the intellectual soul corresponded to humans; along with the previous functions, it had the ability to reason [46]. For Aristotle, the "soul" was not a spirit that survived the death of the body, as some understand it today. "Soul" only meant "anima": the principle of animation or the principle of life. Any living being had a soul. Having a soul was synonymous with having life. So all organisms in biology have a "soul" in Aristotelian terms. In fact, this is the origin of the word "animal": an organism that has an "anima" or soul [46]. Aristotle, like Plato and all later Roman and medieval philosophers, only assigned the rational soul to human beings. For them, intelligence and reason were almost synonymous.

But intelligence and reason are not the same thing as memory, sensory perception and imagination. These faculties fall within Aristotle's sensitive soul. He assigned these abilities to other creatures besides human beings [45]. This is consistent with recent scientific findings in animal memory, imagination and learning skills. But in this case, we are no longer speaking of intelligence in the strict sense, which implies reason, but about different cognitive abilities that do not imply reason.

It is also important to remember that neither Plato nor Aristotle talked about "intelligence". This was a later word, derived from the Latin "intellegere", which medieval Latin philosophers rendered as "to read inward". This gives us a first hint: intelligence implies a component of abstraction. Human language implies abstraction. Reading is the visual interpretation of human language. Therefore, reading also implies abstraction. To read inward is to use human language in the silent place of the inside. The etymology of the word "intelligence" tells us that to be intelligent is to be able to read inward.

The current use of the word "intelligence"

Words are social objects. The meaning of a word is determined by the way it is used within a society. Meaning is use [47]. The way we use the word "intelligence" today is also changing its meaning and adding something to its history [47]. Like all objects, words were created for a specific function, but they end up acquiring novel uses.

In today's world we make a very broad use of the word “intelligence”. We talk about “artificial intelligence”, “smart cars” and “multiple intelligences”. Superficially, we all understand each other. But when we perform a deeper analysis, nobody knows exactly what intelligence is.

It is difficult to reach a consensus about the definition of intelligence. This partially comes from a confusion between the ordinary use of the word "intelligence" and the scientific use. This misunderstanding also comes from the fact that biologists define intelligence in biological terms, computer scientists define intelligence from a computational point of view, and so on. The same word is being used in different ways, producing multiple meanings and multiple definitions.

The word “intelligence” was created with a strict definition that has remained relatively stable throughout the centuries. It also has a broad definition that departs from the strict one. Broad definitions exist due to the dynamic and flexible nature of human language. The strict definition is the original one, and the broad definition only exists in relation to it, as an analogy. Forgetting this will lead us to a dark and messy place.

The strict definition of intelligence

Intelligence in the strict sense is the ability to know with conscience [48]. Knowing with conscience implies awareness. This fully happens in rational acts (human animals) and does not fully happen in sensitive acts (non-human animals). In a strict sense, we can only speak of human intelligence. In an analogous sense, we may still speak of intelligence. However, the ability to know with conscience is not possessed by plants or machines.

Computational theories of the mind define intelligence in the strict sense as the ability to process information. But the ability to know with conscience includes the ability to process information. The act of knowing includes information processing, but it is not limited to it. To know is not only to process information, but to acquire it with awareness. In terms of classical knowledge theory, to know is “to immaterially possess the forms of things”. This is a very important idea, which is well-established in philosophy [43], [45], [49], [50].

The broad definition of intelligence

Intelligence, in a broad sense, is the ability to process information. This can be applied to plants, machines, cells, etc. It does not imply knowledge. Therefore, it does not imply consciousness. Plants, computers and cells are capable of processing information, but not of acquiring it in a conscious, abstract and active way. That is, under the strict definition in philosophy, they do not know information.

Therefore, there is a difference between “containing” information and “knowing” information [51]:

-To contain information is to possess it passively. Computers and DNA contain information, in the form of bits and nucleotides. Computers do not have consciousness, a capacity for abstraction, or an epistemologically active dimension, and are therefore not capable of knowing information. They only contain it. This thesis belongs to a well-established tradition of the philosophical theory of knowledge [50], [52], [53].

-To know information implies consciousness, a capacity for abstraction, and an epistemologically active dimension. To know is not a passive thing that happens, as if a book were poured into our head. To know is something active [54]. Consciousness is an immaterial element (not spirituality) that, in a strict sense, only occurs in humans, and in an analogous way may occur in non-human animals.

In this broad definition, we have used the word "process" instead of "contain". Plants, computers and cells not only contain information, but also reproduce it. Thus, processing information implies the ability to reproduce it.

It is necessary to keep in mind the distinction between the strict definition of intelligence and the broad one. Philosophy can provide some distinctions to correct the current misuse of the word. It is also important to have an overview of the history of the word "intelligence". As the word "intelligence" is inevitably used by different disciplines, a good idea would be to go back to Aristotle's approach. He identified something common to human animals, non-human animals and plants, while distinguishing their differences. Aristotle's ideas would be a good toolbox with which to build a broader framework.

2.2. The importance of a general conceptual framework of intelligence from the point of view of computer science

The term "artificial intelligence" (AI) was coined by John McCarthy in connection with the first Dartmouth conference, held in the summer of 1956. At that time, artificial intelligence was understood as "the science and engineering of making intelligent machines" [55]. Today AI is an interdisciplinary research field with a broad scope.

Throughout the history of computer science, there have been intensive philosophical and theoretical discussions over whether a computer can think or reason [56]. This question remains a frequent source of debate at the frontier between computer science, philosophy, and neuroscience. The impressive results of artificial intelligence research in recent years have revived the question of how to define intelligence. While the precise definition of the word "intelligence" is a subject of intense debate, a recurrent idea in the field of computer science is that intelligence refers to the capacity of agents (robots, animals, or computer programs) to receive percepts from the environment (information in multiple forms, such as images, sounds, or numbers) and use them to perform actions [57], [58].
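
To make this percept-to-action picture concrete, the listing below gives a minimal sketch of a purely reactive agent in Python. The ReflexAgent class and its rule table are hypothetical constructs introduced here for illustration only; they are not taken from the references above.

    # A minimal sketch of the percept -> action agent abstraction.
    # All names are illustrative assumptions, not an implementation
    # from any specific AI library.

    class ReflexAgent:
        """An agent that maps percepts from its environment to actions."""

        def __init__(self, rules):
            # rules: mapping from a percept to the action it triggers
            self.rules = rules

        def act(self, percept):
            # A purely reactive policy: look the percept up in the rule table.
            return self.rules.get(percept, "do-nothing")

    # Example: a thermostat-like agent with a single percept channel.
    thermostat = ReflexAgent({"too-cold": "heater-on", "too-hot": "heater-off"})
    print(thermostat.act("too-cold"))  # -> "heater-on"

More capable agents replace the fixed rule table with learned policies, which is where machine learning enters the picture.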

Traditionally, the only way to make a computer perform a task was to write down a detailed algorithm indicating what to do in each possible situation. Today, machine-learning (ML) algorithms are capable of more abstract operations, such as finding patterns and making inferences and decisions without explicitly programmed instructions or rules. Over the last few years, more and more examples have shown that artificial intelligence algorithms are capable of making complex decisions in challenging, dynamic environments.
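
The contrast between hand-coded rules and learned behavior can be illustrated with a toy sketch: the perceptron below, one of the oldest ML algorithms, discovers a decision threshold from labeled examples instead of having the rule written in by a programmer. The data set, learning rate, and number of training passes are assumptions chosen purely for illustration.

    # Labeled examples: (input, label), where label 1 means "above threshold".
    data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

    w, b, lr = 0.0, 0.0, 0.1  # weight, bias, learning rate

    for _ in range(100):  # repeatedly adjust w and b from the examples
        for x, y in data:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x  # nudge parameters toward the correct output
            b += lr * (y - pred)

    print(1 if w * 0.8 + b > 0 else 0)  # classifies an unseen input -> 1

Nowhere in this code is the threshold rule stated explicitly; it emerges from the data, which is the essential shift that ML brought to computer science.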

AI and ML have been intensively put into practice in recent years with the advent of technological progress [57]. AI technologies are built on mathematical methods that leverage increasing computing power to deliver faster and more accurate models for forecasting, or to extract richer representations and combinations from large data sets. Whether we realize it or not, artificial intelligence is becoming ubiquitous, playing an active role in our daily lives. AI has even become influential in our democracies, as personality traits are highly predictable by algorithms that use social media digital records of human behavior [59] and can therefore be used to influence future behavior. In recent years, there has been an attempt to differentiate between artificial general intelligence (AGI), or "strong AI", and classical AI, or "narrow AI". AGI refers to the creation of systems that carry out "intelligent" behavior in general contexts, while classical or "narrow" AI addresses specific contexts, such as a program capable of beating the world champion of Go, or automated vehicles [60].

Human intelligence is what shapes the emergence and adoption of artificial intelligence. It is human intelligence that seeks to ask 'why' and considers 'what if' through critical thinking. However, it is still hard to gain a comprehensive perspective on the potential impact of AI in the future. As engineering and technology continue to confront complex problems demanding higher efficiency and accuracy, human expertise still plays a critical role in designing and utilizing AI technology.

Probably the best way to conceptualize the differences in capabilities and scope among different ideas of intelligence is through their complementarity rather than as a competition. A better understanding of intelligence in a broader sense may provide a clearer picture of the capabilities of AI and, in turn, a better idea of its potential capabilities and threats in the future, particularly since it is often said in the field of AI that we are on the cusp of a major technological transformation of society [61], [62].

2.3. The need for a framework in biology

Traditional definitions of intelligence are challenging to apply in non-human biology. This is because they often refer to human-only capabilities, or capabilities at which humans excel. Abstract thought, symbolic thinking, and language are good examples [32], [63]. But even if we do not call it intelligence, it is obvious that different animals have different degrees of cognitive capabilities.

It is not useful for a biologist to classify all non-human organisms as “not intelligent”, and it does not make sense based on what we see in nature and evolution. If we assume that the evolution of human intelligence followed the rules of any other biological character, then it did not just appear out of nowhere. Before human intelligence, there had to be slightly-lower-than-human intelligence, and slightly-lower-than-slightly-lower-than-human intelligence before that, until we reach the level of mental capabilities we see in nonhuman primates [64]. Human brains are scaled-up primate brains [65], and there is no reason to think human cognitive capabilities are substantially different than scaled-up primate cognitive capabilities.

A logical solution, then, is a scale, but here anthropocentrism presents a real problem beyond simple philosophical disagreements. Humans are specialized animals, and our intelligence is, to a certain extent, oriented to these specializations, such as language, tool use, and social interaction [66]. Tying intelligence to these characteristics severely underestimates the range of niche-appropriate behaviors and strategies that can be accomplished with the full range of bodies and nervous systems that exist in nature [15]. Each animal, and even each living organism, possesses its own Umwelt [67], however simple, and needs tools to survive given a physical body and an external environment. These tools include cognitive capabilities that generate and guide adaptive behavior. Therefore, in order to build a useful scale, we need to find the commonalities between what drives niche-appropriate behaviors.

In this regard, a bottom-up approach (evolutionarily speaking) could be useful. Instead of taking humans and seeing what other animals lack, it is possible to start from very simple organisms and consider what they have been adding to their bodies and nervous systems in order to improve their behavior in the face of a changing environment. Here, the old essay title turned adage for biologists, "Nothing in Biology Makes Sense Except in the Light of Evolution" [68], makes a lot of sense. Metacognition, abstract thought, learning, and other advanced processes are not modules that appear suddenly in a complete form, but progressions from simpler systems, which may or may not have fulfilled similar functions in more ancestral states [8]. The fact that such systems are not created de novo during evolution means that understanding their evolutionary history can go a long way towards understanding how and why they developed the way they did, and give us important insights into their function.

Moreover, assuming human intelligence shares a common base with animal cognitive abilities, a comparative approach may also be useful in order to understand human intelligence by revealing general principles of the way in which cognitive abilities appear and develop in different animal groups or even non-animal living organisms. It is generally easier to study such general principles in simpler systems and extrapolate to more complex cases than the other way around. This type of approach has been useful in neuroscience to find common organizational principles of neural structures and circuits, as well as evolutionarily conserved or divergent systems [69]. A similar approach could also prove useful in the study of the appearance and evolution of different cognitive abilities.

An evolutionary, comparative approach necessarily requires a certain abstraction of the capabilities we are evaluating, since we need to be able to speak in similar terms about systems that vary a great deal in their nature, complexity, and ecological needs. From the standpoint of biology, the framework presented in this paper is an attempt to create such an abstraction, a way to speak in similar terms of the cognitive, or at least behavioral, capabilities of a plant, a human, a squid, and a diatom. It asks which common problems brains, or similar information-managing systems throughout the tree of life, are trying to solve, and tries to create a language to compare the solutions. It is very likely that, if used, this framework will, in time, need to change and become more complex in order to accommodate the diversity we find in nature. However, it is important that this process occurs organically from a base as minimalistic and unrestricted as possible, and with this in mind, the proposed framework is likely to help comparative studies.

3. Proposal of a novel conceptual framework

As discussed in the previous section, several fields of research may benefit from a conceptual framework that facilitates the discussion and study of intelligent-like behavior in non-human systems. However, the fact that the term “intelligence” in a strict sense is tied to human intellect is a major roadblock. The word “intelligence” causes trouble due to semantic arguments around the definition of the word, as well as due to specific expectations about intelligent behavior derived from human-specific capabilities.

To refer to the behavior of such systems while avoiding the connotations of the word “intelligence”, we suggest “purposeful behavior” (PB) as a term that can encompass any system that shows behavior that is directed towards some sort of goal. Such a goal can be very specific (e.g. play chess or drive a car) or more nebulous (stay alive, maintain homeostasis). Thus, instead of studying what separates humans from other autonomous behaving systems, we attempt to extract some common characteristics that are shared, to some extent, by systems with PB. Such common characteristics become valid dimensions of PB if they are shown to be developed to different levels in different systems, and to vary independently, as these properties are necessary for them to be useful in a comparative context.

The first proposed dimension is access to information. Any PB-capable system needs to gather information in order to trigger actions and make decisions. Note that this can take many forms, such as sensory organs, memory, the capability to communicate with other systems or listen in on them, etc. We can see this dimension varying when we look at different systems. For example, a thermostat is an example of a very simple PB-capable system with very limited access to information (a temperature reading), while an Arduino (a simple programmable circuit board) shows more capability in this dimension, as it may have access to several inputs. At the high end of artificial systems, we could consider DeepDream or self-driving cars, which have access to the Internet and, with it, to immense quantities of information.

Biological systems also show different degrees of access to information, from very simple unicellular organisms to animals with sensory organs of varying number and complexity, or organisms with short- and long-term memory or the ability to communicate with conspecifics. Examples of organisms with very high access to information are killer whales (Orcinus orca), animals with sophisticated senses, complex intraspecific means of communication, and extended parental care. Killer whales have also been shown to develop complex hunting techniques and teach them to juveniles, which represents another source of information (culture) for these animals. Access to information is necessary in order to conduct PB, and different systems, natural and artificial, can be compared to each other in their access to information, which suggests that this is a valid dimension of PB.

Another characteristic of PB that can be considered is information processing, which reflects the ability of a system to filter and transform available information, as well as the amount of “floating information” or working memory of a given system. Any system capable of PB does not simply receive information passively but uses it to produce reactions or make decisions. Therefore, some processing of the received information needs to take place at some point. Working memory is, in fact, considered a vital component of human intelligence [1]. In this case, we can also see that the quantity and speed with which information can be processed varies among systems. A thermostat can only receive one useful input and transform it into output in a purely reactive manner, as do other reactive systems such as simple biological organisms (algae or nematodes for example). More complex systems such as semi-autonomous vacuum cleaners and most animals show the ability to solve simple problems and make decisions based on the available information. At higher levels, we may place animals such as odontocetes, which are able to process multi-sensory information to navigate a 3D environment effectively and are able to use communication to keep track of complex social interactions. Finally, advanced, PB-capable artificial systems such as computers are able to carry out thousands of mathematical operations per second, or to play in a week more games of Go than humans have ever played over their whole existence. As information processing is a necessary characteristic for PB and has a range of variation across systems, it can be considered a dimension to be used in our comparative framework.

The last characteristic refers to the possible amount of behaviors that a system can generate given its structure and its environment. We suggest the name “behavioral space”. Every PB-capable system needs to have a behavioral space, otherwise it would be incapable of behavior, purposeful or not. There are large variations in the extent of the behavioral spaces of known systems. Behavioral space is generally large in biological systems as a result of the way in which they have evolved.

Living beings are usually surrounded by a complex and unpredictable environment. They are embodied systems with many moving parts, which considerably enlarges the ways in which they can react to their environment or interact with it when compared to artificial systems. Even the simplest single-celled organism has embedded within itself a large number of regulatory pathways to deal with changes in external and internal conditions.

More complex organisms also have general heuristics or instinctive behaviors that they can use to act in different situations. A hairless ape can meet a dog for the first time and activate a stress response; its whole body will be prepared to undertake a series of pre-programmed "routines" such as running away, screaming at the dog, or throwing rocks at it. But the same hairless ape can learn that the dog is friendly, and activate another series of routines that favor positive social interactions. The interaction between predetermined heuristics, environmental variables, the inherent variability of biological organisms, and flexible neural systems is what gives rise to innovative and even creative behaviors. Finally, the ability to manipulate the environment opens up large behavioral spaces, as is the case with coleoid cephalopods, corvids, termites, bees, and primates. The need for a behavioral space in PB, and the large variability we see across systems, makes it a useful dimension to consider in our framework.

Artificial systems also have behavioral spaces, but their construction tends to be much more deterministic than that of biological systems. Moreover, they tend to be built for specific functions, and their behavioral space is thus very limited. However, there is also variation to be seen. At the lowest end, we would have a thermostat, which can only undertake a single behavior. Operating systems are examples of complex software that perform a wide range of tasks and handle the functioning of a machine that can be used for very different purposes. At the top end of the behavioral space in artificial systems we find creations such as self-driving cars, software assistants (e.g. Alexa), and autonomous robots such as the Curiosity rover. All these systems are designed to perform complex, varied tasks in a changing environment, demands reminiscent of biological systems. A key requirement for the use of behavioral space as a comparative variable is that the breadth of behaviors is measured in absolute terms. One could argue that, within its environment of a Go game, AlphaGo is omnipotent, as the only thing that can be done in its little universe is moving Go pieces, and it can move them anywhere. However, if we measure behavioral space in absolute terms, it is possible for a crab to interact with a physical Go board in the same number of ways that AlphaGo can interact with its virtual equivalent. That same crab can also regulate its internal environment, capture prey, molt, and dig holes in the sand. Although a crab is probably never going to beat a human at Go, due to limitations in its processing power and access to information, its behavioral space is still vastly larger than the one accessible to AlphaGo.

It is important to emphasize that the three factors described above are not completely separate, but a proposed series of specific dimensions for the more general and abstract concept of "purposeful behavior". Any system that shows purposeful behavior must have at least minimal capabilities in all three of these dimensions. For the purposes of our framework, we can place the thermostat at the lowest level of PB. It has minimal access to information, relying on the readings from a single temperature sensor. It has minimal processing power, as it can only compare the value it receives with a predetermined value to check if it is higher or lower. Finally, its behavioral space is minimal, as it can only turn a heating circuit on or off. Thus, a thermostat has a low value in all three dimensions, but it has all three. A system that lacks one or more of these dimensions cannot, therefore, be considered to have PB. A library has access to large quantities of information but is unable to do anything with it. A simple calculator can process complex input data into outputs but has no behavioral autonomy at all. Any non-robotic multi-tool is an example of a system with a behavioral space but no ability to independently gather or process information.
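
To make the structure of the framework concrete, the sketch below encodes the three dimensions as a simple data type and the minimal-capability requirement as a predicate. The numeric scale and the example scores are illustrative assumptions only; we do not propose a formal measurement scale here.

    from dataclasses import dataclass

    @dataclass
    class PBProfile:
        name: str
        access_to_information: float   # breadth of inputs, memory, communication
        information_processing: float  # capacity to filter and transform information
        behavioral_space: float        # range of behaviors the system can generate

        def is_pb_capable(self) -> bool:
            # A system must score above zero on all three dimensions;
            # lacking any one of them rules out purposeful behavior.
            return min(self.access_to_information,
                       self.information_processing,
                       self.behavioral_space) > 0

    thermostat = PBProfile("thermostat", 0.1, 0.1, 0.1)
    library = PBProfile("library", 0.9, 0.0, 0.0)  # stores information, cannot act

    print(thermostat.is_pb_capable())  # True: minimal, but present, in all three
    print(library.is_pb_capable())     # False: no processing, no behavior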

All of these dimensions will tend to increase together to some extent, as increases in one dimension may require increases in another in order to be fully effective. Increasing access to information may also require an increase in processing power in order to filter the new information and determine which is relevant for the goals of the system. In the same manner, events that increase behavioral space may require an increase in information availability. For example, the evolution of manipulating arms in mollusks greatly increased their behavioral space by letting them influence their environment in new ways. However, it also led to an increase in information, as sensory information from these arms was available, and information for arm movement patterns needed to be stored. Finally, the new behavioral possibilities and information also led to an increase in processing power, which is required to handle the new sensory information and manage the new behavioral library.

Moreover, an increase in two or more dimensions at the same time may have synergistic effects that give rise to new capabilities. For example, information storage is a simple way to increase access to information gathered in the past. However, in the presence of high information-processing capabilities, that information can be filtered, examined, and transformed in order to find the elements most relevant to the purpose of the system and discard the irrelevant parts. Therefore, the combination of information processing and access to information allows a system to learn rather than merely memorize or store information. Information processing also makes it possible to generate predictions on the basis of past information, which is as close as we can get to accessing future information. An example of a system with high access to information, high processing power, and low behavioral space would be a computer program such as AlphaGo, able to use minimal information (the rules of Go) to generate large quantities of new relevant data (games of Go against itself), store features, and use them to generate purposeful behavior (play Go and hopefully win). AlphaGo, however, has a limited behavioral space, as its algorithms are designed specifically for Go-related inputs and outputs, and would require human modification before being able to perform other tasks. Learning and prediction are therefore two examples of features that arise from the increase of both access to information and information processing.

Now let us consider the combination of behavioral space plus access to information. Systems with low information processing lack proactivity, since proactivity requires the ability to anticipate problems and devise solutions, which in turn necessitates an integrated model of the system's own capabilities and its environment, which again requires processing information. Therefore, systems with low information processing tend to be only reactive. Although their potential behavioral space may be large, their reactions are limited by their access to information. These systems need access to information, on the one hand, to detect the signals that elicit the reactive behavior and, on the other hand, to store predetermined reactions or behavioral libraries that can be used in different situations. This behavioral library can be very simple, being a simple reflection of the processes of the embodied system (balancing of osmotic pressure) or of genetically encoded biochemical pathways and neural circuits (stress responses, web-spinning in spiders). A system with a large behavioral space, enough information to store responses to a large number of possible environmental cues, and the ability to access those cues in a precise and timely manner should show fine-tuned, reflexive, adaptive reactions. Therefore, the simultaneous increase in available information and behavioral space favors homeostasis: pre-programmed adaptive responses to changing situations.

Let us consider an example. A forest-wide mycorrhizal network is an example of a system with high levels of access to information and high behavioral space. It consists of a large number of trees and plants that are interconnected via underground networks of symbiotic fungi [21]. They use these networks to exchange carbon, water, and nutrients between plants, and it has been shown that larger trees can provide saplings with necessary nutrients that improve their viability. In addition, stress signals can also be exchanged via the network, and insect infestation in a single tree can elicit fast (for a plant) defensive behavioral responses in nearby and distant members of the network. Ecosystem-wide mycorrhizal networks, therefore, show fine-tuned reactions to a large number of environmental variables thanks to the collective access to information of its members, as well as a large combined behavioral space that allows for precise self-regulation [70].

The final combination to consider is between the dimensions of behavioral space and processing power. A system that has a high value in both dimensions not only has a broad range of behaviors, but also the ability to create new behaviors, or what we could call flexibility or ingenuity. A system with flexibility goes beyond a simple library of reflexive behaviors and reactions: it is, to a certain extent, aware of its capabilities and able to use them in a creative way. An example of systems with high processing power and behavioral space but (relatively) low access to information is provided by coleoid cephalopods such as cuttlefish or octopuses. Although they have large brains for invertebrates, their memories are short, and they cannot accumulate much information in their short lives. In addition, they are limited to eyesight for any long-range information gathering, and they are mostly solitary animals. However, they are famously adept at using their arms and siphons for manipulation in innovative ways, including "playing" with objects to explore what they can do with them [71], and they are able to use their active camouflage flexibly depending on their current goals [72].

Given the nonlinear interaction between our three dimensions, it would be expected that the synergistic effects of high scores in all three dimensions would drastically increase the capabilities of a given system. In this way, systems with high access to information and high processing power can create complex models of the world that include themselves. Combined with a high behavioral space, they can enact complex and precise behaviors both proactively and reactively. They are also able to evaluate their own behavioral space and predict the outcomes of their behavior and the behavior of others. With the ability to make precise predictions about the future comes the ability to plan in advance.

An example of a system with high scores in all three dimensions would be corvids. Corvids have large brains and sharp senses. They also live in complex social groups and show elaborate intraspecific communication [73]. They show a large array of complex behaviors: they can fly, walk, hunt, gather, communicate with conspecifics, and also use their beaks to manipulate their environment, including the use of crude tools [12]. Notably, corvids have episodic memory and can imagine their own future actions and those of others. They create food caches and can remember when they should retrieve them based on when they created the cache and how perishable the food they put in it is [74]. Some species of corvids pilfer the caches of conspecifics, and animals of those species know to wait until nobody is looking, hide behind an object when they are in the process of creating a cache, or even create decoy caches with stones when another bird is looking. Moreover, cache pilferers were able to project their experience onto other animals and were quick to recover and recache their food if they remembered another bird watching while they were creating the original cache [75].

Our framework, summarized in Fig. 1, is sufficient to explain advanced purposeful behavior in systems like the above-mentioned corvids, and gives us a tool to compare them to other, different systems and to discuss what sorts of changes we would expect to see if we increased or decreased one or more of these dimensions. What capabilities would appear or disappear? What would be the minimum requirements for the expression of specific capabilities such as episodic memory, theory of mind, or tool use? Although one could certainly subdivide our proposed dimensions or create more, we find that a minimalistic, abstract framework is more useful to start the discussion and serve as a base to build upon.

Figure 1

Graphical summary of the proposed framework. Each of the circles represents the dimensions used to characterize purposeful behavior (PB). Outside the circles in bold are labels that indicate the emergent properties of the interaction between each pair of dimensions. Outside the circles in italics are examples of objects or systems that can be considered to have an elevated value in each dimension but not in the other two, and cannot, therefore, be considered PB-capable systems.
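
The pairwise synergies labeled in Fig. 1 can be sketched in the same style as the earlier listing. The scores and the "high" threshold below are illustrative assumptions chosen to echo the examples discussed above; PBProfile is redefined as a namedtuple so the snippet runs on its own.

    from collections import namedtuple

    PBProfile = namedtuple(
        "PBProfile",
        ["name", "access_to_information", "information_processing", "behavioral_space"],
    )

    def emergent_properties(p, high=0.7):
        """Label the capabilities expected when a pair of dimensions is high."""
        labels = []
        if p.access_to_information >= high and p.information_processing >= high:
            labels.append("learning and prediction")
        if p.access_to_information >= high and p.behavioral_space >= high:
            labels.append("homeostasis (fine-tuned adaptive reactions)")
        if p.information_processing >= high and p.behavioral_space >= high:
            labels.append("flexibility / ingenuity")
        return labels

    # Hypothetical scores echoing the examples discussed in the text.
    systems = [
        PBProfile("AlphaGo", 0.9, 0.9, 0.1),              # learns, but narrow behavior
        PBProfile("mycorrhizal network", 0.8, 0.2, 0.8),  # homeostasis
        PBProfile("octopus", 0.4, 0.9, 0.9),              # flexibility
        PBProfile("corvid", 0.9, 0.9, 0.9),               # all three synergies
    ]

    for s in systems:
        print(s.name, "->", emergent_properties(s))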

4. Discussion

An example of the type of problems that arise when specific concepts tied to certain systems and capabilities are "recycled" and applied to others is the current controversy in plant science over whether concepts such as "intelligence" and "consciousness" are applicable to plants [76], [77]. Specifically, there is a semantic component to the discussion, as terms such as "neurobiology", "intelligence" and "consciousness" are intrinsically tied to animal and human physiology and capabilities. This is a case where an overly broad use of narrow terms leads to confusion, all the more so when the discussion breaks out of academic circles into the general public.

Due to situations such as this one, we consider that there is a need for an approach that avoids the issue of applying a concept such as “intelligence”, deeply tied to the human mind, to the behavior of non-human systems. To this end, we propose the term “purposeful behavior” as a broader concept that encompasses any behavior that is aimed at responding to the environment to fulfill implicit or explicit goals. Additionally, this framework can be used to describe and compare the behavioral capabilities of diverse non-human systems.

4.1. On humans in this framework

The result of following the logic of our framework to the bitter end is that humans are scaled-up crows, as their capabilities, as contemplated by our system, are similar. It is possible to speculate that a high enough score in our three dimensions would lead to the emergence of even more advanced capabilities, as the large amounts of information, processing power and behavioral possibilities would lead to the appearance of more and more advanced environmental and internal models of the past, present and future. However, these are speculations, and more likely to spark unhelpful discussions than to facilitate communication among disciplines.

As mentioned earlier, our framework does not specifically address human intelligence nor any other specific intelligence. It is designed to compare mostly natural (non-human) and artificial systems with a high level of abstraction. As a consequence of its generality, it does not possess the specificity to characterize the emergence of human-level capabilities such as reason or symbolic thinking.

This is by design, as we intended to characterize a wide range of purposeful behaviors, not necessarily human-level intelligence. However, humans do show purposeful behavior: they have access to information, they have processing power, and they have a behavioral space. All three of these dimensions could then be evaluated in humans and compared with other systems. For example, using our framework it is possible to ask whether a forest-wide mycorrhizal network has more or less access to information than the average human, whether killer whales have more processing power than humans, or where Alexa would fit in each dimension in comparison to other systems.

4.2. Purposeful behavior and organizations

Our framework is consistent with the general systems theory (GST), founded by Ludwig von Bertalanffy and further developed by other authors such as Humberto Maturana, William Ross Ashby, and Wolfgang Wieser [33], [34], [35], [36], [37], [38]. The set of concepts and ideas of the GST are broadly applicable and have an interdisciplinary origin. These concepts and ideas attempt to describe, explain, and predict emergent behavior in systems formed by interrelated and interdependent agents. Under certain conditions and constraints imposed on the systems and the way they relate to their environment, they can learn and adapt their behavior. Here, we have used those ideas when considering institutions and human organizations as systems with purposeful behavior. Organizations and their capabilities represent a fruitful avenue for the application and adjustment of this framework. Human organizations can be considered systems with purposeful behavior, as they usually operate with a series of implicit or explicit missions, values, and goals. These goals can range from setting timetables for play and tournaments in the case of a chess club, to coordinating information campaigns in the case of an activist NGO, to gathering information and evaluating possible opportunities and threats to a country in the case of intelligence agencies.

As systems capable of purposeful behavior, organizations can be evaluated on the basis of our three dimensions. They are able to access information, from perhaps just the names and telephone numbers of the people involved in a chess club, to the huge volumes of data processed by intelligence agencies. There are also radical differences in information processing, which are not always correlated with access to information. For example, bureaucratic organizations frequently suffer delays as the amount of information they have available grows faster than their ability to process it. Finally, like other systems, organizations have a behavioral space, which is usually shaped in part by the laws under which they operate, the restrictiveness of the organization's policies, and the centralization or decentralization of its internal structure. Behavioral space, as in other systems, can interact with the other two dimensions. A good example is the recent COVID-19 pandemic, during which country-level governments, organizations with a very large behavioral space, were limited in their ability to fulfill their goals by insufficient access to accurate information about the novel coronavirus.

Therefore, in principle, it is possible to apply our framework to human organizations. In the same way, we can evaluate animal groups (schools of fish, flocks of birds), insect collectives (ant colonies), plant collectives (mycorrhizal networks), or cell collectives. The main issue is that human organizations are hard to compare to systems like a squid or a self-driving car, as their access to information, processing capabilities, and behavioral space may be orders of magnitude larger. However, it would be possible to use a framework such as the one presented here for comparative discussions of organizations, and of how different structures, equipment, and policies can affect the different dimensions of purposeful behavior, and with them the organizations' capabilities and their success in achieving their goals. On the other hand, discussions about human organizations are interesting in themselves, as their capabilities are determined by both biological (human) and technological systems working together. Moreover, their goals are determined by human design, as in the case of artificial systems, but they are also shaped by human behavior and therefore may show characteristics of organic systems.

5. Conclusion

To better conceptualize the different ways in which the term intelligence is understood today, new ways of thinking about it are needed. Accumulating more examples of smart behavior in machines, animals, or plants will not, by itself, make sense of our current use of the word “intelligence”. It seems timely to conceive broader ways to conceptualize and talk about intelligence, with the purpose of defining common ground between fields and allowing different ideas about intelligence to interact and mutually enrich each other.

To a certain extent, our proposal is inspired by Aristotle's idea of the three types of “souls” or animae [46], [50]. As we have previously explained, he stated that there is something common to plants, non-human animals, and human animals: their “soul” or principle of movement (anima). However, that “soul” or anima is different in each case and represents different levels of behavioral complexity. Aristotle, like GST, acknowledged a hierarchy of beings or “systems” that have different properties while sharing a common basis. In this sense, we are bringing back an old idea and expanding it to better conceptualize “intelligence” and its commonalities among different systems, while avoiding the semantic and philosophical disputes that arise when the word “intelligence” is used in an overly broad manner.

While every field is moving towards increasing sophistication in its own understanding of the behavior of its study systems, this article attempts to proceed in the opposite direction: “zooming out” and raising the level of abstraction, with the intention of helping to bridge the gap between the existing conceptual understandings of intelligence (in the broadest sense) in different disciplines. We believe that even though there is no universally accepted definition nor reliable measure of intelligence, some progress can be made, and there is value in at least creating a way in which these concepts can be discussed without being dragged into arguments about semantics.

Referring to “intelligent” behavior in non-human systems may be reasonable under a colloquial, broad-sense use of the term, but because strict definitions of intelligence are tied to the capabilities of the human mind, the term becomes problematic in an academic environment. Yet in the absence of the term “intelligence”, there is no adequate way to talk about this set of behaviors across disciplines. In this paper, we have proposed the term “purposeful behavior” and created a framework that tries to capture some generalities of these behaviors and the capabilities that make them possible.

Our framework was constructed from ideas and discussions found in the literature of three fields of knowledge where the term “intelligence” appears often: philosophy, computer science, and biology. We hope it will complement existing attempts to integrate different ideas about intelligence in the broad sense, and that it represents a first step towards an approach that can be used across the disciplinary spectrum. Our aim is to help people in different fields of knowledge develop a common vocabulary and a set of conceptual tools to study the commonalities of purposeful behavior in non-human systems as disparate as animals, plants, or software. A more ambitious goal is for the framework to benefit the individual disciplines, or society in general; for instance, it may help guide the development of new assessments of intelligence-like behavior and capabilities in systems designed for multiple purposes.

It is our hope that this type of multidisciplinary dialogue will contribute to the development of conceptual tools that better frame basic questions about intelligence in the broad and strict senses, elicit conversation about the topic, open minds, and foster new versions of this initial proposal. We especially welcome the possibility of discussing this framework with specialists from other disciplines interested in expanding it to include humans or organizations, as well as exploring more complex or specific dimensions that can lead to novel avenues of thought and research.

Declarations

Author contribution statement

All authors listed have significantly contributed to the development and the writing of this article.

Funding statement

This work was supported by the National Fund for Scientific and Technological Development (FONDECYT Grant No: 11181072 and 3180149) and Fulbright scholarship ID E0588216.

Data availability statement

No data was used for the research described in the article.

Declaration of interests statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.

Acknowledgements

N.P.C. is funded by FONDECYT postdoctoral grant number 3180149.

B.S.T. thanks Cruz González-Ayesta for her valuable insights on the topic of intelligence. B.S.T. is funded by a graduate Fulbright scholarship with grantee ID E0588216.

R.C. acknowledges financial support from FONDECYT Iniciación 2018 Proyecto 11181072.

References

1. Kovacs Kristof, Conway Andrew R.A. Process overlap theory: a unified account of the general factor of intelligence. Psychol. Inq. 2016 [Google Scholar]
2. Knight Rex, Piaget Jean, Piercy M., Berlyne D.E. The psychology of intelligence. Philos. Q. 1951 [Google Scholar]
3. Flynn James R. Cambridge University Press; 2007. What Is Intelligence?: Beyond the Flynn Effect. [Google Scholar]
4. Heuer Richards J. Center for the Study of Intelligence; 1999. Psychology of Intelligence Analysis. [Google Scholar]
5. Hampshire Adam, Highfield Roger R., Parkin Beth L., Owen Adrian M. Fractionating human intelligence. Neuron. 2012 [PubMed] [Google Scholar]
6. Haier Richard J. Cambridge University Press; 2016. The Neuroscience of Intelligence. [Google Scholar]
7. Lexcellent Christian. 2019. Animal Intelligence. (Springer Briefs in Applied Sciences and Technology). [Google Scholar]
8. Cesario Joseph, Johnson David J., Eisthen Heather L. Your brain is not an onion with a tiny reptile inside. Curr. Dir. Psychol. Sci. 2020;29(3) [Google Scholar]
9. Duncan John, Seitz Rüdiger J., Kolodny Jonathan, Bor Daniel, Herzog Hans, Ahmed Ayesha, Newell Fiona N., Emslie Hazel. A neural basis for general intelligence. Science. 2000 [PubMed] [Google Scholar]
10. Edsinger Eric, Dölen Gül. A conserved role for serotonergic neurotransmission in mediating social behavior in octopus. Curr. Biol. 2018 [PubMed] [Google Scholar]
11. Kaufman Allison B., Reynolds Matthew R., Kaufman Alan S. The structure of ape (Hominoidea) intelligence. J. Comp. Psychol. 2019 [PubMed] [Google Scholar]
12. Emery Nathan J., Clayton Nicola S. The mentality of crows: convergent evolution of intelligence in corvids and apes. Science. 2004 [PubMed] [Google Scholar]
13. Marino Lori. Convergence of complex cognitive abilities in cetaceans and primates. Brain Behav. Evol. 2002 [PubMed] [Google Scholar]
14. Li Xiaodong, Clerc Maurice. 2019. Swarm Intelligence. (International Series in Operations Research and Management Science). [Google Scholar]
15. Barrett Louise. Why brains are not computers, why behaviorism is not satanism, and why dolphins are not aquatic apes. Behav. Anal. 2016 [PMC free article] [PubMed] [Google Scholar]
16. Barron Andrew B., Klein Colin. What insects can tell us about the origins of consciousness. Proc. Natl. Acad. Sci. USA. 2016 [PMC free article] [PubMed] [Google Scholar]
17. Mather Jennifer A., Dickel Ludovic. Cephalopod complex cognition. Curr. Opin. Behav. Sci. 2017 [Google Scholar]
18. Trewavas Anthony. Aspects of plant intelligence. Ann. Bot. 2003 [PMC free article] [PubMed] [Google Scholar]
19. Brenner Eric D., Stahlberg Rainer, Mancuso Stefano, Vivanco Jorge, Baluška František, Van Volkenburgh Elizabeth. Plant neurobiology: an integrated view of plant signaling. Trends Plant Sci. 2006 [PubMed] [Google Scholar]
20. Alpi Amedeo, Amrhein Nikolaus, Bertl Adam, Blatt Michael R., Blumwald Eduardo, Cervone Felice, Dainty Jack, De Michelis Maria Ida, Epstein Emanuel, Galston Arthur W., Helen Mary, Goldsmith M., Hawes Chris, Hell Rüdiger, Hetherington Alistair, Hofte Herman, Juergens Gerd, Leaver Chris J., Moroni Anna, Murphy Angus, Oparka Karl, Perata Pierdomenico, Quader Hartmut, Rausch Thomas, Ritzenthaler Christophe, Rivetta Alberto, Robinson David G., Sanders Dale, Scheres Ben, Schumacher Karin, Sentenac Hervé, Slayman Clifford L., Soave Carlo, Somerville Chris, Taiz Lincoln, Thiel Gerhard, Wagner Richard. Plant neurobiology: no brain, no gain? Trends Plant Sci. 2007 [PubMed] [Google Scholar]
21. Gorzelak Monika A., Asay Amanda K., Pickles Brian J., Simard Suzanne W. Inter-plant communication through mycorrhizal networks mediates complex adaptive behaviour in plant communities. AoB Plants. 2015 [PMC free article] [PubMed] [Google Scholar]
22. Silver David, Schrittwieser Julian, Simonyan Karen, Antonoglou Ioannis, Huang Aja, Guez Arthur, Hubert Thomas, Baker Lucas, Lai Matthew, Bolton Adrian, Chen Yutian, Lillicrap Timothy, Hui Fan, Sifre Laurent, Van Den Driessche George, Graepel Thore, Hassabis Demis. Mastering the game of Go without human knowledge. Nature. 2017 [PubMed] [Google Scholar]
23. Wang Pei. On defining artificial intelligence. J. Artif. Gen. Intell. 2019 [Google Scholar]
24. Legg Shane, Hutter Marcus. Universal intelligence: a definition of machine intelligence. Minds Mach. 2007 [Google Scholar]
25. Basten Ulrike, Hilger Kirsten, Fiebach Christian J. Where smart brains are different: a quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence. 2015 [Google Scholar]
26. Deary Ian J., Penke Lars, Johnson Wendy. The neuroscience of human intelligence differences. Nat. Rev. Neurosci. 2010 [PubMed] [Google Scholar]
27. Li Yonghui, Liu Yong, Li Jun, Qin Wen, Li Kuncheng, Yu Chunshui, Jiang Tianzi. Brain anatomical network and intelligence. PLoS Comput. Biol. 2009 [PMC free article] [PubMed] [Google Scholar]
28. Neubauer Aljoscha C., Fink Andreas. Intelligence and neural efficiency. Neurosci. Biobehav. Rev. 2009 [PubMed] [Google Scholar]
29. Gray Jeremy R., Thompson Paul M. Neurobiology of intelligence: science and ethics. Nat. Rev. Neurosci. 2004 [PubMed] [Google Scholar]
30. Hassabis Demis, Kumaran Dharshan, Summerfield Christopher, Botvinick Matthew. Neuroscience-inspired artificial intelligence. Neuron. 2017 [PubMed] [Google Scholar]
31. Dubois Julien, Galdi Paola, Paul Lynn K., Adolphs Ralph. A distributed brain network predicts general intelligence from resting-state human neuroimaging data. Philos. Trans. R. Soc. Lond. B, Biol. Sci. 2018 [PMC free article] [PubMed] [Google Scholar]
32. Tenenbaum Joshua B., Kemp Charles, Griffiths Thomas L., Goodman Noah D. How to grow a mind: statistics, structure, and abstraction. Science. 2011 [PubMed] [Google Scholar]
33. Von Bertalanffy Ludwig. George Braziller; New York: 1968. General System Theory: Foundations, Development, Applications. [Google Scholar]
34. Maturana Humberto R., Poerksen Bernhard. Carl-Auer; 2004. From Being to Doing: The Origins of the Biology of Cognition. [Google Scholar]
35. Ross Ashby William. Nabu Press; 2011. Design for a Brain: The Origin of Adaptive Behavior. [Google Scholar]
36. Ross Ashby William. Filiquarian Legacy Publishing; 2012. An Introduction to Cybernetics. [Google Scholar]
37. Wieser Wolfgang. Fischer Bücherei; 1959. Organismen, Strukturen, Maschinen: Zu einer Lehre vom Organismus. [Google Scholar]
38. Wieser Wolfgang. Thieme Publishing Group; 1989. Energy Transformations in Cells and Organisms. [Google Scholar]
39. Gignac Gilles E. Residual group-level factor associations: possibly negative implications for the mutualism theory of general intelligence. Intelligence. 2016;55:69–78. [Google Scholar]
40. Norman Donald A. Approaches to the study of intelligence. Artif. Intell. 1991 [Google Scholar]
41. Chang Kuo-Chin, Hong Tzung-Pei, Tseng Shian-Shyong. Machine learning by imitating human learning. Minds Mach. 1996;6(2):203–228. [Google Scholar]
42. Hawkins Jeff, Blakeslee Sandra. On intelligence: how a new understanding of the brain will lead to truly intelligent machines. Neural Netw. 2004 [Google Scholar]
43. Plato . Gredos; 1988. Diálogos IV. República (Republic) [Google Scholar]
44. Plato . Gredos; 2011. Phaedrus. [Google Scholar]
45. Aristotle . Gredos; 2014. Sobre el alma (De Anima) [Google Scholar]
46. Aristotle . Nueva Biblioteca Filosofica; 2017. Historia Animalium. [Google Scholar]
47. Wittgenstein Ludwig. Wiley-Blackwell; 2009. Philosophical Investigations. [Google Scholar]
48. Corazon Gonzalez Rafael. Eunsa; 2016. Filosofía del conocimiento. [Google Scholar]
49. Arendt Hannah. Harvest/HBJ Book; 1982. The Life of the Mind. [Google Scholar]
50. Aristotle . Gredos; 1998. Metaphysics. [Google Scholar]
51. Llano Cifuentes Alejandro. Eunsa; 2003. Gnoseología. [Google Scholar]
52. González-Ayesta Cruz. Tomás de Aquino en el debate externalismo-internalismo. Anu. Filos. 2006;39(3) [Google Scholar]
53. González-Ayesta Cruz. Escotismo y tomismo en la interpretación sareciana del entendimiento como potencia. Rev. Esp. Filos. Mediev. 2011 [Google Scholar]
54. Polo Leonardo. Eunsa; 2016. Antropología trascendental. [Google Scholar]
55. McCarthy John. Stanford University; 2007. What Is Artificial Intelligence? [Google Scholar]
56. Turing Alan M. Computing machinery and intelligence. Mind. 1950 [Google Scholar]
57. Russell Stuart, Norvig Peter. third edition. Pearson; 2010. Artificial Intelligence: A Modern Approach. [Google Scholar]
58. Searle John R. Machine Intelligence: Perspectives on the Computational Model. 2012. Minds, brains, and programs. [Google Scholar]
59. Kosinski Michal, Stillwell David, Graepel Thore. Private traits and attributes are predictable from digital records of human behavior. Proc. Natl. Acad. Sci. USA. 2013 [PMC free article] [PubMed] [Google Scholar]
60. Goertzel Ben, Pennachin Cassio, editors. Springer; 2007. Artificial General Intelligence. [Google Scholar]
61. Hamet Pavel, Tremblay Johanne. Artificial intelligence in medicine. Metab. Clin. Exp. 2017 [PubMed] [Google Scholar]
62. Yu Kun Hsing, Beam Andrew L., Kohane Isaac S. Artificial intelligence in healthcare. Nat. Biomed. Eng. 2018 [PubMed] [Google Scholar]
63. Unger J. Marshall, Deacon Terrence W. The symbolic species: the co-evolution of language and the brain. Mod. Lang. J. 1998 [Google Scholar]
64. Roth Gerhard, Dicke Ursula. Evolution of the brain and intelligence. Trends Cogn. Sci. 2005 [PubMed] [Google Scholar]
65. Herculano-Houzel Suzana. The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost. Proc. Natl. Acad. Sci. 2012;109(Supplement 1):10661–10668. [PMC free article] [PubMed] [Google Scholar]
66. Mayer John D., Roberts Richard D., Barsade Sigal G. Human abilities: emotional intelligence. Annu. Rev. Psychol. 2008 [PubMed] [Google Scholar]
67. Uexküll Jakob von. Springer Berlin Heidelberg; 1921. Umwelt und Innenwelt der Tiere. [Google Scholar]
68. Dobzhansky Theodosius. Nothing in biology makes sense except in the light of evolution. Am. Biol. Teach. 1973;35(3):125–129. [Google Scholar]
69. Laurent Gilles. On the value of model diversity in neuroscience. Nat. Rev. Neurosci. 2020:1–2. [PubMed] [Google Scholar]
70. Pratt S.C. Encyclopedia of Animal Behavior. 2009. Collective intelligence. [Google Scholar]
71. Kuba Michael J., Byrne Ruth A., Meisel Daniela V., Mather Jennifer A. When do octopuses play? Effects of repeated testing, object type, age, and food deprivation on object play in octopus vulgaris. J. Comp. Psychol. 2006;120(3):184. [PubMed] [Google Scholar]
72. Langridge Keri V., Broom Mark, Osorio Daniel. Selective signalling by cuttlefish to predators. Curr. Biol. 2007;17(24):R1044–R1045. [PubMed] [Google Scholar]
73. Brecht Katharina F., Hage Steffen R., Gavrilov Natalja, Nieder Andreas. Volitional control of vocalizations in corvid songbirds. PLoS Biol. 2019;17(8) [PMC free article] [PubMed] [Google Scholar]
74. Clayton Nicola S., Griffiths D.P., Emery Nathan J., Dickinson Anthony. Elements of episodic–like memory in animals. Philos. Trans. R. Soc. Lond. B, Biol. Sci. 2001;356(1413):1483–1491. [PMC free article] [PubMed] [Google Scholar]
75. Emery Nathan J., Clayton Nicola S. Effects of experience and social context on prospective caching strategies by scrub jays. Nature. 2001;414(6862):443–446. [PubMed] [Google Scholar]
76. Calvo Paco, Trewavas Anthony. Physiology and the (neuro) biology of plant behavior: a farewell to arms. Trends Plant Sci. 2020;25(3):214–216. [PubMed] [Google Scholar]
77. Taiz Lincoln, Alkon Daniel, Draguhn Andreas, Murphy Angus, Blatt Michael, Thiel Gerhard, Robinson David G. Reply to Trewavas et al. and Calvo and Trewavas. Trends Plant Sci. 2020;25(3):218–220. [PubMed] [Google Scholar]
