Volume 19: pp. 105–110

The Future Is Computational Comparative Cognition

Konstantinos Voudouris

Department of Psychology and Leverhulme Centre for the Future of Intelligence, University of Cambridge

Lucy G. Cheke

Department of Psychology and Leverhulme Centre for the Future of Intelligence, University of Cambridge

Marta Halina

Department of History and Philosophy of Science and Leverhulme Centre for the Future of Intelligence, University of Cambridge

Abstract

Computational modeling should and will play an increasingly important role in the future of comparative cognition. Computational comparative cognition is a burgeoning field, poised to tackle perennial questions about animal behavior as well as to ask new ones. To establish computational comparative cognition as a field, researchers must work together to create interdisciplinary collaborations, to translate and disseminate findings that a diverse audience can digest, and to rethink the status of computational modeling. Blending models, experiment, and observation will deepen our understanding of animal behavior, promising a bright future for the field of comparative cognition.

Keywords: computational modeling, machine learning, artificial intelligence, interdisciplinarity

Author Note: Konstantinos Voudouris, Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, United Kingdom.

Correspondence concerning this article should be addressed to Konstantinos Voudouris at kv301@cam.ac.uk


Of 319 participants in a recent survey of animal behavior researchers, 75 (23.51%) reported using computational modeling in their research (Voudouris, Cheke, et al., 2023). We predict that future comparative cognitive scientists will use computational techniques to a much greater extent than today. Computational modeling is a new frontier, providing the tools we will need to settle old debates and answer perennial questions about nonhuman animal cognition. Although such methods have been implemented in associative learning theory, neuroethology, and computational neuroscience for several years, many open research questions remain about how to model, for example, episodic memory, theory of mind, communication, physical reasoning, and metacognition in nonhuman animals. We argue that this demands interdisciplinary collaboration with the computational cognitive sciences. Successful collaboration of this kind in turn requires, first, an infrastructure for conducting computational comparative cognition, including a shared vocabulary and a common methodology; and second, an openness to formal methods for studying nonhuman animal cognition.

The computational cognitive sciences, including artificial intelligence (AI) and machine learning, offer an exciting toolbox for understanding behavior. These methods give comparative cognitive scientists the opportunity to bring new hypotheses and theories to the table, to build formal, precisely stated theories of cognition that generate testable predictions, and to understand the mechanisms underlying behavior. Rich computational models have already made waves in the study of human cognition. Pouncy and Gershman (2022) synthesized theories of reinforcement learning from computer science with theories of induction and decision making from cognitive psychology (Griffiths et al., 2010; Lake et al., 2017) to build sophisticated models of human learning and decision making. The nascent research field of predictive processing (Friston, 2010) seeks to model how humans make predictions about their environment and behave accordingly, drawing on techniques from Bayesian inference and causal modeling. The methods of causal inference (Gopnik et al., 2004; Gopnik & Tenenbaum, 2007; Pearl, 2009) and probabilistic programming (Lake et al., 2015; Ullman & Tenenbaum, 2020) have revolutionized how we think about human behavior and cognition. We think that this computational revolution should extend to comparative cognition, augmenting traditional experimental and ethological approaches with contemporary AI and machine-learning techniques (Griffiths, 2015).

In recent years, we have seen several examples of computational models being used to offer novel, testable explanations of animal behavior. For example, Elske van der Vaart and colleagues (2012) used a simulation to present a novel hypothesis about the caching behavior of scrub jays. When scrub jays are in the presence of conspecifics, they will either avoid caching food or cache it and then return later to move it to another location in private. One hypothesis about why they do this is that the jays attribute mental states to conspecifics, reasoning that an onlooker will remember the location of the cached food and intend to pilfer it at a later time. Another hypothesis is that the birds have simply learned to associate the presence of conspecifics with an increased likelihood of pilfering, without attributing any mental states. Van der Vaart et al. suggest an alternative account, in which the presence of conspecifics increases stress, which in turn increases the frequency of (re-)caching. They built a simulation of a caching bird in an environment with a variety of onlookers and showed that the behavior of the simulated bird matched the behavior of real birds in the laboratory. The result is a novel hypothesis about caching behavior that generates precise and testable predictions for future study.
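To make the logic of this kind of simulation concrete, the sketch below implements a deliberately toy stress-mediated (re-)caching agent. It is not van der Vaart and colleagues’ model; the stress dynamics, probabilities, and function names are illustrative assumptions chosen only for exposition.

```python
import random

# Toy stress-mediated (re-)caching agent (illustrative only, not the
# published model): onlookers raise a stress variable, and higher stress
# makes the simulated bird more likely to re-cache.

def simulate_caching(n_trials=20, onlookers_present=True, seed=0):
    rng = random.Random(seed)
    stress = 0.1
    recache_events = 0
    for _ in range(n_trials):
        if onlookers_present:
            stress = min(1.0, stress + 0.1)   # conspecifics increase stress
        else:
            stress = max(0.0, stress - 0.05)  # stress decays in private
        if rng.random() < stress:             # re-caching tracks stress,
            recache_events += 1               # not attributed mental states
    return recache_events

print(simulate_caching(onlookers_present=True))   # typically more re-caching
print(simulate_caching(onlookers_present=False))  # typically less re-caching
```

Even a toy agent of this kind makes the hypothesis precise enough to generate quantitative predictions that can be compared against laboratory data.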

More recently, Johanni Brea and colleagues (2023) built computational models of episodic-like memory in food-caching birds, including Clark’s nutcrackers and Eurasian jays. They tackled the important question of whether caching behavior, which requires the bird to remember what it stored, where, and when, implies that these birds can simulate past experiences (i.e., mental time travel). They built computational simulations to contrast this “mental replay” hypothesis with the idea that caching can be explained by associating spatiotemporal cues with motivational states, such as hunger (“plastic caching”). The plastic caching model flexibly encodes what the bird cached, where, and when. However, this type of what–where–when memory does not imply an ability for mental time travel in the way that mental replay does. To test these models, Brea and colleagues translated laboratory experiments into a formal schema that a neural network could interpret. They found that both models matched the behavior of real birds in the laboratory. These cases illustrate how researchers can use computational modeling to generate new hypotheses explaining behavior, and such approaches allow one to explore the space of plausible hypotheses more widely than traditional experimental methods alone.
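As a very rough illustration of the what–where–when side of this contrast, the sketch below stores cache entries as content–location–time triples and gates retrieval by the bird’s current motivation, without replaying the original caching episode. The data structure and gating rule are our assumptions for exposition, not Brea and colleagues’ network model.

```python
from dataclasses import dataclass

# Toy what-where-when store (illustrative only): caches are recorded as
# content-location-time triples, and retrieval is gated by the bird's
# current motivation rather than by replaying the original caching episode.

@dataclass
class CacheEntry:
    what: str   # e.g., "worm" or "peanut"
    where: str  # e.g., "tray_A"
    when: int   # time step at which the item was cached

class PlasticCacheMemory:
    def __init__(self):
        self.entries = []

    def cache(self, what, where, when):
        self.entries.append(CacheEntry(what, where, when))

    def retrieve(self, hungry_for, now, perish_after):
        # Return locations of still-edible caches matching the current craving.
        return [e.where for e in self.entries
                if e.what == hungry_for and now - e.when <= perish_after]

memory = PlasticCacheMemory()
memory.cache("worm", "tray_A", when=0)
memory.cache("peanut", "tray_B", when=1)
print(memory.retrieve(hungry_for="worm", now=3, perish_after=4))   # ['tray_A']
print(memory.retrieve(hungry_for="worm", now=10, perish_after=4))  # []
```

The point of the contrast is that such a store can answer what–where–when queries flexibly while remaining silent on whether the bird ever re-experiences the caching episode itself.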

Computational methods provide additional opportunities for understanding the mechanisms underlying sophisticated nonhuman animal behavior. Manuel Baum and colleagues (2022) proposed yoking as an important tool for doing this. In their yoking procedure, a computational model of behavior is designed to receive the same inputs and perform the same actions as an animal in the laboratory. Baum and colleagues used yoking to study how Goffin’s cockatoos learn to solve a complex physical problem. In their study, cockatoos were presented with a puzzle box containing a reward. To obtain the reward, the birds needed to learn a specific sequence of actions: Opening the door to the box requires removing a metal bar, which in turn requires removing a metal disc. Baum and colleagues then provided the states of the puzzle box and the actions of the cockatoos as inputs to a reinforcement learning algorithm. Finally, they compared the performance of different algorithms to the learning trajectories of the cockatoos. Using this approach, the researchers were able to identify plausible cognitive mechanisms underlying problem-solving behavior in cockatoos with more precision than methods that rely on behavioral experiments alone.
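The logic of yoking can be sketched as follows: rather than letting an agent explore freely, one feeds it the state–action–reward sequence actually produced by the animal and asks how well a given learning rule accounts for that trajectory. The tabular Q-learning rule and the toy puzzle-box encoding below are illustrative assumptions, not Baum and colleagues’ implementation.

```python
from collections import defaultdict

# Toy yoking sketch (illustrative only): a tabular Q-learner is updated on
# the exact state-action-reward transitions observed from the animal,
# rather than on transitions the learner chose for itself.

def yoked_q_update(observed_transitions, alpha=0.1, gamma=0.9):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated value
    for state, action, reward, next_state, next_actions in observed_transitions:
        best_next = max((Q.get((next_state, a), 0.0) for a in next_actions),
                        default=0.0)
        td_error = reward + gamma * best_next - Q[(state, action)]
        Q[(state, action)] += alpha * td_error
    return Q

# Puzzle-box states abstracted to which obstacles remain (disc, bar, door).
transitions = [
    ("disc_bar_door", "remove_disc", 0.0, "bar_door", ["remove_bar"]),
    ("bar_door", "remove_bar", 0.0, "door", ["open_door"]),
    ("door", "open_door", 1.0, "solved", []),
]
print(dict(yoked_q_update(transitions)))
```

Across repeated yoked runs, one can then ask which learning rule, and which parameter settings, best reproduce the animal’s learning curve.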

Computational methods have a lot to offer comparative cognition. However, their integration into contemporary research might require a change in how we think about such models. Computational modeling and simulation are already central to some areas of comparative cognition, such as those focused on associative learning. Researchers have made significant progress toward building formal models of how animals learn to relate stimuli to responses (Mackintosh, 1983; Rescorla, 1988; Rescorla & Wagner, 1972), as well as extending those accounts to explain phenomena such as apparent future planning (Lind, 2018), tool use (Taylor et al., 2010, 2012), and other sophisticated behaviors (Cardoso et al., 2023; Lind & Vinken, 2021). However, in many areas of comparative cognition, where the focus has been the study of humanlike behavioral phenomena in nonhuman animals, there has been a tendency to view these models as too simplistic.
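For readers less familiar with these formal accounts, here is a minimal sketch of the Rescorla–Wagner learning rule that underlies many of the models cited above, applied to a single cue over acquisition and extinction trials. The learning rate and trial structure are illustrative choices, not parameters from any cited study.

```python
# Minimal Rescorla-Wagner sketch: the associative strength V of a single cue
# is nudged toward the outcome actually received (lam) on each trial.
# The learning rate (alpha_beta) and trial structure are illustrative.

def rescorla_wagner(trials, alpha_beta=0.2, lam_reinforced=1.0):
    V = 0.0          # associative strength of the cue
    history = []
    for reinforced in trials:          # True = cue followed by reward
        lam = lam_reinforced if reinforced else 0.0
        V += alpha_beta * (lam - V)    # prediction error drives learning
        history.append(round(V, 3))
    return history

# Acquisition then extinction: V climbs toward 1, then decays back toward 0.
print(rescorla_wagner([True] * 10 + [False] * 10))
```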

This tendency emerges in two main ways. First, the computational gymnastics required to render a complex physical experiment into something that can be run as a simulation on a computer can eliminate many of the interesting complexities of the task. Take the yoking study of cockatoos. To characterize the puzzle box as a reinforcement learning problem, complex action sequences are abstracted into simple representational units. The action of “remove the metal disc” goes from being a complex coordination of the bird’s beak and claws to a single binary variable. From there, the argument can be made that many of the complexities the task poses for the bird have been simplified into oblivion by the computational model, so its explanatory relevance is severely limited (Lind, 2018; Lind & Vinken, 2021). Second, mathematical models of cognition have often been viewed as necessarily simpler than equivalent cognitive theories. Take the classic dichotomies between associative learning and cognitive processes such as theory of mind (Penn & Povinelli, 2007), metacognition (Smith et al., 2014), and social learning (Papineau & Heyes, 2006). In many of these cases, the more computational, formalized hypotheses are taken to be simpler, and because of this putative simplicity, they are sometimes taken to be incapable of capturing the flexibility and sophistication of nonhuman animal behavior (Buckner, 2011; Heyes, 2012).
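Returning to the first of these worries, the abstraction at issue can be seen in miniature: a rich motor sequence collapses into a single flag in the model’s state representation. The encoding below is hypothetical, chosen only to make the worry concrete.

```python
# Hypothetical state encoding for the puzzle box: each flag records only
# whether an obstacle has been removed, discarding the beak-and-claw
# coordination involved in actually removing it.
state = {"disc_removed": False, "bar_removed": False, "door_open": False}

def remove_metal_disc(state):
    # The entire embodied action collapses into flipping one boolean.
    return {**state, "disc_removed": True}

print(remove_metal_disc(state))
```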

Models and simulations of physical phenomena will almost always involve idealization and abstraction to some degree (Potochnik, 2017; Weisberg, 2012). This does not undermine their utility in revealing new insights about the world, when used in conjunction with other scientific methodologies, such as laboratory experiment and behavioral observation. Indeed, it is through a plurality of approaches that we arrive at robust answers about animal cognition (see Heesen et al., 2019). Moreover, that computational models are formal does not imply that they are simpler than other models in a pernicious sense. Simplicity comes in many forms (Dacey, 2016; Meketa, 2014; Starzak, 2017). Commonsense explanations, for example, are simpler for many of us to understand, but as C. Lloyd Morgan (1894, 1903) emphasized, this does not mean that we should prefer such accounts (Fitzpatrick, 2008). To fully embrace the benefits of computational modeling and simulation in comparative cognition, we may need to change how we think about these models. We believe a useful starting point involves spelling out the ways in which such models simplify the target system under investigation and assessing the epistemic benefits and obstacles that arise from such simplifications.

A computational future for comparative cognition looks bright, but it requires the synthesis of insights from multiple disparate fields. This interdisciplinary endeavor will pose challenges, including philosophical and methodological divergences over the role of computational modeling and a significant language barrier. Take, for example, the notion of instrumental learning in comparative cognition and contrast it with reinforcement learning in computer science. At a high level, these fields appear to be tackling the same problem: how to model the association between stimulus, action, and reward. However, apparent incommensurability emerges in the details, and translating between the two frameworks is difficult (Sutton & Barto, 2018); the sketch following this paragraph juxtaposes two error-driven update rules, one from each tradition, to make the comparison concrete. We think that this sort of interdisciplinary integration will take time. To encourage and accelerate it, comparative cognition must pursue expressly interdisciplinary projects. A fantastic example of how computer scientists, engineers, biologists, and cognitive scientists can work together is the Earth Species Project, an organization that aims to use the tools of modern machine learning and AI to improve our understanding of nonhuman animal communication (see, e.g., Hagiwara et al., 2022, 2023; Hoffman et al., 2023; Rutz et al., 2023). Similarly, the Major Transitions Project seeks to unite computational biologists, philosophers, and cognitive scientists to understand the evolution of cognition (Barron et al., 2023). In an effort to build a shared infrastructure for research across computer science, computational neuroscience, and psychology, the Leverhulme Centre for the Future of Intelligence at the University of Cambridge has developed the Animal-AI Environment (Beyret et al., 2019; Crosby et al., 2020; Voudouris, Alhas, et al., 2023), a research platform for conducting cognitive experiments with artificial agents, humans, and nonhuman animals in a directly comparable and more ecologically valid manner (Voudouris et al., 2022). The Animal-AI Environment offers researchers from diverse fields the opportunity to work side by side and to collect computational and real-world laboratory data. As researchers work on similar problems on a common platform, the language barrier can gradually be lowered, and these fields can cross-pollinate for mutual progress.
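Here is that side-by-side sketch. Both rules are driven by a prediction error, yet they carve up the problem differently: the Rescorla–Wagner rule from the associative learning tradition (strictly, a model of Pavlovian conditioning, shown here for its form) tracks the associative strength of a stimulus, whereas Q-learning tracks the value of a state–action pair. The parameter values are illustrative, and the mapping of stimulus to state and response to action is a simplification we offer only for exposition.

```python
# Side-by-side sketch (illustrative): both rules are driven by a prediction
# error, but they carve up the problem differently.

# Associative learning tradition: Rescorla-Wagner associative strength of a
# stimulus (a Pavlovian-conditioning rule, shown here for its form).
def rw_update(V_stimulus, reward, alpha_beta=0.2):
    return V_stimulus + alpha_beta * (reward - V_stimulus)

# Computer science: Q-learning value of a state-action pair.
def q_update(Q_sa, reward, max_Q_next, alpha=0.2, gamma=0.9):
    return Q_sa + alpha * (reward + gamma * max_Q_next - Q_sa)

print(rw_update(0.0, 1.0))      # 0.2
print(q_update(0.0, 1.0, 0.0))  # 0.2
```

The outputs coincide here only because the toy example is degenerate (a single trial with no temporal structure); the frameworks diverge precisely where stimuli, timing, and state representation start to matter.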

Computational comparative cognition is a burgeoning field, poised to tackle perennial questions about animal behavior as well as to ask new ones. To establish it as a field, we must work to create interdisciplinary collaborations, to translate and disseminate findings that a diverse audience can digest, and to rethink the status of computational modeling in comparative cognition. We are excited about an interdisciplinary future, blending models, experiment, and observation to deepen our understanding of animal behavior.

References

Barron, A. B., Halina, M., & Klein, C. (2023). Transitions in cognitive evolution. Proceedings of the Royal Society B: Biological Sciences, 290(2002), Article 20230671. https://doi.org/10.1098/rspb.2023.0671

Baum, M., Schattenhofer, L., Rössler, T., Osuna-Mascaró, A., Auersperg, A., Kacelnik, A., & Brock, O. (2022). Yoking-based identification of learning behavior in artificial and biological agents. In L. Cañamero, P. Gaussier, M. Wilson, S. Boucenna, & N. Cuperlier (Eds.), From animals to animats 16 (pp. 67–78). Springer International Publishing. https://doi.org/10.1007/978-3-031-16770-6_6

Beyret, B., Hernández-Orallo, J., Cheke, L., Halina, M., Shanahan, M., & Crosby, M. (2019). The animal-AI environment: Training and testing animal-like artificial cognition. ArXiv. http://arxiv.org/abs/1909.07483

Brea, J., Clayton, N. S., & Gerstner, W. (2023). Computational models of episodic-like memory in food-caching birds. Nature Communications, 14(1), Article 1. https://doi.org/10.1038/s41467-023-38570-x

Buckner, C. (2011). Two approaches to the distinction between cognition and ‘mere association.’ International Journal of Comparative Psychology, 24(4), 314–348. https://doi.org/10.46867/IJCP.2011.24.04.06

Cardoso, R. P., Donnelly, N., Keedwell, E., Cheke, L., & Shanahan, M. (2023, July 24). What is a stimulus? A computational perspective on an associative learning model. ALIFE 2023: Ghost in the Machine: Proceedings of the 2023 Artificial Life Conference. https://doi.org/10.1162/isal_a_00600

Crosby, M., Beyret, B., Shanahan, M., Hernández-Orallo, J., Cheke, L., & Halina, M. (2020). The animal-AI testbed and competition. NeurIPS 2019 Competition and Demonstration Track, 123, 164–176. https://proceedings.mlr.press/v123/crosby20a.html

Dacey, M. (2016). Rethinking associations in psychology. Synthese, 193(12), 3763–3786. https://doi.org/10.1007/s11229-016-1167-0

Fitzpatrick, S. (2008). Doing away with Morgan’s Canon. Mind & Language, 23(2), 224–246. https://doi.org/10.1111/j.1468-0017.2007.00338.x

Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787

Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111(1), 3–32. https://doi.org/10.1037/0033-295X.111.1.3

Gopnik, A., & Tenenbaum, J. B. (2007). Bayesian networks, Bayesian learning and cognitive development. Developmental Science, 10(3), 281–287. https://doi.org/10.1111/j.1467-7687.2007.00584.x

Griffiths, T. L. (2015). Manifesto for a new (computational) cognitive revolution. Cognition, 135, 21–23. https://doi.org/10.1016/j.cognition.2014.11.026

Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8), 357–364. https://doi.org/10.1016/j.tics.2010.05.004

Hagiwara, M., Cusimano, M., & Liu, J.-Y. (2022). Modeling animal vocalizations through synthesizers. ArXiv. https://doi.org/10.48550/arXiv.2210.10857

Hagiwara, M., Hoffman, B., Liu, J.-Y., Cusimano, M., Effenberger, F., & Zacarian, K. (2023). BEANS: The benchmark of animal sounds. ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing, 1–5. https://doi.org/10.1109/ICASSP49357.2023.10096686

Heesen, R., Bright, L. K., & Zucker, A. (2019). Vindicating methodological triangulation. Synthese, 196(8), 3067–3081. https://doi.org/10.1007/s11229-016-1294-7

Heyes, C. (2012). Simple minds: A qualified defence of associative learning. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1603), 2695–2703. https://doi.org/10.1098/rstb.2012.0217

Hoffman, B., Cusimano, M., Baglione, V., Canestrari, D., Chevallier, D., DeSantis, D. L., Jeantet, L., Ladds, M. A., Maekawa, T., Mata-Silva, V., Moreno-González, V., Trapote, E., Vainio, O., Vehkaoja, A., Yoda, K., Zacarian, K., Friedlaender, A., & Rutz, C. (2023). A benchmark for computational analysis of animal behavior, using animal-borne tags. ArXiv. https://doi.org/10.48550/arXiv.2305.10740

Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332–1338. https://doi.org/10.1126/science.aab3050

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, Article e253. https://doi.org/10.1017/S0140525X16001837

Lind, J. (2018). What can associative learning do for planning? Royal Society Open Science, 5(11), Article 180778. https://doi.org/10.1098/rsos.180778

Lind, J., & Vinken, V. (2021). Can associative learning be the general process for intelligent behavior in non-human animals? bioRxiv. https://doi.org/10.1101/2021.12.15.472737

Mackintosh, N. J. (1983). Conditioning and associative learning. Oxford University Press.

Meketa, I. (2014). A critique of the principle of cognitive simplicity in comparative cognition. Biology & Philosophy, 29(5), 731–745. https://doi.org/10.1007/s10539-014-9429-z

Morgan, C. L. (1894). Introduction to comparative psychology. Walter Scott Publishing. https://doi.org/10.1037/11344-000

Morgan, C. L. (1903). An introduction to comparative psychology (2nd ed.). Walter Scott Publishing. https://doi.org/10.1037/13701-000

Papineau, D., & Heyes, C. (2006). Rational or associative? Imitation in Japanese quail. In S. Hurley & M. Nudds (Eds.), Rational animals (pp. 187–196). https://doi.org/10.1093/acprof:oso/9780198528272.003.0008

Pearl, J. (2009). Causality. Cambridge University Press. https://doi.org/10.1017/CBO9780511803161

Penn, D. C., & Povinelli, D. J. (2007). On the lack of evidence that non-human animals possess anything remotely resembling a ‘theory of mind.’ Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 731–744. https://doi.org/10.1098/rstb.2006.2023

Potochnik, A. (2017). Idealization and the aims of science. University of Chicago Press. https://doi.org/10.7208/9780226507194

Pouncy, T., & Gershman, S. J. (2022). Inductive biases in theory-based reinforcement learning. Cognitive Psychology, 138, Article 101509. https://doi.org/10.1016/j.cogpsych.2022.101509

Rescorla, R. A. (1988). Pavlovian conditioning: It’s not what you think it is. American Psychologist, 43(3), 151–160. https://doi.org/10.1037/0003-066X.43.3.151

Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 64–99). Appleton-Century-Crofts.

Rutz, C., Bronstein, M., Raskin, A., Vernes, S. C., Zacarian, K., & Blasi, D. E. (2023). Using machine learning to decode animal communication. Science, 381(6654), 152–155. https://doi.org/10.1126/science.adg7314

Smith, J. D., Couchman, J. J., & Beran, M. J. (2014). Animal metacognition: A tale of two comparative psychologies. Journal of Comparative Psychology, 128(2), 115–131. https://doi.org/10.1037/a0033105

Starzak, T. B. (2017). Interpretations without justification: A general argument against Morgan’s Canon. Synthese, 194(5), 1681–1701. https://doi.org/10.1007/s11229-016-1013-4

Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). MIT Press.

Taylor, A. H., Knaebe, B., & Gray, R. D. (2012). An end to insight? New Caledonian crows can spontaneously solve problems without planning their actions. Proceedings of the Royal Society B: Biological Sciences, 279(1749), 4977–4981. https://doi.org/10.1098/rspb.2012.1998

Taylor, A. H., Medina, F. S., Holzhaider, J. C., Hearne, L. J., Hunt, G. R., & Gray, R. D. (2010). An investigation into the cognition behind spontaneous string pulling in New Caledonian crows. PLOS ONE, 5(2), Article e9345. https://doi.org/10.1371/journal.pone.0009345

Ullman, T. D., & Tenenbaum, J. B. (2020). Bayesian models of conceptual development: Learning as building models of the world. Annual Review of Developmental Psychology, 2(1), 533–558. https://doi.org/10.1146/annurev-devpsych-121318-084833

van der Vaart, E., Verbrugge, R., & Hemelrijk, C. K. (2012). Corvid re-caching without ‘theory of mind’: A model. PLOS ONE, 7(3), Article e32904. https://doi.org/10.1371/journal.pone.0032904

Voudouris, K., Alhas, I., Schellaert, W., Crosby, M., Holmes, J., Burden, J., Chaubey, N., Donnelly, N., Patel, M., Halina, M., Hernández-Orallo, J., & Cheke, L. G. (2023). Animal-AI 3: What’s new & why you should care. ArXiv. https://doi.org/10.48550/arXiv.2312.11414

Voudouris, K., Cheke, L. G., Farrar, B. G., & Halina, M. (2023). The associative-cognitive distinction today: A survey of practitioners [Manuscript submitted for publication]. Department of Psychology, University of Cambridge.

Voudouris, K., Crosby, M., Beyret, B., Hernández-Orallo, J., Shanahan, M., Halina, M., & Cheke, L. G. (2022). Direct human-AI comparison in the animal-AI environment. Frontiers in Psychology, 13, Article 711821. https://doi.org/10.3389/fpsyg.2022.711821

Weisberg, M. (2012). Simulation and similarity: Using models to understand the world. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199933662.001.0001