Integration of cognitive tasks into artificial general intelligence test for large models


iScience. 2024 Mar 22;27(4):109550. doi: 10.1016/j.isci.2024.109550. eCollection 2024 Apr 19.




Youzhi Qu et al.




As large models evolve, performance evaluation is necessary for assessing their capabilities. However, current model evaluations rely mainly on specific tasks and datasets, lacking a unified framework for assessing the multidimensional intelligence of large models. In this perspective, we advocate for a comprehensive framework of cognitive science-inspired artificial general intelligence (AGI) tests, spanning crystallized, fluid, social, and embodied intelligence. The AGI tests consist of well-designed cognitive tests adapted from human intelligence tests, which are then naturally encapsulated in an immersive virtual community. We propose increasing the complexity of AGI testing tasks commensurate with advancements in large models, and we emphasize the need to interpret test results carefully to avoid false negatives and false positives. We believe that cognitive science-inspired AGI tests will effectively guide the targeted improvement of large models along specific dimensions of intelligence and accelerate the integration of large models into human society.


Keywords: Cognitive neuroscience; Neuroscience; Research methodology; Social sciences.


Conflict of interest statement

The authors declare no competing interests.
