
[2401.09074] Code Simulation Challenges for Large Language Models



[Submitted on 17 Jan 2024]

Authors: Emanuele La Malfa and 6 other authors

Abstract: We investigate the extent to which Large Language Models (LLMs) can simulate the execution of computer code and algorithms. We begin by looking at straight-line programs and show that current LLMs demonstrate poor performance even on such simple programs; performance degrades rapidly with the length of the code. We then investigate the ability of LLMs to simulate programs that contain critical paths and redundant instructions. We also go beyond straight-line program simulation with sorting algorithms and nested loops, showing that the computational complexity of a routine directly affects an LLM's ability to simulate its execution. We observe that LLMs execute instructions sequentially and with a low error margin only for short programs or standard procedures. LLMs' code simulation is in tension with their pattern recognition and memorisation capabilities: for tasks where memorisation is detrimental, we propose a novel prompting method that simulates code execution line by line. Empirically, our new Chain of Simulation (CoSm) method improves on the standard Chain of Thought prompting approach by avoiding the pitfalls of memorisation.
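To make the task concrete, below is a minimal sketch of the kind of straight-line program simulation setup the abstract describes, paired with a line-by-line (CoSm-style) prompt. The program generator, the prompt wording, and the function names are illustrative assumptions, not the authors' benchmark or their exact prompt.

```python
# Hypothetical illustration of straight-line program simulation.
# The exact task format and CoSm prompt wording are assumptions for this sketch.

import random


def make_straight_line_program(n_lines: int, seed: int = 0) -> str:
    """Generate a simple straight-line program: one assignment, then additions."""
    rng = random.Random(seed)
    lines = [f"x = {rng.randint(0, 9)}"]
    for _ in range(n_lines - 1):
        lines.append(f"x = x + {rng.randint(1, 9)}")
    return "\n".join(lines)


def ground_truth(program: str) -> int:
    """Execute the program to obtain the reference final value of x."""
    scope: dict = {}
    exec(program, {}, scope)  # safe here: the program is generated locally
    return scope["x"]


def chain_of_simulation_prompt(program: str) -> str:
    """Hypothetical CoSm-style prompt: trace the state after every line."""
    return (
        "Simulate the following program one line at a time.\n"
        "After each line, write the current value of every variable.\n"
        "Do not skip lines and do not rely on memorised patterns.\n\n"
        f"{program}\n\n"
        "Finally, report the value of x."
    )


if __name__ == "__main__":
    prog = make_straight_line_program(n_lines=6, seed=42)
    print(chain_of_simulation_prompt(prog))
    print("\nReference answer:", ground_truth(prog))
```

Executing the generated program locally gives a reference answer against which a model's line-by-line trace can be scored, which mirrors the evaluation idea (comparing the model's simulated execution to the true execution) at a small scale.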

Submission history

From: Emanuele La Malfa
[v1] Wed, 17 Jan 2024 09:23:59 UTC (8,355 KB)



