The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code


@inproceedings{liu-etal-2023-magic,
    title = "The Magic of {IF}: Investigating Causal Reasoning Abilities in Large Language Models of Code",
    author = "Liu, Xiao  and
      Yin, Da  and
      Zhang, Chen  and
      Feng, Yansong  and
      Zhao, Dongyan",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.574",
    doi = "10.18653/v1/2023.findings-acl.574",
    pages = "9009--9022",
    abstract = "Causal reasoning, the ability to identify cause-and-effect relationship, is crucial in human thinking. Although large language models (LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning like abductive reasoning and counterfactual reasoning. Given the fact that programming code may express causal relations more often and explicitly with conditional statements like {``}if{''}, we want to explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that compared to text-only LLMs, Code-LLMs with code prompts are better causal reasoners. We further intervene on the prompts from different aspects, and discover that the key point is the programming structure. Code and data are available at \url{https://github.com/xxxiaol/magic-if}.",
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="liu-etal-2023-magic">
    <titleInfo>
        <title>The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code</title>
    </titleInfo>
    <name type="personal">
        <namePart type="given">Xiao</namePart>
        <namePart type="family">Liu</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Da</namePart>
        <namePart type="family">Yin</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Chen</namePart>
        <namePart type="family">Zhang</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Yansong</namePart>
        <namePart type="family">Feng</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Dongyan</namePart>
        <namePart type="family">Zhao</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <originInfo>
        <dateIssued>2023-07</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
        <titleInfo>
            <title>Findings of the Association for Computational Linguistics: ACL 2023</title>
        </titleInfo>
        <name type="personal">
            <namePart type="given">Anna</namePart>
            <namePart type="family">Rogers</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Jordan</namePart>
            <namePart type="family">Boyd-Graber</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Naoaki</namePart>
            <namePart type="family">Okazaki</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <originInfo>
            <publisher>Association for Computational Linguistics</publisher>
            <place>
                <placeTerm type="text">Toronto, Canada</placeTerm>
            </place>
        </originInfo>
        <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>Causal reasoning, the ability to identify cause-and-effect relationship, is crucial in human thinking. Although large language models (LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning like abductive reasoning and counterfactual reasoning. Given the fact that programming code may express causal relations more often and explicitly with conditional statements like “if”, we want to explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that compared to text-only LLMs, Code-LLMs with code prompts are better causal reasoners. We further intervene on the prompts from different aspects, and discover that the key point is the programming structure. Code and data are available at https://github.com/xxxiaol/magic-if.</abstract>
    <identifier type="citekey">liu-etal-2023-magic</identifier>
    <identifier type="doi">10.18653/v1/2023.findings-acl.574</identifier>
    <location>
        <url>https://aclanthology.org/2023.findings-acl.574</url>
    </location>
    <part>
        <date>2023-07</date>
        <extent unit="page">
            <start>9009</start>
            <end>9022</end>
        </extent>
    </part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code
%A Liu, Xiao
%A Yin, Da
%A Zhang, Chen
%A Feng, Yansong
%A Zhao, Dongyan
%Y Rogers, Anna
%Y Boyd-Graber, Jordan
%Y Okazaki, Naoaki
%S Findings of the Association for Computational Linguistics: ACL 2023
%D 2023
%8 July
%I Association for Computational Linguistics
%C Toronto, Canada
%F liu-etal-2023-magic
%X Causal reasoning, the ability to identify cause-and-effect relationship, is crucial in human thinking. Although large language models (LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning like abductive reasoning and counterfactual reasoning. Given the fact that programming code may express causal relations more often and explicitly with conditional statements like “if”, we want to explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that compared to text-only LLMs, Code-LLMs with code prompts are better causal reasoners. We further intervene on the prompts from different aspects, and discover that the key point is the programming structure. Code and data are available at https://github.com/xxxiaol/magic-if.
%R 10.18653/v1/2023.findings-acl.574
%U https://aclanthology.org/2023.findings-acl.574
%U https://doi.org/10.18653/v1/2023.findings-acl.574
%P 9009-9022

Markdown (Informal)

[The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code](https://aclanthology.org/2023.findings-acl.574) (Liu et al., Findings 2023)
