
FlakyFix: Using Large Language Models for Predicting Flaky Test Fix Categories and Test Code Repair



Authors: Sakina Fatima and 2 other authors

Abstract: Flaky tests are problematic because they non-deterministically pass or fail for the same software version under test, causing confusion and wasting development effort. While machine learning models have been used to predict flakiness and its root causes, there is much less work on providing support to fix the problem. To address this gap, in this paper, we focus on predicting the type of fix that is required to remove flakiness and then repairing the test code on that basis. We do this for a subset of flaky test cases where the root cause of flakiness is in the test case itself and not in the production code. Our key idea is to guide the repair process with additional knowledge about the test's flakiness in the form of its predicted fix category. Thus, we first propose a framework that automatically generates labeled datasets for 13 fix categories and trains models to predict the fix category of a flaky test by analyzing the test code only. Our experimental results using code models and few-shot learning show that we can correctly predict most of the fix categories. To show the usefulness of such fix category labels for automatically repairing flakiness, in addition to informing testers, we augment a Large Language Model (LLM) like GPT with such extra knowledge and ask the LLM for repair suggestions. The results show that our suggested fix category labels significantly enhance the capability of GPT-3.5 Turbo in generating fixes for flaky tests.
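The mechanism the abstract describes, augmenting a repair prompt with a predicted fix-category label, can be sketched roughly as follows. This is a minimal illustration only, not the paper's implementation: it assumes the OpenAI chat completions API, and `predict_fix_category` is a hypothetical placeholder standing in for the trained code model; the actual prompt wording and pipeline are not given in the abstract.

```python
# Minimal sketch (not the paper's code): ask GPT-3.5 Turbo to repair a flaky
# test, optionally guiding it with a predicted fix-category label.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the
# environment; predict_fix_category is a hypothetical stand-in classifier.
from openai import OpenAI

client = OpenAI()

def predict_fix_category(test_code: str) -> str:
    """Placeholder for a model that maps test code to one of 13 fix categories."""
    return "Add/modify wait or timeout"  # illustrative label only

def suggest_repair(test_code: str, use_fix_category: bool = True) -> str:
    prompt = "The following test is flaky. Propose a repaired version.\n\n" + test_code
    if use_fix_category:
        # The extra hint line is the only difference between the guided and
        # unguided settings, mirroring the comparison the abstract describes.
        category = predict_fix_category(test_code)
        prompt += f"\n\nHint: the likely fix category is '{category}'."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```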

Submission history

From: Sakina Fatima
[v1] Wed, 21 Jun 2023 19:34:16 UTC (4,960 KB)
[v2] Mon, 29 Jan 2024 16:28:00 UTC (5,744 KB)


