Introspective Self-Dialogue: Iterative APR with CoT and Error Logs

Keisuke Kitamura, So Onishi, Akihito Kohiga, Takahiro Koita

Abstract


In recent years, Automated Program Repair (APR) using Large Language Models (LLMs) has drawn attention as a technology that can contribute to the software development process. Prior work has advanced interactive program repair that incorporates test-result feedback. However, it has been pointed out that simply repeating iterations that only report the fact of failure prevents the LLM from understanding the root cause of an error, causing it to repeat the same mistakes.

This research proposes a novel repair approach called "introspective self-dialogue." The approach integrates the LLM's own "internal information" (its Chain-of-Thought) with "external information" (error logs) so that the LLM can analyze the cause of its own failure. The goal is to prompt deeper self-analysis and more accurate re-repairs by having the LLM confront its own reasoning with objective facts. Our evaluation examines whether the proposed method improves bug-fixing success rates compared with conventional one-shot repair techniques. Furthermore, we comprehensively verify its effectiveness and safety from more practical perspectives, such as unintended side effects and quality changes introduced by the repaired code.
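
To make the described loop concrete, the sketch below shows one possible form such an iterative repair cycle could take, written in Python. The ask_llm callable, the prompt wording, and the single-file test setup are illustrative assumptions only, not details taken from the paper.

    # Minimal sketch of an iterative repair loop that pairs the model's own
    # Chain-of-Thought (internal information) with the error log (external
    # information) on each failed attempt. The ask_llm interface, prompt
    # wording, and pytest-based harness are assumptions for illustration.
    import subprocess
    from pathlib import Path
    from typing import Callable, Optional, Tuple


    def run_tests(target_file: Path, candidate_code: str) -> Tuple[bool, str]:
        """Write the candidate code and run the test suite; return (passed, error_log)."""
        target_file.write_text(candidate_code)
        proc = subprocess.run(
            ["pytest", "-x"], cwd=target_file.parent, capture_output=True, text=True
        )
        return proc.returncode == 0, proc.stdout + proc.stderr


    def introspective_repair(
        buggy_code: str,
        target_file: Path,
        ask_llm: Callable[[str], Tuple[str, str]],  # prompt -> (chain_of_thought, patch)
        max_iterations: int = 5,
    ) -> Optional[str]:
        """Iteratively repair buggy_code, confronting the model's prior reasoning
        with the observed error log after each failed attempt."""
        prompt = (
            "Fix the bug in the following code. Explain your reasoning step by "
            f"step, then output the full corrected file.\n\n{buggy_code}"
        )
        for _ in range(max_iterations):
            chain_of_thought, patch = ask_llm(prompt)
            passed, error_log = run_tests(target_file, patch)
            if passed:
                return patch
            # Introspective self-dialogue: the next prompt places the model's own
            # prior reasoning next to the error log and asks it to analyze where
            # the reasoning diverged from the observed facts before re-patching.
            prompt = (
                "Your previous fix failed.\n\n"
                f"Your reasoning was:\n{chain_of_thought}\n\n"
                f"The test error log was:\n{error_log}\n\n"
                "Compare your reasoning with the error log, identify the root "
                "cause of the failure, and output a corrected full file.\n\n"
                f"Your previous attempt:\n{patch}"
            )
        return None  # no passing patch within the iteration budget

In this sketch, the key difference from feedback loops that only report failure is that each retry prompt carries both the model's previous reasoning and the concrete error log, so the model can compare the two rather than guess again from scratch.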


Keywords


Automated Program Repair (APR); Large Language Model (LLM); Self-Correction
