Quality Evaluation of Large Language Models Generated Unit Tests: Influence of Structured Output
Dovydas Marius Zapkus
Vilniaus universitetas
Asta Slotkienė
Vilniaus universitetas
Published 2025-05-12
https://doi.org/10.15388/MITT.2025.32

Abstract

Unit testing is critical in software quality assurance, and large language models (LLMs) offer a way to automate this process. This paper evaluates the quality of unit tests generated by LLMs using structured output prompts. Six LLMs were applied to generate unit tests for C# focal methods across different cyclomatic complexity classes. The experimental results show that requiring a strict structured output (the Arrange-Act-Assert pattern) significantly influences the quality of the generated unit tests.
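
For context, the Arrange-Act-Assert (AAA) pattern referenced above structures each unit test into three labeled phases. The following is a minimal C# sketch using xUnit; the focal class Calculator and its Add method are hypothetical illustrations, not examples taken from the study.

using Xunit;

// Hypothetical focal class standing in for the C# focal methods studied in the paper.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_TwoPositiveIntegers_ReturnsTheirSum()
    {
        // Arrange: create the object under test and its inputs.
        var calculator = new Calculator();
        int left = 2, right = 3;

        // Act: invoke the focal method.
        int result = calculator.Add(left, right);

        // Assert: verify the observed behavior.
        Assert.Equal(5, result);
    }
}

Prompting an LLM to emit tests in this fixed three-phase layout is one form of the structured output the paper evaluates.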

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

