Quality Evaluation of Large Language Models Generated Unit Tests: Influence of Structured Output
Articles
Dovydas Marius Zapkus
Vilniaus universitetas
Asta Slotkienė
Vilniaus universitetas
Published 2025-05-12
https://doi.org/10.15388/MITT.2025.32

How to cite

Zapkus, D.M. and Slotkienė, A. (2025) “Quality Evaluation of Large Language Models Generated Unit Tests: Influence of Structured Output”, Vilnius University Open Series, pp. 281–288. doi:10.15388/MITT.2025.32.

Abstract

Unit testing is critical in software quality assurance, and large language models (LLMs) offer a way to automate this process. This paper evaluates the quality of unit tests generated by LLMs when prompted for structured output. Six LLMs were used to generate unit tests for C# focal methods spanning different cyclomatic-complexity classes. The experimental results show that requiring a strict structured output (the Arrange-Act-Assert pattern) significantly influences the quality of the generated unit tests.
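The Arrange-Act-Assert pattern named above structures every test into three labeled steps: set up inputs, invoke the focal method, and verify the outcome. A minimal sketch of the pattern (shown here in Python for illustration; the paper's experiments target C# focal methods, and the `add` function is a hypothetical stand-in):

```python
def add(a, b):
    # Hypothetical focal method under test; the paper's subjects are C# methods.
    return a + b

def test_add_returns_sum():
    # Arrange: prepare the inputs for the focal method
    a, b = 2, 3
    # Act: call the focal method exactly once
    result = add(a, b)
    # Assert: compare the observed result with the expected value
    assert result == 5

test_add_returns_sum()
```

Each generated test that follows this structure can be checked mechanically for the presence and order of the three sections.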

Creative Commons licence

This work is licensed under a Creative Commons Attribution 4.0 International License.
