Development and perspectives of automated grading in informatics olympiads
Education in the Knowledge Society
Jūratė Skūpienė
Published 2008-01-01
https://doi.org/10.15388/Im.2008.0.3444

How to cite

Skūpienė, J. (2008). Automatinio sprendimų vertinimo informatikos olimpiadose raida ir perspektyvos. Information & Media, 42(43), 43-49. https://doi.org/10.15388/Im.2008.0.3444

Abstract

Recently, various olympiads in informatics (algorithmics) have been attracting increasing attention in the scientific community, and this area is becoming an object of scientific research. Contestants in informatics olympiads have to design algorithms and implement them as flawlessly working programs in one of the allowed programming languages. Submissions are graded automatically by black-box testing, without any attempt to analyse the algorithm itself. Such grading receives a good deal of criticism, but so far there are no good alternatives. The paper presents the development of automated grading in informatics olympiads from the first olympiads to the present day, reviews the works in which this problem area is examined, discusses the improved grading schemes proposed at olympiads and in the scientific literature, and outlines directions for the future.

Development and perspectives of automated grading in informatics olympiads
Jūratė Skūpienė

Summary
The International Olympiad in Informatics (IOI) has lately been gaining more attention in the scientific community. Contestants in the IOI have to design algorithms and implement them as programs in one of the allowed programming languages. Currently all submissions are graded automatically using the black-box method: grading is based on executing the compiled programs on different tests (input data), and the algorithm itself is not analysed or revealed in any other way. The current grading system receives a lot of criticism for its unfairness (mistyping the name of a variable might lead to zero points); however, no better grading models have been proposed so far. The paper gives an overview of the development of grading in the IOI, starting from the very first olympiads, where a verbal description of the algorithm had to be presented and evaluated. Many grading problems emerged in the 1990s due to input/output format requirements, which were inevitable in order to perform automated testing of submissions. Input/output details demanded a lot of concentration from participants, as mistyping a file name or a redundant or missing end-of-line symbol might have resulted in zero points; sometimes this even shifted the focus from the algorithm to formatting details. These problems were solved with the appearance of the first informatics contest management system in 2001, which allowed contestants to submit and run their programs on sample tests, thus checking them for compatibility with the format requirements and correcting them, if needed, during contest time. After contest management systems found their place in the IOI, attention shifted to the relationship between the grade given to a submission and the algorithm it implements. The paper presents an overview of published papers and of discussions in the IOI community on these topics and ends with perspectives and directions for the future improvement of grading in the IOI.
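
For illustration, a minimal sketch (in Python) of the black-box, test-based grading described above: a harness runs a compiled submission on a set of test inputs, compares its output with the expected answers, and awards points per passed test. The file names, time limit, and equal-points-per-test scoring below are illustrative assumptions, not taken from the paper or from any actual IOI grading system.

    import subprocess
    from pathlib import Path

    TIME_LIMIT = 2.0          # seconds per test (illustrative)
    POINTS_PER_TEST = 10      # illustrative scoring: equal weight per test

    def run_test(executable: str, input_file: Path, answer_file: Path) -> bool:
        """Run the contestant's compiled program on one test case (black-box):
        feed the input, capture stdout, and compare it with the expected answer.
        The algorithm inside the program is never inspected."""
        try:
            result = subprocess.run(
                [executable],
                stdin=input_file.open("rb"),
                capture_output=True,
                timeout=TIME_LIMIT,
            )
        except subprocess.TimeoutExpired:
            return False                      # time limit exceeded -> test failed
        if result.returncode != 0:
            return False                      # runtime error -> test failed
        produced = result.stdout.decode().split()
        expected = answer_file.read_text().split()
        return produced == expected           # token-wise output comparison

    def grade(executable: str, test_dir: Path) -> int:
        """Sum points over all tests; a submission that fails every test
        (e.g. because of a single typo) receives zero points."""
        score = 0
        for input_file in sorted(test_dir.glob("*.in")):
            answer_file = input_file.with_suffix(".ans")
            if run_test(executable, input_file, answer_file):
                score += POINTS_PER_TEST
        return score

    if __name__ == "__main__":
        print(grade("./submission", Path("tests")))

Because only inputs and outputs are observed, such a harness cannot distinguish an elegant algorithm with a trivial implementation slip from a fundamentally wrong solution, which is precisely the fairness concern discussed in the paper.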
