Atrial fibrillation (AFib), characterized by rapid and irregular electrical activity in the atria, is the most common sustained cardiac arrhythmia and a major risk factor for stroke. Early detection of AFib is critical for timely diagnosis and effective treatment. However, AFib detection from electrocardiogram recordings remains challenging due to noise contamination, particularly in wearable device recordings, where motion-related artifacts are frequently present. Numerous deep learning approaches have been proposed for AFib detection, but direct comparison between existing methods is difficult because of differing experimental settings, and the effect of real-world noise on performance remains underexplored. To address this challenge, we performed a cross-dataset evaluation of the deep learning models CTRhythm, MFEGNet, and MGCNet under clean and noisy conditions. CTRhythm achieved the strongest performance on clean signals but was the most vulnerable to noise. MFEGNet demonstrated the greatest noise resilience, consistent with its architecture designed for noise suppression. Nevertheless, the performance of all models degraded substantially when tested on noisy signals. These results highlight the importance of standardized cross-dataset evaluation under real-life conditions for assessing the true utility of AFib detection models, as well as the need for robust pipelines that incorporate signal denoising.

This work is licensed under a Creative Commons Attribution 4.0 public license.