Life, Vol. 16, Pages 357: Development and Clinical Validation of an Artificial Intelligence-Based Automated Visual Acuity Testing System
Life doi: 10.3390/life16020357
Authors:
Kelvin Zhenghao Li
Hnin Hnin Oo
Kenneth Chee Wei Liang
Najah Ismail
Jasmine Ling Ling Chua
Jackson Jie Sheng Chng
Yang Wu
Daryl Wei Ren Wong
Sumaya Rani Khan
Boon Peng Yap
Rong Tong
Choon Meng Kiew
Yufei Huang
Chun Hau Chua
Alva Khai Shin Lim
Xiuyi Fan
Background: To develop and validate an automated visual acuity (VA) testing system integrating artificial intelligence (AI)-driven speech and image recognition technologies, enabling self-administered, clinic-based VA assessment.
Methods: The system incorporated a fine-tuned Whisper speech-recognition model with Silero voice activity detection, and pose estimation through facial landmark and ArUco marker detection. A state-driven interface guided users through sequential testing with and without a pinhole. Speech recognition was enhanced using a local Singaporean English dataset. Laboratory validation assessed speech and pose recognition performance, while clinical validation compared automated and manual VA testing at a tertiary eye clinic.
Results: The fine-tuned model reduced word error rates from 17.83% to 9.81% for letters and from 2.76% to 1.97% for numbers. Pose detection accurately identified valid occluder states. Among 72 participants (144 eyes), automated unaided VA showed good agreement with manual VA (ICC = 0.77, 95% CI 0.62–0.85), while pinhole VA demonstrated moderate agreement (ICC = 0.63, 95% CI 0.25–0.83). Automated testing took longer (132.1 ± 47.5 s vs. 97.1 ± 47.8 s; p < 0.001), but user experience remained positive (mean Likert scale score 4.3 ± 0.8).
Conclusions: The AI-based automated VA system delivered accurate, reliable, and user-friendly performance, supporting its feasibility for clinical implementation.
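The abstract describes a voice activity detection plus speech recognition front end built from Silero VAD and a fine-tuned Whisper model. The following is a minimal sketch of such a pipeline, assuming the publicly documented Silero VAD torch.hub interface and the openai-whisper package; the stock "small.en" checkpoint and the file name patient_response.wav are placeholders, since the study's Singaporean English fine-tuned model is not public.

```python
# Sketch of a VAD + ASR stage similar to the one described in the abstract.
# Assumptions: Silero VAD via torch.hub, openai-whisper, 16 kHz mono audio.
import torch
import whisper

# Load the Silero voice activity detection model and its helper utilities.
vad_model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, collect_chunks = utils

# Read the participant's response and keep only the detected speech segments.
wav = read_audio("patient_response.wav", sampling_rate=16000)  # hypothetical file
timestamps = get_speech_timestamps(wav, vad_model, sampling_rate=16000)
speech_only = collect_chunks(timestamps, wav)

# Transcribe with Whisper; a stock English checkpoint stands in for the
# fine-tuned model used in the study.
asr = whisper.load_model("small.en")
result = asr.transcribe(speech_only.numpy(), language="en")
print(result["text"])  # e.g. a read-out line of optotype letters or numbers
```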
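The pose check combines facial landmark detection with ArUco marker detection to confirm a valid occluder state. A minimal sketch is shown below, assuming an ArUco-tagged occluder, the OpenCV 4.7+ ArucoDetector API, and MediaPipe FaceMesh for landmarks; the marker IDs and their mapping to occluder states are hypothetical, not the paper's actual configuration.

```python
# Sketch of a per-frame occluder/pose check.
# Assumptions: OpenCV >= 4.7, MediaPipe; marker ID layout is invented here.
import cv2
import mediapipe as mp

OCCLUDER_IDS = {0: "plain occluder", 1: "pinhole occluder"}  # assumed tag layout

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)

def occluder_state(frame_bgr):
    """Return (face_found, occluder_label) for one webcam frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)

    landmarks = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    face_found = landmarks.multi_face_landmarks is not None

    if ids is None:
        return face_found, "no occluder visible"
    labels = {OCCLUDER_IDS.get(int(i), "unknown marker") for i in ids.flatten()}
    return face_found, ", ".join(sorted(labels))

# Single-frame usage example.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(occluder_state(frame))
cap.release()
```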
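The reported metrics (word error rate, intraclass correlation, paired timing comparison) can be reproduced with standard tooling. The sketch below assumes the jiwer, pingouin, and SciPy packages; the tiny data frames are illustrative only and are not the study's data.

```python
# Sketch of the evaluation metrics named in the Results section.
# Assumptions: jiwer, pingouin, SciPy; all numbers below are made up.
import pandas as pd
import jiwer
import pingouin as pg
from scipy import stats

# Word error rate of the speech recogniser on optotype responses.
ref = ["E F P T O", "7 2 5 3 8"]
hyp = ["E F B T O", "7 2 5 3 8"]
print("WER:", jiwer.wer(ref, hyp))

# Agreement between automated and manual VA (logMAR), expressed as an ICC.
va = pd.DataFrame({
    "eye":    [1, 1, 2, 2, 3, 3],
    "method": ["manual", "auto"] * 3,
    "logmar": [0.20, 0.18, 0.40, 0.48, 0.00, 0.10],
})
icc = pg.intraclass_corr(data=va, targets="eye", raters="method", ratings="logmar")
print(icc[["Type", "ICC", "CI95%"]])

# Paired comparison of test duration in seconds (automated vs. manual).
auto_time = [150, 120, 140]
manual_time = [100, 90, 110]
print(stats.ttest_rel(auto_time, manual_time))
```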
