ABSTRACT
Background: Prompt diagnosis of early gastric cancer (EGC) is crucial for improving patient survival. However, most previous computer-aided diagnosis (CAD) systems did not concretize or explain their diagnostic theories. We aimed to develop a logical anthropomorphic artificial intelligence (AI) diagnostic system, named ENDOANGEL-LA (logical anthropomorphic), for EGC under magnifying image-enhanced endoscopy (M-IEE).

Methods: We retrospectively collected data for 692 patients and 1897 images from Renmin Hospital of Wuhan University, Wuhan, China, between Nov 15, 2016 and May 7, 2019. The images were randomly assigned, by patient, to the training set and test set at a ratio of approximately 4:1. ENDOANGEL-LA was developed based on feature extraction combining quantitative analysis, deep learning (DL), and machine learning (ML). Eleven diagnostic feature indexes were integrated into seven ML models, and an optimal model was selected. The performance of ENDOANGEL-LA was evaluated and compared with that of endoscopists and sole DL models. Endoscopists' satisfaction with ENDOANGEL-LA and the sole DL model was also compared.

Findings: Random forest showed the best performance, and demarcation line and microstructure density were the most important feature indexes. The accuracy of ENDOANGEL-LA in images (88.76%) was significantly higher than that of the sole DL model (82.77%, p = 0.034) and the novices (71.63%, p<0.001), and comparable to that of the experts (88.95%). The accuracy of ENDOANGEL-LA in videos (87.00%) was significantly higher than that of the sole DL model (68.00%, p<0.001), and comparable to that of the endoscopists (89.00%). With the assistance of ENDOANGEL-LA, the accuracy of novices improved significantly (87.45%, p<0.001). Endoscopists' satisfaction with ENDOANGEL-LA was significantly higher than with the sole DL model.
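The model-selection step described in the Methods (several classical ML classifiers fitted on a small set of diagnostic feature indexes, with the best performer retained and feature importances inspected) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes scikit-learn as the ML library, uses synthetic data in place of the real 11 endoscopic feature indexes, and compares only three of the seven model families for brevity.

```python
# Hypothetical sketch of the paper's model-selection step: fit several
# classical ML classifiers on tabular feature indexes, keep the best one,
# and inspect feature importances. Data here are synthetic stand-ins for
# the 11 diagnostic feature indexes (e.g., demarcation line, microstructure
# density) used in the real system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))           # 11 feature indexes per image
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic EGC / non-EGC labels

# Three of the seven candidate model families, for illustration.
models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
}

# Select the model with the best mean cross-validated accuracy.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)

# Feature importances from the fitted random forest indicate which
# indexes drive the decision (the paper found demarcation line and
# microstructure density to be the most important).
rf = models["random_forest"].fit(X, y)
importances = rf.feature_importances_
```

The explainability claimed for ENDOANGEL-LA follows from this design choice: unlike an end-to-end DL model, a classifier over named feature indexes can report which clinical features contributed to each diagnosis.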
Interpretation: We established a logical anthropomorphic system (ENDOANGEL-LA) that can diagnose EGC under M-IEE with concretized diagnostic theory, high accuracy, and good explainability. It has the potential to increase interactivity between endoscopists and CAD systems and to improve endoscopists' trust in and acceptance of CADs.

Funding: This work was partly supported by a grant from the Hubei Province Major Science and Technology Innovation Project (2018-916-000-008) and the Fundamental Research Funds for the Central Universities (2042021kf0084).