ABSTRACT
OBJECTIVES: Cryptococcosis remains a severe global health concern, underscoring the urgent need for rapid and reliable diagnostic solutions. Point-of-care tests (POCTs), such as the cryptococcal antigen semi-quantitative (CrAgSQ) lateral flow assay (LFA), offer promise in addressing this challenge. However, their subjective interpretation poses a limitation. Our objectives encompass the development and validation of a digital platform based on Artificial Intelligence (AI), the assessment of its semi-quantitative LFA interpretation performance, and the exploration of its potential to quantify CrAg concentrations directly from LFA images. METHODS: We tested 53 cryptococcal antigen (CrAg) concentrations spanning from 0 to 5000 ng/ml. A total of 318 CrAgSQ LFAs were inoculated and systematically photographed twice, employing two distinct smartphones, resulting in a dataset of 1272 images. We developed an AI algorithm designed for the automated interpretation of CrAgSQ LFAs. Concurrently, we explored the relationship between quantified test-line intensities and CrAg concentrations. RESULTS: Our algorithm surpasses visual reading in sensitivity and shows fewer discrepancies (p < 0.0001). The system exhibited the capability to predict CrAg concentrations solely from a photograph of the LFA (Pearson correlation coefficient of 0.85). CONCLUSIONS: This technology's adaptability to various LFAs suggests broader applications. AI-driven interpretation has transformative potential, revolutionizing cryptococcosis diagnosis and offering standardized, reliable, and efficient POCT results.
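As a rough illustration of the quantification idea described above, the sketch below (Python) relates a background-subtracted test-line intensity to CrAg concentration and reports the Pearson correlation; the ROI synthesis, the synthetic strip data, and the log-linear calibration are illustrative assumptions, not the platform's actual algorithm.

```python
# Sketch: relating LFA test-line intensity to CrAg concentration.
# The ROI synthesis below stands in for test-line regions cropped from real
# photographs; the log-linear calibration is an illustrative choice.
import numpy as np
from scipy.stats import pearsonr

def line_intensity(gray_roi: np.ndarray) -> float:
    """Background-subtracted mean darkness of a test-line ROI (0-255 grayscale)."""
    background = np.median(gray_roi)          # local background estimate
    return float(max(background - gray_roi.mean(), 0.0))

def synthetic_roi(conc_ng_ml: float, rng: np.random.Generator) -> np.ndarray:
    """Toy ROI whose central band darkens with antigen concentration."""
    roi = np.full((20, 60), 200.0) + rng.normal(0, 2, (20, 60))
    roi[8:12, :] -= 25.0 * np.log1p(conc_ng_ml)
    return np.clip(roi, 0, 255)

rng = np.random.default_rng(0)
concentrations = np.array([0, 5, 10, 50, 100, 500, 1000, 5000], dtype=float)  # ng/ml
intensities = np.array([line_intensity(synthetic_roi(c, rng)) for c in concentrations])

r, p = pearsonr(intensities, np.log1p(concentrations))                     # strength of the relationship
slope, intercept = np.polyfit(intensities, np.log1p(concentrations), 1)    # simple calibration
print(f"Pearson r = {r:.2f}; estimated ng/ml at intensity 30: {np.expm1(slope * 30 + intercept):.0f}")
```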
ABSTRACT
Filariasis, a neglected tropical disease caused by roundworms, is a significant public health concern in many tropical countries. Microscopic examination of blood samples can detect and differentiate parasite species, but it is time-consuming and requires expert microscopists, a resource that is not always available. In this context, artificial intelligence (AI) can assist in the diagnosis of this disease by automatically detecting and differentiating microfilariae. In line with the target product profile for lymphatic filariasis defined by the World Health Organization, we developed an edge AI system running on a smartphone whose camera is aligned with the ocular of an optical microscope and which detects and differentiates filarial species in real time without an internet connection. Our object detection algorithm, based on the Single-Shot Detector (SSD) MobileNet V2 model, was developed with 115 cases: 85 cases with 1903 fields of view and 3342 labels for model training, and 30 cases with 484 fields of view and 873 labels for model validation prior to clinical validation. It detects microfilariae at 10x magnification and distinguishes four species at 40x magnification: Loa loa, Mansonella perstans, Wuchereria bancrofti, and Brugia malayi. We validated our augmented microscopy system in the clinical environment by replicating the diagnostic workflow, which encompassed examinations at 10x and 40x with the assistance of the AI models, analyzing 18 samples with the AI running on a mid-range smartphone. It achieved an overall precision of 94.14%, recall of 91.90%, and F1 score of 93.01% for the screening algorithm, and 95.46%, 97.81%, and 96.62%, respectively, for the species differentiation algorithm. This innovative solution has the potential to support filariasis diagnosis and monitoring, particularly in resource-limited settings where access to expert technicians and laboratory equipment is scarce.
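The sketch below illustrates, under stated assumptions, how such an SSD MobileNet V2 detector could be run on-device with TensorFlow Lite for a single field of view; the model file name, label list, and confidence threshold are placeholders, and the output-tensor order shown is the common layout of TFLite detection exports rather than the actual deployed model.

```python
# Sketch: on-device inference with a TFLite SSD MobileNet V2 detector.
# "microfilariae_ssd.tflite" and the label list are placeholders; the output tensor
# order (boxes, classes, scores) is the usual layout for TFLite detection exports
# but should be checked against the actual model. A quantized uint8 input is
# assumed; a float model would instead need normalized float32 input.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

LABELS = ["L. loa", "M. perstans", "W. bancrofti", "B. malayi"]

interpreter = tflite.Interpreter(model_path="microfilariae_ssd.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

# Resize the field of view to the network input size (e.g. 300x300) and add a batch dim.
h, w = int(inp["shape"][1]), int(inp["shape"][2])
frame = np.asarray(Image.open("field_of_view.jpg").resize((w, h)), dtype=np.uint8)
interpreter.set_tensor(inp["index"], frame[None, ...])
interpreter.invoke()

boxes   = interpreter.get_tensor(outs[0]["index"])[0]   # [N, 4] normalized ymin, xmin, ymax, xmax
classes = interpreter.get_tensor(outs[1]["index"])[0]   # [N] class indices
scores  = interpreter.get_tensor(outs[2]["index"])[0]   # [N] confidences

for box, cls, score in zip(boxes, classes, scores):
    if score >= 0.5:                                     # confidence threshold
        print(LABELS[int(cls)], f"{score:.2f}", box)
```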
Subjects
Artificial Intelligence, Microscopy, Microscopy/methods, Humans, Animals, Filariasis/diagnosis, Filariasis/parasitology, Microfilariae/isolation & purification, Algorithms, Smartphone, Lymphatic Filariasis/diagnosis, Lymphatic Filariasis/parasitology
ABSTRACT
Analysis of bone marrow aspirates (BMAs) is an essential step in the diagnosis of hematological disorders. This analysis is usually performed by visual examination of samples under a conventional optical microscope, a labor-intensive process that is limited by clinical experience and subject to high interobserver variability. In this work, we present a comprehensive digital microscopy system that enables BMA analysis for cell type counting and differentiation in an efficient and objective manner. This system not only provides an accessible and simple method to digitize, store, and analyze BMA samples remotely, but is also supported by an Artificial Intelligence (AI) pipeline that accelerates the differential cell counting process and reduces interobserver variability. It has been designed to integrate AI algorithms into the daily clinical routine and can be used in any regular hospital workflow.
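As a minimal sketch of the differential-counting step, the snippet below aggregates per-cell AI predictions into absolute counts and percentages; the cell-class taxonomy and example predictions are illustrative and not the system's actual output.

```python
# Sketch: aggregating per-cell classifications into a bone marrow differential count.
# The class list and the example predictions are illustrative only.
from collections import Counter

CELL_CLASSES = ["blast", "promyelocyte", "myelocyte", "band", "segmented",
                "erythroid precursor", "lymphocyte", "monocyte", "plasma cell"]

def differential_count(predicted_labels):
    """Return absolute counts and percentages for each cell class."""
    counts = Counter(predicted_labels)
    total = sum(counts.values())
    return {c: (counts.get(c, 0), 100.0 * counts.get(c, 0) / total) for c in CELL_CLASSES}

# Example: labels as they would come out of a per-cell classifier.
predictions = ["segmented"] * 120 + ["lymphocyte"] * 40 + ["blast"] * 6 + ["erythroid precursor"] * 80
for cell_class, (n, pct) in differential_count(predictions).items():
    if n:
        print(f"{cell_class:22s} {n:4d}  {pct:5.1f}%")
```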
Subjects
Artificial Intelligence, Hematologic Diseases, Humans, Bone Marrow, Microscopy, Hematologic Diseases/diagnosis, Algorithms
ABSTRACT
BACKGROUND: Identifying predictive non-invasive biomarkers of immunotherapy response is crucial to avoid premature treatment interruptions or ineffective prolongation. Our aim was to develop a non-invasive biomarker for predicting clinical durable benefit from immunotherapy, based on the integration of radiomics and clinical data monitored through early anti-PD-1/PD-L1 monoclonal antibody treatment in patients with advanced non-small cell lung cancer (NSCLC). METHODS: In this study, 264 patients with pathologically confirmed stage IV NSCLC treated with immunotherapy were retrospectively collected from two institutions. The cohort was randomly divided into a training set (n = 221) and an independent test set (n = 43), ensuring the balanced availability of baseline and follow-up data for each patient. Clinical data corresponding to the start of treatment were retrieved from electronic patient records, and blood test variables after the first and third cycles of immunotherapy were also collected. Additionally, traditional radiomics and deep-radiomics features were extracted from the primary tumors of the computed tomography (CT) scans before treatment and during patient follow-up. Random Forest was used to implement baseline and longitudinal models using clinical and radiomics data separately, and an ensemble model was then built integrating both sources of information. RESULTS: The integration of longitudinal clinical and deep-radiomics data significantly improved clinical durable benefit prediction at 6 and 9 months after treatment in the independent test set, achieving areas under the receiver operating characteristic curve of 0.824 (95% CI: [0.658, 0.953]) and 0.753 (95% CI: [0.549, 0.931]), respectively. Kaplan-Meier survival analysis showed that, for both endpoints, the signatures significantly stratified high- and low-risk patients (p-value < 0.05) and were significantly correlated with progression-free survival (PFS6 model: C-index 0.723, p-value = 0.004; PFS9 model: C-index 0.685, p-value = 0.030) and overall survival (PFS6 model: C-index 0.768, p-value = 0.002; PFS9 model: C-index 0.736, p-value = 0.023). CONCLUSIONS: Integrating multidimensional and longitudinal data improved the prediction of clinical durable benefit from immunotherapy in patients with advanced non-small cell lung cancer. The selection of effective treatment and the appropriate evaluation of clinical benefit are important for better managing cancer patients with prolonged survival and preserving quality of life.
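A minimal sketch of the ensembling strategy, assuming synthetic stand-ins for the clinical and radiomics feature matrices: two Random Forests are trained on each source separately and their predicted probabilities are averaged, then evaluated with ROC AUC on a held-out set.

```python
# Sketch: ensembling clinical and radiomics Random Forests by averaging predicted
# probabilities of durable clinical benefit. The feature matrices are synthetic
# placeholders standing in for the baseline/longitudinal variables described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 264
X_clinical  = rng.normal(size=(n, 12))    # e.g. blood-test and baseline variables
X_radiomics = rng.normal(size=(n, 50))    # e.g. traditional + deep-radiomics features
y = (X_clinical[:, 0] + X_radiomics[:, :3].sum(1) + rng.normal(0, 1, n) > 0).astype(int)

idx_train, idx_test = train_test_split(np.arange(n), test_size=43, random_state=0, stratify=y)

rf_clin = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_clinical[idx_train], y[idx_train])
rf_rad  = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_radiomics[idx_train], y[idx_train])

# Ensemble: average the two probability estimates (weights could also be tuned).
p_ensemble = 0.5 * (rf_clin.predict_proba(X_clinical[idx_test])[:, 1]
                    + rf_rad.predict_proba(X_radiomics[idx_test])[:, 1])
print("Test AUC:", round(roc_auc_score(y[idx_test], p_ensemble), 3))
```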
Subjects
Non-Small Cell Lung Carcinoma, Lung Neoplasms, Humans, B7-H1 Antigen, Quality of Life, Retrospective Studies, Immunotherapy, Monoclonal Antibodies, Immune Checkpoint Inhibitors
ABSTRACT
Cryptococcosis is a fungal infection that causes serious illness, particularly in immunocompromised individuals such as people living with HIV. Point-of-care tests (POCTs) can help identify and diagnose patients, with several advantages including rapid results and ease of use. The cryptococcal antigen (CrAg) lateral flow assay (LFA) has demonstrated excellent performance in diagnosing cryptococcosis, and it is particularly useful in resource-limited settings where laboratory-based tests may not be readily available. The use of artificial intelligence (AI) for the interpretation of rapid diagnostic tests can improve the accuracy and speed of test results, reduce the cost and workload for healthcare professionals, and reduce the subjectivity associated with their interpretation. In this work, we analyze a smartphone-based digital system assisted by AI to automatically interpret the CrAg LFA and to estimate the antigen concentration in the strip. The system showed excellent performance for predicting the LFA qualitative interpretation, with an area under the receiver operating characteristic curve of 0.997. Its potential to predict antigen concentration based solely on a photograph of the LFA was also demonstrated, with a strong correlation found between band intensity and antigen concentration (Pearson correlation coefficient of 0.953). The system, which is connected to a cloud web platform, allows for case identification, quality control, and real-time monitoring.
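A small sketch of the qualitative-reading evaluation, assuming a scalar band-intensity score per strip and a binary visual reference; the synthetic scores below only illustrate how the ROC AUC and an operating threshold could be computed.

```python
# Sketch: ROC analysis of a band-intensity score against the qualitative
# (positive/negative) reference reading of the CrAg LFA. The intensity scores and
# reference labels below are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
neg_scores = rng.normal(2, 1, 200)        # test-line intensity on true negatives
pos_scores = rng.normal(8, 2, 200)        # test-line intensity on true positives
scores = np.concatenate([neg_scores, pos_scores])
labels = np.concatenate([np.zeros(200), np.ones(200)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)               # Youden's J for an operating threshold
print(f"AUC = {auc:.3f}; suggested cutoff = {thresholds[best]:.2f}")
```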
ABSTRACT
BACKGROUND: Rapid diagnostic tests (RDTs) are being widely used to manage the COVID-19 pandemic. However, many results remain unreported or unconfirmed, hindering correct epidemiological surveillance. OBJECTIVE: Our aim was to evaluate an artificial intelligence-based smartphone app, connected to a cloud web platform, to automatically and objectively read RDT results and assess its impact on COVID-19 pandemic management. METHODS: Overall, 252 human sera were used to inoculate a total of 1165 RDTs for training and validation purposes. We then conducted two field studies to assess performance in real-world scenarios by testing 172 antibody RDTs at two nursing homes and 96 antigen RDTs at one hospital emergency department. RESULTS: The field studies demonstrated high levels of sensitivity (100%) and specificity (94.4%, CI 92.8%-96.1%) for reading the IgG band of COVID-19 antibody RDTs compared to visual readings from health workers. Sensitivity for detecting the IgM test band was 100%, and specificity was 95.8% (CI 94.3%-97.3%). All COVID-19 antigen RDTs were correctly read by the app. CONCLUSIONS: The proposed reading system is automatic, reduces the variability and uncertainty associated with RDT interpretation, and can be used to read different RDT brands. The web platform serves as a real-time epidemiological tracking tool and facilitates the reporting of positive RDTs to the relevant health authorities.
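The following sketch shows how sensitivity, specificity, and normal-approximation 95% confidence intervals of the app's readings against the visual reference could be computed; the confusion-matrix counts are illustrative, not the study data.

```python
# Sketch: sensitivity/specificity of the automated RDT reading against the visual
# reference, with normal-approximation 95% confidence intervals. Counts are
# illustrative, not the study data.
import math

def proportion_ci(k, n, z=1.96):
    """Point estimate and normal-approximation CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half, 0.0), min(p + half, 1.0)

tp, fn = 60, 0      # app-positive / app-negative among reference-positive strips
tn, fp = 102, 6     # app-negative / app-positive among reference-negative strips

sens, s_lo, s_hi = proportion_ci(tp, tp + fn)
spec, p_lo, p_hi = proportion_ci(tn, tn + fp)
print(f"Sensitivity {sens:.1%} (95% CI {s_lo:.1%}-{s_hi:.1%})")
print(f"Specificity {spec:.1%} (95% CI {p_lo:.1%}-{p_hi:.1%})")
```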
Subjects
Artificial Intelligence, COVID-19, SARS-CoV-2, Smartphone, Humans, COVID-19/diagnosis, Immunoassay/methods, Pandemics, Sensitivity and Specificity
ABSTRACT
The main objective of this work is to develop and evaluate an artificial intelligence system based on deep learning capable of automatically identifying, quantifying, and characterizing COVID-19 pneumonia patterns in order to assess disease severity and predict clinical outcomes, and to compare its prediction performance with human-reader severity assessment and whole-lung radiomics. We propose a deep learning-based scheme to automatically segment the different lesion subtypes in nonenhanced CT scans. The automatic lesion quantification was used to predict clinical outcomes. The proposed technique was independently tested in a multicentric cohort of 103 patients, retrospectively collected between March and July 2020. Segmentation of lesion subtypes was evaluated using both overlap-based (Dice) and distance-based (Hausdorff and average surface) metrics, while the proposed system for predicting clinically relevant outcomes was assessed using the area under the curve (AUC). Additionally, other metrics including sensitivity, specificity, positive predictive value, and negative predictive value were estimated, and 95% confidence intervals were calculated. The agreement between the automatic estimate of parenchymal damage (%) and the radiologists' severity scoring was strong, with a Spearman correlation coefficient (R) of 0.83. The automatic quantification of lesion subtypes was able to predict patient mortality, admission to the intensive care unit (ICU), and need for mechanical ventilation with AUCs of 0.87, 0.73, and 0.68, respectively. The proposed artificial intelligence system enabled better prediction of these clinically relevant outcomes than the radiologists' interpretation and whole-lung radiomics. In conclusion, deep learning lesion subtyping in COVID-19 pneumonia from noncontrast chest CT enables quantitative assessment of disease severity and better prediction of clinical outcomes with respect to whole-lung radiomics or the radiologists' severity score.
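As a minimal sketch of the evaluation quantities mentioned above, the snippet below computes the Dice overlap between a predicted and a reference lesion mask and the percentage of affected parenchyma; the masks are small synthetic volumes standing in for real CT segmentations.

```python
# Sketch: Dice overlap for a lesion-subtype mask and percentage of affected
# parenchyma, the kind of quantities used for severity scoring above. Masks are
# small synthetic volumes standing in for CT segmentations.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

rng = np.random.default_rng(0)
lung_mask   = np.ones((32, 64, 64), dtype=bool)              # placeholder lung segmentation
lesion_pred = rng.random((32, 64, 64)) > 0.9                 # predicted lesion voxels
lesion_ref  = rng.random((32, 64, 64)) > 0.9                 # reference lesion voxels

print(f"Dice = {dice(lesion_pred, lesion_ref):.3f}")
print(f"Parenchymal damage = {100.0 * lesion_pred[lung_mask].mean():.1f}% of lung volume")
```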
Subjects
COVID-19, Deep Learning, Artificial Intelligence, COVID-19/diagnostic imaging, Humans, Retrospective Studies, SARS-CoV-2, X-Ray Computed Tomography/methods
ABSTRACT
Visual inspection of microscopic samples is still the gold-standard diagnostic methodology for many global health diseases. Soil-transmitted helminth infection affects 1.5 billion people worldwide and is the most prevalent of the Neglected Tropical Diseases. It is diagnosed by manual examination of stool samples under the microscope, which is a time-consuming task that requires trained and highly specialized personnel. Artificial intelligence could automate this task, making diagnosis more accessible, but it needs a large amount of annotated training data from experts. In this work, we propose the use of crowdsourced annotated medical images to train AI models (neural networks) for the detection of soil-transmitted helminthiasis in microscopy images of stool samples, leveraging non-expert knowledge collected through playing a video game. We collected annotations made by both school-age children and adults and show that, although the quality of crowdsourced annotations made by school-age children is slightly inferior to that of annotations made by adults, AI models trained on these crowdsourced annotations perform similarly (AUC of 0.928 and 0.939, respectively) and reach performance similar to that of the AI model trained on expert annotations (AUC of 0.932). We also show the impact of training sample size and continuous training on the performance of the AI models. In conclusion, the workflow proposed in this work combines collective and artificial intelligence for detecting soil-transmitted helminthiasis. Embedded within a digital health platform, it can be applied to any other medical image analysis task and contribute to reducing the burden of disease.
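A minimal sketch of one way the crowdsourced labels could be consolidated before training, assuming tile-level presence/absence annotations from several players aggregated by majority vote; the annotation tuples are illustrative, and a real pipeline might instead weight annotators by measured reliability.

```python
# Sketch: consolidating multiple non-expert (crowd) annotations of the same image
# tile into a single training label by majority vote. Annotation data are
# illustrative placeholders.
from collections import defaultdict

# (tile_id, annotator_id, label), where label 1 means "parasite egg present".
crowd_annotations = [
    ("tile_01", "player_a", 1), ("tile_01", "player_b", 1), ("tile_01", "player_c", 0),
    ("tile_02", "player_a", 0), ("tile_02", "player_b", 0),
    ("tile_03", "player_b", 1), ("tile_03", "player_c", 1),
]

votes = defaultdict(list)
for tile_id, _annotator, label in crowd_annotations:
    votes[tile_id].append(label)

# Strict majority required; ties resolve to "absent".
training_labels = {tile: int(sum(v) * 2 >= len(v) + 1) for tile, v in votes.items()}
print(training_labels)   # {'tile_01': 1, 'tile_02': 0, 'tile_03': 1}
```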
Subjects
Artificial Intelligence, Crowdsourcing, Child, Global Health, Humans, Microscopy, Neural Networks (Computer)
ABSTRACT
Soil-transmitted helminths (STH) are the most prevalent pathogens among the group of neglected tropical diseases (NTDs). The Kato-Katz technique is the diagnostic method recommended by the World Health Organization (WHO), although it often shows decreased sensitivity in low-transmission settings and is labour-intensive. Visual reading of Kato-Katz preparations requires the samples to be analyzed within a short period of time after their preparation. Digitizing the samples could provide a solution that allows the samples to be stored in a digital database and analyzed remotely. Artificial intelligence (AI) methods based on digitized samples can support diagnosis by performing an objective and automatic quantification of infection. In this work, we propose an end-to-end pipeline for microscopy image digitization and automatic analysis of digitized images of STH. Our solution includes (a) a digitization system based on a mobile app that digitizes microscope samples using a 3D-printed microscope adapter, (b) a telemedicine platform for remote analysis and labelling, and (c) novel deep learning algorithms for automatic assessment and quantification of parasitological infection by STH. The deep learning algorithm was trained and tested on 51 slides of stool samples containing 949 Trichuris spp. eggs from 6 different subjects. The algorithm was evaluated using a cross-validation strategy, obtaining a mean precision of 98.44% and a mean recall of 80.94%. The results also demonstrated the method's potential to generalize to the identification of other types of helminth eggs. Additionally, the AI-assisted quantification of STH based on digitized samples was compared to that performed using conventional microscopy, showing good agreement between measurements. In conclusion, this work presents a comprehensive pipeline for smartphone-assisted microscopy, integrated with a telemedicine platform for automatic image analysis and quantification of STH infection using AI models.
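The sketch below illustrates, with made-up coordinates, how detection precision and recall could be computed by greedily matching detected egg centroids to expert-marked positions within a distance tolerance; the 50-pixel radius is an assumption, not the paper's matching criterion.

```python
# Sketch: precision/recall of automatic egg detections against expert-marked egg
# positions in a digitized Kato-Katz image, using greedy nearest-match assignment.
# Coordinates and the 50-pixel match radius are illustrative.
import numpy as np

def match_detections(detected, reference, max_dist=50.0):
    """Greedy one-to-one matching of detected to reference egg centroids."""
    detected, reference = list(detected), list(reference)
    tp = 0
    for d in detected:
        if not reference:
            break
        dists = [np.hypot(d[0] - r[0], d[1] - r[1]) for r in reference]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            tp += 1
            reference.pop(j)              # each reference egg can be matched only once
    fp = len(detected) - tp
    fn = len(reference)                   # reference eggs left unmatched
    return tp, fp, fn

tp, fp, fn = match_detections(detected=[(120, 80), (400, 310), (700, 90)],
                              reference=[(118, 85), (405, 300)])
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f} recall={recall:.2f}")
```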
Subjects
Deep Learning, Microscopy/methods, Telemedicine/methods, Trichuriasis/diagnosis, Trichuris/isolation & purification, Algorithms, Animals, Humans, Trichuriasis/parasitology
ABSTRACT
Subtle interstitial changes in the lung parenchyma of smokers, known as Interstitial Lung Abnormalities (ILA), have been associated with clinical outcomes, including mortality, even in the absence of Interstitial Lung Disease (ILD). Although several methods have been proposed for the automatic identification of more advanced ILD patterns, few have tackled ILA, which likely precedes the development of ILD in some cases. In this context, we propose a novel methodology for automated identification and classification of ILA patterns in computed tomography (CT) images. The proposed method is an ensemble of deep convolutional neural networks (CNNs) that detects more discriminative features by incorporating two-, two-and-a-half-, and three-dimensional architectures, thereby enabling more accurate classification. This technique is implemented by first training each individual CNN and then combining their output responses to form the overall ensemble output. To train and test the system we used 37,424 radiographic tissue samples corresponding to eight different parenchymal feature classes from 208 CT scans. The resulting ensemble performance, including an average sensitivity of 91.41% and an average specificity of 98.18%, suggests it is potentially a viable method for identifying radiographic patterns that precede the development of ILD.
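A minimal sketch of the ensemble combination step, assuming each member network outputs per-sample class probabilities that are simply averaged; the probability arrays are random placeholders for real 2D/2.5D/3D CNN outputs.

```python
# Sketch: ensembling per-model class probabilities (e.g. from 2-D, 2.5-D and 3-D
# CNNs) by averaging their softmax outputs. The probability arrays below are
# placeholders for real network outputs.
import numpy as np

N_CLASSES = 8     # parenchymal feature classes

def ensemble_predict(prob_list):
    """Average the [n_samples, n_classes] probability maps of several models."""
    stacked = np.stack(prob_list, axis=0)          # [n_models, n_samples, n_classes]
    mean_probs = stacked.mean(axis=0)
    return mean_probs.argmax(axis=1), mean_probs

rng = np.random.default_rng(0)
probs_2d  = rng.dirichlet(np.ones(N_CLASSES), size=5)   # stand-ins for model outputs
probs_25d = rng.dirichlet(np.ones(N_CLASSES), size=5)
probs_3d  = rng.dirichlet(np.ones(N_CLASSES), size=5)

labels, probs = ensemble_predict([probs_2d, probs_25d, probs_3d])
print(labels)
```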
Subjects
Interstitial Lung Diseases/diagnosis, Neural Networks (Computer), Area Under the Curve, Factual Databases, Humans, Lung/diagnostic imaging, Interstitial Lung Diseases/diagnostic imaging, Prognosis, ROC Curve, Computer-Assisted Radiographic Image Interpretation, X-Ray Computed Tomography
ABSTRACT
Lung vessel segmentation has been widely explored by the biomedical image processing community; however, the differentiation of arterial from venous irrigation is still a challenge. Pulmonary artery-vein (AV) segmentation using computed tomography (CT) is growing in importance owing to its undeniable utility in multiple cardiopulmonary pathological states, especially those implying vascular remodelling, as it allows the study of both flow systems separately. We present a new framework to approach the separation of tree-like structures using local information and a specifically designed graph-cut methodology that ensures connectivity as well as the spatial and directional consistency of the derived subtrees. This framework has been applied to pulmonary AV classification using a random forest (RF) pre-classifier to exploit the local anatomical differences of arteries and veins. The evaluation of the system was performed using 192 bronchopulmonary segment phantoms, 48 anthropomorphic pulmonary CT phantoms, and 26 lungs from noncontrast CT images with precise voxel-based reference standards obtained by manually labelling the vessel trees. The experiments reveal a relevant improvement in the accuracy (~20%) of vessel particle classification with the proposed framework with respect to using only the pre-classification based on local information applied to the whole lung under study. The results demonstrated accurate differentiation between arteries and veins in both clinical and synthetic cases, specifically when the image quality can guarantee a good airway segmentation, which opens up a wide range of possibilities in the clinical study of cardiopulmonary diseases.
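As a rough sketch of the pre-classification stage, the snippet below trains a random forest on illustrative local particle descriptors and produces per-particle artery likelihoods; the feature set and synthetic data are assumptions, and the graph-cut refinement that enforces subtree consistency is only indicated in a comment.

```python
# Sketch: a random-forest pre-classification of vessel particles into artery/vein
# from local descriptors. The graph-cut refinement that enforces tree consistency
# is not shown; feature names and the synthetic data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_particles = 1000
# Example local features per particle: vessel scale, distance to nearest airway,
# orientation similarity to that airway, local intensity.
X = rng.normal(size=(n_particles, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, n_particles) > 0).astype(int)  # 1 = artery

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:800], y[:800])
prior_artery = rf.predict_proba(X[800:])[:, 1]     # per-particle artery likelihood
# These per-particle likelihoods would become the unary terms of a graph-cut whose
# pairwise terms encode spatial/directional consistency along the vessel tree.
print(f"mean artery probability on held-out particles: {prior_artery.mean():.3f}")
```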
Subjects
Pulmonary Artery/diagnostic imaging, Pulmonary Veins/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography/methods, Humans, Imaging Phantoms
ABSTRACT
Recent studies show that pulmonary vascular diseases may specifically affect arteries or veins through different physiologic mechanisms. To detect changes in the two vascular trees, physicians manually analyze the chest computed tomography (CT) images of patients in search of abnormalities. This process is time-consuming, difficult to standardize, and thus not feasible for large clinical studies or useful in real-world clinical decision making. Therefore, automatic separation of arteries and veins in CT images is becoming of great interest, as it may help physicians accurately diagnose pathological conditions. In this paper, we present a novel, fully automatic approach to classify vessels from chest CT images into arteries and veins. The algorithm follows three main steps: first, a scale-space particle segmentation to isolate vessels; then, a 3-D convolutional neural network (CNN) to obtain a first classification of vessels; and finally, graph-cut optimization to refine the results. To justify the use of the proposed CNN architecture, we compared different 2-D and 3-D CNNs that may use local information from bronchus- and vessel-enhanced images provided to the network with different strategies. We also compared the proposed CNN approach with a random forest (RF) classifier. The methodology was trained and evaluated on the superior and inferior lobes of the right lung of 18 clinical cases with noncontrast chest CT scans, in comparison with manual classification. The proposed algorithm achieves an overall accuracy of 94%, which is higher than the accuracy obtained using other CNN architectures or the RF. Our method was also validated on contrast-enhanced CT scans of patients with chronic thromboembolic pulmonary hypertension to demonstrate that the model generalizes well to contrast-enhanced modalities. The proposed method outperforms state-of-the-art methods, paving the way for future use of 3-D CNNs for artery/vein classification in CT images.
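The snippet below sketches a small 3-D CNN that classifies a vessel-centred CT patch as artery or vein; the patch size, channel counts, and layer arrangement are illustrative and do not reproduce the architecture evaluated in the paper.

```python
# Sketch: a minimal 3-D CNN classifying a vessel-centred CT patch as artery or vein.
# Patch size, channel counts and layer arrangement are illustrative assumptions.
import torch
import torch.nn as nn

class ArteryVein3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: [batch, 1, D, H, W]
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = ArteryVein3DCNN()
patch = torch.randn(4, 1, 32, 32, 32)           # four synthetic 32^3 patches
logits = model(patch)
print(logits.shape)                             # torch.Size([4, 2])
```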
Subjects
Deep Learning, Computer-Assisted Image Processing/methods, Pulmonary Artery/diagnostic imaging, Pulmonary Veins/diagnostic imaging, X-Ray Computed Tomography/methods, Algorithms, Humans, Chronic Obstructive Pulmonary Disease/diagnostic imaging
ABSTRACT
In this article we propose and validate a fully automatic tool for emphysema classification in Computed Tomography (CT) images. We hypothesize that a relatively simple Convolutional Neural Network (CNN) architecture can learn even better discriminative features from the input data than more complex and deeper architectures. The proposed architecture comprises only four convolutional and three pooling layers, where the input is a 2.5D multiview representation of the pulmonary segment tissue to classify, corresponding to its axial, sagittal, and coronal views. The proposed architecture is compared to similar 2D and 3D CNNs, and to more complex architectures involving a larger number of parameters (up to six times more). The method has been evaluated on 1553 tissue samples and achieves an overall sensitivity of 81.78% and a specificity of 97.34%; the results show that the proposed method outperforms deeper state-of-the-art architectures specifically designed for lung pattern classification. The method also shows satisfactory results in full-lung classification.
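A compact sketch consistent with the described design, with four convolutional and three pooling layers over a three-channel input stacking the axial, sagittal, and coronal patches; the filter counts, patch size, and number of classes are illustrative assumptions.

```python
# Sketch: a compact 2.5-D CNN whose input stacks the axial, sagittal and coronal
# patches of a tissue sample as three channels, with four convolutional and three
# pooling layers. Filter counts, patch size and class count are illustrative.
import torch
import torch.nn as nn

class Emphysema25DCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # conv1 + pool1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv2 + pool2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # conv3 + pool3
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),                   # conv4
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_classes)

    def forward(self, x):                    # x: [batch, 3 views, 32, 32]
        return self.classifier(self.features(x).flatten(1))

model = Emphysema25DCNN(n_classes=6)         # e.g. emphysema subtypes plus normal tissue
views = torch.randn(8, 3, 32, 32)            # synthetic multiview patches
print(model(views).shape)                    # torch.Size([8, 6])
```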