ABSTRACT
The front-line imaging modalities computed tomography (CT) and X-ray play important roles in triaging COVID patients. Thoracic CT is accepted to have higher sensitivity than chest X-ray for COVID diagnosis. However, considering the limited access to resources (both hardware and trained personnel) and issues related to decontamination, CT may not be ideal for triaging suspected subjects. An artificial intelligence (AI)-assisted, X-ray-based application for triaging and monitoring, one that helps experienced radiologists identify COVID patients in a timely manner and can additionally delineate and quantify the disease region, is seen as a promising solution for widespread clinical use. Our proposed solution differs from existing solutions presented by industry and academic communities. We demonstrate a functional AI model that triages by classifying and segmenting a single chest X-ray image, while the AI model is trained using both X-ray and CT data. We report on how such a multi-modal training process improves the solution compared to single-modality (X-ray only) training. The multi-modal solution increases the AUC (area under the receiver operating characteristic curve) from 0.89 to 0.93 for binary classification between COVID-19 and non-COVID-19 cases. It also improves the Dice coefficient (from 0.59 to 0.62) for localizing the COVID-19 pathology. To compare the performance of experienced readers to the AI model, a reader study was also conducted. The AI model showed good consistency with respect to the radiologists: the Dice score between the two radiologists on the COVID group was 0.53, while the AI achieved Dice values of 0.52 and 0.55 when compared to the segmentations done by the two radiologists separately. From a classification perspective, the AUCs of the two readers were 0.87 and 0.81, while the AUC of the AI was 0.93 on the reader study dataset. We also conducted a generalization study by comparing our method to state-of-the-art methods on independent datasets.
The results show better performance from the proposed method. Leveraging multi-modal information during development benefits single-modality inference.
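The two evaluation metrics used throughout this abstract, the Dice coefficient for segmentation overlap and the AUC for classification, can be sketched as follows (a minimal NumPy illustration for readers unfamiliar with the metrics; the function names are ours, not the paper's):

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation:
    fraction of (positive, negative) pairs ranked correctly."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

A perfectly ranked score list gives an AUC of 1.0, and identical masks give a Dice of 1.0; the reported 0.93 AUC and 0.62 Dice sit between chance (0.5 and 0) and these ideals.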
ABSTRACT
Ultrasound scanners image the anatomy modulated by their characteristic texture. For certain anatomical regions such as the liver, the scanner's characteristic texture itself becomes the anatomical marker. Deep learning (DL) models trained on one scanner type not only model the anatomical content, they also learn the scanner's characteristic texture. Portability of such models across scanner types is affected by the learnt styles, resulting in suboptimal outcomes (e.g., for segmentation models, lower Dice values when inferring on images procured from a different scanner type). Instead of retraining the DL model to accommodate this diversity, we transform the texture of the previously unseen data to match the training distribution. Neural style transfer in prior art has used features from the popular VGG network to accomplish this. We not only use a previously trained DL model for the image interpretation task (e.g., segmentation), we also utilize its feature maps to accomplish the style transfer, reducing the complexity of the algorithm pipeline. We demonstrate the improvement in segmentation outcome after such a style transfer, without retraining the existing model.
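The texture statistic that neural style transfer typically matches is the Gram matrix of a feature map: channel-wise correlations that summarize style independently of spatial layout. A minimal NumPy sketch of this idea (our illustration; the paper computes it on feature maps of the trained segmentation network rather than VGG):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-wise
    correlations that capture texture, not spatial arrangement."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T / (C * H * W)

def style_loss(feat_src, feat_ref):
    """Mean squared difference of Gram matrices; minimized when the
    source image's texture statistics match the reference scanner's."""
    diff = gram_matrix(feat_src) - gram_matrix(feat_ref)
    return float(np.mean(diff ** 2))
```

In a full pipeline, this loss would be minimized with respect to the input image so that the unseen scanner's texture is pulled toward the training distribution before segmentation.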
Subjects
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Ultrasonography , Liver/diagnostic imaging
ABSTRACT
Automated Breast Ultrasound (ABUS) is highly effective as an adjunct technology for breast cancer screening. Automation can greatly enhance the efficiency of clinicians sifting through the large amount of data in ABUS volumes to spot lesions. We have implemented a fully automatic, generic algorithm pipeline for the detection and characterization of lesions in such 3D volumes. We compare a wide range of region-description features on their effectiveness at the dual goals of lesion detection and characterization. On multiple feature images, we compute region descriptors at lesion candidate locations, obviating the need for explicit lesion segmentation. We use a Random Forest classifier to evaluate candidate region descriptors for lesion detection. Further, we categorize true lesions as malignant or other masses (e.g., cysts). Over a database of 145 volumes with 36 biopsy-verified lesions, we achieved Area Under the Curve (AUC) values of 92.6% for lesion detection and 89% for lesion characterization.
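The key design choice here is describing a candidate region directly, without segmenting the lesion first. A minimal sketch of such a region descriptor (our hypothetical stand-in using simple intensity statistics; the paper compares a much wider range of features, and the descriptors would then feed a Random Forest classifier):

```python
import numpy as np

def region_descriptor(vol, center, radius=5, nbins=8):
    """Describe a cubic region of a 3D volume around a candidate
    location: mean, standard deviation, and a normalized intensity
    histogram. No explicit lesion segmentation is needed."""
    z, y, x = center
    patch = vol[max(z - radius, 0):z + radius + 1,
                max(y - radius, 0):y + radius + 1,
                max(x - radius, 0):x + radius + 1]
    hist, _ = np.histogram(patch, bins=nbins, range=(0.0, 1.0))
    hist = hist / hist.sum()  # normalize so descriptors are comparable
    return np.concatenate([[patch.mean(), patch.std()], hist])
```

Descriptors computed this way at candidate locations across a volume form the feature vectors on which a classifier separates lesions from background, and true lesions into malignant versus other masses.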
Subjects
Automation , Breast Neoplasms/diagnostic imaging , Early Detection of Cancer/methods , Ultrasonography, Mammary/methods , Algorithms , Female , Humans
ABSTRACT
Image-based quantitative stratification of the left ventricle (LV) across a population helps in unraveling the structure-function symbiosis of the heart. An unbiased, reference-free grouping scheme that automatically determines the number of clusters, together with a physioanatomically relevant strategy that aligns the intra-cluster LV shapes, would enable the robust construction of a pathology-stratified cardiac atlas. This paper achieves this hitherto elusive stratification and alignment by adapting the conventional strategies routinely followed by clinicians. The individual LV shape models (N=127) are independently oriented to an "attitudinally consistent orientation" that captures the physioanatomic variations of LV morphology. An affinity propagation technique based on the automatically identified inter-LV landmark distances is used to group the LV shapes. The proposed algorithm is computationally efficient and, if the inter-cluster variations are linked to pathology, could provide a clinically relevant cardiac atlas.
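Affinity propagation is well suited here because it determines the number of clusters automatically from a similarity matrix, here derived from inter-LV landmark distances, rather than requiring it up front. A compact NumPy sketch of the standard message-passing updates (our generic implementation on toy data, not the paper's code or its landmark features):

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Cluster items from a similarity matrix S (larger = more similar).
    The diagonal of S holds each item's 'preference' to be an exemplar;
    the number of clusters emerges from the message passing."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    rows = np.arange(n)
    for _ in range(iters):
        # responsibilities: R(i,k) = S(i,k) - max_{k'!=k} [A(i,k') + S(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[rows, idx]
        AS[rows, idx] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[rows, idx] = S[rows, idx] - second
        R = damping * R + (1 - damping) * R_new
        # availabilities: A(i,k) = min(0, R(k,k) + sum of positive R(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = np.diag(A_new).copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    exemplars = np.where(np.diag(R + A) > 0)[0]
    labels = np.argmax(S[:, exemplars], axis=1)
    labels[exemplars] = np.arange(len(exemplars))
    return exemplars, labels
```

With LV shapes, S would be built from the (negated) inter-landmark distances, and the diagonal preference tunes how readily a shape becomes a cluster exemplar.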