Results 1 - 20 of 178
1.
Eur J Nucl Med Mol Imaging ; 51(4): 1173-1184, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38049657

ABSTRACT

PURPOSE: The automatic segmentation and detection of prostate cancer (PC) lesions throughout the body are extremely challenging due to the lesions' complexity and variability in appearance, shape, and location. In this study, we investigated the performance of a three-dimensional (3D) convolutional neural network (CNN) to automatically characterize metastatic lesions throughout the body in a dataset of PC patients with recurrence after radical prostatectomy. METHODS: We retrospectively collected [68Ga]Ga-PSMA-11 PET/CT images from 116 patients with metastatic PC at two centers: center 1 provided the data for fivefold cross-validation (n = 78) and internal testing (n = 19), and center 2 provided the data for external testing (n = 19). PET and CT data were jointly input into a 3D U-Net to achieve whole-body segmentation and detection of PC lesions. The performance in both the segmentation and the detection of lesions throughout the body was evaluated using established metrics, including the Dice similarity coefficient (DSC) for segmentation and the recall, precision, and F1-score for detection. The correlation and consistency between tumor burdens (PSMA-TV and TL-PSMA) calculated from automatic segmentation and manual ground truth were assessed by linear regression and Bland-Altman plots. RESULTS: On the internal test set, the DSC, precision, recall, and F1-score values were 0.631, 0.961, 0.721, and 0.824, respectively. On the external test set, the corresponding values were 0.596, 0.888, 0.792, and 0.837, respectively. Our approach outperformed previous studies in segmenting and detecting metastatic lesions throughout the body. Tumor burden indicators derived from deep learning and the ground truth showed strong correlation (R2 ≥ 0.991, all P < 0.05) and consistency. CONCLUSION: Our 3D CNN accurately characterizes whole-body tumors in relapsed PC patients; its results are highly consistent with those of manual contouring.
This automatic method is expected to improve work efficiency and to aid in the assessment of tumor burden.
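The segmentation and detection metrics used above are standard; as a minimal sketch (function names are our own), they can be computed from binary masks and matched-lesion counts like this:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def detection_scores(tp, fp, fn):
    """Lesion-level precision, recall, and F1 from matched counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)
```

With the internal-test precision (0.961) and recall (0.721) reported above, the harmonic mean reproduces the stated F1 of 0.824 up to rounding.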


Subject(s)
Deep Learning; Prostatic Neoplasms; Male; Humans; Gallium Radioisotopes; Positron Emission Tomography Computed Tomography/methods; Gallium Isotopes; Retrospective Studies; Neoplasm Recurrence, Local/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/surgery; Prostatic Neoplasms/pathology; Prostatectomy; Edetic Acid
2.
Scand J Gastroenterol ; 59(8): 882-892, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38775234

ABSTRACT

BACKGROUND: Adenoma detection rate (ADR) is higher after a positive fecal immunochemical test (FIT) compared to direct screening colonoscopy. OBJECTIVE: This meta-analysis evaluated how ADR, the rates of advanced adenoma detection (AADR), colorectal cancer detection (CDR), and sessile serrated lesion detection (SSLDR) are affected by different FIT positivity thresholds. METHODS: We searched MEDLINE, EMBASE, CINAHL, and EBM Reviews databases for studies reporting ADR, AADR, CDR, and SSLDR according to different FIT cut-off values in asymptomatic average-risk individuals aged 50-74 years old. Data were stratified according to sex, age, time to colonoscopy, publication year, continent, and FIT kit type. Study quality, heterogeneity, and publication bias were assessed. RESULTS: Overall, 4280 articles were retrieved and fifty-eight studies were included (277,661 FIT-positive colonoscopies; mean cecal intubation 96.3%; mean age 60.8 years; male 52.1%). Mean ADR was 56.1% (95% CI 53.4 - 58.7%), while mean AADR, CDR, and SSLDR were 27.2% (95% CI 24.4 - 30.1%), 5.3% (95% CI 4.7 - 6.0%), and 3.0% (95% CI 1.7 - 4.6%), respectively. For each 20 µg Hb/g increase in FIT cut-off level, ADR increased by 1.54% (95% CI 0.52 - 2.56%, p < 0.01), AADR by 3.90% (95% CI 2.76 - 5.05%, p < 0.01) and CDR by 1.46% (95% CI 0.66 - 2.24%, p < 0.01). Many detection rates were greater amongst males and Europeans. CONCLUSIONS: ADRs in FIT-positive colonoscopies are influenced by the adopted FIT positivity threshold, and identified targets, importantly, proved to be higher than most current societal recommendations.
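The pooled slopes reported above imply a simple linear extrapolation of detection rates across cut-offs; a sketch (the helper is our own, using the reported ADR slope of +1.54 percentage points per 20 µg Hb/g):

```python
def predicted_rate(base_rate, base_cutoff, new_cutoff, slope_per_20):
    """Linearly extrapolate a detection rate (in %) from the pooled
    meta-regression slope, expressed per 20 ug Hb/g of FIT cut-off."""
    return base_rate + slope_per_20 * (new_cutoff - base_cutoff) / 20.0
```

For example, raising the cut-off by 20 µg Hb/g from the 56.1% pooled mean ADR would predict roughly 57.6%, within the linear-trend assumption of the meta-regression.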


Subject(s)
Adenoma; Colonoscopy; Colorectal Neoplasms; Early Detection of Cancer; Occult Blood; Humans; Adenoma/diagnosis; Colorectal Neoplasms/diagnosis; Early Detection of Cancer/methods; Feces/chemistry; Aged; Middle Aged; Male; Immunochemistry; Female
3.
J Appl Clin Med Phys ; : e14434, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39078867

ABSTRACT

BACKGROUND: Data collected from hospitals are usually only partially annotated by radiologists due to time constraints. Developing and evaluating deep learning models on these data may result in over- or underestimation of performance. PURPOSE: We aimed to quantitatively investigate how the percentage of annotated lesions in CT images influences the performance of universal lesion detection (ULD) algorithms. METHODS: We trained a multi-view feature pyramid network with position-aware attention (MVP-Net) to perform ULD. Three versions of the DeepLesion dataset were created for training MVP-Net. The Original DeepLesion Dataset (OriginalDL) is the publicly available, widely studied DeepLesion dataset, which includes 32 735 lesions in 4427 patients that were partially labeled during routine clinical practice. The Enriched DeepLesion Dataset (EnrichedDL) is an enhanced dataset that is fully labeled at one or more time points for 4145 patients with 34 317 lesions. UnionDL is the union of OriginalDL and EnrichedDL, with 54 510 labeled lesions in 4427 patients. Each dataset was used separately to train MVP-Net, resulting in the following models: OriginalCNN (replicating the original result), EnrichedCNN (testing the effect of increased annotation), and UnionCNN (featuring the greatest number of annotations). RESULTS: Although the reported mean sensitivity of OriginalCNN was 84.3% on the OriginalDL testing set, performance fell sharply when tested on the EnrichedDL testing set, yielding mean sensitivities of 56.1%, 66.0%, and 67.8% for OriginalCNN, EnrichedCNN, and UnionCNN, respectively. We also found that increasing the percentage of annotated lesions in the training set increased sensitivity, but the marginal gain in performance gradually diminished according to a power law.
CONCLUSIONS: We expanded and improved the existing DeepLesion dataset by annotating an additional 21 775 lesions, and we demonstrated that using fully labeled CT images avoids overestimation of MVP-Net's performance while increasing the algorithm's sensitivity, which may substantially influence future CT lesion detection research. The annotated lesions are available at https://github.com/ComputationalImageAnalysisLab/DeepLesionData.
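The diminishing returns described in the RESULTS follow a power law; a sketch of how such a curve can be fitted from (annotation fraction, sensitivity) pairs via log-log least squares (the data in the test are synthetic, not the study's):

```python
import numpy as np

def fit_power_law(frac_annotated, sensitivity):
    """Fit sensitivity ~ a * frac**b by least squares in log-log space."""
    b, log_a = np.polyfit(np.log(frac_annotated), np.log(sensitivity), 1)
    return np.exp(log_a), b
```

An exponent b well below 1 indicates that each additional share of annotated lesions yields a smaller sensitivity gain, consistent with the trend the authors report.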

4.
BMC Bioinformatics ; 24(1): 401, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884877

ABSTRACT

BACKGROUND: Recent advancements in computing power and state-of-the-art algorithms have enabled more accessible and accurate diagnosis of numerous diseases. In addition, the development of new areas in imaging science, such as radiomics and radiogenomics, is helping to personalize healthcare and stratify patients better. These techniques associate imaging phenotypes with related disease genes. Various imaging modalities have been used for years to diagnose breast cancer. Nonetheless, digital breast tomosynthesis (DBT), a state-of-the-art technique, has produced comparatively promising results. DBT, a 3D mammography technique, is rapidly replacing conventional 2D mammography. This technological advancement is key for AI algorithms to accurately interpret medical images. OBJECTIVE AND METHODS: This paper presents a comprehensive review of deep learning (DL), radiomics, and radiogenomics in breast image analysis. The review focuses on DBT, its extracted synthetic mammography (SM), and full-field digital mammography (FFDM). Furthermore, this survey provides systematic knowledge about DL, radiomics, and radiogenomics for beginners and advanced-level researchers. RESULTS: A total of 500 articles were identified, of which 30 studies met the inclusion criteria. Parallel benchmarking of radiomics, radiogenomics, and DL models applied to DBT images could give clinicians and researchers alike greater awareness as they consider clinical deployment or the development of new models. This review provides a comprehensive guide to understanding the current state of early breast cancer detection using DBT images. CONCLUSION: Using this survey, investigators with various backgrounds can easily pursue interdisciplinary science and new DL, radiomics, and radiogenomics directions for DBT.


Subject(s)
Breast Neoplasms; Deep Learning; Humans; Female; Radiographic Image Enhancement/methods; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/genetics; Mammography/methods
5.
Prostate ; 83(9): 871-878, 2023 06.
Article in English | MEDLINE | ID: mdl-36959777

ABSTRACT

BACKGROUND: Multiparametric MRI (mpMRI) improves the detection of aggressive prostate cancer (PCa) subtypes. As cases of active surveillance (AS) increase and tumor progression triggers definitive treatment, we evaluated whether an AI-driven algorithm can detect clinically significant PCa (csPCa) in patients under AS. METHODS: Consecutive patients under AS who received mpMRI (PI-RADS v2.1 protocol) and subsequent MR-guided ultrasound fusion (targeted and extensive systematic) biopsy between 2017 and 2020 were retrospectively analyzed. The diagnostic performance of an automated, clinically certified AI-driven algorithm was evaluated at both the lesion and patient levels with regard to the detection of csPCa. RESULTS: Analysis of 56 patients yielded 93 target lesions. Patient-level sensitivity and specificity of the AI algorithm were 92.5%/31% for the detection of ISUP ≥ 1 and 96.4%/25% for the detection of ISUP ≥ 2, respectively. The only case of csPCa missed by the AI harbored just one Gleason 7a core out of 47 (systematic biopsy; previous and subsequent biopsies rendered non-csPCa). CONCLUSIONS: AI-augmented lesion detection and PI-RADS scoring is a robust tool for detecting progression to csPCa in patients under AS. Integration into the clinical workflow can serve as reassurance for the reader and streamline reporting, hence improving efficiency and diagnostic confidence.
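Patient-level metrics like those above are typically derived by collapsing lesion-level AI calls into a single call per patient; a minimal sketch (the threshold and aggregation rule are our assumptions, not the certified algorithm's):

```python
def patient_positive(lesion_scores, threshold=0.5):
    """A patient is flagged positive if any lesion score reaches the threshold."""
    return any(s >= threshold for s in lesion_scores)

def sens_spec(calls, truths):
    """Sensitivity and specificity from paired boolean calls and ground truths."""
    tp = sum(c and t for c, t in zip(calls, truths))
    tn = sum(not c and not t for c, t in zip(calls, truths))
    fn = sum(not c and t for c, t in zip(calls, truths))
    fp = sum(c and not t for c, t in zip(calls, truths))
    return tp / (tp + fn), tn / (tn + fp)
```

The any-lesion rule explains the sensitivity/specificity trade-off seen above: aggregating with "any positive lesion" maximizes patient-level sensitivity at the cost of specificity.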


Subject(s)
Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Magnetic Resonance Imaging/methods; Retrospective Studies; Watchful Waiting; Image-Guided Biopsy/methods; Artificial Intelligence
6.
Eur J Nucl Med Mol Imaging ; 50(8): 2441-2452, 2023 07.
Article in English | MEDLINE | ID: mdl-36933075

ABSTRACT

PURPOSE: The aim of this study was to develop a convolutional neural network (CNN) for the automatic detection and segmentation of gliomas using [18F]fluoroethyl-L-tyrosine ([18F]FET) PET. METHODS: Ninety-three patients (84 in-house/7 external) who underwent a 20-40-min static [18F]FET PET scan were retrospectively included. Lesions and background regions were defined by two nuclear medicine physicians using the MIM software, such that delineations by one expert reader served as ground truth for training and testing the CNN model, while delineations by the second expert reader were used to evaluate inter-reader agreement. A multi-label CNN was developed to segment the lesion and background region, while a single-label CNN was implemented for lesion-only segmentation. Lesion detectability was evaluated by classifying [18F]FET PET scans as negative when no tumor was segmented and vice versa, while segmentation performance was assessed using the Dice similarity coefficient (DSC) and the segmented tumor volume. Quantitative accuracy was evaluated using the maximal and mean tumor to mean background uptake ratios (TBRmax/TBRmean). CNN models were trained and tested by threefold cross-validation (CV) using the in-house data, while the external data were used for an independent evaluation to assess the generalizability of the two CNN models. RESULTS: Based on the threefold CV, the multi-label CNN model achieved 88.9% sensitivity and 96.5% precision for discriminating between positive and negative [18F]FET PET scans, compared to 35.3% sensitivity and 83.1% precision obtained with the single-label CNN model. In addition, the multi-label CNN allowed an accurate estimation of the maximal/mean lesion and mean background uptake, resulting in an accurate TBRmax/TBRmean estimation compared to a semi-automatic approach.
In terms of lesion segmentation, the multi-label CNN model (DSC = 74.6 ± 23.1%) performed on par with the single-label CNN model (DSC = 73.7 ± 23.2%), with tumor volumes estimated by the single-label and multi-label models (22.9 ± 23.6 ml and 23.1 ± 24.3 ml, respectively) closely approximating those estimated by the expert reader (24.1 ± 24.4 ml). DSCs of both CNN models were in line with the DSCs obtained by comparing the second expert reader's lesion segmentations against those of the first, while the detection and segmentation performance of both CNN models as determined with the in-house data was confirmed by the independent evaluation using external data. CONCLUSION: The proposed multi-label CNN model detected positive [18F]FET PET scans with high sensitivity and precision. Once detected, an accurate tumor segmentation and estimation of background activity was achieved, resulting in an automatic and accurate TBRmax/TBRmean estimation, such that user interaction and potential inter-reader variability can be minimized.
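TBRmax and TBRmean as used above reduce to simple mask statistics over the PET volume; a sketch assuming binary lesion and background masks (function name is ours):

```python
import numpy as np

def tbr(pet, lesion_mask, background_mask):
    """Maximal and mean tumor-to-background ratios (TBRmax, TBRmean)
    from a PET volume and binary lesion/background masks."""
    bg_mean = pet[background_mask].mean()
    return pet[lesion_mask].max() / bg_mean, pet[lesion_mask].mean() / bg_mean
```

This is why accurate background segmentation matters for the multi-label model: both ratios share the mean background uptake as denominator.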


Subject(s)
Glioma; Humans; Retrospective Studies; Glioma/diagnostic imaging; Glioma/pathology; Positron-Emission Tomography/methods; Tyrosine; Neural Networks, Computer
7.
Methods ; 205: 46-52, 2022 09.
Article in English | MEDLINE | ID: mdl-35598831

ABSTRACT

Cervical cancer is the fourth most common cancer in women, and its precise detection plays a critical role in disease treatment and prognosis prediction. Fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) has an established role, with superior sensitivity and specificity in most cancer imaging applications. However, a typical FDG-PET/CT analysis involves the time-consuming interpretation of hundreds of images, and this intense image-screening workload places a heavy burden on clinicians. We propose a computer-aided, deep learning-based framework that detects cervical cancer from multimodal medical images to increase the efficiency of clinical diagnosis. The framework has three components: image registration, multimodal image fusion, and lesion object detection. In contrast to traditional approaches, our adaptive image fusion method fuses multimodal medical images. We discuss the performance of deep learning on each modality, and we conduct extensive experiments comparing different image fusion methods, combined with state-of-the-art (SOTA) deep learning-based object detection methods, on images of different modalities. Compared with PET, which has the highest recognition accuracy among single-modality images, the recognition accuracy of our proposed method across multiple object detection models is improved by an average of 6.06%. Compared with the best results of other multimodal fusion methods, our results show an average improvement of 8.9%.
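The paper's adaptive fusion method is not specified in the abstract; as a hedged stand-in, a fixed-weight pixel-wise fusion of co-registered PET and CT slices illustrates the general idea (the weight alpha and the min-max normalization are our assumptions):

```python
import numpy as np

def fuse(pet, ct, alpha=0.5):
    """Weighted pixel-wise fusion of co-registered PET and CT slices,
    each min-max normalized to [0, 1] first."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return alpha * norm(pet) + (1 - alpha) * norm(ct)
```

An adaptive method like the authors' would learn or locally vary the weight instead of fixing it, but the fused image handed to the detector has this same form.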


Subject(s)
Deep Learning; Uterine Cervical Neoplasms; Female; Fluorodeoxyglucose F18; Humans; Image Processing, Computer-Assisted/methods; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography/methods; Uterine Cervical Neoplasms/diagnostic imaging
8.
Skeletal Radiol ; 52(1): 91-98, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35980454

ABSTRACT

BACKGROUND: Whole-body low-dose CT is the recommended initial imaging modality to evaluate bone destruction caused by multiple myeloma. Accurate interpretation of these scans to detect small lytic bone lesions is time intensive. A functional deep learning (DL) algorithm to detect lytic lesions on CT could improve the value of these scans for myeloma imaging. Our objectives were to develop a DL algorithm and determine its performance at detecting lytic lesions of multiple myeloma. METHODS: Axial slices (2-mm section thickness) from whole-body low-dose CT scans of subjects with biochemically confirmed plasma cell dyscrasias were included in the study. Data were split into train and test sets at the patient level, targeting a 90%/10% split. Two musculoskeletal radiologists annotated lytic lesions on the images with bounding boxes. Subsequently, we developed a two-step deep learning model comprising bone segmentation followed by lesion detection. U-Net and "You Only Look Once" (YOLO) models were used as the bone segmentation and lesion detection algorithms, respectively. Diagnostic performance was determined using the area under the receiver operating characteristic curve (AUROC). RESULTS: Forty whole-body low-dose CTs from 40 subjects yielded 2193 image slices. A total of 5640 lytic lesions were annotated. The two-step model achieved a sensitivity of 91.6% and a specificity of 84.6%. The lesion detection AUROC was 90.4%. CONCLUSION: We developed a deep learning model that detects lytic bone lesions of multiple myeloma on whole-body low-dose CTs with high performance. External validation is required prior to widespread adoption in clinical practice.
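In the two-step design above, the segmentation step restricts the detector's input to bone; a minimal sketch of that masking step (the air-like fill value is our assumption, not a detail from the paper):

```python
import numpy as np

def restrict_to_bone(ct_slice, bone_mask, fill_value=-1000.0):
    """Blank out non-bone voxels (here with an air-like HU value) so the
    downstream lesion detector only sees segmented bone."""
    out = np.full_like(ct_slice, fill_value)
    out[bone_mask] = ct_slice[bone_mask]
    return out
```

Constraining the search space this way is a common trick to cut false positives from soft tissue before the YOLO-style detector runs.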


Subject(s)
Deep Learning; Multiple Myeloma; Osteolysis; Humans; Multiple Myeloma/diagnostic imaging; Multiple Myeloma/pathology; Algorithms; Tomography, X-Ray Computed/methods
9.
Sensors (Basel) ; 23(15)2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37571620

ABSTRACT

With a view toward the post-COVID-19 world and probable future pandemics, this paper presents an Internet of Things (IoT)-based automated healthcare diagnosis model that employs a mixed approach using data augmentation, transfer learning, and deep learning techniques, and that does not require physical interaction between patient and physician. Through a user-friendly graphical user interface and suitable computing power on smart devices, the embedded artificial intelligence allows the proposed model to be used effectively by a layperson without the need for a dental expert, indicating any issues with the teeth and subsequent treatment options. The proposed method involves multiple processes, including data acquisition using IoT devices, data preprocessing, deep learning-based feature extraction, and classification through an unsupervised neural network. The dataset contains multiple periapical X-rays of five different types of lesions obtained through an IoT device mounted within a mouth guard. A pretrained AlexNet, a fast GPU implementation of a convolutional neural network (CNN), is fine-tuned using data augmentation and transfer learning and employed to extract a suitable feature set. Data augmentation avoids overtraining, whereas transfer learning improves accuracy. Support vector machine (SVM) and K-nearest neighbors (KNN) classifiers are then trained for lesion classification. The proposed automated model based on AlexNet feature extraction followed by the SVM classifier achieved an accuracy of 98%, showing the effectiveness of the presented approach.
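The pipeline's final stage classifies CNN feature vectors; since the exact SVM/KNN configuration is not given in the abstract, a small numpy-only KNN vote sketches the idea (feature values and labels below are illustrative):

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=3):
    """Majority vote among the k nearest training feature vectors
    (a stand-in for the SVM/KNN stage of the pipeline)."""
    dists = np.linalg.norm(np.asarray(train_feats, dtype=float) - query, axis=1)
    nearest = np.asarray(train_labels)[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]
```

In the paper's setup, the query vector would be the AlexNet feature embedding of a periapical X-ray rather than the toy 2-D points used here.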


Subject(s)
COVID-19; Deep Learning; Internet of Things; Humans; Artificial Intelligence; Cluster Analysis
10.
J Digit Imaging ; 36(3): 1208-1215, 2023 06.
Article in English | MEDLINE | ID: mdl-36650301

ABSTRACT

Universal lesion detection (ULD) in computed tomography (CT) images is an important and challenging prerequisite for computer-aided diagnosis (CAD) of abnormal tissue, such as lymph node tumors, liver tumors, and lymphadenopathy. The key challenge is that lesions are tiny and highly similar to non-lesions, which can easily lead to high false-positive rates. Specifically, non-lesions are nearby normal anatomical structures, including the bowel, vasculature, and mesentery, that decrease the conspicuity of small lesions because the two are often hard to differentiate. In this study, we present a novel scale-attention module that enhances feature discrimination between lesion and non-lesion regions by exploiting radiologists' domain knowledge to reduce false positives effectively. Inspired by the observation that radiologists tend to divide each CT image into multiple areas and then detect lesions in these smaller areas separately, a local axial scale-attention (LASA) module is proposed to re-weight each pixel in a feature map by adaptively aggregating local features from multiple scales. In addition, to keep the weights shared, attention is computed over combinations of axial pixels along the height and width axes, augmented with position embeddings. The module can be integrated into CNNs easily and flexibly. We test our method on the DeepLesion dataset. Sensitivities at 0.5, 1, 2, 4, 8, and 16 false positives (FPs) per image, together with the average sensitivity at [0.5, 1, 2, 4] FPs, are used to evaluate accuracy. The sensitivities are 78.30%, 84.96%, 89.86%, 93.14%, 95.36%, and 95.54% at 0.5, 1, 2, 4, 8, and 16 FPs per image, and the average sensitivity is 86.56%, outperforming previous methods. The proposed method enhances feature discrimination between lesion and non-lesion regions by adding LASA modules. These encouraging results illustrate the potential advantage of exploiting domain knowledge for lesion detection.
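The headline number is simply the mean of the sensitivities at the chosen FP-per-image operating points; a one-liner makes the bookkeeping explicit:

```python
def average_sensitivity(sens_at_fp, fps=(0.5, 1, 2, 4)):
    """Mean sensitivity over the selected false-positives-per-image points."""
    return sum(sens_at_fp[f] for f in fps) / len(fps)
```

Plugging in the sensitivities reported above at 0.5, 1, 2, and 4 FPs per image recovers the stated 86.56% average up to rounding.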


Subject(s)
Diagnosis, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Diagnosis, Computer-Assisted/methods; Radiographic Image Interpretation, Computer-Assisted/methods
11.
J Digit Imaging ; 36(2): 468-485, 2023 04.
Article in English | MEDLINE | ID: mdl-36478312

ABSTRACT

Multiple sclerosis (MS) is one of the most serious neurological diseases and the most frequent cause of non-traumatic disability among young adults. MS is an autoimmune disease in which the central nervous system mistakenly destroys the myelin sheath that surrounds and protects the axons of nerve cells in the brain and spinal cord, resulting in lesions called plaques. The damage to the myelin sheath disrupts normal nerve transmission at the plaque sites and, consequently, communication between the brain and other organs. The result of this impaired transmission of nerve impulses is the occurrence of various neurological symptoms: MS lesions cause mobility, vision, cognitive, and memory disorders. Early detection of lesions supports an accurate MS diagnosis; consequently, with adequate treatment, clinicians can deal effectively with the disease and reduce the number of relapses. The use of magnetic resonance imaging (MRI), the imaging tool of reference for early diagnosis of MS, is therefore essential. However, low-contrast MRI images can hide important structures, such as lesions. In this paper, we propose a new automated contrast enhancement (CE) method to improve the low contrast of MRI images for better enhancement of MS lesions. This step is very important, as it helps radiologists confirm their diagnosis. The developed algorithm, called BDS, is based on Brightness Preserving Dynamic Fuzzy Histogram Equalization (BPDFHE) and Singular Value Decomposition with Discrete Wavelet Transform (SVD-DWT) techniques. BDS is dedicated to improving low-quality MRI images while preserving the brightness level and edge details from degradation, without adding artifacts or noise. These properties are essential in CE approaches for better lesion recognition.
A modified version of BDS, called MBDS, is also implemented in the second part of this paper, in which we propose a new method for computing the correction factor. With the new correction factor, entropy is increased and contrast is greatly enhanced. MBDS is especially suited to very-low-contrast MRI images. Experimental results proved the effectiveness of the developed methods in improving the low contrast of MRI images while preserving brightness level and edge information. Moreover, both proposed algorithms, BDS and MBDS, outperformed conventional CE methods.
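BPDFHE itself works on fuzzy histograms, which the abstract does not detail; as a crude stand-in that illustrates only the brightness-preservation idea, plain histogram equalization can be followed by a mean-brightness rescale:

```python
import numpy as np

def equalize_preserve_mean(img):
    """Global histogram equalization of an 8-bit image, rescaled so the
    output keeps the input's mean brightness (a crude BPDFHE stand-in)."""
    flat = img.ravel()
    cdf = np.bincount(flat, minlength=256).cumsum() / flat.size
    eq = cdf[flat].reshape(img.shape) * 255.0
    eq *= img.mean() / (eq.mean() + 1e-8)  # restore mean brightness
    return np.clip(eq, 0, 255).astype(np.uint8)
```

The actual BDS/MBDS algorithms add fuzzy dynamic histogram partitioning plus SVD-DWT processing precisely because a global rescale like this cannot also protect edge detail.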


Subject(s)
Multiple Sclerosis; Humans; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; Magnetic Resonance Imaging/methods; Brain; Algorithms; Head; Image Enhancement
12.
J Digit Imaging ; 36(4): 1723-1738, 2023 08.
Article in English | MEDLINE | ID: mdl-37231287

ABSTRACT

Melanoma is the most lethal of all skin cancers, which necessitates a machine learning-driven skin cancer detection system to help medical professionals with early detection. We propose an integrated multi-modal ensemble framework that combines deep convolutional neural representations with extracted lesion characteristics and patient metadata. This study integrates transfer-learned image features, global and local textural information, and patient data using a custom generator to diagnose skin cancer accurately. The architecture combines multiple models in a weighted ensemble strategy, trained and validated on distinct datasets, namely HAM10000, BCN20000 + MSK, and the ISIC2020 challenge datasets. Models were evaluated on the mean values of precision, recall or sensitivity, specificity, and balanced accuracy. Sensitivity and specificity play a major role in diagnostics. The model achieved sensitivities of 94.15%, 86.69%, and 86.48% and specificities of 99.24%, 97.73%, and 98.51% on the respective datasets. Additionally, the accuracy on the malignant classes of the three datasets was 94%, 87.33%, and 89%, which is significantly higher than the physician recognition rate. The results demonstrate that our weighted-voting integrated ensemble strategy outperforms existing models and could serve as an initial diagnostic tool for skin cancer.
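The weighted voting at the core of such an ensemble is a weighted average of per-model class probabilities; a sketch (the weights and probabilities below are hypothetical, not the paper's learned values):

```python
import numpy as np

def weighted_ensemble(model_probs, weights):
    """Weighted average of per-model class-probability vectors; the argmax
    of the averaged vector is the ensemble decision."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    avg = np.tensordot(w, np.asarray(model_probs), axes=1)
    return avg, int(np.argmax(avg))
```

Weighting lets a model that validates better on a given dataset dominate the vote without discarding the weaker models' information.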


Subject(s)
Melanoma; Skin Abnormalities; Skin Neoplasms; Humans; Algorithms; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Melanoma/diagnostic imaging; Skin/diagnostic imaging; Skin/pathology
13.
Mult Scler ; 28(8): 1209-1218, 2022 07.
Article in English | MEDLINE | ID: mdl-34859704

ABSTRACT

BACKGROUND: Active (new/enlarging) T2 lesion counts are routinely used in the clinical management of multiple sclerosis. Thus, automated tools able to accurately identify active T2 lesions would be of high interest to neuroradiologists for assisting in their clinical activity. OBJECTIVE: To compare the accuracy of different visual and automated methods in detecting active T2 lesions and radiologically active patients. METHODS: One hundred multiple sclerosis patients underwent two magnetic resonance imaging examinations within 12 months. Four approaches to detecting active T2 lesions were assessed: (1) conventional neuroradiological reports; (2) prospective visual analyses performed by an expert; (3) an automated unsupervised tool; and (4) a supervised convolutional neural network. As a gold standard, a reference outcome was created by the consensus of two observers. RESULTS: The automated methods detected more active T2 lesions and more active patients than the visual methods, but also more false-positive active patients. The convolutional neural network model was more sensitive in detecting active T2 lesions and active patients than the other automated method. CONCLUSION: Automated convolutional neural network models show potential as an aid to neuroradiological assessment in clinical practice, although visual supervision of the outcomes is still required.
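Counting active (new/enlarging) T2 lesions starts from the voxels that are lesioned at follow-up but not at baseline, assuming co-registered binary masks from the two examinations; connected-component labeling would then turn voxels into lesion counts (this sketch is our simplification, not any of the four compared methods):

```python
import numpy as np

def active_t2_voxels(baseline_mask, followup_mask):
    """Voxels segmented as lesion at follow-up but not at baseline,
    i.e. the candidate new/enlarging T2 burden."""
    return np.logical_and(followup_mask, np.logical_not(baseline_mask))
```

Registration errors at this subtraction step are one source of the false-positive active patients the automated methods produced.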


Subject(s)
Multiple Sclerosis; Humans; Magnetic Resonance Imaging/methods; Multiple Sclerosis/pathology; Prospective Studies
14.
Acta Neurochir Suppl ; 134: 171-182, 2022.
Article in English | MEDLINE | ID: mdl-34862541

ABSTRACT

This chapter describes technical considerations and current and future clinical applications of lesion detection using machine learning in the clinical setting. Lesion detection is central to neuroradiology and precedes all further processes which include but are not limited to lesion characterization, quantification, longitudinal disease assessment, prognosis, and prediction of treatment response. A number of machine learning algorithms focusing on lesion detection have been developed or are currently under development which may either support or extend the imaging process. Examples include machine learning applications in stroke, aneurysms, multiple sclerosis, neuro-oncology, neurodegeneration, and epilepsy.


Subject(s)
Artificial Intelligence; Stroke; Algorithms; Humans; Machine Learning; Neuroimaging
15.
J Toxicol Pathol ; 35(2): 135-147, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35516841

ABSTRACT

Artificial intelligence (AI)-based image analysis is increasingly being used for preclinical safety-assessment studies in the pharmaceutical industry. In this paper, we present an AI-based solution for preclinical toxicology studies. We trained a set of algorithms to learn and quantify multiple typical histopathological findings in whole slide images (WSIs) of the livers of young Sprague Dawley rats using a U-Net-based deep learning network. The trained algorithms were validated using 255 liver WSIs to detect, classify, and quantify seven types of histopathological findings (including vacuolation, bile duct hyperplasia, and single-cell necrosis) in the liver. The algorithms showed consistently good performance in detecting abnormal areas. Approximately 75% of all specimens could be classified as true positive or true negative. In general, findings with clear boundaries with the surrounding normal structures, such as vacuolation and single-cell necrosis, were detected accurately, with high statistical scores. The results of the quantitative analyses, and the classification of diagnoses based on threshold values between "no findings" and "abnormal findings", correlated well with diagnoses made by professional pathologists. However, scores for findings with ambiguous boundaries, such as hepatocellular hypertrophy, were poor. These results suggest that deep learning-based algorithms can detect, classify, and quantify multiple findings simultaneously on rat liver WSIs, and can thus be a useful supportive tool for histopathological evaluation, especially for primary screening in rat toxicity studies.
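The threshold-based call between "no findings" and "abnormal findings" can be expressed directly from the quantified abnormal-area fraction; a minimal sketch (the threshold value here is hypothetical, not the study's):

```python
def classify_specimen(abnormal_area_fraction, threshold=0.01):
    """Binary 'no findings' / 'abnormal findings' call from the quantified
    abnormal-tissue fraction of a whole slide image."""
    return ("abnormal findings" if abnormal_area_fraction >= threshold
            else "no findings")
```

In practice the threshold would be tuned per finding type against pathologist diagnoses, which is why sharply bounded findings fare better than diffuse ones like hepatocellular hypertrophy.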

16.
Magn Reson Med ; 86(3): 1662-1673, 2021 09.
Article in English | MEDLINE | ID: mdl-33885165

ABSTRACT

PURPOSE: To develop and evaluate a domain-adaptive and fully automated review workflow (lesion assessment through tracklet evaluation, LATTE) for the assessment of atherosclerotic disease in 3D carotid MR vessel wall imaging (MR VWI). METHODS: VWI of 279 subjects with carotid atherosclerosis was used to develop LATTE, principally convolutional neural network (CNN)-based domain-adaptive lesion classification after image quality assessment and artery-of-interest localization. Heterogeneity in test sets from various sites usually degrades CNN performance. With our novel unsupervised domain adaptation (DA), LATTE was designed to accurately classify arteries into normal arteries and early and advanced lesions without additional annotations on new datasets. VWI of 271 subjects from four datasets (eight sites) with slightly different imaging parameters/signal patterns was collected to assess the effectiveness of the DA in LATTE, using the area under the receiver operating characteristic curve (AUC) on all lesions and on advanced lesions before and after DA. RESULTS: LATTE performed well on advanced/all lesion classification, with AUCs of >0.88/0.83, a significant improvement from >0.82/0.80 without DA. CONCLUSIONS: LATTE can locate target arteries and distinguish carotid atherosclerotic lesions, with consistently improved performance from DA on new datasets. It may be useful for carotid atherosclerosis detection and assessment at various clinical sites.
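The AUCs reported above are equivalent to the Mann-Whitney probability that a randomly chosen lesion-positive artery scores above a lesion-negative one; a direct (O(n·m), adequate for small score lists) sketch:

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of positive/negative score pairs ranked
    correctly, counting ties as half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

This rank-based view is why AUC is a reasonable yardstick for comparing the classifier before and after domain adaptation: it is insensitive to any site-specific shift of the decision threshold.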


Subject(s)
Atherosclerosis , Carotid Artery Diseases , Artificial Intelligence , Atherosclerosis/diagnostic imaging , Carotid Arteries/diagnostic imaging , Carotid Artery Diseases/diagnostic imaging , Humans , Imaging, Three-Dimensional , Magnetic Resonance Imaging
17.
Diabetes Metab Res Rev ; 37(4): e3445, 2021 05.
Article in English | MEDLINE | ID: mdl-33713564

ABSTRACT

AIMS: To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning-based lesion detection and stage grading. MATERIALS AND METHODS: A set of 12,252 eligible fundus images of diabetic patients was manually annotated by 45 licenced ophthalmologists and randomly split into training, validation, and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images was established as an external test set. For automated referable DR identification, four deep learning models were programmed based on whether two factors were included: DR-related lesions and DR stages. Sensitivity, specificity and the area under the receiver operating characteristic curve (AUC) were reported for referable DR identification, while precision and recall were reported for lesion detection. RESULTS: Adding lesion information to the five-stage grading model improved the AUC (0.943 vs. 0.938), sensitivity (90.6% vs. 90.5%) and specificity (80.7% vs. 78.5%) of the model for identifying referable DR in the internal test set. Adding stage information to the lesion-based model increased the AUC (0.943 vs. 0.936) and sensitivity (90.6% vs. 76.7%) of the model for identifying referable DR in the internal test set. Similar trends were also seen in the external test set. DR lesion types with high precision results were preretinal haemorrhage, hard exudate, vitreous haemorrhage, neovascularisation, cotton wool spots and fibrous proliferation. CONCLUSIONS: The herein described automated model employed DR lesion and stage information to identify referable DR and displayed better diagnostic value than models built without this information.
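The sensitivity and specificity reported for referable DR identification reduce to simple counts over the confusion matrix; a minimal sketch with made-up labels, not the study's data:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# Inputs are parallel 0/1 lists where 1 = referable DR.
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary ground truth vs predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

The per-lesion precision and recall quoted for lesion detection follow the same pattern, with TP/FP/FN counted over detected lesion instances instead of images.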


Subject(s)
Deep Learning , Diabetic Retinopathy , Diabetic Retinopathy/diagnosis , Humans , Severity of Illness Index
18.
Toxicol Pathol ; 49(4): 815-842, 2021 06.
Article in English | MEDLINE | ID: mdl-33618634

ABSTRACT

Digital pathology platforms with integrated artificial intelligence have the potential to increase the efficiency of the nonclinical pathologist's workflow by screening and prioritizing slides with lesions and highlighting areas with specific lesions for review. Herein, we describe the comparison of various single- and multi-magnification convolutional neural network (CNN) architectures to accelerate the detection of lesions in tissues. Different models were evaluated to define performance characteristics and efficiency in accurately identifying lesions in 5 key rat organs (liver, kidney, heart, lung, and brain). Cohorts for liver and kidney were collected from the TG-GATEs open-source repository, and those for heart, lung, and brain from internally selected R&D studies. Annotations were performed, and models were trained on each of the available lesion classes in each organ. Various class-consolidation approaches were evaluated, from generalized lesion detection to individual lesion detection. The relationship between the amount of annotated lesions and the precision/accuracy of model performance is elucidated. The utility of multi-magnification CNN implementations in specific tissue subtypes is also demonstrated. The use of these CNN-based models offers users the ability to apply generalized lesion detection to whole-slide images, with the potential to generate novel quantitative data that would not be possible with conventional image analysis techniques.
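The multi-magnification input idea can be sketched as extracting concentric, same-sized patches at several downsample levels around one slide location, so the network sees fine detail plus wider context. The toy NumPy "slide" and the stride-based downsampling below are simplifications; real WSIs would be read level-by-level from a pyramidal file (e.g. via OpenSlide).

```python
# Extract same-sized patches covering progressively wider context around
# one (row, col) location, one per magnification factor.
import numpy as np

def multi_mag_patches(slide, center, patch=32, factors=(1, 2, 4)):
    """Return len(factors) patches of shape (patch, patch) centered at `center`."""
    cy, cx = center
    out = []
    for f in factors:
        half = patch * f // 2
        region = slide[cy - half:cy + half, cx - half:cx + half]
        out.append(region[::f, ::f])  # crude downsample by striding
    return out

# Toy grayscale "slide"; a real input would be an RGB WSI level.
slide = np.arange(256 * 256, dtype=float).reshape(256, 256)
patches = multi_mag_patches(slide, center=(128, 128))
```

A multi-magnification CNN would encode each patch with its own branch and merge the feature maps, whereas a single-magnification model sees only the first patch.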


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Animals , Image Processing, Computer-Assisted , Rats
19.
Surg Endosc ; 35(12): 6532-6538, 2021 12.
Article in English | MEDLINE | ID: mdl-33185766

ABSTRACT

BACKGROUND: This study aimed to develop a computer-aided diagnosis (CAD) system based on deep learning and to validate its efficiency in detecting four categories of lesions at endoscopy: polyps, advanced cancer, erosion/ulcer, and varices. METHODS: A deep convolutional neural network (CNN) consisting of more than 50 layers was trained on a large dataset containing 327,121 white light images (WLI) of endoscopy from 117,005 cases collected from 2012 to 2017. Two CAD models were developed, using training images with or without annotation. The efficiency of the CAD system in detecting the four categories of lesions was validated on another dataset containing consecutive cases from 2018 to 2019. RESULTS: A total of 1734 cases with 33,959 images were included in the validation dataset, comprising 1265 polyps, 500 advanced cancers, 486 erosions/ulcers, and 248 varices. The CAD system developed in this study detected polyps, advanced cancer, erosion/ulcer, and varices as abnormalities with a sensitivity of 88.3% and a specificity of 90.3%, respectively, within 0.05 s. Training with annotated data improved either sensitivity or specificity by about 20% (p < 0.001). The sensitivities and specificities for polyps, advanced cancer, erosion/ulcer, and varices each reached about 90%. The overall detection efficiency for the four categories of lesions reached 89.7%. CONCLUSION: The CAD model for detecting multiple lesions in the gastrointestinal lumen could potentially be developed into a double check, with real-time assessment and interpretation of the findings encountered by endoscopists, and may help reduce missed lesions.
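The "double check" use suggested in the conclusion could amount to flagging frames whose abnormality score crosses a threshold for the endoscopist to re-inspect; the scores and the 0.5 cut-off below are assumptions for illustration, not the CAD system's actual interface.

```python
# Surface frames whose CAD abnormality score meets a review threshold,
# simulating a real-time second-reader alert during endoscopy.
def flag_frames(frame_scores, threshold=0.5):
    """Return indices of frames the CAD would surface for review."""
    return [i for i, s in enumerate(frame_scores) if s >= threshold]
```

At 0.05 s per image, such a filter could run in-line with the video feed and alert only on high-scoring frames.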


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Endoscopy, Gastrointestinal , Gastrointestinal Tract , Humans , Pilot Projects
20.
Sensors (Basel) ; 21(19)2021 Oct 06.
Article in English | MEDLINE | ID: mdl-34640959

ABSTRACT

Melanoma is one of the most lethal and rapidly growing cancers, causing many deaths each year. This cancer can be treated effectively if it is detected quickly. For this reason, many algorithms and systems have been developed to support automatic or semiautomatic detection of neoplastic skin lesions based on the analysis of optical images of individual moles. Recently, full-body systems have gained attention because they enable the analysis of the patient's entire body based on a set of photos. This paper presents a prototype of such a system, focusing mainly on assessing the effectiveness of algorithms developed for the detection and segmentation of lesions. Three detection algorithms (and their fusion) were analyzed: one implementing deep learning methods and two classic approaches, using local brightness distribution and a correlation method. For the fusion of the algorithms, a detection sensitivity of 0.95 and a precision of 0.94 were obtained. Moreover, the values of selected geometric parameters of segmented lesions were calculated and compared across all algorithms. The obtained results showed high accuracy of the evaluated parameters (area estimation error <10%), especially for lesions larger than 3 mm, which are the most suspicious for being neoplastic.
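Two of the steps above, fusing binary detection masks by per-pixel majority vote and computing the relative area-estimation error of a segmented lesion, can be sketched with toy masks (not the paper's outputs):

```python
# Majority-vote fusion of binary masks and a pixel-count area error,
# as one plausible reading of the fusion and area evaluation described above.
import numpy as np

def majority_fuse(masks):
    """Per-pixel majority vote over an odd number of binary masks."""
    stacked = np.stack(masks)
    return (stacked.sum(axis=0) > len(masks) // 2).astype(np.uint8)

def area_error(pred_mask, true_mask):
    """Relative error of the lesion area estimate (pixel counts)."""
    return abs(int(pred_mask.sum()) - int(true_mask.sum())) / int(true_mask.sum())

# Toy 2x2 masks from three hypothetical detectors.
m1 = np.array([[1, 1], [0, 0]], dtype=np.uint8)
m2 = np.array([[1, 0], [1, 0]], dtype=np.uint8)
m3 = np.array([[1, 1], [0, 0]], dtype=np.uint8)
fused = majority_fuse([m1, m2, m3])
```

Converting the pixel-count area to mm² would additionally require the camera's spatial calibration, which the full-body system must provide.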


Subject(s)
Melanoma , Skin Diseases , Skin Neoplasms , Algorithms , Body Image , Humans , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging