Results 1 - 20 of 38
1.
BMC Oral Health ; 24(1): 772, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987714

ABSTRACT

Integrating artificial intelligence (AI) into medical and dental applications can be challenging due to clinicians' distrust of computer predictions and the potential risks associated with erroneous outputs. We introduce the idea of using AI to trigger second opinions in cases where there is a disagreement between the clinician and the algorithm. By keeping the AI prediction hidden throughout the diagnostic process, we minimize the risks associated with distrust and erroneous predictions, relying solely on human predictions. The experiment involved 3 experienced dentists, 25 dental students, and 290 patients treated for advanced caries across 6 centers. We developed an AI model to predict pulp status following advanced caries treatment. Clinicians were asked to perform the same prediction without the assistance of the AI model. The second opinion framework was tested in a 1000-trial simulation. The average F1-score of the clinicians increased significantly from 0.586 to 0.645.
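
The second-opinion mechanism described above lends itself to a simple Monte Carlo simulation. Below is a minimal sketch, assuming synthetic binary labels and illustrative reader error rates (none of the rates or names come from the study itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def f1(y_true, y_pred):
    """Binary F1-score computed from scratch."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn)

def noisy_reader(y_true, error_rate):
    """Simulate a reader by flipping labels with a given error rate."""
    flip = rng.random(y_true.size) < error_rate
    return np.where(flip, 1 - y_true, y_true)

n, trials = 290, 1000
gains = []
for _ in range(trials):
    y = rng.integers(0, 2, n)            # ground-truth pulp status (synthetic)
    first = noisy_reader(y, 0.30)        # first clinician's call
    ai = noisy_reader(y, 0.20)           # hidden AI prediction
    second = noisy_reader(y, 0.30)       # independent second opinion
    # Where the hidden AI disagrees with the clinician, a second opinion
    # is triggered; here the second reader's call replaces the first.
    final = np.where(first != ai, second, first)
    gains.append(f1(y, final) - f1(y, first))

print(f"mean F1 gain from triggered second opinions: {np.mean(gains):+.3f}")
```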


Subject(s)
Artificial Intelligence, Dental Caries, Humans, Dental Caries/therapy, Referral and Consultation, Patient Care Planning, Algorithms
2.
Med Phys ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39031886

ABSTRACT

BACKGROUND: The pancreas is a complex abdominal organ with many anatomical variations, which makes automated pancreas segmentation from medical images a challenging application. PURPOSE: In this paper, we present a framework for segmenting individual pancreatic subregions and the pancreatic duct from three-dimensional (3D) computed tomography (CT) images. METHODS: A multiagent reinforcement learning (RL) network was used to detect landmarks of the head, neck, body, and tail of the pancreas, and landmarks along the pancreatic duct in a selected target CT image. Using the landmark detection results, an atlas of pancreases was nonrigidly registered to the target image, resulting in anatomical probability maps for the pancreatic subregions and duct. The probability maps were augmented with multilabel 3D U-Net architectures to obtain the final segmentation results. RESULTS: To evaluate the performance of our proposed framework, we computed the Dice similarity coefficient (DSC) between the predicted and ground truth manual segmentations on a database of 82 CT images with manually segmented pancreatic subregions and 37 CT images with manually segmented pancreatic ducts. For the four pancreatic subregions, the mean DSC improved from 0.38, 0.44, and 0.39 with standard 3D U-Net, Attention U-Net, and shifted windowing (Swin) U-Net architectures, to 0.51, 0.47, and 0.49, respectively, when utilizing the proposed RL-based framework. For the pancreatic duct, the RL-based framework achieved a mean DSC of 0.70, significantly outperforming the standard approaches and existing methods on different datasets. CONCLUSIONS: The accuracy of the proposed RL-based segmentation framework demonstrates a clear improvement over segmentation with standard U-Net architectures.
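
The evaluation metric used here, the Dice similarity coefficient, is straightforward to compute on multi-label 3D volumes. A minimal sketch (array shapes and label IDs are assumptions for illustration):

```python
import numpy as np

def dice(pred, truth, label):
    """DSC for one label in two 3D integer label volumes."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else np.nan

# Toy example: 1 = head, 2 = neck, 3 = body, 4 = tail of the pancreas.
pred = np.random.default_rng(0).integers(0, 5, (64, 64, 64))
truth = np.random.default_rng(1).integers(0, 5, (64, 64, 64))
for lbl, name in enumerate(["head", "neck", "body", "tail"], start=1):
    print(f"{name}: DSC = {dice(pred, truth, lbl):.3f}")
```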

3.
Radiother Oncol ; 198: 110410, 2024 09.
Article in English | MEDLINE | ID: mdl-38917883

ABSTRACT

BACKGROUND AND PURPOSE: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge. MATERIALS AND METHODS: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. The performance was evaluated in terms of the Dice similarity coefficient (DSC) and 95-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test. RESULTS: While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9 % and a HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid MR to CT registration combined with network entry-level concatenation of both modalities. CONCLUSION: This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
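
The statistical ranking procedure described above can be reproduced with pairwise Wilcoxon signed-rank tests over per-case scores. A sketch assuming per-team DSC arrays (team names and scores are synthetic, not challenge data):

```python
import numpy as np
from itertools import combinations
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Per-case DSC for each submitted method on the 14 test cases (synthetic).
scores = {f"team{i}": rng.normal(0.70 + 0.02 * i, 0.05, 14) for i in range(7)}

wins = {t: 0 for t in scores}
for a, b in combinations(scores, 2):
    stat, p = wilcoxon(scores[a], scores[b])   # paired signed-rank test
    if p < 0.05:  # significant difference: the better mean gets a "win"
        winner = a if scores[a].mean() > scores[b].mean() else b
        wins[winner] += 1

# Rank teams by number of significant pairwise wins.
for team, w in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(team, w)
```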


Subject(s)
Head and Neck Neoplasms, Magnetic Resonance Imaging, Organs at Risk, Computer-Assisted Radiotherapy Planning, Computed Tomography, Humans, Computed Tomography/methods, Magnetic Resonance Imaging/methods, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Organs at Risk/radiation effects, Computer-Assisted Radiotherapy Planning/methods
4.
IEEE J Biomed Health Inform ; 28(6): 3597-3612, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38421842

ABSTRACT

Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field on which ML can have a significant impact: eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of the existing studies. In particular, it evaluates 1) the type of eye-tracking equipment used and how the equipment aligns with study aims; 2) the software required to record and process eye-tracking data, which often requires user interface development, and controller command and voice recording; 3) the ML methodology utilized depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies, and confirms that the inclusion of gaze data broadens the applicability of ML in radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physicians' error detection, fatigue recognition, and other areas of potentially high research and clinical impact.


Subject(s)
Eye-Tracking Technology, Machine Learning, Humans, Diagnostic Imaging/methods, Algorithms, Eye Movements/physiology, Computer-Assisted Image Processing/methods
5.
Med Phys ; 51(3): 2175-2186, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38230752

ABSTRACT

BACKGROUND: Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT and MR image generation for MR-only RT, and MR-guided RT. Although MR has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. PURPOSE: To analyze the interobserver and intermodality variability in contouring OARs in the HaN region, performed by observers with different levels of experience from CT and MR images of the same patients. METHODS: In the final cohort of 27 CT and MR images of the same patients, contours of up to 31 OARs were obtained by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The resulting contours were then evaluated in terms of interobserver variability, characterized as the agreement among different observers (JO and SO) when contouring OARs in a selected modality (CT or MR), and intermodality variability, characterized as the agreement among different modalities (CT and MR) when OARs were contoured by a selected observer (JO or SO), both by the Dice coefficient (DC) and 95-percentile Hausdorff distance (HD95). RESULTS: The mean (± standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were only found for specific OARs. The performed MR to CT image registration resulted in a mean target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of intermodality variability. CONCLUSIONS: The contouring variability was, in general, similar for both image modalities, and experience did not considerably affect the contouring performance. However, the results indicate that an OAR that is difficult to contour remains difficult regardless of whether it is contoured in the CT or MR image, and that observer experience may be an important factor for OARs that are deemed difficult to contour. Several of the differences in the resulting variability can also be attributed to adherence to guidelines, especially for OARs with poor visibility or without distinctive boundaries in either CT or MR images. Although considerable contouring differences were observed for specific OARs, it can be concluded that almost all OARs can be contoured with a similar degree of variability in either the CT or MR modality, which works in favor of MR images from the perspective of MR-only and MR-guided RT.
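
The 95-percentile Hausdorff distance used above can be sketched directly from contour point sets. A simple brute-force version, assuming contours are given as N×3 arrays of physical coordinates in millimeters:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(surface_a, surface_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets."""
    d = cdist(surface_a, surface_b)       # pairwise Euclidean distances
    a_to_b = d.min(axis=1)                # each point in A to nearest in B
    b_to_a = d.min(axis=0)                # each point in B to nearest in A
    return np.percentile(np.concatenate([a_to_b, b_to_a]), 95)

rng = np.random.default_rng(0)
contour_jo = rng.normal(size=(500, 3))                       # JO's contour (toy)
contour_so = contour_jo + rng.normal(scale=0.1, size=(500, 3))  # SO's contour
print(f"HD95 = {hd95(contour_jo, contour_so):.2f} mm")
```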


Subject(s)
Head and Neck Neoplasms, Computer-Assisted Radiotherapy Planning, Humans, Computer-Assisted Radiotherapy Planning/methods, Neck, Computed Tomography, Magnetic Resonance Imaging, Head, Organs at Risk, Observer Variation, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy
6.
J Dent ; 138: 104732, 2023 11.
Article in English | MEDLINE | ID: mdl-37778496

ABSTRACT

OBJECTIVES: The objective was to examine the effect of giving Artificial Intelligence (AI)-based radiographic information versus standard radiographic and clinical information to dental students on their pulp exposure prediction ability. METHODS: 292 preoperative bitewing radiographs from previously treated patients were used. A multi-path neural network was implemented. The first path was a convolutional neural network (CNN) based on the ResNet-50 architecture. The second path was a neural network trained on the distance between the pulp and the lesion extracted from X-ray segmentations. Both paths merged and were followed by fully connected layers that predicted the probability of pulp exposure. A trial concerning the prediction of pulp exposure based on radiographic input and information on age and pain was conducted, involving 25 dental students. The displayed data were divided into 4 groups (G): GX-ray, GX-ray+clinical data, GX-ray+AI, GX-ray+clinical data+AI. RESULTS: The results showed that AI surpassed the performance of students in all groups with an F1-score of 0.71 (P < 0.001). The students' F1-scores in GX-ray+AI and GX-ray+clinical data+AI with model prediction (0.61 and 0.61, respectively) were slightly higher than the F1-scores in GX-ray and GX-ray+clinical data (0.58 and 0.59, respectively), with borderline statistical significance (P = 0.054). CONCLUSIONS: Although the AI model performed much better than all groups, the participants benefited only slightly when given the AI prediction. AI technology seems promising, but more explainable AI predictions along with a 'learning curve' are warranted.
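
A two-path architecture of this kind can be sketched in PyTorch: one path embeds the radiograph with a ResNet-50 backbone, the other embeds the pulp-lesion distance, and the merged features feed fully connected layers. Layer sizes below are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PulpExposureNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()            # expose 2048-dim image features
        self.image_path = backbone
        self.distance_path = nn.Sequential(    # embeds the scalar distance
            nn.Linear(1, 16), nn.ReLU())
        self.head = nn.Sequential(             # merged fully connected layers
            nn.Linear(2048 + 16, 128), nn.ReLU(),
            nn.Linear(128, 1))                 # logit of pulp exposure

    def forward(self, xray, distance):
        feats = torch.cat([self.image_path(xray),
                           self.distance_path(distance)], dim=1)
        return self.head(feats)

model = PulpExposureNet()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1))
prob = torch.sigmoid(logit)                    # probability of pulp exposure
print(prob.shape)
```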


Subject(s)
Deep Learning, Dental Caries, Humans, Artificial Intelligence, Dental Caries Susceptibility, Dental Caries/diagnostic imaging, Dental Caries/therapy
7.
J Digit Imaging ; 36(3): 767-775, 2023 06.
Article in English | MEDLINE | ID: mdl-36622464

ABSTRACT

The workload of some radiologists has increased dramatically in the last several years, which potentially reduces the quality of diagnosis. It has been demonstrated that the diagnostic accuracy of radiologists decreases significantly at the end of work shifts. The study aims to investigate how radiologists cover chest X-rays with their gaze in the presence of different chest abnormalities and high workload. We designed a randomized experiment to quantitatively assess how radiologists' image reading patterns change with the radiological workload. Four radiologists read chest X-rays on a radiological workstation equipped with an eye-tracker. The lung fields on the X-rays were automatically segmented with a U-Net neural network, allowing measurement of the lung coverage with the radiologists' gaze. The images were randomly split so that each image was shown at a different time to a different radiologist. Regression models were fit to the gaze data to calculate the trends in lung coverage for individual radiologists and chest abnormalities. For the study, a database of 400 chest X-rays with reference diagnoses was assembled. The average lung coverage with gaze ranged from 55 to 65% per radiologist. For every 100 X-rays read, the lung coverage reduced by 1.3 to 7.6% for the different radiologists. The coverage reduction trends were consistent for all abnormalities, ranging from 3.4% per 100 X-rays for cardiomegaly to 4.1% per 100 X-rays for atelectasis. The more images radiologists read, the smaller the part of the lung fields they cover with their gaze. This pattern is very stable for all abnormality types and is not affected by the exact order in which the abnormalities are viewed by radiologists. The proposed randomized experiment captured and quantified consistent changes in X-ray reading for different lung abnormalities that occur due to high workload.
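
Lung coverage with gaze and its trend over a reading session can be sketched with a binary lung mask, gaze fixation coordinates, and a linear fit. The foveal radius and all data below are illustrative assumptions:

```python
import numpy as np

def lung_coverage(lung_mask, fixations, radius=25):
    """Fraction of lung-field pixels within `radius` of any fixation."""
    yy, xx = np.nonzero(lung_mask)
    covered = np.zeros(yy.size, dtype=bool)
    for fy, fx in fixations:
        covered |= (yy - fy) ** 2 + (xx - fx) ** 2 <= radius ** 2
    return covered.mean()

rng = np.random.default_rng(0)
mask = np.zeros((256, 256), dtype=bool)
mask[60:200, 40:220] = True                       # crude stand-in "lung fields"
# Simulate coverage for 100 consecutively read X-rays.
coverages = [lung_coverage(mask, rng.integers(0, 256, (30, 2)))
             for _ in range(100)]
slope, intercept = np.polyfit(np.arange(100), coverages, 1)
print(f"coverage trend: {slope * 100 * 100:+.2f}% per 100 X-rays")
```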


Subject(s)
Radiologists, Radiology, Humans, X-Rays, Radiography, Lung/diagnostic imaging
8.
Sci Rep ; 13(1): 1135, 2023 01 20.
Article in English | MEDLINE | ID: mdl-36670118

ABSTRACT

In 2020, an experiment testing AI solutions for lung X-ray analysis on a multi-hospital network was conducted. The multi-hospital network linked 178 Moscow state healthcare centers, where all chest X-rays from the network were redirected to a research facility, analyzed with AI, and returned to the centers. The experiment was formulated as a public competition with monetary awards for participating industrial and research teams. The task was to perform binary detection of abnormalities from chest X-rays. For an objective real-life evaluation, no training X-rays were provided to the participants. This paper presents one of the top-performing AI frameworks from this experiment. First, the framework used two EfficientNets, histograms of gradients, Haar feature ensembles, and local binary patterns to recognize whether an input image represents an acceptable lung X-ray sample, meaning the X-ray is not grayscale inverted, is a frontal chest X-ray, and completely captures both lung fields. Second, the framework extracted the region with the lung fields and then passed it to a multi-head DenseNet, where the heads recognized the patient's gender, age, and the potential presence of abnormalities, and generated a heatmap with the abnormality regions highlighted. During one month of the experiment, from November 23 to December 25, 2020, 17,888 cases were analyzed by the framework, with 11,902 cases having radiological reports with reference diagnoses that were unequivocally parsed by the experiment organizers. The performance, measured in terms of the area under the receiver operating characteristic curve (AUC), was 0.77. The AUC for individual diseases ranged from 0.55 for herniation to 0.90 for pneumothorax.
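
The second stage's multi-head design can be sketched in PyTorch as a shared DenseNet-121 trunk with separate heads for gender, age, and abnormality. Head sizes are assumptions for illustration:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class MultiHeadChestNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = densenet121(weights=None).features   # shared features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gender = nn.Linear(1024, 2)      # male / female logits
        self.age = nn.Linear(1024, 1)         # age regression
        self.abnormal = nn.Linear(1024, 1)    # abnormality logit

    def forward(self, x):
        f = self.pool(torch.relu(self.trunk(x))).flatten(1)
        return self.gender(f), self.age(f), self.abnormal(f)

net = MultiHeadChestNet()
g, a, ab = net(torch.randn(2, 3, 224, 224))
print(g.shape, a.shape, ab.shape)
```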


Subject(s)
Pneumothorax, Thoracic Radiography, Humans, Thoracic Radiography/methods, Lung/diagnostic imaging, Thorax, Artificial Intelligence
9.
Med Phys ; 50(3): 1917-1927, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36594372

ABSTRACT

PURPOSE: For cancer of the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches focus on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not yet been thoroughly explored. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS: The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. While maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases or 75%) and test Set 2 (14 cases or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES: The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames follow the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS: The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN region. Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
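
Since images and reference segmentations are distributed as NRRD files, they can be loaded with the pynrrd package. A sketch under assumed file names; the actual dataset layout and nomenclature should be taken from the dataset documentation:

```python
import nrrd  # pip install pynrrd
import numpy as np

# Hypothetical file names; consult the HaN-Seg documentation for real paths.
ct, ct_header = nrrd.read("case_01/case_01_IMG_CT.nrrd")
oar, _ = nrrd.read("case_01/case_01_OAR_Parotid_L.nrrd")

print("CT shape:", ct.shape)
print("voxel spacing:", np.diag(ct_header["space directions"]))
print("OAR voxels:", int(oar.sum()))
```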


Subject(s)
Head and Neck Neoplasms, Image-Guided Radiotherapy, Humans, Algorithms, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Computer-Assisted Image Processing/methods, Organs at Risk/diagnostic imaging, Computed Tomography/methods
10.
Acta Odontol Scand ; 81(6): 422-435, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36548872

ABSTRACT

OBJECTIVES: To assess the efficiency of AI methods in finding radiographic features relevant to endodontic treatment. MATERIAL AND METHODS: This review was based on the PRISMA guidelines and the QUADAS-2 tool. A systematic search of the literature was performed on cases with endodontic treatments, comparing AI algorithms (test) versus conventional image assessments (control) for finding radiographic features. The search was conducted in PubMed, Scopus, Google Scholar, and the Cochrane Library. Inclusion criteria were studies on the use of AI and machine learning in endodontic treatments using dental X-rays. RESULTS: The initial search retrieved 1131 papers, of which 24 were included. High heterogeneity of the materials precluded a meta-analysis. The reported subcategories were periapical lesions, vertical root fractures, prediction of root/canal morphology, location of the minor apical foramen, tooth segmentation, and endodontic retreatment prediction. The radiographic features assessed were mostly periapical lesions. The studies mostly considered the decision of 1-3 experts as the reference for training their models. Almost half of the included studies compared their trained neural network model with other methods. More than 58% of the studies had some level of bias. CONCLUSIONS: AI-based models have shown effectiveness in finding radiographic features in different endodontic treatments. While the reported accuracy measurements seem promising, most of the papers were methodologically biased.


Subject(s)
Artificial Intelligence, Tooth, Humans, Dental Care, Root Canal Therapy/methods
11.
IEEE J Biomed Health Inform ; 26(9): 4541-4550, 2022 09.
Article in English | MEDLINE | ID: mdl-35704540

ABSTRACT

Around 60-80% of radiological errors are attributed to overlooked abnormalities, the rate of which increases at the end of work shifts. In this study, we ran an experiment to investigate whether artificial intelligence (AI) can assist in detecting radiologists' gaze patterns that correlate with fatigue. A retrospective database of lung X-ray images with reference diagnoses was used. The X-ray images were acquired from 400 subjects with a mean age of 49 ± 17 years, 61% of whom were men. Four practicing radiologists read these images while their eye movements were recorded. The radiologists passed a series of concentration tests at prearranged breaks in the experiment. A U-Net neural network was adapted to annotate the lung anatomy on X-rays and to calculate coverage and information gain features from the radiologists' eye movements over the lung fields. The lung coverage, information gain, and eye tracker-based features were compared with the cumulative work done (CWD) label for each radiologist. The gaze-traveled distance, X-ray coverage, and lung coverage deteriorated statistically significantly (p < 0.01) with CWD for three out of four radiologists. The reading time and information gain over the lungs deteriorated statistically significantly for all four radiologists. We discovered a novel AI-based metric blending reading time, speed, and organ coverage, which can be used to predict changes in fatigue-related image reading patterns.
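
Testing whether a gaze feature deteriorates with cumulative work done reduces to a per-radiologist trend test. A sketch using Spearman correlation on synthetic data (the drift rate and noise level are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 100
cwd = np.arange(n_images)                       # cumulative work done proxy
# Synthetic lung coverage drifting slowly downward with noise.
coverage = 0.65 - 0.0004 * cwd + rng.normal(0, 0.01, n_images)

rho, p = spearmanr(cwd, coverage)
verdict = "deteriorates" if rho < 0 and p < 0.01 else "no significant trend"
print(f"rho = {rho:.2f}, p = {p:.4f} -> coverage {verdict}")
```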


Subject(s)
Artificial Intelligence, Workload, Adult, Aged, Fatigue, Female, Humans, Male, Middle Aged, Radiologists, Retrospective Studies
12.
Eur Spine J ; 31(8): 2115-2124, 2022 08.
Article in English | MEDLINE | ID: mdl-35596800

ABSTRACT

PURPOSE: To propose a fully automated deep learning (DL) framework for vertebral morphometry and Cobb angle measurement from three-dimensional (3D) computed tomography (CT) images of the spine, and to validate the proposed framework on an external database. METHODS: The vertebrae were first localized and segmented in each 3D CT image using a DL architecture based on an ensemble of U-Nets. Automated vertebral morphometry, in the form of vertebral body (VB) and intervertebral disk (IVD) heights, and spinal curvature measurements, in the form of coronal and sagittal Cobb angles (thoracic kyphosis and lumbar lordosis), were then performed using dedicated machine learning techniques. The framework was trained on 1725 vertebrae from 160 CT images and validated on an external database of 157 vertebrae from 15 CT images. RESULTS: The resulting mean absolute errors (± standard deviation) between the obtained DL and corresponding manual measurements were 1.17 ± 0.40 mm for VB heights, 0.54 ± 0.21 mm for IVD heights, and 3.42 ± 1.36° for coronal and sagittal Cobb angles, with respective maximal absolute errors of 2.51 mm, 1.64 mm, and 5.52°. Linear regression revealed excellent agreement, with Pearson's correlation coefficients of 0.943, 0.928, and 0.996, respectively. CONCLUSION: The obtained results are within the range of values obtained by existing DL approaches without external validation. The results therefore confirm the scalability of the proposed DL framework from the perspective of application to external data, as well as the time and computational resources required for framework training.
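
The Cobb angle itself is the angle between the most-tilted vertebral endplates. Given two endplate direction vectors from the localization stage, the measurement and the agreement statistics reported above can be sketched as follows (all vectors and values are illustrative):

```python
import numpy as np

def cobb_angle(endplate_a, endplate_b):
    """Angle in degrees between two endplate direction vectors."""
    a = endplate_a / np.linalg.norm(endplate_a)
    b = endplate_b / np.linalg.norm(endplate_b)
    return np.degrees(np.arccos(np.clip(abs(a @ b), -1.0, 1.0)))

print(cobb_angle(np.array([1.0, 0.0]), np.array([0.94, 0.34])))  # ~20 deg

# Agreement of automated vs. manual measurements (synthetic values).
auto = np.array([18.2, 42.5, 11.0, 27.9])
manual = np.array([19.0, 40.1, 12.3, 30.0])
mae = np.mean(np.abs(auto - manual))            # mean absolute error
r = np.corrcoef(auto, manual)[0, 1]             # Pearson's correlation
print(f"MAE = {mae:.2f} deg, Pearson r = {r:.3f}")
```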


Subject(s)
Deep Learning, Kyphosis, Lordosis, Scoliosis, Humans, Lumbar Vertebrae/diagnostic imaging, Thoracic Vertebrae/diagnostic imaging
13.
Eur Spine J ; 31(8): 2031-2045, 2022 08.
Article in English | MEDLINE | ID: mdl-35278146

ABSTRACT

PURPOSE: To summarize and critically evaluate the existing studies on spinopelvic measurements of sagittal balance that are based on deep learning (DL). METHODS: Three databases (PubMed, WoS, and Scopus) were queried for records using keywords related to DL and the measurement of sagittal balance. After screening the resulting 529 records, which were augmented with a targeted web search, 34 studies published between 2017 and 2022 were included in the final review and evaluated from the perspective of the observed sagittal spinopelvic parameters, the properties of the spine image datasets, the applied DL methodology, and the resulting measurement performance. RESULTS: Studies reported DL measurement of up to 18 different spinopelvic parameters, but the actual number depended on the image field of view. Image datasets were composed of lateral lumbar spine and whole spine X-rays, biplanar whole spine X-rays, and lumbar spine magnetic resonance cross sections, and were increasing in size or enriched by augmentation techniques. Spinopelvic parameter measurement was approached either by landmark detection or structure segmentation, and U-Net was the most frequently applied DL architecture. The latest DL methods achieved excellent performance in terms of mean absolute error against reference manual measurements (~ 2° or ~ 1 mm). CONCLUSION: Although the application of relatively complex DL architectures resulted in improved measurement accuracy of sagittal spinopelvic parameters, future methods should focus on multi-institution and multi-observer analyses, as well as uncertainty estimation and error handling implementations, for integration into the clinical workflow. Further advances will enhance the predictive analytics of DL methods for spinopelvic parameter measurement. LEVEL OF EVIDENCE I: Diagnostic: individual cross-sectional studies with a consistently applied reference standard and blinding.


Subject(s)
Deep Learning, Cross-Sectional Studies, Humans, Lumbar Vertebrae/diagnostic imaging, Lumbosacral Region/diagnostic imaging, Pelvis/diagnostic imaging, Radiography
14.
Med Image Anal ; 78: 102417, 2022 05.
Article in English | MEDLINE | ID: mdl-35325712

ABSTRACT

Morphological abnormalities of the femoroacetabular (hip) joint are among the most common human musculoskeletal disorders and often develop asymptomatically at early, easily treatable stages. In this paper, we propose an automated framework for landmark-based detection and quantification of hip abnormalities from magnetic resonance (MR) images. The framework relies on a novel idea of multi-landmark environment analysis with reinforcement learning. In particular, we merge the concepts of the graphical lasso and Morris sensitivity analysis with deep neural networks to quantitatively estimate the contribution of individual landmark and landmark subgroup locations to the other landmark locations. Convolutional neural networks for image segmentation are utilized to propose the initial landmark locations, and landmark detection is then formulated as a reinforcement learning (RL) problem, where each landmark-agent can adjust its position by observing the local MR image neighborhood and the locations of the most-contributive landmarks. The framework was validated on T1-, T2- and proton density-weighted MR images of 260 patients with the aim to measure the lateral center-edge angle (LCEA), femoral neck-shaft angle (NSA), and the anterior and posterior acetabular sector angles (AASA and PASA) of the hip, and to derive quantitative abnormality metrics from these angles. The framework was successfully tested using the U-Net and feature pyramid network (FPN) segmentation architectures for landmark proposal generation, and the deep Q-network (DeepQN), deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3), and actor-critic policy gradient (A2C) RL networks for landmark position optimization. The resulting overall landmark detection error of 1.5 mm and angle measurement error of 1.4° indicate superior performance in comparison to existing methods. Moreover, the automatically estimated abnormality labels were in 95% agreement with those generated by an expert radiologist.
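
Once the landmarks are placed, the clinical angles are plain geometry. A sketch of the lateral center-edge angle (Wiberg angle) computed from two landmarks against the vertical body axis; the coordinates, the axis convention, and the commonly cited ~25° dysplasia threshold are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def lcea(femoral_head_center, lateral_acetabular_rim):
    """Lateral center-edge angle: between the vertical axis through the
    femoral head center and the line to the lateral acetabular rim."""
    v = lateral_acetabular_rim - femoral_head_center
    vertical = np.array([0.0, 1.0])             # assumed "up" direction
    cos = (v @ vertical) / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

center = np.array([100.0, 100.0])               # femoral head center (toy)
rim = np.array([112.0, 130.0])                  # superolateral rim landmark
angle = lcea(center, rim)
print(f"LCEA = {angle:.1f} deg ({'normal' if angle >= 25 else 'possibly dysplastic'})")
```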


Subject(s)
Hip Joint/abnormalities, Hip Joint/diagnostic imaging, Humans, Learning, Magnetic Resonance Imaging
15.
IEEE J Biomed Health Inform ; 25(10): 3886-3897, 2021 10.
Article in English | MEDLINE | ID: mdl-33945490

ABSTRACT

Accurate segmentation of polyps from colonoscopy images provides useful information for the diagnosis and treatment of colorectal cancer. Although deep learning methods have advanced automatic polyp segmentation, their performance often degrades when applied to new data acquired from different scanners or sequences (target domain). As manual annotation is tedious and labor-intensive for a new target domain, leveraging knowledge learned from the labeled source domain to improve performance in the unlabeled target domain is highly desirable. In this work, we propose a mutual-prototype adaptation network to eliminate domain shifts in multi-center, multi-device colonoscopy images. We first devise a mutual-prototype alignment (MPA) module with a prototype relation function to refine features through self-domain and cross-domain information in a coarse-to-fine process. Two auxiliary modules are then proposed to improve segmentation performance: progressive self-training (PST) and disentangled reconstruction (DR). The PST module selects reliable pseudo labels through a novel uncertainty-guided self-training loss to obtain accurate prototypes in the target domain. The DR module reconstructs the original images jointly utilizing prediction results and private prototypes to maintain semantic consistency and provide complementary supervision information. We extensively evaluate the proposed model's polyp segmentation performance on three conventional colonoscopy datasets: CVC-DB, Kvasir-SEG, and ETIS-Larib. The comprehensive experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
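
The PST module's reliable-pseudo-label idea can be sketched with an entropy-based uncertainty filter over target-domain predictions. The threshold and tensor shapes are assumptions; the paper's actual loss is more involved:

```python
import torch

def select_pseudo_labels(probs, max_entropy=0.3):
    """Keep target-domain pixels whose predictive entropy is low.

    probs: (N, H, W) sigmoid outputs of the segmentation network.
    Returns pseudo labels and a boolean reliability mask.
    """
    eps = 1e-7
    entropy = -(probs * torch.log(probs + eps)
                + (1 - probs) * torch.log(1 - probs + eps))
    reliable = entropy < max_entropy          # low uncertainty -> trusted
    pseudo = (probs > 0.5).float()
    return pseudo, reliable

probs = torch.rand(2, 128, 128)               # toy network outputs
pseudo, reliable = select_pseudo_labels(probs)
print(f"reliable pixels: {reliable.float().mean().item():.1%}")
```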


Subject(s)
Colonoscopy, Humans, Semantics
16.
Sci Rep ; 11(1): 3246, 2021 02 05.
Article in English | MEDLINE | ID: mdl-33547335

ABSTRACT

Patients with severe COVID-19 have overwhelmed healthcare systems worldwide. We hypothesized that machine learning (ML) models could be used to predict risks at different stages of management and thereby provide insights into drivers and prognostic markers of disease progression and death. From a cohort of approximately 2.6 million citizens in Denmark, SARS-CoV-2 PCR tests were performed on subjects suspected of COVID-19 disease; 3944 cases had at least one positive test and were subjected to further analysis. SARS-CoV-2 positive cases from the United Kingdom Biobank were used for external validation. The ML models predicted the risk of death with a receiver operating characteristic area under the curve (ROC-AUC) of 0.906 at diagnosis, 0.818 at hospital admission, and 0.721 at Intensive Care Unit (ICU) admission. Similar metrics were achieved for the predicted risks of hospital and ICU admission and use of mechanical ventilation. Common risk factors included age, body mass index, and hypertension, although the top risk features shifted towards markers of shock and organ dysfunction in ICU patients. The external validation indicated fair predictive performance for mortality prediction, but suboptimal performance for predicting ICU admission. ML may be used to identify drivers of progression to more severe disease and for prognostication in patients with COVID-19. We provide access to an online risk calculator based on these findings.
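
ROC-AUC values like those reported here can be computed on any set of predicted risks with scikit-learn. A sketch on synthetic risk scores (cohort size reused, everything else made up):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3944
died = rng.random(n) < 0.05                      # synthetic binary outcome
# Synthetic risk score: shifted upward for the positive class, plus noise.
risk = 0.3 * died + rng.normal(0.5, 0.15, n)

print(f"ROC-AUC = {roc_auc_score(died, risk):.3f}")
```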


Subject(s)
COVID-19/diagnosis, COVID-19/mortality, Computer Simulation, Machine Learning, Age Factors, Aged, Aged 80 and over, Body Mass Index, COVID-19/complications, COVID-19/physiopathology, Comorbidity, Critical Care, Female, Hospitalization, Humans, Hypertension/complications, Intensive Care Units, Male, Middle Aged, Prognosis, Prospective Studies, ROC Curve, Artificial Respiration, Risk Factors, Sex Factors
17.
IEEE J Biomed Health Inform ; 25(5): 1660-1672, 2021 05.
Article in English | MEDLINE | ID: mdl-32956067

ABSTRACT

Pneumothorax is a potentially life-threatening disease that requires urgent diagnosis and treatment. The chest X-ray is the diagnostic modality of choice when pneumothorax is suspected. The computer-aided diagnosis of pneumothorax has received a dramatic boost in the last few years due to deep learning advances and the first public pneumothorax diagnosis competition with 15,257 chest X-rays manually annotated by a team of 19 radiologists. This paper describes one of the top frameworks that participated in the competition. The framework investigates the benefits of combining the U-Net convolutional neural network with various backbones, namely ResNet34, SE-ResNext50, SE-ResNext101, and DenseNet121. The paper presents step-by-step instructions for applying the framework, including data augmentation and different pre- and post-processing steps. The framework achieved a Dice coefficient of 0.8574. The second contribution of the paper is the comparison of the deep learning framework against three experienced radiologists on pneumothorax detection and segmentation on challenging X-rays. We also evaluated how the diagnostic confidence of radiologists affects the accuracy of the diagnosis and observed that the deep learning framework and radiologists find the same X-rays easy/difficult to analyze (p-value < 1e-4). Finally, the methodology of all top-performing teams from the competition leaderboard was analyzed to identify consistent methodological patterns of accurate pneumothorax detection and segmentation.
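
Combining a U-Net decoder with interchangeable pretrained encoders is conveniently expressed with the segmentation_models_pytorch package. Its use here is an assumption for illustration; the authors' exact implementation may differ:

```python
import torch
import segmentation_models_pytorch as smp  # pip install segmentation-models-pytorch

backbones = ["resnet34", "se_resnext50_32x4d", "densenet121"]
models = [
    smp.Unet(encoder_name=b, encoder_weights=None,
             in_channels=1, classes=1)           # 1-channel X-ray, 1 mask
    for b in backbones
]

x = torch.randn(1, 1, 512, 512)
# Simple ensemble: average the sigmoid outputs over all backbones.
with torch.no_grad():
    mask = torch.stack([torch.sigmoid(m(x)) for m in models]).mean(0)
print(mask.shape)   # (1, 1, 512, 512)
```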


Subject(s)
Deep Learning, Pneumothorax, Computer-Assisted Diagnosis, Humans, Computer-Assisted Image Processing, Pneumothorax/diagnostic imaging, Radiologists
18.
Med Phys ; 47(9): e929-e950, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32510603

ABSTRACT

Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), and it requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance image modalities are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with the corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric, and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.


Subject(s)
Deep Learning, Head and Neck Neoplasms, Head, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Humans, Organs at Risk, Computer-Assisted Radiotherapy Planning
19.
Med Phys ; 47(8): 3721-3731, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32406531

ABSTRACT

PURPOSE: Radiation therapy (RT) is prescribed for curative and palliative treatment for around 50% of patients with solid tumors. Radiation-induced toxicities of healthy organs accompany many RTs and represent one of the main limiting factors during dose delivery. The existing RT planning solutions generally discard spatial dose distribution information and lose the ability to recognize radiosensitive regions of healthy organs potentially linked to toxicity manifestation. This study proposes a universal deep learning-based algorithm for the recognition of consistent dose patterns and the generation of toxicity risk maps for the abdominal area. METHODS: We investigated whether convolutional neural networks (CNNs) can automatically associate abdominal computed tomography (CT) images and RT dose plans with post-RT toxicities without being provided segmentations of abdominal organs. The CNNs were also applied to study RT plans in which doses at specific anatomical regions were reduced or increased, with the aim of pinpointing critical regions whose sparing significantly reduces toxicity risks. The obtained risk maps were computed for individual anatomical regions inside the liver and statistically compared to the existing clinical studies. RESULTS: A database of 122 liver stereotactic body RT (SBRT) treatments performed at Stanford Hospital between July 2004 and November 2015 was assembled. All patients treated for primary liver cancer, mainly hepatocellular carcinoma and cholangiocarcinoma, with complete follow-ups were extracted from the database. The SBRT treatment doses ranged from 26 to 50 Gy delivered in 1-5 fractions for primary liver cancer. The patients were followed up for 1-68 months depending on the survival time. The CNNs were trained to recognize acute and late grade 3+ biliary stricture/obstruction, hepatic failure or decompensation, hepatobiliary infection, liver function test (LFT) elevation and/or portal vein thrombosis, collectively referred to as hepatobiliary (HB) toxicities. The toxicity prediction performance, measured in terms of the area under the receiver operating characteristic curve, was 0.73. Significantly higher risk scores (P < 0.05) of HB toxicity manifestation were associated with irradiation of the hepatobiliary tract in comparison to the risk scores for liver segments I-VIII and the portal vein. This observation is in strong agreement with anatomical and clinical expectations. CONCLUSION: In this work, we proposed and validated a universal deep learning-based solution for the identification of radiosensitive anatomical regions. Without any prior anatomical knowledge, the CNNs automatically recognized the importance of hepatobiliary tract sparing during liver SBRT.
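
The pinpointing of critical regions can be sketched as a dose-perturbation probe: reduce the dose within one anatomical region, re-run the trained toxicity model, and record the risk change. The model below is a toy stand-in, not the paper's CNN, and all regions and doses are synthetic:

```python
import numpy as np

def toxicity_model(dose):
    """Stand-in for the trained CNN: risk grows with dose inside a
    'radiosensitive' sub-block of the volume."""
    return 1 / (1 + np.exp(-(dose[8:16, 8:16, 8:16].mean() - 20) / 5))

def region_risk_delta(dose, region_mask, scale=0.5):
    """Risk change when the dose inside `region_mask` is scaled down."""
    perturbed = np.where(region_mask, dose * scale, dose)
    return toxicity_model(dose) - toxicity_model(perturbed)

rng = np.random.default_rng(0)
dose = rng.uniform(10, 40, (32, 32, 32))          # toy 3-D dose plan (Gy)
hb_tract = np.zeros_like(dose, dtype=bool)
hb_tract[8:16, 8:16, 8:16] = True                 # overlaps the sensitive block
liver_seg = np.zeros_like(dose, dtype=bool)
liver_seg[20:28, 20:28, 20:28] = True             # does not overlap it

for name, mask in [("hepatobiliary tract", hb_tract), ("liver segment", liver_seg)]:
    print(f"{name}: risk reduction when spared = {region_risk_delta(dose, mask):.3f}")
```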


Subject(s)
Hepatocellular Carcinoma, Deep Learning, Liver Neoplasms, Radiosurgery, Humans, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/radiotherapy, Liver Neoplasms/surgery, Radiosurgery/adverse effects
20.
IEEE J Biomed Health Inform ; 23(5): 1821-1833, 2019 09.
Article in English | MEDLINE | ID: mdl-30869633

ABSTRACT

Stereotactic body radiation therapy (SBRT) is a relatively novel treatment modality, with little post-treatment prognostic information reported. This study proposes a novel neural-network-based paradigm for accurate prediction of liver SBRT outcomes. We assembled a database of patients treated with liver SBRT at our institution. Together with the three-dimensional (3-D) dose delivery plan for each SBRT treatment, other variables such as patients' demographics, quantified abdominal anatomy, history of liver comorbidities, other liver-directed therapies, and liver function tests were collected. We developed a multi-path neural network with a convolutional path for 3-D dose plan analysis and a fully connected path for the analysis of the other variables, where the network was trained to predict post-SBRT survival and local cancer progression. To enhance the network's robustness, it was initially pre-trained on a large database of computed tomography images. Following n-fold cross-validation, the network automatically identified patients that are likely to have longer survival or late cancer recurrence, i.e., patients with a positive predicted outcome (PPO) of SBRT, and vice versa, i.e., a negative predicted outcome (NPO). The predicted results agreed with the actual SBRT outcomes: 56% of PPO patients and 0% of NPO patients with primary liver cancer survived more than two years after SBRT; similarly, 82% of PPO patients and 0% of NPO patients with metastatic liver cancer survived beyond the two-year threshold. The obtained results were superior to the performance of support vector machine and random forest classifiers. Furthermore, the network was able to identify the critical-to-spare liver regions and the critical clinical features associated with the highest risks of negative SBRT outcomes.
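
The n-fold evaluation of predicted outcomes can be sketched with scikit-learn's KFold: collect out-of-fold predictions, split patients into PPO/NPO, and compare two-year survival. The classifier and data below are placeholders for the multi-path network, not the study's model:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))                   # stand-in patient features
survived_2y = (X[:, 0] + rng.normal(0, 0.5, 120)) > 0   # synthetic outcome

pred = np.empty(120, dtype=bool)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression().fit(X[train], survived_2y[train])
    pred[test] = clf.predict(X[test]).astype(bool)   # out-of-fold prediction

ppo, npo = pred, ~pred                           # positive / negative predicted outcome
print(f"2-year survival | PPO: {survived_2y[ppo].mean():.0%}, "
      f"NPO: {survived_2y[npo].mean():.0%}")
```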


Subject(s)
Liver Neoplasms/radiotherapy, Radiosurgery, Computer-Assisted Radiotherapy Planning, Adult, Aged, Aged 80 and over, Algorithms, Deep Learning, Disease Progression, Female, Humans, Kaplan-Meier Estimate, Liver/surgery, Liver Neoplasms/mortality, Male, Middle Aged, ROC Curve, Radiosurgery/methods, Radiosurgery/mortality, Computer-Assisted Radiotherapy Planning/methods, Computer-Assisted Radiotherapy Planning/mortality