Results 1 - 20 of 112
1.
J Med Imaging (Bellingham) ; 11(3): 037502, 2024 May.
Article in English | MEDLINE | ID: mdl-38737491

ABSTRACT

Purpose: Immune checkpoint inhibitors (ICIs) are now one of the standards of care for patients with lung cancer and have greatly improved both progression-free and overall survival, although <20% of patients respond to the treatment and some face acute adverse events. Although a few predictive biomarkers have been integrated into the clinical workflow, they require additional modalities on top of whole-slide images and lack efficiency or robustness. In this work, we propose a biomarker of immunotherapy outcome derived solely from the analysis of histology slides. Approach: We develop a three-step framework, combining contrastive learning and nonparametric clustering to distinguish tissue patterns within the slides, before exploiting the adjacencies of previously defined regions to derive features and train a proportional hazards model for survival analysis. We test our approach on an in-house dataset of 193 patients from 5 medical centers and compare it with the gold-standard tumor proportion score (TPS) biomarker. Results: On a fivefold cross-validation (CV) of the entire dataset, the whole-slide image-based survival analysis for patients treated with immunotherapy (WhARIO) features are able to separate a low- and a high-risk group of patients with a hazard ratio (HR) of 2.29 (95% CI = 1.48 to 3.56), whereas the TPS 1% reference threshold only reaches an HR of 1.81 (95% CI = 1.21 to 2.69). Combining the two yields a higher HR of 2.60 (95% CI = 1.72 to 3.94). Additional experiments on the same dataset, where one out of five centers is excluded from the CV and used as a test set, confirm these trends. Conclusions: Our uniquely designed WhARIO features are an efficient predictor of survival for lung cancer patients who received ICI treatment. We achieve performance similar to the current gold-standard biomarker, without the need to access other imaging modalities, and show that both can be used together to reach even better results.
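For readers unfamiliar with hazard ratios, the HRs above come from a proportional hazards model. As a rough, self-contained illustration (our own sketch, not the WhARIO code; all names are hypothetical), a one-covariate Cox model can be fitted by gradient ascent on the Breslow log partial likelihood, with exp(beta) giving the HR of the high-risk vs the low-risk group:

```python
import math

def cox_hr_binary(times, events, groups, iters=300, lr=0.05):
    """Fit a one-covariate Cox model (group in {0, 1}) by gradient
    ascent on the Breslow log partial likelihood; returns exp(beta),
    the hazard ratio of group 1 relative to group 0."""
    beta = 0.0
    data = list(zip(times, events, groups))
    for _ in range(iters):
        grad = 0.0
        for t_i, e_i, x_i in data:
            if not e_i:  # censored observation: no event term
                continue
            # risk set: everyone still under observation at t_i
            risk = [x for t, _, x in data if t >= t_i]
            denom = sum(math.exp(beta * x) for x in risk)
            num = sum(x * math.exp(beta * x) for x in risk)
            grad += x_i - num / denom  # observed minus expected covariate
        beta += lr * grad
    return math.exp(beta)
```

An HR of 2.29, as reported above, means the high-risk group experiences events at roughly 2.3 times the instantaneous rate of the low-risk group.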

2.
IEEE Trans Med Imaging ; PP, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38551825

ABSTRACT

Cross-modality data translation has attracted great interest in medical image computing. Deep generative models show performance improvements in addressing related challenges. Nevertheless, as a fundamental challenge in image translation, the problem of zero-shot cross-modality image translation with fidelity remains unsolved. To bridge this gap, we propose a novel unsupervised zero-shot learning method called the Mutual Information guided Diffusion Model (MIDiffusion), which learns to translate an unseen source image to the target modality by leveraging the inherent statistical consistency of mutual information between different modalities. To overcome the prohibitively expensive high-dimensional mutual information calculation, we propose a differentiable local-wise mutual information layer for conditioning the iterative denoising process. This layer captures identical cross-modality features in the statistical domain, offering diffusion guidance without relying on direct mappings between the source and target domains. This advantage allows our method to adapt to changing source domains without retraining, making it highly practical when sufficient labeled source-domain data are not available. We demonstrate the superior performance of MIDiffusion in zero-shot cross-modality translation tasks through empirical comparisons with other generative models, including adversarial-based and diffusion-based models. Finally, we showcase the real-world application of MIDiffusion in 3D zero-shot learning-based cross-modality image segmentation tasks.

3.
Diagn Interv Imaging ; 105(2): 65-73, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37822196

ABSTRACT

PURPOSE: The purpose of this study was to investigate inter-reader variability in manual prostate contour segmentation on magnetic resonance imaging (MRI) examinations and to determine the optimal number of readers required to establish a reliable reference standard. MATERIALS AND METHODS: Seven radiologists with various levels of experience independently performed manual segmentation of the prostate contour (whole gland [WG] and transition zone [TZ]) on 40 prostate MRI examinations obtained in 40 patients. Inter-reader variability in prostate contour delineations was estimated using standard metrics (Dice similarity coefficient [DSC], Hausdorff distance and volume-based metrics). The impact of the number of readers (from two to seven) on segmentation variability was assessed using pairwise metrics (consistency) and metrics with respect to a reference segmentation (conformity), obtained either with majority voting or with the simultaneous truth and performance level estimation (STAPLE) algorithm. RESULTS: The average segmentation DSC for two readers in pairwise comparison was 0.919 for WG and 0.876 for TZ. Variability decreased with the number of readers: the interquartile ranges of the DSC were 0.076 (WG) / 0.021 (TZ) for configurations with two readers, 0.005 (WG) / 0.012 (TZ) for configurations with three readers, and 0.002 (WG) / 0.0037 (TZ) for configurations with six readers. The interquartile range decreased slightly faster between two and three readers than between three and six readers. When using consensus methods, variability often reached its minimum with three readers (with STAPLE, DSC = 0.96 [range: 0.945-0.971] for WG and DSC = 0.94 [range: 0.912-0.957] for TZ), and the interquartile range was minimal for configurations with three readers. CONCLUSION: The number of readers affects inter-reader variability, in terms of both inter-reader consistency and conformity to a reference. With both pairwise metrics and metrics computed with respect to a reference, three readers either minimize variability or represent a tipping point in its evolution. Accordingly, three readers may represent an optimal number to determine references for artificial intelligence applications.
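The DSC used throughout this abstract is straightforward to compute; a minimal sketch for binary masks (our own illustration, with hypothetical names):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as equal-length sequences of 0/1: 2|A∩B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0  # two empty masks agree
```

A DSC of 1.0 means perfect overlap; the pairwise inter-reader values above (0.919 for WG) indicate very high, but not perfect, agreement.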


Subjects
Artificial Intelligence, Prostate, Male, Humans, Prostate/diagnostic imaging, Observer Variation, Magnetic Resonance Imaging/methods, Algorithms
4.
J Med Imaging (Bellingham) ; 10(3): 034502, 2023 May.
Article in English | MEDLINE | ID: mdl-37216152

ABSTRACT

Purpose: The purpose of this study is to examine the use of unlabeled data for abdominal organ classification in multi-label (non-mutually exclusive classes) ultrasound images, as an alternative to the conventional transfer learning approach. Approach: We present a new method for classifying abdominal organs in ultrasound images. Unlike previous approaches that relied only on labeled data, we consider the use of both labeled and unlabeled data. To explore this approach, we first examine the application of deep clustering for pre-training a classification model. We then compare two training methods: fine-tuning with labeled data through supervised learning, and fine-tuning with both labeled and unlabeled data using semi-supervised learning. All experiments were conducted on a large dataset of unlabeled images (n_u = 84,967) and a small set of labeled images (n_s = 2,742), used progressively in proportions of 10%, 20%, 50%, and 100%. Results: We show that for supervised fine-tuning, deep clustering is an effective pre-training method, with performance matching that of ImageNet pre-training using five times less labeled data. For semi-supervised learning, deep clustering pre-training also yields higher performance when the amount of labeled data is limited. The best performance is obtained with deep clustering pre-training combined with semi-supervised learning and 2,742 labeled images, with a weighted-average F1-score of 84.1%. Conclusions: This method can be used as a tool to preprocess large unprocessed databases, thus reducing the need for prior annotation of abdominal ultrasound studies for the training of image classification algorithms, which in turn could improve the clinical use of ultrasound images.
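The weighted-average F1-score reported above is the support-weighted mean of per-label F1 scores. A small sketch for multi-label predictions (our own illustrative implementation, not the authors' evaluation code; each sample is a set of organ labels):

```python
from collections import Counter

def f1_weighted(y_true, y_pred, labels):
    """Support-weighted average of per-label F1 scores for
    multi-label predictions (each sample is a set of labels)."""
    support = Counter(l for ys in y_true for l in ys)
    total, score = sum(support.values()), 0.0
    for l in labels:
        tp = sum(l in t and l in p for t, p in zip(y_true, y_pred))
        fp = sum(l not in t and l in p for t, p in zip(y_true, y_pred))
        fn = sum(l in t and l not in p for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        score += support[l] * f1  # weight each label by its support
    return score / total
```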

5.
Med Image Anal ; 85: 102763, 2023 04.
Article in English | MEDLINE | ID: mdl-36764037

ABSTRACT

Given the size of digitized Whole Slide Images (WSIs), it is generally laborious and time-consuming for pathologists to exhaustively delineate objects within them, especially in datasets containing hundreds of slides to annotate. Most of the time, only slide-level labels are available, giving rise to the development of weakly-supervised models. However, it is often difficult to obtain accurate object localization from such models, e.g., patches with tumor cells in a tumor detection task, as they are mainly designed for slide-level classification. Using the attention-based deep Multiple Instance Learning (MIL) model as our base weakly-supervised model, we propose to use mixed supervision, i.e., both slide-level and patch-level labels, to improve both the classification and the localization performance of the original model, using only a limited number of slides with patch-level labels. In addition, we propose an attention loss term to regularize the attention between key instances, and a paired-batch method to create balanced batches for the model. First, we show that the changes made to the model already improve its performance and interpretability in the weakly-supervised setting. Furthermore, when using only 12-62% of the total available patch-level annotations, we can reach performance close to fully-supervised models on the tumor classification datasets DigestPath2019 and Camelyon16.
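Attention-based deep MIL pools patch embeddings into a slide-level representation via learned attention weights. A dependency-free sketch of the pooling step (in practice the matrices V and w are learned jointly with the classifier; all names here are ours):

```python
import math

def attention_pool(instances, V, w):
    """Attention pooling as in attention-based deep MIL:
    a_k = softmax_k( w . tanh(V h_k) ); bag = sum_k a_k * h_k.
    instances: list of patch embedding vectors."""
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    scores = [sum(wi * math.tanh(s) for wi, s in zip(w, matvec(V, h)))
              for h in instances]
    m = max(scores)  # subtract max for numerical stability
    exp = [math.exp(s - m) for s in scores]
    attn = [e / sum(exp) for e in exp]
    bag = [sum(a * h[d] for a, h in zip(attn, instances))
           for d in range(len(instances[0]))]
    return bag, attn
```

The attention weights `attn` are what makes the model interpretable: high-attention patches can be inspected as candidate tumor regions, which is precisely the localization signal the mixed supervision above seeks to sharpen.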


Subjects
Bivalvia, Neoplasms, Humans, Animals, Radiopharmaceuticals
6.
J Clin Med ; 12(2)2023 Jan 08.
Article in English | MEDLINE | ID: mdl-36675438

ABSTRACT

Understanding cochlear anatomy is crucial for developing less traumatic electrode arrays and insertion guidance for cochlear implantation. The human cochlea shows considerable variability in size and morphology. This study analyses more than 1000 clinical temporal bone CT images using a web-based image analysis tool. Cochlear size and shape parameters were obtained to determine population statistics and perform regression and correlation analyses. The analysis revealed that cochlear morphology follows a Gaussian distribution, while cochlear dimensions A and B are not well correlated with each other. Additionally, dimension B is more strongly correlated with duct lengths, the wrapping factor and volume than dimension A. The scala tympani size varies considerably among the population, with the size generally decreasing along insertion depth with dimensional jumps through the trajectory. The mean scala tympani radius was 0.32 mm near the 720° insertion angle. Inter-individual variability was four times the intra-individual variation. On average, the dimensions of both ears are similar. However, statistically significant differences in clinical dimensions were observed between the ears of the same patient, suggesting that their size and shape are not identical. By harnessing deep learning-based, automated image analysis tools, this study yielded important insights into cochlear morphology that can inform implant development, helping to reduce insertion trauma and preserve residual hearing.

7.
Insights Imaging ; 13(1): 202, 2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36543901

ABSTRACT

OBJECTIVES: Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods offering great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of the published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS: A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted, covering publications up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and EMBASE databases. Risk of bias and applicability were assessed based on the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria, adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS: A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remaining articles, insufficient detail about database constitution and the segmentation protocol (inclusion criteria, MRI acquisition, ground truth) introduced sources of bias. Eighteen different types of terminology for prostate zone segmentation were found, whereas 4 anatomic zones are described on MRI. Only 2 authors used a blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS: Our review identified numerous methodological flaws and underlined biases that precluded us from performing a quantitative analysis for this review. This implies low robustness and low applicability in clinical practice of the evaluated methods. Indeed, there is not yet a consensus on quality criteria for database constitution and zonal segmentation methodology.

8.
J Clin Med ; 11(22)2022 Nov 09.
Article in English | MEDLINE | ID: mdl-36431117

ABSTRACT

The robust delineation of the cochlea and its inner structures, combined with the detection of a cochlear implant electrode within these structures, is essential for envisaging a safer, more individualized, routine image-guided cochlear implant therapy. We present Nautilus, a web-based research platform for automated pre- and post-implantation cochlear analysis. Nautilus delineates cochlear structures from pre-operative clinical CT images by combining deep learning and Bayesian inference approaches. It enables the extraction of electrode locations from a post-operative CT image using convolutional neural networks and geometrical inference. By fusing pre- and post-operative images, Nautilus is able to provide a set of personalized pre- and post-operative metrics that can serve the exploration of clinically relevant questions in cochlear implantation therapy. In addition, Nautilus embeds a self-assessment module providing a confidence rating on the outputs of its pipeline. We present detailed accuracy and robustness analyses of the tool on a carefully designed dataset. The results of these analyses provide legitimate grounds for envisaging the implementation of image-guided cochlear implant practices into routine clinical workflows.

10.
Radiol Artif Intell ; 4(3): e210110, 2022 May.
Article in English | MEDLINE | ID: mdl-35652113

ABSTRACT

Purpose: To train and assess the performance of a deep learning-based network designed to detect, localize, and characterize focal liver lesions (FLLs) in the liver parenchyma on abdominal US images. Materials and Methods: In this retrospective, multicenter, institutional review board-approved study, two object detectors, Faster region-based convolutional neural network (Faster R-CNN) and Detection Transformer (DETR), were fine-tuned on a dataset of 1026 patients (n = 2551 B-mode abdominal US images obtained between 2014 and 2018). Performance of the networks was analyzed on a test set of 48 additional patients (n = 155 B-mode abdominal US images obtained in 2019) and compared with the performance of three caregivers (one nonexpert and two experts) blinded to the clinical history. The sign test was used to compare accuracy, specificity, sensitivity, and positive predictive value among all raters. Results: DETR achieved a specificity of 90% (95% CI: 75, 100) and a sensitivity of 97% (95% CI: 97, 97) for the detection of FLLs. The performance of DETR met or exceeded that of the three caregivers for this task. DETR correctly localized 80% of the lesions, and it achieved a specificity of 81% (95% CI: 67, 91) and a sensitivity of 82% (95% CI: 62, 100) for FLL characterization (benign vs malignant) among lesions localized by all raters. The performance of DETR met or exceeded that of two experts and Faster R-CNN for these tasks. Conclusion: DETR demonstrated high specificity for detection, localization, and characterization of FLLs on abdominal US images. Supplemental material is available for this article. RSNA, 2022. Keywords: Computer-aided Diagnosis (CAD), Ultrasound, Abdomen/GI, Liver, Tissue Characterization, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN).
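Confidence intervals for proportions such as the sensitivity and specificity above can be obtained with a standard interval estimator. As one common choice (not necessarily the one used in the study), here is the Wilson score interval:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion, e.g. sensitivity
    or specificity computed as successes out of n test cases."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

Unlike the naive normal approximation, the Wilson interval stays within [0, 1] even for proportions near 100%, which matters for small test sets like the 48-patient cohort described above.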

11.
IEEE Trans Med Imaging ; 41(9): 2532-2542, 2022 09.
Article in English | MEDLINE | ID: mdl-35404813

ABSTRACT

Recently, super-resolution ultrasound imaging with ultrasound localization microscopy (ULM) has received much attention. However, ULM relies on low concentrations of microbubbles in the blood vessels, ultimately resulting in long acquisition times. Here, we present an alternative super-resolution approach, based on direct deconvolution of single-channel ultrasound radio-frequency (RF) signals with a one-dimensional dilated convolutional neural network (CNN). This work focuses on low-frequency ultrasound (1.7 MHz) for deep imaging (10 cm) of a dense cloud of monodisperse microbubbles (up to 1000 microbubbles in the measurement volume, corresponding to an average echo overlap of 94%). Data are generated with a simulator that uses a large range of acoustic pressures (5-250 kPa) and captures the full, nonlinear response of resonant, lipid-coated microbubbles. The network is trained with a novel dual-loss function, which features elements of both a classification loss and a regression loss and improves the detection-localization characteristics of the output. Whereas imposing a localization tolerance of 0 yields poor detection metrics, imposing a localization tolerance corresponding to 4% of the wavelength yields a precision and a recall both equal to 0.90. Furthermore, the detection improves with increasing acoustic pressure and deteriorates with increasing microbubble density. The potential of the presented approach for super-resolution ultrasound imaging is demonstrated with a delay-and-sum reconstruction with deconvolved element data. The resulting image shows an order-of-magnitude gain in axial resolution compared to a delay-and-sum reconstruction with unprocessed element data.
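The dual-loss function above mixes a classification term with a regression term. The following loose toy sketch (our own 1-D version, not the paper's exact formulation; the weighting is hypothetical) illustrates the idea of combining binary cross-entropy on "bubble present or not" with a squared-error term on the predicted map:

```python
import math

def dual_loss(pred, target, w_cls=1.0, w_reg=1.0, eps=1e-7):
    """Toy dual loss on a 1-D localization map: a binary cross-entropy
    term treats each sample as a presence/absence classification, while
    a mean-squared-error term acts as the regression component."""
    n = len(pred)
    bce = -sum((t > 0) * math.log(p + eps) + (t <= 0) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / n
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    return w_cls * bce + w_reg * mse
```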


Subjects
Deep Learning, Microbubbles, Contrast Media, Microscopy/methods, Radio Waves, Ultrasonography/methods
12.
Cancers (Basel) ; 14(7)2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35406511

ABSTRACT

The histological distinction of lung neuroendocrine carcinomas, including small cell lung carcinoma (SCLC), large cell neuroendocrine carcinoma (LCNEC) and atypical carcinoid (AC), can be challenging in some cases, while bearing prognostic and therapeutic significance. To assist pathologists with the differentiation of histologic subtypes, we applied a deep learning classifier equipped with a convolutional neural network (CNN) to recognize lung neuroendocrine neoplasms. Slides of primary lung SCLC, LCNEC and AC were obtained from the Laboratory of Clinical and Experimental Pathology (University Hospital Nice, France). Three thoracic pathologists blindly established gold standard diagnoses. The HALO-AI module (Indica Labs, UK), trained with 18,752 image tiles extracted from 60 slides (SCLC = 20, LCNEC = 20, AC = 20 cases), was then tested on 90 slides (SCLC = 26, LCNEC = 22, AC = 13 and combined SCLC with LCNEC = 4 cases; NSCLC = 25 cases) and evaluated by F1-score and accuracy. A HALO-AI correct area distribution (AD) cutoff of 50% or more was required to credit the CNN with the correct diagnosis. The tumor maps were false-colored and displayed side by side with the original hematoxylin and eosin slides with superimposed pathologist annotations. The trained HALO-AI yielded a mean F1-score of 0.99 (95% CI, 0.939-0.999) on the testing set. Our CNN model, provided it undergoes further validation on larger cohorts, has the potential to work side by side with the pathologist to accurately differentiate between the different lung neuroendocrine carcinomas in challenging cases.

13.
Med Image Anal ; 78: 102398, 2022 05.
Article in English | MEDLINE | ID: mdl-35349837

ABSTRACT

The fusion of probability maps is required when trying to analyse a collection of image labels or probability maps produced by several segmentation algorithms or human raters. The challenge is to weight the combination of maps correctly, in order to reflect the agreement among raters, the presence of outliers and the spatial uncertainty in the consensus. In this paper, we address several shortcomings of prior work in continuous label fusion. We introduce a novel approach to jointly estimate a reliable consensus map and assess the presence of outliers and the confidence in each rater. Our robust approach is based on heavy-tailed distributions allowing local estimates of raters' performances. In particular, we investigate the Laplace, the Student's t and the generalized double Pareto distributions, and compare them with the classical Gaussian likelihood used in prior work. We unify these distributions into a common tractable inference scheme based on variational calculus and scale mixture representations. Moreover, the introduction of bias and spatial priors leads to proper rater bias estimates and control over the smoothness of the consensus map. Finally, we propose an approach that clusters raters based on variational boosting, and thus may produce several alternative consensus maps. Our approach was successfully tested on MR prostate delineations and on lung nodule segmentations from the LIDC-IDRI dataset.
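As a much-simplified stand-in for the variational fusion scheme described above, a consensus probability map can be built as a voxel-wise weighted average, where each rater's weight reflects an estimated reliability (all names are ours; the paper estimates such weights jointly with the consensus rather than fixing them):

```python
def weighted_consensus(prob_maps, weights):
    """Voxel-wise weighted average of raters' probability maps.
    Down-weighting a rater is a crude way of modelling low confidence
    or outlier behaviour; prob_maps is a list of flat voxel lists."""
    wsum = sum(weights)
    return [sum(w * m[i] for w, m in zip(weights, prob_maps)) / wsum
            for i in range(len(prob_maps[0]))]
```

With equal weights this reduces to plain averaging; the point of the robust, heavy-tailed formulation above is precisely that the weights become local and data-driven instead of global constants.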


Subjects
Algorithms, Magnetic Resonance Imaging, Bayes Theorem, Humans, Magnetic Resonance Imaging/methods, Male, Probability
15.
J Med Imaging (Bellingham) ; 9(2): 024001, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35300345

ABSTRACT

Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms and uses loss functions to enforce the prostate partition. The method was applied to a private multicentric three-dimensional T2w MRI dataset and to the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other networks compared (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate, leading to a consistent zonal location and sectorial position of lesions, and can therefore be used as a helping tool for PCa diagnosis.

16.
Eur Radiol ; 32(7): 4931-4941, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35169895

ABSTRACT

OBJECTIVE: A reliable estimation of prostate volume (PV) is essential to prostate cancer management. The objective of our multi-rater study was to compare the intra- and inter-rater variability of PV estimates from manual planimetry and from ellipsoid formulas. METHODS: Forty treatment-naive patients who underwent prostate MRI were selected from a local database. PV and the corresponding PSA density (PSAd) were estimated on 3D T2-weighted MRI (3 T) by 7 independent radiologists using the traditional ellipsoid formula (TEF), the newer biproximate ellipsoid formula (BPEF), and the manual planimetry method (MPM) used as ground truth. Intra- and inter-rater variability was calculated using the mixed model-based intraclass correlation coefficient (ICC). RESULTS: Mean volumes were 67.00 (± 36.61), 66.07 (± 35.03), and 64.77 (± 38.27) cm3 with the TEF, BPEF, and MPM methods, respectively. Both TEF and BPEF overestimated PV relative to MPM, with the former showing a statistically significant difference (+ 1.91 cm3, IQR = [- 0.33 cm3, 5.07 cm3], p = 0.03). Both intra- (ICC > 0.90) and inter-rater (ICC > 0.90) reproducibility were excellent. MPM had the highest inter-rater reproducibility (ICC = 0.999). Inter-rater PV variation led to discrepancies in classification according to the clinical criterion of PSAd > 0.15 ng/mL for 2 patients (5%), 7 patients (17.5%), and 9 patients (22.5%) when using MPM, TEF, and BPEF, respectively. CONCLUSION: PV measurements using ellipsoid formulas and MPM are highly reproducible. MPM is a robust method for PV assessment and PSAd calculation, with the lowest variability. TEF showed a high degree of concordance with MPM but a slight overestimation of PV. Precise anatomic landmarks as defined with the BPEF led to a more accurate PV estimation, but also to higher variability. KEY POINTS:
• Manual planimetry used for prostate volume estimation is robust and reproducible, with the lowest variability between readers.
• Ellipsoid formulas are accurate and reproducible but with higher variability between readers.
• The traditional ellipsoid formula tends to overestimate prostate volume.
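The traditional ellipsoid formula referred to in the key points is PV = π/6 × length × width × height. A small sketch of it, together with the PSA-density criterion cited in the abstract (function and variable names are ours):

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Traditional ellipsoid formula (TEF) for prostate volume:
    PV = pi/6 * L * W * H, in cm3 when dimensions are in cm."""
    return math.pi / 6 * length_cm * width_cm * height_cm

def psa_density(psa_ng_ml, volume_cm3):
    """PSA density; the abstract cites PSAd > 0.15 as the
    clinical classification threshold."""
    return psa_ng_ml / volume_cm3
```

Because PSAd divides PSA by the estimated volume, the slight volume overestimation of the TEF noted above translates directly into a slight underestimation of PSA density, which is how inter-rater volume variation produces the classification discrepancies reported.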


Subjects
Prostate, Prostatic Neoplasms, Humans, Magnetic Resonance Imaging/methods, Male, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Reproducibility of Results
17.
Med Image Anal ; 75: 102268, 2022 01.
Article in English | MEDLINE | ID: mdl-34710654

ABSTRACT

Incorporating shape information is essential for the delineation of many organs and anatomical structures in medical images. While previous work has mainly focused on parametric spatial transformations applied to reference template shapes, in this paper, we address the Bayesian inference of parametric shape models for segmenting medical images with the objective of providing interpretable results. The proposed framework defines a likelihood appearance probability and a prior label probability based on a generic shape function through a logistic function. A reference length parameter defined in the sigmoid controls the trade-off between shape and appearance information. The inference of shape parameters is performed within an Expectation-Maximisation approach in which a Gauss-Newton optimization stage provides an approximation of the posterior probability of the shape parameters. This framework is applied to the segmentation of cochlear structures from clinical CT images constrained by a 10-parameter shape model. It is evaluated on three different datasets, one of which includes more than 200 patient images. The results show performances comparable to supervised methods and better than previously proposed unsupervised ones. It also enables an analysis of parameter distributions and the quantification of segmentation uncertainty, including the effect of the shape model.
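The prior label probability described above passes a shape function through a logistic function scaled by a reference length. A one-line sketch of that construction (our own illustration; sign convention assumed here: phi(x) negative inside the shape, so the prior tends to 1 deep inside and to 0 far outside):

```python
import math

def shape_prior(phi, ref_length):
    """Prior probability of the 'inside' label from a signed shape
    function value phi(x) through a logistic function. The reference
    length controls how sharply the prior decays at the boundary,
    i.e. the shape-vs-appearance trade-off mentioned in the abstract."""
    return 1.0 / (1.0 + math.exp(phi / ref_length))
```

At the boundary (phi = 0) the prior is exactly 0.5, so appearance dominates there; a small reference length makes the prior nearly binary, letting shape dominate.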


Subjects
Algorithms, Bayes Theorem, Humans, Logistic Models
18.
Sci Rep ; 11(1): 22683, 2021 11 22.
Article in English | MEDLINE | ID: mdl-34811411

ABSTRACT

Better models to identify individuals at low risk of ventricular arrhythmia (VA) are needed for implantable cardioverter-defibrillator (ICD) candidates to mitigate the risk of ICD-related complications. We designed the CERTAINTY study (CinE caRdiac magneTic resonAnce to predIct veNTricular arrhYthmia) with deep learning for VA risk prediction from cine cardiac magnetic resonance (CMR). Using a training cohort of primary prevention ICD recipients (n = 350, 97 women, median age 59 years, 178 with ischemic cardiomyopathy) who underwent CMR immediately prior to ICD implantation, we developed two neural networks: Cine Fingerprint Extractor and Risk Predictor. The former extracts cardiac structure and function features from cine CMR in the form of a cine fingerprint in a fully unsupervised fashion, and the latter takes in the cine fingerprint and outputs disease outcomes as a cine risk score. Patients with VA (n = 96) had a significantly higher cine risk score than those without VA. Multivariate analysis showed that the cine risk score was significantly associated with VA after adjusting for clinical characteristics and cardiac structure and function, including CMR-derived scar extent. These findings indicate that non-contrast cine CMR inherently contains features that improve VA risk prediction in primary prevention ICD candidates. We solicit participation from multiple centers for external validation.


Subjects
Arrhythmias, Cardiac/etiology, Arrhythmias, Cardiac/prevention & control, Cardiomyopathies/diagnostic imaging, Cardiomyopathies/therapy, Defibrillators, Implantable/adverse effects, Magnetic Resonance Imaging, Cine/methods, Myocardial Ischemia/diagnostic imaging, Myocardial Ischemia/therapy, Primary Prevention/methods, Aged, Cicatrix/diagnostic imaging, Clinical Decision-Making/methods, Deep Learning, Female, Follow-Up Studies, Humans, Male, Middle Aged, Prognosis, Retrospective Studies, Risk Factors, Ventricular Dysfunction, Left/diagnostic imaging, Ventricular Function, Left
19.
Comput Med Imaging Graph ; 93: 101990, 2021 10.
Article in English | MEDLINE | ID: mdl-34607275

ABSTRACT

Metal artifacts often create difficulties for a high-quality visual assessment of post-operative computed tomography (CT) imaging. A vast body of methods has been proposed to tackle this issue, but these methods were designed for regular CT scans, and their performance is usually insufficient when imaging tiny implants. In the context of post-operative high-resolution CT imaging, we propose a 3D metal artifact reduction algorithm based on a generative adversarial neural network. It relies on the simulation of physically realistic CT metal artifacts created by cochlear implant electrodes on pre-operative images. The generated images serve to train a 3D generative adversarial network for artifact reduction. The proposed approach was assessed qualitatively and quantitatively on clinical conventional and cone-beam CT images of post-operative cochlear implants. These experiments show that the proposed method outperforms other general metal artifact reduction approaches.


Subjects
Artifacts, Tomography, X-Ray Computed, Algorithms, Cone-Beam Computed Tomography, Image Processing, Computer-Assisted, Neural Networks, Computer
20.
Insights Imaging ; 12(1): 71, 2021 Jun 05.
Article in English | MEDLINE | ID: mdl-34089410

ABSTRACT

BACKGROUND: Accurate prostate zonal segmentation on magnetic resonance images (MRI) is a critical prerequisite for automated prostate cancer detection. We aimed to assess the variability of manual prostate zonal segmentation by radiologists on T2-weighted (T2W) images, and to study factors that may influence it. METHODS: Seven radiologists of varying levels of experience segmented the whole prostate gland (WG) and the transition zone (TZ) on 40 axial T2W prostate MRI examinations (3D T2W images for all patients, and both 3D and 2D images for a subgroup of 12 patients). Segmentation variability was evaluated with respect to: anatomical and morphological variation of the prostate (volume, retro-urethral lobe, intensity contrast between zones, presence of a PI-RADS ≥ 3 lesion), variation in image acquisition (3D vs 2D T2W images), and reader's experience. Several metrics, including the Dice score (DSC) and the Hausdorff distance, were used to evaluate differences, with both a pairwise comparison and a comparison to a consensus (STAPLE) reference. RESULTS: DSC was 0.92 (± 0.02) and 0.94 (± 0.03) for WG, and 0.88 (± 0.05) and 0.91 (± 0.05) for TZ, with pairwise comparison and consensus reference, respectively. Variability was significantly (p < 0.05) lower for the mid-gland (DSC 0.95 (± 0.02)), higher for the apex (0.90 (± 0.06)) and the base (0.87 (± 0.06)), and higher for smaller prostates (p < 0.001) and when contrast between zones was low (p < 0.05). The impact of the other studied factors was not significant. CONCLUSIONS: Variability is higher in the extreme parts of the gland, is influenced by changes in prostate morphology (volume, zone intensity ratio), and is relatively unaffected by the radiologist's level of expertise.
