Results 1 - 20 of 79
1.
J Oral Maxillofac Surg ; 82(2): 181-190, 2024 02.
Article in English | MEDLINE | ID: mdl-37995761

ABSTRACT

BACKGROUND: Jaw deformity diagnosis requires objective tests. Current methods, like cephalometry, have limitations. However, recent studies have shown that machine learning can diagnose jaw deformities in two dimensions. Therefore, we hypothesized that a multilayer perceptron (MLP) could accurately diagnose jaw deformities in three dimensions (3D). PURPOSE: Examine the hypothesis by focusing on anomalous mandibular position. We aimed to: (1) create a machine learning model to diagnose mandibular retrognathism and prognathism; and (2) compare its performance with traditional cephalometric methods. STUDY DESIGN, SETTING, SAMPLE: An in-silico experiment on deidentified retrospective data. The study was conducted at the Houston Methodist Research Institute and Rensselaer Polytechnic Institute. Included were patient records with jaw deformities and preoperative 3D facial models. Patients with significant jaw asymmetry were excluded. PREDICTOR VARIABLES: The tests used to diagnose mandibular anteroposterior position are: (1) SNB angle; (2) facial angle; (3) mandibular unit length (MdUL); and (4) MLP model. MAIN OUTCOME VARIABLE: The resultant diagnoses: normal, prognathic, or retrognathic. COVARIATES: None. ANALYSES: A senior surgeon labeled the patients' mandibles as prognathic, normal, or retrognathic, creating a gold standard. Scientists at Rensselaer Polytechnic Institute developed an MLP model to diagnose mandibular prognathism and retrognathism using the 3D coordinates of 50 landmarks. The performance of the MLP model was compared with three traditional cephalometric measurements: (1) SNB, (2) facial angle, and (3) MdUL. The primary metric used to assess the performance was diagnostic accuracy. McNemar's exact test tested the difference between traditional cephalometric measurement and MLP. Cohen's Kappa measured inter-rater agreement between each method and the gold standard. RESULTS: The sample included 101 patients. 
The diagnostic accuracies of SNB, facial angle, MdUL, and the MLP were 74.3%, 74.3%, 75.3%, and 85.2%, respectively. McNemar's test showed that the MLP performed significantly better than SNB (P = .027), facial angle (P = .019), and MdUL (P = .031). The agreement between the traditional cephalometric measurements and the surgeon's diagnosis was fair; in contrast, the agreement between the MLP and the surgeon was moderate. CONCLUSION AND RELEVANCE: The performance of the MLP was significantly better than that of the traditional cephalometric measurements.
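The method comparison in this abstract rests on McNemar's exact test over paired diagnoses. A minimal, pure-stdlib sketch of that test (the standard two-sided exact binomial form; the paper's own implementation is not given, and the discordant counts below are invented for illustration):

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar test on the discordant pair counts.

    b: cases method A classified correctly and method B did not.
    c: cases method B classified correctly and method A did not.
    Under H0, the discordant counts follow Binomial(b + c, 0.5).
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical example: 9 patients where only one method was correct
# versus 1 patient where only the other was.
print(round(mcnemar_exact(1, 9), 4))  # 0.0215
```

With 9-vs-1 discordant pairs the test rejects at the 5% level, which is the kind of margin the abstract reports for the MLP over each cephalometric measurement.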


Subjects
Jaw Abnormalities , Malocclusion, Angle Class III , Prognathism , Retrognathia , Humans , Prognathism/diagnostic imaging , Retrognathia/diagnostic imaging , Retrospective Studies , Mandible/diagnostic imaging , Mandible/abnormalities , Malocclusion, Angle Class III/surgery , Cephalometry/methods
2.
Proc Natl Acad Sci U S A ; 116(48): 24019-24030, 2019 11 26.
Article in English | MEDLINE | ID: mdl-31719196

ABSTRACT

Fluorescence lifetime imaging (FLI) provides unique quantitative information in biomedical and molecular biology studies but relies on complex data-fitting techniques to derive the quantities of interest. Herein, we propose a fit-free approach in FLI image formation that is based on deep learning (DL) to quantify fluorescence decays simultaneously over a whole image and at fast speeds. We report on a deep neural network (DNN) architecture, named fluorescence lifetime imaging network (FLI-Net) that is designed and trained for different classes of experiments, including visible FLI and near-infrared (NIR) FLI microscopy (FLIM) and NIR gated macroscopy FLI (MFLI). FLI-Net outputs quantitatively the spatially resolved lifetime-based parameters that are typically employed in the field. We validate the utility of the FLI-Net framework by performing quantitative microscopic and preclinical lifetime-based studies across the visible and NIR spectra, as well as across the 2 main data acquisition technologies. These results demonstrate that FLI-Net is well suited to accurately quantify complex fluorescence lifetimes in cells and, in real time, in intact animals without any parameter settings. Hence, FLI-Net paves the way to reproducible and quantitative lifetime studies at unprecedented speeds, for improved dissemination and impact of FLI in many important biomedical applications ranging from fundamental discoveries in molecular and cellular biology to clinical translation.
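For context on what FLI-Net's fit-free approach replaces: classical FLI derives lifetimes by fitting an exponential decay at every pixel. A sketch of the simplest such fit, mono-exponential lifetime estimation by log-linear least squares on noiseless data (real FLI data are noisy, multi-exponential, and convolved with an instrument response, which is exactly why fitting is slow and fragile):

```python
import numpy as np

def fit_lifetime(t, decay):
    """Classical per-pixel fit that FLI-Net sidesteps: estimate the
    lifetime tau of I(t) = A * exp(-t / tau) by linear least squares
    on log(I). Idealized, noise-free case for illustration only."""
    slope, _ = np.polyfit(t, np.log(decay), 1)
    return -1.0 / slope

t = np.linspace(0.1, 5.0, 50)            # time gates, ns
decay = 100.0 * np.exp(-t / 1.5)         # ground-truth tau = 1.5 ns
print(round(fit_lifetime(t, decay), 3))  # 1.5
```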


Assuntos
Aprendizado Profundo , Processamento de Imagem Assistida por Computador , Imagem Óptica/métodos , Animais , Linhagem Celular , Feminino , Humanos , Camundongos , Camundongos Nus
3.
J Magn Reson Imaging ; 52(5): 1499-1507, 2020 11.
Article in English | MEDLINE | ID: mdl-32478955

ABSTRACT

BACKGROUND: The Prostate Imaging Reporting and Data System (PI-RADS) provides guidelines for risk stratification of lesions detected on multiparametric MRI (mpMRI) of the prostate but suffers from high intra/interreader variability. PURPOSE: To develop an artificial intelligence (AI) solution for PI-RADS classification and compare its performance with an expert radiologist using targeted biopsy results. STUDY TYPE: Retrospective study including data from our institution and the publicly available ProstateX dataset. POPULATION: In all, 687 patients who underwent mpMRI of the prostate and had one or more detectable lesions (PI-RADS score >1) according to PI-RADSv2. FIELD STRENGTH/SEQUENCE: T2-weighted, diffusion-weighted imaging (DWI; five evenly spaced b values between 0 and 750 s/mm2) for apparent diffusion coefficient (ADC) mapping, high b-value DWI (b = 1500 or 2000 s/mm2), and dynamic contrast-enhanced T1-weighted series were obtained at 3.0T. ASSESSMENT: PI-RADS lesions were segmented by a radiologist. Bounding boxes around the T2/ADC/high-b-value segmentations were stacked and saved as JPEGs. These images were used to train a convolutional neural network (CNN). The PI-RADS scores obtained by the CNN were compared with radiologist scores. The cancer detection rate was measured from a subset of patients who underwent biopsy. STATISTICAL TESTS: Agreement between the AI and the radiologist-driven PI-RADS scores was assessed using a kappa score, and differences between categorical variables were assessed with a Wald test. RESULTS: For the 1034 detected lesions, the kappa score for the AI system vs. the expert radiologist was moderate, at 0.40. However, there was no significant difference in the rates of detection of clinically significant cancer for any PI-RADS score in 86 patients undergoing targeted biopsy (P = 0.4-0.6).
DATA CONCLUSION: We developed an AI system for assignment of a PI-RADS score on segmented lesions on mpMRI with moderate agreement with an expert radiologist and a similar ability to detect clinically significant cancer. LEVEL OF EVIDENCE: 4 TECHNICAL EFFICACY STAGE: 2.
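The preprocessing the abstract describes, cropping the same lesion bounding box from the T2, ADC, and high-b-value images and stacking the crops as channels, can be sketched as follows (array sizes and box coordinates are invented for illustration):

```python
import numpy as np

def stack_lesion_channels(t2, adc, high_b, box):
    """Crop one bounding box from the T2, ADC, and high-b-value images
    and stack the crops along the channel axis, mimicking the
    3-channel (JPEG-like) CNN input described in the abstract.
    box = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = box
    crops = [img[r0:r1, c0:c1] for img in (t2, adc, high_b)]
    return np.stack(crops, axis=-1)  # H x W x 3

t2 = np.random.rand(128, 128)
adc = np.random.rand(128, 128)
high_b = np.random.rand(128, 128)
patch = stack_lesion_channels(t2, adc, high_b, (40, 72, 50, 82))
print(patch.shape)  # (32, 32, 3)
```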


Assuntos
Aprendizado Profundo , Imageamento por Ressonância Magnética Multiparamétrica , Neoplasias da Próstata , Inteligência Artificial , Humanos , Imageamento por Ressonância Magnética , Masculino , Neoplasias da Próstata/diagnóstico por imagem , Estudos Retrospectivos
4.
J Urol ; 193(2): 473-478, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25150645

ABSTRACT

PURPOSE: Men diagnosed with atypical small acinar proliferation are counseled to undergo early rebiopsy because the risk of prostate cancer is high. However, random rebiopsies may not resample areas of concern. Magnetic resonance imaging/transrectal ultrasound fusion guided biopsy offers an opportunity to accurately target and later retarget specific areas in the prostate. We describe the ability of magnetic resonance imaging/transrectal ultrasound fusion guided prostate biopsy to detect prostate cancer in areas with an initial diagnosis of atypical small acinar proliferation. MATERIALS AND METHODS: Multiparametric magnetic resonance imaging of the prostate and magnetic resonance imaging/transrectal ultrasound fusion guided biopsy were performed in 1,028 patients from March 2007 to February 2014. Of these men, 20 met the stringent study inclusion criteria, which were no history of prostate cancer, an index biopsy showing at least 1 core of atypical small acinar proliferation with benign glands in all remaining cores, and a fusion targeted rebiopsy with at least 1 targeted core directly resampling an area of the prostate that previously contained atypical small acinar proliferation. RESULTS: At index biopsy, the median age of the 20 patients was 60 years (IQR 57-64) and the median prostate specific antigen was 5.92 ng/ml (IQR 3.34-7.48). At fusion targeted rebiopsy, performed at a median of 11.6 months, 5 of 20 patients (25%, 95% CI 6.02-43.98) were diagnosed with primary Gleason grade 3, low volume prostate cancer. On fusion rebiopsy, cores that directly retargeted areas of previous atypical small acinar proliferation detected the highest tumor burden. CONCLUSIONS: When magnetic resonance imaging/transrectal ultrasound fusion guided biopsy detects isolated atypical small acinar proliferation on index biopsy, early rebiopsy is unlikely to detect clinically significant prostate cancer.
Cores that retarget areas of previous atypical small acinar proliferation are more effective than random rebiopsy cores.


Assuntos
Células Acinares/diagnóstico por imagem , Células Acinares/patologia , Imageamento por Ressonância Magnética , Próstata/diagnóstico por imagem , Próstata/patologia , Neoplasias da Próstata/diagnóstico por imagem , Neoplasias da Próstata/patologia , Ultrassonografia de Intervenção , Proliferação de Células , Humanos , Biópsia Guiada por Imagem , Masculino , Pessoa de Meia-Idade , Estudos Prospectivos
5.
J Vasc Interv Radiol ; 25(5): 675-84, 2014 May.
Article in English | MEDLINE | ID: mdl-24581731

ABSTRACT

Prostate biopsies are usually performed by urologists in the office setting using transrectal ultrasound (US) guidance. The current standard of care involves obtaining 10-14 cores from different anatomic sections. Biopsies are usually not directed into a specific lesion because most prostate cancers are not visible on transrectal US. Color Doppler, US contrast agents, elastography, magnetic resonance (MR) imaging, and MR imaging/US fusion are proposed as imaging methods to guide prostate biopsies. Prostate MR imaging and fusion biopsy create opportunities for diagnostic and interventional radiologists to play an increasingly important role in the screening, evaluation, diagnosis, targeted biopsy, surveillance, and focal therapy of patients with prostate cancer.


Assuntos
Biópsia Guiada por Imagem/métodos , Próstata/patologia , Neoplasias da Próstata/patologia , Radiografia Intervencionista/métodos , Humanos , Masculino
6.
Vis Comput Ind Biomed Art ; 7(1): 4, 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38386109

ABSTRACT

Flipover, an enhanced dropout technique, is introduced to improve the robustness of artificial neural networks. In contrast to dropout, which involves randomly removing certain neurons and their connections, flipover randomly selects neurons and reverts their outputs using a negative multiplier during training. This approach offers stronger regularization than conventional dropout, refining model performance by (1) mitigating overfitting, matching or even exceeding the efficacy of dropout; (2) amplifying robustness to noise; and (3) enhancing resilience against adversarial attacks. Extensive experiments across various neural networks affirm the effectiveness of flipover in deep learning.
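The flipover operation as the abstract describes it, selecting random activations and reverting them with a negative multiplier rather than zeroing them as dropout does, can be sketched in a few lines of numpy (the flip rate and multiplier values below are assumed hyperparameters, not the paper's settings):

```python
import numpy as np

def flipover(x, rate=0.1, multiplier=1.0, rng=None):
    """Flipover regularization: during training, a random fraction
    `rate` of activations is multiplied by -`multiplier` instead of
    being dropped to zero as in standard dropout."""
    rng = np.random.default_rng(rng)
    mask = rng.random(x.shape) < rate
    return np.where(mask, -multiplier * x, x)

x = np.ones((4, 8))
print(np.array_equal(flipover(x, rate=0.0, rng=0), x))  # True: rate 0 is a no-op
y = flipover(x, rate=0.5, multiplier=2.0, rng=0)        # flipped units become -2.0
```

At inference time one would disable the flipping, exactly as dropout is disabled at test time.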

7.
Meta Radiol ; 2(3)2024 Sep.
Article in English | MEDLINE | ID: mdl-38947177

ABSTRACT

Fairness of artificial intelligence and machine learning models, often caused by imbalanced datasets, has long been a concern. While many efforts aim to minimize model bias, this study suggests that traditional fairness evaluation methods may be biased, highlighting the need for a proper evaluation scheme with multiple evaluation metrics due to varying results under different criteria. Moreover, the limited data size of minority groups introduces significant data uncertainty, which can undermine the judgement of fairness. This paper introduces an innovative evaluation approach that estimates data uncertainty in minority groups through bootstrapping from majority groups for a more objective statistical assessment. Extensive experiments reveal that traditional evaluation methods might have drawn inaccurate conclusions about model fairness. The proposed method delivers an unbiased fairness assessment by adeptly addressing the inherent complications of model evaluation on imbalanced datasets. The results show that such comprehensive evaluation can provide more confidence when adopting those models.
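The core idea, estimating how much a metric can fluctuate purely because a minority group is small by bootstrapping from the majority group at the minority's sample size, can be sketched as follows (this is one plausible reading of the approach, not the paper's exact procedure):

```python
import numpy as np

def subsample_metric_ci(majority_correct, minority_size, n_boot=2000, seed=0):
    """Reference interval for a metric (here, accuracy) at a small
    group size: repeatedly subsample the majority group's per-case
    correctness at `minority_size` and record the metric's spread.
    A minority-group metric inside this interval may reflect data
    uncertainty rather than genuine model bias."""
    rng = np.random.default_rng(seed)
    accs = [rng.choice(majority_correct, size=minority_size, replace=True).mean()
            for _ in range(n_boot)]
    return np.percentile(accs, [2.5, 97.5])

majority = np.array([1] * 900 + [0] * 100)   # 90% accuracy on the majority group
lo, hi = subsample_metric_ci(majority, minority_size=20)
print(lo <= 0.9 <= hi)  # True: at n = 20, even 90% accuracy looks uncertain
```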

8.
J Pers Med ; 14(4)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38673048

ABSTRACT

Alzheimer's disease (AD) is the most prevalent neurodegenerative disease, yet its current treatments are limited to stopping disease progression. Moreover, the effectiveness of these treatments remains uncertain due to the heterogeneity of the disease. Therefore, it is essential to identify disease subtypes at a very early stage. Current data-driven approaches can be used to classify subtypes during later stages of AD or related disorders, but making predictions in the asymptomatic or prodromal stage is challenging. Furthermore, the classifications of most existing models lack explainability, and these models rely solely on a single modality for assessment, limiting the scope of their analysis. Thus, we propose a multimodal framework that utilizes early-stage indicators, including imaging, genetics, and clinical assessments, to classify AD patients into progression-specific subtypes at an early stage. In our framework, we introduce a tri-modal co-attention mechanism (Tri-COAT) to explicitly capture cross-modal feature associations. Data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (slow progressing = 177, intermediate = 302, and fast = 15) were used to train and evaluate Tri-COAT using a 10-fold stratified cross-testing approach. Our proposed model outperforms baseline models and sheds light on essential associations across multimodal features supported by known biological mechanisms. The multimodal design behind Tri-COAT allows it to achieve the highest classification area under the receiver operating characteristic curve while simultaneously providing interpretability to the model predictions through the co-attention mechanism.
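A building block of the kind Tri-COAT presumably composes across its three modalities is scaled dot-product cross-attention between two feature sets; a minimal numpy sketch (the feature dimensions and the exact tri-modal wiring are assumptions, not details from the abstract):

```python
import numpy as np

def cross_modal_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention between two modality feature
    matrices (rows = features/tokens). Returns the attended features
    and the attention weights that expose cross-modal associations."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ kv_feats, weights

rng = np.random.default_rng(0)
imaging = rng.normal(size=(6, 16))    # e.g. 6 imaging-derived features
genetic = rng.normal(size=(10, 16))   # e.g. 10 genetic feature embeddings
attended, w = cross_modal_attention(imaging, genetic)
print(attended.shape, w.shape)  # (6, 16) (6, 10)
```

Inspecting `w` is what makes such a mechanism interpretable: each row shows which features of the other modality drove the attended representation.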

9.
Article in English | MEDLINE | ID: mdl-38869779

ABSTRACT

PURPOSE: Accurate estimation of reference bony shape models is fundamental for orthognathic surgical planning. Existing methods to derive this model are of two types: one determines the reference model by estimating the deformation field to correct the patient's deformed jaw, often introducing distortions in the predicted reference model; the other derives the reference model using a linear combination of the subjects' landmarks/vertices but overlooks the intricate nonlinear relationships between subjects, compromising the model's precision and quality. METHODS: We have created a self-supervised learning framework to estimate the reference model. The core of this framework is a deep query network, which estimates the similarity scores between the patient's midface and those of the normal subjects in a high-dimensional space. Subsequently, it aggregates high-dimensional features of these subjects and projects these features back to 3D structures, ultimately achieving a patient-specific reference model. RESULTS: Our approach was trained using a dataset of 51 normal subjects and tested on 30 patient subjects to estimate their reference models. Performance assessment against the actual post-operative bone revealed a mean Chamfer distance error of 2.25 mm and an average surface distance error of 2.30 mm across the patient subjects. CONCLUSION: Our proposed method emphasizes the correlation between the patients and the normal subjects in a high-dimensional space, facilitating the generation of the patient-specific reference model. Both qualitative and quantitative results demonstrate its superiority over current state-of-the-art methods in reference model estimation.

10.
Radiol Artif Intell ; 6(1): e220221, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38166328

ABSTRACT

Purpose To determine whether saliency maps in radiology artificial intelligence (AI) are vulnerable to subtle perturbations of the input, which could lead to misleading interpretations, using prediction-saliency correlation (PSC) for evaluating the sensitivity and robustness of saliency methods. Materials and Methods In this retrospective study, locally trained deep learning models and a research prototype provided by a commercial vendor were systematically evaluated on 191,229 chest radiographs from the CheXpert dataset and 7022 MR images from a human brain tumor classification dataset. Two radiologists performed a reader study on 270 chest radiograph pairs. A model-agnostic approach for computing the PSC coefficient was used to evaluate the sensitivity and robustness of seven commonly used saliency methods. Results The saliency methods had low sensitivity (maximum PSC, 0.25; 95% CI: 0.12, 0.38) and weak robustness (maximum PSC, 0.12; 95% CI: 0.0, 0.25) on the CheXpert dataset, as demonstrated by leveraging locally trained model parameters. Further evaluation showed that the saliency maps generated from a commercial prototype could be irrelevant to the model output, without knowledge of the model specifics (area under the receiver operating characteristic curve decreased by 8.6% without affecting the saliency map). The human observer studies confirmed that it is difficult for experts to identify the perturbed images; the experts achieved less than 44.8% correctness. Conclusion Popular saliency methods scored low PSC values on the two datasets of perturbed chest radiographs, indicating weak sensitivity and robustness. The proposed PSC metric provides a valuable quantification tool for validating the trustworthiness of medical AI explainability. Keywords: Saliency Maps, AI Trustworthiness, Dynamic Consistency, Sensitivity, Robustness Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Yanagawa and Sato in this issue.
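A model-agnostic PSC coefficient in the spirit of the abstract can be sketched as a correlation between prediction changes and saliency-map changes under input perturbations (toy model, gradient saliency, and perturbation scheme are all illustrative assumptions; the paper's estimator may differ):

```python
import numpy as np

def psc(model, saliency, x, n_perturb=50, eps=0.05, seed=0):
    """Prediction-saliency correlation: perturb the input and correlate
    the change in the model's output with the change in its saliency
    map. A sensitive, robust saliency method should track the
    prediction, giving a high PSC."""
    rng = np.random.default_rng(seed)
    base_pred, base_sal = model(x), saliency(x)
    dp, ds = [], []
    for _ in range(n_perturb):
        xp = x + eps * rng.normal(size=x.shape)
        dp.append(abs(model(xp) - base_pred))
        ds.append(np.linalg.norm(saliency(xp) - base_sal))
    return float(np.corrcoef(dp, ds)[0, 1])

# Toy quadratic model with its analytic gradient as the saliency map.
model = lambda x: float(np.sum(x ** 2))
saliency = lambda x: 2 * x
score = psc(model, saliency, np.ones(8))
print(-1.0 <= score <= 1.0)  # True: PSC is a correlation coefficient
```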


Assuntos
Inteligência Artificial , Radiologia , Humanos , Estudos Retrospectivos , Radiografia , Radiologistas
11.
Med Image Anal ; 93: 103094, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38306802

ABSTRACT

In orthognathic surgical planning for patients with jaw deformities, it is crucial to accurately simulate the changes in facial appearance that follow the bony movement. Compared with the traditional biomechanics-based methods like the finite-element method (FEM), which are both labor-intensive and computationally inefficient, deep learning-based methods offer an efficient and robust modeling alternative. However, current methods do not account for the physical relationship between facial soft tissue and bony structure, causing them to fall short in accuracy compared to FEM. In this work, we propose an Attentive Correspondence assisted Movement Transformation network (ACMT-Net) to predict facial changes by correlating facial soft tissue changes with bony movement through a point-to-point attentive correspondence matrix. To ensure efficient training, we also introduce a contrastive loss for self-supervised pre-training of the ACMT-Net with a k-Nearest Neighbors (k-NN) based clustering. Experimental results on patients with jaw deformities show that our proposed solution can achieve significantly improved computational efficiency over the state-of-the-art FEM-based method with comparable facial change prediction accuracy.


Assuntos
Face , Movimento , Humanos , Face/diagnóstico por imagem , Fenômenos Biomecânicos , Simulação por Computador
12.
Article in English | MEDLINE | ID: mdl-37022907

ABSTRACT

In the past several years, various adversarial training (AT) approaches have been invented to robustify deep learning models against adversarial attacks. However, mainstream AT methods assume that the training and testing data are drawn from the same distribution and that the training data are annotated. When the two assumptions are violated, existing AT methods fail because either they cannot pass knowledge learnt from a source domain to an unlabeled target domain or they are confused by the adversarial samples in that unlabeled space. In this paper, we first point out this new and challenging problem: adversarial training in an unlabeled target domain. We then propose a novel framework named Unsupervised Cross-domain Adversarial Training (UCAT) to address this problem. UCAT effectively leverages the knowledge of the labeled source domain to prevent the adversarial samples from misleading the training process, under the guidance of automatically selected high quality pseudo labels of the unannotated target domain data together with the discriminative and robust anchor representations of the source domain data. The experiments on four public benchmarks show that models trained with UCAT can achieve both high accuracy and strong robustness. The effectiveness of the proposed components is demonstrated through a large set of ablation studies. The source code is publicly available at https://github.com/DIAL-RPI/UCAT.
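One ingredient the abstract names, selecting high-quality pseudo labels for the unannotated target domain, is commonly done by confidence thresholding; a minimal sketch (the 0.9 cutoff is illustrative, and UCAT's actual selection rule also involves source-domain anchor representations):

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only target-domain samples whose maximum predicted class
    probability exceeds a threshold, and return their indices together
    with the corresponding pseudo labels."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.08, 0.92]])
idx, labels = select_pseudo_labels(probs)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```

Only the confidently labeled samples would then enter the adversarial training loop, reducing the risk of adversarial samples corrupting the pseudo supervision.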

13.
J Orthop Res ; 41(1): 72-83, 2023 01.
Article in English | MEDLINE | ID: mdl-35438803

ABSTRACT

Finite element models of the knee can be used to identify regions at risk of mechanical failure in studies of osteoarthritis. Models of the knee often implement joint geometry obtained from magnetic resonance imaging (MRI) or gait kinematics from motion capture to increase model specificity for a given subject. However, differences exist in cartilage material properties regionally as well as between subjects. This paper presents a method to create subject-specific finite element models of the knee that assigns cartilage material properties from T2 relaxometry. We compared our T2-refined model to identical models with homogeneous material properties. When tested on three subjects from the Osteoarthritis Initiative data set, we found the T2-refined models estimated higher principal stresses and shear strains in most cartilage regions and corresponded better to increases in KL grade in follow-ups compared to their corresponding homogeneous material models. Measures of cumulative stress within regions of a T2-refined model also correlated better with the region's cartilage morphology MRI Osteoarthritis Knee Score as compared with the homogeneous model. We conclude that spatially heterogeneous T2-refined material properties improve the subject-specificity of finite element models compared to homogeneous material properties in osteoarthritis progression studies. Statement of Clinical Significance: T2-refined material properties can improve subject-specific finite element model assessments of cartilage degeneration.
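The mapping step, assigning per-element material properties from T2 relaxation times, might look like the sketch below. Longer T2 is associated with cartilage matrix degeneration, so modulus is reduced relative to a reference; the linear form and every constant here are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def t2_to_modulus(t2_ms, e_ref=10.0, t2_ref=35.0, slope=0.15):
    """Assign per-element cartilage stiffness (MPa) from T2 relaxation
    time (ms): modulus decreases linearly as T2 rises above a healthy
    reference value. All constants are hypothetical placeholders."""
    e = e_ref * (1.0 - slope * (t2_ms - t2_ref) / t2_ref)
    return np.clip(e, 0.5, None)   # keep a positive floor

t2_map = np.array([30.0, 35.0, 50.0, 80.0])   # ms, healthy -> degenerated
print(np.round(t2_to_modulus(t2_map), 2).tolist())  # [10.21, 10.0, 9.36, 8.07]
```

Each finite element would then receive its own modulus, producing the spatially heterogeneous material field the abstract contrasts with homogeneous models.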


Assuntos
Análise de Elementos Finitos , Osteoartrite do Joelho , Humanos
14.
IEEE Trans Med Imaging ; 42(10): 2948-2960, 2023 10.
Article in English | MEDLINE | ID: mdl-37097793

ABSTRACT

Federated learning is an emerging paradigm allowing large-scale decentralized learning without sharing data across different data owners, which helps address the concern of data privacy in medical image analysis. However, the requirement for label consistency across clients by the existing methods largely narrows its application scope. In practice, each clinical site may only annotate certain organs of interest with partial or no overlap with other sites. Incorporating such partially labeled data into a unified federation is an unexplored problem with clinical significance and urgency. This work tackles the challenge by using a novel federated multi-encoding U-Net (Fed-MENU) method for multi-organ segmentation. In our method, a multi-encoding U-Net (MENU-Net) is proposed to extract organ-specific features through different encoding sub-networks. Each sub-network can be seen as an expert of a specific organ and trained for that client. Moreover, to encourage the organ-specific features extracted by different sub-networks to be informative and distinctive, we regularize the training of the MENU-Net by designing an auxiliary generic decoder (AGD). Extensive experiments on six public abdominal CT datasets show that our Fed-MENU method can effectively obtain a federated learning model using the partially labeled datasets with superior performance to other models trained by either localized or centralized learning methods. Source code is publicly available at https://github.com/DIAL-RPI/Fed-MENU.
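The key trick that makes partially labeled clients usable in a single federation is computing the segmentation loss only over the organ channels a client actually annotated; a minimal sketch of that masking idea (Fed-MENU additionally uses organ-specific encoders and an auxiliary generic decoder, which this sketch omits):

```python
import numpy as np

def masked_dice_loss(pred, target, labeled_organs):
    """Dice loss restricted to the organ channels this client labeled,
    so organs without annotations contribute no training signal.
    pred/target: (organs, H, W) soft masks."""
    losses = []
    for organ in labeled_organs:
        p, t = pred[organ], target[organ]
        dice = 2.0 * (p * t).sum() / (p.sum() + t.sum() + 1e-8)
        losses.append(1.0 - dice)
    return float(np.mean(losses))

pred = np.zeros((3, 4, 4)); target = np.zeros((3, 4, 4))
pred[0, :2] = 1.0; target[0, :2] = 1.0      # organ 0 predicted perfectly
loss = masked_dice_loss(pred, target, labeled_organs=[0])
print(round(loss, 6))  # 0.0: unlabeled organs 1 and 2 are ignored
```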


Assuntos
Relevância Clínica , Software , Humanos
15.
medRxiv ; 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38187692

ABSTRACT

Orthognathic surgery traditionally focuses on correcting skeletal abnormalities and malocclusion, with the expectation that an optimal facial appearance will naturally follow. However, this skeletal-driven approach can lead to undesirable facial aesthetics and residual asymmetry. To address these issues, a soft-tissue-driven planning method has been proposed. This innovative method bases bone movement estimates on the targeted ideal facial appearance, thus increasing the surgical plan's accuracy and effectiveness. This study explores the initial phase of implementing a soft-tissue-driven approach, simulating the patient's optimal facial look by repositioning deformed facial landmarks to an ideal state. The algorithm incorporates symmetrization and weighted optimization strategies, aligning projected optimal landmarks with standard cephalometric values for both facial symmetry and form, which are integral to facial aesthetics in orthognathic surgery. It also includes regularization to preserve the patient's original facial characteristics. Validated using retrospective analysis of data from both preoperative patients and normal subjects, this approach effectively achieves not only facial symmetry, particularly in the lower face, but also a more natural and normalized facial form. This novel approach, aligning with soft-tissue-driven planning principles, shows promise in surpassing traditional methods, potentially leading to enhanced facial outcomes and patient satisfaction in orthognathic surgery.
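The symmetrization ingredient of such landmark repositioning can be sketched as blending each landmark with the mirrored position of its contralateral partner; the landmark pairing, the midsagittal plane at x = 0, and the single blending weight are all simplifying assumptions rather than the study's actual weighted optimization:

```python
import numpy as np

def symmetrize_pairs(left, right, w=0.5):
    """Move paired facial landmarks toward mirror symmetry about the
    x = 0 midsagittal plane: each point is blended with the reflection
    of its partner. w = 0 keeps the patient's landmarks (regularization
    toward original facial characteristics); w = 1 enforces symmetry."""
    mirror = np.array([-1.0, 1.0, 1.0])
    new_left = (1 - w) * left + w * (right * mirror)
    new_right = (1 - w) * right + w * (left * mirror)
    return new_left, new_right

left = np.array([[-10.0, 0.0, 0.0]])
right = np.array([[12.0, 1.0, 0.0]])   # asymmetric partner
nl, nr = symmetrize_pairs(left, right, w=1.0)
print(nl.tolist(), nr.tolist())  # fully mirrored pair
```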

16.
IEEE Trans Biomed Eng ; 70(3): 970-979, 2023 03.
Article in English | MEDLINE | ID: mdl-36103448

ABSTRACT

Transrectal ultrasound is commonly used for guiding prostate cancer biopsy, where 3D ultrasound volume reconstruction is often desired. Current methods for 3D reconstruction from freehand ultrasound scans require external tracking devices to provide spatial information of an ultrasound transducer. This paper presents a novel deep learning approach for sensorless ultrasound volume reconstruction, which efficiently exploits content correspondence between ultrasound frames to reconstruct 3D volumes without external tracking. The underlying deep learning model, deep contextual-contrastive network (DC2-Net), utilizes self-attention to focus on the speckle-rich areas to estimate spatial movement and then minimizes a margin ranking loss for contrastive feature learning. A case-wise correlation loss over the entire input video helps further smooth the estimated trajectory. We train and validate DC2-Net on two independent datasets, one containing 619 transrectal scans and the other having 100 transperineal scans. Our proposed approach attained superior performance compared with other methods, with a drift rate of 9.64% and a prostate Dice of 0.89. The promising results demonstrate the capability of deep neural networks for universal ultrasound volume reconstruction from freehand 2D ultrasound scans without tracking information.
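The drift rate reported above is a standard error measure for sensorless freehand reconstruction; one common definition, final positional drift of the estimated transducer trajectory as a percentage of the scanned path length, is sketched below (the paper may normalize slightly differently):

```python
import numpy as np

def drift_rate(est_traj, true_traj):
    """Final positional drift of an estimated trajectory, expressed as
    a percentage of the true scanned path length. Trajectories are
    (N, 3) arrays of transducer positions in mm."""
    drift = np.linalg.norm(est_traj[-1] - true_traj[-1])
    length = np.sum(np.linalg.norm(np.diff(true_traj, axis=0), axis=1))
    return 100.0 * drift / length

t = np.linspace(0, 1, 101)[:, None]
true_traj = np.hstack([t * 100.0, 0 * t, 0 * t])      # 100 mm straight sweep
est_traj = true_traj + np.array([0.0, 5.0, 0.0]) * t  # accumulates 5 mm of drift
print(round(drift_rate(est_traj, true_traj), 3))  # 5.0
```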


Assuntos
Imageamento Tridimensional , Redes Neurais de Computação , Masculino , Humanos , Imageamento Tridimensional/métodos , Ultrassonografia/métodos , Próstata/diagnóstico por imagem , Movimento
17.
Article in English | MEDLINE | ID: mdl-37015418

ABSTRACT

Fusing intraoperative 2-D ultrasound (US) frames with preoperative 3-D magnetic resonance (MR) images for guiding interventions has become the clinical gold standard in image-guided prostate cancer biopsy. However, developing an automatic image registration system for this application is challenging because of the modality gap between US/MR and the dimensionality gap between 2-D/3-D data. To overcome these challenges, we propose a novel US frame-to-volume registration (FVReg) pipeline to bridge the dimensionality gap between 2-D US frames and 3-D US volume. The developed pipeline is implemented using deep neural networks, which are fully automatic without requiring external tracking devices. The framework consists of three major components, including (1) a frame-to-frame registration network (Frame2Frame) that estimates the current frame's 3-D spatial position based on previous video context, (2) a frame-to-slice correction network (Frame2Slice) adjusting the estimated frame position using the 3-D US volumetric information, and (3) a similarity filtering (SF) mechanism selecting the frame with the highest image similarity with the query frame. We validated our method on a clinical dataset with 618 subjects and tested its potential on real-time 2-D-US to 3-D-MR fusion navigation tasks. The proposed FVReg achieved an average target navigation error of 1.93 mm at 5-14 fps. Our source code is publicly available at https://github.com/DIAL-RPI/Frame-to-Volume-Registration.
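The similarity-filtering component, keeping the candidate with the highest image similarity to the query frame, can be sketched with normalized cross-correlation as the similarity measure (the actual SF mechanism's metric is a learned/implementation detail not given in the abstract):

```python
import numpy as np

def best_matching_slice(query, volume_slices):
    """Among candidate slices resampled from the 3D US volume, return
    the index of the one with the highest normalized cross-correlation
    with the query 2D frame, plus all similarity scores."""
    q = (query - query.mean()) / (query.std() + 1e-8)
    scores = []
    for s in volume_slices:
        sn = (s - s.mean()) / (s.std() + 1e-8)
        scores.append(float((q * sn).mean()))
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
frame = rng.normal(size=(64, 64))
slices = [rng.normal(size=(64, 64)) for _ in range(4)]
slices[2] = frame + 0.05 * rng.normal(size=(64, 64))   # near-duplicate candidate
idx, _ = best_matching_slice(frame, slices)
print(idx)  # 2
```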


Assuntos
Aprendizado Profundo , Neoplasias da Próstata , Masculino , Humanos , Imageamento Tridimensional/métodos , Ultrassonografia , Neoplasias da Próstata/diagnóstico por imagem , Neoplasias da Próstata/cirurgia , Redes Neurais de Computação
18.
BJU Int ; 110(11): 1642-7, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22973825

ABSTRACT

UNLABELLED: Study Type: Diagnosis (case series). Level of Evidence: 4. What's known on the subject? and What does the study add? Benign prostatic hyperplasia is the most common symptomatic disorder of the prostate and its severity varies greatly in the population. Various methods have been used to estimate prostate volumes in the past, including the digital rectal examination and ultrasound measurements. High-resolution T2-weighted MRI can provide accurate measurements of zonal volumes and total volumes, which can be used to better understand the etiology of lower urinary tract symptoms in men. OBJECTIVE: • To use the ability of magnetic resonance imaging (MRI) to investigate age-related changes in zonal prostate volumes. PATIENTS AND METHODS: • This Institutional Review Board approved, Health Insurance Portability and Accountability Act-compliant study consisted of 503 patients who underwent 3 T prostate MRI before any treatment for prostate cancer. • Whole prostate (WP) and central gland (CG) volumes were manually contoured on T2-weighted MRI using a semi-automated segmentation tool. WP, CG, and peripheral zone (PZ) volumes were measured for each patient. • WP, CG, and PZ volumes were correlated with age, serum prostate-specific antigen (PSA) level, International Prostate Symptom Score (IPSS), and Sexual Health Inventory for Men (SHIM) scores. RESULTS: • Linear regression analysis showed positive correlations between WP and CG volumes and patient age (P < 0.001); there was no correlation between age and PZ volume (P = 0.173). • There was a positive correlation between WP and CG volumes and serum PSA level (P < 0.001), as well as between PZ volume and serum PSA level (P = 0.002). • At logistic regression analysis, IPSS positively correlated with WP and CG volumes (P < 0.001). • SHIM positively correlated with WP (P = 0.015) and CG (P = 0.023) volumes. • As expected, the IPSS of patients with prostate volumes (WP, CG) in the first decile for age were significantly lower than those in the tenth decile.
CONCLUSIONS: • Prostate MRI is able to document age-related changes in prostate zonal volumes. • WP and CG volumes correlated positively with lower urinary tract symptom scores. • These findings suggest a role for MRI in accurately measuring prostate zonal volumes and have interesting implications for the study of age-related changes in the prostate.
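The core statistical finding above (CG volume correlates with age; PZ volume does not) can be sketched with a simple correlation analysis. The data below are synthetic stand-ins chosen only to reproduce the reported trend, not the study's actual measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort: central-gland volume grows with age,
# peripheral-zone volume does not (illustrative numbers only).
age = rng.uniform(40, 80, size=500)
cg_volume = 0.8 * age + rng.normal(scale=10, size=500)  # age-dependent
pz_volume = 20 + rng.normal(scale=5, size=500)          # age-independent

# Pearson correlation coefficients, as in a linear regression analysis.
r_cg = np.corrcoef(age, cg_volume)[0, 1]
r_pz = np.corrcoef(age, pz_volume)[0, 1]
```

With data like these, `r_cg` comes out strongly positive while `r_pz` stays near zero, mirroring the P < 0.001 versus P = 0.173 split reported in the abstract.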


Subjects
Magnetic Resonance Imaging/methods , Prostate/pathology , Prostatic Hyperplasia/pathology , Adult , Age Factors , Aged , Aged, 80 and over , Cross-Sectional Studies , Humans , Male , Middle Aged , Organ Size
19.
J Pers Med ; 12(8)2022 Aug 14.
Article in English | MEDLINE | ID: mdl-36013263

ABSTRACT

There has been a rapid increase in the number of artificial intelligence (AI)/machine learning (ML)-based biomarker diagnostic classifiers in recent years. However, relatively little work has focused on assessing the robustness of these biomarkers, i.e., investigating the uncertainty of the AI/ML models that these biomarkers are based upon. This paper addresses this issue by proposing a framework to evaluate already-developed classifiers with regard to their robustness, focusing on the variability of the classifiers' performance and changes in the classifiers' parameter values using factor analysis and Monte Carlo simulations. Specifically, this work evaluates (1) the importance of a classifier's input features and (2) the variability of a classifier's output and model parameter values in response to data perturbations. Additionally, it was found that one can estimate a priori how much replacement noise a classifier can tolerate while still meeting accuracy goals. To illustrate the evaluation framework, six different AI/ML-based biomarkers are developed using commonly used techniques (linear discriminant analysis, support vector machines, random forest, partial-least-squares discriminant analysis, logistic regression, and multilayer perceptron) for a metabolomics dataset involving 24 measured metabolites taken from 159 study participants. The framework was able to correctly predict which classifiers should be less robust than the others without recomputing the classifiers themselves, and this prediction was then validated in a detailed analysis.
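The Monte Carlo idea described here, measuring how a fixed classifier's accuracy varies as its inputs are perturbed, can be sketched minimally. This is not the paper's framework; the classifier weights and "metabolite" data below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained linear classifier (weights are illustrative).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(X):
    """Binary decision of the fixed, already-developed classifier."""
    return (X @ w + b > 0).astype(int)

# Synthetic feature matrix standing in for measured metabolite levels.
X = rng.normal(size=(200, 3))
y = predict(X)  # treat the clean predictions as the reference labels

# Monte Carlo robustness check: repeatedly add Gaussian perturbations
# to the inputs and record the accuracy distribution per noise level.
accuracies = {}
for sigma in (0.0, 0.1, 0.5):
    accs = []
    for _ in range(100):
        X_noisy = X + rng.normal(scale=sigma, size=X.shape)
        accs.append((predict(X_noisy) == y).mean())
    accuracies[sigma] = (np.mean(accs), np.std(accs))
```

A robust classifier keeps a high mean accuracy and low spread as `sigma` grows; how quickly accuracy decays gives a rough sense of how much input noise the classifier tolerates while still meeting an accuracy goal.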

20.
IEEE Trans Med Imaging ; 41(6): 1331-1345, 2022 06.
Article in English | MEDLINE | ID: mdl-34971530

ABSTRACT

Prostate segmentation in transrectal ultrasound (TRUS) images is an essential prerequisite for many prostate-related clinical procedures, yet it remains a long-standing problem due to low image quality and shadow artifacts. In this paper, we propose a Shadow-consistent Semi-supervised Learning (SCO-SSL) method with two novel mechanisms, namely shadow augmentation (Shadow-AUG) and shadow dropout (Shadow-DROP), to tackle this challenging problem. Specifically, Shadow-AUG enriches the training samples by adding simulated shadow artifacts to the images, making the network robust to shadow patterns. Shadow-DROP forces the segmentation network to infer the prostate boundary from the neighboring shadow-free pixels. Extensive experiments are conducted on two large clinical datasets (a public dataset containing 1,761 TRUS volumes and an in-house dataset containing 662 TRUS volumes). In the fully supervised setting, a vanilla U-Net equipped with our Shadow-AUG and Shadow-DROP outperforms state-of-the-art methods with statistical significance. In the semi-supervised setting, even with only 20% labeled training data, our SCO-SSL method still achieves highly competitive performance, suggesting great clinical value in reducing the annotation burden. Source code is released at https://github.com/DIAL-RPI/SCO-SSL.
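The two mechanisms can be sketched at the data level. This is a minimal illustration of the general idea, not the released SCO-SSL implementation: Shadow-AUG attenuates a random image sector to mimic an acoustic shadow, and Shadow-DROP zeroes features inside a shadow mask so predictions must rely on shadow-free neighbors. Function names, the sector geometry, and the attenuation strength are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def shadow_aug(image, strength=0.8):
    """Simulate an acoustic shadow by attenuating a random vertical sector."""
    h, w = image.shape
    start = int(rng.integers(0, w // 2))
    width = int(rng.integers(w // 8, w // 4))
    out = image.copy()
    out[:, start:start + width] *= (1.0 - strength)
    return out

def shadow_drop(features, shadow_mask, p=0.5):
    """Randomly zero features inside the shadow mask, so the model must
    infer the boundary from the neighboring shadow-free pixels."""
    drop = (rng.random(features.shape) < p) & shadow_mask
    return np.where(drop, 0.0, features)

# Toy "TRUS slice" and a shadow mask covering one sector.
img = rng.random((64, 64))
aug = shadow_aug(img)

mask = np.zeros_like(img, dtype=bool)
mask[:, 10:20] = True
dropped = shadow_drop(img, mask)
```

Augmented images keep their shape but lose intensity only inside the simulated shadow, and dropout leaves every pixel outside the mask untouched.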


Subjects
Prostate , Supervised Machine Learning , Artifacts , Humans , Image Processing, Computer-Assisted/methods , Male , Pelvis , Prostate/diagnostic imaging , Ultrasonography