Results 1 - 20 of 268
1.
Eur Radiol ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38536464

ABSTRACT

BACKGROUND: Accurate mortality risk quantification is crucial for the management of hepatocellular carcinoma (HCC); however, most scoring systems are subjective. PURPOSE: To develop and independently validate a machine learning mortality risk quantification method for HCC patients using standard-of-care clinical data and liver radiomics on baseline magnetic resonance imaging (MRI). METHODS: This retrospective study included all patients treated at our institution who had multiphasic contrast-enhanced MRI at the time of diagnosis. Patients were censored at their last date of follow-up, end of observation, or liver transplantation. The data were randomly split into independent cohorts, with 85% for development and 15% for independent validation. An automated liver segmentation framework was adopted for radiomic feature extraction. A random survival forest combined clinical and radiomic variables to predict overall survival (OS), and performance was evaluated using Harrell's C-index. RESULTS: A total of 555 treatment-naïve HCC patients (mean age, 63.8 years ± 8.9 [standard deviation]; 118 females) with MRI at the time of diagnosis were included, of whom 287 (51.7%) died after a median time of 14.40 (interquartile range, 22.23) months; median follow-up was 32.47 (interquartile range, 61.5) months. The risk prediction framework required 1.11 min on average and yielded C-indices of 0.8503 and 0.8234 in the development and independent validation cohorts, respectively, outperforming conventional clinical staging systems. Predicted risk scores were significantly associated with OS (p < .00001 in both cohorts). CONCLUSIONS: Machine learning reliably, rapidly, and reproducibly predicts mortality risk in patients with hepatocellular carcinoma from data routinely acquired in clinical practice.
CLINICAL RELEVANCE STATEMENT: Precision mortality risk prediction using routinely available standard-of-care clinical data and automated MRI radiomic features could enable personalized follow-up strategies, guide management decisions, and improve clinical workflow efficiency in tumor boards. KEY POINTS: • Machine learning enables hepatocellular carcinoma mortality risk prediction using standard-of-care clinical data and automated radiomic features from multiphasic contrast-enhanced MRI. • Automated mortality risk quantification achieved state-of-the-art performance and outperformed conventional clinical staging systems. • Patients were stratified into low-, intermediate-, and high-risk groups with significantly different survival times, generalizable to an independent evaluation cohort.
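For illustration, Harrell's C-index used to evaluate the survival model in this entry can be sketched in a few lines of Python; this is an independent toy implementation (variable names illustrative), not the authors' code:

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's concordance index: the fraction of comparable pairs in
    which the subject with the higher predicted risk experienced the
    event earlier. A pair (i, j) is comparable when the subject with the
    shorter observed time had an observed event (was not censored)."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, tied, comparable = 0, 0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:  # i's event observed first
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Toy example: risk scores perfectly ordered by time-to-event.
time = [5, 10, 15, 20]
event = [1, 1, 0, 1]
risk = [0.9, 0.7, 0.4, 0.2]
print(harrell_c_index(time, event, risk))  # → 1.0
```

The quadratic loop is fine for illustration; production implementations (e.g., in survival-analysis libraries) use sorted structures for large cohorts.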

2.
BMC Ophthalmol ; 24(1): 98, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438876

ABSTRACT

Image segmentation is a fundamental task in deep learning that underpins many downstream image analyses. However, for supervised segmentation methods, collecting pixel-level labels is very time-consuming and labour-intensive. In medical image processing for optic disc and cup segmentation, two challenging problems remain unsolved. One is how to design an efficient network that captures the global field of the medical image and executes fast in real applications. The other is how to train a deep segmentation network with little training data, owing to medical privacy issues. In this paper, to address these issues, we first design a novel attention-aware segmentation model equipped with a multi-scale attention module in a pyramid-structured encoder-decoder network, which can efficiently learn the global semantics and long-range dependencies of the input images. Furthermore, we inject the prior knowledge that the optic cup lies inside the optic disc through a novel loss function. We then propose a self-supervised contrastive learning method for optic disc and cup segmentation. The unsupervised feature representation is learned by matching an encoded query to a dictionary of encoded keys using a contrastive technique. Fine-tuning the pre-trained model with the proposed loss function helps achieve good performance on the task. To validate the effectiveness of the proposed method, extensive systematic evaluations on challenging public optic disc and cup benchmarks, including the DRISHTI-GS and REFUGE datasets, demonstrate its superiority: it achieves new state-of-the-art F1 scores of 0.9801 and 0.9087, respectively, with Dice coefficients of 0.9657 for the disc and 0.8976 for the cup. The code will be made publicly available.


Subjects
Optic Disk, Humans, Optic Disk/diagnostic imaging, Awareness, Benchmarking, Image Processing, Computer-Assisted, Attention
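The Dice coefficient reported for disc and cup masks in this entry has a simple definition; a minimal NumPy sketch (independent of the paper's code; for binary masks it coincides with the F1 score):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping pixels out of 3 predicted / 3 true.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # → 0.667
```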
3.
J Appl Clin Med Phys ; 25(2): e14266, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38269961

ABSTRACT

PURPOSE: Non-Contrast Enhanced CT (NCECT) is normally required for proton dose calculation, while Contrast Enhanced CT (CECT) is often scanned for tumor and organ delineation. Possible tissue motion between these two CTs raises dosimetry uncertainties, especially for moving tumors in the thorax and abdomen. Here we report a deep-learning approach to generate NCECT directly from CECT. This method could be useful to avoid the NCECT scan, reduce CT simulation time and imaging dose, and decrease the uncertainties caused by tissue motion between two otherwise different CT scans. METHODS: A deep network was developed to convert CECT to NCECT. The network receives a 3D image patch from the CECT as input and generates a corresponding contrast-removed NCECT patch. Abdominal CECT and NCECT image pairs of 20 patients were deformably registered, and 8000 image patch pairs extracted from the registered pairs were utilized to train and test the model. CTs of clinical proton patients and their treatment plans were employed to evaluate the dosimetric impact of using the generated NCECT for proton dose calculation. RESULTS: Our approach achieved a cosine similarity score of 0.988 and an MSE of 0.002. A quantitative comparison of clinical proton dose plans computed on the CECT and the generated NCECT for five proton patients revealed significant dose differences at the distal ends of the beam paths. V100% of PTV and GTV changed by 3.5% and 5.5%, respectively. The mean HU difference for all five patients between the generated and the scanned NCECTs was ∼4.72, whereas the difference between CECT and the scanned NCECT was ∼64.52, indicating a ∼93% reduction in mean HU difference. CONCLUSIONS: A deep learning approach was developed to generate NCECTs from CECTs. This approach could be useful for proton dose calculation to reduce uncertainties caused by tissue motion between CECT and NCECT.


Subjects
Deep Learning, Proton Therapy, Humans, Protons, Tomography, X-Ray Computed/methods, Imaging, Three-Dimensional, Radiometry, Image Processing, Computer-Assisted/methods, Radiotherapy Planning, Computer-Assisted/methods, Proton Therapy/methods
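The image-similarity metrics this entry reports (cosine similarity and MSE between generated and scanned images) can be sketched in NumPy; this is an independent illustration of the standard formulas, not the authors' evaluation code:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two images flattened to vectors."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mse(a, b):
    """Mean squared error between two arrays of the same shape."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean((a - b) ** 2))

# Toy "patches": nearly identical vectors give cosine ≈ 1, small MSE.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.1])
print(cosine_similarity(a, b), mse(a, b))
```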
4.
Sensors (Basel) ; 24(13)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39001081

ABSTRACT

In clinical settings limited by equipment, attaining lightweight skin lesion segmentation is pivotal, as it facilitates the integration of the model into diverse medical devices, thereby enhancing operational efficiency. However, a lightweight design may suffer accuracy degradation, especially when dealing with complex images such as skin lesion images with irregular regions and blurred or oversized boundaries. To address these challenges, we propose an efficient lightweight attention network (ELANet) for the skin lesion segmentation task. In ELANet, the two different attention mechanisms of the bilateral residual module (BRM) provide complementary information, enhancing sensitivity to features in the spatial and channel dimensions, respectively; multiple BRMs are then stacked for efficient feature extraction from the input. In addition, the network acquires global information and improves segmentation accuracy by passing feature maps of different scales through multi-scale attention fusion (MAF) operations. Finally, we evaluate the performance of ELANet on three publicly available datasets, ISIC2016, ISIC2017, and ISIC2018; the experimental results show that our algorithm achieves mIoU scores of 89.87%, 81.85%, and 82.87% on the three datasets with only 0.459 M parameters, an excellent balance between accuracy and model size that is superior to many existing segmentation methods.


Subjects
Algorithms, Neural Networks, Computer, Humans, Image Processing, Computer-Assisted/methods, Skin/diagnostic imaging, Skin/pathology
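The mIoU score used to benchmark ELANet has a standard definition: the intersection-over-union per class, macro-averaged. A minimal NumPy sketch (independent of the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes (macro average).
    Classes absent from both prediction and target are skipped."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 1D label maps with two classes (0 = background, 1 = lesion).
pred = np.array([0, 0, 1, 1])
target = np.array([0, 1, 1, 1])
print(mean_iou(pred, target, 2))  # (1/2 + 2/3) / 2 ≈ 0.583
```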
5.
Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066143

ABSTRACT

The incorporation of automatic segmentation methodologies into dental X-ray images refined the paradigms of clinical diagnostics and therapeutic planning by facilitating meticulous, pixel-level articulation of both dental structures and proximate tissues. This underpins the pillars of early pathological detection and meticulous disease progression monitoring. Nonetheless, conventional segmentation frameworks often encounter significant setbacks attributable to the intrinsic limitations of X-ray imaging, including compromised image fidelity, obscured delineation of structural boundaries, and the intricate anatomical structures of dental constituents such as pulp, enamel, and dentin. To surmount these impediments, we propose the Deformable Convolution and Mamba Integration Network, an innovative 2D dental X-ray image segmentation architecture, which amalgamates a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder. Collectively, these components bolster the management of multi-scale global features, fortify the stability of feature representation, and refine the amalgamation of feature vectors. A comparative assessment against 14 baselines underscores its efficacy, registering a 0.95% enhancement in the Dice Coefficient and a diminution of the 95th percentile Hausdorff Distance to 7.494.


Subjects
Image Processing, Computer-Assisted, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Algorithms, Tooth/diagnostic imaging
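The 95th-percentile Hausdorff distance this entry reports is a robust boundary-error metric; a brute-force NumPy sketch over two point sets (independent illustration, fine for small contours, not the authors' implementation):

```python
import numpy as np

def hd95(a_pts, b_pts):
    """95th-percentile symmetric Hausdorff distance between two point
    sets: collect each point's distance to the nearest point of the
    other set, then take the 95th percentile (robust to outliers)."""
    a, b = np.asarray(a_pts, float), np.asarray(b_pts, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of a → nearest point of b
    d_ba = d.min(axis=0)  # each point of b → nearest point of a
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Toy contours: two parallel rows of points one unit apart.
a = [[0, 0], [1, 0], [2, 0]]
b = [[0, 1], [1, 1], [2, 1]]
print(hd95(a, b))  # → 1.0
```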
6.
Sensors (Basel) ; 24(3)2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38339491

ABSTRACT

Optical coherence tomography angiography (OCTA) offers critical insights into the retinal vascular system, yet its full potential is hindered by challenges in precise image segmentation. Current methodologies struggle with imaging artifacts and clarity issues, particularly under low-light conditions and when using various high-speed CMOS sensors. These challenges are particularly pronounced when diagnosing and classifying diseases such as branch vein occlusion (BVO). To address these issues, we have developed a novel network based on topological structure generation, which transitions from superficial to deep retinal layers to enhance OCTA segmentation accuracy. Our approach not only demonstrates improved performance through qualitative visual comparisons and quantitative metric analyses but also effectively mitigates artifacts caused by low-light OCTA, resulting in reduced noise and enhanced clarity of the images. Furthermore, our system introduces a structured methodology for classifying BVO diseases, bridging a critical gap in this field. The primary aim of these advancements is to elevate the quality of OCTA images and bolster the reliability of their segmentation. Initial evaluations suggest that our method holds promise for establishing robust, fine-grained standards in OCTA vascular segmentation and analysis.


Subjects
Retinal Vein Occlusion, Tomography, Optical Coherence, Humans, Tomography, Optical Coherence/methods, Reproducibility of Results, Retinal Vein Occlusion/diagnosis, Retinal Vessels/diagnostic imaging, Angiography
7.
J Xray Sci Technol ; 32(4): 931-951, 2024.
Article in English | MEDLINE | ID: mdl-38848160

ABSTRACT

BACKGROUND: The rapid development of deep learning techniques has greatly improved the performance of medical image segmentation, and segmentation networks based on convolutional neural networks (CNNs) and Transformers have been widely used in this field. However, due to the restricted receptive field of the convolution operation and the self-attention mechanism's limited ability to extract fine local detail, networks with a purely convolutional or purely Transformer backbone still perform poorly in medical image segmentation. METHODS: In this paper, we propose FDB-Net (Fusion Double Branch Network), a double-branch medical image segmentation network combining a CNN and a Transformer. Using a CNN containing gnConv blocks and a Transformer containing Varied-Size Window Attention (VWA) blocks as the feature-extraction backbones, the dual-path encoder ensures that the network has a global receptive field as well as access to local detail features of the target. We also propose a new feature fusion module (Deep Feature Fusion, DFF), which helps the image simultaneously fuse features from the two structurally different encoders during encoding, ensuring effective fusion of the image's global and local information. CONCLUSION: Our model achieves advanced results on all three typical medical image segmentation tasks, fully validating the effectiveness of FDB-Net.


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, Humans, Image Processing, Computer-Assisted/methods, Deep Learning, Algorithms, Tomography, X-Ray Computed/methods
8.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 213-219, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686400

ABSTRACT

Medical image registration plays an important role in medical diagnosis and treatment planning. However, current registration methods based on deep learning still face challenges, such as insufficient ability to extract global information, large numbers of network parameters, and slow inference speed. Therefore, this paper proposes a new model, LCU-Net, which uses parallel lightweight convolution to improve global information extraction; the problems of a large parameter count and slow inference are addressed by multi-scale fusion. Experimental results showed that LCU-Net reached a Dice coefficient of 0.823 and a Hausdorff distance of 1.258, with the number of network parameters reduced by about one quarter compared with the model before multi-scale fusion. The proposed algorithm shows remarkable advantages in medical image registration tasks: it not only surpasses the existing comparison algorithms in performance but also has excellent generalization and broad application prospects.


Subjects
Algorithms, Brain, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Deep Learning
9.
Neuroimage ; 268: 119863, 2023 03.
Article in English | MEDLINE | ID: mdl-36610676

ABSTRACT

Domain adaptation (DA) is an important technique for modern machine learning-based medical data analysis, which aims at reducing distribution differences between medical datasets. A proper domain adaptation method can significantly enhance statistical power by pooling data acquired from multiple sites/centers. To this end, we have developed the Domain Adaptation Toolbox for Medical data analysis (DomainATM), an open-source software package designed for fast facilitation and easy customization of domain adaptation methods for medical data analysis. DomainATM is implemented in MATLAB with a user-friendly graphical interface, and it consists of a collection of popular data adaptation algorithms that have been extensively applied to medical image analysis and computer vision. With DomainATM, researchers can perform fast feature-level and image-level adaptation, visualization, and performance evaluation of different adaptation methods for medical data analysis. More importantly, DomainATM enables users to develop and test their own adaptation methods through scripting, greatly enhancing its utility and extensibility. An overview of the characteristics and usage of DomainATM is presented and illustrated with three example experiments, demonstrating its effectiveness, simplicity, and flexibility. The software, source code, and manual are available online.


Subjects
Algorithms, Software, Humans, Adaptation, Physiological
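DomainATM itself is a MATLAB toolbox, but the kind of feature-level adaptation it ships can be illustrated with CORAL (correlation alignment), a classic baseline that re-colors source features to match the target's second-order statistics. The sketch below is an independent NumPy version, not DomainATM code:

```python
import numpy as np

def coral(Xs, Xt, eps=1e-6):
    """CORrelation ALignment: whiten source features with the source
    covariance, then re-color with the target covariance, so the
    transformed source matches the target's mean and covariance."""
    def msqrt(C, inv=False):
        # Symmetric matrix (inverse) square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        w = np.clip(w, eps, None)
        p = -0.5 if inv else 0.5
        return (V * w**p) @ V.T

    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Xs_c = Xs - Xs.mean(0)
    return Xs_c @ msqrt(Cs, inv=True) @ msqrt(Ct) + Xt.mean(0)

# Toy "two-site" features with different means and scales.
rng = np.random.default_rng(0)
Xs = rng.normal(0, 1, (500, 3))
Xt = rng.normal(5, 2, (500, 3))
Xa = coral(Xs, Xt)
# After alignment, the source mean and covariance match the target's.
```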
10.
Eur Radiol ; 33(4): 2450-2460, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36462042

ABSTRACT

OBJECTIVES: To assess epicardial adipose tissue (EAT) volume and attenuation on different virtual non-contrast (VNC) reconstructions derived from coronary CTA (CCTA) datasets of a photon-counting detector (PCD) CT system, as a replacement for true non-contrast (TNC) series. METHODS: Consecutive patients (n = 42) with clinically indicated CCTA and coronary TNC were included. Two VNC series were reconstructed, using a conventional (VNCConv) and a novel calcium-preserving (VNCPC) algorithm. EAT was segmented on the TNC, VNCConv, VNCPC, and CCTA (CTA-30) series using thresholds of -190 to -30 HU, with an additional segmentation on the CCTA series using an upper threshold of 0 HU (CTA0). EAT volumes and their histograms were assessed for each series. Linear regression was used to correlate EAT volumes, and the Euclidean distance to compare histograms. The paired t-test and the Wilcoxon signed-rank test were used to assess differences for parametric and non-parametric data, respectively. RESULTS: EAT volumes from the VNC and CCTA series showed significant differences compared to TNC (all p < .05) but excellent correlation (all R2 > 0.9). Measurements on the novel VNCPC series showed the best correlation (R2 = 0.99) and only minor absolute differences from TNC values. Mean volume differences were -12%, -3%, -13%, and +10% for VNCConv, VNCPC, CTA-30, and CTA0 compared to TNC. The distribution of CT values on VNCPC differed less from TNC than that on VNCConv (mean attenuation difference +7% vs. +2%; Euclidean distance of histograms 0.029 vs. 0.016). CONCLUSIONS: VNCPC reconstructions of PCD-CCTA datasets can be used to reliably assess EAT volume with high accuracy and only minor differences in CT values compared to TNC. Substituting for TNC would significantly decrease the patient's radiation dose. KEY POINTS: • Measurement of epicardial adipose tissue (EAT) volume and attenuation is feasible on virtual non-contrast (VNC) series with excellent correlation to true non-contrast series (all R2 > 0.9).
• Differences in VNC algorithms have a significant impact on EAT volume and CT attenuation values. • A novel VNC algorithm (VNCPC) enables reliable assessment of EAT volume and attenuation with superior accuracy compared to measurements on conventional VNC- and CCTA-series.


Subjects
Angiography, Tomography, X-Ray Computed, Humans, Tomography, X-Ray Computed/methods, Reproducibility of Results, Photons, Adipose Tissue/diagnostic imaging, Retrospective Studies
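The threshold-based EAT segmentation this entry describes amounts to masking voxels inside the adipose HU window and multiplying the voxel count by the voxel volume. A minimal NumPy sketch of that step (illustrative only; the study's pipeline also includes pericardial delineation):

```python
import numpy as np

def fat_volume_ml(hu, voxel_mm3, lo=-190, hi=-30):
    """Volume (mL) of voxels falling in the adipose HU window
    [-190, -30], the threshold used in this entry."""
    mask = (hu >= lo) & (hu <= hi)
    return float(mask.sum() * voxel_mm3 / 1000.0)

# Toy 2x2 "slice" with two voxels inside the adipose window.
hu = np.array([[-200, -100], [-50, 40]])
print(fat_volume_ml(hu, voxel_mm3=1.0))  # → 0.002
```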
11.
Eur Radiol ; 33(10): 7056-7065, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37083742

ABSTRACT

OBJECTIVES: To evaluate a novel algorithm for noise reduction in obese patients using dual-source dual-energy (DE) CT imaging. METHODS: Seventy-nine patients with contrast-enhanced abdominal imaging (54 women; age: 58 ± 14 years; BMI: 39 ± 5 kg/m2, range: 35-62 kg/m2) from seven DECT scanners (SOMATOM Flash or Force) were retrospectively included (01/2019-12/2020). Image domain data were reconstructed with the standard clinical algorithm (ADMIRE/SAFIRE 2) and denoised with a comparison algorithm (ME-NLM) and a test algorithm (rank-sparse kernel regression). Contrast-to-noise ratio (CNR) was calculated. Four blinded readers evaluated the same original and denoised images (0 (worst)-100 (best)) in randomized order for perceived image noise, quality, and their comfort making a diagnosis from a table of 80 options. Comparisons between algorithms were performed using paired t-tests and mixed-effects linear modeling. RESULTS: Average CNR was 5.0 ± 1.9 (original), 31.1 ± 10.3 (comparison; p < 0.001), and 8.9 ± 2.9 (test; p < 0.001). Readers were in good to moderate agreement on perceived image noise (ICC: 0.83), image quality (ICC: 0.71), and diagnostic comfort (ICC: 0.6). Diagnostic accuracy was low across algorithms (accuracy: 66%, 63%, and 67% for original, comparison, and test). Noise received mean scores of 54, 84, and 66 (p < .05); image quality, 59, 61, and 65; and diagnostic comfort, 63, 68, and 68, respectively. Quality and comfort scores were not statistically significantly different between algorithms. CONCLUSIONS: The test algorithm produces quantitatively higher image quality than the current standard and existing denoising algorithms in obese patients imaged with DECT, and readers show a preference for it. CLINICAL RELEVANCE STATEMENT: Accurate diagnosis on CT imaging of obese patients is challenging, and denoising algorithms can increase diagnostic comfort and quantitative image quality. This could lead to better clinical reads.
KEY POINTS: • Improving image quality in DECT imaging of obese patients is important for accurate and confident clinical reads, which may be aided by novel denoising algorithms using image domain data. • Accurate diagnosis on CT imaging of obese patients is especially challenging and denoising algorithms can increase quantitative and qualitative image quality. • Image domain algorithms can generalize well and can be implemented at other institutions.


Subjects
Algorithms, Tomography, X-Ray Computed, Humans, Female, Adult, Middle Aged, Aged, Tomography, X-Ray Computed/methods, Retrospective Studies, Phantoms, Imaging, Obesity/complications, Obesity/diagnostic imaging, Radiation Doses, Radiographic Image Interpretation, Computer-Assisted/methods, Signal-To-Noise Ratio
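One common convention for the CNR reported in this entry is the absolute ROI/background mean difference divided by the background noise; the abstract does not spell out its exact formula, so the NumPy sketch below is an illustrative assumption, not the study's definition:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| divided
    by the background standard deviation (one common convention;
    definitions vary between papers)."""
    roi = np.asarray(roi, float)
    background = np.asarray(background, float)
    return float(abs(roi.mean() - background.mean()) / background.std())

# Toy HU samples from a contrast-filled vessel vs. adjacent tissue.
roi = [110.0, 112.0, 108.0]
bg = [50.0, 52.0, 48.0]
print(round(cnr(roi, bg), 2))
```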
12.
Methods ; 205: 149-156, 2022 09.
Article in English | MEDLINE | ID: mdl-35809770

ABSTRACT

According to global and Chinese cancer statistics, lung cancer is the second most common cancer globally, has the highest mortality rate, and poses a severe threat to human life and health. In recent years, immunotherapy has made significant breakthroughs in the treatment of cancer patients. However, only about 30% of patients are eligible, and they may suffer immune-related adverse events. Traditional immunological inspection methods have limitations and often cannot deliver the expected benefits. Deep learning is a typical representation learning method that can spontaneously mine hidden features for effective classification from large volumes of data. To conserve medical resources and reduce costs, this paper proposes a deep learning-based method to predict, from patients' CT images, which patients are best suited for immune checkpoint blockade therapy. The proposed deep immunotherapy analysis method is divided into three steps: (1) using the LUNA16 public dataset to develop a deep learning model for nodule detection; (2) performing nodule detection on the Anti-PD-1_Lung dataset and determining the effectiveness of immunotherapy by comparing nodule detections before and after immunotherapy; (3) after the dataset was processed, training the deep learning model on and analyzing the lung images. According to the experimental results and comparative analysis, the proposed method performs well in nodule detection and is effective for predicting the applicability of immunotherapy for lung cancer.


Subjects
Deep Learning, Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Immunotherapy, Lung, Lung Neoplasms/diagnosis, Lung Neoplasms/therapy, Neural Networks, Computer, Radiographic Image Interpretation, Computer-Assisted/methods, Solitary Pulmonary Nodule/diagnostic imaging, Tomography, X-Ray Computed/methods
13.
BMC Med Imaging ; 23(1): 113, 2023 08 24.
Article in English | MEDLINE | ID: mdl-37620849

ABSTRACT

PURPOSE: This study aimed to develop and validate a deep learning-based method that detects inter-breath-hold motion from an estimated cardiac long axis image reconstructed from a stack of short axis cardiac cine images. METHODS: Cardiac cine magnetic resonance image data from all short axis slices and 2-/3-/4-chamber long axis slices were considered for the study. Data from 740 subjects were used for model development, and data from 491 subjects were used for testing. The method utilized the slice orientation information to calculate the intersection line of a short axis plane and a long axis plane. An estimated long axis image is shown alongside an acquired long axis image as a motion-free reference, which enables visual assessment of inter-breath-hold motion from the estimated long axis image. The estimated long axis image was labeled as either motion-corrupted or motion-free. Deep convolutional neural network (CNN) models were developed and validated using the labeled data. RESULTS: The method was fully automatic in obtaining long axis images reformatted from a 3D stack of short axis slices and predicting the presence or absence of inter-breath-hold motion. The deep CNN model with EfficientNet-B0 as a feature extractor was effective at motion detection, with an area under the receiver operating characteristic curve (AUC) of 0.87 on the testing data. CONCLUSION: The proposed method can automatically assess inter-breath-hold motion in a stack of cardiac cine short axis slices. The method can help prospectively reacquire problematic short axis slices or retrospectively correct motion.


Subjects
Breath Holding, Heart, Humans, Retrospective Studies, Heart/diagnostic imaging, Motion (Physics), Neural Networks, Computer
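The geometric step this entry relies on (intersecting a short-axis plane with a long-axis plane via their orientations) is a standard construction: the line direction is the cross product of the two plane normals. A NumPy sketch, independent of the authors' code:

```python
import numpy as np

def plane_intersection(n1, p1, n2, p2):
    """Line of intersection of two planes, each given by a normal
    vector and a point on the plane. Returns (point_on_line, unit
    direction); raises if the planes are parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    d = np.cross(n1, n2)  # direction of the intersection line
    if np.allclose(d, 0):
        raise ValueError("planes are parallel")
    # Solve n1·x = n1·p1, n2·x = n2·p2, d·x = 0 for one point x.
    A = np.array([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    point = np.linalg.solve(A, b)
    return point, d / np.linalg.norm(d)

# Example: the z = 0 and y = 0 planes intersect along the x-axis.
pt, direction = plane_intersection([0, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0])
```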
14.
Sensors (Basel) ; 23(2)2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36679721

ABSTRACT

This paper describes the process of developing a classification model for the effective detection of malignant melanoma, an aggressive type of skin cancer, in skin lesions. Primary focus is given to fine-tuning and improving a state-of-the-art convolutional neural network (CNN) to obtain the optimal ROC-AUC score. The study investigates a variety of artificial intelligence (AI) clustering techniques to train the developed models on a combined dataset of images from the 2019 and 2020 SIIM-ISIC Melanoma Classification Challenges. The models were evaluated using varying cross-fold validations, with the highest ROC-AUC reaching 99.48%.


Subjects
Artificial Intelligence, Melanoma, Humans, Dermoscopy/methods, Melanoma/diagnosis, Neural Networks, Computer, Cluster Analysis, Melanoma, Cutaneous Malignant
15.
Sensors (Basel) ; 23(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37430748

ABSTRACT

Bone age assessment (BAA) is a typical clinical technique for diagnosing endocrine and metabolic diseases in children's development. Existing deep learning-based automatic BAA models are trained on the Radiological Society of North America (RSNA) dataset drawn from Western populations. However, due to differences in developmental process and BAA standards between Eastern and Western children, these models cannot be applied to bone age prediction in Eastern populations. To address this issue, this paper collects a bone age dataset based on East Asian populations for model training. Nevertheless, it is laborious and difficult to obtain enough X-ray images with accurate labels. In this paper, we employ ambiguous labels from radiology reports and transform them into Gaussian distribution labels of different amplitudes. Furthermore, we propose the multi-branch attention learning with ambiguous labels network (MAAL-Net). MAAL-Net consists of a hand object location module and an attention part extraction module to discover informative regions of interest (ROIs) based only on image-level labels. Extensive experiments on both the RSNA dataset and the China Bone Age (CNBA) dataset demonstrate that our method achieves results competitive with the state of the art and performs on par with experienced physicians in children's BAA tasks.


Subjects
Bone and Bones, East Asian People, Endocrine System Diseases, Metabolic Diseases, Child, Humans, China, Normal Distribution, Bone and Bones/diagnostic imaging, Metabolic Diseases/diagnosis, Endocrine System Diseases/diagnosis
16.
Sensors (Basel) ; 23(24)2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38139718

ABSTRACT

Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved breakthroughs in functionality. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could improve their performance in IGS many times over. The goal of this narrative review is to organize the key components of IGS, in the aspects of medical image processing and visualization, with new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies like augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. Further, we hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.


Subjects
Augmented Reality, Surgery, Computer-Assisted, Virtual Reality, Surgery, Computer-Assisted/methods, Image Processing, Computer-Assisted
17.
J Digit Imaging ; 36(4): 1565-1577, 2023 08.
Article in English | MEDLINE | ID: mdl-37253895

ABSTRACT

To train an artificial neural network model using 3D radiomic features to differentiate benign from malignant vertebral compression fractures (VCFs) on MRI. This retrospective study analyzed sagittal T1-weighted lumbar spine MRIs from 91 patients (average age of 64.24 ± 11.75 years) diagnosed with benign or malignant VCFs from 2010 to 2019, of whom 47 (51.6%) had benign VCFs and 44 (48.4%) had malignant VCFs. The lumbar fractures were three-dimensionally segmented and had their radiomic features extracted and selected with the wrapper method. The training set consisted of 100 fractured vertebral bodies from 61 patients (average age of 63.2 ± 12.5 years), and the test set comprised 30 fractured vertebral bodies from 30 patients (average age of 66.4 ± 9.9 years). Classification was performed with a multilayer perceptron neural network with a back-propagation algorithm. To validate the model, tenfold cross-validation and an independent test set (holdout) were used. The performance of the model was evaluated using the average with a 95% confidence interval for the ROC AUC, accuracy, sensitivity, and specificity (at a threshold of 0.5). In the internal validation test, the best model reached a ROC AUC of 0.98, an accuracy of 95% (95/100), a sensitivity of 93.5% (43/46), and a specificity of 96.3% (52/54). In validation with the independent test set, the model achieved a ROC AUC of 0.97, an accuracy of 93.3% (28/30), a sensitivity of 93.3% (14/15), and a specificity of 93.3% (14/15). The model proposed in this study using radiomic features could differentiate benign from malignant vertebral compression fractures with excellent performance and is promising as an aid to radiologists in the characterization of VCFs.


Subjects
Compression Fractures , Spinal Fractures , Spinal Neoplasms , Humans , Middle Aged , Aged , Spinal Fractures/diagnostic imaging , Compression Fractures/diagnostic imaging , Compression Fractures/pathology , Retrospective Studies , Spinal Neoplasms/complications , Spinal Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Lumbar Vertebrae/diagnostic imaging , Lumbar Vertebrae/pathology , Neural Networks (Computer)
18.
Strahlenther Onkol ; 198(6): 582-592, 2022 06.
Article in English | MEDLINE | ID: mdl-35403891

ABSTRACT

PURPOSE: Thiel embalming followed by freezing in the desired position and acquiring CT + MRI scans is expected to be the ideal approach to obtain accurate, enhanced CT data for delineation guideline development. The effect of Thiel embalming and freezing on MRI image quality is not known. This study evaluates the above-described process for obtaining enhanced CT datasets, focusing on the integration of MRI data acquired from frozen, Thiel-embalmed specimens. METHODS: Three Thiel-embalmed specimens were frozen in the prone crawl position, and MRI scanning protocols were evaluated on contrast detail and on the structural conformity between 3D renderings of corresponding structures segmented on matched MRI and CT scans. The measurement error of the dataset registration procedure was also assessed. RESULTS: The T1 VIBE FS scanning protocol enabled rapid differentiation of soft tissues based on contrast detail, even allowing a fully detailed segmentation of the brachial plexus. Structural conformity between the reconstructed structures on CT and MRI was excellent: nerves and blood vessels imported into the CT scan never intersected the bones. The mean measurement error of the image registration procedure was consistently submillimeter (range 0.77-0.94 mm). CONCLUSION: Based on the excellent MRI image quality and the submillimeter error margin, scanning frozen Thiel-embalmed specimens in the treatment position to obtain enhanced CT scans is recommended. The procedure can be used to support the development of delineation guidelines or to train deep-learning algorithms for automated segmentation.
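The registration-error assessment mentioned above amounts to measuring distances between corresponding landmark points on the two modalities. The sketch below is illustrative only: all coordinates are invented, and the real procedure and landmark choice are not described in the abstract.

```python
# Hypothetical sketch: mean Euclidean distance (mm) between corresponding
# landmarks identified on CT and on the registered MRI. The study reported
# mean errors of 0.77-0.94 mm; these coordinates are made up.
import numpy as np

ct_points = np.array([          # landmark positions on CT (mm)
    [12.0, 45.5, 30.0],
    [18.2, 40.1, 27.5],
    [25.4, 38.9, 33.0],
    [30.1, 42.7, 29.8],
])
mri_points = ct_points + np.array([  # same landmarks on the registered MRI
    [0.5, 0.4, 0.3],
    [0.6, 0.2, 0.5],
    [0.3, 0.5, 0.4],
    [0.4, 0.6, 0.2],
])
errors = np.linalg.norm(mri_points - ct_points, axis=1)  # per-landmark error
print(f"mean registration error: {errors.mean():.2f} mm")
```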


Subjects
Embalming , Magnetic Resonance Imaging , Cadaver , Embalming/methods , Humans , Magnetic Resonance Imaging/methods , X-Ray Computed Tomography
19.
MAGMA ; 35(4): 573-585, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35150363

ABSTRACT

OBJECTIVE: Signal intensity normalization is necessary to reduce heterogeneity in T2-weighted (T2W) magnetic resonance imaging (MRI) for quantitative analysis of multicenter data. AutoRef is an automated dual-reference tissue normalization method that normalizes transversal prostate T2W MRI by creating a pseudo-T2 map. The aim of this study was to evaluate the accuracy of pseudo-T2s and multicenter standardization performance for AutoRef with three pairs of reference tissues: fat/muscle (AutoRefF), femoral head/muscle (AutoRefFH) and pelvic bone/muscle (AutoRefPB). MATERIALS AND METHODS: T2s measured by multi-echo spin echo (MESE) were compared to AutoRef pseudo-T2s in the whole prostate (WP) and zones (PZ and TZ/CZ/AFS) for seven asymptomatic volunteers with a paired Wilcoxon signed-rank test. AutoRef normalization was assessed on T2W images from a multicenter evaluation set of 1186 prostate cancer patients. Performance was measured by inter-patient histogram intersections of voxel intensities in the WP before and after normalization in a selected subset of 80 cases. RESULTS: AutoRefFH pseudo-T2s best approached MESE T2s in the volunteer study, with no significant difference shown (WP: p = 0.30, TZ/CZ/AFS: p = 0.22, PZ: p = 0.69). All three AutoRef versions increased inter-patient histogram intersections in the multicenter dataset, with median histogram intersections of 0.505 (original data), 0.738 (AutoRefFH), 0.739 (AutoRefF) and 0.726 (AutoRefPB). DISCUSSION: All AutoRef versions reduced variation in the multicenter data. AutoRefFH pseudo-T2s were closest to experimentally measured T2s.
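The core idea of a dual-reference-tissue normalization like AutoRef can be sketched as a linear intensity transform that maps the measured intensities of two reference tissues onto assigned pseudo-T2 values. This is a simplified illustration under assumed numbers: the reference intensities and pseudo-T2 values below are invented, and AutoRef's actual automated reference-tissue detection is not reproduced.

```python
# Sketch of dual-reference-tissue normalization: a linear map sending the
# intensity of reference tissue 1 to t2_ref1 and of tissue 2 to t2_ref2,
# turning a raw T2W image into a pseudo-T2 map. Values are illustrative.
import numpy as np

def dual_reference_normalize(image, i_ref1, i_ref2, t2_ref1, t2_ref2):
    """Linearly map intensities so i_ref1 -> t2_ref1 and i_ref2 -> t2_ref2."""
    scale = (t2_ref2 - t2_ref1) / (i_ref2 - i_ref1)
    return t2_ref1 + (image - i_ref1) * scale

img = np.array([[200.0, 800.0],          # raw T2W intensities (arbitrary units)
                [500.0, 350.0]])
# Assumed mean intensities of muscle and fat, and assumed pseudo-T2s (ms):
pseudo_t2 = dual_reference_normalize(img, i_ref1=200.0, i_ref2=800.0,
                                     t2_ref1=35.0, t2_ref2=120.0)
print(pseudo_t2)
```

Because the transform is linear, any voxel with the muscle reference intensity lands on the muscle pseudo-T2 and likewise for the second tissue, which is what makes intensities comparable across scanners.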


Subjects
Prostate , Prostatic Neoplasms , Humans , Magnetic Resonance Imaging/methods , Magnetic Resonance Spectroscopy , Male , Pelvis , Prostate/diagnostic imaging , Prostate/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology
20.
Eur Arch Otorhinolaryngol ; 279(12): 5631-5638, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35727414

ABSTRACT

PURPOSE: During cochlear implantation surgery, a range of complications may occur, such as tip fold-over. We recently developed a method to estimate the insertion orientation of the electrode array. The aim of this study was to determine the optimal angle of orientation in a cohort of cochlear implant patients. METHODS: On eighty-five CT scans (80 uncomplicated insertions and 5 cases with tip fold-over), the locations of the electrode array's insertion guide (IG), orientation marker (OM), and two easily identifiable landmarks (the round window (RW) and the incus short process (ISP)) were manually marked. The angle enclosed by the ISP-RW line and the Cochlear™ Slim Modiolar electrode array's OM line defined the electrode array insertion angle. RESULTS: The average insertion angle was 45.0-47.2° ± 10.4-12° SD and was validated with a 98% confidence interval. Based on the measurements obtained, patients' sex and age had no impact on the size of this angle. Although the angles of the tip fold-over cases (44.9°, 46.9°, 34.2°, 54.3°, 55.9°) fell within this average range, the further an angle deviated from the average, the greater the likelihood of tip fold-over. CONCLUSION: Electrode array insertion at the individually calculated angle relative to the visible incus short process provides a useful guide for the surgeon when aiming for the optimal angle, and potentially improves surgical outcomes. Our results show that factors other than the orientation angle may additionally contribute to failures in implantation when the Slim Modiolar electrode is used.
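The insertion angle described above is simply the angle between two lines defined by marked landmarks. The sketch below illustrates that geometry with invented coordinates; the study's manual CT landmarking and any projection conventions it used are not reproduced here.

```python
# Illustrative computation of the angle between the ISP-RW line and the
# electrode array's orientation-marker (OM) line. Coordinates are made up;
# in the study, landmarks were marked manually on CT scans.
import numpy as np

def angle_between(p_from, p_to, q_from, q_to):
    """Angle in degrees between the lines p_from->p_to and q_from->q_to."""
    u = np.asarray(p_to, float) - np.asarray(p_from, float)
    v = np.asarray(q_to, float) - np.asarray(q_from, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

isp, rw = [0.0, 0.0, 0.0], [10.0, 0.0, 0.0]   # ISP-RW line along the x-axis
ig, om = [0.0, 0.0, 0.0], [10.0, 10.0, 0.0]   # OM line at 45 degrees to it
angle = angle_between(isp, rw, ig, om)
print(round(angle, 1))
```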


Subjects
Cochlear Implantation , Cochlear Implants , Humans , Cochlear Implantation/methods , Round Window (Ear)/surgery , Cochlea/surgery , Implanted Electrodes