Results 1 - 20 of 164
1.
IEEE Trans Med Imaging ; PP. 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801692

ABSTRACT

Dynamic contrast-enhanced ultrasound (CEUS) imaging reflects microvascular distribution and blood-flow perfusion, and is therefore clinically significant for distinguishing malignant from benign thyroid nodules. Notably, CEUS offers a meticulous visualization of the microvascular distribution surrounding the nodule, leading to an apparent increase in tumor size compared with gray-scale ultrasound (US). In the dual images obtained, the lesion appears larger on CEUS than on gray-scale US, as the microvasculature seems to infiltrate the surrounding tissue continuously. Although this infiltrative dilatation of the microvasculature remains ambiguous, sonographers believe it may aid the diagnosis of thyroid nodules. We propose a deep learning model designed to emulate the diagnostic reasoning of sonographers. The model integrates the observation of microvascular infiltration on dynamic CEUS, leveraging the additional insight provided by gray-scale US for enhanced diagnostic support. Specifically, temporal projection attention is applied along the time dimension of dynamic CEUS to represent microvascular perfusion. In addition, a group of confidence maps with flexible Sigmoid Alpha Functions is employed to perceive and describe the infiltrative dilatation process. Moreover, a self-adaptive integration mechanism dynamically combines the assisting gray-scale US and the CEUS confidence maps for each patient, ensuring a trustworthy diagnosis of thyroid nodules. In this retrospective study, we collected a thyroid nodule dataset of 282 CEUS videos. The method achieves a diagnostic accuracy of 89.52% and a sensitivity of 93.75%. These results suggest that imitating the diagnostic thinking of sonographers, encompassing dynamic microvascular perfusion and infiltrative expansion, benefits CEUS-based thyroid nodule diagnosis.

2.
IEEE Trans Med Imaging ; PP. 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652607

ABSTRACT

Proximal femoral fracture segmentation in computed tomography (CT) is essential for orthopedic surgeons' preoperative planning. Recently, numerous deep learning-based approaches have been proposed for segmenting various structures in CT scans. Nevertheless, distinguishing the various attributes of fracture fragments and soft-tissue regions in CT scans frequently poses challenges, which have received comparatively limited research attention. Moreover, contemporary deep learning methodologies depend on annotated data, while detailed CT annotations remain scarce. To address these challenges, we propose a novel weakly supervised framework, Rough Turbo Net (RT-Net), for the segmentation of proximal femoral fractures. We emphasize producing rough annotations at a substantial scale, as opposed to relying on limited fine-grained annotations that demand substantial time to create. In RT-Net, rough annotations impose fractured-region constraints, which have demonstrated significant efficacy in enhancing the accuracy of the network, while fine annotations provide additional detail for recognizing edges and soft tissues. In addition, we design a spatial adaptive attention module (SAAM) that adapts to the spatial distribution of the fracture regions and aligns features in each decoder. We also propose a fine-edge loss, applied through an edge-discrimination network, that penalizes absent or imprecise edge features. Extensive quantitative and qualitative experiments demonstrate the superiority of RT-Net over state-of-the-art approaches. Furthermore, additional experiments show that RT-Net can produce pseudo-labels for raw CT images that further improve fracture segmentation performance, and it has the potential to improve segmentation performance on public datasets. The code is available at: https://github.com/zyairelu/RT-Net.

3.
Adv Sci (Weinh) ; : e2307965, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38634608

ABSTRACT

Diffusion magnetic resonance imaging is an important tool for non-invasively mapping tissue microstructure and structural connectivity in the in vivo human brain. Numerous diffusion signal models have been proposed to quantify microstructural properties. Nonetheless, accurate estimation of model parameters is computationally expensive and impeded by image noise. Supervised deep learning-based estimation approaches are efficient and perform well, but they require additional training data and may not generalize. To address this problem, we propose DIMOND, a DIffusion Model OptimizatioN framework using physics-informed and self-supervised Deep learning. DIMOND employs a neural network to map input image data to model parameters and optimizes the network by minimizing the difference between the acquired input data and synthetic data generated by the diffusion model parametrized by the network outputs. DIMOND produces accurate diffusion tensor imaging results and generalizes across subjects and datasets. Moreover, DIMOND outperforms conventional methods for fitting sophisticated microstructural models, including the kurtosis and NODDI models. Importantly, DIMOND reduces NODDI model fitting time from hours to minutes, or to seconds when leveraging transfer learning. In summary, the self-supervised manner, high efficacy, and efficiency of DIMOND increase the practical feasibility and adoption of microstructure and connectivity mapping in clinical and neuroscientific applications.
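The self-supervised principle behind DIMOND (optimize parameters by comparing acquired signals with signals re-synthesized through the physical model, with no ground-truth parameter labels) can be illustrated outside any neural network. The sketch below is a simplification not taken from the paper: it fits a single-voxel diffusion tensor by plain gradient descent on the synthesis loss, with diffusivities in µm²/ms so that b = 1 corresponds to b = 1000 s/mm².

```python
import numpy as np

def dti_signal(d6, bvecs, bval, s0=1.0):
    # Synthesize S = S0 * exp(-b * g^T D g) from the six unique tensor
    # elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
    Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = d6
    D = np.array([[Dxx, Dxy, Dxz],
                  [Dxy, Dyy, Dyz],
                  [Dxz, Dyz, Dzz]])
    adc = np.einsum('ij,jk,ik->i', bvecs, D, bvecs)  # g^T D g per direction
    return s0 * np.exp(-bval * adc)

def self_supervised_fit(signals, bvecs, bval, lr=0.2, steps=6000):
    # Minimize ||acquired - synthesized(theta)||^2 with a numerical
    # gradient; DIMOND does the analogous thing with a network and backprop.
    theta = np.array([0.7, 0.7, 0.7, 0.0, 0.0, 0.0])  # isotropic initialization
    eps = 1e-6
    for _ in range(steps):
        loss0 = np.mean((signals - dti_signal(theta, bvecs, bval)) ** 2)
        grad = np.zeros(6)
        for i in range(6):
            t = theta.copy()
            t[i] += eps
            lossi = np.mean((signals - dti_signal(t, bvecs, bval)) ** 2)
            grad[i] = (lossi - loss0) / eps  # forward-difference gradient
        theta -= lr * grad
    return theta
```

With enough well-spread gradient directions the six tensor elements are recoverable from the signals alone, which is the sense in which the optimization is self-supervised.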

4.
Med Image Anal ; 95: 103182, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38688039

ABSTRACT

Recently, deep learning-based brain segmentation methods have achieved great success. However, most approaches focus on supervised segmentation, which requires many high-quality labeled images. In this paper, we address one-shot segmentation, aiming to learn from one labeled image and a few unlabeled images. We propose an end-to-end unified network that jointly performs deformation modeling and segmentation. Our network consists of a shared encoder, a deformation-modeling head, and a segmentation head. In the training phase, the atlas and unlabeled images are fed to the encoder to obtain multi-scale features. The features are then passed to the multi-scale deformation-modeling module to estimate the atlas-to-image deformation field. The deformation-modeling module performs the estimation at the feature level in a coarse-to-fine manner. We then employ the field to generate an augmented image pair through online data augmentation. We do not apply any appearance transformations because the shared encoder can capture appearance variations. Finally, we adopt a supervised segmentation loss for the augmented image. Considering that the unlabeled images still contain rich information, we introduce confidence-aware pseudo-labels for them to further boost segmentation performance. We validate our network on three benchmark datasets. Experimental results demonstrate that our network significantly outperforms other deep single-atlas-based and traditional multi-atlas-based segmentation methods. Notably, the second dataset was collected from multiple centers, and our network still achieves promising segmentation performance on both the seen and unseen test sets, revealing its robustness. The source code will be available at https://github.com/zhangliutong/brainseg.


Subjects
Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Deep Learning; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Algorithms; Neuroanatomy
5.
EClinicalMedicine ; 70: 102518, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38495520

ABSTRACT

Background: Effective monitoring and management are crucial during long-term home noninvasive positive pressure ventilation (NPPV) in patients with hypercapnic chronic obstructive pulmonary disease (COPD). This study investigated the benefit of Internet of Things (IoT)-based management of home NPPV. Methods: This multicenter, prospective, parallel-group, randomized controlled non-inferiority trial enrolled patients requiring long-term home NPPV for hypercapnic COPD. Patients were randomly assigned (1:1), via a computer-generated randomization sequence, to standard home management or IoT management based on telemonitoring of clinical and ventilator parameters over 12 months. The intervention was unblinded, but outcome assessment was blinded to management assignment. The primary outcome was the between-group comparison of the change in health-related quality of life, based on severe respiratory insufficiency questionnaire scores, with a non-inferiority margin of -5. This study is registered with the Chinese Clinical Trials Registry (No. ChiCTR1800019536). Findings: Overall, 148 patients (age: 72.7 ± 6.8 years; male: 85.8%; forced expiratory volume in 1 s: 0.7 ± 0.3 L; PaCO2: 66.4 ± 12.0 mmHg), recruited from 11 Chinese hospitals between January 24, 2019, and June 28, 2021, were randomly allocated to the intervention group (n = 73) or the control group (n = 75). At 12 months, the mean severe respiratory insufficiency questionnaire score was 56.5 in the intervention group and 50.0 in the control group (adjusted between-group difference: 6.26 [95% CI, 3.71-8.80]; P < 0.001), satisfying the hypothesis of non-inferiority. The 12-month risk of readmission was 34.3% in the intervention group compared with 56.0% in the control group (adjusted hazard ratio: 0.56; 95% CI, 0.34-0.92; P = 0.023). No severe adverse events were reported.
Interpretation: Among stable patients with hypercapnic COPD, IoT-based management of home NPPV improved health-related quality of life and prolonged the time to readmission. Funding: Air Liquide Healthcare (Beijing) Co., Ltd.

6.
Biomed Opt Express ; 15(2): 506-523, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404328

ABSTRACT

As endoscopic imaging technology advances, there is a growing clinical demand for enhanced imaging capabilities. Although conventional white light imaging (WLI) endoscopy offers realistic images, it often cannot reveal detailed characteristics of the mucosa. Optical staining endoscopy, such as Compound Band Imaging (CBI), can discern subtle structures, serving to some extent as an optical biopsy; however, its image brightness is low and its colors can be abrupt. These two techniques, commonly used in clinical settings, have complementary advantages, but they require different lighting conditions, which makes it challenging to combine their imaging strengths on living tissue. In this study, we introduce a novel endoscopic imaging technique that effectively combines the advantages of WLI and CBI. Physicians no longer need to switch manually between the two observation modes, as the image information of both modes is available in a single image. We calibrated an appropriate proportion for simultaneously delivering the illumination required by WLI and CBI, and designed a new illumination spectrum tailored for gastrointestinal examination, achieving their fusion at the optical level. Using a new algorithm that enhances specific hemoglobin tissue features, we restored the narrow-band image characteristics lost through the introduction of white light. Our hardware and software innovations not only boost the illumination brightness of the endoscope but also preserve the narrow-band feature details of the image. To evaluate the reliability and safety of the new endoscopic system, we conducted a series of tests in line with relevant international standards and validated the design parameters. For clinical trials, we collected 256 sets of images, each comprising images of the same lesion location captured with WLI, CBI, and our proposed method. We recruited four experienced clinicians to evaluate the collected images subjectively. The results affirmed the significant advantages of our method. We believe the novel endoscopic system has vast potential for future clinical application.

7.
Theranostics ; 14(1): 341-362, 2024.
Article in English | MEDLINE | ID: mdl-38164160

ABSTRACT

Minimally invasive diagnosis and therapy have gradually become the trend and a research hotspot in current medical applications. Integrating intraoperative diagnosis and treatment is an important development direction for real-time detection and minimally invasive diagnosis and therapy, reducing mortality and improving patients' quality of life: so-called minimally invasive theranostics (MIT). Light is an important theranostic tool for the treatment of cancerous tissues. Light-mediated minimally invasive theranostics (LMIT) is a novel evolutionary technology that integrates diagnosis and therapeutics for less invasive treatment of diseased tissues. Intelligent theranostics would promote precision surgery based on the optical characterization of cancerous tissues. Furthermore, MIT requires the assistance of smart medical devices or robots, and optical multimodality lays a solid foundation for intelligent MIT. In this review, we summarize the state of the art in optical MIT and LMIT in oncology. Multimodal optical image-guided intelligent treatment is another focus. Intraoperative imaging and real-time analysis-guided optical treatment are also systematically discussed. Finally, the potential challenges and future perspectives of intelligent optical MIT are discussed.


Subjects
Neoplasms; Precision Medicine; Humans; Quality of Life; Neoplasms/diagnosis; Neoplasms/therapy; Theranostic Nanomedicine/methods; Neurosurgical Procedures/methods
8.
Eur Radiol ; 34(3): 1434-1443, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37672052

ABSTRACT

OBJECTIVES: The histologic subtype of intracranial germ cell tumours (IGCTs) is an important factor in deciding the treatment strategy, especially for teratomas. In this study, we aimed to non-invasively diagnose teratomas based on fractal and radiomic features. MATERIALS AND METHODS: This retrospective study included 330 IGCT patients, comprising a discovery set (n = 296) and an independent validation set (n = 34). Fractal and radiomic features were extracted from T1-weighted, T2-weighted, and post-contrast T1-weighted images. Five classifiers (logistic regression, random forests, support vector machines, K-nearest neighbours, and XGBoost) were compared for our task. Based on the optimal classifier, we compared the performance of clinical, fractal, and radiomic models, and of the model combining these features, in predicting teratomas. RESULTS: Among the diagnostic models, the fractal and radiomic models performed better than the clinical model. The final model combining all features showed the best performance, with an area under the curve, precision, sensitivity, and specificity of 0.946 [95% confidence interval (CI): 0.882-0.994], 95.65% (95% CI: 88.64-100%), 88.00% (95% CI: 77.78-96.36%), and 91.67% (95% CI: 78.26-100%), respectively, in the test set of the discovery set, and 0.944 (95% CI: 0.855-1.000), 85.71% (95% CI: 68.18-100%), 94.74% (95% CI: 83.33-100%), and 80.00% (95% CI: 58.33-100%), respectively, in the independent validation set. SHapley Additive exPlanations indicated that two fractal features, two radiomic features, and age were the five features most strongly associated with the presence of teratomas. CONCLUSION: The predictive model including imaging and clinical features could help guide treatment strategies for IGCTs.
CLINICAL RELEVANCE STATEMENT: Our machine learning model including imaging and clinical features can non-invasively predict teratoma components, which could help guide treatment strategies for intracranial germ cell tumours (IGCTs). KEY POINTS: • Fractals and radiomics can quantitatively evaluate the imaging characteristics of intracranial germ cell tumours. • The model combining imaging and clinical features had the best predictive performance. • The diagnostic model could guide treatment strategies for intracranial germ cell tumours.


Subjects
Neoplasms, Germ Cell and Embryonal; Teratoma; Humans; Retrospective Studies; Fractals; Diagnosis, Differential; Radiomics; Neoplasms, Germ Cell and Embryonal/diagnostic imaging; Teratoma/diagnostic imaging; Magnetic Resonance Imaging/methods
9.
Int J Comput Assist Radiol Surg ; 19(2): 331-344, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37603164

ABSTRACT

PURPOSE: White light imaging (WLI) is a commonly used examination mode in endoscopy. The particular light in compound band imaging (CBI) can highlight delicate structures, such as capillaries and tiny structures on the mucosal surface. The two modes complement each other, and doctors switch between them manually to complete the examination. This paper proposes an endoscopy image fusion system to combine WLI and CBI. METHODS: We add a real-time rotatable color wheel to the light-source device of the AQ-200 endoscopy system to achieve rapid imaging of the two modes at the same position of living tissue. The two images correspond at the pixel level, which avoids registration and lays the foundation for image fusion. We propose a multi-scale image fusion framework that involves a Laplacian pyramid (LP) and convolutional sparse representation (CSR) and strengthens details in the fusion rule. RESULTS: Volunteer experiments and ex vivo pig stomach trials were conducted to verify the feasibility of the proposed system. We also conducted comparative experiments with other image fusion methods, evaluated the quality of the fused images, and verified the effectiveness of our fusion framework. The results show that our fused images have rich detail, high color contrast, apparent structures, and clear lesion boundaries. CONCLUSION: An endoscopy image fusion system is proposed that does not change the doctor's operation and makes the fusion of WLI and CBI optical staining a reality. We modified the light-source device of the endoscope, proposed an image fusion framework, and verified the feasibility and effectiveness of our scheme. Our method fully integrates the advantages of WLI and CBI, helping doctors make more accurate judgments than before. The system is of great significance for improving the detection rate of early lesions and has broad application prospects.
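The Laplacian-pyramid half of such a fusion framework can be sketched compactly. The toy version below (NumPy only; the CSR stage and the paper's detail-strengthening rule are omitted, and the binomial kernel and level count are illustrative choices, not the paper's) decomposes both registered images into detail bands, keeps the larger-magnitude coefficient in each band, averages the base band, and reconstructs.

```python
import numpy as np

def _blur(img):
    # Separable 5-tap binomial filter with edge replication.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    h, w = img.shape
    p = np.pad(img, ((2, 2), (0, 0)), mode='edge')
    img = sum(k[i] * p[i:i + h] for i in range(5))
    p = np.pad(img, ((0, 0), (2, 2)), mode='edge')
    return sum(k[i] * p[:, i:i + w] for i in range(5))

def _down(img):
    return _blur(img)[::2, ::2]

def _up(img, shape):
    out = np.zeros(shape)
    out[::2, ::2] = img
    return _blur(out) * 4.0  # restore energy lost to the inserted zeros

def lp_fuse(a, b, levels=3):
    # Gaussian pyramids of both registered inputs.
    ga, gb = [a], [b]
    for _ in range(levels):
        ga.append(_down(ga[-1]))
        gb.append(_down(gb[-1]))
    fused = 0.5 * (ga[-1] + gb[-1])  # base band: average
    for i in range(levels - 1, -1, -1):
        # Laplacian (detail) bands: level minus upsampled next level.
        la = ga[i] - _up(ga[i + 1], ga[i].shape)
        lb = gb[i] - _up(gb[i + 1], gb[i].shape)
        detail = np.where(np.abs(la) >= np.abs(lb), la, lb)  # max-abs rule
        fused = detail + _up(fused, ga[i].shape)
    return fused
```

Because reconstruction exactly mirrors decomposition, fusing an image with itself returns the image unchanged, a convenient sanity check for the pyramid code.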


Subjects
Endoscopy, Gastrointestinal; Endoscopy; Humans; Animals; Swine; Light; Narrow Band Imaging/methods
10.
IEEE Trans Biomed Eng ; 71(3): 1010-1021, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37856261

ABSTRACT

OBJECTIVE: The precise alignment of full and partial 3D point sets is a crucial technique in computer-aided orthopedic surgery but remains a significant challenge. The registration process is complicated by the partial overlap between the full and partial 3D point sets, as well as by the susceptibility of 3D point sets to noise and poor initialization. METHODS: To address these issues, we propose a novel full-to-partial registration framework for computer-aided orthopedic surgery that utilizes reinforcement learning. The framework is both generalized and robust, effectively handling noise, poor initialization, and partial overlap, and it generalizes across various bones, including the pelvis, femur, and tibia. RESULTS: Extensive experimentation on several bone datasets demonstrates that the proposed method achieves a superior C.D. error of 8.211e-05 and consistently outperforms state-of-the-art registration techniques. CONCLUSION AND SIGNIFICANCE: The proposed method can therefore achieve the precise bone alignments required for computer-aided orthopedic surgery.
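The "C.D." reported here reads as a Chamfer distance, the usual error measure for point-set registration; since the abstract does not specify the exact variant, the following is one common symmetric, squared form, sketched alongside the rigid transform it would score.

```python
import numpy as np

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    # mean squared nearest-neighbour distance, summed over both directions.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def apply_rigid(points, rotation, translation):
    # Apply a candidate rigid alignment (R, t) to a point set.
    return points @ rotation.T + translation
```

A registration step, learned or classical, proposes a rigid transform; the Chamfer distance of the transformed partial set against the full bone model is the quantity such a method drives toward zero.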


Subjects
Orthopedic Procedures; Surgery, Computer-Assisted; Algorithms; Pelvis; Surgery, Computer-Assisted/methods; Computers; Imaging, Three-Dimensional/methods
11.
Article in English | MEDLINE | ID: mdl-38059130

ABSTRACT

During minimally invasive surgery (MIS), the laparoscope provides only a single viewpoint to the surgeon, leaving a lack of 3D perception. Many works have sought depth and 3D reconstruction by designing a new optical structure or by relying on the camera pose and image sequences. Most of these works modify the structure of conventional laparoscopes and cannot provide 3D reconstruction at different magnifications. In this study, we propose a laparoscopic system based on double liquid lenses, which provides doctors with variable magnification, near observation, and real-time monocular 3D reconstruction. Our system comprises an optical structure that achieves automatic magnification change and autofocus without any physically moving element, and a deep learning network based on the depth-from-defocus (DFD) method, trained to suit inconsistent camera intrinsics and to estimate depth from images of different focal lengths. The optical structure is portable and can be mounted on conventional laparoscopes. The depth estimation network estimates depth in real time from monocular images of different focal lengths and magnifications. Experiments show that our system provides a 0.68-1.44x zoom and can estimate depth across magnifications at 6 fps. Monocular 3D reconstruction reaches at least 6 mm accuracy, and the system provides a clear view even at a working distance as close as 1 mm. Ex vivo experiments and application to clinical images demonstrate that our system gives doctors a magnified, clear view of the lesion and quick monocular depth perception during laparoscopy, helping surgeons achieve better detection and size diagnosis of the abdomen during laparoscopic surgery.


Subjects
Laparoscopy; Lens, Crystalline; Lenses; Laparoscopes; Laparoscopy/methods; Abdomen
12.
J Opt Soc Am A Opt Image Sci Vis ; 40(12): 2156-2163, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38086024

ABSTRACT

The rendering of specular highlights is a critical aspect of 3D rendering on autostereoscopic displays. However, the conventional highlight rendering techniques on autostereoscopic displays result in depth conflicts between highlights and diffuse surfaces. To address this issue, we propose a viewpoint-dependent highlight depiction method with head tracking, which incorporates microdisparity of highlights in binocular parallax and preserves the motion parallax of highlights. Our method was found to outperform physical highlight depiction and highlight depiction with microdisparity in terms of depth perception and realism, as demonstrated by experimental results. The proposed approach offers a promising alternative to traditional physical highlights on autostereoscopic displays, particularly in applications that require accurate depth perception.

13.
Article in English | MEDLINE | ID: mdl-38083587

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disease. Identifying subjects with mild cognitive impairment (MCI) who will convert to AD is essential for early intervention to slow irreversible brain damage and cognitive decline. In this paper, we propose a novel double-attention-assisted multi-task framework for MCI conversion prediction. We introduce an auxiliary grey-matter segmentation task, with an adaptive dynamic weight averaging strategy to balance the impact of each task. A double-attention module then leverages both the classification and segmentation attention information to guide the network to focus on regions of structural alteration, improving discrimination of AD pathology and increasing the interpretability of the network. Extensive experiments on a publicly available dataset demonstrate that the proposed method significantly outperforms approaches using the same image modality.
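The "adaptive dynamic weight average" used to balance the classification and segmentation tasks is, in its common formulation (Liu et al.'s Dynamic Weight Average, assumed here rather than taken from this paper), computed from each task's recent loss-descent rate:

```python
import numpy as np

def dwa_weights(loss_history, temperature=2.0):
    # loss_history: one array of per-task losses per finished epoch.
    # A task whose loss has stalled (ratio near 1) receives a larger
    # weight than a task whose loss is still dropping fast.
    n_tasks = len(loss_history[-1])
    if len(loss_history) < 2:
        return np.ones(n_tasks)  # equal weights until descent rates exist
    ratio = np.asarray(loss_history[-1], float) / np.asarray(loss_history[-2], float)
    e = np.exp(ratio / temperature)
    return n_tasks * e / e.sum()  # weights sum to the number of tasks
```

The total loss each step would then be `w[0] * loss_classification + w[1] * loss_segmentation`, recomputing the weights once per epoch.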


Subjects
Alzheimer Disease; Brain Injuries; Cognitive Dysfunction; Humans; Magnetic Resonance Imaging/methods; Alzheimer Disease/diagnosis; Alzheimer Disease/pathology; Learning; Cognitive Dysfunction/diagnosis
14.
Eur Radiol ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37926739

ABSTRACT

OBJECTIVES: To investigate the value of diffusion MRI (dMRI) in H3K27M genotyping of brainstem glioma (BSG). METHODS: A primary cohort of BSG patients with dMRI data (b = 0, 1000 and 2000 s/mm2) and H3K27M mutation information was included. A total of 13 diffusion tensor and kurtosis imaging (DTI; DKI) metrics were calculated; then 17 whole-tumor histogram features and 29 along-tract white matter (WM) microstructural measurements were extracted from each metric and assessed within genotypes. After feature selection through univariate analysis and the least absolute shrinkage and selection operator method, multivariate logistic regression was used to build dMRI-derived genotyping models based on the retained tumor and WM features separately and jointly. Model performances were tested using ROC curves and compared by the DeLong approach. A nomogram incorporating the best-performing dMRI model and clinical variables was generated by multivariate logistic regression and validated in an independent cohort of 27 BSG patients. RESULTS: A total of 117 patients (80 H3K27M-mutant) were included in the primary cohort. In total, 29 tumor histogram features and 41 WM tract measurements were selected for subsequent genotyping model construction. Incorporating WM tract measurements significantly improved diagnostic performance (p < 0.05). The model incorporating tumor and WM features from both DKI and DTI metrics showed the best performance (AUC = 0.9311). The nomogram combining this dMRI model and clinical variables achieved AUCs of 0.9321 and 0.8951 in the primary and validation cohorts, respectively. CONCLUSIONS: dMRI is valuable in BSG genotyping. Tumor diffusion histogram features are useful in genotyping, and WM tract measurements are more valuable in improving genotyping performance.
CLINICAL RELEVANCE STATEMENT: This study found that diffusion MRI is valuable for predicting the H3K27M mutation in brainstem glioma, helping to realize noninvasive detection of brainstem glioma genotypes and improve diagnosis. KEY POINTS: • Diffusion MRI has significant value in brainstem glioma H3K27M genotyping, and models with satisfactory performance were built. • Whole-tumor diffusion histogram features are useful in H3K27M genotyping, and quantitative measurements of white matter tracts are valuable for their potential to improve model performance. • The model combining the most discriminative diffusion MRI model and clinical variables can support clinical decision-making.
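The LASSO step of such a feature-selection pipeline can be reproduced in a few lines. This sketch (NumPy only, proximal gradient rather than whatever solver the authors used, with made-up dimensions) shows how the L1 penalty zeroes out uninformative imaging features so that only the retained ones enter the logistic-regression genotyping model:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, steps=5000):
    # Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 via ISTA:
    # a gradient step on the smooth term, then soft-thresholding.
    n, p = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(steps):
        w -= lr * (X.T @ (X @ w - y) / n)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w
```

Features whose coefficient lands exactly at zero are dropped; the surviving columns would then feed the multivariate logistic regression.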

15.
IEEE Trans Med Imaging ; 42(12): 3779-3793, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37695964

ABSTRACT

Accurate ultrasound (US) image segmentation is crucial for the screening and diagnosis of diseases. However, it faces two significant challenges: 1) pixel-level annotation is a time-consuming and laborious process; 2) shadow artifacts lead to missing anatomy and ambiguous boundaries, which undermine reliable segmentation. To address these challenges, we propose a novel semi-supervised shadow-aware network with boundary refinement (SABR-Net). Specifically, we add shadow-imitation regions to the original US images and design shadow-masked transformer blocks to perceive the missing anatomy of shadow regions. The shadow-masked transformer block contains an adaptive shadow-attention mechanism with an adaptive mask that is updated automatically to promote network training. Additionally, we utilize unlabeled US images to train a missing-structure inpainting path with the shadow-masked transformer, which further facilitates semi-supervised segmentation. Experiments on two public US datasets demonstrate the superior performance of SABR-Net over other state-of-the-art semi-supervised segmentation methods. In addition, experiments on a private breast US dataset show that our method generalizes well to small-scale clinical US datasets.


Subjects
Artifacts; Ultrasonography, Mammary; Female; Humans; Ultrasonography; Image Processing, Computer-Assisted
16.
IEEE J Biomed Health Inform ; 27(11): 5381-5392, 2023 11.
Article in English | MEDLINE | ID: mdl-37651479

ABSTRACT

Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment. Irradiating the whole ventricular system plus the local tumor can reduce late-stage complications of radiotherapy while ensuring the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians, and the diverse ventricle shapes and hydrocephalus-induced ventricle dilation increase the difficulty for automatic segmentation algorithms. This study therefore proposes a fully automatic segmentation framework. First, we designed a novel unsupervised learning-based label mapper to handle ventricle shape variations and obtain a preliminary segmentation. Then, to boost segmentation performance, we improved the region-growing algorithm and combined it with a fully connected conditional random field to optimize the preliminary results at both the regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s and the average target segmentation accuracy reaches 84.69%. We further verified the algorithm in practical clinical applications. The results demonstrate that the proposed method helps physicians delineate radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
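The region-growing stage can be sketched in 2D. This minimal version (not the paper's improved variant) floods outward from a seed, accepting 4-connected neighbours whose intensity stays within a tolerance of the seed intensity; a fully connected CRF would then refine such regional output at the voxel scale.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    # Breadth-first flood from `seed`; accept 4-connected neighbours
    # whose intensity is within `tol` of the seed's intensity.
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    ref = img[seed]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

The 3D ventricular case is the same loop with 6-connected neighbours over voxels; the tolerance plays the role of the growth criterion the paper improves upon.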


Subjects
Neoplasms, Germ Cell and Embryonal; Neoplasms; Child; Humans; Adolescent; Unsupervised Machine Learning; Algorithms; Image Processing, Computer-Assisted/methods
17.
Comput Methods Programs Biomed ; 240: 107642, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37480644

ABSTRACT

In ultrasound-guided liver surgery, the lack of large-scale intraoperative ultrasound images with important anatomical structures remains an obstacle to the successful application of AI in ultrasound guidance. In this case, intraoperative ultrasound (iUS) should be simulated from preoperative magnetic resonance (pMR) imaging, which not only helps doctors understand the characteristics of iUS in advance but also expands the iUS dataset across imaging positions, thereby promoting automatic iUS analysis in ultrasound guidance. Here, a novel anatomy-preserving generative adversarial network (ApGAN) framework is proposed to generate simulated intraoperative ultrasound (Sim-iUS) of the liver with precise structural information from pMR. Specifically, a low-rank-factor-based bimodal fusion is first established, focusing on the effective information of the hepatic parenchyma. A deformation-field-based correction module is then introduced to learn and correct the slight structural distortions caused by surgical operations, while multiple loss functions constrain the simulation of content, structure, and style. Empirical results on clinical data showed that the proposed ApGAN obtained a higher Structural Similarity (SSIM) of 0.74 and a Fréchet Inception Distance (FID) of 35.54 compared with existing methods. Furthermore, the average Hausdorff distance (HD) error for the liver capsule structure was less than 0.25 mm, and the average relative Euclidean distance (ED) error for polyps was 0.12 mm, indicating the high precision of ApGAN in simulating anatomical structures and focal areas.


Subjects
Liver; Physicians; Humans; Liver/diagnostic imaging; Liver/surgery; Ultrasonography; Computer Simulation; Learning
18.
Comput Biol Med ; 164: 107248, 2023 09.
Article in English | MEDLINE | ID: mdl-37515875

ABSTRACT

The security of AI systems has gained significant attention in recent years, particularly in medical diagnosis. To develop a secure medical image classification system based on deep neural networks, it is crucial to design effective adversarial attacks that can embed hidden, malicious behaviors into the system. However, designing a unified attack method that generates imperceptible attack samples with high content similarity and applies across diverse medical image classification systems is challenging because of the diversity of medical imaging modalities and dimensionalities. Most existing attack methods are designed against natural image classification models and inevitably corrupt pixel semantics by applying spatial perturbations. To address this issue, we propose a novel frequency-constraint-based adversarial attack method capable of delivering attacks across various medical image classification tasks. Specifically, our method introduces a frequency constraint that injects perturbation into high-frequency information while preserving low-frequency information to ensure content similarity. Our experiments cover four public medical image datasets of different imaging modalities and dimensionalities: a 3D CT dataset, a 2D chest X-ray dataset, a 2D breast ultrasound dataset, and a 2D thyroid ultrasound dataset. The results demonstrate the superior performance of our model over other state-of-the-art adversarial attack methods for attacking medical image classification tasks across these modalities and dimensionalities.


Subjects
Neural Networks, Computer; Semantics; Thorax
19.
Radiother Oncol ; 186: 109789, 2023 09.
Article in English | MEDLINE | ID: mdl-37414255

ABSTRACT

PURPOSE: To establish an individualized predictive model that identifies patients with brainstem gliomas (BSGs) at high risk of H3K27M mutation, incorporating brain structural connectivity analysis based on diffusion MRI (dMRI). MATERIALS AND METHODS: A primary cohort of 133 patients with BSGs (80 H3K27M-mutant) was retrospectively included. All patients underwent preoperative conventional MRI and dMRI. Tumor radiomics features were extracted from conventional MRI, while two kinds of global connectomics features were extracted from dMRI. A machine learning-based individualized H3K27M mutation prediction model combining radiomics and connectomics features was generated with a nested cross-validation strategy. The Relief algorithm and an SVM were used in each outer LOOCV loop to select the most robust and discriminative features. Additionally, two predictive signatures were established using the LASSO method, and simplified logistic models were built with multivariable logistic regression analysis. An independent cohort of 27 patients was used to validate the best model. RESULTS: 35 tumor-related radiomics features, 51 topological properties of brain structural connectivity networks, and 11 microstructural measures along white matter tracts were selected to construct the machine learning-based H3K27M mutation prediction model, which achieved an AUC of 0.9136 in the independent validation set. Radiomics- and connectomics-based signatures were generated, and a simplified combined logistic model was built, from which the derived nomogram achieved an AUC of 0.8827 in the validation cohort. CONCLUSION: dMRI is valuable for predicting H3K27M mutation in BSGs, and connectomics analysis is a promising approach. Combining multiple MRI sequences and clinical features, the established models perform well.
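The nested cross-validation design (feature selection and hyperparameter tuning refit inside each outer leave-one-out fold, so the held-out case never influences model choice) can be sketched with scikit-learn. Relief is not available in scikit-learn, so univariate F-score selection stands in for it here, and random synthetic data stands in for the radiomics/connectomics features; the feature count, `k`, and the C grid are illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 30))                  # 40 "patients", 30 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # two informative features

# Selection and scaling live inside the pipeline so they are refit
# on the training portion of every fold, avoiding leakage.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=5)),
    ("svm", SVC(kernel="linear")),
])

preds = []
for train, test in LeaveOneOut().split(X):     # outer LOOCV loop
    # Inner CV tunes C on the outer-training data only.
    search = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10]}, cv=3)
    search.fit(X[train], y[train])
    preds.append(search.predict(X[test])[0])

acc = float(np.mean(np.array(preds) == y))
```

Each outer-fold prediction is then an unbiased estimate, which is why nested rather than flat cross-validation is the appropriate design when both selection and tuning are data-driven.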


Subjects
Brain Stem Neoplasms; Connectome; Glioma; Humans; Retrospective Studies; Brain Stem Neoplasms/diagnostic imaging; Brain Stem Neoplasms/genetics; Diffusion Magnetic Resonance Imaging; Glioma/diagnostic imaging; Glioma/genetics; Mutation; Magnetic Resonance Imaging
20.
Biosci Trends ; 17(3): 190-192, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37394613

ABSTRACT

Deep learning has brought about a revolution in medical diagnosis and treatment. Its use in healthcare has grown exponentially in recent years, achieving physician-level accuracy in various diagnostic tasks and supporting applications such as electronic health records and clinical voice assistants. The emergence of medical foundation models, as a new approach to deep learning, has greatly improved the reasoning ability of machines. Characterized by large training datasets, context awareness, and multi-domain applicability, medical foundation models can integrate various forms of medical data to provide user-friendly outputs based on a patient's information. They have the potential to unify current diagnostic and treatment systems, offering the ability to understand multi-modal diagnostic information and to reason in real time in complex surgical scenarios. Future research on foundation model-based deep learning methods will focus more on collaboration between physicians and machines. On the one hand, new deep learning methods will reduce physicians' repetitive labor and compensate for shortcomings in their diagnostic and treatment capabilities. On the other hand, physicians need to embrace new deep learning technologies, understand their principles and technical risks, and master the procedures for integrating them into clinical practice. Ultimately, the integration of artificial intelligence analysis with human decision-making will facilitate accurate personalized medical care and enhance physician efficiency.


Subjects
Deep Learning; Physicians; Humans; Artificial Intelligence; Delivery of Health Care