1.
Adv Sci (Weinh) ; : e2307965, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38634608

ABSTRACT

Diffusion magnetic resonance imaging is an important tool for mapping tissue microstructure and structural connectivity non-invasively in the in vivo human brain. Numerous diffusion signal models have been proposed to quantify microstructural properties. Nonetheless, accurate estimation of model parameters is computationally expensive and impeded by image noise. Supervised deep learning-based estimation approaches exhibit efficiency and superior performance but require additional training data and may not be generalizable. To address this problem, a new DIffusion Model OptimizatioN framework using physics-informed and self-supervised Deep learning, entitled "DIMOND", is proposed. DIMOND employs a neural network to map input image data to model parameters and optimizes the network by minimizing the difference between the input acquired data and synthetic data generated via the diffusion model parametrized by the network outputs. DIMOND produces accurate diffusion tensor imaging results and is generalizable across subjects and datasets. Moreover, DIMOND outperforms conventional methods for fitting sophisticated microstructural models, including the kurtosis and NODDI models. Importantly, DIMOND reduces NODDI model fitting time from hours to minutes, or to seconds by leveraging transfer learning. In summary, the self-supervised nature, high efficacy, and efficiency of DIMOND increase the practical feasibility and adoption of microstructure and connectivity mapping in clinical and neuroscientific applications.
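The core idea described above, comparing acquired signals against signals synthesized from predicted model parameters, can be sketched for the diffusion tensor case. This is an illustrative sketch only, not the authors' implementation; the function names and the plain mean-squared loss are our assumptions:

```python
import numpy as np

def dti_signal(s0, d6, bvals, bvecs):
    """Synthesize diffusion signals from tensor parameters.

    d6 holds the six unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz];
    the signal model is S = S0 * exp(-b * g^T D g).
    """
    D = np.array([[d6[0], d6[3], d6[4]],
                  [d6[3], d6[1], d6[5]],
                  [d6[4], d6[5], d6[2]]])
    # g^T D g for each gradient direction (rows of bvecs)
    adc = np.einsum('ij,jk,ik->i', bvecs, D, bvecs)
    return s0 * np.exp(-bvals * adc)

def self_supervised_loss(acquired, s0, d6, bvals, bvecs):
    """Mean squared difference between acquired and synthesized signals;
    a network predicting (s0, d6) would be trained to minimize this."""
    return np.mean((acquired - dti_signal(s0, d6, bvals, bvecs)) ** 2)
```

In this self-supervised setup no ground-truth parameter maps are needed: the acquired data themselves supervise the fit.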

2.
IEEE Trans Med Imaging ; PP2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652607

ABSTRACT

Proximal femoral fracture segmentation in computed tomography (CT) is essential in the preoperative planning of orthopedic surgeons. Recently, numerous deep learning-based approaches have been proposed for segmenting various structures within CT scans. Nevertheless, distinguishing the various attributes of fracture fragments and soft tissue regions in CT scans frequently poses challenges, which have received comparatively limited research attention. Moreover, the cornerstone of contemporary deep learning methodologies is the availability of annotated data, while detailed CT annotations remain scarce. To address these challenges, we propose a novel weakly-supervised framework, namely Rough Turbo Net (RT-Net), for the segmentation of proximal femoral fractures. We emphasize the utilization of human resources to produce rough annotations on a substantial scale, as opposed to relying on limited fine-grained annotations that demand substantial time to create. In RT-Net, rough annotations impose fractured-region constraints, which have demonstrated significant efficacy in enhancing the accuracy of the network, whereas fine annotations can provide more details for recognizing edges and soft tissues. In addition, we design a spatial adaptive attention module (SAAM) that adapts to the spatial distribution of the fracture regions and aligns features in each decoder. Moreover, we propose a fine-edge loss, applied through an edge discrimination network, to penalize absent or imprecise edge features. Extensive quantitative and qualitative experiments demonstrate the superiority of RT-Net over state-of-the-art approaches. Furthermore, additional experiments show that RT-Net can produce pseudo labels for raw CT images that further improve fracture segmentation performance, and has the potential to improve segmentation performance on public datasets. The code is available at: https://github.com/zyairelu/RT-Net.

3.
Med Image Anal ; 95: 103182, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38688039

ABSTRACT

Recently, deep learning-based brain segmentation methods have achieved great success. However, most approaches focus on supervised segmentation, which requires many high-quality labeled images. In this paper, we pay attention to one-shot segmentation, aiming to learn from one labeled image and a few unlabeled images. We propose an end-to-end unified network that jointly handles the deformation modeling and segmentation tasks. Our network consists of a shared encoder, a deformation modeling head, and a segmentation head. In the training phase, the atlas and unlabeled images are input to the encoder to obtain multi-scale features. The features are then fed to the multi-scale deformation modeling module to estimate the atlas-to-image deformation field. The deformation modeling module implements the estimation at the feature level in a coarse-to-fine manner. Then, we employ the field to generate the augmented image pair through online data augmentation. We do not apply any appearance transformations because the shared encoder can capture appearance variations. Finally, we adopt a supervised segmentation loss for the augmented image. Considering that the unlabeled images still contain rich information, we introduce confidence-aware pseudo labels for them to further boost the segmentation performance. We validate our network on three benchmark datasets. Experimental results demonstrate that our network significantly outperforms other deep single-atlas-based and traditional multi-atlas-based segmentation methods. Notably, the second dataset was collected from multiple centers, and our network still achieves promising segmentation performance on both the seen and unseen test sets, revealing its robustness. The source code will be available at https://github.com/zhangliutong/brainseg.

4.
EClinicalMedicine ; 70: 102518, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38495520

ABSTRACT

Background: Effective monitoring and management are crucial during long-term home noninvasive positive pressure ventilation (NPPV) in patients with hypercapnic chronic obstructive pulmonary disease (COPD). This study investigated the benefit of Internet of Things (IOT)-based management of home NPPV. Methods: This multicenter, prospective, parallel-group, randomized controlled non-inferiority trial enrolled patients requiring long-term home NPPV for hypercapnic COPD. Patients were randomly assigned (1:1), via a computer-generated randomization sequence, to standard home management or IOT management based on telemonitoring of clinical and ventilator parameters over 12 months. The intervention was unblinded, but outcome assessment was blinded to management assignment. The primary outcome was the between-group comparison of the change in health-related quality of life, based on severe respiratory insufficiency questionnaire scores with a non-inferiority margin of -5. This study is registered with the Chinese Clinical Trials Registry (No. ChiCTR1800019536). Findings: Overall, 148 patients (age: 72.7 ± 6.8 years; male: 85.8%; forced expiratory volume in 1 s: 0.7 ± 0.3 L; PaCO2: 66.4 ± 12.0 mmHg), recruited from 11 Chinese hospitals between January 24, 2019, and June 28, 2021, were randomly allocated to the intervention group (n = 73) or the control group (n = 75). At 12 months, the mean severe respiratory insufficiency questionnaire score was 56.5 in the intervention group and 50.0 in the control group (adjusted between-group difference: 6.26 [95% CI, 3.71-8.80]; P < 0.001), satisfying the hypothesis of non-inferiority. The 12-month risk of readmission was 34.3% in the intervention group compared with 56.0% in the control group, with an adjusted hazard ratio of 0.56 (95% CI, 0.34-0.92; P = 0.023). No severe adverse events were reported.
Interpretation: Among stable patients with hypercapnic COPD, using IOT-based management for home NPPV improved health-related quality of life and prolonged the time to readmission. Funding: Air Liquide Healthcare (Beijing) Co., Ltd.
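The non-inferiority conclusion above follows mechanically from comparing the lower confidence bound of the between-group difference with the prespecified margin of -5; as a minimal sketch (function name is ours):

```python
def non_inferior(ci_lower, margin=-5.0):
    """Non-inferiority is declared when the lower bound of the
    confidence interval for the between-group difference lies
    above the prespecified margin."""
    return ci_lower > margin

# Reported SRI-questionnaire difference: 6.26 (95% CI 3.71 to 8.80), margin -5.
# Lower bound 3.71 > -5, so non-inferiority holds; since 3.71 > 0 as well,
# the result is in fact consistent with superiority.
```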

5.
Biomed Opt Express ; 15(2): 506-523, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404328

ABSTRACT

As endoscopic imaging technology advances, there is a growing clinical demand for enhanced imaging capabilities. Although conventional white light imaging (WLI) endoscopy offers realistic images, it often cannot reveal detailed characteristics of the mucosa. On the other hand, optical staining endoscopy, such as Compound Band Imaging (CBI), can discern subtle structures, serving to some extent as an optical biopsy. However, its image brightness is low, and the colors can be abrupt. These two techniques, commonly used in clinical settings, have complementary advantages. Nonetheless, they require different lighting conditions, which makes it challenging to combine their imaging strengths on living tissues. In this study, we introduce a novel endoscopic imaging technique that effectively combines the advantages of both WLI and CBI. Doctors no longer need to switch manually between the two observation modes, since the image information of both modes is obtained in a single image. We calibrated an appropriate proportion for simultaneous illumination with the light required for WLI and CBI, and designed a new illumination spectrum tailored for gastrointestinal examination, achieving their fusion at the optical level. Using a new algorithm that focuses on enhancing specific hemoglobin tissue features, we restored the narrow-band image characteristics lost due to the introduction of white light. Our hardware and software innovations not only boost the illumination brightness of the endoscope but also preserve the narrow-band feature details of the image. To evaluate the reliability and safety of the new endoscopic system, we conducted a series of tests in line with relevant international standards and validated the design parameters. For clinical trials, we collected a total of 256 sets of images, each set comprising images of the same lesion location captured using WLI, CBI, and our proposed method.
We recruited four experienced clinicians to conduct subjective evaluations of the collected images. The results affirmed the significant advantages of our method. We believe that the novel endoscopic system we introduced has vast potential for clinical application in the future.

6.
Theranostics ; 14(1): 341-362, 2024.
Article in English | MEDLINE | ID: mdl-38164160

ABSTRACT

Minimally-invasive diagnosis and therapy have gradually become the trend and a research hotspot in current medical applications. The integration of intraoperative diagnosis and treatment is an important development direction for real-time detection and minimally-invasive diagnosis and therapy to reduce mortality and improve patients' quality of life, so-called minimally-invasive theranostics (MIT). Light is an important theranostic tool for the treatment of cancerous tissues. Light-mediated minimally-invasive theranostics (LMIT) is a novel evolutionary technology that integrates diagnosis and therapeutics for the less invasive treatment of diseased tissues. Intelligent theranostics would promote precision surgery based on the optical characterization of cancerous tissues. Furthermore, MIT requires the assistance of smart medical devices or robots, and optical multimodality lays a solid foundation for intelligent MIT. In this review, we summarize the state of the art in optical MIT and LMIT in oncology. Multimodal optical image-guided intelligent treatment is another focus. Intraoperative imaging and real-time analysis-guided optical treatment are also systematically discussed. Finally, the potential challenges and future perspectives of intelligent optical MIT are discussed.


Subject(s)
Neoplasms , Precision Medicine , Humans , Quality of Life , Neoplasms/diagnosis , Neoplasms/therapy , Theranostic Nanomedicine/methods , Neurosurgical Procedures/methods
7.
Int J Comput Assist Radiol Surg ; 19(2): 331-344, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37603164

ABSTRACT

PURPOSE: White light imaging (WLI) is a commonly seen examination mode in endoscopy. The particular light in compound band imaging (CBI) can highlight delicate structures, such as capillaries and tiny structures on the mucosal surface. These two modes complement each other, and doctors switch between them manually to complete the examination. This paper proposes an endoscopy image fusion system to combine WLI and CBI. METHODS: We add a real-time rotatable color wheel in the light source device of the AQ-200 endoscopy system to achieve rapid imaging of the two modes at the same position of living tissue. The two images correspond at the pixel level, which avoids registration and lays the foundation for image fusion. We propose a multi-scale image fusion framework, which combines the Laplacian pyramid (LP) and convolutional sparse representation (CSR) and strengthens details in the fusion rule. RESULTS: Volunteer experiments and ex vivo pig stomach trials are conducted to verify the feasibility of our proposed system. We also conduct comparative experiments with other image fusion methods, evaluate the quality of the fused images, and verify the effectiveness of our fusion framework. The results show that our fused image has rich details, high color contrast, apparent structures, and clear lesion boundaries. CONCLUSION: An endoscopy image fusion system is proposed, which does not change the doctor's operation and makes the fusion of WLI and CBI optical staining technology a reality. We change the light source device of the endoscope, propose an image fusion framework, and verify the feasibility and effectiveness of our scheme. Our method fully integrates the advantages of WLI and CBI, which can help doctors make more accurate judgments than before. The endoscopy image fusion system is of great significance for improving the detection rate of early lesions and has broad application prospects.
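The Laplacian-pyramid side of the fusion framework can be sketched in a simplified form. This illustration uses average pooling in place of Gaussian filtering, uses a max-absolute rule for detail coefficients, and omits the CSR component entirely, so it shows only the pyramid-fusion idea; all names are ours:

```python
import numpy as np

def down(img):
    # 2x2 average pooling (simple stand-in for Gaussian blur + decimation)
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def up(img, shape):
    # nearest-neighbour upsampling back to the finer level's shape
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def lp_fuse(a, b, levels=2):
    """Fuse two images by combining Laplacian details with a max-abs rule
    and averaging the coarsest (low-frequency) base."""
    ga, gb = [a], [b]
    for _ in range(levels):
        ga.append(down(ga[-1]))
        gb.append(down(gb[-1]))
    fused = 0.5 * (ga[-1] + gb[-1])          # fused low-frequency base
    for i in range(levels - 1, -1, -1):
        la = ga[i] - up(ga[i + 1], ga[i].shape)   # Laplacian detail of a
        lb = gb[i] - up(gb[i + 1], gb[i].shape)   # Laplacian detail of b
        detail = np.where(np.abs(la) >= np.abs(lb), la, lb)
        fused = up(fused, ga[i].shape) + detail
    return fused
```

With identical inputs the pyramid reconstructs the image exactly, which is a convenient sanity check for any LP implementation.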


Subject(s)
Endoscopy, Gastrointestinal , Endoscopy , Humans , Animals , Swine , Light , Narrow Band Imaging/methods
8.
IEEE Trans Biomed Eng ; 71(3): 1010-1021, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37856261

ABSTRACT

OBJECTIVE: The precise alignment of full and partial 3D point sets is a crucial technique in computer-aided orthopedic surgery, but remains a significant challenge. This registration process is complicated by the partial overlap between the full and partial 3D point sets, as well as the susceptibility of 3D point sets to noise interference and poor initialization conditions. METHODS: To address these issues, we propose a novel full-to-partial registration framework for computer-aided orthopedic surgery that utilizes reinforcement learning. Our proposed framework is both generalized and robust, effectively handling the challenges of noise, poor initialization, and partial overlap. Moreover, this framework demonstrates exceptional generalization capabilities for various bones, including the pelvis, femurs, and tibias. RESULTS: Extensive experimentation on several bone datasets has demonstrated that the proposed method achieves a superior C.D. error of 8.211e-05 and consistently outperforms state-of-the-art registration techniques. CONCLUSION AND SIGNIFICANCE: Hence, our proposed method is capable of achieving precise bone alignments for computer-aided orthopedic surgery.
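The C.D. error reported above is, we assume, a Chamfer distance between the registered point sets; a minimal sketch of that metric under this assumption (the squared-distance and mean conventions are ours, and implementations vary):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3):
    mean squared distance from each point to its nearest neighbour in the
    other set, summed over both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```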


Subject(s)
Orthopedic Procedures , Surgery, Computer-Assisted , Algorithms , Pelvis , Surgery, Computer-Assisted/methods , Computers , Imaging, Three-Dimensional/methods
9.
Eur Radiol ; 34(3): 1434-1443, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37672052

ABSTRACT

OBJECTIVES: The histologic subtype of intracranial germ cell tumours (IGCTs) is an important factor in deciding the treatment strategy, especially for teratomas. In this study, we aimed to non-invasively diagnose teratomas based on fractal and radiomic features. MATERIALS AND METHODS: This retrospective study included 330 IGCT patients, including a discovery set (n = 296) and an independent validation set (n = 34). Fractal and radiomic features were extracted from T1-weighted, T2-weighted, and post-contrast T1-weighted images. Five classifiers, including logistic regression, random forests, support vector machines, K-nearest neighbours, and XGBoost, were compared for our task. Based on the optimal classifier, we compared the performance of clinical, fractal, and radiomic models and the model combining these features in predicting teratomas. RESULTS: Among the diagnostic models, the fractal and radiomic models performed better than the clinical model. The final model that combined all the features showed the best performance, with an area under the curve, precision, sensitivity, and specificity of 0.946 [95% confidence interval (CI): 0.882-0.994], 95.65% (95% CI: 88.64-100%), 88.00% (95% CI: 77.78-96.36%), and 91.67% (95% CI: 78.26-100%), respectively, in the test set of the discovery set, and 0.944 (95% CI: 0.855-1.000), 85.71% (95% CI: 68.18-100%), 94.74% (95% CI: 83.33-100%), and 80.00% (95% CI: 58.33-100%), respectively, in the independent validation set. SHapley Additive exPlanations indicated that two fractal features, two radiomic features, and age were the top five features highly associated with the presence of teratomas. CONCLUSION: The predictive model including image and clinical features could help guide treatment strategies for IGCTs. 
CLINICAL RELEVANCE STATEMENT: Our machine learning model including image and clinical features can non-invasively predict teratoma components, which could help guide treatment strategies for intracranial germ cell tumours (IGCT). KEY POINTS: • Fractals and radiomics can quantitatively evaluate imaging characteristics of intracranial germ cell tumours. • The model combining imaging and clinical features had the best predictive performance. • The diagnostic model could guide treatment strategies for intracranial germ cell tumours.
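The study does not specify which fractal features were extracted, but box counting is the standard way to estimate a fractal dimension from a segmented image; a minimal sketch (grid sizes and implementation details are ours):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary mask:
    the slope of log N(s) versus log(1/s), where N(s) is the number of
    s-by-s boxes containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        blocks = mask[:h//s*s, :w//s*s].reshape(h//s, s, w//s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled square has dimension 2 and a straight line dimension 1, which makes simple shapes useful sanity checks.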


Subject(s)
Neoplasms, Germ Cell and Embryonal , Teratoma , Humans , Retrospective Studies , Fractals , Diagnosis, Differential , Radiomics , Neoplasms, Germ Cell and Embryonal/diagnostic imaging , Teratoma/diagnostic imaging , Magnetic Resonance Imaging/methods
10.
Article in English | MEDLINE | ID: mdl-38059130

ABSTRACT

During minimally invasive surgery (MIS), the laparoscope only provides a single viewpoint to the surgeon, leaving a lack of 3D perception. Many works have been proposed to obtain depth and 3D reconstruction by designing a new optical structure or by depending on the camera pose and image sequences. Most of these works modify the structure of conventional laparoscopes and cannot provide 3D reconstruction at different magnification views. In this study, we propose a laparoscopic system based on double liquid lenses, which provides doctors with variable magnification rates, near observation, and real-time monocular 3D reconstruction. Our system consists of an optical structure that achieves automatic magnification change and autofocus without any physically moving element, and a deep learning network based on the Depth from Defocus (DFD) method, trained to handle varying camera intrinsics and estimate depth from images of different focal lengths. The optical structure is portable and can be mounted on conventional laparoscopes. The depth estimation network estimates depth in real time from monocular images of different focal lengths and magnification rates. Experiments show that our system provides a 0.68-1.44x zoom rate and can estimate depth at different magnification rates at 6 fps. Monocular 3D reconstruction reaches at least 6 mm accuracy. The system also provides a clear view even at a 1 mm close working distance. Ex vivo experiments and implementation on clinical images demonstrate that our system provides doctors with a magnified, clear view of the lesion, as well as quick monocular depth perception during laparoscopy, helping surgeons achieve better detection and size diagnosis of the abdomen during laparoscopic surgeries.


Subject(s)
Laparoscopy , Lens, Crystalline , Lenses , Laparoscopes , Laparoscopy/methods , Abdomen
11.
J Opt Soc Am A Opt Image Sci Vis ; 40(12): 2156-2163, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38086024

ABSTRACT

The rendering of specular highlights is a critical aspect of 3D rendering on autostereoscopic displays. However, the conventional highlight rendering techniques on autostereoscopic displays result in depth conflicts between highlights and diffuse surfaces. To address this issue, we propose a viewpoint-dependent highlight depiction method with head tracking, which incorporates microdisparity of highlights in binocular parallax and preserves the motion parallax of highlights. Our method was found to outperform physical highlight depiction and highlight depiction with microdisparity in terms of depth perception and realism, as demonstrated by experimental results. The proposed approach offers a promising alternative to traditional physical highlights on autostereoscopic displays, particularly in applications that require accurate depth perception.

12.
Article in English | MEDLINE | ID: mdl-38083587

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disease. Identifying the mild cognitive impairment (MCI) subjects who will convert to AD is essential for early intervention to slow the irreversible brain damage and cognitive decline. In this paper, we propose a novel double-attention assisted multi-task framework for the MCI conversion prediction task. We introduce an auxiliary grey matter segmentation task along with an adaptive dynamic weight averaging strategy to balance the impact of each task. A double-attention module is then incorporated to leverage both the classification and the segmentation attention information, guiding the network to focus more on the structural alteration regions for better discrimination of AD pathology and increasing the interpretability of the network. Extensive experiments on a publicly available dataset demonstrate that the proposed method significantly outperforms approaches using the same image modality.
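The "adaptive dynamic weight average strategy" is presumably in the spirit of dynamic weight averaging for multi-task losses, which reweights tasks by how quickly their losses are decreasing; a sketch under that assumption (temperature value and function names are ours):

```python
import math

def dwa_weights(prev_losses, prev_prev_losses, temperature=2.0):
    """Dynamic weight averaging for K task losses: tasks whose loss is
    decreasing more slowly (higher loss ratio) receive larger weights.
    Weights are normalized so they sum to K."""
    k = len(prev_losses)
    ratios = [l1 / l2 for l1, l2 in zip(prev_losses, prev_prev_losses)]
    exps = [math.exp(r / temperature) for r in ratios]
    total = sum(exps)
    return [k * e / total for e in exps]
```

When all tasks improve at the same rate, every task gets weight 1, recovering a plain sum of losses.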


Subject(s)
Alzheimer Disease , Brain Injuries , Cognitive Dysfunction , Humans , Magnetic Resonance Imaging/methods , Alzheimer Disease/diagnosis , Alzheimer Disease/pathology , Learning , Cognitive Dysfunction/diagnosis
13.
Eur Radiol ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37926739

ABSTRACT

OBJECTIVES: To investigate the value of diffusion MRI (dMRI) in H3K27M genotyping of brainstem glioma (BSG). METHODS: A primary cohort of BSG patients with dMRI data (b = 0, 1000 and 2000 s/mm2) and H3K27M mutation information were included. A total of 13 diffusion tensor and kurtosis imaging (DTI; DKI) metrics were calculated, then 17 whole-tumor histogram features and 29 along-tract white matter (WM) microstructural measurements were extracted from each metric and assessed within genotypes. After feature selection through univariate analysis and the least absolute shrinkage and selection operator method, multivariate logistic regression was used to build dMRI-derived genotyping models based on retained tumor and WM features separately and jointly. Model performances were tested using ROC curves and compared by the DeLong approach. A nomogram incorporating the best-performing dMRI model and clinical variables was generated by multivariate logistic regression and validated in an independent cohort of 27 BSG patients. RESULTS: A total of 117 patients (80 H3K27M-mutant) were included in the primary cohort. In total, 29 tumor histogram features and 41 WM tract measurements were selected for subsequent genotyping model construction. Incorporating WM tract measurements significantly improved diagnostic performances (p < 0.05). The model incorporating tumor and WM features from both DKI and DTI metrics showed the best performance (AUC = 0.9311). The nomogram combining this dMRI model and clinical variables achieved AUCs of 0.9321 and 0.8951 in the primary and validation cohort respectively. CONCLUSIONS: dMRI is valuable in BSG genotyping. Tumor diffusion histogram features are useful in genotyping, and WM tract measurements are more valuable in improving genotyping performance.
CLINICAL RELEVANCE STATEMENT: This study found that diffusion MRI is valuable in predicting H3K27M mutation in brainstem gliomas, which could enable noninvasive detection of brainstem glioma genotypes and improve the diagnosis of brainstem glioma. KEY POINTS: • Diffusion MRI has significant value in brainstem glioma H3K27M genotyping, and models with satisfactory performances were built. • Whole-tumor diffusion histogram features are useful in H3K27M genotyping, and quantitative measurements of white matter tracts are valuable as they have the potential to improve model performance. • The model combining the most discriminative diffusion MRI model and clinical variables can help support clinical decision-making.

14.
IEEE Trans Med Imaging ; 42(12): 3779-3793, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37695964

ABSTRACT

Accurate ultrasound (US) image segmentation is crucial for the screening and diagnosis of diseases. However, it faces two significant challenges: 1) pixel-level annotation is a time-consuming and laborious process; 2) the presence of shadow artifacts leads to missing anatomy and ambiguous boundaries, which negatively impact segmentation reliability. To address these challenges, we propose a novel semi-supervised shadow-aware network with boundary refinement (SABR-Net). Specifically, we add shadow imitation regions to the original US images, and design shadow-masked transformer blocks to perceive the missing anatomy of shadow regions. The shadow-masked transformer block contains an adaptive shadow attention mechanism that introduces an adaptive mask, which is updated automatically to promote network training. Additionally, we utilize unlabeled US images to train a missing-structure inpainting path with the shadow-masked transformer, which further facilitates semi-supervised segmentation. Experiments on two public US datasets demonstrate the superior performance of SABR-Net over other state-of-the-art semi-supervised segmentation methods. In addition, experiments on a private breast US dataset prove that our method generalizes well to clinical small-scale US datasets.


Subject(s)
Artifacts , Ultrasonography, Mammary , Female , Humans , Ultrasonography , Image Processing, Computer-Assisted
15.
IEEE J Biomed Health Inform ; 27(11): 5381-5392, 2023 11.
Article in English | MEDLINE | ID: mdl-37651479

ABSTRACT

Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment methods. Radiation of the whole ventricle system and the local tumor can reduce the complications in the late stage of radiotherapy while ensuring the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians. The diverse ventricle shapes and the hydrocephalus-induced ventricle dilation increase the difficulty of automatic segmentation algorithms. Therefore, this study proposed a fully automatic segmentation framework. Firstly, we designed a novel unsupervised learning-based label mapper, which is used to handle the ventricle shape variations and obtain the preliminary segmentation result. Then, to boost the segmentation performance of the framework, we improved the region growth algorithm and combined the fully connected conditional random field to optimize the preliminary results at both the regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s, and the average target segmentation accuracy can reach 84.69%. Furthermore, we verified the algorithm in practical clinical applications. The results demonstrate that our proposed method is beneficial for physicians delineating radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
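The improved region-growth step is not specified in detail; for orientation, a baseline 4-connected region-growing pass looks like the following (the fixed-tolerance acceptance rule and all names are our assumptions, not the authors' refinement):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, accepting 4-connected neighbours
    whose intensity differs from the seed value by at most tol."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    grown, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in grown
                    and abs(image[nr][nc] - seed_val) <= tol):
                grown.add((nr, nc))
                queue.append((nr, nc))
    return grown
```

In 3D the same breadth-first expansion runs over 6-connected voxel neighbourhoods.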


Subject(s)
Neoplasms, Germ Cell and Embryonal , Neoplasms , Child , Humans , Adolescent , Unsupervised Machine Learning , Algorithms , Image Processing, Computer-Assisted/methods
16.
Biosci Trends ; 17(3): 190-192, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37394613

ABSTRACT

Deep learning has brought about a revolution in the field of medical diagnosis and treatment. The use of deep learning in healthcare has grown exponentially in recent years, achieving physician-level accuracy in various diagnostic tasks and supporting applications such as electronic health records and clinical voice assistants. The emergence of medical foundation models, as a new approach to deep learning, has greatly improved the reasoning ability of machines. Characterized by large training datasets, context awareness, and multi-domain applications, medical foundation models can integrate various forms of medical data to provide user-friendly outputs based on a patient's information. Medical foundation models have the potential to integrate current diagnostic and treatment systems, providing the ability to understand multi-modal diagnostic information and real-time reasoning ability in complex surgical scenarios. Future research on foundation model-based deep learning methods will focus more on the collaboration between physicians and machines. On the one hand, developing new deep learning methods will reduce the repetitive labor of physicians and compensate for shortcomings in their diagnostic and treatment capabilities. On the other hand, physicians need to embrace new deep learning technologies, comprehend the principles and technical risks of deep learning methods, and master the procedures for integrating them into clinical practice. Ultimately, the integration of artificial intelligence analysis with human decision-making will facilitate accurate personalized medical care and enhance the efficiency of physicians.


Subject(s)
Deep Learning , Physicians , Humans , Artificial Intelligence , Delivery of Health Care
17.
Radiother Oncol ; 186: 109789, 2023 09.
Article in English | MEDLINE | ID: mdl-37414255

ABSTRACT

PURPOSE: To establish an individualized predictive model to identify patients with brainstem gliomas (BSGs) at high risk of H3K27M mutation, with the inclusion of brain structural connectivity analysis based on diffusion MRI (dMRI). MATERIALS AND METHODS: A primary cohort of 133 patients with BSGs (80 H3K27M-mutant) was retrospectively included. All patients underwent preoperative conventional MRI and dMRI. Tumor radiomics features were extracted from conventional MRI, while two kinds of global connectomics features were extracted from dMRI. A machine learning-based individualized H3K27M mutation prediction model combining radiomics and connectomics features was generated with a nested cross-validation strategy. The Relief algorithm and SVM method were used in each outer LOOCV loop to select the most robust and discriminative features. Additionally, two predictive signatures were established using the LASSO method, and simplified logistic models were built using multivariable logistic regression analysis. An independent cohort of 27 patients was used to validate the best model. RESULTS: A total of 35 tumor-related radiomics features, 51 topological properties of brain structural connectivity networks, and 11 microstructural measures along white matter tracts were selected to construct a machine learning-based H3K27M mutation prediction model, which achieved an AUC of 0.9136 in the independent validation set. Radiomics- and connectomics-based signatures were generated, a simplified combined logistic model was built, and the derived nomogram achieved an AUC of 0.8827 in the validation cohort. CONCLUSION: dMRI is valuable in predicting H3K27M mutation in BSGs, and connectomics analysis is a promising approach. Combining multiple MRI sequences and clinical features, the established models achieve good performance.
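AUC values like those reported here can be computed directly from raw classifier scores via the Mann-Whitney formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A generic sketch, unrelated to the authors' exact evaluation code:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive scores
    higher; ties count one half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

The O(N*M) double loop is fine for cohorts of this size; rank-based implementations scale better for large datasets.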


Subject(s)
Brain Stem Neoplasms , Connectome , Glioma , Humans , Retrospective Studies , Brain Stem Neoplasms/diagnostic imaging , Brain Stem Neoplasms/genetics , Diffusion Magnetic Resonance Imaging , Glioma/diagnostic imaging , Glioma/genetics , Mutation , Magnetic Resonance Imaging
18.
Comput Biol Med ; 164: 107248, 2023 09.
Article in English | MEDLINE | ID: mdl-37515875

ABSTRACT

The security of AI systems has gained significant attention in recent years, particularly in the field of medical diagnosis. To develop a secure medical image classification system based on deep neural networks, it is crucial to design effective adversarial attacks that can embed hidden, malicious behaviors into the system. However, designing a unified attack method that generates imperceptible attack samples with high content similarity and applies to diverse medical image classification systems is challenging due to the diversity of medical imaging modalities and dimensionalities. Most existing attack methods are designed to attack natural image classification models and inevitably corrupt the semantics of pixels by applying spatial perturbations. To address this issue, we propose a novel frequency constraint-based adversarial attack method capable of delivering attacks across various medical image classification tasks. Specifically, our method introduces a frequency constraint to inject perturbation into high-frequency information while preserving low-frequency information, ensuring content similarity. Our experiments cover four public medical image datasets of different imaging modalities and dimensionalities: a 3D CT dataset, a 2D chest X-ray image dataset, a 2D breast ultrasound dataset, and a 2D thyroid ultrasound dataset. The results demonstrate the superior performance of our model over other state-of-the-art adversarial attack methods for attacking medical image classification tasks across these modalities and dimensionalities.
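The core idea described above, perturbing only high-frequency spectral content while leaving low frequencies (which carry most visible image content) untouched, can be illustrated with a toy frequency-domain edit. This is a generic sketch of the constraint, not the paper's attack; the disc radius and noise scale are arbitrary assumptions:

```python
# Toy frequency-constrained perturbation: noise is added to an image's 2D
# spectrum only outside a centred low-frequency disc, so low-frequency
# content (including the DC term, i.e. the image mean) is preserved.
import numpy as np

def high_freq_perturb(img, noise, radius_frac=0.25):
    """Add `noise` to `img` in the frequency domain, outside a low-freq disc."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    N = np.fft.fftshift(np.fft.fft2(noise))
    yy, xx = np.mgrid[:h, :w]
    low = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= (radius_frac * min(h, w)) ** 2
    F_adv = np.where(low, F, F + N)  # perturb high frequencies only
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_adv)))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
adv = high_freq_perturb(img, 0.05 * rng.standard_normal((64, 64)))
```

Because the DC component sits inside the preserved disc, the perturbed image keeps the original mean intensity, which is one simple way such a constraint limits visible content change.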


Subject(s)
Neural Networks, Computer , Semantics , Thorax
19.
Comput Methods Programs Biomed ; 240: 107642, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37480644

ABSTRACT

In ultrasound-guided liver surgery, the lack of large-scale intraoperative ultrasound images with important anatomical structures remains an obstacle to the successful application of AI in ultrasound guidance. In this case, intraoperative ultrasound (iUS) simulation should be conducted from preoperative magnetic resonance (pMR) imaging, which not only helps doctors understand the characteristics of iUS in advance, but also expands the iUS dataset across various imaging positions, thereby promoting automatic iUS analysis in ultrasound guidance. Herein, a novel anatomy-preserving generative adversarial network (ApGAN) framework was proposed to generate simulated intraoperative ultrasound (Sim-iUS) of the liver with precise structural information from pMR. Specifically, a low-rank-factor-based bimodal fusion was first established, focusing on the effective information of the hepatic parenchyma. Then, a deformation-field-based correction module was introduced to learn and correct the slight structural distortion caused by surgical operations. Meanwhile, multiple loss functions were designed to constrain the simulation of content, structures, and style. Empirical results on clinical data showed that the proposed ApGAN obtained a higher Structural Similarity (SSIM) of 0.74 and a better Fréchet Inception Distance (FID) of 35.54 than existing methods. Furthermore, the average Hausdorff Distance (HD) error of the liver capsule structure was less than 0.25 mm, and the average relative Euclidean Distance (ED) error for polyps was 0.12 mm, indicating the high precision of ApGAN in simulating anatomical structures and focal areas.
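SSIM, one of the two image-quality metrics reported above, compares luminance, contrast, and structure between a real and a simulated frame. A minimal self-contained sketch of the statistic on synthetic arrays (the standard implementation averages it over local Gaussian windows; this single-window version is a simplification, and the arrays stand in for real/simulated iUS frames):

```python
# Single-window (global) SSIM between two images in [0, 1].
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global SSIM; library implementations average this over local windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(2)
real = rng.random((128, 128))
sim = np.clip(real + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0)
score = global_ssim(real, sim)
```

Identical images score exactly 1, and the statistic never exceeds 1, which is why an SSIM of 0.74 can be read as "substantially similar but not identical."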


Subject(s)
Liver , Physicians , Humans , Liver/diagnostic imaging , Liver/surgery , Ultrasonography , Computer Simulation , Learning
20.
Comput Methods Programs Biomed ; 240: 107605, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37390795

ABSTRACT

PURPOSE: A capsule robot can be controlled inside the gastrointestinal (GI) tract by an external permanent magnet outside the human body to perform non-invasive diagnosis and treatment. Locomotion control of the capsule robot relies on precise angle feedback, which can be achieved by ultrasound imaging. However, ultrasound-based angle estimation of the capsule robot is interfered with by gastric wall tissue and the mixture of air, water, and digestive matter in the stomach. METHODS: To tackle these issues, we introduce a heatmap-guided two-stage network to detect the position and estimate the angle of the capsule robot in ultrasound images. Specifically, this network employs a probability distribution module and skeleton extraction-based angle calculation to obtain accurate position and angle estimates. RESULTS: Extensive experiments were conducted on an ultrasound image dataset of the capsule robot within a porcine stomach. Empirical results showed that our method obtained a small position center error of 0.48 mm and a high angle estimation accuracy of 96.32%. CONCLUSION: Our method can provide precise angle feedback for locomotion control of the capsule robot.
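The final step the abstract describes is turning a segmented capsule region into an orientation angle. A simplified stand-in for that step: take the principal axis of the segmented pixels' coordinates via their covariance matrix (the paper's actual method uses skeleton extraction, which this sketch does not reproduce; the synthetic 30-degree line below stands in for a segmented capsule):

```python
# Orientation of a binary mask from the principal axis of its pixel coordinates.
import numpy as np

def mask_angle_deg(mask):
    """Orientation in degrees, in [0, 180), of a binary mask's principal axis."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    cov = pts @ pts.T / pts.shape[1]
    evals, evecs = np.linalg.eigh(cov)
    vx, vy = evecs[:, np.argmax(evals)]     # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(vy, vx)) % 180.0

# Synthetic elongated region at ~30 degrees (image row index as y):
mask = np.zeros((100, 100), dtype=bool)
t = np.arange(60)
mask[20 + (t * np.tan(np.radians(30))).astype(int), 20 + t] = True
angle = mask_angle_deg(mask)
```

The modulo-180 step matters because an eigenvector's sign is arbitrary: a capsule axis at 30 degrees and its flipped vector at 210 degrees describe the same orientation.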


Subject(s)
Robotics , Animals , Swine , Humans , Robotics/methods , Ultrasonography