Results 1 - 20 of 167
1.
IEEE Trans Biomed Eng ; PP2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968023

ABSTRACT

Oral diseases have imposed a heavy social and financial burden on many countries and regions. If left untreated, severe cases can lead to malignant tumours. Common devices can no longer meet the requirements for high-resolution, non-invasive imaging, whereas Optical Coherence Tomography Angiography (OCTA) provides an ideal perspective for detecting vascular microcirculation. However, acquiring high-quality OCTA images takes time and can introduce unpredictable motion artefacts. We therefore propose a systematic workflow for rapid OCTA data acquisition. First, we implement a fourfold reduction in sampling points to increase the scanning speed. Then, we apply a deep neural network for rapid image reconstruction, raising the resolution to the level achieved through full scanning. Specifically, it is a hybrid attention model with a structure-aware loss that extracts local and global angiographic information, improving on numerous classical and recently proposed models by 3.536%-9.943% in SSIM and 0.930%-2.946% in MS-SSIM. With this approach, the time needed to construct one OCTA volume is reduced from nearly 30 s to about 3 s. This rapid-scanning protocol for high-quality imaging also demonstrates feasibility for future real-time detection applications.
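
The reported gains are measured in SSIM. As a rough illustration (not the paper's implementation, which would use windowed SSIM as in standard libraries), a single-window SSIM over whole-image statistics can be sketched as:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) SSIM between two images.

    Library implementations (e.g. scikit-image) average SSIM over local
    windows; this simplified sketch uses whole-image statistics.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; the score drops as structure diverges, which is what makes the metric useful for comparing reconstructions against fully sampled scans.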

2.
Int J Med Robot ; 20(4): e2664, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38994900

ABSTRACT

BACKGROUND: This study aimed to develop a novel deep convolutional neural network called the Dual-path Double Attention Transformer (DDA-Transformer), designed to achieve precise and fast knee joint CT image segmentation, and to validate it in robotic-assisted total knee arthroplasty (TKA). METHODS: Femoral, tibial, patellar, and fibular segmentation performance and speed were evaluated, and the accuracy of component sizing, bone resection and alignment of the robotic-assisted TKA system built on this deep learning network was clinically validated. RESULTS: Overall, DDA-Transformer outperformed six other networks in terms of the Dice coefficient, intersection over union, average surface distance, and Hausdorff distance. DDA-Transformer exhibited significantly faster segmentation than nnUnet, TransUnet and 3D-Unet (p < 0.01). Furthermore, the robotic-assisted TKA system outperformed the manual group in surgical accuracy. CONCLUSIONS: DDA-Transformer exhibited significantly improved accuracy and robustness in knee joint segmentation, and this convenient and stable segmentation network significantly improved the accuracy of the TKA procedure.
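
The segmentation metrics named above, the Dice coefficient and intersection over union, can be computed for binary masks as follows; this is a generic sketch, not code from the DDA-Transformer evaluation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over union (Jaccard index) for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Both range over [0, 1] with 1 meaning perfect overlap; Dice weighs the intersection more heavily, which is why the two metrics are usually reported together.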


Subject(s)
Arthroplasty, Replacement, Knee; Deep Learning; Knee Joint; Robotic Surgical Procedures; Tomography, X-Ray Computed; Humans; Arthroplasty, Replacement, Knee/methods; Robotic Surgical Procedures/methods; Tomography, X-Ray Computed/methods; Knee Joint/surgery; Knee Joint/diagnostic imaging; Male; Neural Networks, Computer; Female; Image Processing, Computer-Assisted/methods; Surgery, Computer-Assisted/methods; Aged; Reproducibility of Results; Middle Aged; Tibia/surgery; Tibia/diagnostic imaging; Algorithms; Femur/surgery; Femur/diagnostic imaging; Imaging, Three-Dimensional/methods
3.
Article in English | MEDLINE | ID: mdl-38862746

ABSTRACT

PURPOSE: Tracheal intubation is the gold standard of airway protection and constitutes a pivotal life-saving technique frequently employed in emergency medical interventions. Hence, in this paper, a system is designed to execute tracheal intubation tasks automatically, offering a safer and more efficient solution and thereby alleviating the burden on physicians. METHODS: The system comprises a tracheal tube with a bendable front end, a drive system, and a tip endoscope. The soft actuator provides two degrees of freedom for precise orientation. It is fabricated with varying-hardness silicone and reinforced with fibers and spiral steel wire for flexibility and safety. The hydraulic actuation system and tube feeding mechanism enable controlled bending and delivery. Object detection of key anatomical features guides the robotic arm and soft actuator. The control strategy involves visual servo control for coordinated robotic arm and soft actuator movements, ensuring accurate and safe tracheal intubation. RESULTS: The kinematics of the soft actuator were established using a constant curvature model, allowing simulation of its workspace. In experiments, the actuator achieved 90° bending as well as 20° deflection to the left and right. The maximum insertion force of the tube is 2 N. Autonomous tracheal intubation experiments on a training manikin succeeded in all 10 trials, with an average insertion time of 45.6 s. CONCLUSION: Experimental validation on the manikin demonstrated that the robot tracheal intubation system based on a soft actuator was able to perform safe, stable, and automated tracheal intubation. In summary, this paper proposed a safe and automated robot-assisted tracheal intubation system based on a soft actuator, showing considerable potential for clinical applications.
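
The constant curvature model mentioned in the results gives the actuator tip position in closed form. The sketch below assumes a single in-plane bending segment of known arc length, a simplification of the paper's two-degree-of-freedom actuator:

```python
import numpy as np

def cc_tip_position(length, theta):
    """Tip position (x, z) of a constant-curvature segment bending in-plane.

    length: arc length of the segment; theta: total bending angle (rad).
    The segment is assumed to bend along a circular arc of radius length/theta.
    """
    if abs(theta) < 1e-9:          # straight configuration
        return 0.0, length
    r = length / theta             # radius of curvature
    x = r * (1.0 - np.cos(theta))  # lateral deflection
    z = r * np.sin(theta)          # axial reach
    return x, z
```

Sweeping theta over the actuator's reported range (up to 90°) traces out the reachable workspace that the paper simulates.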

4.
IEEE Trans Med Imaging ; PP2024 May 27.
Article in English | MEDLINE | ID: mdl-38801692

ABSTRACT

Dynamic contrast-enhanced ultrasound (CEUS) imaging can reflect the microvascular distribution and blood flow perfusion, thereby holding clinical significance in distinguishing between malignant and benign thyroid nodules. Notably, CEUS offers a meticulous visualization of the microvascular distribution surrounding the nodule, leading to an apparent increase in tumor size compared to gray-scale ultrasound (US). In the dual images obtained, the lesion appears larger in CEUS than in gray-scale US, as the microvasculature appears to infiltrate the surrounding tissue continuously. Although this infiltrative dilatation of the microvasculature remains ambiguous, sonographers believe it may aid the diagnosis of thyroid nodules. We propose a deep learning model designed to emulate the diagnostic reasoning process employed by sonographers. This model integrates the observation of microvascular infiltration on dynamic CEUS, leveraging the additional insights provided by gray-scale US for enhanced diagnostic support. Specifically, temporal projection attention is implemented on the time dimension of dynamic CEUS to represent the microvascular perfusion. Additionally, we employ a group of confidence maps with flexible Sigmoid Alpha Functions to perceive and describe the infiltrative dilatation process. Moreover, a self-adaptive integration mechanism is introduced to dynamically integrate the assisting gray-scale US and the confidence maps of CEUS for individual patients, ensuring a trustworthy diagnosis of thyroid nodules. In this retrospective study, we collected a thyroid nodule dataset of 282 CEUS videos. The method achieves a superior diagnostic accuracy and sensitivity of 89.52% and 93.75%, respectively. These results suggest that imitating the diagnostic thinking of sonographers, encompassing dynamic microvascular perfusion and infiltrative expansion, proves beneficial for CEUS-based thyroid nodule diagnosis.

5.
Adv Sci (Weinh) ; 11(24): e2307965, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38634608

ABSTRACT

Diffusion magnetic resonance imaging is an important tool for mapping tissue microstructure and structural connectivity non-invasively in the in vivo human brain. Numerous diffusion signal models have been proposed to quantify microstructural properties. Nonetheless, accurate estimation of model parameters is computationally expensive and impeded by image noise. Supervised deep learning-based estimation approaches are efficient and exhibit superior performance but require additional training data and may not be generalizable. A new DIffusion Model OptimizatioN framework using physics-informed and self-supervised Deep learning, entitled "DIMOND", is proposed to address this problem. DIMOND employs a neural network to map input image data to model parameters and optimizes the network by minimizing the difference between the input acquired data and synthetic data generated via the diffusion model parametrized by network outputs. DIMOND produces accurate diffusion tensor imaging results and is generalizable across subjects and datasets. Moreover, DIMOND outperforms conventional methods for fitting sophisticated microstructural models, including the kurtosis and NODDI models. Importantly, DIMOND reduces NODDI model fitting time from hours to minutes, or to seconds by leveraging transfer learning. In summary, the self-supervised manner, high efficacy, and efficiency of DIMOND increase the practical feasibility and adoption of microstructure and connectivity mapping in clinical and neuroscientific applications.
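
DIMOND's self-supervised objective compares acquired signals with signals synthesized from predicted model parameters. A minimal numpy sketch of that idea for the diffusion tensor model follows; here the parameters are passed in directly rather than produced by a network, and the standard mono-exponential DTI forward model is assumed:

```python
import numpy as np

def dti_signal(s0, D, bvals, bvecs):
    """Synthesize DTI signals: S = S0 * exp(-b * g^T D g).

    D: 3x3 diffusion tensor; bvals: (N,) b-values; bvecs: (N, 3) gradients.
    """
    adc = np.einsum('ij,jk,ik->i', bvecs, D, bvecs)  # g^T D g per direction
    return s0 * np.exp(-bvals * adc)

def self_supervised_loss(measured, s0, D, bvals, bvecs):
    """Mean squared error between acquired and synthesized signals --
    the kind of quantity a DIMOND-style network would minimize."""
    synth = dti_signal(s0, D, bvals, bvecs)
    return np.mean((measured - synth) ** 2)
```

Because the loss needs only the acquired data and the physical forward model, no ground-truth parameter maps are required, which is what makes the approach self-supervised.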


Subject(s)
Brain; Deep Learning; Humans; Brain/diagnostic imaging; Diffusion Tensor Imaging/methods; Diffusion Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods
6.
IEEE Trans Med Imaging ; PP2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652607

ABSTRACT

Proximal femoral fracture segmentation in computed tomography (CT) is essential in the preoperative planning of orthopedic surgeons. Recently, numerous deep learning-based approaches have been proposed for segmenting various structures within CT scans. Nevertheless, distinguishing the varied attributes of fracture fragments and soft tissue regions in CT scans frequently poses challenges, which have received comparatively limited research attention. Moreover, the cornerstone of contemporary deep learning methodologies is the availability of annotated data, yet detailed CT annotations remain scarce. To address these challenges, we propose a novel weakly-supervised framework, namely Rough Turbo Net (RT-Net), for the segmentation of proximal femoral fractures. We emphasize the use of human resources to produce rough annotations at scale, as opposed to relying on limited fine-grained annotations that demand substantial time to create. In RT-Net, rough annotations impose fractured-region constraints, which have demonstrated significant efficacy in enhancing the accuracy of the network. Conversely, fine annotations provide more detail for recognizing edges and soft tissues. In addition, we design a spatial adaptive attention module (SAAM) that adapts to the spatial distribution of the fracture regions and aligns features in each decoder. Moreover, we propose a fine-edge loss, applied through an edge discrimination network, to penalize absent or imprecise edge features. Extensive quantitative and qualitative experiments demonstrate the superiority of RT-Net over state-of-the-art approaches. Furthermore, additional experiments show that RT-Net can produce pseudo labels for raw CT images that further improve fracture segmentation performance, and it has the potential to improve segmentation performance on public datasets. The code is available at: https://github.com/zyairelu/RT-Net.

7.
Med Image Anal ; 95: 103182, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38688039

ABSTRACT

Recently, deep learning-based brain segmentation methods have achieved great success. However, most approaches focus on supervised segmentation, which requires many high-quality labeled images. In this paper, we pay attention to one-shot segmentation, aiming to learn from one labeled image and a few unlabeled images. We propose an end-to-end unified network that jointly performs the deformation modeling and segmentation tasks. Our network consists of a shared encoder, a deformation modeling head, and a segmentation head. In the training phase, the atlas and unlabeled images are input to the encoder to obtain multi-scale features. The features are then fed to the multi-scale deformation modeling module to estimate the atlas-to-image deformation field. The deformation modeling module implements the estimation at the feature level in a coarse-to-fine manner. Then, we employ the field to generate the augmented image pair through online data augmentation. We do not apply any appearance transformations because the shared encoder can capture appearance variations. Finally, we adopt a supervised segmentation loss for the augmented image. Considering that the unlabeled images still contain rich information, we introduce confidence-aware pseudo labels for them to further boost the segmentation performance. We validate our network on three benchmark datasets. Experimental results demonstrate that our network significantly outperforms other deep single-atlas-based and traditional multi-atlas-based segmentation methods. Notably, the second dataset was collected from multiple centers, and our network still achieves promising segmentation performance on both the seen and unseen test sets, revealing its robustness. The source code will be available at https://github.com/zhangliutong/brainseg.


Subject(s)
Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Deep Learning; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Algorithms; Neuroanatomy
8.
EClinicalMedicine ; 70: 102518, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38495520

ABSTRACT

Background: Effective monitoring and management are crucial during long-term home noninvasive positive pressure ventilation (NPPV) in patients with hypercapnic chronic obstructive pulmonary disease (COPD). This study investigated the benefit of Internet of Things (IoT)-based management of home NPPV. Methods: This multicenter, prospective, parallel-group, randomized controlled non-inferiority trial enrolled patients requiring long-term home NPPV for hypercapnic COPD. Patients were randomly assigned (1:1), via a computer-generated randomization sequence, to standard home management or IoT management based on telemonitoring of clinical and ventilator parameters over 12 months. The intervention was unblinded, but outcome assessment was blinded to management assignment. The primary outcome was the between-group comparison of the change in health-related quality of life, based on severe respiratory insufficiency questionnaire scores, with a non-inferiority margin of -5. This study is registered with the Chinese Clinical Trials Registry (No. ChiCTR1800019536). Findings: Overall, 148 patients (age: 72.7 ± 6.8 years; male: 85.8%; forced expiratory volume in 1 s: 0.7 ± 0.3 L; PaCO2: 66.4 ± 12.0 mmHg), recruited from 11 Chinese hospitals between January 24, 2019, and June 28, 2021, were randomly allocated to the intervention group (n = 73) or the control group (n = 75). At 12 months, the mean severe respiratory insufficiency questionnaire score was 56.5 in the intervention group and 50.0 in the control group (adjusted between-group difference: 6.26 [95% CI, 3.71-8.80]; P < 0.001), satisfying the hypothesis of non-inferiority. The 12-month risk of readmission was 34.3% in the intervention group compared with 56.0% in the control group, with an adjusted hazard ratio of 0.56 (95% CI, 0.34-0.92; P = 0.023). No severe adverse events were reported.
Interpretation: Among stable patients with hypercapnic COPD, using IOT-based management for home NPPV improved health-related quality of life and prolonged the time to readmission. Funding: Air Liquide Healthcare (Beijing) Co., Ltd.
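
The non-inferiority criterion used in trials like this reduces to checking that the lower confidence bound of the between-group difference lies above the prespecified margin (here -5); a trivial sketch with the reported numbers:

```python
def non_inferior(ci_lower, margin):
    """Non-inferiority holds when the lower confidence bound of the
    (intervention - control) difference lies above the margin."""
    return ci_lower > margin

# Reported SRI-score difference: 6.26, 95% CI (3.71, 8.80); margin -5.
# Since the entire CI also lies above 0, the result in fact suggests
# superiority, which is consistent with the trial's interpretation.
```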

9.
Biomed Opt Express ; 15(2): 506-523, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38404328

ABSTRACT

As endoscopic imaging technology advances, there is a growing clinical demand for enhanced imaging capabilities. Although conventional white light imaging (WLI) endoscopy offers realistic images, it often cannot reveal detailed characteristics of the mucosa. On the other hand, optical staining endoscopy, such as Compound Band Imaging (CBI), can discern subtle structures, serving to some extent as an optical biopsy. However, its image brightness is low, and its color transitions can be abrupt. These two techniques, commonly used in clinical settings, have complementary advantages. Nonetheless, they require different lighting conditions, which makes it challenging to combine their imaging strengths on living tissues. In this study, we introduce a novel endoscopic imaging technique that effectively combines the advantages of both WLI and CBI. Doctors no longer need to switch manually between the two observation modes, as the image information of both modes is available in a single image. We calibrated an appropriate proportion for simultaneous illumination with the light required for WLI and CBI. We designed a new illumination spectrum tailored for gastrointestinal examination, achieving their fusion at the optical level. Using a new algorithm that focuses on enhancing specific hemoglobin tissue features, we restored narrow-band image characteristics lost through the introduction of white light. Our hardware and software innovations not only boost the illumination brightness of the endoscope but also preserve the narrow-band feature details of the image. To evaluate the reliability and safety of the new endoscopic system, we conducted a series of tests in line with relevant international standards and validated the design parameters. For clinical trials, we collected a total of 256 sets of images, each set comprising images of the same lesion location captured using WLI, CBI, and our proposed method. We recruited four experienced clinicians to conduct subjective evaluations of the collected images. The results affirmed the significant advantages of our method. We believe that the novel endoscopic system we introduced has vast potential for clinical application in the future.

10.
Theranostics ; 14(1): 341-362, 2024.
Article in English | MEDLINE | ID: mdl-38164160

ABSTRACT

Minimally-invasive diagnosis and therapy have gradually become the trend and a research hotspot of current medical applications. The integration of intraoperative diagnosis and treatment is an important development direction for real-time detection and minimally-invasive diagnosis and therapy, aiming to reduce mortality and improve patients' quality of life; this is so-called minimally-invasive theranostics (MIT). Light is an important theranostic tool for the treatment of cancerous tissues. Light-mediated minimally-invasive theranostics (LMIT) is a novel evolutionary technology that integrates diagnosis and therapeutics for the less invasive treatment of diseased tissues. Intelligent theranostics would promote precision surgery based on the optical characterization of cancerous tissues. Furthermore, MIT also requires the assistance of smart medical devices or robots, and optical multimodality lays a solid foundation for intelligent MIT. In this review, we summarize the state of the art in optical MIT and LMIT in oncology. Multimodal optical image-guided intelligent treatment is another focus. Intraoperative imaging and real-time analysis-guided optical treatment are also systematically discussed. Finally, the potential challenges and future perspectives of intelligent optical MIT are discussed.


Subject(s)
Neoplasms; Precision Medicine; Humans; Quality of Life; Neoplasms/diagnosis; Neoplasms/therapy; Theranostic Nanomedicine/methods; Neurosurgical Procedures/methods
11.
Eur Radiol ; 34(3): 1434-1443, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37672052

ABSTRACT

OBJECTIVES: The histologic subtype of intracranial germ cell tumours (IGCTs) is an important factor in deciding the treatment strategy, especially for teratomas. In this study, we aimed to non-invasively diagnose teratomas based on fractal and radiomic features. MATERIALS AND METHODS: This retrospective study included 330 IGCT patients, including a discovery set (n = 296) and an independent validation set (n = 34). Fractal and radiomic features were extracted from T1-weighted, T2-weighted, and post-contrast T1-weighted images. Five classifiers, including logistic regression, random forests, support vector machines, K-nearest neighbours, and XGBoost, were compared for our task. Based on the optimal classifier, we compared the performance of clinical, fractal, and radiomic models and the model combining these features in predicting teratomas. RESULTS: Among the diagnostic models, the fractal and radiomic models performed better than the clinical model. The final model that combined all the features showed the best performance, with an area under the curve, precision, sensitivity, and specificity of 0.946 [95% confidence interval (CI): 0.882-0.994], 95.65% (95% CI: 88.64-100%), 88.00% (95% CI: 77.78-96.36%), and 91.67% (95% CI: 78.26-100%), respectively, in the test set of the discovery set, and 0.944 (95% CI: 0.855-1.000), 85.71% (95% CI: 68.18-100%), 94.74% (95% CI: 83.33-100%), and 80.00% (95% CI: 58.33-100%), respectively, in the independent validation set. SHapley Additive exPlanations indicated that two fractal features, two radiomic features, and age were the top five features highly associated with the presence of teratomas. CONCLUSION: The predictive model including image and clinical features could help guide treatment strategies for IGCTs. 
CLINICAL RELEVANCE STATEMENT: Our machine learning model, which includes imaging and clinical features, can non-invasively predict teratoma components, which could help guide treatment strategies for intracranial germ cell tumours (IGCTs). KEY POINTS: • Fractals and radiomics can quantitatively evaluate the imaging characteristics of intracranial germ cell tumours. • The model combining imaging and clinical features had the best predictive performance. • The diagnostic model could guide treatment strategies for intracranial germ cell tumours.
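
Classifier comparison in studies like this typically relies on the area under the ROC curve. A minimal rank-based AUC (the Mann-Whitney formulation) can be sketched as follows; this is a generic utility, not the study's code:

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise comparisons between positives and negatives; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise form is O(n²) but makes the probabilistic interpretation explicit; production code would use a library implementation on sorted ranks.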


Subject(s)
Neoplasms, Germ Cell and Embryonal; Teratoma; Humans; Retrospective Studies; Fractals; Diagnosis, Differential; Radiomics; Neoplasms, Germ Cell and Embryonal/diagnostic imaging; Teratoma/diagnostic imaging; Magnetic Resonance Imaging/methods
12.
Int J Comput Assist Radiol Surg ; 19(2): 331-344, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37603164

ABSTRACT

PURPOSE: White light imaging (WLI) is a common examination mode in endoscopy. The particular light used in compound band imaging (CBI) can highlight delicate structures, such as capillaries and tiny structures on the mucosal surface. These two modes complement each other, and doctors switch between them manually to complete the examination. This paper proposes an endoscopy image fusion system to combine WLI and CBI. METHODS: We add a real-time rotatable color wheel to the light source device of the AQ-200 endoscopy system to achieve rapid imaging of the two modes at the same position of living tissue. The two images correspond at the pixel level, which avoids registration and lays the foundation for image fusion. We propose a multi-scale image fusion framework, which involves a Laplacian pyramid (LP) and convolutional sparse representation (CSR) and strengthens the details in the fusion rule. RESULTS: Volunteer experiments and ex vivo pig stomach trials were conducted to verify the feasibility of our proposed system. We also conduct comparative experiments with other image fusion methods, evaluate the quality of the fused images, and verify the effectiveness of our fusion framework. The results show that our fused image has rich details, high color contrast, apparent structures, and clear lesion boundaries. CONCLUSION: An endoscopy image fusion system is proposed that does not change the doctor's operation and makes the fusion of WLI and CBI optical staining technology a reality. We changed the light source device of the endoscope, proposed an image fusion framework, and verified the feasibility and effectiveness of our scheme. Our method fully integrates the advantages of WLI and CBI, which can help doctors make more accurate judgments than before. The endoscopy image fusion system is of great significance for improving the detection rate of early lesions and has broad application prospects.
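
A Laplacian pyramid fusion step of the kind described can be sketched at a single pyramid level. The block below uses block-mean downsampling, averages the coarse bases, and keeps the stronger detail coefficient per pixel; the CSR component and the multi-scale detail strengthening of the actual framework are omitted:

```python
import numpy as np

def down2(img):
    """2x2 block-mean downsampling (even-sized single-channel images assumed)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(img):
    """Nearest-neighbour 2x upsampling."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fuse_lp(a, b):
    """One-level Laplacian-pyramid fusion: average the coarse bases and
    keep the stronger (max-|.|) detail coefficient at each pixel."""
    base_a, base_b = down2(a), down2(b)
    det_a, det_b = a - up2(base_a), b - up2(base_b)
    base = 0.5 * (base_a + base_b)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return up2(base) + detail
```

Because the detail band is defined as the residual of the base reconstruction, fusing an image with itself returns it exactly, a useful sanity check for any fusion rule.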


Subject(s)
Endoscopy, Gastrointestinal; Endoscopy; Humans; Animals; Swine; Light; Narrow Band Imaging/methods
13.
IEEE Trans Biomed Eng ; 71(3): 1010-1021, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37856261

ABSTRACT

OBJECTIVE: The precise alignment of full and partial 3D point sets is a crucial technique in computer-aided orthopedic surgery but remains a significant challenge. The registration process is complicated by the partial overlap between the full and partial 3D point sets, as well as by the susceptibility of 3D point sets to noise interference and poor initialization conditions. METHODS: To address these issues, we propose a novel full-to-partial registration framework for computer-aided orthopedic surgery that utilizes reinforcement learning. Our proposed framework is both generalized and robust, effectively handling the challenges of noise, poor initialization, and partial overlap. Moreover, the framework demonstrates exceptional generalization capabilities for various bones, including the pelvis, femurs, and tibias. RESULTS: Extensive experimentation on several bone datasets demonstrated that the proposed method achieves a superior C.D. error of 8.211e-05 and consistently outperforms state-of-the-art registration techniques. CONCLUSION AND SIGNIFICANCE: Hence, our proposed method is capable of achieving precise bone alignments for computer-aided orthopedic surgery.
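
The paper's registration uses reinforcement learning; for context, the classical closed-form solution for rigid alignment of corresponded point sets (Kabsch/Procrustes) can be sketched as a baseline. Note it does not handle the partial-overlap, correspondence-free setting the paper targets:

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rigid alignment (Kabsch): find R, t minimizing
    sum ||R @ p_i + t - q_i||^2 for corresponding (N, 3) point sets P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Learning-based frameworks are typically benchmarked against such closed-form or ICP-style baselines, which fail precisely in the noisy, partially overlapping regimes the abstract highlights.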


Subject(s)
Orthopedic Procedures; Surgery, Computer-Assisted; Algorithms; Pelvis; Surgery, Computer-Assisted/methods; Computers; Imaging, Three-Dimensional/methods
14.
Article in English | MEDLINE | ID: mdl-38059130

ABSTRACT

During minimally invasive surgery (MIS), the laparoscope provides only a single viewpoint to the surgeon, leaving a lack of 3D perception. Many works have been proposed to obtain depth and 3D reconstruction by designing a new optical structure or by depending on the camera pose and image sequences. Most of these works modify the structure of conventional laparoscopes and cannot provide 3D reconstruction at different magnification levels. In this study, we propose a laparoscopic system based on double liquid lenses, which provides doctors with variable magnification rates, near observation, and real-time monocular 3D reconstruction. Our system consists of an optical structure that achieves automatic magnification change and autofocus without any physically moving element, and a deep learning network based on the Depth from Defocus (DFD) method, trained to handle varying camera intrinsics and estimate depth from images of different focal lengths. The optical structure is portable and can be mounted on conventional laparoscopes. The depth estimation network estimates depth in real time from monocular images of different focal lengths and magnification rates. Experiments show that our system provides a 0.68-1.44x zoom rate and can estimate depth at different magnification rates at 6 fps. Monocular 3D reconstruction reaches at least 6 mm accuracy. The system also provides a clear view even at a working distance as close as 1 mm. Ex vivo experiments and application to clinical images show that our system provides doctors with a magnified clear view of the lesion as well as quick monocular depth perception during laparoscopy, helping surgeons achieve better detection and size diagnosis of the abdomen during laparoscopic surgery.
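
The physical cue behind depth from defocus is the circle of confusion predicted by the thin-lens model; a sketch of that idealized relation (not the learned network) follows:

```python
def circle_of_confusion(f, aperture_d, focus_dist, obj_dist):
    """Thin-lens circle-of-confusion diameter for an object at obj_dist
    when a lens of focal length f and aperture diameter aperture_d is
    focused at focus_dist. All quantities share the same length unit."""
    return aperture_d * f * abs(obj_dist - focus_dist) / (
        obj_dist * (focus_dist - f))
```

Objects at the focus distance produce zero blur, and the blur grows with defocus; a DFD network effectively inverts this blur-to-depth mapping across images taken at different focal settings.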


Subject(s)
Laparoscopy; Lens, Crystalline; Lenses; Laparoscopes; Laparoscopy/methods; Abdomen
15.
J Opt Soc Am A Opt Image Sci Vis ; 40(12): 2156-2163, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38086024

ABSTRACT

The rendering of specular highlights is a critical aspect of 3D rendering on autostereoscopic displays. However, the conventional highlight rendering techniques on autostereoscopic displays result in depth conflicts between highlights and diffuse surfaces. To address this issue, we propose a viewpoint-dependent highlight depiction method with head tracking, which incorporates microdisparity of highlights in binocular parallax and preserves the motion parallax of highlights. Our method was found to outperform physical highlight depiction and highlight depiction with microdisparity in terms of depth perception and realism, as demonstrated by experimental results. The proposed approach offers a promising alternative to traditional physical highlights on autostereoscopic displays, particularly in applications that require accurate depth perception.

16.
Article in English | MEDLINE | ID: mdl-38083587

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disease. Identifying the mild cognitive impairment (MCI) subjects who will convert to AD is essential for early intervention to slow the irreversible brain damage and cognitive decline. In this paper, we propose a novel double-attention assisted multi-task framework for the MCI conversion prediction task, introducing an auxiliary grey matter segmentation task together with an adaptive dynamic weight average strategy to balance the impact of each task. A double-attention module is then incorporated to leverage both the classification and the segmentation attention information, guiding the network to focus more on the structurally altered regions for better discrimination of AD pathology and increasing the interpretability of the network. Extensive experiments on a publicly available dataset demonstrate that the proposed method significantly outperforms approaches using the same image modality.
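
The "adaptive dynamic weight average strategy" is presumably in the spirit of the standard Dynamic Weight Average scheme for balancing multi-task losses; the sketch below shows that standard scheme (an assumption, not the paper's exact variant):

```python
import numpy as np

def dwa_weights(prev_losses, prev_prev_losses, temperature=2.0):
    """Dynamic Weight Average: tasks whose loss decays more slowly
    (a higher L(t-1)/L(t-2) ratio) receive larger weights."""
    r = np.asarray(prev_losses) / np.asarray(prev_prev_losses)
    e = np.exp(r / temperature)
    return len(r) * e / e.sum()   # weights sum to the number of tasks
```

Each training step, the per-task losses are then scaled by these weights before summing, so the faster-improving auxiliary task cannot dominate the conversion-prediction objective.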


Subject(s)
Alzheimer Disease; Brain Injuries; Cognitive Dysfunction; Humans; Magnetic Resonance Imaging/methods; Alzheimer Disease/diagnosis; Alzheimer Disease/pathology; Learning; Cognitive Dysfunction/diagnosis
17.
Eur Radiol ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37926739

ABSTRACT

OBJECTIVES: To investigate the value of diffusion MRI (dMRI) in H3K27M genotyping of brainstem glioma (BSG). METHODS: A primary cohort of BSG patients with dMRI data (b = 0, 1000 and 2000 s/mm2) and H3K27M mutation information was included. A total of 13 diffusion tensor and kurtosis imaging (DTI; DKI) metrics were calculated, then 17 whole-tumor histogram features and 29 along-tract white matter (WM) microstructural measurements were extracted from each metric and assessed within genotypes. After feature selection through univariate analysis and the least absolute shrinkage and selection operator method, multivariate logistic regression was used to build dMRI-derived genotyping models based on the retained tumor and WM features separately and jointly. Model performances were tested using ROC curves and compared by the DeLong approach. A nomogram incorporating the best-performing dMRI model and clinical variables was generated by multivariate logistic regression and validated in an independent cohort of 27 BSG patients. RESULTS: A total of 117 patients (80 H3K27M-mutant) were included in the primary cohort. In total, 29 tumor histogram features and 41 WM tract measurements were selected for subsequent genotyping model construction. Incorporating WM tract measurements significantly improved diagnostic performance (p < 0.05). The model incorporating tumor and WM features from both DKI and DTI metrics showed the best performance (AUC = 0.9311). The nomogram combining this dMRI model and clinical variables achieved AUCs of 0.9321 and 0.8951 in the primary and validation cohorts respectively. CONCLUSIONS: dMRI is valuable in BSG genotyping. Tumor diffusion histogram features are useful in genotyping, and WM tract measurements are more valuable in improving genotyping performance.
CLINICAL RELEVANCE STATEMENT: This study found that diffusion MRI is valuable in predicting H3K27M mutation in brainstem gliomas, which helps enable noninvasive detection of brainstem glioma genotypes and improves the diagnosis of brainstem glioma. KEY POINTS: • Diffusion MRI has significant value in brainstem glioma H3K27M genotyping, and models with satisfactory performances were built. • Whole-tumor diffusion histogram features are useful in H3K27M genotyping, and quantitative measurements of white matter tracts are valuable as they have the potential to improve model performance. • The model combining the most discriminative diffusion MRI model and clinical variables can support clinical decision-making.
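The genotyping pipeline above begins by extracting whole-tumor histogram features from each diffusion metric map. A minimal sketch of such histogram feature extraction, assuming the metric map and tumor mask are same-shape numpy arrays; the feature subset shown is illustrative, not the authors' exact 17-feature list:

```python
import numpy as np

def tumor_histogram_features(metric_map, mask):
    """Whole-tumor histogram features from one diffusion metric map.

    A simplified sketch: intensities inside the tumor mask are pooled
    and summarized by distribution statistics, as is typical for
    histogram-based radiomic features.
    """
    vals = metric_map[mask > 0].astype(float)
    p10, p50, p90 = np.percentile(vals, [10, 50, 90])
    mean, std = float(vals.mean()), float(vals.std())
    # Fisher skewness and excess kurtosis of the intensity distribution
    z = (vals - mean) / (std + 1e-12)
    return {
        "mean": mean, "std": std, "median": float(p50),
        "p10": float(p10), "p90": float(p90),
        "skewness": float((z ** 3).mean()),
        "kurtosis": float((z ** 4).mean() - 3.0),
    }
```

Features like these would then be filtered by univariate tests and LASSO before entering the multivariate logistic regression described in the abstract.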

18.
IEEE Trans Med Imaging ; 42(12): 3779-3793, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37695964

ABSTRACT

Accurate ultrasound (US) image segmentation is crucial for the screening and diagnosis of diseases. However, it faces two significant challenges: 1) pixel-level annotation is a time-consuming and laborious process; 2) the presence of shadow artifacts leads to missing anatomy and ambiguous boundaries, which negatively impact reliable segmentation results. To address these challenges, we propose a novel semi-supervised shadow-aware network with boundary refinement (SABR-Net). Specifically, we add shadow imitation regions to the original US images, and design shadow-masked transformer blocks to perceive the missing anatomy of shadow regions. The shadow-masked transformer block contains an adaptive shadow attention mechanism that introduces an adaptive mask, updated automatically to facilitate network training. Additionally, we utilize unlabeled US images to train a missing-structure inpainting path with the shadow-masked transformer, which further facilitates semi-supervised segmentation. Experiments on two public US datasets demonstrate the superior performance of SABR-Net over other state-of-the-art semi-supervised segmentation methods. In addition, experiments on a private breast US dataset prove that our method generalizes well to clinical small-scale US datasets.
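The "shadow imitation" step above augments training images with synthetic acoustic shadows. A minimal sketch of one plausible implementation, assuming a 2-D grayscale image: a vertical band below a chosen point is attenuated progressively with depth, mimicking signal loss behind a strong reflector. The parameters and fade profile are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def add_shadow_imitation(image, top, left, width, strength=0.8):
    """Insert a synthetic acoustic-shadow region into a 2-D US image.

    The band [left, left+width) below row `top` is darkened, with
    attenuation increasing linearly toward the bottom of the image.
    `strength` in [0, 1] sets the maximum attenuation at full depth.
    """
    out = image.astype(float).copy()
    h = out.shape[0]
    depth = np.arange(h - top, dtype=float)
    # attenuation grows with depth, mimicking signal loss behind a reflector
    fade = 1.0 - strength * np.clip(depth / max(h - top - 1, 1), 0.0, 1.0)
    out[top:, left:left + width] *= fade[:, None]
    return out
```

In a semi-supervised setup like the one described, such regions give the inpainting path known "missing anatomy" to reconstruct, since the original pixels under the synthetic shadow are available as targets.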


Subject(s)
Artifacts , Ultrasonography, Mammary , Female , Humans , Ultrasonography , Image Processing, Computer-Assisted
19.
IEEE J Biomed Health Inform ; 27(11): 5381-5392, 2023 11.
Article in English | MEDLINE | ID: mdl-37651479

ABSTRACT

Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment methods. Irradiating the whole ventricular system and the local tumor can reduce late-stage radiotherapy complications while preserving the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians. The diverse ventricle shapes and the hydrocephalus-induced ventricle dilation increase the difficulty of automatic segmentation algorithms. Therefore, this study proposed a fully automatic segmentation framework. First, we designed a novel unsupervised learning-based label mapper, which is used to handle the ventricle shape variations and obtain the preliminary segmentation result. Then, to boost the segmentation performance of the framework, we improved the region growth algorithm and combined it with a fully connected conditional random field to optimize the preliminary results at both the regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s, and the average target segmentation accuracy reaches 84.69%. Furthermore, we verified the algorithm in practical clinical applications. The results demonstrate that our proposed method is beneficial for physicians delineating radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
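The refinement stage above builds on a region growth algorithm. A minimal sketch of the classic form of that step, assuming a 3-D intensity volume: voxels 6-connected to the seed are absorbed while their intensity stays within a tolerance of the seed value. The paper's improved variant and the conditional-random-field refinement are not reproduced here:

```python
from collections import deque

import numpy as np

def region_grow(volume, seed, tol):
    """Grow a region from `seed` over 6-connected voxels whose
    intensity stays within `tol` of the seed intensity.

    Returns a boolean mask the same shape as `volume`.
    """
    seed_val = float(volume[seed])
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not grown[n]
                    and abs(float(volume[n]) - seed_val) <= tol):
                grown[n] = True
                queue.append(n)
    return grown
```

A voxel-scale pass such as a fully connected CRF would then smooth the boundary of the grown mask against the underlying image, as the abstract describes.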


Subject(s)
Neoplasms, Germ Cell and Embryonal , Neoplasms , Child , Humans , Adolescent , Unsupervised Machine Learning , Algorithms , Image Processing, Computer-Assisted/methods
20.
Comput Methods Programs Biomed ; 240: 107642, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37480644

ABSTRACT

In ultrasound-guided liver surgery, the lack of large-scale intraoperative ultrasound images with important anatomical structures remains an obstacle hindering the successful application of AI to ultrasound guidance. In this case, intraoperative ultrasound (iUS) simulation should be conducted from preoperative magnetic resonance (pMR), which not only helps doctors understand the characteristics of iUS in advance, but also expands the iUS dataset from various imaging positions, thereby promoting automatic iUS analysis in ultrasound guidance. Herein, a novel anatomy-preserving generative adversarial network (ApGAN) framework was proposed to generate simulated intraoperative ultrasound (Sim-iUS) of the liver with precise structure information from pMR. Specifically, a low-rank-factor-based bimodal fusion was first established focusing on the effective information of the hepatic parenchyma. Then, a deformation-field-based correction module was introduced to learn and correct the slight structural distortion from surgical operations. Meanwhile, multiple loss functions were designed to constrain the simulation of the content, structures, and style. Empirical results on clinical data showed that the proposed ApGAN obtained a higher Structural Similarity (SSIM) of 0.74 and a Fréchet Inception Distance (FID) of 35.54 compared to existing methods. Furthermore, the average Hausdorff Distance (HD) error of the liver capsule structure was less than 0.25 mm, and the average relative Euclidean Distance (ED) error for polyps was 0.12 mm, indicating the high-level precision of this ApGAN in simulating the anatomical structures and focal areas.
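The structural accuracy reported above is measured with the Hausdorff Distance between corresponding contours. A minimal sketch of the symmetric Hausdorff Distance between two point sets, assuming contours sampled as (N, 2) coordinate arrays in millimetres; this is the standard definition, not the paper's exact evaluation code:

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 2) and b (M, 2).

    For each point in one set, take the distance to its nearest point
    in the other set; the Hausdorff distance is the worst such nearest
    distance over both directions.
    """
    # pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Averaging the nearest-point distances instead of taking their maximum yields the average-HD style of metric that the sub-millimetre capsule error above suggests.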


Subject(s)
Liver , Physicians , Humans , Liver/diagnostic imaging , Liver/surgery , Ultrasonography , Computer Simulation , Learning