Results 1 - 20 of 14,629
1.
Phys Eng Sci Med ; 47(3): 1123-1140, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39222214

ABSTRACT

Manual contouring of organs at risk (OAR) is time-consuming and subject to inter-observer variability. AI-based auto-contouring is proposed as a solution to these problems if it can produce clinically acceptable results. This study investigated the performance of multiple AI-based auto-contouring systems on different OAR segmentations. Auto-contouring was performed using seven AI-based segmentation systems (Radiotherapy AI, Limbus AI versions 1.5 and 1.6, Therapanacea, MIM, Siemens AI-Rad Companion, and RadFormation) on a total of 42 clinical cases with varying anatomical sites. Volumetric and surface Dice similarity coefficients and maximum Hausdorff distance (HD) between the experts' contours and the automated contours were calculated to evaluate performance. Radiotherapy AI performed better than the other software for most tested structures in the head-and-neck and brain cases. No single system showed overall superior performance in the lung, breast, pelvis, and abdomen cases. Each tested AI system produced OAR contours comparable to the experts', making them potential candidates for clinical use. Reduced performance was observed for small and complex anatomical structures, showing that it remains essential to review each AI-generated contour before clinical use. The study also demonstrated a method of comparing contouring software options that could be replicated in clinics or used for ongoing quality assurance of purchased systems.
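The volumetric Dice similarity coefficient reported in studies like this one can be sketched in a few lines. This is a generic illustration, not the study's evaluation code, and the function name `volumetric_dice` is ours:

```python
import numpy as np

def volumetric_dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: two overlapping 3D masks of 8 voxels each, sharing 4 voxels
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 0:2] = True
print(volumetric_dice(a, b))  # 2 * 4 / (8 + 8) = 0.5
```

Surface Dice and Hausdorff distances operate on the boundary voxels of the masks rather than their full volumes.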


Subject(s)
Organs at Risk , Humans , Artificial Intelligence , Software , Image Processing, Computer-Assisted , Algorithms , Tomography, X-Ray Computed
2.
bioRxiv ; 2024 Sep 08.
Article in English | MEDLINE | ID: mdl-39282435

ABSTRACT

In spite of the great progress that has been made towards automating brain extraction in human magnetic resonance imaging (MRI), challenges remain in the automation of this task for mouse models of brain disorders. Researchers often resort to editing brain segmentation results manually when automated methods fail to produce accurate delineations. However, manual corrections can be labor-intensive and introduce interrater variability. This motivated our development of a new deep-learning-based method for brain segmentation of mouse MRI, which we call Mouse Brain Extractor. We adapted the existing SwinUNETR architecture (Hatamizadeh et al., 2021) with the goal of making it more robust to scale variance. Our approach is to supply the network model with supplementary spatial information in the form of absolute positional encoding. We use a new scheme for positional encoding, which we call Global Positional Encoding (GPE). GPE is based on a shared coordinate frame that is relative to the entire input image. This differs from the positional encoding used in SwinUNETR, which solely employs relative pairwise image patch positions. GPE also differs from the conventional absolute positional encoding approach, which encodes position relative to a subimage rather than the entire image. We trained and tested our method on a heterogeneous dataset of N=223 mouse MRIs, for which we generated a corresponding set of manually-edited brain masks. These data were acquired previously in other studies using several different scanners and imaging protocols and included in vivo and ex vivo images of mice with heterogeneous brain structure due to different genotypes, strains, diseases, ages, and sexes. We evaluated our method's results against those of seven existing rodent brain extraction methods and two state-of-the-art deep-learning approaches, nnU-Net (Isensee et al., 2018) and SwinUNETR.
Overall, our proposed method achieved average Dice scores on the order of 0.98 and average HD95 measures on the order of 100 µm when compared to the manually-labeled brain masks. In statistical analyses, our method significantly outperformed the conventional approaches and performed as well as or significantly better than the nnU-Net and SwinUNETR methods. These results suggest that Global Positional Encoding provides additional contextual information that enables our Mouse Brain Extractor to perform competitively on datasets containing multiple resolutions.
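The Global Positional Encoding idea, giving every patch a position in a coordinate frame shared across the entire input image, can be illustrated as follows. This is our sketch of the shared coordinate frame only; the paper's GPE presumably maps such coordinates through an embedding, and all names here are ours:

```python
import numpy as np

def global_patch_positions(image_shape, patch_size):
    """Patch-center coordinates normalized by the FULL image extent,
    giving every patch a position in a shared [0, 1]^3 frame, rather
    than a position relative to a subimage or to other patches."""
    grids = []
    for dim, p in zip(image_shape, patch_size):
        centers = (np.arange(dim // p) * p + p / 2.0) / dim
        grids.append(centers)
    zz, yy, xx = np.meshgrid(*grids, indexing="ij")
    return np.stack([zz, yy, xx], axis=-1)  # (nz, ny, nx, 3)

pos = global_patch_positions((64, 64, 64), (16, 16, 16))
print(pos.shape)      # (4, 4, 4, 3)
print(pos[0, 0, 0])   # [0.125 0.125 0.125]
```

Because the frame is tied to the whole image, the same anatomical location keeps a similar encoding across images of different sizes and resolutions, which is the scale-robustness property the abstract describes.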

3.
Sci Rep ; 14(1): 21643, 2024 09 16.
Article in English | MEDLINE | ID: mdl-39284813

ABSTRACT

The main bottleneck in training a robust tumor segmentation algorithm for non-small cell lung cancer (NSCLC) on H&E is generating sufficient ground truth annotations. Various approaches for generating tumor labels to train a tumor segmentation model were explored. A large dataset of low-cost, low-accuracy panCK-based annotations was used to pre-train the model and to determine the minimum required size of the expensive but highly accurate pathologist-annotated dataset. PanCK pre-training was compared to foundation models, and various architectures were explored for the model backbone. Study design and sample procurement for training a generalizable model that captures the variation in NSCLC H&E were also examined. H&E imaging was performed on 112 samples (three centers, two scanner types, different staining and imaging protocols). An Attention U-Net architecture was trained using the large panCK-based annotation dataset (68 samples, total area 10,326 mm²) followed by fine-tuning on a small pathologist-annotated dataset (80 samples, total area 246 mm²). This approach resulted in a mean intersection over union (mIoU) of 82% [77, 87]. PanCK pre-training provided better performance than foundation models and allowed a 70% reduction in pathologist annotations with no drop in performance. The study design ensured generalizability over variations in H&E, with consistent performance across centers, scanners, and subtypes.


Asunto(s)
Carcinoma de Pulmón de Células no Pequeñas , Aprendizaje Profundo , Neoplasias Pulmonares , Patólogos , Humanos , Neoplasias Pulmonares/patología , Carcinoma de Pulmón de Células no Pequeñas/patología , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos
4.
BMC Med Imaging ; 24(1): 241, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285324

ABSTRACT

The recently introduced SAM-Med2D represents a state-of-the-art advancement in medical image segmentation. By fine-tuning the large vision model Segment Anything Model (SAM) on extensive medical datasets, it has achieved impressive results in cross-modal medical image segmentation. However, its reliance on interactive prompts may restrict its applicability under specific conditions. To address this limitation, we introduce SAM-AutoMed, which achieves automatic segmentation of medical images by replacing the original prompt encoder with an improved MobileNet v3 backbone. Its performance on multiple datasets surpasses both SAM and SAM-Med2D. Current enhancements to SAM lack applications in medical image classification, so we introduce SAM-MedCls, which combines the encoder of SAM-Med2D with our designed attention modules to construct an end-to-end medical image classification model. It performs well on datasets of various modalities, even achieving state-of-the-art results, indicating its potential to become a universal model for medical image classification.


Asunto(s)
Algoritmos , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Interpretación de Imagen Asistida por Computador/métodos
5.
Sci Rep ; 14(1): 21874, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39300243

ABSTRACT

Deep learning-based defect detection methods have gained widespread application in industrial quality inspection. However, limitations such as insufficient sample sizes, low data utilization, and issues with accuracy and speed persist. This paper proposes a semi-supervised semantic segmentation framework that addresses these challenges through perturbation invariance in both image and feature space. The framework employs diverse perturbation cross-pseudo-supervision to reduce dependency on extensive labeled datasets. Our lightweight method incorporates edge pixel-level semantic information and shallow feature fusion to enhance real-time performance and improve the accuracy of defect edge detection and small-target segmentation in industrial inspection. Experimental results demonstrate that the proposed method outperforms current state-of-the-art (SOTA) semi-supervised semantic segmentation methods across various industrial scenarios. Specifically, our method achieves a mean Intersection over Union (mIoU) 3.11% higher than the SOTA method on our dataset and 4.39% higher on the public KolektorSDD dataset. Additionally, our semantic segmentation network matches the speed of the fastest network, U-Net, while achieving an mIoU 2.99% higher than DeepLabv3Plus.
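For reference, the mean Intersection over Union (mIoU) metric that this and several other studies above report can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def mean_iou(pred: np.ndarray, ref: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union across classes (classes absent
    from both prediction and reference are skipped)."""
    ious = []
    for c in range(num_classes):
        p, r = pred == c, ref == c
        union = np.logical_or(p, r).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, r).sum() / union)
    return float(np.mean(ious))

# Toy 2-class label maps: IoU is 1/3 for class 0 and 3/5 for class 1
pred = np.array([[0, 0, 1], [1, 1, 1]])
ref  = np.array([[0, 1, 1], [1, 1, 0]])
print(mean_iou(pred, ref, num_classes=2))
```

Unlike per-pixel accuracy, mIoU weights every class equally, which is why it is the standard metric when defect regions are small relative to the background.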

6.
BMC Med Imaging ; 24(1): 251, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39300334

ABSTRACT

The analysis of the psoas muscle in morphological and functional imaging has proved to be an accurate approach to assessing sarcopenia, i.e., a systemic loss of skeletal muscle mass and function that may be correlated with multifactorial etiological aspects. Including sarcopenia assessment in a radiological workflow would require computational pipelines for image processing that guarantee segmentation reliability and a significant degree of automation. The present study utilizes three-dimensional numerical schemes for psoas segmentation in low-dose X-ray computed tomography images. Specifically, we focused on the level set methodology and compared the performances of two standard approaches, a classical evolution model and a three-dimensional geodesic model, with the performance of an original first-order modification of the latter. The results of this analysis show that these gradient-based schemes guarantee reliability with respect to manual segmentation and that the first-order scheme requires a computational burden significantly smaller than that of the second-order approach.


Asunto(s)
Imagenología Tridimensional , Músculos Psoas , Sarcopenia , Tomografía Computarizada por Rayos X , Humanos , Músculos Psoas/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Imagenología Tridimensional/métodos , Sarcopenia/diagnóstico por imagen , Reproducibilidad de los Resultados , Algoritmos , Masculino , Femenino , Anciano , Persona de Mediana Edad , Interpretación de Imagen Radiográfica Asistida por Computador/métodos
7.
Comput Methods Programs Biomed ; 256: 108398, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39236562

ABSTRACT

BACKGROUND AND OBJECTIVE: Tendon segmentation is crucial for studying tendon-related pathologies such as tendinopathy and tendinosis. This step further enables detailed analysis of specific tendon regions using automated or semi-automated methods. This study specifically addresses the segmentation of the Achilles tendon, the largest tendon in the human body. METHODS: This study proposes a comprehensive end-to-end tendon segmentation module composed of a preliminary superpixel-based coarse segmentation preceding the final segmentation task. The final segmentation results are obtained through two distinct approaches. In the first approach, the coarsely generated superpixels are classified using Random Forest (RF) and Support Vector Machine (SVM) classifiers to determine whether each superpixel belongs to the tendon class (resulting in tendon segmentation). In the second approach, the arrangements of superpixels are converted to graphs instead of being treated as conventional image grids. A graph convolutional network (GCN) then classifies whether each superpixel corresponds to the tendon class. RESULTS: All experiments are conducted on a custom-made ankle MRI dataset. The dataset comprises 76 subjects and is divided into two sets: one for training (Dataset 1, trained and evaluated using leave-one-group-out cross-validation) and the other as unseen test data (Dataset 2). Using our first approach, the final test AUC (Area Under the ROC Curve) scores using the RF and SVM classifiers on the test data (Dataset 2) are 0.992 and 0.987, respectively, with sensitivities of 0.904 and 0.966. Using our second approach (GCN-based node classification), the AUC score for the test set is 0.933 with a sensitivity of 0.899. CONCLUSIONS: Our proposed pipeline demonstrates the efficacy of employing superpixel generation as a coarse segmentation technique for the final tendon segmentation. Whether utilizing RF- or SVM-based superpixel classification or GCN-based node classification, our system consistently achieves commendable AUC scores, especially with the non-graph-based approach. Given the limited dataset, our graph-based method did not perform as well as the non-graph-based superpixel classifications; however, the results obtained provide valuable insights into how well the models can distinguish between tendons and non-tendons. This opens up opportunities for further exploration and improvement.
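The conversion from a superpixel grid to a graph described in the second approach can be sketched by building a region-adjacency graph from the superpixel label map. This is an illustrative reconstruction (the function name and the 4-connectivity choice are ours), not the authors' pipeline:

```python
import numpy as np

def superpixel_adjacency(labels: np.ndarray) -> set:
    """Edges of a region-adjacency graph built from a 2D superpixel
    label map: two superpixels are connected when their regions touch
    (4-connectivity), turning the image grid into a graph whose nodes
    a GCN can then classify."""
    edges = set()
    # compare each pixel with its right and bottom neighbor
    for shift in ((0, 1), (1, 0)):
        a = labels[: labels.shape[0] - shift[0], : labels.shape[1] - shift[1]]
        b = labels[shift[0]:, shift[1]:]
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((int(min(u, v)), int(max(u, v))))
    return edges

labels = np.array([[0, 0, 1],
                   [2, 2, 1]])
print(sorted(superpixel_adjacency(labels)))  # [(0, 1), (0, 2), (1, 2)]
```

Each node would additionally carry a feature vector (e.g. intensity statistics of its superpixel), and the GCN propagates information along these edges during node classification.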


Asunto(s)
Tendón Calcáneo , Aprendizaje Automático , Imagen por Resonancia Magnética , Redes Neurales de la Computación , Máquina de Vectores de Soporte , Humanos , Imagen por Resonancia Magnética/métodos , Tendón Calcáneo/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos , Tendinopatía/diagnóstico por imagen , Tendinopatía/clasificación , Tendones/diagnóstico por imagen
8.
J Stomatol Oral Maxillofac Surg ; : 102048, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39244033

ABSTRACT

INTRODUCTION: In orthodontic treatments, accurately assessing the upper airway volume and morphology is essential for proper diagnosis and planning. Cone beam computed tomography (CBCT) is used for assessing upper airway volume through manual, semi-automatic, and automatic airway segmentation methods. This study evaluates upper airway segmentation accuracy by comparing the results of an automatic model and a semi-automatic method against the gold standard manual method. MATERIALS AND METHODS: An automatic segmentation model was trained using the MONAI Label framework to segment the upper airway from CBCT images. An open-source program, ITK-SNAP, was used for semi-automatic segmentation. The accuracy of both methods was evaluated against manual segmentations. Evaluation metrics included Dice Similarity Coefficient (DSC), Precision, Recall, 95% Hausdorff Distance (HD), and volumetric differences. RESULTS: The automatic segmentation group averaged a DSC score of 0.915±0.041, while the semi-automatic group scored 0.940±0.021, indicating clinically acceptable accuracy for both methods. Analysis of the 95% HD revealed that semi-automatic segmentation (0.997±0.585) was more accurate and closer to manual segmentation than automatic segmentation (1.447±0.674). Volumetric comparisons revealed no statistically significant differences between automatic and manual segmentation for total, oropharyngeal, and velopharyngeal airway volumes. Similarly, no significant differences were noted between the semi-automatic and manual methods across these regions. CONCLUSION: It has been observed that both automatic and semi-automatic methods, which utilise open-source software, align effectively with manual segmentation. Implementing these methods can aid in decision-making by allowing faster and easier upper airway segmentation with comparable accuracy in orthodontic practice.
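The 95% Hausdorff distance used in this and several other studies above can be sketched as follows for two boundary point sets (a generic illustration; the function name is ours):

```python
import numpy as np

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point
    sets, e.g. surface voxel coordinates of two segmentations. Taking
    the 95th percentile instead of the maximum makes the metric robust
    to a few outlier points."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # distance from each point in A to its nearest in B
    b_to_a = d.min(axis=0)
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

# Two parallel contours one unit apart
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hd95(a, b))  # all nearest-neighbor distances are 1.0 -> 1.0
```

The pairwise-distance matrix makes this O(|A|·|B|) in memory, which is fine for boundary points; libraries such as MedPy provide equivalent implementations for full masks.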

9.
Phys Imaging Radiat Oncol ; 32: 100648, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39319094

ABSTRACT

Background and purpose: In online adaptive magnetic resonance image (MRI)-guided radiotherapy (MRIgRT), manual contouring of rectal tumors on daily images is labor-intensive and time-consuming. Automation of this task is complex due to substantial variation in tumor shape and location between patients. The aim of this work was to investigate different approaches to propagating patient-specific prior information to the online adaptive treatment fractions to improve deep-learning-based auto-segmentation of rectal tumors. Materials and methods: 243 T2-weighted MRI scans of 49 rectal cancer patients treated on the 1.5T MR-Linear accelerator (MR-Linac) were utilized to train models to segment rectal tumors. As a benchmark, an MRI_only auto-segmentation model was trained. Three approaches to including a patient-specific prior were studied: 1. including the segmentations of fraction 1 as an extra input channel for the auto-segmentation of subsequent fractions (MRI+prior), 2. fine-tuning the MRI_only model to fraction 1 (PSF_1), and 3. fine-tuning the MRI_only model on all earlier fractions (PSF_cumulative). Auto-segmentations were compared to the manual segmentation using geometric similarity metrics. Clinical impact was assessed by evaluating post-treatment target coverage. Results: All patient-specific methods outperformed the MRI_only segmentation approach. Median 95th percentile Hausdorff distances (95HD) were 22.0 (range: 6.1-76.6) mm for MRI_only segmentation, 9.9 (range: 2.5-38.2) mm for MRI+prior segmentation, 6.4 (range: 2.4-17.8) mm for PSF_1, and 4.8 (range: 1.7-26.9) mm for PSF_cumulative. PSF_cumulative was found to be superior to PSF_1 from fraction 4 onward (p = 0.014). Conclusion: Patient-specific fine-tuning of automatically segmented rectal tumors, using images and segmentations from all previous fractions, yields superior quality compared to other auto-segmentation approaches.

10.
Heliyon ; 10(17): e36678, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39319152

ABSTRACT

This study is presented to examine the performance of a newly proposed metaheuristic algorithm within discrete and continuous search spaces. Therefore, the multithresholding image segmentation problem and parameter estimation problem of both the proton exchange membrane fuel cell (PEMFC) and photovoltaic (PV) models, which have different search spaces, are used to test and verify this algorithm. The traditional techniques could not find approximate solutions for those problems in a reasonable amount of time, so researchers have used metaheuristic algorithms to overcome those shortcomings. However, the majority of metaheuristic algorithms still suffer from slow convergence speed and stagnation into local minima problems, which makes them unsuitable for tackling these optimization problems. Therefore, this study proposes an improved nutcracker optimization algorithm (INOA) for better solving those problems in an acceptable amount of time. INOA is based on improving the performance of the standard algorithm using a newly proposed convergence improvement strategy that aims to improve the convergence speed and prevent stagnation in local minima. This algorithm is first applied to estimating the unknown parameters of the single-diode, double-diode, and triple-diode models for a PV module and a solar cell. Second, four PEMFC modules are used to further observe INOA's performance for the continuous optimization challenge. Finally, the performance of INOA is investigated for solving the multi-thresholding image segmentation problem to test its effectiveness in a discrete search space. Several test images with different threshold levels were used to validate its effectiveness, stability, and scalability. 
Comparisons with several rival optimizers using various performance indicators, such as convergence curves, standard deviation, average fitness value, and the Wilcoxon rank-sum test, demonstrate that INOA is an effective alternative for solving both discrete and continuous optimization problems. Quantitatively, INOA solved those problems better than the rival optimizers, with improvement rates for final results ranging between 0.8355% and 3.34% for discrete problems and between 4.97% and 99.9% for continuous problems.
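The discrete decision variable in multilevel thresholding can be illustrated by applying a candidate threshold vector to an image. The objective INOA maximizes is not specified in the abstract (Otsu's between-class variance and Kapur's entropy are common choices), so this sketch shows only how a threshold set induces a segmentation:

```python
import numpy as np

def apply_thresholds(image: np.ndarray, thresholds) -> np.ndarray:
    """Multilevel thresholding: map each gray level to a class index
    according to a sorted list of threshold values. The threshold
    vector is the quantity a metaheuristic optimizer searches over."""
    return np.digitize(image, sorted(thresholds))

# Three thresholds partition the gray range into four classes
img = np.array([[10, 80], [150, 240]])
print(apply_thresholds(img, [64, 128, 192]))  # class map: [[0, 1], [2, 3]]
```

A metaheuristic such as INOA would repeatedly propose threshold vectors, score each candidate segmentation with the chosen objective, and keep the best.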

11.
J Imaging Inform Med ; 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39320548

ABSTRACT

Ultrasound-guided quadratus lumborum block (QLB) technology has become a widely used perioperative analgesia method during abdominal and pelvic surgeries. Due to the anatomical complexity and individual variability of the quadratus lumborum muscle (QLM) on ultrasound images, nerve blocks rely heavily on anesthesiologist experience. Therefore, using artificial intelligence (AI) to identify different tissue regions in ultrasound images is crucial. In our study, we retrospectively collected 112 patients (3162 images) and developed a deep learning model named Q-VUM, a U-shaped network based on the Visual Geometry Group 16 (VGG16) network. Q-VUM precisely segments various tissues, including the QLM, the external oblique muscle, the internal oblique muscle, and the transversus abdominis muscle (collectively referred to as the EIT), as well as the bones. Q-VUM demonstrated robust performance, achieving mean intersection over union (mIoU), mean pixel accuracy, Dice coefficient, and accuracy values of 0.734, 0.829, 0.841, and 0.944, respectively. The IoU, recall, precision, and Dice coefficient achieved for the QLM were 0.711, 0.813, 0.850, and 0.831, respectively. Additionally, the Q-VUM predictions showed that 85% of the pixels in the blocked area fell within the actual blocked area. Finally, our model exhibited stronger segmentation performance than common deep learning segmentation networks (mIoU 0.734 vs. 0.720 and 0.720, respectively). In summary, we propose a model named Q-VUM that can accurately identify the anatomical structure of the quadratus lumborum in real time. This model aids anesthesiologists in precisely locating the nerve block site, thereby reducing potential complications and enhancing the effectiveness of nerve block procedures.

12.
J Imaging Inform Med ; 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39320547

ABSTRACT

This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance to radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common: 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologist (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.

13.
Med Biol Eng Comput ; 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39316283

ABSTRACT

Previous 3D encoder-decoder segmentation architectures struggled with fine-grained feature decomposition, resulting in unclear feature hierarchies when fused across layers. Furthermore, the blurred nature of contour boundaries in medical imaging limits the focus on high-frequency contour features. To address these challenges, we propose a Multi-oriented Hierarchical Extraction and Dual-frequency Decoupling Network (HEDN), which consists of three modules: Encoder-Decoder Module (E-DM), Multi-oriented Hierarchical Extraction Module (Multi-HEM), and Dual-frequency Decoupling Module (Dual-DM). The E-DM performs the basic encoding and decoding tasks, while Multi-HEM decomposes and fuses spatial and slice-level features in 3D, enriching the feature hierarchy by weighting them through 3D fusion. Dual-DM separates high-frequency features from the reconstructed network using self-supervision. Finally, the self-supervised high-frequency features separated by Dual-DM are inserted into the process following Multi-HEM, enhancing interactions and complementarities between contour features and hierarchical features, thereby mutually reinforcing both aspects. On the Synapse dataset, HEDN outperforms existing methods, boosting Dice Similarity Score (DSC) by 1.38% and decreasing 95% Hausdorff Distance (HD95) by 1.03 mm. Likewise, on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset, HEDN achieves 0.5% performance gains across all categories.
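The high-frequency/low-frequency split that Dual-DM is described as learning can be conveyed with a fixed low-pass decomposition. This box-filter sketch is ours and only illustrates the idea of separating contour detail from smooth content; the paper's module learns the separation via self-supervision:

```python
import numpy as np

def split_frequencies(image: np.ndarray, kernel: int = 3):
    """Split an image into low- and high-frequency parts with a simple
    box-filter low-pass; the residual carries the edge/contour detail
    that a segmentation network can attend to separately."""
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    low = np.zeros_like(image, dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            low += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    low /= kernel * kernel
    high = image - low          # residual = high-frequency component
    return low, high

img = np.array([[1.0, 2.0], [3.0, 4.0]])
low, high = split_frequencies(img)
print(np.allclose(low + high, img))  # True: decomposition is exact
```

In a learned variant, the low-pass filter would be replaced by a reconstruction branch and the residual supervised to capture boundary detail.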

14.
Neurooncol Adv ; 6(1): vdae140, 2024.
Article in English | MEDLINE | ID: mdl-39290874

ABSTRACT

Background: Evaluating longitudinal changes in gliomas is a time-intensive process with significant interrater variability. Automated segmentation could reduce interrater variability and increase workflow efficiency for assessment of treatment response. We sought to evaluate whether neural networks would be comparable to expert assessment of pre- and posttreatment diffuse glioma tissue subregions including resection cavities. Methods: A retrospective cohort of 647 MRIs of patients with diffuse gliomas (average 55.1 years; 29%/36%/34% female/male/unknown; 396 pretreatment and 251 posttreatment, median 237 days post-surgery) from 7 publicly available repositories in The Cancer Imaging Archive was split into training (536) and test/generalization (111) samples. T1, T1-post-contrast, T2, and FLAIR images were used as inputs into a 3D nnU-Net to predict 3 tumor subregions and resection cavities. We evaluated the performance of networks trained on pretreatment training cases (Pre-Rx network), posttreatment training cases (Post-Rx network), and both pre- and posttreatment cases (Combined networks). Results: Segmentation performance was as good as or better than interrater reliability, with median Dice scores for main tumor subregions ranging from 0.82 to 0.94 and strong correlations between manually segmented and predicted total lesion volumes (0.94 < R² values < 0.98). The Combined network performed similarly to the Pre-Rx network on pretreatment cases and the Post-Rx network on posttreatment cases, with fewer false positive resection cavities (7% vs 59%). Conclusions: Neural networks that accurately segment pre- and posttreatment diffuse gliomas have the potential to improve response assessment in clinical trials and reduce provider burden and errors in measurement.

15.
Cureus ; 16(8): e67119, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39290911

ABSTRACT

This study presents a detailed methodology for integrating three-dimensional (3D) printing technology into preoperative planning in neurosurgery. The increasing capabilities of 3D printing over the last decade have made it a valuable tool in medical fields such as orthopedics and dental practices. Neurosurgery can similarly benefit from these advancements, though the creation of accurate 3D models poses a significant challenge due to the technical expertise required and the cost of specialized software. This paper demonstrates a step-by-step process for developing a 3D physical model for preoperative planning using free, open-source software. A case involving a 62-year-old male with a large infiltrating tumor in the sacrum, originating from renal cell carcinoma, is used to illustrate the method. The process begins with the acquisition of a CT scan, followed by image reconstruction using InVesalius 3, an open-source software. The resulting 3D model is then processed in Autodesk Meshmixer (Autodesk, Inc., San Francisco, CA), where individual anatomical structures are segmented and prepared for printing. The model is printed using the Bambu Lab X1 Carbon 3D printer (Bambu Lab, Austin, TX), allowing for multicolor differentiation of structures such as bones, tumors, and blood vessels. The study highlights the practical aspects of model creation, including artifact removal, surface separation, and optimization for print volume. It discusses the advantages of multicolor printing for visual clarity in surgical planning and compares it with monochromatic and segmented printing approaches. The findings underscore the potential of 3D printing to enhance surgical precision and planning, providing a replicable protocol that leverages accessible technology. This work supports the broader adoption of 3D printing in neurosurgery, emphasizing the importance of collaboration between medical and engineering professionals to maximize the utility of these models in clinical practice.

16.
JOR Spine ; 7(3): e70003, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39291096

ABSTRACT

Background: Lumbar disc herniation (LDH) is a prevalent cause of low back pain. LDH patients commonly experience paraspinal muscle atrophy and fatty infiltration (FI), which further exacerbate the symptoms of low back pain. Magnetic resonance imaging (MRI) is crucial for assessing paraspinal muscle condition. Our study aims to develop a dual model for automated muscle segmentation and FI annotation on MRI, assisting clinicians in evaluating LDH conditions comprehensively. Methods: The study retrospectively collected data from patients diagnosed with LDH from December 2020 to May 2022. The dataset was split into a 7:3 ratio for training and testing, with an external test set prepared to validate model generalizability. The model's performance was evaluated using average precision (AP), recall, and F1 score. Consistency was assessed using the Dice similarity coefficient (DSC) and Cohen's kappa. The mean absolute percentage error (MAPE) was calculated to assess the error of the model's measurements of relative cross-sectional area (rCSA) and FI. The MAPE of FI measured by threshold algorithms was also calculated for comparison with the model. Results: A total of 417 patients were evaluated, comprising 216 males and 201 females, with a mean age of 49 ± 15 years. In the internal test set, the muscle segmentation model achieved an overall DSC of 0.92 ± 0.10, recall of 92.60%, and AP of 0.98. The fat annotation model attained a recall of 91.30%, F1 score of 0.82, and Cohen's kappa of 0.76. However, performance decreased on the external test set. For rCSA measurements, except for the longissimus (10.89%), the MAPE of the other muscles was less than 10%. When comparing the FI errors for each paraspinal muscle, the MAPE of the model was lower than that of the threshold algorithm. Conclusion: The models demonstrate outstanding performance, with lower error in FI measurement compared to thresholding algorithms.

18.
Med Phys ; 2024 Sep 22.
Article in English | MEDLINE | ID: mdl-39306864

ABSTRACT

BACKGROUND: Accurate pancreas and pancreatic tumor segmentation from abdominal scans is crucial for diagnosing and treating pancreatic diseases. Automated and reliable segmentation algorithms are highly desirable in both clinical practice and research. PURPOSE: Segmenting the pancreas and tumors is challenging due to their low contrast, irregular morphologies, and variable anatomical locations. Additionally, the substantial difference in size between the pancreas and small tumors makes this task difficult. This paper proposes an attention-enhanced multiscale feature fusion network (AMFF-Net) to address these issues via 3D attention and multiscale context fusion methods. METHODS: First, to prevent missed segmentation of tumors, we design the residual depthwise attention modules (RDAMs) to extract global features by expanding receptive fields of shallow layers in the encoder. Second, hybrid transformer modules (HTMs) are proposed to model deep semantic features and suppress irrelevant regions while highlighting critical anatomical characteristics. Additionally, the multiscale feature fusion module (MFFM) fuses adjacent top and bottom scale semantic features to address the size imbalance issue. RESULTS: The proposed AMFF-Net was evaluated on the public MSD dataset, achieving 82.12% DSC for the pancreas and 57.00% for tumors. It also demonstrated effective segmentation performance on the NIH and private datasets, outperforming previous state-of-the-art (SOTA) methods. Ablation studies verify the effectiveness of RDAMs, HTMs, and MFFM. CONCLUSIONS: We propose an effective deep learning network for pancreas and tumor segmentation from abdominal CT scans. The proposed modules can better leverage global dependencies and semantic information and achieve significantly higher accuracy than the previous SOTA methods.

19.
Psychiatry Res Neuroimaging ; 345: 111901, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39307122

ABSTRACT

RATIONALE AND OBJECTIVES: To explore the characteristics of brain structure in Chinese children with autism spectrum disorder (ASD) using an artificial intelligence automatic brain segmentation technique, and to diagnose children with ASD using machine learning (ML) methods in combination with structural magnetic resonance imaging (sMRI) features. METHODS: A total of 60 ASD children and 48 age- and sex-matched typically developing (TD) children were prospectively enrolled from January 2023 to April 2024. All subjects were scanned using 3D-T1 sequences. Automated brain segmentation techniques were used to obtain the standardized volume of each brain structure (the ratio of the absolute volume of the brain structure to the whole-brain volume). The standardized volumes of each brain structure in the two groups were statistically compared, and the volume data of brain areas with significant differences were combined with ML methods to diagnose and predict ASD patients. RESULTS: Compared with the TD group, the volumes of the right lateral orbitofrontal cortex, right medial orbitofrontal cortex, right pars opercularis, right pars triangularis, left hippocampus, bilateral parahippocampal gyrus, left fusiform gyrus, right superior temporal gyrus, bilateral insula, bilateral inferior parietal cortex, right precuneus cortex, bilateral putamen, left pallidum, and right thalamus were significantly increased in the ASD group (P < 0.05). Among six ML algorithms, support vector machine (SVM) and AdaBoost (AB) performed best in differentiating subjects with ASD from TD children, with average areas under the curve (AUC) of 0.91 and 0.92, respectively. CONCLUSION: Automatic brain segmentation technology based on artificial intelligence can rapidly and directly measure and display the volume of brain structures in children with autism spectrum disorder and typically developing children. Children with ASD show abnormalities in multiple brain structures, and when paired with sMRI features, ML algorithms perform well in the diagnosis of ASD.
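The reported AUC values have a rank interpretation: the AUC equals the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case (ties counting half). A minimal sketch with hypothetical scores, not the study's classifiers:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    P(score of a random positive > score of a random negative),
    with ties counted as half a win."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical scores: label 1 = ASD, 0 = TD.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(y, s))  # 8 of 9 positive-negative pairs ranked correctly
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so the reported 0.91–0.92 indicates strong discrimination between ASD and TD subjects.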

20.
Article in English | MEDLINE | ID: mdl-39307323

ABSTRACT

PURPOSE: Online adaptive proton therapy (oAPT) is essential to address interfractional anatomical changes in patients receiving pencil beam scanning proton therapy (PBSPT). Artificial intelligence (AI)-based auto-segmentation can increase efficiency and accuracy. Linear energy transfer (LET)-based biological effect evaluation can potentially mitigate possible adverse events caused by high LET. New spot arrangement based on the verification CT (vCT) can further improve the re-plan quality. We propose an oAPT workflow that incorporates all these functionalities and validate the feasibility of its clinical implementation with prostate patients. METHODS AND MATERIALS: The AI-based auto-segmentation tool AccuContour™ (Manteia, Xiamen, China) was seamlessly integrated into oAPT. An initial spot-arrangement tool on the vCT for re-optimization was implemented using ray tracing. An LET-based biological effect evaluation tool was developed to assess the overlap region of high dose and high LET in selected OARs. Eleven prostate cancer patients were retrospectively selected to verify the efficacy and efficiency of the proposed oAPT workflow. The time cost of each component in the workflow was recorded for analysis. RESULTS: The verification plan showed significant degradation of CTV coverage and of rectum and bladder sparing due to interfractional anatomical changes. Re-optimization on the vCT greatly improved plan quality. No overlap regions of high dose and high LET distributions were observed in the bladder or rectum in the re-plans. 3D gamma analyses in PSQA confirmed the accuracy of the re-plan doses before delivery (gamma passing rate = 99.57 ± 0.46%) and after delivery (98.59 ± 1.29%). The robustness of the re-plans passed all clinical requirements. The average time for complete execution of the workflow was 9.12 ± 0.85 minutes, excluding manual intervention time. CONCLUSION: The AI-facilitated oAPT workflow was demonstrated to be both efficient and effective, generating re-plans that significantly improved plan quality for prostate cancer treated with PBSPT.
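The gamma analysis used for PSQA combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: a reference point passes if some nearby evaluated point agrees within both tolerances combined. A deliberately simplified 1D sketch with global 3%/3 mm criteria and hypothetical dose profiles, not the clinical 3D implementation:

```python
import numpy as np

def gamma_pass_rate(ref, ev, coords, dd=0.03, dta=3.0):
    """Simplified 1D global gamma analysis. For each reference point the
    gamma index is the minimum, over all evaluated points, of the combined
    dose-difference / distance-to-agreement metric; a point passes if
    gamma <= 1. Returns the passing rate in percent."""
    ref, ev, coords = map(np.asarray, (ref, ev, coords))
    dose_tol = dd * ref.max()                        # global dose criterion
    dose_term = (ev[None, :] - ref[:, None]) / dose_tol
    dist_term = (coords[None, :] - coords[:, None]) / dta
    gamma = np.sqrt(dose_term**2 + dist_term**2).min(axis=1)
    return 100.0 * np.mean(gamma <= 1.0)

x = np.arange(0, 50, 1.0)                            # positions in mm
planned   = np.exp(-((x - 25.0) / 10) ** 2)          # planned dose profile
delivered = np.exp(-((x - 25.2) / 10) ** 2)          # 0.2 mm delivery shift
print(gamma_pass_rate(planned, delivered, x))        # 100.0: within tolerance
```

Passing rates near 100%, like the 99.57 ± 0.46% and 98.59 ± 1.29% reported here, mean almost every point of the delivered dose agrees with the re-plan within the chosen dose and distance tolerances.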
