Results 1 - 20 of 114
1.
J Med Imaging (Bellingham) ; 11(4): 044001, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988990

ABSTRACT

Purpose: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans. Approach: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net. Results: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary diseases. Conclusions: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.

2.
Sensors (Basel) ; 24(14)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39065873

ABSTRACT

In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates, and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. Our study evaluates the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Additionally, our algorithms prevent performance degradation, even when different types of sensors are used for the training data and real-world applications. This approach allows for the use of publicly available open datasets without the need to spend extensive time on dataset construction and annotation using the actual sensors deployed, thus significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements.
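The sensor-harmonization idea above can be sketched as histogram (quantile) matching between the intensity distributions of two LiDAR sensors. This is a minimal illustration under assumed data; the abstract does not specify the paper's actual rendering and interpolation algorithms, and all names here are hypothetical.

```python
import numpy as np

def match_intensity(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map source-sensor intensities onto the reference sensor's
    distribution via monotone histogram (quantile) matching."""
    src_sorted = np.sort(source)
    levels = np.arange(1, src_sorted.size + 1) / src_sorted.size
    ref_quantiles = np.quantile(reference, levels)
    # Each source value is replaced by the reference value at the same rank.
    return np.interp(source, src_sorted, ref_quantiles)

rng = np.random.default_rng(0)
train_sensor = rng.normal(100.0, 10.0, 5000)   # intensities in the training dataset
deploy_sensor = rng.normal(40.0, 5.0, 5000)    # intensities of the deployed sensor
mapped = match_intensity(deploy_sensor, train_sensor)
# mapped now roughly follows the training sensor's distribution
```

A network trained on the first sensor can then consume `mapped` intensities without the distribution shift the abstract describes.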

3.
J Imaging Inform Med ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075249

ABSTRACT

Central Serous Chorioretinopathy (CSCR) is a significant cause of vision impairment worldwide, with Photodynamic Therapy (PDT) emerging as a promising treatment strategy. The capability to precisely segment fluid regions in Optical Coherence Tomography (OCT) scans and predict the response to PDT treatment can substantially augment patient outcomes. This paper introduces a novel deep learning (DL) methodology for automated 3D segmentation of fluid regions in OCT scans, followed by a subsequent PDT response analysis for CSCR patients. Our approach utilizes the rich 3D contextual information from OCT scans to train a model that accurately delineates fluid regions. This model not only substantially reduces the time and effort required for segmentation but also offers a standardized technique, fostering further large-scale research studies. Additionally, by incorporating pre- and post-treatment OCT scans, our model is capable of predicting PDT response, hence enabling the formulation of personalized treatment strategies and optimized patient management. To validate our approach, we employed a robust dataset comprising 2,769 OCT scans (124 3D volumes), and the results obtained were highly satisfactory, outperforming current state-of-the-art methods. This research signifies an important milestone in the integration of DL advancements with practical clinical applications, propelling us a step closer towards improved management of CSCR. Furthermore, the methodologies and systems developed can be adapted and extrapolated to tackle similar challenges in the diagnosis and treatment of other retinal pathologies, favoring more comprehensive and personalized patient care.

4.
Sensors (Basel) ; 24(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38894336

ABSTRACT

The paranasal sinuses, a bilaterally symmetrical system of eight air-filled cavities, represent one of the most complex parts of the equine body. This study aimed to extract morphometric measures from computed tomography (CT) images of the equine head and to implement a clustering analysis for the computer-aided identification of age-related variations. Heads of 18 cadaver horses, aged 2-25 years, were CT-imaged and segmented to extract their volume, surface area, and relative density from the frontal sinus (FS), dorsal conchal sinus (DCS), ventral conchal sinus (VCS), rostral maxillary sinus (RMS), caudal maxillary sinus (CMS), sphenoid sinus (SS), palatine sinus (PS), and middle conchal sinus (MCS). Data were grouped into young, middle-aged, and old horse groups and clustered using the K-means clustering algorithm. Morphometric measurements varied according to the sinus position and age of the horses but not the body side. The volume and surface area of the VCS, RMS, and CMS increased with the age of the horses. With accuracy values of 0.72 for RMS, 0.67 for CMS, and 0.31 for VCS, the possibility of the age-related clustering of CT-based 3D images of equine paranasal sinuses was confirmed for RMS and CMS but disproved for VCS.
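The clustering step above can be reproduced in outline: standardize the morphometric features, then run K-means with three clusters for the three age groups. The numbers below are synthetic stand-ins, not the study's measurements, which are not given in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical (volume, surface area) pairs for one sinus across three
# age groups; the real study also used relative density per sinus.
young  = rng.normal([60.0, 90.0],  [5.0, 6.0], (6, 2))
middle = rng.normal([90.0, 120.0], [5.0, 6.0], (6, 2))
old    = rng.normal([120.0, 150.0], [5.0, 6.0], (6, 2))
X = np.vstack([young, middle, old])

features = StandardScaler().fit_transform(X)   # put measures on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
# With well-separated groups, each age group maps to its own cluster.
```

Cluster accuracy per sinus (as reported: 0.72 RMS, 0.67 CMS, 0.31 VCS) would then be computed by comparing `labels` against the known age groups.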


Subjects
Imaging, Three-Dimensional , Paranasal Sinuses , Horses , Animals , Cluster Analysis , Paranasal Sinuses/diagnostic imaging , Imaging, Three-Dimensional/methods , Multidetector Computed Tomography/methods , Algorithms
5.
Acad Radiol ; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38902109

ABSTRACT

RATIONALE AND OBJECTIVES: Cardiac magnetic resonance imaging is a crucial tool for analyzing, diagnosing, and formulating treatment plans for cardiovascular diseases. Currently, there is very little research focused on balancing cardiac segmentation performance with lightweight methods. Despite the existence of numerous efficient image segmentation algorithms, they primarily rely on complex and computationally intensive network models, making it challenging to implement them on resource-constrained medical devices. Furthermore, simplified models designed to meet device-lightweighting requirements may have limitations in comprehending and utilizing both global and local information for cardiac segmentation. MATERIALS AND METHODS: We propose a novel 3D high-performance lightweight medical image segmentation network, HL-UNet, for application in cardiac image segmentation. Specifically, in HL-UNet, we propose a novel residual-enhanced adaptive attention (REAA) module that combines residual-enhanced connectivity with an adaptive attention mechanism to efficiently capture key features of input images and optimize their representation, and we integrate the Visual Mamba (VSS) module to further enhance performance. RESULTS: Compared to large-scale models such as TransUNet, HL-UNet increased the Dice scores of the right ventricular cavity (RV), left ventricular myocardium (MYO), and left ventricular cavity (LV), the key indicators of cardiac image segmentation, by 1.61%, 5.03%, and 0.19%, respectively. At the same time, the Params and FLOPs of the model decreased by 41.3 M and 31.05 G, respectively. Furthermore, compared to lightweight models such as MISSFormer, HL-UNet improves the Dice scores of RV, MYO, and LV by 4.11%, 3.82%, and 4.33%, respectively, at similar or even lower parameter count and computational complexity. CONCLUSION: The proposed HL-UNet model captures local details and edge information in images while remaining lightweight.
Experimental results show that, compared with large-scale models, HL-UNet significantly reduces the number of parameters and computational complexity while maintaining performance, thereby increasing frames per second (FPS). Compared to lightweight models, HL-UNet shows substantial improvements across key metrics at similar or even lower parameter count and computational complexity.

6.
Methods ; 229: 9-16, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38838947

ABSTRACT

Robust segmentation of large and complex conjoined tree structures in 3-D is a major challenge in computer vision. This is particularly true in computational biology, where we often encounter data structures that are large in size but few in number, which poses a hard problem for learning algorithms. We show that merging multiscale opening with geodesic path propagation can shed new light on this classic machine vision challenge, while circumventing the learning issue by developing an unsupervised visual geometry approach (digital topology/morphometry). The novelty of the proposed MSO-GP method comes from the geodesic path propagation being guided by a skeletonization of the conjoined structure, which helps to achieve robust segmentation results in a particularly challenging task in this area: artery-vein separation from non-contrast pulmonary computed tomography angiograms. This is an important first step in measuring vascular geometry to then diagnose pulmonary diseases and to develop image-based phenotypes. We first present proof-of-concept results on synthetic data, and then verify the performance on pig lung and human lung data with less segmentation time and user intervention than competing methods.


Subjects
Algorithms , Imaging, Three-Dimensional , Animals , Imaging, Three-Dimensional/methods , Humans , Swine , Lung/diagnostic imaging , Computed Tomography Angiography/methods , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Computational Biology/methods
7.
bioRxiv ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38766074

ABSTRACT

Cell segmentation is a fundamental task: only by segmenting can we define the quantitative spatial unit for collecting measurements to draw biological conclusions. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation, and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive. Moreover, it is ambiguous, necessitating cross-referencing with other orthoviews. Lastly, there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real-life datasets comprising more than 70,000 cells, spanning single cells, cell aggregates, and tissue.

8.
Mitochondrion ; 76: 101882, 2024 May.
Article in English | MEDLINE | ID: mdl-38599302

ABSTRACT

Mitochondria are dynamic organelles that alter their morphological characteristics in response to functional needs. Therefore, mitochondrial morphology is an important indicator of mitochondrial function and cellular health. Reliable segmentation of mitochondrial networks in microscopy images is a crucial initial step for further quantitative evaluation of their morphology. However, 3D mitochondrial segmentation, especially in cells with complex network morphology, such as in highly polarized cells, remains challenging. To improve the quality of 3D segmentation of mitochondria in super-resolution microscopy images, we took a machine learning approach, using 3D Trainable Weka, an ImageJ plugin. We demonstrated that, compared with other commonly used methods, our approach segmented mitochondrial networks effectively, with improved accuracy in different polarized epithelial cell models, including differentiated human retinal pigment epithelial (RPE) cells. Furthermore, using several tools for quantitative analysis following segmentation, we revealed mitochondrial fragmentation in bafilomycin-treated RPE cells.


Subjects
Epithelial Cells , Imaging, Three-Dimensional , Machine Learning , Mitochondria , Humans , Mitochondria/metabolism , Epithelial Cells/metabolism , Imaging, Three-Dimensional/methods , Retinal Pigment Epithelium/cytology , Image Processing, Computer-Assisted/methods , Cell Line
10.
Int J Med Robot ; 20(2): e2633, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38654571

ABSTRACT

BACKGROUND: Allergic rhinitis constitutes a widespread health concern, with traditional treatments often proving to be painful and ineffective. Acupuncture targeting the pterygopalatine fossa is effective but complicated by the intricate nearby anatomy. METHODS: To enhance the safety and precision of targeting the pterygopalatine fossa, we introduce a deep learning-based model to refine its segmentation. Our model extends the U-Net framework with DenseASPP and integrates an attention mechanism for enhanced precision in the localisation and segmentation of the pterygopalatine fossa. RESULTS: The model achieves a Dice Similarity Coefficient of 93.89% and a 95% Hausdorff Distance of 2.53 mm, with notable precision. Remarkably, it uses only 1.98 M parameters. CONCLUSIONS: Our deep learning approach yields significant advancements in localising and segmenting the pterygopalatine fossa, providing a reliable basis for guiding pterygopalatine fossa-assisted punctures.


Subjects
Deep Learning , Pterygopalatine Fossa , Humans , Pterygopalatine Fossa/diagnostic imaging , Pterygopalatine Fossa/anatomy & histology , Algorithms , Rhinitis, Allergic/diagnostic imaging , Rhinitis, Allergic/therapy , Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Reproducibility of Results
11.
J Crohns Colitis ; 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38647203

ABSTRACT

BACKGROUND: Herein, we present a proof-of-concept study of 3-dimensional (3D) pouchography using virtual and printed 3D models of ileal pouch-anal anastomosis (IPAA) in patients with normal pouches and in cases of mechanical pouch complications. MATERIALS & METHODS: We performed a retrospective, descriptive case series of a convenience sample of 10 pouch patients, with or without pouch dysfunction, who had CT scans appropriate for segmentation, identified from our pouch registry. The steps involved in clinician-driven automated 3D reconstruction are presented. RESULTS: Three patients with no primary pouch pathology on CT imaging and seven patients with known pouch pathology identifiable with 3D reconstruction (pouch strictures, megapouch, pouch volvulus, and twisted pouches) underwent 3D virtual modeling; one normal and one twisted pouch were 3D printed. We found that 3D pouchography reliably identified staple lines (pouch body, anorectal circular and transverse, and tip of J), the relationships between staple lines, variations in pouch morphology, and pouch pathology. CONCLUSIONS: Three-dimensional reconstruction of IPAA morphology is highly feasible using readily available technology. In our practice, we have found 3D pouchography to be an extremely useful adjunct for diagnosing various mechanical pouch complications and improving planning for pouch salvage strategies. Given its ease of use and helpfulness in understanding pouch structure and function, we have started to routinely integrate 3D pouchography into our clinical pouch referral practice. Further study is needed to formally assess the value of this technique in the diagnosis of pouch pathology.

12.
Sensors (Basel) ; 24(2)2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38257521

ABSTRACT

The rapid evolution of 3D technology in recent years has brought about significant change in the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a system of cameras is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a system of cameras is synchronized and calibrated, and the data of each camera are then transformed to global coordinates. However, when reconstructing cattle in a real environment, difficulties such as fences and camera vibration can cause the reconstruction process to fail. A new scheme is proposed that automatically removes environmental fences and noise. An optimization method is proposed that interweaves camera pose updates, with the distances between each camera pose and its initial position added as part of the objective function. The difference between the cameras' point clouds and the mesh output is reduced from 7.5 mm to 5.5 mm. The experimental results showed that our scheme can automatically generate a high-quality mesh in a real environment. This scheme provides data that can be used for other research on Korean cattle.
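The regularized objective described above — alignment error plus a penalty on drift from the initial camera position — can be sketched for a single translation parameter. This is a toy gradient-descent version with invented data, not the paper's optimizer; all names are hypothetical.

```python
import numpy as np

def refine_translation(points, target, t0, lam=0.5, iters=200, lr=0.1):
    """Minimize  mean||points + t - target||^2 + lam * ||t - t0||^2 :
    align the camera's points to mesh correspondences while penalizing
    drift of the pose from its initial estimate t0."""
    t = t0.astype(float).copy()
    for _ in range(iters):
        grad = 2.0 * np.mean((points + t) - target, axis=0) + 2.0 * lam * (t - t0)
        t -= lr * grad
    return t

points = np.zeros((4, 3))                    # toy camera point cloud
target = np.tile([1.0, 0.0, 0.0], (4, 1))    # toy mesh correspondences
t0 = np.zeros(3)                             # initial camera position
t = refine_translation(points, target, t0)
# Closed form: (mean offset + lam * t0) / (1 + lam) = [2/3, 0, 0]
```

The regularizer keeps the solved pose near its calibrated starting point, which is the role the distance term plays in the paper's objective.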

13.
Med Image Anal ; 93: 103090, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38241763

ABSTRACT

Many clinical and research studies of the human brain require accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure high accuracy only when tested on data from the same sites exploited in training (i.e., internal data). Performance degradation experienced on external data (i.e., unseen volumes from unseen sites) is due to the inter-site variability in intensity distributions and to unique artefacts caused by different MR scanner models and acquisition parameters. To mitigate this site-dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels-of-detail (LOD) able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior helpful in identifying brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, at 1.5-3 T, from a population spanning 8 to 90 years of age. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and is robust to challenging anatomical variations. Its portability paves the way for large-scale applications across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available on the project website.


Subjects
Magnetic Resonance Imaging , Neuroimaging , Humans , Child , Adolescent , Young Adult , Adult , Middle Aged , Aged , Aged, 80 and over , Brain/diagnostic imaging , Algorithms , Artifacts
14.
Methods Mol Biol ; 2725: 131-146, 2024.
Article in English | MEDLINE | ID: mdl-37856022

ABSTRACT

Volume electron microscopy (vEM) is a high-resolution imaging technique capable of revealing the 3D structure of cells, tissues, and model organisms. This imaging modality is gaining prominence due to its ability to provide a comprehensive view of cells at the nanometer scale. The visualization and quantitative analysis of individual subcellular structures, however, requires segmentation of each 2D electron micrograph slice of the 3D vEM dataset; this process is extremely laborious, de facto limiting its applications and throughput. To address these limitations, deep learning approaches have recently been developed, including the Empanada-Napari plugin, an open-source tool for automated segmentation based on a Panoptic-DeepLab (PDL) architecture. In this chapter, we provide a step-by-step protocol describing manual segmentation using 3dmod within the IMOD package and automated segmentation using the Empanada-Napari plugin for the 3D reconstruction of airway cellular structures.


Subjects
Imaging, Three-Dimensional , Volume Electron Microscopy , Imaging, Three-Dimensional/methods , Machine Learning , Thorax , Image Processing, Computer-Assisted/methods
15.
Diagnostics (Basel) ; 13(23)2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38066804

ABSTRACT

We present a very rare case of a child with nine supernumerary teeth to analyze the potential, benefits, and limitations of artificial intelligence, as well as two commercial tools for tooth segmentation. Artificial intelligence (AI) is increasingly finding applications in dentistry today, particularly in radiography. Special attention is given to models based on convolutional neural networks (CNN) and their application in automatic segmentation of the oral cavity and tooth structures. The integration of AI is gaining increasing attention, and the automation of the detection and localization of supernumerary teeth can accelerate the treatment planning process. Despite advancements in 3D segmentation techniques, relying on trained professionals remains crucial. Therefore, human expertise should remain key, and AI should be seen as a support rather than a replacement. Generally, a comprehensive tool that can satisfy all clinical needs in terms of supernumerary teeth and their segmentation is not yet available, so it is necessary to incorporate multiple tools into practice.

16.
Head Face Med ; 19(1): 54, 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38098053

ABSTRACT

INTRODUCTION: An accurate identification of mandibular asymmetries is required by modern orthodontics and orthognathic surgery to improve diagnosis and treatment planning of such deformities. Although craniofacial deformities are very frequent pathologies, some types of asymmetries can be very difficult to assess without the proper diagnostic tools. The purpose of this study was to implement the usage of three-dimensional (3D) segmentation procedures to identify asymmetries at the mandibular level in adult patients with different vertical and sagittal patterns, where the asymmetries could go unnoticed at the observational level. METHODS: The study sample comprised 60 adult patients (33 women and 27 men, aged between 18 and 60 years). Subjects were divided into 3 sagittal and vertical skeletal groups. CBCT images were segmented, mirrored, and voxel-based registered with reference landmarks using ITK-SNAP® and 3DSlicer® software. 3D surface models were constructed to evaluate the degree of asymmetry at different anatomical levels. RESULTS: There was a degree of asymmetry, with the left hemimandible tending to contain the right one (0.123 ± 0.270 mm; CI95% 0.036-0.222; p < 0.001). Although the subjects under study did not present significant differences between mandibular asymmetries and their sagittal or vertical skeletal pattern (p = 0.809 and p = 0.453, respectively), a statistically significant difference was found depending on the anatomical region (p < 0.001; CI95% = 1.020-1.021), being highest in the condyle, followed by the ramus and the corpus. CONCLUSIONS: Although mandibular asymmetries cannot be correlated with vertical and sagittal skeletal patterns in symmetric patients, knowledge of 3D segmentation procedures and color maps can provide valuable information to identify mandibular asymmetries.
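The mirror-and-compare idea behind such asymmetry colour maps can be sketched with point clouds: reflect one side across the sagittal plane, then measure nearest-neighbour distances to the other side. A minimal, hypothetical example (the study itself used voxel-based registration of full CBCT segmentations):

```python
import numpy as np
from scipy.spatial import cKDTree

def asymmetry_map(left_pts, right_pts, plane_x=0.0):
    """Mirror the right-side points across the sagittal plane x = plane_x,
    then return nearest-neighbour distances to the left side -- a rough
    stand-in for a mirrored, registered surface-distance colour map."""
    mirrored = right_pts.copy()
    mirrored[:, 0] = 2.0 * plane_x - mirrored[:, 0]
    dists, _ = cKDTree(left_pts).query(mirrored)
    return dists  # per-point asymmetry, in the input units

# Hypothetical surfaces: the right side sits 0.1 units farther from midline.
left = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
right = np.array([[-1.1, 0.0, 0.0], [-1.1, 1.0, 0.0]])
dists = asymmetry_map(left, right)   # each distance is 0.1
```

Per-region summaries of such distances (condyle, ramus, corpus) would correspond to the regional comparisons reported above.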


Subjects
Cone-Beam Computed Tomography , Imaging, Three-Dimensional , Adult , Male , Humans , Female , Adolescent , Young Adult , Middle Aged , Cone-Beam Computed Tomography/methods , Facial Asymmetry/diagnostic imaging , Mandible/diagnostic imaging , Mandibular Condyle
17.
BMC Bioinformatics ; 24(1): 480, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38102537

ABSTRACT

BACKGROUND: Spatial mapping of transcriptional states provides valuable biological insights into cellular functions and interactions in the context of the tissue. Accurate 3D cell segmentation is a critical step in the analysis of this data towards understanding diseases and normal development in situ. Current approaches designed to automate 3D segmentation include stitching masks along one dimension, training a 3D neural network architecture from scratch, and reconstructing a 3D volume from 2D segmentations on all dimensions. However, the applicability of existing methods is hampered by inaccurate segmentations along the non-stitching dimensions, the lack of high-quality diverse 3D training data, and inhomogeneity of image resolution along orthogonal directions due to acquisition constraints; as a result, they have not been widely used in practice. METHODS: To address these challenges, we formulate the problem of finding cell correspondence across layers with a novel optimal transport (OT) approach. We propose CellStitch, a flexible pipeline that segments cells from 3D images without requiring large amounts of 3D training data. We further extend our method to interpolate internal slices from highly anisotropic cell images to recover isotropic cell morphology. RESULTS: We evaluated the performance of CellStitch on eight 3D plant microscopic datasets with diverse anisotropy levels and cell shapes. CellStitch substantially outperforms the state-of-the-art methods on anisotropic images and achieves comparable segmentation quality to competing methods in the isotropic setting. We benchmarked and reported 3D segmentation results of all the methods with instance-level precision, recall, and average precision (AP) metrics. CONCLUSIONS: The proposed OT-based 3D segmentation pipeline outperformed the existing state-of-the-art methods on datasets with nonzero anisotropy, providing high-fidelity recovery of 3D cell morphology from microscopic images.
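The layer-correspondence step can be illustrated with an assignment-based simplification of the OT formulation: build a cost matrix from label overlaps in adjacent slices (cost = 1 − IoU) and solve it with the Hungarian algorithm. This is a sketch of the idea, not CellStitch's actual implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_slices(masks_a, masks_b):
    """Match cell labels between two adjacent 2D slices by maximizing
    overlap: a hard-assignment simplification of the optimal-transport
    matching (cost = 1 - IoU between each label pair)."""
    ids_a = [i for i in np.unique(masks_a) if i != 0]   # 0 = background
    ids_b = [j for j in np.unique(masks_b) if j != 0]
    cost = np.ones((len(ids_a), len(ids_b)))
    for r, i in enumerate(ids_a):
        for c, j in enumerate(ids_b):
            inter = np.sum((masks_a == i) & (masks_b == j))
            union = np.sum((masks_a == i) | (masks_b == j))
            cost[r, c] = 1.0 - inter / union
    rows, cols = linear_sum_assignment(cost)
    # Keep only pairs that actually overlap (cost < 1).
    return {int(ids_a[r]): int(ids_b[c]) for r, c in zip(rows, cols)
            if cost[r, c] < 1.0}

a = np.array([[1, 1, 0], [0, 0, 2]])
b = np.array([[5, 5, 0], [0, 0, 7]])
print(match_slices(a, b))  # → {1: 5, 2: 7}
```

Chaining such matches across all slice pairs yields consistent 3D cell identities; OT generalizes this to soft, mass-splitting correspondences.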


Subjects
Imaging, Three-Dimensional , Neural Networks, Computer , Anisotropy , Imaging, Three-Dimensional/methods , Image Processing, Computer-Assisted/methods
18.
BMC Neurosci ; 24(1): 49, 2023 09 14.
Article in English | MEDLINE | ID: mdl-37710208

ABSTRACT

BACKGROUND: Intervertebral disc herniation, degenerative lumbar spinal stenosis, and other lumbar spine diseases can occur across most age groups. MRI examination is the most commonly used detection method for lumbar spine lesions owing to its good soft-tissue image resolution. However, diagnostic accuracy is highly dependent on the experience of the diagnostician, leading to subjective errors, differences in diagnostic criteria across multi-center studies in different hospitals, and inefficient diagnosis. These factors necessitate the standardized interpretation and automated classification of lumbar spine MRI to achieve objective consistency. In this research, a deep learning network based on SAFNet is proposed to solve the above challenges. METHODS: In this research, low-level, mid-level, and high-level features of spine MRI are extracted. ASPP is used to process the high-level features. A multi-scale feature fusion method is used to increase the scene perception ability of the low-level and mid-level features. The high-level features are further processed using global adaptive pooling and a sigmoid function to obtain new high-level features, which are then point-multiplied with the mid-level and low-level features. The resulting high-level, low-level, and mid-level features are all sampled to the same size and concatenated in the channel dimension to output the final result. RESULTS: The DSCs of SAFNet for segmenting 17 vertebral structures across 5 folds are 79.46 ± 4.63%, 78.82 ± 7.97%, 81.32 ± 3.45%, 80.56 ± 5.47%, and 80.83 ± 3.48%, with an average DSC of 80.32 ± 5.00%. Compared to existing methods, SAFNet provides better segmentation results and has important implications for the diagnosis of spinal and lumbar diseases.
CONCLUSIONS: This research proposes SAFNet, a highly accurate and robust spine segmentation deep learning network capable of providing effective anatomical segmentation for diagnostic purposes. The results demonstrate the effectiveness of the proposed method and its potential for improving radiological diagnosis accuracy.
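The fusion step described in the methods — global adaptive pooling plus a sigmoid producing channel weights that point-multiply lower-level features — reduces to a few lines. Shapes and values below are invented for illustration; this is a sketch of the mechanism, not the SAFNet code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_features(high, low):
    """Global average pooling over each channel of the high-level map,
    squashed by a sigmoid, yields per-channel weights that reweight the
    lower-level features point-wise (shapes: C x H x W)."""
    weights = sigmoid(high.mean(axis=(1, 2)))   # global adaptive pooling + sigmoid
    return low * weights[:, None, None]         # channel-wise point-multiplication

high = np.zeros((2, 4, 4))
high[1] += 10.0                 # channel 1 is strongly activated
low = np.ones((2, 4, 4))
out = gate_features(high, low)  # channel 0 scaled by 0.5, channel 1 by ~1.0
```

The high-level activations thus decide how much of each low-level channel survives into the fused representation.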

19.
Med Phys ; 50(11): 6990-7002, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37738468

ABSTRACT

PURPOSE: Deep learning-based networks have become increasingly popular in the field of medical image segmentation. The purpose of this research was to develop and optimize a new architecture for automatic segmentation of the prostate gland and normal organs in the pelvic, thoracic, and upper gastro-intestinal (GI) regions. METHODS: We developed an architecture which combines a shifted-window (Swin) transformer with a convolutional U-Net. The network includes a parallel encoder, a cross-fusion block, and a CNN-based decoder to extract local and global information and merge related features on the same scale. A skip connection is applied between the cross-fusion block and decoder to integrate low-level semantic features. Attention gates (AGs) are integrated within the CNN to suppress features in image background regions. Our network is termed "SwinAttUNet." We optimized the architecture for automatic image segmentation. Training datasets consisted of planning-CT datasets from 300 prostate cancer patients from an institutional database and 100 CT datasets from a publicly available dataset (CT-ORG). Images were linearly interpolated and resampled to a spatial resolution of (1.0 × 1.0 × 1.5) mm³. A volume patch (192 × 192 × 96) was used for training and inference, and the dataset was split into training (75%), validation (10%), and test (15%) cohorts. Data augmentation transforms were applied, consisting of random flip, rotation, and intensity scaling. The loss function comprised Dice and cross-entropy terms, equally weighted and summed. We evaluated Dice coefficients (DSC), 95th percentile Hausdorff Distances (HD95), and Average Surface Distances (ASD) between results of our network and ground truth data. RESULTS: For SwinAttUNet, DSC values were 86.54 ± 1.21, 94.15 ± 1.17, and 87.15 ± 1.68% and HD95 values were 5.06 ± 1.42, 3.16 ± 0.93, and 5.54 ± 1.63 mm for the prostate, bladder, and rectum, respectively. Respective ASD values were 1.45 ± 0.57, 0.82 ± 0.12, and 1.42 ± 0.38 mm.
For the lung, liver, kidneys and pelvic bones, respective DSC values were: 97.90 ± 0.80, 96.16 ± 0.76, 93.74 ± 2.25, and 89.31 ± 3.87%. Respective HD95 values were: 5.13 ± 4.11, 2.73 ± 1.19, 2.29 ± 1.47, and 5.31 ± 1.25 mm. Respective ASD values were: 1.88 ± 1.45, 1.78 ± 1.21, 0.71 ± 0.43, and 1.21 ± 1.11 mm. Our network outperformed several existing deep learning approaches using only attention-based convolutional or Transformer-based feature strategies, as detailed in the results section. CONCLUSIONS: We have demonstrated that our new architecture combining Transformer- and convolution-based features is able to better learn the local and global context for automatic segmentation of multi-organ, CT-based anatomy.
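The headline metric here can be made concrete: the Dice coefficient used throughout is twice the overlap divided by the summed mask sizes. A self-contained check on a toy mask pair (the data are invented):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    inter = np.sum(pred * truth)
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1                 # 16 foreground pixels
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1                  # same size, shifted one row down
score = dice_coefficient(pred, truth)  # overlap 12 → 2*12/(16+16) = 0.75
```

HD95 and ASD complement this overlap score with boundary-distance statistics, which is why the abstract reports all three.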


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Male , Humans , Image Processing, Computer-Assisted/methods , Databases, Factual , Tomography, X-Ray Computed/methods
20.
Prog Urol ; 33(10): 509-518, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37633733

ABSTRACT

INTRODUCTION: Indications for percutaneous ablation (PA) are gradually expanding to T1b renal tumors (4-7 cm). Few data exist on the alteration of renal functional volume (RFV) post-PA, yet RFV is a surrogate marker of glomerular filtration rate (GFR) impairment after partial nephrectomy (PN). The objective was to compare RFV and GFR at 1 year post-PN or post-PA in this T1b population. METHODS: Patients with a unifocal renal tumor ≥ 4 cm treated between 2014 and 2019 were included. Tumor, homolateral (RFVh), contralateral, and total RFV were assessed by manual segmentation (3D Slicer) before and at 1 year after treatment, as was GFR. Loss of RFV, contralateral hypertrophy, and preservation of GFR were compared between the two groups (PN vs. PA). RESULTS: 144 patients were included (87 PN, 57 PA). Preoperatively, the PA group was older (74 vs. 59 years; P<0.0001), had more impaired GFR (73 vs. 85 mL/min; P=0.0026), and had smaller tumor volume (31.1 vs. 55.9 cm3; P=0.0007) compared to the PN group. At 1 year, the PN group had significantly more homolateral RFV loss (-19 vs. -14%; P=0.002) and more contralateral compensatory hypertrophy (+4% vs. +1.8%; P=0.02). Total RFV loss was similar between both groups (-21.7 vs. -19 cm3; P=0.07). GFR preservation was better in the PN group (95.9 vs. 90.7%; P=0.03). In multivariate analysis, age and tumor size were associated with loss of RFVh. CONCLUSION: For T1b renal tumors, PN is associated with superior compensatory hypertrophy compared with PA, compensating for the higher RFVh loss and resulting in similar total RFV change between groups. The superior post-PN GFR preservation suggests that the preserved quantitative RFV factor alone is insufficient; the underlying quality of the parenchyma likely plays a major role in postoperative GFR.


Subjects
Kidney Neoplasms , Humans , Kidney Neoplasms/surgery , Nephrectomy , Kidney/surgery , Glomerular Filtration Rate , Hypertrophy