Results 1 - 20 of 119
1.
Cell ; 184(26): 6361-6377.e24, 2021 12 22.
Article in English | MEDLINE | ID: mdl-34875226

ABSTRACT

Determining the spatial organization and morphological characteristics of molecularly defined cell types is a major bottleneck for characterizing the architecture underpinning brain function. We developed Expansion-Assisted Iterative Fluorescence In Situ Hybridization (EASI-FISH) to survey gene expression in brain tissue, as well as a turnkey computational pipeline to rapidly process large EASI-FISH image datasets. EASI-FISH was optimized for thick brain sections (300 µm) to facilitate reconstruction of spatio-molecular domains that generalize across brains. Using the EASI-FISH pipeline, we investigated the spatial distribution of dozens of molecularly defined cell types in the lateral hypothalamic area (LHA), a brain region with poorly defined anatomical organization. Mapping cell types in the LHA revealed nine spatially and molecularly defined subregions. EASI-FISH also facilitates iterative reanalysis of scRNA-seq datasets to determine marker genes that further dissociate spatial and morphological heterogeneity. The EASI-FISH pipeline democratizes mapping molecularly defined cell types, enabling discoveries about brain organization.


Subject(s)
Lateral Hypothalamic Area/metabolism, Fluorescence In Situ Hybridization, Animals, Biomarkers/metabolism, Gene Expression Profiling, Gene Expression Regulation, Lateral Hypothalamic Area/cytology, Three-Dimensional Imaging, Male, Inbred C57BL Mice, Neurons/metabolism, Neuropeptides/metabolism, Proto-Oncogene Proteins c-fos/metabolism, RNA/metabolism, RNA-Seq, Single-Cell Analysis, Genetic Transcription
2.
Methods ; 229: 9-16, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38838947

ABSTRACT

Robust segmentation of large and complex conjoined tree structures in 3-D is a major challenge in computer vision. This is particularly true in computational biology, where we often encounter structures that are large in size but few in number, which poses a hard problem for learning algorithms. We show that merging multiscale opening with geodesic path propagation can shed new light on this classic machine vision challenge, while circumventing the learning issue through an unsupervised visual geometry approach (digital topology/morphometry). The novelty of the proposed MSO-GP method comes from the geodesic path propagation being guided by a skeletonization of the conjoined structure, which helps achieve robust segmentation in a particularly challenging task in this area: artery-vein separation from non-contrast pulmonary computed tomography angiograms. This is an important first step in measuring vascular geometry in order to diagnose pulmonary diseases and develop image-based phenotypes. We first present proof-of-concept results on synthetic data, and then verify the performance on pig lung and human lung data with less segmentation time and user intervention than competing methods.
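A minimal sketch of the geodesic-propagation idea, not the authors' full MSO-GP pipeline: given a binary vessel mask and one seed per vessel tree, each voxel is assigned to whichever seed is closer along paths confined to the mask. The toy volume, seed coordinates, and cost values are illustrative assumptions.

```python
import numpy as np
from skimage.graph import MCP_Geometric

def geodesic_label(mask, artery_seed, vein_seed):
    """Assign each voxel of a binary vessel mask to the seed that is
    closer along paths restricted to the mask (geodesic distance)."""
    # Travel cost: cheap inside the vessel, effectively impassable outside.
    costs = np.where(mask, 1.0, 1e9)
    dist = []
    for seed in (artery_seed, vein_seed):
        mcp = MCP_Geometric(costs)
        cum_costs, _ = mcp.find_costs([seed])
        dist.append(cum_costs)
    # Label 1 = artery, 2 = vein, 0 = background.
    labels = np.where(dist[0] <= dist[1], 1, 2)
    labels[~mask] = 0
    return labels

# Toy volume: two conjoined tubes with one seed placed in each.
vol = np.zeros((3, 10, 10), dtype=bool)
vol[:, 4, :] = True   # "artery"
vol[:, 5, :] = True   # "vein", touching the artery
print(geodesic_label(vol, (1, 4, 0), (1, 5, 9))[1])
```

In the paper the propagation is additionally guided by a skeletonization of the conjoined structure; the sketch above omits that step.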


Subject(s)
Algorithms, Three-Dimensional Imaging, Animals, Three-Dimensional Imaging/methods, Humans, Swine, Lung/diagnostic imaging, Computed Tomography Angiography/methods, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Computational Biology/methods
3.
Mol Microbiol ; 119(6): 659-676, 2023 06.
Article in English | MEDLINE | ID: mdl-37066636

ABSTRACT

Bacteria often grow into matrix-encased three-dimensional (3D) biofilm communities, which can be imaged at cellular resolution using confocal microscopy. From these 3D images, measurements of single-cell properties with high spatiotemporal resolution are required to investigate cellular heterogeneity and dynamical processes inside biofilms. However, the required measurements rely on the automated segmentation of bacterial cells in 3D images, which is a technical challenge. To improve the accuracy of single-cell segmentation in 3D biofilms, we first evaluated recent classical and deep learning segmentation algorithms. We then extended StarDist, a state-of-the-art deep learning algorithm, by optimizing the post-processing for bacteria, which resulted in the most accurate segmentation results for biofilms among all investigated algorithms. To generate the large 3D training dataset required for deep learning, we developed an iterative process of automated segmentation followed by semi-manual correction, resulting in >18,000 annotated Vibrio cholerae cells in 3D images. We demonstrate that this large training dataset and the neural network with optimized post-processing yield accurate segmentation results for biofilms of different species and on biofilm images from different microscopes. Finally, we used the accurate single-cell segmentation results to track cell lineages in biofilms and to perform spatiotemporal measurements of single-cell growth rates during biofilm development.
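For readers who want to try the StarDist route described above, a minimal sketch of 3D instance prediction with explicitly set post-processing thresholds (the probability and non-maximum-suppression cut-offs are the knobs the authors optimized for bacteria). The pretrained model name and threshold values here are illustrative, not the ones from the paper, which trained its own network on >18,000 annotated V. cholerae cells.

```python
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist3D

# Any registered pretrained model can stand in here for the sketch.
model = StarDist3D.from_pretrained("3D_demo")

img = np.random.rand(64, 128, 128)  # placeholder for a confocal z-stack

# Percentile normalization, then instance segmentation. prob_thresh and
# nms_thresh are the post-processing parameters tuned for densely
# packed bacterial cells; the values below are illustrative.
labels, details = model.predict_instances(
    normalize(img, 1, 99.8),
    prob_thresh=0.5,
    nms_thresh=0.3,
)
print(labels.max(), "cells found")
```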


Subject(s)
Deep Learning, Cell Lineage, Three-Dimensional Imaging/methods, Algorithms, Biofilms, Bacteria, Computer-Assisted Image Processing/methods
4.
J Cell Sci ; 135(7)2022 04 01.
Article in English | MEDLINE | ID: mdl-35420128

ABSTRACT

For the past century, the nucleus has been the focus of extensive investigations in cell biology. However, many questions remain about how its shape and size are regulated during development, in different tissues, or during disease and aging. To track these changes, microscopy has long been the tool of choice. Image analysis has revolutionized this field of research by providing computational tools that can be used to translate qualitative images into quantitative parameters. Many tools have been designed to delimit objects in 2D and, more recently, in 3D, in order to define their shape, number, or position in nuclear space. Today, the field is driven by deep-learning methods, most of which take advantage of convolutional neural networks. These techniques are remarkably well suited to biomedical images when trained using large datasets and powerful computer graphics cards. To promote these innovative and promising methods among cell biologists, this Review summarizes the main concepts and terminology of deep learning. Special emphasis is placed on the availability of these methods. We highlight why the quality and characteristics of training image datasets are important and where to find them, as well as how to create, store, and share image datasets. Finally, we describe deep-learning methods well suited for 3D analysis of nuclei and classify them according to their level of usability for biologists. Out of more than 150 published methods, we identify fewer than 12 that biologists can use, and we explain why this is the case. Based on this experience, we propose best practices for sharing deep-learning methods with biologists.


Subject(s)
Deep Learning, Cell Nucleus, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging, Microscopy/methods, Neural Networks (Computer)
5.
J Pathol ; 260(4): 390-401, 2023 08.
Article in English | MEDLINE | ID: mdl-37232213

ABSTRACT

Prostate cancer treatment decisions rely heavily on subjective visual interpretation [assigning Gleason patterns or International Society of Urological Pathology (ISUP) grade groups] of limited numbers of two-dimensional (2D) histology sections. Under this paradigm, interobserver variance is high, with ISUP grades not correlating well with outcome for individual patients, and this contributes to the over- and undertreatment of patients. Recent studies have demonstrated improved prognostication of prostate cancer outcomes based on computational analyses of glands and nuclei within 2D whole slide images. Our group has also shown that the computational analysis of three-dimensional (3D) glandular features, extracted from 3D pathology datasets of whole intact biopsies, can allow for improved recurrence prediction compared to corresponding 2D features. Here we seek to expand on these prior studies by exploring the prognostic value of 3D shape-based nuclear features in prostate cancer (e.g. nuclear size, sphericity). 3D pathology datasets were generated using open-top light-sheet (OTLS) microscopy of 102 cancer-containing biopsies extracted ex vivo from the prostatectomy specimens of 46 patients. A deep learning-based workflow was developed for 3D nuclear segmentation within the glandular epithelium versus stromal regions of the biopsies. 3D shape-based nuclear features were extracted, and a nested cross-validation scheme was used to train a supervised machine classifier based on 5-year biochemical recurrence (BCR) outcomes. Nuclear features of the glandular epithelium were found to be more prognostic than stromal cell nuclear features (area under the ROC curve [AUC] = 0.72 versus 0.63). 3D shape-based nuclear features of the glandular epithelium were also more strongly associated with the risk of BCR than analogous 2D features (AUC = 0.72 versus 0.62). The results of this preliminary investigation suggest that 3D shape-based nuclear features are associated with prostate cancer aggressiveness and could be of value for the development of decision-support tools. © 2023 The Pathological Society of Great Britain and Ireland.
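As a schematic of the evaluation strategy described above (shape features per biopsy, a supervised classifier, and AUC for 5-year BCR), here is a hedged scikit-learn sketch. The feature matrix, labels, and classifier choice are placeholders, and the paper's nested cross-validation details are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(102, 8))       # 3D nuclear shape features per biopsy
y = rng.integers(0, 2, size=102)    # 5-year biochemical recurrence label

# Standardize features, fit a linear classifier, report cross-validated AUC.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X, y, cv=StratifiedKFold(5), scoring="roc_auc")
print(f"mean AUC: {auc.mean():.2f}")
```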


Subject(s)
Prostate, Prostatic Neoplasms, Male, Humans, Neoplasm Grading, Prostate/pathology, Prostatic Neoplasms/pathology, Prognosis, Prostatectomy/methods, Risk Assessment
6.
Sensors (Basel) ; 24(2)2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38257521

ABSTRACT

The rapid evolution of 3D technology in recent years has brought significant change to the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a system of cameras is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a system of cameras is synchronized and calibrated, and the data from each camera are transformed to global coordinates. However, when reconstructing cattle in a real environment, difficulties such as fences and camera vibration can cause the reconstruction process to fail. A new scheme is proposed that automatically removes environmental fences and noise. An optimization method is proposed that interleaves camera pose updates and adds the distances between each camera pose and its initial position to the objective function. The difference between the cameras' point clouds and the mesh output is reduced from 7.5 mm to 5.5 mm. The experimental results showed that our scheme can automatically generate a high-quality mesh in a real environment. This scheme provides data that can be used for other research on Korean cattle.
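The pose-regularized objective can be illustrated abstractly: a data term measures misalignment of a camera's point cloud, and a penalty keeps the estimated pose near its initial calibration. A toy translation-only version with scipy, where align_error is a stand-in for the real point-to-mesh distance:

```python
import numpy as np
from scipy.optimize import minimize

def align_error(t, points, target):
    """Stand-in data term: RMS distance of translated points to a target cloud."""
    return np.sqrt(np.mean(np.sum((points + t - target) ** 2, axis=1)))

def objective(t, points, target, t_init, lam=0.1):
    # Data term plus a penalty tying the pose to its initial calibration,
    # mirroring the distance-to-initial-position term described above.
    return align_error(t, points, target) + lam * np.linalg.norm(t - t_init)

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))
target = pts + np.array([0.5, -0.2, 0.1])   # ground-truth offset
t0 = np.zeros(3)                            # initial camera position

res = minimize(objective, t0, args=(pts, target, t0))
print("estimated translation:", res.x.round(3))
```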

7.
Sensors (Basel) ; 24(14)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39065873

ABSTRACT

In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. We evaluate the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Our algorithms prevent performance degradation even when different types of sensors are used for the training data and real-world applications. This allows publicly available open datasets to be used without spending extensive time constructing and annotating datasets with the actual deployed sensors, significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements.
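One simple way to render the intensity of a deployed sensor into the distribution of the training sensor is quantile (histogram) matching; a hedged numpy sketch, assuming representative intensity samples from both sensors are available (the paper's actual rendering and interpolation schemes are more involved):

```python
import numpy as np

def match_intensity(source, reference, n_quantiles=256):
    """Map source intensities onto the reference sensor's distribution
    by aligning their empirical quantiles."""
    q = np.linspace(0, 100, n_quantiles)
    src_q = np.percentile(source, q)
    ref_q = np.percentile(reference, q)
    return np.interp(source, src_q, ref_q)

rng = np.random.default_rng(2)
train_sensor = rng.gamma(2.0, 30.0, 50_000)   # intensities seen in training
field_sensor = rng.gamma(5.0, 10.0, 10_000)   # intensities from deployed LiDAR

harmonized = match_intensity(field_sensor, train_sensor)
print(harmonized.mean().round(1), train_sensor.mean().round(1))
```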

8.
Sensors (Basel) ; 24(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38894336

ABSTRACT

The paranasal sinuses, a bilaterally symmetrical system of eight air-filled cavities, represent one of the most complex parts of the equine body. This study aimed to extract morphometric measures from computed tomography (CT) images of the equine head and to implement a clustering analysis for the computer-aided identification of age-related variations. Heads of 18 cadaver horses, aged 2-25 years, were CT-imaged and segmented to extract their volume, surface area, and relative density from the frontal sinus (FS), dorsal conchal sinus (DCS), ventral conchal sinus (VCS), rostral maxillary sinus (RMS), caudal maxillary sinus (CMS), sphenoid sinus (SS), palatine sinus (PS), and middle conchal sinus (MCS). Data were grouped into young, middle-aged, and old horse groups and clustered using the K-means clustering algorithm. Morphometric measurements varied according to the sinus position and age of the horses but not the body side. The volume and surface area of the VCS, RMS, and CMS increased with the age of the horses. With accuracy values of 0.72 for RMS, 0.67 for CMS, and 0.31 for VCS, the possibility of the age-related clustering of CT-based 3D images of equine paranasal sinuses was confirmed for RMS and CMS but disproved for VCS.
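The clustering step maps directly onto scikit-learn; a minimal sketch assuming one feature row per sinus (volume, surface area, relative density) and k = 3 for the young/middle-aged/old grouping. The synthetic values below are placeholders, and cluster assignments would then be compared against the true age groups to compute the accuracies reported above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# One row per horse: volume, surface area, relative density (illustrative units)
X = rng.normal(loc=[150, 300, -500], scale=[40, 60, 80], size=(18, 3))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(StandardScaler().fit_transform(X))
print(clusters)  # compare against young / middle-aged / old labels
```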


Subject(s)
Three-Dimensional Imaging, Paranasal Sinuses, Horses, Animals, Cluster Analysis, Paranasal Sinuses/diagnostic imaging, Three-Dimensional Imaging/methods, Multidetector Computed Tomography/methods, Algorithms
9.
Clin Anat ; 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39311610

ABSTRACT

The infrapatellar fat pad (IFP), also known as the Hoffa fat pad, is an essential structure in the knee joint with diverse functions and characteristics. Pathological changes in it can lead to anterior knee pain and impingement. The aim of this study was to investigate the relationship between age and Hoffa fat pad volume. A retrospective analysis was conducted on MRI scans of 100 individuals aged 10-80 years with no Hoffa fat pad pathology. The IFP was meticulously segmented on each sagittal and coronal MRI plane, and its volume was calculated on the basis of the segmented boundaries. Correlation analysis was used to explore the relationships among age, sex, height, weight, and patella-related variables. Contrary to the hypothesis, there was no significant correlation between age and Hoffa fat pad volume. However, there were strong positive correlations between Hoffa fat pad volume and individuals' height, patellar height, and patellar ligament length. Multivariable linear regression analysis revealed that height, weight, patellar height, and patellar ligament length collectively explained 67% of the variability in Hoffa fat pad volume. These findings suggest that the Hoffa fat pad adapts to accommodate morphological changes in the knee joint as individuals grow taller. In conclusion, our study examined Hoffa fat pad volume in individuals across the age spectrum, using advanced imaging to show that height and knee-related variables, rather than age, should be considered when assessing Hoffa fat pad volume. Further research is needed to understand the fat pad's functional implications and interactions within the knee joint, with the aim of improving orthopedic interventions.
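The regression reported above (height, weight, patellar height, and patellar ligament length explaining 67% of volume variability) corresponds to an ordinary least-squares fit with an R² readout; a sketch with synthetic placeholder data, where the coefficients and noise level are assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 100
# Columns: height, weight, patellar height, patellar ligament length
X = rng.normal(loc=[170, 70, 4.5, 4.0], scale=[10, 12, 0.5, 0.4], size=(n, 4))
volume = X @ np.array([0.15, 0.05, 2.0, 1.5]) + rng.normal(0, 1.4, n)

model = LinearRegression().fit(X, volume)
print(f"R^2 = {model.score(X, volume):.2f}")  # the study reported 0.67
```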

10.
BMC Bioinformatics ; 24(1): 480, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38102537

ABSTRACT

BACKGROUND: Spatial mapping of transcriptional states provides valuable biological insights into cellular functions and interactions in the context of the tissue. Accurate 3D cell segmentation is a critical step in the analysis of these data towards understanding diseases and normal development in situ. Current approaches designed to automate 3D segmentation include stitching masks along one dimension, training a 3D neural network architecture from scratch, and reconstructing a 3D volume from 2D segmentations on all dimensions. However, the applicability of existing methods is hampered by inaccurate segmentations along the non-stitching dimensions, the lack of high-quality diverse 3D training data, and inhomogeneity of image resolution along orthogonal directions due to acquisition constraints; as a result, they have not been widely used in practice. METHODS: To address these challenges, we formulate the problem of finding cell correspondence across layers with a novel optimal transport (OT) approach. We propose CellStitch, a flexible pipeline that segments cells from 3D images without requiring large amounts of 3D training data. We further extend our method to interpolate internal slices from highly anisotropic cell images to recover isotropic cell morphology. RESULTS: We evaluated the performance of CellStitch on eight 3D plant microscopy datasets with diverse anisotropy levels and cell shapes. CellStitch substantially outperforms the state-of-the-art methods on anisotropic images and achieves comparable segmentation quality against competing methods in the isotropic setting. We benchmarked and reported 3D segmentation results of all the methods with instance-level precision, recall, and average precision (AP) metrics. CONCLUSIONS: The proposed OT-based 3D segmentation pipeline outperformed the existing state-of-the-art methods on different datasets with nonzero anisotropy, providing high-fidelity recovery of 3D cell morphology from microscopy images.
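The core idea, matching cell instances between adjacent z-slices as an optimal transport problem, can be sketched with the POT library. The uniform masses and centroid-distance cost matrix here are simplifications of CellStitch's actual formulation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# Centroids of segmented cells in two adjacent z-slices.
upper = np.array([[10.0, 12.0], [40.0, 42.0], [70.0, 20.0]])
lower = np.array([[11.0, 13.0], [41.0, 40.0]])

# Uniform mass per cell; cost = pairwise squared Euclidean distance.
a, b = ot.unif(len(upper)), ot.unif(len(lower))
M = ot.dist(upper, lower)  # squared Euclidean by default

# Transport plan: high entries pair up the same cell across layers.
G = ot.emd(a, b, M)
matches = G.argmax(axis=1)  # most likely continuation of each upper-slice cell
print(matches)
```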


Subject(s)
Three-Dimensional Imaging, Neural Networks (Computer), Anisotropy, Three-Dimensional Imaging/methods, Computer-Assisted Image Processing/methods
11.
BMC Neurosci ; 24(1): 49, 2023 09 14.
Article in English | MEDLINE | ID: mdl-37710208

ABSTRACT

BACKGROUND: Intervertebral disc herniation, degenerative lumbar spinal stenosis, and other lumbar spine diseases can occur across most age groups. MRI is the most commonly used detection method for lumbar spine lesions owing to its good soft-tissue resolution. However, diagnostic accuracy is highly dependent on the experience of the diagnostician, leading to subjective errors, inconsistent diagnostic criteria across multi-center studies in different hospitals, and inefficient diagnosis. These factors necessitate the standardized interpretation and automated classification of lumbar spine MRI to achieve objective consistency. In this research, a deep learning network based on SAFNet is proposed to address these challenges. METHODS: Low-level, mid-level, and high-level features of spine MRI are extracted. ASPP is used to process the high-level features. A multi-scale feature fusion method is used to increase the scene perception ability of the low-level and mid-level features. The high-level features are further processed using global adaptive pooling and a sigmoid function to obtain new high-level features, which are then point-multiplied with the mid-level and low-level features. The new high-level, mid-level, and low-level features are all resampled to the same size and concatenated in the channel dimension to produce the final output. RESULTS: The DSC of SAFNet for segmenting 17 vertebral structures across 5 folds was 79.46 ± 4.63%, 78.82 ± 7.97%, 81.32 ± 3.45%, 80.56 ± 5.47%, and 80.83 ± 3.48%, with an average DSC of 80.32 ± 5.00%. Compared to existing methods, SAFNet provides better segmentation results and has important implications for the diagnosis of spinal and lumbar diseases. CONCLUSIONS: This research proposes SAFNet, a highly accurate and robust spine segmentation deep learning network capable of providing effective anatomical segmentation for diagnostic purposes. The results demonstrate the effectiveness of the proposed method and its potential for improving radiological diagnostic accuracy.
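A hedged PyTorch reading of the fusion just described: global adaptive pooling and a sigmoid turn the high-level features into per-channel gates, the gates re-weight the mid- and low-level features, and everything is resampled to one size and concatenated. Channel counts, 2D (rather than 3D) convolutions, and the omitted ASPP stage are assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionHead(nn.Module):
    """Sketch of the multi-scale fusion described in the abstract."""
    def __init__(self, c_low, c_mid, c_high):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global adaptive pooling
        self.gate_mid = nn.Conv2d(c_high, c_mid, 1)  # per-channel gates
        self.gate_low = nn.Conv2d(c_high, c_low, 1)

    def forward(self, low, mid, high):
        # New high-level features: pooled, projected, squashed to [0, 1].
        g = self.pool(high)
        mid = mid * torch.sigmoid(self.gate_mid(g))  # point-wise multiplication
        low = low * torch.sigmoid(self.gate_low(g))
        # Resample everything to the low-level (largest) resolution and
        # concatenate along the channel dimension.
        size = low.shape[-2:]
        mid = F.interpolate(mid, size=size, mode="bilinear", align_corners=False)
        high = F.interpolate(high, size=size, mode="bilinear", align_corners=False)
        return torch.cat([low, mid, high], dim=1)

head = FusionHead(c_low=32, c_mid=64, c_high=128)
out = head(torch.rand(1, 32, 64, 64),
           torch.rand(1, 64, 32, 32),
           torch.rand(1, 128, 16, 16))
print(out.shape)  # torch.Size([1, 224, 64, 64])
```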

12.
Med Teach ; 45(10): 1108-1111, 2023 10.
Article in English | MEDLINE | ID: mdl-37542360

ABSTRACT

What was the educational challenge? The complexity and variability of cross-sectional imaging present a significant challenge in imparting knowledge of radiologic anatomy to medical students.
What was the solution? Recent advancements in three-dimensional (3D) segmentation and augmented reality (AR) technology provide a promising solution. These advances allow for the creation of interactive, patient-specific 3D/AR models that incorporate multiple imaging modalities, including MRI, CT, and 3D rotational angiography, and can help trainees understand cross-sectional imaging.
How was the solution implemented? To create a model, DICOM files of patient scans with slice thicknesses of 1 mm or less are exported to a computer and imported into 3D Slicer for registration. Once registered, the files are segmented with Vitrea software using thresholding, region growing, and edge detection. The resulting models are then imported into a web-based interactive viewing platform and/or an AR application. (A simplified sketch of the thresholding and surface-extraction step follows below.)
What lessons were learned that are relevant to a wider global audience? Low-resource 3D/AR models offer an accessible and intuitive tool for teaching radiologic anatomy and pathology. Our method of creating these models leverages recent advances in 3D/AR technology to provide a better experience than traditional high- and low-resource 3D/AR modeling techniques, allowing trainees to better understand cross-sectional imaging.
What are the next steps? The interactive and intuitive nature of 3D and AR models has the potential to significantly improve the teaching of radiologic anatomy and pathology to medical students. We encourage educators to incorporate 3D segmentation models and AR into their teaching strategies.
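The thresholding-plus-surface-extraction step can be approximated in a few lines for readers without Vitrea; a hedged sketch using scikit-image's marching cubes on a placeholder volume, writing a Wavefront OBJ that a web or AR viewer can load. The HU threshold and volume are illustrative, and the clinical workflow adds region growing and edge detection on top of this.

```python
import numpy as np
from skimage import measure

# Placeholder for a DICOM volume resampled to <=1 mm slices (z, y, x, in HU).
volume = np.random.normal(0, 300, size=(60, 128, 128))

# Simple global threshold (e.g., a bone-like level); illustrative only.
level = 250.0
verts, faces, normals, values = measure.marching_cubes(volume, level=level)

with open("model.obj", "w") as f:   # minimal OBJ export for 3D/AR viewers
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for tri in faces + 1:           # OBJ indices are 1-based
        f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")
```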


Subject(s)
Augmented Reality, Radiology, Humans, Software, Radiography, Radiology/education, Learning, Anatomic Models
13.
Eur Arch Otorhinolaryngol ; 280(5): 2149-2154, 2023 May.
Article in English | MEDLINE | ID: mdl-36210370

ABSTRACT

PURPOSE: A narrow bony internal auditory canal (IAC) may be associated with a hypoplastic cochlear nerve and poorer hearing performance after cochlear implantation. However, definitions of a narrow IAC vary widely, and qualitative grading or two-dimensional measures are commonly used to characterize it. We aimed to refine the definition of a narrow IAC by determining IAC volume in both control patients and patients with inner ear malformations (IEMs). METHODS: In this multicentric study, we included high-resolution CT (HRCT) scans of 128 temporal bones (85 with IEMs: cochlear aplasia, n = 11; common cavity, n = 2; cochlear hypoplasia, n = 19; incomplete partition type I/III, n = 8/8; Mondini malformation, n = 16; enlarged vestibular aqueduct syndrome, n = 19; 45 controls). The IAC diameter was measured in the axial plane, and the IAC volume was measured by semi-automatic segmentation and three-dimensional reconstruction. RESULTS: In controls, the mean IAC diameter was 5.5 mm (SD 1.1 mm) and the mean IAC volume was 175.3 mm3 (SD 52.6 mm3). Statistically significant differences in IAC volume were found in cochlear aplasia (68.3 mm3, p < 0.0001), incomplete partition type I (107.4 mm3, p = 0.04), and incomplete partition type III (277.5 mm3, p = 0.0004). Inter-rater reliability was higher for IAC volume than for IAC diameter (intraclass correlation coefficient 0.92 vs. 0.77). CONCLUSIONS: Volumetric measurement of the IAC in cases of IEMs reduces measurement variability and may help classify IEMs. Since a hypoplastic IAC can be associated with a hypoplastic cochlear nerve and sensorineural hearing loss, radiologic assessment of the IAC is crucial in patients with severe sensorineural hearing loss undergoing cochlear implantation.
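Volumetry from a semi-automatic segmentation reduces to counting labeled voxels and scaling by the voxel size; a minimal numpy sketch, where the 0.2 mm isotropic spacing is an illustrative assumption for high-resolution temporal-bone CT:

```python
import numpy as np

def segmented_volume_mm3(mask, spacing_mm):
    """Volume of a binary segmentation: voxel count x voxel volume."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3

mask = np.zeros((120, 120, 120), dtype=bool)
mask[40:80, 40:80, 40:80] = True   # stand-in IAC segmentation
print(segmented_volume_mm3(mask, (0.2, 0.2, 0.2)), "mm3")  # 512.0 mm3
```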


Subject(s)
Inner Ear, Sensorineural Hearing Loss, Humans, Reproducibility of Results, Retrospective Studies, Inner Ear/diagnostic imaging, Inner Ear/abnormalities, Cochlea/diagnostic imaging, Sensorineural Hearing Loss/diagnostic imaging, Sensorineural Hearing Loss/surgery
14.
Eur Arch Otorhinolaryngol ; 280(5): 2155-2163, 2023 May.
Article in English | MEDLINE | ID: mdl-36216913

ABSTRACT

OBJECTIVES: Enlarged vestibular aqueduct (EVA) is a common finding associated with inner ear malformations (IEM). However, uniform radiologic definitions of EVA are missing, and various 2D measurement methods to define EVA have been reported. This study evaluates VA volume in different types of IEM and compares 3D-reconstructed VA volume to 2D measurements. METHODS: A total of 98 high-resolution CT (HRCT) datasets from temporal bones were analyzed (56 with IEM [cochlear hypoplasia (CH; n = 18), incomplete partition type I (IPI; n = 12) and type II (IPII; n = 11), and EVA (n = 15)]; 42 controls). VA diameter was measured in axial images. VA volume was analyzed by software-based, semi-automatic segmentation and 3D reconstruction. Differences in VA volume between the groups and associations between VA volume and VA diameter were assessed. Inter-rater reliability (IRR) was assessed using the intraclass correlation coefficient (ICC). RESULTS: Larger VA volumes were found in IEM compared to controls. Significant differences in VA volume were found between patients with EVA and controls (p < 0.001) as well as between IPII and controls (p < 0.001). VA diameter at the midpoint (VA midpoint) and at the operculum (VA operculum) correlated with VA volume in IPI (VA midpoint: r = 0.78, VA operculum: r = 0.91), in CH (VA midpoint: r = 0.59, VA operculum: r = 0.61), in EVA (VA midpoint: r = 0.55, VA operculum: r = 0.66), and in controls (VA midpoint: r = 0.36, VA operculum: r = 0.42). The highest IRR was found for VA volume (ICC = 0.90). CONCLUSIONS: The VA diameter may be an insufficient estimate of VA volume, since (1) VA diameter does not reliably correlate with VA volume and (2) VA diameter shows a lower IRR than VA volume. 3D reconstruction and VA volumetry may add information in diagnosing EVA in cases with or without additional IEM.


Subject(s)
Sensorineural Hearing Loss, Vestibular Aqueduct, Humans, Reproducibility of Results, Retrospective Studies, Vestibular Aqueduct/diagnostic imaging, Vestibular Aqueduct/abnormalities, Cochlea
15.
Eur Arch Otorhinolaryngol ; 280(3): 1089-1099, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35931824

ABSTRACT

BACKGROUND AND PURPOSE: The retrofacial approach (RFA) is an access route to the sinus tympani (ST) used in cholesteatoma surgery, especially when a type C ST is encountered. It may also be used to gain access to the stapedius muscle to assess the evoked stapedius reflex threshold. The primary objective of this study was to evaluate the morphology of the sinus tympani and its relationship to the facial nerve (FN) and posterior semicircular canal (PSC) in the context of planning a retrofacial approach in pneumatized temporal bones. METHODS: CBCT scans of 130 adults were reviewed. The type of sinus tympani was assessed according to Marchioni's classification. The width of the entrance to the sinus tympani (STW), the depth of the ST (STD), the distance between the posterior semicircular canal and the facial nerve (F-PSC), and the distance from the latter plane to the floor of the ST at a right angle (P-ST) were measured at the level of the round window (RW) and the pyramidal ridge (PR). RESULTS: All of the bones were well aerated and classified in Dexian Tan pneumatization group 3 or 4. Type B ST was dominant (70.8%) in the adult population with no history of inflammatory otologic disease, followed by type C (22.7%) and type A (6.5%). The depth of the ST (STD) showed significant differences (ANOVA, p < 0.05) among all three types. STW reached greater values at the level of the PR. F-PSC did not correlate with the type of ST. In over 75% of the examined type C sinus tympani, the distance P-ST was less than 1 mm. CONCLUSIONS: The qualitative classification of the sinus tympani into types A, B, and C introduced by Marchioni is justified by statistically significant differences in depth between the individual types. The STW distance reaches greater values inferiorly, suggesting that the RFA should be performed in an infero-superior rather than supero-inferior direction. Preoperative assessment of temporal bone CT scans provides important information about the size of the sinus tympani and the distance between the FN and the PSC.


Subject(s)
Temporal Bone, Adult, Humans, Middle Ear/anatomy & histology, Middle Ear/diagnostic imaging, Middle Ear/surgery, Stapedius, Temporal Bone/diagnostic imaging, Temporal Bone/surgery, Temporal Bone/anatomy & histology, Tympanic Membrane/diagnostic imaging, Tympanic Membrane/surgery
16.
Sensors (Basel) ; 23(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850583

ABSTRACT

Measuring pulmonary nodules accurately can help the early diagnosis of lung cancer, which can increase the survival rate among patients. Numerous techniques for lung nodule segmentation have been developed; however, most of them either rely on the 3D volumetric region of interest (VOI) input by radiologists or use a 2D fixed region of interest (ROI) for all slices of the computed tomography (CT) scan. These methods only consider the presence of nodules within the given VOI, which limits the networks' ability to detect nodules outside the VOI and can also encompass unnecessary structures, leading to potentially inaccurate segmentation. In this work, we propose a novel approach for 3D lung nodule segmentation that utilizes a 2D ROI provided by a radiologist or a computer-aided detection (CADe) system. Concretely, we developed a two-stage lung nodule segmentation technique. First, we designed a dual-encoder-based hard attention network (DEHA-Net), in which a full axial slice of the thoracic CT scan, along with an ROI mask, is taken as input to segment the lung nodule in the given slice. The output of DEHA-Net, the segmentation mask of the lung nodule, is passed to an adaptive region of interest (A-ROI) algorithm to automatically generate ROI masks for the surrounding slices, eliminating the need for any further input from radiologists. After extracting the segmentation along the axial axis, in the second stage, we further investigate the lung nodule along the sagittal and coronal views using DEHA-Net. All the estimated masks are fed into a consensus module to obtain the final volumetric segmentation of the nodule. The proposed scheme was rigorously evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, and an extensive analysis of the results was performed. The quantitative analysis showed that the proposed method not only improves on the existing state-of-the-art methods in terms of Dice score but also shows significant robustness against different types, shapes, and dimensions of lung nodules. The proposed framework achieved an average Dice score, sensitivity, and positive predictive value of 87.91%, 90.84%, and 89.56%, respectively.
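The A-ROI idea, deriving the next slice's ROI from the current slice's segmentation mask, can be sketched as a padded bounding box; the margin and stopping rule here are illustrative stand-ins for the published algorithm.

```python
import numpy as np

def next_roi(mask, margin=8):
    """ROI mask for the neighboring slice: bounding box of the current
    nodule segmentation, dilated by a fixed margin (illustrative)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # nodule no longer present: stop propagating
    roi = np.zeros_like(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, mask.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, mask.shape[1])
    roi[y0:y1, x0:x1] = True
    return roi

seg = np.zeros((512, 512), dtype=bool)
seg[250:270, 300:315] = True   # nodule mask predicted on slice k
roi = next_roi(seg)            # ROI fed to the network for slice k+1
print(roi.sum())
```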


Subject(s)
Adipates, Algorithms, Humans, Computer Systems, Consensus
17.
Sensors (Basel) ; 23(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36850942

ABSTRACT

Brain tumors are among the deadliest forms of cancer, characterized by abnormal proliferation of brain cells. While early identification of brain tumors can greatly aid in their therapy, the process of manual segmentation performed by expert doctors, which is often time-consuming, tedious, and prone to human error, can act as a bottleneck in the diagnostic process. This motivates the development of automated algorithms for brain tumor segmentation. However, accurately segmenting the enhanced and core tumor regions is complicated due to high levels of inter- and intra-tumor heterogeneity in terms of texture, morphology, and shape. This study proposes a fully automatic method called the selective deeply supervised multi-scale attention network (SDS-MSA-Net) for segmenting brain tumor regions using a multi-scale attention network with novel selective deep supervision (SDS) mechanisms for training. The method utilizes a 3D input composed of five consecutive slices, in addition to a 2D slice, to maintain sequential information. The proposed multi-scale architecture includes two encoding units to extract meaningful global and local features from the 3D and 2D inputs, respectively. These coarse features are then passed through attention units to filter out redundant information by assigning lower weights. The refined features are fed into a decoder block, which upscales the features at various levels while learning patterns relevant to all tumor regions. The SDS block is introduced to immediately upscale features from intermediate layers of the decoder, with the aim of producing segmentations of the whole, enhanced, and core tumor regions. The proposed framework was evaluated on the BraTS2020 dataset and showed improved performance in brain tumor region segmentation, particularly in the segmentation of the core and enhancing tumor regions, demonstrating the effectiveness of the proposed approach. Our code is publicly available.


Subject(s)
Brain Neoplasms, Physicians, Humans, Brain Neoplasms/diagnostic imaging, Algorithms, Learning
18.
Prog Urol ; 33(10): 509-518, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37633733

ABSTRACT

INTRODUCTION: Indications for percutaneous ablation (PA) are gradually expanding to T1b renal tumors (4-7 cm). Few data exist on the alteration of renal functional volume (RFV) after PA, yet RFV is a surrogate marker of glomerular filtration rate (GFR) impairment after partial nephrectomy (PN). The objective was to compare RFV and GFR at 1 year after PN or PA in this T1b population. METHODS: Patients with a unifocal renal tumor ≥ 4 cm treated between 2014 and 2019 were included. Tumor, homolateral (RFVh), contralateral, and total RFV were assessed by manual segmentation (3D Slicer) before and at 1 year after treatment, as was GFR. Loss of RFV, contralateral hypertrophy, and preservation of GFR were compared between the two groups (PN vs. PA). RESULTS: 144 patients were included (87 PN, 57 PA). Preoperatively, the PA group was older (74 vs. 59 years; P<0.0001), had more impaired GFR (73 vs. 85 mL/min; P=0.0026), and had smaller tumor volume (31.1 vs. 55.9 cm3; P=0.0007) compared with the PN group. At 1 year, the PN group had significantly more homolateral RFV loss (-19 vs. -14%; P=0.002) and more contralateral compensatory hypertrophy (+4% vs. +1.8%; P=0.02). Total RFV loss was similar between groups (-21.7 vs. -19 cm3; P=0.07). GFR preservation was better in the PN group (95.9 vs. 90.7%; P=0.03). In multivariate analysis, age and tumor size were associated with loss of RFVh. CONCLUSION: For T1b renal tumors, PN is associated with greater compensatory hypertrophy than PA, compensating for the higher RFVh loss and resulting in similar total RFV change between groups. The better post-PN GFR preservation suggests that the preserved quantitative RFV alone is insufficient; the underlying quality of the parenchyma likely plays a major role in postoperative GFR.


Subject(s)
Kidney Neoplasms, Humans, Kidney Neoplasms/surgery, Nephrectomy, Kidney/surgery, Glomerular Filtration Rate, Hypertrophy
19.
Sensors (Basel) ; 22(24)2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36560251

ABSTRACT

Accurate segmentation of mandibular canals in lower jaws is important in dental implantology. Medical experts manually determine the implant position and dimensions from 3D CT images to avoid damaging the mandibular nerve inside the canal. In this paper, we propose a novel dual-stage deep learning-based scheme for automatic segmentation of the mandibular canal. In particular, we first enhance the CBCT scans by employing a novel histogram-based dynamic windowing scheme, which improves the visibility of the mandibular canals. After enhancement, we use a 3D deeply supervised attention UNet architecture to localize the volumes of interest (VOIs) that contain the mandibular canals (i.e., the left and right canals). Finally, we employ the multi-scale input residual UNet (MSiR-UNet) architecture to accurately segment the mandibular canals within the VOIs. The proposed method was rigorously evaluated on 500 CBCT scans from our dataset and 15 scans from a public dataset. The results demonstrate that our technique improves the existing performance of mandibular canal segmentation to a clinically acceptable range and is robust to different CBCT fields of view.
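Dynamic windowing in this spirit derives the display window from each scan's own intensity histogram rather than from fixed bounds; a hedged sketch, with the percentile choices as assumptions rather than the paper's values:

```python
import numpy as np

def dynamic_window(scan, lo_pct=5.0, hi_pct=99.5):
    """Clip a CBCT volume to percentiles of its own histogram and rescale
    to [0, 1], improving visibility of low-contrast structures."""
    lo, hi = np.percentile(scan, [lo_pct, hi_pct])
    return np.clip((scan - lo) / (hi - lo + 1e-8), 0.0, 1.0)

scan = np.random.normal(1000, 400, size=(100, 256, 256))  # placeholder CBCT
enhanced = dynamic_window(scan)
print(enhanced.min(), enhanced.max())
```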


Subject(s)
Mandibular Canal, Spiral Cone-Beam Computed Tomography, Cone-Beam Computed Tomography/methods, Neural Networks (Computer), Three-Dimensional Imaging/methods, Computer-Assisted Image Processing/methods
20.
Sensors (Basel) ; 22(20)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36298341

ABSTRACT

In this paper, we present a two-stage solution to 3D vehicle detection and segmentation. The first stage combines the EfficientNet-B3 architecture with multi-parallel residual blocks (inspired by the CenterNet architecture) for 3D localization and pose estimation of vehicles in the scene. The second stage takes the output of the first stage (cropped car images) as input to train EfficientNet-B3 for the image recognition task. Using predefined 3D models, we substitute each vehicle in the scene with its match, applying the rotation matrix and translation vector from the first stage, to obtain the 3D detection bounding boxes and segmentation masks. We trained our models on an open-source dataset (ApolloCar3D). Our method outperforms all published solutions in terms of 6-degrees-of-freedom error (6 DoF err).
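Substituting a matched CAD model into the scene is a rigid transform of its vertices with the estimated rotation matrix and translation vector; a minimal numpy sketch, where the yaw rotation, translation, and vertex set are made-up stand-ins for a real pose estimate and model:

```python
import numpy as np

def place_model(vertices, R, t):
    """Apply the estimated pose (rotation R, translation t) to the
    vertices of the matched 3D vehicle model."""
    return vertices @ R.T + t

yaw = np.deg2rad(30.0)                        # illustrative pose estimate
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([2.0, 0.5, 10.0])

car = np.random.rand(1000, 3)                 # placeholder CAD vertices
posed = place_model(car, R, t)
print(posed.mean(axis=0).round(2))
```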


Subject(s)
Delayed Emergence from Anesthesia, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, X-Ray Computed Tomography/methods