Results 1 - 4 of 4
1.
IEEE Trans Image Process; 30: 739-753, 2021.
Article in English | MEDLINE | ID: mdl-33226942

ABSTRACT

The temporal bone is a part of the lateral skull surface that contains the organs responsible for hearing and balance. Mastering surgery of the temporal bone is challenging because of its complex and microscopic three-dimensional anatomy. Segmentation of intra-temporal anatomy based on computed tomography (CT) images is necessary for applications such as surgical training and rehearsal, amongst others. However, temporal bone segmentation is challenging due to the similar intensities and complicated anatomical relationships among critical structures, small structures that are undetectable on standard clinical CT, and the amount of time required for manual segmentation. This paper describes a single multi-class deep learning-based pipeline as the first fully automated algorithm for segmenting multiple temporal bone structures from CT volumes, including the sigmoid sinus, facial nerve, inner ear, malleus, incus, stapes, internal carotid artery and internal auditory canal. The proposed fully convolutional network, PWD-3DNet, is a patch-wise densely connected (PWD) three-dimensional (3D) network. The accuracy and speed of the proposed algorithm were shown to surpass current manual and semi-automated segmentation techniques. The experimental results yielded high Dice similarity scores and low Hausdorff distances for all temporal bone structures, with averages of 86% and 0.755 mm, respectively. We showed that overlapping the inference sub-volumes improves segmentation performance. Moreover, we proposed augmentation layers that use samples with various transformations and image artefacts to increase the robustness of PWD-3DNet to image acquisition protocols, such as smoothing caused by soft-tissue scanner settings and the larger voxel sizes used for radiation reduction. The algorithm was also tested on low-resolution CTs acquired at another center with scanner parameters different from those used to develop it, and showed potential for application beyond the particular training data used in this study.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted/methods; Temporal Bone/diagnostic imaging; Tomography, X-Ray Computed/methods; Algorithms; Humans
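
The pipeline described in record 1 relies on overlapping inference sub-volumes, with predictions combined where patches overlap, and reports per-structure Dice scores. Below is a minimal, hypothetical NumPy sketch of that general idea, not the authors' PWD-3DNet implementation: the patch size, stride, class count and the `predict_patch` stand-in for a trained network are all assumptions.

```python
# Illustrative sketch (not the authors' PWD-3DNet code): overlapping patch-wise
# inference over a 3D CT volume, averaging class probabilities where patches
# overlap, followed by a per-class Dice score. `predict_patch` is a hypothetical
# stand-in for a trained 3D segmentation network.
import numpy as np

NUM_CLASSES = 9          # assumption: 8 structures + background
PATCH = (64, 64, 64)     # assumption: patch size in voxels
STRIDE = (32, 32, 32)    # stride < patch size -> overlapping sub-volumes


def predict_patch(patch: np.ndarray) -> np.ndarray:
    """Hypothetical network call; returns per-class probabilities (C, D, H, W)."""
    logits = np.random.rand(NUM_CLASSES, *patch.shape)
    return logits / logits.sum(axis=0, keepdims=True)


def sliding_window_segmentation(volume: np.ndarray) -> np.ndarray:
    """Run overlapping patch-wise inference and average overlapping predictions."""
    prob = np.zeros((NUM_CLASSES, *volume.shape), dtype=np.float32)
    count = np.zeros(volume.shape, dtype=np.float32)
    # Start indices per axis; the extra entry makes the last patch touch the border.
    starts = [
        sorted(set(list(range(0, max(volume.shape[a] - PATCH[a], 0) + 1, STRIDE[a]))
                   + [max(volume.shape[a] - PATCH[a], 0)]))
        for a in range(3)
    ]
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                sl = (slice(z, z + PATCH[0]), slice(y, y + PATCH[1]), slice(x, x + PATCH[2]))
                prob[(slice(None),) + sl] += predict_patch(volume[sl])
                count[sl] += 1.0
    prob /= np.maximum(count, 1.0)          # average where patches overlap
    return prob.argmax(axis=0)              # per-voxel label map


def dice(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for one label."""
    p, t = pred == label, truth == label
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0


if __name__ == "__main__":
    ct = np.random.rand(96, 96, 96).astype(np.float32)   # toy volume
    labels = sliding_window_segmentation(ct)
    print(labels.shape, dice(labels, labels, label=1))
```

With a real trained model, `predict_patch` would return the network's softmax output for the sub-volume; averaging overlapping probabilities before the argmax is what smooths seams between adjacent patches.
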
2.
Otol Neurotol; 41(3): e378-e386, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31917770

ABSTRACT

HYPOTHESIS: To characterize anatomical measurements and shape variation of the facial nerve within the temporal bone, and to create statistical shape models (SSMs) to enhance knowledge of temporal bone anatomy and aid in automated segmentation. BACKGROUND: The facial nerve is a fundamental structure in otologic surgery, and detailed anatomic knowledge combined with surgical experience is needed to avoid iatrogenic injury. Trainees can use simulators to practice surgical techniques; however, the manual segmentation required to develop simulations can be time-consuming. Consequently, automated segmentation algorithms have been developed that use atlas registration, SSMs, and deep learning. METHODS: Forty cadaveric temporal bones were evaluated using three-dimensional micro-CT (µCT) scans. The image sets were aligned using rigid fiducial registration, and the facial nerve canals were segmented and analyzed. Detailed measurements were performed along the various sections of the nerve. Shape variability was then studied using two SSMs: one based on principal component analysis (PCA) of landmarks and a second built with the Statismo framework. RESULTS: Measurements of the nerve canal yielded mean diameters and lengths of the labyrinthine, tympanic, and mastoid segments. The landmark PCA demonstrated significant shape variation along one mode at the distal tympanic segment and along three modes at the distal mastoid segment. The Statismo shape model was consistent with this analysis, emphasizing the variability of the mastoid segment. The models were made publicly available to aid future research and foster collaborative work. CONCLUSION: The facial nerve exhibited statistically significant shape variation within the temporal bone. The models form a framework for automated facial nerve segmentation and for surgical simulation for trainees.


Subject(s)
Ear, Inner; Facial Nerve; Ear, Middle; Facial Nerve/diagnostic imaging; Humans; Mastoid; Temporal Bone/diagnostic imaging
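
Record 2 builds statistical shape models of the facial nerve canal, one of them a landmark-based principal component analysis. The sketch below is a generic point-distribution-model construction in NumPy, offered only as an illustration of that technique; it is not the authors' Statismo model, and the landmark counts and synthetic shapes are placeholders.

```python
# Illustrative sketch (not the authors' models): a landmark-based point
# distribution model via PCA. Shapes are assumed to be rigidly pre-aligned
# (the study used fiducial registration); the data here are synthetic.
import numpy as np


def build_ssm(shapes: np.ndarray, n_modes: int = 3):
    """shapes: (n_samples, n_landmarks, 3) aligned landmark sets."""
    n, k, _ = shapes.shape
    flat = shapes.reshape(n, k * 3)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # SVD of the centered data matrix gives the principal modes of variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    std = s / np.sqrt(n - 1)                 # per-mode standard deviation
    return mean, vt[:n_modes], std[:n_modes]


def synthesize(mean, modes, std, coeffs):
    """Generate a shape from mode coefficients given in standard deviations."""
    flat = mean + (np.asarray(coeffs) * std) @ modes
    return flat.reshape(-1, 3)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(25, 3))                      # 25 toy landmarks
    shapes = base + 0.05 * rng.normal(size=(40, 25, 3))  # 40 toy "specimens"
    mean, modes, std = build_ssm(shapes, n_modes=3)
    plus_2sd = synthesize(mean, modes, std, [2.0, 0.0, 0.0])
    print(modes.shape, plus_2sd.shape)       # (3, 75) and (25, 3)
```

In practice the modes are inspected by synthesizing shapes at, say, ±2 standard deviations along each mode, which is a common way to visualize segment-specific variability such as that reported at the distal mastoid segment.
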
3.
Int J Comput Assist Radiol Surg; 15(2): 259-267, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31760585

ABSTRACT

PURPOSE: To create a novel multi-atlas-based segmentation algorithm for the facial nerve (FN) that requires minimal user intervention and can be easily deployed into an existing open-source toolkit. Specifically, the mastoid, tympanic and labyrinthine segments of the FN were to be segmented. METHODS: High-resolution micro-computed tomography (micro-CT) scans were pre-segmented and used as atlases of the FN. The algorithm requires the user to place four fiducials to orient the target low-resolution clinical CT scan and to generate a centerline along the nerve. Based on these inputs, the appropriate atlas is chosen by the algorithm and then rigidly and non-rigidly registered to provide an automated segmentation of the FN. RESULTS: The algorithm was successfully developed and implemented within an existing open-source software framework. Validation was performed on 28 temporal bones, where the automated segmentation was compared against gold-standard manual segmentation by an expert. The algorithm achieved an average Dice coefficient of 0.76 and an average Hausdorff distance of 0.17 mm for the tympanic and mastoid portions of the FN when segmenting healthy facial nerves, values similar to those of previously published algorithms. CONCLUSION: A successful FN segmentation algorithm was developed using a high-resolution micro-CT multi-atlas approach. The algorithm is unique in its ability to segment the entire intratemporal FN, with the exception of the meatal segment, which was excluded because it cannot be distinguished from the vestibulocochlear nerve within the internal auditory canal. It will be published as an open-source extension for use in virtual reality simulators, where automatic segmentation greatly reduces the time needed for expert segmentation and verification.


Subject(s)
Facial Nerve/surgery; Temporal Bone/surgery; Virtual Reality; Algorithms; Facial Nerve/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Software; Temporal Bone/diagnostic imaging; X-Ray Microtomography
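
The algorithm in record 3 starts from four user-placed fiducials that orient the target clinical CT before atlas selection and registration. A standard way to turn paired fiducials into a rigid transform is a least-squares (Kabsch) fit; the sketch below shows only that step, with made-up fiducial coordinates, and is not the published open-source extension.

```python
# Illustrative sketch (not the published extension): rigid alignment from a
# handful of paired fiducials using a Kabsch / least-squares rigid-body fit.
# The four fiducial coordinates below are invented for demonstration; real
# coordinates would come from the user interface.
import numpy as np


def rigid_fit(source: np.ndarray, target: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ source + t - target||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t


if __name__ == "__main__":
    # Hypothetical atlas-space fiducials (mm) and their locations in the target CT.
    atlas_fids = np.array([[0.0, 0.0, 0.0],
                           [10.0, 0.0, 0.0],
                           [0.0, 10.0, 0.0],
                           [0.0, 0.0, 10.0]])
    theta = np.deg2rad(30.0)                            # known test rotation about z
    true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    target_fids = atlas_fids @ true_R.T + np.array([5.0, -2.0, 3.0])
    R, t = rigid_fit(atlas_fids, target_fids)
    aligned = atlas_fids @ R.T + t
    print(np.abs(aligned - target_fids).max())          # ~0: fiducials coincide
```

The published method then follows this rigid initialization with non-rigid registration of the selected micro-CT atlas, which is beyond the scope of this sketch.
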
4.
Hear Res; 354: 1-8, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28822316

ABSTRACT

High-resolution images are used as a basis for finite-element (FE) modeling of the middle-ear structures to study their biomechanical function. Commonly used imaging techniques such as micro-computed tomography (micro-CT) and optical microscopy require extensive sample preparation, processing or staining with contrast agents to achieve sufficient soft-tissue contrast. We compare imaging of middle-ear structures in unstained, non-decalcified human temporal bones using conventional absorption-contrast micro-CT and synchrotron radiation phase-contrast imaging (SR-PCI). Four cadaveric temporal bones were imaged using both SR-PCI and conventional micro-CT. Images were qualitatively compared in terms of visualization of structural detail and soft-tissue contrast using intensity profiles and histograms. To compare SR-PCI to micro-CT quantitatively, three-dimensional (3D) models of the ossicles were constructed from both modalities using a semi-automatic segmentation method, as these structures are clearly visible in both types of images. Volumes of the segmented ossicles were computed and compared between the two imaging modalities and against estimates from the literature. SR-PCI images provided superior visualization of soft-tissue microstructures compared with conventional micro-CT images. Intensity profiles emphasized the improved contrast and detectability of soft tissue with SR-PCI relative to absorption-contrast micro-CT. In addition, semi-automatic segmentation of the SR-PCI images yielded accurate 3D reconstructions of the ossicles, with mean volumes in accord with estimates from micro-CT images and the literature. Sample segmentations of the ossicles and soft-tissue structures were made available in an online data repository for the benefit of the research community. The improved visualization, modeling accuracy and simple sample preparation make SR-PCI a promising tool for generating reliable FE models of the middle-ear structures, including both soft tissues and bone.


Subject(s)
Ear, Middle/diagnostic imaging; Synchrotrons; Temporal Bone/diagnostic imaging; X-Ray Microtomography; Cadaver; Computer Simulation; Ear Ossicles/anatomy & histology; Ear Ossicles/diagnostic imaging; Ear, Middle/anatomy & histology; Finite Element Analysis; Humans; Imaging, Three-Dimensional; Models, Anatomic; Radiographic Image Interpretation, Computer-Assisted; Temporal Bone/anatomy & histology
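
Record 4 compares modalities using intensity profiles and volumes computed from semi-automatic segmentations of the ossicles. The sketch below shows, on toy data, how such a volume (voxel count times voxel volume) and a simple line profile can be computed; the voxel spacing, array sizes and nearest-neighbour sampling are assumptions for illustration, not details of the study's pipeline.

```python
# Illustrative sketch (not the study's pipeline): volume of a segmented structure
# from a binary mask and the voxel spacing, and an intensity profile sampled
# along a straight line through the volume. All data below are synthetic.
import numpy as np


def segmented_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation = voxel count * voxel volume."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3


def line_profile(volume: np.ndarray, start, end, n_samples: int = 200) -> np.ndarray:
    """Sample intensities along a line using nearest-neighbour interpolation."""
    pts = np.linspace(np.asarray(start, float), np.asarray(end, float), n_samples)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vol = rng.normal(size=(128, 128, 128)).astype(np.float32)   # toy scan
    mask = np.zeros(vol.shape, dtype=bool)
    mask[40:60, 40:60, 40:60] = True                            # toy "ossicle"
    spacing = (0.02, 0.02, 0.02)                                # assumed 20 µm isotropic
    print("volume (mm^3):", segmented_volume_mm3(mask, spacing))
    profile = line_profile(vol, (0, 64, 64), (127, 64, 64))
    print("profile samples:", profile.shape)
```

Real comparisons would use the reconstructed SR-PCI and micro-CT volumes and their exported segmentation masks in place of the random arrays.
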