Results 1 - 20 of 30
1.
IEEE J Biomed Health Inform ; 28(5): 2904-2915, 2024 May.
Article in English | MEDLINE | ID: mdl-38416610

ABSTRACT

Three-dimensional images are frequently used in medical imaging research for classification, segmentation, and detection. However, the limited availability of 3D images hinders research progress because it makes network training difficult. Generative methods have been proposed to create medical images using AI techniques. Nevertheless, 2D approaches have difficulty dealing with 3D anatomical structures, which can result in discontinuities between slices. To mitigate these discontinuities, several 3D generative networks have been proposed. However, the scarcity of available 3D images means that training these networks on limited samples is inadequate for producing high-fidelity 3D images. We propose a data-guided generative adversarial network that provides high fidelity in 3D image generation. The generator creates fake images from noise using a reference code obtained by extracting features from real images. The generator also creates decoded images from the reference code without noise. These decoded images are compared with the real images to evaluate the fidelity of the reference code. This generation process can create high-fidelity 3D images from only a small amount of real training data. Additionally, our method employs three types of discriminators: volume (evaluates all the slices), slab (evaluates a set of consecutive slices), and slice (evaluates randomly selected slices). The proposed discriminators enhance fidelity by differentiating between real and fake images based on detailed characteristics. Results from our method are compared with those of existing methods using quantitative metrics such as the Fréchet inception distance and maximum mean discrepancy. The results demonstrate that our method produces more realistic 3D images than existing methods.
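The volume/slab/slice discriminator split described above can be illustrated with a small sketch. This is not the paper's implementation: the function and parameter names (`discriminator_inputs`, `slab_size`, `n_slices`) are illustrative, and a volume is represented here simply as a list of 2D slices.

```python
import random

def discriminator_inputs(volume, slab_size=4, n_slices=3, rng=None):
    """Split one 3D volume (a list of 2D slices) into the three kinds of
    discriminator input: the whole volume, a slab of consecutive slices,
    and a set of randomly selected slices."""
    rng = rng or random.Random(0)
    depth = len(volume)
    start = rng.randrange(depth - slab_size + 1)   # slab: consecutive slices
    slab = volume[start:start + slab_size]
    picks = rng.sample(range(depth), n_slices)     # slice: random positions
    slices = [volume[i] for i in picks]
    return volume, slab, slices

# toy volume: 16 slices of 2x2 "pixels", each filled with its slice index
vol = [[[z, z], [z, z]] for z in range(16)]
full, slab, slices = discriminator_inputs(vol)
```

Each of the three views would then be scored by its own discriminator, so that fidelity is judged at the scale of the whole volume, of local inter-slice continuity, and of individual slices.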


Subject(s)
Imaging, Three-Dimensional; Neural Networks, Computer; Humans; Imaging, Three-Dimensional/methods; Algorithms; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods
2.
Bioengineering (Basel) ; 11(2)2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38391649

ABSTRACT

Volumetric representation is a technique used to express 3D objects in various fields, such as medical applications. However, the tomographic images used to reconstruct volumetric data see limited use because they contain personal information. Existing GAN-based medical image generation techniques can produce virtual tomographic images for volume reconstruction while preserving the patient's privacy. Nevertheless, these images often do not consider the vertical correlations between adjacent slices, leading to erroneous results in 3D reconstruction. Furthermore, while volume generation techniques have been introduced, they often focus on surface modeling, making it challenging to represent internal anatomical features accurately. This paper proposes the volumetric imitation GAN (VI-GAN), which imitates a human anatomical model to generate volumetric data. The primary goal of this model is to capture the attributes and 3D structure of the human anatomical model, including the external shape, internal slices, and the relationship between vertical slices. The proposed network consists of a generator for feature extraction and up-sampling, based on a 3D U-Net and ResNet structure with a 3D-convolution-based local feature fusion block (LFFB), and a discriminator that uses 3D convolution to evaluate the authenticity of the generated volume against the ground truth. VI-GAN also introduces a reconstruction loss, comprising feature and similarity losses, to make the generated volumetric data converge to the human anatomical model. In this experiment, CT data from 234 people were used to assess the reliability of the results. When similarity was measured with volume evaluation metrics, VI-GAN generated volumes that represented the human anatomical model more realistically than existing volume generation methods.

3.
Article in English | MEDLINE | ID: mdl-38376971

ABSTRACT

Accurately segmenting polyps from colonoscopy images is essential for diagnosing colorectal cancer. Despite the tremendous success of deep convolutional neural networks in automatic polyp segmentation, they suffer from domain shift, where a trained model's performance deteriorates on unseen test datasets. This paper proposes an illumination enhancement-based domain generalization approach to alleviate this issue and improve the generalization capability of the model on unseen test datasets. In particular, an image decomposition module (IDM) was developed to separate colonoscopy images into reflectance, local illumination, and global illumination components. An illumination transform module (ITM) was proposed to augment images with different global illuminations by synthesizing target-like global illumination maps. A novel metric, illumination variance insensitiveness (IViSen), is also introduced to evaluate the robustness of the model against illumination disturbance. IViSen is easy to compute and correlates well with model generalizability. The segmentation performance of the proposed model was examined on four colonoscopy datasets: CVC-ClinicDB, CVC-ColonDB, ETIS-Larib, and Kvasir-SEG. The method outperformed competitive methods when tested on unseen domains. In particular, the proposed approach yielded 60.82% mean Dice and 53.19% mean IoU, improvements of 2.06% and 2.31%, respectively.
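The decomposition idea above can be sketched with a classical retinex-style split. Note that the paper's IDM is a learned module producing three components; this toy version is only a hand-rolled stand-in that estimates a single smooth illumination map with a box blur and recovers reflectance by division, so all names and the blur radius `r` are assumptions.

```python
def box_blur(img, r):
    """Mean filter with window radius r (window clamped at the borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            vals = [img[j][i] for j in ys for i in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

def decompose(img, r=2, eps=1e-6):
    """Retinex-style split: illumination = smooth estimate of the image,
    reflectance = image / illumination, so img ~= reflectance * illumination."""
    illum = box_blur(img, r)
    refl = [[img[y][x] / (illum[y][x] + eps) for x in range(len(img[0]))]
            for y in range(len(img))]
    return refl, illum

# uniformly lit flat image: reflectance should come out ~1 everywhere
flat = [[0.5] * 6 for _ in range(6)]
refl, illum = decompose(flat)
```

Augmenting only the illumination component, as the ITM does, then leaves the reflectance (the tissue appearance that matters for segmentation) untouched.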

4.
Bioengineering (Basel) ; 9(12)2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36550927

ABSTRACT

Color medical images provide better visualization and diagnostic information for doctors during clinical procedures than grayscale medical images. Although generative adversarial network-based image colorization approaches have shown promising results, these methods apply adversarial training to the whole image without considering the appearance conflicts between the foreground objects and the background contents, generating various artifacts. To remedy this issue, we propose a fully automatic spatial mask-guided colorization with generative adversarial network (SMCGAN) framework for medical image colorization. It generates colorized images with fewer artifacts by introducing spatial masks, which encourage the network to focus on colorizing the foreground regions instead of the whole image. Specifically, we propose a novel spatial mask-guided method that introduces an auxiliary foreground segmentation branch alongside the main colorization branch to obtain the spatial masks. The spatial masks are then used to generate masked colorized images in which most background contents are filtered out. Moreover, two discriminators are utilized, one for the generated colorized images and one for the masked generated colorized images, to help the model focus on colorizing the foreground regions. We validate our proposed framework on two publicly available datasets: the Visible Human Project (VHP) dataset and the prostate dataset from the NCI-ISBI 2013 challenge. The experimental results demonstrate that SMCGAN outperforms state-of-the-art GAN-based image colorization approaches, with an average improvement of 8.48% in the PSNR metric. The proposed SMCGAN can also generate colorized medical images with fewer artifacts.

5.
Med Phys ; 49(10): 6491-6504, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35981348

ABSTRACT

PURPOSE: In clinical practice, medical image analysis plays a key role in disease diagnosis. One of the important steps is to perform an accurate organ or tissue segmentation to assist medical professionals in making correct diagnoses. Despite the tremendous progress in deep learning-based medical image segmentation approaches, they often fail to generalize to test datasets owing to distribution discrepancies across domains. Recent advances that align the domain gaps using bi-directional GANs (e.g., CycleGAN) have shown promising results, but the strict constraints of cycle consistency keep these methods from yielding better performance. The purpose of this study is to propose a novel bi-directional GAN-based segmentation model with fewer constraints on cycle consistency to improve generalized segmentation results. METHODS: We propose a novel unsupervised domain adaptation approach by designing content-consistent generative adversarial networks (C²-GAN) for medical image segmentation. First, we introduce content consistency instead of cycle consistency to relax the invertibility constraint, encouraging the generation of a synthetic domain with a large domain transport distance. The synthetic domain is thus pulled close to the target domain to reduce the domain discrepancy. Second, we propose a novel style transfer loss based on the difference in low-frequency magnitude to further mitigate appearance shifts across domains. RESULTS: We validated our proposed approach on three public X-ray datasets: the Montgomery, JSRT, and Shenzhen datasets. For an accurate evaluation, we randomly divided the images of each dataset into 70% for training, 10% for validation, and 20% for testing. The mean Dice was 95.73 ± 0.22% and 95.16 ± 1.42% for the JSRT and Shenzhen datasets, respectively.
For the recall and precision metrics, our model also achieved performance better than or comparable to that of state-of-the-art CycleGAN-based UDA approaches. CONCLUSIONS: The experimental results validate the effectiveness of our method in mitigating domain gaps and improving generalized segmentation results for X-ray image segmentation.
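A loss "based on the difference in low-frequency magnitude" can be sketched as follows. This is not the paper's exact formulation: the naive DFT, the choice of an L1 difference, and the `k`-by-`k` low-frequency window are all assumptions made for illustration (a real implementation would use an FFT library).

```python
import cmath

def dft2(img):
    """Naive 2D DFT; O(n^4), fine only for tiny images."""
    h, w = len(img), len(img[0])
    F = [[0j] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            s = 0j
            for y in range(h):
                for x in range(w):
                    s += img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / h + v * x / w))
            F[u][v] = s
    return F

def low_freq_loss(img_a, img_b, k=2):
    """L1 difference of spectral magnitudes over the k x k lowest frequencies:
    a hand-rolled stand-in for a low-frequency style-transfer loss."""
    Fa, Fb = dft2(img_a), dft2(img_b)
    return sum(abs(abs(Fa[u][v]) - abs(Fb[u][v]))
               for u in range(k) for v in range(k))

a = [[1.0, 0.0], [0.0, 1.0]]
b = [[2.0, 0.0], [0.0, 2.0]]   # same pattern, brighter: differs in low frequencies
```

Because low frequencies carry overall brightness and smooth shading rather than anatomy, penalizing only their magnitude difference targets appearance shifts while leaving structural content free.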


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
6.
Comput Biol Med ; 145: 105427, 2022 06.
Article in English | MEDLINE | ID: mdl-35585731

ABSTRACT

Owing to the data distribution shifts generated by collecting images using various imaging protocols and device vendors, the generalization capability of deep models is crucial for medical image analysis when applied to test datasets in clinical environments. Domain generalization (DG) methods have shown promising generalization performance in the field of medical image segmentation. In contrast to conventional DG, which has strict requirements regarding the availability of multiple source domains, we consider a more challenging problem, that is, single-domain generalization (SDG), where only a single source is available during network training. In this scenario, the augmentation of the entire image to improve the model generalization ability may cause alteration of hue values, resulting in the wrong segmentation of tissues in color medical images. To resolve this problem, we first present a novel illumination-randomized SDG framework to improve the model generalization power for color medical image segmentation by synthesizing randomized illumination maps. Specifically, we devise unsupervised retinex-based image decomposition neural networks (ID-Nets) to decompose color medical images into reflectance and illumination maps. Illumination maps are augmented by performing illumination randomization to generate medical color images under diverse illumination conditions. Second, to measure the quality of retinex-based image decomposition, we devise a novel metric, the transport gradient consistency index, by modeling physical illumination. Extensive experiments are performed to evaluate our proposed framework on two retinal fundus image segmentation tasks: optic cup and disc segmentation. The experimental results demonstrate that our framework outperforms other SDG and image enhancement methods, surpassing the state-of-the-art SDG methods by up to 9.6% with respect to the Dice coefficient.
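The illumination randomization step above can be sketched once a reflectance/illumination split is available. This toy stand-in (the paper uses learned ID-Nets) warps the illumination map with a random gamma and re-renders the image; the function name, `gamma_range`, and the gamma-warp itself are illustrative assumptions.

```python
import random

def randomize_illumination(refl, illum, rng=None, gamma_range=(0.5, 2.0)):
    """Re-render an image under a randomly gamma-warped illumination map
    while keeping the reflectance (tissue colors) fixed."""
    rng = rng or random.Random(0)
    g = rng.uniform(*gamma_range)
    new_illum = [[v ** g for v in row] for row in illum]          # warp lighting
    new_img = [[r * i for r, i in zip(rr, ir)]                    # recombine
               for rr, ir in zip(refl, new_illum)]
    return new_img, g

refl = [[0.8, 0.8], [0.8, 0.8]]          # constant reflectance
illum = [[1.0, 0.5], [0.5, 0.25]]        # uneven lighting
img, g = randomize_illumination(refl, illum)
```

Training on many such re-lit copies of a single source domain is what pushes the segmentation model to become insensitive to lighting, which is exactly what a metric like the transport gradient consistency index would then probe.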


Subject(s)
Lighting; Optic Disk; Fundus Oculi; Image Enhancement; Image Processing, Computer-Assisted; Neural Networks, Computer
7.
Sensors (Basel) ; 20(11)2020 Jun 03.
Article in English | MEDLINE | ID: mdl-32503191

ABSTRACT

Crowdsensing applications provide platforms for sharing sensing data collected by mobile devices. A blockchain system has the potential to replace a traditional centralized trusted third party for crowdsensing services, performing operations such as evaluating the quality of sensing data, completing payment, and storing sensing data. The requirements, codified as smart contracts, are executed to evaluate the quality of sensing data in a blockchain. However, one key challenge is that malicious requesters can deliberately publish abnormal requirements that cause the quality evaluation process to fail even when the quality of the sensing data is actually sufficient. Moreover, if requesters control a miner node or full node, they can access the data without making payment because of the transparency of data stored in the blockchain. These issues promote unfair dealing and severely lower the motivation of workers to participate in crowdsensing tasks. We (i) propose a novel crowdsensing scheme that addresses these issues using Trusted Execution Environments; (ii) offer a solution for the confidentiality and integrity of sensing data, which is accessible only by the worker and the corresponding requester; and (iii) report on the implementation of a prototype and evaluate its performance. Our results demonstrate that the proposed solution can guarantee fairness without a significant increase in overhead.

8.
J Korean Med Sci ; 35(12): e90, 2020 Mar 30.
Article in English | MEDLINE | ID: mdl-32233159

ABSTRACT

BACKGROUND: Virtual environments have brought the use of realistic training closer to many different fields of education. In medical education, several visualization methods for studying inside the human body have been introduced as a way to verify the structure of internal organs. However, these methods are insufficient for realistic training simulators because they do not provide photorealistic scenes or offer an intuitive perception to the user. In addition, they are used in limited environments within a classroom setting. METHODS: We have developed a virtual dissection exploration system that provides realistic three-dimensional images and a virtual endoscopic experience. This system enables the user to manipulate a virtual camera through a human organ, using gesture-sensing technology. We can make a virtual dissection image of the human body using a virtual dissection simulator and then navigate inside an organ using a virtual endoscope. To improve the navigation performance during virtual endoscopy, our system warns the user about any potential collisions that may occur against the organ's wall by taking the virtual control sphere at the virtual camera position into consideration. RESULTS: Experimental results show that our system efficiently provides high-quality anatomical visualization. We can simulate anatomic training using virtual dissection and endoscopic images. CONCLUSION: Our training simulator would be helpful in training medical students because it provides an immersive environment.
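The collision warning based on a control sphere at the camera position can be sketched as a nearest-distance test. This is a deliberate simplification of the system described above: representing the organ wall as sampled points and the names `collision_warning` and `radius` are assumptions for illustration.

```python
import math

def collision_warning(camera, wall_points, radius=1.0):
    """Warn when the control sphere centered at the virtual camera comes
    within `radius` of any sampled point on the organ wall."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    nearest = min(dist(camera, w) for w in wall_points)
    return nearest < radius, nearest

# two sampled wall points; camera first in open space, then near the wall
wall = [(0.0, 0.0, 5.0), (0.0, 3.0, 0.0)]
warn_far, d_far = collision_warning((0.0, 0.0, 0.0), wall)    # nearest wall 3.0 away
warn_near, d_near = collision_warning((0.0, 2.5, 0.0), wall)  # nearest wall 0.5 away
```

In a real volume-rendered scene the wall distance would come from the segmented organ surface rather than a point list, but the sphere-versus-nearest-distance test is the same.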


Subject(s)
Computer Simulation; Education, Medical; Endoscopy; User-Computer Interface; Clinical Competence; Endoscopy/education; Endoscopy/methods; Human Body; Humans; Students, Medical
9.
Comput Biol Med ; 117: 103608, 2020 02.
Article in English | MEDLINE | ID: mdl-32072967

ABSTRACT

Light effects are frequently used in volume rendering because they can depict the shapes of objects more realistically. Global illumination reflects light intensity values at the relevant pixel positions of reconstructed images, taking scattering and extinction phenomena into account. However, for ultrasound volumes, which do not use Cartesian coordinates, internal lighting operations generate errors owing to the distorted direction of light propagation, so the amount of light and its effects vary with position inside the volume. In this study, we present a novel global illumination method in which the light is calibrated along the progression direction in accordance with volume ray casting in non-Cartesian coordinates. In addition, we reduce the cost of these lighting operations using a light-distribution template. Experimental results show volume renderings in non-Cartesian coordinates that realistically visualize the global illumination effect. By applying the light template kernels adaptively, the light scattering effect is expressed uniformly in the top and bottom areas, where many distortions are generated in ultrasound coordinates. Our method can effectively reveal dark areas that are invisible owing to differences in brightness between the upper and lower regions of the ultrasound coordinates. Our method can be used to show the shape of the fetus realistically during examinations with ultrasonography.


Subject(s)
Image Enhancement; Imaging, Three-Dimensional; Ultrasonography
10.
Comput Intell Neurosci ; 2019: 8527819, 2019.
Article in English | MEDLINE | ID: mdl-31485217

ABSTRACT

With the widespread use of deep learning methods, semantic segmentation has achieved great improvements in recent years. However, many researchers have pointed out that repeated convolution and pooling operations cause considerable information loss during feature extraction. To solve this problem, various operations and network architectures have been suggested to make up for this loss of information. We observed a trend in many studies of designing networks as a symmetric type, with the two parts representing the "encoding" and "decoding" stages. Through "upsampling" operations in the "decoding" stage, feature maps are reconstructed in a way that more or less compensates for the losses in previous layers. In this paper, we focus on upsampling operations, provide a detailed analysis, and compare the methods currently used in several well-known neural networks. We also draw on knowledge from image restoration to design a new upsampling layer (or operation) named the TGV upsampling algorithm. We successfully replaced the upsampling layers of previous research with our new method. We found that our model better preserves the detailed textures and edges of feature maps and can, on average, achieve 1.4-2.3% higher accuracy than the original models.
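For context, the kind of fixed upsampling that such a decoder-stage layer replaces can be shown concretely. The sketch below is plain bilinear interpolation, not the TGV algorithm itself; the paper's contribution is a variational (total generalized variation) reconstruction in place of exactly this kind of fixed interpolation, and all names here are illustrative.

```python
def bilinear_upsample(img, factor=2):
    """Plain bilinear upsampling of a 2D grid by an integer factor."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for Y in range(H):
        for X in range(W):
            y = min(Y / factor, h - 1)           # source coordinates
            x = min(X / factor, w - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[Y][X] = (img[y0][x0] * (1 - dy) * (1 - dx)   # blend 4 neighbors
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

up = bilinear_upsample([[0.0, 1.0], [1.0, 0.0]])   # 2x2 -> 4x4
```

Because bilinear blending averages across discontinuities (note the 0.5 values between the 0 and 1 corners above), it smears edges; an edge-aware reconstruction such as TGV aims to preserve them.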


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Neural Networks, Computer; Semantics; Computer Simulation; Image Processing, Computer-Assisted/methods
11.
Int. j. morphol ; 37(3): 1016-1022, Sept. 2019. tab, graf
Article in English | LILACS | ID: biblio-1012390

ABSTRACT

To allow students and surgeons to learn the sites for botulinum toxin injection, new types of educational images are needed because MRI, CT, and sectioned images are inadequate. This article describes browsing software that displays peeled face images, in which layers along the curved surface of the face are peeled gradually at even depths across the surface. Two volume models of the head were reconstructed from the sectioned images and segmented images of Visible Korean, respectively. These volume models were peeled serially at a thickness of 0.2 mm along the curved surface of the facial skin to construct the peeled images and peeled segmented images. All of the peeled images were marked with botulinum toxin injection sites, facial creases and wrinkles, and fat compartments. All peeled images and the accompanying text information were entered into the browsing software. The browsing software shows 12 botulinum toxin injection sites on all peeled images in the anterior and lateral views. Further, the software shows 23 anatomic landmarks, 13 facial creases and wrinkles, and 7 facial fat compartments. When a user points at any structure on the peeled images, the name of the structure appears. Our software featuring the peeled images will be particularly effective in helping medical students quickly and easily learn the accurate facial anatomy of botulinum toxin injection sites. It will also be useful for explaining plastic surgery procedures to patients and for studying the anatomic structure of the human face.




Subject(s)
Humans; Surgery, Plastic/education; Visible Human Projects; Face/anatomy & histology; Botulinum Toxins; Cadaver; Image Interpretation, Computer-Assisted; Color; Anatomic Landmarks; Models, Anatomic
12.
J Korean Med Sci ; 34(3): e15, 2019 Jan 21.
Article in English | MEDLINE | ID: mdl-30662383

ABSTRACT

BACKGROUND: The curved sectional planes of the human body can provide a new approach to surface anatomy that the classical horizontal, coronal, and sagittal planes cannot. The purpose of this study was to verify whether the curved sectional planes contribute to the morphological comprehension of anatomical structures. METHODS: By stacking the sectioned images of a male cadaver, a volume model of the right half of the body was produced (voxel size 1 mm). The sectioned images with the segmentation data were also used to build another volume model. The volume models were peeled and rotated, and the results were screen captured. The captured images were loaded into user-friendly browsing software that had been made in the laboratory. RESULTS: The browsing software is downloadable from the authors' homepage (anatomy.co.kr). In the software, the volume model is peeled at 1 mm thicknesses and rotated in 30-degree increments. Since the volume models were made from cadaveric images, the actual colors of the structures are displayed in high resolution. Thanks to the segmentation data, the structures on the volume model could be annotated automatically. Using the software, the sternocleidomastoid muscle and the internal jugular vein in the neck region, the cubital fossa in the upper limb region, and the femoral triangle in the lower limb region could be observed and described. CONCLUSION: For students learning various medical procedures, the software presents the needed graphic information of the human body. The curved sectional planes are expected to be a tool for disciplinary convergence of sectional anatomy and surface anatomy.


Subject(s)
Anatomy, Cross-Sectional/methods; Models, Anatomic; Adult; Cadaver; Humans; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Male; User-Computer Interface
13.
J Korean Med Sci ; 33(8): e64, 2018 Feb 19.
Article in English | MEDLINE | ID: mdl-29441756

ABSTRACT

BACKGROUND: The hand anatomy, including the complicated hand muscles, can be grasped by using computer-assisted learning tools with high quality two-dimensional images and three-dimensional models. The purpose of this study was to present up-to-date software tools that promote learning of stereoscopic morphology of the hand. METHODS: On the basis of horizontal sectioned images and outlined images of a male cadaver, vertical planes, volume models, and surface models were elaborated. Software to browse pairs of the sectioned and outlined images in orthogonal planes and software to peel and rotate the volume models, as well as a portable document format (PDF) file to select and rotate the surface models, were produced. RESULTS: All of the software tools were downloadable free of charge and usable off-line. The three types of tools for viewing multiple aspects of the hand could be adequately employed according to individual needs. CONCLUSION: These new tools involving the realistic images of a cadaver and the diverse functions are expected to improve comprehensive knowledge of the hand shape.


Subject(s)
Hand/anatomy & histology; Models, Anatomic; Software; Cadaver; Humans; Image Processing, Computer-Assisted
14.
J Korean Med Sci ; 32(7): 1195-1201, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28581279

ABSTRACT

The thousands of serial images used for medical pedagogy cannot be included in a printed book; they also cannot be efficiently handled by ordinary image viewer software. The purpose of this study was to provide browsing software to grasp serial medical images efficiently. The primary function of the newly programmed software was to select images using 3 types of interfaces: buttons or a horizontal scroll bar, a vertical scroll bar, and a checkbox. The secondary function was to show the names of the structures that had been outlined on the images. To confirm the functions of the software, 3 different types of image data of cadavers (sectioned and outlined images, volume models of the stomach, and photos of the dissected knees) were inputted. The browsing software was downloadable for free from the homepage (anatomy.co.kr) and available off-line. The data sets provided could be replaced by any developers for their educational achievements. We anticipate that the software will contribute to medical education by allowing users to browse a variety of images.


Subject(s)
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Models, Anatomic; Software; Cadaver; Humans; Magnetic Resonance Imaging; Tomography, X-Ray Computed; User-Computer Interface
15.
Ann Anat ; 211: 202-206, 2017 May.
Article in English | MEDLINE | ID: mdl-28274804

ABSTRACT

This study was intended to confirm whether simultaneous examination of surface and volume models contributes to learning of hand structures. Outlines of the skin, muscles, and bones of the right hand were traced in sectioned images of a male cadaver to create surface models of the structures. After the outlines were filled with selected colors, the color-filled sectioned images were stacked to produce a volume model of the hand, from which the skin was gradually peeled. The surface models provided locational orientation of the hand structures such as extrinsic and intrinsic hand muscles, while the peeled volume model revealed the depth of the individual hand structures. In addition, the characteristic appearances of the radial artery and the wrist joint were confirmed. The exploration of the volume model accompanied by equivalent surface models is synergistically helpful for understanding the morphological properties of hand structures.


Subject(s)
Anatomy/education; Computer-Assisted Instruction/methods; Hand/anatomy & histology; Imaging, Three-Dimensional/methods; Models, Anatomic; User-Computer Interface; Cadaver; Humans; Male; Republic of Korea; Skin/anatomy & histology; Teaching
16.
Int. j. morphol ; 34(3): 939-944, Sept. 2016. ilus
Article in English | LILACS | ID: biblio-828966

ABSTRACT

Diagnosing and treating stomach diseases requires as many of the related anatomical details as possible. The objective of this study, based on the sectioned images of a cadaver, was to offer interested clinicians anatomical knowledge about the stomach and its neighbors from a new viewpoint. As the raw data, sectioned images of a male cadaver without stomach pathology were used. By manual segmentation and automatic interpolation, a high-quality volume model of the stomach was reconstructed. The model was continuously peeled and piled to synthetically reveal the inside and outside of the stomach. The anterior, posterior, right, and left views of the models were compared with a chosen sectioned image. The numerous stomach images were then put into user-friendly browsing software. Some advantages of this study are that the sectioned images reveal real stomach color at high resolution; the peeled and piled volume models reveal new features of the stomach and its surroundings; and the processed models can be conveniently browsed in the presented software. These image data and tutorial software are expected to be helpful for acquiring supplementary morphologic information on the stomach and related structures.




Subject(s)
Humans; Male; Adult; Stomach/anatomy & histology; User-Computer Interface; Visible Human Projects; Cadaver; Models, Anatomic; Software
18.
Ann Anat ; 208: 19-23, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27475426

ABSTRACT

Novice doctors may watch tutorial videos when training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. From the SIs and segmented images, a three-dimensional model was reconstructed. Six hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. Navigation views showing the current location of the colonoscope tip and its course, as well as supplementary description views, were elaborated. The four corresponding views were put into convenient browsing software, downloadable free from the homepage (anatomy.co.kr). The SI colonoscopy software, with its realistic images and supportive tools, is available to anybody. Users can readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in the colonoscopic views. The software is expected to be an auxiliary learning tool for improving technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project.


Subject(s)
Anatomy, Cross-Sectional/education , Anatomy/education , Colon/anatomy & histology , Colonoscopy/education , Computer-Assisted Instruction/methods , Software , Anatomy, Cross-Sectional/methods , Cadaver , Humans , Imaging, Three-Dimensional/methods , Models, Anatomic , Teaching , User-Computer Interface
19.
Comput Methods Programs Biomed ; 133: 25-34, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27393797

ABSTRACT

BACKGROUND AND OBJECTIVE: This paper introduces an effective noise removal method for medical ultrasound volume data. Ultrasound data usually need to be filtered because they contain significant noise. Conventional two-dimensional (2D) filtering methods cannot use the implicit information between adjacent layers, and existing 3D filtering methods are slow because of complicated filter kernels. Although one existing method uses simple filters for speed, it is inefficient at removing noise and does not take the characteristics of ultrasound sampling into account. To solve this problem, we introduce a fast filtering method using parallel bilateral filtering that adjusts the filter window size in proportion to its position. METHODS: We devised parallel bilateral filtering by computing a 3D summed area table of a quantized spatial filter. The filtering is made adaptive by changing the kernel window size according to the distance from the ultrasound signal transmission point. RESULTS: Experiments compared the noise removal and loss of original data of anisotropic diffusion filtering, bilateral filtering, and adaptive bilateral filtering on ultrasound volume-rendered images. The results show that the adaptive filter correctly accounts for the sampling characteristics of ultrasound volumes. CONCLUSIONS: The proposed method removes noise more efficiently, and with less distortion of the ultrasound data, than existing simple or non-adaptive filtering methods.
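The adaptive idea in this abstract, a bilateral filter whose window widens with distance from the transducer, can be illustrated with a minimal 2D CPU sketch. This is not the authors' parallel summed-area-table implementation; the growth rate `growth`, range parameter `sigma_r`, and base half-window are illustrative assumptions:

```python
import numpy as np

def adaptive_bilateral_2d(img, sigma_r=0.1, base_half=1, growth=0.02):
    """Bilateral filter whose window grows with the row index (depth),
    mimicking the widening of ultrasound scan lines away from the probe.
    Brute-force sketch; the paper accelerates this with a summed area
    table of a quantized spatial kernel, computed in parallel."""
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape
    for y in range(rows):
        half = base_half + int(growth * y)  # window widens with depth
        for x in range(cols):
            y0, y1 = max(0, y - half), min(rows, y + half + 1)
            x0, x1 = max(0, x - half), min(cols, x + half + 1)
            patch = img[y0:y1, x0:x1].astype(float)
            # Range weights: similar intensities contribute more,
            # so strong edges are preserved while speckle is averaged.
            w = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            out[y, x] = (w * patch).sum() / w.sum()
    return out
```

The summed-area-table trick in the paper replaces the inner patch sums with constant-time lookups, which is what makes the adaptive (position-dependent) window affordable.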


Subject(s)
Imaging, Three-Dimensional , Ultrasonography, Prenatal , Female , Humans , Pregnancy
20.
J Xray Sci Technol ; 24(4): 537-48, 2016 04 24.
Article in English | MEDLINE | ID: mdl-27127935

ABSTRACT

Data sets containing colored anatomical images of the human body, such as Visible Human or Visible Korean, show realistic internal organ structures. However, imperfect segmentations of these color images, which are typically generated manually or semi-automatically, produce poor-quality rendering results. We propose an interactive high-quality visualization method using GPU-based refinements to aid the study of anatomical structures. To represent the boundaries of a region of interest (ROI) smoothly, we apply Gaussian filtering to the opacity values of the color volume. Morphological grayscale erosion is then performed to shrink the region back, since it is expanded by the Gaussian filtering. Pseudo-coloring and color blending are also applied to the color volume to produce more informative renderings. We implement these operations on GPUs to speed up the refinements. As a result, our method delivers high-quality images with smooth boundaries, and the refinements are fast enough to be used with interactive rendering as the ROI changes, especially compared with CPU-based methods. Moreover, the pseudo-coloring methods present anatomical structures clearly.
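The two boundary-refinement passes described above, Gaussian smoothing of the ROI opacity followed by grayscale erosion to counter the resulting expansion, can be sketched on the CPU as follows. `scipy.ndimage` stands in for the GPU kernels, and `sigma` and the erosion footprint are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def refine_roi_opacity(mask, sigma=1.5, erosion_size=3):
    """Turn a hard (possibly jagged) ROI segmentation mask into smooth
    opacity values for rendering.

    1. Gaussian filtering softens the staircase boundary of the mask,
       but also pushes nonzero opacity outward past the true border.
    2. Grayscale erosion (a moving minimum) pulls the region back in,
       compensating for that expansion.
    """
    opacity = ndimage.gaussian_filter(mask.astype(float), sigma=sigma)
    eroded = ndimage.grey_erosion(opacity, size=(erosion_size,) * mask.ndim)
    return np.clip(eroded, 0.0, 1.0)
```

On the GPU these would be separable convolution and min-filter passes over the opacity channel, re-run whenever the user changes the ROI.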


Subject(s)
Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Computer Graphics , Databases, Factual , Head/diagnostic imaging , Humans , Normal Distribution , Torso/diagnostic imaging