Results 1 - 20 of 125,151
1.
Eur J Med Res ; 25(1): 49, 2020 Oct 12.
Article in English | MEDLINE | ID: mdl-33046116

ABSTRACT

BACKGROUND: The coronavirus disease 2019 (COVID-19) has caused a global disaster. Quantitative lesion analysis may provide radiological evidence of the severity of pneumonia and allow further assessment of the effect of comorbidities on patients with COVID-19. METHODS: 294 patients with COVID-19 were enrolled at six centers from February 24, 2020 to June 1, 2020. A multi-task U-Net was used to segment the whole lung and the lesions from chest CT images. This deep learning method was pre-trained on 650 CT images (550 in the primary dataset and 100 in the test dataset) with COVID-19 or community-acquired pneumonia, and Dice coefficients were calculated on the test dataset. 50 CT scans of 50 patients (15 with comorbidity and 35 without comorbidity) were randomly selected for manual lesion annotation, and these results were compared with the automatic segmentation model. Eight quantitative parameters were calculated from the segmentation results to evaluate the effect of comorbidity on patients with COVID-19. RESULTS: The quantitative segmentation model proved effective and accurate, with all Dice coefficients above 0.85 and all accuracies above 0.95. Of the 294 patients, 52 (17.7%) had at least one comorbidity and 14 (4.8%) had more than one. Patients with any comorbidity were older (P < 0.001), had a longer incubation period (P < 0.001), were more likely to have abnormal laboratory findings (P < 0.05), and were more likely to be in severe condition (P < 0.001). Patients with any comorbidity showed a greater lesion burden (larger volumes of lesion, consolidation, and ground-glass opacity) than patients without comorbidity (all P < 0.001), and more lesions were found on CT images of patients with more comorbidities. Among the three most common single comorbidities, the diabetes mellitus group had the largest median volumes of lesion, consolidation, and ground-glass opacity. CONCLUSIONS: A multi-task U-Net can perform quantitative CT analysis of lesions to assess the effect of comorbidity on patients with COVID-19 and provide radiological evidence of the severity of pneumonia. More lesions (including GGO and consolidation) were found on CT images of cases with comorbidity; the more comorbidities patients have, the more lesions their CT images show.
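
For reference, the Dice coefficient used above to validate the segmentation measures the overlap between a predicted and a reference binary mask. A minimal NumPy sketch (illustrative, not the authors' code):

    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        """Dice overlap between two binary masks (arrays of 0/1)."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)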


Subjects
Algorithms; Betacoronavirus; Coronavirus Infections/epidemiology; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Pneumonia, Viral/epidemiology; Pneumonia/diagnosis; Tomography, X-Ray Computed/methods; Adult; Aged; Comorbidity; Coronavirus Infections/diagnosis; Female; Humans; Male; Middle Aged; Pandemics; Pneumonia/epidemiology; Pneumonia, Viral/diagnosis; Reproducibility of Results; Retrospective Studies
2.
Nat Commun ; 11(1): 4949, 2020 10 02.
Article in English | MEDLINE | ID: mdl-33009388

ABSTRACT

Electron microscopy (EM) is widely used for studying cellular structure and network connectivity in the brain. We have built a parallel imaging pipeline using transmission electron microscopes that scales this technology, implements 24/7 continuous autonomous imaging, and enables the acquisition of petascale datasets. The suitability of this architecture for large-scale imaging was demonstrated by acquiring a volume of more than 1 mm³ of mouse neocortex, spanning four different visual areas at synaptic resolution, in less than 6 months. Over 26,500 ultrathin tissue sections from the same block were imaged, yielding a dataset of more than 2 petabytes. The combined burst acquisition rate of the pipeline is 3 Gpixel per sec and the net rate is 600 Mpixel per sec with six microscopes running in parallel. This work demonstrates the feasibility of acquiring EM datasets at the scale of cortical microcircuits in multiple brain regions and species.


Subjects
Image Processing, Computer-Assisted; Microscopy, Electron, Transmission; Nerve Net/ultrastructure; Neurons/ultrastructure; Animals; Automation; Mice; Neocortex/diagnostic imaging; Software
3.
Nihon Hoshasen Gijutsu Gakkai Zasshi ; 76(10): 1009-1016, 2020.
Article in Japanese | MEDLINE | ID: mdl-33087646

ABSTRACT

PURPOSE: The purpose of this paper was to determine the optimal imaging conditions for four-dimensional cone-beam computed tomography (4D-CBCT) using an X-ray tube and a flat-panel detector mounted on a radiotherapy device. METHODS: The optimal imaging conditions were examined by changing the gantry speed (GS) parameter that affected the exposure time. Exposed dose during imaging and image quality of moving phantom were compared between examined conditions. RESULTS: The weighted computed tomography dose index (CTDIW) decreased linearly with increasing GS. However, when GS was 180°/min or faster, the image quality degraded, and errors of 1 mm or more were observed regarding the size of mock tumor in the moving phantom. The accuracy of automatic image matching was within 0.1 mm when GS of 120°/min or slower was chosen. CONCLUSION: From the results of this study, we concluded that GS of 120°/min is the optimum imaging condition. Under this imaging condition, the exposure time and CTDIW can be reduced by about 50% without compromising the accuracy of image registration, compared to the conventional GS of 70°/min. In addition, it has been clarified that there is an event that image reconstruction is not performed correctly due to the influence of phantom artifacts without depending on GS.
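
For reference, the weighted CT dose index reported above is conventionally combined from the center and peripheral chamber measurements as CTDIw = (1/3)·CTDI_center + (2/3)·CTDI_periphery. A one-line sketch of that standard definition (not specific to this study's dosimetry setup):

    def ctdi_w(ctdi_center, ctdi_periphery):
        """Weighted CT dose index from center and peripheral CTDI measurements (mGy)."""
        return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0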


Subjects
Cone-Beam Computed Tomography; Four-Dimensional Computed Tomography; Artifacts; Image Processing, Computer-Assisted; Phantoms, Imaging
4.
Nihon Hoshasen Gijutsu Gakkai Zasshi ; 76(10): 1025-1034, 2020.
Article in Japanese | MEDLINE | ID: mdl-33087648

ABSTRACT

PURPOSE: The aim of this study was to identify the optimal post-reconstruction filter type for the three-dimensional ordered-subset expectation maximization (3D-OSEM) method in bone single-photon emission computed tomography (SPECT), in terms of image quality and quantitative values. METHODS: We scanned a National Electrical Manufacturers Association (NEMA) body phantom for bone SPECT filled with a 99mTc solution whose radioactivity concentration was accurately measured. The SPECT images were reconstructed with the 3D-OSEM method, and post-reconstruction filtering was performed with a Butterworth filter (BW), a Gaussian filter (GA), and a Hanning filter (HA) using various parameters. Image quality was evaluated by the normalized mean-squared error (NMSE) and the percentage contrast-to-noise ratio (QNR17). Quantitative values were evaluated by the errors between the measured and true radioactivity concentrations in the background (BG) region and the insert spheres. RESULTS: The minimum NMSE values were 0.034 (BW), 0.036 (GA), and 0.035 (HA), with no difference by filter type. The QNR17 values were 2.5 (BW), 2.6 (GA), and 2.6 (HA), again with no difference by filter type. The BG region was strongly affected by parameter changes in GA but less so by those in BW and HA. The error values for the 37 mm insert sphere were 18.0% (BW), 28.2% (GA), and 26.2% (HA), with BW showing the lowest value. CONCLUSION: Our results suggest that BW is the preferred post-reconstruction filter for the 3D-OSEM method in terms of both image quality and quantitative values.
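
The NMSE used above is commonly defined as the squared error normalized by the energy of the reference image; the authors' exact normalization is not stated, so the sketch below only illustrates that common convention:

    import numpy as np

    def nmse(image, reference):
        """Normalized mean-squared error: squared error over reference energy."""
        image = np.asarray(image, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return np.sum((image - reference) ** 2) / np.sum(reference ** 2)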


Subjects
Algorithms; Image Processing, Computer-Assisted; Phantoms, Imaging; Tomography, Emission-Computed, Single-Photon
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2015-2018, 2020 07.
Article in English | MEDLINE | ID: mdl-33018399

ABSTRACT

Image filtering is a technique that can create additional visual representations of an original image. Entropy filtering is a specific application that can be used to highlight the randomness of pixel grayscale intensities within an image. The image maps created by filtering depend on the size of the surrounding pixel neighbourhood considered; however, there is no standard procedure for determining the correct neighbourhood size to use. We investigated the effect of neighbourhood size on the entropy calculation and provide a standardized approach for determining an appropriate neighbourhood size for entropy filtering in a musculoskeletal application. Ten healthy subjects with no symptoms of neuromuscular disease were recruited and ultrasound images of their trapezius muscle were acquired. The muscle regions in the images were manually isolated, and regions of interest were extracted and entropy-filtered with neighbourhood sizes increasing by 2 pixels from 3×3 to 61×61. The entropy, the ratio of signal entropy to noise entropy, the statistical effect size, and the percentage change and instantaneous slope of the effect size were examined. The analysis showed that a neighbourhood size in the range of 21-25 pixels provides the maximum amount of information gained and coincides with a percentage change of the effect size of less than 5% and instantaneous slopes < 0.05.
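
As an illustration of the neighbourhood-size sweep described above, local entropy filtering with square neighbourhoods of increasing size can be done with scikit-image. A sketch assuming an 8-bit grayscale ultrasound region of interest in img (the placeholder array and variable names are illustrative, not the authors' pipeline):

    import numpy as np
    from skimage.filters.rank import entropy
    from skimage.morphology import square

    # img: 2-D uint8 grayscale ultrasound region of interest (placeholder here)
    img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)

    mean_entropy = {}
    for n in range(3, 63, 2):                 # odd neighbourhood sizes 3x3 ... 61x61
        ent_map = entropy(img, square(n))     # local entropy within an n x n window
        mean_entropy[n] = ent_map.mean()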


Subjects
Image Processing, Computer-Assisted; Entropy; Humans; Ultrasonography
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2113-2116, 2020 07.
Article in English | MEDLINE | ID: mdl-33018423

ABSTRACT

The purpose of this study was to develop an automatic method for segmenting the muscle cross-sectional area on transverse B-mode ultrasound images of the gastrocnemius medialis using a convolutional neural network (CNN). The dataset contains images with both normal and increased echogenicity. The manually annotated dataset consisted of 591 images from 200 subjects: 400 from subjects with normal echogenicity and 191 from subjects with increased echogenicity. Images were extracted from the DICOM files and processed with the CNN, and the output was post-processed to obtain a finer segmentation. The final results were compared with the manual segmentations. Precision and recall scores (mean ± standard deviation) for the training, validation, and test sets are 0.96 ± 0.05, 0.90 ± 0.18, 0.89 ± 0.15 and 0.97 ± 0.03, 0.89 ± 0.17, 0.90 ± 0.14, respectively. The CNN approach was also compared with another automatic algorithm and showed better performance. The proposed automatic method provides an accurate estimation of muscle cross-sectional area in muscles with different echogenicity levels.
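
The precision and recall scores reported above can be computed pixel-wise between a predicted mask and a manual annotation. A minimal NumPy sketch (illustrative, not the authors' evaluation code):

    import numpy as np

    def precision_recall(pred, target, eps=1e-7):
        """Pixel-wise precision and recall for binary segmentation masks."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        tp = np.logical_and(pred, target).sum()
        precision = tp / (pred.sum() + eps)
        recall = tp / (target.sum() + eps)
        return precision, recall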


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Humans; Muscle, Skeletal/diagnostic imaging; Ultrasonography
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 4390-4393, 2020 07.
Article in English | MEDLINE | ID: mdl-33018968

ABSTRACT

Falls of patients in hospitals are serious accidents. To prevent such accidents, we have been studying fall prevention using image processing technology. Our previous studies detected the patient's end-sitting position (sitting on the edge of the bed) with high accuracy, but had problems handling patients who are sitting while eating or receiving visitors. To solve these problems, this paper proposes a method to detect a patient's bed-exit action by analyzing the patient's posture, extracted from monocular camera images, with a long short-term memory (LSTM) network. The proposed method introduces two strategies for the input time series of human images, abstraction of the input information and use of relative position information, and achieves a 99.2% detection rate of bed-exit actions with a 5.7% false detection rate. Detecting the bed-exit action with high accuracy helps prevent patient falls. The proposed solution handles only posture information abstracted from the camera images, which protects patient privacy.
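
A minimal PyTorch sketch of the general idea of classifying a time series of abstracted posture features with an LSTM; the feature dimension, sequence length, and classification head are illustrative assumptions, not the authors' architecture:

    import torch
    import torch.nn as nn

    class BedExitLSTM(nn.Module):
        """Classify a sequence of posture feature vectors as bed-exit vs. not."""
        def __init__(self, feature_dim=34, hidden_dim=64):
            super().__init__()
            self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 2)

        def forward(self, x):                 # x: (batch, time, feature_dim)
            _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_dim)
            return self.head(h_n[-1])         # logits: (batch, 2)

    logits = BedExitLSTM()(torch.randn(8, 30, 34))  # e.g. 30 frames of 17 keypoints (x, y)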


Assuntos
Memória de Curto Prazo , Postura , Acidentes por Quedas/prevenção & controle , Humanos , Processamento de Imagem Assistida por Computador , Postura Sentada
8.
Sci Rep ; 10(1): 16942, 2020 10 09.
Article in English | MEDLINE | ID: mdl-33037291

ABSTRACT

The use of imaging data has been reported to be useful for rapid diagnosis of COVID-19. Although computed tomography (CT) scans show a variety of signs caused by the viral infection, given a large number of images these visual features are difficult and time-consuming for radiologists to recognize. Artificial intelligence methods for automated classification of COVID-19 on CT scans have been found to be very promising. However, the investigation of pretrained convolutional neural networks (CNNs) for COVID-19 diagnosis using CT data has so far been limited. This study presents an investigation of 16 pretrained CNNs for classification of COVID-19 using a large public database of CT scans collected from COVID-19 patients and non-COVID-19 subjects. The results show that, using only 6 epochs of training, the CNNs achieved very high performance on the classification task. Among the 16 CNNs, DenseNet-201, the deepest network, is the best in terms of accuracy, balance between sensitivity and specificity, [Formula: see text] score, and area under the curve. Furthermore, transfer learning with direct input of whole image slices and without data augmentation provided better classification rates than the use of data augmentation. This finding alleviates the need for data augmentation and manual extraction of regions of interest on CT images, which are adopted by current implementations of deep-learning models for COVID-19 classification.
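
A minimal transfer-learning sketch in the spirit of the study, using torchvision's ImageNet-pretrained DenseNet-201 with the classifier replaced for a two-class COVID/non-COVID task; the optimizer, learning rate, and data loader are illustrative assumptions, not the authors' training setup:

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.densenet201(pretrained=True)                     # ImageNet-pretrained backbone
    model.classifier = nn.Linear(model.classifier.in_features, 2)   # COVID vs. non-COVID head

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # train_loader is an illustrative DataLoader yielding (image, label) batches
    # for epoch in range(6):                  # the study reports only 6 training epochs
    #     for images, labels in train_loader:
    #         optimizer.zero_grad()
    #         loss = criterion(model(images), labels)
    #         loss.backward()
    #         optimizer.step()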


Subjects
Coronavirus Infections/diagnostic imaging; Coronavirus Infections/diagnosis; Neural Networks, Computer; Pneumonia, Viral/diagnostic imaging; Pneumonia, Viral/diagnosis; Tomography, X-Ray Computed/methods; Artificial Intelligence; Betacoronavirus; Data Management; Databases, Factual; Humans; Image Processing, Computer-Assisted; Lung/diagnostic imaging; Lung/pathology; Pandemics
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1186-1189, 2020 07.
Article in English | MEDLINE | ID: mdl-33018199

ABSTRACT

With the development of convolutional neural networks, classification of ordinary natural images has made remarkable progress using single feature maps. However, it is difficult to consistently produce good results on coronary artery angiograms, because angiograms contain substantial imaging noise and the gaps between classification targets are small. In this paper, we propose a new network that enhances the richness and relevance of features during training by using multiple convolutions with different kernel sizes, which improves the final classification result. Our network has strong generalization ability, performing better across a variety of classification tasks on angiograms. Compared with several state-of-the-art image classification networks, our best results increase classification recall by 30.5% and precision by 19.1%.
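
The multi-kernel idea (parallel convolutions with different kernel sizes whose outputs are concatenated) can be sketched as a PyTorch block; this illustrates the general technique, not the authors' network, and the channel counts are arbitrary:

    import torch
    import torch.nn as nn

    class MultiKernelBlock(nn.Module):
        """Parallel 3x3, 5x5 and 7x7 convolutions, concatenated along channels."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
                for k in (3, 5, 7)
            ])

        def forward(self, x):
            return torch.cat([branch(x) for branch in self.branches], dim=1)

    features = MultiKernelBlock(1, 16)(torch.randn(2, 1, 256, 256))  # -> (2, 48, 256, 256)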


Subjects
Coronary Vessels; Image Processing, Computer-Assisted; Angiography; Coronary Vessels/diagnostic imaging; Neural Networks, Computer
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1198-1202, 2020 07.
Article in English | MEDLINE | ID: mdl-33018202

ABSTRACT

Atrial fibrillation (AF) is the most common sustained arrhythmia and is associated with dramatic increases in mortality and morbidity. Atrial cine MR images are increasingly used in the management of this condition, but there are few specific tools to aid in the segmentation of such data. Some characteristics of atrial cine MR (thick slices, a variable number of slices per volume) preclude the direct use of traditional segmentation tools. Combined with the scarcity of labelled data and the similarity of the intensity and texture of the left atrium (LA) to other cardiac structures, this makes segmentation of the LA in cine MRI a difficult task. To deal with these challenges, we propose a semi-automatic method to segment the LA in MR images, which requires a single initial user click per volume. The manually given location information is used to generate a chamber location map that roughly locates the LA, which is then used as an input to a deep network with slightly over 0.5 million parameters. A tracking method is introduced to pass the location information across a volume and to remove unwanted structures from the segmentation maps. In experiments on an in-house MRI dataset, the proposed method outperforms U-Net [1] by a margin of 20 mm in Hausdorff distance and 0.17 in Dice score, with limited manual interaction.
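
The two comparison metrics mentioned above can be computed from binary masks; a sketch using SciPy's directed Hausdorff distance on the foreground pixel coordinates (illustrative, not the authors' evaluation code):

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice_score(pred, target, eps=1e-7):
        pred, target = pred.astype(bool), target.astype(bool)
        return 2.0 * np.logical_and(pred, target).sum() / (pred.sum() + target.sum() + eps)

    def hausdorff_distance(pred, target):
        """Symmetric Hausdorff distance between the foreground pixel coordinates."""
        p = np.argwhere(pred)
        t = np.argwhere(target)
        return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])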


Subjects
Atrial Fibrillation; Image Processing, Computer-Assisted; Atrial Fibrillation/diagnostic imaging; Heart Atria/diagnostic imaging; Humans; Magnetic Resonance Imaging; Magnetic Resonance Imaging, Cine
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1211-1216, 2020 07.
Article in English | MEDLINE | ID: mdl-33018205

ABSTRACT

We propose a robust technique for segmenting magnetic resonance images of the cardiac chambers after atrial septal occlusion intervention. The technique can be used to determine the surgical outcome of atrial septal defect treatment before and after implantation of a septal occluder, which is intended to restore the volumes of the right and left atria. A variant of the U-Net architecture is used to perform atrial segmentation via a deep convolutional neural network. The method was evaluated on a dataset of 550 two-dimensional image slices, outperforming conventional active contouring in Dice similarity coefficient, Jaccard index, and Hausdorff distance, and achieving segmentation in the presence of ghost artifacts that occlude the atrium outline. Moreover, the proposed technique is closer to manual segmentation than the snakes active contour model. After segmentation, we computed the volume ratio of the right to the left atrium, obtaining a smaller ratio, which indicates better restoration. Hence, the proposed technique allows the surgical success of atrial septal occlusion to be evaluated and may support diagnosis through accurate evaluation of atrial septal defects before and after occlusion procedures.


Subjects
Deep Learning; Heart Septal Defects, Atrial; Heart Septal Defects, Atrial/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Spectroscopy; Neural Networks, Computer
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1303-1306, 2020 07.
Article in English | MEDLINE | ID: mdl-33018227

ABSTRACT

Deep learning has recently attracted widespread interest as a means of reducing noise in low-dose CT (LDCT) images. Deep convolutional neural networks (CNNs) are typically trained to transfer high-quality image features of normal-dose CT (NDCT) images to LDCT images. However, existing deep learning approaches for denoising LDCT images often overlook the statistical property of CT images. In this paper, we propose an approach to statistical image restoration for LDCT using deep learning (StatCNN). We introduce a loss function to incorporate the noise property in the image domain derived from the noise statistics in the sinogram domain. In order to capture the spatially-varying statistics of axial CT images, we increase the receptive fields of the proposed network to cover full-size CT slices. In addition, the proposed network utilizes z-directional correlation by taking multiple consecutive CT slices as input. For performance evaluation, the proposed network was thoroughly trained and tested by leave-one-out cross-validation with a dataset consisting of LDCT-NDCT image pairs. The experimental results showed that the denoising networks successfully reduced the noise level and restored the image details without adding artifacts. This study demonstrates that the statistical deep learning approach can transfer the image style from NDCT images to LDCT images without loss of anatomical information.


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Artifacts; Neural Networks, Computer; Signal-To-Noise Ratio
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1307-1310, 2020 07.
Article in English | MEDLINE | ID: mdl-33018228

ABSTRACT

This paper presents a new 3D CT image reconstruction method for limited-angle C-arm cone-beam CT imaging systems, based on total-variation (TV) regularization in the image domain and an L1 penalty in the projection domain. This is motivated by the facts that CT images are sparse in the TV sense and that their projections are sinusoid-like and therefore sparse in the discrete cosine transform (DCT) domain. Furthermore, because the artifacts in the image domain are directional due to the limited-angle views, an anisotropic TV is employed, and a reweighted L1 penalty in the projection domain is adopted to enhance sparsity. Hence, this paper applies anisotropic TV-norm and reweighted L1-norm sparsity techniques to the limited-angle C-arm CT imaging system to enhance image quality in both the CT image and projection domains. Experimental results also show the efficiency of the proposed method. Clinical Relevance: This new CT reconstruction approach provides high-quality images and projections for practicing clinicians.
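
For reference, the anisotropic TV of an image and the standard reweighting rule for an iteratively reweighted L1 penalty can be sketched as follows; the authors' exact objective and weighting scheme are not given, so this only illustrates the standard definitions:

    import numpy as np

    def anisotropic_tv(img):
        """Anisotropic total variation: sum of absolute forward differences per axis."""
        return sum(np.abs(np.diff(img, axis=a)).sum() for a in range(img.ndim))

    def reweight(coeffs, eps=1e-3):
        """Standard update for an iteratively reweighted L1 penalty: w_i = 1 / (|c_i| + eps)."""
        return 1.0 / (np.abs(coeffs) + eps)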


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Cone-Beam Computed Tomography; Phantoms, Imaging
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1327-1330, 2020 07.
Article in English | MEDLINE | ID: mdl-33018233

ABSTRACT

Lumbar vertebrae segmentation in computed tomography (CT) is challenging due to the scarcity of labeled training data, which we define as paired training data for the deep learning technique. Much of the available data is limited to raw CT scans unlabeled by radiologists. To handle the scarcity of labeled data, we use a hybrid training system that combines paired and unpaired training data and construct a hybrid deep segmentation generative adversarial network (Hybrid-SegGAN). We develop a fully automatic approach for lumbar vertebrae segmentation in CT images using Hybrid-SegGAN for synthetic segmentation. Our network receives paired and unpaired data, discriminates between the two sets, and processes each through separate phases. We used CT images from 120 patients to demonstrate the performance of the proposed method and extensively evaluated the segmentation results against their ground truth using 12 performance measures. Analysis of the results suggests that the proposed method can improve deep learning segmentation without demanding the time-consuming annotation procedure required for labeled, paired data.


Subjects
Image Processing, Computer-Assisted; Lumbar Vertebrae; Humans; Lumbar Vertebrae/diagnostic imaging; Tomography, X-Ray Computed
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1376-1379, 2020 07.
Article in English | MEDLINE | ID: mdl-33018245

ABSTRACT

In this paper, we present a framework for augmenting images of the rare mitotic staining patterns in Human Epithelial type 2 (HEp-2) cell images. The identification of mitotic patterns among non-mitotic (interphase) patterns is important in the diagnosis of various autoimmune disorders, and it leads to a pattern classification problem of mitotic vs. interphase. However, the number of mitotic cells is typically much smaller than that of interphase cells. We therefore propose to generate synthetic mitotic samples, which can be used to augment the mitotic class and balance the mitotic and interphase samples in the classification paradigm. An effective feature representation is used to validate the usefulness of the synthetic samples in the classification task, along with a subjective validation by a medical expert. The results demonstrate that generating synthetic samples and mingling them with the existing training data works well and yields good performance, with a balanced class accuracy (BcA) of 0.98 in one case on a public dataset, the UQ-SNP I3A Task-3 mitotic cell identification dataset.
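
Balanced class accuracy, the evaluation metric above, is the average of the per-class recalls, which is why it is suited to the imbalanced mitotic/interphase setting. A sketch using scikit-learn (labels are illustrative):

    from sklearn.metrics import balanced_accuracy_score

    y_true = [0, 0, 0, 0, 1, 1]   # e.g. many interphase (0), few mitotic (1) samples
    y_pred = [0, 0, 0, 1, 1, 1]
    bca = balanced_accuracy_score(y_true, y_pred)   # mean of per-class recalls: 0.875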


Subjects
Autoimmune Diseases; Image Processing, Computer-Assisted; Humans; Interphase; Minority Groups
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1412-1415, 2020 07.
Article in English | MEDLINE | ID: mdl-33018254

ABSTRACT

The Ki-67 labelling index is a biomarker used worldwide to predict the aggressiveness of cancer. To compute the Ki-67 index, pathologists normally count the tumour nuclei on slide images manually; this is time-consuming and subject to inter-pathologist variability. With the development of image processing and machine learning, many methods have been introduced for automatic Ki-67 estimation, but most of them require manual annotations and are restricted to one type of cancer. In this work, we propose a pooled Otsu's method to generate labels and train a semantic-segmentation deep neural network (DNN), whose output is post-processed to obtain the Ki-67 index. Evaluation on two different types of cancer (bladder and breast) yields a mean absolute error of 3.52%. The performance of the DNN trained with automatic labels is better than that of the DNN trained with ground-truth labels by an absolute 1.25%.
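
A minimal sketch of plain Otsu thresholding, the building block behind the automatic label generation described above (the authors' pooled variant and post-processing are not detailed here), using scikit-image; the placeholder image and names are illustrative:

    import numpy as np
    from skimage.filters import threshold_otsu

    # gray: 2-D grayscale immunohistochemistry image (placeholder here)
    gray = np.random.rand(256, 256)

    t = threshold_otsu(gray)        # global Otsu threshold
    mask = gray > t                 # binary label map used to train the segmentation DNN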


Subjects
Deep Learning; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted; Ki-67 Antigen; Machine Learning
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1584-1587, 2020 07.
Article in English | MEDLINE | ID: mdl-33018296

ABSTRACT

The high spatial resolution of magnetic resonance (MR) images provides rich structural detail that facilitates accurate diagnosis and quantitative image analysis. However, the long acquisition time of MRI leads to patient discomfort and possible motion artifacts in the reconstructed image. Single-image super-resolution (SISR) using convolutional neural networks (CNNs) is an emerging trend in biomedical imaging, especially for MR image post-processing. An efficient choice of SISR architecture is required to achieve good-quality reconstruction. In addition, a robust choice of loss function, together with the domain in which the loss operates, plays an important role in enhancing fine structural details and removing blurring to form a high-resolution image. In this work, we propose a novel combined loss function consisting of an L1 Charbonnier loss in the image domain and a wavelet-domain loss, the Isotropic Undecimated Wavelet loss (IUW loss), to train the existing Laplacian Pyramid Super-Resolution CNN. The proposed loss function was evaluated on three MRI datasets (a privately collected knee MRI dataset and the publicly available Kirby21 brain and iSeg infant-brain datasets) and on benchmark SISR datasets of natural images. Experimental analysis shows promising results, with better recovery of structure and improvements in qualitative metrics.
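
The L1 Charbonnier loss mentioned above is a smooth approximation of the L1 norm, sqrt((x - y)^2 + eps^2). A minimal PyTorch sketch of that term only (the wavelet-domain IUW term and its weighting are not reproduced here):

    import torch

    def charbonnier_loss(pred, target, eps=1e-6):
        """Smooth L1 (Charbonnier) loss, averaged over all elements."""
        return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

    loss = charbonnier_loss(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))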


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Brain/diagnostic imaging; Humans; Magnetic Resonance Spectroscopy; Neural Networks, Computer
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1612-1615, 2020 07.
Article in English | MEDLINE | ID: mdl-33018303

ABSTRACT

Convolutional neural networks (CNNs) have been widely used in medical image segmentation, but vessel segmentation in coronary angiography remains a challenging task. It is difficult to extract fine features of the coronary arteries for segmentation due to poor opacification, frequent overlap of different artery segments, and high similarity between artery segments and soft tissue in angiography images, which results in sub-optimal segmentation performance. In this paper, we propose an adapted generative adversarial network (GAN) to convert coronary angiography images into semantic segmentation images. We implement an adapted U-Net as the generator and a novel 3-layer pyramid structure as the discriminator. During training, multi-scale inputs are fed into the discriminator to optimize the objective functions, producing high-definition segmentation results. Owing to the generative adversarial mechanism, both the generator and the discriminator can extract fine features of the coronary arteries. Our method effectively addresses segmentation discontinuity and intra-class inconsistency. Experiments show that our method improves segmentation accuracy compared with other vessel segmentation methods.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Angiography; Semantics
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1616-1619, 2020 07.
Article in English | MEDLINE | ID: mdl-33018304

ABSTRACT

Semantic segmentation is a fundamental and challenging problem in medical image analysis, and deep convolutional neural networks currently play a dominant role in medical image segmentation. Existing methods make limited use of image information and learn few edge features, which may lead to ambiguous boundaries and inhomogeneous intensity distributions in the results. Because feature characteristics at different stages are highly inconsistent, lower- and higher-level features cannot be combined directly. In this paper, we propose the Attention and Edge Constraint Network (AEC-Net), which optimizes features by introducing attention mechanisms into the lower-level features so that they can be better combined with higher-level features. Meanwhile, an edge branch is added to the network to learn edge and texture features simultaneously. We evaluated this model on three datasets covering skin cancer segmentation, vessel segmentation, and lung segmentation. Results demonstrate that the proposed model achieves state-of-the-art performance on all three datasets.


Subjects
Image Processing, Computer-Assisted; Skin Neoplasms; Attention; Humans; Lung/diagnostic imaging; Neural Networks, Computer
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1629-1632, 2020 07.
Article in English | MEDLINE | ID: mdl-33018307

ABSTRACT

Segmenting the bladder wall from MRI images is of great significance for the early detection and auxiliary diagnosis of bladder tumors. However, automatic bladder wall segmentation is challenging due to weak boundaries and the diverse shapes of bladders. Level-set-based methods have been applied to this task by utilizing shape priors of bladders, but manually adjusting multiple parameters and selecting suitable hand-crafted features are complex operations. In this paper, we propose an automatic method for this task based on deep learning and anatomical constraints. First, an autoencoder is used to model anatomical and semantic information of the bladder wall by extracting low-dimensional feature representations from both MRI images and label images. These priors are then incorporated as constraints into a modified residual network to generate more plausible segmentation results. Experiments on 1092 MRI images show that the proposed method generates more accurate and reliable results than related works, with a Dice similarity coefficient (DSC) of 85.48%.


Subjects
Image Processing, Computer-Assisted; Urinary Bladder Neoplasms; Deep Learning; Humans; Magnetic Resonance Imaging; Urinary Bladder Neoplasms/diagnostic imaging