Results 1 - 20 of 29
1.
BMC Bioinformatics ; 20(1): 724, 2019 Dec 18.
Article in English | MEDLINE | ID: mdl-31852433

ABSTRACT

BACKGROUND: Quantitative measurement of wound area is of great significance in clinical trials, wound pathological analysis, and daily patient care. 2D methods cannot solve the problems caused by body surface curvature and varying camera shooting angles. Our objective is to collect wound images simply, measure wound areas accurately, and overcome the shortcomings of 2D methods. RESULTS: We propose a method with 3D transformation to measure wound area on the human body surface, combining structure from motion (SFM), least squares conformal mapping (LSCM), and image segmentation. The method captures 2D images of the wound, with an adhesive tape scale placed next to it, using a smartphone, and performs 3D reconstruction from the images based on SFM. It then uses LSCM to unwrap the UV map of the 3D model. Finally, it applies interactive image segmentation for wound extraction and measurement. Our system yields state-of-the-art results on a dataset of 118 wounds from 54 patients, with an accuracy of 0.97. The Pearson correlation, standardized regression coefficient, and adjusted R square of our method are 0.999, 0.895, and 0.998, respectively. CONCLUSIONS: A smartphone is used to capture wound images, which lowers costs, lessens dependence on hardware, and avoids the risk of infection. The quantitative calculation of 3D wound area is realized, solving challenges that 2D methods cannot address and achieving good accuracy.
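Once the 3D surface has been unwrapped to a 2D map, the area computation in a pipeline like this reduces to calibrating pixel size against the adhesive tape scale and counting segmented pixels. Below is a minimal sketch of that final step only; the tape dimensions, the masks, and the function name are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def area_from_mask(wound_mask: np.ndarray,
                       tape_mask: np.ndarray,
                       tape_area_cm2: float) -> float:
        """Estimate wound area by calibrating pixel size against a tape
        patch of known physical area visible in the same unwrapped image."""
        cm2_per_pixel = tape_area_cm2 / tape_mask.sum()   # calibration factor
        return wound_mask.sum() * cm2_per_pixel

    # Toy example: a 10x10-pixel tape patch known to be 1 cm^2 and a wound
    # region covering 250 pixels -> 2.5 cm^2.
    tape = np.zeros((100, 100), dtype=bool); tape[:10, :10] = True
    wound = np.zeros((100, 100), dtype=bool); wound[40:65, 40:50] = True
    print(area_from_mask(wound, tape, tape_area_cm2=1.0))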


Subjects
Smartphone, Wounds and Injuries/diagnostic imaging, Algorithms, Humans, Three-Dimensional Imaging
2.
BMC Bioinformatics ; 20(1): 430, 2019 Aug 17.
Article in English | MEDLINE | ID: mdl-31419946

ABSTRACT

BACKGROUND: Medical narratives, consisting of dictated free-text documents such as discharge summaries, are widely used in medical natural language processing. Relationships between anatomical entities and human body parts are crucial for building medical text mining applications. To capture them, we establish a mapping system consisting of a Wikipedia-based scoring algorithm and a named entity normalization (NEN) method. The mapping system makes full use of the information available on Wikipedia, a comprehensive Internet knowledge base. We also built a new ontology, the Tree of Human Body Parts (THBP), from core anatomical parts by consulting anatomical experts and the Unified Medical Language System (UMLS), to make the mapping system effective for clinical use. RESULTS: The gold standard is derived from 50 discharge summaries from our previous work, in which 2,224 anatomical entities are included. The F1-measure of the baseline system is 70.20%, while our Wikipedia-based algorithm achieves 86.67% with the assistance of NEN. CONCLUSIONS: We construct a framework that maps anatomical entities to the THBP ontology using normalization and a Wikipedia-based scoring algorithm. The proposed framework is shown to be much more effective and efficient than the main baseline system.


Subjects
Anatomy, Data Mining, Human Body, Knowledge Bases, Patient Discharge, Algorithms, Humans
3.
Sleep Breath ; 23(2): 719-728, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30783913

ABSTRACT

OBJECTIVES: To determine inter-laboratory reliability in sleep stage scoring using the 2014 American Academy of Sleep Medicine (AASM) manual, to understand in depth the reasons for disagreement, and to provide suggestions for improvement. METHODS: This study used 40 all-night polysomnograms (PSGs) from different samples. The PSGs were segmented into 37,642 30-s epochs. Five doctors from China and two doctors from the United States scored the epochs following the 2014 AASM standard. Scoring disagreement between the two centers was evaluated using Cohen's kappa (κ). After visual inspection of the PSGs with deviating scorings, potential reasons for disagreement were analyzed. RESULTS: Inter-laboratory reliability reached a substantial level (κ = 0.75 ± 0.01). Scoring of stage W (κ = 0.89) and stage R (κ = 0.87) achieved the highest agreement, while stage N1 (κ = 0.45) showed the lowest. Considering the relative disagreement ratio, N2-N3 (22.09%), W-N1 (19.68%), and N1-N2 (18.75%) were the most frequent combinations of discrepancy. American and Chinese doctors showed characteristic patterns in scoring the discrepancy combinations W-N1, N1-N2, and N2-N3. Seven reasons for disagreement were identified, namely "on-threshold characteristic" (29.21%), "context influence" (18.06%), "characteristic identification difficulty" (8.81%), "arousal-wake confusion" (7.57%), "derivation inconsistency" (2.15%), "on-borderline characteristic" (0.92%), and "misrecognition" (33.27%). CONCLUSIONS: This study quantified sleep stage scoring agreement under the 2014 AASM manual and explored potential sources of labeling ambiguity. Improvement measures were suggested accordingly to help remove ambiguity for scorers and improve scoring reliability at the international level.
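Inter-rater agreement of the kind reported above is quantified with Cohen's kappa. The short sketch below computes it from two scorers' epoch labels; the stage labels and data are invented for illustration.

    import numpy as np

    def cohen_kappa(labels_a, labels_b):
        """Cohen's kappa between two raters' categorical labels."""
        labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
        categories = np.union1d(labels_a, labels_b)
        p_o = np.mean(labels_a == labels_b)            # observed agreement
        # Chance agreement from the two raters' marginal distributions.
        p_e = sum(np.mean(labels_a == c) * np.mean(labels_b == c)
                  for c in categories)
        return (p_o - p_e) / (1.0 - p_e)

    # Toy example with AASM-style stage labels for a few 30-s epochs.
    scorer_cn = ["W", "N1", "N2", "N2", "N3", "R", "W", "N1"]
    scorer_us = ["W", "N2", "N2", "N2", "N3", "R", "W", "W"]
    print(round(cohen_kappa(scorer_cn, scorer_us), 3))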


Subjects
Cross-Cultural Comparison, Polysomnography/standards, Sleep Medicine/standards, Sleep Stages, China, Humans, Observer Variation, United States
4.
BMC Bioinformatics ; 18(1): 360, 2017 Aug 03.
Article in English | MEDLINE | ID: mdl-28774262

ABSTRACT

BACKGROUND: Histopathology images are critical for medical diagnosis, e.g., of cancer and its treatment. A standard histopathology slide can easily be scanned at a high resolution of, say, 200,000 × 200,000 pixels. Such high-resolution images can make most existing image processing tools infeasible or less effective when run on a single machine with limited memory, disk space, and computing power. RESULTS: In this paper, we propose an algorithm tackling this emerging "big data" problem using parallel computing on High-Performance Computing (HPC) clusters. Experimental results on a large-scale dataset (1,318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. CONCLUSIONS: The proposed framework is an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly supervised image classification, segmentation, and clustering. When a max-margin concept is adopted for different clusters, we obtain a further improvement in clustering performance.
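A common way to handle the gigapixel-scale problem described above is to split the slide into tiles and process them in parallel. The sketch below shows that pattern on a single machine with Python's multiprocessing; the tile size, the per-tile feature function, and the random array are stand-ins, not the paper's HPC implementation.

    import numpy as np
    from multiprocessing import Pool

    TILE = 2048  # assumed tile edge length in pixels

    def tile_feature(tile: np.ndarray) -> float:
        # Placeholder per-tile computation (e.g., a texture statistic).
        return float(tile.mean())

    def tiles(image: np.ndarray, size: int = TILE):
        h, w = image.shape[:2]
        for y in range(0, h, size):
            for x in range(0, w, size):
                yield image[y:y + size, x:x + size]

    if __name__ == "__main__":
        # A small random stand-in for a whole-slide image.
        slide = np.random.randint(0, 256, (8192, 8192), dtype=np.uint8)
        with Pool(processes=4) as pool:
            features = pool.map(tile_feature, tiles(slide))
        print(len(features), "tiles processed")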


Subjects
Pathology/methods, Algorithms, Cluster Analysis, Computing Methodologies, Humans, Computer-Assisted Image Processing, ROC Curve, Statistics as Topic
5.
BMC Bioinformatics ; 18(1): 281, 2017 May 26.
Article in English | MEDLINE | ID: mdl-28549410

ABSTRACT

BACKGROUND: Histopathology image analysis is a gold standard for cancer recognition and diagnosis. Automatic analysis of histopathology images can help pathologists diagnose tumors and cancer subtypes, alleviating their workload. There are two basic types of tasks in digital histopathology image analysis: image classification and image segmentation. Typical problems that hamper automatic analysis of histopathology images include complex clinical representations, the limited number of training images in a dataset, and the extremely large size of single images (often gigapixels). This extremely large single-image size also makes a histopathology image dataset effectively large-scale, even if the number of images in the dataset is limited. RESULTS: In this paper, we propose leveraging deep convolutional neural network (CNN) activation features to perform classification, segmentation, and visualization in large-scale tissue histopathology images. Our framework transfers features extracted from CNNs trained on a large natural image database, ImageNet, to histopathology images. We also explore the characteristics of CNN features by visualizing the response of individual neuron components in the last hidden layer. Some of these characteristics reveal biological insights that have been verified by pathologists. In our experiments, the proposed framework shows state-of-the-art performance on a brain tumor dataset from the MICCAI 2014 Brain Tumor Digital Pathology Challenge and a colon cancer histopathology image dataset. CONCLUSIONS: The proposed framework is a simple, efficient, and effective system for automatic histopathology image analysis. We successfully transfer ImageNet knowledge, in the form of deep convolutional activation features, to the classification and segmentation of histopathology images with little training data. CNN features are significantly more powerful than expert-designed features.
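The transfer-learning idea, reusing ImageNet-trained CNN activations as features for histopathology patches, can be sketched as follows. The specific backbone (ResNet-18 via torchvision), the preprocessing, and the downstream linear SVM are assumptions for illustration, not the paper's exact setup.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.svm import LinearSVC

    # ImageNet-pretrained backbone, truncated before the classification head.
    backbone = models.resnet18(pretrained=True)
    backbone.fc = torch.nn.Identity()      # keep the 512-d pooled activations
    backbone.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(pil_patches):
        batch = torch.stack([preprocess(p) for p in pil_patches])
        return backbone(batch).numpy()

    # Hypothetical usage: fit a linear classifier on CNN features of labeled
    # patches, then predict labels for new patches.
    # X_train = extract_features(train_patches)
    # clf = LinearSVC().fit(X_train, y_train)
    # y_pred = clf.predict(extract_features(test_patches))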


Subjects
Brain Neoplasms/pathology, Colonic Neoplasms/pathology, Computer-Assisted Image Processing/methods, Algorithms, Brain Neoplasms/diagnosis, Carcinoma/diagnosis, Carcinoma/pathology, Colonic Neoplasms/diagnosis, Humans, Neural Networks (Computer), Support Vector Machine
6.
BMC Bioinformatics ; 16: 149, 2015 May 09.
Article in English | MEDLINE | ID: mdl-25956056

ABSTRACT

BACKGROUND: Electronic medical record (EMR) systems have become widely used throughout the world to improve the quality of healthcare and the efficiency of hospital services. A bilingual Chinese-English medical lexicon is needed to meet the demand for multi-lingual and multi-national treatment. We extract a bilingual lexicon from English and Chinese discharge summaries starting from a small seed lexicon. The lexical terms can be classified into two categories: single-word terms (SWTs) and multi-word terms (MWTs). For SWTs, we use a label propagation (LP; context-based) method to extract candidate translation pairs. For MWTs, which are pervasive in the medical domain, we propose a term alignment method that first obtains translation candidates for each component word of a Chinese MWT and then generates their combinations, from which the system selects a set of plausible translation candidates. RESULTS: We compare our LP method with a baseline method based on simple context similarity. The LP-based method outperforms the baseline with accuracies of 4.44% Acc1, 24.44% Acc10, and 62.22% Acc100, where AccN denotes top-N accuracy. The accuracy of the LP method drops to 5.41% Acc10 and 8.11% Acc20 for MWTs. Our experiments show that the term alignment method improves the performance for MWTs to 16.22% Acc10 and 27.03% Acc20. CONCLUSIONS: We constructed a framework for building an English-Chinese term dictionary from discharge summaries in the two languages. Our experiments show that the LP-based method, augmented with the term alignment method, can reduce the manual work required to compile a bilingual dictionary of clinical terms.
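The AccN figures above are top-N accuracies over ranked translation candidates. A minimal sketch of that metric, with made-up candidate lists, is:

    def top_n_accuracy(ranked_candidates, gold, n):
        """Fraction of source terms whose gold translation appears among
        the top-n ranked candidates produced by the system."""
        hits = sum(1 for term, gold_trans in gold.items()
                   if gold_trans in ranked_candidates.get(term, [])[:n])
        return hits / len(gold)

    # Toy example with hypothetical Chinese -> English candidates.
    candidates = {"出院": ["discharge", "release", "exit"],
                  "病史": ["disease course", "history", "medical history"]}
    gold = {"出院": "discharge", "病史": "medical history"}
    print(top_n_accuracy(candidates, gold, n=1))   # 0.5
    print(top_n_accuracy(candidates, gold, n=10))  # 1.0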


Subjects
Multilingualism, Natural Language Processing, Patient Discharge/standards, Software, Translating, Asian People, England, Humans, Information Storage and Retrieval, Medical Informatics
7.
Med Image Anal ; 86: 102791, 2023 05.
Article in English | MEDLINE | ID: mdl-36933385

ABSTRACT

Accurate pixel-level segmentation in histopathology images plays a critical role in the digital pathology workflow. The development of weakly supervised methods for histopathology image segmentation liberates pathologists from time-consuming and labor-intensive work, opening up possibilities for further automated quantitative analysis of whole-slide histopathology images. As an effective subgroup of weakly supervised methods, multiple instance learning (MIL) has achieved great success on histopathology images. In this paper, we treat pixels as instances, so that the histopathology image segmentation task is transformed into an instance prediction task in MIL. However, the lack of relations between instances in MIL limits further improvement of segmentation performance. Therefore, we propose a novel weakly supervised method called SA-MIL for pixel-level segmentation in histopathology images. SA-MIL introduces a self-attention mechanism into the MIL framework, which captures global correlations among all instances. In addition, we use deep supervision to make the best use of information from the limited annotations available in the weakly supervised setting. By aggregating global contextual information, our approach compensates for the fact that instances are treated as independent in standard MIL. We demonstrate state-of-the-art results compared to other weakly supervised methods on two histopathology image datasets. The high performance on both tissue and cell histopathology datasets indicates that our approach generalizes well, and it has potential for various applications in medical imaging.
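The self-attention ingredient of SA-MIL, letting every instance attend to every other instance, is at its core scaled dot-product attention over instance feature vectors. The sketch below shows that operation in isolation with random features standing in for real instance embeddings; it is not the authors' network.

    import numpy as np

    def scaled_dot_product_attention(X, Wq, Wk, Wv):
        """Self-attention over a bag of instance features.
        X: (n_instances, d) feature matrix; Wq/Wk/Wv: projection matrices."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise similarities
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)   # softmax over instances
        return weights @ V                              # context-mixed features

    rng = np.random.default_rng(0)
    d = 16
    X = rng.normal(size=(100, d))                       # 100 toy instances
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    print(scaled_dot_product_attention(X, Wq, Wk, Wv).shape)  # (100, 16)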


Subjects
Computer-Assisted Image Processing, Supervised Machine Learning, Humans, Workflow
8.
IEEE Trans Med Imaging ; 41(8): 2092-2104, 2022 08.
Article in English | MEDLINE | ID: mdl-35239478

ABSTRACT

Potential radiation hazards in full-dose positron emission tomography (PET) imaging remain a concern, whereas the quality of low-dose images is rarely adequate for clinical use. It is therefore of great interest to translate low-dose PET images into full-dose images. Previous studies based on deep learning methods usually extract hierarchical features directly for reconstruction. We note that the importance of each feature differs, so features should be weighted dissimilarly so that subtle information can be captured by the neural network. Furthermore, synthesis quality in certain regions of interest is important in some applications. Here we propose a novel segmentation-guided style-based generative adversarial network (SGSGAN) for PET synthesis. (1) We put forward a style-based generator employing style modulation, which specifically controls the hierarchical features in the translation process, to generate images with more realistic textures. (2) We adopt a task-driven strategy that couples a segmentation task with the generative adversarial network (GAN) framework to improve translation performance. Extensive experiments show the superiority of our overall framework in PET synthesis, especially in regions of interest.


Subjects
Deep Learning, Computer-Assisted Image Processing, Neural Networks (Computer), Positron-Emission Tomography, Computer-Assisted Image Processing/methods, Positron-Emission Tomography/adverse effects, Positron-Emission Tomography/methods, Radiation Dosage, Radiation Injuries/etiology, Radiation Injuries/prevention & control
9.
PLoS One ; 17(8): e0270339, 2022.
Article in English | MEDLINE | ID: mdl-35969596

ABSTRACT

MRI brain structure segmentation plays an important role in neuroimaging studies. Existing methods either require substantial CPU time, need considerable annotated data, or fail on volumes with large deformations. In this paper, we develop a novel multi-atlas-based algorithm for 3D MRI brain structure segmentation. It consists of three modules: registration, atlas selection, and label fusion. Both registration and label fusion leverage an integrated flow based on grayscale and SIFT features. We introduce an effective and efficient strategy for atlas selection that employs the energy produced as a by-product of the registration step. A 3D sequential belief propagation method and a 3D coarse-to-fine flow matching approach are developed for both the registration and label fusion modules. The proposed method is evaluated on five public datasets. The results show that it has the best performance in almost all settings compared to competitive methods such as ANTs, Elastix, Learning to Rank, and Joint Label Fusion. Moreover, our registration method is more than 7 times faster than ANTs SyN, and our label transfer method is 18 times faster than Joint Label Fusion in CPU time. The results on the ADNI dataset demonstrate that our method is applicable to image pairs that require a significant transformation during registration. The performance on a composite dataset suggests that our method also succeeds in a cross-modality setting. These results show that the integrated 3D flow-based method is effective and efficient for brain structure segmentation, and they demonstrate the power of SIFT features, multi-atlas segmentation, and classical machine learning algorithms for a medical image analysis task. The experimental results on public datasets show the proposed method's potential for general applicability across brain structures and settings.
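To show where the label-fusion module sits, the sketch below implements its simplest variant, per-voxel majority voting over warped atlas labelings. The paper's own fusion is flow-based, so this is only a hedged stand-in with toy data.

    import numpy as np

    def majority_vote_fusion(warped_labels):
        """warped_labels: array of shape (n_atlases, D, H, W) with integer
        anatomical labels already registered to the target image.
        Returns the per-voxel most frequent label."""
        stacked = np.asarray(warped_labels)
        n_labels = stacked.max() + 1
        # Count votes per label, then take the argmax per voxel.
        votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
        return votes.argmax(axis=0)

    # Toy example: three 2x2x2 "atlases" voting on two structures (0 and 1).
    atlases = np.array([np.zeros((2, 2, 2), int),
                        np.ones((2, 2, 2), int),
                        np.ones((2, 2, 2), int)])
    print(majority_vote_fusion(atlases))   # all ones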


Subjects
Algorithms, Magnetic Resonance Imaging, Brain/diagnostic imaging, Machine Learning, Magnetic Resonance Imaging/methods, Neuroimaging
10.
Comput Med Imaging Graph ; 93: 101991, 2021 10.
Article in English | MEDLINE | ID: mdl-34634548

ABSTRACT

Whole brain segmentation is an important neuroimaging task that segments the whole brain volume into anatomically labeled regions of interest. Convolutional neural networks have demonstrated good performance on this task. Existing solutions usually segment the brain image by classifying voxels, or by labeling slices or sub-volumes separately. Their representation learning is based on parts of the whole volume, whereas their labeling result is produced by aggregating partial segmentations. Learning and inference with incomplete information can lead to a sub-optimal final segmentation. To address these issues, we propose a full volume framework, which feeds the full brain volume into the segmentation network and directly outputs the segmentation result for the whole volume. The framework makes use of the complete information in each volume and can be implemented easily. An effective instance of this framework is given subsequently: we adopt the 3D high-resolution network (HRNet) for learning spatially fine-grained representations and a mixed precision training scheme for memory-efficient training. Extensive experimental results on a publicly available 3D MRI brain dataset show that our proposed model advances the state-of-the-art methods in terms of segmentation performance.
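The mixed precision training scheme mentioned above follows a standard pattern in PyTorch (autocast plus gradient scaling). The sketch below shows the shape of that loop with a placeholder 3D model, random data, and an ordinary cross-entropy loss; it is not the paper's training code.

    import torch
    from torch.cuda.amp import autocast, GradScaler

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Conv3d(1, 4, kernel_size=3, padding=1).to(device)  # placeholder net
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scaler = GradScaler(enabled=(device == "cuda"))
    criterion = torch.nn.CrossEntropyLoss()

    for step in range(2):                              # placeholder loop
        volume = torch.randn(1, 1, 16, 16, 16, device=device)
        target = torch.randint(0, 4, (1, 16, 16, 16), device=device)
        optimizer.zero_grad()
        with autocast(enabled=(device == "cuda")):     # half-precision forward pass
            logits = model(volume)                     # (1, 4, 16, 16, 16)
            loss = criterion(logits, target)
        scaler.scale(loss).backward()                  # scaled backward to avoid underflow
        scaler.step(optimizer)
        scaler.update()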


Subjects
Computer-Assisted Image Processing, Neural Networks (Computer), Brain/diagnostic imaging, Magnetic Resonance Imaging, Neuroimaging
11.
Sci Rep ; 10(1): 3753, 2020 02 28.
Article in English | MEDLINE | ID: mdl-32111966

ABSTRACT

We present a cross-modality generation framework that learns to generate translated modalities from given modalities in MR images. Our proposed method performs Image Modality Translation (abbreviated IMT) by means of a deep learning model that leverages conditional generative adversarial networks (cGANs). The framework jointly exploits low-level features (pixel-wise information) and high-level representations (e.g., brain tumors and brain structures such as gray matter) across modalities, which is important for resolving the challenging complexity of brain structures. Based on this framework, we first propose a method for cross-modality registration that fuses deformation fields to exploit the cross-modality information from translated modalities. Second, we propose an approach for MRI segmentation, translated multichannel segmentation (TMS), in which given modalities, along with translated modalities, are segmented by fully convolutional networks (FCNs) in a multichannel manner. Both methods successfully exploit the cross-modality information to improve performance without adding any extra data. Experiments demonstrate that our proposed framework advances the state of the art on five brain MRI datasets. We also observe encouraging results in cross-modality registration and segmentation on several widely adopted brain datasets. Overall, our work can serve as an auxiliary method in medical use and be applied to various tasks in medical fields.

12.
IEEE J Biomed Health Inform ; 24(5): 1394-1404, 2020 05.
Article in English | MEDLINE | ID: mdl-31689224

ABSTRACT

3D medical image registration is of great clinical importance. However, supervised learning methods require a large amount of accurately annotated corresponding control points (or morphings), which are very difficult to obtain. Unsupervised learning methods ease the burden of manual annotation by exploiting unlabeled data without supervision. In this article, we propose a new unsupervised learning method using convolutional neural networks in an end-to-end framework, the Volume Tweening Network (VTN), for 3D medical image registration. We propose three innovative technical components: (1) an end-to-end cascading scheme that resolves large displacements; (2) an efficient integration of an affine registration network; and (3) an additional invertibility loss that encourages backward consistency. Experiments demonstrate that our algorithm is 880x faster (or 3.3x faster without GPU acceleration) than traditional optimization-based methods and achieves state-of-the-art performance in medical image registration.
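At inference time, a registration network of this kind outputs a dense displacement field that is then used to warp the moving volume. The warping step can be sketched independently of the network, here with scipy interpolation and a random field purely to show the mechanics:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_volume(moving, flow):
        """Warp a 3D volume by a dense displacement field.
        moving: (D, H, W) array; flow: (3, D, H, W) displacements in voxels."""
        grid = np.indices(moving.shape).astype(np.float64)  # identity coordinates
        coords = grid + flow                                # displaced sampling positions
        return map_coordinates(moving, coords, order=1, mode="nearest")

    moving = np.random.rand(32, 32, 32)
    flow = 0.5 * np.random.randn(3, 32, 32, 32)             # small random displacements
    warped = warp_volume(moving, flow)
    print(warped.shape)  # (32, 32, 32)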


Subjects
Three-Dimensional Imaging/methods, Neural Networks (Computer), Unsupervised Machine Learning, Algorithms, Factual Databases, Humans, Liver/diagnostic imaging, X-Ray Computed Tomography
13.
Front Med ; 14(4): 470-487, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32728875

ABSTRACT

Deep learning (DL) has achieved state-of-the-art performance in many digital pathology analysis tasks. Traditional methods usually require hand-crafted, domain-specific features, whereas DL methods can learn representations without manually designed features; in terms of feature extraction, DL approaches are therefore less labor intensive than conventional machine learning methods. In this paper, we comprehensively summarize recent DL-based image analysis studies in histopathology, covering different tasks (e.g., classification, semantic segmentation, detection, and instance segmentation) and various applications (e.g., stain normalization and cell/gland/region structure analysis). DL methods can provide consistent and accurate outcomes, making DL a promising tool to assist pathologists in clinical diagnosis.


Subjects
Deep Learning, Machine Learning, Surveys and Questionnaires
14.
IEEE Trans Med Imaging ; 39(10): 3042-3052, 2020 10.
Article in English | MEDLINE | ID: mdl-32275587

ABSTRACT

The Automatic Non-rigid Histological Image Registration (ANHIR) challenge was organized to compare the performance of image registration algorithms on several kinds of microscopy histology images in a fair and independent manner. We assembled 8 datasets containing 355 images with 18 different stains, resulting in 481 image pairs to be registered. Registration accuracy was evaluated using manually placed landmarks. In total, 256 teams registered for the challenge, 10 submitted results, and 6 participated in the workshop. Here, we present the results of 7 well-performing methods from the challenge together with 6 well-known existing methods. The best methods used a coarse but robust initial alignment followed by non-rigid registration, used multiresolution schemes, and were carefully tuned for the data at hand. They outperformed off-the-shelf methods, mostly by being more robust. The best methods could successfully register over 98% of all landmarks, and their mean landmark registration error (TRE) was 0.44% of the image diagonal. The challenge remains open to submissions, and all images are available for download.


Subjects
Algorithms, Histological Techniques
15.
IEEE J Biomed Health Inform ; 23(3): 1316-1328, 2019 05.
Article in English | MEDLINE | ID: mdl-29994411

ABSTRACT

The visual attributes of cells, such as nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representations, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial network architecture with a new formulation of the loss to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but also capable of cell-level unsupervised classification with interpretable visualization, achieving promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the variety of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets.


Subjects
Histological Techniques/methods, Computer-Assisted Image Processing/methods, Unsupervised Machine Learning, Algorithms, Bone Marrow Cells/pathology, Bone Marrow Diseases/diagnostic imaging, Bone Marrow Diseases/pathology, Humans
16.
Med Image Anal ; 54: 111-121, 2019 05.
Article in English | MEDLINE | ID: mdl-30861443

ABSTRACT

Tumor proliferation is an important biomarker indicative of the prognosis of breast cancer patients. Assessment of tumor proliferation in a clinical setting is a highly subjective and labor-intensive task. Previous efforts to automate tumor proliferation assessment by image analysis focused only on mitosis detection in predefined tumor regions. However, in a real-world scenario, automatic mitosis detection should be performed in whole-slide images (WSIs), and an automatic method should be able to produce a tumor proliferation score given a WSI as input. To address this, we organized the TUmor Proliferation Assessment Challenge 2016 (TUPAC16) on prediction of tumor proliferation scores from WSIs. The challenge dataset consisted of 500 training and 321 testing breast cancer histopathology WSIs. To ensure fair and independent evaluation, only the ground truth for the training dataset was provided to the challenge participants. The first task of the challenge was to predict mitotic scores, i.e., to reproduce the manual method of assessing tumor proliferation by a pathologist. The second task was to predict the gene-expression-based PAM50 proliferation score from the WSI. The best performing automatic method for the first task achieved a quadratic-weighted Cohen's kappa of κ = 0.567, 95% CI [0.464, 0.671], between the predicted scores and the ground truth. For the second task, the predictions of the top method had a Spearman's correlation coefficient of r = 0.617, 95% CI [0.581, 0.651], with the ground truth. This was the first comparison study to investigate tumor proliferation assessment from WSIs. The achieved results are promising given the difficulty of the tasks and the weakly labeled nature of the ground truth. However, further research is needed to improve the practical utility of image analysis methods for this task.
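The quadratic-weighted Cohen's kappa used as the task-1 metric can be computed from the confusion matrix of predicted versus reference scores. A from-scratch sketch with invented score vectors:

    import numpy as np

    def quadratic_weighted_kappa(y_true, y_pred, n_classes):
        """Cohen's kappa with quadratic disagreement weights."""
        O = np.zeros((n_classes, n_classes))
        for t, p in zip(y_true, y_pred):
            O[t, p] += 1
        # Expected matrix from the marginals, scaled to the same total.
        E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
        i, j = np.indices((n_classes, n_classes))
        W = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic weights
        return 1.0 - (W * O).sum() / (W * E).sum()

    # Toy mitotic scores in {0, 1, 2}.
    truth = [0, 1, 2, 2, 1, 0, 2, 1]
    pred  = [0, 1, 2, 1, 1, 0, 2, 2]
    print(round(quadratic_weighted_kappa(truth, pred, n_classes=3), 3))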


Subjects
Tumor Biomarkers/analysis, Breast Neoplasms/pathology, Deep Learning, Computer-Assisted Image Processing/methods, Tumor Biomarkers/genetics, Breast Neoplasms/genetics, Cell Proliferation, Female, Gene Expression, Humans, Mitosis, Pathology/methods, Predictive Value of Tests, Prognosis
17.
Comput Biol Med ; 103: 71-81, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30342269

ABSTRACT

BACKGROUND: Automatic sleep stage classification is essential for long-term sleep monitoring. Wearable devices offer more advantages than polysomnography for home use. In this paper, we propose a novel method for sleep staging using heart rate and wrist actigraphy derived from a wearable device. METHODS: The proposed method consists of two phases: multi-level feature learning and recurrent neural network (RNN)-based classification. The feature learning phase is designed to extract low- and mid-level features. Low-level features are extracted from raw signals, capturing temporal and frequency-domain properties. Mid-level features are built on the low-level ones to learn compositions and structural information of the signals. Because sleep staging is a sequential problem with long-term dependencies, RNNs with bidirectional long short-term memory architectures are employed to learn temporally sequential patterns. RESULTS: To better simulate the use of wearable devices in everyday settings, experiments were conducted with a resting group, in which sleep was recorded in the resting state, and a comprehensive group, in which both resting and non-resting sleep were included. The proposed algorithm classified five sleep stages (wake, non-rapid eye movement 1-3, and rapid eye movement) and achieved weighted precision, recall, and F1 score of 66.6%, 67.7%, and 64.0% in the resting group and 64.5%, 65.0%, and 60.5% in the comprehensive group using leave-one-out cross-validation. Various comparison experiments demonstrated the effectiveness of the algorithm. CONCLUSIONS: Our method is efficient and effective in scoring sleep stages and is suitable for application in wearable devices for monitoring sleep at home.
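The sequence-classification phase described above, a bidirectional LSTM mapping a night's sequence of per-epoch feature vectors to per-epoch stage labels, can be sketched as follows; the feature dimension, hidden size, and random input are assumptions for illustration.

    import torch
    import torch.nn as nn

    class BiLSTMSleepStager(nn.Module):
        """Per-epoch sleep stage classifier over a sequence of feature vectors."""
        def __init__(self, n_features=40, hidden=64, n_stages=5):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                                bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_stages)  # 2x for both directions

        def forward(self, x):                 # x: (batch, n_epochs, n_features)
            out, _ = self.lstm(x)             # (batch, n_epochs, 2*hidden)
            return self.head(out)             # per-epoch stage logits

    model = BiLSTMSleepStager()
    night = torch.randn(1, 900, 40)            # ~900 30-s epochs of mid-level features
    logits = model(night)
    print(logits.shape)                        # torch.Size([1, 900, 5])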


Subjects
Actigraphy/methods, Neural Networks (Computer), Computer-Assisted Signal Processing, Sleep Stages/physiology, Actigraphy/instrumentation, Adult, Female, Heart Rate/physiology, Humans, Male, Middle Aged, Wearable Electronic Devices, Wrist/physiology, Young Adult
18.
IEEE Trans Med Imaging ; 36(11): 2376-2388, 2017 11.
Article in English | MEDLINE | ID: mdl-28692971

ABSTRACT

In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. The work is set in a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) we introduce constraints on positive instances to effectively exploit additional weakly supervised information that is easy to obtain, yielding a significant boost to the learning process. The proposed algorithm, abbreviated DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image datasets and can be applied to various applications in medical imaging beyond histopathology, such as MRI, CT, and ultrasound images.


Subjects
Histocytochemistry/methods, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Supervised Machine Learning, Algorithms, Colon/diagnostic imaging, Colonic Neoplasms/diagnostic imaging, Factual Databases, Humans, Tissue Array Analysis
19.
IEEE Trans Biomed Eng ; 64(12): 2901-2912, 2017 12.
Article in English | MEDLINE | ID: mdl-28358671

ABSTRACT

OBJECTIVE: A new image instance segmentation method is proposed to segment individual glands (instances) in colon histology images. This task is challenging because the glands not only need to be segmented from a complex background, they must also be individually identified. METHODS: We leverage the idea of image-to-image prediction in recent deep learning by designing an algorithm that automatically exploits and fuses complex multichannel information (regional, location, and boundary cues) in gland histology images. Our proposed algorithm, a deep multichannel framework, alleviates heavy feature design through the use of convolutional neural networks and is able to meet multifarious requirements by altering channels. RESULTS: Compared with methods reported in the 2015 MICCAI Gland Segmentation Challenge and other currently prevalent instance segmentation methods, we observe state-of-the-art results on the evaluation metrics. CONCLUSION: The proposed deep multichannel algorithm is an effective method for gland instance segmentation. SIGNIFICANCE: The generalization ability of our model not only enables the algorithm to solve gland instance segmentation problems, but also allows individual channels to be replaced to suit a specific task.


Subjects
Computer-Assisted Image Processing/methods, Intestinal Mucosa/diagnostic imaging, Neural Networks (Computer), Algorithms, Colon/diagnostic imaging, Colorectal Neoplasms/diagnostic imaging, Histocytochemistry, Humans, Machine Learning
20.
Med Phys ; 43(5): 2229, 2016 May.
Article in English | MEDLINE | ID: mdl-27147335

ABSTRACT

PURPOSE: In this paper, the authors propose a new 3D registration algorithm, 3D scale-invariant feature transform (SIFT)-Flow, for multi-atlas-based liver segmentation in computed tomography (CT) images. METHODS: For registration, the authors developed a new method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between the two images. The labeling of the source image was then mapped to the target image according to this correspondence, resulting in accurate segmentation. For fusion, the 2D nonparametric label transfer method was extended to 3D to fuse the registered 3D atlases. RESULTS: Compared with existing registration algorithms, 3D-SIFT-Flow has a particular advantage in matching anatomical structures (such as the liver) that exhibit large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX and ANTS, and over multi-atlas fusion methods such as joint label fusion. Experimental results for liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., a Dice overlap ratio of 96.27% ± 0.96% with our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. CONCLUSIONS: Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which involve large tissue deformation and blurry boundaries, and that 3D label transfer is effective and efficient for improving registration accuracy.
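The Dice overlap ratio reported above is the standard volume-overlap metric between a predicted and a reference segmentation; a minimal sketch:

    import numpy as np

    def dice_overlap(pred: np.ndarray, ref: np.ndarray) -> float:
        """Dice coefficient 2*|A intersect B| / (|A| + |B|) for binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        return 2.0 * intersection / (pred.sum() + ref.sum())

    # Toy 3D masks: an 8x8x8 cube versus the same cube shifted by one voxel.
    ref = np.zeros((16, 16, 16), bool); ref[4:12, 4:12, 4:12] = True
    pred = np.zeros((16, 16, 16), bool); pred[5:13, 4:12, 4:12] = True
    print(round(dice_overlap(pred, ref), 3))   # 0.875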


Subjects
Algorithms, Atlases as Topic, Three-Dimensional Imaging/methods, Liver/diagnostic imaging, Automated Pattern Recognition/methods, X-Ray Computed Tomography/methods, Humans