Results 1 - 20 of 22
1.
J Acupunct Meridian Stud ; 17(3): 100-109, 2024 06 30.
Article in English | MEDLINE | ID: mdl-38898647

ABSTRACT

Importance: Post-stroke sialorrhea (PSS) refers to excessive saliva flowing out over the lip border after a stroke. PSS negatively affects patient self-image and social communication and may lead to depression. Limited evidence supports the link between excessive salivation and PSS. No large-scale, strictly controlled randomized controlled trials have shown the effectiveness of acupuncture in treating PSS patients. Objective: We aim to compare the effects of intraoral and sham acupuncture in PSS patients and explore the relationships among salivation, drooling severity and frequency, and swallowing function in stroke patients. Design: Clinical study protocol, SPIRIT compliant. Setting: Prospective, single-center, randomized, and sham-controlled trial. Population: We will recruit 106 PSS patients to receive 4-week intraoral or sham acupuncture. Additionally, 53 stroke patients without PSS will undergo a conventional 4-week treatment program to compare salivation between PSS and non-PSS patients. Exposures: Intraoral or sham acupuncture. Main Outcomes and Measures: The main evaluation index will be the 3-minute saliva weight (3MSW), comparing changes in 3MSW from baseline to weeks 4 and 8. Secondary assessment indices will include the "Drooling Severity and Frequency Scale" and "Functional Oral Intake Scale." Results: The results from this study will be published in peer-reviewed journals. Conclusion: By comparing the effects of intraoral and sham acupuncture in PSS patients, this study may contribute important evidence for future PSS treatment and provide valuable insights into whether salivation issues in stroke patients are attributable to heightened salivary secretion or to dysphagia.


Subject(s)
Acupuncture Therapy, Sialorrhea, Stroke, Adult, Female, Humans, Male, Middle Aged, Acupuncture Therapy/methods, Prospective Studies, Randomized Controlled Trials as Topic, Salivation, Sialorrhea/therapy, Sialorrhea/etiology, Stroke/complications, Stroke/therapy, Stroke/physiopathology
2.
Med Image Anal ; 91: 102983, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37926035

ABSTRACT

Positron emission tomography (PET) scans can reveal abnormal metabolic activities of cells and provide favorable information for clinical patient diagnosis. Generally, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images but higher-dose scans can also bring higher potential radiation risks. To reduce the radiation risk while acquiring high-quality PET images, in this paper, we propose a 3D multi-modality edge-aware Transformer-GAN for high-quality SPET reconstruction using the corresponding LPET images and T1 acquisitions from magnetic resonance imaging (T1-MRI). Specifically, to fully excavate the metabolic distributions in LPET and anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities, respectively, and design a multimodal feature integration module to effectively integrate the two kinds of features given the diverse contributions of features at different locations. Then, as CNNs can describe local spatial information well but have difficulty in modeling long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information in the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure the similarity of patch-wise data distribution between the reconstructed and real images. Considering the importance of edge information in anatomical structures for clinical disease diagnosis, besides voxel-level estimation error and adversarial loss, we also introduce an edge-aware loss to retain more edge detail information in the reconstructed SPET images. Experiments on the phantom dataset and clinical dataset validate that our proposed method can effectively reconstruct high-quality SPET images and outperform current state-of-the-art methods in terms of qualitative and quantitative metrics.
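The edge-aware loss mentioned above can be illustrated with a small numpy sketch: compute gradient-magnitude edge maps of the reconstructed and real images and penalize their difference. This is a simplified 2D stand-in (the paper operates on 3D volumes inside a Transformer-GAN; `sobel_edges` and `edge_aware_loss` are illustrative names, not the authors' code):

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge magnitude with Sobel filters (2D sketch; the paper works in 3D)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_aware_loss(pred, target):
    """Mean absolute difference between the edge maps of prediction and target."""
    return float(np.abs(sobel_edges(pred) - sobel_edges(target)).mean())
```

In training, such a term would be added to the voxel-level estimation error and adversarial losses with its own weight.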


Subject(s)
Magnetic Resonance Imaging, Positron-Emission Tomography, Humans, Positron-Emission Tomography/methods, Magnetic Resonance Imaging/methods, Phantoms, Imaging, Benchmarking, Image Processing, Computer-Assisted/methods
3.
Int J Neural Syst ; 33(6): 2350032, 2023 May.
Article in English | MEDLINE | ID: mdl-37195808

ABSTRACT

Facial expression recognition (FER) plays a vital role in the field of human-computer interaction. To achieve automatic FER, various approaches based on deep learning (DL) have been presented. However, most of them fail to extract discriminative expression semantic information and suffer from the problem of annotation ambiguity. In this paper, we propose an elaborately designed end-to-end recognition network with contrastive learning and uncertainty-guided relabeling, to recognize facial expressions efficiently and accurately, as well as to alleviate the impact of annotation ambiguity. Specifically, a supervised contrastive loss (SCL) is introduced to promote inter-class separability and intra-class compactness, thus helping the network extract fine-grained discriminative expression features. As for the annotation ambiguity problem, we present an uncertainty estimation-based relabeling module (UERM) to estimate the uncertainty of each sample and relabel the unreliable ones. In addition, to deal with the padding erosion problem, we embed an amending representation module (ARM) into the recognition network. Experimental results on three public benchmarks demonstrate that our proposed method improves the recognition performance remarkably, with 90.91% on RAF-DB, 88.59% on FERPlus and 61.00% on AffectNet, outperforming current state-of-the-art (SOTA) FER methods. Code will be available at http://github.com/xiaohu-run/fer_supCon.
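The supervised contrastive loss described above can be sketched in numpy: for each anchor, samples sharing its label are positives to be pulled together, and everything else is pushed apart. A minimal illustration in the style of Khosla et al.'s SupCon, assuming L2-normalized feature vectors; the function name and temperature value are ours, not the paper's:

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive loss (sketch): for each anchor, maximize the
    similarity to same-label samples relative to all other samples.
    `features` are assumed L2-normalized, shape (n, d)."""
    n = features.shape[0]
    sim = features @ features.T / tau          # temperature-scaled similarities
    mask_self = ~np.eye(n, dtype=bool)         # exclude self-comparisons
    losses = []
    for i in range(n):
        pos = (labels == labels[i]) & mask_self[i]
        if not pos.any():
            continue                           # anchors without positives contribute nothing
        log_den = np.log(np.exp(sim[i][mask_self[i]]).sum())
        losses.append(-(sim[i][pos] - log_den).mean())
    return float(np.mean(losses))
```

With a good embedding (same-label samples collocated) the loss is near zero; with labels scattered across the embedding it grows large.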


Subject(s)
Facial Recognition, Humans, Uncertainty, Facial Expression
4.
Int J Neural Syst ; 32(9): 2250043, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35912583

ABSTRACT

A practical problem in supervised deep learning for medical image segmentation is the lack of labeled data, which is expensive and time-consuming to acquire. In contrast, there is a considerable amount of unlabeled data available in the clinic. To make better use of the unlabeled data and improve the generalization on limited labeled data, in this paper, a novel semi-supervised segmentation method via multi-task curriculum learning is presented. Here, curriculum learning means that when training the network, simpler knowledge is preferentially learned to assist the learning of more difficult knowledge. Concretely, our framework consists of a main segmentation task and two auxiliary tasks, i.e., a feature regression task and a target detection task. The two auxiliary tasks predict some relatively simpler image-level attributes and bounding boxes as the pseudo labels for the main segmentation task, enforcing the pixel-level segmentation result to match the distribution of these pseudo labels. In addition, to solve the problem of class imbalance in the images, a bounding-box-based attention (BBA) module is embedded, enabling the segmentation network to focus more on the target region rather than the background. Furthermore, to alleviate the adverse effects caused by the possible deviation of pseudo labels, error tolerance mechanisms are also adopted in the auxiliary tasks, including an inequality constraint and bounding-box amplification. Our method is validated on the ACDC2017 and PROMISE12 datasets. Experimental results demonstrate that compared with the fully supervised method and state-of-the-art semi-supervised methods, our method yields a much better segmentation performance on a small labeled dataset. Code is available at https://github.com/DeepMedLab/MTCL.
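A minimal sketch of how a bounding-box pseudo label can drive attention with some error tolerance, as the abstract describes: the box is amplified before being turned into a mask, so a slightly deviated pseudo box still covers the target. This is our illustration, not the authors' BBA module:

```python
import numpy as np

def bba_mask(shape, box, amplify=2):
    """Binary attention mask from a (possibly imprecise) pseudo-label bounding
    box. The box is enlarged by `amplify` pixels on every side as an
    error-tolerance measure (illustrative, not the paper's exact formulation)."""
    y0, x0, y1, x1 = box
    h, w = shape
    y0, x0 = max(0, y0 - amplify), max(0, x0 - amplify)
    y1, x1 = min(h, y1 + amplify), min(w, x1 + amplify)
    mask = np.zeros(shape, dtype=float)
    mask[y0:y1, x0:x1] = 1.0
    return mask
```

Multiplying feature maps or loss terms by such a mask steers the network toward the target region rather than the background.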


Subject(s)
Curriculum, Supervised Machine Learning, Data Curation/methods, Data Curation/standards, Datasets as Topic/standards, Datasets as Topic/supply & distribution, Image Processing, Computer-Assisted/methods, Supervised Machine Learning/classification, Supervised Machine Learning/statistics & numerical data, Supervised Machine Learning/trends
5.
Med Image Anal ; 79: 102447, 2022 07.
Article in English | MEDLINE | ID: mdl-35509136

ABSTRACT

Due to the difficulty in accessing a large amount of labeled data, semi-supervised learning is becoming an attractive solution in medical image segmentation. To make use of unlabeled data, current popular semi-supervised methods (e.g., temporal ensembling, mean teacher) mainly impose data-level and model-level consistency on unlabeled data. In this paper, we argue that in addition to these strategies, we could further utilize auxiliary tasks and consider task-level consistency to better excavate effective representations from unlabeled data for segmentation. Specifically, we introduce two auxiliary tasks, i.e., a foreground and background reconstruction task for capturing semantic information and a signed distance field (SDF) prediction task for imposing shape constraint, and explore the mutual promotion effect between the two auxiliary and the segmentation tasks based on mean teacher architecture. Moreover, to handle the potential bias of the teacher model caused by annotation scarcity, we develop a tripled-uncertainty guided framework to encourage the three tasks in the student model to learn more reliable knowledge from the teacher. When calculating uncertainty, we propose an uncertainty weighted integration (UWI) strategy for yielding the segmentation predictions of the teacher. In addition, following the advance of unsupervised learning in leveraging the unlabeled data, we also incorporate a contrastive learning based constraint to help the encoders extract more distinct representations to promote the medical image segmentation performance. Extensive experiments on the public 2017 ACDC dataset and the PROMISE12 dataset have demonstrated the effectiveness of our method.
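The mean-teacher backbone the abstract builds on can be sketched in a few lines of numpy: the teacher's weights are an exponential moving average of the student's, and an uncertainty-weighted integration (UWI) down-weights uncertain teacher predictions. Both functions are simplified stand-ins for the paper's strategy, with illustrative names and values:

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher update: teacher weights follow an exponential moving
    average of the student's, giving slowly changing, more stable targets
    for the consistency losses."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def uwi(preds, uncertainty):
    """Uncertainty-weighted integration (sketch): average several teacher
    predictions, down-weighting the more uncertain ones."""
    w = np.exp(-np.asarray(uncertainty, dtype=float))
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, preds))
```

After each training step, `ema_update` is applied once; `uwi` then fuses the teacher's multiple task outputs into one segmentation target.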


Subject(s)
Image Processing, Computer-Assisted, Supervised Machine Learning, Humans, Uncertainty
6.
Med Image Anal ; 77: 102339, 2022 04.
Article in English | MEDLINE | ID: mdl-34990905

ABSTRACT

Radiation therapy (RT) is regarded as the primary treatment for cancer in the clinic, aiming to deliver an accurate dose to the planning target volume (PTV) while protecting the surrounding organs at risk (OARs). To improve the effectiveness of treatment planning, deep learning methods are widely adopted to predict dose distribution maps for clinical treatment planning. In this paper, we present a novel multi-constraint dose prediction model based on a generative adversarial network, named Mc-GAN, to automatically predict the dose distribution map from computed tomography (CT) images and the masks of the PTV and OARs. Specifically, the generator is an embedded UNet-like structure with dilated convolution to capture both global and local information. During feature extraction, a dual attention module (DAM) is embedded to force the generator to take more heed of internal semantic relevance. To improve the prediction accuracy, two additional losses, i.e., the locality-constrained loss (LCL) and the self-supervised perceptual loss (SPL), are introduced besides the conventional global pixel-level loss and adversarial loss. Concretely, the LCL tries to focus on the predictions of locally important areas while the SPL aims to prevent the predicted dose maps from possible distortion at the feature level. Evaluated on two in-house datasets, our proposed Mc-GAN has been demonstrated to outperform other state-of-the-art methods on almost all PTV and OAR criteria.
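The locality-constrained idea, weighting prediction errors in clinically important regions more heavily than background, can be sketched as below. The weighting scheme and the `roi_weight` value are illustrative assumptions, not the paper's LCL formulation:

```python
import numpy as np

def locality_constrained_loss(pred, target, roi_mask, roi_weight=5.0):
    """Weighted pixel loss: errors inside important regions (e.g. PTV/OAR
    masks) count `roi_weight` times more than errors elsewhere.
    `roi_weight` is an illustrative value, not taken from the paper."""
    err = np.abs(pred - target)
    weights = np.where(roi_mask > 0, roi_weight, 1.0)
    return float((weights * err).mean())
```

In training, such a term would sit alongside the global pixel-level loss and the adversarial loss, each with its own weight.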


Subject(s)
Neoplasms, Radiotherapy Planning, Computer-Assisted, Humans, Image Processing, Computer-Assisted/methods, Neoplasms/diagnostic imaging, Neoplasms/radiotherapy, Organs at Risk, Radiotherapy Dosage, Radiotherapy Planning, Computer-Assisted/methods, Tomography, X-Ray Computed
7.
Comput Med Imaging Graph ; 80: 101663, 2020 03.
Article in English | MEDLINE | ID: mdl-31923610

ABSTRACT

Multi-modality based classification methods are superior to single modality based approaches for the automatic diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, most multi-modality based methods ignore the structure information of the data and simply squeeze it into pairwise relationships. In real-world applications, the relationships among subjects are much more complex than pairwise, and the high-order structure containing more discriminative information will intuitively be beneficial to our learning tasks. In light of this, a hypergraph based multi-task feature selection method for AD/MCI classification is proposed in this paper. Specifically, we first perform feature selection on each modality as a single task and incorporate a group-sparsity regularizer to jointly select common features across multiple modalities. Then, we introduce a hypergraph based regularization term for the standard multi-task feature selection to model the high-order structure relationship among subjects. Finally, a multi-kernel support vector machine is adopted to fuse the features selected from different modalities for the final classification. The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our proposed method achieves better classification performance than the state-of-the-art multi-modality based methods.
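The group-sparsity regularizer used to jointly select common features across modalities is typically the L2,1 norm, whose proximal operator shrinks each feature's row of weights as a block, zeroing out features that are weak in all modalities at once. A hedged numpy sketch of that one step (not the paper's full optimization):

```python
import numpy as np

def prox_group_lasso(W, lam):
    """Proximal operator of the L2,1 (group-sparsity) regularizer: each row of
    W groups one feature's weights across all tasks/modalities and is shrunk
    toward zero as a block, so features are selected or dropped jointly."""
    out = np.zeros_like(W, dtype=float)
    for i, row in enumerate(W):
        norm = np.linalg.norm(row)
        if norm > lam:
            out[i] = (1 - lam / norm) * row    # shrink the whole row
    return out                                 # rows with norm <= lam are zeroed
```

Rows surviving the shrinkage correspond to features selected in common across modalities.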


Subject(s)
Alzheimer Disease/classification, Alzheimer Disease/diagnostic imaging, Brain Mapping/methods, Image Interpretation, Computer-Assisted/methods, Machine Learning, Multimodal Imaging/methods, Neuroimaging/methods, Aged, Aged, 80 and over, Female, Humans, Male, Pattern Recognition, Automated
8.
Methods Inf Med ; 59(4-05): 151-161, 2020 08.
Article in English | MEDLINE | ID: mdl-33618420

ABSTRACT

BACKGROUND: An accurate and reproducible method to delineate tumor margins is of great importance in clinical diagnosis and treatment. In nasopharyngeal carcinoma (NPC), due to limitations such as high variability, low contrast, and discontinuous boundaries in presenting soft tissues, the tumor margin can be extremely difficult to identify in magnetic resonance imaging (MRI), increasing the challenge of the NPC segmentation task. OBJECTIVES: The purpose of this work is to develop a semiautomatic algorithm for NPC image segmentation with minimal human intervention that is also capable of delineating tumor margins with high accuracy and reproducibility. METHODS: In this paper, we propose a novel feature selection algorithm for the identification of the margin of the NPC image, named modified random forest recursive feature selection (MRF-RFS). Specifically, to obtain a more discriminative feature subset for segmentation, a modified recursive feature selection method is applied to the original handcrafted feature set. Moreover, we combine the proposed feature selection method with the classical random forest (RF) in the training stage to take full advantage of its intrinsic property (i.e., feature importance measure). RESULTS: To evaluate the segmentation performance, we verify our method on the T1-weighted MRI images of 18 NPC patients. The experimental results demonstrate that the proposed MRF-RFS method outperforms the baseline methods and deep learning methods on the task of segmenting NPC images. CONCLUSION: The proposed method could be effective in NPC diagnosis and useful for guiding radiation therapy.
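The recursive part of such a feature selection scheme can be sketched generically: score the surviving features, drop the least important fraction, and repeat until the target subset size is reached. Here `importance_fn` stands in for random-forest feature importances; the loop structure is our illustration, not the exact MRF-RFS algorithm:

```python
import numpy as np

def recursive_feature_selection(X, importance_fn, n_keep, drop_frac=0.2):
    """Recursive feature elimination (sketch): repeatedly score the surviving
    features and drop the least important fraction until `n_keep` remain.
    `importance_fn` takes a (samples, features) array and returns one score
    per column; random-forest importances would be a natural choice."""
    keep = np.arange(X.shape[1])
    while len(keep) > n_keep:
        scores = importance_fn(X[:, keep])
        n_drop = max(1, min(int(len(keep) * drop_frac), len(keep) - n_keep))
        order = np.argsort(scores)          # ascending: least important first
        keep = np.sort(keep[order[n_drop:]])
    return keep
```

Any scorer with that signature works, e.g. a per-column variance as a toy stand-in.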


Subject(s)
Algorithms, Nasopharyngeal Neoplasms, Humans, Magnetic Resonance Imaging, Nasopharyngeal Carcinoma/diagnostic imaging, Nasopharyngeal Neoplasms/diagnostic imaging, Reproducibility of Results
9.
Artif Intell Med ; 96: 12-24, 2019 05.
Article in English | MEDLINE | ID: mdl-31164205

ABSTRACT

Label fusion is one of the key steps in multi-atlas based segmentation of structural magnetic resonance (MR) images. Although a number of label fusion methods have been developed in the literature, most of those existing methods fail to address two important problems, i.e., (1) compared with boundary voxels, inner voxels usually have a higher probability (or reliability) of being correctly segmented, and (2) voxels with high segmentation reliability (after initial segmentation) can help refine the segmentation of voxels with low segmentation reliability in the target image. To this end, we propose a general reliability-based robust label fusion framework for multi-atlas based MR image segmentation. Specifically, in the first step, we perform initial segmentation for MR images using a conventional multi-atlas label fusion method. In the second step, for each voxel in the target image, we define two kinds of reliability, including the label reliability and spatial reliability that are estimated based on the soft label and spatial information from the initial segmentation, respectively. Finally, we employ voxels with high label-spatial reliability to help refine the label fusion process of those with low reliability in the target image. We incorporate our proposed framework into four well-known label fusion methods, including locally-weighted voting (LWV), the non-local mean patch-based method (PBM), joint label fusion (JLF) and the sparse patch-based method (SPBM), and obtain four novel label-spatial reliability-based label fusion approaches (called ls-LWV, ls-PBM, ls-JLF, and ls-SPBM). We validate the proposed methods in segmenting ROIs of brain MR images from the NIREP, LONI-LPBA40 and ADNI datasets. The experimental results demonstrate that our label-spatial reliability-based label fusion methods outperform the state-of-the-art methods in multi-atlas image segmentation.
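The refinement step — letting high-reliability voxels vote for their low-reliability neighbours — can be sketched in plain Python. The threshold, data structures and names here are illustrative assumptions, not the paper's exact framework:

```python
def reliability_refined_vote(labels, reliability, neighbors):
    """Voxels with reliability below 0.5 (an illustrative threshold) replace
    their initial label with a reliability-weighted vote of their neighbours;
    confident voxels keep their initial segmentation.
    labels/reliability: dicts keyed by voxel id; neighbors: voxel id -> list."""
    refined = dict(labels)
    for v, nbrs in neighbors.items():
        if reliability[v] >= 0.5:
            continue                       # confident voxels keep their label
        votes = {}
        for n in nbrs:
            votes[labels[n]] = votes.get(labels[n], 0.0) + reliability[n]
        if votes:
            refined[v] = max(votes, key=votes.get)
    return refined
```

A real implementation would run on dense 3D arrays with spatial neighbourhoods; the dictionaries keep the sketch short.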


Subject(s)
Brain/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Algorithms, Humans, Reproducibility of Results
10.
IEEE Trans Med Imaging ; 38(6): 1328-1339, 2019 06.
Article in English | MEDLINE | ID: mdl-30507527

ABSTRACT

Positron emission tomography (PET) has been used substantially in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize the high-quality PET image from the low-dose one to reduce the radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the high-quality FDG PET image from the low-dose one with the accompanying MRI images that provide anatomical information. Our work has four contributions. First, different from the traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities could vary at different image locations, and therefore a unified kernel for a whole image is not optimal. To address this issue, we propose a locality adaptive strategy for multi-modality fusion. Second, we utilize a 1 × 1 × 1 kernel to learn this locality adaptive fusion so that the number of additional parameters incurred by our method is kept to a minimum. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in a 3D conditional GANs model, which generates high-quality PET images by employing large-sized image patches and hierarchical features. Fourth, we apply the auto-context strategy to our scheme and propose an auto-context LA-GANs model to further refine the quality of synthesized images. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.
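The locality adaptive fusion idea reduces to giving every voxel its own mixing weight between modalities instead of one global coefficient. In the paper the weight map comes from learned 1 × 1 × 1 convolutions; in this sketch it is simply supplied:

```python
import numpy as np

def locality_adaptive_fusion(lpet, mri, weight_map):
    """Voxel-wise fusion of two modalities: each location mixes the low-dose
    PET and MRI values with its own weight (in the paper this map is produced
    by learned 1x1x1 convolutions; here it is just given)."""
    assert lpet.shape == mri.shape == weight_map.shape
    return weight_map * lpet + (1.0 - weight_map) * mri
```

A weight map of all ones reproduces the PET input, all zeros the MRI input, and anything in between a location-dependent blend.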


Subject(s)
Deep Learning, Imaging, Three-Dimensional/methods, Positron-Emission Tomography/methods, Brain/diagnostic imaging, Databases, Factual, Humans, Magnetic Resonance Imaging/methods, Phantoms, Imaging, Radiation Dosage
11.
Brain Imaging Behav ; 13(4): 879-892, 2019 Aug.
Article in English | MEDLINE | ID: mdl-29948906

ABSTRACT

The functional brain network has gained increased attention in the neuroscience community because of its ability to reveal the underlying architecture of the human brain. In general, the majority of work on functional network connectivity is based on the correlations between discrete-time-series signals that link only two different brain regions. However, these simple region-to-region connectivity models do not capture complex connectivity patterns between three or more brain regions that form a connectivity subnetwork, or subnetwork for short. To overcome this current limitation, a hypergraph learning-based method is proposed to identify subnetwork differences between two different cohorts. To achieve our goal, a hypergraph is constructed, where each vertex represents a subject and each hyperedge encodes a subnetwork with similar functional connectivity patterns between different subjects. Unlike previous learning-based methods, our approach is designed to jointly optimize the weights for all hyperedges such that the learned representation is in consensus with the distribution of phenotype data, i.e., clinical labels. In order to suppress spurious subnetwork biomarkers, we further enforce a sparsity constraint on the hyperedge weights, where a larger hyperedge weight indicates a subnetwork with the capability of identifying the disorder condition. We apply our hypergraph learning-based method to identify subnetwork biomarkers in Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). A comprehensive quantitative and qualitative analysis is performed, and the results show that our approach can correctly classify ASD and ADHD subjects from normal controls with 87.65% and 65.08% accuracies, respectively.
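Hypergraph learning of this kind usually rests on the normalized hypergraph Laplacian of Zhou et al., with the learned, sparsity-constrained hyperedge weights playing the role of `w` below. A numpy sketch of that building block (the paper's joint weight optimization is not reproduced here):

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian for an incidence matrix H
    (vertices x hyperedges) with hyperedge weights w:
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    W = np.diag(w)
    dv = H @ w                       # vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - theta
```

The Laplacian is symmetric with a zero eigenvalue at the degree-scaled constant vector, which is what makes it a usable smoothness regularizer on the hyperedge structure.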


Subject(s)
Brain Mapping/methods, Connectome/methods, Nerve Net/physiology, Attention Deficit Disorder with Hyperactivity/physiopathology, Autism Spectrum Disorder/physiopathology, Biomarkers, Brain/physiopathology, Humans, Machine Learning, Magnetic Resonance Imaging/methods, Models, Theoretical, Neural Networks, Computer, Neural Pathways/physiopathology
12.
Neuroimage ; 174: 550-562, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29571715

ABSTRACT

Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may cause increased noise in the reconstructed PET images, which impacts the image quality to a certain extent. In this paper, in order to reduce the radiation exposure while maintaining the high quality of PET images, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate the high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to render the same underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture which can combine hierarchical features by using skip connections is designed as the generator network to synthesize the full-dose image. In order to guarantee the synthesized PET image to be close to the real one, we take into account the estimation error loss in addition to the discriminator feedback to train the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of estimated images. Validation was done on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures.
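The generator objective described above — discriminator feedback plus an estimation-error term — has the familiar conditional-GAN shape sketched below. The non-saturating adversarial form and `lam=100` are common pix2pix-style defaults, assumed here rather than taken from the paper:

```python
import numpy as np

def generator_loss(d_fake, pred, target, lam=100.0):
    """Conditional-GAN generator objective (sketch): an adversarial term on the
    discriminator's score of the synthesized image, plus a lambda-weighted L1
    estimation-error term pulling the output toward the real full-dose image.
    lam=100 is a common pix2pix-style default, not a value from the paper."""
    adv = -np.log(d_fake + 1e-8)          # fool the discriminator
    l1 = np.abs(pred - target).mean()     # estimation error loss
    return float(adv + lam * l1)
```

When the discriminator is fully fooled (`d_fake` near 1) and the estimate matches the target, the loss approaches zero.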


Subject(s)
Brain/diagnostic imaging, Image Processing, Computer-Assisted/methods, Positron-Emission Tomography/methods, Adult, Deep Learning, Female, Humans, Male, Radiation Dosage, Reproducibility of Results, Signal-To-Noise Ratio, Young Adult
13.
Med Image Comput Comput Assist Interv ; 11070: 329-337, 2018 Sep.
Article in English | MEDLINE | ID: mdl-31058275

ABSTRACT

Positron emission tomography (PET) has been used substantially in recent years. To minimize the potential health risks caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize the high-quality full-dose PET image from the low-dose one to reduce the radiation exposure while maintaining the image quality. In this paper, we propose a locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the full-dose PET image from both the low-dose one and the accompanying T1-weighted MRI to incorporate anatomical information for better PET image synthesis. This paper has the following contributions. First, we propose a new mechanism to fuse multi-modality information in deep neural networks. Different from the traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities could vary at different image locations, and therefore a unified kernel for a whole image is not appropriate. To address this issue, we propose a method that is locality adaptive for multi-modality fusion. Second, to learn this locality adaptive fusion, we utilize a 1 × 1 × 1 kernel so that the number of additional parameters incurred by our method is kept to a minimum. This also naturally produces a fused image which acts as a pseudo input for the subsequent learning stages. Third, the proposed locality adaptive fusion mechanism is learned jointly with the PET image synthesis in an end-to-end trained 3D conditional GANs model developed by us. Our 3D GANs model generates high-quality PET images by employing large-sized image patches and hierarchical features. Experimental results show that our method outperforms the traditional multi-modality fusion methods used in deep networks, as well as the state-of-the-art PET estimation approaches.


Subject(s)
Magnetic Resonance Imaging, Multimodal Imaging, Positron-Emission Tomography, Algorithms, Electrons, Magnetic Resonance Imaging/methods, Multimodal Imaging/methods, Neural Networks, Computer, Positron-Emission Tomography/methods, Reproducibility of Results, Sensitivity and Specificity
14.
Pattern Recognit ; 63: 511-517, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27942077

ABSTRACT

Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, such an assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that the majority patterns can dominate the label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and also using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods.
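The top layer of such a structure — label-specific dictionaries of representative atlas patches — can be sketched as follows: reconstruct the target patch from each label's group and assign the label whose group leaves the smallest residual. The plain least-squares fit here stands in for the paper's iterative, residual-dictionary-assisted representation:

```python
import numpy as np

def label_fusion_by_residual(target_patch, label_dicts):
    """Label-specific dictionary sketch: fit the target patch with each
    label's group of representative atlas patches (columns of D) by least
    squares, and return the label with the smallest reconstruction residual."""
    best_label, best_res = None, np.inf
    for label, D in label_dicts.items():          # D: (patch_dim, n_atoms)
        coef, *_ = np.linalg.lstsq(D, target_patch, rcond=None)
        res = np.linalg.norm(D @ coef - target_patch)
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

The multi-layer extension would refit the leftover residual against the next layer of residual dictionaries instead of stopping after one fit.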

15.
Anim Reprod Sci ; 167: 40-50, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26874430

ABSTRACT

Meat-type Red-feather country hens fed ad libitum (AD-hens) exhibit obesity-associated morbidities and a number of ovarian irregularities. Leukocyte participation in ovarian activities is unstudied in AD-hens. In contrast to feed-restricted hens (R-hens), the ovulatory process of the F1 follicle appeared delayed in AD-hens in association with reduced F1 follicle progesterone content and reduced gelatinase A (MMP-2) and collagenase-3 (MMP-13) activities, coincident with elevated IL-1β and NO production (P<0.05), and increased leukocyte infiltration of inflamed necrotic follicle walls. Extracts of AD-hen F1 follicle walls induced greater leukocyte migration than extracts from F1 follicle walls of R-hens (P<0.05). Co-cultures of granulosa cells with increasing numbers of leukocytes from either AD-hens or R-hens exhibited dose-dependent reductions in progesterone production and increases in cell death. AD-hen leukocytes were less proapoptotic than their R counterparts (P<0.05). Granulosa MMP-13 and MMP-2 activities were also suppressed in the co-cultures with heterophils or monocytes in a dose-dependent manner (P<0.05). AD heterophils and R monocytes had a greater inhibitory effect on MMP activities in the co-cultures than their respective counterparts (P<0.05). Both basal and LPS-induced IL-1β secretion and MMP-22 or MMP-2 activities in freshly isolated AD-hen leukocytes were reduced (P<0.05). Exposure of AD or R leukocytes to 0.5 mM palmitate impaired IL-1β secretion and MMP-22 or MMP-2 activity. Inhibition of ceramide synthesis with FB1 and scavenging of ROS with n-MPG rescued MMP activity and IL-1β production in palmitate-treated heterophils, but exacerbated monocyte suppression. These latter findings suggest that intracellular lipid dysregulation in leukocytes contributes to ovarian dysfunction in AD-hens.


Subject(s)
Chickens/metabolism , Eating , Leukocytes/physiology , Ovarian Follicle/cytology , Animal Feed/analysis , Animal Husbandry , Animals , Caloric Restriction , Cells, Cultured , Chemotaxis , Coculture Techniques , Female , Gene Expression Regulation , Granulosa Cells/metabolism , Interleukin-1beta/genetics , Interleukin-1beta/metabolism , Lipid Metabolism , Matrix Metalloproteinases/genetics , Matrix Metalloproteinases/metabolism , Ovarian Follicle/chemistry , Ovarian Follicle/metabolism , Ovulation/physiology , Tissue Extracts/pharmacology
16.
Mach Learn Med Imaging ; 10019: 1-9, 2016 Oct.
Article in English | MEDLINE | ID: mdl-29938714

ABSTRACT

The functional connectome has gained increased attention in the neuroscience community. In general, most network connectivity models are based on correlations between discrete time-series signals and thus connect only two brain regions at a time. However, these bivariate region-to-region models cannot capture interactions among three or more brain regions that form a subnetwork. Here we propose a learning-based method to discover subnetwork biomarkers that significantly distinguish two clinical cohorts. Hypergraph learning is employed in our work. Specifically, we construct a hypergraph by exhaustively inspecting all possible subnetworks across all subjects, where each hyperedge connects a group of subjects demonstrating highly correlated functional connectivity throughout the underlying subnetwork. The objective of hypergraph learning is to jointly optimize the weights of all hyperedges so that the separation of the two groups under the learned data representation best agrees with the observed clinical labels. We apply our method to find high-order childhood autism biomarkers from rs-fMRI images. Promising results have been obtained from comprehensive evaluation of the discriminative power and generality of the method in diagnosing autism.
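As a minimal, hypothetical sketch of the machinery this abstract describes (not the authors' implementation), the following builds a subject-wise hypergraph incidence matrix, forms the standard normalized hypergraph Laplacian, and uses it for transductive label propagation. The toy incidence matrix, hyperedge weights, and regularization parameter `mu` are illustrative assumptions.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian.
    H: (n_subjects, n_hyperedges) incidence matrix; w: hyperedge weights."""
    dv = H @ w                       # vertex degrees (weighted)
    de = H.sum(axis=0)               # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / de) @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - Theta

# Toy example: 4 subjects, 2 hyperedges (each hyperedge groups subjects
# whose connectivity over one candidate subnetwork is highly correlated).
H = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [1, 1]], dtype=float)
w = np.ones(2)                       # hyperedge weights to be optimized
L = hypergraph_laplacian(H, w)

# Transductive step: minimize f^T L f + mu * ||f - y||^2 over soft labels f,
# where y holds observed clinical labels (0 marks an unlabeled subject).
y = np.array([1.0, 1.0, -1.0, 0.0])
mu = 1.0
f = np.linalg.solve(np.eye(4) + L / mu, y)
```

In the paper's full formulation the hyperedge weights `w` themselves are jointly optimized against the clinical labels; here they are fixed for brevity.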

17.
Brain Imaging Behav ; 10(4): 1148-1159, 2016 12.
Article in English | MEDLINE | ID: mdl-26572145

ABSTRACT

Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality methods for diagnosis and prognosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationships across multiple modalities of the same subjects while ignoring the potentially useful relationships across different subjects. Accordingly, in this paper we propose a novel learning method for multimodal classification of AD/MCI that fully explores the relationships across both modalities and subjects. Specifically, our proposed method comprises two sequential components: label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection on each modality is treated as a separate learning task, and a group-sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that multi-modality subjects with the same class label should lie closer in the reduced feature space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance than several state-of-the-art methods for multimodal classification of AD/MCI.
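The group-sparsity step can be illustrated with a small sketch. This is a hypothetical simplification that keeps only the l2,1-regularized multi-task backbone (one regression task per modality, solved by proximal gradient descent) and omits the paper's label-aligned regularization term; all data, step sizes, and thresholds below are assumptions.

```python
import numpy as np

def l21_prox(W, t):
    """Row-wise soft-thresholding: proximal operator of the l2,1 norm.
    Zeroing a whole row of W discards that feature in every modality."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

def multitask_feature_select(Xs, y, lam=0.1, lr=0.01, iters=200):
    """Joint sparse regression over modalities (one task per modality).
    Xs: list of (n, d) arrays; y: (n,) labels. Returns (d, M) weights."""
    d, M = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, M))
    for _ in range(iters):
        # least-squares gradient for each modality's task
        G = np.column_stack([Xs[m].T @ (Xs[m] @ W[:, m] - y) / len(y)
                             for m in range(M)])
        W = l21_prox(W - lr * G, lr * lam)   # gradient step + group shrinkage
    return W

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 20)), rng.normal(size=(50, 20))
y = np.sign(X1[:, 0] + X2[:, 0] + 0.1 * rng.normal(size=50))
W = multitask_feature_select([X1, X2], y)
selected = np.where(np.linalg.norm(W, axis=1) > 1e-6)[0]  # shared features
```

In the paper the surviving features would then be fed, per modality, into a multi-kernel SVM for the final fused classification.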


Subject(s)
Alzheimer Disease/classification , Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging , Cognitive Dysfunction/classification , Cognitive Dysfunction/diagnostic imaging , Machine Learning , Neuroimaging/methods , Aged , Databases, Factual , Disease Progression , Fluorodeoxyglucose F18 , Humans , Image Interpretation, Computer-Assisted/methods , Multimodal Imaging/methods , Pattern Recognition, Automated/methods , Positron-Emission Tomography , Prognosis , ROC Curve , Radiopharmaceuticals
18.
Brain Imaging Behav ; 10(3): 739-49, 2016 09.
Article in English | MEDLINE | ID: mdl-26311394

ABSTRACT

Recently, multi-task feature selection methods have been used in multi-modality based classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). However, traditional multi-task feature selection methods usually fail to exploit the discriminative information among subjects that could further improve classification performance. Accordingly, in this paper we propose a discriminative multi-task feature selection method to select the most discriminative features for multi-modality based classification of AD/MCI. Specifically, for each modality we train a linear regression model on the corresponding data, and we enforce group-sparsity regularization on the weights of those regression models to jointly select common features across the modalities. Furthermore, we propose a discriminative regularization term based on intra-class and inter-class Laplacian matrices to better exploit the discriminative information among subjects. To evaluate our proposed method, we perform extensive experiments on 202 subjects from the baseline MRI and FDG-PET image data of the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 51 AD patients, 99 MCI patients, and 52 healthy controls (HC). Compared with several state-of-the-art methods for multi-modality based AD/MCI classification, the experimental results show that our proposed method not only improves classification performance but also has the potential to discover disease-related biomarkers useful for diagnosis.
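The intra-class and inter-class Laplacian matrices mentioned above can be constructed directly from class labels. The sketch below shows one standard construction as an illustrative assumption, not necessarily the authors' exact formulation.

```python
import numpy as np

def class_laplacians(y):
    """Intra-class and inter-class graph Laplacians from labels y.
    Same-class subject pairs get weight 1 in the intra-class graph;
    different-class pairs get weight 1 in the inter-class graph."""
    y = np.asarray(y)
    same = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    diff = 1.0 - same
    np.fill_diagonal(diff, 0.0)
    Lw = np.diag(same.sum(1)) - same   # intra-class Laplacian
    Lb = np.diag(diff.sum(1)) - diff   # inter-class Laplacian
    return Lw, Lb

y = [0, 0, 1, 1]
Lw, Lb = class_laplacians(y)
# A discriminative penalty on predictions f = X @ w could then take the
# form trace(f^T Lw f) - gamma * trace(f^T Lb f): pull same-class
# subjects together and push different-class subjects apart.
```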


Subject(s)
Alzheimer Disease/classification , Alzheimer Disease/diagnostic imaging , Cognitive Dysfunction/classification , Cognitive Dysfunction/diagnostic imaging , Magnetic Resonance Imaging , Positron-Emission Tomography , Algorithms , Datasets as Topic , Fluorodeoxyglucose F18 , Humans , Linear Models , ROC Curve , Radiopharmaceuticals
19.
Med Image Comput Comput Assist Interv ; 9900: 291-299, 2016 Oct.
Article in English | MEDLINE | ID: mdl-28386606

ABSTRACT

Graph-based Transductive Learning (GTL) is a powerful tool in computer-assisted diagnosis, especially when the training data are not sufficient to build reliable classifiers. Conventional GTL approaches first construct a fixed subject-wise graph based on the similarities of observed features (e.g., extracted from imaging data) in the feature domain, and then follow the established graph to propagate existing labels from training to testing data in the label domain. However, such a graph is learned exclusively in the feature domain and may not necessarily be optimal in the label domain, which can eventually undermine classification accuracy. To address this issue, we propose a progressive GTL (pGTL) method that progressively finds an intrinsic data representation. To achieve this, our pGTL method iteratively (1) refines the subject-wise relationships observed in the feature domain using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined subject-wise relationships, and (3) verifies the intrinsic data representation on the training data to guarantee optimal classification on new testing data. Furthermore, we extend pGTL to incorporate multi-modal imaging data, which provide complementary information and thus improve classification accuracy and robustness. Promising classification results in identifying Alzheimer's disease (AD), Mild Cognitive Impairment (MCI), and Normal Control (NC) subjects are achieved using MRI and PET data.
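The iterative refine-and-propagate loop can be caricatured in a few lines. The following is a hypothetical sketch, not the authors' algorithm: it alternates between re-weighting a feature-similarity graph by current label agreement and re-propagating labels on the refined graph. The toy data, Gaussian kernel width, and mixing weight `alpha` are all assumptions.

```python
import numpy as np

def progressive_transduction(X, y, n_iter=10, alpha=0.7, sigma=1.0):
    """pGTL-style sketch: the subject graph is iteratively refined so that
    it agrees with the evolving label representation, not only with the
    observed features. y: +1/-1 for training subjects, 0 for test subjects."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S_feat = np.exp(-d2 / (2 * sigma ** 2))   # feature-domain affinities
    f = y.astype(float)
    for _ in range(n_iter):
        # refine graph: down-weight edges whose endpoints disagree in label space
        S = S_feat * np.exp(-np.abs(f[:, None] - f[None, :]))
        np.fill_diagonal(S, 0.0)
        P = S / S.sum(1, keepdims=True)        # row-stochastic propagation matrix
        f = alpha * (P @ f) + (1 - alpha) * y  # propagate, clamp training labels
    return np.sign(f)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (10, 2)), rng.normal(2, 0.5, (10, 2))])
y = np.zeros(20)
y[:3] = 1            # 3 labeled subjects in group 1
y[10:13] = -1        # 3 labeled subjects in group 2; the rest are unlabeled
pred = progressive_transduction(X, y)
```

The multi-modal extension in the paper would maintain one feature-domain affinity per modality and fuse them within the same loop.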


Subject(s)
Algorithms , Alzheimer Disease/diagnostic imaging , Disease Progression , Image Interpretation, Computer-Assisted/methods , Alzheimer Disease/pathology , Brain/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/pathology , Humans , Machine Learning , Magnetic Resonance Imaging , Positron-Emission Tomography , Reproducibility of Results , Sensitivity and Specificity
20.
Anim Reprod Sci ; 151(3-4): 229-36, 2014 Dec 30.
Article in English | MEDLINE | ID: mdl-25458320

ABSTRACT

Restricted feed intake improves egg production in Cornish×Plymouth Rock (broiler) hens. Red-feather (RF) and Black-feather (BF) chickens are two local strains of non-broiler meat-type chickens whose egg production has declined with continued selection for meat yield, and for which restricted feeding as a means of improving egg laying is unstudied. Sixteen-week-old RF and BF pullets were either fed ad libitum (AL) or restricted to 85% of AL intake (R). At 35 wk and 50 wk, R-hens showed improved egg production and less abnormal ovarian morphology than AL-hens. Obesity, hepatic steatosis, lipotoxic changes to plasma lipids, and systemic inflammation induced by AL feeding in RF and BF hens were similar to those observed previously in AL broiler hens. Egg production was negatively correlated with body weight, fractional abdominal fat weight, and plasma NEFA concentrations in AL-hens (P<0.05). AL-hen hierarchical follicles accumulated ceramide and increased interleukin-1β production (P<0.05) in conjunction with increased granulosa cell apoptosis, follicle atresia, ovarian regression, and reduced plasma 17β-estradiol concentrations (P<0.05). These outcomes from non-broiler, yet meat-type, country chicken strains indicate that selection for rapid growth and increased meat yield fundamentally changes energy metabolism in a way that renders hens highly susceptible to reproductive impairment through lipid dysregulation and pro-inflammatory signaling, rather than through impaired resource allocation per se.


Subject(s)
Caloric Restriction , Chickens , Metabolic Diseases/prevention & control , Reproduction , Animal Feed , Animal Nutritional Physiological Phenomena , Animals , Caloric Restriction/veterinary , Eating/physiology , Eggs , Female , Meat , Metabolic Diseases/veterinary , Oviparity , Poultry Diseases/prevention & control