Results 1 - 20 of 35
1.
Article in English | MEDLINE | ID: mdl-38837927

ABSTRACT

Moving object detection in satellite videos (SVMOD) is challenging because targets are extremely dim and small. Current learning-based methods tackle SVMOD by extracting spatio-temporal information from dense multi-frame representations with labor-intensive manual labels, which incurs high annotation costs and tremendous computational redundancy owing to the severe imbalance between foreground and background regions. In this paper, we propose a highly efficient unsupervised framework for SVMOD. Specifically, we propose a generic unsupervised framework in which pseudo labels generated by a traditional method evolve with the training process to improve detection performance. Furthermore, we propose a highly efficient and effective sparse convolutional anchor-free detection network that samples the dense multi-frame image stack into a sparse spatio-temporal point-cloud representation and skips the redundant computation on background regions. Coupling these two designs, we achieve both high efficiency (in labels and computation) and strong effectiveness. Extensive experiments demonstrate that our method not only processes 98.8 frames per second on 1024 × 1024 images but also achieves state-of-the-art performance.
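The dense-to-sparse sampling idea can be sketched in a few lines (a toy illustration with a hypothetical `frames_to_point_cloud` helper, not the authors' code): only pixels above a brightness threshold survive as spatio-temporal points, so background regions cost nothing downstream.

```python
import numpy as np

def frames_to_point_cloud(frames, thresh):
    """Turn a dense (T, H, W) frame stack into a sparse (N, 4) array of
    (t, y, x, intensity) points, keeping only pixels above `thresh`.
    Background pixels are dropped entirely, which is where the
    computational savings come from."""
    t, y, x = np.nonzero(frames > thresh)
    return np.stack([t, y, x, frames[t, y, x]], axis=1)

frames = np.zeros((3, 4, 4))
frames[1, 2, 2] = 5.0  # one bright, small "moving object" pixel
pts = frames_to_point_cloud(frames, thresh=1.0)  # a single 4-D point
```

On a real satellite clip almost every pixel is background, so the point cloud is orders of magnitude smaller than the frame stack.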

2.
Article in English | MEDLINE | ID: mdl-38478434

ABSTRACT

Visual speech, referring to the visual domain of speech, has attracted increasing attention due to its wide applications, such as public security, medical treatment, military defense, and film entertainment. As a powerful AI strategy, deep learning techniques have extensively promoted the development of visual speech learning. Over the past five years, numerous deep learning-based methods have been proposed to address various problems in this area, especially automatic visual speech recognition and generation. To push forward future research on visual speech, this paper presents a comprehensive review of recent progress in deep learning methods for visual speech analysis. We cover different aspects of visual speech, including fundamental problems, challenges, benchmark datasets, a taxonomy of existing methods, and state-of-the-art performance. We also identify gaps in current research and discuss inspiring directions for future work.

3.
Article in English | MEDLINE | ID: mdl-38329861

ABSTRACT

This article proposes a novel module called middle spectrum grouped convolution (MSGC) for efficient deep convolutional neural networks (DCNNs) built on the mechanism of grouped convolution. It explores the broad "middle spectrum" area between channel pruning and conventional grouped convolution. Compared with channel pruning, MSGC can retain most of the information in the input feature maps thanks to the group mechanism; compared with grouped convolution, MSGC benefits from learnability, the core of channel pruning, when constructing its group topology, leading to better channel division. The middle spectrum area is unfolded along four dimensions: groupwise, layerwise, samplewise, and attentionwise, making it possible to reveal more powerful and interpretable structures. As a result, the proposed module acts as a booster that reduces the computational cost of the host backbones for general image recognition, often with improved predictive accuracy. For example, in experiments on the ImageNet dataset for image classification, MSGC can reduce the multiply-accumulates (MACs) of ResNet-18 and ResNet-50 by half while still increasing the Top-1 accuracy by more than 1%. With a 35% reduction in MACs, MSGC can also increase the Top-1 accuracy of the MobileNetV2 backbone. Results on the MS COCO dataset for object detection show similar observations. Our code and trained models are available at https://github.com/hellozhuo/msgc.
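The computational side of the group mechanism is easy to verify with a back-of-the-envelope MAC count (generic grouped convolution, not MSGC's learned grouping; `conv_macs` is a hypothetical helper):

```python
def conv_macs(c_in, c_out, k, h, w, groups=1):
    """Multiply-accumulates of a k x k convolution that maps c_in input
    channels to an h x w x c_out output; grouping divides the input
    channels visible to each output channel by `groups`."""
    return h * w * c_out * (c_in // groups) * k * k

dense = conv_macs(64, 64, 3, 56, 56)         # standard convolution
g4 = conv_macs(64, 64, 3, 56, 56, groups=4)  # 4 groups: 4x fewer MACs
```

Channel pruning reaches a similar count by deleting channels outright; grouping keeps all channels but restricts their connectivity, which is the trade-off MSGC interpolates between.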

4.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14956-14974, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37527290

ABSTRACT

Recently, there have been tremendous efforts in developing lightweight Deep Neural Networks (DNNs) with satisfactory accuracy, which can enable the ubiquitous deployment of DNNs on edge devices. The core challenge of developing compact and efficient DNNs lies in balancing the competing goals of high accuracy and high efficiency. In this paper, we propose two novel types of convolutions, dubbed Pixel Difference Convolution (PDC) and Binary PDC (Bi-PDC), which enjoy the following benefits: they capture higher-order local differential information, are computationally efficient, and can be integrated with existing DNNs. With PDC and Bi-PDC, we further present two lightweight deep networks, named Pixel Difference Networks (PiDiNet) and Binary PiDiNet (Bi-PiDiNet) respectively, to learn highly efficient yet more accurate representations for visual tasks including edge detection and object recognition. Extensive experiments on popular datasets (BSDS500, ImageNet, LFW, YTF, etc.) show that PiDiNet and Bi-PiDiNet achieve the best accuracy-efficiency trade-off. For edge detection, PiDiNet is the first network that can be trained without ImageNet, and it achieves human-level performance on BSDS500 at 100 FPS with 1M parameters. For object recognition, among existing binary DNNs, Bi-PiDiNet achieves the best accuracy and a nearly 2× reduction of computational cost on ResNet18.
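The central instance of the pixel-difference idea can be sketched on a single patch (a simplified, hedged illustration: PDC has several probing strategies, and this covers only differences from the centre pixel; `central_pdc_response` is a hypothetical helper):

```python
import numpy as np

def central_pdc_response(patch, weights):
    """Single response of a central pixel-difference convolution on a
    3x3 patch: the kernel weights multiply differences from the centre
    pixel rather than raw intensities, so the output encodes local
    gradient-like (higher-order) information."""
    centre = patch[1, 1]
    return float(np.sum(weights * (patch - centre)))

patch = np.arange(1.0, 10.0).reshape(3, 3)  # centre value is 5
resp = central_pdc_response(patch, np.ones((3, 3)))
```

Because the response depends only on differences, any constant offset added to the whole patch cancels out, which is what makes the features edge-sensitive.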

5.
IEEE Trans Cybern ; 52(10): 10735-10749, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33784633

ABSTRACT

Unsupervised domain adaptation (UDA) aims to learn a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. Most existing approaches learn domain-invariant features by adapting the entire information content of the images. However, forcing adaptation of domain-specific variations undermines the effectiveness of the learned features. To address this problem, we propose a novel yet elegant module, the deep ladder-suppression network (DLSN), designed to better learn cross-domain shared content by suppressing domain-specific variations. The proposed DLSN is an autoencoder with lateral connections from the encoder to the decoder. By this design, the domain-specific details, which are necessary only for reconstructing the unlabeled target data, are fed directly to the decoder to complete the reconstruction task, relieving the pressure of learning domain-specific variations at the later layers of the shared encoder. As a result, DLSN allows the shared encoder to focus on learning cross-domain shared content while ignoring domain-specific variations. Notably, the proposed DLSN can be used as a standard module and integrated with various existing UDA frameworks to further boost performance. Without bells and whistles, extensive experimental results on four standard domain adaptation benchmarks, namely: 1) Digits; 2) Office31; 3) Office-Home; and 4) VisDA-C, demonstrate that the proposed DLSN consistently and significantly improves the performance of various popular UDA frameworks.

6.
Spectrochim Acta A Mol Biomol Spectrosc ; 257: 119739, 2021 Aug 05.
Article in English | MEDLINE | ID: mdl-33862374

ABSTRACT

In China, over 10% of cultivated land is polluted by heavy metals, which can affect crop growth, food safety, and human health. Therefore, how to detect soil heavy metal pollution effectively and quickly has become a critical issue. This study provides a novel data preprocessing method that can extract vital information from soil hyperspectra and uses different classification algorithms to detect levels of heavy metal contamination in soil. In this experiment, 160 soil samples from the Eastern Junggar Coalfield in Xinjiang were employed for verification, including 143 noncontaminated and 17 contaminated samples. Because chromium is present in the soil only in trace amounts, and because spectral characteristics are easily influenced by other impurities in the soil, estimating soil chromium concentrations directly through hyperspectral analysis is unsatisfactory. To address this, the pretreatment in this experiment combines second-derivative and data enhancement (DA) approaches. Then, support vector machine (SVM), k-nearest neighbour (KNN), and deep neural network (DNN) algorithms are used to build the discriminant models. The accuracies of the DA-SVM, DA-KNN, and DA-DNN models were 95.61%, 95.62%, and 96.25%, respectively. The results demonstrate that soil hyperspectral technology combined with deep learning can be used to monitor soil chromium pollution levels instantly and on a large scale. This research can support the management of polluted areas and agricultural insurance applications.
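The second-derivative pretreatment can be illustrated generically (a standard discrete second derivative along the wavelength axis; the study's exact implementation and its DA step are not shown):

```python
import numpy as np

def second_derivative(spectrum):
    """Discrete second derivative along the wavelength axis:
    d2[i] = s[i+1] - 2*s[i] + s[i-1]. Commonly used to suppress
    baseline offset and linear drift in reflectance spectra; a generic
    version of the pretreatment, shown without the data enhancement."""
    s = np.asarray(spectrum, dtype=float)
    return s[2:] - 2 * s[1:-1] + s[:-2]

# A spectrum with a purely linear baseline has a zero second derivative:
d2 = second_derivative([1.0, 2.0, 3.0, 4.0, 5.0])
```

This is why the derivative step helps: constant and linear baseline effects vanish, leaving the curvature features that carry the chemical signal.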

7.
IEEE Trans Image Process ; 28(8): 3910-3922, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30869616

ABSTRACT

Research in texture recognition often concentrates on recognizing textures with intraclass variations such as illumination, rotation, viewpoint, and small-scale changes. In real-world applications, by contrast, a change in scale can have a dramatic impact on texture appearance, to the point of changing completely from one texture category to another. As a result, texture variations due to changes in scale are among the hardest to handle. In this paper, we conduct the first study of classifying textures with extreme variations in scale. To address this issue, we first generate and then prune scale proposals on the basis of dominant texture patterns. Motivated by the challenges posed by this problem, we propose a new GANet network in which a genetic algorithm is used to change the filters in the hidden layers during network training, in order to promote the learning of more informative semantic texture patterns. Finally, we adopt Fisher vector pooling of a convolutional neural network filter bank feature encoder for global texture representation. Because extreme scale variations are not necessarily present in most standard texture databases, to support the proposed extreme-scale aspects of texture understanding we develop a new dataset, the extreme scale variation textures (ESVaT), to test the performance of our framework. The proposed framework is demonstrated to outperform standard texture features by more than 10% on ESVaT. We also test our approach on the KTHTIPS2b and OS datasets, and on a further dataset synthetically derived from Forrest, showing superior performance compared with the state of the art.

8.
IEEE J Biomed Health Inform ; 21(2): 429-440, 2017 03.
Article in English | MEDLINE | ID: mdl-26685275

ABSTRACT

Indirect immunofluorescence imaging of human epithelial type 2 (HEp-2) cells provides effective evidence for diagnosing autoimmune diseases. Recently, computer-aided diagnosis of autoimmune diseases via HEp-2 cell classification has attracted great attention. However, the HEp-2 cell classification task is quite challenging due to large intraclass and small interclass variations. In this paper, we propose an effective approach for automatic HEp-2 cell classification that combines multiresolution co-occurrence texture and large regional shape information. More specifically, we propose to: 1) capture multiresolution co-occurrence texture information with a novel pairwise rotation-invariant co-occurrence of local Gabor binary patterns descriptor; 2) depict large regional shape information using an improved Fisher vector model with RootSIFT features sampled from large image patches at multiple scales; and 3) combine both features. We systematically evaluate the proposed approach on the IEEE International Conference on Pattern Recognition (ICPR) 2012, the IEEE International Conference on Image Processing (ICIP) 2013, and the ICPR 2014 contest datasets. The proposed method, based on the combination of the two introduced features, outperforms the winners of the ICPR 2012 contest using the same experimental protocol. Our method also greatly improves on the winner of the ICIP 2013 contest under four different experimental setups. Using the leave-one-specimen-out evaluation strategy, our method achieves performance comparable to the winner of the ICPR 2014 contest, which combined four features.


Subject(s)
Epithelial Cells/cytology; Fluorescent Antibody Technique, Indirect/methods; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Cell Line; Humans
9.
IEEE Trans Image Process ; 25(5): 1977-92, 2016 May.
Article in English | MEDLINE | ID: mdl-26955032

ABSTRACT

In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) a salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes across the whole population and suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in the spatial domain and the topological evolution information in the temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and the UNBC-McMaster database for spontaneous pain expression monitoring. The framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than those of the other methods under comparison.

10.
IEEE Trans Image Process ; 25(3): 1368-81, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26829791

ABSTRACT

Local binary patterns (LBP) are among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and unable to capture macrostructure information. To address these disadvantages, in this paper we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Unlike the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark datasets reveals MRELBP's high performance: it is robust to grayscale variations, rotation changes, and noise, yet has a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
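The median-comparison idea can be illustrated against classic LBP (a deliberately simplified sketch: the real descriptor compares medians of multiscale sampling regions, not single pixels; `lbp_code` is a hypothetical helper):

```python
import numpy as np

NEIGHBOURS = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def lbp_code(patch, ref):
    """8-bit LBP-style code over a 3x3 patch: each neighbour contributes
    a set bit when it is >= the reference value. Classic LBP uses the
    centre pixel as `ref`; an MRELBP-style variant uses a regional
    median instead, which a single noisy pixel cannot move."""
    bits = [int(patch[r][c] >= ref) for r, c in NEIGHBOURS]
    return sum(b << i for i, b in enumerate(bits))

flat = np.full((3, 3), 5.0)
noisy = flat.copy()
noisy[1, 1] = 100.0  # impulse noise hits the centre pixel

classic_clean = lbp_code(flat, flat[1, 1])        # flat patch
classic_noisy = lbp_code(noisy, noisy[1, 1])      # code destroyed by noise
robust_noisy = lbp_code(noisy, np.median(noisy))  # median shrugs it off
```

One corrupted centre pixel flips every bit of the classic code, while the median reference leaves the pattern intact, which is the robustness the abstract reports.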

11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 2287-2290, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268784

ABSTRACT

Computer-aided diagnosis (CAD) can significantly improve the efficiency of doctors. In this paper, we propose a deep convolutional neural network (CNN) based method for thorax disease diagnosis. We first align the images by matching interest points between them, and then enlarge the dataset using Gaussian scale-space theory. We then train a deep CNN model on the enlarged dataset and apply the resulting model to diagnose new test data. Our experimental results show that the method achieves very promising results.


Subject(s)
Neural Networks, Computer; Thoracic Diseases/diagnosis; Humans
12.
PLoS One ; 9(8): e104855, 2014.
Article in English | MEDLINE | ID: mdl-25144549

ABSTRACT

INTRODUCTION: Microscopy is the gold standard for diagnosing malaria; however, manual evaluation of blood films is highly dependent on skilled personnel and is a time-consuming, error-prone, and repetitive process. In this study, we propose a method that uses computer vision to detect and visualize only the diagnostically most relevant sample regions in digitized blood smears. METHODS: Giemsa-stained thin blood films with P. falciparum ring-stage trophozoites (n = 27) and uninfected controls (n = 20) were digitally scanned with an oil immersion objective (0.1 µm/pixel), capturing approximately 50,000 erythrocytes per sample. Parasite candidate regions were identified based on color and object size, followed by extraction of image features (local binary patterns, local contrast, and scale-invariant feature transform descriptors) used as input to a support vector machine classifier. The classifier was trained on digital slides from ten patients and validated on six samples. RESULTS: Diagnostic accuracy was tested on 31 samples (19 infected and 12 controls). From each digitized area of a blood smear, a panel of the 128 most probable parasite candidate regions was generated. Two expert microscopists were asked to visually inspect the panel on a tablet computer and judge whether the patient was infected with P. falciparum. Using the diagnostic tool, the two readers achieved diagnostic sensitivities and specificities of 95% and 100%, and 90% and 100%, respectively. Parasitemia was calculated separately by the automated system, and the correlation coefficient between manual and automated parasitemia counts was 0.97. CONCLUSION: We developed a decision support system for detecting malaria parasites using a computer vision algorithm combined with visualization of the sample areas with the highest probability of malaria infection. The system provides a novel method for blood smear screening with a significantly reduced need for visual examination and has the potential to increase throughput in malaria diagnostics.


Subject(s)
Malaria/parasitology; Plasmodium falciparum/physiology; Humans; Malaria/diagnosis; Malaria, Falciparum/diagnosis; Malaria, Falciparum/physiopathology; Parasitemia/physiopathology
13.
IEEE Trans Image Process ; 23(6): 2557-68, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24733014

ABSTRACT

Effective characterization of texture images requires exploiting multiple visual cues from the image appearance. The local binary pattern (LBP) and its variants have achieved great success in texture description. However, because the LBP(-like) feature is an index of discrete patterns rather than a numerical feature, it is difficult to combine with other discriminative features in a compact descriptor. To overcome this nonnumerical constraint of the LBP, this paper proposes a numerical variant, named the LBP difference (LBPD). The LBPD characterizes the extent to which one LBP varies from the average local structure of an image region of interest. It is simple, rotation invariant, and computationally efficient. To achieve enhanced performance, we combine the LBPD with other discriminative cues via a covariance matrix. The proposed descriptor, termed the covariance and LBPD descriptor (COV-LBPD), captures the intrinsic correlation between the LBPD and other features in a compact manner. Experimental results show that the COV-LBPD achieves promising results on publicly available datasets.
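One plausible reading of "distance from the average local structure" can be sketched numerically (`lbpd` here is a hypothetical helper illustrating the idea, not the paper's exact formula):

```python
import numpy as np

def lbpd(codes):
    """LBPD-style numerical feature (sketch): treat each 8-bit LBP code
    as a binary vector and measure how far each one lies from the mean
    bit vector of the region. Unlike the raw LBP index, the result is a
    real number that can enter a covariance descriptor."""
    bits = np.array([[(c >> i) & 1 for i in range(8)] for c in codes], float)
    mean = bits.mean(axis=0)
    return np.linalg.norm(bits - mean, axis=1)

# Two common patterns and one outlier: the outlier is farther from the mean.
d = lbpd([0b11111111, 0b11111111, 0b00000000])
```

The key point the abstract makes is that such a value is numerical, so it can be stacked with gradients or color cues inside a covariance matrix, which the discrete LBP index cannot.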

14.
IEEE Trans Pattern Anal Mach Intell ; 36(1): 181-7, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24231875

ABSTRACT

The problem of visual speech recognition involves decoding the video dynamics of a talking mouth in a high-dimensional visual space. In this paper, we propose a generative latent variable model that provides a compact representation of visual speech data. The model uses latent variables to separately represent the interspeaker variations of visual appearance and those caused by uttering within images, and it incorporates the structural information of the visual data by placing priors on the latent variables along a curve embedded within a path graph.


Subject(s)
Pattern Recognition, Automated/methods; Speech Recognition Software; Speech/physiology; Video Recording/methods; Databases, Factual; Humans
15.
IEEE Trans Pattern Anal Mach Intell ; 36(2): 289-302, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24356350

ABSTRACT

Local feature descriptors are an important module for face recognition, and descriptors such as Gabor and local binary patterns (LBP) have proven effective. Traditionally, the form of such local descriptors is predefined in a handcrafted way. In this paper, we propose a method to learn a discriminant face descriptor (DFD) in a data-driven way. The idea is to learn the most discriminant local features, minimizing the difference between features of images of the same person while maximizing it between images of different people. In particular, we enhance the discriminative ability of the face representation in three aspects. First, discriminant image filters are learned. Second, the optimal neighborhood sampling strategy is softly determined. Third, the dominant patterns are statistically constructed. Discriminative learning is incorporated to extract effective and robust features. We further apply the proposed method to the heterogeneous (cross-modality) face recognition problem and learn the DFD in a coupled way (coupled DFD, or C-DFD) to reduce the gap between features of heterogeneous face images and improve performance on this challenging problem. Extensive experiments on the FERET, CAS-PEAL-R1, LFW, and HFB face databases validate the effectiveness of DFD learning on both homogeneous and heterogeneous face recognition problems. The DFD improves POEM and LQP by about 4.5 percent on the LFW database, and the C-DFD enhances the heterogeneous face recognition performance of LBP by over 25 percent.


Subject(s)
Algorithms; Artificial Intelligence; Biometry/methods; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Discriminant Analysis; Humans; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
16.
IEEE Trans Image Process ; 22(10): 3879-91, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23686952

ABSTRACT

Video texture synthesis is the process of providing a continuous and infinitely varying stream of frames, and it plays an important role in computer vision and graphics. However, generating high-quality synthesis results remains a challenging problem. Considering the two key factors that affect synthesis performance, frame representation and blending artifacts, we improve synthesis from two aspects: 1) an effective frame representation is designed to capture both the image appearance information in the spatial domain and the longitudinal information in the temporal domain; and 2) artifacts that degrade synthesis quality are significantly suppressed on the basis of a diffeomorphic growth model. The proposed video texture synthesis approach has two major stages: a video stitching stage and a transition smoothing stage. In the first stage, a video texture synthesis model is proposed to generate an infinite video flow. To find similar frames for stitching video clips, we present a new spatial-temporal descriptor that provides an effective representation for different types of dynamic textures. In the second stage, a smoothing method is proposed to improve synthesis quality, especially temporal continuity. It establishes a diffeomorphic growth model to emulate the local dynamics around stitched frames. The proposed approach is thoroughly tested on public databases and videos from the Internet, and is evaluated both qualitatively and quantitatively.

17.
IEEE Trans Pattern Anal Mach Intell ; 35(5): 1164-77, 2013 May.
Article in English | MEDLINE | ID: mdl-23520257

ABSTRACT

Face recognition subject to uncontrolled illumination and blur is challenging. Interestingly, image degradation caused by blurring, often present in real-world imagery, has mostly been overlooked by the face recognition community. Such degradation corrupts face information and affects image alignment, which together negatively impact recognition accuracy. We propose a number of countermeasures designed to achieve system robustness to blurring. First, we propose a novel blur-robust face image descriptor based on Local Phase Quantization (LPQ) and extend it to a multiscale framework (MLPQ) to increase its effectiveness. To maximize the insensitivity to misalignment, the MLPQ descriptor is computed regionally by adopting a component-based framework. Second, the regional features are combined using kernel fusion. Third, the proposed MLPQ representation is combined with the Multiscale Local Binary Pattern (MLBP) descriptor using kernel fusion to increase insensitivity to illumination. Kernel Discriminant Analysis (KDA) of the combined features extracts discriminative information for face recognition. Last, two geometric normalizations are used to generate and combine multiple scores from different face image scales to further enhance the accuracy. The proposed approach has been comprehensively evaluated using the combined Yale and Extended Yale database B (degraded by artificially induced linear motion blur) as well as the FERET, FRGC 2.0, and LFW databases. The combined system is comparable to state-of-the-art approaches using similar system configurations. The reported work provides a new insight into the merits of various face representation and fusion methods, as well as their role in dealing with variable lighting and blur degradation.


Subject(s)
Algorithms; Biometric Identification/methods; Face/anatomy & histology; Artificial Intelligence; Databases, Factual; Discriminant Analysis; Humans; Image Processing, Computer-Assisted
18.
IEEE Trans Image Process ; 22(1): 326-39, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22851258

ABSTRACT

A dynamic texture (DT) is an extension of texture to the temporal domain, and segmenting a DT is a challenging problem. In this paper, we address the problem of segmenting a DT into disjoint regions. DTs may differ in their spatial mode (i.e., appearance) and/or their temporal mode (i.e., motion field). To this end, we develop a framework based on both modes. For the appearance mode, we use a new local spatial texture descriptor to describe the spatial structure of the DT; for the motion mode, we use optical flow together with a local temporal texture descriptor to represent the temporal variations of the DT. The optical flow is organized using the histogram of oriented optical flow (HOOF). To compute the distance between two HOOFs, we develop a simple, effective, and efficient distance measure based on Weber's law. Furthermore, we address threshold selection by proposing an offline supervised statistical learning method for determining the segmentation thresholds. The experimental results show that our method provides very good segmentation results compared with state-of-the-art methods when segmenting regions that differ in their dynamics.
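A distance in the spirit of Weber's law (perceived change scales with ΔI/I) can be sketched for two HOOF histograms; this is an illustrative formula with a hypothetical `weber_distance` helper, not necessarily the paper's exact measure:

```python
def weber_distance(h1, h2, eps=1e-8):
    """Histogram distance in the spirit of Weber's law: each bin
    difference is normalised by the bins' combined magnitude, so the
    same absolute change counts for less in heavily populated bins.
    `eps` guards against division by zero on empty bins."""
    return sum(abs(a - b) / (a + b + eps) for a, b in zip(h1, h2))

d_same = weber_distance([0.5, 0.5], [0.5, 0.5])  # identical histograms
d_diff = weber_distance([1.0, 0.0], [0.0, 1.0])  # maximally different
```

The relative normalisation is the design point: flow histograms from busy regions and quiet regions become comparable without rescaling.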

19.
Diagn Pathol ; 7: 22, 2012 Mar 02.
Article in English | MEDLINE | ID: mdl-22385523

ABSTRACT

BACKGROUND: The aim of the study was to assess whether texture analysis is feasible for automated identification of epithelium and stroma in digitized tumor tissue microarrays (TMAs). Texture analysis based on local binary patterns (LBP) has previously been used successfully in applications such as face recognition and industrial machine vision. TMAs with tissue samples from 643 patients with colorectal cancer were digitized using a whole slide scanner, and areas representing epithelium and stroma were annotated in the images. Well-defined images of epithelium (n = 41) and stroma (n = 39) were used for training a support vector machine (SVM) classifier with LBP texture features and a contrast measure C (LBP/C) as input. We optimized the classifier on a validation set (n = 576) and then assessed its performance on an independent test set of images (n = 720). Finally, the performance of the LBP/C classifier was evaluated against classifiers based on Haralick texture features and Gabor-filtered images. RESULTS: The proposed approach using LBP/C texture features was able to correctly differentiate epithelium from stroma according to texture: the agreement between the classifier and the human observer was 97 per cent (kappa value = 0.934, P < 0.0001) and the accuracy (area under the ROC curve) of the LBP/C classifier was 0.995 (CI95% 0.991-0.998). The accuracies of the corresponding classifiers based on Haralick features and Gabor-filtered images were 0.976 and 0.981, respectively. CONCLUSIONS: The method illustrates the capability of automated segmentation of epithelial and stromal tissue in TMAs based on texture features and an SVM classifier. Applications include tissue-specific assessment of gene and protein expression, as well as computerized analysis of the tumor microenvironment. VIRTUAL SLIDES: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/4123422336534537.
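The contrast measure C that accompanies LBP can be sketched as follows (based on the classic LBP/C formulation of Ojala et al.; the study's exact variant may differ, and `local_contrast` is a hypothetical helper):

```python
def local_contrast(neighbours, centre):
    """Local contrast C paired with the LBP code: the mean of the
    neighbours at or above the centre value minus the mean of those
    below it. Zero for a flat neighbourhood, large where bright and
    dark pixels meet, which is what separates textured tissue types."""
    hi = [v for v in neighbours if v >= centre]
    lo = [v for v in neighbours if v < centre]
    if not hi or not lo:
        return 0.0
    return sum(hi) / len(hi) - sum(lo) / len(lo)

c_sharp = local_contrast([1, 1, 9, 9], 5)  # strong local contrast
c_flat = local_contrast([5, 5, 5, 5], 5)   # flat neighbourhood
```

Because the LBP code is invariant to monotonic gray-level changes, pairing it with C restores the magnitude information the SVM needs to separate epithelium from stroma.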


Subject(s)
Colorectal Neoplasms/pathology; Image Interpretation, Computer-Assisted/methods; Support Vector Machine; Tissue Array Analysis/methods; Area Under Curve; Epithelium/pathology; Extracellular Matrix/pathology; Humans; ROC Curve; Sensitivity and Specificity
20.
Arterioscler Thromb Vasc Biol ; 32(3): 815-21, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22223734

ABSTRACT

OBJECTIVE: The goal of this study was to investigate the extent to which socioeconomic status (SES) in young adults is associated with cardiovascular risk factor levels and carotid intima-media thickness (IMT) and their changes over a 6-year follow-up period. METHODS AND RESULTS: The study population included 1813 subjects participating in the 21- and 27-year follow-ups of the Cardiovascular Risk in Young Finns Study (baseline age 24-39 years in 2001). At baseline, SES (indexed by education) was inversely associated with body mass index (P=0.0002), waist circumference (P<0.0001), glucose (P=0.01), and insulin (P=0.0009) concentrations; inversely associated with alcohol consumption (P=0.02) and cigarette smoking (P<0.0001); and directly associated with high-density lipoprotein cholesterol levels (P=0.05) and physical activity (P=0.006). Higher SES was associated with a smaller 6-year increase in body mass index (P=0.001). Education level and IMT were not associated (P=0.58) at baseline, but an inverse association was observed at follow-up among men (P=0.004). This became nonsignificant after adjustment for conventional risk factors (P=0.11). In all subjects, higher education was associated with a smaller increase in IMT during follow-up (P=0.002), and this association remained after adjustment for conventional risk factors (P=0.04). CONCLUSION: This study shows that high education in young adults is associated with a favorable cardiovascular risk factor profile and 6-year change in risk factors. Most importantly, the progression of carotid atherosclerosis was slower among individuals with a higher educational level.


Subject(s)
Cardiovascular Diseases/epidemiology; Carotid Artery Diseases/epidemiology; Socioeconomic Factors; Adult; Age Factors; Analysis of Variance; Asymptomatic Diseases; Cardiovascular Diseases/diagnostic imaging; Carotid Arteries/diagnostic imaging; Carotid Artery Diseases/diagnostic imaging; Chi-Square Distribution; Disease Progression; Educational Status; Female; Finland/epidemiology; Follow-Up Studies; Humans; Male; Middle Aged; Risk Assessment; Risk Factors; Time Factors; Tunica Intima/diagnostic imaging; Tunica Media/diagnostic imaging; Ultrasonography; Young Adult