Results 1 - 8 of 8
1.
Hepatol Int ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913149

ABSTRACT

BACKGROUND AND AIMS: The efficacy of achieving HBsAg clearance through pegylated interferon (PEG-IFNα) therapy in patients with chronic hepatitis B (CHB) remains uncertain, especially regarding the probability of achieving functional cure among patients with varying baseline HBsAg levels. We aimed to investigate the predictive value of HBsAg quantification for HBsAg seroclearance in CHB patients undergoing PEG-IFNα treatment. METHODS: A systematic search was conducted in PubMed, Embase, and the Cochrane Library up to January 11, 2022. Subgroup analyses were performed for HBeAg-positive and HBeAg-negative patients, PEG-IFNα monotherapy and PEG-IFNα combination therapy, treatment-naive and treatment-experienced patients, and patients with or without liver cirrhosis. RESULTS: The predictive model incorporated 102 studies. The overall HBsAg clearance rates at the end of treatment (EOT) and the end of follow-up (EOF) were 10.6% (95% CI 7.8-13.7%) and 11.1% (95% CI 8.4-14.1%), respectively. Baseline HBsAg quantification was the most significant factor. According to the model, when baseline HBsAg levels are 100, 500, 1500, and 10,000 IU/ml, the projected HBsAg clearance rates at EOF reach 53.9% (95% CI 40.4-66.8%), 32.1% (95% CI 24.8-38.7%), 14.2% (95% CI 9.8-18.8%), and 7.9% (95% CI 4.2-11.8%), respectively. Additionally, HBeAg-negative, treatment-experienced patients without liver cirrhosis exhibited higher HBsAg clearance rates after PEG-IFNα treatment. CONCLUSION: A predictive model was successfully established for the achievement of functional cure in CHB patients receiving PEG-IFNα therapy.
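As a rough illustration of how the reported clearance rates fall with baseline HBsAg, the four (level, rate) pairs quoted above can be interpolated in log10 space. This is a reader's sketch fitted to the abstract's numbers, not the published predictive model:

```python
import numpy as np

# EOF clearance rates at four baseline HBsAg levels, as reported in the abstract.
hbsag_iu_ml = np.array([100, 500, 1500, 10000], dtype=float)
clearance_pct = np.array([53.9, 32.1, 14.2, 7.9])

# Fit a straight line in log10(HBsAg) space -- a crude stand-in for the
# authors' model, whose actual form is not given in the abstract.
slope, intercept = np.polyfit(np.log10(hbsag_iu_ml), clearance_pct, 1)

def predicted_clearance(hbsag):
    """Interpolated EOF HBsAg clearance rate (%) for a baseline level (IU/ml)."""
    return slope * np.log10(hbsag) + intercept

print(round(predicted_clearance(200), 1))
```

The fitted slope is negative, matching the abstract's central observation that lower baseline HBsAg predicts a higher chance of functional cure.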

2.
World J Gastroenterol ; 26(2): 134-153, 2020 Jan 14.
Article in English | MEDLINE | ID: mdl-31969776

ABSTRACT

BACKGROUND: Hepatocellular carcinoma (HCC) is a common cancer with a poor prognosis. Previous studies revealed that the tumor microenvironment (TME) plays an important role in HCC progression, recurrence, and metastasis, leading to poor prognosis. However, the effects of TME-related genes on the prognosis of HCC patients remain unclear. Here, we investigated the HCC microenvironment to identify prognostic genes for HCC. AIM: To identify a robust gene signature associated with the HCC microenvironment to improve prognosis prediction of HCC. METHODS: We computed the immune/stromal scores of HCC patients obtained from The Cancer Genome Atlas using the ESTIMATE algorithm. Additionally, a risk score model was established based on differentially expressed genes (DEGs) between high- and low-immune/stromal score patients. RESULTS: A risk score model consisting of eight genes was constructed and validated in HCC patients, who were divided into high- and low-risk groups. The eight genes in the model (Disabled homolog 2, Musculin, C-X-C motif chemokine ligand 8, Galectin 3, B-cell-activating transcription factor, Killer cell lectin-like receptor B1, Endoglin, and adenomatous polyposis coli tumor suppressor) are potential immunotherapy targets and may perform better in combination. Functional enrichment analysis showed that the immune response and the T cell receptor signaling pathway were the major function and pathway, respectively, associated with the immune-related genes among the DEGs between high- and low-risk groups. Receiver operating characteristic (ROC) curve analysis confirmed the good potency of the risk score prognostic model. Moreover, we validated the model using the International Cancer Genome Consortium and the Gene Expression Omnibus databases. A nomogram was established to predict the overall survival of HCC patients. CONCLUSION: The risk score model and the nomogram will benefit HCC patients through personalized immunotherapy.
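A risk score of this kind is typically a weighted sum of the signature genes' expression, with patients split at the median score. The sketch below uses the eight gene symbols as commonly aliased and entirely hypothetical coefficients (the published weights are not in the abstract); it only illustrates the mechanics:

```python
import numpy as np

# Common aliases for the eight genes named in the abstract (placeholders
# for the authors' exact identifiers).
genes = ["DAB2", "MSC", "CXCL8", "LGALS3", "BATF", "KLRB1", "ENG", "APC"]

# Hypothetical Cox-style coefficients -- illustrative only, not the
# published values.
coefs = np.array([0.21, 0.15, 0.30, 0.12, -0.18, -0.25, 0.10, -0.08])

def risk_scores(expr):
    """expr: (n_patients, 8) expression matrix -> one risk score per patient."""
    return expr @ coefs

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 8))           # toy expression data
scores = risk_scores(expr)
high_risk = scores > np.median(scores)     # median split into two groups
print(high_risk.sum(), (~high_risk).sum())
```

In the actual workflow the two groups would then be compared with Kaplan-Meier curves and a log-rank test.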


Subject(s)
Biomarkers, Tumor/genetics , Carcinoma, Hepatocellular/mortality , Liver Neoplasms/mortality , Models, Genetic , Tumor Microenvironment/genetics , Aged , Antineoplastic Agents, Immunological/pharmacology , Antineoplastic Agents, Immunological/therapeutic use , Biomarkers, Tumor/antagonists & inhibitors , Carcinoma, Hepatocellular/drug therapy , Carcinoma, Hepatocellular/genetics , Carcinoma, Hepatocellular/immunology , Databases, Genetic/statistics & numerical data , Datasets as Topic , Female , Gene Expression Profiling/methods , Gene Expression Regulation, Neoplastic/immunology , Humans , Kaplan-Meier Estimate , Liver/immunology , Liver/pathology , Liver Neoplasms/drug therapy , Liver Neoplasms/genetics , Liver Neoplasms/immunology , Male , Middle Aged , Neoplasm Staging , Nomograms , Precision Medicine/methods , ROC Curve , Risk Assessment/methods , Treatment Outcome , Tumor Microenvironment/immunology
3.
Neural Netw ; 119: 313-322, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31499355

ABSTRACT

Heterogeneous domain adaptation aims to exploit source domain data to train a prediction model for a target domain with a different input feature space. Current methods either map data points from the different feature spaces into a common latent subspace or learn the classifier with asymmetric projections. However, these methods separate common-space learning from shared-classifier training, which can lead to complex model structures and more parameters to determine. To address this problem, we propose a novel bidirectional ECOC projection method, named HDA-ECOC, for heterogeneous domain adaptation. The proposed method projects the inputs and outputs (labels) of the two domains into a common ECOC coding space, so that common-space learning and shared-classifier training can be performed simultaneously. Classification of a target test sample then reduces to ECOC decoding. Moreover, unlabeled target data are exploited by enforcing consistency between the two domains' projected instances through a maximum mean discrepancy (MMD) criterion. We formulate the method as a dual convex minimization problem and propose an alternating optimization algorithm to solve it. For performance evaluation, experiments are conducted on cross-lingual text classification and cross-domain digital image classification with heterogeneous feature spaces. The results demonstrate that the proposed method solves heterogeneous domain adaptation problems effectively and efficiently.
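The MMD criterion mentioned above compares the kernel mean embeddings of two samples; its squared value is near zero when the distributions match. A minimal self-contained sketch with an RBF kernel (the kernel choice and bandwidth are illustrative, not taken from the paper):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy between
    samples X and Y under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 3))   # projected source instances
tgt = rng.normal(0.0, 1.0, size=(200, 3))   # projected target instances
far = rng.normal(3.0, 1.0, size=(200, 3))   # a clearly shifted sample
print(rbf_mmd2(src, tgt), rbf_mmd2(src, far))
```

Minimizing this quantity between the two domains' projected instances pulls their distributions together in the common coding space.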


Subject(s)
Algorithms , Learning , Neural Networks, Computer , Information Storage and Retrieval , Transfer, Psychology
4.
IEEE Trans Neural Netw Learn Syst ; 30(4): 1180-1190, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30176608

ABSTRACT

In most domain adaptation approaches, all features are used for adaptation. However, not every feature is beneficial, and incorrectly involving all of them can degrade performance. In other words, to make a model trained on the source domain work well on the target domain, it is desirable to find invariant features rather than use all features. However, invariant features across domains may lie in a higher-order space instead of the original feature space. Moreover, the discriminative ability of some invariant features, such as shared background information, is weak and requires further filtering. Therefore, in this paper, we propose a novel domain adaptation algorithm based on an explicit feature map and feature selection. The data are first represented through a kernel-induced explicit feature map, so that high-order invariant features can be revealed. Then, by minimizing the marginal distribution difference, the conditional distribution difference, and the model error, invariant discriminative features are effectively selected. The resulting problem is NP-hard, so we relax it and solve it with a cutting plane algorithm. Experimental results on six real-world benchmarks demonstrate the effectiveness and efficiency of the proposed algorithm, which outperforms many state-of-the-art domain adaptation approaches.
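The two core ideas, an explicit feature map to expose higher-order features and selection of the features that vary least across domains, can be sketched as follows. This is a simplified reader's illustration: it uses a degree-2 polynomial map and only a per-feature marginal criterion, whereas the paper's objective also includes the conditional distribution difference and the model error:

```python
import numpy as np

def poly2_map(X):
    """Explicit degree-2 polynomial feature map: [x, x_i * x_j for i <= j]."""
    n, d = X.shape
    cross = np.stack([X[:, i] * X[:, j]
                      for i in range(d) for j in range(i, d)], axis=1)
    return np.hstack([X, cross])

def invariant_features(Xs, Xt, k):
    """Keep the k mapped features whose source/target means differ least."""
    Hs, Ht = poly2_map(Xs), poly2_map(Xt)
    gap = np.abs(Hs.mean(0) - Ht.mean(0))   # per-feature marginal gap
    return np.argsort(gap)[:k]

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 4))
Xt = rng.normal(size=(100, 4))
Xt[:, 0] += 2.0                    # feature 0 is shifted across domains
idx = invariant_features(Xs, Xt, k=5)
print(idx)
```

As expected, the shifted raw feature (index 0) and its cross terms are rejected, while features whose distributions agree across domains survive the selection.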

5.
Comput Intell Neurosci ; 2016: 5197932, 2016.
Article in English | MEDLINE | ID: mdl-27143958

ABSTRACT

We propose a simple online learning algorithm, especially suited to high-dimensional data, referred to as the online sequential projection vector machine (OSPVM). It derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters, including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes, can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be set manually, which makes the method easy to use in real applications. OSPVM was compared on various high-dimensional classification problems against other fast online algorithms, including budgeted stochastic gradient descent (BSGD), the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (SVD-based feature selection performed before OSELM). The results demonstrate the superior generalization performance and efficiency of OSPVM.
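The chunk-by-chunk mode rests on a recursive least-squares update of the output weights, the same scheme OS-ELM uses. The sketch below shows only that sequential weight update on a toy network with fixed random hidden nodes (the projection-vector and hidden-node updates that distinguish OSPVM are omitted), and verifies that the sequential solution matches batch regularized least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

d, h = 10, 20
W = rng.normal(size=(d, h))               # fixed random input weights
b = rng.normal(size=h)                    # fixed biases
hidden = lambda X: np.tanh(X @ W + b)     # hidden-layer output

X = rng.normal(size=(200, d))
T = rng.normal(size=(200, 1))

# Initialize from the first chunk.
H0 = hidden(X[:50])
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(h))
beta = P @ H0.T @ T[:50]

# Sequential (chunk-by-chunk) recursive least-squares updates.
for lo in range(50, 200, 50):
    Hk, Tk = hidden(X[lo:lo + 50]), T[lo:lo + 50]
    P = P - P @ Hk.T @ np.linalg.inv(np.eye(len(Hk)) + Hk @ P @ Hk.T) @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)

# The sequential solution equals the batch (regularized) least-squares one.
H = hidden(X)
beta_batch = np.linalg.solve(H.T @ H + 1e-6 * np.eye(h), H.T @ T)
print(np.allclose(beta, beta_batch, atol=1e-4))
```

Because each update is rank-limited by the chunk size, the per-chunk cost stays constant regardless of how much data has already been seen.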


Subject(s)
Algorithms , Support Vector Machine
6.
Neural Netw ; 77: 14-28, 2016 May.
Article in English | MEDLINE | ID: mdl-26907860

ABSTRACT

Big dimensional data is a growing trend emerging in many real-world contexts, from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. There is a growing consensus that increasing dimensionality impedes classifier performance, an effect termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step before building classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of big dimensional data well, exhibiting excellent generalization performance. The drawback of applying SVD to the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resulting algorithm is labeled the Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM for short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, they are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons against other state-of-the-art algorithms demonstrate the superior generalization performance and efficiency of FSVD-H-ELM.
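The divide-and-conquer idea, deriving SVD hidden nodes from random subsets rather than the full dataset, can be sketched in a few lines. The details below (subset counts, node allocation, tanh activation) are simplifications chosen for illustration, not the paper's exact construction:

```python
import numpy as np

def svd_hidden_nodes(X, n_nodes, n_subsets=4, subset_size=50, seed=0):
    """Build hidden-node input weights from the top right singular vectors
    of random data subsets (the FSVD-H-ELM divide-and-conquer idea)."""
    rng = np.random.default_rng(seed)
    per = n_nodes // n_subsets
    blocks = []
    for _ in range(n_subsets):
        idx = rng.choice(len(X), size=subset_size, replace=False)
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
        blocks.append(Vt[:per].T)          # top right singular vectors
    return np.hstack(blocks)               # (d, n_nodes) input weights

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 30))
X[:, :2] *= 3.0                            # give two features dominant variance
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

W = svd_hidden_nodes(X, n_nodes=20)
H = np.tanh(X @ W)                           # hidden-layer outputs
beta = np.linalg.lstsq(H, y, rcond=None)[0]  # ELM output weights
acc = (((H @ beta) > 0.5) == (y > 0.5)).mean()  # training accuracy
print(acc)
```

Because the SVDs run on small subsets, the cost of building the hidden layer stays modest even when the full dataset is large.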


Subject(s)
Machine Learning , Neural Networks, Computer , Data Mining/methods
7.
Neural Netw ; 76: 29-38, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26829605

ABSTRACT

In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the Support Vector Machine (SVM) and Least Squares SVM (LS-SVM), which identify the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant savings in training cost can be attained, especially on big datasets. RKELM is established on a rigorous proof of universal learning for reduced kernel-based single-hidden-layer feedforward networks (SLFNs). In particular, we prove that RKELM can approximate any nonlinear function accurately provided sufficiently many support vectors are used. Experimental results on a wide variety of real-world applications of small and large instance size, covering binary classification, multi-class problems, and regression, show that RKELM achieves generalization performance competitive with SVM/LS-SVM at only a fraction of the computational effort.
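The key mechanism, replacing iterative support-vector selection with a random subset and a single regularized least-squares solve, fits in a short sketch. The kernel, bandwidth, and regularization values here are illustrative choices, not the paper's settings:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class RKELM:
    """Reduced Kernel ELM sketch: a random subset of the training data
    serves as the kernel mapping samples, so training is one regularized
    least-squares solve -- no iterative support-vector selection."""
    def __init__(self, n_support=30, C=100.0, seed=0):
        self.n_support, self.C, self.seed = n_support, C, seed

    def fit(self, X, T):
        rng = np.random.default_rng(self.seed)
        idx = rng.choice(len(X), size=self.n_support, replace=False)
        self.Xs = X[idx]                     # randomly chosen support set
        K = rbf(X, self.Xs)                  # (N, n_support) kernel map
        self.beta = np.linalg.solve(
            K.T @ K + np.eye(self.n_support) / self.C, K.T @ T)
        return self

    def predict(self, X):
        return rbf(X, self.Xs) @ self.beta

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(float).reshape(-1, 1)  # a ring
model = RKELM().fit(X[:300], y[:300])
acc = ((model.predict(X[300:]) > 0.5) == (y[300:] > 0.5)).mean()
print(acc)
```

Training cost scales with the support-set size rather than the full dataset, which is where the claimed savings over iterative SVM training come from.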


Subject(s)
Machine Learning , Support Vector Machine , Algorithms
8.
Neural Netw ; 53: 1-7, 2014 May.
Article in English | MEDLINE | ID: mdl-24513850

ABSTRACT

Activity recognition based on mobile embedded accelerometers is important for developing human-centric pervasive applications such as healthcare and personalized recommendation. However, the distribution of accelerometer data is heavily affected by the user: performance degrades when a model trained on one person is applied to others. To solve this problem, we propose a fast and accurate cross-person activity recognition model, TransRKELM (Transfer learning Reduced Kernel Extreme Learning Machine), which uses RKELM (Reduced Kernel Extreme Learning Machine) to build the initial activity recognition model. In the online phase, OS-RKELM (Online Sequential Reduced Kernel Extreme Learning Machine) updates the initial model, efficiently adapting it to new device users based on recognition results with a high confidence level. Experimental results show that the proposed model adapts the classifier to new device users quickly and achieves good recognition performance.
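The adaptation loop amounts to pseudo-labeling: predict on the new user's unlabeled data, keep only high-confidence predictions, and update the recognizer on them. In the sketch below a ridge least-squares model and a full refit stand in for the paper's RKELM and its OS-RKELM sequential update, and the data, confidence threshold, and shift are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Source user's labeled data; target user's data is shifted (unlabeled).
Xs = rng.normal(size=(200, 5)); ys = (Xs[:, 0] > 0).astype(float)
Xt = rng.normal(size=(200, 5)) + 0.3

def fit(X, y, l=1e-2):
    """Ridge least squares with a bias column (stand-in for RKELM)."""
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(X1.T @ X1 + l * np.eye(X1.shape[1]), X1.T @ y)

def predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

w = fit(Xs, ys)                              # initial recognizer

# Online adaptation: pseudo-label the new user's data, keep only
# high-confidence predictions, and retrain on them (the paper uses an
# OS-RKELM sequential update instead of a full refit).
scores = predict(w, Xt)
keep = np.abs(scores - 0.5) > 0.25           # confidence filter
w_adapted = fit(Xt[keep], (scores[keep] > 0.5).astype(float))
print(keep.sum(), "confident samples used for adaptation")
```

Filtering on confidence is what keeps the self-training loop from reinforcing its own mistakes on ambiguous samples.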


Subject(s)
Artificial Intelligence , Biometric Identification/methods , Models, Theoretical , Humans , Software