Results 1 - 20 of 27
1.
Neural Netw ; 178: 106483, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38954893

ABSTRACT

In reinforcement learning, accurate estimation of the Q-value is crucial for acquiring an optimal policy. However, current successful Actor-Critic methods still suffer from underestimation bias. Additionally, a significant estimation bias exists regardless of the method used in the critic initialization phase. To address these challenges and reduce estimation errors, we propose CEILING, a simple and compatible framework that can be applied to any model-free Actor-Critic method. The core idea of CEILING is to evaluate the superiority of different estimation methods by incorporating the true Q-value, calculated using Monte Carlo, during the training process. CEILING consists of two implementations: the Direct Picking Operation and the Exponential Softmax Weighting Operation. The first implementation selects the optimal method at each fixed step and applies it in subsequent interactions until the next selection. The other implementation utilizes a nonlinear weighting function that dynamically assigns larger weights to more accurate methods. Theoretically, we demonstrate that our methods provide a more accurate and stable Q-value estimation. Additionally, we analyze the upper bound of the estimation bias. Based on these two implementations, we propose specific algorithms and their variants, and our methods achieve superior performance on several benchmark tasks.
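
As a rough illustration of the Exponential Softmax Weighting idea (not the paper's CEILING implementation), the sketch below weights candidate Q-estimates by how close they are to a Monte Carlo reference return; the function names, the temperature parameter, and the absolute-error criterion are assumptions made for this example only.

```python
import numpy as np

def softmax_weights(q_estimates, q_mc, temperature=1.0):
    """Assign larger weights to Q-estimates that lie closer to the Monte Carlo
    reference value q_mc (illustrative only, not the paper's exact weighting)."""
    errors = np.abs(np.asarray(q_estimates, dtype=float) - q_mc)
    logits = -errors / temperature
    logits -= logits.max()                 # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def blended_q(q_estimates, q_mc, temperature=1.0):
    """Weighted combination of candidate Q-estimates used as a critic target."""
    w = softmax_weights(q_estimates, q_mc, temperature)
    return float(np.dot(w, np.asarray(q_estimates, dtype=float)))

# Example: three candidate estimators versus a Monte Carlo return of 10.0
print(blended_q([9.5, 12.0, 8.0], q_mc=10.0))
```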


Subject(s)
Algorithms; Reinforcement, Psychology; Monte Carlo Method; Humans; Machine Learning; Neural Networks, Computer; Computer Simulation
2.
Article in English | MEDLINE | ID: mdl-38739517

ABSTRACT

In point clouds, some regions typically contain nodes from multiple categories, i.e., these regions have both homophilic and heterophilic nodes. However, most existing methods ignore the heterophily of edges during the aggregation of neighborhood node features, which inevitably mixes in unnecessary information from heterophilic nodes and leads to blurred segmentation boundaries. To address this problem, we model the point cloud as a homophilic-heterophilic graph and propose a graph regulation network (GRN) to produce finer segmentation boundaries. The proposed method can adaptively adjust the propagation mechanism with the degree of neighborhood homophily. Moreover, we build a prototype feature extraction module, which is used to mine the homophily features of nodes from the global prototype space. Theoretically, we prove that our convolution operation can constrain the similarity of representations between nodes based on their degree of homophily. Extensive experiments on fully and weakly supervised point cloud semantic segmentation tasks demonstrate that our method achieves satisfactory performance. Especially in the case of weak supervision, that is, when each sample has only 1%-10% labeled points, the proposed method yields a significant improvement in segmentation performance.
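
A schematic of homophily-aware propagation (not the GRN operator itself) is sketched below: each node's neighborhood homophily is estimated from pseudo-labels and used to gate how much neighbor information is mixed into the node feature. All names and the gating rule are assumptions for illustration.

```python
import numpy as np

def homophily_gated_aggregation(features, neighbors, pseudo_labels):
    """features: (n, d) array; neighbors: list of neighbor-index lists; pseudo_labels: (n,) array.
    Mixes in neighbor information in proportion to the estimated neighborhood homophily."""
    features = np.asarray(features, dtype=float)
    pseudo_labels = np.asarray(pseudo_labels)
    out = np.empty_like(features)
    for i in range(len(features)):
        nbrs = neighbors[i]
        if not nbrs:
            out[i] = features[i]
            continue
        homophily = np.mean(pseudo_labels[nbrs] == pseudo_labels[i])   # ratio in [0, 1]
        neighbor_mean = features[nbrs].mean(axis=0)
        out[i] = homophily * neighbor_mean + (1.0 - homophily) * features[i]
    return out
```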

3.
N Engl J Med ; 390(20): 1862-1872, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38752650

ABSTRACT

BACKGROUND: Treatment of acute stroke, before a distinction can be made between ischemic and hemorrhagic types, is challenging. Whether very early blood-pressure control in the ambulance improves outcomes among patients with undifferentiated acute stroke is uncertain. METHODS: We randomly assigned patients with suspected acute stroke that caused a motor deficit and with elevated systolic blood pressure (≥150 mm Hg), who were assessed in the ambulance within 2 hours after the onset of symptoms, to receive immediate treatment to lower the systolic blood pressure (target range, 130 to 140 mm Hg) (intervention group) or usual blood-pressure management (usual-care group). The primary efficacy outcome was functional status as assessed by the score on the modified Rankin scale (range, 0 [no symptoms] to 6 [death]) at 90 days after randomization. The primary safety outcome was any serious adverse event. RESULTS: A total of 2404 patients (mean age, 70 years) in China underwent randomization and provided consent for the trial: 1205 in the intervention group and 1199 in the usual-care group. The median time between symptom onset and randomization was 61 minutes (interquartile range, 41 to 93), and the mean blood pressure at randomization was 178/98 mm Hg. Stroke was subsequently confirmed by imaging in 2240 patients, of whom 1041 (46.5%) had a hemorrhagic stroke. At the time of patients' arrival at the hospital, the mean systolic blood pressure in the intervention group was 159 mm Hg, as compared with 170 mm Hg in the usual-care group. Overall, there was no difference in functional outcome between the two groups (common odds ratio, 1.00; 95% confidence interval [CI], 0.87 to 1.15), and the incidence of serious adverse events was similar in the two groups. Prehospital reduction of blood pressure was associated with a decrease in the odds of a poor functional outcome among patients with hemorrhagic stroke (common odds ratio, 0.75; 95% CI, 0.60 to 0.92) but an increase among patients with cerebral ischemia (common odds ratio, 1.30; 95% CI, 1.06 to 1.60). CONCLUSIONS: In this trial, prehospital blood-pressure reduction did not improve functional outcomes in a cohort of patients with undifferentiated acute stroke, of whom 46.5% subsequently received a diagnosis of hemorrhagic stroke. (Funded by the National Health and Medical Research Council of Australia and others; INTERACT4 ClinicalTrials.gov number, NCT03790800; Chinese Trial Registry number, ChiCTR1900020534.).


Subject(s)
Antihypertensive Agents; Blood Pressure; Emergency Medical Services; Hypertension; Stroke; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Ambulances; Antihypertensive Agents/administration & dosage; Antihypertensive Agents/adverse effects; Antihypertensive Agents/therapeutic use; Blood Pressure/drug effects; Hypertension/complications; Hypertension/drug therapy; Ischemic Stroke/therapy; Stroke/etiology; Stroke/therapy; Time-to-Treatment; Acute Disease; Functional Status; China
4.
Neurology ; 102(7): e209217, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38489544

ABSTRACT

BACKGROUND AND OBJECTIVES: Acute stent thrombosis (AST) is not uncommon and even catastrophic during intracranial stenting angioplasty in patients with symptomatic high-grade intracranial atherosclerotic stenosis (ICAS). The purpose of this study was to investigate whether adjuvant intravenous tirofiban before stenting could reduce the risk of AST and periprocedural ischemic stroke in patients receiving stent angioplasty for symptomatic ICAS. METHODS: A prospective, multicenter, open-label, randomized clinical trial was conducted from September 9, 2020, to February 18, 2022, at 10 medical centers in China. Patients intended to receive stent angioplasty for symptomatic high-grade ICAS were enrolled and randomly assigned to receive intravenous tirofiban or not before stenting in a 1:1 ratio. The primary outcomes included the incidence of AST within 30 minutes after stenting, periprocedural new-onset ischemic stroke, and symptomatic intracranial hemorrhage. The outcomes were analyzed using logistic regression analysis to obtain an odds ratio and 95% confidence interval. RESULTS: A total of 200 participants (122 men [61.0%]; median [interquartile ranges] age, 57 [52-66] years) were included in the analysis, with 100 participants randomly assigned to the tirofiban group and 100 participants to the control (no tirofiban) group. The AST incidence was lower in the tirofiban group than that in the control group (4.0% vs 14.0%; adjusted odds ratio, 0.25; 95% CI 0.08-0.82; p = 0.02). No significant difference was observed in the incidence of periprocedural ischemic stroke (7.0% vs 8.0%; p = 0.98) or symptomatic intracranial hemorrhage between the 2 groups. DISCUSSION: This study suggests that adjuvant intravenous tirofiban before stenting could lower the risk of AST during stent angioplasty in patients with symptomatic high-grade ICAS. TRIAL REGISTRATION INFORMATION: URL: chictr.org.cn; Unique identifier: ChiCTR2000031935. CLASSIFICATION OF EVIDENCE: This study provides Class II evidence that for patients with symptomatic high-grade ICAS, pretreatment with tirofiban decreases the incidence of acute stent thrombosis. This study is Class II due to the unequal distribution of involved arteries between the 2 groups.


Subject(s)
Intracranial Arteriosclerosis; Ischemic Stroke; Stroke; Thrombosis; Male; Humans; Middle Aged; Tirofiban/therapeutic use; Stroke/etiology; Prospective Studies; Constriction, Pathologic/complications; Stents/adverse effects; Ischemic Stroke/complications; Intracranial Hemorrhages/complications; Thrombosis/complications; Intracranial Arteriosclerosis/drug therapy; Intracranial Arteriosclerosis/surgery; Treatment Outcome
5.
Article in English | MEDLINE | ID: mdl-37966927

ABSTRACT

In this article, a new unsupervised contrastive clustering (CC) model is introduced, namely, image CC with self-learning pairwise constraints (ICC-SPC). This model is designed to integrate pairwise constraints into the CC process, enhancing the latent representation learning and improving clustering results for image data. The incorporation of pairwise constraints helps reduce the impact of false negatives and false positives in contrastive learning, while maintaining robust cluster discrimination. However, obtaining prior pairwise constraints from unlabeled data directly is quite challenging in unsupervised scenarios. To address this issue, ICC-SPC designs a pairwise constraints learning module. This module autonomously learns pairwise constraints among data samples by leveraging consensus information between latent representation and pseudo-labels, which are generated by the clustering algorithm. Consequently, there is no requirement for labeled images, offering a practical resolution to the challenge posed by the lack of sufficient supervised information in unsupervised clustering tasks. ICC-SPC's effectiveness is validated through evaluations on multiple benchmark datasets. This contribution is significant, as we present a novel framework for unsupervised clustering by integrating contrastive learning with self-learning pairwise constraints.

6.
Neural Netw ; 168: 459-470, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37806139

ABSTRACT

Graph Convolutional Networks (GCNs) have shown remarkable performance in processing graph-structured data by leveraging neighborhood information for node representation learning. While most GCN models assume strong homophily within the networks they handle, some models can also handle heterophilous graphs. However, the selection of neighbors participating in the node representation learning process can significantly impact these models' performance. To address this, we investigate the influence of neighbor selection on GCN performance, focusing on the analysis of edge distribution through theoretical and empirical approaches. Based on our findings, we propose a novel GCN model called Graph Convolution Network with Improved Edge Distribution (GCN-IED). GCN-IED incorporates both direct edges, which rely on local neighborhood similarity, and hidden edges, obtained by aggregating information from multi-hop neighbors. We extensively evaluate GCN-IED on diverse graph benchmark datasets and observe its superior performance compared to other state-of-the-art GCN methods on heterophilous datasets. Our GCN-IED model, which considers the role of neighbors and optimizes edge distribution, provides valuable insights for enhancing graph representation learning and achieving superior performance on heterophilous graphs.


Subject(s)
Benchmarking; Learning
7.
Article in English | MEDLINE | ID: mdl-37672370

ABSTRACT

Consensus clustering aims to find a high-quality, robust partition that is in agreement with multiple existing base clusterings. However, its computational cost is often very expensive, and the quality of the final clustering is easily affected by uncertain consensus relations between clusters. To solve these problems, we develop a new k-type algorithm, called k-relations-based consensus clustering with double entropy-norm regularizers (KRCC-DE). In this algorithm, we build an optimization model to learn a consensus-relation matrix between final and base clusters and employ double entropy-norm regularizers to control the distribution of these consensus relations, which can reduce the impact of uncertain consensus relations. The proposed algorithm uses an iterative strategy with strict updating formulas to obtain the optimal solution. Since its computational complexity is linear in the number of objects, base clusters, and final clusters, it can solve the consensus clustering problem at low computational cost. In the experimental analysis, we compared the proposed algorithm with other k-type and global-search consensus clustering algorithms on benchmark datasets. The experimental results illustrate that the proposed algorithm balances the quality of the final clustering and its computational cost well.
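
One generic building block worth making explicit is the closed form that an entropy-norm regularizer induces: minimizing a linear cost plus an entropy term over row-stochastic relations yields row-wise softmax relations. The sketch below shows only this generic step, with an assumed disagreement-cost matrix; it is not the full KRCC-DE update.

```python
import numpy as np

def entropy_regularized_relations(cost, gamma=1.0):
    """Minimize <P, cost> + gamma * sum(P * log P) subject to each row of P summing to 1.
    The minimizer is a row-wise softmax of -cost / gamma (generic illustration only)."""
    logits = -np.asarray(cost, dtype=float) / gamma
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

# Example: disagreement costs between 2 final clusters and 3 base clusters
print(entropy_regularized_relations([[0.1, 0.9, 0.5],
                                     [0.8, 0.2, 0.4]], gamma=0.5))
```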

8.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14975-14989, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37490384

ABSTRACT

Graph convolutional neural networks can effectively process geometric data and thus have been successfully used in point cloud data representation. However, existing graph-based methods usually adopt the K-nearest neighbor (KNN) algorithm to construct graphs, which may not be optimal for point cloud analysis tasks, since the KNN solution is independent of network training. In this paper, we propose a novel graph structure learning convolutional neural network (GSLCN) for multiple point cloud analysis tasks. The fundamental concept is a general graph structure learning architecture (GSL) that builds long-range and short-range dependency graphs. To learn optimal graphs that best serve to extract local features and investigate global contextual information, respectively, we integrate the GSL with the designed graph convolution operator under a unified framework. Furthermore, we design graph structure losses with some prior knowledge to guide graph learning during network training. The main benefit is that given labels and prior knowledge are taken into account in GSLCN, providing useful supervised information to build graphs and thus facilitating the graph convolution operation for the point cloud. Experimental results on challenging benchmarks demonstrate that the proposed framework achieves excellent performance for point cloud classification, part segmentation, and semantic segmentation.
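
For context, the fixed KNN graph construction that the abstract argues against is easy to state; the sketch below builds such a graph from raw point coordinates. It is the conventional baseline only, not the learned GSL graph, and the array shapes are assumptions.

```python
import numpy as np

def knn_graph(points, k=16):
    """points: (n, 3) array of xyz coordinates. Returns an (n, k) array of neighbor
    indices per point, computed once and independently of any network training."""
    diff = points[:, None, :] - points[None, :, :]          # (n, n, 3) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)                    # (n, n) Euclidean distances
    np.fill_diagonal(dist, np.inf)                          # exclude self-loops
    return np.argsort(dist, axis=1)[:, :k]                  # k nearest neighbors per point

points = np.random.rand(1024, 3)
print(knn_graph(points, k=8).shape)   # (1024, 8)
```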

9.
Article in English | MEDLINE | ID: mdl-37335781

ABSTRACT

Few-shot knowledge graph completion (FKGC), which aims to infer new triples for a relation using only a few reference triples of the relation, has attracted much attention in recent years. Most existing FKGC methods learn a transferable embedding space, where entity pairs belonging to the same relations are close to each other. In real-world knowledge graphs (KGs), however, some relations may involve multiple semantics, and their entity pairs are not always close due to having different meanings. Hence, the existing FKGC methods may yield suboptimal performance when handling multiple semantic relations in the few-shot scenario. To solve this problem, we propose a new method named adaptive prototype interaction network (APINet) for FKGC. Our model consists of two major components: 1) an interaction attention encoder (InterAE) to capture the underlying relational semantics of entity pairs by modeling the interactive information between head and tail entities and 2) an adaptive prototype net (APNet) to generate relation prototypes adaptive to different query triples by extracting query-relevant reference pairs and reducing the data inconsistency between support and query sets. Experimental results on two public datasets demonstrate that APINet outperforms several state-of-the-art FKGC methods. The ablation study demonstrates the rationality and effectiveness of each component of APINet.

10.
Article in English | MEDLINE | ID: mdl-37216237

ABSTRACT

The bagging method has received much attention and wide application in recent years due to its good performance and simple framework. It has facilitated the advanced random forest method and accuracy-diversity ensemble theory. Bagging is an ensemble method based on simple random sampling (SRS) with replacement. However, SRS is the most basic sampling method in statistics, and more advanced sampling methods exist for probability density estimation. In imbalanced ensemble learning, down-sampling, over-sampling, and SMOTE methods have been proposed for generating base training sets. However, these methods aim at changing the underlying distribution of the data rather than simulating it better. The ranked set sampling (RSS) method uses auxiliary information to obtain more effective samples. The purpose of this article is to propose a bagging ensemble method based on RSS, which uses the ordering of objects related to the class to obtain more effective training sets. To explain its performance, we give a generalization bound of the ensemble from the perspective of posterior probability estimation and Fisher information. Because an RSS sample carries higher Fisher information than an SRS sample, the presented bound theoretically explains the better performance of RSS-Bagging. Experiments on 12 benchmark datasets demonstrate that RSS-Bagging statistically performs better than SRS-Bagging when the base classifiers are multinomial logistic regression (MLR) and support vector machines (SVMs).
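
A minimal ranked set sampling draw, under the usual textbook scheme (m ranking sets per cycle, ranked by an auxiliary variable), might look like the sketch below; the variable names and the use of a numeric auxiliary column are assumptions, and the RSS-Bagging ensemble itself is not shown.

```python
import numpy as np

def ranked_set_sample(X, aux, m=5, cycles=10, rng=None):
    """Draw a ranked set sample of size m * cycles from the rows of X.
    In each cycle, m sets of m units are drawn; the i-th set is ranked by the
    auxiliary variable `aux` and its i-th ranked unit is kept."""
    rng = np.random.default_rng(rng)
    chosen = []
    for _ in range(cycles):
        for i in range(m):
            idx = rng.choice(len(X), size=m, replace=False)
            ranked = idx[np.argsort(aux[idx])]
            chosen.append(ranked[i])
    return X[np.array(chosen)]

X = np.random.randn(1000, 4)
aux = X[:, 0] + 0.1 * np.random.randn(1000)    # auxiliary variable correlated with the data
print(ranked_set_sample(X, aux, m=5, cycles=4).shape)   # (20, 4)
```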

11.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 9639-9653, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37022220

ABSTRACT

For a classification task, we usually select an appropriate classifier via model selection. How can we evaluate whether the chosen classifier is optimal? One can answer this question via the Bayes error rate (BER). Unfortunately, estimating the BER is a fundamental conundrum. Most existing BER estimators focus on giving upper and lower bounds of the BER. However, evaluating whether the selected classifier is optimal based on these bounds is hard. In this article, we aim to learn the exact BER instead of bounds on the BER. The core of our method is to transform the BER calculation problem into a noise recognition problem. Specifically, we define a type of noise called Bayes noise and prove that the proportion of Bayes noisy samples in a data set is statistically consistent with the BER of the data set. To recognize the Bayes noisy samples, we present a method consisting of two parts: selecting reliable samples based on percolation theory and then employing a label propagation algorithm to recognize the Bayes noisy samples based on the selected reliable samples. The superiority of the proposed method compared to existing BER estimators is verified on extensive synthetic, benchmark, and image data sets.
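
The stated consistency between the proportion of Bayes noisy samples and the BER is easy to see on a toy problem with known densities; the sketch below uses two 1-D Gaussians with equal priors. Everything here is a synthetic illustration, not the paper's estimator, which works without access to the true densities.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
labels = rng.integers(0, 2, size=n)                        # equal class priors
x = np.where(labels == 0, rng.normal(0.0, 1.0, n), rng.normal(2.0, 1.0, n))

# Posterior of class 1 under the true generating model
p0, p1 = norm.pdf(x, 0.0, 1.0), norm.pdf(x, 2.0, 1.0)
posterior1 = p1 / (p0 + p1)

# A sample is "Bayes noise" if its label disagrees with the Bayes-optimal prediction
bayes_pred = (posterior1 > 0.5).astype(int)
bayes_noise_rate = np.mean(bayes_pred != labels)

# Analytic BER for this two-Gaussian problem is Phi(-1) since the means are 2 apart
print(bayes_noise_rate, norm.cdf(-1.0))                    # both close to ~0.159
```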


Subject(s)
Algorithms; Bayes Theorem
12.
Plants (Basel) ; 12(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36840185

ABSTRACT

Plant nitrogen (N) uptake preference is a key factor affecting plant nutrient acquisition, vegetation composition and ecosystem function. However, few studies have investigated the contribution of different N sources to plant N strategies, especially during the primary succession of a glacial retreat area. By measuring the natural abundance of N isotopes (δ15N) of dominant plants and soil, we estimated the relative contribution of different N forms (ammonium-NH4+, nitrate-NO3- and soluble organic N-DON) and the absorption preferences of nine dominant plants at three stages (12, 40 and 120 years old) of the Hailuogou glacier retreat area. Along the chronosequence of primary succession, dominant plants preferred to absorb NO3- in the early (73.5%) and middle (46.5%) stages. At the late stage, soil NH4+ contributed more than 60.0%. In addition, the contribution of DON to the total N uptake of plants was nearly 19.4%. Thus, the dominant plants' preference for NO3- in the first two stages changes to NH4+ in the late stage during primary succession. The contribution of DON to the N supply of dominant plants should not be ignored. This suggests that the shift in N uptake preference of dominant plants may reflect an adjustment of their N acquisition strategy in response to changes in their physiological traits and soil nutrient conditions. Better knowledge of plant preferences for different N forms could significantly improve our understanding of the potential feedbacks of plant N acquisition strategies to environmental changes, and provide valuable suggestions for the sustainable management of plantations during different successional stages.
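
For readers unfamiliar with how source contributions are derived from δ15N values, the two-source mass-balance case is shown below. The study itself partitions three N forms, which requires a Bayesian mixing model rather than this closed form, and the numbers used here are hypothetical.

```python
def two_source_fraction(d15n_mixture, d15n_source_a, d15n_source_b):
    """Two-source isotope mass balance:
    d15n_mixture = f * d15n_source_a + (1 - f) * d15n_source_b, solved for f."""
    return (d15n_mixture - d15n_source_b) / (d15n_source_a - d15n_source_b)

# Hypothetical values (per mil): plant tissue versus nitrate and ammonium end-members
print(two_source_fraction(d15n_mixture=1.2, d15n_source_a=3.0, d15n_source_b=-1.0))  # 0.55
```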

13.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 5126-5138, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35786548

ABSTRACT

As a leading graph clustering technique, spectral clustering is one of the most widely used clustering methods for capturing complex clusters in data. Additional prior information can help it further reduce the difference between its clustering results and users' expectations. However, it is hard to obtain such prior information in an unsupervised setting to guide the clustering process. To solve this problem, we propose a self-constrained spectral clustering algorithm. In this algorithm, we extend the objective function of spectral clustering by adding pairwise and label self-constrained terms to it. We provide a theoretical analysis to show the roles of the self-constrained terms and the extensibility of the proposed algorithm. Based on the new objective function, we build an optimization model for self-constrained spectral clustering so that we can simultaneously learn the clustering results and the constraints. Furthermore, we propose an iterative method to solve the new optimization problem. Compared with existing versions of spectral clustering algorithms, the new algorithm can discover a high-quality cluster structure of a data set without prior information. Extensive experiments on benchmark data sets illustrate the effectiveness of the proposed algorithm.

14.
IEEE Trans Neural Netw Learn Syst ; 34(10): 7235-7247, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35038298

ABSTRACT

We consider the problem of distinguishing direct causes from direct effects of a target variable of interest using multiple manipulated datasets with unknown manipulated variables and nonidentical data distributions. Recent studies have shown that datasets obtained from manipulated experiments (i.e., manipulated data) contain richer causal information than observational data for causal structure learning. Thus, in this article, we propose a new algorithm that makes full use of the interventional properties of a causal model to discover the direct causes and direct effects of a target variable from multiple datasets with different manipulations. This setting is closer to real-world cases and is also more challenging, and it is the challenge addressed in this article. First, we apply the backward framework to learn the parents and children (PC) of a given target from multiple manipulated datasets. Second, we orient some edges connected to the target in advance through the assumption that the target variable is not manipulated, and then orient the remaining undirected edges by finding invariant V-structures from multiple datasets. Third, we analyze the correctness of the proposed algorithm. To the best of our knowledge, the proposed algorithm is the first that can identify the local causal structure of a given target from multiple manipulated datasets with unknown manipulated variables. Experimental results on standard Bayesian networks validate the effectiveness of our algorithm.

15.
IEEE Trans Pattern Anal Mach Intell ; 45(2): 1798-1816, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35486570

ABSTRACT

The pure accuracy measure is used to eliminate random consistency from the accuracy measure. Biases toward both majority and minority classes are lower for the pure accuracy measure than for the accuracy measure. In this paper, we demonstrate that, compared with the accuracy measure and F-measure, the pure accuracy measure is class-distribution insensitive and discriminative for good classifiers. These advantages make the pure accuracy measure suitable for traditional classification. Further, we mainly focus on two points: exploring a tighter generalization bound for the pure accuracy based learning paradigm and designing a learning algorithm based on the pure accuracy measure. In particular, using the self-bounding property, we build an algorithm-independent generalization bound on the pure accuracy measure, which is tighter than the existing bound of order O(1/√N) (N is the number of instances). The proposed bound is free from smoothness or convexity assumptions on the hypothesis functions. In addition, we design a learning algorithm optimizing the pure accuracy measure and use it in the selective ensemble learning setting. The experiments on sixteen benchmark data sets and four image data sets demonstrate that the proposed method statistically performs better than the other eight representative benchmark algorithms.
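
The abstract does not reproduce the exact definition of pure accuracy, so the sketch below uses the familiar chance-agreement correction (the same idea behind Cohen's kappa) to show what "eliminating random consistency from accuracy" can look like in code; the function name and the correction formula are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def chance_corrected_accuracy(y_true, y_pred):
    """Accuracy with the expected agreement under independent label/prediction
    marginals removed (a kappa-style correction, shown for illustration)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = np.mean(y_true == y_pred)
    classes = np.union1d(y_true, y_pred)
    p_true = np.array([np.mean(y_true == c) for c in classes])
    p_pred = np.array([np.mean(y_pred == c) for c in classes])
    expected = np.sum(p_true * p_pred)          # random-consistency term
    return (acc - expected) / (1.0 - expected)

print(chance_corrected_accuracy([0, 0, 1, 1, 1], [0, 1, 1, 1, 1]))
```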

16.
Article in English | MEDLINE | ID: mdl-35560073

ABSTRACT

Graph neural networks (GNNs) have made great progress in graph-based semi-supervised learning (GSSL). However, most existing GNNs are confronted with the oversmoothing issue that limits their expressive ability. A key factor that leads to this problem is the excessive aggregation of information from other classes when updating the node representation. To alleviate this limitation, we propose an effective method called GUIded Dropout over Edges (GUIDE) for training deep GNNs. The core of the method is to reduce the influence of nodes from other classes by removing a certain number of inter-class edges. In GUIDE, we drop edges according to the edge strength, which is defined as the number of times an edge acts as a bridge along the shortest paths between node pairs. We find that the stronger the edge strength, the more likely it is to be an inter-class edge. In this way, GUIDE can drop more inter-class edges and keep more intra-class edges. Therefore, nodes in the same community or class are more similar, whereas different classes are more separated in the embedded space. In addition, we perform some theoretical analysis of the proposed method, which explains why it is effective in alleviating the oversmoothing problem. To validate its rationality and effectiveness, we conduct experiments on six public benchmarks with different GNN backbones. Experimental results demonstrate that GUIDE consistently outperforms state-of-the-art methods in both shallow and deep GNNs.
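
The edge-strength criterion (how often an edge acts as a shortest-path bridge) is closely related to edge betweenness, so a rough approximation of the dropping step can be sketched with networkx; the drop ratio and the use of betweenness as a stand-in for the paper's exact bridge count are assumptions.

```python
import networkx as nx

def drop_strong_edges(G, drop_ratio=0.1):
    """Remove the fraction of edges with the highest edge betweenness, a proxy for
    the 'bridge along shortest paths' edge strength described above."""
    strength = nx.edge_betweenness_centrality(G)             # {(u, v): score}
    ranked = sorted(strength, key=strength.get, reverse=True)
    n_drop = int(len(ranked) * drop_ratio)
    H = G.copy()
    H.remove_edges_from(ranked[:n_drop])
    return H

G = nx.karate_club_graph()
H = drop_strong_edges(G, drop_ratio=0.15)
print(G.number_of_edges(), H.number_of_edges())
```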

17.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9236-9254, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34752381

ABSTRACT

Multi-modal classification (MMC) aims to integrate the complementary information from different modalities to improve classification performance. Existing MMC methods can be grouped into two categories: traditional methods and deep learning-based methods. The traditional methods often implement fusion in a low-level original space. Besides, they mostly focus on inter-modal fusion and neglect intra-modal fusion, so the representation capacity of the fused features they induce is insufficient. The deep learning-based methods implement fusion in a high-level feature space where the associations among features are considered, but the whole process is implicit and the fused space lacks interpretability. Based on these observations, we propose a novel interpretable association-based fusion method for MMC, named AF. In AF, both the association information and the high-order information extracted from the feature space are simultaneously encoded into a new feature space to help train an MMC model in an explicit manner. Moreover, AF is a general fusion framework, and most existing MMC methods can be embedded into it to improve their performance. Finally, the effectiveness and generality of AF are validated on 22 datasets, four typical traditional MMC methods adopting best-modality, early, late and model fusion strategies, and a deep learning-based MMC method.

18.
Front Immunol ; 12: 582768, 2021.
Article in English | MEDLINE | ID: mdl-34177880

ABSTRACT

Background: The presence of fluid attenuated inversion recovery (FLAIR)-hyperintense lesions in anti-myelin oligodendrocyte glycoprotein (MOG) antibody-associated cerebral cortical encephalitis with seizures (FLAMCES) was recently reported. However, the clinical characteristics and outcome of this rare clinico-radiographic syndrome remain unclear. Methods: The present study reported two new cases. In addition, cases in the literature were systematically reviewed to investigate the clinical symptoms, magnetic resonance imaging (MRI) abnormalities, treatments and prognosis for this rare clinico-radiographic syndrome. Results: A total of 21 cases were identified during a literature review, with a mean patient age at onset of 26.8 years. The primary clinicopathological characteristics included seizures (100%), headache (71.4%), fever (52.3%) and other cortical symptoms associated with the encephalitis location (61.9%). The common seizure types were focal to bilateral tonic-clonic seizures (28.6%) and unknown-onset tonic-clonic seizures (38.1%). The cortical abnormalities on MRI FLAIR imaging were commonly located in the frontal (58.8%), parietal (70.6%) and temporal (64.7%) lobes. In addition, pleocytosis in the cerebrospinal fluid was reported in the majority of the patients (95.2%). All patients received a treatment regimen of corticosteroids and 9 patients received anti-epileptic drugs. Clinical improvement was achieved in all patients; however, one-third of the patients reported relapse following recovery from cortical encephalitis. Conclusions: FLAMCES is a rare phenotype of MOG-associated disease. Thus, the wider recognition of this rare syndrome may enable timely diagnosis and the development of suitable treatment regimens.


Subject(s)
Autoantibodies/metabolism; Cerebral Cortex/pathology; Cerebrospinal Fluid/immunology; Encephalitis/diagnosis; Immune Complex Diseases/diagnosis; Corticosteroids/therapeutic use; Adult; Anticonvulsants/therapeutic use; Cerebral Cortex/immunology; Encephalitis/drug therapy; Female; Headache; Humans; Immune Complex Diseases/drug therapy; Leukocytosis; Magnetic Resonance Imaging; Male; Middle Aged; Myelin-Oligodendrocyte Glycoprotein; Seizures; Young Adult
19.
IEEE Trans Pattern Anal Mach Intell ; 43(9): 3247-3258, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32167885

ABSTRACT

Semi-supervised clustering is one of the important research topics in cluster analysis; it uses pre-given knowledge as constraints to improve clustering performance. While clustering a data set, people often obtain prior constraints from different information sources, which may have different representations and contents, to guide the clustering process. However, most existing semi-supervised clustering algorithms are based on single-source constraints and rarely consider integrating multi-source constraints to enhance clustering quality. To solve this problem, we analyze the relations among different types of constraints and propose a uniform representation for them. Based on it, we propose a new semi-supervised clustering algorithm to find a clustering that has a good cluster structure and high consensus with all the sources of constraints. In the algorithm, we construct an optimization objective model and its solution method to achieve this aim. The algorithm can integrate multi-source constraints well to reduce the effect of incorrect constraints from single sources and find a high-quality clustering. Through experimental studies on several benchmark data sets, we illustrate the effectiveness of the proposed algorithm compared to other semi-supervised clustering algorithms.

20.
Neural Netw ; 132: 394-404, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33010715

ABSTRACT

This study builds a fully deconvolutional neural network (FDNN) and addresses the problem of single image super-resolution (SISR) using the FDNN. Although SISR using deep neural networks has been a major research focus, the problem of reconstructing a high resolution (HR) image with an FDNN has received little attention. A few recent approaches to SISR embed deconvolution operations into multilayer feedforward neural networks. This paper constructs a deep FDNN for SISR that possesses two remarkable advantages compared to existing SISR approaches. First, it improves network performance without increasing the depth of the network or embedding complex structures. Second, it replaces all convolution operations with deconvolution operations to implement an effective reconstruction. That is, the proposed FDNN only contains deconvolution layers and learns an end-to-end mapping from low resolution (LR) to HR images. Furthermore, to avoid the oversmoothness of the mean squared error loss, the reconstructed image is treated as a probability distribution, and the Kullback-Leibler divergence is introduced into the final loss function to achieve enhanced recovery. Although the proposed FDNN only has 10 layers, it is successfully evaluated through extensive experiments. Compared with other state-of-the-art methods and deep convolution neural networks with 20 or 30 layers, the proposed FDNN achieves better performance for SISR.
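
A toy deconvolution-only model with a KL term in the loss can be sketched in PyTorch as below; the layer widths, the ×2 upscaling step, the softmax normalization used to treat images as distributions, and the KL weight are all assumptions rather than the paper's 10-layer design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFDNN(nn.Module):
    """Toy SR network built only from deconvolution (transposed convolution) layers."""
    def __init__(self, channels=3, width=32, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.ConvTranspose2d(channels, width, 3, stride=1, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, width, 3, stride=1, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, channels, 4, stride=scale, padding=1),  # x2 upscaling
        )

    def forward(self, x):
        return self.body(x)

def sr_loss(pred, target, kl_weight=0.1):
    """MSE plus a KL term that treats each image as a normalized distribution over pixels."""
    mse = F.mse_loss(pred, target)
    log_p = F.log_softmax(pred.flatten(1), dim=1)   # predicted image as log-probabilities
    q = F.softmax(target.flatten(1), dim=1)         # target image as probabilities
    return mse + kl_weight * F.kl_div(log_p, q, reduction="batchmean")

lr = torch.rand(2, 3, 16, 16)
hr = torch.rand(2, 3, 32, 32)
model = TinyFDNN()
print(sr_loss(model(lr), hr))
```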


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer