Results 1 - 13 of 13
1.
Sensors (Basel); 22(19), 2022 Oct 10.
Article in English | MEDLINE | ID: mdl-36236789

ABSTRACT

Deep summarization models have succeeded in the video summarization field owing to the development of gated recurrent unit (GRU) and long short-term memory (LSTM) technology. However, for some long videos, GRUs and LSTMs cannot effectively capture long-term dependencies. This paper proposes a deep summarization network with auxiliary summarization losses to address this problem. We introduce an unsupervised auxiliary summarization loss module with LSTM and a swish activation function to capture the long-term dependencies for video summarization; the module can easily be integrated with various networks. The proposed model is an unsupervised deep reinforcement learning framework that does not depend on any labels or user interactions. Additionally, we implement a reward function R(S) that jointly considers the consistency, diversity, and representativeness of the generated summaries. Furthermore, the proposed model is lightweight and can be deployed on mobile devices, enhancing the experience of mobile users and reducing the load on servers. We conducted experiments on two benchmark datasets, and the results demonstrate that the proposed unsupervised approach obtains better summaries than existing video summarization methods. The proposed algorithm also achieves higher F-scores, with a nearly 6.3% increase on the SumMe dataset and a 2.2% increase on the TVSum dataset compared with the DR-DSN model.
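
The consistency term of R(S) is specific to this paper, but the diversity and representativeness terms that reward-based unsupervised summarizers such as DR-DSN build on are standard. A minimal NumPy sketch under those assumptions; the function name, feature layout, and selection encoding are illustrative, not taken from the paper:

import numpy as np

def summary_reward(features, selected, eps=1e-8):
    """Diversity + representativeness reward for a candidate summary.

    features : (T, D) array of per-frame feature vectors
    selected : (T,) boolean array, True where a frame is kept in the summary
    """
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    idx = np.flatnonzero(selected)
    if idx.size < 2:
        return 0.0
    sub = feats[idx]

    # Diversity: mean pairwise dissimilarity among the selected frames.
    sim = sub @ sub.T
    n = idx.size
    diversity = (1.0 - sim).sum() / (n * (n - 1))

    # Representativeness: every frame should be close to some selected frame.
    dists = np.linalg.norm(feats[:, None, :] - sub[None, :, :], axis=2)
    representativeness = np.exp(-dists.min(axis=1).mean())

    return diversity + representativeness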


Subject(s)
Algorithms; Long-Term Memory; Long-Term Memory/physiology
2.
Sensors (Basel); 20(18), 2020 Sep 20.
Article in English | MEDLINE | ID: mdl-32962241

ABSTRACT

Compressed sensing provides an elegant framework for recovering sparse signals from compressed measurements. This paper addresses the problem of sparse signal reconstruction from compressed measurements in a way that is more robust to complex, especially non-Gaussian, noise, which arises in many applications. For this purpose, we present a method that exploits maximum negentropy theory to improve adaptability to noise. The problem is formalized as a constrained minimization in which the objective function is the negentropy of the measurement error under a sparsity-promoting norm constraint. Although several promising algorithms for this minimization have been proposed in the literature, they are computationally demanding and thus cannot be used in many practical situations. To improve on this, we propose an efficient algorithm based on the fast iterative shrinkage-thresholding algorithm (FISTA) that converges quickly. Both theoretical analysis and numerical experiments demonstrate the improved accuracy and convergence rate of the proposed method.
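
The negentropy-based objective is the paper's contribution and is not reproduced here; for reference, the sketch below shows plain FISTA applied to the ordinary l1-regularized least-squares problem, which is the algorithmic template the proposed method builds on. All names are illustrative:

import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x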

3.
Entropy (Basel); 20(2), 2018 Feb 23.
Article in English | MEDLINE | ID: mdl-33265235

ABSTRACT

The cloud radio access network (C-RAN) has become a promising architecture for supporting the massive data traffic of next-generation cellular networks. In a C-RAN, a large number of low-cost remote antenna ports (RAPs) are connected to a single baseband unit (BBU) pool via high-speed, low-latency fronthaul links, which enables efficient resource allocation and interference management. Because the RAPs are geographically distributed, group sparse beamforming schemes, in which only a subset of RAPs is active while a high spectral efficiency is still achieved, have attracted extensive study. However, most studies assume that each user is equipped with a single antenna. How to design a group sparse precoder for users with multiple antennas remains poorly understood, as it requires joint optimization of the mutually coupled transmit and receive beamformers. This paper formulates an optimal joint RAP selection and precoding design problem in a C-RAN with multiple antennas at each user. Specifically, we assume a fixed transmit power constraint for each RAP and investigate the optimal tradeoff between the sum rate and the number of active RAPs. Motivated by compressive sensing theory, the paper formulates the group sparse precoding problem by introducing the ℓ0-norm as a penalty and then uses the reweighted ℓ1 heuristic to find a solution. By adopting the idea of block diagonalization precoding, the problem can be cast as a convex optimization, and an efficient algorithm is proposed based on its Lagrangian dual. Simulation results verify that the proposed algorithm achieves almost the same sum rate as an exhaustive search.
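
The full precoder design (block diagonalization plus the Lagrangian-dual algorithm) is not sketched here; the snippet only illustrates the reweighted ℓ1 heuristic itself on a generic group-sparse least-squares problem, with cvxpy used as a convenient convex solver. The problem setup and names are illustrative assumptions, not the paper's formulation:

import numpy as np
import cvxpy as cp

def reweighted_group_l1(A, y, groups, lam=0.1, n_rounds=5, eps=1e-3):
    """Reweighted l1 heuristic for group-sparse recovery:
    min_x 0.5*||A x - y||^2 + lam * sum_g w_g * ||x_g||_2,
    with the weights w_g refined from the previous solution."""
    n = A.shape[1]
    weights = np.ones(len(groups))
    x_val = np.zeros(n)
    for _ in range(n_rounds):
        x = cp.Variable(n)
        group_norms = [weights[g] * cp.norm(x[idx], 2)
                       for g, idx in enumerate(groups)]
        obj = 0.5 * cp.sum_squares(A @ x - y) + lam * cp.sum(cp.hstack(group_norms))
        cp.Problem(cp.Minimize(obj)).solve()
        x_val = x.value
        # Larger weight on nearly inactive groups pushes them all the way to zero.
        weights = 1.0 / (np.array([np.linalg.norm(x_val[idx]) for idx in groups]) + eps)
    return x_val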

4.
Neural Comput; 27(9): 1951-82, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26161819

ABSTRACT

We present a fast, efficient algorithm for learning an overcomplete dictionary for sparse representation of signals. The whole problem is cast as minimization of the approximation error with a coherence penalty on the dictionary atoms and a sparsity regularization on the coefficient matrix. Because the problem is nonconvex and nonsmooth, it cannot be solved efficiently by ordinary optimization methods. We propose a decomposition scheme and an alternating optimization that turn the problem into a set of piecewise-quadratic, univariate subproblems, each over a single vector variable: either one dictionary atom or one coefficient vector. Although the subproblems are still nonsmooth, they become much simpler, and a closed-form solution can be found by introducing a proximal operator. This leads to an efficient algorithm for sparse representation. To our knowledge, applying the proximal operator to a problem with an incoherence term and obtaining the optimal dictionary atoms in closed form with a proximal operator technique have not previously been studied. The main advantages of the proposed algorithm, as suggested by our analysis and simulation study, are lower computational complexity and a higher convergence rate than state-of-the-art algorithms. In addition, it shows good performance and significant reductions in computation time in real applications.
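
The closed-form, coherence-penalized atom update is specific to the paper; the sketch below only shows the generic alternating structure it accelerates: fix the dictionary and update the sparse codes with a proximal (soft-thresholding) step, then fix the codes and update one atom at a time. A hedged sketch under those assumptions, not the authors' exact updates:

import numpy as np

def alternating_dictionary_learning(Y, n_atoms, lam=0.1, n_outer=30, n_inner=20):
    """Generic alternating scheme: proximal sparse coding + per-atom updates.

    Y : (d, n) data matrix; returns dictionary D (d x n_atoms) and codes X (n_atoms x n).
    """
    d, n = Y.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, n))
    for _ in range(n_outer):
        # Sparse coding: ISTA steps on 0.5*||Y - D X||_F^2 + lam*||X||_1.
        L = np.linalg.norm(D, 2) ** 2
        for _ in range(n_inner):
            Z = X - (D.T @ (D @ X - Y)) / L
            X = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
        # Atom update: refit each atom to the residual it explains, then renormalize.
        for k in range(n_atoms):
            used = np.abs(X[k]) > 0
            if not used.any():
                continue
            R = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
            atom = R @ X[k, used]
            D[:, k] = atom / (np.linalg.norm(atom) + 1e-12)
    return D, X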

5.
Sci Rep; 14(1): 5106, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38429392

ABSTRACT

Taking return airway 4204 with roof cutting in the Longquan Coal Mine as the engineering background, the roof structure, key parameters, and deviatoric stress evolution were studied. The key stratum located within 4-8 times the mining height is regarded as the near key stratum. Cutting the roof allows the key stratum to form a cantilever structure on the solid coal side, which is more conducive to the stability of the gob-side roadway. As the cutting angle decreases from 90° to 55°, the deviatoric stress increases linearly, with the rate of increase following the order coal pillar > solid coal > roof > floor. As the cutting length increases from 0 to 35 m, the deviatoric stress decreases linearly, with the magnitude of decrease following the order coal pillar > solid coal > roof > floor. As the coal pillar width decreases from 30 m to 4 m, the deviatoric stress of the left side and floor presents a single-peak distribution, the deviatoric stress of the coal pillar changes from an asymmetric double-peak to a bell-shaped distribution, and the deviatoric stress of the roof changes from a single-peak to an asymmetric double-peak distribution. For the same coal pillar width, the deviatoric stress of the left side, coal pillar, and roof decreases most markedly after roof cutting, followed by the floor. The final design adopts a coal pillar width of 8 m, a cutting angle of 75°, a cutting length of 20 m, and a hole spacing of 1.0 m, with a combined support scheme of bolts, metal mesh, steel belts, and anchor cables. The roadway stabilizes after about 10 days.

6.
J Shanghai Jiaotong Univ Sci; 28(3): 323-329, 2023.
Article in English | MEDLINE | ID: mdl-36846270

ABSTRACT

This study focuses on a robot vision localization method for the task of automatic nasal swab sampling. The application is important for the detection and epidemic prevention of Coronavirus Disease 2019 (COVID-19), helping to alleviate the large-scale negative impact of COVID-19 pneumonia. The method uses a hierarchical decision network to account for the strong infectiousness of COVID-19 and then processes the robot behavior constraints. A visual navigation and positioning method for sampling with a single-arm robot is also planned, taking into account the operating characteristics of medical staff. In the decision network, a risk factor for potential contact infection caused by swab sampling operations is established to avoid spread among personnel. A robot visual servo control with artificial intelligence characteristics is developed to achieve stable and safe nasal swab sampling. Experiments demonstrate that the proposed method achieves good visual positioning for the robots and provides technical support for managing new major public health situations.
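
The paper describes its visual servo controller only at a system level; as background, classical image-based visual servoing maps the image-feature error through the pseudo-inverse of the interaction matrix to a camera velocity command. A hedged sketch of that classical law for normalized point features, not the authors' controller; the names and depth inputs are assumptions:

import numpy as np

def ibvs_velocity(features, targets, depths, gain=0.5):
    """Classical IBVS law: v = -gain * pinv(L) @ (s - s*), for normalized point features.

    features, targets : (N, 2) arrays of current / desired image points (x, y)
    depths            : (N,) estimated depths Z of each feature point
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    error = (features - targets).reshape(-1)
    rows = []
    for (x, y), Z in zip(features, depths):
        # Interaction-matrix rows of a normalized image point.
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    L = np.asarray(rows)
    return -gain * np.linalg.pinv(L) @ error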

7.
Article in English | MEDLINE | ID: mdl-37216248

ABSTRACT

Medical image processing plays an important role in the interaction between the real world and the metaverse for healthcare. Self-supervised denoising based on sparse coding, which requires no large-scale training samples, has attracted extensive attention for medical image processing. However, existing self-supervised methods suffer from poor performance and low efficiency. In this paper, to achieve state-of-the-art denoising performance, we present a self-supervised sparse coding method named the weighted iterative shrinkage-thresholding algorithm (WISTA), which learns from only a single noisy image and does not rely on noisy-clean ground-truth image pairs. To further improve denoising efficiency, we unfold WISTA to construct a deep neural network (DNN)-structured version, named WISTA-Net. Owing to the merit of the lp-norm used in WISTA, WISTA-Net achieves better denoising performance than the classical orthogonal matching pursuit (OMP) algorithm and ISTA. Moreover, leveraging the efficiency of the DNN structure in parameter updating, WISTA-Net outperforms the compared methods in denoising speed. In detail, for a 256 × 256 noisy image, the running time of WISTA-Net is 4.72 s on the CPU, which is faster than WISTA, OMP, and ISTA by 32.88 s, 13.06 s, and 6.17 s, respectively.
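
The lp-norm-derived weighting is the paper's contribution and is not reproduced here; the sketch below only illustrates the generic shape of a weighted shrinkage-thresholding update, in which each coefficient gets its own threshold. All names and the weighting rule are placeholders:

import numpy as np

def weighted_ista_step(x, D, y, weights, step):
    """One generic weighted ISTA iteration for 0.5*||D x - y||^2 + sum_i w_i*|x_i|.

    weights : per-coefficient thresholds (a method like WISTA derives these from an
              lp-norm criterion; here they are simply taken as given).
    """
    z = x - step * (D.T @ (D @ x - y))                                # gradient step
    return np.sign(z) * np.maximum(np.abs(z) - step * weights, 0.0)   # weighted shrinkage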

8.
Comput Math Methods Med; 2021: 3628179, 2021.
Article in English | MEDLINE | ID: mdl-33564322

ABSTRACT

Acute ischemic stroke (AIS) is a common threat to human health and may lead to severe outcomes without proper and prompt treatment. Precise diagnosis of AIS requires quantitative evaluation of the AIS lesions. Many automatic methods based on convolutional neural networks (CNNs) have been proposed for ischemic stroke lesion segmentation on magnetic resonance imaging (MRI). However, most CNN-based methods must be trained on a large number of fully labeled subjects, and label annotation is a labor-intensive and time-consuming task. In this paper, we therefore propose to use a mixture of many weakly labeled and a few fully labeled subjects to reduce the reliance on full labels. In particular, a multifeature map fusion network (MFMF-Network) with two branches is proposed: hundreds of weakly labeled subjects are used to train the classification branch, and several fully labeled subjects are adopted to tune the segmentation branch. Trained on 398 weakly labeled and 5 fully labeled subjects, the proposed method achieves a mean Dice coefficient of 0.699 ± 0.128 on a test set of 179 subjects. Lesion-wise and subject-wise metrics are also evaluated, with a lesion-wise F1 score of 0.886 and a subject-wise detection rate of 1.
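
The mean Dice coefficient quoted above is the standard overlap measure between a predicted lesion mask and the ground truth; a minimal sketch of the usual per-subject computation (names are illustrative):

import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)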


Subject(s)
Deep Learning; Image Interpretation, Computer-Assisted/statistics & numerical data; Ischemic Stroke/diagnostic imaging; Magnetic Resonance Imaging/statistics & numerical data; Brain/diagnostic imaging; Computational Biology; Databases, Factual; Humans; Multimodal Imaging/statistics & numerical data
9.
Cancers (Basel); 13(3), 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33535569

ABSTRACT

The management of prostate cancer (PCa) depends on biomarkers of biological aggressiveness, including an invasive biopsy to enable histopathological assessment of the tumor's grade. This review explores the technical process of applying magnetic resonance imaging-based radiomic models to the evaluation of PCa. By examining how a deep radiomics approach further optimizes the prediction of a PCa grade group, it becomes clear how this integration of artificial intelligence mitigates the major technological challenges faced by traditional radiomic models: image acquisition, small data sets, image processing, labeling/segmentation, informative features, prediction of molecular features, and incorporation of predictive models. Other potential impacts of artificial intelligence on the personalized treatment of PCa are also discussed. The role of deep radiomics analysis, a deep texture analysis that extracts features from convolutional neural network layers, is highlighted. Existing clinical work and upcoming clinical trials are reviewed, directing investigators to pertinent future directions in the field. For future progress to result in clinical translation, the field will likely require multi-institutional collaboration to produce prospectively populated and expertly labeled imaging libraries.

10.
Neural Netw; 98: 212-222, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29272726

ABSTRACT

Recently, analysis dictionary learning has received increasing attention. An open problem in analysis dictionary learning is how to obtain strongly sparsity-promoting solutions efficiently while avoiding trivial solutions for the dictionary. In this paper, we employ the ℓ1/2 norm as a regularizer to obtain strongly sparsity-promoting solutions. Recent work on ℓ1/2-norm regularization in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a set of one-dimensional minimization problems whose closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization and update the dictionary directly on the manifold defined by the orthonormality constraint, so that trivial solutions are avoided while the intrinsic properties of the dictionary are captured. Experiments with synthetic and real-world data verify that the proposed algorithm not only obtains strongly sparsity-promoting solutions efficiently but also learns a more accurate dictionary, in terms of dictionary recovery and image processing, than state-of-the-art algorithms.
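
The ℓ1/2 closed-form thresholding is specific to the paper and is not reproduced here; the sketch below only illustrates the other ingredient, keeping the analysis dictionary on the manifold defined by the orthonormality constraint by retracting an unconstrained update through its polar factor. The retraction choice is an assumption, not necessarily the authors' scheme:

import numpy as np

def retract_to_orthonormal_rows(W):
    """Retract a dictionary update onto {W : W W^T = I} via its polar factor."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

# Usage (illustrative): after a gradient step G on a row-orthonormal dictionary Omega,
# Omega = retract_to_orthonormal_rows(Omega - step * G) keeps the constraint satisfied.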


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Machine Learning; Image Processing, Computer-Assisted/statistics & numerical data; Machine Learning/statistics & numerical data; Noise
11.
IET Syst Biol; 10(1): 34-40, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26816398

ABSTRACT

The study of biology and medicine in noisy environments is an evolving direction in biological data analysis, and the analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging problem in personalized medicine. Due to its periodic character, the ECG can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance, and by exploiting these subspaces the mixing matrix is estimated accurately. In the second stage, the time points are divided into different layers according to the number of active sources at each time point. By constructing transformation matrices, these time points form a row-echelon-like system, and the sources at each layer can then be solved explicitly by the corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition: the number of active sources is smaller than the number of observations. Experimental results show that the proposed method performs better on the sparse ECG signal recovery problem.
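
The layering and transformation-matrix construction are specific to the paper, but the weak-sparsity condition has a simple consequence worth illustrating: at a time point where the active source set is known and smaller than the number of observations, the source values follow from a least-squares solve restricted to the corresponding columns of the estimated mixing matrix. A hedged sketch; the active-set input is assumed to come from an earlier stage, and all names are illustrative:

import numpy as np

def recover_sources_at_time(A_est, x_t, active):
    """Solve x_t ≈ A_est[:, active] @ s_active for the active sources at one time point.

    A_est  : (m, n) estimated mixing matrix
    x_t    : (m,) observation vector at this time point
    active : indices of sources assumed active here (len(active) < m)
    """
    s_active, *_ = np.linalg.lstsq(A_est[:, active], x_t, rcond=None)
    s = np.zeros(A_est.shape[1])
    s[active] = s_active
    return s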


Subject(s)
Algorithms; Electrocardiography/methods; Machine Learning; Signal Processing, Computer-Assisted; Computer Simulation
12.
IEEE Trans Neural Netw Learn Syst; 23(10): 1601-10, 2012 Oct.
Article in English | MEDLINE | ID: mdl-24808005

ABSTRACT

The problem of nonnegative blind source separation (NBSS) is addressed in this paper, where both the sources and the mixing matrix are nonnegative. Because many real-world signals are sparse, we deal with NBSS by sparse component analysis. First, a determinant-based sparseness measure, named the D-measure, is introduced to gauge the temporal and spatial sparseness of signals. Based on this measure, a new NBSS model is derived, and an iterative sparseness maximization (ISM) approach is proposed to solve it. In the ISM approach, the NBSS problem is cast as a sequence of row-by-row optimizations with respect to the unmixing matrix, and the quadratic programming (QP) technique is used to optimize each row. Furthermore, we analyze the source identifiability and the computational complexity of the proposed ISM-QP method. The new method requires relatively weak conditions on the sources and the mixing matrix, has high computational efficiency, and is easy to implement. Simulation results demonstrate the effectiveness of our method.

13.
IEEE Trans Image Process; 20(4): 1112-25, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20889432

ABSTRACT

Nonnegative matrix factorization (NMF) is a widely used method for blind spectral unmixing (SU), which aims to obtain the endmembers and corresponding fractional abundances knowing only the collected mixed spectral data. Since the abundances may be sparse (i.e., the endmembers may have sparse distributions) and sparse NMF tends to yield a unique result, it is intuitive and meaningful to constrain NMF with sparseness when solving SU. However, because of the abundance sum-to-one constraint in SU, the traditional sparseness measured by the L0/L1-norm is no longer an effective constraint. This paper proposes a novel sparseness measure (termed the S-measure) based on higher-order norms of the signal vector, which has a clear physical meaning. Using the S-measure constraint (SMC), a gradient-based sparse NMF algorithm (termed NMF-SMC) is proposed for solving the SU problem, in which the learning rate is adaptively selected and the endmembers and abundances are estimated simultaneously. The proposed NMF-SMC requires no pure index assumption and no prior knowledge of the exact sparseness degree of the abundances; moreover, it does not require dimension-reduction preprocessing, in which some useful information may be lost. Experiments on synthetic mixtures and real-world images collected by the AVIRIS and HYDICE sensors are performed to evaluate the validity of the proposed method.
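
The S-measure gradient term and the adaptive learning rate are the paper's contributions and are not reproduced here; for reference, the baseline that such unmixing algorithms extend is plain NMF with multiplicative updates, sketched below. The sum-to-one constraint on the abundances is omitted, and all names are illustrative:

import numpy as np

def nmf_multiplicative(X, n_endmembers, n_iter=500, eps=1e-9):
    """Plain NMF: X (bands x pixels) ≈ W (endmembers) @ H (abundances), Frobenius loss."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, n_endmembers)) + eps
    H = rng.random((n_endmembers, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative update keeps H >= 0
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative update keeps W >= 0
    return W, H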


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity