Results 1 - 20 of 57
1.
Article in English | MEDLINE | ID: mdl-37527324

ABSTRACT

Canonical correlation analysis (CCA) is a correlation analysis technique widely used in statistics and the machine-learning community. However, the high complexity of the training process places a heavy burden on processing units and the memory system, making CCA nearly impractical for large-scale data. To overcome this issue, a novel CCA method is developed in this article that carries out the analysis in the Fourier domain. By applying the Fourier transform to the data, the traditional eigenvector computation of CCA can be converted into finding predefined discriminative Fourier bases that are learned with only element-wise products and sum operations, without complex, time-consuming calculations. Because the eigenvalues arise from sums of individual sample products, they can be estimated in parallel. Moreover, thanks to the pattern repeatability of the data, the eigenvalues can be well estimated from partial samples. Accordingly, a progressive estimation scheme is proposed in which the eigenvalues are estimated by feeding the data batch by batch until the ordering of the eigenvalue sequence is stable. As a result, the proposed method is extraordinarily fast and memory-efficient. Furthermore, this idea is extended to nonlinear kernel and deep models, obtaining satisfactory accuracy with extremely short training times, as expected. An extensive discussion of the fast Fourier transform (FFT)-CCA is given in terms of time and memory efficiency. Experimental results on several large-scale correlation datasets, such as MNIST8M, X-RAY MICROBEAM SPEECH, and Twitter Users Data, demonstrate the superiority of the proposed algorithm over state-of-the-art (SOTA) large-scale CCA methods: it achieves almost the same accuracy while training roughly 1000 times faster, making the proposed models a strong choice for dealing with large-scale correlation datasets. The source code is available at https://github.com/Mrxuzhao/FFTCCA.
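
As a rough illustration of the idea described above, the sketch below (Python/NumPy, with hypothetical variable names and synthetic data) estimates a correlation-like score for each Fourier basis using only element-wise products accumulated batch by batch; it is a simplified toy for a single pair of views, not the authors' FFT-CCA implementation.

    import numpy as np

    def fft_correlation_per_basis(X, Y, batch_size=1024):
        """Estimate a correlation-like score for every Fourier basis between two
        paired views X and Y (shape: n_samples x dim), using only element-wise
        products accumulated batch by batch."""
        n, d = X.shape
        acc_xy = np.zeros(d, dtype=complex)   # running sum of X_f * conj(Y_f)
        acc_xx = np.zeros(d)                  # running sum of |X_f|^2
        acc_yy = np.zeros(d)                  # running sum of |Y_f|^2
        for start in range(0, n, batch_size):
            xb = np.fft.fft(X[start:start + batch_size], axis=1)
            yb = np.fft.fft(Y[start:start + batch_size], axis=1)
            acc_xy += np.sum(xb * np.conj(yb), axis=0)
            acc_xx += np.sum(np.abs(xb) ** 2, axis=0)
            acc_yy += np.sum(np.abs(yb) ** 2, axis=0)
        # ranking these scores plays the role of ordering the eigenvalue sequence
        return np.abs(acc_xy) / np.sqrt(acc_xx * acc_yy + 1e-12)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((4096, 64))
    Y = 0.1 * X @ rng.standard_normal((64, 64)) + rng.standard_normal((4096, 64))
    scores = fft_correlation_per_basis(X, Y)
    print(np.argsort(scores)[::-1][:5])       # indices of the top-5 Fourier bases

Because each batch only contributes element-wise products and sums, the accumulation can be stopped once the ranking of the scores stabilizes, which is the progressive-estimation idea sketched in the abstract.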

2.
J Comput Biol ; 30(9): 951-960, 2023 09.
Article in English | MEDLINE | ID: mdl-37585615

ABSTRACT

Spiking neural network (SNN) simulators play an important role in neural-system modeling and brain-function research. They help scientists reproduce and explore neuronal activity in brain regions and support work in neuroscience, brain-like computing, and related fields, and they can also be applied to artificial intelligence and machine learning. At present, many simulators based on central processing units (CPUs) or graphics processing units (GPUs) have been developed. However, the randomness of the connections between neurons and of spiking events in SNN simulation incurs substantial memory-access time. To alleviate this problem, we developed an SNN simulator, SWsnn, based on the new Sunway SW26010pro processor. The SW26010pro processor consists of six core groups, each with 16 MB of local data memory (LDM). LDM offers high-speed reads and writes, which makes it well suited to simulation tasks such as SNNs. Experimental results show that SWsnn runs faster than other mainstream GPU-based simulators when simulating neural networks of a certain scale, demonstrating a strong performance advantage. To support larger-scale simulations, we designed a simulation scheme based on the Sunway processor's large shared-memory model and developed a multiprocessor version of SWsnn on that basis, enabling larger-scale SNN simulations.
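
For readers unfamiliar with the computational pattern involved, the sketch below shows a generic leaky integrate-and-fire update step in Python/NumPy with synthetic random connectivity; it only illustrates why sparse, random synaptic-weight accesses dominate memory traffic in SNN simulation and is unrelated to the actual SWsnn code or the Sunway LDM layout.

    import numpy as np

    def lif_step(v, spikes_in, weights, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        """One leaky integrate-and-fire update for a population of neurons.
        v: membrane potentials (n,), spikes_in: presynaptic spikes (n,),
        weights: synaptic matrix (n, n); gathering presynaptic spikes through
        this randomly connected matrix is the memory-heavy part."""
        input_current = weights @ spikes_in           # gather presynaptic spikes
        v = v + dt * (-v / tau + input_current)       # leaky integration
        spikes_out = (v >= v_thresh).astype(float)    # threshold crossing
        v = np.where(spikes_out > 0, v_reset, v)      # reset fired neurons
        return v, spikes_out

    rng = np.random.default_rng(1)
    n = 1000
    # sparse random connectivity (about 2% of synapses present)
    weights = (rng.random((n, n)) < 0.02) * rng.normal(0.5, 0.1, (n, n))
    v = np.zeros(n)
    spikes = (rng.random(n) < 0.05).astype(float)
    for _ in range(100):                              # 100 simulation steps
        v, spikes = lif_step(v, spikes, weights)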


Subject(s)
Artificial Intelligence, Neural Networks (Computer), Computer Simulation, Neurons/physiology, Brain
3.
Complex Intell Systems ; : 1-13, 2023 Mar 20.
Article in English | MEDLINE | ID: mdl-37361964

ABSTRACT

The purpose of this paper is to study the multi-attribute decision-making problem in the picture fuzzy environment. First, a method for comparing the pros and cons of picture fuzzy numbers (PFNs) is introduced. Second, the correlation coefficient and standard deviation (CCSD) method is used to determine the attribute weights in the picture fuzzy environment, regardless of whether the attribute weight information is partially or completely unknown. Third, the ARAS and VIKOR methods are extended to the picture fuzzy environment, and the proposed PFN comparison rules are applied within the PFS-ARAS and PFS-VIKOR methods. Fourth, the problem of green supplier selection in a picture fuzzy environment is solved by the proposed method. Finally, the proposed method is compared with existing methods and the results are analyzed.
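
As background, the sketch below shows one commonly used score/accuracy convention for comparing picture fuzzy numbers, i.e., triples (mu, eta, nu) with mu + eta + nu <= 1; the paper proposes its own comparison rule, so this is only an assumed, illustrative convention, not the method of the article.

    def pfn_score(mu, eta, nu):
        """A common score function for a picture fuzzy number (mu, eta, nu)."""
        return mu - nu

    def pfn_accuracy(mu, eta, nu):
        """A common accuracy function, used to break ties in the score."""
        return mu + eta + nu

    def compare_pfn(a, b):
        """Return the 'better' PFN: higher score wins, ties broken by accuracy."""
        sa, sb = pfn_score(*a), pfn_score(*b)
        if sa != sb:
            return a if sa > sb else b
        return a if pfn_accuracy(*a) >= pfn_accuracy(*b) else b

    print(compare_pfn((0.6, 0.2, 0.1), (0.5, 0.1, 0.1)))   # -> (0.6, 0.2, 0.1)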

4.
Global Spine J ; : 21925682231170607, 2023 May 19.
Article in English | MEDLINE | ID: mdl-37203443

ABSTRACT

STUDY DESIGN: A retrospective study. OBJECTIVE: To develop a new MRI scoring system to assess patients' clinical characteristics, outcomes, and complications. METHODS: A retrospective 1-year follow-up study of 366 patients with cervical spondylosis from 2017 to 2021. The CCCFLS scores (cervical curvature and balance [CC], spinal cord curvature [SC], spinal cord compression ratio [CR], cerebrospinal fluid space [CFS], spinal cord and lesion location [SL], and increased signal intensity [ISI]) were divided into a mild group (0-6), a moderate group (6-12), and a severe group (12-18) for comparison, and the Japanese Orthopaedic Association (JOA) score, visual analog scale (VAS), numerical rating scale (NRS), Neck Disability Index (NDI), and Nurick score were evaluated. Correlation and regression analyses were performed between each variable and the total model in relation to clinical symptoms and C5 palsy. RESULTS: The CCCFLS scoring system was linearly correlated with JOA, NRS, Nurick, and NDI scores, with significant differences in JOA scores among patients with different CC, CR, CFS, and ISI scores and a predictive model (R2 = 69.3%). There were significant differences in preoperative and final follow-up clinical scores among the three groups, with a higher rate of JOA improvement in the severe group (P < .05), while patients with and without C5 palsy differed significantly in preoperative SC and SL (P < .05). CONCLUSIONS: The CCCFLS scoring system can be divided into mild (0-6), moderate (6-12), and severe (12-18) groups. It effectively reflects the severity of clinical symptoms; the JOA improvement rate is better in the severe group, and the preoperative SC and SL scores are closely related to C5 palsy. LEVEL OF EVIDENCE: III.

5.
Orthop Surg ; 15(6): 1541-1548, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37183354

ABSTRACT

OBJECTIVE: It is clinically important for pedicle screws to be placed quickly and accurately, as misplacement results in various complications. However, the incidence of complications varies greatly with physicians' professional rank and surgical experience, so physicians must minimize pedicle screw misplacement. This study compares three screw placement methods and explores which is best for identifying anatomical landmarks and vertical trajectories. METHODS: This study involved 70 patients with moderate idiopathic scoliosis who had undergone deformity-correction surgery between 2018 and 2021. Two spine surgeons used three techniques (preoperative computed tomography scan [CTS], visual inspection/X-freehand [XFH], and intraoperative detection [ID] of anatomical landmarks) to place pedicle screws: visual inspection for 287 screws in 21 patients, preoperative planning for 346 screws in 26 patients, and intraoperative probing for 309 screws in 23 patients. Observers assessed screw positions on intraoperative CT scans (grades A, B, C, D). RESULTS: There were no significant differences between the three groups in age, sex, or degree of deformity. We found that 68.64% of screws in the XFH group, 67.63% in the CTS group, and 77.99% in the ID group were placed within the pedicle margins (grade A), whereas 6.27% of screws in the XFH group, 4.33% in the CTS group, and 6.15% in the ID group were considered misplaced (grades C and D). Fewer screws were placed in the upper thoracic spine, and their placement accuracy was lower. The three methods had similar accuracy among intermediate-rank physicians (P > 0.05), whereas senior physicians achieved higher placement accuracy with all three techniques. The intraoperative detection group was better than the other two groups in the rate of good placement and in accuracy (P < 0.05). CONCLUSION: Intraoperative use of common anatomical landmarks and vertical trajectories benefited patients with moderate idiopathic scoliosis undergoing surgery and is an optimal method for clinical application.


Subject(s)
Pedicle Screws, Scoliosis, Spinal Fusion, Humans, Scoliosis/diagnostic imaging, Scoliosis/surgery, Spine/surgery, Tomography, X-Ray Computed/methods, Spinal Fusion/methods, Retrospective Studies
6.
Article in English | MEDLINE | ID: mdl-37015131

ABSTRACT

Transformer, an attention-based encoder-decoder model, has already revolutionized the field of natural language processing (NLP). Inspired by such significant achievements, pioneering work has recently been done on employing Transformer-like architectures in the computer vision (CV) field, demonstrating their effectiveness on three fundamental CV tasks (classification, detection, and segmentation) as well as multiple sensory data streams (images, point clouds, and vision-language data). Because of their competitive modeling capabilities, visual Transformers have achieved impressive performance improvements over multiple benchmarks compared with modern convolutional neural networks (CNNs). In this survey, we comprehensively review over 100 different visual Transformers according to the three fundamental CV tasks and different data stream types, and we propose a taxonomy that organizes representative methods by their motivations, structures, and application scenarios. Because of their differences in training settings and dedicated vision tasks, we also evaluate and compare these visual Transformers under different configurations. Furthermore, we reveal a series of essential but unexploited aspects that may allow visual Transformers to stand out from numerous architectures, e.g., slack high-level semantic embeddings to bridge the gap between the visual Transformers and the sequential ones. Finally, two promising research directions are suggested for future investigation. We will continue to update the latest articles and their released source code at.
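
As a reminder of the core operation shared by the models surveyed here, the sketch below implements plain single-head scaled dot-product self-attention over a flattened patch sequence in Python/NumPy; the token count and dimension are illustrative values only.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Minimal single-head attention: Q, K, V have shape (n_tokens, d)."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                      # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
        return weights @ V                                 # weighted sum of values

    # image patches flattened into a token sequence (e.g., 196 tokens of dim 64)
    tokens = np.random.randn(196, 64)
    out = scaled_dot_product_attention(tokens, tokens, tokens)   # self-attention
    print(out.shape)                                             # (196, 64)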

7.
Bioresour Technol ; 363: 127971, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36122848

ABSTRACT

Hydrochar's specific surface area (SSA) is important in environmental remediation; however, a hydrophobic coating formed on hydrochar creates a physical barrier that reduces the SSA. The formation and composition of the hydrophobic coating and its effects on hydrochar properties are unclear. In this study, hydrochar was produced from Chinese fan palm (Livistona chinensis) leaves at different temperatures. The resulting hydrophobic coatings were investigated by in situ characterization and then extracted with acetone for composition identification. Additionally, hydrochar properties were compared before and after removal of the hydrophobic coating. The results showed that the hydrophobic coating of the hydrochar produced at 180 °C was the insoluble cuticle layer of the raw biomass, while the hydrophobic coatings formed above 180 °C were depolymerization products of cutin. For hydrochar produced above 180 °C, and especially at 260 °C, removal of the hydrophobic coating increased both its SSA and its oxygen-containing functional groups.


Subject(s)
Carbon, Oxygen, Acetone, Biomass, Carbon/chemistry, Temperature
8.
Gene ; 830: 146517, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35452705

ABSTRACT

Apocynum pictum of the dogbane family, Apocynaceae, is a perennial semi-shrub of ecological, medicinal, and economic value. It is mainly distributed in semi-arid, saline-alkaline, and desert regions of Xinjiang, Qinghai, and Gansu in western China and adjacent regions of Kazakhstan and Mongolia. Here, we report the complete chloroplast (cp) genome of A. pictum for the first time; it has a circular structure with an estimated length of 150,749 bp and a GC content of 38.3%. The cp genome is composed of a large single copy (LSC), a small single copy (SSC), and two inverted repeat (IR) regions, which are 81,888 bp, 17,251 bp, and 25,805 bp long, respectively. The cp genome of A. pictum encodes 134 genes and contains 66 simple sequence repeats (SSRs). A comparative analysis with other cp genomes from Apocynaceae indicated that the cp genome of A. pictum is highly conserved, except for subtle differences in the protein-coding genes accD, ndhF, rpl22, rpl32, rpoC2, ycf1, and ycf2. A phylogenetic reconstruction showed that A. pictum and A. venetum are sister species, forming a strongly supported clade with Trachelospermum. Interestingly, the nonsynonymous-to-synonymous substitution ratios (Ka/Ks) between A. pictum and A. venetum for accD and ndhF were >1.0, suggesting positive selective pressure on these genes. Our results enrich the genomic resources for the diverse dogbane family and provide critical molecular resources for future studies of ecological adaptation to desert habitats in Apocynum.


Subject(s)
Apocynaceae, Apocynum, Chloroplast Genome, Apocynaceae/genetics, Apocynum/genetics, Base Composition, Phylogeny
9.
Med Image Anal ; 78: 102395, 2022 05.
Article in English | MEDLINE | ID: mdl-35231851

ABSTRACT

Medical image segmentation can provide a reliable basis for further clinical analysis and disease diagnosis. With the development of convolutional neural networks (CNNs), medical image segmentation performance has advanced significantly. However, most existing CNN-based methods often produce unsatisfactory segmentation masks without accurate object boundaries. This problem is caused by limited context information and inadequate discriminative feature maps after consecutive pooling and convolution operations. Additionally, because medical images are characterized by high intra-class variation, inter-class indistinction, and noise, extracting powerful context and aggregating discriminative features for fine-grained segmentation remain challenging. In this study, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation to capture richer context and preserve fine spatial information; it adopts an encoder-decoder architecture. In each stage of the encoder sub-network, a proposed pyramid edge extraction module first obtains multi-granularity edge information. Then a newly designed mini multi-task learning module jointly learns to segment object masks and detect lesion boundaries, with a new interactive attention layer introduced to bridge the two tasks. In this way, information complementarity between the tasks is achieved, which effectively leverages boundary information to offer strong cues for better segmentation prediction. Finally, a cross-feature fusion module selectively aggregates multi-level features from the entire encoder sub-network. By cascading these three modules, richer context and fine-grained features of each stage are encoded and then delivered to the decoder. The results of extensive experiments on five datasets show that the proposed BA-Net outperforms state-of-the-art techniques.
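
To make the mask/boundary coupling concrete, the sketch below shows a minimal two-task PyTorch head that predicts a segmentation mask and a boundary map from shared features and uses the boundary map as a simple attention cue; the module name, channel sizes, and coupling scheme are illustrative assumptions, not the actual BA-Net design.

    import torch
    import torch.nn as nn

    class MaskBoundaryHead(nn.Module):
        """Toy two-task head: a segmentation branch and a boundary branch share
        features, and the predicted boundary map gates the mask branch."""
        def __init__(self, in_ch=64):
            super().__init__()
            self.shared = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
            self.mask_head = nn.Conv2d(32, 1, 1)      # per-pixel mask logits
            self.edge_head = nn.Conv2d(32, 1, 1)      # per-pixel boundary logits

        def forward(self, feats):
            h = self.shared(feats)
            edge = torch.sigmoid(self.edge_head(h))
            mask = self.mask_head(h * (1 + edge))     # boundary map as an attention cue
            return mask, edge

    feats = torch.randn(2, 64, 128, 128)              # batch of encoder feature maps
    mask_logits, edge_map = MaskBoundaryHead()(feats)
    # dummy target, just to show how the mask branch would be supervised
    target = torch.randint(0, 2, mask_logits.shape).float()
    loss = nn.functional.binary_cross_entropy_with_logits(mask_logits, target)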


Subject(s)
Computer-Assisted Image Processing, Neural Networks (Computer), Humans, Computer-Assisted Image Processing/methods, Computer-Assisted Image Processing/standards, Learning
10.
IEEE Trans Image Process ; 31: 2695-2709, 2022.
Article in English | MEDLINE | ID: mdl-35320103

ABSTRACT

Existing publicly available datasets with pixel-level labels contain limited categories, and it is difficult to generalize to the real world, which contains thousands of categories. In this paper, we propose an approach that automatically generates object masks with detailed pixel-level structures/boundaries, enabling semantic image segmentation of thousands of targets in the real world without manual labelling. A Guided Filter Network (GFN) is first developed to learn segmentation knowledge from an existing dataset, and the GFN then transfers the learned segmentation knowledge to generate initial coarse object masks for the target images. These coarse object masks are treated as pseudo labels to self-optimize the GFN iteratively on the target images. Our experiments on six image sets demonstrate that the proposed approach can generate object masks with detailed pixel-level structures/boundaries whose quality is comparable to manually labelled ones. The proposed approach also achieves better performance on semantic image segmentation than most existing weakly-supervised, semi-supervised, and domain adaptation approaches under the same experimental conditions.
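
The iterative pseudo-label idea can be illustrated with a generic self-training loop; the sketch below uses scikit-learn on plain feature vectors as a stand-in (the paper works on pixel-level masks with a Guided Filter Network), and the function names, thresholds, and data are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def self_train(X_source, y_source, X_target, rounds=3, confidence=0.9):
        """Fit on labelled source data, predict on unlabelled target data, keep
        only confident predictions as pseudo labels, and retrain iteratively."""
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        X_train, y_train = X_source, y_source
        for _ in range(rounds):
            model.fit(X_train, y_train)
            proba = model.predict_proba(X_target)
            confident = proba.max(axis=1) >= confidence      # high-confidence targets
            pseudo = model.classes_[proba.argmax(axis=1)]    # predicted labels
            X_train = np.vstack([X_source, X_target[confident]])
            y_train = np.concatenate([y_source, pseudo[confident]])
        return model

    # toy usage: two Gaussian classes, a labelled source set and an unlabelled target set
    rng = np.random.default_rng(0)
    X_src = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
    y_src = np.array([0] * 100 + [1] * 100)
    X_tgt = np.vstack([rng.normal(-1.2, 1, (300, 4)), rng.normal(1.2, 1, (300, 4))])
    clf = self_train(X_src, y_src, X_tgt)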

11.
IEEE Trans Image Process ; 31: 1057-1071, 2022.
Article in English | MEDLINE | ID: mdl-34965210

ABSTRACT

Video object segmentation is a challenging task in computer vision because the appearance of target objects may change drastically over time in a video. To solve this problem, space-time memory (STM) networks exploit the information from all the intermediate frames between the first frame and the current frame. However, fully using the information from all the memory frames may make STM impractical for long videos. To overcome this issue, a novel method is developed in this paper to select the reference frames adaptively. First, an adaptive selection criterion is introduced to choose reference frames with similar appearance and precise mask estimation, which efficiently captures the rich information of the target object and overcomes the challenges of appearance change, occlusion, and model drift. Second, bi-matching (bi-scale and bi-direction) is conducted to obtain more robust correlations for objects of various scales and to prevent multiple similar objects in the current frame from being mismatched with the same target object in the reference frame. Third, a novel edge refinement technique is designed that uses an edge detection network to obtain smooth edges from the outputs of edge confidence maps, where the edge confidence is quantized into ten sub-intervals to generate smooth edges step by step. Experimental results on the challenging benchmark datasets DAVIS-2016, DAVIS-2017, YouTube-VOS, and a Long-Video dataset demonstrate the effectiveness of our proposed approach to video object segmentation.
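
A minimal sketch of such a selection criterion is given below, combining cosine appearance similarity with a mask-confidence threshold and a cap on the number of reference frames; the feature representation, thresholds, and cap are illustrative assumptions rather than the paper's exact criterion.

    import numpy as np

    def select_reference_frames(frame_feats, mask_confidences, current_feat,
                                sim_thresh=0.7, conf_thresh=0.8, max_refs=5):
        """Keep past frames whose appearance is similar to the current frame and
        whose predicted masks were confident; cap the count to bound memory."""
        sims = np.array([
            f @ current_feat / (np.linalg.norm(f) * np.linalg.norm(current_feat) + 1e-12)
            for f in frame_feats
        ])
        candidates = [i for i, (s, c) in enumerate(zip(sims, mask_confidences))
                      if s >= sim_thresh and c >= conf_thresh]
        return sorted(candidates, key=lambda i: sims[i], reverse=True)[:max_refs]

    feats = [np.random.randn(256) for _ in range(30)]    # per-frame appearance features
    confs = np.random.uniform(0.5, 1.0, 30)              # per-frame mask confidences
    refs = select_reference_frames(feats, confs, np.random.randn(256))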

12.
Spectrochim Acta A Mol Biomol Spectrosc ; 268: 120675, 2022 Mar 05.
Article in English | MEDLINE | ID: mdl-34890871

ABSTRACT

Infrared spectroscopy is a powerful tool for understanding the molecular structure and function of polypeptides. Theoretical interpretation of IR spectra relies on ab initio calculations, which can be very costly in computational resources. Herein, we developed a neural network (NN) modeling protocol to evaluate a model dipeptide's backbone amide-I spectra. DFT calculations were performed for the amide-I vibrational motions and structural parameters of alanine dipeptide (ALAD) conformers in micro-environments ranging from polar to non-polar. The obtained backbone dihedrals, C=O bond lengths, and amide-I frequencies of ALAD were gathered to train the NN. The built NN protocols predict the amide-I frequencies of ALAD in other solvation conditions quite satisfactorily, at much lower computational cost than electronic structure calculations. The results show that this cost-effective approach enables us to decipher polypeptides' dynamic secondary structures and biological functions from their backbone vibrational probes.
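
A minimal sketch of this kind of regression protocol is shown below using scikit-learn's MLPRegressor on synthetic stand-in data (random dihedrals, C=O bond lengths, and a toy frequency rule); the real training set would come from the DFT calculations described above, and the network size is an assumption.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in: predict an amide-I frequency (cm^-1) from backbone
    # dihedrals (phi, psi) and a C=O bond length.
    rng = np.random.default_rng(0)
    phi, psi = rng.uniform(-np.pi, np.pi, (2, 2000))
    r_co = rng.normal(1.23, 0.01, 2000)                   # C=O bond length (angstrom)
    X = np.column_stack([np.sin(phi), np.cos(phi), np.sin(psi), np.cos(psi), r_co])
    freq = 1700 - 1500 * (r_co - 1.23) + 5 * np.cos(phi) + rng.normal(0, 1, 2000)

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(X[:1500], freq[:1500])
    print(model.score(X[1500:], freq[1500:]))             # R^2 on held-out conformers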


Subject(s)
Amides, Dipeptides, Alanine, Molecular Dynamics Simulation, Neural Networks (Computer), Infrared Spectrophotometry, Vibration
13.
BMC Musculoskelet Disord ; 22(1): 818, 2021 Sep 23.
Article in English | MEDLINE | ID: mdl-34556093

ABSTRACT

BACKGROUND: Fibrosis is an important factor in and process of ligamentum flavum hypertrophy. The expression of the phosphodiesterase (PDE) family is related to inflammation and fibrosis. This study examined the expression of PDEs in hypertrophic ligamentum flavum fibroblasts and investigated whether inhibition of PDE4 activity has an anti-fibrotic effect. METHODS: Samples of clinically hypertrophic ligamentum flavum were collected, with samples from patients with lumbar disc herniation serving as a control group. Fibroblasts were isolated by collagenase digestion. qPCR was used to detect the expression of PDE subtypes, type I collagen (Col I), type III collagen (Col III), fibronectin (FN1), and transforming growth factor β1 (TGF-β1). Recombinant TGF-β1 was used to stimulate fibroblasts to create a fibrotic cell model, which was then treated with rolipram. The morphology of the drug-treated cells was observed by Sirius Red staining. A scratch assay was used to observe cell migration and proliferation. Western blotting was used to detect the expression of the above fibrotic proteins after drug treatment. Finally, the signaling mechanism was studied in combination with several signaling-pathway drugs. RESULTS: Multiple PDE subtypes were expressed in ligamentum flavum fibroblasts. The expression of PDE4A and 4B was significantly up-regulated in the hypertrophic group. When rolipram was used to inhibit PDE4 activity, the expression of Col I and TGF-β1 in the hypertrophic group was suppressed: Col I recovered to the level of the control group, and TGF-β1 was significantly inhibited, falling below the control level. Recombinant TGF-β1 stimulated fibroblasts to increase the expression of Col I/III, FN1, and TGF-β1, which was blocked by rolipram. Rolipram also reversed the increased expression of p-ERK1/2 stimulated by TGF-β1. CONCLUSION: The expression of PDE4A and 4B is increased in the hypertrophic ligamentum flavum, suggesting an association with ligamentum flavum hypertrophy. Rolipram has a good anti-fibrotic effect after inhibiting PDE4 activity, which is related to blocking the function of TGF-β1, specifically by restoring normal ERK1/2 signaling.


Subject(s)
Ligamentum Flavum, Fibroblasts/metabolism, Fibrosis, Humans, Ligamentum Flavum/pathology, MAP Kinase Signaling System, Rolipram/metabolism, Rolipram/pharmacology, Transforming Growth Factor beta1/metabolism
14.
IEEE Trans Image Process ; 30: 7995-8007, 2021.
Article in English | MEDLINE | ID: mdl-34554911

ABSTRACT

Multi-keyword queries are widely supported in text search engines. However, their analogue in image retrieval systems, the multi-object query, is rarely studied. Meanwhile, traditional object-based image retrieval methods often involve multiple separate steps. In this work, we propose a weakly-supervised Deep Multiple Instance Hashing (DMIH) approach for multi-object image retrieval. Our DMIH approach, which leverages a popular CNN model to build an end-to-end relation between a raw image and the binary hash codes of its multiple objects, can support multi-object queries effectively and integrate object detection with hashing learning seamlessly. We treat object detection as a binary multiple instance learning (MIL) problem whose instances are automatically extracted from multi-scale convolutional feature maps. We also design a conditional random field (CRF) module to capture both the semantic and spatial relations among different class labels. For hashing training, we sample image pairs to learn their semantic relationships in terms of the hash codes of the most probable proposals for their labels, as guided by the object predictors. The two objectives benefit each other in a multi-task learning scheme. Finally, a two-level inverted index method is proposed to further speed up the retrieval of multi-object queries. Our DMIH approach outperforms state-of-the-art methods on public benchmarks for object-based image retrieval and achieves promising results for multi-object queries.
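
To illustrate multi-object retrieval with binary codes, the sketch below ranks database images by the minimum Hamming distance between any query-object code and any object code of each image; it is a brute-force stand-in for the two-level inverted index described above, with random codes as placeholder data.

    import numpy as np

    def hamming_retrieve(query_codes, db_codes, top_k=10):
        """query_codes: (n_query_objects, n_bits) 0/1 array; db_codes: list of
        (n_objects_i, n_bits) 0/1 arrays, one per database image."""
        dists = []
        for codes in db_codes:
            # pairwise Hamming distances between query objects and image objects
            d = (query_codes[:, None, :] != codes[None, :, :]).sum(axis=2)
            dists.append(d.min())             # best-matching object pair
        return np.argsort(dists)[:top_k]      # indices of the top-k images

    rng = np.random.default_rng(0)
    query = rng.integers(0, 2, (2, 48))                        # two query objects, 48-bit codes
    database = [rng.integers(0, 2, (rng.integers(1, 5), 48)) for _ in range(100)]
    print(hamming_retrieve(query, database))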

15.
JMIR Med Inform ; 8(7): e17257, 2020 Jul 06.
Article in English | MEDLINE | ID: mdl-32628616

ABSTRACT

BACKGROUND: Predictions of cardiovascular disease risk based on health records have long attracted broad research interest. Despite extensive efforts, prediction accuracy has remained unsatisfactory. This raises the question of whether data insufficiency, statistical and machine-learning methods, or intrinsic noise have hindered the performance of previous approaches, and how these issues can be alleviated. OBJECTIVE: Based on a large population of patients with hypertension in Shenzhen, China, we aimed to establish a high-precision coronary heart disease (CHD) prediction model through big data and machine learning. METHODS: Data from a large cohort of 42,676 patients with hypertension, including 20,156 patients with CHD onset, were investigated from electronic health records (EHRs) 1-3 years prior to CHD onset (for CHD-positive cases) or during a disease-free follow-up period of more than 3 years (for CHD-negative cases). The population was divided evenly into independent training and test datasets. Various machine-learning methods were applied to the training set to obtain high-accuracy prediction models, and the results were compared with traditional statistical methods and well-known risk scales. Comparison analyses were performed to investigate the effects of training sample size, factor sets, and modeling approaches on prediction performance. RESULTS: An ensemble method, XGBoost, achieved high accuracy in predicting 3-year CHD onset on the independent test dataset, with an area under the receiver operating characteristic curve (AUC) value of 0.943. Comparison analysis showed that nonlinear models (K-nearest neighbor AUC 0.908, random forest AUC 0.938) outperform linear models (logistic regression AUC 0.865) on the same datasets, and machine-learning methods significantly surpassed traditional risk scales or fixed models (eg, Framingham cardiovascular disease risk models). Further analyses revealed that using time-dependent features obtained from multiple records, including both statistical variables and changing-trend variables, helped to improve performance compared with using only static features. Subpopulation analysis showed that feature design had a more significant effect on model accuracy than population size. Marginal effect analysis showed that both traditional and EHR factors exhibited highly nonlinear characteristics with respect to the risk scores. CONCLUSIONS: We demonstrated that accurate risk prediction of CHD from EHRs is possible given a sufficiently large population of training data. Sophisticated machine-learning methods played an important role in tackling the heterogeneity and nonlinear nature of disease prediction. Moreover, EHR data accumulated over multiple time points provided additional features that were valuable for risk prediction. Our study highlights the importance of accumulating big data from EHRs for accurate disease prediction.
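
The overall train-and-evaluate workflow can be sketched with the XGBoost scikit-learn wrapper and a synthetic dataset standing in for the EHR features; this mirrors only the modeling pipeline (gradient-boosted trees evaluated by AUC), not the study's cohort, features, hyperparameters, or reported numbers.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from xgboost import XGBClassifier     # assumes the xgboost package is installed

    # synthetic stand-in for EHR-derived features and CHD labels
    X, y = make_classification(n_samples=20000, n_features=50, n_informative=15,
                               weights=[0.55, 0.45], random_state=0)
    # even split into independent training and test sets, as in the study design
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"test AUC: {auc:.3f}")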

16.
IEEE J Biomed Health Inform ; 24(9): 2461-2472, 2020 09.
Article in English | MEDLINE | ID: mdl-32287022

ABSTRACT

Automated electrocardiogram (ECG) analysis for arrhythmia detection plays a critical role in the early prevention and diagnosis of cardiovascular diseases. Extracting powerful features from raw ECG signals for fine-grained disease classification is still a challenging problem today due to variable abnormal rhythms and noise distribution. For ECG analysis, previous research works depend mostly on heartbeats or single-scale signal segments, which ignores the complementary information underlying different scales. In this paper, we formulate a novel end-to-end Deep Multi-Scale Fusion convolutional neural network (DMSFNet) architecture for multi-class arrhythmia detection. Our proposed approach can effectively capture abnormal patterns of diseases and suppress noise interference by multi-scale feature extraction and cross-scale information complementarity of ECG signals. The proposed method implements feature extraction for signal segments of different sizes by integrating multiple convolution kernels with different receptive fields. Meanwhile, a joint optimization strategy with multiple losses at different scales is designed, which not only learns scale-specific features but also realizes cumulative multi-scale complementary feature learning during training. We demonstrate our DMSFNet on two open datasets (CPSC_2018 and PhysioNet/CinC_2017) and deliver state-of-the-art performance on them. CPSC_2018 is a 12-lead ECG dataset and CinC_2017 is a single-lead dataset; on these two datasets, we achieve F1 scores of [Formula: see text] and [Formula: see text], respectively, which are higher than previous state-of-the-art approaches. The results demonstrate that our end-to-end DMSFNet has outstanding performance for feature extraction across a broad range of distinct arrhythmias and elegant generalization ability for effectively handling ECG signals with different leads.
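
The multi-receptive-field idea can be sketched as a block of parallel 1-D convolutions with different kernel sizes whose outputs are concatenated; the PyTorch module below is a minimal illustration with assumed channel counts and kernel sizes, not the full DMSFNet.

    import torch
    import torch.nn as nn

    class MultiScaleBlock(nn.Module):
        """Parallel 1-D convolution branches with different kernel sizes
        (different receptive fields); their outputs are concatenated."""
        def __init__(self, in_ch=1, out_ch=16, kernel_sizes=(3, 7, 15)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                              nn.BatchNorm1d(out_ch), nn.ReLU())
                for k in kernel_sizes
            ])

        def forward(self, x):                 # x: (batch, channels, samples)
            return torch.cat([b(x) for b in self.branches], dim=1)

    ecg = torch.randn(8, 1, 3000)             # 8 single-lead ECG segments
    features = MultiScaleBlock()(ecg)
    print(features.shape)                     # torch.Size([8, 48, 3000])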


Subject(s)
Cardiac Arrhythmias, Neural Networks (Computer), Algorithms, Cardiac Arrhythmias/diagnosis, Electrocardiography, Heart Rate, Humans
17.
Opt Express ; 28(6): 8132-8144, 2020 Mar 16.
Article in English | MEDLINE | ID: mdl-32225444

ABSTRACT

Photon-limited imaging techniques are desirable for capturing and reconstructing images from a small number of detected photons. However, achieving high photon efficiency is still a challenge. Here, we propose a novel photon-limited imaging technique that exploits the consistency between the photon detection probability in a single pulse and the light intensity distribution in a single-pixel correlated imaging system. We demonstrate theoretically and experimentally that our technique can reconstruct a high-quality 3D image using only one pulse per frame, thereby achieving a high photon efficiency of 0.01 detected photons per pixel. Long-distance field experiments on a 100 km cooperative target and a 3 km practical target are conducted to verify its feasibility. Compared with conventional single-pixel imaging, which requires hundreds or thousands of pulses per frame, our technique saves two orders of magnitude in total light power consumption and acquisition time.
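
The underlying correlation-imaging principle can be illustrated with the short simulation below: a toy scene is probed with random patterns, a single "bucket" value is recorded per pattern, and the image is recovered from the covariance between bucket values and patterns. This shows only the basic correlation idea; the photon-limited, single-pulse 3D scheme in the paper is far more involved, and all values here are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    h = w = 32
    scene = np.zeros((h, w))
    scene[8:24, 12:20] = 1.0                                 # simple rectangular target
    patterns = rng.random((5000, h, w))                      # random illumination patterns
    buckets = (patterns * scene).sum(axis=(1, 2))            # single-pixel measurements

    # covariance between bucket values and patterns recovers the scene
    recon = (buckets[:, None, None] * patterns).mean(axis=0) \
            - buckets.mean() * patterns.mean(axis=0)
    print(recon[16, 16] > recon[0, 0])                       # target pixel brighter than background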

18.
Sci Total Environ ; 713: 136663, 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-31958735

ABSTRACT

The ash content of municipal sewage sludge is generally high. However, the manner in which the composition of the ash affects biochar properties and sorption remains unclear. In this work, sewage sludge from two cities, Chongqing and Kunming, was pyrolyzed at different temperatures to produce biochar. The physicochemical properties of the biochar were investigated by bulk chemical characterization (FTIR, XPS, Raman analysis, and elemental analysis) and benzene polycarboxylic acid (BPCA) molecular biomarkers, after which they were correlated with sorption characteristics. Compared with biochar from Chongqing sewage sludge (CSS), biochar from Kunming sewage sludge (KSS) showed stronger polarity, a larger specific surface area (SSA), and more functional groups, but a lower degree of graphitization and aromatization. These differences may result from the higher aluminum (Al) content of KSS. The single-point sorption coefficient (Kd) values of biochar derived from CSS and KSS were analyzed together. Kd was positively correlated with the SSA and pore volume of the sewage sludge and of the biochar produced at 200-300 °C. For biochar produced at 300-700 °C, Kd was positively correlated with the O content, O/C, and (O + N)/C. A pyrolysis temperature of 300 °C was a threshold for Cu(II) sorption onto biochar, at which there was a balance between decreasing oxygen-containing functional groups and increasing SSA. The findings of this study show that a higher Al content in sewage sludge was beneficial to pore-volume enlargement and functional-group retention during pyrolysis.


Subject(s)
Sewage, Charcoal, Cities, Temperature
19.
IEEE Trans Neural Netw Learn Syst ; 31(8): 2779-2790, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31751253

ABSTRACT

In this article, a discriminative fast hierarchical learning algorithm is developed for supporting multiclass image classification, where a visual tree is seamlessly integrated with multitask learning to achieve fast training of the tree classifier hierarchically (i.e., a set of structural node classifiers over the visual tree). By partitioning a large number of categories hierarchically in a coarse-to-fine fashion, a visual tree is first constructed and used to handle data imbalance and identify interrelated learning tasks automatically (e.g., the tasks for learning the node classifiers for sibling child nodes under the same parent node are strongly interrelated). A multitask SVM classifier is then trained for each nonleaf node to achieve more effective separation of its sibling child nodes at the next level of the visual tree. Both the internode visual similarities and the interlevel visual correlations are utilized to train more discriminative multitask SVM classifiers and to control interlevel error propagation effectively, and a stochastic gradient descent (SGD) algorithm is developed to learn such multitask SVM classifiers with higher efficiency. Our experimental results demonstrate that our fast hierarchical learning algorithm achieves very competitive results in both classification accuracy and computational efficiency.
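
For reference, the sketch below trains a single linear SVM node classifier with plain stochastic subgradient descent on the hinge loss (labels in {-1, +1}) using synthetic data; the multitask coupling across sibling nodes and tree levels described above is omitted, and the step size and regularization constant are arbitrary choices.

    import numpy as np

    def sgd_linear_svm(X, y, lam=1e-4, lr=0.01, epochs=20):
        """Stochastic subgradient descent on lam/2*||w||^2 + mean hinge loss."""
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            for i in np.random.permutation(n):
                margin = y[i] * (X[i] @ w + b)
                grad_w = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
                grad_b = -y[i] if margin < 1 else 0.0
                w -= lr * grad_w
                b -= lr * grad_b
        return w, b

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
    y = np.array([-1] * 200 + [1] * 200)
    w, b = sgd_linear_svm(X, y)
    print((np.sign(X @ w + b) == y).mean())        # training accuracy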

20.
Article in English | MEDLINE | ID: mdl-31562088

ABSTRACT

There are two key components that can be leveraged for visual tracking: (a) object appearances and (b) object motions. Many existing techniques have recently employed deep learning to enhance visual tracking due to its superior representation power and strong learning ability; most of them exploit object appearances, but few exploit object motions. In this work, a deep spatial and temporal network (DSTN) is developed for visual tracking by explicitly exploiting both the object representations from each frame and their dynamics across multiple frames in a video, so that it can seamlessly integrate object appearances with their motions to produce compact object appearances and capture their temporal variations effectively. Our DSTN method, which is deployed into a tracking pipeline in a coarse-to-fine form, can perceive the subtle differences in the spatial and temporal variations of the target (the object being tracked), and thus benefits from both offline training and online fine-tuning. We have conducted experiments on four of the largest tracking benchmarks, including OTB-2013, OTB-2015, VOT2015, and VOT2017, and our experimental results demonstrate that our DSTN method achieves competitive performance compared with state-of-the-art techniques. The source code, trained models, and all experimental results of this work will be made publicly available to facilitate further studies on this problem.
