Results 1 - 20 of 47
1.
Ophthalmic Physiol Opt ; 44(2): 378-387, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38149468

ABSTRACT

PURPOSE: Evidence suggests that eye movements have potential as a tool for detecting glaucomatous visual field defects. This study evaluated the influence of sampling frequency on eye movement parameters in detecting glaucomatous visual field defects during a free-viewing task. METHODS: We investigated eye movements in two sets of experiments: (a) young adults with and without simulated visual field defects and (b) glaucoma patients and age-matched controls. In Experiment 1, we recruited 30 healthy volunteers. Among these, 10 performed the task with a gaze-contingent superior arcuate (SARC) scotoma, 10 performed the task with a gaze-contingent biarcuate (BARC) scotoma and 10 performed the task without a simulated scotoma (NoSim). The experimental task involved participants freely exploring 100 images, each for 4 s. Eye movements were recorded using the LiveTrack Lightning eye-tracker (500 Hz). In Experiment 2, we recruited 20 glaucoma patients and 16 age-matched controls. All participants underwent similar experimental tasks as in Experiment 1, except only 37 images were shown for exploration. To analyse the effect of sampling frequency, data were downsampled to 250, 120 and 60 Hz. Eye movement parameters, such as the number of fixations, fixation duration, saccadic amplitude and bivariate contour ellipse area (BCEA), were computed across various sampling frequencies. RESULTS: Two-way ANOVA revealed no significant effects of sampling frequency on fixation duration (simulation, p = 0.37; glaucoma patients, p = 0.95) and BCEA (simulation, p = 0.84; glaucoma patients: p = 0.91). BCEA showed good distinguishability in differentiating groups across different sampling frequencies, whereas fixation duration failed to distinguish between glaucoma patients and controls. Number of fixations and saccade amplitude showed variations with sampling frequency in both simulations and glaucoma patients. CONCLUSION: In both simulations and glaucoma patients, BCEA consistently differentiated them from controls across various sampling frequencies.
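
The bivariate contour ellipse area (BCEA) reported above can be computed directly from fixation coordinates, and the effect of sampling frequency can be approximated by keeping every n-th gaze sample. Below is a minimal Python sketch of both steps; the 68.2% coverage level, the synthetic gaze trace, and the simple keep-every-n-th downsampling are illustrative assumptions, not the study's processing pipeline.

    import numpy as np

    def downsample(gaze, original_hz, target_hz):
        """Approximate a lower sampling rate by keeping every n-th gaze sample."""
        step = int(round(original_hz / target_hz))
        return gaze[::step]

    def bcea(x, y, coverage=0.682):
        """Bivariate contour ellipse area (deg^2) enclosing `coverage` of gaze points."""
        k = -np.log(1.0 - coverage)            # ~1.14 for 68.2% coverage
        rho = np.corrcoef(x, y)[0, 1]          # correlation of horizontal/vertical gaze
        return 2.0 * k * np.pi * x.std(ddof=1) * y.std(ddof=1) * np.sqrt(1.0 - rho ** 2)

    # Synthetic 4-second fixation trace at 500 Hz (illustrative stand-in for real data).
    rng = np.random.default_rng(0)
    gaze = rng.normal(0.0, 0.3, size=(2000, 2))     # columns: x, y in degrees

    for hz in (500, 250, 120, 60):
        g = downsample(gaze, 500, hz)
        print(f"{hz:>3} Hz  BCEA = {bcea(g[:, 0], g[:, 1]):.3f} deg^2")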


Subjects
Glaucoma; Visual Fields; Young Adult; Humans; Scotoma; Eye Movements; Vision Disorders; Glaucoma/diagnosis
2.
Sensors (Basel) ; 24(5)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38474994

ABSTRACT

Graph neural networks (GNNs) have been proven to be an ideal approach to deal with irregular point clouds, but involve massive computations for searching neighboring points in the graph, which limits their application in large-scale LiDAR point cloud processing. Down-sampling is a straightforward and indispensable step in current GNN-based 3D detectors to reduce the computational burden of the model, but the commonly used down-sampling methods cannot distinguish the categories of the LiDAR points, which leads to an inability to effectively improve the computational efficiency of the GNN models without affecting their detection accuracy. In this paper, we propose (1) a LiDAR point cloud pre-segmented down-sampling (PSD) method that can selectively reduce background points while preserving the foreground object points during the process, greatly improving the computational efficiency of the model without affecting its 3D detection accuracy. (2) A lightweight GNN-based 3D detector that can extract point features and detect objects from the raw down-sampled LiDAR point cloud directly without any pre-transformation. We test the proposed model on the KITTI 3D Object Detection Benchmark, and the results demonstrate its effectiveness and efficiency for autonomous driving 3D object detection.
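
The core idea of pre-segmented down-sampling, thinning background points while keeping foreground object points, can be sketched in a few lines. The Python sketch below is a simplified stand-in for the proposed PSD method: it assumes a per-point foreground mask is already available from some cheap pre-segmentation, and the background keep-ratio is an illustrative parameter.

    import numpy as np

    def presegmented_downsample(points, foreground_mask, background_keep=0.1, seed=0):
        """Keep all foreground points; randomly retain a fraction of background points.

        points          : (N, 3) LiDAR coordinates
        foreground_mask : (N,) bool, True for (likely) object points
        background_keep : fraction of background points to retain
        """
        rng = np.random.default_rng(seed)
        fg = points[foreground_mask]
        bg = points[~foreground_mask]
        keep_idx = rng.choice(len(bg), size=int(len(bg) * background_keep), replace=False)
        return np.vstack([fg, bg[keep_idx]])

    # Illustrative call on a random cloud with a fake foreground mask.
    pts = np.random.rand(100_000, 3) * 50.0
    mask = pts[:, 2] > 49.0                    # pretend the tallest points are foreground
    reduced = presegmented_downsample(pts, mask)
    print(len(pts), "->", len(reduced), "points")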

3.
Evol Comput ; : 1-32, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38271633

ABSTRACT

Genetic Programming (GP) often uses large training sets and requires all individuals to be evaluated on all training cases during selection. Random down-sampled lexicase selection evaluates individuals on only a random subset of the training cases, allowing more individuals to be explored with the same number of program executions. However, sampling randomly can exclude important cases from the down-sample for a number of generations, while cases that measure the same behavior (synonymous cases) may be overused. In this work, we introduce Informed Down-Sampled Lexicase Selection. This method leverages population statistics to build down-samples that contain more distinct and therefore informative training cases. Through an empirical investigation across two different GP systems (PushGP and Grammar-Guided GP), we find that informed down-sampling significantly outperforms random down-sampling on a set of contemporary program synthesis benchmark problems. Through an analysis of the created down-samples, we find that important training cases are included in the down-sample consistently across independent evolutionary runs and systems. We hypothesize that this improvement can be attributed to the ability of Informed Down-Sampled Lexicase Selection to maintain more specialist individuals over the course of evolution, while still benefiting from reduced per-evaluation costs.
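
Down-sampled lexicase selection itself is compact to express: draw a subset of training cases each generation, then filter the population case by case. The sketch below implements the random-down-sampling variant; the informed variant described above would replace sample_cases with a draw biased toward distinct, informative cases, which is omitted here. The toy error matrix and parameters are illustrative assumptions.

    import random

    def sample_cases(num_cases, rate):
        """Random down-sample of training-case indices (the informed variant would
        bias this draw toward distinct, informative cases)."""
        return random.sample(range(num_cases), max(1, int(num_cases * rate)))

    def lexicase_select(errors, case_order):
        """errors[i][c] = error of individual i on case c (lower is better)."""
        candidates = list(range(len(errors)))
        for c in case_order:
            best = min(errors[i][c] for i in candidates)
            candidates = [i for i in candidates if errors[i][c] == best]
            if len(candidates) == 1:
                break
        return random.choice(candidates)

    def select_parent(errors, sample_rate=0.1):
        cases = sample_cases(len(errors[0]), sample_rate)
        random.shuffle(cases)                  # lexicase uses a random case ordering
        return lexicase_select(errors, cases)

    # Toy population of 5 individuals evaluated on 20 training cases.
    pop_errors = [[random.randint(0, 3) for _ in range(20)] for _ in range(5)]
    print("selected parent:", select_parent(pop_errors))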

4.
Entropy (Basel) ; 26(4)2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38667838

ABSTRACT

Recently, as portable diagnostic devices reach patients almost anywhere, point-of-care (PoC) imaging has become more convenient and more popular than traditional "bed imaging". Instant image segmentation, an important computer vision technology, is receiving increasing attention in PoC diagnosis. However, the image distortion caused by image preprocessing and the low resolution of medical images extracted by PoC devices are urgent problems that need to be solved. Moreover, more efficient feature representation is necessary in the design of instant image segmentation. In this paper, a new feature representation is proposed that considers the relationships among local features while requiring minimal parameters and lower computational complexity. Since a feature window sliding along a diagonal can capture more diverse features, a Diagonal-Axial Multi-Layer Perceptron is designed to obtain the global correlation among local features for a more comprehensive feature representation. Additionally, a new multi-scale feature fusion is proposed to integrate nonlinear features with linear ones, yielding a richer and more precise feature representation. In order to improve the generalization of the models, a dynamic residual spatial pyramid pooling based on various receptive fields is constructed according to different image sizes, which alleviates the influence of image distortion. The experimental results show that the proposed strategy performs better on instant image segmentation. Notably, it yields an average improvement of 1.31% in Dice over existing strategies on the BUSI, ISIC2018 and MoNuSeg datasets.

5.
Magn Reson Med ; 89(1): 299-307, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36089834

ABSTRACT

PURPOSE: Chemical exchange saturation transfer (CEST) MRI is promising for detecting dilute metabolites and microenvironment properties, and has been increasingly adopted in imaging disorders such as acute stroke and cancer. However, in vivo CEST MRI quantification remains challenging because routine asymmetry analysis (MTRasym) or Lorentzian decoupling measures a combined effect of the labile proton concentration and its exchange rate. Therefore, our study aimed to quantify amide proton concentration and exchange rate independently in a cardiac arrest-induced global ischemia rat model. METHODS: The amide proton CEST (APT) effect was decoupled from tissue water, macromolecular magnetization transfer, nuclear Overhauser enhancement, guanidinium, and amine protons using the image downsampling expedited adaptive least-squares (IDEAL) fitting algorithm on Z-spectra obtained under multiple RF saturation power levels, before and after global ischemia. Omega plot analysis was applied to determine amide proton concentration and exchange rate simultaneously. RESULTS: Global ischemia induced a significant APT signal drop from intact tissue. Using the modified omega plot analysis, we found that the amide proton exchange rate decreased from 29.6 ± 5.6 to 12.1 ± 1.3 s⁻¹ (P < 0.001), whereas the amide proton concentration showed little change (0.241 ± 0.035% vs. 0.202 ± 0.034%, P = 0.074) following global ischemia. CONCLUSION: Our study determined the labile proton concentration and exchange rate underlying in vivo APT MRI. The significant change in the exchange rate, but not the concentration, of amide protons demonstrates that the pH effect dominates the APT contrast during tissue ischemia.
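
Once the APT effect has been isolated at several saturation power levels, the omega-plot step reduces to a linear regression: under the commonly used simplification 1/APT = R1w/(fb·kb) + (R1w·kb/fb)·(1/ω1²), the slope-to-intercept ratio gives kb² and the intercept gives fb·kb. The sketch below assumes that simplified relation and synthetic inputs; it is not the IDEAL fitting or omega-plot code used in the study.

    import numpy as np

    def omega_plot(apt_effect, b1_uT, r1w=1.0 / 1.5):
        """Estimate exchange rate kb (s^-1) and labile proton fraction fb from the APT
        effect measured at several B1 levels, assuming the simplified relation
        1/APT = R1w/(fb*kb) + (R1w*kb/fb) * (1/omega1^2)."""
        omega1 = 2 * np.pi * 42.577 * np.asarray(b1_uT)   # rad/s (42.577 Hz per uT)
        slope, intercept = np.polyfit(1.0 / omega1 ** 2, 1.0 / np.asarray(apt_effect), 1)
        kb = np.sqrt(slope / intercept)                   # exchange rate
        fb = r1w / (intercept * kb)                       # labile proton fraction
        return kb, fb

    # Self-consistent synthetic example: simulate APT for kb = 30 s^-1, fb = 0.2%.
    b1 = np.array([0.5, 0.75, 1.0, 1.5, 2.0])             # microtesla
    kb_true, fb_true, r1w = 30.0, 0.002, 1.0 / 1.5
    omega1 = 2 * np.pi * 42.577 * b1
    apt = (fb_true * kb_true / r1w) * omega1 ** 2 / (omega1 ** 2 + kb_true ** 2)
    kb, fb = omega_plot(apt, b1)
    print(f"recovered kb ~ {kb:.1f} s^-1, fb ~ {100 * fb:.3f}%")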


Subjects
Magnetic Resonance Imaging; Protons; Animals; Rats; Magnetic Resonance Imaging/methods; Hydrogen-Ion Concentration; Amides/metabolism; Ischemia
6.
Sensors (Basel) ; 23(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850615

ABSTRACT

In view of the difficulty of using raw 3D point clouds for component detection in the railway field, this paper designs a deep learning-based point cloud segmentation model together with a point cloud preprocessing mechanism. First, a special preprocessing algorithm is designed to resolve the problems of noise points, acquisition errors, and large data volume in the actual point cloud model of the bolt. The algorithm applies adaptive weighted guided filtering to the point cloud for noise smoothing according to the noise characteristics. Then, while retaining the key points of the point cloud, the algorithm partitions the point cloud with an octree and carries out iterative farthest point sampling in each partition to obtain the standard point cloud model. The standard point cloud model is then subjected to hierarchical multi-scale feature extraction to obtain global features, which are combined with local features through a self-attention mechanism, while linear interpolation is used to further expand the perceptual field of the local features as a basis for segmentation; finally, the segmentation is completed. Experiments show that the proposed algorithm handles the scattered bolt point cloud well, segments the train bolt from the background, and achieves high segmentation accuracy, which has important practical significance for train safety detection.
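
The "octree partition plus iterative farthest point sampling" step of the preprocessing pipeline is straightforward to sketch: bucket the points into a coarse spatial grid, then run farthest point sampling inside each bucket. The Python sketch below is a simplified stand-in (a single-level grid rather than a full octree, and no guided filtering); the cell size and per-cell sample count are illustrative assumptions.

    import numpy as np

    def farthest_point_sampling(points, k):
        """Greedy FPS: iteratively pick the point farthest from those already chosen."""
        if len(points) <= k:
            return points
        chosen = [0]
        dist = np.linalg.norm(points - points[0], axis=1)
        for _ in range(k - 1):
            nxt = int(np.argmax(dist))
            chosen.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
        return points[chosen]

    def partitioned_fps(points, cell_size=0.1, samples_per_cell=32):
        """Single-level grid partition (octree stand-in) with per-cell FPS."""
        keys = np.floor(points / cell_size).astype(np.int64)
        kept = []
        for key in np.unique(keys, axis=0):
            cell = points[np.all(keys == key, axis=1)]
            kept.append(farthest_point_sampling(cell, samples_per_cell))
        return np.vstack(kept)

    cloud = np.random.rand(20_000, 3) * 0.5        # synthetic bolt-sized cloud (metres)
    print(cloud.shape, "->", partitioned_fps(cloud).shape)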

7.
Sensors (Basel) ; 23(5)2023 Feb 25.
Article in English | MEDLINE | ID: mdl-36904766

ABSTRACT

High-definition images covering entire large-scene construction sites are increasingly used for monitoring management. However, the transmission of high-definition images is a huge challenge for construction sites with harsh network conditions and scarce computing resources. Thus, an effective compressed sensing and reconstruction method for high-definition monitoring images is urgently needed. Although current deep learning-based image compressed sensing methods exhibit superior performance in recovering images from a reduced number of measurements, they still face difficulties in achieving efficient and accurate high-definition image compressed sensing with less memory usage and computational cost at large-scene construction sites. This paper investigated an efficient deep learning-based high-definition image compressed sensing framework (EHDCS-Net) for large-scene construction site monitoring, which consists of four parts, namely the sampling, initial recovery, deep recovery body, and recovery head subnets. This framework was exquisitely designed by rational organization of the convolutional, downsampling, and pixelshuffle layers based on the procedures of block-based compressed sensing. To effectively reduce memory occupation and computational cost, the framework utilized nonlinear transformations on downscaled feature maps in reconstructing images. Moreover, the efficient channel attention (ECA) module was introduced to further increase the nonlinear reconstruction capability on downscaled feature maps. The framework was tested on large-scene monitoring images from a real hydraulic engineering megaproject. Extensive experiments showed that the proposed EHDCS-Net framework not only used less memory and floating point operations (FLOPs), but it also achieved better reconstruction accuracy with faster recovery speed than other state-of-the-art deep learning-based image compressed sensing methods.
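
Block-based compressed sensing with a convolutional sampling layer and a PixelShuffle-based initial recovery can be expressed in a few PyTorch lines. The sketch below shows only that generic pattern, not the EHDCS-Net architecture (no deep recovery body, ECA module, or recovery head); the block size and sampling ratio are illustrative assumptions.

    import torch
    import torch.nn as nn

    class BlockCS(nn.Module):
        """Block-based compressed sensing: a stride-B conv takes the measurements,
        a 1x1 conv + PixelShuffle produces the initial reconstruction."""
        def __init__(self, block=32, ratio=0.1):
            super().__init__()
            m = max(1, int(ratio * block * block))        # measurements per block
            self.sample = nn.Conv2d(1, m, kernel_size=block, stride=block, bias=False)
            self.init_recover = nn.Sequential(
                nn.Conv2d(m, block * block, kernel_size=1, bias=False),
                nn.PixelShuffle(block),                   # (B*B, H/B, W/B) -> (1, H, W)
            )

        def forward(self, x):
            y = self.sample(x)                # compressed measurements
            return self.init_recover(y)       # coarse image estimate

    net = BlockCS()
    frame = torch.randn(1, 1, 256, 256)       # grayscale monitoring frame (illustrative)
    print(net(frame).shape)                   # torch.Size([1, 1, 256, 256])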

8.
Zhongguo Yi Liao Qi Xie Za Zhi ; 47(1): 38-42, 2023 Jan 30.
Article in Chinese | MEDLINE | ID: mdl-36752004

ABSTRACT

Accurate segmentation of retinal blood vessels is of great significance for diagnosing, preventing and detecting eye diseases. In recent years, the U-Net network and its many variants have reached an advanced level in the field of medical image segmentation. Most of these networks use simple max pooling to down-sample the intermediate feature maps, which easily loses part of the information, so this study proposes a simple and effective new down-sampling method, Pixel Fusion-pooling (PF-pooling), which fuses the information of adjacent pixels in the image well. The down-sampling method proposed in this study is a lightweight general module that can be effectively integrated into various network architectures based on convolutional operations. The experimental results on the DRIVE and STARE datasets show that, on STARE, the F1-score of the U-Net model using PF-pooling improved by 1.98%, the accuracy rate increased by 0.2%, and the sensitivity increased by 3.88%. The generalizability of the proposed module was verified by substituting it into different models: PF-pooling achieved performance improvements in both Dense-UNet and Res-UNet, showing good universality.
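
The abstract does not spell out the exact PF-pooling operation, but "fusing adjacent pixel information" during 2x down-sampling is commonly realised by rearranging each 2x2 neighbourhood into channels (pixel unshuffle / space-to-depth) and mixing them with a learned 1x1 convolution, so that no pixel is discarded the way max pooling discards three of every four. The PyTorch sketch below shows that pattern as an assumed approximation, not the published operator.

    import torch
    import torch.nn as nn

    class PixelFusionPool(nn.Module):
        """2x down-sampling that keeps every pixel: space-to-depth, then a learned 1x1
        fusion. An assumed approximation of pixel-fusion pooling, not the published op."""
        def __init__(self, channels):
            super().__init__()
            self.unshuffle = nn.PixelUnshuffle(2)          # (C, H, W) -> (4C, H/2, W/2)
            self.fuse = nn.Conv2d(4 * channels, channels, kernel_size=1)

        def forward(self, x):
            return self.fuse(self.unshuffle(x))

    # Drop-in replacement for nn.MaxPool2d(2) inside a U-Net encoder stage.
    x = torch.randn(1, 64, 128, 128)
    print(PixelFusionPool(64)(x).shape)        # torch.Size([1, 64, 64, 64])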


Subjects
Algorithms; Retinal Vessels; Image Processing, Computer-Assisted
9.
J Biomed Inform ; 130: 104093, 2022 06.
Article in English | MEDLINE | ID: mdl-35537690

ABSTRACT

Random noise, sampling biases, and batch effects often confound true biological variation in single-cell RNA-sequencing (scRNA-seq) data. Adjusting for such biases is key to robust discoveries in downstream analyses, such as cell clustering, gene selection and data integration. Here we propose a model-based downsampling algorithm based on minimal unbiased representative points (MURPXMBD). MURPXMBD is designed to retrieve a set of representative points by reducing gene-wise random independent errors, while retaining the covariance structure of biological origin, hence providing an unbiased representation of the cell population. Subsequent validation using benchmark datasets shows that MURPXMBD can improve the quality and accuracy of clustering algorithms, and thus facilitate the discovery of new cell types. In addition, MURPXMBD improves the performance of dataset integration algorithms. In summary, MURPXMBD serves as a useful noise-reduction method for single-cell sequencing analysis in biomedical studies.
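
As a rough stand-in for the representative-point idea (not the MURPXMBD model itself), a cell-by-gene matrix can be summarised by k-means centroids: each centroid averages a neighbourhood of cells, damping gene-wise independent noise while keeping the dominant covariance structure. The sketch below uses scikit-learn and synthetic data purely for illustration.

    import numpy as np
    from sklearn.cluster import KMeans

    def representative_points(expr, n_points=300, seed=0):
        """Summarise a (cells x genes) matrix by k-means centroids ('representative
        points'). A crude illustration only, not the MURPXMBD algorithm."""
        km = KMeans(n_clusters=n_points, n_init=5, random_state=seed).fit(expr)
        return km.cluster_centers_, km.labels_       # (n_points x genes), cell -> point

    # Synthetic scRNA-seq-like matrix: 5,000 cells x 200 genes, low-rank structure + noise.
    rng = np.random.default_rng(0)
    signal = rng.normal(size=(5_000, 20)) @ rng.normal(size=(20, 200))
    expr = signal + rng.normal(scale=2.0, size=signal.shape)     # gene-wise noise
    points, assignment = representative_points(expr)
    print(points.shape)                                          # (300, 200)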


Subjects
Single-Cell Analysis; Transcriptome; Algorithms; Cluster Analysis; Gene Expression Profiling/methods; Sequence Analysis, RNA/methods; Single-Cell Analysis/methods
10.
Entropy (Basel) ; 24(3)2022 Mar 11.
Article in English | MEDLINE | ID: mdl-35327905

ABSTRACT

Quantum machine learning is a promising application of quantum computing for data classification. However, most previous research has focused on binary classification, and there are few studies on multi-class classification. The major challenge comes from the limitations of near-term quantum devices in the number of qubits and the size of quantum circuits. In this paper, we propose a hybrid quantum neural network to implement multi-class classification of a real-world dataset. We use an average pooling downsampling strategy to reduce the dimensionality of samples, and we design a ladder-like parameterized quantum circuit to disentangle the input states. In addition, we adopt an all-qubit multi-observable measurement strategy to capture sufficient hidden information from the quantum system. The experimental results show that our algorithm outperforms the classical neural network and performs especially well on different multi-class datasets, offering insight into the application of quantum computing to real-world data on near-term quantum processors.
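
The dimensionality-reduction step described above, average-pooling an input so it fits the qubit budget of a near-term device, is simple to reproduce. The sketch below pools a 28x28 image down to 4x4 = 16 values, suitable for example for 16 qubits with angle encoding; the sizes are assumptions, and the quantum circuit itself is omitted.

    import numpy as np

    def average_pool(img, out_size=4):
        """Average-pool a square image down to out_size x out_size."""
        h, w = img.shape
        fh, fw = h // out_size, w // out_size
        img = img[: fh * out_size, : fw * out_size]          # trim to a clean multiple
        return img.reshape(out_size, fh, out_size, fw).mean(axis=(1, 3))

    # A 28x28 grayscale digit (random stand-in for an MNIST sample).
    sample = np.random.rand(28, 28)
    features = average_pool(sample)            # 4x4 = 16 values, one per qubit
    angles = np.pi * features.flatten()        # e.g. rotation angles for angle encoding
    print(angles.shape)                        # (16,)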

11.
J Biomed Inform ; 123: 103934, 2021 11.
Article in English | MEDLINE | ID: mdl-34666185

ABSTRACT

BACKGROUND: While cardiac pulsations are widely present within physiological and neuroimaging data, the extent to which this information can provide valid and reliable heart rate and heart rate variability (HRV) estimates is unknown. The objective of this study was to demonstrate how a slight temporal shift due to an insufficient sampling frequency can impact the validity/accuracy of derived cardiac metrics. METHODS: Twenty-two participants were instrumented with valid/reliable industry-standard or open-source electrocardiograms. Five-minute lead II recordings were collected at 1000 Hz in an upright orthostatic position. Following artifact removal, the 1000 Hz recording for each participant was downsampled to frequencies ranging from 2 to 500 Hz. The validity of each participant's downsampled recording was compared against their 1000 Hz recording ("reference-standard") using Bland-Altman plots with 95% limits of agreement (LOA), coefficient of variation (CoV), intraclass correlation coefficients, and adjusted r-squared values. RESULTS: Downsampled frequencies of ≥ 50 and ≥ 90 Hz produced highly robust measures with narrow log-transformed 95% LOA (<±0.01) and low CoV values (≤3.5%) for heart rate and HRV metrics, respectively. Below these thresholds, the log-transformed 95% LOA became wider (LOA range: ±0.1-1.9) and more variable (CoV range: 1.5-111.6%). CONCLUSION: These results provide an important consideration for obtaining cardiac information from physiological data. Compared to the "reference-standard" ECG, a seemingly negligible temporal shift of the systolic contraction (R wave) of more than 11 ms (90 Hz) away from its true value lessened the validity of the HRV metrics. Further research is warranted to determine the minimum sampling frequency required to obtain valid heart rate/HRV metrics from pulsatile waveforms.
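
The analysis pipeline described above (downsample the 1000 Hz ECG, re-detect R waves, derive HRV, and compare against the reference with Bland-Altman limits of agreement) can be sketched with NumPy/SciPy. The synthetic signal, the crude threshold peak detector, and the choice of RMSSD as the HRV metric below are illustrative assumptions, not the study's processing code.

    import numpy as np
    from scipy.signal import find_peaks

    def rr_intervals(ecg, fs):
        """Detect R peaks with a crude threshold detector; return RR intervals in ms."""
        peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
        return np.diff(peaks) / fs * 1000.0

    def rmssd(rr_ms):
        """Root mean square of successive RR differences (a common HRV metric)."""
        return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

    def limits_of_agreement(a, b):
        """Bland-Altman bias and 95% limits of agreement between paired measurements."""
        d = np.asarray(a) - np.asarray(b)
        return d.mean(), d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

    # Synthetic 5-minute "ECG" at 1000 Hz: Gaussian R-wave bumps at ~60 bpm with jitter.
    fs_ref, dur = 1000, 300
    rng = np.random.default_rng(1)
    beat_s = np.cumsum(rng.normal(1.0, 0.05, dur).clip(0.6))
    kernel = np.exp(-0.5 * (np.arange(-36, 37) / 12.0) ** 2)
    ecg = np.zeros(dur * fs_ref)
    for b in (beat_s[beat_s < dur - 1] * fs_ref).astype(int):
        ecg[b:b + len(kernel)] += kernel

    rr_ref = rr_intervals(ecg, fs_ref)                        # 1000 Hz reference
    for fs in (500, 250, 90, 50):
        step = fs_ref // fs
        rr_ds = rr_intervals(ecg[::step], fs_ref / step)
        n = min(len(rr_ref), len(rr_ds))
        bias, lo, hi = limits_of_agreement(rr_ds[:n], rr_ref[:n])
        print(f"{fs:>3} Hz  RMSSD {rmssd(rr_ds):5.1f} ms  LOA [{lo:.1f}, {hi:.1f}] ms")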


Subjects
Benchmarking; Electrocardiography; Heart Rate; Humans; Neuroimaging; Reproducibility of Results
12.
Entropy (Basel) ; 23(12)2021 Dec 09.
Article in English | MEDLINE | ID: mdl-34945961

ABSTRACT

Surface electromyography (sEMG) is a valuable technique that helps provide functional and structural information about the electric activity of muscles. As sEMG measures the output of complex living systems characterized by multiscale and nonlinear behaviors, Multiscale Permutation Entropy (MPE) is a suitable tool for capturing useful information from the ordinal patterns of sEMG time series. In a previous work, a theoretical comparison in terms of bias and variance of two MPE variants, namely the refined composite MPE (rcMPE) and the refined composite downsampling permutation entropy (rcDPE), was addressed. In the current paper, we assess the superiority of rcDPE over MPE and rcMPE when applied to real sEMG signals. Moreover, we demonstrate the capacity of rcDPE to quantify fatigue levels by using sEMG data recorded during a fatiguing exercise. The processing of four consecutive temporal segments, during biceps brachii exercise maintained at 70% of maximal voluntary contraction until exhaustion, shows that the 10th scale of rcDPE better differentiated the fatigue segments. This scale actually brings the raw sEMG data, initially sampled at 10 kHz, to the specific 0-500 Hz sEMG spectral band of interest, which finally reveals the inner complexity of the data. This study promotes good practices in the use of MPE complexity measures on real data.
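
The scale-10 operation referred to above is, in effect, decimation by a factor of 10 with an anti-aliasing low-pass filter, which maps a 10 kHz sEMG recording onto a 0-500 Hz analysis band. A minimal SciPy sketch of just that downsampling stage is below (the entropy computation itself is sketched under entry 16); the synthetic signal and filter defaults are illustrative assumptions.

    import numpy as np
    from scipy.signal import decimate

    fs = 10_000                                   # raw sEMG sampling rate (Hz)
    t = np.arange(0, 2.0, 1.0 / fs)
    # Synthetic "sEMG": band-limited activity around 120 Hz plus broadband noise.
    semg = np.sin(2 * np.pi * 120 * t) + 0.5 * np.random.randn(t.size)

    # Scale 10 as anti-aliased decimation: new rate 1 kHz, i.e. a 0-500 Hz analysis band.
    scale = 10
    semg_ds = decimate(semg, scale, ftype="fir", zero_phase=True)
    print(f"{fs} Hz -> {fs // scale} Hz (0-{fs // scale // 2} Hz band),",
          f"{semg.size} -> {semg_ds.size} samples")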

13.
BMC Med Inform Decis Mak ; 20(Suppl 3): 123, 2020 07 09.
Article in English | MEDLINE | ID: mdl-32646495

ABSTRACT

BACKGROUND: Electronic medical records (EMRs) contain a variety of valuable medical information about patients. If risk factors for disease can be recognized and extracted from the EMRs of patients with cardiovascular disease (CVD) and used to predict CVD, clinical texts can be processed automatically, improving the accuracy of support for the clinical diagnosis of CVD. As CVD becomes more prevalent worldwide, EMR-based CVD prediction has been studied by many researchers as a way to improve diagnostic efficiency. METHODS: This paper proposes an Enhanced Character-level Deep Convolutional Neural Network (EnDCNN) model for cardiovascular disease prediction. RESULTS: On a manually annotated Chinese EMR corpus, our risk factor identification and extraction model achieved an F-score of 0.9073, our prediction model achieved an F-score of 0.9516, and the prediction results are better than those of most previous methods. CONCLUSIONS: The character-level model based on text region embedding can map risk factors and their labels, as a unit, into a vector well, and downsampling plays a crucial role in improving the training efficiency of the deep CNN. Moreover, the pre-activation shortcut connections used in our model architecture remove the need for dimension matching during training.


Subjects
Cardiovascular Diseases; Cardiovascular Diseases/diagnosis; Electronic Health Records; Humans; Neural Networks, Computer
14.
Adv Water Resour ; 141, 2020 May 21.
Article in English | MEDLINE | ID: mdl-34366548

ABSTRACT

A tracer breakthrough curve (BTC) for each sampling station is the ultimate goal of every quantitative hydrologic tracing study, and dataset size can critically affect the BTC. Groundwater-tracing data obtained using in situ automatic sampling or detection devices may result in very high-density data sets. Data-dense tracer BTCs obtained using in situ devices and stored in dataloggers can result in visually cluttered overlapping data points. The relatively large amounts of data detected by high-frequency settings available on in situ devices and stored in dataloggers ensure that important tracer BTC features, such as data peaks, are not missed. Alternatively, such dense datasets can also be difficult to interpret. Even more difficult is the application of such dense data sets in solute-transport models, which may not be able to adequately reproduce tracer BTC shapes due to the overwhelming mass of data. One solution to the difficulties associated with analyzing, interpreting, and modeling dense data sets is the selective removal of blocks of data from the total dataset. Although it is possible to skip blocks of tracer BTC data in a periodic sense (data decimation) so as to lessen the size and density of the dataset, skipping or deleting blocks of data may also result in missing the important features that the high-frequency detection settings were intended to capture. Rather than removing, reducing, or reformulating data overlap, signal filtering and smoothing may be utilized, but smoothing errors (e.g., averaging errors, outliers, and potential time shifts) need to be considered. Fitting appropriate probability distributions to tracer BTCs may be used to describe typical tracer BTC shapes, which usually include long tails. Recognizing the probability distributions applicable to tracer BTCs can help in understanding some aspects of tracer migration.
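
The two dataset-reduction options weighed above, periodic decimation versus smoothing, are both one-liners on a dense breakthrough curve. The sketch below contrasts them on a synthetic log-normal-shaped BTC; the curve parameters, decimation factor, and window length are illustrative assumptions.

    import numpy as np

    # Dense synthetic tracer breakthrough curve: log-normal shape with a long tail.
    t = np.linspace(0.1, 200.0, 20_000)                   # hours, high-frequency logging
    btc = np.exp(-0.5 * ((np.log(t) - np.log(30.0)) / 0.5) ** 2) / t
    btc += np.random.normal(0.0, 0.0005, t.size)          # sensor noise

    # Option 1: periodic decimation (keep every 50th reading) -- may clip the peak.
    t_dec, btc_dec = t[::50], btc[::50]

    # Option 2: moving-average smoothing, then decimate -- damps noise but can flatten
    # or shift features such as the peak and the tail.
    win = 25
    btc_smooth = np.convolve(btc, np.ones(win) / win, mode="same")
    btc_sm = btc_smooth[::50]

    for name, y in [("raw", btc), ("decimated", btc_dec), ("smoothed+decimated", btc_sm)]:
        print(f"{name:>20}: n = {len(y):5d}, peak = {y.max():.4f}")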

15.
Sensors (Basel) ; 20(8)2020 Apr 20.
Article in English | MEDLINE | ID: mdl-32326059

ABSTRACT

Human interaction recognition technology is a hot topic in the field of computer vision, and its application prospects are very extensive. At present, there are many difficulties in human interaction recognition, such as the spatial complexity of human interaction, the differences in action characteristics at different time periods, and the complexity of interactive action features. These problems restrict the improvement of recognition accuracy. To investigate the differences in action characteristics at different time periods, we propose an improved fusion of time-phase features based on the Gaussian model to obtain video keyframes and remove the influence of a large amount of redundant information. Regarding the complexity of interactive action features, we propose a multi-feature fusion network algorithm based on parallel Inception and ResNet. This multi-feature fusion network not only reduces the number of network parameters but also improves network performance; it alleviates the network degradation caused by increased network depth and obtains higher classification accuracy. For the spatial complexity of human interaction, we combined the whole-video features with the individual-video features, making full use of the feature information of the interactive video. A human interaction recognition algorithm based on whole-individual detection is proposed, where the whole video contains the global features of both sides of the action, and the individual video contains the detailed features of a single person. Making full use of the feature information of the whole video and the individual videos is the main contribution of this paper to the field of human interaction recognition, and experimental results on the UT dataset (UT-Interaction dataset) showed that the accuracy of this method was 91.7%.


Subjects
Pattern Recognition, Automated; Algorithms; Humans; Neural Networks, Computer; Normal Distribution
16.
Entropy (Basel) ; 23(1)2020 Dec 28.
Article in English | MEDLINE | ID: mdl-33379184

ABSTRACT

Multiscale Permutation Entropy (MPE) analysis is a powerful ordinal tool in the measurement of information content of time series. MPE refinements, such as Composite MPE (cMPE) and Refined Composite MPE (rcMPE), greatly increase the precision of the entropy estimation by modifying the original method. Nonetheless, these techniques have only been proposed as algorithms, and are yet to be described from the theoretical perspective. Therefore, the purpose of this article is two-fold. First, we develop the statistical theory behind cMPE and rcMPE. Second, we propose an alternative method, Refined Composite Downsampling Permutation Entropy (rcDPE) to further increase the entropy estimation's precision. Although cMPE and rcMPE outperform MPE when applied on uncorrelated noise, the results are higher than our predictions due to inherent redundancies found in the composite algorithms. The rcDPE method, on the other hand, not only conforms to our theoretical predictions, but also greatly improves over the other methods, showing the smallest bias and variance. By using MPE, rcMPE and rcDPE to classify faults in bearing vibration signals, rcDPE outperforms the multiscaling methods, enhancing the difference between faulty and non-faulty signals, provided we apply a proper anti-aliasing low-pass filter at each time scale.
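
The distinction the article draws, coarse-graining (as in MPE) versus filtered downsampling (as in rcDPE), shows up directly in how each scale's series is built before permutation entropy is computed. The sketch below implements plain permutation entropy plus both scaling strategies; it follows the standard definitions rather than the authors' exact composite refinements, and the FIR anti-aliasing filter is an assumed choice.

    import math
    from itertools import permutations

    import numpy as np
    from scipy.signal import decimate

    def permutation_entropy(x, m=4, tau=1):
        """Normalised permutation entropy of order m (Bandt-Pompe ordinal patterns)."""
        counts = {p: 0 for p in permutations(range(m))}
        for i in range(len(x) - (m - 1) * tau):
            counts[tuple(np.argsort(x[i:i + m * tau:tau]))] += 1
        p = np.array([c for c in counts.values() if c > 0], dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log(p)) / math.log(math.factorial(m))

    def mpe_scale(x, s):
        """Coarse-graining (MPE): average non-overlapping windows of length s."""
        n = len(x) // s
        return x[: n * s].reshape(n, s).mean(axis=1)

    def dpe_scale(x, s):
        """Anti-aliased downsampling (the DPE idea, without the composite refinement)."""
        return decimate(x, s, ftype="fir", zero_phase=True) if s > 1 else x

    x = np.random.randn(20_000)                   # white-noise test signal
    for s in (1, 5, 10):
        print(f"scale {s:2d}  coarse-grained PE {permutation_entropy(mpe_scale(x, s)):.3f}"
              f"  downsampled PE {permutation_entropy(dpe_scale(x, s)):.3f}")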

17.
Magn Reson Med ; 81(1): 645-652, 2019 01.
Article in English | MEDLINE | ID: mdl-30058148

ABSTRACT

PURPOSE: Chemical exchange saturation transfer (CEST) MRI has been used for quantitative assessment of dilute metabolites and/or pH in disorders such as acute stroke and tumor. However, routine asymmetry analysis (MTRasym) may be confounded by concomitant effects such as semisolid macromolecular magnetization transfer (MT) and nuclear Overhauser enhancement. Resolving multiple contributions is essential for elucidating the origins of in vivo CEST contrast. METHODS: Here we used a newly proposed image downsampling expedited adaptive least-squares fitting on densely sampled Z-spectra to quantify multipool contributions from water, nuclear Overhauser enhancement, MT, guanidinium, amine, and amide protons in adult male Wistar rats before and after global ischemia. RESULTS: Our results revealed that the major contributors to the in vivo T1-normalized MTRasym (3.5 ppm) contrast between white and gray matter (WM/GM) in normal brain (-1.96%/second) are pH-insensitive macromolecular MT (-0.89%/second) and nuclear Overhauser enhancement (-1.04%/second). Additionally, global ischemia resulted in significant changes of MTRasym, being -2.05%/second and -1.56%/second in WM and GM, which are dominated by changes in amide (-1.05%/second, -1.14%/second) and MT (-0.88%/second, -0.62%/second). Notably, the pH-sensitive amine and amide effects account for nearly 60% and 80% of the MTRasym changes seen in WM and GM, respectively, after global ischemia, indicating that MTRasym is predominantly pH-sensitive. CONCLUSION: Combined amide and amine effects dominated the MTRasym changes after global ischemia, indicating that MTRasym is predominantly pH-sensitive and suitable for detecting tissue acidosis following acute stroke.
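
Decomposing a densely sampled Z-spectrum into multiple Lorentzian pools is, at its core, a non-linear least-squares fit; the IDEAL scheme above additionally initialises that fit from heavily downsampled images, which is omitted here. The sketch below fits a reduced three-pool model (water, MT, amide) with SciPy, using a Lorentzian MT line for simplicity; the offsets, amplitudes, and widths are illustrative assumptions, not the study's parameters.

    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(offset, amp, center, width):
        return amp * (width / 2) ** 2 / ((offset - center) ** 2 + (width / 2) ** 2)

    def z_model(offset, a_w, w_w, a_mt, w_mt, a_apt, w_apt):
        """Three-pool model: Z = 1 - (water + MT + amide) Lorentzians, centers fixed
        at 0, -2.4 and 3.5 ppm."""
        return 1.0 - (lorentzian(offset, a_w, 0.0, w_w)
                      + lorentzian(offset, a_mt, -2.4, w_mt)
                      + lorentzian(offset, a_apt, 3.5, w_apt))

    offsets = np.linspace(-6, 6, 121)                      # densely sampled Z-spectrum (ppm)
    truth = z_model(offsets, 0.8, 2.0, 0.10, 25.0, 0.04, 1.5)
    z = truth + np.random.normal(0, 0.005, offsets.size)   # synthetic measurement

    p0 = [0.7, 1.5, 0.05, 20.0, 0.02, 2.0]
    popt, _ = curve_fit(z_model, offsets, z, p0=p0,
                        bounds=([0, 0.1, 0, 5, 0, 0.2], [1, 10, 1, 80, 0.5, 5]))
    print(f"fitted amide amplitude ~ {popt[4]:.3f}, width ~ {popt[5]:.2f} ppm")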


Subjects
Amides/chemistry; Brain Ischemia/diagnostic imaging; Brain/diagnostic imaging; Magnetic Resonance Imaging; Acidosis; Algorithms; Animals; Brain Mapping; Humans; Hydrogen-Ion Concentration; Image Interpretation, Computer-Assisted/methods; Ischemia; Least-Squares Analysis; Male; Protons; Rats; Rats, Wistar; Signal Processing, Computer-Assisted; White Matter/diagnostic imaging
18.
BMC Biol ; 16(1): 113, 2018 10 11.
Article in English | MEDLINE | ID: mdl-30309354

ABSTRACT

BACKGROUND: High throughput methods for profiling the transcriptomes of single cells have recently emerged as transformative approaches for large-scale population surveys of cellular diversity in heterogeneous primary tissues. However, the efficient generation of such atlases will depend on sufficient sampling of diverse cell types while remaining cost-effective to enable a comprehensive examination of organs, developmental stages, and individuals. RESULTS: To examine the relationship between sampled cell numbers and transcriptional heterogeneity in the context of unbiased cell type classification, we explored the population structure of a publicly available 1.3 million cell dataset from E18.5 mouse brain and validated our findings in published data from adult mice. We propose a computational framework for inferring the saturation point of cluster discovery in a single-cell mRNA-seq experiment, centered around cluster preservation in downsampled datasets. In addition, we introduce a "complexity index," which characterizes the heterogeneity of cells in a given dataset. Using Cajal-Retzius cells as an example of a limited complexity dataset, we explored whether the detected biological distinctions relate to technical clustering. Surprisingly, we found that clustering distinctions carrying biologically interpretable meaning are achieved with far fewer cells than the originally sampled, though technical saturation of rare populations such as Cajal-Retzius cells is not achieved. We additionally validated these findings with a recently published atlas of cell types across mouse organs and again find using subsampling that a much smaller number of cells recapitulates the cluster distinctions of the complete dataset. CONCLUSIONS: Together, these findings suggest that most of the biologically interpretable cell types from the 1.3 million cell database can be recapitulated by analyzing 50,000 randomly selected cells, indicating that instead of profiling few individuals at high "cellular coverage," cell atlas studies may instead benefit from profiling more individuals, or many time points at lower cellular coverage and then further enriching for populations of interest. This strategy is ideal for scenarios where cost and time are limited, though extremely rare populations of interest (< 1%) may be identifiable only with much higher cell numbers.
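
The subsampling experiment described above, cluster the full dataset, cluster a random subset, and ask whether the subset recapitulates the same partitions, can be sketched with scikit-learn using the adjusted Rand index as the agreement score. The synthetic mixture below stands in for a real expression matrix; the paper's framework uses its own clustering pipeline and cluster-preservation criteria.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import adjusted_rand_score

    # Synthetic "cells": 50,000 points in a 30-dimensional PCA-like space, 12 latent types.
    X, _ = make_blobs(n_samples=50_000, n_features=30, centers=12, cluster_std=2.0,
                      random_state=0)
    full_labels = KMeans(n_clusters=12, n_init=5, random_state=0).fit_predict(X)

    rng = np.random.default_rng(0)
    for n in (50_000, 10_000, 2_000, 500):
        idx = rng.choice(len(X), size=n, replace=False)
        sub_labels = KMeans(n_clusters=12, n_init=5, random_state=0).fit_predict(X[idx])
        print(f"{n:>6} cells  ARI vs full clustering = "
              f"{adjusted_rand_score(full_labels[idx], sub_labels):.3f}")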


Subjects
Brain/physiology; Gene Expression Profiling/methods; High-Throughput Screening Assays/methods; Single-Cell Analysis/methods; Animals; Mice; Sampling Studies
19.
Sensors (Basel) ; 19(24)2019 Dec 05.
Article in English | MEDLINE | ID: mdl-31817463

ABSTRACT

Transmission multispectral imaging (TMI) has potential value for medical applications, such as early screening for breast cancer. However, because biological tissue has strong scattering and absorption characteristics, the heterogeneity detection capability of TMI is poor. Many techniques, such as frame accumulation and shape function signal modulation/demodulation techniques, can improve detection accuracy. In this work, we develop a heterogeneity detection method by combining the contour features and spectral features of TMI. Firstly, the acquisition experiment of the phantom multispectral images was designed. Secondly, the signal-to-noise ratio (SNR) and grayscale level were improved by combining frame accumulation with shape function signal modulation and demodulation techniques. Then, an image exponential downsampling pyramid and Laplace operator were used to roughly extract and fuse the contours of all heterogeneities in images produced by a variety of wavelengths. Finally, we used the hypothesis of invariant parameters to do heterogeneity classification. Experimental results show that these invariant parameters can effectively distinguish heterogeneities with various thicknesses. Moreover, this method may provide a reference for heterogeneity detection in TMI.
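
The contour-extraction step, building an exponential down-sampling pyramid and applying the Laplace operator at each level, can be sketched with scipy.ndimage. The synthetic "heterogeneity" image, the number of levels, and the fuse-by-maximum choice below are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def pyramid_contours(img, levels=3):
        """Down-sample by 2 at each level, take |Laplacian| as a rough contour map,
        upsample every level back to full size, and fuse the levels by maximum."""
        maps, current = [], img.astype(float)
        for _ in range(levels):
            edges = np.abs(ndimage.laplace(ndimage.gaussian_filter(current, 1.0)))
            zoom = (img.shape[0] / current.shape[0], img.shape[1] / current.shape[1])
            maps.append(ndimage.zoom(edges, zoom, order=1))
            current = ndimage.zoom(current, 0.5, order=1)    # exponential downsampling
        return np.max(maps, axis=0)

    # Synthetic transmission image: bright background with a dimmer circular heterogeneity.
    yy, xx = np.mgrid[:256, :256]
    img = 0.8 * np.ones((256, 256))
    img[(yy - 128) ** 2 + (xx - 140) ** 2 < 30 ** 2] = 0.5
    img += np.random.normal(0, 0.02, img.shape)

    contours = pyramid_contours(img)
    print(contours.shape, float(contours.max()))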


Subjects
Image Processing, Computer-Assisted/methods; Algorithms; Animals; Cucurbita/chemistry; Meat/analysis; Signal-To-Noise Ratio; Solanum tuberosum/chemistry; Swine
20.
Sensors (Basel) ; 19(16)2019 Aug 07.
Article in English | MEDLINE | ID: mdl-31394773

ABSTRACT

Data compression is a useful method to reduce the communication energy consumption in wireless sensor networks (WSNs). Most existing neural network compression methods focus on improving compression and reconstruction accuracy (i.e., increasing parameters and layers), ignoring the computation consumption of the network and its applicability in WSNs. In contrast, we pay attention to the computation consumption and applicability of neural networks, and propose an extremely simple and efficient neural network data compression model. The model combines the feature extraction advantages of the Convolutional Neural Network (CNN) with the data generation ability of the Variational Autoencoder (VAE) and Restricted Boltzmann Machine (RBM); we call it CBN-VAE. In particular, we propose a new efficient convolutional structure: the Downsampling-Convolutional RBM (D-CRBM), and use it to replace the standard convolution to reduce parameters and computational consumption. Specifically, we use the VAE model composed of multiple D-CRBM layers to learn the hidden mathematical features of the sensing data, and use these features to compress and reconstruct the sensing data. We test the performance of the model using various real-world WSN datasets. Under the same network size, compared with the CNN, the parameters of the CBN-VAE model are reduced by 73.88% and the floating-point operations (FLOPs) are reduced by 96.43% with negligible accuracy loss. Compared with traditional neural networks, the proposed model is more suitable for application on nodes in WSNs. For the Intel Lab temperature data, the average Signal-to-Noise Ratio (SNR) of the model reaches 32.51 dB and the average reconstruction error is 0.0678 °C. The node communication energy consumption can be reduced by 95.83%. Compared with traditional compression methods, the proposed model has better compression and reconstruction accuracy. At the same time, the experimental results show that the model has good fault detection performance and anti-noise ability: when reconstructing data, the model can effectively exclude faulty and noisy data.
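
The D-CRBM layer above replaces a standard convolution with one that downsamples as it convolves, cutting parameters and FLOPs; the full CBN-VAE additionally wraps such layers in a VAE/RBM training scheme that is out of scope here. As a rough stand-in, the PyTorch sketch below compresses a 1-D sensor window with strided convolutions and reconstructs it with transposed convolutions; the window length, channel counts, and latent size are illustrative assumptions, not the published model.

    import torch
    import torch.nn as nn

    class TinySensorCodec(nn.Module):
        """Strided-conv encoder / transposed-conv decoder for 1-D sensor windows.
        A simplified stand-in for downsampling-convolution compression, not CBN-VAE."""
        def __init__(self, window=120, latent=8):
            super().__init__()
            self.encoder = nn.Sequential(                    # 120 -> 60 -> 30 samples
                nn.Conv1d(1, 4, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv1d(4, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Flatten(), nn.Linear(8 * (window // 4), latent),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent, 8 * (window // 4)), nn.ReLU(),
                nn.Unflatten(1, (8, window // 4)),
                nn.ConvTranspose1d(8, 4, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose1d(4, 1, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    batch = torch.randn(16, 1, 120)      # windows of temperature readings (illustrative)
    model = TinySensorCodec()
    print(model(batch).shape, "- each 120-sample window compressed to 8 latent values")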
