Results 1 - 20 of 289
1.
BMC Bioinformatics ; 25(1): 80, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378440

ABSTRACT

BACKGROUND: With the increase in dimensionality of flow cytometry data over the past years, there is a growing need to replace or complement traditional manual analysis (i.e. iterative 2D gating) with automated data analysis pipelines. A crucial part of these pipelines consists of pre-processing and applying quality control filtering to the raw data, in order to use high-quality events in the downstream analyses. This part can in turn be split into a number of elementary steps: signal compensation or unmixing, scale transformation, removal of debris, doublets and dead cells, batch effect correction, etc. However, assembling and assessing the pre-processing part can be challenging for a number of reasons. First, each of the involved elementary steps can be implemented using various methods and R packages. Second, the order of the steps can have an impact on the downstream analysis results. Finally, each method typically comes with its specific, non-standardized diagnostics and visualizations, making objective comparison difficult for the end user. RESULTS: Here, we present CytoPipeline and CytoPipelineGUI, two R packages to build, compare and assess pre-processing pipelines for flow cytometry data. To exemplify these new tools, we present the steps involved in designing a pre-processing pipeline on a real-life dataset and demonstrate different visual assessment use cases. We also set up a benchmark comparing two pre-processing pipelines that differ in their quality control methods, and show how the packages' visualization utilities can provide crucial insight into the obtained benchmark metrics. CONCLUSION: CytoPipeline and CytoPipelineGUI are two Bioconductor R packages that help build, visualize and assess pre-processing pipelines for flow cytometry data. They increase productivity during pipeline development and testing, and complement benchmarking tools by providing intuitive user insight into benchmarking results.
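[Editor's note: CytoPipeline itself is an R/Bioconductor package; as a language-neutral illustration of the core idea described here — pipelines as ordered, swappable elementary steps whose intermediate results can be compared — a minimal Python sketch follows. All names are hypothetical and are not the CytoPipeline API.]

```python
# Illustrative sketch only: a pipeline as an ordered list of swappable steps,
# mirroring the abstract's idea; none of these names are the CytoPipeline API.

def compensate(events):   # placeholder elementary step
    return events

def transform(events):    # placeholder elementary step
    return events

def qc_method_a(events):  # hypothetical quality control method A
    return [e for e in events if e["quality"] > 0.5]

def qc_method_b(events):  # hypothetical quality control method B
    return [e for e in events if e["quality"] > 0.7]

def run_pipeline(events, steps):
    """Apply steps in order, keeping intermediate results for comparison."""
    intermediates = [("raw", events)]
    for step in steps:
        events = step(events)
        intermediates.append((step.__name__, events))
    return intermediates

raw = [{"quality": q / 10} for q in range(1, 11)]
for qc in (qc_method_a, qc_method_b):
    result = run_pipeline(raw, [compensate, transform, qc])
    # Compare the two pipelines by how many events each step retains.
    print([(name, len(ev)) for name, ev in result])
```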


Subjects
Data Analysis, Software, Flow Cytometry/methods
2.
Hum Brain Mapp ; 45(3): e26632, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379519

ABSTRACT

Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. The idea of estimating the chronological age from magnetic resonance images proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and substituted it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show an improved performance of the reframed global model on the UKB sample with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the workings of the algorithm show meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease-specific patterns for different levels of impairment. The results demonstrate that the new improved algorithms provide reliable and valid brain age estimations.
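[Editor's note: as a rough sketch of the modelling step described here — not the authors' code — Gaussian process regression can be fit to image-derived features to predict chronological age, with the BrainAGE score being predicted minus true age. Features, kernel and data below are assumptions.]

```python
# Hedged sketch of GPR-based brain age prediction (not the authors' pipeline).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                  # stand-in image-derived features
age = 40 + X[:, 0] * 5 + rng.normal(scale=2, size=200)  # synthetic ages

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:150], age[:150])

predicted = gpr.predict(X[150:])
brain_age_gap = predicted - age[150:]           # BrainAGE: predicted minus chronological age
print("MAE:", np.abs(brain_age_gap).mean())
```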


Subjects
Alzheimer Disease, Schizophrenia, Humans, Workflow, Brain/diagnostic imaging, Brain/pathology, Schizophrenia/diagnostic imaging, Schizophrenia/pathology, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Machine Learning, Magnetic Resonance Imaging/methods
3.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34472590

ABSTRACT

The emergence of single-cell RNA sequencing has facilitated the study of genomes, transcriptomes and proteomes. As single-cell RNA-seq datasets are released continuously, one of the major challenges facing traditional RNA analysis tools is the high-dimensional, high-sparsity, high-noise and large-scale character of single-cell RNA-seq data. Deep learning technologies match these characteristics well and offer unprecedented promise. Here, we give a systematic review of the most popular single-cell RNA-seq analysis methods and tools based on deep learning models, covering data preprocessing (quality control, normalization, data correction, dimensionality reduction and data visualization) and the clustering task for downstream analysis. We further evaluate the deep-model-based data correction and clustering methods quantitatively on 11 gold-standard datasets. Moreover, we discuss the data preferences of these methods and their limitations, and give suggestions and guidance for users in selecting appropriate methods and tools.
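[Editor's note: for readers unfamiliar with the pre-processing steps listed above, a typical classical workflow — sketched here with the scanpy toolkit, which this review does not specifically endorse — looks as follows; thresholds are illustrative.]

```python
# Typical scRNA-seq pre-processing and clustering sketched with scanpy;
# the review covers deep-learning variants of these same steps.
import scanpy as sc

adata = sc.datasets.pbmc3k()                  # small public example dataset

sc.pp.filter_cells(adata, min_genes=200)      # quality control
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)  # normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

sc.tl.pca(adata, n_comps=50)                  # dimensionality reduction
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                           # clustering (needs the leidenalg package)
print(adata.obs["leiden"].value_counts())
```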


Subjects
Deep Learning, Single-Cell Analysis, Cluster Analysis, Gene Expression Profiling/methods, Sequence Analysis, RNA/methods, Single-Cell Analysis/methods
4.
Magn Reson Med ; 91(2): 773-783, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37831659

ABSTRACT

PURPOSE: DTI characterizes tissue microstructure and provides proxy measures of nerve health. Echo-planar imaging is a popular method of acquiring DTI but is susceptible to various artifacts (e.g., susceptibility, motion, and eddy currents), which may be ameliorated via preprocessing. There are many pipelines available but limited data comparing their performance, which provides the rationale for this study. METHODS: DTI was acquired from the upper limb of healthy volunteers at 3T in blip-up and blip-down directions. Data were independently corrected using (i) FSL's TOPUP & eddy, (ii) FSL's TOPUP, (iii) DSI Studio, and (iv) TORTOISE. DTI metrics were extracted from the median, radial, and ulnar nerves and compared (between pipelines) using mixed-effects linear regression. The geometric similarity of the corrected b = 0 images and the slice-matched T1-weighted (T1w) images was computed using the Sørensen-Dice coefficient. RESULTS: Without preprocessing, the similarity coefficients of the blip-up and blip-down datasets to the T1w images were 0.80 and 0.79, respectively. Preprocessing improved the geometric similarity by 1%, with no difference between pipelines. Compared to TOPUP & eddy, DSI Studio and TORTOISE generated 2% and 6% lower estimates of fractional anisotropy, and 6% and 13% higher estimates of radial diffusivity, respectively. Estimates of anisotropy from TOPUP & eddy versus TOPUP were not different, but TOPUP reduced radial diffusivity by 3%. The agreement of DTI metrics between pipelines was poor. CONCLUSIONS: Preprocessing DTI from the upper limb improves geometric similarity, but the choice of pipeline introduces clinically important variability in diffusion parameter estimates from peripheral nerves.
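[Editor's note: the Sørensen-Dice coefficient used here to score geometric similarity is simple to compute from binary masks; a minimal version follows.]

```python
# Sørensen-Dice coefficient between two binary masks, as used in the study
# to compare corrected b=0 images with slice-matched T1w anatomy.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(round(dice(a, b), 3))   # ~0.694 for these overlapping squares
```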


Subjects
Diffusion Magnetic Resonance Imaging, Diffusion Tensor Imaging, Humans, Diffusion Tensor Imaging/methods, Diffusion Magnetic Resonance Imaging/methods, Peripheral Nerves, Upper Extremity/diagnostic imaging, Echo-Planar Imaging, Image Processing, Computer-Assisted/methods
5.
J Magn Reson Imaging ; 59(5): 1800-1806, 2024 May.
Article in English | MEDLINE | ID: mdl-37572098

ABSTRACT

BACKGROUND: Single-center MRI radiomics models are sensitive to data heterogeneity, limiting the diagnostic capabilities of current prostate cancer (PCa) radiomics models. PURPOSE: To study the impact of image resampling on the diagnostic performance of radiomics in a multicenter prostate MRI setting. STUDY TYPE: Retrospective. POPULATION: Nine hundred thirty patients (nine centers, two vendors) with 737 eligible PCa lesions, randomly split into training (70%, N = 500), validation (10%, N = 89), and a held-out test set (20%, N = 148). FIELD STRENGTH/SEQUENCE: 1.5T and 3T scanners/T2-weighted imaging (T2W), diffusion-weighted imaging (DWI), and apparent diffusion coefficient maps. ASSESSMENT: A total of 48 normalized radiomics datasets were created using various resampling methods, including different target resolutions (T2W: 0.35, 0.5, and 0.8 mm; DWI: 1.37, 2, and 2.5 mm), dimensionalities (2D/3D) and interpolation techniques (nearest neighbor, linear, B-spline and Blackman windowed-sinc). Each of the datasets was used to train a radiomics model to detect clinically relevant PCa (International Society of Urological Pathology grade ≥ 2). Baseline models were constructed using 2D and 3D datasets without image resampling. The resampling configurations with the highest validation performance were evaluated on the test dataset and compared to the baseline models. STATISTICAL TESTS: Area under the curve (AUC), DeLong test. The significance level used was 0.05. RESULTS: The best 2D resampling model (T2W: B-spline at 0.5 mm resolution; DWI: nearest neighbor at 2 mm resolution) significantly outperformed the 2D baseline (AUC: 0.77 vs. 0.64). The best 3D resampling model (T2W: linear at 0.8 mm resolution; DWI: nearest neighbor at 2.5 mm resolution) significantly outperformed the 3D baseline (AUC: 0.79 vs. 0.67). DATA CONCLUSION: Image resampling has a significant effect on the performance of multicenter radiomics artificial intelligence in prostate MRI. The recommended 2D resampling configuration is isotropic resampling with T2W at 0.5 mm (B-spline interpolation) and DWI at 2 mm (nearest neighbor interpolation). For 3D radiomics, this work recommends isotropic resampling with T2W at 0.8 mm (linear interpolation) and DWI at 2.5 mm (nearest neighbor interpolation). EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
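[Editor's note: to make the recommended configurations concrete, isotropic resampling of the kind benchmarked here can be done with SimpleITK; this is a hedged sketch under assumed file names, not the study's pipeline.]

```python
# Hedged sketch of isotropic resampling with SimpleITK (not the study's code):
# the recommended settings, T2W at 0.5 mm (B-spline) and DWI at 2 mm (nearest
# neighbor); input paths are hypothetical.
import SimpleITK as sitk

def resample_isotropic(image, spacing_mm, interpolator):
    old_spacing, old_size = image.GetSpacing(), image.GetSize()
    new_spacing = [spacing_mm] * image.GetDimension()
    new_size = [int(round(osz * osp / spacing_mm))
                for osz, osp in zip(old_size, old_spacing)]
    f = sitk.ResampleImageFilter()
    f.SetOutputSpacing(new_spacing)
    f.SetSize(new_size)
    f.SetOutputOrigin(image.GetOrigin())
    f.SetOutputDirection(image.GetDirection())
    f.SetInterpolator(interpolator)
    return f.Execute(image)

t2w = sitk.ReadImage("t2w.nii.gz")   # hypothetical input
t2w_05 = resample_isotropic(t2w, 0.5, sitk.sitkBSpline)
dwi = sitk.ReadImage("dwi.nii.gz")   # hypothetical input
dwi_2 = resample_isotropic(dwi, 2.0, sitk.sitkNearestNeighbor)
```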


Subjects
Prostate, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Prostate/pathology, Retrospective Studies, Artificial Intelligence, Radiomics, Magnetic Resonance Imaging/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology
6.
Anal Bioanal Chem ; 416(2): 373-386, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37946036

ABSTRACT

Continuous manufacturing is becoming increasingly important in the (bio-)pharmaceutical industry, as more product can be produced in less time and at lower costs. In this context, there is a need for powerful continuous analytical tools. Many established off-line analytical methods, such as mass spectrometry (MS), are hardly considered for process analytical technology (PAT) applications in biopharmaceutical processes, as they are limited to at-line analysis due to the required sample preparation and the associated complexity, although they would provide a suitable technique for the assessment of a wide range of quality attributes. In this study, we investigated the applicability of a recently developed micro simulated moving bed chromatography system (µSMB) for continuous on-line sample preparation for MS. As a test case, we demonstrate the continuous on-line MS measurement of a protein solution (myoglobin) containing Tris buffer, which interferes with ESI-MS measurements, by continuously exchanging this buffer with a volatile ammonium acetate buffer suitable for MS measurements. The integration of the µSMB significantly increases MS sensitivity by removing over 98% of the buffer substances. Thus, this study demonstrates the feasibility of on-line µSMB-MS, providing a versatile PAT tool by combining the detection power of MS for various product attributes with all the advantages of continuous on-line analytics.

7.
Network ; 35(2): 190-211, 2024 May.
Article in English | MEDLINE | ID: mdl-38155546

ABSTRACT

Nowadays, the Internet of things (IoT) and IoT platforms are extensively utilized in several healthcare applications. IoT devices produce a huge amount of data in the healthcare field that can be inspected on an IoT platform. In this paper, a novel algorithm, named the artificial flora optimization-based chameleon swarm algorithm (AFO-based CSA), is developed for optimal path finding. Here, data are collected by the sensors and transmitted to the base station (BS) using the proposed AFO-based CSA, which is derived by integrating artificial flora optimization (AFO) into the chameleon swarm algorithm (CSA). This integration combines the strengths and features of both AFO and CSA for optimal routing of medical data in IoT. Moreover, the proposed AFO-based CSA algorithm considers factors such as energy, delay, and distance for effective routing of data. At the BS, prediction is conducted, with stages including pre-processing, feature dimension reduction using Pearson's correlation, and disease detection by a recurrent neural network, which is trained by the proposed AFO-based CSA. Experimental results showed that the performance of the proposed AFO-based CSA is superior to competitive approaches in terms of energy consumption (0.538 J), accuracy (0.950), sensitivity (0.965), and specificity (0.937).
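[Editor's note: the abstract states that routing considers energy, delay, and distance; a weighted fitness function of that kind — purely illustrative, with hypothetical weights, not the paper's formulation — could look like this.]

```python
# Illustrative sketch (not the paper's code): a weighted routing fitness of
# the kind described, combining energy, delay and distance. The weights
# w1..w3 are hypothetical and would be tuned by the metaheuristic in practice.

def routing_fitness(energy_j, delay_s, distance_m, w1=0.4, w2=0.3, w3=0.3):
    """Lower is better: cheap, fast, short routes score lowest."""
    return w1 * energy_j + w2 * delay_s + w3 * distance_m

candidate_routes = [(0.6, 0.02, 120.0), (0.5, 0.05, 90.0), (0.8, 0.01, 150.0)]
best = min(candidate_routes, key=lambda r: routing_fitness(*r))
print("selected route (energy, delay, distance):", best)
```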


Subjects
Deep Learning, Internet of Things, Algorithms, Health Facilities, Neural Networks (Computer)
8.
Graefes Arch Clin Exp Ophthalmol ; 262(7): 2247-2267, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38400856

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a serious eye complication that results in permanent vision damage. As the number of patients suffering from DR increases, so do delays in DR diagnosis and treatment. To bridge this gap, an efficient DR screening system that assists clinicians is required. Although many artificial intelligence (AI) screening systems have been deployed in recent years, accuracy remains a metric that can be improved. METHODS: An enumerative pre-processing approach is implemented in the deep learning model to attain better accuracy for DR severity grading. The proposed approach is compared with various pre-trained models, and the necessary performance metrics are tabulated. This paper also presents a comparative analysis of the various optimization algorithms utilized in the deep network model, and the results are outlined. RESULTS: Experiments were carried out on the MESSIDOR dataset to assess performance. The results show that the enumerative pipeline combination K1-K2-K3-DFNN-LOA outperforms the other combinations. Compared with various optimization algorithms and pre-trained models, the proposed model performs best, with maximum accuracy, precision, recall, F1 score, and macro-averaged metric of 97.60%, 94.60%, 98.40%, 94.60%, and 0.97, respectively. CONCLUSION: This study focused on developing and implementing a DR screening system for color fundus photographs. This artificial intelligence-based system offers the possibility of enhancing the efficacy and accessibility of DR diagnosis.


Subjects
Algorithms, Diabetic Retinopathy, Severity of Illness Index, Humans, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/classification, Deep Learning, Artificial Intelligence, Retina/pathology, Retina/diagnostic imaging, Reproducibility of Results, Male
9.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732843

ABSTRACT

As the number of electronic gadgets in our daily lives increases, and most of them require some kind of human interaction, innovative, convenient input methods are in demand. State-of-the-art (SotA) ultrasound-based hand gesture recognition (HGR) systems have limitations in terms of robustness and accuracy. This research presents a novel machine learning (ML)-based end-to-end solution for hand gesture recognition with low-cost micro-electromechanical system (MEMS) ultrasonic transducers. In contrast to prior methods, our ML model processes the raw echo samples directly instead of using pre-processed data. Consequently, the processing flow presented in this work leaves it to the ML model to extract the important information from the echo data. The success of this approach is demonstrated as follows. Four MEMS ultrasonic transducers are placed in three different geometrical arrangements. For each arrangement, different types of ML models are optimized and benchmarked on datasets acquired with the presented custom hardware (HW): convolutional neural networks (CNNs), gated recurrent units (GRUs), long short-term memory (LSTM), vision transformer (ViT), and cross-attention multi-scale vision transformer (CrossViT). The last three models reached more than 88% accuracy. The most important finding of this research is that little pre-processing is necessary to obtain high accuracy in ultrasonic HGR for several arrangements of cost-effective and low-power MEMS ultrasonic transducer arrays; even the computationally intensive Fourier transform can be omitted. The presented approach is further compared to HGR systems using other sensor types such as vision, WiFi, radar, and state-of-the-art ultrasound-based HGR systems. Direct processing of the sensor signals by a compact model makes ultrasonic hand gesture recognition a true low-cost and power-efficient input method.
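[Editor's note: as the abstract emphasizes, the models consume raw echo samples directly; a compact 1D CNN of the general kind benchmarked could be sketched as follows. Layer sizes, sample counts and gesture count are assumptions, not the paper's architecture.]

```python
# Hedged sketch: a small 1D CNN classifying gestures from raw echo samples
# (layer sizes and gesture count are assumptions, not the paper's model).
import tensorflow as tf

n_samples, n_channels, n_gestures = 1024, 4, 8   # 4 MEMS transducers

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_samples, n_channels)),  # raw echoes, no FFT
    tf.keras.layers.Conv1D(16, kernel_size=9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_gestures, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```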


Subjects
Gestures, Hand, Machine Learning, Neural Networks (Computer), Humans, Hand/physiology, Pattern Recognition, Automated/methods, Ultrasonography/methods, Ultrasonography/instrumentation, Ultrasonics/instrumentation, Algorithms
10.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732936

ABSTRACT

Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a challenge for classification due to their visual similarity, leading to confusion among radiologists. To mitigate those issues, we created an automated system with a large data hub that contains 17 datasets of chest X-ray images, 71,096 images in total, with the aim of classifying ten different disease classes. Because it combines various resources, the data hub contains noise and annotations, class imbalance, data redundancy, etc. We applied several image pre-processing techniques to eliminate noise and artifacts from the images, such as resizing, de-annotation, CLAHE, and filtering; an elastic deformation augmentation technique also generates a balanced dataset. We then developed DeepChestGNN, a novel medical image classification model utilizing a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture is very flexible when it comes to working with graph data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.
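[Editor's note: of the pre-processing steps listed (resizing, de-annotation, CLAHE, filtering), resizing and CLAHE are easy to illustrate with OpenCV; parameter values and file names below are assumptions, not the study's settings.]

```python
# Hedged sketch of two of the listed pre-processing steps, resizing and
# CLAHE contrast enhancement, using OpenCV; parameters are illustrative.
import cv2

img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
img = cv2.resize(img, (224, 224))

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

denoised = cv2.medianBlur(enhanced, 3)   # simple filtering step
cv2.imwrite("chest_xray_preprocessed.png", denoised)
```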


Subjects
Lung Diseases, Neural Networks (Computer), Humans, Lung Diseases/diagnostic imaging, Lung Diseases/diagnosis, Image Processing, Computer-Assisted/methods, Deep Learning, Algorithms, Lung/diagnostic imaging, Lung/pathology
11.
J Environ Manage ; 360: 121097, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733844

ABSTRACT

With high-frequency data on nitrate (NO3-N) concentrations in waters becoming increasingly important for understanding watershed system behavior and for ecosystem management, accurate and economical acquisition of high-frequency NO3-N concentration data has become a key issue. This study used deep learning neural networks together with routinely monitored data to predict hourly NO3-N concentrations in a river. The hourly NO3-N concentration at the outlet of the Oyster River watershed in New Hampshire, USA, was predicted through neural networks with a hybrid model architecture coupling a Convolutional Neural Network with a Long Short-Term Memory model (CNN-LSTM). The routinely monitored data (river depth, water temperature, air temperature, precipitation, specific conductivity, pH and dissolved oxygen concentrations) were collected from a nested high-frequency monitoring network for model training, while the high-frequency NO3-N concentration data obtained at the outlet were not included as inputs. The whole dataset was separated into training, validation, and testing sets at a ratio of 5:3:2. The hybrid CNN-LSTM model with different input lengths (1 d, 3 d, 7 d, 15 d, 30 d) displayed comparable or even better performance than studies at lower frequencies, with mean Nash-Sutcliffe Efficiency values of 0.60-0.83. Models with shorter input lengths demonstrated both higher modeling accuracy and stability. The water level, water temperature and pH values at monitoring sites were the main controlling factors for forecasting performance. This study provides new insight into using deep learning networks with a coupled architecture and routinely monitored data for high-frequency riverine NO3-N concentration forecasting, together with suggestions on variable and input-length selection during pre-processing of input data.
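[Editor's note: a minimal Keras sketch of the hybrid CNN-LSTM architecture described — seven routine inputs over an hourly window — follows; layer sizes and window length are assumptions, not the study's configuration.]

```python
# Hedged sketch of a CNN-LSTM regressor for hourly NO3-N forecasting from
# seven routine variables; layer sizes and window length are assumptions.
import tensorflow as tf

window_hours, n_features = 24, 7   # e.g. a 1-day input length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_hours, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),      # predicted NO3-N concentration
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```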


Subjects
Deep Learning, Neural Networks (Computer), Nitrates, Rivers, Nitrates/analysis, Rivers/chemistry, Environmental Monitoring/methods, Water Pollutants, Chemical/analysis, New Hampshire
12.
Electromagn Biol Med ; 43(1-2): 31-45, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38369844

ABSTRACT

This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a Self-Attention based Generative Adversarial Network (SAGAN) optimized with a Color Harmony Algorithm (CHA). Brain cancer, with its high fatality rate worldwide, especially in the case of brain tumors, necessitates more accurate and efficient classification methods. While existing deep learning approaches for brain tumor classification have been suggested, they often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a Mean Curvature Flow-based approach to eliminate noise. The pre-processed images then undergo the Improved Non-Subsampled Shearlet Transform (INSST) for extracting radiomic features. These features are fed into the SAGAN, which is optimized with the Color Harmony Algorithm to categorize the brain images into different tumor types, including glioma, meningioma, and pituitary tumors. This innovative approach shows promise in enhancing the precision and efficiency of brain tumor classification, holding potential for improved diagnostic outcomes in the field of medical imaging. The accuracy acquired for brain tumor identification with the proposed method is 99.29%. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09% and 7.34% higher accuracy, and 67.92%, 54.04% and 59.08% less computation time, when compared with existing models: brain tumor diagnosis utilizing a deep learning convolutional neural network with a transfer learning approach (BTC-KNN-SVM-MRI); M3BTCNet, multi-model brain tumor categorization under metaheuristic deep neural network feature optimization (BTC-CNN-DEMFOA-MRI); and an efficient method depending upon a hierarchical deep learning neural network classifier for brain tumour categorization (BTC-Hie DNN-MRI).


Subjects
Algorithms, Brain Neoplasms, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Brain Neoplasms/pathology, Humans, Image Processing, Computer-Assisted/methods, Color, Neural Networks (Computer), Deep Learning
13.
Environ Monit Assess ; 196(8): 724, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38990407

ABSTRACT

Analysis of changes in groundwater used as a drinking and irrigation water source is of critical importance for monitoring aquifers, planning water resources, energy production, combating climate change, and agricultural production. It is therefore necessary to model groundwater level (GWL) fluctuations to monitor and predict groundwater storage. Artificial intelligence-based models have become prevalent in water resource management due to their proven success in hydrological studies. This study proposed a hybrid model that combines an artificial neural network (ANN) and the artificial bee colony optimization (ABC) algorithm, along with the ensemble empirical mode decomposition (EEMD) and local mean decomposition (LMD) techniques, to model groundwater levels in Erzurum province, Türkiye. GWL estimation results were evaluated with mean square error (MSE), the coefficient of determination (R2), and the residual sum of squares (RSS), and visually with violin, scatter, and time series plots. The results indicated that the EEMD-ABC-ANN hybrid model was superior to the other models in estimating GWL, with R2 values ranging from 0.91 to 0.99 and MSE values ranging from 0.004 to 0.07. The study also revealed that promising GWL predictions can be made with previous GWL data.
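[Editor's note: the decomposition stage of such a hybrid can be reproduced with the PyEMD package (assumed API; installed as EMD-signal); each intrinsic mode function would then feed the ABC-tuned ANN. Data and settings below are illustrative.]

```python
# Hedged sketch of the EEMD decomposition stage using the PyEMD package
# (pip install EMD-signal); each IMF would then feed the ABC-optimized ANN.
import numpy as np
from PyEMD import EEMD

t = np.linspace(0, 10, 500)
# Synthetic stand-in for a groundwater level series: seasonal cycle + trend + noise.
gwl = np.sin(2 * np.pi * 0.5 * t) + 0.3 * t + 0.1 * np.random.randn(500)

eemd = EEMD(trials=100)
imfs = eemd.eemd(gwl, t)          # intrinsic mode functions + residual
print("IMFs extracted:", imfs.shape[0])
```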


Subjects
Environmental Monitoring, Groundwater, Neural Networks (Computer), Groundwater/chemistry, Bees, Animals, Environmental Monitoring/methods, Algorithms
14.
Brief Bioinform ; 22(5)2021 09 02.
Article in English | MEDLINE | ID: mdl-33822850

ABSTRACT

Next-generation sequencing (NGS) enables massively parallel acquisition of large-scale omics data; however, objective data quality filtering parameters are lacking. Although Phred values are a useful metric, evidence reveals that platform-generated values overestimate per-base quality. We have developed novel, empirically based algorithms that streamline NGS data quality filtering. The pipeline leverages known sequence motifs to enable empirical estimation of error rates, detection of erroneous base calls and removal of contaminating adapter sequence. The performance of motif-based error detection and quality filtering was further validated with read compression rates as an unbiased metric. Elevated error rates at read ends, where known motifs lie, tracked with the propagation of erroneous base calls. Barcode swapping, an inherent problem with pooled libraries, was also effectively mitigated. The ngsComposer pipeline is suitable for various NGS protocols and platforms due to the universal concepts on which its algorithms are based.
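[Editor's note: for reference, a Phred score Q maps to a claimed per-base error probability via P = 10^(-Q/10); empirical, motif-based error-rate estimates can be checked against these nominal values.]

```python
# Phred quality Q vs. the per-base error probability it claims: P = 10^(-Q/10).
# Empirical motif-based error rates can be compared against these values.
for q in (10, 20, 30, 40):
    print(f"Q{q}: expected error rate {10 ** (-q / 10):.4f}")
# Q20 -> 0.01 (1 error in 100 bases), Q30 -> 0.001 (1 in 1000), etc.
```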


Subjects
Algorithms, Computational Biology/methods, High-Throughput Nucleotide Sequencing/methods, Sequence Analysis, DNA/methods, Software, Computer Simulation, Humans, Reproducibility of Results
15.
Network ; 34(4): 374-391, 2023.
Article in English | MEDLINE | ID: mdl-37916510

ABSTRACT

The performance of time-series classification of electroencephalographic data varies strongly across experimental paradigms and study participants. Reasons include task-dependent differences in neuronal processing and seemingly random variation between subjects, amongst others. The effect of data pre-processing techniques in ameliorating these challenges is relatively little studied. Here, the influence of spatial filter optimization methods and non-linear data transformation on time-series classification performance is analyzed using the example of high-frequency somatosensory evoked responses. This is a model paradigm for the analysis of high-frequency electroencephalography data at a very low signal-to-noise ratio, which emphasizes the differences between the explored methods. For the data used, the individual signal-to-noise ratio explained up to 74% of the performance differences between subjects. While data pre-processing was shown to increase average time-series classification performance, it could not fully compensate for the signal-to-noise ratio differences between subjects. This study proposes an algorithm to prototype and benchmark pre-processing pipelines for a given paradigm and dataset. Extreme learning machines, Random Forest, and Logistic Regression can be used to quickly compare a set of potentially suitable pipelines; for subsequent classification, however, other machine learning models were shown to provide better accuracy.
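[Editor's note: the proposed prototyping idea — quickly scoring candidate pre-processing pipelines with fast classifiers before committing to one — might look like this in scikit-learn; pipelines and data here are stand-ins, not the study's code.]

```python
# Hedged sketch of the proposed benchmarking loop: score each candidate
# pre-processing pipeline with fast classifiers before committing to one.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))          # stand-in for epoch features
y = rng.integers(0, 2, size=200)        # stand-in class labels

pipelines = {
    "raw": lambda x: x,
    "abs_nonlinear": np.abs,            # stand-in non-linear transformation
    "zscore": lambda x: (x - x.mean(0)) / x.std(0),
}

for name, pre in pipelines.items():
    Xp = pre(X)
    for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
        score = cross_val_score(clf, Xp, y, cv=5).mean()
        print(f"{name:>14} + {type(clf).__name__:<22} acc={score:.2f}")
```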


Subjects
Algorithms, Electroencephalography, Humans, Electroencephalography/methods, Random Forest, Upper Extremity, Signal-To-Noise Ratio, Signal Processing, Computer-Assisted
16.
MAGMA ; 36(6): 945-956, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37556085

ABSTRACT

PURPOSE: To evaluate the reproducibility of radiomics features derived via different pre-processing settings from paired T2-weighted imaging (T2WI) prostate lesions acquired within a short interval, to select the setting that yields the highest number of reproducible features, and to evaluate the impact of disease characteristics (i.e., clinical variables) on feature reproducibility. MATERIALS AND METHODS: A dataset of 50 patients imaged using T2WI at two consecutive examinations was used. The dataset was pre-processed using 48 different settings. A total of 107 radiomics features were extracted from manual delineations of 74 lesions. The inter-scan reproducibility of each feature was measured using the intra-class correlation coefficient (ICC), with ICC values > 0.75 considered good. Statistical differences were assessed using Mann-Whitney U and Kruskal-Wallis tests. RESULTS: The pre-processing parameters strongly influenced the reproducibility of radiomics features of T2WI prostate lesions. The setting that yielded the highest number of reproducible features (25) was relative discretization with a fixed bin number of 64, no signal intensity normalization, and outlier filtering by exclusion. Disease characteristics did not significantly impact the reproducibility of radiomics features. CONCLUSION: The reproducibility of T2WI radiomics features was significantly influenced by pre-processing parameters, but not by disease characteristics. The selected pre-processing setting yielded 25 reproducible features.
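[Editor's note: inter-scan reproducibility of the kind measured here can be computed with, for example, the pingouin package's ICC implementation; this is a sketch under assumed column names and synthetic data, not the study's analysis.]

```python
# Hedged sketch: ICC of one radiomics feature across two scans per lesion,
# using pingouin; column names and data are illustrative.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
base = rng.normal(size=74)               # 74 lesions, as in the study
df = pd.DataFrame({
    "lesion": list(range(74)) * 2,
    "scan": ["scan1"] * 74 + ["scan2"] * 74,
    "feature": np.concatenate([base, base + rng.normal(scale=0.1, size=74)]),
})

icc = pg.intraclass_corr(data=df, targets="lesion", raters="scan",
                         ratings="feature")
print(icc[["Type", "ICC"]])             # ICC > 0.75 counted as good
```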


Subjects
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Humans, Reproducibility of Results, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Prostate/diagnostic imaging, Retrospective Studies
17.
Sensors (Basel) ; 23(5)2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36904632

ABSTRACT

Industrialization and rapid urbanization in almost every country adversely affect many environmental values, such as core ecosystems, regional climate differences and global diversity. The difficulties that accompany this rapid change cause many problems in our daily lives. Behind these problems lie rapid digitalization and the lack of sufficient infrastructure to process and analyze very large volumes of data. Inaccurate, incomplete or irrelevant data produced in the IoT sensing layer causes weather forecast reports to drift away from accuracy and reliability, and as a result, activities that depend on weather forecasting are disrupted. Weather forecasting is a sophisticated and difficult task that requires the observation and processing of enormous volumes of data. Rapid urbanization, abrupt climate changes and mass digitalization further increase data density and make it more difficult for forecasts to be accurate and reliable. This situation prevents people from taking precautions against bad weather conditions in cities and rural areas, and becomes a vital problem. In this study, an intelligent anomaly detection approach is presented to minimize the weather forecasting problems that arise from rapid urbanization and mass digitalization. The proposed solution covers data processing at the edge of the IoT and includes filtering out the missing, unnecessary or anomalous data that prevent predictions from being accurate and reliable, using the data obtained through the sensors. The anomaly detection metrics of five different machine learning (ML) algorithms, namely support vector classifier (SVC), AdaBoost, logistic regression (LR), naive Bayes (NB) and random forest (RF), were also compared in the study. These algorithms were used to create a data stream using time, temperature, pressure, humidity and other sensor-generated information.
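[Editor's note: a compact scikit-learn version of the five-algorithm comparison described could look as follows; the data here are synthetic stand-ins for the study's sensor streams of time, temperature, pressure and humidity.]

```python
# Hedged sketch of the five-algorithm comparison on sensor-style features;
# data here are synthetic stand-ins for the study's IoT streams.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

models = {
    "SVC": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```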

18.
Sensors (Basel) ; 23(9)2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37177523

ABSTRACT

Pervasive computing, human-computer interaction, human behavior analysis, and human activity recognition (HAR) have grown significantly as fields. Deep learning (DL)-based techniques have recently been used effectively to predict various human actions from time series data collected by wearable sensors and mobile devices. Despite their excellent performance in activity detection, the management of time series data remains difficult for DL-based techniques; problems such as heavily biased data and difficult feature extraction persist. For HAR, an ensemble of Deep SqueezeNet (SE) and bidirectional long short-term memory (BiLSTM) with an improved flower pollination optimization algorithm (IFPOA) is designed in this research to construct a reliable classification model utilizing wearable sensor data. Significant features are extracted automatically from the raw sensor data by the multi-branch SE-BiLSTM. Owing to SqueezeNet and BiLSTM, the model can learn both short-term dependencies and long-term features in sequential data. Different temporal local dependencies are captured effectively by the proposed model, enhancing the feature extraction process. The hyperparameters of the BiLSTM network are optimized by the IFPOA. The model performance was analyzed using three benchmark datasets: MHEALTH, KU-HAR, and PAMPA2, on which the proposed model achieved 99.98%, 99.76%, and 99.54% accuracy, respectively. According to the experimental results, the proposed model outperforms other approaches and delivers results competitive with state-of-the-art techniques on publicly accessible datasets.
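[Editor's note: the BiLSTM half of the ensemble is straightforward to sketch in Keras; the SqueezeNet branch and the IFPOA hyperparameter tuning are omitted, and shapes are assumptions, not the paper's configuration.]

```python
# Hedged sketch of the BiLSTM branch for wearable-sensor HAR; the SqueezeNet
# branch and IFPOA tuning are omitted, and shapes are assumptions.
import tensorflow as tf

timesteps, n_sensors, n_activities = 128, 9, 12

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_sensors)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(n_activities, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```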


Subjects
Neural Networks (Computer), Wearable Electronic Devices, Humans, Pollination, Algorithms, Human Activities, Flowers
19.
Sensors (Basel) ; 23(21)2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37960560

ABSTRACT

JPEG is the international standard for still image encoding and is the most widely used compression algorithm because of its simple encoding process and low computational complexity. Recently, many methods have been developed to improve the quality of JPEG images by using deep learning. However, these methods require the use of high-performance devices since they need to perform neural network computation for decoding images. In this paper, we propose a method to generate high-quality images using deep learning without changing the decoding algorithm. The key idea is to reduce and smooth colors and gradient regions in the original images before JPEG compression. The reduction and smoothing can suppress red block noise and pseudo-contour in the compressed images. Furthermore, high-performance devices are unnecessary for decoding. The proposed method consists of two components: a color transformation network using deep learning and a pseudo-contour suppression model using signal processing. The experimental results showed that the proposed method outperforms standard JPEG in quality measurements correlated with human perception.
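[Editor's note: the key idea — smooth the image before standard JPEG encoding so the decoder stays unchanged — can be crudely emulated with Pillow; this is a toy analogue, not the paper's learned color-transformation network, and file names are hypothetical.]

```python
# Toy analogue of the paper's idea with Pillow: smooth gradients before
# encoding with an unmodified JPEG encoder (the paper uses a learned
# color-transformation network instead of this simple blur).
from PIL import Image, ImageFilter

img = Image.open("input.png").convert("RGB")     # hypothetical input
smoothed = img.filter(ImageFilter.GaussianBlur(radius=1))
smoothed.save("output.jpg", quality=75)          # any standard decoder applies
```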

20.
Sensors (Basel) ; 23(8)2023 Apr 18.
Article in English | MEDLINE | ID: mdl-37112416

ABSTRACT

Autonomous driving at higher automation levels requires optimal execution of critical maneuvers in all environments. A crucial prerequisite for such optimal decision-making is accurate situation awareness in automated and connected vehicles. For this, vehicles rely on sensory data captured by onboard sensors and on information collected through V2X communication. Classical onboard sensors exhibit different capabilities, so a heterogeneous set of sensors is required to create better situation awareness. Fusing the sensory data from such a heterogeneous set poses critical challenges when it comes to creating an accurate environment context for effective decision-making in AVs. Hence, this survey analyses the influence of key factors, namely data pre-processing and especially data fusion, along with situation awareness, on effective decision-making in AVs. A wide range of recent and related articles is analyzed from various perspectives to identify the major obstacles, which can be further addressed to pursue the goals of higher automation levels. A solution sketch is provided that directs readers to potential research directions for achieving accurate contextual awareness. To the best of our knowledge, this survey is uniquely positioned in its scope, taxonomy, and future directions.
