Results 1 - 20 of 302
1.
BMC Bioinformatics; 25(1): 80, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378440

ABSTRACT

BACKGROUND: With the increase in the dimensionality of flow cytometry data over the past years, there is a growing need to replace or complement traditional manual analysis (i.e. iterative 2D gating) with automated data analysis pipelines. A crucial part of these pipelines consists of pre-processing and applying quality control filtering to the raw data, in order to use high-quality events in the downstream analyses. This part can in turn be split into a number of elementary steps: signal compensation or unmixing, scale transformation, removal of debris, doublets and dead cells, batch effect correction, etc. However, assembling and assessing the pre-processing part can be challenging for a number of reasons. First, each of the involved elementary steps can be implemented using various methods and R packages. Second, the order of the steps can affect the downstream analysis results. Finally, each method typically comes with its own specific, non-standardized diagnostics and visualizations, making objective comparison difficult for the end user. RESULTS: Here, we present CytoPipeline and CytoPipelineGUI, two R packages to build, compare and assess pre-processing pipelines for flow cytometry data. To exemplify these new tools, we present the steps involved in designing a pre-processing pipeline on a real-life dataset and demonstrate different visual assessment use cases. We also set up a benchmark comparing two pre-processing pipelines that differ in their quality control methods, and show how the packages' visualization utilities can provide crucial insight into the resulting benchmark metrics. CONCLUSION: CytoPipeline and CytoPipelineGUI are two Bioconductor R packages that help build, visualize and assess pre-processing pipelines for flow cytometry data. They increase productivity during pipeline development and testing, and complement benchmarking tools by providing intuitive, user-oriented insight into benchmarking results.
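
CytoPipeline itself is an R/Bioconductor package; purely to make the idea of ordered, swappable pre-processing steps concrete, here is a minimal language-neutral sketch in Python. All step implementations are toy stand-ins, not the package's methods.

```python
import numpy as np

def compensate(x):      # stand-in for spillover compensation / unmixing
    return x
def transform(x):       # stand-in scale transformation (arcsinh is common in cytometry)
    return np.arcsinh(x / 5.0)
def qc_strict(x):       # toy QC method A: drop events far from the per-channel median
    return x[np.abs(x - np.median(x, axis=0)).max(axis=1) < 0.5]
def qc_quantile(x):     # toy QC method B: keep the central 95% of each channel
    lo, hi = np.quantile(x, [0.025, 0.975], axis=0)
    return x[((x >= lo) & (x <= hi)).all(axis=1)]

def run_pipeline(events, steps):
    """Apply named steps in order, keeping snapshots so pipelines can be compared step by step."""
    states = {"raw": events}
    for name, fn in steps:
        events = fn(events)
        states[name] = events
    return states

events = np.random.default_rng(0).normal(size=(10_000, 8))   # fake flow events
a = run_pipeline(events, [("compensate", compensate), ("transform", transform), ("qc", qc_strict)])
b = run_pipeline(events, [("compensate", compensate), ("transform", transform), ("qc", qc_quantile)])
print(len(a["qc"]), len(b["qc"]))   # event retention differs between the two QC choices
```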


Subjects
Data Analysis, Software, Flow Cytometry/methods
2.
Hum Brain Mapp; 45(14): e70034, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39370644

ABSTRACT

Automated EEG pre-processing pipelines provide several key advantages over traditional manual data cleaning approaches: primarily, they are less time-intensive and remove potential experimenter error/bias. Automated pipelines also require less technical expertise, as they remove the need for manual artefact identification. We recently developed the fully automated Reduction of Electroencephalographic Artefacts (RELAX) pipeline and demonstrated its performance in cleaning EEG data recorded from adult populations. Here, we introduce the RELAX-Jr pipeline, which was adapted from RELAX and designed specifically for pre-processing of data collected from children. RELAX-Jr implements multi-channel Wiener filtering (MWF) and/or wavelet-enhanced independent component analysis (wICA) combined with the adjusted-ADJUST automated independent component classification algorithm, using algorithms adapted to optimally identify and reduce artefacts in EEG recordings taken from children. Using a dataset of resting-state EEG recordings (N = 136) from children spanning early-to-middle childhood (4-12 years), we assessed the cleaning performance of RELAX-Jr using a range of metrics, including signal-to-error ratio, artefact-to-residue ratio, ability to reduce blink and muscle contamination, and differences in estimates of alpha power between eyes-open and eyes-closed recordings. We also compared the performance of RELAX-Jr against four publicly available automated cleaning pipelines. We demonstrate that RELAX-Jr provides strong cleaning performance across a range of metrics, supporting its use as an effective and fully automated cleaning pipeline for neurodevelopmental EEG data.
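
The cleaning metrics named above have precise definitions in the RELAX papers; as a rough, hedged illustration only, here is one plausible numpy formulation of a signal-to-error ratio and a crude blink-reduction check on synthetic data (the actual RELAX-Jr formulas may differ).

```python
import numpy as np

def signal_to_error_ratio(raw, cleaned, clean_mask):
    """SER-style metric: signal power in artefact-free periods relative to the
    power of the change cleaning introduced there (higher = less distortion)."""
    err = raw[clean_mask] - cleaned[clean_mask]
    return 10 * np.log10(np.mean(raw[clean_mask] ** 2) / np.mean(err ** 2))

def blink_excess(x, blink_mask, clean_mask):
    """Mean amplitude during blink periods relative to the artefact-free baseline."""
    return np.mean(np.abs(x[blink_mask])) / np.mean(np.abs(x[clean_mask]))

rng = np.random.default_rng(1)
t = np.arange(5000)
raw = rng.normal(size=t.size)
raw[1000:1100] += 40 * np.exp(-((t[1000:1100] - 1050) / 20.0) ** 2)  # synthetic blink
blink = np.zeros(t.size, bool)
blink[1000:1100] = True
cleaned = np.where(blink, 0.1 * raw, 0.98 * raw)    # stand-in for MWF/wICA cleaning
print(signal_to_error_ratio(raw, cleaned, ~blink))  # ~34 dB on this toy example
print(blink_excess(raw, blink, ~blink), blink_excess(cleaned, blink, ~blink))
```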


Subjects
Artifacts, Electroencephalography, Signal Processing, Computer-Assisted, Humans, Electroencephalography/methods, Electroencephalography/standards, Child, Child, Preschool, Male, Female, Brain/physiology, Algorithms
3.
Hum Brain Mapp; 45(3): e26632, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379519

ABSTRACT

Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. Estimating chronological age from magnetic resonance images has proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and substituted it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show improved performance of the reframed global model on the UKB sample, with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the algorithm shows meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease-specific patterns for different levels of impairment. The results demonstrate that the new, improved algorithms provide reliable and valid brain age estimations.
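
The RVR-to-GPR substitution is easy to prototype with scikit-learn; a minimal sketch on synthetic features (the kernel, feature set and sample sizes are illustrative assumptions, not the authors' setup).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))                      # stand-in for voxel-wise GM features
age = 45 + 3 * X[:, :5].sum(axis=1) + rng.normal(scale=2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, age, random_state=0)
gpr = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)

predicted = gpr.predict(X_te)
brain_age_gap = predicted - y_te                     # the "BrainAGE" score per subject
print("MAE:", mean_absolute_error(y_te, predicted))
```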


Subjects
Alzheimer Disease, Schizophrenia, Humans, Workflow, Brain/diagnostic imaging, Brain/pathology, Schizophrenia/diagnostic imaging, Schizophrenia/pathology, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Machine Learning, Magnetic Resonance Imaging/methods
4.
Brief Bioinform; 23(1), 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-34472590

ABSTRACT

The emergence of single-cell RNA sequencing has facilitated the study of genomes, transcriptomes and proteomes. As single-cell RNA-seq datasets continue to be released, one of the major challenges facing traditional RNA analysis tools is the high-dimensional, high-sparsity, high-noise and large-scale character of single-cell RNA-seq data. Deep learning technologies match these characteristics well and offer unprecedented promise. Here, we give a systematic review of the most popular single-cell RNA-seq analysis methods and tools based on deep learning models, covering data preprocessing (quality control, normalization, data correction, dimensionality reduction and data visualization) and the downstream clustering task. We further evaluate the deep model-based data correction and clustering methods quantitatively on 11 gold-standard datasets. Moreover, we discuss the data preferences of these methods and their limitations, and give suggestions and guidance for users selecting appropriate methods and tools.
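
The preprocessing stages listed in the review correspond to a well-established workflow; a minimal sketch using scanpy (a conventional, non-deep-learning toolkit, used here only to make the stages concrete; sc.tl.leiden additionally requires the leidenalg package).

```python
import scanpy as sc

adata = sc.datasets.pbmc3k()                      # small public 10x dataset bundled with scanpy
sc.pp.filter_cells(adata, min_genes=200)          # quality control: drop near-empty cells
sc.pp.filter_genes(adata, min_cells=3)            # ...and genes detected in almost no cells
sc.pp.normalize_total(adata, target_sum=1e4)      # normalization: counts per 10k
sc.pp.log1p(adata)                                # variance-stabilizing transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]       # feature selection step
sc.pp.pca(adata, n_comps=50)                      # dimensionality reduction
sc.pp.neighbors(adata)                            # cell-cell graph
sc.tl.umap(adata)                                 # 2-D embedding for visualization
sc.tl.leiden(adata)                               # clustering, the downstream task reviewed
```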


Subjects
Deep Learning, Single-Cell Analysis, Cluster Analysis, Gene Expression Profiling/methods, Sequence Analysis, RNA/methods, Single-Cell Analysis/methods
5.
Magn Reson Med; 91(2): 773-783, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37831659

ABSTRACT

PURPOSE: DTI characterizes tissue microstructure and provides proxy measures of nerve health. Echo-planar imaging is a popular method of acquiring DTI but is susceptible to various artifacts (e.g., susceptibility, motion, and eddy currents), which may be ameliorated via preprocessing. There are many pipelines available but limited data comparing their performance, which provides the rationale for this study. METHODS: DTI was acquired from the upper limb of healthy volunteers at 3T in blip-up and blip-down directions. Data were independently corrected using (i) FSL's TOPUP & eddy, (ii) FSL's TOPUP, (iii) DSI Studio, and (iv) TORTOISE. DTI metrics were extracted from the median, radial, and ulnar nerves and compared (between pipelines) using mixed-effects linear regression. The geometric similarity of the corrected b = 0 images and the slice-matched T1-weighted (T1w) images was computed using the Sørensen-Dice coefficient. RESULTS: Without preprocessing, the similarity coefficients of the blip-up and blip-down datasets to the T1w images were 0.80 and 0.79, respectively. Preprocessing improved the geometric similarity by 1%, with no difference between pipelines. Compared to TOPUP & eddy, DSI Studio and TORTOISE generated 2% and 6% lower estimates of fractional anisotropy, and 6% and 13% higher estimates of radial diffusivity, respectively. Estimates of anisotropy from TOPUP & eddy versus TOPUP were not different, but TOPUP reduced radial diffusivity by 3%. The agreement of DTI metrics between pipelines was poor. CONCLUSIONS: Preprocessing DTI from the upper limb improves geometric similarity, but the choice of pipeline introduces clinically important variability in diffusion parameter estimates from peripheral nerves.
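
The Sørensen-Dice coefficient used for geometric similarity is simple to compute from two binary masks; a minimal numpy sketch with toy masks.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Sørensen-Dice coefficient of two boolean masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy stand-ins for a corrected b = 0 mask and the slice-matched T1w mask
b0 = np.zeros((128, 128), bool); b0[40:90, 40:90] = True
t1 = np.zeros((128, 128), bool); t1[45:95, 42:92] = True
print(f"Dice = {dice(b0, t1):.2f}")
```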


Subjects
Diffusion Magnetic Resonance Imaging, Diffusion Tensor Imaging, Humans, Diffusion Tensor Imaging/methods, Diffusion Magnetic Resonance Imaging/methods, Peripheral Nerves, Upper Extremity/diagnostic imaging, Echo-Planar Imaging, Image Processing, Computer-Assisted/methods
6.
J Magn Reson Imaging; 59(5): 1800-1806, 2024 May.
Article in English | MEDLINE | ID: mdl-37572098

ABSTRACT

BACKGROUND: Single-center MRI radiomics models are sensitive to data heterogeneity, limiting the diagnostic capabilities of current prostate cancer (PCa) radiomics models. PURPOSE: To study the impact of image resampling on the diagnostic performance of radiomics in a multicenter prostate MRI setting. STUDY TYPE: Retrospective. POPULATION: Nine hundred thirty patients (nine centers, two vendors) with 737 eligible PCa lesions, randomly split into training (70%, N = 500), validation (10%, N = 89), and a held-out test set (20%, N = 148). FIELD STRENGTH/SEQUENCE: 1.5T and 3T scanners/T2-weighted imaging (T2W), diffusion-weighted imaging (DWI), and apparent diffusion coefficient maps. ASSESSMENT: A total of 48 normalized radiomics datasets were created using various resampling methods, including different target resolutions (T2W: 0.35, 0.5, and 0.8 mm; DWI: 1.37, 2, and 2.5 mm), dimensionalities (2D/3D) and interpolation techniques (nearest neighbor, linear, B-spline and Blackman windowed-sinc). Each of the datasets was used to train a radiomics model to detect clinically relevant PCa (International Society of Urological Pathology grade ≥ 2). Baseline models were constructed using 2D and 3D datasets without image resampling. The resampling configurations with the highest validation performance were evaluated in the test dataset and compared to the baseline models. STATISTICAL TESTS: Area under the curve (AUC), DeLong test. The significance level used was 0.05. RESULTS: The best 2D resampling model (T2W: B-spline at 0.5 mm resolution; DWI: nearest neighbor at 2 mm resolution) significantly outperformed the 2D baseline (AUC: 0.77 vs. 0.64). The best 3D resampling model (T2W: linear at 0.8 mm resolution; DWI: nearest neighbor at 2.5 mm resolution) significantly outperformed the 3D baseline (AUC: 0.79 vs. 0.67). DATA CONCLUSION: Image resampling has a significant effect on the performance of multicenter radiomics artificial intelligence in prostate MRI. The recommended 2D configuration is isotropic resampling with T2W at 0.5 mm (B-spline interpolation) and DWI at 2 mm (nearest neighbor interpolation). For 3D radiomics, this work recommends isotropic resampling with T2W at 0.8 mm (linear interpolation) and DWI at 2.5 mm (nearest neighbor interpolation). EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
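
The recommended configurations are straightforward to express with SimpleITK; a sketch of isotropic resampling for the 2D-recommended settings (file paths are hypothetical).

```python
import SimpleITK as sitk

def resample_isotropic(image, spacing_mm, interpolator):
    """Resample an image to isotropic voxel spacing with a chosen interpolator."""
    new_size = [int(round(sz * sp / spacing_mm))
                for sz, sp in zip(image.GetSize(), image.GetSpacing())]
    f = sitk.ResampleImageFilter()
    f.SetOutputSpacing((spacing_mm,) * image.GetDimension())
    f.SetSize(new_size)
    f.SetOutputOrigin(image.GetOrigin())
    f.SetOutputDirection(image.GetDirection())
    f.SetInterpolator(interpolator)
    return f.Execute(image)

t2w = sitk.ReadImage("t2w.nii.gz")                                 # hypothetical path
t2w_iso = resample_isotropic(t2w, 0.5, sitk.sitkBSpline)           # T2W: 0.5 mm, B-spline
dwi = sitk.ReadImage("dwi.nii.gz")
dwi_iso = resample_isotropic(dwi, 2.0, sitk.sitkNearestNeighbor)   # DWI: 2 mm, nearest neighbor
```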


Assuntos
Próstata , Neoplasias da Próstata , Masculino , Humanos , Próstata/diagnóstico por imagem , Próstata/patologia , Estudos Retrospectivos , Inteligência Artificial , Radiômica , Imageamento por Ressonância Magnética/métodos , Neoplasias da Próstata/diagnóstico por imagem , Neoplasias da Próstata/patologia
7.
Eur Radiol; 2024 Oct 25.
Article in English | MEDLINE | ID: mdl-39453470

ABSTRACT

Radiomics is a method to extract detailed information from diagnostic images that cannot be perceived by the naked eye. Although radiomics research carries great potential to improve clinical decision-making, its inherent methodological complexity makes it difficult to comprehend every step of the analysis, often causing reproducibility and generalizability issues that hinder clinical adoption. Critical steps in the radiomics analysis and model development pipeline, such as image pre-processing, application of image filters, and selection of feature extraction parameters, can greatly affect the values of radiomic features. Moreover, common errors in data partitioning, model comparison, fine-tuning, assessment, and calibration can reduce reproducibility and impede clinical translation. Clinical adoption of radiomics also requires a deep understanding of model explainability and the development of intuitive interpretations of radiomic features. To address these challenges, it is essential for radiomics model developers and clinicians to be well-versed in current best practices. Proper knowledge and application of these practices are crucial for accurate radiomics feature extraction, robust model development, and thorough assessment, ultimately increasing reproducibility, generalizability, and the likelihood of successful clinical translation. In this article, we provide researchers with our recommendations, along with practical examples, to facilitate good research practices in radiomics. KEY POINTS: Radiomics' inherent methodological complexity should be understood to ensure rigorous radiomic model development and improved clinical decision-making. Adherence to radiomics-specific checklists and quality assessment tools ensures methodological rigor. Use of standardized radiomics tools and best practices enhances clinical translation of radiomics models.

8.
Anal Bioanal Chem; 416(2): 373-386, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37946036

ABSTRACT

Continuous manufacturing is becoming increasingly important in the (bio-)pharmaceutical industry, as more product can be produced in less time and at lower cost. In this context, there is a need for powerful continuous analytical tools. Many established off-line analytical methods, such as mass spectrometry (MS), are hardly considered for process analytical technology (PAT) applications in biopharmaceutical processes, as the required sample preparation and its associated complexity limit them to at-line analysis, although they would provide a suitable technique for the assessment of a wide range of quality attributes. In this study, we investigated the applicability of a recently developed micro simulated moving bed chromatography system (µSMB) for continuous on-line sample preparation for MS. As a test case, we demonstrate continuous on-line MS measurement of a protein solution (myoglobin) containing Tris buffer, which interferes with ESI-MS measurements, by continuously exchanging this buffer for a volatile ammonium acetate buffer suitable for MS measurements. Integrating the µSMB significantly increases MS sensitivity by removing over 98% of the buffer substances. This study thus demonstrates the feasibility of on-line µSMB-MS, providing a versatile PAT tool by combining the detection power of MS for various product attributes with all the advantages of continuous on-line analytics.

9.
Network; 35(2): 190-211, 2024 May.
Article in English | MEDLINE | ID: mdl-38155546

ABSTRACT

Nowadays, the Internet of things (IoT) and IoT platforms are extensively utilized in several healthcare applications. IoT devices produce a huge amount of healthcare data that can be inspected on an IoT platform. In this paper, a novel algorithm, named the artificial flora optimization-based chameleon swarm algorithm (AFO-based CSA), is developed for optimal path finding. Data are collected by the sensors and transmitted to the base station (BS) using the proposed AFO-based CSA, which is derived by integrating artificial flora optimization (AFO) into the chameleon swarm algorithm (CSA). This integration combines the strengths and features of both AFO and CSA for optimal routing of medical data in IoT. Moreover, the proposed AFO-based CSA algorithm considers factors such as energy, delay, and distance for effective data routing. At the BS, prediction is conducted through stages of pre-processing, feature dimension reduction using Pearson's correlation, and disease detection by a recurrent neural network, which is trained by the proposed AFO-based CSA. Experimental results showed that the performance of the proposed AFO-based CSA is superior to competitive approaches in terms of energy consumption (0.538 J), accuracy (0.950), sensitivity (0.965), and specificity (0.937).
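
The abstract states that routing weighs energy, delay, and distance; the cost function such a metaheuristic minimizes can be sketched as a weighted sum over a candidate path. The weights and toy network below are illustrative assumptions, not the paper's values.

```python
def route_fitness(path, energy, delay, dist, w=(0.4, 0.3, 0.3)):
    """Cost of a candidate route: weighted sum of per-hop energy use,
    latency, and distance (lower is better)."""
    hops = list(zip(path, path[1:]))
    e = sum(energy[a][b] for a, b in hops)
    d = sum(delay[a][b] for a, b in hops)
    s = sum(dist[a][b] for a, b in hops)
    return w[0] * e + w[1] * d + w[2] * s

# toy 3-node network: sensor 0 -> relay 1 -> base station 2
energy = {0: {1: 0.2, 2: 0.9}, 1: {2: 0.3}}
delay = {0: {1: 5.0, 2: 12.0}, 1: {2: 4.0}}
dist = {0: {1: 10.0, 2: 25.0}, 1: {2: 9.0}}
print(route_fitness([0, 1, 2], energy, delay, dist))   # relayed route
print(route_fitness([0, 2], energy, delay, dist))      # direct route
```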


Subjects
Deep Learning, Internet of Things, Algorithms, Health Facilities, Neural Networks, Computer
10.
Graefes Arch Clin Exp Ophthalmol; 262(7): 2247-2267, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38400856

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a serious eye complication that results in permanent vision damage. As the number of patients suffering from DR increases, so do delays in diagnosis and treatment. To bridge this gap, an efficient DR screening system that assists clinicians is required. Although many artificial intelligence (AI) screening systems have been deployed in recent years, accuracy remains a metric that can be improved. METHODS: An enumerative pre-processing approach is implemented in the deep learning model to attain better accuracy for DR severity grading. The proposed approach is compared with various pre-trained models, and the necessary performance metrics are tabulated. This paper also presents a comparative analysis of various optimization algorithms utilized in the deep network model. RESULTS: Experiments were carried out on the MESSIDOR dataset to assess performance. The results show that the enumerative pipeline combination K1-K2-K3-DFNN-LOA outperforms the other combinations. Compared with various optimization algorithms and pre-trained models, the proposed model performs best, with maximum accuracy, precision, recall, F1 score, and macro-averaged metric of 97.60%, 94.60%, 98.40%, 94.60%, and 0.97, respectively. CONCLUSION: This study focused on developing and implementing a DR screening system for color fundus photographs. This artificial intelligence-based system offers the possibility to enhance the efficacy and accessibility of DR diagnosis.


Subjects
Algorithms, Diabetic Retinopathy, Severity of Illness Index, Humans, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/classification, Deep Learning, Artificial Intelligence, Retina/pathology, Retina/diagnostic imaging, Reproducibility of Results, Male
11.
Sensors (Basel); 24(18), 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39338846

ABSTRACT

Currently, there is a demand for greater diversity and quality in new products reaching the consumer market. This imposes new challenges on different industrial sectors, including processes that integrate machine vision. Hardware acceleration and improvements in processing efficiency are becoming crucial for vision-based algorithms to keep up with the growing complexity of future industrial systems. This article presents a generic library of pre-processing filters for execution on field-programmable gate arrays (FPGAs) to reduce the overall image processing time in vision systems. An experimental setup based on the Zybo Z7 Pcam 5C Demo project was developed and used to validate the filters described in VHDL (VHSIC hardware description language). Finally, execution times on GPU and CPU platforms were compared, and the integration of the current work into an industrial application was evaluated. The results showed a decrease in pre-processing time from milliseconds to nanoseconds when using FPGAs.

12.
Sensors (Basel); 24(9), 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732843

ABSTRACT

As the number of electronic gadgets in our daily lives increases, and most of them require some kind of human interaction, innovative and convenient input methods are in demand. State-of-the-art (SotA) ultrasound-based hand gesture recognition (HGR) systems have limitations in terms of robustness and accuracy. This research presents a novel machine learning (ML)-based end-to-end solution for hand gesture recognition with low-cost micro-electromechanical systems (MEMS) ultrasonic transducers. In contrast to prior methods, our ML model processes the raw echo samples directly instead of using pre-processed data. Consequently, the processing flow presented in this work leaves it to the ML model to extract the important information from the echo data. The success of this approach is demonstrated as follows. Four MEMS ultrasonic transducers are placed in three different geometrical arrangements. For each arrangement, different types of ML models are optimized and benchmarked on datasets acquired with the presented custom hardware (HW): convolutional neural networks (CNNs), gated recurrent units (GRUs), long short-term memory (LSTM), vision transformer (ViT), and cross-attention multi-scale vision transformer (CrossViT). The last three models reached more than 88% accuracy. The most important finding of this research is that little pre-processing is necessary to obtain high accuracy in ultrasonic HGR for several arrangements of cost-effective and low-power MEMS ultrasonic transducer arrays; even the computationally intensive Fourier transform can be omitted. The presented approach is further compared to HGR systems using other sensor types, such as vision, WiFi, radar, and state-of-the-art ultrasound-based HGR systems. Direct processing of the sensor signals by a compact model makes ultrasonic hand gesture recognition a truly low-cost and power-efficient input method.
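
To make "processing the raw echo samples directly" concrete, here is a minimal PyTorch sketch of a 1D CNN over fixed-length raw echo windows; the architecture, shapes and gesture count are illustrative assumptions.

```python
import torch
from torch import nn

class EchoCNN(nn.Module):
    """Toy 1D CNN over raw echo samples from 4 transducers (channels)."""
    def __init__(self, n_channels=4, n_gestures=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),     # no hand-crafted features, no FFT
        )
        self.classifier = nn.Linear(64, n_gestures)

    def forward(self, x):                # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = EchoCNN()
echoes = torch.randn(16, 4, 1024)        # a batch of raw echo windows
logits = model(echoes)
print(logits.shape)                      # torch.Size([16, 8])
```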


Subjects
Gestures, Hand, Machine Learning, Neural Networks, Computer, Humans, Hand/physiology, Pattern Recognition, Automated/methods, Ultrasonography/methods, Ultrasonography/instrumentation, Ultrasonics/instrumentation, Algorithms
13.
Sensors (Basel); 24(9), 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732936

ABSTRACT

Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a challenge for classification due to their visual similarity, leading to confusion among radiologists. To mitigate these issues, we created an automated system with a large data hub that combines 17 chest X-ray datasets, for a total of 71,096 images, with the aim of classifying ten different disease classes. Because it combines various resources, our large dataset contains noise, annotations, class imbalances, data redundancy, etc. We applied several image pre-processing techniques to eliminate noise and artifacts, such as resizing, de-annotation, CLAHE, and filtering. The elastic deformation augmentation technique was also used to generate a balanced dataset. We then developed DeepChestGNN, a novel medical image classification model utilizing a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture is very flexible when working with graph data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.
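
Several of the named clean-up steps map onto standard OpenCV calls; a minimal sketch (parameter values and the file path are assumptions; de-annotation and elastic-deformation augmentation are omitted).

```python
import cv2

img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input path
img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)
img = cv2.medianBlur(img, 3)                 # light noise filtering
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img = clahe.apply(img)                       # contrast-limited adaptive histogram equalization
cv2.imwrite("chest_xray_preprocessed.png", img)
```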


Subjects
Lung Diseases, Neural Networks, Computer, Humans, Lung Diseases/diagnostic imaging, Lung Diseases/diagnosis, Image Processing, Computer-Assisted/methods, Deep Learning, Algorithms, Lung/diagnostic imaging, Lung/pathology
14.
J Environ Manage; 360: 121097, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733844

ABSTRACT

With high-frequency data on nitrate (NO3-N) concentrations in waters becoming increasingly important for understanding watershed system behavior and for ecosystem management, the accurate and economical acquisition of high-frequency NO3-N concentration data has become a key issue. This study used deep learning neural networks and routinely monitored data to predict hourly NO3-N concentrations in a river. The hourly NO3-N concentration at the outlet of the Oyster River watershed in New Hampshire, USA, was predicted through a hybrid architecture coupling Convolutional Neural Networks and a Long Short-Term Memory model (CNN-LSTM). The routinely monitored data (river depth, water temperature, air temperature, precipitation, specific conductivity, pH and dissolved oxygen concentrations) for model training were collected from a nested high-frequency monitoring network, while the high-frequency NO3-N concentration data obtained at the outlet were not included as inputs. The whole dataset was separated into training, validation, and testing sets at a ratio of 5:3:2. The hybrid CNN-LSTM model with different input lengths (1 d, 3 d, 7 d, 15 d, 30 d) displayed performance comparable to, or better than, studies at lower frequencies, with mean Nash-Sutcliffe Efficiency values of 0.60-0.83. Models with shorter input lengths demonstrated both higher modeling accuracy and greater stability. The water level, water temperature and pH values at the monitoring sites were the main controlling factors for forecasting performance. This study provides new insight into using deep learning networks with a coupled architecture and routinely monitored data for high-frequency riverine NO3-N concentration forecasting, along with suggestions for variable and input-length selection during preprocessing of input data.
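
A minimal Keras sketch of the coupled architecture described, with convolutional feature extraction feeding an LSTM over hourly windows of the seven routine variables; layer sizes and the 7-day window are assumptions.

```python
from tensorflow.keras import layers, models

WINDOW = 24 * 7      # 7-day hourly input window (one of the tested lengths)
N_VARS = 7           # depth, water/air temperature, precipitation, conductivity, pH, DO

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_VARS)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),   # local temporal patterns
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                       # longer-range dynamics
    layers.Dense(1),                                       # next-hour NO3-N estimate
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```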


Subjects
Deep Learning, Neural Networks, Computer, Nitrates, Rivers, Nitrates/analysis, Rivers/chemistry, Environmental Monitoring/methods, Water Pollutants, Chemical/analysis, New Hampshire
15.
Electromagn Biol Med; 43(1-2): 31-45, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38369844

ABSTRACT

This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a Self-Attention based Generative Adversarial Network (SAGAN) optimized with a Color Harmony Algorithm. Brain cancer, with its high fatality rate worldwide, especially in the case of brain tumors, necessitates more accurate and efficient classification methods. While existing deep learning approaches for brain tumor classification have been suggested, they often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a Mean Curvature Flow-based approach to eliminate noise. The pre-processed images then undergo the Improved Non-Subsampled Shearlet Transform (INSST) to extract radiomic features. These features are fed into the SAGAN, which is optimized with a Color Harmony Algorithm to categorize the brain images into different tumor types, including glioma, meningioma, and pituitary tumors. This approach shows promise in enhancing the precision and efficiency of brain tumor classification, holding potential for improved diagnostic outcomes in medical imaging. The accuracy acquired for brain tumor identification with the proposed method is 99.29%. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09% and 7.34% higher accuracy and 67.92%, 54.04% and 59.08% less computation time when compared with existing models: brain tumor diagnosis using a deep learning convolutional neural network with a transfer learning approach (BTC-KNN-SVM-MRI); M3BTCNet, multi-model brain tumor categorization under metaheuristic deep neural network feature optimization (BTC-CNN-DEMFOA-MRI); and an efficient method based on a hierarchical deep learning neural network classifier for brain tumor categorization (BTC-Hie DNN-MRI), respectively.


Subjects
Algorithms, Brain Neoplasms, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Brain Neoplasms/pathology, Humans, Image Processing, Computer-Assisted/methods, Color, Neural Networks, Computer, Deep Learning
16.
Environ Monit Assess; 196(8): 724, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38990407

ABSTRACT

Analysis of change in groundwater used as a drinking and irrigation water source is of critical importance for monitoring aquifers, planning water resources, energy production, combating climate change, and agricultural production. It is therefore necessary to model groundwater level (GWL) fluctuations to monitor and predict groundwater storage. Artificial intelligence-based models have become prevalent in water resource management due to their proven success in hydrological studies. This study proposed a hybrid model that combines an artificial neural network (ANN) and the artificial bee colony optimization (ABC) algorithm, along with the ensemble empirical mode decomposition (EEMD) and local mean decomposition (LMD) techniques, to model groundwater levels in Erzurum province, Türkiye. GWL estimation results were evaluated with mean square error (MSE), coefficient of determination (R2), and residual sum of squares (RSS), and visually with violin, scatter, and time series plots. The results indicated that the EEMD-ABC-ANN hybrid model was superior to the other models in estimating GWL, with R2 values ranging from 0.91 to 0.99 and MSE values ranging from 0.004 to 0.07. It was also revealed that promising GWL predictions can be made with previous GWL data.
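
The decomposition half of the hybrid can be prototyped with the PyEMD package (distributed on PyPI as EMD-signal); a sketch decomposing a synthetic GWL series into intrinsic mode functions, each of which would feed the ABC-tuned ANN.

```python
import numpy as np
from PyEMD import EEMD   # pip install EMD-signal

t = np.linspace(0, 10, 1000)
gwl = (2 * np.sin(2 * np.pi * 0.2 * t)              # slow seasonal component
       + 0.5 * np.sin(2 * np.pi * 3 * t)            # faster fluctuation
       + 0.1 * np.random.default_rng(0).normal(size=t.size))  # synthetic GWL series

eemd = EEMD(trials=100)        # ensemble size for the noise-assisted decomposition
imfs = eemd.eemd(gwl, t)       # intrinsic mode functions, one row per IMF
print(imfs.shape)              # each IMF (plus residue) becomes a model input
```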


Subjects
Environmental Monitoring, Groundwater, Neural Networks, Computer, Groundwater/chemistry, Bees, Animals, Environmental Monitoring/methods, Algorithms
17.
Brief Bioinform; 22(5), 2021 Sep 2.
Article in English | MEDLINE | ID: mdl-33822850

ABSTRACT

Next-generation sequencing (NGS) enables massively parallel acquisition of large-scale omics data; however, objective data quality filtering parameters are lacking. Although Phred values are a useful metric, evidence reveals that platform-generated values overestimate per-base quality scores. We have developed novel, empirically based algorithms that streamline NGS data quality filtering. The pipeline leverages known sequence motifs to enable empirical estimation of error rates, detection of erroneous base calls and removal of contaminating adapter sequence. The performance of motif-based error detection and quality filtering was further validated with read compression rates as an unbiased metric. Elevated error rates at read ends, where known motifs lie, tracked with the propagation of erroneous base calls. Barcode swapping, an inherent problem with pooled libraries, was also effectively mitigated. The ngsComposer pipeline is suitable for various NGS protocols and platforms due to the universal concepts on which its algorithms are based.
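
For context on the Phred values discussed: FASTQ quality characters encode estimated per-base error probabilities and can be decoded in a few lines (standard Phred+33 encoding assumed).

```python
def phred_error_probs(quality_string: str, offset: int = 33) -> list[float]:
    """Decode a FASTQ quality string: Q = ord(char) - offset, P(error) = 10^(-Q/10)."""
    return [10 ** (-(ord(c) - offset) / 10) for c in quality_string]

# 'I' encodes Q40 -> P(error) = 0.0001; '#' encodes Q2 -> P(error) ~ 0.63
print(phred_error_probs("II##"))
```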


Assuntos
Algoritmos , Biologia Computacional/métodos , Sequenciamento de Nucleotídeos em Larga Escala/métodos , Análise de Sequência de DNA/métodos , Software , Simulação por Computador , Humanos , Reprodutibilidade dos Testes
18.
Network; 34(4): 374-391, 2023.
Article in English | MEDLINE | ID: mdl-37916510

ABSTRACT

The performance of time-series classification of electroencephalographic data varies strongly across experimental paradigms and study participants. Reasons include task-dependent differences in neuronal processing and seemingly random variation between subjects, amongst others. The ability of data pre-processing techniques to ameliorate these challenges has received relatively little study. Here, the influence of spatial filter optimization methods and non-linear data transformations on time-series classification performance is analyzed using the example of high-frequency somatosensory evoked responses. This is a model paradigm for the analysis of high-frequency electroencephalography data at a very low signal-to-noise ratio, which emphasizes the differences between the explored methods. For the utilized data, the individual signal-to-noise ratio explained up to 74% of the performance differences between subjects. While data pre-processing was shown to increase average time-series classification performance, it could not fully compensate for the signal-to-noise ratio differences between subjects. This study proposes an algorithm to prototype and benchmark pre-processing pipelines for a paradigm and dataset at hand. Extreme learning machines, Random Forest, and Logistic Regression can be used to quickly compare a set of potentially suitable pipelines. For subsequent classification, however, machine learning models were shown to provide better accuracy.
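
The proposed prototype-and-benchmark idea translates naturally into a small grid over (pipeline, fast classifier) pairs; a minimal scikit-learn sketch on synthetic epochs. The pipelines and data are placeholders, and extreme learning machines are omitted because scikit-learn has no built-in implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 32, 100))          # trials x channels x samples
y = rng.integers(0, 2, size=200)                  # e.g. stimulation vs. rest

pipelines = {
    "mean_channels": lambda e: e.mean(axis=1),        # crude spatial filter
    "log_variance": lambda e: np.log(e.var(axis=2)),  # non-linear transformation
}
models = {"RF": RandomForestClassifier(n_estimators=100),
          "LR": LogisticRegression(max_iter=1000)}

for p_name, pre in pipelines.items():             # quick accuracy screen per pipeline
    X = pre(epochs)
    for m_name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{p_name:>14} + {m_name}: {acc:.2f}")
```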


Subjects
Algorithms, Electroencephalography, Humans, Electroencephalography/methods, Random Forest, Upper Extremity, Signal-To-Noise Ratio, Signal Processing, Computer-Assisted
19.
MAGMA; 36(6): 945-956, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37556085

ABSTRACT

PURPOSE: To evaluate the reproducibility of radiomics features derived via different pre-processing settings from paired T2-weighted imaging (T2WI) prostate lesions acquired within a short interval, to select the setting that yields the highest number of reproducible features, and to evaluate the impact of disease characteristics (i.e., clinical variables) on features reproducibility. MATERIALS AND METHODS: A dataset of 50 patients imaged using T2WI at 2 consecutive examinations was used. The dataset was pre-processed using 48 different settings. A total of 107 radiomics features were extracted from manual delineations of 74 lesions. The inter-scan reproducibility of each feature was measured using the intra-class correlation coefficient (ICC), with ICC values > 0.75 considered good. Statistical differences were assessed using Mann-Whitney U and Kruskal-Wallis tests. RESULTS: The pre-processing parameters strongly influenced the reproducibility of radiomics features of T2WI prostate lesions. The setting that yielded the highest number of features (25 features) with high reproducibility was the relative discretization with a fixed bin number of 64, no signal intensity normalization, and outlier filtering by excluding outliers. Disease characteristics did not significantly impact the reproducibility of radiomics features. CONCLUSION: The reproducibility of T2WI radiomics features was significantly influenced by pre-processing parameters, but not by disease characteristics. The selected pre-processing setting yielded 25 reproducible features.
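
Per-feature inter-scan reproducibility as described can be computed with the pingouin package; a sketch for a single radiomic feature across paired scans (the column layout and values are invented for illustration).

```python
import pandas as pd
import pingouin as pg

# long format: one row per (lesion, scan) with the feature value for one setting
df = pd.DataFrame({
    "lesion": [1, 1, 2, 2, 3, 3, 4, 4],
    "scan":   ["t1", "t2"] * 4,
    "value":  [0.81, 0.79, 1.10, 1.15, 0.55, 0.57, 0.93, 0.88],
})
icc = pg.intraclass_corr(data=df, targets="lesion", raters="scan", ratings="value")
print(icc[["Type", "ICC"]])   # keep features whose relevant ICC exceeds 0.75
```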


Subjects
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Humans, Reproducibility of Results, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Prostate/diagnostic imaging, Retrospective Studies
20.
Sensors (Basel); 23(5), 2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36904632

ABSTRACT

Industrialization and rapid urbanization in almost every country adversely affect many environmental values, such as core ecosystems, regional climates and global diversity. The rapid change we are experiencing causes many problems in our daily lives, rooted in rapid digitalization and the lack of sufficient infrastructure to process and analyze very large volumes of data. Inaccurate, incomplete or irrelevant data produced in the IoT detection layer cause weather forecast reports to drift away from accuracy and reliability, and as a result, activities based on weather forecasting are disrupted. Weather forecasting is a sophisticated and difficult task that requires the observation and processing of enormous volumes of data. Rapid urbanization, abrupt climate changes and mass digitalization make it even harder to produce forecasts that are accurate and reliable, preventing people in cities and rural areas from taking precautions against bad weather; this has become a vital problem. In this study, an intelligent anomaly detection approach is presented to minimize the weather forecasting problems that arise from rapid urbanization and mass digitalization. The proposed solution covers data processing at the edge of the IoT and includes filtering out missing, unnecessary or anomalous data that prevent predictions from being accurate and reliable, using data obtained through the sensors. The anomaly detection metrics of five machine learning (ML) algorithms were also compared: support vector classifier (SVC), AdaBoost, logistic regression (LR), naive Bayes (NB) and random forest (RF). These algorithms were used to create a data stream using time, temperature, pressure, humidity and other sensor-generated information.
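
The five-model comparison described maps directly onto scikit-learn; a minimal sketch on synthetic sensor readings (the feature layout and anomaly labels are placeholders).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # time, temperature, pressure, humidity, ...
y = (np.abs(X).max(axis=1) > 2.5).astype(int)    # synthetic "anomalous reading" label

models = {
    "SVC": SVC(), "AdaBoost": AdaBoostClassifier(), "LR": LogisticRegression(),
    "NB": GaussianNB(), "RF": RandomForestClassifier(),
}
for name, model in models.items():               # compare detection quality per model
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(f"{name:>8}: F1 = {f1:.2f}")
```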
