Results 1 - 20 of 904
1.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38752856

ABSTRACT

Enhancing the reproducibility and comprehension of adaptive immune receptor repertoire sequencing (AIRR-seq) data analysis is critical for scientific progress. This study presents guidelines for reproducible AIRR-seq data analysis, and a collection of ready-to-use pipelines with comprehensive documentation. To this end, ten common pipelines were implemented using ViaFoundry, a user-friendly interface for pipeline management and automation. This is accompanied by versioned containers, documentation and archiving capabilities. The automation of pre-processing analysis steps and the ability to modify pipeline parameters according to specific research needs are emphasized. AIRR-seq data analysis is highly sensitive to varying parameters and setups; using the guidelines presented here, the ability to reproduce previously published results is demonstrated. This work promotes transparency, reproducibility, and collaboration in AIRR-seq data analysis, serving as a model for handling and documenting bioinformatics pipelines in other research domains.


Subject(s)
Computational Biology; Software; Humans; Computational Biology/methods; Reproducibility of Results; Receptors, Immunologic/genetics; High-Throughput Nucleotide Sequencing/methods; Adaptive Immunity/genetics; Guidelines as Topic
2.
Brief Bioinform ; 25(1)2023 11 22.
Article in English | MEDLINE | ID: mdl-38084919

ABSTRACT

Single-cell ATAC-seq (scATAC-seq) is a recently developed approach that provides a means to investigate open chromatin at the single-cell level and to assess epigenetic regulation and transcription factor binding landscapes. The sparsity of scATAC-seq data calls for imputation. Similarly, preprocessing (filtering) may be required to reduce the computational load caused by the large number of open regions. However, optimal strategies for imputation and preprocessing have not yet been evaluated together. We present SAPIEnS (scATAC-seq Preprocessing and Imputation Evaluation System), a benchmark of scATAC-seq imputation frameworks, i.e. combinations of state-of-the-art imputation methods with commonly used preprocessing techniques. We assess different types of scATAC-seq analysis, i.e. clustering, visualization and digital genomic footprinting, and identify optimal preprocessing-imputation strategies. We discuss the benefits of the imputation framework depending on the task and the number of dataset features (peaks). We conclude that preprocessing with the Boruta method is beneficial for the majority of tasks, while imputation is helpful mostly for small datasets. We also provide a SAPIEnS database with pre-computed transcription factor footprints based on imputed data, together with their activity scores in specific cell types. SAPIEnS is published at: https://github.com/lab-medvedeva/SAPIEnS. The SAPIEnS database is available at: https://sapiensdb.com.


Subject(s)
Epigenesis, Genetic; Genomics; Genomics/methods; Transcription Factors/genetics; Transcription Factors/metabolism; Gene Expression Regulation; Cluster Analysis
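Editor's note: the Boruta preprocessing step highlighted in this entry can be illustrated with a small, generic sketch (not the SAPIEnS code itself). The peak matrix, cell labels and parameter values below are placeholders; BorutaPy is used here as the commonly available Python implementation of Boruta.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy  # pip install Boruta

# Toy stand-in for a binarized cell-by-peak scATAC-seq matrix (cells x peaks)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 200))
y = rng.integers(0, 3, size=500)   # placeholder grouping labels (Boruta is supervised)

# Random forest drives Boruta's shadow-feature relevance test
rf = RandomForestClassifier(n_jobs=-1, max_depth=5, class_weight="balanced")
selector = BorutaPy(rf, n_estimators="auto", random_state=42)
selector.fit(X, y)                 # BorutaPy expects plain numpy arrays

X_filtered = X[:, selector.support_]  # keep only the confirmed peaks
print(f"kept {selector.support_.sum()} of {X.shape[1]} peaks")
```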
3.
Brief Bioinform ; 25(1)2023 11 22.
Article in English | MEDLINE | ID: mdl-38113078

ABSTRACT

Single-cell chromatin accessibility sequencing (scCAS) technologies have enabled characterization of the epigenomic heterogeneity of individual cells. However, identifying the features of scCAS data that are relevant to underlying biological processes remains a significant gap. Here, we introduce Cofea, a novel method to fill this gap. Through comprehensive experiments on 5 simulated and 54 real datasets, Cofea demonstrates its superiority in capturing cellular heterogeneity and facilitating downstream analysis. Applying this method to the identification of cell type-specific peaks and candidate enhancers, as well as to pathway enrichment analysis and partitioned heritability analysis, we illustrate the potential of Cofea to uncover functional biological processes.


Subject(s)
Chromatin; Regulatory Sequences, Nucleic Acid; Chromatin/genetics
4.
BMC Bioinformatics ; 25(1): 80, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378440

ABSTRACT

BACKGROUND: With the increase in the dimensionality of flow cytometry data over the past years, there is a growing need to replace or complement traditional manual analysis (i.e. iterative 2D gating) with automated data analysis pipelines. A crucial part of these pipelines consists of pre-processing and applying quality control filtering to the raw data, in order to use high-quality events in the downstream analyses. This part can in turn be split into a number of elementary steps: signal compensation or unmixing, scale transformation, removal of debris, doublets and dead cells, batch effect correction, etc. However, assembling and assessing the pre-processing part can be challenging for a number of reasons. First, each of the involved elementary steps can be implemented using various methods and R packages. Second, the order of the steps can affect the downstream analysis results. Finally, each method typically comes with its own specific, non-standardized diagnostics and visualizations, making objective comparison difficult for the end user. RESULTS: Here, we present CytoPipeline and CytoPipelineGUI, two R packages to build, compare and assess pre-processing pipelines for flow cytometry data. To exemplify these new tools, we present the steps involved in designing a pre-processing pipeline on a real-life dataset and demonstrate different visual assessment use cases. We also set up a benchmark comparing two pre-processing pipelines that differ in their quality control methods, and show how the packages' visualization utilities can provide crucial user insight into the obtained benchmark metrics. CONCLUSION: CytoPipeline and CytoPipelineGUI are two Bioconductor R packages that help build, visualize and assess pre-processing pipelines for flow cytometry data. They increase productivity during pipeline development and testing, and they complement benchmarking tools by providing intuitive user insight into benchmarking results.


Subject(s)
Data Analysis; Software; Flow Cytometry/methods
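Editor's note: CytoPipeline and CytoPipelineGUI are R/Bioconductor packages; purely as an illustration of two of the elementary steps this entry lists (scale transformation and a crude doublet gate), here is a language-agnostic sketch in Python. Channel names, cofactor and tolerance are placeholders, not taken from the paper.

```python
import numpy as np

def arcsinh_transform(x, cofactor=150.0):
    """Standard arcsinh scale transformation used for flow cytometry intensities."""
    return np.arcsinh(x / cofactor)

def singlet_gate(fsc_a, fsc_h, tol=0.15):
    """Keep events whose FSC-A/FSC-H ratio stays close to the typical (singlet) ratio."""
    ratio = fsc_a / np.maximum(fsc_h, 1e-9)
    return np.abs(ratio - np.median(ratio)) < tol

# Toy event matrix: columns stand in for FSC-A, FSC-H and one fluorescence channel
rng = np.random.default_rng(1)
events = rng.gamma(shape=2.0, scale=5e4, size=(10_000, 3))

keep = singlet_gate(events[:, 0], events[:, 1])
clean = events[keep]
clean[:, 2] = arcsinh_transform(clean[:, 2])   # transform the fluorescence channel
print(f"{keep.sum()} of {len(events)} events kept after the singlet gate")
```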
5.
BMC Genomics ; 25(1): 361, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609853

ABSTRACT

BACKGROUND: Single-cell sequencing techniques are revolutionizing every field of biology by providing the ability to measure the abundance of biological molecules at single-cell resolution. Although single-cell sequencing approaches have been developed for several molecular modalities, single-cell transcriptome sequencing is the most prevalent and widely applied technique. SPLiT-seq (split-pool ligation-based transcriptome sequencing) is one of these single-cell transcriptome techniques; it applies a unique combinatorial-barcoding approach by splitting and pooling cells into multi-well plates containing barcodes. This unique approach required the development of dedicated computational tools to preprocess the data and extract the count matrices. Here we compare eight bioinformatic pipelines (alevin-fry splitp, LR-splitpipe, SCSit, splitpipe, splitpipeline, SPLiTseq-demultiplex, STARsolo and zUMI) that have been developed to process SPLiT-seq data. We provide an overview of the tools, their computational performance, functionality and impact on downstream processing of the single-cell data, which vary greatly depending on the tool used. RESULTS: We show that STARsolo, splitpipe and alevin-fry splitp can all handle large amounts of data within a reasonable time. In contrast, the other five pipelines are slow when handling large datasets. When using a smaller dataset, cell barcode results are similar, with the exception of SPLiTseq-demultiplex and splitpipeline. LR-splitpipe, which was originally designed for processing long-read sequencing data, is the slowest of all pipelines. Alevin-fry produced different downstream results that are difficult to interpret. STARsolo functions nearly identically to splitpipe, and the two produce results that are highly similar to each other. However, STARsolo lacks a function to collapse random hexamer reads, for which some additional coding is required. CONCLUSION: Our comprehensive comparative analysis aids users in selecting the most suitable analysis tool for efficient SPLiT-seq data processing, while also detailing the specific prerequisites for each of these pipelines. From the available pipelines, we recommend splitpipe or STARsolo for SPLiT-seq data analysis.


Subject(s)
Computational Biology; Transcriptome; Data Analysis
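Editor's note: the "additional coding" this entry mentions for collapsing random-hexamer reads amounts to summing counts for the two round-1 barcodes (oligo-dT and random hexamer) that come from the same well. A hedged, minimal sketch with a hypothetical barcode-pair table and a made-up cell-ID format of "round1_round2_round3":

```python
import pandas as pd

# Hypothetical mapping: each round-1 well has an oligo-dT and a random-hexamer barcode
pairs = pd.DataFrame({
    "dt_bc":  ["AACGTGAT", "AAACATCG", "ATGCCTAA"],
    "hex_bc": ["TGGTACGT", "GATAGACA", "CTGTAGCC"],
})
hex_to_dt = dict(zip(pairs["hex_bc"], pairs["dt_bc"]))

def collapse_hexamer_barcodes(counts: pd.DataFrame) -> pd.DataFrame:
    """Sum gene counts of hexamer-barcoded 'cells' into their oligo-dT counterpart.

    `counts` is a cells-by-genes matrix whose index is assumed to start with the
    round-1 barcode, e.g. "<round1>_<round2>_<round3>".
    """
    def canonical(cell_id: str) -> str:
        bc1, rest = cell_id.split("_", 1)
        return f"{hex_to_dt.get(bc1, bc1)}_{rest}"  # map hexamer BC onto its dT BC

    return counts.groupby(counts.index.map(canonical)).sum()

# Toy count matrix with two barcodes originating from the same physical well
toy = pd.DataFrame(
    {"GeneA": [3, 2], "GeneB": [0, 5]},
    index=["AACGTGAT_CCAA_GGTT", "TGGTACGT_CCAA_GGTT"],
)
print(collapse_hexamer_barcodes(toy))
```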
6.
Eur J Neurosci ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085986

ABSTRACT

Diffusion-based tractography in the optic nerve requires sampling strategies assisted by anatomical landmark information (regions of interest [ROIs]). We aimed to investigate the feasibility of transferring expert-placed ROI data from high-resolution T1-weighted images onto lower spatial resolution diffusion-weighted images. Slab volumes from 20 volunteers were acquired and preprocessed, including distortion bias correction and artifact reduction. Constrained spherical deconvolution was used to generate a directional diffusion information grid (fibre orientation distribution model [FOD]). Three neuroradiologists marked landmarks on both diffusion imaging variants and the structural datasets. Structural ROI information (volumetric interpolated breath-hold sequence [VIBE]) was registered (linear, with 6/12 degrees of freedom [DOF]) onto single-shot EPI (ss-EPI) and readout-segmented EPI (rs-EPI) volumes, respectively. All eight ROI/FOD combinations were compared in a targeted tractography task of the optic nerve pathway. Inter-rater reliability for placed ROIs among experts was highest in VIBE images (lower confidence interval 0.84 to 0.97, mean 0.91) and lower in both ss-EPI (0.61 to 0.95, mean 0.79) and rs-EPI (0.59 to 0.86, mean 0.70). Tractography success rate based on streamline selection performance was highest for VIBE-drawn ROIs registered (6-DOF) onto the rs-EPI FOD (70.0% over the 5% threshold, capped-to-failed ratio 39/16), followed by both 12-DOF-registered (67.5%; 41/16) and non-registered VIBE (67.5%; 40/23). On the ss-EPI FOD, the VIBE-ROI datasets obtained fewer streamlines overall, each at 55.0% above the 5% threshold and with a lower capped-to-failed ratio (6-DOF: 35/36; 12-DOF: 34/34; non-registered: 33/36). The combination of VIBE-placed ROIs (highest inter-rater reliability) with 6-DOF registration onto rs-EPI targets (best streamline selection performance) is most suitable for the white matter template generation required in group studies.

7.
Hum Brain Mapp ; 45(12): e70003, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39185668

ABSTRACT

Computationally expensive data processing in neuroimaging research places demands on energy consumption, and the resulting carbon emissions contribute to the climate crisis. We measured the carbon footprint of the functional magnetic resonance imaging (fMRI) preprocessing tool fMRIPrep, testing the effect of varying parameters on estimated carbon emissions and preprocessing performance. Performance was quantified using (a) statistical individual-level task activation in regions of interest and (b) mean smoothness of preprocessed data. Eight variants of fMRIPrep were run with 257 participants who had completed an fMRI stop signal task (the same data also used in the original validation of fMRIPrep). Some variants led to substantial reductions in carbon emissions without sacrificing data quality: for instance, disabling FreeSurfer surface reconstruction reduced carbon emissions by 48%. We provide six recommendations for minimising emissions without compromising performance. By varying parameters and computational resources, neuroimagers can substantially reduce the carbon footprint of their preprocessing. This is one aspect of our research carbon footprint over which neuroimagers have control and agency to act.


Subject(s)
Brain; Carbon Footprint; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/standards; Magnetic Resonance Imaging/methods; Female; Male; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Adult; Brain/diagnostic imaging; Brain/physiology; Young Adult; Brain Mapping/methods; Brain Mapping/standards
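Editor's note: one way to obtain emissions estimates of the kind reported in this entry is to wrap a preprocessing run in a software carbon tracker. A minimal sketch using the codecarbon package; the measurement setup and the fMRIPrep command line shown here are assumptions for illustration, not taken from the article.

```python
import subprocess
from codecarbon import EmissionsTracker  # pip install codecarbon

tracker = EmissionsTracker(project_name="fmriprep_variant")
tracker.start()
try:
    # Hypothetical fMRIPrep invocation; adjust paths and flags to your installation.
    subprocess.run(
        ["fmriprep", "bids_dir", "out_dir", "participant",
         "--participant-label", "01", "--fs-no-reconall"],
        check=True,
    )
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for this run
    print(f"Estimated emissions: {emissions_kg:.3f} kg CO2eq")
```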
8.
Hum Brain Mapp ; 45(3): e26632, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379519

ABSTRACT

Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. The idea of estimating the chronological age from magnetic resonance images proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and substituted it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show an improved performance of the reframed global model on the UKB sample with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the workings of the algorithm show meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease-specific patterns for different levels of impairment. The results demonstrate that the new improved algorithms provide reliable and valid brain age estimations.


Subject(s)
Alzheimer Disease; Schizophrenia; Humans; Workflow; Brain/diagnostic imaging; Brain/pathology; Schizophrenia/diagnostic imaging; Schizophrenia/pathology; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/pathology; Machine Learning; Magnetic Resonance Imaging/methods
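Editor's note: as a hedged illustration of the Gaussian process regression step described in this entry (not the authors' BrainAGE code), the sketch below fits a GPR age model to placeholder image-derived features and computes the BrainAGE score as predicted minus chronological age.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 50))        # placeholder features (e.g., GM volumes)
age_train = rng.uniform(45, 80, size=300)   # chronological ages of the training sample
X_test = rng.normal(size=(20, 50))
age_test = rng.uniform(45, 80, size=20)

# RBF kernel with a noise term; hyperparameters are fitted by marginal likelihood
kernel = ConstantKernel() * RBF(length_scale=10.0) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, age_train)

predicted_age = gpr.predict(X_test)
brain_age_gap = predicted_age - age_test    # positive values = "older-looking" brain
mae = np.mean(np.abs(predicted_age - age_test))
print(f"MAE: {mae:.2f} years; mean BrainAGE: {brain_age_gap.mean():.2f} years")
```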
9.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34472590

ABSTRACT

The emergence of single-cell RNA sequencing has facilitated the study of genomes, transcriptomes and proteomes. As single-cell RNA-seq datasets continue to be released, one of the major challenges facing traditional RNA analysis tools is the high-dimensional, high-sparsity, high-noise and large-scale nature of single-cell RNA-seq data. Deep learning technologies match these characteristics well and offer unprecedented promise. Here, we give a systematic review of the most popular single-cell RNA-seq analysis methods and tools based on deep learning models, covering data preprocessing (quality control, normalization, data correction, dimensionality reduction and data visualization) and clustering for downstream analysis. We further evaluate the deep model-based methods for data correction and clustering quantitatively on 11 gold-standard datasets. Moreover, we discuss the data preferences of these methods and their limitations, and give some suggestions and guidance for users in selecting appropriate methods and tools.


Subject(s)
Deep Learning; Single-Cell Analysis; Cluster Analysis; Gene Expression Profiling/methods; Sequence Analysis, RNA/methods; Single-Cell Analysis/methods
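Editor's note: the preprocessing stages enumerated in this entry (quality control, normalization, dimensionality reduction, visualization, clustering) can be illustrated with a conventional, non-deep-learning Scanpy sketch; the input file name and the cut-off values below are placeholders.

```python
import scanpy as sc

adata = sc.read_h5ad("example_counts.h5ad")      # placeholder input file

# Quality control: drop low-complexity cells and rarely detected genes
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalization and log transform
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction, neighbourhood graph, clustering and visualization
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```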
10.
Magn Reson Med ; 2024 Aug 18.
Article in English | MEDLINE | ID: mdl-39155397

ABSTRACT

PURPOSE: The objective of this study was to propose a novel preprocessing approach that simultaneously corrects for frequency and phase drifts in MRS data using a cross-correlation technique. METHODS: The performance of the proposed method was first investigated at different SNR levels using simulation. Random frequency and phase offsets were added to previously acquired STEAM human data at 7 T, simulating two different noise levels with and without baseline artifacts. Alongside the proposed spectral cross-correlation (SC) method, three other simultaneous alignment approaches were evaluated. Validation was performed on human brain data at 3 T and mouse brain data at 16.4 T. RESULTS: The results showed that the SC technique effectively corrects for both small and large frequency and phase drifts, even at low SNR levels. Furthermore, the mean square measurement error of the SC algorithm was comparable to that of the other three methods, with much faster processing time. The efficacy of the proposed technique was successfully demonstrated on both human brain MRS data and a noisy MRS dataset acquired from a small volume of interest in the mouse brain. CONCLUSION: The study demonstrated a fast and robust technique that accurately corrects for both small and large frequency and phase shifts in MRS.
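Editor's note: the general principle of spectral cross-correlation alignment (a sketch of the idea only, not the authors' implementation) fits in a few lines of NumPy: the lag of the peak of the magnitude-spectrum cross-correlation gives the frequency shift, and the residual complex phase of the aligned spectrum relative to the reference gives the zero-order phase offset.

```python
import numpy as np

def align_transient(spec, ref):
    """Estimate and correct the frequency drift (in spectral points) and the
    zero-order phase drift of one complex spectrum `spec` relative to `ref`."""
    xcorr = np.correlate(np.abs(spec), np.abs(ref), mode="full")
    shift = np.argmax(xcorr) - (len(ref) - 1)   # lag of the correlation maximum
    aligned = np.roll(spec, -shift)             # undo the frequency drift
    phase = np.angle(np.vdot(ref, aligned))     # residual zero-order phase
    return aligned * np.exp(-1j * phase), shift, phase

# Toy example: shift and dephase a Lorentzian-like peak, then recover it
n = 2048
f = np.arange(n)
ref = 1.0 / (1.0 + ((f - 1000) / 5.0) ** 2) + 0j
drifted = np.roll(ref, 7) * np.exp(1j * 0.4)
corrected, shift, phase = align_transient(drifted, ref)
print(shift, round(phase, 2))                   # recovers the 7-point, 0.4-rad drift
```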

11.
Magn Reson Med ; 91(2): 773-783, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37831659

ABSTRACT

PURPOSE: DTI characterizes tissue microstructure and provides proxy measures of nerve health. Echo-planar imaging is a popular method of acquiring DTI but is susceptible to various artifacts (e.g., susceptibility, motion, and eddy currents), which may be ameliorated via preprocessing. There are many pipelines available but limited data comparing their performance, which provides the rationale for this study. METHODS: DTI was acquired from the upper limb of healthy volunteers at 3T in blip-up and blip-down directions. Data were independently corrected using (i) FSL's TOPUP & eddy, (ii) FSL's TOPUP, (iii) DSI Studio, and (iv) TORTOISE. DTI metrics were extracted from the median, radial, and ulnar nerves and compared (between pipelines) using mixed-effects linear regression. The geometric similarity of the corrected b = 0 images to the slice-matched T1-weighted (T1w) images was computed using the Sørensen-Dice coefficient. RESULTS: Without preprocessing, the similarity coefficients of the blip-up and blip-down datasets to the T1w images were 0.80 and 0.79, respectively. Preprocessing improved the geometric similarity by 1%, with no difference between pipelines. Compared to TOPUP & eddy, DSI Studio and TORTOISE generated 2% and 6% lower estimates of fractional anisotropy, and 6% and 13% higher estimates of radial diffusivity, respectively. Estimates of anisotropy from TOPUP & eddy versus TOPUP were not different, but TOPUP reduced radial diffusivity by 3%. The agreement of DTI metrics between pipelines was poor. CONCLUSIONS: Preprocessing DTI from the upper limb improves geometric similarity, but the choice of pipeline introduces clinically important variability in diffusion parameter estimates from peripheral nerves.


Subject(s)
Diffusion Magnetic Resonance Imaging; Diffusion Tensor Imaging; Humans; Diffusion Tensor Imaging/methods; Diffusion Magnetic Resonance Imaging/methods; Peripheral Nerves; Upper Extremity/diagnostic imaging; Echo-Planar Imaging; Image Processing, Computer-Assisted/methods
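Editor's note: the Sørensen-Dice coefficient used in this entry to quantify geometric similarity between corrected b = 0 and T1w volumes reduces to a one-liner on binary masks. A minimal sketch; the toy volumes and the simple thresholding used to build the masks are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two boolean volumes of equal shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Toy volumes standing in for a corrected b=0 image and a slice-matched T1w image
rng = np.random.default_rng(0)
b0 = rng.random((64, 64, 24))
t1 = np.clip(b0 + rng.normal(scale=0.1, size=b0.shape), 0, 1)

# Crude foreground masks via thresholding (a real pipeline would use brain/limb masks)
dice = dice_coefficient(b0 > 0.5, t1 > 0.5)
print(f"Dice similarity: {dice:.3f}")
```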
12.
J Magn Reson Imaging ; 59(5): 1800-1806, 2024 May.
Article in English | MEDLINE | ID: mdl-37572098

ABSTRACT

BACKGROUND: Single-center MRI radiomics models are sensitive to data heterogeneity, limiting the diagnostic capabilities of current prostate cancer (PCa) radiomics models. PURPOSE: To study the impact of image resampling on the diagnostic performance of radiomics in a multicenter prostate MRI setting. STUDY TYPE: Retrospective. POPULATION: Nine hundred thirty patients (nine centers, two vendors) with 737 eligible PCa lesions, randomly split into training (70%, N = 500), validation (10%, N = 89), and a held-out test set (20%, N = 148). FIELD STRENGTH/SEQUENCE: 1.5T and 3T scanners/T2-weighted imaging (T2W), diffusion-weighted imaging (DWI), and apparent diffusion coefficient maps. ASSESSMENT: A total of 48 normalized radiomics datasets were created using various resampling methods, including different target resolutions (T2W: 0.35, 0.5, and 0.8 mm; DWI: 1.37, 2, and 2.5 mm), dimensionalities (2D/3D) and interpolation techniques (nearest neighbor, linear, B-spline and Blackman windowed-sinc). Each of the datasets was used to train a radiomics model to detect clinically relevant PCa (International Society of Urological Pathology grade ≥ 2). Baseline models were constructed using 2D and 3D datasets without image resampling. The resampling configurations with the highest validation performance were evaluated on the test dataset and compared to the baseline models. STATISTICAL TESTS: Area under the curve (AUC), DeLong test. The significance level used was 0.05. RESULTS: The best 2D resampling model (T2W: B-spline and 0.5 mm resolution; DWI: nearest neighbor and 2 mm resolution) significantly outperformed the 2D baseline (AUC: 0.77 vs. 0.64). The best 3D resampling model (T2W: linear and 0.8 mm resolution; DWI: nearest neighbor and 2.5 mm resolution) significantly outperformed the 3D baseline (AUC: 0.79 vs. 0.67). DATA CONCLUSION: Image resampling has a significant effect on the performance of multicenter radiomics artificial intelligence in prostate MRI. The recommended 2D resampling configuration is isotropic resampling with T2W at 0.5 mm (B-spline interpolation) and DWI at 2 mm (nearest neighbor interpolation). For 3D radiomics, this work recommends isotropic resampling with T2W at 0.8 mm (linear interpolation) and DWI at 2.5 mm (nearest neighbor interpolation). EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.


Subject(s)
Prostate; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Prostate/pathology; Retrospective Studies; Artificial Intelligence; Radiomics; Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology
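Editor's note: resampling configurations like the ones recommended in this entry can be reproduced with a generic SimpleITK call. The sketch below resamples a T2W volume to 0.5 mm voxels with B-spline interpolation and a DWI volume to 2 mm with nearest-neighbor interpolation; the file names and the rest of the radiomics pipeline are assumptions, not the authors' code.

```python
import SimpleITK as sitk

def resample_to_spacing(image, new_spacing, interpolator=sitk.sitkBSpline):
    """Resample `image` onto an isotropic grid with voxel size `new_spacing` (mm)."""
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, new_spacing)]
    return sitk.Resample(
        image, new_size, sitk.Transform(), interpolator,
        image.GetOrigin(), new_spacing, image.GetDirection(), 0.0,
        image.GetPixelID(),
    )

# Hypothetical file names; adjust to your own data layout
t2w = sitk.ReadImage("t2w.nii.gz")
dwi = sitk.ReadImage("dwi_b800.nii.gz")
t2w_rs = resample_to_spacing(t2w, (0.5, 0.5, 0.5), sitk.sitkBSpline)
dwi_rs = resample_to_spacing(dwi, (2.0, 2.0, 2.0), sitk.sitkNearestNeighbor)
print(t2w_rs.GetSpacing(), dwi_rs.GetSpacing())
```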
13.
Biotechnol Bioeng ; 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39044472

ABSTRACT

In the burgeoning field of protein research, the effective analysis of intricate protein data remains a formidable challenge, necessitating advanced computational tools for data processing, feature extraction, and interpretation. This study introduces ProteinFlow, an innovative framework designed to revolutionize feature engineering in protein data analysis. ProteinFlow stands out by offering enhanced efficiency in data collection and preprocessing, along with advanced capabilities in feature extraction, directly addressing the complexities inherent in multidimensional protein datasets. In a comparative analysis, ProteinFlow demonstrated a significant improvement over traditional methods, notably reducing data preprocessing time and expanding the scope of biologically significant features identified. The framework's parallel data processing strategy and advanced algorithms ensure not only rapid data handling but also the extraction of comprehensive, meaningful insights from protein sequences, structures, and interactions. Furthermore, ProteinFlow exhibits remarkable scalability, adeptly managing large-scale datasets without compromising performance, a crucial attribute in the era of big data.

14.
Stat Med ; 43(5): 1019-1047, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38155152

ABSTRACT

Birth defects and their associated deaths, the high health and financial costs of maternal care, and associated morbidity are major contributors to infant mortality. Where permitted by law, prenatal diagnosis allows for intrauterine care, more complicated hospital deliveries, and termination of pregnancy. During pregnancy, a set of measurements is commonly used to monitor fetal health, including fetal head circumference, crown-rump length, abdominal circumference, and femur length. Because of the intricate interactions between the ultrasound (US) waves and the biological tissues of mother and fetus, analyzing fetal US images requires specialized expertise. Artifacts include acoustic shadows, speckle noise, motion blur, and missing borders. The fetus moves quickly, body structures lie close together, and appearance varies greatly across the weeks of pregnancy. In this work, we propose a fetal growth analysis from US images based on head circumference biometry, using optimal segmentation and a hybrid classifier. First, we introduce a hybrid whale with oppositional fruit fly optimization (WOFF) algorithm for optimal segmentation of the fetal head, which improves detection accuracy. Next, an improved U-Net design is utilized to extract hidden features (head circumference biometry) from the segmented region. Then, we design a modified Boosting arithmetic optimization (MBAO) algorithm for feature optimization, which selects the best features among multiple candidates to reduce data dimensionality issues. Furthermore, a hybrid deep learning technique called bi-directional LSTM with convolutional neural network (B-LSTM-CNN) is used to assess fetal growth and health. Finally, we validate our proposed method on the open benchmark datasets HC18 (ultrasound images) and the Oxford University Research Archive (ORA-data) (ultrasound video frames). We compare the simulation results of our proposed algorithm with existing state-of-the-art techniques in terms of various metrics.


Subject(s)
Fetal Development; Ultrasonography, Prenatal; Pregnancy; Female; Humans; Ultrasonography, Prenatal/methods; Biometry; Algorithms; Neural Networks, Computer
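Editor's note: as a small, hedged illustration of the biometry step only (the WOFF segmentation and MBAO/B-LSTM-CNN stages are specific to the paper and not reproduced here), head circumference can be read off a binary head mask by fitting an ellipse and applying Ramanujan's perimeter approximation.

```python
import cv2
import numpy as np

def head_circumference_mm(mask: np.ndarray, pixel_size_mm: float) -> float:
    """Fit an ellipse to the largest contour of a binary head mask and return
    its perimeter (Ramanujan approximation) in millimetres."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    (_, _), (d1, d2), _ = cv2.fitEllipse(largest)   # full axis lengths in pixels
    a, b = d1 / 2.0, d2 / 2.0                       # semi-axes
    perimeter_px = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
    return perimeter_px * pixel_size_mm

# Toy example: a synthetic elliptical "head" mask with 0.1 mm pixels
mask = np.zeros((600, 800), dtype=np.uint8)
cv2.ellipse(mask, (400, 300), (250, 180), 0, 0, 360, 255, thickness=-1)
print(round(head_circumference_mm(mask, pixel_size_mm=0.1), 1), "mm")
```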
15.
Anal Bioanal Chem ; 416(2): 373-386, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37946036

ABSTRACT

Continuous manufacturing is becoming increasingly important in the (bio-)pharmaceutical industry, as more product can be produced in less time and at lower costs. In this context, there is a need for powerful continuous analytical tools. Many established off-line analytical methods, such as mass spectrometry (MS), are hardly considered for process analytical technology (PAT) applications in biopharmaceutical processes, as they are limited to at-line analysis due to the required sample preparation and the associated complexity, although they would provide a suitable technique for the assessment of a wide range of quality attributes. In this study, we investigated the applicability of a recently developed micro simulated moving bed chromatography system (µSMB) for continuous on-line sample preparation for MS. As a test case, we demonstrate the continuous on-line MS measurement of a protein solution (myoglobin) containing Tris buffer, which interferes with ESI-MS measurements, by continuously exchanging this buffer with a volatile ammonium acetate buffer suitable for MS measurements. The integration of the µSMB significantly increases MS sensitivity by removing over 98% of the buffer substances. Thus, this study demonstrates the feasibility of on-line µSMB-MS, providing a versatile PAT tool by combining the detection power of MS for various product attributes with all the advantages of continuous on-line analytics.

16.
Network ; 35(2): 190-211, 2024 May.
Article in English | MEDLINE | ID: mdl-38155546

ABSTRACT

Nowadays, the Internet of Things (IoT) and IoT platforms are extensively utilized in several healthcare applications. IoT devices produce a huge amount of data in the healthcare field that can be inspected on an IoT platform. In this paper, a novel algorithm, named the artificial flora optimization-based chameleon swarm algorithm (AFO-based CSA), is developed for optimal path finding. Here, data are collected by the sensors and transmitted to the base station (BS) using the proposed AFO-based CSA, which is derived by integrating artificial flora optimization (AFO) into the chameleon swarm algorithm (CSA). This integration combines the strengths and features of both AFO and CSA for optimal routing of medical data in IoT. Moreover, the proposed AFO-based CSA algorithm considers factors such as energy, delay, and distance for effective routing of data. At the BS, prediction is conducted in stages: pre-processing, feature dimension reduction using Pearson's correlation, and disease detection by a recurrent neural network trained with the proposed AFO-based CSA. Experimental results show that the performance of the proposed AFO-based CSA is superior to competitive approaches in terms of energy consumption (0.538 J), accuracy (0.950), sensitivity (0.965), and specificity (0.937).


Subject(s)
Deep Learning; Internet of Things; Algorithms; Health Facilities; Neural Networks, Computer
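Editor's note: routing objectives that trade off energy, delay and distance, as described in this entry, are typically expressed as a weighted fitness function that the metaheuristic minimises. A hedged, generic sketch; the weights, normalisation constants and metric values below are placeholders, not the paper's formulation.

```python
from dataclasses import dataclass

@dataclass
class PathMetrics:
    energy: float    # energy consumed along the candidate path (J)
    delay: float     # end-to-end delay (s)
    distance: float  # total hop distance (m)

def fitness(p: PathMetrics, w_energy=0.4, w_delay=0.3, w_dist=0.3,
            max_energy=1.0, max_delay=1.0, max_dist=100.0) -> float:
    """Weighted sum of normalised cost terms; lower values indicate better paths."""
    return (w_energy * p.energy / max_energy
            + w_delay * p.delay / max_delay
            + w_dist * p.distance / max_dist)

candidates = [PathMetrics(0.54, 0.12, 62.0), PathMetrics(0.61, 0.08, 48.0)]
best = min(candidates, key=fitness)   # the metaheuristic would search this space
print(best)
```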
17.
Network ; 35(1): 55-72, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37933604

ABSTRACT

Our approach includes image preprocessing, feature extraction utilising the SqueezeNet model, hyperparameter optimisation utilising the Equilibrium Optimizer (EO) algorithm, and classification utilising a Stacked Autoencoder (SAE) model, carried out as a series of separate steps. During the image preprocessing stage, contrast limited adaptive histogram equalisation (CLAHE) is utilised to improve contrast, and Adaptive Bilateral Filtering (ABF) to remove noise. The SqueezeNet model is utilised to obtain relevant features from the preprocessed images, and the EO technique is utilised to fine-tune the hyperparameters. Finally, the SAE model categorises the diseases that affect the grape leaf. The simulation analysis of the EODTL-GLDC technique was performed on the New Plant Diseases Dataset and the results were inspected from many perspectives. The results demonstrate that this model outperforms other deep learning techniques and methods more commonly associated with machine learning. Specifically, this technique attained a precision of 96.31% on the testing data and 96.88% on the training data under an 80:20 split. These results offer further proof that the suggested strategy is successful in automating the detection and categorisation of grape leaf diseases.


Subject(s)
Carbamoyl-Phosphate Synthase I Deficiency Disease; Malnutrition; Vitis; Machine Learning; Plant Leaves
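Editor's note: the two preprocessing operations named in this entry map onto standard OpenCV calls. A minimal sketch; OpenCV's plain bilateral filter stands in for the adaptive variant, and the file name and parameter values are placeholders.

```python
import cv2

# Load a leaf image (placeholder path) and enhance contrast with CLAHE on the L channel
img = cv2.imread("grape_leaf.jpg")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l_chan, a_chan, b_chan = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
lab = cv2.merge((clahe.apply(l_chan), a_chan, b_chan))
enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Edge-preserving denoising with a bilateral filter
denoised = cv2.bilateralFilter(enhanced, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("grape_leaf_preprocessed.jpg", denoised)
```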
18.
Cell Biochem Funct ; 42(5): e4088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38973163

ABSTRACT

The field of image processing is experiencing significant advancements to support professionals in analyzing histological images obtained from biopsies. The primary objective is to enhance the process of diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by postprocessing approaches that can identify distinct neoplastic areas. The use of computational approaches makes expert analysis more objective and efficient. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of the current advances in segmentation and classification approaches for images of follicular lymphoma. This research analyzes the primary image processing techniques utilized in the various stages of preprocessing, segmentation of the region of interest, classification, and postprocessing, as described in the existing literature. The study also examines the strengths and weaknesses associated with these approaches. Additionally, this study encompasses an examination of validation procedures and an exploration of prospective future research directions in the segmentation of neoplasias.


Subject(s)
Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted; Lymphoma, Follicular; Lymphoma, Follicular/diagnosis; Lymphoma, Follicular/pathology; Humans
19.
J Sep Sci ; 47(16): e2400337, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39189599

ABSTRACT

Sample pretreatment technology is crucial for drug analysis and detection, because the effect of sample pretreatment directly determines the final analysis results. In recent years, with the continuous innovation of microextraction and other technologies, such as material preparation technologies and auxiliary technologies for extraction, the sample pretreatment techniques used in drug analysis have become increasingly mature and diverse. This article takes amphetamine (AM) or methamphetamine as an example to review the recent development of pretreatment methods for AM-containing biological samples from the perspectives of extraction techniques, extraction media and auxiliary technologies. Extraction techniques are summarized under the categories of contact microextraction, separate microextraction and membrane-based microextraction, to better guide their application according to their features. Prevailing and innovative extraction media, including carbon-based materials, silicon-based materials, metal-organic frameworks, molecularly selective materials, supramolecular solvents and ionic liquids, are reviewed. Auxiliary technologies such as magnetic fields, electric fields, microwaves and ultrasound, which can enhance extraction efficiency and accuracy, are also reviewed. Finally, prospects for the future development of pretreatment technology for the analysis of AM-containing biological samples are provided.


Subject(s)
Amphetamine; Humans; Amphetamine/analysis; Amphetamine/chemistry; Solid Phase Microextraction
20.
Graefes Arch Clin Exp Ophthalmol ; 262(7): 2247-2267, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38400856

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a serious eye complication that results in permanent vision damage. As the number of patients suffering from DR increases, so do delays in DR diagnosis and treatment. To bridge this gap, an efficient DR screening system that assists clinicians is required. Although many artificial intelligence (AI) screening systems have been deployed in recent years, accuracy remains a metric that can be improved. METHODS: An enumerative pre-processing approach is implemented in the deep learning model to attain better accuracy for DR severity grading. The proposed approach is compared with various pre-trained models, and the necessary performance metrics are tabulated. This paper also presents a comparative analysis of various optimization algorithms utilized in the deep network model, and the results are outlined. RESULTS: Experiments were carried out on the MESSIDOR dataset to assess performance. The experimental results show that the enumerative pipeline combination K1-K2-K3-DFNN-LOA performs better than other combinations. Compared with various optimization algorithms and pre-trained models, the proposed model achieves better performance, with maximum accuracy, precision, recall, F1 score, and macro-averaged metric of 97.60%, 94.60%, 98.40%, 94.60%, and 0.97, respectively. CONCLUSION: This study focused on developing and implementing a DR screening system on color fundus photographs. This artificial intelligence-based system offers the possibility to enhance the efficacy and accessibility of DR diagnosis.


Subject(s)
Algorithms; Diabetic Retinopathy; Severity of Illness Index; Humans; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/classification; Deep Learning; Artificial Intelligence; Retina/pathology; Retina/diagnostic imaging; Reproducibility of Results; Male
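Editor's note: the evaluation metrics reported in this entry (accuracy, precision, recall, F1 and macro-averages) can be computed from predicted and true severity grades with scikit-learn. A small generic sketch with made-up labels, not the authors' evaluation code.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder ground-truth and predicted DR severity grades (0-4)
y_true = [0, 1, 2, 4, 3, 2, 0, 1, 4, 2]
y_pred = [0, 1, 2, 4, 3, 1, 0, 1, 4, 2]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy_score(y_true, y_pred):.3f}, "
      f"macro precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")
```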