Results 1 - 20 of 875
1.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38752856

ABSTRACT

Enhancing the reproducibility and comprehension of adaptive immune receptor repertoire sequencing (AIRR-seq) data analysis is critical for scientific progress. This study presents guidelines for reproducible AIRR-seq data analysis, and a collection of ready-to-use pipelines with comprehensive documentation. To this end, ten common pipelines were implemented using ViaFoundry, a user-friendly interface for pipeline management and automation. This is accompanied by versioned containers, documentation and archiving capabilities. The automation of pre-processing analysis steps and the ability to modify pipeline parameters according to specific research needs are emphasized. AIRR-seq data analysis is highly sensitive to varying parameters and setups; using the guidelines presented here, the ability to reproduce previously published results is demonstrated. This work promotes transparency, reproducibility, and collaboration in AIRR-seq data analysis, serving as a model for handling and documenting bioinformatics pipelines in other research domains.


Subjects
Computational Biology , Software , Humans , Computational Biology/methods , Reproducibility of Results , Receptors, Immunologic/genetics , High-Throughput Nucleotide Sequencing/methods , Adaptive Immunity/genetics , Guidelines as Topic
2.
Brief Bioinform ; 25(1)2023 11 22.
Article in English | MEDLINE | ID: mdl-38084919

ABSTRACT

Single-cell ATAC-seq (scATAC-seq) is a recently developed approach that provides a means to investigate open chromatin at the single-cell level, to assess epigenetic regulation and transcription factor binding landscapes. The sparsity of scATAC-seq data calls for imputation. Similarly, preprocessing (filtering) may be required to reduce the computational load caused by the large number of open regions. However, optimal strategies for imputation and preprocessing have not yet been evaluated together. We present SAPIEnS (scATAC-seq Preprocessing and Imputation Evaluation System), a benchmark of scATAC-seq imputation frameworks that combines state-of-the-art imputation methods with commonly used preprocessing techniques. We assess different types of scATAC-seq analysis, i.e. clustering, visualization and digital genomic footprinting, and derive optimal preprocessing-imputation strategies. We discuss the benefits of an imputation framework depending on the task and the number of dataset features (peaks). We conclude that preprocessing with the Boruta method is beneficial for the majority of tasks, while imputation is helpful mostly for small datasets. We also provide a SAPIEnS database with pre-computed transcription factor footprints based on imputed data, with their activity scores in specific cell types. SAPIEnS is published at: https://github.com/lab-medvedeva/SAPIEnS. The SAPIEnS database is available at: https://sapiensdb.com.
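The masking-based evaluation idea behind such imputation benchmarks can be sketched in a few lines: hide a fraction of the observed entries, impute, and score recovery. This is a minimal illustration of the general idea, not the SAPIEnS code; `mean_impute` is a deliberately trivial stand-in for real imputation methods:

```python
import numpy as np

def evaluate_imputation(counts, impute_fn, mask_frac=0.1, seed=0):
    """Hide a fraction of nonzero entries, impute, and score recovery (RMSE)."""
    rng = np.random.default_rng(seed)
    nz = np.argwhere(counts > 0)
    n_mask = max(1, int(mask_frac * len(nz)))
    picked = nz[rng.choice(len(nz), size=n_mask, replace=False)]
    masked = counts.astype(float).copy()
    masked[picked[:, 0], picked[:, 1]] = 0.0  # simulate dropout
    imputed = impute_fn(masked)
    truth = counts[picked[:, 0], picked[:, 1]]
    pred = imputed[picked[:, 0], picked[:, 1]]
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def mean_impute(x):
    """Toy imputer: replace zeros with the per-peak (column) nonzero mean."""
    out = x.copy()
    for j in range(x.shape[1]):
        col = x[:, j]
        if (col > 0).any():
            out[col == 0, j] = col[col > 0].mean()
    return out
```

Real frameworks differ mainly in what replaces `mean_impute`; the masking harness stays essentially the same.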


Subjects
Epigenesis, Genetic , Genomics , Genomics/methods , Transcription Factors/genetics , Transcription Factors/metabolism , Gene Expression Regulation , Cluster Analysis
3.
Brief Bioinform ; 25(1)2023 11 22.
Article in English | MEDLINE | ID: mdl-38113078

ABSTRACT

Single-cell chromatin accessibility sequencing (scCAS) technologies have enabled characterization of the epigenomic heterogeneity of individual cells. However, identifying the features of scCAS data that are relevant to underlying biological processes remains a significant gap. Here, we introduce a novel method, Cofea, to fill this gap. Through comprehensive experiments on 5 simulated and 54 real datasets, Cofea demonstrates its superiority in capturing cellular heterogeneity and facilitating downstream analysis. Applying this method to the identification of cell type-specific peaks and candidate enhancers, as well as to pathway enrichment analysis and partitioned heritability analysis, we illustrate the potential of Cofea to uncover functional biological processes.


Subjects
Chromatin , Regulatory Sequences, Nucleic Acid , Chromatin/genetics
4.
BMC Bioinformatics ; 25(1): 80, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378440

ABSTRACT

BACKGROUND: With the increase in the dimensionality of flow cytometry data over the past years, there is a growing need to replace or complement traditional manual analysis (i.e. iterative 2D gating) with automated data analysis pipelines. A crucial part of these pipelines consists of pre-processing and applying quality control filtering to the raw data, in order to use high-quality events in the downstream analyses. This part can in turn be split into a number of elementary steps: signal compensation or unmixing, scale transformation, removal of debris, doublets and dead cells, batch effect correction, etc. However, assembling and assessing the pre-processing part can be challenging for a number of reasons. First, each of the involved elementary steps can be implemented using various methods and R packages. Second, the order of the steps can have an impact on the downstream analysis results. Finally, each method typically comes with its own specific, non-standardized diagnostics and visualizations, making objective comparison difficult for the end user. RESULTS: Here, we present CytoPipeline and CytoPipelineGUI, two R packages to build, compare and assess pre-processing pipelines for flow cytometry data. To exemplify these new tools, we present the steps involved in designing a pre-processing pipeline on a real-life dataset and demonstrate different visual assessment use cases. We also set up a benchmark comparing two pre-processing pipelines that differ in their quality control methods, and show how the packages' visualization utilities can provide crucial user insight into the obtained benchmark metrics. CONCLUSION: CytoPipeline and CytoPipelineGUI are two Bioconductor R packages that help build, visualize and assess pre-processing pipelines for flow cytometry data. They increase productivity during pipeline development and testing, and complement benchmarking tools by providing intuitive user insight into benchmarking results.


Subjects
Data Analysis , Software , Flow Cytometry/methods
5.
BMC Genomics ; 25(1): 361, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609853

ABSTRACT

BACKGROUND: Single-cell sequencing techniques are revolutionizing every field of biology by providing the ability to measure the abundance of biological molecules at single-cell resolution. Although single-cell sequencing approaches have been developed for several molecular modalities, single-cell transcriptome sequencing is the most prevalent and widely applied technique. SPLiT-seq (split-pool ligation-based transcriptome sequencing) is one of these single-cell transcriptome techniques; it applies a unique combinatorial-barcoding approach by splitting and pooling cells into multi-well plates containing barcodes. This unique approach required the development of dedicated computational tools to preprocess the data and extract the count matrices. Here we compare eight bioinformatic pipelines (alevin-fry splitp, LR-splitpipe, SCSit, splitpipe, splitpipeline, SPLiTseq-demultiplex, STARsolo and zUMI) that have been developed to process SPLiT-seq data. We provide an overview of the tools, their computational performance, functionality and impact on downstream processing of the single-cell data, which vary greatly depending on the tool used. RESULTS: We show that STARsolo, splitpipe and alevin-fry splitp can all handle large amounts of data within reasonable time. In contrast, the other five pipelines are slow when handling large datasets. When using smaller datasets, cell barcode results are similar, with the exception of SPLiTseq-demultiplex and splitpipeline. LR-splitpipe, which was originally designed for processing long-read sequencing data, is the slowest of all pipelines. Alevin-fry produced divergent downstream results that are difficult to interpret. STARsolo functions nearly identically to splitpipe and produces highly similar results. However, STARsolo lacks the function to collapse random hexamer reads, for which some additional coding is required.
CONCLUSION: Our comprehensive comparative analysis aids users in selecting the most suitable analysis tool for efficient SPLiT-seq data processing, while also detailing the specific prerequisites for each of these pipelines. From the available pipelines, we recommend splitpipe or STARsolo for SPLiT-seq data analysis.
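The combinatorial-barcoding idea these pipelines implement can be sketched in a few lines: a cell is identified by its unique combination of round barcodes. This toy demultiplexer ignores sequencing-error correction, UMI deduplication and random-hexamer collapsing, all of which the real pipelines must handle:

```python
def demultiplex(reads, bc1, bc2, bc3):
    """Assign each read to a cell by its combination of three round barcodes.

    reads: {read_id: (barcode1, barcode2, barcode3)}
    bc1/bc2/bc3: the valid barcode whitelist for each split-pool round.
    Returns {cell_key: [read_ids]}; reads with unknown barcodes are dropped.
    """
    cells = {}
    for read_id, (b1, b2, b3) in reads.items():
        if b1 in bc1 and b2 in bc2 and b3 in bc3:
            cells.setdefault((b1, b2, b3), []).append(read_id)
    return cells
```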


Subjects
Computational Biology , Transcriptome , Data Analysis
6.
Eur J Neurosci ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085986

ABSTRACT

Diffusion-based tractography in the optic nerve requires sampling strategies assisted by anatomical landmark information (regions of interest [ROIs]). We aimed to investigate the feasibility of transferring expert-placed ROIs from high-resolution T1-weighted data onto lower spatial resolution diffusion-weighted images. Slab volumes from 20 volunteers were acquired and preprocessed, including distortion bias correction and artifact reduction. Constrained spherical deconvolution was used to generate a directional diffusion information grid (fibre orientation distribution model [FOD]). Three neuroradiologists marked landmarks on both diffusion imaging variants and the structural datasets. Structural ROI information (volumetric interpolated breath-hold sequence [VIBE]) was registered (linearly, with 6 or 12 degrees of freedom [DOF]) onto single-shot EPI (ss-EPI) and readout-segmented EPI (rs-EPI) volumes, respectively. All eight ROI/FOD combinations were compared in a targeted tractography task of the optic nerve pathway. Inter-rater reliability for placed ROIs among experts was highest in VIBE images (lower confidence interval 0.84 to 0.97, mean 0.91) and lower in both ss-EPI (0.61 to 0.95, mean 0.79) and rs-EPI (0.59 to 0.86, mean 0.70). Tractography success rate based on streamline selection performance was highest for VIBE-drawn ROIs registered (6-DOF) onto rs-EPI FOD (70.0% over the 5% threshold, capped-to-failed ratio 39/16), followed by both 12-DOF-registered (67.5%; 41/16) and nonregistered VIBE (67.5%; 40/23). On ss-EPI FOD, VIBE-ROI datasets obtained fewer streamlines overall, each with 55.0% above the 5% threshold and with lower capped-to-failed ratios (6-DOF: 35/36; 12-DOF: 34/34; nonregistered: 33/36). The combination of VIBE-placed ROIs (highest inter-rater reliability) with 6-DOF registration onto rs-EPI targets (best streamline selection performance) is most suitable for the white matter template generation required in group studies.

7.
Hum Brain Mapp ; 45(3): e26632, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38379519

ABSTRACT

Since the introduction of the BrainAGE method, novel machine learning methods for brain age prediction have continued to emerge. The idea of estimating the chronological age from magnetic resonance images proved to be an interesting field of research due to the relative simplicity of its interpretation and its potential use as a biomarker of brain health. We revised our previous BrainAGE approach, originally utilising relevance vector regression (RVR), and substituted it with Gaussian process regression (GPR), which enables more stable processing of larger datasets, such as the UK Biobank (UKB). In addition, we extended the global BrainAGE approach to regional BrainAGE, providing spatially specific scores for five brain lobes per hemisphere. We tested the performance of the new algorithms under several different conditions and investigated their validity on the ADNI and schizophrenia samples, as well as on a synthetic dataset of neocortical thinning. The results show an improved performance of the reframed global model on the UKB sample with a mean absolute error (MAE) of less than 2 years and a significant difference in BrainAGE between healthy participants and patients with Alzheimer's disease and schizophrenia. Moreover, the workings of the algorithm show meaningful effects for a simulated neocortical atrophy dataset. The regional BrainAGE model performed well on two clinical samples, showing disease-specific patterns for different levels of impairment. The results demonstrate that the new improved algorithms provide reliable and valid brain age estimations.
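The core of GPR-based age prediction is the closed-form posterior mean. Below is a minimal plain-numpy sketch using a single scalar feature rather than the high-dimensional image features BrainAGE operates on; the kernel hyperparameters are illustrative assumptions, and BrainAGE itself is the gap between predicted and chronological age:

```python
import numpy as np

def rbf(a, b, length=10.0, var=100.0):
    """Squared-exponential (RBF) kernel between 1-D feature vectors."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def gpr_predict(x_train, y_train, x_test, noise=1.0):
    """GP regression posterior mean: mu + K*(K + sigma^2 I)^-1 (y - mu)."""
    mu = y_train.mean()  # center ages so the zero-mean prior is reasonable
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    return mu + Ks @ np.linalg.solve(K, y_train - mu)
```

The brain age gap for a subject is then simply `gpr_predict(...) - chronological_age`.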


Subjects
Alzheimer Disease , Schizophrenia , Humans , Workflow , Brain/diagnostic imaging , Brain/pathology , Schizophrenia/diagnostic imaging , Schizophrenia/pathology , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Machine Learning , Magnetic Resonance Imaging/methods
8.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34472590

ABSTRACT

The emergence of single-cell RNA sequencing has facilitated the study of genomes, transcriptomes and proteomes. As single-cell RNA-seq datasets continue to be released, one of the major challenges facing traditional RNA analysis tools is the high-dimensional, high-sparsity, high-noise and large-scale character of single-cell RNA-seq data. Deep learning technologies match these characteristics well and offer unprecedented promise. Here, we give a systematic review of the most popular single-cell RNA-seq analysis methods and tools based on deep learning models, covering the procedures of data preprocessing (quality control, normalization, data correction, dimensionality reduction and data visualization) and clustering for downstream analysis. We further quantitatively evaluate the deep model-based data correction and clustering methods on 11 gold-standard datasets. Moreover, we discuss the data preferences of these methods and their limitations, and give some suggestions and guidance for users to select appropriate methods and tools.


Subjects
Deep Learning , Single-Cell Analysis , Cluster Analysis , Gene Expression Profiling/methods , Sequence Analysis, RNA/methods , Single-Cell Analysis/methods
9.
Magn Reson Med ; 91(2): 773-783, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37831659

ABSTRACT

PURPOSE: DTI characterizes tissue microstructure and provides proxy measures of nerve health. Echo-planar imaging is a popular method of acquiring DTI but is susceptible to various artifacts (e.g., susceptibility, motion, and eddy currents), which may be ameliorated via preprocessing. There are many pipelines available but limited data comparing their performance, which provides the rationale for this study. METHODS: DTI was acquired from the upper limb of healthy volunteers at 3T in blip-up and blip-down directions. Data were independently corrected using (i) FSL's TOPUP & eddy, (ii) FSL's TOPUP, (iii) DSI Studio, and (iv) TORTOISE. DTI metrics were extracted from the median, radial, and ulnar nerves and compared (between pipelines) using mixed-effects linear regression. The geometric similarity of the corrected b = 0 images and the slice-matched T1-weighted (T1w) images was computed using the Sørensen-Dice coefficient. RESULTS: Without preprocessing, the similarity coefficients of the blip-up and blip-down datasets to the T1w images were 0.80 and 0.79, respectively. Preprocessing improved the geometric similarity by 1%, with no difference between pipelines. Compared to TOPUP & eddy, DSI Studio and TORTOISE generated 2% and 6% lower estimates of fractional anisotropy, and 6% and 13% higher estimates of radial diffusivity, respectively. Estimates of anisotropy from TOPUP & eddy versus TOPUP were not different, but TOPUP reduced radial diffusivity by 3%. The agreement of DTI metrics between pipelines was poor. CONCLUSIONS: Preprocessing DTI from the upper limb improves geometric similarity, but the choice of pipeline introduces clinically important variability in diffusion parameter estimates from peripheral nerves.
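The Sørensen-Dice coefficient used here to score geometric similarity follows directly from two binary masks, 2|A∩B|/(|A|+|B|); a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```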


Subjects
Diffusion Magnetic Resonance Imaging , Diffusion Tensor Imaging , Humans , Diffusion Tensor Imaging/methods , Diffusion Magnetic Resonance Imaging/methods , Peripheral Nerves , Upper Extremity/diagnostic imaging , Echo-Planar Imaging , Image Processing, Computer-Assisted/methods
10.
J Magn Reson Imaging ; 59(5): 1800-1806, 2024 May.
Article in English | MEDLINE | ID: mdl-37572098

ABSTRACT

BACKGROUND: Single center MRI radiomics models are sensitive to data heterogeneity, limiting the diagnostic capabilities of current prostate cancer (PCa) radiomics models. PURPOSE: To study the impact of image resampling on the diagnostic performance of radiomics in a multicenter prostate MRI setting. STUDY TYPE: Retrospective. POPULATION: Nine hundred thirty patients (nine centers, two vendors) with 737 eligible PCa lesions, randomly split into training (70%, N = 500), validation (10%, N = 89), and a held-out test set (20%, N = 148). FIELD STRENGTH/SEQUENCE: 1.5T and 3T scanners/T2-weighted imaging (T2W), diffusion-weighted imaging (DWI), and apparent diffusion coefficient maps. ASSESSMENT: A total of 48 normalized radiomics datasets were created using various resampling methods, including different target resolutions (T2W: 0.35, 0.5, and 0.8 mm; DWI: 1.37, 2, and 2.5 mm), dimensionalities (2D/3D) and interpolation techniques (nearest neighbor, linear, Bspline and Blackman windowed-sinc). Each of the datasets was used to train a radiomics model to detect clinically relevant PCa (International Society of Urological Pathology grade ≥ 2). Baseline models were constructed using 2D and 3D datasets without image resampling. The resampling configurations with highest validation performance were evaluated in the test dataset and compared to the baseline models. STATISTICAL TESTS: Area under the curve (AUC), DeLong test. The significance level used was 0.05. RESULTS: The best 2D resampling model (T2W: Bspline and 0.5 mm resolution, DWI: nearest neighbor and 2 mm resolution) significantly outperformed the 2D baseline (AUC: 0.77 vs. 0.64). The best 3D resampling model (T2W: linear and 0.8 mm resolution, DWI: nearest neighbor and 2.5 mm resolution) significantly outperformed the 3D baseline (AUC: 0.79 vs. 0.67). DATA CONCLUSION: Image resampling has a significant effect on the performance of multicenter radiomics artificial intelligence in prostate MRI. 
The recommended 2D resampling configuration is isotropic resampling with T2W at 0.5 mm (Bspline interpolation) and DWI at 2 mm (nearest neighbor interpolation). For the 3D radiomics, this work recommends isotropic resampling with T2W at 0.8 mm (linear interpolation) and DWI at 2.5 mm (nearest neighbor interpolation). EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 2.
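The resampling step itself is straightforward to illustrate. Below is a minimal nearest-neighbour resampler in plain numpy for a 2-D slice, an assumption-laden sketch only: the study's pipeline additionally used linear, B-spline and Blackman windowed-sinc interpolation and 3-D variants:

```python
import numpy as np

def resample_nn(img, spacing, new_spacing):
    """Resample a 2-D image to a new pixel spacing (mm) by nearest-neighbour lookup."""
    h, w = img.shape
    new_h = int(round(h * spacing[0] / new_spacing[0]))
    new_w = int(round(w * spacing[1] / new_spacing[1]))
    # Map each output pixel back to the closest input pixel index.
    rows = np.minimum((np.arange(new_h) * new_spacing[0] / spacing[0]).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) * new_spacing[1] / spacing[1]).astype(int), w - 1)
    return img[np.ix_(rows, cols)]
```

For example, resampling a DWI slice from 1.37 mm to the recommended 2 mm spacing shrinks the grid while preserving intensity values exactly, which is why nearest neighbour is often preferred for DWI.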


Subjects
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostate/pathology , Retrospective Studies , Artificial Intelligence , Radiomics , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology
11.
Biotechnol Bioeng ; 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39044472

ABSTRACT

In the burgeoning field of proteins, the effective analysis of intricate protein data remains a formidable challenge, necessitating advanced computational tools for data processing, feature extraction, and interpretation. This study introduces ProteinFlow, an innovative framework designed to revolutionize feature engineering in protein data analysis. ProteinFlow stands out by offering enhanced efficiency in data collection and preprocessing, along with advanced capabilities in feature extraction, directly addressing the complexities inherent in multidimensional protein data sets. Through a comparative analysis, ProteinFlow demonstrated a significant improvement over traditional methods, notably reducing data preprocessing time and expanding the scope of biologically significant features identified. The framework's parallel data processing strategy and advanced algorithms ensure not only rapid data handling but also the extraction of comprehensive, meaningful insights from protein sequences, structures, and interactions. Furthermore, ProteinFlow exhibits remarkable scalability, adeptly managing large-scale data sets without compromising performance, a crucial attribute in the era of big data.

12.
Stat Med ; 43(5): 1019-1047, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38155152

ABSTRACT

Birth defects and their associated deaths, the high health and financial costs of maternal care, and associated morbidity are major contributors to infant mortality. If permitted by law, prenatal diagnosis allows for intrauterine care, more complex hospital deliveries, and termination of pregnancy. During pregnancy, a set of measurements is commonly used to monitor fetal health, including fetal head circumference, crown-rump length, abdominal circumference, and femur length. Because of the intricate interactions between the ultrasound (US) waves and the biological tissues of mother and fetus, analyzing fetal US images requires specialized expertise. Artifacts include acoustic shadows, speckle noise, motion blur, and missing borders. The fetus moves quickly, body structures lie close together, and appearance varies greatly across the weeks of pregnancy. In this work, we propose a fetal growth analysis from US images using head circumference biometry with optimal segmentation and a hybrid classifier. First, we introduce a hybrid whale with oppositional fruit fly optimization (WOFF) algorithm for optimal segmentation of the fetal head, which improves detection accuracy. Next, an improved U-Net design is utilized for hidden feature (head circumference biometry) extraction from the segmented region. Then, we design a modified Boosting arithmetic optimization (MBAO) algorithm for feature optimization, which selects the best features among many to reduce data dimensionality issues. Furthermore, a hybrid deep learning technique, bi-directional LSTM with convolutional neural network (B-LSTM-CNN), is applied for fetal growth analysis to assess fetal growth and health. Finally, we validate the proposed method on the open benchmark datasets HC18 (ultrasound images) and the Oxford University Research Archive (ORA-data; ultrasound video frames).
We compared the simulation results of our proposed algorithm with existing state-of-the-art techniques in terms of various metrics.
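Head circumference in this kind of biometry is conventionally derived from an ellipse fitted to the segmented skull contour, whose perimeter can be computed with Ramanujan's approximation. This sketch illustrates only that final measurement step, not the paper's segmentation pipeline:

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation for the perimeter of an ellipse.

    a, b: semi-major and semi-minor axes (e.g., in mm from a fitted ellipse).
    """
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```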


Subjects
Fetal Development , Ultrasonography, Prenatal , Pregnancy , Female , Humans , Ultrasonography, Prenatal/methods , Biometry , Algorithms , Neural Networks, Computer
13.
Anal Bioanal Chem ; 416(2): 373-386, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37946036

ABSTRACT

Continuous manufacturing is becoming increasingly important in the (bio-)pharmaceutical industry, as more product can be produced in less time and at lower costs. In this context, there is a need for powerful continuous analytical tools. Many established off-line analytical methods, such as mass spectrometry (MS), are hardly considered for process analytical technology (PAT) applications in biopharmaceutical processes, as they are limited to at-line analysis due to the required sample preparation and the associated complexity, although they would provide a suitable technique for the assessment of a wide range of quality attributes. In this study, we investigated the applicability of a recently developed micro simulated moving bed chromatography system (µSMB) for continuous on-line sample preparation for MS. As a test case, we demonstrate the continuous on-line MS measurement of a protein solution (myoglobin) containing Tris buffer, which interferes with ESI-MS measurements, by continuously exchanging this buffer with a volatile ammonium acetate buffer suitable for MS measurements. The integration of the µSMB significantly increases MS sensitivity by removing over 98% of the buffer substances. Thus, this study demonstrates the feasibility of on-line µSMB-MS, providing a versatile PAT tool by combining the detection power of MS for various product attributes with all the advantages of continuous on-line analytics.

14.
Network ; 35(2): 190-211, 2024 May.
Article in English | MEDLINE | ID: mdl-38155546

ABSTRACT

Nowadays, the Internet of Things (IoT) and IoT platforms are extensively utilized in several healthcare applications. IoT devices produce a huge amount of data in the healthcare field that can be inspected on an IoT platform. In this paper, a novel algorithm, named the artificial flora optimization-based chameleon swarm algorithm (AFO-based CSA), is developed for optimal path finding. Here, data are collected by the sensors and transmitted to the base station (BS) using the proposed AFO-based CSA, which is derived by integrating artificial flora optimization (AFO) into the chameleon swarm algorithm (CSA). This integration combines the strengths and features of both AFO and CSA for optimal routing of medical data in IoT. Moreover, the proposed AFO-based CSA algorithm considers factors such as energy, delay, and distance for effective routing of data. At the BS, prediction is conducted through stages of pre-processing, feature dimension reduction using Pearson's correlation, and disease detection, performed by a recurrent neural network trained with the proposed AFO-based CSA. Experimental results show that the performance of the proposed AFO-based CSA is superior to competitive approaches in terms of energy consumption (0.538 J), accuracy (0.950), sensitivity (0.965), and specificity (0.937).


Subjects
Deep Learning , Internet of Things , Algorithms , Health Facilities , Neural Networks, Computer
15.
Network ; 35(1): 55-72, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37933604

ABSTRACT

Our approach includes image preprocessing, feature extraction utilising the SqueezeNet model, hyperparameter optimisation utilising the Equilibrium Optimizer (EO) algorithm, and classification utilising a Stacked Autoencoder (SAE) model, each carried out as a separate step. During the image preprocessing stage, contrast limited adaptive histogram equalisation (CLAHE) is utilised to improve contrast, and Adaptive Bilateral Filtering (ABF) to remove noise. The SqueezeNet model is utilised to obtain relevant features from the preprocessed images, and the EO technique is utilised to fine-tune the hyperparameters. Finally, the SAE model categorises the diseases that affect the grape leaf. The simulation analysis of the EODTL-GLDC technique was performed on the New Plant Diseases Dataset and the results were inspected from multiple perspectives. The results demonstrate that this model outperforms other deep learning and conventional machine learning techniques. Specifically, the technique attained a precision of 96.31% on the testing set and 96.88% on the training set, using an 80:20 train/test split. These results offer further proof that the suggested strategy is successful in automating the detection and categorisation of grape leaf diseases.


Subjects
Carbamoyl-Phosphate Synthase I Deficiency Disease , Malnutrition , Vitis , Machine Learning , Plant Leaves
16.
Cell Biochem Funct ; 42(5): e4088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38973163

ABSTRACT

The field of image processing is experiencing significant advancements to support professionals in analyzing histological images obtained from biopsies. The primary objective is to enhance the process of diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by postprocessing approaches that can identify distinct neoplastic areas. Using computer approaches facilitates more objective and efficient analysis by experts. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of the current advances in segmentation and classification approaches for images of follicular lymphoma. This research analyzes the primary image processing techniques utilized in the various stages of preprocessing, segmentation of the region of interest, classification, and postprocessing, as described in the existing literature. The study also examines the strengths and weaknesses associated with these approaches. Additionally, it covers validation procedures and explores prospective directions for future research in the segmentation of neoplasias.


Subjects
Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Lymphoma, Follicular , Lymphoma, Follicular/diagnosis , Lymphoma, Follicular/pathology , Humans
17.
Graefes Arch Clin Exp Ophthalmol ; 262(7): 2247-2267, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38400856

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a serious eye complication that results in permanent vision damage. As the number of patients suffering from DR increases, so does the delay in diagnosis and treatment. To bridge this gap, an efficient DR screening system that assists clinicians is required. Although many artificial intelligence (AI) screening systems have been deployed in recent years, accuracy remains a metric that can be improved. METHODS: An enumerative pre-processing approach is implemented in the deep learning model to attain better accuracy for DR severity grading. The proposed approach is compared with various pre-trained models, and the necessary performance metrics are tabulated. This paper also presents a comparative analysis of various optimization algorithms utilized in the deep network model. RESULTS: The experiments were carried out on the MESSIDOR dataset to assess performance. The experimental results show that the enumerative pipeline combination K1-K2-K3-DFNN-LOA outperforms other combinations. Compared with various optimization algorithms and pre-trained models, the proposed model achieves better performance, with maximum accuracy, precision, recall, F1 score, and macro-averaged metric of 97.60%, 94.60%, 98.40%, 94.60%, and 0.97, respectively. CONCLUSION: This study focused on developing and implementing a DR screening system for color fundus photographs. This artificial intelligence-based system offers the possibility to enhance the efficacy and accessibility of DR diagnosis.
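The reported precision, recall and F1 score follow directly from confusion-matrix counts; a minimal sketch of how such metrics are computed from binary labels:

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall and F1 from binary labels (positives = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For multi-class severity grading, the macro-averaged metric is simply the unweighted mean of these per-class scores.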


Subjects
Algorithms , Diabetic Retinopathy , Severity of Illness Index , Humans , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/classification , Deep Learning , Artificial Intelligence , Retina/pathology , Retina/diagnostic imaging , Reproducibility of Results , Male
18.
MAGMA ; 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003384

ABSTRACT

OBJECTIVES: Signal drift has been put forward as one of the fundamental confounding factors in diffusion MRI (dMRI) of the brain. This study characterizes signal drift in dMRI of the brain, evaluates correction methods, and exemplifies its impact on parameter estimation for three intravoxel incoherent motion (IVIM) protocols. MATERIALS AND METHODS: dMRI of the brain was acquired in ten healthy subjects using protocols designed to enable retrospective characterization and correction of signal drift. All scans were acquired twice for repeatability analysis. Three temporal polynomial correction methods were evaluated: (1) global, (2) voxelwise, and (3) spatiotemporal. Effects of acquisition order were simulated using estimated drift fields. RESULTS: Signal drift was around 2% per 5 min in the brain as a whole, but reached above 5% per 5 min in the frontal regions. Only correction methods taking spatially varying signal drift into account could achieve effective corrections. Altered acquisition order introduced both systematic changes and differences in repeatability in the presence of signal drift. DISCUSSION: Signal drift in dMRI of the brain was found to be spatially varying, calling for correction methods taking this into account. Without proper corrections, choice of protocol can affect dMRI parameter estimates and their repeatability.
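A global temporal polynomial correction, the simplest of the three evaluated approaches, can be sketched as fitting a polynomial to the repeated signal measurements over time and dividing it out. This is an illustrative sketch only; as the study shows, a purely global correction cannot capture the spatially varying drift that made voxelwise and spatiotemporal methods necessary:

```python
import numpy as np

def correct_drift(signals, times, degree=2):
    """Fit a temporal polynomial to repeated measurements and divide it out.

    signals: mean signal (e.g., from interspersed b=0 volumes) per time point.
    Returns the signals rescaled so the fitted trend is flat at its initial value.
    """
    coeffs = np.polyfit(times, signals, degree)
    trend = np.polyval(coeffs, times)
    return signals / (trend / trend[0])  # normalize trend to the first time point
```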

19.
BMC Med Imaging ; 24(1): 201, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095688

ABSTRACT

Skin cancer stands as one of the foremost challenges in oncology, and its early detection is crucial for successful treatment outcomes. Traditional diagnosis depends on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly Convolutional Neural Networks (CNNs), to improve the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images covering a diverse range of skin lesions, the study introduces a CNN model tailored to the nuanced task of skin lesion classification. The architecture comprises multiple convolutional, pooling, and dense layers designed to capture the complex visual features of skin lesions. To address class imbalance in the dataset, a data augmentation strategy ensures a balanced representation of each lesion category during training. The learning process is optimized with the Adam optimizer, with parameters fine-tuned over 50 epochs at a batch size of 128 to help the model discern subtle patterns in the image data, and a Model Checkpoint callback preserves the best model iteration for future use. The proposed model achieves an accuracy of 97.78%, with a precision of 97.9%, recall of 97.9%, and F2 score of 97.8%, underscoring its potential as a robust tool for the early detection and classification of skin cancer, supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
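The class-balancing idea described above can be sketched as oversampling each minority class with a simple augmentation transform until all classes match the largest one. The random horizontal flip stands in for the paper's (unspecified) augmentation transforms, and the toy class counts are illustrative only, not HAM10000's actual distribution.

```python
import numpy as np

def balance_by_augmentation(images, labels, rng):
    """Oversample minority classes (with a random flip) so every class
    reaches the size of the largest class."""
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    out_x, out_y = [images], [labels]
    for cls, n in zip(classes, counts):
        idx = np.flatnonzero(labels == cls)
        extra = rng.choice(idx, size=target - n, replace=True)  # resample
        aug = images[extra]                                     # copies
        flips = rng.integers(0, 2, size=len(extra)).astype(bool)
        aug[flips] = aug[flips, :, ::-1]          # random horizontal flip
        out_x.append(aug)
        out_y.append(np.full(target - n, cls))
    return np.concatenate(out_x), np.concatenate(out_y)

rng = np.random.default_rng(42)
x = rng.random((30, 8, 8, 3))                     # toy lesion "images"
y = np.array([0] * 20 + [1] * 7 + [2] * 3)        # imbalanced toy classes
xb, yb = balance_by_augmentation(x, y, rng)
print(np.bincount(yb))                            # [20 20 20]
```

Training on the balanced set keeps the loss from being dominated by the majority lesion category.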


Subjects
Deep Learning , Dermoscopy , Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods
20.
BMC Public Health ; 24(1): 1777, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38961394

ABSTRACT

BACKGROUND: Dyslipidemia, characterized by abnormal plasma lipid profiles, poses a global health threat linked to millions of deaths annually. OBJECTIVES: This study focuses on predicting dyslipidemia incidence using machine learning methods, addressing the crucial need for early identification and intervention. METHODS: The dataset, derived from the Lifestyle Promotion Project (LPP) in East Azerbaijan Province, Iran, underwent comprehensive preprocessing, merging, and null handling. Target selection involved five distinct dyslipidemia-related variables. Normalization techniques and three feature selection algorithms were applied to enhance predictive modeling. RESULT: The results underscore the potential of machine learning algorithms, with the multi-layer perceptron (MLP) neural network achieving the best performance in accuracy, F1 score, sensitivity, and specificity among the methods tested. Random Forest also achieved remarkable accuracy and outperformed K-Nearest Neighbors (KNN) in precision, recall, and F1 score. Feature selection revealed meaningful patterns shared across the five dyslipidemia-related target variables, indicating common underlying factors. Features such as waist circumference, serum vitamin D, blood pressure, sex, age, diabetes, and physical activity were associated with dyslipidemia. CONCLUSION: These results collectively highlight the complex nature of dyslipidemia and its connections with numerous factors, reinforcing the value of machine learning methods for understanding and precisely predicting its incidence.
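As a rough illustration of such a pipeline's front end (normalization followed by feature selection), the sketch below ranks features by absolute correlation with a binary dyslipidemia label. This univariate filter is a stand-in for the study's three (unnamed) feature-selection algorithms, and the synthetic "waist circumference" feature is invented for the demo.

```python
import numpy as np

def min_max(X):
    """Min-max normalization per feature column, a common preprocessing step."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def rank_features(X, y, k=3):
    """Rank features by absolute Pearson correlation with a binary label;
    return the indices of the top-k features."""
    Xn = min_max(X)
    yc = y - y.mean()
    corr = (Xn - Xn.mean(axis=0)).T @ yc
    corr /= (Xn.std(axis=0) * y.std() * len(y) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(1)
n = 200
waist = rng.normal(90, 12, n)                 # informative synthetic feature
noise = rng.normal(size=(n, 4))               # uninformative features
y = (waist + rng.normal(0, 5, n) > 95).astype(float)  # toy dyslipidemia label
X = np.column_stack([waist, noise])
top = rank_features(X, y, k=2)
print(top[0])                                 # 0: the waist-circumference column
```

Any of the classifiers compared in the study (MLP, Random Forest, KNN) would then be trained on only the selected columns.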


Subjects
Dyslipidemias , Machine Learning , Humans , Dyslipidemias/epidemiology , Incidence , Iran/epidemiology , Male , Female , Life Style , Algorithms , Health Promotion/methods , Middle Aged , Adult