Results 1 - 20 of 71
1.
BMC Med Inform Decis Mak ; 24(1): 142, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802836

ABSTRACT

Lung cancer remains a leading cause of cancer-related mortality globally, with prognosis significantly dependent on early-stage detection. Traditional diagnostic methods, though effective, often face challenges regarding accuracy, early detection, and scalability, being invasive, time-consuming, and prone to ambiguous interpretations. This study proposes an advanced machine learning model designed to enhance lung cancer stage classification using CT scan images, aiming to overcome these limitations by offering a faster, non-invasive, and reliable diagnostic tool. Utilizing the IQ-OTHNCCD lung cancer dataset, comprising CT scans from various stages of lung cancer and healthy individuals, we performed extensive preprocessing including resizing, normalization, and Gaussian blurring. A Convolutional Neural Network (CNN) was then trained on this preprocessed data, and class imbalance was addressed using Synthetic Minority Over-sampling Technique (SMOTE). The model's performance was evaluated through metrics such as accuracy, precision, recall, F1-score, and ROC curve analysis. The results demonstrated a classification accuracy of 99.64%, with precision, recall, and F1-score values exceeding 98% across all categories. SMOTE significantly enhanced the model's ability to classify underrepresented classes, contributing to the robustness of the diagnostic tool. These findings underscore the potential of machine learning in transforming lung cancer diagnostics, providing high accuracy in stage classification, which could facilitate early detection and tailored treatment strategies, ultimately improving patient outcomes.
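
For readers who want to prototype the preprocessing and class-balancing steps described in this abstract, a minimal sketch is given below. It is not the authors' code; the 128 × 128 image size, the 5 × 5 Gaussian kernel, and the variable names are illustrative assumptions, and it presumes OpenCV and imbalanced-learn are installed.

```python
import cv2
import numpy as np
from imblearn.over_sampling import SMOTE

def preprocess(path, size=(128, 128)):
    """Resize, Gaussian-blur, and normalise a CT slice to [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)                  # resizing
    img = cv2.GaussianBlur(img, (5, 5), 0)       # Gaussian blurring
    return img.astype(np.float32) / 255.0        # normalization

def balance(paths, labels):
    """Oversample minority stage classes with SMOTE on flattened images."""
    X = np.stack([preprocess(p) for p in paths]).reshape(len(paths), -1)
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, labels)
    return X_res.reshape(-1, 128, 128, 1), y_res   # ready for a CNN
```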


Subject(s)
Lung Neoplasms; Neural Networks, Computer; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/classification; Machine Learning; Image Processing, Computer-Assisted/methods; Deep Learning
2.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732936

ABSTRACT

Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a challenge for classification because of their visual similarity, which can lead to confusion among radiologists. To mitigate these issues, we created an automated system with a large data hub that combines 17 chest X-ray datasets, 71,096 images in total, and aims to classify ten different disease classes. Because it combines various resources, the dataset contains noise, annotations, class imbalance, and data redundancy. We applied several image pre-processing techniques, such as resizing, de-annotation, CLAHE, and filtering, to eliminate noise and artifacts, and used elastic deformation augmentation to generate a balanced dataset. Then, we developed DeepChestGNN, a novel medical image classification model utilizing a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture works flexibly with graph data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.
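
A hedged sketch of the kind of pre-processing named above (resizing, CLAHE, light filtering), assuming OpenCV; the 224 × 224 size, CLAHE clip limit, and median-blur kernel are illustrative choices, and the paper's de-annotation and elastic-deformation steps are not reproduced here.

```python
import cv2

def preprocess_cxr(path, size=(224, 224)):
    """Resize a chest X-ray, apply CLAHE, and lightly filter noise."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)                              # resizing
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                                   # CLAHE contrast enhancement
    return cv2.medianBlur(img, 3)                            # simple noise filtering
```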


Subject(s)
Lung Diseases; Neural Networks, Computer; Humans; Lung Diseases/diagnostic imaging; Lung Diseases/diagnosis; Image Processing, Computer-Assisted/methods; Deep Learning; Algorithms; Lung/diagnostic imaging; Lung/pathology
3.
BMC Med Inform Decis Mak ; 24(1): 113, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689289

ABSTRACT

Brain tumors pose a significant medical challenge necessitating precise detection and diagnosis, especially in magnetic resonance imaging (MRI). Current methodologies reliant on traditional image processing and conventional machine learning encounter hurdles in accurately discerning tumor regions within intricate MRI scans, which are often susceptible to noise and varying image quality. The advent of artificial intelligence (AI) has revolutionized various aspects of healthcare, providing innovative solutions for diagnostics and treatment strategies. This paper introduces a novel AI-driven methodology for brain tumor detection from MRI images, leveraging the EfficientNetB2 deep learning architecture. Our approach incorporates advanced image preprocessing techniques, including image cropping, equalization, and the application of homomorphic filters, to enhance the quality of MRI data for more accurate tumor detection. The proposed model exhibits substantial performance enhancement, with validation accuracies of 99.83%, 99.75%, and 99.2% on the BD-BrainTumor, Brain-tumor-detection, and Brain-MRI-images-for-brain-tumor-detection datasets, respectively. This research holds promise for refined clinical diagnostics and patient care, fostering more accurate and reliable brain tumor identification from MRI images. All data are available on GitHub: https://github.com/muskan258/Brain-Tumor-Detection-from-MRI-Images-Utilizing-EfficientNetB2
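
The abstract's classifier builds on EfficientNetB2; the sketch below shows one common way to attach a small binary head to that backbone in Keras. The input size, dropout rate, and frozen-backbone choice are assumptions, not the authors' configuration, and the cropping/equalization/homomorphic-filter preprocessing is not shown.

```python
import tensorflow as tf

def build_model(input_shape=(260, 260, 3)):
    """EfficientNetB2 backbone with a binary tumour/no-tumour head."""
    base = tf.keras.applications.EfficientNetB2(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    base.trainable = False                      # optionally unfreeze later for fine-tuning
    x = tf.keras.layers.Dropout(0.3)(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```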


Subject(s)
Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Artificial Intelligence
4.
Adv Neurobiol ; 36: 173-189, 2024.
Article in English | MEDLINE | ID: mdl-38468032

ABSTRACT

This chapter begins by showing the difference between fractal geometry and fractal analysis. The text distinguishes mathematical from natural fractals and shows how they are best defined by explaining the concept of fractal analysis. Furthermore, the text presents the most famous technique of fractal analysis: the box-counting method. The method and the methodology that leads to the precise value of the fractal (i.e., box) dimension are demonstrated on images of human dentate neurons; a more detailed explanation of the methodology was presented in the previous version of this chapter. This version promotes the notion of monofractal analysis and shows how three types of images of the same neurons can be used to quantify four image properties. The results showed that monofractal parameters successfully quantified four image properties in three nuclei of the cerebellum. Finally, the author discusses the results of this chapter alongside previously published conclusions. The results show how the monofractal parameters discriminate images of neurons from the three nuclei of the human cerebrum. These outcomes are discussed along with the results of previous studies.
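
The box-counting method mentioned above has a compact NumPy formulation; the sketch below is a generic implementation (assuming a binarised neuron image with a non-empty foreground), not the chapter's own software.

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate the box-counting (fractal) dimension of a 2-D binary image."""
    # Pad to a square power-of-two grid so boxes tile the image evenly.
    n = int(2 ** np.ceil(np.log2(max(binary_img.shape))))
    padded = np.zeros((n, n), dtype=bool)
    padded[:binary_img.shape[0], :binary_img.shape[1]] = binary_img > 0

    sizes = 2 ** np.arange(1, int(np.log2(n)))
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        blocks = padded.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    # Slope of log(count) versus log(1/box size) is the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```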


Subject(s)
Brain; Neurons; Humans; Neurons/physiology; Brain/diagnostic imaging; Fractals; Cerebellum/diagnostic imaging
5.
J Pathol Inform ; 15: 100356, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38222323

ABSTRACT

The introduction of deep learning caused a significant breakthrough in digital pathology, thanks to its capability to mine hidden data patterns in digitised histological slides to resolve diagnostic tasks and extract prognostic and predictive information. However, the high performance achieved in classification tasks depends on the availability of large datasets, whose collection and preprocessing are still time-consuming processes. Therefore, strategies to make these steps more efficient are worth investigating. This work introduces SlideTiler, an open-source software tool with a user-friendly graphical interface. SlideTiler can manage several image preprocessing phases through an intuitive workflow that does not require specific coding skills. The software was designed to provide direct access to virtual slides, allowing custom tiling of specific regions of interest drawn by the user, tile labelling, quality assessment, and direct export to dataset directories. To illustrate the functions and the scalability of SlideTiler, a deep learning-based classifier was implemented to classify 4 different tumour histotypes available in the TCGA repository. The results demonstrate the effectiveness of SlideTiler in facilitating data preprocessing and promoting accessibility to digitised pathology images for research purposes. Considering the increasing interest in deep learning applications in digital pathology, SlideTiler has a positive impact on this field. Moreover, SlideTiler has been conceived as a dynamic tool in constant evolution, and more updated and efficient versions will be released in the future.
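
SlideTiler itself is a GUI tool; the sketch below only illustrates the underlying tiling step it automates, using the openslide-python API. The tile size, pyramid level, and the assumption that the ROI is given as level-0 pixel coordinates are mine, not the authors'.

```python
import openslide

def tile_region(slide_path, x0, y0, width, height, tile=512, level=0):
    """Cut a user-defined region of a virtual slide into fixed-size tiles."""
    slide = openslide.OpenSlide(slide_path)
    tiles = []
    for y in range(y0, y0 + height, tile):
        for x in range(x0, x0 + width, tile):
            region = slide.read_region((x, y), level, (tile, tile)).convert("RGB")
            tiles.append(((x, y), region))      # keep coordinates for labelling/export
    slide.close()
    return tiles
```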

6.
Med Phys ; 51(3): 2119-2127, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37727132

ABSTRACT

BACKGROUND: The concept of volumetric modulated arc therapy-computed tomography (VMAT-CT) was proposed more than a decade ago. However, its application has been very limited, mainly due to poor image quality. More specifically, the blurred areas in electronic portal imaging device (EPID) images collected during VMAT heavily degrade the image quality of VMAT-CT. PURPOSE: The goal of this study was to propose systematic methods to preprocess EPID images and improve the image quality of VMAT-CT. METHODS: An online region-based active contour method was introduced to binarize portal images. A multi-leaf collimator (MLC) motion model was developed to remove MLC motion blur. Outlier filtering was then applied to replace the remaining artifacts with plausible data. To assess the impact of these preprocessing methods on the image quality of VMAT-CT, 44 clinical VMAT plans for several treatment sites (lung, esophagus, and head & neck) were delivered to a Rando phantom, and several real-patient cases were also acquired. VMAT-CT reconstruction was attempted for all the cases, and image quality was evaluated. RESULTS: All three preprocessing methods effectively removed the blurred edges of EPID images. The combined preprocessing methods not only saved VMAT-CT from distortions and artifacts, but also increased the percentage of VMAT plans that could be reconstructed. CONCLUSIONS: The systematic preprocessing of portal images significantly improves the image quality of VMAT-CT and facilitates the application of VMAT-CT as an effective image guidance tool.
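
As a rough illustration of the binarisation and outlier-filtering stages, the sketch below substitutes a simple Otsu threshold and a local-median replacement for the paper's online region-based active contour and MLC motion model; it is a simplified stand-in, not the published method.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def binarize_and_clean(epid_image, outlier_sigma=3.0):
    """Binarise an EPID frame and replace outlier pixels with local medians."""
    img = np.asarray(epid_image, dtype=np.float64)
    mask = img > threshold_otsu(img)                 # global binarisation (stand-in)
    med = ndimage.median_filter(img, size=5)         # local median estimate
    resid = img - med
    outliers = np.abs(resid) > outlier_sigma * resid.std()
    cleaned = np.where(outliers, med, img)           # replace artifacts with plausible data
    return mask, cleaned
```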


Subject(s)
Radiotherapy, Intensity-Modulated; Humans; Radiotherapy, Intensity-Modulated/methods; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Tomography, X-Ray Computed; Lung
7.
Network ; 35(1): 55-72, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37933604

ABSTRACT

Our approach includes image preprocessing, feature extraction using the SqueezeNet model, hyperparameter optimisation using the Equilibrium Optimizer (EO) algorithm, and classification using a Stacked Autoencoder (SAE) model, carried out as a series of separate steps. During image preprocessing, contrast-limited adaptive histogram equalisation (CLAHE) is applied to improve contrast, and Adaptive Bilateral Filtering (ABF) to remove noise. The SqueezeNet model extracts relevant features from the preprocessed images, and the EO technique fine-tunes the hyperparameters. Finally, the SAE model categorises the diseases that affect the grape leaf. The EODTL-GLDC technique was evaluated on the New Plant Diseases dataset and the results were examined from several perspectives. The results demonstrate that this model outperforms other deep learning techniques and more conventional machine learning methods. Specifically, the technique attained a precision of 96.31% on the test set and 96.88% on the training set under an 80:20 split. These results offer further evidence that the suggested strategy is successful in automating the detection and categorisation of grape leaf diseases.


Subject(s)
Carbamoyl-Phosphate Synthase I Deficiency Disease; Malnutrition; Vitis; Machine Learning; Plant Leaves
8.
Stat Med ; 43(5): 1019-1047, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38155152

ABSTRACT

Birth defects and their associated deaths, the high health and financial costs of maternal care, and associated morbidity are major contributors to infant mortality. Where permitted by law, prenatal diagnosis allows for intrauterine care, planning for more complicated hospital deliveries, and termination of pregnancy. During pregnancy, a set of measurements is commonly used to monitor fetal health, including fetal head circumference, crown-rump length, abdominal circumference, and femur length. Because of the intricate interactions between the ultrasound (US) waves and the biological tissues of mother and fetus, analyzing fetal US images requires specialised expertise. Artifacts include acoustic shadows, speckle noise, motion blur, and missing borders. The fetus moves quickly, body structures lie close together, and appearance varies greatly across the weeks of pregnancy. In this work, we propose a fetal growth analysis from US images using head-circumference biometry with optimal segmentation and a hybrid classifier. First, we introduce a hybrid whale optimization with oppositional fruit fly optimization (WOFF) algorithm for optimal segmentation of the fetal head, which improves detection accuracy. Next, an improved U-Net design is used to extract hidden features (head-circumference biometry) from the segmented region. Then, we design a modified Boosting arithmetic optimization (MBAO) algorithm for feature optimization, which selects the best features among many to reduce data dimensionality. Furthermore, a hybrid deep learning technique, bi-directional LSTM with a convolutional neural network (B-LSTM-CNN), is applied for fetal growth analysis to assess fetal growth and health. Finally, we validate the proposed method on the open benchmark datasets HC18 (ultrasound images) and the Oxford University Research Archive (ORA-data; ultrasound video frames). We compare the simulation results of the proposed algorithm with existing state-of-the-art techniques across various metrics.
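
Once a fetal-head mask is available, the head-circumference measurement itself is simple geometry. The sketch below fits an ellipse to the largest contour and applies Ramanujan's perimeter approximation; it is illustrative post-processing under my own assumptions, not the WOFF/U-Net pipeline of the paper.

```python
import cv2
import numpy as np

def head_circumference(mask, pixel_size_mm=1.0):
    """Estimate head circumference (mm) from a binary fetal-head mask."""
    mask = (mask > 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)     # assumes a non-empty mask
    (_, _), (d1, d2), _ = cv2.fitEllipse(largest)    # ellipse axis lengths in pixels
    a, b = d1 / 2.0, d2 / 2.0                        # semi-axes
    # Ramanujan's approximation for the perimeter of an ellipse.
    perimeter = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
    return perimeter * pixel_size_mm
```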


Subject(s)
Fetal Development; Ultrasonography, Prenatal; Pregnancy; Female; Humans; Ultrasonography, Prenatal/methods; Biometry; Algorithms; Neural Networks, Computer
9.
Front Nutr ; 10: 1247075, 2023.
Article in English | MEDLINE | ID: mdl-37920287

ABSTRACT

Grading dried shiitake mushrooms is an indispensable production step, as there are large quality differences between grades, which affect the product's price and marketability. Dried shiitake mushroom samples have irregular shapes and small morphological differences between different grades of the same species, and they may occur in mixed grades, which poses challenges for automatic grade recognition using machine vision. In this study, a comprehensive method to solve this problem is provided, including image acquisition, preprocessing, dataset creation, and grade recognition. The osprey optimization algorithm (OOA) is used to improve the computational efficiency of Otsu's threshold binarization and to obtain complete mushroom contours efficiently. Then, a method for dried shiitake mushroom grade recognition based on an improved VGG network (D-VGG) is proposed. The method uses the VGG16 network as the base framework, optimizes the convolutional layers of the network, and uses a global average pooling layer instead of a fully connected layer to reduce the risk of model overfitting. In addition, a residual module and batch normalization are introduced to enhance the learning of texture details, accelerate the convergence of the model, and improve the stability of the training process. An improved channel attention network is proposed to enhance the feature weights of different channels and improve the grading performance of the model. The experimental results show that the improved network model (D-VGG) can recognize different dried shiitake mushroom grades with high accuracy and recognition efficiency, achieving a final grading accuracy of 96.21%, with only 46.77 ms required to process a single image. The dried shiitake mushroom grade recognition method proposed in this study provides a new implementation approach for the dried shiitake mushroom quality grading process, as well as a reference for real-time grade recognition of other agricultural products.
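
The OOA-accelerated Otsu binarisation is specific to the paper; the sketch below shows plain OpenCV Otsu thresholding and contour extraction as a baseline for the same step, with the blur kernel as an illustrative assumption.

```python
import cv2

def mushroom_contour(path):
    """Otsu binarisation followed by extraction of the largest contour."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)           # suppress small speckles
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)        # the mushroom outline
```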

10.
Heliyon ; 9(11): e21369, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37885728

ABSTRACT

Introduction: Breast cancer stands as the second most deadly form of cancer among women worldwide. Early diagnosis and treatment can significantly mitigate mortality rates. Purpose: The study aims to classify breast ultrasound images into benign and malignant tumors. This approach involves segmenting the breast's region of interest (ROI) employing an optimized UNet architecture and classifying the ROIs through an optimized shallow CNN model utilizing an ablation study. Method: Several image processing techniques are utilized to improve image quality by removing text, artifacts, and speckle noise, and statistical analysis is performed to verify that the enhanced image quality is satisfactory. With the processed dataset, the segmentation of breast tumor ROI is carried out by optimizing the UNet model through an ablation study in which the architectural configuration and hyperparameters are altered. After obtaining the tumor ROIs from the fine-tuned UNet model (RKO-UNet), an optimized CNN model is employed to classify the tumor into benign and malignant classes. To enhance the CNN model's performance, an ablation study is conducted, coupled with the integration of an attention unit. The model's performance is further assessed by classifying breast cancer with mammogram images. Result: The proposed classification model (RKONet-13) achieves an accuracy of 98.41%. The performance of the proposed model is further compared with five transfer learning models for both pre-segmented and post-segmented datasets. K-fold cross-validation is done to assess the proposed RKONet-13 model's performance stability. Furthermore, the performance of the proposed model is compared with previous literature, where the proposed model outperforms existing methods, demonstrating its effectiveness in breast cancer diagnosis. Lastly, the model demonstrates its robustness for breast cancer classification, delivering an exceptional performance of 96.21% on a mammogram dataset. Conclusion: The efficacy of this study relies on image pre-processing, segmentation with a hybrid attention UNet, and classification with a fine-tuned, robust CNN model. This comprehensive approach aims to determine an effective technique for detecting breast cancer within ultrasound images.
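
The k-fold stability check mentioned in the results can be expressed generically as below; `build_model` is a hypothetical factory returning a compiled Keras-style classifier with an accuracy metric, and the fold count and epochs are placeholder values.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(build_model, X, y, folds=5, epochs=20):
    """Stratified k-fold accuracy (mean and spread) for a classifier."""
    accs = []
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()                        # fresh weights for every fold
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        accs.append(acc)
    return float(np.mean(accs)), float(np.std(accs))
```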

11.
Tissue Eng Part C Methods ; 29(12): 572-582, 2023 12.
Article in English | MEDLINE | ID: mdl-37672553

ABSTRACT

Due to a growing need to visualize human pluripotent stem cell-derived organoids arising from recent advancements in the field, an efficient bulk-processing application is necessary to provide preprocessing and image analysis services. In this study, we developed Organalysis, a high-accuracy, multifunctional, and accessible application that meets these needs by providing image manipulation and enhancement, organoid area and intensity calculation, fractal analysis, noise removal, and feature importance computation. The image manipulation feature includes brightness and contrast adjustment. The area and intensity calculation computes six values for each image: organoid area, total image area, percentage of the image covered by organoid, total intensity of the organoid, total intensity of the organoid divided by organoid area, and total intensity of the organoid divided by total image area. The fractal analysis function computes the fractal dimension value for each image. The noise removal function removes superfluous marks from the input images, such as bubbles and other unwanted noise. The feature importance function trains a lasso-regularized linear regression machine learning algorithm to identify cardiac growth factors that are the strongest determinants of cell differentiation. The batch processing of this application further builds on existing services like ImageJ to provide a more convenient way to process multiple images. Collectively, the versatility and precision of Organalysis are novel, since no other current imaging software combines the capability of batch processing with this breadth of feature analysis. Therefore, Organalysis provides unique functions in cardiac organoid research and proves to be invaluable in regenerative medicine.
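
The six area/intensity values listed above reduce to a few NumPy reductions once a binary organoid mask is available; the sketch below assumes such a mask and is not Organalysis's own implementation.

```python
import numpy as np

def organoid_metrics(image, mask):
    """Compute the six area/intensity values for one organoid image."""
    mask = mask > 0
    organoid_area = int(mask.sum())                  # pixels inside the organoid
    total_area = int(mask.size)                      # all pixels in the image
    total_intensity = float(image[mask].sum())
    return {
        "organoid_area": organoid_area,
        "total_image_area": total_area,
        "coverage_percent": 100.0 * organoid_area / total_area,
        "organoid_intensity": total_intensity,
        "intensity_per_organoid_area": total_intensity / organoid_area,
        "intensity_per_image_area": total_intensity / total_area,
    }
```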


Subject(s)
Algorithms; Software; Humans; Image Processing, Computer-Assisted/methods; Organoids; Fractals
12.
Bioengineering (Basel) ; 10(7)2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37508829

ABSTRACT

Furcation defects pose a significant challenge in the diagnosis and treatment planning of periodontal diseases. The accurate detection of furcation involvements (FI) on periapical radiographs (PAs) is crucial for the success of periodontal therapy. This research proposes a deep learning-based approach to furcation defect detection using convolutional neural networks (CNN) with an accuracy rate of 95%. This research has undergone a rigorous review by the Institutional Review Board (IRB) and has received accreditation under number 202002030B0C505. A dataset of 300 periapical radiographs of teeth with and without FI was collected and preprocessed to enhance the quality of the images. The efficient and innovative image masking technique used in this research enhances the contrast between FI-affected regions and other areas. Moreover, this technique highlights the region of interest (ROI) for subsequent CNN model training with a combination of transfer learning and fine-tuning. The proposed segmentation algorithm demonstrates exceptional performance, with an overall accuracy of up to 94.97%, surpassing other conventional methods. Moreover, in comparison with existing CNN technology for identifying dental problems, this research proposes an improved adaptive-threshold preprocessing technique that produces clearer distinctions between teeth and interdental molars. The proposed model achieves impressive results in detecting FI, with identification rates ranging from 92.96% to a remarkable 94.97%. These findings suggest that our deep learning approach holds significant potential for improving the accuracy and efficiency of dental diagnosis. Such AI-assisted dental diagnosis has the potential to improve periodontal diagnosis, treatment planning, and patient outcomes. This research demonstrates the feasibility and effectiveness of using deep learning algorithms for furcation defect detection on periapical radiographs and highlights the potential for AI-assisted dental diagnosis. With improved detection of dental abnormalities, earlier intervention could be enabled, ultimately leading to improved patient outcomes.
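
A generic adaptive-threshold pass of the kind referred to above can be written with OpenCV as below; the block size, offset, and CLAHE settings are illustrative assumptions and do not reproduce the paper's improved masking technique.

```python
import cv2

def enhance_periapical(path, block_size=35, c=5):
    """CLAHE plus adaptive thresholding to emphasise tooth/furcation structure."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
    mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, block_size, c)
    return cv2.bitwise_and(img, img, mask=mask)      # keep only the highlighted regions
```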

13.
Sensors (Basel) ; 23(14)2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37514610

ABSTRACT

Compared to wide-field telescopes, small-field detection systems have higher spatial resolution, resulting in stronger detection capabilities and higher positioning accuracy. When observing synchronous orbit with a small field of view, both space debris and fixed stars are imaged as point targets, making them difficult to distinguish. In addition, with the improvement in detection capabilities, the number of stars in the background rapidly increases, which places higher demands on recognition algorithms. Therefore, star detection is indispensable for identifying and locating space debris in complex backgrounds. To address these difficulties, this paper proposes a real-time star extraction method based on adaptive filtering and multi-frame projection. We use bad-pixel repair and background suppression algorithms to preprocess star images. Afterwards, we analyze and enhance the target signal-to-noise ratio (SNR). Then, we use multi-frame projection to fuse information. Subsequently, adaptive filtering, adaptive morphology, and adaptive median filtering algorithms are proposed to detect trajectories. Finally, the projection is released to locate the target. Our recognition algorithm has been verified on real star images captured using small-field telescopes. The experimental results demonstrate the effectiveness of the proposed algorithm. We successfully extracted the star HIP-27066, which has a magnitude of about 12 and an SNR of about 1.5. Compared with existing methods, our algorithm has advantages in both recognition rate and false-alarm rate, and can be used as a real-time target recognition algorithm for space-based synchronous-orbit detection payloads.
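
Multi-frame projection for faint moving targets can be approximated with a median background subtraction followed by a maximum projection, as in the sketch below; this is a generic stand-in for the paper's adaptive pipeline.

```python
import numpy as np

def multi_frame_projection(frames):
    """Fuse a stack of star images so a faint moving target stands out."""
    stack = np.stack(frames).astype(np.float32)      # shape: (n_frames, H, W)
    background = np.median(stack, axis=0)            # per-pixel background (static stars)
    return np.max(stack - background, axis=0)        # maximum projection of residuals
```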

14.
Asian Pac J Cancer Prev ; 24(6): 2061-2072, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37378937

ABSTRACT

AIM: To examine computed tomography (CT) radiomic feature stability on various texture patterns during pre-processing, utilizing the Credence Cartridge Radiomics (CCR) phantom textures. MATERIALS AND METHODS: The Imaging Biomarker Explorer (IBEX) software extracted 51 radiomic features of 4 categories from 11 texture regions of interest (ROI) of the phantom. Nineteen pre-processing algorithms were applied to each CCR phantom ROI, and the features of every processed ROI were retrieved. Radiomic features from pre-processed CT images were compared with those from non-processed images to measure the textural influence of pre-processing. Wilcoxon tests measured the relevance of pre-processing for CT radiomic features on the various textures. Hierarchical cluster analysis (HCA) was performed to cluster pre-processing algorithms by their effect and textures by their similarity. RESULTS: The pre-processing filter, CT texture cartridge, and feature category all affect the radiomic properties of the CCR phantom CT images. The Gray Level Run Length Matrix (GLRLM) and Neighborhood Intensity Difference (NID) feature categories were statistically unaffected by pre-processing. For the regular directional 30%, 40%, and 50% honeycomb textures and the smooth 3D-printed plaster resin, most pre-processing-induced feature alterations showed significant p-values in the histogram feature category. The Laplacian Filter, Log Filter, Resample, and Bit Depth Rescale Range pre-processing algorithms strongly influenced histogram and Gray Level Co-occurrence Matrix (GLCM) image features. CONCLUSION: CT radiomic features of homogeneous-intensity phantom inserts are less sensitive to pre-processing than those of the directional honeycomb and smooth 3D-printed plaster resin textures, because they lose less information during image enhancement; this also benefits texture pattern recognition.
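
The Wilcoxon comparison of pre- and post-processed features can be sketched with SciPy as below; the dict-of-arrays data layout and significance level are illustrative assumptions, and paired values must not be all identical for the test to run.

```python
import numpy as np
from scipy.stats import wilcoxon

def unstable_features(raw, processed, alpha=0.05):
    """Flag radiomic features that shift significantly after pre-processing.

    raw / processed: dicts mapping feature name -> paired values across ROIs.
    """
    flagged = {}
    for name, raw_vals in raw.items():
        _, p = wilcoxon(np.asarray(raw_vals), np.asarray(processed[name]))
        if p < alpha:
            flagged[name] = p                        # feature is not robust to pre-processing
    return flagged
```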


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Tomography Scanners, X-Ray Computed; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Algorithms
15.
Biomedicines ; 11(6)2023 May 28.
Article in English | MEDLINE | ID: mdl-37371661

ABSTRACT

Diabetic retinopathy (DR) is the foremost cause of blindness in people with diabetes worldwide, and early diagnosis is essential for effective treatment. Unfortunately, the present DR screening method requires the skill of ophthalmologists and is time-consuming. In this study, we present an automated system for DR severity classification employing the fine-tuned Compact Convolutional Transformer (CCT) model to overcome these issues. We assembled five datasets to generate a more extensive dataset containing 53,185 raw images. Various image pre-processing techniques and 12 types of augmentation procedures were applied to improve image quality and create a massive dataset. A new DR-CCTNet model is proposed; it is a modification of the original CCT model to address training-time concerns and to work with a large amount of data. Our proposed model delivers excellent accuracy even with low-resolution images and still performs strongly with fewer images, indicating that the model is robust. We compare our model's performance with transfer learning models such as VGG19, VGG16, MobileNetV2, and ResNet50. The test accuracies of VGG19, ResNet50, VGG16, and MobileNetV2 were, respectively, 72.88%, 76.67%, 73.22%, and 71.98%. Our proposed DR-CCTNet model outperformed all of these with a 90.17% test accuracy. This approach provides a novel and efficient method for the detection of DR, which may lower the burden on ophthalmologists and expedite treatment for patients.
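
A few of the augmentation types alluded to above can be expressed as a Keras preprocessing pipeline, as sketched below; the specific transforms and ranges are placeholders, not the paper's twelve procedures.

```python
import tensorflow as tf

# Illustrative augmentation pipeline for batches of fundus images.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.2),
])

def augment_batch(images):
    """Apply random augmentations to a batch of images (training mode)."""
    return augment(images, training=True)
```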

16.
Data Brief ; 48: 109196, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37234732

ABSTRACT

Cocoa cultivation is the basis for chocolate production; cocoa has a unique aroma that makes it useful in the production of snacks and for cooking or baking. The main cocoa harvest normally occurs once or twice a year and is spread over several months, depending on the country. Determining the best harvesting period for cocoa pods plays a major role in the export process and in pod quality. The degree of ripening of the pods affects the quality of the resulting beans. Unripe pods do not have enough sugar and may prevent proper bean fermentation. Overripe pods are usually dry, and their beans may germinate inside the pods or develop a fungal disease and become unusable. Computer-based determination of the ripeness of cocoa pods through image analysis could facilitate large-scale cocoa ripeness detection. Recent technological advances in computing power, communication systems, and machine learning techniques provide opportunities for agricultural engineers and computer scientists to meet demands that are currently handled manually. Diverse and representative sets of pod images are essential for developing and testing automatic cocoa pod maturity detection systems. With this in mind, we collected images of cocoa pods to set up a database of cocoa pods of the Côte d'Ivoire named CocoaMFDB. We performed a pre-processing step using the CLAHE algorithm to improve the quality of the images, since lighting was not controlled in our dataset. CocoaMFDB allows the characterization of cocoa pods according to their maturity level and provides information on the pod family for each image. Our dataset comprises three large families, namely Amelonado, Angoleta, and Guiana, grouped into two maturity categories: ripe and unripe pods. It is therefore well suited for developing and evaluating image analysis algorithms in future research.

17.
Bioengineering (Basel) ; 10(4)2023 Mar 23.
Article in English | MEDLINE | ID: mdl-37106584

ABSTRACT

BACKGROUND: Magnetic Resonance Imaging (MRI) data collected from multiple centres can be heterogeneous due to factors such as the scanner used and the site location. To reduce this heterogeneity, the data need to be harmonised. In recent years, machine learning (ML) has been used to solve different types of problems related to MRI data, showing great promise. OBJECTIVE: This study explores how well various ML algorithms perform in harmonising MRI data, both implicitly and explicitly, by summarising the findings in relevant peer-reviewed articles. Furthermore, it provides guidelines for the use of current methods and identifies potential future research directions. METHOD: This review covers articles retrieved from the PubMed, Web of Science, and IEEE databases through June 2022. Data from studies were analysed based on the criteria of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Quality assessment questions were derived to assess the quality of the included publications. RESULTS: A total of 41 articles published between 2015 and 2022 were identified and analysed. In the review, MRI data were found to be harmonised either in an implicit (n = 21) or an explicit (n = 20) way. Three MRI modalities were identified: structural MRI (n = 28), diffusion MRI (n = 7) and functional MRI (n = 6). CONCLUSION: Various ML techniques have been employed to harmonise different types of MRI data. There is currently a lack of consistent evaluation methods and metrics across studies, and it is recommended that this issue be addressed in future work. Harmonisation of MRI data using ML shows promise in improving performance on ML downstream tasks, while caution should be exercised when using ML-harmonised data for direct interpretation.

18.
Hum Brain Mapp ; 44(7): 2669-2683, 2023 05.
Article in English | MEDLINE | ID: mdl-36807461

ABSTRACT

The preprocessing of diffusion magnetic resonance imaging (dMRI) data involves numerous steps, including corrections for head motion, susceptibility distortion, low signal-to-noise ratio, and signal drifting. Researchers or clinical practitioners often need to configure different preprocessing steps depending on disparate image acquisition schemes, which raises the technical threshold for dMRI analysis for nonexpert users. This could cause disparities in data processing approaches and thus hinder the comparability between studies. To make the dMRI data processing steps transparent and adaptable to various dMRI acquisition schemes, we propose a semi-automated pipeline tool for dMRI named the integrated diffusion image operator (iDIO). This pipeline integrates features from a wide range of advanced dMRI software tools and aims to provide a one-click solution for dMRI data analysis, via adaptive configuration of a set of suggested processing steps based on the image header of the input data. Additionally, the pipeline provides options for post-processing, such as estimation of diffusion tensor metrics and whole-brain tractography-based connectome reconstruction using common brain atlases. The iDIO pipeline also outputs an easy-to-interpret quality control report to help users assess data quality. To keep data processing transparent, the execution log and all intermediate images produced in iDIO's workflow are accessible. The goal of iDIO is to reduce the barriers for clinical or nonspecialist users to adopt state-of-the-art dMRI processing steps.


Subject(s)
Diffusion Magnetic Resonance Imaging; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Diffusion Magnetic Resonance Imaging/methods; Brain; Magnetic Resonance Imaging; Software
19.
Cancers (Basel) ; 15(3)2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36765922

ABSTRACT

PURPOSE: This study investigates the impact of different intensity normalization (IN) methods on the overall survival (OS) radiomics models' performance of MR sequences in primary (pHGG) and recurrent high-grade glioma (rHGG). METHODS: MR scans acquired before radiotherapy were retrieved from two independent cohorts (rHGG C1: 197, pHGG C2: 141) from multiple scanners (15, 14). The sequences are T1 weighted (w), contrast-enhanced T1w (T1wce), T2w, and T2w-FLAIR. Sequence-specific significant features (SF) associated with OS, extracted from the tumour volume, were derived after applying 15 different IN methods. Survival analyses were conducted using Cox proportional hazard (CPH) and Poisson regression (POI) models. A ranking score was assigned based on the 10-fold cross-validated (CV) concordance index (C-I), mean square error (MSE), and the Akaike information criterion (AICs), to evaluate the methods' performance. RESULTS: Scatter plots of the 10-CV C-I and MSE against the AIC showed an impact on the survival predictions between the IN methods and MR sequences (C1/C2 C-I range: 0.62-0.71/0.61-0.72, MSE range: 0.20-0.42/0.13-0.22). White stripe showed stable results for T1wce (C1/C2 C-I: 0.71/0.65, MSE: 0.21/0.14). Combat (0.68/0.62, 0.22/0.15) and histogram matching (HM, 0.67/0.64, 0.22/0.15) showed consistent prediction results for T2w models. They were also the top-performing methods for T1w in C2 (Combat: 0.67, 0.13; HM: 0.67, 0.13); however, only HM achieved high predictions in C1 (0.66, 0.22). After eliminating IN impacted SF using Spearman's rank-order correlation coefficient, a mean decrease in the C-I and MSE of 0.05 and 0.03 was observed in all four sequences. CONCLUSION: The IN method impacted the predictive power of survival models; thus, performance is sequence-dependent.
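
Two of the simpler intensity-normalisation families compared above (z-score within a mask and histogram matching) can be sketched as below; White Stripe and ComBat need dedicated packages and are not shown, and the function names are assumptions.

```python
import numpy as np
from skimage.exposure import match_histograms

def zscore_normalise(volume, mask):
    """Z-score normalisation of an MR volume within a (e.g., brain or tumour) mask."""
    vals = volume[mask > 0]
    return (volume - vals.mean()) / vals.std()

def histogram_match(volume, reference_volume):
    """Map one MR volume's intensity histogram onto a reference volume."""
    return match_histograms(volume, reference_volume)
```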

20.
J Imaging ; 9(1)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36662110

ABSTRACT

The paper explores the problem of automatic diagnosis based on immunohistochemical image analysis; such automated diagnosis serves as a preliminary, advisory statement for the diagnostician. The authors studied breast cancer histological and immunohistochemical images using the following biomarkers: progesterone, estrogen, oncoprotein, and a cell proliferation biomarker. The authors developed a breast cancer diagnosis method based on immunohistochemical image analysis. The proposed method consists of algorithms for image preprocessing, segmentation, and the determination of informative indicators (relative area and intensity of cells), as well as an algorithm for determining the molecular genetic breast cancer subtype. An adaptive algorithm for image preprocessing was developed to improve the quality of the images; it includes median filtering and image brightness equalization techniques. In addition, the authors developed a software module, part of the HIAMS software package, based on the Java programming language and the OpenCV computer vision library. Four molecular genetic breast cancer subtypes can be identified using this solution: Luminal A, Luminal B, HER2/neu-amplified, and basal-like. The developed algorithm for the quantitative characteristics of the immunohistochemical images showed sufficient accuracy in determining the cancer subtype Luminal A. It was experimentally established that the relative area of the nuclei of cells covered with biomarkers of progesterone, estrogen, and oncoprotein was more than 85%. The given approach allows the process of diagnosis to be automated and accelerated. The developed algorithms for calculating the quantitative characteristics of cells in immunohistochemical images can increase the accuracy of diagnosis.
