Results 1 - 20 of 1,201
1.
Sensors (Basel) ; 24(19)2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39409424

ABSTRACT

Recent research has demonstrated the effectiveness of convolutional neural networks (CNNs) in assessing the health status of bee colonies by classifying acoustic patterns. However, developing a monitoring system using CNNs rather than conventional machine learning models can result in higher computation costs, greater energy demand, and longer inference times. This study examines the potential of CNN architectures for developing a monitoring system based on constrained hardware. The experimentation involved testing ten CNN architectures from the PyTorch and Torchvision libraries on single-board computers (SBCs): an Nvidia Jetson Nano (NJN), a Raspberry Pi 5 (RPi5), and an Orange Pi 5 (OPi5). The CNN architectures were trained using four datasets containing spectrograms of acoustic samples of different durations (30, 10, 5, or 1 s) to analyze their impact on performance. The hyperparameter search was conducted using the Optuna framework, and the CNN models were validated using k-fold cross-validation. The inference time and power consumption were measured to compare the performance of the CNN models and the SBCs. The aim is to provide a basis for developing a monitoring system for precision applications in apiculture based on constrained devices and CNNs.
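The k-fold cross-validation used to validate the CNN models can be sketched in plain Python. This is a minimal illustration only: the round-robin fold assignment is an assumption, and the study's actual Optuna/PyTorch pipeline is not reproduced here.

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k disjoint validation folds and
    return (train, validation) index lists for each fold."""
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)  # round-robin assignment
    splits = []
    for v in range(k):
        val = folds[v]
        train = [i for f in range(k) if f != v for i in folds[f]]
        splits.append((train, val))
    return splits

# 10 spectrogram samples, 5 folds: each fold validates on 2 samples
splits = k_fold_indices(10, k=5)
```

Each candidate configuration proposed by the hyperparameter search would be trained on `train` and scored on `val`, averaging the k scores.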


Subject(s)
Acoustics , Neural Networks, Computer , Animals , Bees/physiology , Machine Learning , Algorithms
2.
Sensors (Basel) ; 24(19)2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39409301

ABSTRACT

Currently, the number of vehicles in circulation continues to increase steadily, leading to a parallel increase in vehicular accidents. Among the many causes of these accidents, human factors such as driver drowsiness play a fundamental role. In this context, one solution to the challenge of drowsiness detection is to anticipate drowsiness by alerting drivers in a timely and effective manner. Thus, this paper presents a Convolutional Neural Network (CNN)-based approach for drowsiness detection that analyzes the eye region and uses the Mouth Aspect Ratio (MAR) for yawning detection. As part of this approach, endpoint delineation is optimized for extraction of the region of interest (ROI) around the eyes. An NVIDIA Jetson Nano-based device and a near-infrared (NIR) camera are used for real-time applications. A Driver Drowsiness Artificial Intelligence (DD-AI) architecture is proposed for the eye state detection procedure. In a performance analysis, the results of the proposed approach were compared with architectures based on InceptionV3, VGG16, and ResNet50V2. The Night-Time Yawning-Microsleep-Eyeblink-Driver Distraction (NITYMED) dataset was used for training, validation, and testing of the architectures. The proposed DD-AI network achieved an accuracy of 99.88% on the NITYMED test data, proving superior to the other networks. In the hardware implementation, tests were conducted in a real environment, resulting in 96.55% accuracy and 14 fps on average for the DD-AI network, thereby confirming its superior performance.
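A Mouth Aspect Ratio can be computed from mouth landmarks as the mean vertical opening divided by the mouth width. The 8-point landmark convention below is an assumption for illustration; the abstract does not specify the landmark scheme actually used.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_aspect_ratio(landmarks):
    """MAR = mean vertical mouth opening / horizontal mouth width.

    `landmarks` is a list of 8 (x, y) mouth points ordered so that
    indices 0 and 4 are the mouth corners and (1,7), (2,6), (3,5)
    are upper/lower vertical pairs (an assumed convention)."""
    vertical = (dist(landmarks[1], landmarks[7])
                + dist(landmarks[2], landmarks[6])
                + dist(landmarks[3], landmarks[5])) / 3.0
    horizontal = dist(landmarks[0], landmarks[4])
    return vertical / horizontal

# an "open mouth": corners 4 units apart, lips 2 units apart
mar = mouth_aspect_ratio(
    [(0, 0), (1, 1), (2, 1), (3, 1), (4, 0), (3, -1), (2, -1), (1, -1)])  # → 0.5
```

A yawn would be flagged when MAR stays above a tuned threshold for several consecutive frames.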


Subject(s)
Automobile Driving , Neural Networks, Computer , Humans , Mouth/physiology , Eye , Sleep Stages/physiology , Sleepiness , Artificial Intelligence , Accidents, Traffic
3.
Sensors (Basel) ; 24(19)2024 Sep 29.
Article in English | MEDLINE | ID: mdl-39409361

ABSTRACT

The integration of machine learning (ML) with edge computing and wearable devices is rapidly advancing healthcare applications. This study systematically maps the literature in this emerging field, analyzing 171 studies and focusing on 28 key articles after rigorous selection. The research explores the key concepts, techniques, and architectures used in healthcare applications involving ML, edge computing, and wearable devices. The analysis reveals a significant increase in research over the past six years, particularly in the last three years, covering applications such as fall detection, cardiovascular monitoring, and disease prediction. The findings highlight a strong focus on neural network models, especially Convolutional Neural Networks (CNNs) and Long Short-Term Memory Networks (LSTMs), and diverse edge computing platforms like Raspberry Pi and smartphones. Despite the diversity in approaches, the field is still nascent, indicating considerable opportunities for future research. The study emphasizes the need for standardized architectures and the further exploration of both hardware and software to enhance the effectiveness of ML-driven healthcare solutions. The authors conclude by identifying potential research directions that could contribute to continued innovation in healthcare technologies.


Subject(s)
Machine Learning , Neural Networks, Computer , Wearable Electronic Devices , Humans , Delivery of Health Care , Smartphone , Monitoring, Physiologic/instrumentation , Monitoring, Physiologic/methods
4.
Rev Assoc Med Bras (1992) ; 70(9): e20240381, 2024.
Article in English | MEDLINE | ID: mdl-39292083

ABSTRACT

OBJECTIVE: The study used machine learning models to predict the clinical outcome, both with manually provided attributes and when the models selected features through their own algorithms. METHODS: Patients who presented to an orthopedic outpatient department with joint swelling or myalgia were included in the study. A proforma collected clinical information on age, gender, uric acid, C-reactive protein, and complete blood count/liver function test/renal function test parameters. Machine learning decision models (Random Forest and Gradient Boosted) were evaluated with the selected features/attributes. To categorize input data into outputs indicating joint discomfort, multilayer perceptron and radial basis function neural networks were used. RESULTS: The Random Forest decision model performed best, with 97% accuracy and minimal error in anticipating joint pain from the input attributes. For the predicted classifications, the multilayer perceptron fared better, with an accuracy of 98%, compared to the radial basis function. The multilayer perceptron achieved the following normalized relevances for joint pain: 100% (uric acid), 10.3% (creatinine), 9.8% (AST), 5.4% (lymphocytes), and 5% (C-reactive protein). Uric acid thus has the highest normalized relevance for predicting joint pain. CONCLUSION: Early artificial intelligence-based detection of joint pain will aid in the prevention of more serious orthopedic complications.
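The normalized relevance figures are consistent with scaling raw feature importances so that the strongest feature reads 100%. A sketch with hypothetical raw scores (not values from the study):

```python
def normalized_relevance(importances):
    """Scale raw feature importances so the largest equals 100%."""
    peak = max(importances.values())
    return {k: round(100 * v / peak, 1) for k, v in importances.items()}

# hypothetical raw importance scores, for illustration only
raw = {"uric_acid": 0.58, "creatinine": 0.06, "AST": 0.057}
rel = normalized_relevance(raw)
```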


Subject(s)
Arthralgia , Artificial Intelligence , C-Reactive Protein , Machine Learning , Uric Acid , Humans , Female , Male , Uric Acid/blood , Adult , Middle Aged , Arthralgia/blood , Arthralgia/diagnosis , Arthralgia/etiology , C-Reactive Protein/analysis , Algorithms , Predictive Value of Tests , Young Adult , Aged , Neural Networks, Computer , Reproducibility of Results , Creatinine/blood , Biomarkers/blood , Adolescent
5.
PLoS One ; 19(9): e0305610, 2024.
Article in English | MEDLINE | ID: mdl-39292688

ABSTRACT

The aim of the present research was to evaluate the efficiency of different vegetation indices (VIs), obtained from satellites with varying spatial resolutions, in discriminating the phenological stages of soybean crops. The experiment was carried out in a soybean cultivation area irrigated by central pivot, in Balsas, MA, Brazil, where weekly assessments of phenology and leaf area index were carried out. Throughout the crop cycle, spectral data from the study area were collected from sensors onboard the Sentinel-2 and Amazônia-1 satellites. The images obtained were processed to compute VIs based on NIR (NDVI, NDWI, and SAVI) and RGB (VARI, IV GREEN, and GLI) for the different phenological stages of the crop. The efficiency in identifying phenological stages by VI was determined through discriminant analysis and an artificial neural network (ANN), where the best possible classification would present an Apparent Error Rate (APER) equal to zero. The APER for the discriminant analysis varied between 53.4% and 70.4%, while for the ANN it was between 47.4% and 73.9%, making it impossible to identify which of the two analysis techniques is more appropriate. The study results demonstrated that the difference in the sensors' spatial resolution is not a determining factor in the correct identification of soybean phenological stages. Although no VI obtained from the Amazônia-1 and Sentinel-2 sensor systems was 100% effective in identifying all phenological stages, specific indices can be used to identify some key phenological stages of soybean crops, such as flowering (R1 and R2), pod development (R4), grain development (R5.1), and plant physiological maturity (R8). Therefore, VIs obtained from orbital sensors are effective in identifying soybean phenological stages quickly and cheaply.
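The NIR- and RGB-based indices mentioned follow standard band-ratio definitions. A sketch of three of them, with band reflectances as plain floats (NDWI and GLI follow the same pattern and are omitted):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index; L=0.5 is the usual default."""
    return (1 + L) * (nir - red) / (nir + red + L)

def vari(green, red, blue):
    """Visible atmospherically resistant index (RGB-only)."""
    return (green - red) / (green + red - blue)
```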


Subject(s)
Glycine max , Glycine max/growth & development , Neural Networks, Computer , Brazil , Crops, Agricultural/growth & development , Plant Leaves/growth & development , Algorithms , Discriminant Analysis
6.
Sensors (Basel) ; 24(18)2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39338791

ABSTRACT

There are two widely used methods to measure the cardiac cycle and obtain heart rate measurements: the electrocardiogram (ECG) and the photoplethysmogram (PPG). The sensors used in these methods have gained great popularity in wearable devices, which have extended cardiac monitoring beyond the hospital environment. However, the continuous monitoring of ECG signals via mobile devices is challenging, as it requires users to keep their fingers pressed on the device during data collection, making it unfeasible in the long term. The PPG, on the other hand, does not have this limitation. However, the medical knowledge available to diagnose such anomalies from this signal is limited, since the ECG is the signal studied and used in the literature as the gold standard. To minimize this problem, this work proposes a method, PPG2ECG, that uses the correlation between the domains of PPG and ECG signals to infer the waveform of the ECG signal from the PPG signal. PPG2ECG maps between the domains by applying a set of convolution filters, learning to transform a PPG input signal into an ECG output signal using a U-net inception neural network architecture. We assessed our proposed method using two evaluation strategies based on personalized and generalized models and achieved mean error values of 0.015 and 0.026, respectively. Our method overcomes the limitations of previous approaches by providing an accurate and feasible method for the continuous monitoring of ECG signals through PPG signals. The short distances between the inferred ECG and the original ECG demonstrate the feasibility and potential of our method to assist in the early identification of heart diseases.
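The domain mapping rests on convolution filters. A minimal valid-mode 1D convolution in plain Python illustrates the basic operation; the actual U-net inception architecture stacks many such learned filters and is not reproduced here.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in deep
    learning frameworks): slide the kernel along the signal and
    take the dot product at each position."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# a 3-tap moving-average filter smooths a noisy waveform
smoothed = conv1d([1.0, 4.0, 1.0, 4.0, 1.0], [1 / 3, 1 / 3, 1 / 3])
```

In the learned setting, the kernel weights are fitted so the filtered PPG approaches the target ECG waveform rather than being hand-set.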


Subject(s)
Electrocardiography , Heart Rate , Neural Networks, Computer , Photoplethysmography , Signal Processing, Computer-Assisted , Humans , Electrocardiography/methods , Photoplethysmography/methods , Heart Rate/physiology , Algorithms , Wearable Electronic Devices
7.
Sensors (Basel) ; 24(18)2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39338799

ABSTRACT

The use of artificial intelligence (AI) algorithms has gained importance for dental applications in recent years. Analyzing AI information from different sensor data such as images or panoramic radiographs (panoramic X-rays) can help to improve medical decisions and achieve early diagnosis of different dental pathologies. In particular, the use of deep learning (DL) techniques based on convolutional neural networks (CNNs) has obtained promising results in image-based dental applications, in which approaches based on classification, detection, and segmentation are being studied with growing interest. However, several challenges remain to be tackled, such as data quality and quantity, variability among categories, and the analysis of the possible bias and variance associated with each dataset distribution. This study aims to compare the performance of three deep learning object detection models-Faster R-CNN, YOLO V2, and SSD-using different ResNet architectures (ResNet-18, ResNet-50, and ResNet-101) as feature extractors for detecting and classifying third molar angles in panoramic X-rays according to Winter's classification criterion. Each object detection architecture was trained, calibrated, validated, and tested with each of the three feature extraction CNNs, which were the networks that best fit our dataset distribution. Based on these detection networks, we detect four categories of third molar angles in panoramic X-rays using Winter's classification criterion, which characterizes the third molar's position relative to the second molar's longitudinal axis. The detected categories are distoangular, vertical, mesioangular, and horizontal. For training, we used a total of 644 panoramic X-rays.
The results obtained on the testing dataset reached up to 99% mean average accuracy, demonstrating that YOLO V2 obtained higher effectiveness in solving the third molar angle detection problem. These results demonstrate that the use of CNNs for object detection in panoramic radiographs represents a promising solution for dental applications.
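Detection models of this kind are conventionally scored by matching predicted boxes to ground truth via intersection over union (IoU); a sketch (the exact matching protocol is not given in the abstract):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection typically counts as correct when its IoU with a same-class ground-truth box exceeds a threshold such as 0.5.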


Subject(s)
Deep Learning , Molar, Third , Neural Networks, Computer , Radiography, Panoramic , Radiography, Panoramic/methods , Humans , Molar, Third/diagnostic imaging , Algorithms , Artificial Intelligence , Image Processing, Computer-Assisted/methods
8.
Biochem Biophys Res Commun ; 733: 150671, 2024 11 12.
Article in English | MEDLINE | ID: mdl-39298919

ABSTRACT

In the current biopharmaceutical scenario, constant bioprocess monitoring is crucial for the quality and integrity of final products. Thus, process analytical techniques, such as those based on Raman spectroscopy, have been used as multiparameter tracking methods in pharma bioprocesses, and they can be combined with chemometric tools such as Partial Least Squares (PLS) and Artificial Neural Networks (ANN). In some cases, applying spectra pre-processing techniques before modeling can improve the accuracy of chemometric model fittings to observed values. One biological target for these techniques is virus-like particles (VLPs), a vaccine production platform for viral diseases. A disease that has drawn attention in recent years is Zika, whose large-scale production is sometimes challenging without an appropriate monitoring approach. This work aimed to define global models for Raman-based monitoring of Zika VLP upstream production, considering different laser intensities (200 mW and 495 mW), sample clarification (with or without cells), spectra pre-processing approaches, and PLS and ANN modeling techniques. Six experiments were performed in a benchtop bioreactor to collect the Raman spectral and biochemical datasets for model calibration. The best models generated presented a mean absolute error and mean relative error, respectively, of 3.46 × 10⁵ cells/mL and 35% for viable cell density (Xv); 4.1% and 5% for cell viability (CV); 0.245 g/L and 3% for glucose (Glc); 0.006 g/L and 18% for lactate (Lac); 0.115 g/L and 26% for glutamine (Gln); 0.132 g/L and 18% for glutamate (Glu); 0.0029 g/L and 3% for ammonium (NH4+); and 0.0103 g/L and 2% for potassium (K+). Samples without conditioning (with cells) improved the models' adequacy, except for glutamine. ANN better predicted CV, Gln, Glu, and K+, while Xv, Glc, Lac, and NH4+ presented no statistical difference between the chemometric tools.
For most of the assessed experimental parameters, there was no statistical need for spectra pre-filtering, so the models based on the raw spectra were selected as the best ones. Laser intensity impacted model prediction quality for some parameters: Xv, Gln, and K+ performed better at 200 mW (for PLS, ANN, and ANN, respectively), CV performed better at 495 mW (for PLS), and for the other biochemical variables the use of 200 or 495 mW did not affect model fitting adequacy.
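The paired error figures above combine a mean absolute error with a mean relative error; the two metrics can be sketched as:

```python
def mean_absolute_error(obs, pred):
    """Average absolute deviation between observed and predicted values."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mean_relative_error(obs, pred):
    """Average deviation relative to the observed value, as a percentage."""
    return 100 * sum(abs(o - p) / abs(o) for o, p in zip(obs, pred)) / len(obs)
```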


Subject(s)
Spectrum Analysis, Raman , Zika Virus , Spectrum Analysis, Raman/methods , Bioreactors , Least-Squares Analysis , Neural Networks, Computer , Lasers , Humans , Zika Virus Infection/virology , Animals
9.
J Mol Model ; 30(10): 350, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39325274

ABSTRACT

CONTEXT: Alzheimer's disease (AD) is the leading cause of dementia around the world, totaling about 55 million cases, with an estimated growth to 74.7 million cases in 2030, which makes its treatment widely desired. Several studies and strategies are being developed considering the main theories regarding its origin since it is not yet fully understood. Among these strategies, the 5-HT6 receptor antagonism emerges as an auspicious and viable symptomatic treatment approach for AD. The 5-HT6 receptor belongs to the G protein-coupled receptor (GPCR) family and is closely implicated in memory loss processes. As a serotonin receptor, it plays an important role in cognitive function. Consequently, targeting this receptor presents a compelling therapeutic opportunity. By employing antagonists to block its activity, the 5-HT6 receptor's functions can be effectively modulated, leading to potential improvements in cognition and memory. METHODS: Addressing this challenge, our research explored a promising avenue in drug discovery for AD, employing Artificial Neural Networks-Quantitative Structure-Activity Relationship (ANN-QSAR) models. These models have demonstrated great potential in predicting the biological activity of compounds based on their molecular structures. By harnessing the capabilities of machine learning and computational chemistry, we aimed to create a systematic approach for analyzing and forecasting the activity of potential drug candidates, thus streamlining the drug discovery process. We assembled a diverse set of compounds targeting this receptor and utilized density functional theory (DFT) calculations to extract essential molecular descriptors, effectively representing the structural features of the compounds. 
Subsequently, these molecular descriptors served as input for training the ANN-QSAR models alongside corresponding biological activity data, enabling us to predict the potential efficacy of novel compounds as 5-hydroxytryptamine receptor 6 (5-HT6) antagonists. Through extensive analysis and validation of ANN-QSAR models, we identified eight new promising compounds with therapeutic potential against AD.


Subject(s)
Alzheimer Disease , Drug Design , Quantitative Structure-Activity Relationship , Receptors, Serotonin , Serotonin Antagonists , Alzheimer Disease/drug therapy , Alzheimer Disease/metabolism , Receptors, Serotonin/metabolism , Receptors, Serotonin/chemistry , Humans , Serotonin Antagonists/chemistry , Serotonin Antagonists/pharmacology , Serotonin Antagonists/therapeutic use , Neural Networks, Computer , Models, Molecular
10.
PLoS One ; 19(9): e0307569, 2024.
Article in English | MEDLINE | ID: mdl-39250439

ABSTRACT

Smart indoor tourist attractions, such as smart museums and aquariums, require a significant investment in indoor localization devices. The use of Global Positioning Systems on smartphones is unsuitable for scenarios where dense materials such as concrete and metal blocks weaken GPS signals, which is most often the case in indoor tourist attractions. With the help of deep learning, indoor localization can be done region by region using smartphone images. This approach requires no investment in infrastructure and reduces the cost and time needed to turn museums and aquariums into smart museums or smart aquariums. In this paper, we propose using deep learning algorithms to classify locations based on smartphone camera images for indoor tourist attractions. We evaluate our proposal in a real-world scenario in Brazil. We extensively collect images from ten different smartphones to classify biome-themed fish tanks in the Pantanal Biopark, creating a new dataset of 3654 images. We tested seven state-of-the-art neural networks, three of them based on transformers. On average, we achieved a precision of about 90% and a recall and f-score of about 89%. The results show that the proposal is suitable for most indoor tourist attractions.
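The precision, recall, and F1-score reported above follow their standard definitions from true positive, false positive, and false negative counts; a sketch:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)          # of predicted positives, how many were right
    recall = tp / (tp + fn)             # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```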


Subject(s)
Deep Learning , Smartphone , Tourism , Humans , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Geographic Information Systems , Brazil
11.
PLoS One ; 19(9): e0310707, 2024.
Article in English | MEDLINE | ID: mdl-39325750

ABSTRACT

Over the last ten years, social media has become a crucial data source for businesses and researchers, providing a space where people can express their opinions and emotions. To analyze this data and classify emotions and their polarity in texts, natural language processing (NLP) techniques such as emotion analysis (EA) and sentiment analysis (SA) are employed. However, the effectiveness of these tasks using machine learning (ML) and deep learning (DL) methods depends on large labeled datasets, which are scarce in languages like Spanish. To address this challenge, researchers use data augmentation (DA) techniques to artificially expand small datasets. This study aims to investigate whether DA techniques can improve classification results using ML and DL algorithms for sentiment and emotion analysis of Spanish texts. Various text manipulation techniques were applied, including transformations, paraphrasing (back-translation), and text generation using generative adversarial networks, to small datasets such as song lyrics, social media comments, headlines from national newspapers in Chile, and survey responses from higher education students. The findings show that the Convolutional Neural Network (CNN) classifier achieved the most significant improvement, with an 18% increase using the Generative Adversarial Networks for Sentiment Text (SentiGan) on the Aggressiveness (Seriousness) dataset. Additionally, the same classifier model showed an 11% improvement using the Easy Data Augmentation (EDA) on the Gender-Based Violence dataset. The performance of the Bidirectional Encoder Representations from Transformers (BETO) also improved by 10% on the back-translation augmented version of the October 18 dataset, and by 4% on the EDA augmented version of the Teaching survey dataset. These results suggest that data augmentation techniques enhance performance by transforming text and adapting it to the specific characteristics of the dataset. 
Through experimentation with various augmentation techniques, this research provides valuable insights into the analysis of subjectivity in Spanish texts and offers guidance for selecting algorithms and techniques based on dataset features.
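One of the Easy Data Augmentation (EDA) operations mentioned above is a random swap of word pairs; a seeded sketch (EDA also includes synonym replacement, random insertion, and random deletion, which are not shown):

```python
import random

def random_swap(tokens, n_swaps, seed=0):
    """EDA-style augmentation: swap n random pairs of word positions,
    producing a perturbed copy with the same vocabulary."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    out = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(out)), rng.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out

augmented = random_swap("la película fue muy buena".split(), n_swaps=2)
```

Each augmented copy is added to the training set with the original label, artificially enlarging a small corpus.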


Subject(s)
Emotions , Natural Language Processing , Social Media , Humans , Algorithms , Neural Networks, Computer , Machine Learning , Language , Deep Learning
12.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275707

ABSTRACT

Emotion recognition through speech is a technique employed in various scenarios of Human-Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably in the quantity and diversity of data required when deep learning techniques are used. The lack of a standard in feature selection leads to continuous development and experimentation. Choosing and designing the appropriate network architecture constitutes another challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach, developing preprocessing and feature selection stages, and constructing a dataset called EmoDSc by combining several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images, the weighted accuracy reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation when operating in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified dataset EmoDSc, demonstrates a remarkable accuracy of 96%.


Subject(s)
Deep Learning , Emotions , Neural Networks, Computer , Humans , Emotions/physiology , Speech/physiology , Databases, Factual , Algorithms , Pattern Recognition, Automated/methods
13.
Int J Biometeorol ; 68(11): 2387-2398, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39136712

ABSTRACT

Soybean (Glycine max) is the world's most cultivated legume; currently, most of its varieties are Bt. Spodoptera spp. (Lepidoptera: Noctuidae) are important pests of soybean. An artificial neural network (ANN) is an artificial intelligence tool that can be used in the study of the spatiotemporal dynamics of pest populations. Thus, this work aims to develop ANNs to identify population regulation factors of Spodoptera spp. and predict its density in Bt soybean. For two years, the density of Spodoptera spp. caterpillars, predators, and parasitoids, climate data, and plant age were evaluated in commercial soybean fields. The selected ANN was the one using weather data from 25 days before the pest density evaluation. ANN forecasts and pest densities in soybean fields presented a correlation of 0.863. Higher densities of the pest occurred in dry seasons, with less wind, higher atmospheric pressure, and increasing plant age. Pest density increased with temperature until the curve reached its maximum value. ANN forecasts and pest densities in soybean fields were similar across different years, seasons, and stages of plant development. Therefore, this ANN is a promising candidate for implementation in integrated pest management programs in soybean fields.
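Pairing each pest-density observation with the weather record from 25 days earlier can be sketched as follows; the day-indexed dictionaries are an assumption for illustration, not the study's actual data layout.

```python
def lagged_pairs(weather, density, lag=25):
    """Pair each pest-density observation (keyed by day number) with
    the weather record from `lag` days earlier, dropping observations
    that have no matching weather record."""
    return [(weather[day - lag], d)
            for day, d in density.items() if day - lag in weather]

weather = {0: 21.5, 5: 23.0, 10: 24.5}   # day -> temperature, say
density = {25: 3, 30: 7, 40: 2}          # day 40 has no day-15 weather record
pairs = lagged_pairs(weather, density)
```

The resulting (weather, density) pairs are what a lagged-input ANN would be trained on.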


Subject(s)
Glycine max , Neural Networks, Computer , Seasons , Spodoptera , Glycine max/growth & development , Animals , Spodoptera/growth & development , Plants, Genetically Modified , Larva , Forecasting , Weather
14.
Parasit Vectors ; 17(1): 329, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095920

ABSTRACT

BACKGROUND: Identifying mosquito vectors is crucial for controlling diseases. Automated identification studies using the convolutional neural network (CNN) have been conducted for some urban mosquito vectors but not yet for sylvatic mosquito vectors that transmit the yellow fever. We evaluated the ability of the AlexNet CNN to identify four mosquito species: Aedes serratus, Aedes scapularis, Haemagogus leucocelaenus and Sabethes albiprivus and whether there is variation in AlexNet's ability to classify mosquitoes based on pictures of four different body regions. METHODS: The specimens were photographed using a cell phone connected to a stereoscope. Photographs were taken of the full-body, pronotum and lateral view of the thorax, which were pre-processed to train the AlexNet algorithm. The evaluation was based on the confusion matrix, the accuracy (ten pseudo-replicates) and the confidence interval for each experiment. RESULTS: Our study found that the AlexNet can accurately identify mosquito pictures of the genus Aedes, Sabethes and Haemagogus with over 90% accuracy. Furthermore, the algorithm performance did not change according to the body regions submitted. It is worth noting that the state of preservation of the mosquitoes, which were often damaged, may have affected the network's ability to differentiate between these species and thus accuracy rates could have been even higher. CONCLUSIONS: Our results support the idea of applying CNNs for artificial intelligence (AI)-driven identification of mosquito vectors of tropical diseases. This approach can potentially be used in the surveillance of yellow fever vectors by health services and the population as well.
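Summarizing accuracy over pseudo-replicates with a confidence interval, as the evaluation above describes, can be sketched as follows (a normal-approximation interval is an assumption; the paper's exact interval construction is not given in the abstract):

```python
import statistics

def accuracy_ci(replicates, z=1.96):
    """Mean accuracy and a normal-approximation 95% confidence
    interval across replicate runs."""
    mean = statistics.mean(replicates)
    half = z * statistics.stdev(replicates) / len(replicates) ** 0.5
    return mean, (mean - half, mean + half)

# ten hypothetical pseudo-replicate accuracies, for illustration only
reps = [0.91, 0.89, 0.90, 0.92, 0.88, 0.90, 0.91, 0.89, 0.90, 0.90]
mean_acc, ci = accuracy_ci(reps)
```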


Subject(s)
Aedes , Mosquito Vectors , Neural Networks, Computer , Yellow Fever , Animals , Mosquito Vectors/classification , Yellow Fever/transmission , Aedes/classification , Aedes/physiology , Algorithms , Image Processing, Computer-Assisted/methods , Culicidae/classification , Artificial Intelligence
15.
PLoS One ; 19(8): e0305839, 2024.
Article in English | MEDLINE | ID: mdl-39167612

ABSTRACT

This paper presents an artificial intelligence-based classification model for the detection of pulmonary embolism in computed tomography angiography. The proposed model, developed from public data and validated on a large dataset from a tertiary hospital, uses a two-dimensional approach that integrates temporal series to classify each slice of the examination and make predictions at both slice and examination levels. The training process consists of two stages: first using a convolutional neural network InceptionResNet V2 and then a recurrent neural network long short-term memory model. This approach achieved an accuracy of 93% at the slice level and 77% at the examination level. External validation using a hospital dataset resulted in a precision of 86% for positive pulmonary embolism cases and 69% for negative pulmonary embolism cases. Notably, the model excels in excluding pulmonary embolism, achieving a precision of 73% and a recall of 82%, emphasizing its clinical value in reducing unnecessary interventions. In addition, the diverse demographic distribution in the validation dataset strengthens the model's generalizability. Overall, this model offers promising potential for accurate detection and exclusion of pulmonary embolism, potentially streamlining diagnosis and improving patient outcomes.
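Turning per-slice predictions into an examination-level call requires an aggregation rule. The thresholded count below is a simple assumed rule for illustration, not the paper's actual CNN-plus-LSTM aggregation:

```python
def exam_prediction(slice_probs, threshold=0.5, min_positive=1):
    """Aggregate per-slice embolism probabilities into an exam-level
    call: positive if at least `min_positive` slices exceed the
    threshold (an assumed aggregation rule for illustration)."""
    positives = sum(p > threshold for p in slice_probs)
    return positives >= min_positive
```

Raising `min_positive` trades sensitivity for specificity, which matters when the clinical goal is safely excluding embolism.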


Subject(s)
Artificial Intelligence , Computed Tomography Angiography , Neural Networks, Computer , Pulmonary Embolism , Pulmonary Embolism/diagnosis , Pulmonary Embolism/diagnostic imaging , Pulmonary Embolism/classification , Humans , Male , Female , Middle Aged , Computed Tomography Angiography/methods , Aged , Adult
16.
Food Res Int ; 192: 114836, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39147524

ABSTRACT

The classification of carambola, also known as starfruit, according to quality parameters is usually conducted by trained human evaluators through visual inspection. This is a costly and subjective method that can generate high variability in results. As an alternative, computer vision systems (CVS) combined with deep learning techniques (DCVS) have been introduced in the industry as a powerful and innovative tool for the rapid and non-invasive classification of fruits. However, validating the learning capability and trustworthiness of a DL model, aka a black box, to obtain insights can be challenging. To reduce this gap, we propose an integrated eXplainable Artificial Intelligence (XAI) method for the classification of carambolas at different maturity stages. We compared two architectures, Residual Neural Networks (ResNet) and Vision Transformers (ViT), identifying the image regions they emphasize and relating them to the feature importances of a Random Forest (RF) model, with the aim of providing more detailed information at the feature level for classifying the maturity stage. Changes in fruit colour and physicochemical data throughout the maturity stages were analysed, and the influence of these parameters on the maturity stages was evaluated using Gradient-weighted Class Activation Mapping (Grad-CAM) and attention maps alongside the RF importances. The proposed approach provides a visualization and description of the most important regions that led to the model decision, with the visualizations aligned with the important features identified by RF. Our approach has promising potential for standardized and rapid carambola classification, achieving 91% accuracy with ResNet and 95% with ViT, with potential application to other fruits.


Subject(s)
Averrhoa , Fruit , Neural Networks, Computer , Fruit/growth & development , Fruit/classification , Averrhoa/chemistry , Deep Learning , Artificial Intelligence , Color
17.
Molecules ; 29(15)2024 Jul 28.
Article in English | MEDLINE | ID: mdl-39124967

ABSTRACT

The development of new methods for the identification of active pharmaceutical ingredients (API) is a subject of paramount importance for research centers, the pharmaceutical industry, and law enforcement agencies. Here, a system for identifying and classifying pharmaceutical tablets containing acetaminophen (AAP) by brand has been developed. In total, 15 tablets from each of 11 brands, for a total of 165 samples, were analyzed. Mid-infrared vibrational spectroscopy with multivariate analysis was employed, using quantum cascade lasers (QCLs) as mid-infrared sources. IR spectra in the spectral range 980-1600 cm-1 were recorded. Five classification methods were used: a spectral search through correlation indices, and machine learning algorithms, namely principal component analysis (PCA), support vector classification (SVC), decision tree classifier (DTC), and artificial neural network (ANN), to classify the tablets by brand. Standard normal variate (SNV) and first-derivative preprocessing were used to improve the spectral information. Precision, recall, specificity, F1-score, and accuracy were used as criteria to evaluate the best SVC, DTC, and ANN classification models obtained. The IR spectra of the tablets show characteristic vibrational signals of AAP and the other APIs present. Spectral classification by spectral search and PCA showed limitations in differentiating between brands, particularly for tablets containing AAP as the only API. Machine learning models, specifically SVC, achieved high accuracy in classifying AAP tablets according to their brand, even for brands containing only AAP.
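The SNV and first-derivative preprocessing steps mentioned above can be sketched as follows; both operate row-wise on a matrix of spectra, and the number of wavenumber points used here is an illustrative assumption.

```python
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale each spectrum (row)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def first_derivative(spectra: np.ndarray) -> np.ndarray:
    """Finite-difference first derivative along the wavenumber axis."""
    return np.diff(spectra, axis=1)

# Toy spectra: 3 samples x 620 points (e.g. covering a 980-1600 cm-1 range).
rng = np.random.default_rng(1)
X = rng.random((3, 620))
X_snv = snv(X)
X_deriv = first_derivative(X_snv)
```

After preprocessing, the resulting matrix would be fed to the PCA/SVC/DTC/ANN classifiers to separate the brands.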


Subject(s)
Acetaminophen , Machine Learning , Principal Component Analysis , Spectrophotometry, Infrared , Tablets , Acetaminophen/chemistry , Acetaminophen/analysis , Tablets/chemistry , Spectrophotometry, Infrared/methods , Neural Networks, Computer , Algorithms , Support Vector Machine
18.
An Acad Bras Cienc ; 96(3): e20221041, 2024.
Article in English | MEDLINE | ID: mdl-39194050

ABSTRACT

Cerrado is the second largest biome in Brazil and provides several ecosystem services, including carbon storage and biodiversity conservation. In this study, we developed a modeling approach to predict aboveground biomass (AGB) in Cerrado vegetation using artificial neural networks (ANNs), vegetation indices retrieved from RapidEye satellite imagery, and field data acquired within the Federal District territory, Brazil. Correlation testing was performed to identify candidate vegetation indices to be used as inputs in the AGB modeling. Several ANNs were trained to predict AGB in the study area using the vegetation indices and field data. The optimum ANN was selected according to the mean error of the estimate, the correlation coefficient, and graphical analysis. The best-performing ANN showed a predictive power of 90% and an RMSE of less than 17%. Validation tests showed no significant difference between the observed and ANN-predicted values. We estimated an average AGB of 16.55 ± 8.6 Mg.ha-1 in shrublands in the study area. Our results indicate that vegetation indices combined with ANNs can accurately estimate AGB in the Cerrado vegetation of the study area, proving to be a promising methodological approach for broad application throughout the Cerrado biome.
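The correlation screening of vegetation indices against field-measured AGB described above can be sketched as below. NDVI is used only as an illustrative index (the study's actual index set is not reproduced here), and all data are synthetic.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between an index and field AGB."""
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic plot-level reflectance and AGB values (Mg/ha) for illustration.
rng = np.random.default_rng(2)
red = rng.uniform(0.05, 0.15, 50)
nir = rng.uniform(0.30, 0.60, 50)
index = ndvi(nir, red)
agb = 30.0 * index + rng.normal(0.0, 1.0, 50)  # AGB loosely driven by the index
r = pearson_r(index, agb)
```

Indices whose correlation with the field AGB exceeds a chosen cutoff would then be retained as ANN inputs.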


Subject(s)
Biomass , Neural Networks, Computer , Remote Sensing Technology , Brazil , Ecosystem , Environmental Monitoring/methods
19.
Ann Hepatol ; 29(5): 101528, 2024.
Article in English | MEDLINE | ID: mdl-38971372

ABSTRACT

INTRODUCTION AND OBJECTIVES: Despite the huge clinical burden of MASLD, validated tools for early risk stratification are lacking, and heterogeneous disease expression and a highly variable rate of progression to clinical outcomes result in prognostic uncertainty. We aimed to investigate longitudinal electronic health record-based outcome prediction in MASLD using a state-of-the-art machine learning model. PATIENTS AND METHODS: n = 940 patients with histologically-defined MASLD were used to develop a deep-learning model for all-cause mortality prediction. Patient timelines, spanning 12 years, were fully-annotated with demographic/clinical characteristics, ICD-9 and -10 codes, blood test results, prescribing data, and secondary care activity. A Transformer neural network (TNN) was trained to output concomitant probabilities of 12-, 24-, and 36-month all-cause mortality. In-sample performance was assessed using 5-fold cross-validation. Out-of-sample performance was assessed in an independent set of n = 528 MASLD patients. RESULTS: In-sample model performance achieved an AUROC of 0.74-0.90 (95 % CI: 0.72-0.94), sensitivity 64 %-82 %, specificity 75 %-92 % and Positive Predictive Value (PPV) 94 %-98 %. Out-of-sample model validation had an AUROC of 0.70-0.86 (95 % CI: 0.67-0.90), sensitivity 69 %-70 %, specificity 96 %-97 % and PPV 75 %-77 %. Key predictive factors, identified using coefficients of determination, were age, presence of type 2 diabetes, and history of hospital admissions with length of stay >14 days. CONCLUSIONS: A TNN, applied to routinely-collected longitudinal electronic health records, achieved good performance in prediction of 12-, 24-, and 36-month all-cause mortality in patients with MASLD. Extrapolation of our technique to population-level data will enable scalable and accurate risk stratification to identify people most likely to benefit from anticipatory health care and personalized interventions.
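The sensitivity, specificity, and PPV figures reported above are derived from the confusion matrix at a chosen probability threshold; a minimal sketch with hypothetical mortality labels and predicted probabilities is:

```python
import numpy as np

def threshold_metrics(y_true: np.ndarray, y_prob: np.ndarray, thr: float = 0.5):
    """Sensitivity, specificity, and PPV at a probability threshold."""
    y_pred = (y_prob >= thr).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    return sensitivity, specificity, ppv

# Hypothetical 12-month mortality labels and model output probabilities.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_prob = np.array([0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.3, 0.7])
sens, spec, ppv = threshold_metrics(y_true, y_prob)  # each 0.75 here
```

Sweeping the threshold and integrating the resulting true-positive rate against the false-positive rate yields the AUROC values quoted in the abstract.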


Subject(s)
Electronic Health Records , Humans , Male , Female , Middle Aged , Risk Assessment , Aged , Prognosis , Cause of Death , Deep Learning , Risk Factors , Predictive Value of Tests , Non-alcoholic Fatty Liver Disease/mortality , Non-alcoholic Fatty Liver Disease/diagnosis , Adult , Neural Networks, Computer , Retrospective Studies
20.
BMC Bioinformatics ; 25(1): 231, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969970

ABSTRACT

PURPOSE: In this study, we present DeepVirusClassifier, a tool capable of accurately classifying Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) viral sequences among other subtypes of the Coronaviridae family. This classification is achieved through a deep neural network model based on convolutional neural networks (CNNs). Since viruses within the same family share similar genetic and structural characteristics, the classification process becomes more challenging, necessitating more robust models. With the rapid evolution of viral genomes and the increasing need for timely classification, we aimed to provide a robust and efficient tool that could increase the accuracy of viral identification and classification, contribute to advancing research in viral genomics, and assist in the surveillance of emerging viral strains. METHODS: Based on a one-dimensional deep CNN, the proposed tool can be trained and tested on the Coronaviridae family, including SARS-CoV-2. The model's performance was assessed using various metrics, including F1-score and AUROC. Additionally, artificial mutation tests were conducted to evaluate the model's ability to generalize across sequence variations. We also ran the BLAST algorithm and conducted comprehensive processing-time analyses for comparison. RESULTS: DeepVirusClassifier demonstrated strong performance across several evaluation metrics in the training and testing phases, indicating a robust learning capacity. Notably, during testing on more than 10,000 viral sequences, the model exhibited a sensitivity of more than 99% for sequences with fewer than 2000 mutations. The tool achieves superior accuracy and significantly reduced processing times compared to the Basic Local Alignment Search Tool (BLAST) algorithm, and the results compare favourably with previously reported approaches, indicating that the tool has great potential for viral genomic research.
CONCLUSION: DeepVirusClassifier is a powerful tool for accurately classifying viral sequences, specifically focusing on SARS-CoV-2 and other subtypes within the Coronaviridae family. The superiority of our model becomes evident through rigorous evaluation and comparison with existing methods. Introducing artificial mutations into the sequences demonstrates the tool's ability to identify variations and significantly contributes to viral classification and genomic research. As viral surveillance becomes increasingly critical, our model holds promise in aiding rapid and accurate identification of emerging viral strains.
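The input to a one-dimensional CNN such as DeepVirusClassifier is typically a one-hot-encoded genome sequence. A minimal encoding sketch follows; the exact alphabet handling and fixed sequence length used by the tool are assumptions for illustration.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq: str, length: int) -> np.ndarray:
    """One-hot encode a nucleotide sequence into a (length, 4) array.

    Sequences are truncated or zero-padded to `length`; characters outside
    A/C/G/T (e.g. the ambiguity code N) are left as all-zero rows.
    """
    arr = np.zeros((length, len(BASES)), dtype=np.float32)
    for i, base in enumerate(seq[:length].upper()):
        j = BASES.find(base)
        if j >= 0:
            arr[i, j] = 1.0
    return arr

# "ACGTN" padded to length 8: rows 0-3 one-hot, row 4 (N) and rows 5-7 zero.
encoded = one_hot("ACGTN", 8)
```

Each encoded array (or a batch of them, shaped batch x length x 4) would then be passed through the 1D convolutional layers to produce per-subtype class probabilities.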


Subject(s)
COVID-19 , Deep Learning , Genome, Viral , SARS-CoV-2 , SARS-CoV-2/genetics , SARS-CoV-2/classification , Genome, Viral/genetics , COVID-19/virology , Coronaviridae/genetics , Coronaviridae/classification , Humans , Neural Networks, Computer