Results 1 - 20 of 325
1.
Int Endod J ; 58(4): 658-671, 2025 Apr.
Article in English | MEDLINE | ID: mdl-39873266

ABSTRACT

AIM: To develop and validate an artificial intelligence (AI)-powered tool based on a convolutional neural network (CNN) for automatic segmentation of root canals in single-rooted teeth using cone-beam computed tomography (CBCT). METHODOLOGY: A total of 69 CBCT scans were retrospectively retrieved from a hospital database, acquired from two devices with varying protocols. These scans were randomly assigned to the training (n = 31, 88 teeth), validation (n = 8, 15 teeth) and testing (n = 30, 120 teeth) sets. For the training and validation data sets, each CBCT scan was imported into the Virtual Patient Creator platform, where manual segmentation of root canals was performed by two operators, establishing the ground truth. Subsequently, the AI model was tested on 30 CBCT scans (120 teeth), and the AI-generated three-dimensional (3D) virtual models were exported in standard triangle language (STL) format. Importantly, the testing data set encompassed different types of single-rooted teeth. An experienced operator evaluated the automated segmentation, and manual refinements were made to create refined 3D models (R-AI). The AI and R-AI models were compared for performance evaluation. Additionally, 30% of the testing sample was manually segmented at two different times to compare AI-based and human segmentation methods. The time taken by each segmentation method to obtain 3D models was recorded in seconds (s) for further comparison. RESULTS: The AI-driven tool demonstrated highly accurate segmentation of single-rooted teeth (Dice similarity coefficient [DSC] ranging from 89% to 93%; 95% Hausdorff distance [HD] ranging from 0.10 to 0.13 mm), with no significant impact of tooth type on accuracy metrics (p > .05). The AI approach outperformed the manual method (p < .05), showing higher DSC and lower 95% HD values.
In terms of time efficiency, manual segmentation required significantly more time (2262.4 ± 679.1 s) than the R-AI (94 ± 64.7 s) and AI (41.8 ± 12.2 s) methods (p < .05), a 54-fold decrease for the fully automated approach. CONCLUSIONS: The novel AI-based tool exhibited highly accurate and time-efficient performance in automatic root canal segmentation on CBCT, surpassing human performance.
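The Dice similarity coefficient reported above can be computed from binary masks as follows (a minimal numpy sketch with toy 1-D masks, not the study's pipeline):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy 1-D "masks" standing in for voxel-wise root canal segmentations
pred_mask = np.array([0, 1, 1, 1, 0, 0])   # hypothetical AI output
gt_mask   = np.array([0, 1, 1, 0, 0, 0])   # hypothetical ground truth
print(dice_coefficient(pred_mask, gt_mask))  # 2*2/(3+2) = 0.8
```

The same formula applies voxel-wise to 3-D CBCT segmentations; the 95% Hausdorff distance complements it by measuring surface disagreement rather than volume overlap.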


Subjects
Artificial Intelligence; Cone-Beam Computed Tomography; Dental Pulp Cavity; Humans; Cone-Beam Computed Tomography/methods; Dental Pulp Cavity/diagnostic imaging; Dental Pulp Cavity/anatomy & histology; Retrospective Studies; Imaging, Three-Dimensional/methods; Neural Networks, Computer
2.
Sci Rep ; 15(1): 3746, 2025 Jan 30.
Article in English | MEDLINE | ID: mdl-39885248

ABSTRACT

Accurate malaria diagnosis with precise identification of Plasmodium species is crucial for effective treatment. While microscopy is still the gold standard in malaria diagnosis, it relies heavily on trained personnel. Artificial intelligence (AI) advances, particularly convolutional neural networks (CNNs), have significantly improved diagnostic capabilities and accuracy by enabling the automated analysis of medical images. Previous models efficiently detected malaria parasites in red blood cells but had difficulty differentiating between species. We propose a CNN-based model for classifying cells infected by P. falciparum, P. vivax, and uninfected white blood cells from thick blood smears. Our best-performing model utilizes a seven-channel input and correctly predicted 12,876 out of 12,954 cases. We also generated a cross-validation confusion matrix summarizing five iterations, with 63,654 out of 64,126 true predictions. The model reached an accuracy of 99.51%, a precision of 99.26%, a recall of 99.26%, a specificity of 99.63%, an F1 score of 99.26%, and a loss of 2.3%. We are now developing a system based on real-world quality images to create a comprehensive detection tool for remote regions where trained microscopists are unavailable.
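The accuracy, precision, recall, specificity, and F1 metrics cited above all derive from confusion-matrix counts; a minimal sketch (the counts below are hypothetical, not the paper's matrix):

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f1          = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Hypothetical counts for one class in a one-vs-rest evaluation
acc, prec, rec, spec, f1 = classification_metrics(tp=90, tn=95, fp=5, fn=10)
```

For a multi-class problem such as P. falciparum vs. P. vivax vs. uninfected cells, these are typically computed per class and then macro- or micro-averaged.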


Subjects
Deep Learning; Malaria, Falciparum; Malaria, Vivax; Neural Networks, Computer; Plasmodium falciparum; Plasmodium vivax; Plasmodium vivax/isolation & purification; Plasmodium falciparum/isolation & purification; Humans; Malaria, Vivax/diagnosis; Malaria, Vivax/parasitology; Malaria, Falciparum/parasitology; Malaria, Falciparum/diagnosis; Image Processing, Computer-Assisted/methods
3.
Netw Neurosci ; 8(4): 1529-1544, 2024.
Article in English | MEDLINE | ID: mdl-39735504

ABSTRACT

Memories are thought to use coding schemes that dynamically adjust their representational structure to maximize both persistence and efficiency. However, the nature of these coding scheme adjustments and their impact on the temporal evolution of memory after initial encoding remain unclear. Here, we introduce the Segregation-to-Integration Transformation (SIT) model, a network formalization that offers a unified account of how the representational structure of a memory is transformed over time. The SIT model asserts that memories initially adopt a highly modular or segregated network structure, functioning as an optimal storage buffer by balancing protection from disruptions and accommodating substantial information. Over time, a repeated combination of neural network reactivations involving activation spreading and synaptic plasticity transforms the initial modular structure into an integrated memory form, facilitating intercommunity spreading and fostering generalization. The SIT model identifies a nonlinear or inverted U-shaped function in memory evolution where memories are most susceptible to changing their representation. This time window, located early during the transformation, is a consequence of the memory's structural configuration, where the activation diffusion across the network is maximized.


The Segregation-to-Integration Transformation (SIT) model provides a framework for memory transformation based on changes in neural network properties. SIT posits that memories shift from highly modular to less modular network forms over time, driven by neural reactivations, activation spread, and plasticity rules. The SIT model identifies a critical period, shortly after memory formation, during which reactivations can induce significant structural modifications. As reactivations accumulate, the network becomes more stable and integrated, and thus more resistant to change, preserving the core information while reducing the likelihood of distortion or loss.
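The segregated-vs-integrated distinction at the heart of SIT is quantified by network modularity. A minimal pure-Python sketch of Newman's modularity Q on a toy graph (the graph and community labels are illustrative, not from the paper):

```python
def modularity(edges, communities):
    """Newman modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    m = len(edges)
    deg, adj = {}, set()
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        adj.add(frozenset((u, v)))
    comm = {node: c for c, group in enumerate(communities) for node in group}
    q = 0.0
    for i in deg:
        for j in deg:
            if comm[i] == comm[j]:
                a_ij = 1.0 if frozenset((i, j)) in adj else 0.0
                q += a_ij - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)

# Two triangles joined by a single bridge: a segregated, "freshly encoded" structure
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q_segregated = modularity(edges, [{0, 1, 2}, {3, 4, 5}])   # two modules
q_integrated = modularity(edges, [{0, 1, 2, 3, 4, 5}])     # one merged community
```

Here the two-module partition scores higher Q than the single merged community, mirroring SIT's early (segregated) versus late (integrated) memory forms.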

4.
Sci Rep ; 14(1): 30332, 2024 12 05.
Article in English | MEDLINE | ID: mdl-39639089

ABSTRACT

Innovation is currently driving enhanced performance and productivity across various fields through process automation. However, identifying intricate details in images can often pose challenges due to morphological variations or specific conditions. Here, artificial intelligence (AI) plays a crucial role by simplifying the segmentation of images. This is achieved by training algorithms to detect specific pixels, thereby recognizing details within images. In this study, an algorithm was developed incorporating modules based on an Efficient Sub-Pixel Convolutional Neural Network for image super-resolution, a U-Net-based neural baseline for image segmentation, and image binarization for masking. The combination of these modules aimed to identify capillary structures at the pixel level. The method was applied to different datasets containing images of eye fundus, citrus leaves, and printed circuit boards to test how well it could segment the capillary structures. Notably, the trained model exhibited versatility in recognizing capillary structures across various image types. When tested with the Set 5 and Set 14 datasets, a PSNR of 37.92 and an SSIM of 0.9219 were achieved, significantly surpassing other image super-resolution methods. The enhancement module processes the image using three different variables in the same way, which imposes a complexity of O(n) and takes 308,734 ms to execute; the segmentation module evaluates each pixel against its neighbors to correctly segment regions of interest, generating O(n²) quadratic complexity and taking 687,509 ms to execute; the masking module makes several passes through the whole image and on several occasions calls processes of O(n²) complexity, taking 581,686 µs to execute, which makes it not only the most complex but also the most exhaustive part of the program.
This versatility, rooted in its pixel-level operation, enables the algorithm to identify initially unnoticed details, enhancing its applicability across diverse image datasets. This innovation holds significant potential for precisely studying certain structures' characteristics while enhancing and processing images with high fidelity through AI-driven machine learning algorithms.
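The PSNR figure used above to benchmark super-resolution can be computed directly from the mean squared error between reference and reconstruction (a numpy sketch on a synthetic image, not the Set 5/Set 14 evaluation):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio (dB); higher means a more faithful reconstruction."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstructed, float)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

ref = np.full((8, 8), 100.0)     # synthetic 8x8 grayscale image
rec = ref + 1.0                  # uniform 1-gray-level error -> MSE = 1
print(round(psnr(ref, rec), 2))  # 48.13 dB
```

SSIM, the companion metric, additionally compares local luminance, contrast, and structure rather than raw pixel error.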


Subjects
Algorithms; Artificial Intelligence; Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Humans; Fundus Oculi; Citrus
5.
Int Breastfeed J ; 19(1): 79, 2024 Dec 06.
Article in English | MEDLINE | ID: mdl-39639329

ABSTRACT

BACKGROUND: Breastfeeding rates remain below the globally recommended levels, a situation associated with higher infant and neonatal mortality rates. The implementation of artificial intelligence (AI) could help improve and increase breastfeeding rates. This study aimed to identify and synthesize the current information on the use of AI in the analysis of human milk and breastfeeding. METHODS: A scoping review was conducted according to the PRISMA Extension for Scoping Reviews guidelines. The literature search, performed in December 2023, used predetermined keywords from the PubMed, Scopus, LILACS, and WoS databases. Observational and qualitative studies evaluating AI in the analysis of breastfeeding patterns and human milk composition were included. A thematic analysis was employed to categorize and synthesize the data. RESULTS: Nineteen studies were included. The primary AI approaches were machine learning, neural networks, and chatbot development. The thematic analysis revealed five major categories: 1. Prediction of exclusive breastfeeding patterns: AI models, such as decision trees and machine learning algorithms, identified factors influencing breastfeeding practices, including maternal experience, hospital policies, and social determinants, highlighting actionable predictors for intervention. 2. Analysis of macronutrients in human milk: AI predicted fat, protein, and nutrient content with high accuracy, improving the operational efficiency of milk banks and nutritional assessments. 3. Education and support for breastfeeding mothers: AI-driven chatbots addressed breastfeeding concerns, debunked myths, and connected mothers to milk donation programs, demonstrating high engagement and satisfaction rates. 4. Detection and transmission of drugs in breast milk: AI techniques, including neural networks and predictive models, identified drug transfer rates and assessed pharmacological risks during lactation. 5.
Identification of environmental contaminants in milk: AI models predicted exposure to contaminants, such as polychlorinated biphenyls, based on maternal and environmental factors, aiding in risk assessment. CONCLUSION: AI-based models have shown the potential to increase breastfeeding rates by identifying high-risk populations and providing tailored support. Additionally, AI has enabled a more precise analysis of human milk composition, drug transfer, and contaminant detection, offering significant insights into lactation science and maternal-infant health. These findings suggest that AI can promote breastfeeding, improve milk safety, and enhance infant nutrition.


Subjects
Artificial Intelligence; Breast Feeding; Milk, Human; Humans; Milk, Human/chemistry; Female; Infant, Newborn; Infant; Machine Learning
6.
Front Artif Intell ; 7: 1467051, 2024.
Article in English | MEDLINE | ID: mdl-39664102

ABSTRACT

Driving performance can be significantly impacted when a person experiences intense emotions behind the wheel. Research shows that emotions such as anger, sadness, agitation, and joy can increase the risk of traffic accidents. This study introduces a methodology to recognize four specific emotions using an intelligent model that processes and analyzes signals from motor activity and driver behavior, which are generated by interactions with basic driving elements, along with facial geometry images captured during emotion induction. The research applies machine learning to identify the most relevant motor activity signals for emotion recognition. Furthermore, a pre-trained Convolutional Neural Network (CNN) model is employed to extract probability vectors from images corresponding to the four emotions under investigation. These data sources are integrated through a unidimensional network for emotion classification. The main proposal of this research was to develop a multimodal intelligent model that combines motor activity signals and facial geometry images to accurately recognize four specific emotions (anger, sadness, agitation, and joy) in drivers, achieving a 96.0% accuracy in a simulated environment. The study confirmed a significant relationship between drivers' motor activity, behavior, facial geometry, and the induced emotions.
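The integration step described above — combining the CNN's emotion probability vector with motor-activity signals in a unidimensional network — can be sketched as a simple late-fusion input builder (all values and dimensions hypothetical):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def fuse(cnn_logits, motor_signals):
    """Late fusion: CNN emotion probabilities concatenated with z-scored motor
    signals, forming the input vector of the 1-D classification network."""
    probs = softmax(np.asarray(cnn_logits, float))   # 4 emotion probabilities
    sig = np.asarray(motor_signals, float)
    sig = (sig - sig.mean()) / (sig.std() + 1e-8)    # standardize signal features
    return np.concatenate([probs, sig])

# Hypothetical CNN logits (anger, sadness, agitation, joy) and 5 motor features
fused = fuse([2.0, 0.1, -1.0, 0.5], [0.3, 0.7, 0.2, 0.9, 0.5])
```

The fused vector is what a downstream dense classifier would consume; the actual feature counts and normalization in the study may differ.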

7.
Therap Adv Gastroenterol ; 17: 17562848241251949, 2024.
Article in English | MEDLINE | ID: mdl-39664232

ABSTRACT

Background: The treatment for Crohn's disease (CD) has increasingly required the use of biological agents. Safe and affordable tests have led to the active implementation of therapeutic drug monitoring (TDM) in clinical practice, which, although not yet widely available across all health services, has been proven effective. Objective: To analyze serum infliximab (IFX) and antidrug antibody (ADA) levels in CD patients, compare two tests, as well as to construct neural network prediction models using a combination of clinical, epidemiological, and laboratory variables. Design: Cross-sectional observational study. Method: A cross-sectional observational study was conducted on 75 CD patients in the maintenance phase of IFX treatment. The participants were allocated into two groups: CD in activity (CDA) and in remission (CDR). Disease activity was defined by endoscopic or radiological criteria. Serum IFX levels were measured by enzyme-linked immunosorbent assay (ELISA) and a rapid lateral flow assay; ADA levels were measured by ELISA. A nonparametric test was used for statistical analysis; a p value of ⩽0.05 was considered significant. Differences between ELISA and rapid lateral flow results within the measurement range were assessed by the Wilcoxon test, Passing-Bablok regression, and the Bland-Altman method. Prediction models were created using four neural network sets. Neural networks and performance receiver operating characteristic curves were created using the Keras package in Python software. Results: Most participants exhibited supratherapeutic IFX levels (>7 mg/mL). Both tests showed no difference in IFX levels between the CDA and CDR groups (p > 0.05). The use of immunosuppressive therapy did not affect IFX levels (p > 0.05). Only 14.66% of patients had ADA levels >5 AU/mL, and all ADA-positive participants exhibited subtherapeutic IFX levels in both tests. The median results of both tests showed significant differences and moderate agreement (r = -0.6758, p < 0.001).
Of the four neural networks developed, two showed excellent performance, with areas under the curve (AUC) of 82-92% and 100%. Conclusion: Most participants exhibited supratherapeutic IFX levels, with no significant serum level difference between the groups. There was moderate agreement between tests. Two neural network sets noninvasively determined disease activity and the presence of ADA in patients using IFX, presenting AUCs above 80%.
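The Bland-Altman method used to compare the ELISA and rapid lateral flow assays reduces to the bias (mean difference) and its 95% limits of agreement; a minimal sketch with hypothetical paired IFX measurements (not the study's data):

```python
import numpy as np

def bland_altman(x, y):
    """Bias (mean difference) and 95% limits of agreement between two assays."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired serum IFX levels (ug/mL) from the two tests
elisa = [7.1, 8.4, 6.9, 9.2, 7.8]
rapid = [7.5, 8.9, 7.2, 9.8, 8.1]
bias, lower, upper = bland_altman(elisa, rapid)
```

Passing-Bablok regression complements this by estimating systematic (intercept) and proportional (slope) differences without assuming error-free measurements in either method.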


Infliximab drug monitoring in Crohn's disease

Crohn's disease (CD) is a chronic condition that affects the gastrointestinal tract, with potential effects anywhere between the mouth and the anus. The primary treatment goal is symptom control and disease remission. The objective of this study was to analyze blood levels of infliximab (IFX), a commonly used medication for CD treatment. We also evaluated the level of antibodies that the body can produce against this medication to explain nonresponse to the drug. IFX levels were compared in 75 patients with CD in activity and in remission, using two different tests. The results showed that most patients had serum IFX above the recommended level (>7 mg/mL). Neither of these tests showed differences in IFX levels when we evaluated disease activity or when the patients used immunosuppressants. Both tests showed antibodies against IFX in 14.66% of patients, all of whom had IFX levels below the therapeutic level. We compared two tests, ELISA and the rapid test, and observed a difference between them, with moderate agreement. Normal serum IFX levels were higher with the rapid test than with the ELISA; however, they presented a linear relationship. We also created prediction models using neural networks (artificial intelligence), which demonstrated excellent performance in noninvasively predicting disease activity and the presence of antibodies against IFX, achieving an area under the curve between 82% and 100%.

8.
Proteins ; 2024 Dec 22.
Article in English | MEDLINE | ID: mdl-39711079

ABSTRACT

Recent technological advancements have enabled the experimental determination of amino acid sequences for numerous proteins. However, analyzing protein functions, which is essential for understanding their roles within cells, remains a challenging task due to the associated costs and time constraints. To address this challenge, various computational approaches have been proposed to aid in the categorization of protein functions, mainly utilizing amino acid sequences. In this study, we introduce SUPERMAGO, a method that leverages amino acid sequences to predict protein functions. Our approach employs Transformer architectures, pre-trained on protein data, to extract features from the sequences. We use multilayer perceptrons for classification and a stacking neural network to aggregate the predictions, which significantly enhances the performance of our method. We also present SUPERMAGO+, an ensemble of SUPERMAGO and DIAMOND, based on neural networks that assign different weights to each term, offering a novel weighting mechanism compared with existing methods in the literature. Additionally, we introduce SUPERMAGO+Web, a web server-compatible version of SUPERMAGO+ designed to operate with reduced computational resources. Both SUPERMAGO and SUPERMAGO+ consistently outperformed state-of-the-art approaches in our evaluations, establishing them as leading methods for this task when considering only amino acid sequence information.
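The per-term weighting that distinguishes SUPERMAGO+ from a plain ensemble can be illustrated as a convex combination of two predictors' score vectors (weights and scores below are hypothetical, and in the real method the weights are learned by a neural network):

```python
import numpy as np

def weighted_ensemble(pred_a, pred_b, term_weights):
    """Per-term convex combination of two predictors' confidence vectors,
    in the spirit of SUPERMAGO+'s term-level weighting."""
    w = np.clip(np.asarray(term_weights, float), 0.0, 1.0)
    return w * np.asarray(pred_a, float) + (1.0 - w) * np.asarray(pred_b, float)

# Hypothetical confidence scores for three GO terms from two base predictors
combined = weighted_ensemble([0.9, 0.2, 0.6], [0.7, 0.4, 0.8], [0.5, 0.8, 0.2])
```

Giving each term its own weight lets the ensemble lean on whichever base method (deep model or homology search such as DIAMOND) is more reliable for that specific function.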

9.
World J Clin Oncol ; 15(10): 1256-1263, 2024 Oct 24.
Article in English | MEDLINE | ID: mdl-39473862

ABSTRACT

A recent study published in the World Journal of Clinical Cases found that minimally invasive laparoscopic surgery under general anesthesia demonstrates superior efficacy and safety compared to traditional open surgery for early ovarian cancer patients. This editorial discusses the integration of machine learning in laparoscopic surgery, emphasizing its transformative potential in improving patient outcomes and surgical precision. Machine learning algorithms analyze extensive datasets to optimize procedural techniques, enhance decision-making, and personalize treatment plans. Advanced imaging modalities like augmented reality and real-time tissue classification, alongside robotic surgical systems and virtual reality simulations driven by machine learning, enhance imaging and training techniques, offering surgeons clearer visualization and precise tissue manipulation. Despite promising advancements, challenges such as data privacy, algorithm bias, and regulatory hurdles need addressing for the responsible deployment of machine learning technologies. Interdisciplinary collaborations and ongoing technological innovations promise further enhancement in laparoscopic surgery, fostering a future where personalized medicine and precision surgery redefine patient care.

10.
Plant Methods ; 20(1): 164, 2024 Oct 29.
Article in English | MEDLINE | ID: mdl-39472979

ABSTRACT

Building models that allow phenotypic evaluation of complex agronomic traits in crops of global economic interest, such as grain yield (GY) in soybean and maize, is essential for improving the efficiency of breeding programs. In this sense, understanding the relationships between agronomic variables and those obtained by high-throughput phenotyping (HTP) is crucial to this goal. Our hypothesis is that vegetation indices (VIs) obtained from HTP can be used to indirectly measure agronomic variables in annual crops. The objectives were to study the association between agronomic variables in maize and soybean genotypes with VIs obtained from remote sensing and to identify computational intelligence models for predicting GY of these crops from VIs as input. Comparative trials were carried out with 30 maize genotypes in the 2020/2021, 2021/2022 and 2022/2023 crop seasons, and with 32 soybean genotypes in the 2021/2022 and 2022/2023 seasons. In all trials, an overflight was performed at the R1 stage using the UAV Sensefly eBee equipped with a multispectral sensor for acquiring canopy reflectance in the green (550 nm), red (660 nm), near-infrared (735 nm) and infrared (790 nm) wavelengths, which were used to calculate the VIs assessed. Agronomic traits evaluated in the maize crop were: leaf nitrogen content, plant height, first ear insertion height, and GY, while agronomic traits evaluated in soybean were: days to maturity, plant height, first pod insertion height, and GY. The associations between the variables were expressed by a correlation network, and to identify which indices are best associated with each of the traits evaluated, a path analysis was performed.
Lastly, VIs with a cause-and-effect association with each variable in the maize and soybean trials were adopted as independent explanatory variables in a multiple regression model (MLR) and an artificial neural network (ANN), in which the 10 best topologies able to simultaneously predict all the agronomic variables evaluated in each crop were selected. Our findings reveal that VIs can be used to predict agronomic variables in maize and soybean. The Soil-Adjusted Vegetation Index (SAVI) and Green Normalized Difference Vegetation Index (GNDVI) have a positive and high direct effect on all agronomic variables evaluated in maize, while the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Red Edge Index (NDRE) have a positive cause-and-effect association with all soybean variables. The ANN outperformed the MLR, providing higher accuracy when predicting agronomic variables using the VIs selected by path analysis as input. Future studies should evaluate other plant traits, such as physiological or nutritional ones, as well as spectral variables different from those evaluated here, with a view to contributing to an in-depth understanding of cause-and-effect relationships between plant traits and spectral variables. Such studies could contribute to more specific HTP at the level of traits of interest in each crop, helping to develop genetic materials that meet the future demands of population growth and climate change.
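The vegetation indices named above are simple band-ratio formulas over the four acquired wavelengths; a sketch with hypothetical reflectance values (0-1 scale):

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    return (nir - green) / (nir + green)

def ndre(nir, red_edge):
    return (nir - red_edge) / (nir + red_edge)

def savi(nir, red, l=0.5):
    """Soil-Adjusted Vegetation Index; L is the soil-brightness correction factor."""
    return (1.0 + l) * (nir - red) / (nir + red + l)

# Hypothetical canopy reflectances for the green, red, red-edge, and NIR bands
r_green, r_red, r_red_edge, r_nir = 0.10, 0.05, 0.30, 0.45
print(round(ndvi(r_nir, r_red), 2), round(savi(r_nir, r_red), 2))  # 0.8 0.6
```

Applied pixel-wise to the multispectral orthomosaic and averaged per plot, these indices become the explanatory variables fed to the MLR and ANN models.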

11.
Sci Rep ; 14(1): 24562, 2024 10 19.
Article in English | MEDLINE | ID: mdl-39427062

ABSTRACT

The aim of this study was to build and validate an artificial neural network (ANN) algorithm to predict sleep using data from a portable monitor (Biologix system) consisting of a high-resolution oximeter with built-in accelerometer plus a smartphone application with snoring recording and cloud analysis. A total of 268 patients with suspected obstructive sleep apnea (OSA) were submitted to standard polysomnography (PSG) with simultaneous Biologix (age: 56 ± 11 years; body mass index: 30.9 ± 4.6 kg/m², apnea-hypopnea index [AHI]: 35 ± 30 events/h). Biologix channels were input features for constructing an ANN model to predict sleep. A k-fold cross-validation method (k = 10) was applied, ensuring that all sleep studies (N = 268; 246,265 epochs) were included in both training and testing across all iterations. The final ANN model, evaluated as the mean performance across all folds, resulted in a sensitivity, specificity, and accuracy of 91.5%, 71.0%, and 86.1%, respectively, for detecting sleep. Compared to the oxygen desaturation index (ODI) from Biologix without sleep prediction, the bias (mean difference) between PSG-AHI and Biologix-ODI with sleep prediction (Biologix-Sleep-ODI) decreased significantly (3.40 vs. 1.02 events/h, p < 0.001). We conclude that sleep prediction by an ANN model using data from an oximeter, accelerometer, and snoring is accurate and improves the Biologix system's OSA diagnostic precision.
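The k-fold protocol described above — every study appears in exactly one test fold and in the training set of all other folds — can be sketched in plain Python (the interleaved assignment below is one simple scheme; the study's actual fold allocation may differ):

```python
def k_fold_indices(n_items, k=10):
    """Partition indices 0..n_items-1 into k disjoint test folds; for each fold,
    the remaining indices form the training set."""
    folds = [list(range(i, n_items, k)) for i in range(k)]
    splits = []
    for test in folds:
        test_set = set(test)
        train = [i for i in range(n_items) if i not in test_set]
        splits.append((train, test))
    return splits

# 268 sleep studies, k = 10: every study is tested exactly once across the folds
splits = k_fold_indices(268, k=10)
```

Reporting the mean performance across the 10 held-out folds, as the study does, gives an estimate that uses every recording for testing without ever testing on training data.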


Subjects
Accelerometry; Neural Networks, Computer; Oximetry; Polysomnography; Sleep Apnea, Obstructive; Snoring; Humans; Sleep Apnea, Obstructive/diagnosis; Sleep Apnea, Obstructive/physiopathology; Middle Aged; Snoring/diagnosis; Snoring/physiopathology; Male; Oximetry/instrumentation; Oximetry/methods; Female; Polysomnography/instrumentation; Polysomnography/methods; Accelerometry/instrumentation; Accelerometry/methods; Aged; Sleep/physiology; Adult; Smartphone; Algorithms
12.
Biomimetics (Basel) ; 9(10)2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39451816

ABSTRACT

The precision of robotic manipulators in the industrial or medical field is very important, especially when it comes to repetitive or exhaustive tasks. Geometric deformations are the most common source of error in this field. For this reason, new robotic vision techniques have been proposed, including 3D methods that make it possible to determine the geometric distances between the parts of a robotic manipulator. The aim of this work is to measure the angular position of a robotic arm with six degrees of freedom. For this purpose, a stereo camera and a convolutional neural network algorithm are used to reduce the degradation of precision caused by geometric errors. This method is not intended to replace encoders, but to enhance accuracy by compensating for degradation through an intelligent visual measurement system. The camera was tested, and its accuracy is about one millimeter. The implementation of this method leads to better results than traditional and simple neural network methods.

13.
Animals (Basel) ; 14(20)2024 Oct 17.
Article in English | MEDLINE | ID: mdl-39457929

ABSTRACT

Identifying and counting fish are crucial for managing stocking, harvesting, and marketing of farmed fish. Researchers have used convolutional networks for these tasks and explored various approaches to enhance network learning. Batch normalization is one technique that improves network stability and accuracy. This study aimed to evaluate machine learning for identifying and counting pirapitinga Piaractus brachypomus fry with different batch sizes. The researchers used one thousand photographic images of Pirapitinga fingerlings, labeled with bounding boxes. They trained the adapted convolutional network model with batch normalization layers added at the end of each convolution block. They set the training to one hundred and fifty epochs and tested batch sizes of 5, 10, and 20. Furthermore, they measured network performance using precision, recall, and mAP@0.5. Models with smaller batch sizes performed less effectively. The training with a batch size of 20 achieved the best performance, with a precision of 96.74%, recall of 95.48%, mAP@0.5 of 97.08%, and accuracy of 98%. This indicates that larger batch sizes improve accuracy in detecting and counting pirapitinga fry across different fish densities.

14.
Head Neck Pathol ; 18(1): 117, 2024 Oct 28.
Article in English | MEDLINE | ID: mdl-39466448

ABSTRACT

OBJECTIVE: This study aimed to implement and evaluate a Deep Convolutional Neural Network for classifying myofibroblastic lesions into benign and malignant categories based on patch-based images. METHODS: A Residual Neural Network (ResNet50) model, pre-trained with weights from ImageNet, was fine-tuned to classify a cohort of 20 patients (11 benign and 9 malignant cases). Following annotation of tumor regions, the whole-slide images (WSIs) were fragmented into smaller patches (224 × 224 pixels). These patches were non-randomly divided into training (308,843 patches), validation (43,268 patches), and test (42,061 patches) subsets, maintaining a 78:11:11 ratio. The CNN training was carried out for 75 epochs utilizing a batch size of 4, the Adam optimizer, and a learning rate of 0.00001. RESULTS: ResNet50 achieved an accuracy of 98.97%, precision of 99.91%, sensitivity of 97.98%, specificity of 99.91%, F1 score of 98.94%, and AUC of 0.99. CONCLUSIONS: The ResNet50 model developed exhibited high accuracy during training and robust generalization capabilities on unseen data, indicating nearly flawless performance in distinguishing between benign and malignant myofibroblastic tumors, despite the small sample size. The excellent performance of the AI model in separating such histologically similar classes could be attributed to its ability to identify hidden discriminative features, as well as to use a wide range of features and benefit from proper data preprocessing.
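The WSI fragmentation step above — tiling annotated regions into 224 × 224 patches for the CNN — can be sketched with numpy (a common non-overlapping tiling scheme; the study's exact patching and filtering may differ):

```python
import numpy as np

def extract_patches(image, patch=224):
    """Tile an image array into non-overlapping patch x patch crops,
    discarding any partial border tiles."""
    h, w = image.shape[:2]
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

# Stand-in array for an annotated tumor region cropped from a whole-slide image
region = np.zeros((896, 672, 3), dtype=np.uint8)
patches = extract_patches(region)   # 4 rows x 3 cols = 12 patches of 224 x 224 x 3
```

Each 224 × 224 patch matches the input resolution ResNet50 expects from its ImageNet pre-training, which is what makes the transfer-learning fine-tune straightforward.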


Subjects
Neural Networks, Computer; Humans; Image Interpretation, Computer-Assisted/methods; Deep Learning; Sensitivity and Specificity; Head and Neck Neoplasms/pathology; Head and Neck Neoplasms/classification
15.
Sensors (Basel) ; 24(19)2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39409301

ABSTRACT

Currently, the number of vehicles in circulation continues to increase steadily, leading to a parallel increase in vehicular accidents. Among the many causes of these accidents, human factors such as driver drowsiness play a fundamental role. In this context, one solution to address the challenge of drowsiness detection is to anticipate drowsiness by alerting drivers in a timely and effective manner. Thus, this paper presents a Convolutional Neural Network (CNN)-based approach for drowsiness detection by analyzing the eye region and the Mouth Aspect Ratio (MAR) for yawning detection. As part of this approach, endpoint delineation is optimized for extraction of the region of interest (ROI) around the eyes. An NVIDIA Jetson Nano-based device and a near-infrared (NIR) camera are used for real-time applications. A Driver Drowsiness Artificial Intelligence (DD-AI) architecture is proposed for the eye state detection procedure. In a performance analysis, the results of the proposed approach were compared with architectures based on InceptionV3, VGG16, and ResNet50V2. The Night-Time Yawning-Microsleep-Eyeblink-Driver Distraction (NITYMED) dataset was used for training, validation, and testing of the architectures. The proposed DD-AI network achieved an accuracy of 99.88% with the NITYMED test data, proving superior to the other networks. In the hardware implementation, tests were conducted in a real environment, resulting in an average accuracy of 96.55% at 14 fps for the DD-AI network, thereby confirming its superior performance.
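The Mouth Aspect Ratio used above for yawning detection is a purely geometric quantity over mouth landmarks; a minimal sketch (the 8-point indexing convention below is one common choice and is assumed here, not taken from the paper):

```python
import math

def mouth_aspect_ratio(pts):
    """MAR: mean of three vertical lip distances over the horizontal mouth width.
    `pts` is an 8-point mouth contour (assumed indexing): 0 and 4 are the mouth
    corners; (1,7), (2,6), (3,5) are vertical point pairs."""
    a = math.dist(pts[1], pts[7])
    b = math.dist(pts[2], pts[6])
    c = math.dist(pts[3], pts[5])
    d = math.dist(pts[0], pts[4])
    return (a + b + c) / (3.0 * d)

# A wide-open (yawning) mouth yields a high MAR; a tuned threshold flags the yawn
open_mouth = [(0, 0), (1, 3), (2, 4), (3, 3), (4, 0), (3, -3), (2, -4), (1, -3)]
mar = mouth_aspect_ratio(open_mouth)
```

Because MAR is a ratio of distances, it is largely invariant to face scale, which makes a fixed yawn threshold workable across drivers sitting at different distances from the camera.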


Subjects
Automobile Driving; Neural Networks, Computer; Humans; Mouth/physiology; Eye; Sleep Stages/physiology; Sleepiness; Artificial Intelligence; Accidents, Traffic
16.
Article in English | MEDLINE | ID: mdl-39382655

ABSTRACT

The present work focused on inline Raman spectroscopy monitoring of SARS-CoV-2 VLP production using two culture media by fitting chemometric models for biochemical parameters (viable cell density, cell viability, glucose, lactate, glutamine, glutamate, ammonium, and viral titer). For that purpose, a linear approach, partial least squares (PLS), and a nonlinear approach, an artificial neural network (ANN), were used as correlation techniques to build the models for each variable. The ANN approach resulted in better fitting for most parameters, except for viable cell density and glucose, for which PLS presented more suitable models. Both were statistically similar for ammonium. The mean absolute errors of the best models, within the quantified value range for viable cell density (375,000-1,287,500 cell/mL), cell viability (29.76-100.00%), glucose (8.700-10.500 g/L), lactate (0.019-0.400 g/L), glutamine (0.925-1.520 g/L), glutamate (0.552-1.610 g/L), viral titer (no virus quantified-7.505 log10 PFU/mL) and ammonium (0.0074-0.0478 g/L) were, respectively, 41,533 ± 45,273 cell/mL (PLS), 1.63 ± 1.54% (ANN), 0.058 ± 0.065 g/L (PLS), 0.007 ± 0.007 g/L (ANN), 0.007 ± 0.006 g/L (ANN), 0.006 ± 0.006 g/L (ANN), 0.211 ± 0.221 log10 PFU/mL (ANN), and 0.0026 ± 0.0026 g/L (PLS) or 0.0027 ± 0.0034 g/L (ANN). The correlation accuracy, errors, and best models obtained are in accordance with previous studies, both online and offline, using either the same insect cell/baculovirus expression system or a different host cell. Moreover, biochemical tracking throughout the bioreactor runs using the models showed suitable profiles, even with two different culture media.
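The chemometric calibration step — regressing a biochemical concentration onto spectral intensities — can be illustrated with ordinary least squares standing in for PLS (PLS first projects the spectra onto latent variables, e.g. via scikit-learn's PLSRegression; the spectra and target below are entirely synthetic):

```python
import numpy as np

# Synthetic "Raman spectra" with one informative band, and a noiseless toy
# glucose-like target in g/L; OLS stands in for the PLS chemometric fit.
rng = np.random.default_rng(0)
n_samples, n_bands = 60, 50
spectra = rng.normal(size=(n_samples, n_bands))
coeffs = np.zeros(n_bands)
coeffs[10] = 2.0                          # hypothetical informative Raman band
target = spectra @ coeffs + 9.5           # offset plays the role of a baseline level

X = np.hstack([spectra, np.ones((n_samples, 1))])   # add an intercept column
beta, *_ = np.linalg.lstsq(X, target, rcond=None)   # least-squares calibration
mae = float(np.mean(np.abs(X @ beta - target)))     # mean absolute error of the fit
```

With noiseless synthetic data the fit is essentially exact; on real inline Raman data, PLS's latent-variable projection is what keeps the model stable despite many collinear wavenumber channels.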

17.
Sensors (Basel) ; 24(17)2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39275408

ABSTRACT

Precise measurement of fiber diameter in animal and synthetic textiles is crucial for quality assessment and pricing; however, traditional methods often struggle with accuracy, particularly when fibers are densely packed or overlapping. Current computer vision techniques, while useful, have limitations in addressing these challenges. This paper introduces a novel deep-learning-based method to automatically generate distance maps of fiber micrographs, enabling more accurate fiber segmentation and diameter calculation. Our approach utilizes a modified U-Net architecture, trained on both real and simulated micrographs, to regress distance maps. This allows for the effective separation of individual fibers, even in complex scenarios. The model achieves a mean absolute error (MAE) of 0.1094 and a mean square error (MSE) of 0.0711, demonstrating its effectiveness in accurately measuring fiber diameters. This research highlights the potential of deep learning to revolutionize fiber analysis in the textile industry, offering a more precise and automated solution for quality control and pricing.
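The distance-map idea can be illustrated without a network: in a binary fiber mask, a distance transform peaks at the fiber axis, and the peak value recovers the diameter. The BFS transform and the odd-width diameter rule below are a simplified stand-in for the Euclidean maps the modified U-Net regresses:

```python
from collections import deque

def distance_map(mask):
    """4-connected BFS distance of each foreground pixel to the
    nearest background pixel (a discrete stand-in for the
    distance maps the network regresses)."""
    h, w = len(mask), len(mask[0])
    d = [[0 if mask[r][c] == 0 else -1 for c in range(w)] for r in range(h)]
    q = deque((r, c) for r in range(h) for c in range(w) if mask[r][c] == 0)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d[nr][nc] == -1:
                d[nr][nc] = d[r][c] + 1
                q.append((nr, nc))
    return d

def fiber_diameter_px(mask):
    dmax = max(max(row) for row in distance_map(mask))
    return 2 * dmax - 1  # exact for odd pixel widths

# A vertical "fiber" 7 pixels wide in a 12x14 image.
mask = [[1 if 3 <= c <= 9 else 0 for c in range(14)] for r in range(12)]
```

On real micrographs the value of regressing a smooth distance map is that ridges of touching fibers stay separated, whereas a plain foreground mask would merge them.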

18.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275707

ABSTRACT

Emotion recognition through speech is a technique employed in various scenarios of Human-Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably in the quantity and diversity of data required by deep learning techniques. The lack of a standard for feature selection leads to continuous development and experimentation, and choosing and designing an appropriate network architecture constitutes a further challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach: it develops preprocessing and feature-selection stages and constructs a dataset called EmoDSc by combining several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images it reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation when used in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified dataset EmoDSc, demonstrates a remarkable accuracy of 96%.
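As an example of the kind of hand-crafted spectral feature the 1-D branch of such a fusion model might consume (the abstract does not list the exact features used), a spectral centroid can be computed from one frame with a naive DFT:

```python
from math import sin, cos, pi, sqrt

def spectral_centroid(signal, sample_rate):
    """Magnitude-weighted mean frequency over bins 0..N/2 of a
    naive DFT -- a classic scalar spectral feature for speech."""
    n = len(signal)
    num = den = 0.0
    for k in range(n // 2 + 1):
        re = sum(signal[t] * cos(2 * pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * sin(2 * pi * k * t / n) for t in range(n))
        mag = sqrt(re * re + im * im)
        num += (k * sample_rate / n) * mag
        den += mag
    return num / den

# A pure tone at bin 8 of a 64-sample frame (fs = 8000 Hz -> 1000 Hz).
fs, n = 8000, 64
tone = [sin(2 * pi * 8 * t / n) for t in range(n)]
```

In a fusion architecture like the one proposed, vectors of such scalar features would feed the CNN1D/MLP path while spectrogram images feed the CNN2D path, with the branches concatenated before the final classifier.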


Subjects
Deep Learning , Emotions , Neural Networks, Computer , Humans , Emotions/physiology , Speech/physiology , Databases, Factual , Algorithms , Pattern Recognition, Automated/methods
19.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275715

ABSTRACT

This article implements a hybrid Machine Learning (ML) model to classify stoppage events in copper-crushing equipment, more specifically, a conveyor belt. The model combines Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) with Principal Component Analysis (PCA) to identify the type of stoppage event when it occurs in an industrial sector that is significant for the Chilean economy. This research addresses the critical need to optimise maintenance management in the mining industry, highlighting the technological relevance of and motivation for using advanced ML techniques. The study focuses on combining and implementing three ML models trained on historical data comprising readings from various sensors, real and virtual, as well as maintenance reports describing operational conditions and equipment failure characteristics. The main objective is to improve efficiency in identifying the nature of a stoppage, serving as a basis for the subsequent development of a reliable failure prediction system. The results indicate that this approach significantly increases information reliability, addressing persistent challenges in data management within the maintenance area. With a classification accuracy of 96.2% and a recall of 96.3%, the model validates and automates the classification of stoppage events, significantly reducing dependency on interdepartmental interactions and eliminating reliance on external databases, which have previously been prone to errors, missing critical data, or outdated information. By implementing this methodology, a robust and reliable foundation is established for developing a failure prediction model, fostering both efficiency and reliability in the maintenance process. The application of ML in this context produces demonstrably positive outcomes in the classification of stoppage events, underscoring its significant impact on industry operations.
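A toy sketch of the PCA stage followed by a simple classifier on the projected scores; the paper's actual ANN/SVM models are far richer, and the sensor values, power-iteration PCA, and nearest-centroid rule below are illustrative assumptions only:

```python
def top_principal_component(X, iters=200):
    """Leading eigenvector of the sample covariance matrix via
    power iteration: a pure-Python stand-in for the PCA stage."""
    n, p = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - mu[j] for j in range(p)] for row in X]
    cov = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
            for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        v = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return mu, v

def project(mu, v, x):
    # Score of one observation on the first principal component.
    return sum((xj - mj) * vj for xj, mj, vj in zip(x, mu, v))

# Hypothetical 2-sensor readings for two stoppage types.
A = [[0.0, 0.2], [1.0, 1.1], [2.0, 1.9], [3.0, 3.2]]          # type A
B = [[10.0, 9.8], [11.0, 11.2], [12.0, 12.1], [13.0, 12.9]]   # type B
mu, v = top_principal_component(A + B)
ca = sum(project(mu, v, x) for x in A) / len(A)  # class centroids in PC space
cb = sum(project(mu, v, x) for x in B) / len(B)

def classify(x):
    s = project(mu, v, x)
    return "type-A stoppage" if abs(s - ca) < abs(s - cb) else "type-B stoppage"
```

The design point the sketch illustrates is that PCA compresses correlated sensor channels before classification, which is what makes the downstream ANN/SVM models cheaper to train and less noise-sensitive.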

20.
PeerJ ; 12: e18192, 2024.
Article in English | MEDLINE | ID: mdl-39329141

ABSTRACT

The massive arrival of pelagic Sargassum on the coasts of several countries of the Atlantic Ocean began in 2011 and to date continues to generate social and environmental challenges for the region. Knowing the distribution and quantity of Sargassum in the ocean, on coasts, and on beaches is therefore necessary to understand the phenomenon and develop protocols for its management, use, and final disposal. In this context, the present study proposes a methodology to calculate the area Sargassum occupies on beaches, in square meters, based on the semantic segmentation of aerial images using the pix2pix architecture. For training and testing the algorithm, a unique dataset was built from scratch, consisting of 15,268 aerial images segmented into three classes. The images correspond to beaches in the cities of Mahahual and Puerto Morelos, located in Quintana Roo, Mexico. The fβ-score metric was used to analyze the results. For the Sargassum class, the results indicate a balance between false positives and false negatives, with a slight bias towards false negatives, meaning that the algorithm tends to underestimate the Sargassum pixels in the images. To estimate the confidence intervals within which the algorithm performs best, the f0.5-score results were resampled by bootstrapping, considering all classes and considering only the Sargassum class. From this analysis, we found that the algorithm performs better when segmenting Sargassum on sand. Finally, maps showing the Sargassum coverage area along the beach were produced to complement these results and provide insight into the field of study.
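The fβ evaluation and percentile bootstrap described above can be sketched directly; the pixel counts, resample count, and seed below are illustrative values, not the study's:

```python
import random

def f_beta(tp, fp, fn, beta):
    """F-beta from pixel counts; beta < 1 (e.g. 0.5) weights
    precision more heavily than recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean
    per-image f-score."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(scores) for _ in scores) / len(scores)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Pixel counts for one hypothetical image: precision 0.8, recall 0.6.
score = f_beta(60, 15, 40, beta=0.5)  # 0.75
```

A per-class bootstrap like this is what lets the study compare "all classes" against "Sargassum only" intervals rather than relying on a single point estimate.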


Subjects
Deep Learning , Sargassum , Mexico , Algorithms , Environmental Monitoring/methods , Atlantic Ocean , Humans , Satellite Imagery , Conservation of Natural Resources/methods , Beaches