Results 1 - 20 of 725
1.
Physiol Behav ; 287: 114696, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39293590

ABSTRACT

Behavior is fundamental to neuroscience research, providing insights into the mechanisms underlying thoughts, actions, and responses. Various model organisms, including mice, flies, and fish, are employed to understand these mechanisms. Zebrafish, in particular, serve as a valuable model for studying anxiety-like behavior, typically measured through the novel tank diving (NTD) assay. Traditional methods for analyzing NTD assays are either manually intensive or costly when using specialized software. To address these limitations, it is useful to develop methods for the automated analysis of zebrafish NTD assays using deep-learning models. In this study, we classified zebrafish based on their anxiety levels using DeepLabCut. Subsequently, based on a training dataset of image frames, we compared deep-learning models to identify the one best suited to classifying zebrafish as anxious or non-anxious, and found that specific architectures, such as InceptionV3, can perform this classification task effectively. Our findings suggest that these deep-learning models hold promise for automated behavioral analysis in zebrafish, offering an efficient and cost-effective alternative to traditional methods.
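The abstract does not state how per-frame classifier outputs are rolled up into one label per fish; a minimal stdlib sketch of one plausible rule (simple majority vote; function and label names are hypothetical) is:

```python
from collections import Counter

def classify_fish(frame_labels):
    """Aggregate per-frame predictions ('anxious' / 'non-anxious') into a
    single per-fish label by majority vote; ties default to 'non-anxious'."""
    counts = Counter(frame_labels)
    if counts["anxious"] > counts["non-anxious"]:
        return "anxious"
    return "non-anxious"
```

Other aggregation rules (e.g. averaging class probabilities across frames) would slot in the same place in the pipeline.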

2.
Sci Rep ; 14(1): 20754, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39237695

ABSTRACT

To ensure machining quality, it is crucial to predict tool wear accurately. In this paper, a novel deep learning-based model is proposed that synthesizes the advantages of power spectral density (PSD), convolutional neural networks (CNN), and the vision transformer (ViT), named PSD-CVT. PSD maps provide a comprehensive view of the spectral characteristics of the signals, making different signals easier to analyze and compare. The CNN focuses on local feature extraction, capturing local information such as the texture, edges, and shapes in the image, while the attention mechanism in the ViT effectively captures the global structure and long-range dependencies present in the image. Two fully connected layers with a ReLU activation are used to obtain the predicted tool wear values. Experimental results on the PHM 2010 dataset demonstrate that the proposed model achieves higher prediction accuracy than the CNN or ViT model alone and outperforms several existing methods in accurately predicting tool wear. The proposed prediction method can also be applied to tool wear prediction in other machining fields.
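As an illustration of the PSD step (not the authors' implementation, which presumably uses an FFT-based estimator), a naive periodogram can be written directly from the DFT definition:

```python
import cmath

def periodogram(x):
    """Naive power spectral density estimate: |DFT(x)|^2 / N per bin.
    O(N^2); real implementations would use an FFT."""
    n = len(x)
    psd = []
    for k in range(n):
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        psd.append(abs(s) ** 2 / n)
    return psd
```

A 2-D PSD map like the one fed to PSD-CVT would stack such spectra from successive signal windows.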

3.
Radiol Artif Intell ; 6(6): e230529, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39230423

ABSTRACT

Mammography screening supported by deep learning-based artificial intelligence (AI) solutions can potentially reduce workload without compromising breast cancer detection accuracy, but the site of deployment in the workflow might be crucial. This retrospective study compared three simulated AI-integrated screening scenarios with standard double reading with arbitration in a sample of 249 402 mammograms from a representative screening population. A commercial AI system replaced the first reader (scenario 1: integrated AIfirst), the second reader (scenario 2: integrated AIsecond), or both readers for triaging of low- and high-risk cases (scenario 3: integrated AItriage). AI threshold values were chosen based partly on previous validation and partly on setting the screen-read volume reduction at approximately 50% across scenarios. Detection accuracy measures were calculated. Compared with standard double reading, integrated AIfirst showed no evidence of a difference in accuracy metrics except for a higher arbitration rate (+0.99%, P < .001). Integrated AIsecond had lower sensitivity (-1.58%, P < .001), negative predictive value (NPV) (-0.01%, P < .001), and recall rate (-0.06%, P = .04) but a higher positive predictive value (PPV) (+0.03%, P < .001) and arbitration rate (+1.22%, P < .001). Integrated AItriage achieved higher sensitivity (+1.33%, P < .001), PPV (+0.36%, P = .03), and NPV (+0.01%, P < .001) but a lower arbitration rate (-0.88%, P < .001). Replacing one or both readers with AI seems feasible; however, the site of application in the workflow can have clinically relevant effects on accuracy and workload. Keywords: Mammography, Breast, Neoplasms-Primary, Screening, Epidemiology, Diagnosis, Convolutional Neural Network (CNN) Supplemental material is available for this article. Published under a CC BY 4.0 license.
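Scenario 3 (integrated AItriage) can be pictured as two-threshold routing of the AI score; the thresholds below are hypothetical placeholders, not the validated values used in the study:

```python
def triage(exams, low_t=0.1, high_t=0.9):
    """Route each (exam_id, ai_score) pair: scores below low_t are screened
    out as low risk, scores above high_t are flagged as high risk, and the
    middle band goes to standard double reading. Thresholds are hypothetical."""
    low, double_read, high = [], [], []
    for exam_id, score in exams:
        if score < low_t:
            low.append(exam_id)
        elif score > high_t:
            high.append(exam_id)
        else:
            double_read.append(exam_id)
    return low, double_read, high
```

Tuning `low_t` and `high_t` is what sets the roughly 50% screen-read volume reduction described above.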


Subject(s)
Breast Neoplasms , Feasibility Studies , Mammography , Humans , Mammography/methods , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/diagnosis , Retrospective Studies , Middle Aged , Artificial Intelligence , Aged , Early Detection of Cancer/methods , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Mass Screening/methods , Sensitivity and Specificity , Reproducibility of Results
4.
Bioengineering (Basel) ; 11(9)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39329609

ABSTRACT

Dermatological conditions are highly prevalent in humans and are caused by environmental and climatic fluctuations, among various other factors. Timely identification is the most effective remedy to prevent minor ailments from escalating into severe conditions. Diagnosing skin illnesses is consistently challenging for health practitioners. Presently, they rely on conventional methods, such as visual examination of the skin. State-of-the-art technologies can enhance the accuracy of skin disease diagnosis by utilizing data-driven approaches. This paper presents a Computer Assisted Diagnosis (CAD) framework developed to detect skin illnesses at an early stage. We propose a computationally efficient and lightweight deep-learning model that utilizes a CNN architecture, and then conduct thorough experiments to compare the performance of shallow and deep learning models. The CNN model under consideration consists of seven convolutional layers and obtained an accuracy of 87.64% when applied to three distinct disease categories. The studies were conducted on the International Skin Imaging Collaboration (ISIC) dataset, which consists exclusively of dermoscopic images. This study advances skin disease diagnostics by utilizing state-of-the-art technology, attaining high accuracy, and striving for efficiency improvements. The unique features and future considerations of this technology create opportunities for further advancements in the automated diagnosis of skin diseases and tailored treatment.

5.
Sensors (Basel) ; 24(17)2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39275594

ABSTRACT

Monolithic zirconia (MZ) crowns are widely utilized in dental restorations, particularly for substantial tooth structure loss. Visual, tactile, and radiographic examinations can be time-consuming and error-prone, which may delay diagnosis. Consequently, an objective, automatic, and reliable process is required for identifying dental crown defects. This study aimed to explore the potential of transforming acoustic emission (AE) signals with the continuous wavelet transform (CWT), combined with a Convolutional Neural Network (CNN), to assist in crack detection. A new CNN image segmentation model, based on multi-class semantic segmentation using Inception-ResNet-v2, was developed. Real-time detection of AE signals under loads that induce cracking provided significant insights into crack formation in MZ crowns. Pencil lead breaking (PLB) was used to simulate crack propagation. The CWT and CNN models were used to automate the crack classification process. The Inception-ResNet-v2 architecture with transfer learning categorized the cracks in MZ crowns into five groups: labial, palatal, incisal, left, and right. After 2000 epochs with a learning rate of 0.0001, the model achieved an accuracy of 99.4667%, demonstrating that deep learning significantly improved the localization of cracks in MZ crowns. This development can potentially aid dentists in clinical decision-making by facilitating the early detection and prevention of crack failures.
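The CWT step that turns a 1-D AE signal into a 2-D time-scale image for the CNN can be sketched with a Ricker (Mexican-hat) wavelet; this is a stdlib illustration of the transform, not the authors' implementation (wavelet choice and widths are assumptions):

```python
import math

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet sampled at `points` positions, width a."""
    out = []
    for i in range(points):
        t = i - (points - 1) / 2
        amp = 2 / (math.sqrt(3 * a) * math.pi ** 0.25)
        out.append(amp * (1 - (t / a) ** 2) * math.exp(-t * t / (2 * a * a)))
    return out

def cwt(signal, widths):
    """One row per wavelet width: convolution of the signal with the
    wavelet, yielding the time-scale map that is fed to the CNN as an image."""
    rows = []
    for a in widths:
        w = ricker(min(10 * int(a), len(signal)), a)
        row = []
        for i in range(len(signal)):
            acc = 0.0
            for j, wj in enumerate(w):
                k = i + j - len(w) // 2
                if 0 <= k < len(signal):
                    acc += signal[k] * wj
            row.append(acc)
        rows.append(row)
    return rows
```

A transient (crack-like) event in the signal shows up as a localized ridge in the resulting map.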


Subject(s)
Crowns , Deep Learning , Zirconium , Zirconium/chemistry , Humans , Neural Networks, Computer , Acoustics , Wavelet Analysis
6.
Heliyon ; 10(16): e36411, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253213

ABSTRACT

This study introduces a groundbreaking method to enhance the accuracy and reliability of emotion recognition systems by combining electrocardiogram (ECG) with electroencephalogram (EEG) data, using an eye-tracking gated strategy. Initially, we propose a technique to filter out irrelevant portions of emotional data by employing pupil diameter metrics from eye-tracking data. Subsequently, we introduce an innovative approach for estimating effective connectivity to capture the dynamic interaction between the brain and the heart during emotional states of happiness and sadness. Granger causality (GC) is estimated and utilized to optimize input for a highly effective pre-trained convolutional neural network (CNN), specifically ResNet-18. To assess this methodology, we employed EEG and ECG data from the publicly available MAHNOB-HCI database, using a 5-fold cross-validation approach. Our method achieved an impressive average accuracy and area under the curve (AUC) of 91.00% and 0.97, respectively, for GC-EEG-ECG images processed with ResNet-18. Comparative analysis with state-of-the-art studies clearly shows that augmenting ECG with EEG and refining data with an eye-tracking strategy significantly enhances emotion recognition performance across various emotions.
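The Granger-causality idea above reduces to asking whether one signal's past improves prediction of another beyond the other's own past. A crude lag-1 stdlib sketch (stagewise single-predictor least squares, not the full multivariate estimator the study would use):

```python
import math

def _var(r):
    m = sum(r) / len(r)
    return sum((v - m) ** 2 for v in r) / len(r)

def _ols_resid(ys, xs):
    # One-predictor least squares; returns residuals of ys ~ a*xs + c.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((v - mx) ** 2 for v in xs) or 1e-12
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / denom
    c = my - a * mx
    return [yi - (a * xi + c) for xi, yi in zip(xs, ys)]

def granger_index(x, y):
    """Crude lag-1 Granger index for x -> y: log-ratio of y's residual
    variance predicted from its own past alone vs. with x's past added
    stagewise. Positive values suggest x's past helps predict y."""
    ys, y_past, x_past = y[1:], y[:-1], x[:-1]
    restricted = _ols_resid(ys, y_past)   # y from its own past
    full = _ols_resid(restricted, x_past) # add x's past stagewise
    return math.log(max(_var(restricted), 1e-12) / max(_var(full), 1e-12))
```

In the paper's pipeline, a matrix of such pairwise indices between EEG and ECG channels forms the GC image passed to ResNet-18.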

7.
Biol Methods Protoc ; 9(1): bpae063, 2024.
Article in English | MEDLINE | ID: mdl-39258158

ABSTRACT

Deep learning applications in taxonomic classification for animals and plants from images have become popular, while those for microorganisms are still lagging behind. Our study investigated the potential of deep learning for the taxonomic classification of hundreds of filamentous fungi from colony images, which is typically a task that requires specialized knowledge. We isolated soil fungi, annotated their taxonomy using standard molecular barcode techniques, and took images of the fungal colonies grown in petri dishes (n = 606). We applied a convolutional neural network with multiple training approaches and model architectures to deal with some common issues in ecological datasets: small amounts of data, class imbalance, and hierarchically structured grouping. Model performance was overall low, mainly due to the relatively small dataset, class imbalance, and the high morphological plasticity exhibited by fungal colonies. However, our approach indicates that morphological features like color, patchiness, and colony extension rate could be used for the recognition of fungal colonies at higher taxonomic ranks (i.e., phylum, class, and order). Model explanation implies that image recognition characters appear at different positions within the colony (e.g., outer or inner hyphae) depending on the taxonomic resolution. Our study suggests the potential of deep learning applications for a better understanding of the taxonomy and ecology of filamentous fungi amenable to axenic culturing. Meanwhile, our study also highlights some technical challenges of deep-learning image analysis in ecology, showing that the domain of applicability of these methods needs to be carefully considered.
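One standard remedy for the class imbalance mentioned above is inverse-frequency loss weighting; the paper does not state which scheme, if any, was used, so the following is only a minimal sketch of that common approach:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: the weight for class c is
    n_samples / (n_classes * count_c), so rare classes count more
    in the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * v) for c, v in counts.items()}
```

With these weights, a misclassified sample from a rare taxon contributes proportionally more to the loss than one from a common taxon.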

8.
J Insur Med ; 51(2): 64-76, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39266002

ABSTRACT

Recent artificial intelligence (AI) advancements in cardiovascular medicine offer potential enhancements in diagnosis, prediction, treatment, and outcomes. This article aims to provide a basic understanding of AI-enabled ECG technology. Specific conditions and findings are discussed, followed by a review of associated terminology and methodology. In the appendix, the definitions of AUC and accuracy are explained. The application of deep learning models enables detecting diseases from normal electrocardiograms at an accuracy not previously achieved by technology or human experts. Results with the AI-enabled ECG are encouraging, as they considerably exceed current screening models for specific conditions (i.e., atrial fibrillation, left ventricular dysfunction, aortic stenosis, and hypertrophic cardiomyopathy). This could potentially lead to a revitalization of the ECG in the insurance domain. While we embrace the findings of this rapidly evolving technology, cautious optimism is still necessary at this point.
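Since the article contrasts AUC with accuracy, the rank interpretation of ROC AUC (the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, ties counted half) can be sketched directly:

```python
def auc(scores_pos, scores_neg):
    """Rank-based ROC AUC: fraction of (positive, negative) pairs where
    the positive case outranks the negative one, counting ties as 0.5.
    O(n*m) pairwise form; production code would sort instead."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Unlike accuracy, this quantity is independent of any single decision threshold, which is why the two metrics can diverge.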


Subject(s)
Artificial Intelligence , Electrocardiography , Humans , Electrocardiography/methods , Deep Learning , Atrial Fibrillation/diagnosis
9.
Curr Med Imaging ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39297463

ABSTRACT

BACKGROUND: Brain tumours represent a diagnostic challenge, especially in imaging, where normal and pathologic tissues must be differentiated precisely. Up-to-date machine learning techniques can greatly improve the accuracy of brain tumor identification from MRI data. OBJECTIVE: This paper evaluates the efficiency of a federated learning method that joins two classifiers, convolutional neural networks (CNNs) and random forests (RF), with dual U-Net segmentation. This procedure benefits the image identification task on preprocessed, pre-categorized MRI scans. METHODS: In addition to using a variety of datasets, federated learning was utilized to train the CNN-RF model while taking data privacy into account. The MRI images were processed with Median, Gaussian, and Wiener filters to reduce noise and make feature extraction easy and efficient. The segmentation stage used a dual U-Net layout, and performance was assessed by precision, recall, F1-score, and accuracy. RESULTS: The model achieved excellent classification performance on local datasets, with macro, micro, and weighted averages ranging from 91.28% to 95.52%. Through federated averaging, the collective model reached 97% accuracy, compared with up to 99% for individual client models. The federated averaging method converts individual model insights into a consistent global model while keeping all personal data private. CONCLUSION: The combined structure of the federated learning framework, the CNN-RF hybrid model, and dual U-Net segmentation is a robust and privacy-preserving approach for identifying brain tumors in MRI images. The results of the present study show that the technique is promising for improving the quality of brain tumor categorization and provides a pathway for practical utilization in clinical settings.
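The federated-averaging step described above (client models combined into one global model without sharing raw data) follows the standard FedAvg rule; a minimal sketch with flat parameter lists standing in for the CNN weights:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: weight each client's parameter vector by its
    share of the total training data, then sum. client_weights is a list
    of equal-length parameter lists, client_sizes the per-client sample
    counts."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    agg = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, wi in enumerate(weights):
            agg[i] += wi * size / total
    return agg
```

Only these aggregated parameters ever leave a client, which is what makes the scheme privacy-preserving.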

10.
Sci Rep ; 14(1): 21643, 2024 09 16.
Article in English | MEDLINE | ID: mdl-39284813

ABSTRACT

The main bottleneck in training a robust tumor segmentation algorithm for non-small cell lung cancer (NSCLC) on H&E is generating sufficient ground truth annotations. Various approaches for generating tumor labels to train a tumor segmentation model were explored. A large dataset of low-cost, low-accuracy panCK-based annotations was used to pre-train the model and to determine the minimum required size of the expensive but highly accurate pathologist-annotated dataset. PanCK pre-training was compared to foundation models, and various architectures were explored for the model backbone. Proper study design and sample procurement for training a generalizable model that captures variations in NSCLC H&E were also studied. H&E imaging was performed on 112 samples (three centers, two scanner types, different staining and imaging protocols). An Attention U-Net architecture was trained using the large panCK-based annotations dataset (68 samples, total area 10,326 mm2) followed by fine-tuning using a small pathologist annotations dataset (80 samples, total area 246 mm2). This approach resulted in a mean intersection over union (mIoU) of 82% [77, 87]. PanCK pre-training provided better performance than foundation models and allowed a 70% reduction in pathologist annotations with no drop in performance. The study design ensured model generalizability over variations in H&E: performance was consistent across centers, scanners, and subtypes.
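The mIoU metric reported above has a direct definition; a stdlib sketch on flat binary masks (0 = background, 1 = tumor):

```python
def iou(pred, truth):
    """Intersection over union for two binary masks given as flat 0/1
    lists; defined as 1.0 when both masks are empty."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Average IoU over (prediction, ground truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)
```

In practice the masks are 2-D images; flattening them row by row gives the same score.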


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , Lung Neoplasms , Pathologists , Humans , Lung Neoplasms/pathology , Carcinoma, Non-Small-Cell Lung/pathology , Image Processing, Computer-Assisted/methods , Algorithms