Results 1 - 20 of 313
1.
Cereb Cortex ; 33(6): 2415-2425, 2023 03 10.
Article in English | MEDLINE | ID: mdl-35641181

ABSTRACT

Major depressive disorder (MDD) is the second leading cause of disability worldwide. Current structural magnetic resonance imaging-based MDD diagnosis models mainly utilize local grayscale information or morphological characteristics from a single site with small samples. Emerging evidence has demonstrated that different brain structures in different circuits have distinct developmental timing but mature in coordination within the same functional circuit. Thus, establishing an attention-guided unified classification framework with deep learning and individual structural covariance networks in a large multisite dataset could facilitate developing an accurate diagnosis strategy. Our results showed that attention-guided classification improved the classification accuracy from an initial 75.1% to 76.54%. Furthermore, the discriminative features of regional covariance connectivities and local structural characteristics were found to be mainly located in the prefrontal cortex, insula, superior temporal cortex, and cingulate cortex, which have been widely reported to be closely associated with depression. Our study demonstrated that our attention-guided unified deep learning framework may be an effective tool for MDD diagnosis. The identified covariance connectivities and structural features may serve as biomarkers for MDD.
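The attention weighting described above can be sketched minimally as a softmax over raw relevance scores for each feature source. The source names and scores below are purely illustrative assumptions, not taken from the paper.

```python
import math

def attention_fuse(feature_scores):
    """Softmax-attention fusion of per-source evidence scores.

    feature_scores: dict mapping a (hypothetical) feature source, e.g.
    'covariance_network' or 'local_structure', to a raw relevance score.
    Returns normalized attention weights that sum to 1.
    """
    exps = {k: math.exp(v) for k, v in feature_scores.items()}
    total = sum(exps.values())
    return {k: e / total for k, e in exps.items()}

# Illustrative scores only; the higher-scored source gets the larger weight.
weights = attention_fuse({"covariance_network": 2.0, "local_structure": 1.0})
```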


Subjects
Depressive Disorder, Major; Humans; Brain; Magnetic Resonance Imaging; Attention; Neural Networks, Computer
2.
BMC Med Imaging ; 24(1): 36, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38321373

ABSTRACT

BACKGROUND: Ultrasound imaging is the modality most frequently performed for patients with chronic hepatitis or liver cirrhosis. However, ultrasound imaging is highly operator dependent and the interpretation of ultrasound images is subjective, so a well-trained radiologist is required for evaluation. Automated classification of liver fibrosis could alleviate the shortage of skilled radiologists, especially in low-to-middle-income countries. The purpose of this study was to evaluate deep convolutional neural networks (DCNNs) for classifying the degree of liver fibrosis according to the METAVIR score using US images. METHODS: We used ultrasound (US) images from two tertiary university hospitals. A total of 7920 US images from 933 patients were used for training/validation of the DCNNs. All patients underwent liver biopsy or hepatectomy, and liver fibrosis was categorized based on pathology results using the METAVIR score. Five well-established DCNNs (VGGNet, ResNet, DenseNet, EfficientNet, and ViT) were implemented to predict the METAVIR score. The performance of the DCNNs for five-level (F0/F1/F2/F3/F4) classification was evaluated through the area under the receiver operating characteristic curve (AUC) with 95% confidence intervals, accuracy, sensitivity, specificity, and positive and negative likelihood ratios. RESULTS: Similar mean AUC values were achieved by the five models: VGGNet (0.96), ResNet (0.96), DenseNet (0.95), EfficientNet (0.96), and ViT (0.95). The same mean accuracy (0.94) and specificity (0.96) were yielded by all models. In terms of sensitivity, EfficientNet achieved the highest mean value (0.85), while the other models produced slightly lower values ranging from 0.82 to 0.84. CONCLUSION: In this study, we demonstrated that DCNNs can classify the staging of liver fibrosis according to the METAVIR score with high performance using conventional B-mode images. Among them, EfficientNet, which has fewer parameters and a lower computational cost, produced the highest performance. From these results, we believe that DCNN-based classification of liver fibrosis may allow fast and accurate diagnosis of liver fibrosis without the need for additional equipment for add-on tests and may be a powerful tool for supporting radiologists in clinical practice.
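The five-level AUC evaluation reported above can be reproduced in miniature with a rank-based one-vs-rest AUC. This is a generic sketch of the metric, not the authors' code, and the toy inputs are illustrative.

```python
def auc_rank(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    drawn positive scores higher than a randomly drawn negative (ties = 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def one_vs_rest_auc(y_true, prob_matrix, n_classes):
    """Mean one-vs-rest AUC for a multi-class problem such as the
    five-level METAVIR (F0-F4) classification."""
    aucs = []
    for c in range(n_classes):
        pos = [row[c] for y, row in zip(y_true, prob_matrix) if y == c]
        neg = [row[c] for y, row in zip(y_true, prob_matrix) if y != c]
        aucs.append(auc_rank(pos, neg))
    return sum(aucs) / n_classes
```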


Subjects
Elasticity Imaging Techniques; Humans; Elasticity Imaging Techniques/methods; Liver Cirrhosis/pathology; Ultrasonography; ROC Curve; Neural Networks, Computer; Liver/diagnostic imaging
3.
BMC Med Imaging ; 24(1): 51, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38418987

ABSTRACT

Pulmonary diseases are various pathological conditions that affect respiratory tissues and organs, making gas exchange challenging during inhalation and exhalation. They range from mild and self-limiting conditions, such as the common cold and catarrh, to life-threatening ones, such as viral pneumonia (VP), bacterial pneumonia (BP), and tuberculosis, as well as severe acute respiratory syndromes, such as coronavirus disease 2019 (COVID-19). The cost of diagnosing and treating pulmonary infections is high, especially in developing countries, and since radiography images (X-ray and computed tomography (CT) scan images) have proven beneficial in detecting various pulmonary infections, many machine learning (ML) models and image processing procedures have been utilized to identify these infections. The need for timely and accurate detection can be lifesaving, especially during a pandemic. This paper therefore proposes a deep convolutional neural network (DCNN)-based image detection model, optimized with an image augmentation technique, to detect three different pulmonary diseases (COVID-19, bacterial pneumonia, and viral pneumonia). A dataset containing four classes (healthy (10,325), COVID-19 (3,749), BP (883), and VP (1,478)) was utilized as training/testing data for the model. The model's performance indicates high potential in detecting the three classes of pulmonary diseases. The model recorded average detection accuracies of 94%, 95.4%, 99.4%, and 98.30%, and a training/detection time of about 60/50 s. These results indicate the proficiency of the proposed approach compared with traditional texture-descriptor techniques for pulmonary disease recognition using X-ray and CT scan images. This study introduces an innovative deep convolutional neural network model to enhance the detection of pulmonary diseases like COVID-19 and pneumonia using radiography. This model, notable for its accuracy and efficiency, promises significant advancements in medical diagnostics, particularly beneficial in developing countries due to its potential to surpass traditional diagnostic methods.
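Image augmentation of the kind mentioned above can be illustrated with simple flips and rotations on images stored as nested lists of pixel values. This is a generic sketch, not the paper's augmentation pipeline.

```python
def hflip(img):
    """Horizontal flip of an image given as a list of pixel rows."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(images):
    """Expand a dataset with a flipped and a rotated copy of every image."""
    out = []
    for img in images:
        out.extend([img, hflip(img), rotate90(img)])
    return out
```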


Subjects
COVID-19; Deep Learning; Lung Diseases; Pneumonia, Bacterial; Pneumonia, Viral; Humans; COVID-19/diagnostic imaging; SARS-CoV-2; Pneumonia, Viral/diagnostic imaging; Pneumonia, Bacterial/diagnostic imaging
4.
BMC Med Imaging ; 24(1): 59, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459518

ABSTRACT

OBJECTIVE: This study aims to classify tongue lesion types from tongue images using Deep Convolutional Neural Networks (DCNNs). METHODS: A dataset consisting of five classes, four tongue lesion classes (coated, geographical, fissured tongue, and median rhomboid glossitis) and one healthy/normal tongue class, was constructed using tongue images of 623 patients who were admitted to our clinic. Classification performance was evaluated on VGG19, ResNet50, ResNet101, and GoogLeNet networks using a fusion-based majority voting (FBMV) approach for the first time in the literature. RESULTS: In the binary classification problem (normal vs. tongue lesion), the highest classification accuracy of 93.53% was achieved using ResNet101, and this rate was increased to 95.15% with the application of the FBMV approach. In the five-class classification problem of tongue lesion types, the VGG19 network yielded the best accuracy rate of 83.93%, and the fusion approach improved this rate to 88.76%. CONCLUSION: The test results showed that tongue lesions can be identified with high accuracy by applying DCNNs. Further improvement of these results has the potential to enable the use of the proposed method in clinical applications.
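The FBMV step can be sketched as a per-sample majority vote over the four networks' predicted labels. The tie-breaking rule below (lowest label wins) is an assumption for illustration; the abstract does not specify one.

```python
from collections import Counter

def fused_majority_vote(votes):
    """Fusion-based majority voting for one sample: each network casts one
    vote; the fused label is the mode (ties broken by the lowest label —
    an assumption, not specified in the paper)."""
    counts = Counter(votes)
    top = max(counts.values())
    return min(label for label, c in counts.items() if c == top)

def fuse_models(per_model_preds):
    """per_model_preds: list of prediction lists, one per network
    (e.g. VGG19, ResNet50, ResNet101, GoogLeNet)."""
    return [fused_majority_vote(v) for v in zip(*per_model_preds)]
```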


Subjects
Artificial Intelligence; Neural Networks, Computer; Humans; Tongue/diagnostic imaging; Hospitalization; Voting
5.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000965

ABSTRACT

To address the difficulty of extracting bearing fault features from vibration signals with strong background noise, coupled with the fact that one-dimensional (1D) signals provide limited fault information, an optimal time-frequency fusion symmetric dot pattern (SDP) method for bearing fault feature enhancement and diagnosis is proposed. Firstly, the vibration signals are transformed into two-dimensional (2D) features by the time-frequency fusion SDP algorithm, which can analyze signal fluctuations at small scales across multiple scales and enhance bearing fault features. Secondly, the bat algorithm is employed to optimize the SDP parameters adaptively, which effectively improves the distinctions between various types of faults. Finally, the fault diagnosis model is constructed with a deep convolutional neural network (DCNN). To validate the effectiveness of the proposed method, Case Western Reserve University's (CWRU) bearing fault dataset and a bearing fault dataset from a laboratory experimental platform were used. The experimental results illustrate that the fault diagnosis accuracy of the proposed method is 100%, which proves its feasibility and effectiveness. Comparisons with other 2D transformation methods show that the proposed method achieves the highest accuracy in bearing fault diagnosis, validating its superiority.
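A minimal sketch of the SDP mapping described above: each sample contributes a polar dot whose radius comes from its normalized amplitude and whose angular offset comes from the amplitude at a lag, mirrored across symmetric sectors to form the rose-like pattern. Parameter names and defaults here are illustrative assumptions, not values from the paper (the paper tunes them with the bat algorithm).

```python
import math

def sdp_points(signal, lag=1, base_angle=60.0, gain=30.0):
    """Symmetric dot pattern (SDP): map a 1D signal to 2D polar dots.

    For sample i: radius r(i) is the normalized amplitude, and the angular
    offset is the normalized amplitude at i + lag scaled by `gain` degrees.
    Repeating the dot at every multiple of `base_angle` yields the symmetry.
    """
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0
    pts = []
    for i in range(len(signal) - lag):
        r = (signal[i] - lo) / span
        offset = gain * (signal[i + lag] - lo) / span
        for k in range(int(360 / base_angle)):  # six-fold symmetry by default
            theta = math.radians(k * base_angle + offset)
            pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```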

6.
Sensors (Basel) ; 24(3)2024 Feb 04.
Article in English | MEDLINE | ID: mdl-38339723

ABSTRACT

Accurately extracting pixel-level buildings from high-resolution remote sensing images is significant for various geographical information applications. Influenced by different natural, cultural, and social development levels, buildings may vary in shape and distribution, making it difficult for a network to maintain a stable segmentation effect across different areas of an image. In addition, the complex spectra of features in remote sensing images can affect the extracted details of multi-scale buildings in different ways. To this end, this study selects parts of Xi'an City, Shaanxi Province, China, as the study area. A parallel-encoded building extraction network (MARS-Net) incorporating multiple attention mechanisms is proposed. MARS-Net builds its parallel encoder from a DCNN and a transformer to take advantage of their extraction of local and global features, respectively. According to the different depth positions of the network, coordinate attention (CA) and a convolutional block attention module (CBAM) are introduced to bridge the encoder and decoder, retaining richer spatial and semantic information during encoding, and dense atrous spatial pyramid pooling (DenseASPP) is added to capture multi-scale contextual information during the upsampling of the decoder layers. In addition, a spectral information enhancement module (SIEM) is designed in this study. SIEM further enhances building segmentation by blending and enhancing multi-band building information with relationships between bands. The experimental results show that MARS-Net achieves better extraction results and obtains more effective enhancement after adding SIEM. The IoU values on the self-built Xi'an and WHU building datasets are 87.53% and 89.62%, respectively, while the respective F1 scores are 93.34% and 94.52%.

7.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732936

ABSTRACT

Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a challenge for classification due to their visual similarity, leading to confusion among radiologists. To mitigate these issues, we created an automated system with a large data hub that contains 17 datasets of chest X-ray images comprising 71,096 images in total, and we aim to classify ten different disease classes. Because it combines various resources, our large data hub contains noise, annotations, class imbalances, data redundancy, etc. We conducted several image pre-processing techniques to eliminate noise and artifacts from the images, such as resizing, de-annotation, CLAHE, and filtering. An elastic deformation augmentation technique also generates a balanced dataset. Then, we developed DeepChestGNN, a novel medical image classification model utilizing a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture is very flexible when it comes to working with graph data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.


Subjects
Lung Diseases; Neural Networks, Computer; Humans; Lung Diseases/diagnostic imaging; Lung Diseases/diagnosis; Image Processing, Computer-Assisted/methods; Deep Learning; Algorithms; Lung/diagnostic imaging; Lung/pathology
8.
Amino Acids ; 55(9): 1121-1136, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37402073

ABSTRACT

The ongoing COVID-19 pandemic has caused dramatic loss of human life. There is an urgent need for safe and efficient anti-coronavirus infection drugs. Anti-coronavirus peptides (ACovPs) can inhibit coronavirus infection. With high-efficiency, low-toxicity, and broad-spectrum inhibitory effects on coronaviruses, they are promising candidates to be developed into a new type of anti-coronavirus drug. Wet-lab experiments are the traditional way of identifying ACovPs, but they are less efficient and more expensive. With the accumulation of experimental data on ACovPs, computational prediction provides a cheaper and faster way to find anti-coronavirus peptide candidates. In this study, we ensemble several state-of-the-art machine learning methodologies to build nine classification models for the prediction of ACovPs. These models were pre-trained using deep neural networks, and the performance of our ensemble model, ACP-Dnnel, was evaluated across three datasets and an independent dataset. We followed Chou's 5-step rules: (1) we constructed the benchmark datasets data1, data2, and data3 for training and testing, and introduced the independent validation dataset ACVP-M; (2) we analyzed the peptide sequence composition features of the benchmark datasets; (3) we constructed the ACP-Dnnel model with a deep convolutional neural network (DCNN) merged with bi-directional long short-term memory (BiLSTM) as the base model for pre-training to extract the features embedded in the benchmark datasets, and then nine classification algorithms were ensembled for classification prediction by voting; (4) tenfold cross-validation was introduced during the training process, and the final model performance was evaluated; (5) finally, we constructed a user-friendly web server accessible to the public at http://150.158.148.228:5000/ . The highest accuracy (ACC) of ACP-Dnnel reaches 97%, and the Matthew's correlation coefficient (MCC) value exceeds 0.9. On three different datasets, its average accuracy is 96.0%. On the latest independent validation dataset, ACP-Dnnel improved the MCC, SP, and ACC values by 6.2%, 7.5%, and 6.3%, respectively. These results suggest that ACP-Dnnel can be helpful for the laboratory identification of ACovPs, speeding up anti-coronavirus peptide drug discovery and development. The web server for anti-coronavirus peptide prediction is available at http://150.158.148.228:5000/ .
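The tenfold cross-validation in step (4) can be sketched as a simple index partition. The interleaved split below is a generic illustration, not the authors' exact protocol.

```python
def kfold_indices(n_samples, k=10):
    """K-fold cross-validation splits: returns (train_idx, test_idx) pairs
    covering every sample exactly once as test data. Folds are interleaved
    (sample i goes to fold i % k), a simple deterministic assignment."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for held_out in range(k):
        test = folds[held_out]
        train = sorted(j for f_i, f in enumerate(folds)
                       if f_i != held_out for j in f)
        splits.append((train, test))
    return splits
```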


Subjects
COVID-19; Pandemics; Humans; Peptides/pharmacology; Peptides/chemistry; Neural Networks, Computer; Algorithms; Machine Learning
9.
BMC Pulm Med ; 23(1): 474, 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38012620

ABSTRACT

The accurate recognition of malignant lung nodules on CT images is critical in lung cancer screening, which can offer patients the best chance of cure and significant reductions in mortality from lung cancer. The Convolutional Neural Network (CNN) has been proven a powerful method in medical image analysis. Radiomics, whose features are believed to be of interest based on expert opinion, enables high-throughput feature extraction from CT images. A Graph Convolutional Network explores the global context and makes inferences on both graph node features and relational structures. In this paper, we propose a novel fusion algorithm, RGD, for benign-malignant lung nodule classification, which incorporates radiomics and graph learning into multiple deep CNNs to form a more complete and distinctive feature representation and ensembles the predictions for robust decision-making. The proposed method was evaluated on the publicly available LIDC-IDRI dataset in a 10-fold cross-validation experiment and obtained an average accuracy of 93.25%, a sensitivity of 89.22%, a specificity of 95.82%, a precision of 92.46%, an F1 score of 0.9114, and an AUC of 0.9629. Experimental results illustrate that the RGD model achieves superior performance compared with state-of-the-art methods. Moreover, the effectiveness of the fusion strategy has been confirmed by extensive ablation studies. In the future, the proposed model, which performs well on pulmonary nodule classification on CT images, will be applied to increase confidence in the clinical diagnosis of lung cancer.


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Lung Neoplasms/pathology; Solitary Pulmonary Nodule/pathology; Early Detection of Cancer; Tomography, X-Ray Computed/methods; Algorithms; Radiographic Image Interpretation, Computer-Assisted/methods; Lung/pathology; Oligopeptides
10.
Cytopathology ; 34(5): 466-471, 2023 09.
Article in English | MEDLINE | ID: mdl-37350108

ABSTRACT

AIM: To evaluate the application of an artificial neural network in the detection of malignant cells in effusion samples. MATERIALS AND METHODS: In this retrospective study, we selected 90 cases of effusion cytology samples over 2 years. There were 52 cases of metastatic adenocarcinoma and 38 benign effusion samples. In each case, an average of five microphotographs of the representative areas was taken at 40× magnification from Papanicolaou-stained samples. A total of 492 images were obtained from these 90 cases. We applied a deep convolutional neural network (DCNN) model to identify malignant cells in the cytology images of effusion smears. The training was performed for 15 epochs. The model consisted of 783 layers with 188 convolution-max pool layers in between. RESULTS: In the test set, the DCNN model correctly identified 54 of 56 images of benign samples and 49 of 56 images of malignant samples. It showed 88% sensitivity, 96% specificity, and a 96% positive predictive value in the screening of malignant cases in effusion. The area under the receiver operating characteristic curve was 0.92. CONCLUSION: The DCNN is a unique technology that can detect malignant cells from cytological images. The model works rapidly, and there is no bias in cell selection or feature extraction. The present DCNN model is promising and can have a significant impact on the diagnosis of malignancy in cytology.
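The reported screening metrics follow directly from the confusion counts given in the abstract (49/56 malignant and 54/56 benign test images correct):

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and positive predictive value from
    confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Counts from the abstract: 49 of 56 malignant images detected (7 missed),
# 54 of 56 benign images correctly cleared (2 false positives).
sens, spec, ppv = screening_metrics(tp=49, fn=7, tn=54, fp=2)
```

Rounding these to two decimals recovers the 88% sensitivity, 96% specificity, and 96% PPV stated in the abstract.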


Subjects
Deep Learning; Neoplasms; Humans; Retrospective Studies; Sensitivity and Specificity; Neural Networks, Computer; Neoplasms/diagnosis
11.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514882

ABSTRACT

The demand for cybersecurity is growing to safeguard information flow and enhance data privacy. This paper proposes a novel authenticated public key elliptic curve scheme based on a deep convolutional neural network (APK-EC-DCNN) for cybersecurity image encryption applications. The elliptic curve discrete logarithm problem (EC-DLP) underpins the elliptic curve Diffie-Hellman key exchange (EC-DHKE), which is used to generate a shared session key serving as the chaotic system's initial conditions and control parameters. In addition, authenticity and confidentiality can be achieved based on ECC by sharing the EC parameters between two parties using the EC-DHKE algorithm. Moreover, the 3D Quantum Chaotic Logistic Map (3D QCLM) exhibits extremely chaotic bifurcation behavior and a high Lyapunov exponent, which can be exploited for high-level security. In addition, to achieve the authentication property, a secure hash function uses the output sequence of the DCNN and the output sequence of the 3D QCLM in the proposed authenticated expansion diffusion matrix (AEDM). Finally, a partial frequency-domain encryption (PFDE) technique is realized using the discrete wavelet transform to satisfy the requirements of robustness and a fast encryption process. Simulation results and security analysis demonstrate that the proposed encryption algorithm matches the performance of state-of-the-art techniques in terms of quality, security, and robustness against noise and signal-processing attacks.
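A heavily simplified stand-in for the chaotic-keystream idea above, using the classic 1D logistic map in place of the paper's 3D QCLM: in the actual scheme, the shared EC-DHKE session key would supply the seed and control parameter, and the map output would feed the diffusion stage.

```python
def logistic_keystream(seed, r=3.99, n=16, burn_in=100):
    """Keystream bytes from the 1D logistic map x -> r*x*(1-x).

    seed: initial x0 in (0, 1); r: control parameter in the chaotic regime.
    Both would be derived from the shared session key in a real scheme.
    A burn-in discards the transient so output starts deep in the orbit.
    """
    x = seed
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize orbit value to a byte
    return out
```

Identical seeds reproduce identical keystreams, which is what lets two parties holding the same session key decrypt each other's output.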

12.
Sensors (Basel) ; 23(2)2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36679455

ABSTRACT

Many individuals worldwide pass away as a result of inadequate procedures for prompt illness identification and subsequent treatment. A valuable life can be saved, or at least extended, with the early identification of serious illnesses, such as various cancers and other life-threatening conditions. The development of the Internet of Medical Things (IoMT) has made it possible for healthcare technology to offer the general public efficient medical services and make a significant contribution to patients' recoveries. By using IoMT to diagnose and examine BreakHis v1 400× breast cancer histology (BCH) scans, disorders may be quickly identified and appropriate treatment can be given to a patient. This can be achieved with imaging equipment capable of auto-analyzing acquired pictures. However, the majority of deep learning (DL)-based image classification approaches have a large number of parameters and are unsuitable for application in IoMT-centered imaging sensors. The goal of this study is to create a lightweight deep transfer learning (DTL) model suited for BCH scan examination with a good level of accuracy. In this study, a lightweight DTL-based model, "MobileNet-SVM", a hybrid of MobileNet and a Support Vector Machine (SVM), for auto-classifying BreakHis v1 400× BCH images is presented. When tested against a real dataset of BreakHis v1 400× BCH images, the suggested technique achieved a training accuracy of 100% on the training dataset. It also obtained an accuracy of 91% and an F1-score of 91.35 on the test dataset. Considering how complicated BCH scans are, the findings are encouraging. The MobileNet-SVM model is ideal for IoMT imaging equipment in addition to having a high degree of precision. According to the simulation findings, the suggested model requires little computation time.


Subjects
Internet of Things; Support Vector Machine; Humans; Diagnostic Imaging; Radionuclide Imaging; Internet
13.
Sensors (Basel) ; 23(3)2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36772219

ABSTRACT

Removing redundant features and improving classifier performance necessitates the use of meta-heuristic and deep learning (DL) algorithms in feature selection and classification problems. With the maturity of DL tools, many data-driven polarimetric synthetic aperture radar (POLSAR) representation models have been suggested, most of which are based on deep convolutional neural networks (DCNNs). In this paper, we propose a hybrid approach combining a new multi-objective binary chimp optimization algorithm (MOBChOA) and a DCNN for optimal feature selection. We implemented the proposed method to classify POLSAR images from San Francisco, USA. To do so, we first performed the necessary preprocessing, including speckle reduction, radiometric calibration, and feature extraction. After that, we implemented the proposed MOBChOA for optimal feature selection. Finally, we trained the fully connected DCNN to classify the pixels into specific land-cover labels. We evaluated the performance of the proposed MOBChOA-DCNN in comparison with nine competitive methods. Our experimental results with the POLSAR image datasets show that the proposed architecture performed well across the key optimization parameters. The proposed MOBChOA-DCNN selected the fewest features (27) and achieved the highest overall accuracy. The overall accuracy values of MOBChOA-DCNN on the training and validation datasets were 96.89% and 96.13%, respectively, which were the best results; the overall accuracy of SVM was 89.30%, the worst result. The results of the proposed MOBChOA on two real-world benchmark problems were also better than those of the other methods. Furthermore, it was shown that the MOBChOA-DCNN performed better than methods from previous studies.

14.
Sensors (Basel) ; 23(23)2023 Nov 21.
Article in English | MEDLINE | ID: mdl-38067688

ABSTRACT

Pavement surface maintenance is pivotal for road safety. A number of manual, time-consuming methods exist to examine pavement conditions and spot distresses. More recently, alternative pavement monitoring methods have been developed that take advantage of unmanned aerial systems (UASs). However, existing UAS-based approaches make use of either image or LiDAR data, which does not allow for exploring the complementary characteristics of the two systems. This study explores the feasibility of fusing UAS-based imaging and low-cost LiDAR data to enhance pavement crack segmentation using a deep convolutional neural network (DCNN) model. Three datasets were collected using two different UASs at varying flight heights, and two types of pavement distress were investigated, namely cracks and sealed cracks. Four different imaging/LiDAR fusion combinations were created, namely RGB, RGB + intensity, RGB + elevation, and RGB + intensity + elevation. A modified U-net with residual blocks inspired by ResNet was adopted for enhanced pavement crack segmentation. Comparative analyses were conducted against state-of-the-art networks, namely the U-net and FPHBN networks, demonstrating the superiority of the developed DCNN in terms of accuracy and generalizability. Using the RGB case of the first dataset, the obtained precision, recall, and F-measure are 77.48%, 87.66%, and 82.26%, respectively. The fusion of the geometric information from the elevation layer with the RGB images led to a 2% increase in recall. Fusing the intensity layer with the RGB images yielded reductions of approximately 2%, 8%, and 5% in precision, recall, and F-measure, respectively. This is attributed to the low spatial resolution and high point cloud noise of the LiDAR sensor used. The crack samples of the second dataset yielded largely similar results to those of the first dataset. In the third dataset, capturing higher-resolution LiDAR data at a lower altitude led to improved recall, indicating finer crack detail detection. This fusion, however, led to a decrease in precision due to point cloud noise, which caused misclassifications. In contrast, for the sealed cracks, the addition of LiDAR data improved the sealed crack segmentation by about 4% and 7% in the second and third datasets, respectively, compared to the RGB cases.
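The pixel-wise precision, recall, and F-measure used above can be computed from binary masks as follows (a generic sketch of the metrics, not the study's code):

```python
def crack_segmentation_scores(pred_mask, true_mask):
    """Pixel-wise precision, recall, and F-measure for binary crack masks,
    each given as a flat list of 0/1 pixel labels."""
    tp = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    fp = sum(1 for p, t in zip(pred_mask, true_mask) if p and not t)
    fn = sum(1 for p, t in zip(pred_mask, true_mask) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```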

15.
Sensors (Basel) ; 23(17)2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37687801

ABSTRACT

In this paper, we present a comprehensive assessment of individuals' mental engagement states during manual and autonomous driving scenarios using a driving simulator. Our study employed two sensor fusion approaches, combining the data and features of multimodal signals. Participants in our experiment were equipped with Electroencephalogram (EEG), Skin Potential Response (SPR), and Electrocardiogram (ECG) sensors, allowing us to collect the corresponding physiological signals. To facilitate the real-time recording and synchronization of these signals, we developed a custom-designed Graphical User Interface (GUI). The recorded signals were pre-processed to eliminate noise and artifacts. Subsequently, the cleaned data were segmented into 3 s windows and labeled according to the drivers' high or low mental engagement states during manual and autonomous driving. To implement the sensor fusion approaches, we utilized two different architectures based on deep Convolutional Neural Networks (ConvNets), specifically the Braindecode Deep4 ConvNet model. The first architecture consisted of four convolutional layers followed by a dense layer; this model processed the synchronized experimental data as a single 2D array input. We also proposed a novel second architecture comprising three branches of the same ConvNet model, each with four convolutional layers, followed by a concatenation layer for integrating the ConvNet branches and, finally, two dense layers; this model received the experimental data from each sensor as a separate 2D array input for each ConvNet branch. Both architectures were evaluated using a Leave-One-Subject-Out (LOSO) cross-validation approach. For both cases, we compared the results obtained using only EEG signals with those obtained by adding the SPR and ECG signals. In particular, the second fusion approach, using all sensor signals, achieved the highest accuracy, reaching 82.0%. This outcome demonstrates that our proposed architecture, particularly when integrating EEG, SPR, and ECG signals at the feature level, can effectively discern the mental engagement of drivers.
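The 3 s windowing step can be sketched as fixed-size slicing of the synchronized sample stream. The sampling rate and the dropping of trailing partial windows below are illustrative assumptions, not details from the paper.

```python
def segment_windows(samples, fs, window_s=3.0):
    """Split a synchronized recording into fixed-length windows (3 s in the
    study). samples: list of per-instant feature vectors (or scalars);
    fs: sampling rate in Hz. Trailing partial windows are dropped."""
    size = int(fs * window_s)
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, size)]
```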


Subjects
Artifacts; Culture; Humans; Electrocardiography; Electroencephalography; Neural Networks, Computer
16.
Sensors (Basel) ; 23(18)2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37765824

ABSTRACT

Too often, the testing and evaluation of object detection and classification techniques for high-resolution remote sensing imagery are confined to clean, discretely partitioned datasets, i.e., the closed-world model. In recent years, performance on a number of benchmark datasets has exceeded 99% when evaluated using cross-validation techniques. However, real-world remote sensing data are truly big data, often exceeding billions of pixels. Therefore, one of the greatest challenges in evaluating machine learning models taken out of the clean laboratory setting and into the real world is the difficulty of measuring performance. It is necessary to evaluate these models on a grander scale, namely tens of thousands of square kilometers, where it is intractable to obtain ground truth for the ever-changing anthropogenic surface of the Earth. The ultimate goal of computer vision model development for automated analysis and broad-area search and discovery is to augment and assist humans, specifically through human-machine teaming for real-world tasks. In this research, various models have been trained using object classes from benchmark datasets such as UC Merced, PatternNet, RESISC-45, and MDSv2. We detail techniques to scan broad swaths of the Earth with deep convolutional neural networks. We present algorithms for localizing object detection results, as well as a methodology for evaluating the results of broad-area scans. Our research explores the challenges of transitioning these models out of the training-validation laboratory setting and into the real-world application domain. We show a scalable approach that leverages state-of-the-art deep convolutional neural networks for the search, detection, and annotation of objects within large swaths of imagery, with the ultimate goal of providing a methodology for evaluating object detection machine learning models in real-world scenarios.
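Scanning broad swaths with a DCNN typically reduces to tiling the scene into overlapping patches and running inference per tile. The clamping of the last row/column below, so no pixels are missed, is one common convention, not necessarily the authors' exact scheme.

```python
def tile_origins(width, height, tile, stride):
    """Upper-left corners of tiles covering a large scene. The final
    row/column is clamped to the image edge so every pixel falls in
    at least one tile, even when stride does not divide the extent."""
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    if width > tile and xs[-1] != width - tile:
        xs.append(width - tile)   # clamp last column to the right edge
    if height > tile and ys[-1] != height - tile:
        ys.append(height - tile)  # clamp last row to the bottom edge
    return [(x, y) for y in ys for x in xs]
```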

17.
Sensors (Basel) ; 23(6)2023 Mar 08.
Artigo em Inglês | MEDLINE | ID: mdl-36991642

RESUMO

Lung cancer is a high-risk disease that causes mortality worldwide; nevertheless, lung nodules are the main manifestation that can help to diagnose lung cancer at an early stage, lowering the workload of radiologists and boosting the rate of diagnosis. Artificial intelligence-based neural networks are promising technologies for automatically detecting lung nodules using data acquired from sensors through an Internet-of-Things (IoT)-based patient monitoring system. However, standard neural networks rely on manually acquired features, which reduces the effectiveness of detection. In this paper, we provide a novel IoT-enabled healthcare monitoring platform and an improved grey wolf optimization (IGWO)-based deep convolutional neural network (DCNN) model for lung cancer detection. The Tasmanian Devil Optimization (TDO) algorithm is utilized to select the most pertinent features for diagnosing lung nodules, and the convergence rate of the standard grey wolf optimization (GWO) algorithm is modified, resulting in an improved GWO algorithm. Consequently, an IGWO-based DCNN is trained on the optimal features obtained from the IoT platform, and the findings are saved in the cloud for the doctor's judgment. The model is built on an Android platform with DCNN-enabled Python libraries, and the findings are evaluated against cutting-edge lung cancer detection models.
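For reference, the position update at the heart of grey wolf optimization, which the IGWO variant above modifies, looks roughly like this; the nonlinear decay of the coefficient `a` is an illustrative stand-in for the paper's convergence-rate tweak, not the authors' exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_minimize(f, dim, n_wolves=10, iters=50, lo=-5.0, hi=5.0):
    """Textbook grey wolf optimizer: each wolf moves toward the three
    best solutions (alpha, beta, delta) under a decaying coefficient a."""
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2.0 * (1 - (t / iters) ** 2)  # nonlinear decay (illustrative IGWO-style tweak)
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                new_pos += leader - A * D
            X[i] = np.clip(new_pos / 3.0, lo, hi)
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)]

# Minimize the sphere function as a smoke test.
best = gwo_minimize(lambda x: np.sum(x ** 2), dim=3)
```

In the paper this search drives DCNN training rather than a toy objective, but the alpha/beta/delta update is the same mechanism.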


Assuntos
Inteligência Artificial , Neoplasias Pulmonares , Humanos , Detecção Precoce de Câncer , Redes Neurais de Computação , Algoritmos , Neoplasias Pulmonares/diagnóstico , Atenção à Saúde
18.
J Digit Imaging ; 36(5): 2025-2034, 2023 10.
Artigo em Inglês | MEDLINE | ID: mdl-37268841

RESUMO

Ankylosing spondylitis (AS) is a chronic inflammatory disease that causes inflammatory low back pain and may even limit activity. The grading of sacroiliitis on imaging plays a central role in diagnosing AS. However, the grading of sacroiliitis on computed tomography (CT) images is viewer-dependent and may vary between radiologists and medical institutions. In this study, we aimed to develop a fully automatic method to segment the sacroiliac joint (SIJ) and then grade sacroiliitis associated with AS on CT. We studied 435 CT examinations from patients with AS and controls at two hospitals. No-new-UNet (nnU-Net) was used to segment the SIJ, and a 3D convolutional neural network (CNN) was used to grade sacroiliitis with a three-class method, using the grading results of three veteran musculoskeletal radiologists as the ground truth. We defined grades 0-I as class 0, grade II as class 1, and grades III-IV as class 2 according to the modified New York criteria. nnU-Net segmentation of the SIJ achieved Dice, Jaccard, and relative volume difference (RVD) coefficients of 0.915, 0.851, and 0.040 with the validation set, respectively, and 0.889, 0.812, and 0.098 with the test set, respectively. The areas under the curves (AUCs) for classes 0, 1, and 2 using the 3D CNN were 0.91, 0.80, and 0.96 with the validation set, respectively, and 0.94, 0.82, and 0.93 with the test set, respectively. The 3D CNN was superior to the junior and senior radiologists in the grading of class 1 for the validation set and inferior to the expert for the test set (P < 0.05). The fully automatic method constructed in this study could be used for SIJ segmentation and then accurate grading and diagnosis of sacroiliitis associated with AS on CT images, especially for class 0 and class 2. The method was less effective for class 1 but still more accurate than the senior radiologist.
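The Dice, Jaccard, and RVD coefficients reported for the SIJ segmentations are standard overlap metrics for binary masks; a minimal sketch of the usual definitions (`overlap_metrics` and the toy masks are illustrative, not code from the study):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, Jaccard, and relative volume difference between a
    predicted binary mask and a ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    rvd = abs(int(pred.sum()) - int(gt.sum())) / gt.sum()
    return dice, jaccard, rvd

gt = np.zeros((8, 8), int); gt[2:6, 2:6] = 1      # 16 voxels
pred = np.zeros((8, 8), int); pred[2:6, 2:4] = 1  # 8 voxels, all inside gt
dice, jaccard, rvd = overlap_metrics(pred, gt)
# dice = 2*8/(8+16) = 0.667, jaccard = 8/16 = 0.5, rvd = 8/16 = 0.5
```

Note the complementary readings: Dice/Jaccard reward spatial overlap, while RVD only compares volumes, which is why all three are reported together.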


Assuntos
Sacroileíte , Espondilite Anquilosante , Humanos , Espondilite Anquilosante/diagnóstico , Sacroileíte/diagnóstico por imagem , Articulação Sacroilíaca/diagnóstico por imagem , Redes Neurais de Computação , Tomografia Computadorizada por Raios X/métodos , Processamento de Imagem Assistida por Computador/métodos
19.
J Digit Imaging ; 36(3): 1216-1236, 2023 06.
Artigo em Inglês | MEDLINE | ID: mdl-36650303

RESUMO

Medical imaging has attracted more attention due to the emerging design of wireless technologies, the internet, and data storage. These technologies have gained traction in medicine and the medical sciences, facilitating the diagnosis and treatment of different diseases in an effective manner. However, medical images are vulnerable to noise, which can make an image unclear and complicate identification. Thus, denoising is imperative for processing medical images. This paper devises a novel optimal deep convolutional neural network-based vectorial variation (ODVV) filter for denoising medical computed tomography (CT) images and Lena images. Here, the input medical images are fed to a noisy-pixel-map identification module wherein a deep convolutional neural network (Deep CNN) is adapted for discovering noisy pixel maps. Deep CNN training is done with the Adam algorithm. Once noisy pixels are identified, they are passed to a noise removal module driven by the proposed optimization algorithm, namely Feedback Artificial Lion (FAL), which is devised by combining the FAT and Lion algorithms. After noise removal, pixel enhancement is performed using the vectorial total variation norm to get the final pixel-enhanced image. The proposed FAL algorithm offered enhanced performance in contrast to other techniques, with the highest peak signal-to-noise ratio (PSNR) of 24.149 dB, highest second-derivative-like measure of enhancement (SDME) of 32.142 dB, highest structural similarity index (SSIM) of 0.800, and edge preservation index (EPI) of 0.9267.
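Of the figures of merit reported above, PSNR is the simplest to reproduce; a quick sketch of the standard definition (the 8-bit peak value and toy images are illustrative assumptions):

```python
import numpy as np

def psnr(reference, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the denoised
    image is closer to the reference."""
    mse = np.mean((reference.astype(float) - denoised.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
noisy = ref + 10.0  # uniform error of 10 gray levels -> MSE = 100
print(round(psnr(ref, noisy), 2))  # 28.13
```

Because PSNR depends only on MSE, it says nothing about structure or edges, which is why the paper pairs it with SSIM and EPI.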


Assuntos
Algoritmos , Tomografia Computadorizada por Raios X , Humanos , Razão Sinal-Ruído , Processamento de Imagem Assistida por Computador/métodos
20.
J Synchrotron Radiat ; 29(Pt 5): 1232-1240, 2022 Sep 01.
Artigo em Inglês | MEDLINE | ID: mdl-36073882

RESUMO

New developments at synchrotron beamlines and the ongoing upgrades of synchrotron facilities make it possible to study complex structures with a much better spatial and temporal resolution than ever before. The downside is that the collected data are also significantly larger (more than several terabytes) than ever before, and post-processing and analyzing these data manually is very challenging. This issue can be addressed by employing automated methods such as machine learning, which offer significantly better performance in data processing and image segmentation than manual methods. In this work, a 3D U-net deep convolutional neural network (DCNN) model with four layers and base-8 characteristic features has been developed to segment precipitates and porosities in synchrotron transmission X-ray micrographs. Transmission X-ray microscopy experiments were conducted on micropillars prepared from additively manufactured 316L steel to evaluate precipitate information. After training, the 3D U-net DCNN model was used on unseen data and its predictions were compared with manual segmentation; good agreement was found between the two. An ablation study revealed that the proposed model showed better statistics than models with fewer layers and/or characteristic features. The proposed model can segment several hundred gigabytes of data in a few minutes and could be applied to other materials and tomography techniques. The code and the fitted weights are made available with this paper for any interested researcher to use (https://github.com/manasvupadhyay/erc-gamma-3D-DCNN).
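Segmenting several hundred gigabytes in minutes implies block-wise inference, holding only one sub-volume in memory at a time; a minimal sketch with a thresholding stand-in for the trained 3D U-net (`segment_in_blocks` and the block size are illustrative assumptions, not the released code):

```python
import numpy as np

def segment_in_blocks(volume, predict, block=64):
    """Run a voxel-wise segmenter over a large 3D volume one block at
    a time, writing each block's mask into the output array."""
    out = np.zeros(volume.shape, dtype=np.uint8)
    for z in range(0, volume.shape[0], block):
        for y in range(0, volume.shape[1], block):
            for x in range(0, volume.shape[2], block):
                sl = (slice(z, z + block), slice(y, y + block), slice(x, x + block))
                out[sl] = predict(volume[sl])
    return out

# Stand-in "model": threshold the micrograph to mark bright precipitates.
vol = np.zeros((128, 128, 128), dtype=np.float32)
vol[10:20, 10:20, 10:20] = 1.0
mask = segment_in_blocks(vol, lambda b: (b > 0.5).astype(np.uint8))
print(int(mask.sum()))  # 1000
```

In practice the `predict` callable would wrap the 3D U-net (often with overlapping blocks to avoid seam artifacts), but the streaming pattern is the same.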


Assuntos
Imageamento Tridimensional , Síncrotrons , Imageamento Tridimensional/métodos , Redes Neurais de Computação , Porosidade , Tomografia , Raios X