Results 1 - 20 of 23
1.
Br J Haematol ; 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39024119

ABSTRACT

Palpebral conjunctival hue alteration is used for non-invasive screening for anaemia, although it is a qualitative measure. This study constructed machine/deep learning models for predicting haemoglobin values using 150 palpebral conjunctival images taken with a smartphone. The median haemoglobin value was 13.1 g/dL, and 10 patients had values <11 g/dL. A segmentation model using U-net was successfully constructed. The segmented images were fed into non-convolutional neural network (CNN)-based and CNN-based regression models for predicting haemoglobin values. The correlation coefficients between actual and predicted haemoglobin values were 0.38 and 0.44 for the non-CNN-based and CNN-based models, respectively. The sensitivity and specificity for anaemia detection were 13% and 98% for the non-CNN-based model and 20% and 99% for the CNN-based model. The performance of the CNN-based model did not improve with a mask layer guiding the model's attention towards the conjunctival regions; however, it improved slightly with correction for the aspect ratio and exposure time of the input images. The gradient-weighted class activation mapping heatmap indicated that the lower half of the conjunctiva was crucial for haemoglobin value prediction. In conclusion, the CNN-based model performed better than the non-CNN-based model, and prediction accuracy would likely improve with more input data from patients with anaemia.
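
As a rough illustration of the CNN-based regression described above (not the authors' exact architecture), the following sketch fine-tunes a pretrained backbone to output a single haemoglobin value; the backbone choice, image size, and training step are assumptions.

```python
# Hedged sketch: CNN regression from segmented conjunctival images to haemoglobin (g/dL).
# Not the published implementation; backbone, image size, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class HbRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single regression output

    def forward(self, x):          # x: (batch, 3, 224, 224) segmented conjunctival images
        return self.backbone(x).squeeze(1)

model = HbRegressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step with random tensors standing in for real data.
images = torch.randn(8, 3, 224, 224)
hb_values = torch.empty(8).uniform_(8.0, 16.0)     # haemoglobin targets in g/dL
loss = criterion(model(images), hb_values)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```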

2.
Traffic Inj Prev ; : 1-9, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39046244

ABSTRACT

OBJECTIVES: Aggressive driving behavior can lead to potential traffic collision risks, and abnormal weather conditions can exacerbate this behavior. This study aims to develop recognition models for aggressive driving under various weather conditions, addressing the challenge of collecting sufficient data in abnormal weather. METHODS: Driving data were collected in a virtual environment using a driving simulator under both normal and abnormal weather conditions. A model was trained on data from normal weather (source domain) and then transferred to foggy and rainy weather conditions (target domains) for retraining and fine-tuning. The K-means algorithm clustered driving behavior instances into three styles: aggressive, normal, and cautious. These clusters were used as labels for each instance in training a CNN model. The pre-trained CNN model was then transferred and fine-tuned for abnormal weather conditions. RESULTS: The transferred models showed improved recognition performance, achieving an accuracy of 0.81 in both foggy and rainy weather conditions, surpassing the non-transferred models' accuracies of 0.72 and 0.69, respectively. CONCLUSIONS: The study demonstrates the significant application value of transfer learning in recognizing aggressive driving behaviors with limited data. It also highlights the feasibility of using this approach to address the challenges of driving behavior recognition under abnormal weather conditions.
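
A minimal sketch of the clustering-then-transfer workflow described above, assuming tabular summary features per driving instance and a 1-D CNN over time-series windows; the feature set, window length, and layer sizes are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch: cluster driving instances into three styles, pretrain a CNN on
# normal-weather data, then fine-tune on a small abnormal-weather set.
# Feature dimensions and layer sizes are assumptions, not the study's exact setup.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def make_cnn(n_classes=3, n_channels=4, seq_len=128):
    return nn.Sequential(
        nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool1d(2),
        nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(64, n_classes),
    )

# 1) Label source-domain instances by clustering summary features into three driving styles.
source_features = np.random.rand(500, 6)            # e.g., mean speed, acceleration variance, ...
style_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(source_features)

# 2) Pretrain on normal-weather windows using the cluster labels (training loop omitted).
model = make_cnn()

# 3) Transfer: freeze the first convolutional block, fine-tune the rest on foggy/rainy data.
for layer in list(model.children())[:3]:
    for p in layer.parameters():
        p.requires_grad = False
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
```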

3.
Sci Total Environ ; 946: 174158, 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-38909816

ABSTRACT

Short-term exposure to ground-level ozone (O3) poses significant health risks, particularly respiratory and cardiovascular diseases, and mortality. This study addresses the pressing need for accurate O3 forecasting to mitigate these risks, focusing on South Korea. We introduce Deep Bias Correction (Deep-BC), a novel framework leveraging Convolutional Neural Networks (CNNs), to refine hourly O3 forecasts from the Community Multiscale Air Quality (CMAQ) model. Our approach involves training Deep-BC using data from 2016 to 2019, including CMAQ's 72-hour O3 forecasts, 31 meteorological variables from the Weather Research and Forecasting (WRF) model, and previous days' station measurements of 6 air pollutants. Deep-BC significantly outperforms CMAQ in 2021, reducing biases in O3 forecasts. Furthermore, we utilize Deep-BC's daily maximum 8-hour average O3 (MDA8 O3) forecasts as input for the AirQ+ model to assess O3's potential impact on mortality across seven major provinces of South Korea: Seoul, Busan, Daegu, Incheon, Daejeon, Ulsan, and Sejong. Short-term O3 exposure is associated with 0.40% to 0.48% of natural-cause and respiratory deaths and 0.67% to 0.81% of cardiovascular deaths. Gender-specific analysis reveals higher mortality rates among men, particularly from respiratory causes. Our findings underscore the critical need for region-specific interventions to address air pollution's detrimental effects on public health in South Korea. By providing improved O3 predictions and quantifying its impact on mortality, this research offers valuable insights for formulating targeted strategies to mitigate air pollution's adverse effects. Moreover, we highlight the urgency of proactive measures in health policies, emphasizing the significance of accurate forecasting and effective interventions to safeguard public health from the deleterious effects of air pollution.
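
A schematic of the bias-correction idea (not the Deep-BC architecture itself): a CNN takes the raw CMAQ hourly O3 forecast plus auxiliary channels and outputs a corrected forecast; the channel stacking, counts, and layer widths are assumptions.

```python
# Hedged sketch of CNN-based bias correction for a 72-hour O3 forecast.
# Input channels: CMAQ O3 forecast, WRF meteorological variables, and previous-day pollutant
# measurements broadcast over time. Channel counts and widths are illustrative assumptions.
import torch
import torch.nn as nn

class BiasCorrector(nn.Module):
    def __init__(self, in_channels=38):            # 1 O3 + 31 met + 6 pollutants (assumed stacking)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 1, kernel_size=1),        # corrected O3 at each forecast hour
        )

    def forward(self, x):                           # x: (batch, channels, 72 hours)
        return self.net(x).squeeze(1)

model = BiasCorrector()
forecast_inputs = torch.randn(4, 38, 72)
corrected_o3 = model(forecast_inputs)               # (4, 72) bias-corrected hourly O3
```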


Subject(s)
Air Pollutants , Air Pollution , Deep Learning , Ozone , Ozone/analysis , Republic of Korea , Air Pollutants/analysis , Air Pollution/statistics & numerical data , Humans , Risk Assessment/methods , Forecasting , Environmental Exposure/statistics & numerical data , Environmental Monitoring/methods , Cardiovascular Diseases/epidemiology
4.
Nanomaterials (Basel) ; 14(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38668211

ABSTRACT

In this research, a method was developed for fabricating Au-Au nanorod array substrates through the deposition of large-area Au nanostructures on an Au nanorod array using a galvanic cell reaction. The incorporation of a granular structure enhanced both the number and intensity of surface-enhanced Raman scattering (SERS) hot spots on the substrate, thereby elevating the SERS performance beyond that of substrates composed solely of an Au nanorod array. Calculations using the finite-difference time-domain method confirmed the generation of a strong electromagnetic field around the nanoparticles. Driven by the electromotive force, Au ions in the chloroauric acid solution were reduced to form nanostructures on the nanorod array. The size and distribution density of these granular nanostructures could be modulated by varying the reaction time and the concentration of chloroauric acid. The resulting Au-Au nanorod array substrate exhibited an active, uniform, and reproducible SERS effect. With 1,2-bis(4-pyridyl)ethylene as the probe molecule, the detection sensitivity of the Au-Au nanorod array substrate reached 10⁻¹¹ M, an improvement of five orders of magnitude over the substrate consisting only of an Au nanorod array. As a practical application, this substrate was utilized for the detection of pesticides, including thiram, thiabendazole, carbendazim, and phosmet, within the concentration range of 10⁻⁴ to 5 × 10⁻⁷ M. An analytical model combining a random forest and a one-dimensional convolutional neural network, referred to as the important variable-one-dimensional convolutional neural network model, was developed for the precise identification of thiram. This approach demonstrated significant potential for biochemical sensing and rapid on-site identification.
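
A minimal sketch of the "important variable + 1D CNN" idea described above: a random forest ranks spectral bins by importance, and a 1-D CNN then classifies spectra restricted to the top-ranked variables. The spectrum length, number of retained bins, and network shape are assumptions, not the published model.

```python
# Hedged sketch: random-forest variable importance followed by a 1-D CNN over SERS spectra.
# Data here are random stand-ins; dimensions and thresholds are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

spectra = np.random.rand(200, 1024)                 # 200 SERS spectra, 1024 wavenumber bins
labels = np.random.randint(0, 2, size=200)          # e.g., thiram present vs. absent

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(spectra, labels)
top_idx = np.argsort(rf.feature_importances_)[-256:]    # keep the 256 most informative bins
selected = spectra[:, np.sort(top_idx)]

cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 2),
)
logits = cnn(torch.tensor(selected, dtype=torch.float32).unsqueeze(1))   # (200, 2)
```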

5.
Heliyon ; 10(5): e26938, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468922

ABSTRACT

Coronavirus disease 2019 (COVID-19) emerged in Wuhan, China in 2019 and has spread throughout the world since 2020. Millions of people have been infected, and many have died. To limit the spread of COVID-19, all nations have adopted various precautions and restrictions. At the same time, infected persons need to be identified, isolated, and provided with medical treatment. Because of the limited availability of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, chest X-ray imaging has become an effective technique for diagnosing COVID-19. In this work, a hybrid deep learning CNN model is proposed for the diagnosis of COVID-19 using chest X-rays. The proposed model consists of a heading model and a base model. The base model utilizes two pre-trained deep learning structures, VGG16 and VGG19. The feature dimensions from these pre-trained models are reduced by incorporating different pooling layers, such as max and average pooling. In the heading part, dense layers of size three with different activation functions are added, and a dropout layer is included to avoid overfitting. Experimental analyses were conducted to compare the efficacy of the proposed hybrid deep learning model with existing transfer learning architectures such as VGG16, VGG19, EfficientNetB0 and ResNet50 using a COVID-19 radiology database. Various classification techniques, such as K-Nearest Neighbor (KNN), Naive Bayes, Random Forest, Support Vector Machine (SVM), and Neural Network, were also used for the performance comparison of the proposed model. The hybrid deep learning model with average pooling layers, combined with either a linear SVM or a neural network, achieved an accuracy of 92%. These proposed models can be employed to assist radiologists and physicians in reducing misdiagnosis rates and validating positive COVID-19 cases.
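
A minimal sketch of the hybrid idea described above: pooled features from pre-trained VGG16 and VGG19 are concatenated and passed to a linear SVM. The image size, average-pooling choice, and the SVM stage are assumptions broadly consistent with, but not identical to, the paper's heading/base design.

```python
# Hedged sketch: VGG16 + VGG19 feature extraction with average pooling, concatenation,
# and a linear SVM classifier. Stand-in data; not the authors' exact pipeline.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

vgg16 = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
vgg19 = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
gap = nn.AdaptiveAvgPool2d(1)                       # average pooling to reduce feature dimensions

def extract(x):
    with torch.no_grad():
        f16 = gap(vgg16(x)).flatten(1)              # (batch, 512)
        f19 = gap(vgg19(x)).flatten(1)              # (batch, 512)
    return torch.cat([f16, f19], dim=1).numpy()     # (batch, 1024) hybrid feature vector

# Stand-in data: chest X-rays resized to 224x224, labels 0 = normal, 1 = COVID-19.
images = torch.randn(16, 3, 224, 224)
labels = [0, 1] * 8
clf = LinearSVC().fit(extract(images), labels)
```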

6.
Methods ; 225: 62-73, 2024 May.
Article in English | MEDLINE | ID: mdl-38490594

ABSTRACT

The multipotent stem cells of our body have been widely harnessed in biotherapeutics. However, because they are derived from multiple anatomical sources and different tissues, human mesenchymal stem cells (hMSCs) are a heterogeneous population showing ambiguity in their in vitro behavior. Intra-clonal population heterogeneity has also been identified, and pre-clinical mechanistic studies suggest that these factors cumulatively diminish the therapeutic effects of hMSC transplantation. Although various biomarkers identify these specific stem cell populations, recent artificial intelligence-based methods have capitalized on the cellular morphologies of hMSCs, opening a new approach to understanding their attributes. A robust and rapid platform is required to accommodate and eliminate the heterogeneity observed in the cell population and to standardize the quality of hMSC therapeutics globally. Here, we report our primary findings of morphological heterogeneity observed within and across two sources of hMSCs, namely stem cells from human exfoliated deciduous teeth (SHEDs) and human Wharton's jelly mesenchymal stem cells (hWJ MSCs), using real-time single-cell images generated on immunophenotyping by imaging flow cytometry (IFC). We used the ImageJ software for identification and comparison between the two types of hMSCs using statistically significant, biologically relevant morphometric descriptors. To expand on these insights, we applied deep learning methods and report the development of a convolutional neural network-based image classifier, using transfer learning for binary classification and achieving an accuracy of 97.54%. We also critically discuss the challenges, comparisons between solutions, and future directions of machine learning in hMSC classification for biotherapeutics.


Subject(s)
Machine Learning , Mesenchymal Stem Cells , Single-Cell Analysis , Humans , Mesenchymal Stem Cells/cytology , Single-Cell Analysis/methods , Immunophenotyping/methods , Flow Cytometry/methods , Tooth, Deciduous/cytology , Image Processing, Computer-Assisted/methods , Wharton Jelly/cytology , Cells, Cultured
7.
J Transl Med ; 22(1): 162, 2024 02 16.
Article in English | MEDLINE | ID: mdl-38365732

ABSTRACT

BACKGROUND: Epilepsy is a common neurological disorder that affects approximately 60 million people worldwide. Characterized by unpredictable abnormalities in neural electrical activity, it results in seizures of varying intensity. Electroencephalography (EEG), as a crucial technology for monitoring and predicting epileptic seizures, plays an essential role in improving the quality of life of people with epilepsy. METHOD: This study introduces an innovative deep learning model, the lightweight triscale yielding convolutional neural network (LTY-CNN), specifically designed for EEG signal analysis. The model integrates a parallel convolutional structure with a multihead attention mechanism to capture complex EEG signal features across multiple scales and to enhance efficiency when processing time-series data. The lightweight design of the LTY-CNN enables it to maintain high performance in environments with limited computational resources while preserving the interpretability and maintainability of the model. RESULTS: In tests conducted on the SWEC-ETHZ and CHB-MIT datasets, the LTY-CNN demonstrated outstanding performance. On the SWEC-ETHZ dataset, the LTY-CNN achieved an accuracy of 99.9%, an area under the receiver operating characteristic curve (AUROC) of 0.99, a sensitivity of 99.9%, and a specificity of 98.8%. On the CHB-MIT dataset, it recorded an accuracy of 99%, an AUROC of 0.932, a sensitivity of 99.1%, and a specificity of 93.2%. These results signify the remarkable ability of the LTY-CNN to distinguish between epileptic seizures and non-seizure events. Compared to other existing epilepsy detection classifiers, the LTY-CNN attained higher accuracy and sensitivity. CONCLUSION: The high accuracy and sensitivity of the LTY-CNN model demonstrate its significant potential for epilepsy management, particularly in predicting and mitigating epileptic seizures. Its value in personalized treatment and broad clinical application reflects the wide prospects of deep learning in the healthcare sector and highlights the crucial role of technological innovation in enhancing patients' quality of life.
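
A sketch in the spirit of the description above (a parallel multi-scale convolutional block followed by multihead attention over EEG windows); it is not the published LTY-CNN, and the channel counts, kernel sizes, and attention heads are assumptions.

```python
# Hedged sketch: parallel multi-scale 1-D convolutions + multi-head attention for EEG
# seizure vs. non-seizure classification. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleAttentionEEG(nn.Module):
    def __init__(self, n_channels=23, n_classes=2, d_model=96):
        super().__init__()
        # Three parallel branches with different kernel sizes capture multi-scale features.
        self.branches = nn.ModuleList([
            nn.Conv1d(n_channels, d_model // 3, kernel_size=k, padding=k // 2)
            for k in (3, 7, 15)
        ])
        self.attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                           # x: (batch, channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)    # (batch, d_model, time)
        feats = feats.transpose(1, 2)               # (batch, time, d_model) for attention
        attended, _ = self.attn(feats, feats, feats)
        return self.head(attended.mean(dim=1))      # pool over time, classify the window

model = MultiScaleAttentionEEG()
logits = model(torch.randn(4, 23, 1024))            # 4 EEG windows of 1024 samples each
```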


Subject(s)
Epilepsy , Quality of Life , Humans , Seizures/diagnosis , Epilepsy/diagnosis , Neural Networks, Computer , Electroencephalography/methods , Technology , Algorithms
8.
Sensors (Basel) ; 23(18)2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37765819

ABSTRACT

The reliable and safe operation of industrial systems requires detecting and diagnosing bearing faults as early as possible. Intelligent fault diagnosis systems that use deep learning convolutional neural network (CNN) techniques have achieved a great deal of success in recent years. In a traditional CNN, the final layers are fully connected, with every neuron linked to all neurons of the preceding layer. However, these fully connected layers have the disadvantage of too many trainable parameters, which lengthens model training and testing time and promotes overfitting. Additionally, because the working load is constantly changing and noise from the place of operation is unavoidable, the efficiency of intelligent fault diagnosis techniques is greatly reduced. In this research, we propose a novel technique that can effectively solve these problems of the traditional CNN and accurately identify bearing faults. Firstly, the best pre-trained CNN model is identified by considering the classification success rate for bearing fault diagnosis. Secondly, the selected CNN model is modified to effectively reduce its parameter count, overfitting, and computation time. Finally, the best classifier is identified to form a hybrid model that achieves the best performance. The proposed technique is found to perform well under different load conditions, even in noisy environments with variable signal-to-noise ratio (SNR) values. Our experimental results confirm that the proposed method is highly reliable and efficient in detecting and classifying bearing faults.
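
The abstract states only that the selected pre-trained CNN is modified to reduce its parameter count; one common modification of this kind, shown below purely as an assumed illustration, replaces the large fully connected head with global average pooling and a single small linear layer (applied here to VGG16, which may not be the model the authors selected).

```python
# Hedged sketch: shrinking a pre-trained CNN's fully connected head for fault classification.
# The use of VGG16, the GAP replacement, and the class count are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
n_fault_classes = 10                                 # e.g., bearing fault types/severities (assumed)

# Original head: three dense layers with roughly 120M parameters.
# Replacement: global average pooling followed by one small linear layer.
backbone.avgpool = nn.AdaptiveAvgPool2d(1)
backbone.classifier = nn.Linear(512, n_fault_classes)

signals_as_images = torch.randn(8, 3, 224, 224)      # vibration signals converted to 2-D inputs (assumed)
logits = backbone(signals_as_images)
```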

9.
Sensors (Basel) ; 23(18)2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37765976

ABSTRACT

Vehicle make and model recognition (VMMR) is an important aspect of intelligent transportation systems (ITS). In VMMR systems, surveillance cameras capture vehicle images for real-time vehicle detection and recognition. These captured images pose challenges, including shadows, reflections, changes in weather and illumination, occlusions, and perspective distortion. Another significant challenge in VMMR is multiclass classification, which involves two main issues: (a) multiplicity and (b) ambiguity. Multiplicity concerns the different forms among car models manufactured by the same company, while ambiguity arises when multiple models from the same manufacturer have visually similar appearances or when vehicle models of different makes have visually comparable rear/front views. This paper introduces a novel and robust VMMR model that addresses the above-mentioned issues with accuracy comparable to state-of-the-art methods. Our proposed hybrid CNN model selects the most descriptive fine-grained features with the help of Fisher Discriminative Least Squares Regression (FDLSR). These features are extracted from a deep CNN model fine-tuned on the fine-grained vehicle datasets Stanford-196 and BoxCars21k. Using ResNet-152 features, our proposed model outperformed the SVM and FC layers in accuracy by 0.5% and 4% on Stanford-196 and by 0.4% and 1% on BoxCars21k, respectively. Moreover, this model is well suited for small-scale fine-grained vehicle datasets.

10.
J Arrhythm ; 39(4): 664-668, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37560272

ABSTRACT

Background: Cardiomyocytes derived from human iPS cells (hiPSCs) include cells showing sinoatrial node (SAN)-type and non-SAN-type spontaneous action potentials (APs). Objectives: To examine whether deep learning technology could identify hiPSC-derived SAN-like cells showing SAN-type APs by their shape. Methods: We acquired phase-contrast images of hiPSC-derived SHOX2/HCN4 double-positive SAN-like and non-SAN-like cells and built a VGG16-based CNN model to classify an input image as a SAN-like or non-SAN-like cell, comparing its performance with human discriminability. Results: All parameter values obtained from the trained CNN model, such as accuracy, recall, specificity, and precision, were higher than those of human classification. Conclusions: Deep learning technology could identify hiPSC-derived SAN-like cells with considerable accuracy.

11.
Arab J Sci Eng ; : 1-13, 2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37361471

ABSTRACT

Lung opacities are extremely important for physicians to monitor and can have irreversible consequences for patients if misdiagnosed or confused with other findings. Therefore, long-term monitoring of regions of lung opacity is recommended by physicians. Tracking the regional dimensions in images and classifying differences from other lung conditions can considerably ease physicians' work. Deep learning methods can readily be used for the detection, classification, and segmentation of lung opacity. In this study, a three-channel fusion CNN model is applied to effectively detect lung opacity on a balanced dataset compiled from public datasets. The MobileNetV2 architecture is used in the first channel, the InceptionV3 model in the second channel, and the VGG19 architecture in the third channel. The ResNet architecture is used to transfer features from the previous layer to the current layer. In addition to being easy to implement, the proposed approach can also provide significant cost and time advantages to physicians. Our accuracy values on the newly compiled dataset for two-, three-, four-, and five-class lung opacity classification are 92.52%, 92.44%, 87.12%, and 91.71%, respectively.

12.
Diagnostics (Basel) ; 13(9)2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37174953

ABSTRACT

Brain tumor (BT) diagnosis is a lengthy process that requires great skill and expertise from radiologists. As the number of patients has grown, so has the amount of data to be processed, making previous techniques both costly and ineffective. Many researchers have examined a range of reliable and fast techniques for identifying and categorizing BTs. Recently, deep learning (DL) methods have gained popularity for creating computer algorithms that can quickly and reliably diagnose or segment BTs. To identify BTs in medical images, DL permits the use of a pre-trained convolutional neural network (CNN) model. The magnetic resonance imaging (MRI) images of BTs used here come from the brain tumor segmentation (BraTS) dataset, which was produced as a benchmark for developing and evaluating BT segmentation and diagnosis algorithms and contains 335 annotated MRI images. A deep CNN was utilized in the model-building process for segmenting BTs using the BraTS dataset. To train the model, a categorical cross-entropy loss function and the Adam optimizer were employed. The model's output successfully identified and segmented BTs in the dataset, attaining a validation accuracy of 98%.
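
A minimal sketch of the training setup mentioned above: a small encoder-decoder CNN for tumour segmentation trained with categorical cross-entropy and Adam. The architecture, image size, and number of classes are assumptions, not the paper's exact model.

```python
# Hedged sketch: per-pixel tumour segmentation trained with categorical cross-entropy + Adam.
# Stand-in MRI slices and masks; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

seg_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(32, 2, 1),                             # per-pixel logits: background vs. tumour
)

criterion = nn.CrossEntropyLoss()                    # categorical cross-entropy over pixel classes
optimizer = torch.optim.Adam(seg_net.parameters(), lr=1e-3)

mri = torch.randn(2, 1, 128, 128)                    # stand-in MRI slices
mask = torch.randint(0, 2, (2, 128, 128))            # stand-in ground-truth segmentation masks
loss = criterion(seg_net(mri), mask)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```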

13.
Sensors (Basel) ; 23(4)2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36850816

ABSTRACT

MicroRNAs (miRNAs) are small, non-coding regulatory molecules whose alteration can result in abnormal gene expression in the downstream pathways of their targets. miRNA gene variants can affect miRNA transcription, maturation, or target selectivity, impairing their roles in plant growth and stress responses. miRNA-based Simple Sequence Repeats (SSRs) are newly introduced functional markers that have recently been used in plant breeding. MicroRNA and long non-coding RNA (lncRNA) are two examples of non-coding RNA (ncRNA) that play a vital role in controlling the biological processes of animals and plants. According to recent studies, a major objective in decoding their functional activities is predicting the relationship between lncRNAs and miRNAs. The prediction accuracy and reliability of traditional feature-based classification systems are frequently compromised by small data sizes, the limits of human factors, and large amounts of noise. This paper proposes an optimized deep learning model built with Independently Recurrent Neural Networks (IndRNNs) and Convolutional Neural Networks (CNNs) to predict lncRNA-miRNA interactions in plants. The deep learning ensemble model automatically investigates the functional characteristics of genetic sequences. The proposed model's main advantage is the enhanced accuracy in plant miRNA-lncRNA prediction due to optimal hyperparameter tuning, which is performed by the artificial Gorilla Troops Algorithm and the proposed intelligent preying algorithm. IndRNN is adapted to derive the representation of learned sequence dependencies and sequence features by overcoming the inaccuracies of natural factors in traditional feature architectures. Working with large-scale data, the suggested model outperforms current deep learning and shallow machine learning models, notably for long sequences; in our experiments, the proposed method achieved an accuracy of 97.7%.


Subject(s)
Deep Learning , MicroRNAs , Plant Physiological Phenomena , RNA, Long Noncoding , Animals , Humans , Algorithms , MicroRNAs/genetics , Reproducibility of Results , RNA, Long Noncoding/genetics , Plant Physiological Phenomena/genetics
14.
Sensors (Basel) ; 22(21)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36366277

ABSTRACT

Recently, the COVID-19 coronavirus pandemic has put a lot of pressure on health systems around the world. One of the most common ways to detect COVID-19 is to use chest X-ray images, which have the advantage of being cheap and fast. However, in the early days of the COVID-19 outbreak, most studies applied pretrained convolutional neural network (CNN) models, and the features produced by the last convolutional layer were passed directly into the classification head. In this study, the proposed ensemble model consists of three lightweight networks, Xception, MobileNetV2 and NasNetMobile, as the original feature extractors; three base classifiers are then obtained by adding a coordinated attention module, an LSTM, and a new classification head to the original feature extractors. The classification results from the three base classifiers are then fused by a confidence fusion method. Three publicly available chest X-ray datasets for COVID-19 testing were considered. Ternary (COVID-19, normal, and other pneumonia) and quaternary (COVID-19, normal, bacterial pneumonia, and viral pneumonia) classification was performed on the first two datasets, achieving high accuracy rates of 95.56% and 91.20%, respectively. The third dataset was used to compare the model's performance with other models and to assess its generalization ability across datasets. We performed a thorough ablation study on the first dataset to understand the impact of each proposed component. Finally, we generated saliency-map visualizations, which not only explain key prediction decisions of the model but also help radiologists locate areas of infection. Through extensive experiments, the results obtained by the proposed method were found to be comparable to state-of-the-art methods.
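
A minimal sketch of confidence fusion across several base classifiers: each model outputs class probabilities for a chest X-ray, and the ensemble prediction is taken from the averaged probabilities. The backbones shown and the simple averaging rule are stand-in assumptions; the paper's base classifiers additionally include attention and LSTM components.

```python
# Hedged sketch: fusing softmax confidences from three image classifiers.
# Backbones and the averaging rule are illustrative assumptions, not the published ensemble.
import torch
import torch.nn.functional as F
from torchvision import models

n_classes = 3                                        # e.g., COVID-19, normal, other pneumonia
backbones = [
    models.mobilenet_v2(num_classes=n_classes),
    models.efficientnet_b0(num_classes=n_classes),
    models.resnet18(num_classes=n_classes),
]

def ensemble_predict(x):
    probs = [F.softmax(m(x), dim=1) for m in backbones]       # per-model confidences
    fused = torch.stack(probs).mean(dim=0)                    # confidence fusion by averaging
    return fused.argmax(dim=1), fused

xray_batch = torch.randn(4, 3, 224, 224)
pred_class, fused_conf = ensemble_predict(xray_batch)
```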


Subject(s)
COVID-19 , Pneumonia, Viral , Humans , COVID-19/diagnostic imaging , Pandemics , COVID-19 Testing , X-Rays
15.
Diagnostics (Basel) ; 12(11)2022 Nov 05.
Article in English | MEDLINE | ID: mdl-36359545

ABSTRACT

Background: Hospitals face a significant problem meeting patients' medical needs during epidemics, especially when the number of patients increases rapidly, as seen during the recent COVID-19 pandemic. This study designs a treatment recommender system (RS) for the efficient management of human capital and resources, such as doctors and medicines, in hospitals. We hypothesize that a deep learning framework, when combined with image-based search paradigms, can make the RS very efficient. Methodology: This study uses a convolutional neural network (CNN) model for feature extraction from the images and discovery of the most similar patients. An input query retrieves patients from the hospital database with similar chest X-ray images, using a similarity metric for the similarity computation between images. Results: The methodology recommends the doctors, medicines, and resources associated with similar patients to a COVID-19 patient being admitted to the hospital. The performance of the proposed RS was verified with five different CNN feature extraction models and four similarity measures. The proposed RS with a ResNet-50 CNN feature extraction model and Maxwell-Boltzmann similarity is found to be a suitable framework for treatment recommendation, with a mean average precision of more than 0.90 for threshold similarities in the range of 0.7 to 0.9 and an average highest cosine similarity of more than 0.95. Conclusions: Overall, an RS with a CNN model and image similarity proves to be an efficient tool for the proper management of resources during the peak period of pandemics and can be adopted in clinical settings.
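
A minimal sketch of the retrieval step described above: ResNet-50 features for chest X-rays are compared by cosine similarity to find the most similar previously admitted patients. The database size and the number of returned matches are assumptions; the paper also evaluates other CNN backbones and similarity measures.

```python
# Hedged sketch: ResNet-50 feature extraction + cosine-similarity patient retrieval.
# Random tensors stand in for real chest X-rays; thresholds and counts are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = nn.Identity()                            # use the 2048-d pooled features
resnet.eval()

with torch.no_grad():
    database = resnet(torch.randn(50, 3, 224, 224))  # features of previously admitted patients
    query = resnet(torch.randn(1, 3, 224, 224))      # newly admitted patient's chest X-ray

similarity = F.cosine_similarity(query, database)     # one score per database patient
top_matches = similarity.topk(5).indices               # recommend resources linked to these patients
```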

16.
Foods ; 11(21)2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36360004

ABSTRACT

(1) Background: Extra virgin olive oil production is strictly influenced by the quality of the fruit. Optical selection makes it possible to obtain high-quality oils starting from batches with different qualitative characteristics. This study aims to test a CNN algorithm in order to assess its potential for classifying olives into several quality classes for industrial purposes, specifically its potential for integration and for sorting performance evaluation. (2) Methods: The acquired samples were all subjected to visual analysis by a trained operator to distinguish the products into five classes related to the state of external veraison and the presence of visible defects. The olive samples were placed at a regular distance and in a fixed position on a conveyor belt that moved at a constant speed of 1 cm/s. Images of the olives were taken every 15 s with a compact industrial RGB camera mounted on the aluminium main frame to allow overlapping of the images and to avoid loss of information. (3) Results: The modelling approaches used, all based on AI techniques, showed excellent results for both RGB datasets. (4) Conclusions: The presented approach to the qualitative discrimination of olive fruits shows its potential both for sorting machine performance evaluation and for future implementation on machines used in industrial sorting processes.

17.
Front Psychiatry ; 13: 861930, 2022.
Article in English | MEDLINE | ID: mdl-35669265

ABSTRACT

Mood disorders are ubiquitous mental disorders with familial aggregation. Extracting the family history of psychiatric disorders from large electronic hospitalization records is helpful for further study of onset characteristics among patients with a mood disorder. This study uses an observational clinical data set of in-patients of Nanjing Brain Hospital, affiliated with Nanjing Medical University, from the past 10 years. This paper proposes a model combining a pretrained language model, Bidirectional Encoder Representations from Transformers (BERT), with a Convolutional Neural Network (CNN). We first project the electronic hospitalization records into a low-dimensional dense matrix via the pretrained Chinese BERT model, then feed the dense matrix into stacked CNN layers to capture high-level text features; finally, we use a fully connected layer to extract family history based on the high-level features. The accuracy of our BERT-CNN model was 97.12 ± 0.37% on the real-world data set from Nanjing Brain Hospital. We further studied the correlation between mood disorders and family history of psychiatric disorders.
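
A minimal sketch of the BERT-CNN pipeline described above: a pretrained Chinese BERT encodes a record into token embeddings, 1-D convolutions capture higher-level text features, and a fully connected layer predicts the family-history label. The specific model name, kernel size, pooling, and example sentence are assumptions, not the authors' configuration.

```python
# Hedged sketch: Chinese BERT token embeddings -> 1-D CNN -> fully connected classifier.
# Model checkpoint, layer sizes, and the example sentence are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

class BertCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.conv = nn.Conv1d(768, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        feats = torch.relu(self.conv(hidden.transpose(1, 2)))    # (batch, 128, seq_len)
        return self.fc(feats.max(dim=2).values)                  # max-pool over tokens, classify

model = BertCNN()
batch = tokenizer(["患者母亲有抑郁症病史"], return_tensors="pt", padding=True)  # hypothetical record text
logits = model(batch["input_ids"], batch["attention_mask"])
```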

18.
Diagnostics (Basel) ; 12(5)2022 May 08.
Article in English | MEDLINE | ID: mdl-35626328

ABSTRACT

Parkinson's Disease (PD) is a progressive central nervous system disorder caused by neural degeneration, mainly in the substantia nigra of the brain. It is responsible for the decline of various motor functions due to the loss of dopamine-producing neurons. Tremor in the hands is usually the initial symptom, followed by rigidity, bradykinesia, postural instability, and impaired balance. Proper diagnosis and preventive treatment can help patients improve their quality of life. We propose an ensemble of Deep Learning (DL) models to predict Parkinson's disease using DaTscan images. Initially, we used four DL models, namely VGG16, ResNet50, Inception-V3, and Xception, to classify Parkinson's disease. In the next stage, we applied a fuzzy fusion logic-based ensemble approach to enhance the overall result of the classification model. The proposed model is assessed on a publicly available database provided by the Parkinson's Progression Markers Initiative (PPMI). The recognition accuracy, precision, sensitivity, specificity, and F1-score achieved by the proposed model are 98.45%, 98.84%, 98.84%, 97.67%, and 98.84%, respectively, which are higher than those of the individual models. We have also developed a Graphical User Interface (GUI)-based software tool for public use that instantly detects all classes from Magnetic Resonance Imaging (MRI) with reasonable accuracy. The proposed method offers better performance compared to other state-of-the-art methods in detecting PD, and the developed GUI-based software tool can play a significant role in detecting the disease in real time.

19.
Math Biosci Eng ; 19(1): 997-1025, 2022 01.
Article in English | MEDLINE | ID: mdl-34903023

ABSTRACT

Classifying and identifying surface defects is essential during the production and use of aluminum profiles. Recently, dual-convolutional neural network (CNN) model fusion frameworks have shown promising performance for defect classification and recognition. Spurred by this trend, this paper proposes an improved dual-CNN model fusion framework to classify and identify defects in aluminum profiles. Compared with traditional dual-CNN model fusion frameworks, the proposed architecture involves an improved fusion layer, fusion strategy, and classifier block. Specifically, the suggested method extracts the feature map of the aluminum profile RGB image from the pre-trained VGG16 model's pool5 layer and the feature map of the maximum pooling layer of the suggested A4 network, which is added after the AlexNet model. Then, weighted bilinear interpolation upsamples the feature maps extracted from the maximum pooling layer of the A4 part. The network layers and upsampling scheme ensure equal feature map dimensions, enabling feature map merging via an improved wavelet transform. Finally, global average pooling is employed in the classifier block instead of dense layers to reduce the model's parameters and avoid overfitting. The fused feature map is then input into the classifier block for classification. The experimental setup involves data augmentation and transfer learning to prevent overfitting due to the small data sets used, while K-fold cross-validation is employed to evaluate the model's performance during training. The experimental results demonstrate that the proposed dual-CNN model fusion framework attains a classification accuracy higher than current techniques: specifically, 4.3% higher than AlexNet, 2.5% higher than VGG16, 2.9% higher than Inception v3, 2.2% higher than VGG19, 3.6% higher than ResNet50, 3% higher than ResNet101, and 0.7% and 1.2% higher than conventional dual-CNN fusion frameworks 1 and 2, respectively, proving the effectiveness of the proposed strategy.


Subject(s)
Aluminum , Neural Networks, Computer , Wavelet Analysis
20.
Sensors (Basel) ; 21(15)2021 Jul 29.
Article in English | MEDLINE | ID: mdl-34372366

ABSTRACT

BACKGROUND: We aimed to create a novel model using a deep learning method to estimate stroke volume variation (SVV), a widely used predictor of fluid responsiveness, from the arterial blood pressure waveform (ABPW). METHODS: In total, 557 patients and 8,512,564 SVV datasets were collected and divided into three groups: training, validation, and test. The data consisted of 10 s of ABPW and the corresponding SVV recorded every 2 s. We built a convolutional neural network (CNN) model to estimate SVV from the ABPW, with a pre-existing commercial model (EV1000) as the reference. We applied pre-processing, multichannel inputs, and dimension reduction to improve the CNN model with diversified inputs. RESULTS: Our CNN model showed acceptable performance on sample data (r = 0.91, MSE = 6.92). Diversification of inputs, such as normalization, frequency, and slope of the ABPW, significantly improved the model correlation (r = 0.95), lowered the mean squared error (MSE = 2.13), and resulted in a high concordance rate (96.26%) with the SVV from the commercial model. CONCLUSIONS: We developed a new CNN deep-learning model to estimate SVV. Our CNN model appears to be a viable alternative when the necessary medical device is not available, allowing a wider range of application and optimal patient management.
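
A minimal sketch of the waveform-to-SVV idea: a 1-D CNN maps a 10-second ABPW segment to a single SVV estimate. The sampling rate (100 Hz assumed), layer sizes, and the single-channel input are illustrative; the published model also used multichannel and derived inputs.

```python
# Hedged sketch: 1-D CNN regression from a 10-s arterial pressure waveform to SVV.
# Sampling rate, channel count, and layer widths are assumptions, not the published model.
import torch
import torch.nn as nn

class SVVEstimator(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),                         # single SVV value (%) per segment
        )

    def forward(self, x):                             # x: (batch, channels, 1000 samples = 10 s at 100 Hz)
        return self.net(x).squeeze(1)

model = SVVEstimator()
abpw_segments = torch.randn(4, 1, 1000)
svv_estimates = model(abpw_segments)
```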


Subject(s)
Arterial Pressure , Neural Networks, Computer , Blood Pressure , Humans , Stroke Volume