Results 1 - 20 of 214
1.
Brief Bioinform; 25(4), 2024 May 23.
Article in English | MEDLINE | ID: mdl-39013383

ABSTRACT

Unlike in animals, variability in transcription factors (TFs) and their binding regions (TFBRs) across plant species is a major problem that most existing TFBR-finding software fails to tackle, rendering such tools of little practical use. This limitation has resulted in the underdevelopment of plant regulatory research and the rampant use of Arabidopsis-like model species, generating misleading results. Here, we report a transformer-based deep-learning approach, PTFSpot, which learns the co-variability between TF structures and their binding regions to build a universal TF-DNA interaction model that detects TFBRs free from the limitations of TF- and species-specific models. In a series of extensive benchmarking studies on multiple experimentally validated datasets, it not only outperformed existing software by a margin of >30% but also consistently delivered >90% accuracy, even for species and TF families never encountered during model building. PTFSpot now makes it possible to accurately annotate TFBRs across any plant genome even in the complete absence of TF information, free from the bottlenecks of species- and TF-specific models.


Subjects
Deep Learning; Transcription Factors; Transcription Factors/metabolism; Binding Sites; Software; Arabidopsis/metabolism; Arabidopsis/genetics; Genome, Plant; Computational Biology/methods; Plants/metabolism; Plants/genetics
2.
BMC Genomics; 25(1): 242, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443802

ABSTRACT

BACKGROUND: 5-Methylcytosine (5mC) plays a very important role in gene stability, transcription, and development. Accurate identification of 5mC sites is therefore of key importance in genetic and pathological studies. However, traditional experimental methods for identifying 5mC sites are time-consuming and costly, so there is an urgent need for computational methods that automatically detect and identify these sites. RESULTS: Deep learning methods have shown great potential for 5mC site prediction, so we developed a combinatorial deep learning model called i5mC-DCGA. The model uses the Convolutional Block Attention Module (CBAM) to enhance the Dense Convolutional Network (DenseNet) so that it extracts richer local feature information. We then combine a Bidirectional Gated Recurrent Unit (BiGRU) and a self-attention mechanism to extract global feature information. Our model can learn abstract and complex feature representations from simple sequence encoding, while addressing the sample imbalance problem in benchmark datasets. The experimental results show that the i5mC-DCGA model achieves 97.02%, 96.52%, 96.58% and 85.58% in sensitivity (Sn), specificity (Sp), accuracy (Acc) and Matthews correlation coefficient (MCC), respectively. CONCLUSIONS: The i5mC-DCGA model outperforms other existing prediction tools in predicting 5mC sites and is currently the most representative promoter 5mC site prediction tool. The benchmark dataset and source code for the i5mC-DCGA model are available at https://github.com/leirufeng/i5mC-DCGA.
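For orientation, the CBAM-style channel and spatial attention described above can be sketched in a few lines. The following is a minimal, hedged PyTorch illustration of a CBAM block of the kind that could be inserted between dense blocks; layer sizes and the surrounding network are placeholders, not the published i5mC-DCGA configuration.

```python
# Minimal CBAM-style attention block (channel + spatial), as commonly
# combined with DenseNet feature maps. Sizes are illustrative only.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=8, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: convolution over pooled channel maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))        # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))         # (B, C)
        ca = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        x = x * ca                                # channel-refined features
        sa = torch.sigmoid(self.spatial(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                             # spatially refined features

feats = torch.randn(4, 64, 41, 4)                 # dummy sequence feature map
print(CBAM(64)(feats).shape)                      # torch.Size([4, 64, 41, 4])
```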


Subjects
5-Methylcytosine; Benchmarking; Promoter Regions, Genetic; Research Design; Software
3.
Network; 1-37, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648017

ABSTRACT

Cancer-related deadly diseases affect both developed and developing nations worldwide. Effective network learning is crucial to identify and categorize breast carcinoma more reliably in vast and unbalanced image datasets. The absence of early cancer symptoms makes early identification challenging. Therefore, from the perspectives of diagnosis, prevention, and therapy, cancer remains a healthcare concern that numerous researchers work to address. It is thus essential to design an innovative breast cancer detection model that addresses the complications presented by classical techniques. Initially, breast cancer images are gathered from online sources and then subjected to segmentation. Segmentation is performed using an Adaptive Trans-Dense-Unet (A-TDUNet), whose parameters are tuned using the developed Modified Sheep Flock Optimization Algorithm (MSFOA). The segmented images are then passed to the breast cancer detection stage, where detection is performed by a Multiscale Dilated DenseNet with Attention Mechanism (MDD-AM). In the result validation, the negative predictive value (NPV) and accuracy rate of the designed approach are 96.719% and 93.494%, respectively. Hence, the implemented breast cancer detection model secured a better efficacy rate than the baseline detection methods under diverse experimental conditions.

4.
Sensors (Basel); 24(13), 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-39001200

ABSTRACT

Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that affects both the blood and the bone marrow. Diagnosis is difficult because it often calls for specialist tests, such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. An early diagnosis of ALL is essential so that therapy can be started in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to the centralized database, inclusive of patient-specific devices. After blood samples are collected at the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a fusion model capable of classifying ALL from PBS images is configured. The fusion model is trained using a dataset of 6512 original and segmented images from 89 individuals. Two input channels are used for feature extraction in the fusion model: the original images and the segmented images. VGG16 extracts features from the original images, whereas DenseNet-121 extracts features from the segmented images. The two feature sets are merged, and dense layers are used for leukemia classification. The proposed fusion model obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, making it well suited to leukemia classification. The proposed model outperformed several state-of-the-art Convolutional Neural Network (CNN) models. Consequently, this model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (beta version) was developed in this study to determine the presence or absence of leukemia in individuals. The findings hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.
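As a rough illustration of the two-channel fusion idea (original image through VGG16, segmented image through DenseNet-121, concatenated features into dense classification layers), a hedged PyTorch sketch might look as follows; the head sizes and class count are placeholders rather than the authors' exact configuration.

```python
# Sketch of a two-branch fusion classifier: VGG16 on the original image,
# DenseNet-121 on the segmented image, concatenated features -> dense head.
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        vgg = models.vgg16(weights=None)            # pretrained weights optional
        dense = models.densenet121(weights=None)
        self.branch_orig = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))
        self.branch_seg = nn.Sequential(dense.features, nn.AdaptiveAvgPool2d(1))
        self.head = nn.Sequential(                   # illustrative head sizes
            nn.Linear(512 + 1024, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.3), nn.Linear(256, num_classes))

    def forward(self, x_orig, x_seg):
        f1 = self.branch_orig(x_orig).flatten(1)     # (B, 512)  VGG16 features
        f2 = self.branch_seg(x_seg).flatten(1)       # (B, 1024) DenseNet features
        return self.head(torch.cat([f1, f2], dim=1))

imgs = torch.randn(2, 3, 224, 224)                   # dummy PBS image batch
print(FusionNet()(imgs, imgs).shape)                 # torch.Size([2, 2])
```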


Assuntos
Aprendizado Profundo , Internet das Coisas , Humanos , Leucemia-Linfoma Linfoblástico de Células Precursoras/diagnóstico , Inteligência Artificial , Leucemia/diagnóstico , Leucemia/classificação , Leucemia/patologia , Algoritmos , Processamento de Imagem Assistida por Computador/métodos , Redes Neurais de Computação
5.
Environ Monit Assess; 196(3): 279, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38367185

ABSTRACT

Efficient waste management is essential for human well-being and environmental health, as neglecting proper disposal practices can lead to financial losses and the depletion of natural resources. Given rapid urbanization and population growth, developing an automated, innovative waste classification model becomes imperative. To address this need, our paper introduces a novel and robust solution: a smart waste classification model that leverages a hybrid deep learning model (optimized DenseNet-121 + SVM) to categorize waste items using the TrashNet dataset. Our approach uses the DenseNet-121 model, optimized for superior performance, to extract meaningful features from an expanded TrashNet dataset. These features are then fed into a support vector machine (SVM) for precise classification. Data augmentation further enhances classification accuracy while mitigating the risk of overfitting, especially when working with limited TrashNet data. The experimental results of this hybrid deep learning model are highly promising, with an accuracy of 99.84%. This accuracy surpasses similar existing models, affirming the efficacy and potential of our approach to revolutionize waste classification for a sustainable and cleaner future.
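The feature-extractor-plus-SVM pattern described above can be sketched briefly; the sketch below assumes a frozen DenseNet-121 backbone with global average pooling and a standard scikit-learn SVC, with dummy data standing in for TrashNet images.

```python
# Sketch: use DenseNet-121 as a frozen feature extractor, then train an SVM
# on the pooled features. Dataset loading, sizes, and labels are placeholders.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.densenet121(weights=None)          # pretrained weights optional
extractor = nn.Sequential(backbone.features,
                          nn.ReLU(inplace=True),
                          nn.AdaptiveAvgPool2d(1),
                          nn.Flatten())               # -> 1024-d feature vector
extractor.eval()

def extract(images):                                  # images: (N, 3, 224, 224)
    with torch.no_grad():
        return extractor(images).numpy()

# Dummy stand-ins for TrashNet images and labels (6 waste categories assumed).
X_train = extract(torch.randn(32, 3, 224, 224))
y_train = torch.randint(0, 6, (32,)).numpy()

svm = SVC(kernel="rbf", C=1.0)                        # classify pooled features
svm.fit(X_train, y_train)
print(svm.predict(extract(torch.randn(4, 3, 224, 224))))
```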


Assuntos
Aprendizado Profundo , Humanos , Monitoramento Ambiental , Saúde Ambiental , Recursos Naturais , Crescimento Demográfico
6.
J Prosthodont; 33(7): 645-654, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38566564

ABSTRACT

PURPOSE: The study aimed to compare the performance of four pre-trained convolutional neural networks in recognizing seven distinct prosthodontic scenarios involving the maxilla, as a preliminary step in developing an artificial intelligence (AI)-powered prosthesis design system. MATERIALS AND METHODS: Seven distinct classes, including cleft palate, dentulous maxillectomy, edentulous maxillectomy, reconstructed maxillectomy, completely dentulous, partially edentulous, and completely edentulous, were considered for recognition. Utilizing transfer learning and fine-tuned hyperparameters, four AI models (VGG16, Inception-ResNet-V2, DenseNet-201, and Xception) were employed. The dataset, consisting of 3541 preprocessed intraoral occlusal images, was divided into training, validation, and test sets. Model performance metrics encompassed accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve (AUC), and the confusion matrix. RESULTS: VGG16, Inception-ResNet-V2, DenseNet-201, and Xception demonstrated comparable performance, with maximum test accuracies of 0.92, 0.90, 0.94, and 0.95, respectively. Xception and DenseNet-201 slightly outperformed the other models, particularly Inception-ResNet-V2. Precision, recall, and F1 scores exceeded 90% for most classes in Xception and DenseNet-201, and the average AUC values for all models ranged between 0.98 and 1.00. CONCLUSIONS: While DenseNet-201 and Xception demonstrated superior performance, all models consistently achieved diagnostic accuracy exceeding 90%, highlighting their potential in dental image analysis. This AI application could help assign work based on difficulty level and enable the development of an automated diagnosis system at patient admission. It also facilitates prosthesis design by integrating the necessary prosthesis morphology, oral function, and treatment difficulty. Furthermore, it tackles dataset size challenges in model optimization, providing valuable insights for future research.
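The transfer-learning setup described here (pretrained backbone, replaced classification head, fine-tuned hyperparameters) follows a common pattern; the sketch below uses DenseNet-201 and a 7-class head to mirror the abstract, while the learning rate, batch, and data are illustrative placeholders.

```python
# Sketch of transfer learning: load a pretrained backbone, replace the
# classifier head for 7 classes, and fine-tune end to end on new images.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7                                      # the seven maxillary scenarios

# Downloads ImageNet weights; pass weights=None to skip the download.
model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # illustrative LR
criterion = nn.CrossEntropyLoss()

# One dummy training step on random data in place of intraoral occlusal images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```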


Assuntos
Maxila , Redes Neurais de Computação , Prostodontia , Humanos , Maxila/diagnóstico por imagem , Prostodontia/métodos , Inteligência Artificial
7.
Int Ophthalmol; 44(1): 90, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38367098

ABSTRACT

OBJECTIVE: Diabetic retinopathy (DR) is a severe complication of diabetes that damages the retina and affects approximately 80% of patients who have had diabetes for 10 years or more. The condition primarily impacts young and productive individuals, resulting in significant long-term medical complications for patients and society. The early stages of diabetic retinopathy often advance without noticeable symptoms, leading to delayed identification and intervention. We therefore develop approaches employing transfer learning to enhance early detection capabilities, facilitating timely diagnosis and intervention to mitigate the progression of diabetic retinopathy. METHODS: This study introduces a transfer learning approach for detecting four stages of DR: No DR, Mild, Moderate, and Severe. AlexNet, VGG16, ResNet50, Inception v3, and DenseNet121 are utilized and trained using the Kaggle DR dataset. RESULTS: To assess the efficiency of the suggested networks, the Kaggle dataset is used to analyze four performance metrics: sensitivity, precision, accuracy, and F1 score. DenseNet121 demonstrated superior accuracy, outperforming the other models, making it a suitable option for automatic DR sign detection. CONCLUSION: The integration of the DenseNet121 model shows great promise in transforming the timely identification and treatment of DR, resulting in better long-term patient outcomes and alleviating the burden on society.


Assuntos
Diabetes Mellitus , Retinopatia Diabética , Humanos , Retinopatia Diabética/complicações , Retina , Diagnóstico Precoce , Inteligência Artificial
8.
BMC Bioinformatics; 24(1): 68, 2023 Feb 27.
Article in English | MEDLINE | ID: mdl-36849908

ABSTRACT

BACKGROUND: Although research on non-coding RNAs (ncRNAs) is a hot topic in the life sciences, the functions of numerous ncRNAs remain unclear. In recent years, researchers have found that ncRNAs of the same family have similar functions; it is therefore important to accurately predict ncRNA families in order to identify their functions. Several methods are available for ncRNA family prediction, and their main ideas fall into two categories: prediction based on the secondary structure features of ncRNAs, and prediction based on their sequence features. The first type of method requires a complicated process and has low accuracy in obtaining the secondary structure of ncRNAs, while the second type has a simpler prediction process and higher accuracy, but there is still room for improvement. Because existing methods for ncRNA family prediction suffer from complicated prediction processes and low accuracy, a new method is needed to predict ncRNA families more reliably. RESULTS: A deep learning model-based method, ncDENSE, was proposed in this study, which predicts ncRNA families by extracting ncRNA sequence features. The bases in ncRNA sequences were encoded by one-hot coding and then fed into an ensemble deep learning model containing a dynamic bidirectional gated recurrent unit (Bi-GRU), a dense convolutional network (DenseNet), and an attention mechanism (AM). Specifically, the dynamic Bi-GRU is used to extract contextual feature information and capture long-term dependencies in ncRNA sequences. The AM assigns different weights to the features extracted by the Bi-GRU and focuses attention on the information with greater weights, whereas DenseNet extracts local feature information from ncRNA sequences and classifies them through the fully connected layer. According to our results, the ncDENSE method improved the accuracy, sensitivity, precision, F-score, and MCC by 2.08%, 2.33%, 2.14%, 2.16%, and 2.39%, respectively, compared with the suboptimal method. CONCLUSIONS: Overall, the ncDENSE method proposed in this paper extracts ncRNA sequence features with a dynamic Bi-GRU and DenseNet and improves the accuracy of ncRNA family prediction.
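The sequence branch described above (one-hot encoded bases into a bidirectional GRU with attention pooling) can be sketched as follows; the vocabulary, hidden size, sequence length, and family count are assumptions for illustration, not the ncDENSE settings.

```python
# Sketch of one-hot encoded ncRNA bases fed to a bidirectional GRU with a
# simple attention pooling layer. Hidden sizes and class count are illustrative.
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq, length=120):
    x = torch.zeros(length, len(BASES))
    for i, b in enumerate(seq[:length]):
        if b in BASES:
            x[i, BASES.index(b)] = 1.0
    return x

class BiGRUAttention(nn.Module):
    def __init__(self, hidden=64, num_families=13):
        super().__init__()
        self.gru = nn.GRU(len(BASES), hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)           # scores each position
        self.fc = nn.Linear(2 * hidden, num_families)

    def forward(self, x):                             # x: (B, L, 4)
        h, _ = self.gru(x)                            # (B, L, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)         # attention weights over L
        ctx = (w * h).sum(dim=1)                      # weighted context vector
        return self.fc(ctx)

batch = torch.stack([one_hot("AUGGCUACGUAGC"), one_hot("GGGAAACCCUUU")])
print(BiGRUAttention()(batch).shape)                  # torch.Size([2, 13])
```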


Assuntos
Disciplinas das Ciências Biológicas , Aprendizado Profundo , Humanos , RNA não Traduzido/genética
9.
BMC Bioinformatics; 24(1): 397, 2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37880673

ABSTRACT

BACKGROUND: N6,2'-O-dimethyladenosine (m6Am) is an abundant RNA methylation modification on vertebrate mRNAs and is present in the transcription initiation region of mRNAs. It has recently been shown experimentally to be associated with several human disorders, including obesity and stomach cancer, among others. Correct identification of m6Am sites is therefore crucial for understanding the regulation of RNA. RESULTS: This study proposes a novel deep learning-based m6Am prediction model, EMDL_m6Am, which employs one-hot encoding to express the feature map of the RNA sequence and recognizes m6Am sites by integrating different CNN models via stacking, including DenseNet, an Inflated Convolutional Network (DCNN), and a Deep Multiscale Residual Network (MSRN). The sensitivity (Sn), specificity (Sp), accuracy (ACC), Matthews correlation coefficient (MCC), and area under the curve (AUC) of our model on the training dataset reach 86.62%, 88.94%, 87.78%, 0.7590, and 0.8778, respectively, and the prediction results on the independent test set are as high as 82.25%, 79.72%, 80.98%, 0.6199, and 0.8211. CONCLUSIONS: The experimental results demonstrate that EMDL_m6Am greatly improves the predictive performance for m6Am sites and can provide a valuable reference for the next part of the study. The source code and experimental data are available at: https://github.com/13133989982/EMDL-m6Am.


Assuntos
Aprendizado Profundo , Humanos , RNA Mensageiro/genética , RNA , Metilação , Software
10.
Magn Reson Med; 90(4): 1345-1362, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37357374

ABSTRACT

PURPOSE: An end-to-end differentiable 2D Bloch simulation is used to reduce T2-induced blurring in single-shot turbo spin echo sequences, also called rapid imaging with refocused echoes (RARE) sequences, through a joint optimization of refocusing flip angles and a convolutional neural network. METHODS: Simulation and optimization were performed in the MR-zero framework. The variable flip angle train and the DenseNet parameters were optimized jointly using the instantaneous transverse magnetization available in our simulation at a certain echo time, which serves as an ideal blurring-free target. The final optimized sequences were exported for in vivo measurements on a real system (3 T Siemens PRISMA) using the Pulseq standard. RESULTS: The optimized RARE successfully lowered T2-induced blurring in single-shot RARE sequences for proton density-weighted and T2-weighted images. In addition to increased sharpness, the neural network allowed correction of contrast changes to match the theoretical transverse magnetization. The optimization found flip angle design strategies similar to the existing literature; however, visual inspection of the images and evaluation of the respective point spread functions demonstrated improved performance. CONCLUSIONS: This work demonstrates that when variable flip angles and a convolutional neural network are optimized jointly in an end-to-end approach, sequences with more efficient minimization of T2-induced blurring can be found. This allows faster single- or multi-shot RARE MRI with longer echo trains.
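The joint, end-to-end optimization pattern (a trainable flip-angle train and a correction network updated by one optimizer) can be sketched as below; a toy differentiable forward model stands in for the Bloch simulation and the MR-zero framework, so this only illustrates the optimization pattern, not the actual physics or the published architecture.

```python
# Sketch of jointly optimizing a flip-angle train and a small correction CNN
# with one optimizer. A toy differentiable forward model replaces the Bloch
# simulation used in the paper; everything here is illustrative only.
import torch
import torch.nn as nn

n_echoes = 64
flip_angles = nn.Parameter(torch.full((n_echoes,), 120.0))   # degrees, trainable

corrector = nn.Sequential(                                    # stand-in for DenseNet
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(16, 1, 3, padding=1))

def toy_forward_model(image, angles):
    # Placeholder for the differentiable MR simulation: here, smaller mean flip
    # angles simply attenuate the acquired image via a global scaling factor.
    return image * torch.sigmoid(angles.mean() / 90.0)

optimizer = torch.optim.Adam([flip_angles, *corrector.parameters()], lr=1e-2)

target = torch.rand(1, 1, 32, 32)                             # blurring-free target
for step in range(50):
    optimizer.zero_grad()
    acquired = toy_forward_model(target, flip_angles)
    loss = nn.functional.mse_loss(corrector(acquired), target)
    loss.backward()                                            # grads reach both parts
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```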


Assuntos
Imageamento por Ressonância Magnética , Redes Neurais de Computação , Imageamento por Ressonância Magnética/métodos , Simulação por Computador , Fatores de Tempo , Prótons
11.
Curr Genomics; 24(3): 171-186, 2023 Nov 22.
Article in English | MEDLINE | ID: mdl-38178985

ABSTRACT

Introduction: N4-acetylcytidine (ac4C) is a highly conserved nucleoside modification that is essential for the regulation of immune functions in organisms. Currently, ac4C is identified primarily using biological methods, which can be time-consuming and labor-intensive. In contrast, accurate identification of ac4C by computational methods has become a more effective approach for classification and prediction. Aim: To the best of our knowledge, although several computational methods exist for ac4C locus prediction, the performance of the models they construct is poor, and the network structures they use are relatively simple and suffer from network degradation. This study aims to address these limitations by proposing a predictive model based on integrated deep learning to better identify ac4C sites. Methods: In this study, we propose a new integrated deep learning prediction framework, DLC-ac4C. First, we encode RNA sequences based on three feature encoding schemes, namely C2 encoding, nucleotide chemical property (NCP) encoding, and nucleotide density (ND) encoding. Second, one-dimensional convolutional layers and densely connected convolutional networks (DenseNet) are used to learn local features, and bidirectional long short-term memory networks (Bi-LSTM) are used to learn global features. Third, a channel attention mechanism is introduced to determine the importance of sequence characteristics. Finally, a homomorphic integration strategy is used to limit the generalization error of the model, which further improves its performance. Results: The DLC-ac4C model performed well in terms of sensitivity (Sn), specificity (Sp), accuracy (Acc), Matthews correlation coefficient (MCC), and area under the curve (AUC) on the independent test data, with values of 86.23%, 79.71%, 82.97%, 66.08%, and 90.42%, respectively, which is significantly better than the prediction accuracy of existing methods. Conclusion: Our model not only combines DenseNet and Bi-LSTM, but also uses the channel attention mechanism to better capture hidden information features from a sequence perspective, and can identify ac4C sites more effectively.
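The NCP and ND encodings mentioned above are simple to reproduce; the sketch below uses the commonly published three-bit NCP scheme (ring structure, functional group, hydrogen bonding) plus cumulative-frequency density, which is an assumption about the exact variant used rather than a detail taken from the paper.

```python
# Sketch of two of the sequence encodings mentioned above: nucleotide chemical
# properties (NCP) and nucleotide density (ND). The three-bit NCP table below is
# the commonly published variant and is assumed, not confirmed from the paper.
import numpy as np

NCP = {                # (ring structure, functional group, hydrogen bonding)
    "A": (1, 1, 1),
    "C": (0, 1, 0),
    "G": (1, 0, 0),
    "U": (0, 0, 1),
}

def encode_ncp_nd(seq):
    rows = []
    counts = {b: 0 for b in NCP}
    for i, base in enumerate(seq, start=1):
        counts[base] += 1
        density = counts[base] / i            # cumulative frequency up to position i
        rows.append([*NCP[base], density])    # 4 features per position
    return np.array(rows, dtype=np.float32)

print(encode_ncp_nd("AUCGGA"))
# Each row: three NCP bits followed by the running density of that nucleotide.
```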

12.
J Appl Clin Med Phys; 24(3): e13875, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36546583

ABSTRACT

In this study, we investigated 3D convolutional neural networks (CNNs) with input from radiographic and dosimetric datasets of primary lung tumors and the surrounding lung volumes to predict the likelihood of radiation pneumonitis (RP). Pre-treatment, 3- and 6-month follow-up computed tomography (CT) and 3D dose datasets from 193 NSCLC patients treated with stereotactic body radiotherapy (SBRT) were retrospectively collected and analyzed. DenseNet-121 and ResNet-50 were selected for this study because they are deep neural networks with proven high accuracy on complex image classification tasks. Both were modified with 3D convolution and max pooling layers to accept 3D datasets. We used a minority-class oversampling approach and data augmentation to address data imbalance and data scarcity. We built two sets of models, classifying either three classes (No RP, Grade 1 RP, Grade 2 RP) or two classes (No RP, Yes RP). The 3D DenseNet-121 models performed better (F1 score 0.81, AUC 0.91 for three classes; F1 score 0.77, AUC 0.84 for two classes) than the 3D ResNet-50 models (F1 score 0.54, AUC 0.72 for three classes; F1 score 0.68, AUC 0.71 for two classes) (p = 0.017 for the three-class predictions). We also attempted to identify salient regions within the input 3D image dataset via integrated gradient (IG) techniques to assess the relevance of the tumor-surrounding volume for RP stratification. These techniques indicated the significance of the tumor and surrounding regions in the prediction of RP. Overall, 3D CNNs performed well in predicting clinical RP in our cohort based on the provided image sets and radiotherapy dose information.
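The 2D-to-3D modification described above (3D convolution and max pooling so the network accepts volumetric CT and dose input) is illustrated with a small 3D CNN below; this is not the authors' modified DenseNet-121 or ResNet-50, only the general pattern of handling volumetric input, with channel counts and class labels assumed.

```python
# Sketch of a small 3D CNN for volumetric CT/dose input, illustrating the
# switch from 2D to 3D convolution and pooling. Channel counts, depth, and
# the 3-class output (No RP / Grade 1 / Grade 2) are illustrative only.
import torch
import torch.nn as nn

class Small3DNet(nn.Module):
    def __init__(self, in_channels=2, num_classes=3):     # e.g. CT + dose volume
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1))
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                                  # x: (B, C, D, H, W)
        return self.classifier(self.features(x).flatten(1))

volume = torch.randn(2, 2, 32, 64, 64)                     # dummy CT + dose volumes
print(Small3DNet()(volume).shape)                          # torch.Size([2, 3])
```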


Assuntos
Carcinoma Pulmonar de Células não Pequenas , Neoplasias Pulmonares , Pneumonite por Radiação , Radiocirurgia , Humanos , Radiocirurgia/efeitos adversos , Pneumonite por Radiação/diagnóstico , Pneumonite por Radiação/etiologia , Pneumonite por Radiação/patologia , Estudos Retrospectivos , Carcinoma Pulmonar de Células não Pequenas/radioterapia , Carcinoma Pulmonar de Células não Pequenas/cirurgia , Neoplasias Pulmonares/radioterapia , Neoplasias Pulmonares/cirurgia , Neoplasias Pulmonares/patologia , Redes Neurais de Computação
13.
Sensors (Basel); 23(16), 2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37631825

ABSTRACT

A thyroid nodule, a common abnormal growth within the thyroid gland, is often identified through ultrasound imaging of the neck. These growths may be solid or fluid-filled, and their treatment is influenced by factors such as size and location. The Thyroid Imaging Reporting and Data System (TI-RADS) is a classification method that categorizes thyroid nodules into risk levels based on features such as size, echogenicity, margin, shape, and calcification. It guides clinicians in deciding whether a biopsy or other further evaluation is needed. Machine learning (ML) can complement TI-RADS classification, thereby improving the detection of malignant tumors. When combined with expert rules (TI-RADS) and explanations, ML models may uncover elements that TI-RADS misses, especially when TI-RADS training data are scarce. In this paper, we present an automated system for classifying thyroid nodules according to TI-RADS and assessing malignancy effectively. We use ResNet-101 and DenseNet-201 models to classify thyroid nodules according to TI-RADS and malignancy. By analyzing the models' last layer using the Grad-CAM algorithm, we demonstrate that these models can identify risk areas and detect nodule features relevant to the TI-RADS score. By integrating Grad-CAM results with feature probability calculations, we provide a precise heat map, visualizing specific features within the nodule and potentially assisting doctors in their assessments. Our experiments show that using ResNet-101 and DenseNet-201 models in conjunction with Grad-CAM visualization analysis improves TI-RADS classification accuracy by up to 10%. This enhancement, achieved through iterative analysis and re-training, underscores the potential of machine learning in advancing thyroid nodule diagnosis, offering a promising direction for further exploration and clinical application.
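Grad-CAM heat maps of the kind used above can be produced with forward and backward hooks on the last convolutional feature map; the sketch below uses torchvision's DenseNet-201 and random input purely for illustration, not the authors' trained nodule classifier.

```python
# Minimal Grad-CAM sketch: hook the last convolutional feature map of a
# DenseNet-201, backpropagate the top class score, and weight the activations
# by the channel-averaged gradients to obtain a heat map.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet201(weights=None).eval()       # pretrained weights optional
acts, grads = {}, {}

def fwd_hook(_, __, output): acts["v"] = output
def bwd_hook(_, __, grad_out): grads["v"] = grad_out[0]

layer = model.features                                 # last conv feature block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)    # stands in for an ultrasound image
score = model(x)[0].max()                               # top predicted class score
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)     # channel importance
cam = F.relu((weights * acts["v"]).sum(dim=1))          # (1, H', W') raw heat map
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
print(cam.shape)                                         # torch.Size([1, 1, 224, 224])
```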


Assuntos
Nódulo da Glândula Tireoide , Humanos , Nódulo da Glândula Tireoide/diagnóstico por imagem , Pescoço , Projetos de Pesquisa , Algoritmos
14.
Sensors (Basel); 23(18), 2023 Sep 18.
Article in English | MEDLINE | ID: mdl-37766014

ABSTRACT

Cloud observation is the foundation for acquiring comprehensive cloud-related information, and the categorization of distinct ground-based clouds has significant applications in the meteorological domain. Deep learning has substantially improved ground-based cloud classification, with automated feature extraction being simpler and far more accurate than traditional methods. A reengineering of the DenseNet architecture has given rise to a cloud classification method denoted CloudDenseNet. A novel CloudDense Block was crafted to amplify channel attention and elevate the salient features pertinent to cloud classification. The lightweight CloudDenseNet structure is designed according to the distinctive characteristics of ground-based clouds and the intricacies of large-scale diverse datasets, which amplifies the generalization ability and elevates the recognition accuracy of the network. Optimal parameters are obtained by combining transfer learning with extensive experiments, which significantly enhances training efficiency and expedites the process. The method achieves 93.43% accuracy on the large-scale diverse dataset, surpassing numerous published methods. This attests to the substantial potential of the CloudDenseNet architecture for ground-based cloud classification tasks.

15.
Sensors (Basel); 23(2), 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679381

ABSTRACT

This article is devoted to the development of a classification method based on an artificial neural network architecture to solve the problem of recognizing the sources of acoustic influences recorded by a phase-sensitive OTDR. At the initial stage of signal processing, we propose the use of a band-pass filter to collect data sets with an increased signal-to-noise ratio. When solving the classification problem, we study three widely used convolutional neural network architectures: AlexNet, ResNet50, and DenseNet169. As a result of computational experiments, it is shown that the AlexNet and DenseNet169 architectures can obtain accuracies above 90%. In addition, we propose a novel CNN architecture based on AlexNet, which obtains the best results; in particular, its accuracy is above 98%. The advantages of the proposed model include low power consumption (400 mW) and high speed (0.032 s per net evaluation). In further studies, in order to increase the accuracy, reliability, and data invariance, the use of new algorithms for the filtering and extraction of acoustic signals recorded by a phase-sensitive reflectometer will be considered.
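The band-pass pre-filtering step mentioned above (used to raise the signal-to-noise ratio before classification) can be sketched with SciPy; the sampling rate and cut-off frequencies below are placeholders, not the values used in the study.

```python
# Sketch of band-pass pre-filtering for phase-sensitive OTDR traces before
# classification. Sampling rate and band edges are illustrative placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, fs, low_hz, high_hz, order=4):
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)                  # zero-phase filtering

fs = 10_000                                          # samples per second (assumed)
t = np.arange(0, 1.0, 1 / fs)
raw = np.sin(2 * np.pi * 300 * t) + 0.5 * np.random.randn(t.size)  # tone + noise
clean = bandpass(raw, fs, low_hz=100, high_hz=1_000)
print(raw.std(), clean.std())                        # out-of-band noise is removed
```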


Assuntos
Algoritmos , Redes Neurais de Computação , Reprodutibilidade dos Testes , Razão Sinal-Ruído , Acústica
16.
Sensors (Basel); 23(12), 2023 Jun 19.
Article in English | MEDLINE | ID: mdl-37420891

ABSTRACT

Diabetic retinopathy (DR) is a common complication of long-term diabetes, affecting the human eye and potentially leading to permanent blindness. Early detection of DR is crucial for effective treatment, as symptoms often manifest in later stages. Manual grading of retinal images is time-consuming, prone to errors, and lacks patient-friendliness. In this study, we propose two deep learning (DL) architectures for DR detection and classification: a hybrid network combining VGG16 with an XGBoost classifier, and the DenseNet-121 network. To evaluate the two DL models, we preprocessed a collection of retinal images from the APTOS 2019 Blindness Detection Kaggle dataset. This dataset exhibits an imbalanced class distribution, which we addressed through appropriate balancing techniques. The performance of the considered models was assessed in terms of accuracy. The results showed that the hybrid network achieved an accuracy of 79.50%, while the DenseNet-121 model achieved an accuracy of 97.30%. Furthermore, a comparative analysis with existing methods utilizing the same dataset revealed the superior performance of the DenseNet-121 network. The findings of this study demonstrate the potential of DL architectures for the early detection and classification of DR. The superior performance of the DenseNet-121 model highlights its effectiveness in this domain. The implementation of such automated methods can significantly improve the efficiency and accuracy of DR diagnosis, benefiting both healthcare providers and patients.


Assuntos
Aprendizado Profundo , Diabetes Mellitus , Retinopatia Diabética , Humanos , Retinopatia Diabética/diagnóstico por imagem , Redes Neurais de Computação , Cegueira , Pessoal de Saúde
17.
Sensors (Basel); 23(17), 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37688036

ABSTRACT

Some recent studies show that filters in convolutional neural networks (CNNs) have low color selectivity on datasets of natural scenes such as ImageNet. CNNs, bio-inspired by the visual cortex, are characterized by a hierarchical learning structure that appears to gradually transform the representation space. Inspired by the direct connection between the LGN and V4, which allows V4 to handle low-level information closer to the trichromatic input in addition to processed information from V2/V3, we propose adding a long skip connection (LSC) between the first and last blocks of the feature extraction stage to allow deeper parts of the network to receive information from shallower layers. This type of connection improves classification accuracy by combining simple visual and complex abstract features to create more color-selective ones. We have applied this strategy to classic CNN architectures and analyzed, quantitatively and qualitatively, the improvement in accuracy while focusing on color selectivity. The results show that, in general, skip connections improve accuracy, but the LSC improves it even more and enhances the color selectivity of the original CNN architectures. As a side result, we propose a new color representation procedure for organizing and filtering feature maps, making their visualization more manageable for qualitative color selectivity analysis.
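The long skip connection between the first and last feature-extraction blocks can be illustrated with a toy CNN; the channel counts and the concatenation-based merge below are assumptions for the sketch, not the exact modification applied to the classic architectures in the paper.

```python
# Toy CNN illustrating a long skip connection (LSC) that carries the output of
# the first block directly to the last block of the feature extractor.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.ReLU(inplace=True), nn.MaxPool2d(2))

class LSCNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.b1 = block(3, 32)                         # shallow, color-rich features
        self.b2 = block(32, 64)
        self.b3 = block(64, 128)
        # Last block receives deep features plus the (downsampled) first-block output.
        self.b4 = block(128 + 32, 128)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        f1 = self.b1(x)                                # (B, 32, H/2, W/2)
        deep = self.b3(self.b2(f1))                    # (B, 128, H/8, W/8)
        skip = nn.functional.adaptive_avg_pool2d(f1, deep.shape[-2:])
        out = self.b4(torch.cat([deep, skip], dim=1))  # long skip via concatenation
        return self.fc(out.mean(dim=(2, 3)))

print(LSCNet()(torch.randn(2, 3, 64, 64)).shape)       # torch.Size([2, 10])
```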

18.
Sensors (Basel); 23(3), 2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36772510

ABSTRACT

The Internet of Medical Things (IoMT) has revolutionized Ambient Assisted Living (AAL) by interconnecting smart medical devices. These devices generate large amounts of data without human intervention, and sophisticated learning-based models are required to extract meaningful information from this massive surge of data. In this context, Deep Neural Networks (DNNs) have proven to be a powerful tool for disease detection. Pulmonary Embolism (PE) is considered a leading cause of death, with a death toll of 180,000 per year in the US alone. It arises from a blood clot in the pulmonary arteries that blocks the blood supply to the lungs or part of a lung. Early diagnosis and treatment of PE could reduce the mortality rate. Doctors and radiologists prefer Computed Tomography (CT) scans as a first-hand tool, which contain 200 to 300 images per study. It is often difficult for doctors and radiologists to maintain concentration while going through all of these scans, which can result in misdiagnosis. Given this, there is a need for an automatic Computer-Aided Diagnosis (CAD) system to assist doctors and radiologists in decision-making. To develop such a system, in this paper we propose a deep learning framework based on DenseNet201 to classify PE into nine classes in CT scans. We utilize DenseNet201 as a feature extractor with customized fully connected decision-making layers. The model was trained on the Radiological Society of North America (RSNA) Pulmonary Embolism Detection Challenge (2020) Kaggle dataset and achieved promising results of 88%, 88%, 89%, and 90% in terms of accuracy, sensitivity, specificity, and Area Under the Curve (AUC), respectively.


Assuntos
Aprendizado Profundo , Embolia Pulmonar , Humanos , Tomografia Computadorizada por Raios X/métodos , Diagnóstico por Computador/métodos , Embolia Pulmonar/diagnóstico por imagem , Computadores , Sensibilidade e Especificidade
19.
Sensors (Basel); 23(1), 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36617076

ABSTRACT

This paper proposes a new deep learning (DL) framework for the analysis of lung diseases, including COVID-19 and pneumonia, from chest CT scans and X-ray (CXR) images. The framework is termed optimized DenseNet201 for lung diseases (LDDNet). The proposed LDDNet was developed by adding 2D global average pooling, dense and dropout layers, and batch normalization to the base DenseNet201 model. A 1024-unit ReLU-activated dense layer and a 256-unit dense layer with sigmoid activation are used. The hyper-parameters of the model, including the learning rate, batch size, epochs, and dropout rate, were tuned. Next, three lung disease datasets were formed from separate open-access sources: a CT scan dataset containing 1043 images, and two X-ray datasets comprising images of COVID-19-affected, pneumonia-affected, and healthy lungs, one imbalanced with 5935 images and the other balanced with 5002 images. The performance of each model was analyzed using the Adam, Nadam, and SGD optimizers. The best results were obtained for both the CT scan and CXR datasets using the Nadam optimizer. For the CT scan images, LDDNet showed a COVID-19-positive classification accuracy of 99.36%, with 100% precision, 98% recall, and a 99% F1 score. For the X-ray dataset of 5935 images, LDDNet provides 99.55% accuracy, 73% recall, 100% precision, and an 85% F1 score using the Nadam optimizer for detecting COVID-19-affected patients. For the balanced X-ray dataset, LDDNet provides 97.07% classification accuracy. For a given set of parameters, the performance of LDDNet is better than that of the existing ResNet152V2 and XceptionNet algorithms.


Assuntos
COVID-19 , Aprendizado Profundo , Pneumonia , Humanos , COVID-19/diagnóstico por imagem , Pneumonia/diagnóstico por imagem , Tomografia Computadorizada por Raios X , Algoritmos , Teste para COVID-19
20.
Sensors (Basel); 23(3), 2023 Jan 29.
Article in English | MEDLINE | ID: mdl-36772553

ABSTRACT

In this study, we develop a framework for an intelligent and self-supervised industrial pick-and-place operation in cluttered environments. Our target is to have the agent learn to perform prehensile and non-prehensile robotic manipulations to improve the efficiency and throughput of the pick-and-place task. To achieve this, we formulate the problem as a Markov decision process (MDP) and deploy a model-free, temporal-difference deep reinforcement learning (RL) algorithm known as the deep Q-network (DQN). We consider three actions in our MDP: 'grasping' from the prehensile manipulation category, and 'left-slide' and 'right-slide' from the non-prehensile manipulation category. Our DQN is composed of three fully convolutional networks (FCNs) based on the memory-efficient DenseNet-121 architecture, which are trained together without creating any bottlenecks. Each FCN corresponds to one discrete action and outputs a pixel-wise map of affordances for that action. Rewards are allocated after every forward pass, and backpropagation is carried out for weight tuning in the corresponding FCN. In this manner, non-prehensile manipulations are learned which can, in turn, lead to successful prehensile manipulations in the near future and vice versa, increasing the efficiency and throughput of the pick-and-place task. The Results section compares our approach with a baseline deep learning approach and a ResNet architecture-based approach, along with very promising test results at varying clutter densities across a range of complex scenario test cases.
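A minimal sketch of one such fully convolutional Q-value head follows: DenseNet-121 features reduced to a single-channel pixel-wise map and upsampled to input size, with one head per discrete action (grasp, left-slide, right-slide). The head structure, sizes, and action-selection rule are illustrative assumptions, not the authors' exact network.

```python
# Sketch of a fully convolutional affordance head: DenseNet-121 features are
# reduced to a single-channel pixel-wise Q map and upsampled to input size.
# One head per discrete action (grasp, left-slide, right-slide) is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class AffordanceFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.densenet121(weights=None).features   # (B, 1024, H/32, W/32)
        self.head = nn.Sequential(
            nn.Conv2d(1024, 128, 1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1))                                    # per-pixel Q value

    def forward(self, x):
        q = self.head(self.backbone(x))
        return F.interpolate(q, size=x.shape[-2:], mode="bilinear")  # full-resolution map

actions = {a: AffordanceFCN() for a in ("grasp", "left_slide", "right_slide")}
obs = torch.randn(1, 3, 224, 224)                                    # scene image stand-in
q_maps = {a: net(obs) for a, net in actions.items()}
best = max(q_maps, key=lambda a: q_maps[a].max().item())             # greedy action choice
print(best, q_maps[best].shape)                                       # e.g. (1, 1, 224, 224)
```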
