Results 1 - 10 of 10
1.
Brief Bioinform ; 24(4)2023 07 20.
Article in English | MEDLINE | ID: mdl-37253692

ABSTRACT

Classifying epitopes is essential because they are applied in various fields, including therapeutics, diagnostics, and peptide-based vaccines. Epitope mapping with peptides is the most widely used method for determining which epitope or peptide an antibody recognizes, but it is time-consuming and inefficient compared with computational alternatives. The availability of protein sequence data from laboratory procedures has driven the development of computational models that predict epitope binding using machine learning and deep learning (DL), and such prediction has become a crucial part of developing effective cancer immunotherapies. Because much existing research still struggles with low classification performance, this paper proposes an architecture intended to generalize across these cases. The proposed DL models, MITNet and MITNet-Fusion, use a fusion architecture that combines a Transformer with a convolutional neural network (CNN); combining the two enriches the feature space used to relate epitope labels to the binary classification task. The selected epitope-T-cell receptor (TCR) interactions are GILG, GLCT and NLVP, acquired from three databases: IEDB, VDJdb and McPAS-TCR. The input sequences were encoded using amino acid composition, dipeptide composition, a spectrum descriptor, and the combination of all three, called AADIP composition, before being fed to the DL architecture. To ensure consistency, fivefold cross-validation was performed using the area under the curve (AUC) metric. GILG, GLCT and NLVP achieved scores of 0.85, 0.87 and 0.86, respectively, outperforming prior architectures and similar deep learning models.
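The fusion idea described above can be sketched as two parallel branches over the same encoded peptide vector. The snippet below is a minimal, hypothetical PyTorch illustration, assuming a fixed-length AADIP feature vector; the 420-dimensional size, layer widths, and head are illustrative, not the authors' MITNet configuration.

```python
# Hypothetical sketch of a Transformer + CNN fusion for binary epitope-TCR
# classification; the AADIP feature length and layer sizes are assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, feat_dim=420, d_model=64, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)            # project AADIP vector
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.cnn = nn.Sequential(                            # 1-D CNN branch
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(32), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(d_model + 16 * 32, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                                    # x: (batch, feat_dim)
        t = self.transformer(self.proj(x).unsqueeze(1)).squeeze(1)
        c = self.cnn(x.unsqueeze(1))
        return torch.sigmoid(self.head(torch.cat([t, c], dim=1)))

probs = FusionClassifier()(torch.rand(8, 420))               # toy batch
```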


Subjects
Epitopes, T-Lymphocyte; Neural Networks, Computer; Amino Acid Sequence; Peptides/chemistry; Receptors, Antigen, T-Cell
2.
Sensors (Basel) ; 23(6)2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36991662

ABSTRACT

Farming is a fundamental driver of economic development in most regions of the world, yet agricultural labor has always been hazardous and can result in injury or even death. This risk encourages farmers to use proper tools, receive training, and work in a safe environment. A wearable device acting as an Internet of Things (IoT) subsystem can read sensor data and compute and transmit information. We investigated validation and simulation datasets to detect whether a farmer had an accident by applying a Hierarchical Temporal Memory (HTM) classifier, with each input derived from the quaternion feature that represents 3D rotation. The performance analysis showed 88.00% accuracy, precision of 0.99, recall of 0.04, F-score of 0.09, average Mean Square Error (MSE) of 5.10, Mean Absolute Error (MAE) of 0.19, and Root Mean Squared Error (RMSE) of 1.51 for the validation dataset, and 54.00% accuracy, precision of 0.97, recall of 0.50, F-score of 0.66, MSE of 0.06, MAE of 3.24, and RMSE of 1.51 for the Farming-Pack motion capture (mocap) dataset. The computational framework, with wearable-device technology connected to a ubiquitous system, and the statistical results demonstrate that the proposed method is feasible and effective for time-series data and usable in a real rural farming environment.
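As a rough illustration of the quaternion feature mentioned above, the sketch below converts IMU Euler angles into quaternions with SciPy; a generic scikit-learn classifier stands in for the HTM classifier, whose encoder and temporal-memory setup are not described in the abstract, and the angle units and toy labels are assumptions.

```python
# Hedged sketch: quaternion (3D rotation) features from IMU Euler angles,
# with a logistic-regression stand-in for the paper's HTM classifier.
import numpy as np
from scipy.spatial.transform import Rotation
from sklearn.linear_model import LogisticRegression

def quaternion_features(euler_deg):
    """euler_deg: (n_samples, 3) roll/pitch/yaw in degrees -> (n_samples, 4)."""
    return Rotation.from_euler("xyz", euler_deg, degrees=True).as_quat()

rng = np.random.default_rng(0)
X = quaternion_features(rng.uniform(-180, 180, size=(200, 3)))
y = rng.integers(0, 2, size=200)                 # 1 = accident, 0 = normal (toy)
clf = LogisticRegression().fit(X, y)
print("toy training accuracy:", clf.score(X, y))
```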


Subjects
Deep Learning; Internet of Things; Humans; Farmers; Farms; Agriculture
3.
Sensors (Basel) ; 23(3)2023 Jan 25.
Article in English | MEDLINE | ID: mdl-36772398

ABSTRACT

In the last decade, deep learning has enjoyed the spotlight as a game-changing addition to smart farming and precision agriculture. This development has been observed predominantly in developed countries, while in developing countries most farmers, especially those with smallholder farms, have not enjoyed such wide and deep adoption of these new technologies. In this paper we aim to improve the image classification component of smart farming and precision agriculture. Agricultural commodities tend to exhibit distinctive textural details on their surfaces, which we attempt to exploit. We propose a deep learning approach called the Selective Context Adaptation Network (SCANet), which performs feature enhancement by leveraging level-wise information and employing a context selection mechanism. By exploiting the contextual correlations in crop images, the proposed approach demonstrates the effectiveness of the context selection mechanism. Our scheme achieves 88.72% accuracy and outperforms existing approaches. The model is evaluated on a cocoa bean dataset constructed from a real cocoa bean industry scene in Indonesia.
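The abstract does not detail SCANet's context selection mechanism, so the following is only a generic, squeeze-and-excitation-style stand-in showing how global context can gate channel responses in PyTorch; module name and reduction ratio are assumptions.

```python
# Hedged sketch of a context-selection style gate: pooled global context
# modulates channel responses; not the authors' exact SCANet design.
import torch
import torch.nn as nn

class ContextSelect(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # global context
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):                                # x: (B, C, H, W)
        return x * self.gate(self.pool(x))               # emphasize informative channels

feats = torch.rand(2, 64, 32, 32)
print(ContextSelect(64)(feats).shape)                    # torch.Size([2, 64, 32, 32])
```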

4.
Sensors (Basel) ; 23(14)2023 Jul 22.
Article in English | MEDLINE | ID: mdl-37514907

ABSTRACT

Infertility has become a common global health problem, and unsurprisingly, many couples need medical assistance to achieve reproduction. Many human behaviors can lead to infertility, often manifested as unhealthy sperm, and assisted reproductive techniques require the selection of healthy sperm. Machine learning algorithms are therefore investigated in this research to modernize sperm classification and make its standards and decisions more accurate. In this study, we developed a deep learning fusion architecture called SwinMobile, which combines the Shifted Windows Vision Transformer (Swin) and MobileNetV3 into a unified feature space and separates sperm from impurities in SVIA Subset-C. The Swin Transformer provides long-range feature extraction, while MobileNetV3 extracts local features. We also explored incorporating an autoencoder into the architecture as an automatic noise-removal model. The model was tested on the SVIA, HuSHem, and SMIDS datasets, with comparison to state-of-the-art models based on F1-score and accuracy. Despite the datasets' different characteristics, the model classified sperm accurately and performed well in direct comparison with previous approaches: Xception on the SVIA dataset, the MC-HSH model on the HuSHem dataset, and Ilhan et al.'s model on the SMIDS dataset. The proposed model, especially SwinMobile-AE, has strong classification capability and exceeds the state of the art on all three datasets: SVIA (95.4% vs. 94.9%), HuSHem (97.6% vs. 95.7%), and SMIDS (91.7% vs. 90.9%). We propose that this deep learning approach to sperm classification is suitable for modernizing clinical practice, leveraging artificial intelligence to rival humans in accuracy, reliability, and speed of analysis. Based on these results across three datasets with different data sizes, class counts, and color spaces, the proposed model can advance the technology for classifying sperm morphology.
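A minimal sketch of the fusion concept, assuming timm backbones and a two-class (sperm vs. impurity) head: pooled Swin and MobileNetV3 features are concatenated and classified. This is not the authors' exact SwinMobile or SwinMobile-AE configuration, and the autoencoder denoising stage is omitted.

```python
# Hedged SwinMobile-style fusion sketch: concatenate pooled features from a
# Swin backbone and a MobileNetV3 backbone; backbone names are timm models,
# and the linear head and class count are assumptions.
import timm
import torch
import torch.nn as nn

class SwinMobileSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.swin = timm.create_model("swin_tiny_patch4_window7_224",
                                      pretrained=False, num_classes=0)
        self.mobile = timm.create_model("mobilenetv3_small_100",
                                        pretrained=False, num_classes=0)
        fused = self.swin.num_features + self.mobile.num_features
        self.head = nn.Linear(fused, num_classes)

    def forward(self, x):                        # x: (B, 3, 224, 224)
        return self.head(torch.cat([self.swin(x), self.mobile(x)], dim=1))

logits = SwinMobileSketch()(torch.rand(1, 3, 224, 224))
```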


Subjects
Artificial Intelligence; Infertility; Male; Humans; Reproducibility of Results; Semen; Spermatozoa
5.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684880

ABSTRACT

There have been several studies of hand gesture recognition for human-machine interfaces. Most early solutions were vision-based and often raised privacy concerns that make them unusable in some scenarios. To address these privacy issues, more and more non-vision-based hand gesture recognition techniques have been proposed. This paper proposes a dynamic hand gesture system based on 60 GHz FMCW radar that can be used for contactless device control. We receive the radar signals of hand gestures and transform them into human-understandable domains such as range, velocity, and angle; with these signatures, the system can be customized for different scenarios. We propose an end-to-end deep learning model (a neural network combined with long short-term memory) that extracts features from the transformed radar signals and classifies them into hand gesture labels. During training data collection, a camera is used only to support labeling of the hand gesture data. The accuracy of our model reaches 98%.
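The range and velocity signatures mentioned above are typically obtained with a two-stage FFT over each radar frame. The sketch below shows only that transformation step, with frame dimensions assumed; the CNN + LSTM classifier is not reproduced.

```python
# Hedged sketch of a range-Doppler map from raw FMCW chirps: a fast-time FFT
# gives range bins, a slow-time FFT across chirps gives velocity bins.
import numpy as np

def range_doppler(frame):
    """frame: (n_chirps, n_samples) complex IF samples -> magnitude map."""
    rng = np.fft.fft(frame, axis=1)                           # fast time -> range
    dop = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)    # slow time -> velocity
    return np.abs(dop)

frame = np.random.randn(64, 128) + 1j * np.random.randn(64, 128)   # toy frame
print(range_doppler(frame).shape)                                  # (64, 128)
```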


Subjects
Gestures; Recognition, Psychology; Humans; Memory, Long-Term; Ultrasonography, Doppler; Upper Extremity
6.
Sensors (Basel) ; 22(24)2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36560087

ABSTRACT

The use of computer vision in smart farming is becoming a trend in building agricultural automation schemes, and deep learning (DL) is known for its accuracy on computer vision tasks such as object detection and image classification. The strength of a deep learning model for smart farming, the Progressive Contextual Excitation Network (PCENet), was demonstrated in our recent work on classifying cocoa bean images. However, assessing its computational cost shows that the original model runs at only 0.101 s per image, or 9.9 FPS, on the Jetson Nano edge platform. This research therefore demonstrates a compression technique that accelerates the PCENet model by pruning filters. In our experiments, the compressed model reaches 16.7 FPS on the Jetson Nano while its accuracy is maintained at 86.1%, compared with 86.8% for the original model. Our approach is also more accurate than ResNet18, the state of the art, which reaches only 82.7%. On the corn leaf disease dataset, the compressed model achieves 97.5% accuracy, versus 97.7% for the original PCENet.
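The abstract does not state the pruning criterion, so the sketch below uses L1-norm filter ranking, a common choice, to show how a convolution layer can be rebuilt with fewer output filters in PyTorch; the keep ratio is an assumption.

```python
# Hedged sketch of filter pruning: keep the filters with the largest L1 norm
# and rebuild a thinner Conv2d layer (downstream layers would also need
# adjusting in a full model, which is not shown here).
import torch
import torch.nn as nn

def prune_conv_filters(conv, keep_ratio=0.5):
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))    # L1 norm per filter
    keep = torch.topk(scores, n_keep).indices
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

slim = prune_conv_filters(nn.Conv2d(3, 32, 3, padding=1), keep_ratio=0.5)
print(slim)                                                    # 16 output filters remain
```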


Subjects
Agriculture; Data Compression; Farms; Physical Phenomena; Automation
7.
ScientificWorldJournal ; 2014: 917060, 2014.
Article in English | MEDLINE | ID: mdl-25300280

ABSTRACT

As digitization becomes integrated into daily life, media such as video and audio are heavily transferred over the Internet. Voice-over-Internet Protocol (VoIP), the most popular and mature technology, has become a focus attracting much research and investment. However, most existing studies have focused on a one-to-one communication model in a homogeneous network rather than a one-to-many broadcasting model among diverse embedded devices in a heterogeneous network. In this paper, we present the implementation of a VoIP broadcasting service on the open-source Linphone client in a heterogeneous network environment that includes WiFi, 3G, and LAN networks. The proposed system, featuring VoIP broadcasting over heterogeneous networks, can be integrated with heterogeneous agile devices such as embedded devices or mobile phones. Because the broadcast can be received on modern smartphones and other embedded devices, users in areas unreachable by traditional AM/FM signals can still receive the broadcast voice through the IP network. Comprehensive evaluations are conducted to verify the effectiveness of the proposed implementation.
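As a toy illustration of the one-to-many delivery idea only, and not the Linphone/SIP/RTP stack actually used in the paper, the snippet below sends audio chunks to a UDP multicast group; the group address, port, and chunk size are arbitrary assumptions.

```python
# Hedged, minimal one-to-many audio sender over UDP multicast.
import socket
import struct

MCAST_GRP, MCAST_PORT = "224.1.1.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                struct.pack("b", 1))               # keep traffic on the local network

def broadcast_chunk(pcm_bytes):
    """Send one audio chunk to every listener joined to the multicast group."""
    sock.sendto(pcm_bytes, (MCAST_GRP, MCAST_PORT))

broadcast_chunk(b"\x00" * 320)                     # e.g. 20 ms of 8 kHz 16-bit PCM
```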


Subjects
Computer Communication Networks; Phonation; Cell Phone/instrumentation; Humans
8.
Int J Cardiovasc Imaging ; 40(4): 709-722, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38150139

ABSTRACT

Multilabel X-Ray image learning tasks generally contain much information on pathology co-occurrence and interdependency, which is very important for clinical diagnosis. The challenge is to accurately diagnose multiple diseases in a single X-Ray image, since the image yields features at multiple levels that differ from those in single-label detection. Various deep learning architectures have been proposed to address this challenge, improving classification performance and enriching diagnosis with multi-probability disease detection; the objective is accurate results and a fast inference system that supports quick diagnosis. To contribute to this state of the art, we designed a fusion architecture of CheXNet and a Feature Pyramid Network (FPN) to classify and discriminate multiple thoracic diseases from chest X-Rays. This design lets the model build a pyramid of feature maps at different spatial resolutions, capturing both low-level and high-level semantic information to handle multiple features. The model's effectiveness is evaluated on the NIH ChestX-ray14 dataset, using the Area Under the Curve (AUC) and accuracy metrics to compare against other cutting-edge approaches. The results show that our method outperforms other approaches and is promising for multilabel disease classification in chest X-Rays, with potential applications in clinical practice: we achieved an average AUC of 0.846 and an accuracy of 0.914, and the proposed architecture diagnoses an image in 0.013 s, faster than the latest approaches.
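A hedged sketch of the backbone-plus-pyramid idea: DenseNet-121 (CheXNet's backbone) feature maps are merged by torchvision's FeaturePyramidNetwork and pooled for 14-label prediction with independent sigmoids. The stage selection, pyramid width, and head are assumptions, not the authors' exact design.

```python
# Hedged CheXNet + FPN style sketch for multilabel chest X-ray classification.
from collections import OrderedDict
import timm
import torch
import torch.nn as nn
from torchvision.ops import FeaturePyramidNetwork

class ChestXraySketch(nn.Module):
    def __init__(self, num_labels=14, fpn_channels=128):
        super().__init__()
        self.backbone = timm.create_model("densenet121", pretrained=False,
                                          features_only=True)
        chans = self.backbone.feature_info.channels()
        self.fpn = FeaturePyramidNetwork(chans, fpn_channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(fpn_channels * len(chans), num_labels)

    def forward(self, x):
        feats = OrderedDict((str(i), f) for i, f in enumerate(self.backbone(x)))
        pyramid = self.fpn(feats)
        pooled = torch.cat([self.pool(p).flatten(1) for p in pyramid.values()], dim=1)
        return torch.sigmoid(self.head(pooled))     # independent disease probabilities

probs = ChestXraySketch()(torch.rand(1, 3, 224, 224))
```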


Subjects
Deep Learning; Predictive Value of Tests; Radiographic Image Interpretation, Computer-Assisted; Radiography, Thoracic; Humans; Reproducibility of Results; Databases, Factual; Datasets as Topic; Thoracic Diseases/diagnostic imaging; Thoracic Diseases/classification; Lung/diagnostic imaging
9.
PeerJ Comput Sci ; 9: e1403, 2023.
Article in English | MEDLINE | ID: mdl-37346695

ABSTRACT

Investor sentiment plays a crucial role in the stock market, and in recent years numerous studies have aimed to predict future stock prices by analyzing market sentiment obtained from social media or news. This study investigates the use of investor sentiment from social media, focusing on Stocktwits, a social media platform for investors. Using Stocktwits sentiment to predict stock price movements is challenging, however, because user-initiated sentiment labels are scarce and existing sentiment analyzers may misclassify neutral comments. To overcome these challenges, this study uses FinBERT, a pre-trained language model specifically designed to analyze the sentiment of financial text, and proposes an ensemble support vector machine to improve the accuracy of stock price movement predictions. It then predicts the future movement of the SPDR S&P 500 Index Exchange Traded Fund using a rolling-window approach to prevent look-ahead bias. Comparing various techniques for generating sentiment, our results show that using FinBERT for sentiment analysis yields the best results, with an F1-score 4-5% higher than other techniques. Additionally, the proposed ensemble support vector machine improves the accuracy of stock price movement predictions over the original support vector machine in a series of experiments.
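One plausible reading of the pipeline, sketched below with assumed names and toy data: FinBERT scores messages (the ProsusAI/finbert checkpoint is used here for illustration), aggregated sentiment becomes daily features, and a bagged SVM serves as one interpretation of the "ensemble support vector machine"; the rolling-window evaluation is omitted.

```python
# Hedged sketch: FinBERT sentiment scoring plus a bagged-SVM movement classifier
# on toy features; the feature layout and labels are assumptions.
import numpy as np
from transformers import pipeline
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

finbert = pipeline("text-classification", model="ProsusAI/finbert")
msgs = ["Bullish on SPY going into earnings", "Market looks weak today"]
scores = finbert(msgs)                       # e.g. [{'label': 'positive', 'score': ...}, ...]

# Toy daily features: [mean positive share, mean negative share, message count]
X = np.random.rand(100, 3)
y = np.random.randint(0, 2, 100)             # 1 = index closes up next day (toy)
ensemble_svm = BaggingClassifier(SVC(probability=True), n_estimators=10).fit(X, y)
print(ensemble_svm.predict(X[:5]))
```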

10.
Comput Biol Med ; 148: 105913, 2022 09.
Article in English | MEDLINE | ID: mdl-35940164

ABSTRACT

Chronic Obstructive Pulmonary Disease (COPD) is one of the most reliable and significant indicators for the early detection of lung cancer, the world's leading cause of cancer death. One detection approach is to analyze the Volatile Organic Compounds (VOCs) in exhaled breath using electronic noses (E-Noses), which have become emerging tools for breath analysis because of their potential and promising diagnostic technology. However, processing the E-Nose sensor signal so that it exposes information about the subject's condition is vital, and it is what most researchers strive to accomplish. To contribute to this field, we propose a Convolutional Neural Network (CNN) architecture that classifies COPD in smokers and non-smokers, healthy subjects, and smokers from E-Nose signals. Two models were constructed following the state of the art in E-Nose signal processing: one combines feature extraction with a classifier, and the other is a CNN that processes the raw signal directly. In addition, various feature extraction methods and classifiers (machine learning and CNN) used in prior research were investigated. Results with 3-fold and 5-fold cross-validation demonstrated that our proposed models, Kernel Principal Component Analysis (KPCA) with Fx-ConvNet and Pure-ConvNet, outperformed the alternatives; both reached maximum F1-scores with zero standard deviation, indicating consistent results. Further experiments also showed that KPCA increased the performance of some classifiers, with an average F1-score of 0.933 and a standard deviation of 0.068.
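A minimal sketch of the KPCA front end highlighted above, assuming an 8-sensor E-Nose, flattened time windows, and toy labels; a small MLP stands in for the paper's ConvNet classifier, which is not specified in the abstract.

```python
# Hedged sketch: kernel PCA compresses flattened E-Nose signals into a
# low-dimensional feature vector before classification.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8 * 60))       # 120 breath samples, 8 sensors x 60 time steps
y = rng.integers(0, 3, size=120)         # COPD / smoker / healthy (toy labels)

kpca = KernelPCA(n_components=16, kernel="rbf")
X_kpca = kpca.fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_kpca, y)
print("toy training accuracy:", clf.score(X_kpca, y))
```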


Subjects
Electronic Nose; Pulmonary Disease, Chronic Obstructive; Breath Tests; Exhalation; Healthy Volunteers; Humans; Neural Networks, Computer