Results 1 - 10 of 10
1.
Sensors (Basel); 21(4), 2021 Feb 20.
Article in English | MEDLINE | ID: mdl-33672476

ABSTRACT

Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and Reinforcement Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. This research, however, focuses on the cooperative fusion of supervised and Reinforcement Learning technologies for the autonomous navigation of land vehicles in a dynamic and unknown environment. Faster R-CNN, a supervised learning approach, identifies ambient environmental obstacles so the autonomous vehicle can maneuver unobstructed, while the training policies of Double Deep Q-Learning, a Reinforcement Learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world, and it exhibits overall efficiency and effectiveness in the maneuvering of autonomous land vehicles.
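For readers unfamiliar with the Double Deep Q-Learning component mentioned above, the following minimal Python sketch shows how Double DQN targets are computed for a batch of transitions: the online network selects the next action and the target network evaluates it. The array shapes, random stand-in values, and discount factor are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Compute Double DQN regression targets for a batch of transitions.

    q_online_next : (batch, n_actions) next-state Q-values from the online network
    q_target_next : (batch, n_actions) next-state Q-values from the target network
    rewards       : (batch,) immediate rewards
    dones         : (batch,) 1.0 if the episode ended, else 0.0
    """
    # Action selection uses the online network...
    best_actions = np.argmax(q_online_next, axis=1)
    # ...while action evaluation uses the target network; this decoupling reduces overestimation.
    evaluated = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated

# Random values stand in for network outputs in this toy example.
rng = np.random.default_rng(0)
targets = double_dqn_targets(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)),
                             rewards=np.ones(4), dones=np.zeros(4))
print(targets)
```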

2.
Sensors (Basel); 16(9), 2016 Sep 06.
Article in English | MEDLINE | ID: mdl-27608023

ABSTRACT

Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence for the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores is used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm achieved 83.03 percent prediction accuracy in an empirical performance study.
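As a concrete illustration of the HMM decoding step described above, the log-space Viterbi sketch below recovers the most likely hidden-state sequence from a discrete observation sequence. The two "stable"/"emergency" states, the observation symbols, and the probability tables are toy assumptions, not the SVI-trained parameters from the paper.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """obs: observation indices; start_p: (S,); trans_p: (S, S); emit_p: (S, O)."""
    log_s, log_t, log_e = (np.log(p) for p in (start_p, trans_p, emit_p))
    delta = log_s + log_e[:, obs[0]]            # best log-prob of each state at t = 0
    backptr = []
    for o in obs[1:]:
        scores = delta[:, None] + log_t         # (prev_state, next_state) scores
        backptr.append(np.argmax(scores, axis=0))
        delta = np.max(scores, axis=0) + log_e[:, o]
    path = [int(np.argmax(delta))]              # backtrack from the best final state
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy example: 2 latent states ("stable", "emergency") and 3 observation symbols.
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1], [0.3, 0.7]])
emit = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], start, trans, emit))
```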


Subjects
Assisted Living Facilities, Internet, Mental Health, Adult, Aged, Algorithms, Area Under Curve, Biosensing Techniques, Discriminant Analysis, Female, Humans, Male, Markov Chains, Middle Aged, Ambulatory Monitoring, Odds Ratio, Principal Component Analysis, ROC Curve, Young Adult
3.
Healthcare (Basel); 11(3), 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36766986

ABSTRACT

The coronavirus epidemic has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation, as well as the collapse of healthcare systems in some countries. Any automated COVID detection system that allows for fast detection of COVID-19 infection might be highly beneficial to healthcare services and people around the world. Molecular or antigen testing along with radiology X-ray imaging is now utilized in clinics to diagnose COVID-19. Nonetheless, due to the spike in coronavirus cases and hospital doctors' overwhelming workload, developing an AI-based automatic COVID detection system with high accuracy has become imperative. On X-ray images, the diagnosis of COVID-19, non-COVID viral pneumonia, and other lung opacities can be challenging. This research utilized artificial intelligence (AI) to deliver high-accuracy automated COVID-19 detection from normal chest X-ray images. Further, this study was extended to differentiate COVID-19 from normal, lung opacity and non-COVID viral pneumonia images. We employed three distinct pre-trained models, Xception, VGG19, and ResNet50, on a benchmark dataset of 21,165 X-ray images. Initially, we formulated COVID-19 detection as a binary classification problem to distinguish COVID-19 from normal X-ray images and obtained 97.5%, 97.5%, and 93.3% accuracy for Xception, VGG19, and ResNet50, respectively. We then focused on developing an efficient model for multi-class classification and obtained an accuracy of 75% for ResNet50, 92% for VGG19, and 93% for Xception. Although Xception and VGG19 performed nearly identically, Xception proved to be more efficient with its higher precision, recall, and F1 scores. Finally, we employed Explainable AI on each of the utilized models, which adds interpretability to our study. Furthermore, we conducted a comprehensive comparison of the models' explanations, and the study revealed that Xception is more precise in indicating the actual features that are responsible for a model's predictions. This addition of explainable AI will greatly benefit medical professionals, as they will be able to visualize how a model makes its predictions rather than having to trust our developed machine-learning models blindly.
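A hedged sketch of the binary COVID-19 vs. normal transfer-learning setup described above, using the pretrained Xception backbone available in tf.keras; the image size, head layers, optimizer settings, and data pipeline are assumptions for illustration, not the authors' exact configuration.

```python
import tensorflow as tf

# Reuse ImageNet features from Xception and train only a small classification head.
base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary: COVID-19 vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# For the 4-class extension (COVID-19, normal, lung opacity, non-COVID viral pneumonia),
# the head would instead be Dense(4, activation="softmax") with categorical cross-entropy.
```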

4.
Ultrasonics; 132: 107017, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37148701

ABSTRACT

Ultrasound imaging is a valuable tool for assessing the development of the fetal during pregnancy. However, interpreting ultrasound images manually can be time-consuming and subject to variability. Automated image categorization using machine learning algorithms can streamline the interpretation process by identifying stages of fetal development present in ultrasound images. In particular, deep learning architectures have shown promise in medical image analysis, enabling accurate automated diagnosis. The objective of this research is to identify fetal planes from ultrasound images with higher precision. To achieve this, we trained several convolutional neural network (CNN) architectures on a dataset of 12400 images. Our study focuses on the impact of enhanced image quality by adopting Histogram Equalization and Fuzzy Logic-based contrast enhancement on fetal plane detection using the Evidential Dempster-Shafer Based CNN Architecture, PReLU-Net, SqueezeNET, and Swin Transformer. The results of each classifier were noteworthy, with PreLUNet achieving an accuracy of 91.03%, SqueezeNET reaching 91.03% accuracy, Swin Transformer reaching an accuracy of 88.90%, and the Evidential classifier achieving an accuracy of 83.54%. We evaluated the results in terms of both training and testing accuracies. Additionally, we used LIME and GradCam to examine the decision-making process of the classifiers, providing explainability for their outputs. Our findings demonstrate the potential for automated image categorization in large-scale retrospective assessments of fetal development using ultrasound imaging.
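To make the contrast-enhancement step more concrete, the short OpenCV sketch below applies global histogram equalization and CLAHE to a grayscale ultrasound frame. The fuzzy-logic enhancement used in the paper is not reproduced here, and the file names and CLAHE parameters are placeholder assumptions.

```python
import cv2

# Hypothetical input frame, loaded as a single-channel grayscale image.
img = cv2.imread("fetal_plane.png", cv2.IMREAD_GRAYSCALE)

# Global histogram equalization spreads intensities over the full dynamic range.
hist_eq = cv2.equalizeHist(img)

# CLAHE equalizes per tile with a clip limit, which tends to preserve ultrasound texture better.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_eq = clahe.apply(img)

cv2.imwrite("fetal_plane_histeq.png", hist_eq)
cv2.imwrite("fetal_plane_clahe.png", clahe_eq)
```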


Subjects
Algorithms, Neural Networks (Computer), Pregnancy, Female, Humans, Retrospective Studies, Machine Learning, Ultrasonography
5.
Sci Rep; 12(1): 14122, 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-35986065

ABSTRACT

Recognizing the emotional state of humans from brain signals is an active research domain with several open challenges. In this research, we propose a signal spectrogram image-based CNN-XGBoost fusion method for recognizing three dimensions of emotion, namely arousal (calm or excitement), valence (positive or negative feeling) and dominance (without control or empowered). We used a benchmark dataset called DREAMER, in which EEG signals were collected from multiple stimuli along with self-evaluation ratings. In the proposed method, we first calculate the Short-Time Fourier Transform (STFT) of the EEG signals and convert it into RGB images to obtain the spectrograms. We then train a two-dimensional Convolutional Neural Network (CNN) on the spectrogram images and retrieve features from a dense layer of the trained network. An Extreme Gradient Boosting (XGBoost) classifier is applied to the extracted CNN features to classify the signals along the arousal, valence and dominance dimensions of human emotion. We compare our results with feature fusion-based state-of-the-art approaches to emotion recognition. To do this, we applied various feature extraction techniques to the signals, including the Fast Fourier Transform, the Discrete Cosine Transform, Poincare features, Power Spectral Density, Hjorth parameters and several statistical features. Additionally, we used Chi-square and Recursive Feature Elimination techniques to select the discriminative features. We formed feature vectors by applying feature-level fusion, and applied Support Vector Machine (SVM) and Extreme Gradient Boosting (XGBoost) classifiers to the fused features to classify different emotion levels. The performance study shows that the proposed spectrogram image-based CNN-XGBoost fusion method outperforms the feature fusion-based SVM and XGBoost methods. The proposed method obtained an accuracy of 99.712% for arousal, 99.770% for valence and 99.770% for dominance in human emotion detection.
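The sketch below illustrates the spectrogram step of the pipeline: one EEG channel is converted to an STFT log-magnitude image that a 2-D CNN can consume. The synthetic signal, the 128 Hz sampling rate, and the window length are assumptions for demonstration rather than values taken from the paper; the later CNN-feature extraction and XGBoost stage are not shown.

```python
import numpy as np
from scipy.signal import stft
import matplotlib.pyplot as plt

fs = 128                                    # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # stand-in EEG channel

f, times, Zxx = stft(eeg, fs=fs, nperseg=256)
power_db = 20 * np.log10(np.abs(Zxx) + 1e-12)    # log-magnitude spectrogram

# Save the spectrogram as an RGB image, the form consumed by the 2-D CNN.
plt.pcolormesh(times, f, power_db, shading="gouraud")
plt.axis("off")
plt.savefig("eeg_spectrogram.png", bbox_inches="tight", pad_inches=0)
```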


Subjects
Electroencephalography, Neural Networks (Computer), Arousal, Electroencephalography/methods, Emotions, Humans, Support Vector Machine
6.
Sci Rep; 12(1): 11440, 2022 Jul 06.
Article in English | MEDLINE | ID: mdl-35794172

ABSTRACT

Renal failure, a public health concern, and the scarcity of nephrologists around the globe have necessitated the development of an AI-based system to auto-diagnose kidney diseases. This research deals with three major categories of renal disease, kidney stones, cysts, and tumors, and gathered and annotated a total of 12,446 whole-abdomen and urogram CT images in order to construct an AI-based kidney disease diagnostic system and contribute to the AI community's research scope, e.g., modeling a digital twin of renal functions. The collected images were subjected to exploratory data analysis, which revealed that the images from all of the classes had the same type of mean color distribution. Furthermore, six machine learning models were built: three based on state-of-the-art vision transformer variants, EANet, CCT, and the Swin Transformer, and three based on the well-known deep learning models ResNet, VGG16, and Inception v3, which were adjusted in their last layers. While the VGG16 and CCT models performed admirably, the Swin Transformer outperformed all of them with an accuracy of 99.30 percent. The F1 score, precision, and recall comparison reveals that the Swin Transformer outperforms all other models and is the quickest to train. The study also opened the black box of the VGG16, ResNet50, and Inception models, demonstrating that VGG16 is superior to ResNet50 and Inception v3 in terms of highlighting the relevant anatomical abnormalities. We believe that the superior accuracy of our Swin Transformer-based model and the VGG16-based model can both be useful in diagnosing kidney tumors, cysts, and stones.
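A hedged sketch of the "adjusted in their last layers" idea for the CNN baselines: a pretrained torchvision backbone is reused and only its classification head is replaced for the four kidney classes (normal, cyst, stone, tumor). The weights enum, frozen-feature choice, and class count are assumptions about a typical setup, not the authors' training code.

```python
import torch.nn as nn
from torchvision import models

# Load VGG16 with ImageNet weights and freeze the convolutional feature extractor.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False

# VGG16's classifier ends in a 1000-way ImageNet layer; swap it for a 4-way kidney head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 4)
```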


Subjects
Cysts, Kidney Neoplasms, Humans, Kidney/diagnostic imaging, Kidney Neoplasms/diagnostic imaging, Machine Learning, Neural Networks (Computer), X-Ray Computed Tomography
7.
PLoS One; 17(12): e0279262, 2022.
Article in English | MEDLINE | ID: mdl-36538513

ABSTRACT

The Recency, Frequency, and Monetary model, also known as the RFM model, is a popular and widely used business model for determining beneficial client segments and analyzing profit. It is also recommended and frequently used in superstores to identify customer segments and increase profit margins. Later, the Length, Recency, Frequency, and Monetary model, also known as the LRFM model, was introduced as an improved version of the RFM model to identify more relevant and exact consumer groups for profit maximization. Superstores carry a varying number of different products, yet in the RFM and LRFM models the relationship between profit and purchased quantity has never been investigated. Therefore, this paper proposes an efficient customer segmentation model, namely LRFMV (Length, Recency, Frequency, Monetary and Volume), and studies the profit-quantity relationship. A new dimension, V (volume), has been added to the existing LRFM model to show a direct profit-quantity relationship in customer segmentation. Volume is derived by calculating the average number of products purchased by a frequent superstore client in a single day. The data obtained from feature extraction of the LRFMV model are then clustered using conventional K-means, K-Medoids, and Mini Batch K-means methods. The results obtained from the three algorithms are compared, and the K-means algorithm is chosen for the superstore dataset of the proposed LRFMV model. All clusters created using these three algorithms are evaluated in the LRFMV model, and a close relationship between profit and volume is observed. A clear profit-quantity relationship of items had not been demonstrated in any prior study on the RFM and LRFM models: customer groupings aimed at profit maximization existed previously, but there was no clear and direct depiction of the profit and quantity of sold items. This study applied unsupervised machine learning to investigate the patterns, trends, and correlations between volume and profit. The traits of all the clusters are analyzed using a Customer-Classification Matrix, with LRFMV values larger or smaller than the overall average identified as each cluster's traits. The performance of the proposed LRFMV model is compared with the legacy RFM and LRFM customer segmentation models. The outcome shows that the LRFMV model creates precise customer segments with the same number of customers while yielding greater profit.
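An illustrative sketch of deriving LRFMV-style features from a transactions table and clustering them with K-means. The file name, column names (customer_id, order_date, profit, quantity), the simplified volume definition (average quantity per purchase record), and the choice of k are assumptions for demonstration, not the paper's superstore schema or tuning.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

tx = pd.read_csv("superstore_transactions.csv", parse_dates=["order_date"])  # hypothetical data
now = tx["order_date"].max()

lrfmv = tx.groupby("customer_id").agg(
    first=("order_date", "min"),
    last=("order_date", "max"),
    frequency=("order_date", "count"),       # F: number of purchases
    monetary=("profit", "sum"),              # M: total profit contributed
    volume=("quantity", "mean"),             # V: simplified average quantity per purchase
)
lrfmv["length"] = (lrfmv["last"] - lrfmv["first"]).dt.days    # L: customer lifetime in days
lrfmv["recency"] = (now - lrfmv["last"]).dt.days              # R: days since last purchase

features = lrfmv[["length", "recency", "frequency", "monetary", "volume"]]
X = StandardScaler().fit_transform(features)
lrfmv["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(lrfmv.groupby("cluster")[["monetary", "volume"]].mean())   # profit vs. volume per segment
```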


Subjects
Commerce, Consumer Behavior, Humans, Algorithms, Phenotype
8.
J Imaging; 8(9), 2022 Aug 26.
Article in English | MEDLINE | ID: mdl-36135395

ABSTRACT

Dengue is a viral disease that primarily affects tropical and subtropical regions and is especially prevalent in South-East Asia. This mosquito-borne disease sometimes triggers nationwide epidemics, which result in a large number of fatalities. Most of these cases involve the development of Dengue Haemorrhagic Fever (DHF), a large portion of which are detected among children under the age of ten, with severe conditions often progressing to a critical state known as Dengue Shock Syndrome (DSS). In this study, we analysed two separate datasets from two different countries, Vietnam and Bangladesh, which we refer to as VDengu and BDengue, respectively. Because the VDengu dataset is structured, supervised learning models were effective for predictive analysis; among them, the decision-tree-based classifier XGBoost produced the best outcome. Furthermore, Shapley Additive Explanation (SHAP) was used on the XGBoost model to assess the significance of the individual attributes of the dataset. For the significant attributes, we applied the SHAP dependence plot to identify the range of each attribute associated with the number of DHF or DSS cases. In parallel, the dataset from Bangladesh was unstructured; therefore, we applied an unsupervised learning technique, i.e., hierarchical clustering, to find clusters of vital blood components of the patients according to their complete blood count reports. The clusters were further analysed to find the attributes in the dataset that led to DSS or DHF.
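A hedged sketch of the XGBoost + SHAP analysis on the structured (VDengu-style) data: fit a tree classifier, then inspect per-attribute contributions with a summary and dependence plot. The file name, target column, and the feature name "platelet_count" are assumptions for illustration, not the actual dataset schema.

```python
import pandas as pd
import xgboost as xgb
import shap

df = pd.read_csv("vdengu.csv")                      # hypothetical structured dataset
X, y = df.drop(columns=["severe_dengue"]), df["severe_dengue"]

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X, y)

explainer = shap.TreeExplainer(model)               # efficient SHAP values for tree ensembles
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X)                   # overall attribute significance
shap.dependence_plot("platelet_count", shap_values, X)   # attribute range vs. predicted risk
```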

9.
PLoS One; 11(9): e0162702, 2016.
Article in English | MEDLINE | ID: mdl-27635654

ABSTRACT

Research on video-based facial expression recognition (FER) systems has exploded in the past decade. However, most previous methods work well only when they are trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and the physical characteristics of the people differ from one dataset to another, and considering a single dataset keeps the variance resulting from these differences to a minimum. A robust FER system that can work across several datasets is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. In this regard, the major contribution is made in the recognition module, which uses a maximum entropy Markov model (MEMM) for expression recognition. In this model, the human expression states are modeled as the states of an MEMM, with the video-sensor observations treated as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on these observations. Lastly, an algorithm is designed to predict the expression state from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets, and a weighted average accuracy of 97% is achieved across all datasets.
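To show how MEMM decoding differs from the HMM case, the minimal sketch below runs Viterbi with transition distributions conditioned on the current observation, i.e., scores built from P(state_t | state_{t-1}, obs_t). The two expression states, the observation symbols, the tiny probability tables, and the use of a precomputed initial distribution are illustrative assumptions, not the paper's trained model or its modified Viterbi variant.

```python
import numpy as np

def memm_viterbi(obs, init_p, cond_trans):
    """obs: observation indices; init_p: (S,), roughly P(s_1 | o_1);
    cond_trans[o]: (S, S) matrix of P(s_t | s_{t-1}, o_t = o)."""
    delta = np.log(init_p)
    backptr = []
    for o in obs[1:]:
        scores = delta[:, None] + np.log(cond_trans[o])   # (prev_state, next_state)
        backptr.append(np.argmax(scores, axis=0))
        delta = np.max(scores, axis=0)
    path = [int(np.argmax(delta))]                        # backtrack from the best final state
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy example: 2 expression states and 2 observation symbols.
init = np.array([0.7, 0.3])
cond = {0: np.array([[0.8, 0.2], [0.4, 0.6]]),
        1: np.array([[0.3, 0.7], [0.1, 0.9]])}
print(memm_viterbi([0, 1, 1], init, cond))
```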


Subjects
Entropy, Facial Expression, Facial Recognition, Markov Chains, Theoretical Models, Humans
10.
PLoS One; 11(8): e0160366, 2016.
Article in English | MEDLINE | ID: mdl-27494334

ABSTRACT

The aim of this research is to explore the factors influencing management decisions to adopt a human resource information system (HRIS) in the hospital industry of Bangladesh, an emerging developing country. To understand this issue, this paper integrates two prominent adoption theories: the Human-Organization-Technology fit (HOT-fit) model and the Technology-Organization-Environment (TOE) framework. Thirteen factors under four dimensions were investigated to explore their influence on HRIS adoption decisions in hospitals. Employing a non-probability sampling method, a total of 550 structured questionnaires were distributed among HR executives of 92 private hospitals in Bangladesh. Of these, 383 usable questionnaires were returned, suggesting a valid response rate of 69.63%. We classified the sample into three core groups based on initial HRIS implementation, namely adopters, prospectors, and laggards. The obtained results identify the five most critical factors, i.e., IT infrastructure, top management support, IT capabilities of staff, perceived cost, and competitive pressure. Moreover, among the proposed four dimensions, the technological dimension is the most significant, followed by the organisational, human, and environmental dimensions. Lastly, the study found significant differences in all factors across the different adopter groups. The study results also offer constructive proposals to researchers, hospitals, and the government to enhance the likelihood of adopting HRIS. The present study has important implications for understanding HRIS implementation in developing countries.


Subjects
Hospitals, Information Systems/organization & administration, Human Resource Management/methods, Adult, Attitude to Computers, Bangladesh, Diffusion of Innovation, Female, Humans, Male, Middle Aged, Reproducibility of Results, Surveys and Questionnaires, Human Resources