ABSTRACT
The explosive growth and wide proliferation of mobile devices, the majority of which are smartphones, led to the inception of several novel and intuitive services, including on-the-go services, online customer services, and location-based services (LBS) [...].
ABSTRACT
Indoor positioning and localization have been among the most widely researched areas during the last decade. The wide proliferation of smartphones and the availability of high-speed internet have given rise to numerous location-based services. Given the importance of precise location information, many sensors are embedded into modern smartphones. Besides Wi-Fi positioning, a rich variety of technologies have been introduced or adopted for indoor positioning, such as ultra-wideband, infrared, radio frequency identification, Bluetooth beacons, pedestrian dead reckoning, and the magnetic field. However, special emphasis is placed on infrastructure-less approaches such as Wi-Fi and magnetic field-based positioning, as they do not require additional infrastructure. Magnetic field positioning is an attractive solution for indoor environments; yet the lack of public benchmarks and the selection of suitable benchmarks remain major challenges. While several benchmarks have been introduced over time, the selection criteria for a benchmark are not properly defined, which leads to positioning results that lack generalization. This study analyzes various public benchmarks for magnetic field positioning and highlights their pros and cons for evaluating positioning algorithms. The concepts of DUST (device, user, space, time) and DOWTS (dynamicity, orientation, walk, trajectory, and sensor fusion) are introduced, which divide the characteristics of magnetic field datasets into basic and advanced groups, and the publicly available datasets are discussed accordingly.
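To make the fingerprinting workflow behind most magnetic field positioning benchmarks concrete, here is a minimal illustrative sketch of offline-survey/online-query localization with k-nearest neighbors; the readings, coordinates, and parameter values are hypothetical and are not taken from any of the surveyed datasets.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Offline survey phase (hypothetical): magnetic readings at known (x, y) spots.
fingerprints = np.array([        # [Bx, By, Bz] in microtesla, illustrative values
    [22.1, -5.3, 41.0],
    [25.4, -4.8, 39.2],
    [28.0, -6.1, 37.5],
    [24.2, -7.0, 42.3],
])
positions = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [0.0, 2.0]])

# Online phase: average the positions of the k nearest references in signal space.
knn = KNeighborsRegressor(n_neighbors=2).fit(fingerprints, positions)
live_reading = np.array([[24.8, -5.0, 40.1]])
print(knn.predict(live_reading))   # estimated (x, y)
```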
ABSTRACT
Due to recent developments in technology, the complexity of multimedia has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to rank images by close similarity in terms of visual appearance. Color, shape, and texture are examples of low-level image features. Features play a significant role in image processing: the compact, powerful representation of an image is known as a feature vector, and feature extraction techniques are applied to obtain features useful for classifying and recognizing images. As features define the behavior of an image, they determine its footprint in terms of storage, classification efficiency, and, of course, time consumption. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios a given feature extraction technique works best. The effectiveness of any CBIR approach rests fundamentally on feature extraction; in image processing tasks such as object recognition and image retrieval, the feature descriptor is one of the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed method for image retrieval is built on YCbCr color features combined with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of a specific wavelet function for image retrieval. The proposed algorithm is implemented and tested on the Wang image database. For retrieval, an Artificial Neural Network (ANN) is applied to this standard CBIR dataset. The performance of the proposed descriptors is assessed by computing precision and recall values and comparing them with other proposed methods to demonstrate the superiority of our method, which outperforms existing research in terms of average precision and recall.
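As an illustration of such a descriptor, the sketch below combines YCbCr color statistics, a Canny edge histogram, and DWT sub-band energies into a single feature vector using OpenCV and PyWavelets; the bin counts, thresholds, and wavelet choice are assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np
import pywt

def extract_features(path, wavelet="db1"):
    """Illustrative CBIR feature vector: YCbCr stats + edge histogram + DWT energy."""
    bgr = cv2.imread(path)
    ycc = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)    # OpenCV's YCbCr representation
    color = [float(ycc[..., c].mean()) for c in range(3)] + \
            [float(ycc[..., c].std()) for c in range(3)]

    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)               # illustrative thresholds
    edge_hist, _ = np.histogram(edges, bins=2)      # edge vs. non-edge pixel counts
    edge_hist = edge_hist / edge_hist.sum()

    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
    dwt_energy = [float(np.mean(np.abs(c))) for c in (cA, cH, cV, cD)]

    return np.array(color + list(edge_hist) + dwt_energy)
```

Retrieval would then rank database images by, for example, Euclidean distance between such vectors, or feed the vectors to the ANN classifier described above.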
Subjects
Artificial Intelligence, Image Interpretation, Computer-Assisted/methods, Information Storage and Retrieval/methods, Wavelet Analysis, Algorithms, Humans, Pattern Recognition, Automated/methods
ABSTRACT
Favourable genotypes of the IFNL3 polymorphisms, CC for rs12979860 and TT for rs8099917, are strongly associated with interferon/ribavirin treatment outcome in hepatitis C virus (HCV) patients with genotypes 1 and 4. In contrast, conflicting results have been reported for patients with HCV genotypes 2 and 3. We therefore investigated the association of IFNL3 with sustained virological response (SVR) after treatment, to ascertain the predictive value of IFNL3 single-nucleotide polymorphisms (SNPs) in HCV patients with genotype 3. For this purpose, we genotyped five IFNL3 SNPs, rs12980275, rs12979860, rs9109886, rs8099917 and rs7248668, in HCV patients with genotype 3 and assessed their association with SVR, individually and in haplotypes. Interestingly, the IFNL3 SNPs we genotyped showed no association with SVR following treatment, either individually or in haplotypes, indicating that genotyping IFNL3 SNPs has limited predictive value in HCV patients with genotype 3. We therefore propose that IFNL3 genotyping can be excluded from a patient's pre-treatment workup for subsequent treatment choice. This would greatly reduce the economic burden for HCV patients with genotype 3 in resource-limited regions, especially South Asia, where genotype 3 is predominant.
Subjects
Antiviral Agents/administration & dosage, Hepacivirus/drug effects, Hepacivirus/genetics, Hepatitis C/drug therapy, Hepatitis C/genetics, Interleukins/genetics, Polymorphism, Single Nucleotide, Ribavirin/administration & dosage, Adult, Female, Genotype, Hepacivirus/classification, Hepacivirus/isolation & purification, Hepatitis C/metabolism, Humans, Interferons, Interleukins/metabolism, Male, Middle Aged, Treatment Outcome, Young Adult
ABSTRACT
The rising incidence and notable public health consequences of skin cancer, especially its most challenging form, melanoma, have created an urgent demand for more advanced approaches to disease management. The integration of modern computer vision methods into clinical procedures offers the potential to enhance skin cancer detection. The UNet model has gained prominence as a valuable tool for this objective, continuously evolving to tackle the difficulties associated with the inherent diversity of dermatological images. These challenges stem from diverse medical origins and are further complicated by variations in lighting, patient characteristics, and hair density. In this work, we present an innovative end-to-end trainable network crafted for skin cancer segmentation. This network comprises an encoder-decoder architecture, a novel feature extraction block, and a densely connected multi-rate atrous convolution block. We evaluated the performance of the proposed lightweight skin cancer segmentation network (LSCS-Net) on three widely used benchmark datasets for skin lesion segmentation: ISIC 2016, ISIC 2017, and ISIC 2018. The generalization capability of LSCS-Net is demonstrated by its excellent performance on breast cancer and thyroid nodule segmentation datasets. The empirical findings confirm that LSCS-Net attains state-of-the-art results, as demonstrated by a significantly elevated Jaccard index.
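A densely connected multi-rate atrous convolution block of the kind described can be sketched in PyTorch as follows; the dilation rates and channel widths are assumptions for illustration, not the published LSCS-Net configuration.

```python
import torch
import torch.nn as nn

class DenseAtrousBlock(nn.Module):
    """Sketch: densely connected atrous (dilated) convolutions at multiple rates,
    so each branch sees the concatenation of the input and all earlier outputs."""
    def __init__(self, channels=64, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList()
        in_ch = channels
        for r in rates:
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=r, dilation=r),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ))
            in_ch += channels              # dense connectivity grows the input width
        self.fuse = nn.Conv2d(in_ch, channels, 1)   # 1x1 conv fuses all scales

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))
```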
Subjects
Breast Neoplasms, Melanoma, Skin Neoplasms, Humans, Female, Skin Neoplasms/diagnostic imaging, Melanoma/diagnostic imaging, Benchmarking, Hair, Image Processing, Computer-Assisted
ABSTRACT
Cancer is an invasive and malignant growth of cells and is known to be one of the most fatal diseases. Its early detection is essential for decreasing the mortality rate and increasing the probability of survival. This study presents an efficient machine learning approach based on the support vector machine (SVM) to diagnose and classify tumors as malignant or benign using online lymphographic data. Further, two types of neural network architectures are also implemented to evaluate the performance of the proposed SVM-based approach. The optimal structures of the classifiers are obtained by varying the architecture, topology, learning rate, and kernel function and recording the resulting accuracy. The classifiers are trained on preprocessed data examples after noise removal and tested on unknown cases to diagnose each example as positive or negative. Further, the positive cases are classified into different stages, including metastases, malign lymph, and fibrosis. The results are evaluated against feed-forward and generalized regression neural networks. The proposed SVM-based approach significantly improves early detection and classification accuracy in comparison with experienced physicians and other machine learning approaches. The proposed approach is robust and can perform sub-class division for multipurpose tasks. Experimental results demonstrate that the two-class SVM gives the best results and can effectively be used for cancer classification, outperforming all other classifiers with an average accuracy of 94.90%.
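For readers unfamiliar with the approach, a minimal scikit-learn sketch of a two-class SVM pipeline is shown below; it uses a bundled dataset as a stand-in for the lymphographic data, and the kernel and parameter values are illustrative rather than the tuned ones from the study.

```python
from sklearn.datasets import load_breast_cancer   # stand-in for the lymphographic data
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM; the study tunes kernel and parameters, so these are placeholders.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```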
Subjects
Neoplasms, Support Vector Machine, Algorithms, Neural Networks, Computer, Machine Learning, Probability, Neoplasms/diagnosis
ABSTRACT
Cardiac arrhythmia is one of the prime causes of death globally. Early diagnosis of heart arrhythmia is crucial to provide timely medical treatment. Heart arrhythmias are diagnosed by analyzing the electrocardiogram (ECG) of patients, but manual analysis of the ECG is time-consuming and challenging. Hence, effective automated detection of heart arrhythmias is important for producing reliable results. Different deep-learning techniques to detect heart arrhythmias, such as the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Transformer, and hybrid CNN-LSTM, have been proposed. However, these techniques, when used individually, are not sufficient to effectively learn multiple features from the ECG signal. The fusion of CNN and LSTM overcomes the limitations of CNN in existing studies, as CNN-LSTM hybrids can extract spatiotemporal features. However, LSTMs suffer from long-range dependency issues, due to which certain features may be ignored. Hence, to compensate for the drawbacks of the existing models, this paper proposes a more comprehensive feature fusion technique that merges CNN, LSTM, and Transformer models. The fusion of these models facilitates learning spatial, temporal, and long-range dependency features, helping to capture different attributes of the ECG signal. These features are subsequently passed to a majority voting classifier equipped with three traditional base learners, which are enriched with deep features instead of handcrafted features. Experiments are performed on the MIT-BIH Arrhythmia Database and the model's performance is compared with that of state-of-the-art models. Results reveal that the proposed model performs better than the existing models, yielding an accuracy of 99.56%.
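A sketch of the three-branch feature fusion idea in PyTorch is given below; the layer sizes, depths, and beat length are assumptions, not the paper's exact architecture. The fused vectors would then be fed to a majority-voting ensemble of three traditional learners (e.g., scikit-learn's VotingClassifier).

```python
import torch
import torch.nn as nn

class FusedECGFeatures(nn.Module):
    """Sketch: CNN, LSTM, and Transformer branches extract spatial, temporal,
    and long-range features from a 1-D ECG beat, then concatenate them."""
    def __init__(self, d=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # spatial branch
            nn.Conv1d(1, d, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.lstm = nn.LSTM(input_size=1, hidden_size=d, batch_first=True)
        self.proj = nn.Linear(1, d)                    # lift samples to model width
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                              # x: (batch, beat_len)
        x = x.unsqueeze(1)                             # (batch, 1, beat_len)
        f_cnn = self.cnn(x).squeeze(-1)                # (batch, d) spatial
        seq = x.transpose(1, 2)                        # (batch, beat_len, 1)
        _, (h, _) = self.lstm(seq)
        f_lstm = h[-1]                                 # (batch, d) temporal
        f_tr = self.transformer(self.proj(seq)).mean(dim=1)  # (batch, d) long-range
        return torch.cat([f_cnn, f_lstm, f_tr], dim=1)       # (batch, 3 * d)
```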
Subjects
Arrhythmias, Cardiac, Signal Processing, Computer-Assisted, Humans, Arrhythmias, Cardiac/diagnosis, Neural Networks, Computer, Electrocardiography/methods, Machine Learning, Algorithms
ABSTRACT
With the outbreak of the COVID-19 pandemic, social isolation and quarantine have become commonplace across the world. IoT health monitoring solutions eliminate the need for regular doctor visits and interactions between patients and medical personnel. Many patients in wards or intensive care units require continuous monitoring of their health. Continuous patient monitoring is a hectic practice in hospitals with limited staff, and in a pandemic situation like COVID-19 it becomes much more difficult when hospitals are working at full capacity and there is still a risk of medical workers being infected. In this study, we propose an Internet of Things (IoT)-based patient health monitoring system that collects real-time data on important health indicators, namely pulse rate, blood oxygen saturation, and body temperature, and can be expanded to include more parameters. Our system comprises a hardware component that collects and transmits data from sensors to a cloud-based storage system, where it can be accessed and analyzed by healthcare specialists. An ESP-32 microcontroller interfaces with the multiple sensors and wirelessly transmits the collected data to the cloud storage system. A pulse oximeter is utilized to measure blood oxygen saturation, a temperature sensor to measure body temperature, and a heart rate monitor to measure pulse rate. A web-based interface is also implemented, allowing healthcare practitioners to access and visualize the collected data in real time, making remote patient monitoring easier. Overall, our IoT-based patient health monitoring system represents a significant advancement in remote patient monitoring, allowing healthcare practitioners to access real-time data on important health metrics and detect potential health issues before they escalate.
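A minimal MicroPython sketch of the sensing-and-upload loop on an ESP-32 might look as follows; the sensor readings are placeholders, and the Wi-Fi credentials and cloud endpoint are hypothetical, since the paper's exact firmware is not given.

```python
# MicroPython sketch for an ESP-32 (illustrative only; drivers, credentials,
# and the cloud endpoint are assumptions, not the paper's configuration).
import time
import network
import urequests

def wifi_connect(ssid, password):
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(ssid, password)
    while not wlan.isconnected():
        time.sleep(0.5)

def read_vitals():
    # Placeholder values; a real build would read a pulse oximeter,
    # a temperature sensor, and a heart rate monitor over I2C here.
    return {"pulse_bpm": 72, "spo2_pct": 97.5, "temp_c": 36.8}

wifi_connect("<ssid>", "<password>")
while True:
    r = urequests.post("https://example.com/api/vitals", json=read_vitals())
    r.close()
    time.sleep(30)   # upload interval
```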
Subjects
Cloud Computing, Internet of Things, Humans, Pandemics, Monitoring, Physiologic, Information Storage and Retrieval
ABSTRACT
Suitable temporal and spectral processing of electrocardiogram (ECG) signals can facilitate the visual interpretation and discrimination of known patterns for classification. This paper proposes a non-invasive hybrid neural network and time-frequency (TF) based method to detect and classify commonly found cardiac abnormalities in ECG signals, including congestive heart failure, ventricular tachyarrhythmia, intracardiac atrial fibrillation, arrhythmia, malignant ventricular ectopy, normal sinus rhythm, and postictal heart rate oscillations in partial epilepsy. Non-stationary raw ECG signals are collected from 'PhysioBank', an online healthcare dataset source containing physiologic signals. These temporal signals are processed through the Wigner-Ville distribution to produce high-resolution, concentrated TF images depicting the specific visual patterns of cardiac abnormalities. The TF images are used, with the help of medical experts, to extract abnormality parameters with good diagnostic accuracy. Principal component analysis (PCA) is employed for feature reduction and the selection of important features from the ECG signals. The selected features are used to train a multilayer feed-forward artificial neural network (ANN) for detection and classification, while training parameters such as the number of epochs, activation functions, and the learning rate are suitably selected with appropriate stopping criteria. Experimental results demonstrate the effectiveness of the hybrid neural-TF approach using PCA for abnormality detection and classification.
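For illustration, a compact NumPy implementation of the discrete Wigner-Ville distribution used to form such TF images is sketched below; it is a generic textbook formulation, not the paper's code. The resulting TF image can be flattened and passed to PCA (e.g., sklearn.decomposition.PCA) before ANN training.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Minimal discrete Wigner-Ville distribution (illustrative implementation).
    Rows index frequency, columns index time."""
    z = hilbert(np.asarray(x, dtype=float))     # analytic signal
    n = len(z)
    acf = np.zeros((n, n), dtype=complex)       # instantaneous autocorrelation
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        acf[tau % n, t] = z[t + tau] * np.conj(z[t - tau])
    return np.fft.fft(acf, axis=0).real         # FFT over the lag axis
```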
Subjects
Atrial Fibrillation, Heart Defects, Congenital, Algorithms, Atrial Fibrillation/diagnosis, Electrocardiography, Heart, Heart Rate, Humans, Neural Networks, Computer, Signal Processing, Computer-Assisted
ABSTRACT
The full potential of data analysis is crippled by imbalanced and high-dimensional data, which makes these topics significantly important. Consequently, substantial research efforts have been directed toward dimensionality reduction and resolving data imbalance, especially in the context of fraud detection. This work investigates the effectiveness of hybrid learning methods for alleviating class imbalance while integrating dimensionality reduction techniques, examining different classification combinations to achieve optimal savings and improve classification performance. Several well-known machine learning models are selected: logistic regression, random forest, CatBoost (CB), and XGBoost. These models are constructed and optimized based on Bayes minimum risk (BMR), combined with the synthetic minority oversampling technique (SMOTE) and different feature selection (FS) techniques, both univariate and multivariate. To investigate the performance of the proposed approach, all relevant scenarios are analyzed: with and without balancing, with and without FS, and with and without BMR optimization. The main insight is that BMR optimizes best when used with SMOTE, symmetrical uncertainty for FS, and CB as the boosted classifier, principally in terms of the F1 score and savings metrics.
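The best-performing combination can be sketched as an imbalanced-learn pipeline, shown below; scikit-learn has no symmetrical uncertainty selector, so mutual information stands in for it here, and the parameter values are illustrative.

```python
from catboost import CatBoostClassifier
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# SMOTE runs only on the training folds inside the imblearn Pipeline, avoiding
# leakage; mutual_info_classif substitutes for symmetrical uncertainty.
pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("fs", SelectKBest(mutual_info_classif, k=10)),
    ("clf", CatBoostClassifier(verbose=0, random_seed=0)),
])
# pipe.fit(X_train, y_train)
# proba = pipe.predict_proba(X_test)[:, 1]
# A BMR step would then pick the decision threshold minimizing expected cost.
```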
Subjects
Data Analysis, Machine Learning, Bayes Theorem, Income
ABSTRACT
Wireless in vivo actuators and sensors are examples of sophisticated technologies. Another breakthrough is the use of in vivo wireless medical devices, which provide scalable and cost-effective solutions for wearable device integration. In vivo wireless body area network devices reduce surgery invasiveness and provide continuous health monitoring; patient data can also be collected over long periods. Given the large fading in in vivo channels, caused by the signal path passing through flesh, bone, skin, and blood, channel coding is considered a solution for increasing efficiency and overcoming inter-symbol interference in wireless communications. Simulations are performed using a 50 MHz bandwidth at ultra-wideband frequencies (3.10-10.60 GHz). In this research, optimized channel coding (turbo, convolutional, and polar codes) is shown to improve data transmission performance over the in vivo channel. The results reveal that turbo codes outperform polar and convolutional codes in terms of bit error rate, while the approaches perform similarly as the information block length increases. The simulations also indicate that the in vivo channel performs worse than the Rayleigh channel due to the dense structure of the human body (flesh, skin, blood, bones, muscles, and fat).
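As a simple illustration of the Monte Carlo bit-error-rate comparisons underlying such simulations, the NumPy sketch below contrasts uncoded BPSK over AWGN and Rayleigh fading channels; it omits the channel coding and the in vivo channel model, which are specific to the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ebn0_db = 500_000, 10
bits = rng.integers(0, 2, n)
sym = 1.0 - 2.0 * bits                          # BPSK: 0 -> +1, 1 -> -1
sigma = np.sqrt(1 / (2 * 10 ** (ebn0_db / 10))) # noise std for this Eb/N0
noise = rng.normal(scale=sigma, size=n)

# AWGN reference: decide bit 1 when the received sample is negative.
ber_awgn = np.mean((sym + noise < 0).astype(int) != bits)

# Rayleigh fading: unit-average-power channel gain, coherent detection.
h = np.abs(rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
ber_rayleigh = np.mean((h * sym + noise < 0).astype(int) != bits)

print(f"AWGN BER: {ber_awgn:.2e}, Rayleigh BER: {ber_rayleigh:.2e}")
```

At the same Eb/N0, the fading channel's error rate is orders of magnitude higher, which is the effect channel coding is meant to mitigate.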
ABSTRACT
The diagnosis of early-stage lung cancer is challenging due to its asymptomatic nature, especially given the repeated radiation exposure and high cost of computed tomography (CT). Examining lung CT images to detect pulmonary nodules, especially lung cancer lesions, is also tedious and error-prone even for a specialist. This study proposes a cancer diagnostic model based on a deep learning-enabled support vector machine (SVM). The proposed computer-aided diagnosis (CAD) model identifies the physiological and pathological changes in the soft tissues of the cross-section in lung cancer lesions. The model is first trained to recognize lung cancer by measuring and comparing selected profile values in CT images obtained from patients and controls at diagnosis. The model is then tested and validated using CT scans of both patients and controls that were not seen during the training phase. The study investigates 888 annotated CT scans from the publicly available LIDC/IDRI database. The proposed deep learning-assisted SVM-based model yields 94% accuracy for detecting pulmonary nodules representing early-stage lung cancer, and is found superior to other existing methods, including complex deep learning, simple machine learning, and hybrid techniques used on lung CT images for nodule detection. Experimental results demonstrate that the proposed approach can greatly assist radiologists in detecting early lung cancer and facilitate the timely management of patients.
ABSTRACT
Detection and prediction of the novel coronavirus present new challenges for the medical research community due to its spread across the globe. Methods driven by artificial intelligence can help predict specific parameters, hazards, and outcomes of such a pandemic, and deep learning-based approaches have recently proven useful for a range of prediction difficulties. In this work, two learning algorithms, namely deep learning and reinforcement learning, were developed to forecast COVID-19. This article constructs a model using recurrent neural networks (RNN), in particular a Modified Long Short-Term Memory (MLSTM) model, to forecast the counts of newly affected individuals, deaths, and recoveries over the following few days. This study also applies deep reinforcement learning to optimize COVID-19's predictive outcome based on symptoms. Real-world data were utilized to analyze the performance of the suggested system. The findings show that the established approach is promising for prognosticating outcomes of the current COVID-19 pandemic, and that it outperformed the Long Short-Term Memory (LSTM) model and the machine learning model, Logistic Regression (LR), in terms of error rate.
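A minimal PyTorch sketch of sliding-window LSTM forecasting of daily counts is given below; the synthetic series, window length, and hyperparameters are placeholders, and the paper's MLSTM modifications are not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn

def windows(series, lookback=7):
    """Sliding windows: predict the next value from the previous `lookback` days."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, lookback, 1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)

# Synthetic stand-in for a daily case-count series (the paper uses real-world data).
cases = np.cumsum(np.random.default_rng(0).poisson(50, size=120)).astype(np.float32)
cases /= cases.max()                       # simple [0, 1] normalization
X, y = windows(cases)

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                       # plain MSE training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
```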
Subjects
Artificial Intelligence, COVID-19, Humans, Neural Networks, Computer, Pandemics, SARS-CoV-2
ABSTRACT
The ongoing COVID-19 coronavirus outbreak has caused a global disaster with its deadly spread. Due to the absence of effective remedial agents and the shortage of immunizations against the virus, population vulnerability increases. With no vaccines currently available, social distancing is thought to be an adequate precaution against the spread of the pandemic virus, since the risk of virus spread can be minimized by avoiding physical contact among people. The purpose of this work is therefore to provide a deep learning platform for social distance tracking from an overhead perspective. The framework uses the YOLOv3 object detection paradigm to identify humans in video sequences, and transfer learning is employed to increase the model's accuracy: the detector uses a pre-trained network extended with an additional layer trained on an overhead human dataset. The detection model identifies people from the detected bounding box information, and the pairwise Euclidean distances between the centroids of the detected bounding boxes are computed. To estimate social distance violations, an approximation mapping pixels to physical distance is used to set a threshold, and a violation is flagged whenever a pairwise distance falls below this minimum social distance threshold. In addition, a tracking algorithm is used to follow individuals in the video sequences, so that a person who violates the social distance threshold continues to be tracked. Experiments are carried out on different video sequences to test the efficiency of the model. Findings indicate that the developed framework successfully identifies individuals who walk too close together and violate social distancing, and that the transfer learning approach boosts the overall efficiency of the model: the detection model achieves an accuracy of 92% without transfer learning and 98% with it, while the tracking accuracy of the model is 95%.
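The centroid-distance violation check described above reduces to a few lines of NumPy/SciPy, sketched below; the pixels-per-meter calibration value and the 2 m threshold are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def violations(centroids, pixels_per_meter, min_distance_m=2.0):
    """Flag index pairs of detected people closer than the distance threshold.
    `centroids` are the (x, y) bounding-box centers from the detector."""
    d = squareform(pdist(np.asarray(centroids, dtype=float)))  # pairwise pixel distances
    threshold_px = min_distance_m * pixels_per_meter           # calibrated pixel threshold
    i, j = np.where(np.triu(d < threshold_px, k=1))            # upper triangle: each pair once
    return list(zip(i.tolist(), j.tolist()))

print(violations([(100, 200), (130, 210), (600, 400)], pixels_per_meter=50))
# -> [(0, 1)]  only the first two people breach the 2 m threshold
```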
ABSTRACT
Monogenic forms of obesity have been identified in ≤10% of severely obese European patients. However, the overall spectrum of deleterious variants (point mutations and structural variants) responsible for childhood severe obesity remains elusive. In this study, we genetically screened 225 severely obese children from consanguineous Pakistani families through a combination of techniques, including an in-house-developed augmented whole-exome sequencing method (CoDE-seq) that enables simultaneous detection of whole-exome copy number variations (CNVs) and point mutations in coding regions. We identified 110 (49%) probands carrying 55 different pathogenic point mutations and CNVs in 13 genes/loci responsible for nonsyndromic and syndromic monofactorial obesity. CoDE-seq also identified 28 rare or novel CNVs associated with intellectual disability in 22 additional obese subjects (10%). Additionally, we highlight variants in candidate genes for obesity warranting further investigation. Altogether, 59% of cases in the studied cohort are likely to have a discrete genetic cause, with 13% of these resulting from CNVs, demonstrating a remarkably higher prevalence of monofactorial obesity than hitherto reported and a plausible overlap of obesity and intellectual disabilities in several cases. Finally, inbred populations with a high prevalence of obesity provide unique, genetically enriched material in the quest for new genes/variants influencing energy balance.
Subjects
Obesity, Morbid/genetics, Pediatric Obesity/genetics, Adolescent, Child, Child, Preschool, DNA Copy Number Variations, Female, Humans, Infant, Leptin/genetics, Male, Mutation, Obesity, Morbid/epidemiology, Obesity, Morbid/etiology, Pediatric Obesity/epidemiology, Pediatric Obesity/etiology, Prevalence, Receptor, Melanocortin, Type 4/genetics, Receptors, Leptin/genetics, Young Adult
ABSTRACT
BACKGROUND: Levels of adenosine deaminase (ADA) are increased in tubercular pleural effusion, and ADA determination has gained popularity as an inexpensive and readily accessible diagnostic test. With pleural biopsy as the gold standard, pleural fluid ADA showed a sensitivity of 86.36%, specificity of 61.54%, diagnostic accuracy of 80.70%, positive predictive value of 88.37%, and negative predictive value of 82.42%. METHODOLOGY: Our study was a prospective cross-sectional study conducted over three years at a tertiary care center in Karachi, Pakistan. The data were collected and analyzed using IBM SPSS Statistics v21. RESULTS: There were 52 patients included in our study, twenty-one male and thirty-one female. Most patients presented with shortness of breath. A significant association was found between raised ADA levels and pulmonary tuberculosis (p < 0.05); ADA levels were 12 times more likely to be raised in tubercular pleural effusion. CONCLUSION: The ADA level is an important marker for the diagnosis of pulmonary tuberculosis in lymphocytic pleural effusion. It is a convenient and inexpensive method, and ADA assessment is economical compared with other diagnostic methods.
ABSTRACT
CYP2C19 polymorphism is associated with pretreatment drug response prediction, metabolism, and disposition. Pakistan comprises various ethnic groups residing in different regions of the country, each claiming diverse ethnic origins. Identification of the CYP450 genotypic composition of these populations is therefore necessary to avoid adverse drug reactions in these individuals. The main objective of the study was to investigate the prevalence of the CYP2C19*2 and CYP2C19*17 alleles in these ethnic groups. The study was conducted on one thousand and twenty-eight (n = 1028) healthy volunteers from nine ethnic groups of Pakistan, namely Brusho (n = 28), Hazara (n = 102), Kalash (n = 64), Pathan (n = 170), Punjabi (n = 218), Saraiki (n = 59), Brahui (n = 118), Parsi (n = 90), and Sindhi (n = 179). DNA was extracted from leukocytes and analyzed by allele-specific amplification polymerase chain reaction (ASA-PCR). The multi-allelic polymorphism of CYP2C19 leads to four distinct phenotypes: extensive metabolizer (EM), poor metabolizer (PM), intermediate metabolizer (IM), and ultra-rapid metabolizer (UM). Overall, the frequency of the predicted poor-metabolizer allele was 29.0%, compared with 23.7% for the UM allele. Among the studied groups, Saraiki and Brahui showed the highest percentages of the PM allele (40% and 36%), whereas Parsi and Hazara had the highest percentages of the UM allele (37% and 30%, respectively). In conclusion, the high frequencies of the PM (CYP2C19*2) and UM (CYP2C19*17) alleles in the Pakistani population lead to the recommendation of a pre-treatment test to monitor drug response and dosage (personalized medicine) and avoid post-treatment adverse drug reactions.
ABSTRACT
The study of monogenic forms of obesity has demonstrated the pivotal role of the central leptin-melanocortin pathway in controlling energy balance, appetite, and body weight [1]. The majority of loss-of-function mutations (mostly recessive or co-dominant) have been identified in genes that are directly involved in leptin-melanocortin signaling. These genes, however, explain obesity in <5% of cases, predominantly from outbred populations [2]. We previously showed that, in a consanguineous population in Pakistan, recessive mutations in known obesity-related genes explain ~30% of cases of severe obesity [3-5]. These data suggested that new monogenic forms of obesity could also be identified in this population. Here we identify and functionally characterize homozygous mutations in the ADCY3 gene, encoding adenylate cyclase 3, in children with severe obesity from consanguineous Pakistani families, as well as compound heterozygous mutations in a severely obese child of European-American descent. These findings highlight ADCY3 as an important mediator of energy homeostasis and an attractive pharmacological target in the treatment of obesity.