Results 1 - 20 of 31
1.
Sci Rep ; 14(1): 18075, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103381

ABSTRACT

The intrusion detection process is important in various applications to identify unauthorized access to Internet of Things (IoT) networks. IoT devices can be accessed by intermediaries while transmitting information, which causes security issues. Several intrusion detection systems have been developed to identify intruders and unauthorized access in different software applications, but existing systems consume high computation time, making it difficult to identify intruders accurately. This research issue is mitigated by applying the Interrupt-aware Anonymous User-System Detection Method (IAU-S-DM). The method uses concealed service sessions to identify anonymous interrupts. During this process, the system is trained with parameters such as origin, session access demands, and the legitimate and illegitimate users of various sessions. These parameters help to recognize intruder activities with minimum computation time. In addition, the collected data is processed using a deep recurrent learning approach that identifies service failures and breaches, improving the overall intruder detection rate. The system uses the TON-IoT dataset, which helps to identify intruder activities while different data resources are accessed. The method's consistency is verified using the following metrics: service failures (10.65%), detection precision (14.63%), detection time (15.54%), and classification ratio (20.51%).

2.
Heliyon ; 10(12): e32400, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975160

ABSTRACT

Pests are a significant challenge in paddy cultivation, resulting in a global loss of approximately 20% of rice yield. Early detection of paddy insects can help avert these potential losses. Several approaches have been suggested for identifying and categorizing insects in paddy fields, employing a range of advanced, noninvasive, and portable technologies. However, none of these systems have successfully incorporated feature optimization techniques with deep learning and machine learning. Hence, the current research provides a framework utilizing these techniques to detect and categorize images of paddy insects promptly. Initially, the proposed research gathers the image dataset and categorizes it into two groups: one without paddy insects and the other with paddy insects. Furthermore, various pre-processing techniques, such as augmentation and image filtering, are applied to enhance the quality of the dataset and eliminate unwanted noise. To determine and analyze the deep characteristics of an image, the suggested architecture incorporates five pre-trained Convolutional Neural Network models. Following that, feature selection techniques, including Principal Component Analysis (PCA), Recursive Feature Elimination (RFE), Linear Discriminant Analysis (LDA), and an optimization algorithm called Lion Optimization, were utilized to further reduce the redundant features collected for the study. Subsequently, the paddy insects are identified by employing seven ML algorithms. Finally, a set of experimental data analyses was conducted to achieve the objectives, and the proposed approach demonstrates that the extracted feature vectors of ResNet50 with Logistic Regression and PCA achieved the highest accuracy, precisely 99.28%. Overall, the present idea will significantly impact how paddy insects are diagnosed in the field.
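The final classification stage described above (pre-trained CNN features reduced with PCA, then classified with Logistic Regression) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2048-dimensional vectors stand in for ResNet50 embeddings, and the synthetic dataset, dimensions, and class shift are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# 200 "images": class 0 = no insect, class 1 = insect; 2048-d ResNet50-style features
X = rng.normal(size=(200, 2048))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.5  # shift one class so the toy problem is learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# PCA compresses the deep features before the linear classifier sees them
clf = make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With real CNN embeddings, only the feature-extraction step changes; the PCA + Logistic Regression pipeline stays the same.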

3.
PLoS One ; 19(5): e0302880, 2024.
Article in English | MEDLINE | ID: mdl-38718092

ABSTRACT

Gastrointestinal (GI) cancer is the most common tumour of the gastrointestinal tract and the fourth most significant cause of tumour death in men and women. A common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumor while avoiding healthy organs. To deliver high doses of X-rays, a system is needed that accurately segments the GI tract organs. The study presents a UMobileNetV2 model for semantic segmentation of the small intestine, large intestine, and stomach in MRI images of the GI tract. The model uses MobileNetV2 as an encoder in the contraction path and UNet layers as a decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technology has the capability to enhance the pace of cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models: Xception, ResNet 101, and NASNet mobile, which are used as encoders in the UNet architecture. The model is analyzed using three distinct optimizers, i.e., Adam, RMS, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models. It obtains a dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients.


Subject(s)
Gastrointestinal Neoplasms , Gastrointestinal Tract , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Gastrointestinal Neoplasms/diagnostic imaging , Gastrointestinal Neoplasms/pathology , Gastrointestinal Tract/diagnostic imaging , Semantics , Image Processing, Computer-Assisted/methods , Female , Male , Stomach/diagnostic imaging , Stomach/pathology
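The Dice coefficient and IoU (Jaccard index) reported by this and several later entries are standard overlap metrics between a predicted mask and a ground-truth mask. A minimal NumPy sketch of both (the toy 2 × 3 masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU (Jaccard) = |A∩B| / |A∪B| for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
d = dice_coefficient(pred, target)  # 2*2 / (3+3) ≈ 0.667
j = iou(pred, target)               # 2 / 4 = 0.5
```

Note that Dice ≥ IoU always holds for the same pair of masks, which is why papers usually report both.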
4.
Sci Rep ; 14(1): 9614, 2024 04 26.
Article in English | MEDLINE | ID: mdl-38671304

ABSTRACT

Abnormal heart conduction, known as arrhythmia, can contribute to cardiac diseases that carry the risk of fatal consequences. Healthcare professionals typically use electrocardiogram (ECG) signals and certain preliminary tests to identify abnormal patterns in a patient's cardiac activity. To assess the overall cardiac health condition, cardiac specialists monitor these activities separately. This procedure may be arduous and time-intensive, potentially impacting the patient's well-being. This study automates this process and introduces a novel solution for predicting cardiac health conditions, specifically identifying cardiac morbidity and arrhythmia in patients, using invasive and non-invasive measurements. The experimental analyses conducted in medical studies entail extremely sensitive data, and any partial or biased diagnoses in this field are deemed unacceptable. Therefore, this research aims to introduce a new concept for determining the uncertainty level of machine learning algorithms using information entropy. Information entropy can be considered a unique performance evaluator of a machine learning algorithm, one that has not previously been used in studies within the realm of bio-computational research. This experiment was conducted on arrhythmia and heart disease datasets collected from the Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia database (DB-1) and the Cleveland Heart Disease database (DB-2), respectively. Our framework consists of four significant steps: 1) data acquisition, 2) feature preprocessing, 3) implementation of learning algorithms, and 4) information entropy. The results demonstrate the average accuracy achieved by the classification algorithms: Neural Network (NN) 99.74%, K-Nearest Neighbor (KNN) 98.98%, Support Vector Machine (SVM) 99.37%, Random Forest (RF) 99.76%, and Naïve Bayes (NB) 98.66%.
We believe that this study paves the way for further research, offering a framework for identifying cardiac health conditions through machine learning techniques.


Subject(s)
Arrhythmias, Cardiac , Electrocardiography , Machine Learning , Humans , Electrocardiography/methods , Arrhythmias, Cardiac/diagnosis , Algorithms , Monitoring, Physiologic/methods , Heart Diseases/diagnosis
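The entropy-based uncertainty idea above amounts to computing the Shannon entropy of a classifier's predicted class probabilities: a confident prediction has low entropy, an uncertain one has high entropy. A minimal sketch of that calculation (the probability vectors are made up for illustration; the paper's exact formulation may differ):

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy (in bits) of each row of class probabilities."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log2(p)).sum(axis=1)

confident = np.array([[0.99, 0.01]])   # classifier is nearly certain
uncertain = np.array([[0.50, 0.50]])   # classifier is maximally unsure
h_conf = prediction_entropy(confident)[0]
h_unc = prediction_entropy(uncertain)[0]  # 1.0 bit, the maximum for 2 classes
```

Averaging this entropy over a test set gives a single uncertainty score per model, which is one plausible way to compare algorithms as the study proposes.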
5.
Sci Rep ; 14(1): 7406, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38548726

ABSTRACT

Software vulnerabilities pose a significant threat to system security, necessitating effective automatic detection methods. Current techniques face challenges such as dependency issues, language bias, and coarse detection granularity. This study presents a novel deep learning-based vulnerability detection system for Java code. Leveraging hybrid feature extraction through graph and sequence-based techniques enhances semantic and syntactic understanding. The system utilizes control flow graphs (CFG), abstract syntax trees (AST), program dependencies (PD), and greedy longest-match first vectorization for graph representation. A hybrid neural network (GCN-RFEMLP) and the pre-trained CodeBERT model extract features, feeding them into a quantum convolutional neural network with self-attentive pooling. The system addresses issues like long-term information dependency and coarse detection granularity, employing intermediate code representation and inter-procedural slice code. To mitigate language bias, a benchmark software assurance reference dataset is employed. Evaluations demonstrate the system's superiority, achieving 99.2% accuracy in detecting vulnerabilities, outperforming benchmark methods. The proposed approach comprehensively addresses vulnerabilities, including improper input validation, missing authorizations, buffer overflow, cross-site scripting, and SQL injection attacks listed by common weakness enumeration (CWE).
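The graph/sequence feature extraction described above starts from parsed code structures such as ASTs. As a language-agnostic illustration (the paper targets Java; Python's standard-library `ast` module is used here purely because it is self-contained), one crude syntactic feature vector is a histogram of AST node types:

```python
import ast
from collections import Counter

def ast_node_histogram(source: str) -> Counter:
    """Count AST node types in a snippet -- a crude syntactic feature vector.
    Real systems would combine this with CFG and dependency features."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

snippet = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
features = ast_node_histogram(snippet)
# e.g. features["If"] counts branches, features["Return"] counts exits
```

Such histograms (or richer graph embeddings) become the numeric input that a downstream neural classifier consumes.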

6.
Heliyon ; 10(3): e25369, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38352790

ABSTRACT

In recent years, scientific data on cancer has expanded, providing potential for a better understanding of malignancies and improved tailored care. Advances in Artificial Intelligence (AI) processing power and algorithmic development position Machine Learning (ML) and Deep Learning (DL) as crucial players in predicting Leukemia, a blood cancer, using integrated multi-omics technology. However, realizing these goals demands novel approaches to harness this data deluge. This study introduces a novel Leukemia diagnosis approach, analyzing multi-omics data for accuracy using ML and DL algorithms. ML techniques, including Random Forest (RF), Naive Bayes (NB), Decision Tree (DT), Logistic Regression (LR), and Gradient Boosting (GB), and DL methods, such as Recurrent Neural Networks (RNN) and Feedforward Neural Networks (FNN), are compared. GB achieved 97% accuracy among the ML techniques, while RNN outperformed it, achieving 98% accuracy among the DL methods. This approach filters unclassified data effectively, demonstrating the significance of DL for leukemia prediction. The testing validation was based on 17 different features, such as patient age, sex, mutation type, treatment methods, chromosomes, and others. Our study compares ML and DL techniques and chooses the technique that gives optimum results. The study emphasizes the implications of high-throughput technology in healthcare, offering improved patient care.

7.
Sci Rep ; 14(1): 1345, 2024 01 16.
Article in English | MEDLINE | ID: mdl-38228639

ABSTRACT

A brain tumor is an unchecked, abnormal expansion of brain cells, making it one of the deadliest diseases of the nervous system. Brain tumor segmentation for earlier diagnosis is a difficult task in the field of medical image analysis. Earlier, segmenting brain tumors was done manually by radiologists, but that requires a lot of time and effort. In spite of this effort, manual segmentation leaves the possibility of mistakes due to human intervention. It has been shown that deep learning models can outperform human experts in the diagnosis of brain tumors in MRI images. These algorithms employ a huge number of MRI scans to learn the difficult patterns of brain tumors and segment them automatically and accurately. Here, an encoder-decoder architecture with a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part. For this, an intelligent LinkNet-34 model with an EfficientNetB7 encoder is proposed as the semantic segmentation model. The performance of the LinkNet-34 model is compared with three other models, namely FPN, U-Net, and PSPNet. Further, the performance of EfficientNetB7 used as the encoder in the LinkNet-34 model is compared with three encoders, namely ResNet34, MobileNet_V2, and ResNet50. After that, the proposed model is optimized using three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, with a Jaccard index of 0.89 and a dice coefficient of 0.915.


Subject(s)
Brain Neoplasms , Semantics , Humans , Brain Neoplasms/diagnostic imaging , Algorithms , Intelligence , Neural Networks, Computer , Image Processing, Computer-Assisted
8.
PLoS One ; 19(1): e0292100, 2024.
Article in English | MEDLINE | ID: mdl-38236900

ABSTRACT

Diabetes prediction is an ongoing study topic in which medical specialists are attempting to forecast the condition with greater precision. Diabetes typically stays dormant, and if patients are diagnosed with another illness, like damage to the kidney vessels, issues with the retina of the eye, or a heart problem, it can cause metabolic problems and various complications in the body. Various ensemble learning procedures, including voting, boosting, and bagging, have been applied in this review. The Synthetic Minority Oversampling Technique (SMOTE), along with the K-fold cross-validation approach, was utilized to achieve class balancing and validate the findings. The Pima Indian Diabetes (PID) dataset, collected from the UCI Machine Learning (UCI ML) repository, was chosen for this review. A feature engineering technique was used to calculate the influence of lifestyle factors. A two-phase classification model has been developed to predict insulin resistance using the Sequential Minimal Optimisation (SMO) and SMOTE approaches together. The SMOTE technique is used to preprocess data in the model's first phase, while the SMO classifier is used in the second phase. Bagging decision trees outperformed all other categorization techniques in terms of misclassification error rate, accuracy, specificity, precision, recall, F1 measures, and ROC curve. The model created using the combined SMOTE and SMO strategy achieved 99.07% accuracy with 0.1 ms of runtime. The suggested system's aim is to enhance the classifier's performance in spotting illness early.


Subject(s)
Algorithms , Diabetes Mellitus, Type 2 , Humans , Diabetes Mellitus, Type 2/diagnosis , Machine Learning , ROC Curve , Forecasting
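The core idea behind SMOTE, used in the first phase above, is generating synthetic minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A simplified NumPy sketch of that idea (not the full SMOTE algorithm, and the four corner points are an invented toy minority class):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolating between a
    minority point and one of its k nearest minority neighbours
    (the core idea behind SMOTE, simplified)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]      # k nearest neighbours, skipping self
        j = rng.choice(nn)
        lam = rng.random()               # random point on the connecting segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_like_oversample(X_min, n_new=6)
```

In practice one would use a maintained implementation (e.g. imbalanced-learn's `SMOTE`) rather than this sketch, but the interpolation step is the same.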
9.
Sensors (Basel) ; 23(19)2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37836846

ABSTRACT

Due to the modern power system's rapid development, more scattered smart grid components are securely linked into the power system, encircling a wide electrical power network with an underpinning communication system. By enabling a wide range of applications, such as distributed energy management, system state forecasting, and cyberattack security, these components generate vast amounts of data that automate and improve the efficiency of the smart grid. Because traditional computing technologies cannot handle the massive amount of data that smart grid systems generate, AI-based alternatives have received a lot of interest. Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNN) are specifically developed in this study to address this issue by incorporating the attributes of the adaptively time-evolving energy system, enhancing the model of the dynamic properties of the contemporary Smart Grid (SG) that are impacted by a Revised Encoding Scheme (RES) or system reconfiguration, in order to differentiate LSTM changes from real-time threats. More specifically, we provide a federated learning strategy for consumer sharing of power data with the Power Grid (PG) that is supported by edge clouds, protects consumer privacy, and is communication-efficient. We then design two optimization problems for Energy Data Owners (EDO) and energy service operations, as well as a local information assessment method in Federated Learning (FL), taking non-independent and identically distributed (non-IID) effects into consideration. The test results revealed that LSTM had a longer training duration, four hidden levels, and higher training loss than other models. The provided method works well in several situations to identify FDIA. Finally, extensive simulations show that the suggested approach may successfully induce EDOs to employ high-quality local models, increase the payout of the ESP, and decrease task latencies. According to the verification results, every attack sample could be effectively recognized utilizing the current detection methods and the proposed LSTM-RNN-based structure.
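The federated learning strategy above rests on aggregating locally trained models without sharing raw consumer data. The standard aggregation step, FedAvg (a sample-size-weighted average of client parameters), can be sketched in a few lines; the two clients, their parameter vectors, and their sample counts below are invented for illustration:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (the FedAvg aggregation
    step): clients with more local samples contribute proportionally more."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

w1 = np.array([1.0, 2.0])   # hypothetical client A parameters, 100 local samples
w2 = np.array([3.0, 4.0])   # hypothetical client B parameters, 300 local samples
global_w = fed_avg([w1, w2], [100, 300])  # -> [2.5, 3.5]
```

With non-IID client data (as the study considers), this plain average can degrade, which is one motivation for the local information assessment method the authors add on top.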

10.
Life (Basel) ; 13(10)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37895472

ABSTRACT

Bone marrow (BM) is an essential part of the hematopoietic system, which generates all of the body's blood cells and maintains the body's overall health and immune system. The classification of bone marrow cells is pivotal in both clinical and research settings because many hematological diseases, such as leukemia, myelodysplastic syndromes, and anemias, are diagnosed based on specific abnormalities in the number, type, or morphology of bone marrow cells. There is a requirement for developing a robust deep-learning algorithm to diagnose bone marrow cells to keep a close check on them. This study proposes a framework for categorizing bone marrow cells into seven classes. In the proposed framework, five transfer learning models-DenseNet121, EfficientNetB5, ResNet50, Xception, and MobileNetV2-are implemented into the bone marrow dataset to classify them into seven classes. The best-performing DenseNet121 model was fine-tuned by adding one batch-normalization layer, one dropout layer, and two dense layers. The proposed fine-tuned DenseNet121 model was optimized using several optimizers, such as AdaGrad, AdaDelta, Adamax, RMSprop, and SGD, along with different batch sizes of 16, 32, 64, and 128. The fine-tuned DenseNet121 model was integrated with an attention mechanism to improve its performance by allowing the model to focus on the most relevant features or regions of the image, which can be particularly beneficial in medical imaging, where certain regions might have critical diagnostic information. The proposed fine-tuned and integrated DenseNet121 achieved the highest accuracy, with a training success rate of 99.97% and a testing success rate of 97.01%. The key hyperparameters, such as batch size, number of epochs, and different optimizers, were all considered for optimizing these pre-trained models to select the best model. This study will help in medical research to effectively classify the BM cells to prevent diseases like leukemia.

11.
Life (Basel) ; 13(10)2023 Oct 21.
Article in English | MEDLINE | ID: mdl-37895474

ABSTRACT

Breast cancer (BC) is the most common cancer among women, making it essential to have an accurate and dependable system for diagnosing benign or malignant tumors. It is essential to detect this cancer early in order to inform subsequent treatments. Currently, fine needle aspiration (FNA) cytology and machine learning (ML) models can be used to detect and diagnose this cancer more accurately. Consequently, an effective and dependable approach needs to be developed to enhance the clinical capacity to diagnose this illness. This study aims to detect and divide BC into two categories using the Wisconsin Diagnostic Breast Cancer (WDBC) benchmark feature set and to select the fewest features that attain the highest accuracy. To this end, this study explores automated BC prediction using multi-model features and ensemble machine learning (EML) techniques. To achieve this, we propose an advanced ensemble technique, which incorporates voting, bagging, stacking, and boosting as combination techniques for the classifier in the proposed EML methods to distinguish benign breast tumors from malignant cancers. In the feature extraction process, we suggest a recursive feature elimination technique to find the most important features of the WDBC that are pertinent to BC detection and classification. Furthermore, we conducted cross-validation experiments, and the comparative results demonstrated that our method can effectively enhance classification performance and attain the highest value in six evaluation metrics, including precision, sensitivity, area under the curve (AUC), specificity, accuracy, and F1-score. Overall, the stacking model achieved the best average accuracy, at 99.89%, and its sensitivity, specificity, F1-score, precision, and AUC/ROC were 1.00, 0.999, 1.00, 1.00, and 1.00, respectively, thus generating excellent results.
The findings of this study can be used to establish a reliable clinical detection system, enabling experts to make more precise and operative decisions in the future. Additionally, the proposed technology might be used to detect a variety of cancers.
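A stacking ensemble like the best-performing model above can be sketched with scikit-learn, whose built-in breast cancer dataset is in fact the WDBC feature set the study uses. The base learners and meta-learner below are illustrative choices, not necessarily the paper's exact configuration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# scikit-learn's breast cancer dataset is the WDBC benchmark (569 samples, 30 features)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# base learners feed their predictions to a logistic-regression meta-learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

Adding recursive feature elimination in front of the stack, as the study suggests, would be a `Pipeline` step before the classifier.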

12.
Diagnostics (Basel) ; 13(19)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37835895

ABSTRACT

Glomeruli are interconnected capillaries in the renal cortex that are responsible for blood filtration. Damage to these glomeruli often signifies the presence of kidney disorders like glomerulonephritis and glomerulosclerosis, which can ultimately lead to chronic kidney disease and kidney failure. The timely detection of such conditions is essential for effective treatment. This paper proposes a modified UNet model to accurately detect glomeruli in whole-slide images of kidney tissue. The UNet model was modified by changing the number of filters and feature-map dimensions from the first to the last layer to enhance the model's capacity for feature extraction. Moreover, the depth of the UNet model was also increased by adding one more convolution block to both the encoder and decoder sections. The dataset used in the study comprised 20 large whole-slide images. Due to their large size, the images were cropped into 512 × 512-pixel patches, resulting in a dataset comprising 50,486 images. The proposed model performed well, with 95.7% accuracy, 97.2% precision, 96.4% recall, and a 96.7% F1-score. These results demonstrate the proposed model's superior performance compared to the original UNet model, the UNet model with EfficientNetb3, and the current state of the art. Based on these experimental findings, it has been determined that the proposed model accurately identifies glomeruli in extracted kidney patches.
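The 512 × 512 patch-cropping step above is a common preprocessing move for whole-slide images that are too large to feed to a network directly. A minimal sketch of non-overlapping patch extraction (the slide dimensions below are invented; real pipelines often also handle overlap and border padding):

```python
import numpy as np

def crop_patches(image, size=512):
    """Split a large image into non-overlapping size x size patches,
    discarding incomplete border patches."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

slide = np.zeros((1536, 1024), dtype=np.uint8)   # stand-in whole-slide image
patches = crop_patches(slide, size=512)          # 3 rows x 2 cols = 6 patches
```

Each patch then becomes one training sample, which is how 20 slides can yield tens of thousands of images.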

13.
PeerJ Comput Sci ; 9: e1524, 2023.
Article in English | MEDLINE | ID: mdl-37705647

ABSTRACT

The use of offensive terms in user-generated content on different social media platforms is one of the major concerns for these platforms. Offensive terms have a negative impact on individuals, which may lead towards the degradation of societal and civilized manners. The immense amount of content generated at high speed makes it humanly impossible to categorise and detect offensive terms, and it is an open challenge for natural language processing (NLP) to detect such terminology automatically. Substantial efforts have been made for high-resource languages such as English. However, it becomes more challenging when dealing with resource-poor languages such as Urdu, because of the lack of standard datasets and pre-processing tools for automatic offensive-term detection. This paper introduces a combinatorial pre-processing approach for developing a classification model for cross-platform (Twitter and YouTube) use. The approach uses datasets from the two platforms for training and testing the model, which is trained using decision tree, random forest, and naive Bayes algorithms. The proposed combinatorial pre-processing approach is applied to check how machine learning models behave with different combinations of standard pre-processing techniques for a low-resource language in the cross-platform setting. The experimental results show the effectiveness of the machine learning models over different subsets of traditional pre-processing approaches in building a classification model for automatic offensive-term detection for a low-resource language, i.e., Urdu, in the cross-platform scenario. In the experiments, when dataset D1 was used for training and D2 for testing, stopword removal produced better results, with an accuracy of 83.27%. Conversely, when dataset D2 was used for training and D1 for testing, stopword removal together with punctuation removal was the better preprocessing approach, with an accuracy of 74.54%. The combinatorial approach proposed in this paper outperformed the benchmark for the considered datasets using classical as well as ensemble machine learning, with accuracies of 82.9% and 97.2% for datasets D1 and D2, respectively.
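The pre-processing combinations compared above (stopword removal, punctuation removal, and so on) can be sketched as toggleable steps. This illustration uses English text and a tiny invented stopword list; an Urdu pipeline would swap in an Urdu stopword list and Urdu-aware tokenization:

```python
import string

STOPWORDS = {"a", "an", "the", "is", "of", "and"}  # illustrative, not a full list

def preprocess(text, remove_stopwords=True, remove_punct=True):
    """Apply a configurable combination of standard pre-processing steps,
    mirroring the combinatorial setup: each flag is one technique to toggle."""
    if remove_punct:
        text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = text.lower().split()
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

tokens = preprocess("The quick, brown fox is an example!")
```

Enumerating all flag combinations and training a classifier on each variant reproduces the paper's experimental grid in miniature.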

14.
PeerJ Comput Sci ; 9: e1440, 2023.
Article in English | MEDLINE | ID: mdl-37409077

ABSTRACT

Vehicular ad hoc networks (VANETs) are intelligent transport subsystems in which vehicles can communicate through a wireless medium. VANETs have many applications, such as traffic safety and the prevention of vehicle accidents. Many attacks affect VANET communication, such as denial of service (DoS) and distributed denial of service (DDoS). The number of DoS attacks has been increasing in the past few years, so network security and protection of communication systems are challenging topics, and intrusion detection systems need to be improved to identify these attacks effectively and efficiently. Many researchers are currently interested in enhancing the security of VANETs. Machine learning (ML) techniques were employed on top of intrusion detection systems (IDS) to develop high-security capabilities. A massive dataset containing application-layer network traffic is deployed for this purpose. The Local Interpretable Model-agnostic Explanations (LIME) interpretability technique is used for better interpretation of model functionality and accuracy. Experimental results demonstrate that a random forest (RF) classifier achieves 100% accuracy, demonstrating its capability to identify intrusion-based threats in a VANET setting. In addition, LIME is applied to the RF machine learning model to explain and interpret the classification, and the performance of the machine learning models is evaluated in terms of accuracy, recall, and F1 score.
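The RF classification and metric evaluation described above can be sketched as follows. The synthetic dataset stands in for labeled VANET traffic (normal vs. DoS/DDoS flows) and the feature count is an assumption; applying LIME afterwards would wrap this fitted model with an explainer, which is omitted here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled application-layer traffic features
X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
pred = rf.predict(X_te)
acc = accuracy_score(y_te, pred)   # the three metrics the study reports
rec = recall_score(y_te, pred)
f1 = f1_score(y_te, pred)
```

On real attack traffic the reported 100% accuracy is plausible when attack flows are strongly separable; on this noisy toy set the scores land below that, as expected.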

15.
Diagnostics (Basel) ; 13(14)2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37510142

ABSTRACT

The segmentation of gastrointestinal (GI) organs is crucial in radiation therapy for treating GI cancer. It allows for developing a targeted radiation therapy plan while minimizing radiation exposure to healthy tissue, improving treatment success, and decreasing side effects. Medical diagnostics in GI tract organ segmentation is essential for accurate disease detection, precise differential diagnosis, optimal treatment planning, and efficient disease monitoring. This research presents a hybrid encoder-decoder-based model for segmenting healthy organs in the GI tract in biomedical images of cancer patients, which might help radiation oncologists treat cancer more quickly. Here, EfficientNet B0 is used as a bottom-up encoder architecture for downsampling to capture contextual information by extracting meaningful and discriminative features from input images. The performance of the EfficientNet B0 encoder is compared with that of three encoders: ResNet 50, MobileNet V2, and Timm Gernet. The Feature Pyramid Network (FPN) is a top-down decoder architecture used for upsampling to recover spatial information; its performance was compared with that of three decoders: PAN, Linknet, and MAnet. This paper thus proposes a segmentation model combining the FPN decoder with EfficientNet B0 as the encoder. Furthermore, the proposed hybrid model is analyzed using the Adam, Adadelta, SGD, and RMSprop optimizers. Four performance criteria are used to assess the models: the Jaccard and Dice coefficients, model loss, and processing time. The proposed model achieves Dice coefficient and Jaccard index values of 0.8975 and 0.8832, respectively. The proposed method can assist radiation oncologists in precisely targeting areas hosting cancer cells in the gastrointestinal tract, allowing for more efficient and timely cancer treatment.

16.
PeerJ Comput Sci ; 9: e1294, 2023.
Article in English | MEDLINE | ID: mdl-37346705

ABSTRACT

Higher educational institutes generate massive amounts of student data. This data needs to be explored in depth to better understand various facets of student learning behavior. The educational data mining approach has given provisions to extract useful and non-trivial knowledge from large collections of student data. Using the educational data mining method of classification, this research analyzes data of 291 university students in an attempt to predict student performance at the end of a 4-year degree program. A student segmentation framework has also been proposed to identify students at various levels of academic performance. Coupled with the prediction model, the proposed segmentation framework provides a useful mechanism for devising pedagogical policies to increase the quality of education by mitigating academic failure and encouraging higher performance. The experimental results indicate the effectiveness of the proposed framework and the applicability of classifying students into multiple performance levels using a small subset of courses being taught in the initial two years of the 4-year degree program.

17.
Sensors (Basel) ; 23(11)2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37299987

ABSTRACT

A vehicular ad hoc network (VANET) is a technology in which vehicles sense data from the environment and use it for safety measures. Flooding is a commonly used technique for disseminating network packets, but in a VANET it may cause redundancy, delay, collisions, and the incorrect receipt of messages at their destination. Weather information is one of the most important types of information used for network control and provides an enhanced version of the network simulation environment. Network traffic delay and packet losses are the main problems identified inside the network. In this research, we propose a routing protocol which can transmit weather forecasting information on demand from source vehicles to destination vehicles with the minimum number of hop counts and provide significant control over network performance parameters. We propose a BBSF-based routing approach. The proposed technique effectively enhances the routing information and provides secure and reliable service delivery across the network. The results taken from the network are based on hop count, network latency, network overhead, and packet delivery ratio. The results show that the proposed technique is reliable in reducing network latency and that the hop count is minimized when transferring weather information.


Subject(s)
Blockchain , Algorithms , Computer Communication Networks , Wireless Technology , Weather
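Minimizing hop count between a source and destination vehicle, as the routing protocol above aims to do, reduces to a shortest-path search over the connectivity graph. A minimal breadth-first-search sketch (the five-vehicle topology is invented; the actual protocol's route selection and security layers are not modeled here):

```python
from collections import deque

def min_hop_count(adj, src, dst):
    """Minimum number of hops between two vehicles via BFS; -1 if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return -1

# hypothetical vehicle connectivity: A can reach E via B or C, then D
adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
hops = min_hop_count(adj, "A", "E")  # A -> B (or C) -> D -> E = 3 hops
```

In a real VANET the adjacency changes continuously with vehicle movement, so such a search runs over a snapshot of current wireless links.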
18.
Healthcare (Basel) ; 11(11)2023 May 26.
Article in English | MEDLINE | ID: mdl-37297701

ABSTRACT

Pneumonia has been directly responsible for a huge number of deaths all across the globe. Pneumonia shares visual features with other respiratory diseases, such as tuberculosis, which can make it difficult to distinguish between them. Moreover, there is significant variability in the way chest X-ray images are acquired and processed, which can impact the quality and consistency of the images. This can make it challenging to develop robust algorithms that can accurately identify pneumonia in all types of images. Hence, there is a need to develop robust, data-driven algorithms that are trained on large, high-quality datasets and validated using a range of imaging techniques and expert radiological analysis. In this research, a deep-learning-based model is demonstrated for differentiating between normal and severe cases of pneumonia. This complete proposed system has a total of eight pre-trained models, namely, ResNet50, ResNet152V2, DenseNet121, DenseNet201, Xception, VGG16, EfficientNet, and MobileNet. These eight pre-trained models were simulated on two datasets having 5856 images and 112,120 images of chest X-rays. The best accuracy is obtained on the MobileNet model with values of 94.23% and 93.75% on two different datasets. Key hyperparameters including batch sizes, number of epochs, and different optimizers have all been considered during comparative interpretation of these models to determine the most appropriate model.

19.
Diagnostics (Basel) ; 13(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37371016

ABSTRACT

Acute Lymphocytic Leukemia is a type of cancer that occurs when the bone marrow produces abnormal white blood cells that do not function properly, crowding out healthy cells and weakening the body's immunity and thus its ability to resist infections. It spreads quickly in children's bodies, and if not treated promptly, it may lead to death. The manual detection of this disease is a tedious and slow task. Machine learning and deep learning techniques are faster and more accurate than manual detection. In this paper, a deep feature selection-based approach, ResRandSVM, is proposed for the detection of Acute Lymphocytic Leukemia in blood smear images. The proposed approach uses seven deep-learning models for deep feature extraction from blood smear images: ResNet152, VGG16, DenseNet121, MobileNetV2, InceptionV3, EfficientNetB0 and ResNet50. After that, three feature selection methods are used to extract valuable and important features: analysis of variance (ANOVA), principal component analysis (PCA), and Random Forest. The selected feature map is then fed to four different classifiers, AdaBoost, Support Vector Machine, Artificial Neural Network and Naïve Bayes, to classify the images into leukemia and normal images. The model performs best with the combination of ResNet50 as feature extractor, Random Forest as feature selector and Support Vector Machine as classifier, with an accuracy of 0.900, precision of 0.902, recall of 0.957 and F1-score of 0.929.
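The search over extractor, selector, and classifier combinations described above amounts to a grid evaluation. A minimal sketch, in which `evaluate` is a stand-in for training and scoring one pipeline (only the winning combination's 0.900 accuracy is from the abstract; all other scores are hypothetical placeholders):

```python
from itertools import product

extractors  = ["ResNet152", "VGG16", "ResNet50"]
selectors   = ["ANOVA", "PCA", "RandomForest"]
classifiers = ["AdaBoost", "SVM", "ANN", "NaiveBayes"]

def evaluate(combo):
    """Placeholder for training and scoring one (extractor, selector,
    classifier) pipeline; returns its hypothetical test accuracy."""
    reported = {("ResNet50", "RandomForest", "SVM"): 0.900}
    return reported.get(combo, 0.850)

# Exhaustively score all 3 x 3 x 4 = 36 pipelines and keep the best one.
best = max(product(extractors, selectors, classifiers), key=evaluate)
print(best)  # ('ResNet50', 'RandomForest', 'SVM')
```

A real implementation would replace `evaluate` with feature extraction from the pre-trained network, feature selection, classifier training, and held-out evaluation, but the selection logic is exactly this maximisation.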

20.
Diagnostics (Basel) ; 13(9)2023 May 08.
Article in English | MEDLINE | ID: mdl-37175042

ABSTRACT

The segmentation of lungs from medical images is a critical step in the diagnosis and treatment of lung diseases. Deep learning techniques have shown great promise in automating this task, eliminating the need for manual annotation by radiologists. In this research, a convolutional neural network architecture is proposed for lung segmentation using chest X-ray images. In the proposed model, a concatenate block is embedded to learn a series of filters, or features, used to extract meaningful information from the image. Moreover, a transpose layer is employed in the concatenate block to improve the spatial resolution of feature maps generated by a prior convolutional layer. The proposed model is trained using k-fold cross-validation, a powerful and flexible tool for evaluating the performance of deep learning models. With k set to 5, the model is evaluated on five different subsets of the data to obtain an optimized model and more accurate results. The performance of the proposed model is analyzed with a batch size of 32, the Adam optimizer, and 40 epochs. The dataset used for segmentation is taken from the Kaggle repository. The performance metrics of accuracy, IoU, and dice coefficient are calculated, and the values obtained are 0.97, 0.93, and 0.96, respectively.
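The IoU and Dice metrics reported above are standard overlap measures between a predicted segmentation mask and the ground-truth mask. A minimal sketch, computing both from flat binary masks (the example masks are illustrative, not from the paper's dataset):

```python
def iou_and_dice(pred, truth):
    """Compute IoU (Jaccard index) and Dice coefficient for two binary
    masks given as equal-length flat lists of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, truth))
    pred_area, truth_area = sum(pred), sum(truth)
    union = pred_area + truth_area - inter
    # Convention: two empty masks are a perfect match.
    iou  = inter / union if union else 1.0
    dice = 2 * inter / (pred_area + truth_area) if pred_area + truth_area else 1.0
    return iou, dice

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
print(iou_and_dice(pred, truth))  # (0.6, 0.75)
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why a high IoU of 0.93 coincides with a high Dice of 0.96 in the reported results.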
