1.
Sensors (Basel) ; 23(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37430604

ABSTRACT

Brain tumors, caused by the uncontrolled proliferation of brain cells inside the skull, are among the most severe types of cancer. A fast and accurate tumor detection method is therefore critical for the patient's health. Many automated artificial intelligence (AI) methods have recently been developed to diagnose tumors; however, their performance is often poor, so an efficient technique for precise diagnosis is still needed. This paper proposes a novel approach for brain tumor detection based on an ensemble of deep and hand-crafted feature vectors (FV). The ensemble FV combines hand-crafted features based on the gray-level co-occurrence matrix (GLCM) with deep features extracted by VGG16. The combined FV contains more robust features than either vector alone, which improves the method's discriminative capability. The proposed FV is then classified using support vector machine (SVM) and k-nearest neighbor (KNN) classifiers. The framework achieved its highest accuracy, 99%, on the ensemble FV. The results indicate the reliability and efficacy of the proposed methodology, so radiologists can use it to detect brain tumors from magnetic resonance imaging (MRI) scans, and the method is robust enough to be deployed in real environments. In addition, the model's performance was validated via cross-tabulated data.
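The hand-crafted half of the ensemble FV can be sketched as follows. This is a minimal illustration, not the authors' implementation: the GLCM is computed for a single pixel offset over a small synthetic image, only three of the usual co-occurrence statistics are derived, and a random vector stands in for the VGG16 deep features before the two are concatenated.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset, normalized to sum to 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Three classic GLCM statistics: contrast, energy, homogeneity."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

rng = np.random.default_rng(0)
# Hypothetical stand-in for VGG16 deep features (the paper uses the real network).
deep_fv = rng.standard_normal(16)

img = rng.integers(0, 8, size=(32, 32))        # synthetic 8-level image
hand_fv = glcm_features(glcm(img))
fused_fv = np.concatenate([hand_fv, deep_fv])  # the ensemble feature vector
```

The fused vector would then be passed to the SVM or KNN classifier exactly like any other feature vector.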


Subject(s)
Artificial Intelligence , Brain Neoplasms , Humans , Brain , Brain Neoplasms/diagnostic imaging , Reproducibility of Results
2.
Sensors (Basel) ; 23(19)2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37836902

ABSTRACT

Phishing attacks are evolving with increasingly sophisticated techniques and pose significant threats. Given the potential of machine-learning-based approaches, our research presents a modern approach to web phishing detection that applies powerful machine learning algorithms. An efficient layered classification model is proposed to detect websites based on their URL structure, text, and image features. Previous studies have applied machine learning to URL features using limited datasets. In our research, we use a large dataset of 20,000 website URLs, extracting 22 salient features from each URL to prepare a comprehensive dataset. Alongside it, a second dataset containing website text is prepared for NLP-based text evaluation. Many phishing websites present text as images; to handle this, the text is extracted from the images and classified as spam or legitimate. The experimental evaluation demonstrated efficient and accurate phishing detection. Our layered classification model uses support vector machine (SVM), XGBoost, random forest, multilayer perceptron, logistic regression, decision tree, naïve Bayes, and SVC algorithms. The performance evaluation revealed that XGBoost outperformed the other models, with maximum accuracy and precision of 94% in the training phase and 91% in the testing phase. The multilayer perceptron also performed well, with 91% testing accuracy, while random forest and decision tree reached 91% and 90%, respectively. Logistic regression and SVM were used for text-based classification, achieving 87% and 88% accuracy, respectively. With these results, the models classified phishing and legitimate websites very well based on URL, text, and image features. This research contributes to the early detection of sophisticated phishing attacks, enhancing internet user security.
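The URL feature extraction step can be illustrated with a small sketch. The feature names below are assumptions chosen for illustration; the paper extracts 22 salient features, and this stdlib-only version computes just a handful of common lexical ones.

```python
import re
from urllib.parse import urlparse

SUSPICIOUS = ("login", "verify", "secure", "account", "update")

def url_features(url):
    """A few of the kinds of lexical URL features used for phishing detection."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "length": len(url),                       # phishing URLs tend to be long
        "num_dots": url.count("."),
        "num_hyphens": host.count("-"),
        "has_at": "@" in url,                     # '@' can hide the real host
        "uses_https": parsed.scheme == "https",
        "has_ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", host)),
        "suspicious_word": any(w in url.lower() for w in SUSPICIOUS),
    }

f = url_features("http://192.168.0.1/secure-login/verify.php")
```

Each URL's feature dictionary would become one row of the CSV-style dataset fed to the classifiers.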

3.
Sensors (Basel) ; 22(15)2022 Jul 25.
Article in English | MEDLINE | ID: mdl-35898063

ABSTRACT

Software-defined networking (SDN) is an innovative network architecture that splits the control and management planes from the data plane. It simplifies network manageability and programmability, along with several other benefits. Owing to its programmability, SDN is gaining popularity in both academia and industry. However, this emerging paradigm faces diverse challenges during the SDN implementation process and in adopting existing technologies. This paper evaluates several existing approaches in SDN and compares and analyzes the findings. The paper is organized into seven categories, namely network testing and verification, flow rule installation mechanisms, network security and management issues related to SDN implementation, memory management studies, SDN simulators and emulators, SDN programming languages, and SDN controller platforms. Each category is significant in the implementation of SDN networks. During implementation, network testing and verification is essential to avoid packet violations and network inefficiencies. Similarly, consistent flow rule installation, especially in the case of a policy change at the controller, needs to be carefully implemented. Effective network security and memory management, at both the network control and data planes, play a vital role in SDN. Furthermore, SDN simulation tools, controller platforms, and programming languages help academia and industry implement and test their network applications. We also compare the existing SDN studies in detail in terms of classification and discuss their benefits and limitations. Finally, future research guidelines are provided.

4.
Molecules ; 26(3)2021 Jan 25.
Article in English | MEDLINE | ID: mdl-33504080

ABSTRACT

Polymeric composite materials with desirable features can be obtained by pairing suitable biopolymers with selected additives to achieve polymer-filler interaction. Several parameters can be tailored to the design requirements, such as the chemical structure, degradation kinetics, and mechanical properties of the biopolymer composite. The interfacial interactions between the biopolymer and the nanofiller exert substantial control over the composite's mechanical characteristics. This review focuses on applications of biopolymeric composites with suitable properties in controlled drug release, tissue engineering, and wound healing. The biomedical field and regenerative medicine require biopolymeric composites with advanced, multifunctional properties, which calls for a thorough analysis of routine biomaterials with enhanced biomedical engineering characteristics. Several literature studies on tissue engineering, drug delivery, and wound dressing are discussed; reviewing these results for further development and analysis makes this an essential study.


Subject(s)
Biocompatible Materials/chemistry , Biopolymers/chemistry , Regenerative Medicine/methods , Tissue Engineering/methods , Animals , Drug Delivery Systems/methods , Humans , Wound Healing/drug effects
5.
P T ; 42(10): 641-651, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29018301

ABSTRACT

PURPOSE: In the last few decades, changes to formulary management processes have taken place in institutions with closed formulary systems. However, many P&T committees have continued to operate using traditional paper-based systems. Paper-based systems have many limitations, including problems with confidentiality, efficiency, open voting, and paper wastage. These become more challenging for a multisite P&T committee that handles formulary matters across a whole health care system. In this paper, we discuss the implementation of the first paperless, completely electronic, Web-based formulary management system across a large health care system in the Middle East. SUMMARY: We describe the transition of a multisite P&T committee in a large tertiary care institution from a paper-based to an all-electronic system. The challenges and limitations of running a multisite P&T committee on a paper system are discussed. The design and development of a Web-based committee floor management application usable from notebooks, tablets, and hand-held devices is described, and the implementation of a flexible, interactive, easy-to-use, and efficient electronic formulary management system is explained in detail. CONCLUSION: The development of an electronic P&T committee meeting system that encompasses electronic document sharing, voting, and communication could help multisite health care systems unify their formularies across multiple sites. Our experience might not be generalizable to all institutions, because the outcome depends heavily on system features, existing processes and workflow, and implementation across different sites.

6.
J Nanosci Nanotechnol ; 16(4): 4126-30, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27451775

ABSTRACT

We report on the concentration-dependent surface-assisted growth and time- and temperature-dependent detachment of one-dimensional 5-helix DNA ribbons (5HR) on a mica substrate. The growth coverage ratio was determined by varying the concentration of the 5HR strands in a test tube, and the detachment rate of 5HR on mica was determined by varying the incubation time at a fixed temperature on a heat block. The topological changes in the concentration-dependent attachment and the time- and temperature-dependent detachment of 5HR on mica were observed via atomic force microscopy. The observations indicate that 5HR started to grow on mica at ~10 nM and reached full coverage at ~50 nM. In contrast, at 65 °C, 5HR started to detach from mica after 5 min and was completely removed after 10 min. The growth coverage varies sinusoidally with concentration, while the detachment proceeds linearly at a rate of 20%/min. The physical parameters that control the stability of DNA structures on a given substrate should be studied to successfully integrate DNA structures into physical and chemical applications.


Subject(s)
Aluminum Silicates/chemistry , Crystallization/methods , DNA/chemistry , DNA/ultrastructure , Nanoparticles/chemistry , Nanoparticles/ultrastructure , Adsorption , Materials Testing
7.
Article in English | MEDLINE | ID: mdl-38847261

ABSTRACT

INTRODUCTION: Commercial plastics are potentially hazardous and can be carcinogenic because of the chemical additives incorporated during production, such as brominated flame retardants and phthalate plasticizers, which generate large quantities of gases, litter, and toxic components and result in environmental pollution. METHOD: Biodegradable plastic derived from natural renewable resources is a novel, alternative, and innovative approach considered a potentially safe substitute for traditional synthetic plastic, as it decomposes easily without harming ecosystems and natural habitats. The use of undervalued compounds such as fruit and vegetable by-products in biodegradable packaging films is currently of interest because of their accessibility, affordability, ample supply, nontoxicity, and physiochemical and nutritional properties. Industrial food waste was processed under controlled conditions with appropriate plasticizers to extract polymeric materials. Biodegradability, solubility, and air tests were performed to examine the physical properties of the polymers, followed by characterization of the biofilms by Fourier-transform infrared spectroscopy (FTIR) to determine their polymeric characteristics. RESULT: The mass loss of each bioplastic film was in the range of 0.01 g to 0.20 g, and the dimensions were in the range of 4.6 mm to 28.7 mm. FTIR analysis confirmed the presence of -OH, C=C, and C=O stretching and other crucial functional groups that aid the formation of a solid polymeric material. This study provides an alternative approach for the sustainable, commercially value-added production of polymer-based biomaterials from agro-industrial waste, which is rich in the starch, cellulose, and pectin needed to develop bioplastics.
CONCLUSION: The rationale of this project is to achieve a straightforward, economical, and durable method for producing bioplastics through the effective use of industrial and commercial fruit waste, ultimately aiding revenue generation.

8.
PLoS One ; 19(6): e0303890, 2024.
Article in English | MEDLINE | ID: mdl-38843255

ABSTRACT

Anomaly detection in time series data is essential for applications such as fraud detection and intrusion monitoring. However, it poses challenges due to data complexity and high dimensionality, and industrial applications still struggle to process high-dimensional, complex data streams in real time despite existing solutions. This study introduces deep ensemble models to improve on traditional time series analysis and anomaly detection methods. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks effectively handle variable-length sequences and capture long-term relationships. Convolutional Neural Networks (CNNs) are also investigated, especially for univariate and multivariate time series forecasting. The Transformer, an architecture based on artificial neural networks (ANNs), has demonstrated promising results in various applications, including time series prediction and anomaly detection. Graph Neural Networks (GNNs) identify time series anomalies by capturing temporal connections and interdependencies between periods, leveraging the underlying graph structure of time series data. A novel feature selection approach is proposed to address the challenges posed by high-dimensional data, improving anomaly detection by selecting the most critical features from the data; this approach outperforms previous techniques in several respects. Overall, this research introduces state-of-the-art algorithms for anomaly detection in time series data, offering advancements in real-time processing and decision-making across various industrial sectors.
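As a point of reference for what the deep ensemble models improve on, a classical baseline detector can be sketched: flag any point that deviates from its trailing-window mean by more than k standard deviations. This is a minimal illustration on synthetic data, not one of the paper's models, and the window size and threshold are arbitrary assumptions.

```python
import numpy as np

def rolling_anomalies(x, window=20, k=4.0):
    """Flag points deviating more than k sigma from the trailing-window mean."""
    flags = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        ref = x[t - window:t]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(x[t] - mu) > k * sigma:
            flags[t] = True
    return flags

rng = np.random.default_rng(1)
series = rng.standard_normal(200)
series[150] += 10.0                 # inject one obvious anomaly
hits = rolling_anomalies(series)
```

Learned models earn their keep where such fixed-threshold statistics fail: seasonal patterns, contextual anomalies, and correlated multivariate streams.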


Subject(s)
Neural Networks, Computer , Algorithms , Multivariate Analysis , Deep Learning , Time Factors
9.
PLoS One ; 19(3): e0299127, 2024.
Article in English | MEDLINE | ID: mdl-38536782

ABSTRACT

Depression is a serious mental health disorder affecting millions of individuals worldwide. Timely and precise recognition of depression is vital for appropriate intervention and effective treatment. Electroencephalography (EEG) has surfaced as a promising tool for inspecting the neural correlates of depression and therefore has the potential to contribute effectively to its diagnosis. This study presents an EEG-based mental depressive disorder detection mechanism using a publicly available EEG dataset, the Multi-modal Open Dataset for Mental-disorder Analysis (MODMA). It uses EEG data acquired from 55 participants with 3 electrodes in the resting-state condition. Twelve temporal-domain features are extracted from the EEG data using non-overlapping windows of 10 seconds and presented to a novel feature selection mechanism. The feature selection algorithm selects the optimum subset of attributes with the highest discriminative power for classifying patients with mental depressive disorder versus healthy controls. The selected EEG attributes are classified using three different classification algorithms: Best-First (BF) Tree, k-nearest neighbor (KNN), and AdaBoost. The highest classification accuracy of 96.36% is achieved with the BF-Tree using a feature vector of length 12. The proposed classification scheme outperforms existing state-of-the-art depression classification schemes in terms of the number of electrodes used for EEG recording, feature vector length, and achieved classification accuracy. The proposed framework could be used in psychiatric settings, providing valuable support to psychiatrists.
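The windowing and temporal feature extraction steps can be sketched as follows. The sampling rate and the four features shown are illustrative assumptions (the paper extracts twelve temporal-domain features); Hjorth mobility is included as one representative feature of that family.

```python
import numpy as np

def windows(sig, fs, seconds=10):
    """Split a 1-D signal into non-overlapping windows of fixed duration."""
    step = fs * seconds
    n = len(sig) // step
    return sig[:n * step].reshape(n, step)

def temporal_features(w):
    """Four illustrative temporal-domain features of one window:
    mean, variance, peak amplitude, and Hjorth mobility."""
    d = np.diff(w)
    return np.array([w.mean(), w.var(), np.abs(w).max(),
                     np.sqrt(d.var() / w.var())])

fs = 250                                # assumed sampling rate, Hz
rng = np.random.default_rng(2)
eeg = rng.standard_normal(fs * 35)      # 35 s of synthetic single-channel signal
W = windows(eeg, fs)                    # three complete 10 s windows
F = np.array([temporal_features(w) for w in W])
```

Each row of `F` would be one sample presented to the feature selection stage and then to the classifiers.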


Subject(s)
Depression , Support Vector Machine , Humans , Depression/diagnosis , Algorithms , Electroencephalography , Machine Learning
10.
Sci Rep ; 14(1): 14976, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38951646

ABSTRACT

Software-defined networking (SDN) is a pioneering network paradigm that strategically decouples the control plane from the data and management planes, thereby streamlining network administration. SDN's centralized network management makes it easier to configure access control list (ACL) policies, which is important because these policies change frequently due to network application needs and topology modifications. Such changes may trigger modifications at the SDN controller; in response, the controller performs computational tasks to generate updated flow rules in accordance with the modified ACL policies and installs them at the data plane. Existing research has investigated reactive flow rule installation and shown that changes in ACL policies result in packet violations and network inefficiencies. Network management becomes difficult because inconsistent flow rules must be deleted and new flow rules computed for each modified ACL policy. The proposed solution efficiently handles ACL policy changes by automatically detecting a policy change, detecting and deleting the inconsistent flow rules, caching rules at the controller, and adding the new flow rules at the data plane. A comprehensive analysis of both proactive and reactive mechanisms in SDN is carried out to achieve this. To facilitate the evaluation of these mechanisms, the ACL policies are modeled using a 5-tuple structure comprising Source, Destination, Protocol, Ports, and Action. The resulting policies are translated into a policy implementation file and transmitted to the controller. Subsequently, the controller uses the network topology and the ACL policies to calculate the necessary flow rules, caches these rules in a hash table, and installs them at the switches. The proposed solution is simulated in the Mininet emulator using a set of ACL policies, hosts, and switches.
The results are presented by varying the ACL policy at different time instances, the inter-packet delay, and the flow timeout value. The simulation results show that reactive flow rule installation performs better than the proactive mechanism with respect to network throughput, packet violations, successful packet delivery, normalized overhead, policy change detection time, and end-to-end delay. The proposed solution, designed to run directly on SDN controllers that support the Pyretic language, provides a flexible and efficient approach to flow rule installation. The proposed mechanism can help network administrators implement ACL policies, and it may also be integrated with network monitoring and debugging tools to analyze the effectiveness of the policy change mechanism.
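The 5-tuple policy model and the controller-side hash-table caching can be sketched as follows. The rule format is an illustrative OpenFlow-style dictionary, not the exact Pyretic representation used in the paper; the point is that a frozen policy tuple hashes cleanly, so a policy change becomes a delete-then-install on the cache.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AclPolicy:
    # 5-tuple model from the paper: Source, Destination, Protocol, Port, Action
    src: str
    dst: str
    proto: str
    port: int
    action: str  # "allow" | "deny"

flow_cache: dict = {}  # controller-side hash table of installed rules

def compile_rule(p: AclPolicy) -> dict:
    """Translate one ACL policy into an illustrative match/action flow rule
    and cache it so future policy changes can be diffed instead of recomputed."""
    rule = {"match": {"nw_src": p.src, "nw_dst": p.dst,
                      "nw_proto": p.proto, "tp_dst": p.port},
            "action": "output" if p.action == "allow" else "drop"}
    flow_cache[p] = rule
    return rule

def on_policy_change(old: AclPolicy, new: AclPolicy) -> dict:
    """Delete the now-inconsistent rule from the cache, then install the new one."""
    flow_cache.pop(old, None)
    return compile_rule(new)

p1 = AclPolicy("10.0.0.1", "10.0.0.2", "tcp", 80, "allow")
compile_rule(p1)
p2 = AclPolicy("10.0.0.1", "10.0.0.2", "tcp", 80, "deny")
on_policy_change(p1, p2)   # flip the action for the same 4-tuple match
```

In a real controller the install/delete calls would additionally push the rule to the switches; only the cache bookkeeping is shown here.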

11.
Neural Comput Appl ; 35(11): 8505-8516, 2023.
Article in English | MEDLINE | ID: mdl-36536673

ABSTRACT

In late 2019, a new coronavirus disease (COVID-19) appeared in Wuhan, Hubei Province, China. The virus spread through many countries, affecting a large population. Polymerase chain reaction is currently used to diagnose COVID-19 in suspected patients; however, its sensitivity is quite low. Researchers have also developed automated approaches for reliably and promptly identifying COVID-19 from X-ray images. However, traditional machine learning-based image classification algorithms require manual image segmentation and feature extraction, which is time-consuming. Owing to their promising results and robust performance, Convolutional Neural Network (CNN)-based techniques are widely used to classify COVID-19 from chest X-rays (CXR). This study explores CNN-based COVID-19 classification methods. A series of experiments on COVID-19 detection and classification validates the viability of the proposed framework. The dataset is first preprocessed and then fed into two Residual Network (ResNet) architectures for deep feature extraction, namely ResNet18 and ResNet50, while support vector machines with multiple kernels, including quadratic, linear, Gaussian, and cubic, classify these features. The experimental results suggest that the proposed framework efficiently detects COVID-19 from CXR images, obtaining the best accuracy of 97.3% with ResNet50.

12.
Complex Intell Systems ; 9(3): 3043-3070, 2023.
Article in English | MEDLINE | ID: mdl-35668732

ABSTRACT

Cloud computing refers to the on-demand availability of computer system resources, especially data storage and processing power, without direct active management by the user. Email is commonly used to send and receive data for individuals and groups, and financial data, credit reports, and other sensitive information are often sent over the Internet. Phishing is a fraudster's technique for obtaining sensitive data from users by appearing to come from trusted sources; by misdirecting the reader, a phished email can persuade the recipient to reveal secret data. The core problem addressed here is email phishing attacks during sending and receiving: the attacker sends spam via email and harvests your data when you open and read it. In recent years this has been a major problem for everyone. This paper uses different legitimate and phishing data sizes, detects new emails, and applies different features and algorithms for classification. A modified dataset is created after evaluating the existing approaches. We created a feature-extracted comma-separated values (CSV) file and a label file, and applied the support vector machine (SVM), naive Bayes (NB), and long short-term memory (LSTM) algorithms. The experiments treat the recognition of a phished email as a classification problem. According to the comparison and implementation, SVM, NB, and LSTM perform well and accurately detect email phishing attacks, achieving the highest accuracies of 99.62%, 97%, and 98%, respectively.
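Of the three classifiers compared, naive Bayes is small enough to sketch from scratch. The toy corpus below is invented for illustration and is nowhere near the datasets used in the paper; it only shows the word-count model with Laplace smoothing.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Tiny multinomial naive Bayes over word counts."""
    classes = set(labels)
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for doc, y in zip(docs, labels):
        counts[y].update(doc.lower().split())
    vocab = set(w for c in counts.values() for w in c)
    return prior, counts, vocab

def predict_nb(model, doc):
    """Pick the class with the highest smoothed log-posterior."""
    prior, counts, vocab = model
    best, best_lp = None, -math.inf
    for c in prior:
        total = sum(counts[c].values())
        lp = math.log(prior[c])
        for w in doc.lower().split():
            lp += math.log((counts[c][w] + 1) / (total + len(vocab)))  # Laplace
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = ["verify your account password now",
        "urgent click to claim your prize",
        "meeting notes attached for review",
        "lunch schedule for next week"]
labels = ["phish", "phish", "ham", "ham"]
model = train_nb(docs, labels)
```

In practice the same interface would sit behind features read from the CSV file rather than raw word counts.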

13.
Multimed Tools Appl ; 82(9): 14135-14152, 2023.
Article in English | MEDLINE | ID: mdl-36196269

ABSTRACT

Coronavirus triggers several respiratory symptoms and infections, such as sneezing, coughing, and pneumonia, and is transmitted from human to human through airborne droplets. According to the guidelines of the World Health Organization, the spread of COVID-19 can be mitigated by avoiding close public interactions and following standard operating procedures (SOPs), including wearing a face mask and maintaining social distance in schools, shopping malls, and crowded areas. However, enforcing these SOPs on a larger scale is still a challenging task. With the emergence of deep learning-based visual object detection networks, numerous methods have been proposed to perform face mask detection in public spots. However, these methods require a huge amount of data to ensure robustness in real-time applications. Also, to the best of our knowledge, there is no standard outdoor surveillance-based dataset available for validating the efficacy of face mask detection and social distancing methods in public spots. To this end, we present a large-scale dataset comprising 10,000 outdoor images with binary class labels, i.e., masked and unmasked people, to accelerate the development of automated face mask detection and social distance measurement in public spots. Alongside it, we present an end-to-end pipeline for real-time face mask detection and social distance measurement in an outdoor environment. Initially, existing state-of-the-art single- and multi-stage object detection networks are fine-tuned on the proposed dataset to evaluate their performance in terms of accuracy and inference time. Based on its better performance, the YOLO-v3 architecture is further optimized by tuning its feature extraction and region proposal generation layers to improve performance in real-time applications. Our results indicate that the presented pipeline outperforms the baseline version, showing an improvement of 5.3% in accuracy.

14.
Sci Rep ; 13(1): 7422, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37156887

ABSTRACT

The wide availability of easy-to-access content on social media, along with advanced tools and inexpensive computing infrastructure, has made it very easy for people to produce deep fakes that can spread disinformation and hoaxes. This rapid advancement can cause panic and chaos, as anyone can easily create propaganda using these technologies. Hence, a robust system to differentiate between real and fake content has become crucial in the age of social media. This paper proposes an automated method to classify deep fake images by employing Deep Learning and Machine Learning based methodologies. Traditional Machine Learning (ML) systems employing handcrafted feature extraction fail to capture complex patterns that are poorly understood or not easily represented with simple features; they generalize poorly to unseen data and are sensitive to noise and variations in the data, which reduces their performance. These problems limit their usefulness in real-world applications where the data constantly evolves. The proposed framework initially performs an Error Level Analysis of the image to determine whether the image has been modified. The image is then supplied to Convolutional Neural Networks for deep feature extraction, and the resulting feature vectors are classified via Support Vector Machines and K-Nearest Neighbors with hyper-parameter optimization. The proposed method achieved its highest accuracy of 89.5% via a Residual Network and K-Nearest Neighbor. The results demonstrate the efficiency and robustness of the proposed technique; hence, it can be used to detect deep fake images and reduce the potential threat of slander and propaganda.
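The Error Level Analysis step can be sketched with Pillow. This is a minimal illustration of the idea, assuming a JPEG re-save quality of 90 and an amplification factor chosen for visibility; the paper's exact parameters are not given here. Edited regions tend to recompress differently from the rest of the image, so they stand out in the amplified difference.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(img, quality=90, scale=15):
    """Re-save the image as JPEG and amplify the per-pixel difference."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(img.convert("RGB"), resaved)
    # Amplify small residuals so tampered regions become visible.
    return diff.point(lambda v: min(255, v * scale))

src = Image.new("RGB", (64, 64), (120, 60, 200))  # synthetic stand-in image
ela = error_level_analysis(src)
```

The resulting ELA map, rather than the raw photo, is what the CNN feature extractor would consume.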

15.
J Nanosci Nanotechnol ; 12(5): 3918-21, 2012 May.
Article in English | MEDLINE | ID: mdl-22852325

ABSTRACT

We developed two simple methods, dip-coat stamping and lift-off, to transfer large-area, high-quality graphene films onto the top and side faces of a polymer optical fiber. The graphene films can be synthesized by chemical vapor deposition on large Cu foils. After synthesis, the graphene films were characterized by scanning electron microscopy, atomic force microscopy, and Raman spectroscopy. The polymer optical fiber probe with the transferred graphene film can be used as a chemical sensor for the detection of various organic aerosols.

16.
J Nanosci Nanotechnol ; 12(3): 2300-10, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22755051

ABSTRACT

This report documents the design and characterization of DNA molecular nanoarchitectures consisting of artificial double-crossover DNA tiles with different geometries and chemistries. The structures of the unit tiles, including normal, biotinylated, and hairpin-loop variants, are morphologically studied by atomic force microscopy. The specific proton resonances of the individual tiles and their intra-/inter-nucleotide relationships are verified by proton nuclear magnetic resonance spectroscopy and two-dimensional correlation spectral studies, respectively. Significant up-field and down-field shifts in the resonance signals of the individual residues at various temperatures are discussed. The results suggest that with artificially designed DNA tiles it is feasible to obtain structural information on the relative base sequences. These tiles were later fabricated into 2D DNA lattice structures for specific applications such as protein arrangement via biotinylated bulged loops or pattern generation using a hairpin structure.


Subject(s)
DNA/chemistry , Nuclear Magnetic Resonance, Biomolecular/methods , Nucleic Acid Conformation
17.
J Nanosci Nanotechnol ; 12(7): 5381-5, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22966575

ABSTRACT

Graphene is a zero-band-gap semi-metal with remarkable electromagnetic and mechanical characteristics. This study is the first attempt to use graphene in a surface plasmon resonance (SPR) sensor as a replacement material for gold or silver. Graphene, comprising a single atomic layer of carbon, is a purely two-dimensional material and an ideal candidate for use in a biosensor because of its high surface-to-volume ratio. The sensor is based on the resonance condition of the surface plasmon wave (SPW), which depends on the dielectric constants of the metal film and of the detected material in the gas or aqueous phase. Graphene in the SPR sensor is expected to extend the range of analytes to bio-aerosols thanks to its superior electromagnetic properties. In this study, an SPR-based fiber optic sensor coated with multi-layered graphene is described. The multi-layered graphene film, synthesized by chemical vapor deposition (CVD) on a Ni substrate, was transferred onto the sensing region of an optical fiber. The graphene-coated SPR sensor is used to analyze the interaction between structured DNA-biotin and streptavidin. The light transmitted through the sensing region is measured by a spectrometer and a multimeter; blue light with a wavelength of 450 to 460 nm was used as the light source. We observed the SPR phenomena in the sensor and show the contrasting trends between the bare fiber and the graphene-coated fiber. The fabricated graphene-based fiber optic sensor shows excellent sensitivity in detecting the interaction between structured DNA and streptavidin.


Subject(s)
Biosensing Techniques/instrumentation , Fiber Optic Technology/instrumentation , Graphite/chemistry , Nanotechnology/instrumentation , Surface Plasmon Resonance/instrumentation , Equipment Design , Equipment Failure Analysis
18.
Comput Intell Neurosci ; 2022: 6294058, 2022.
Article in English | MEDLINE | ID: mdl-35498213

ABSTRACT

The most often reported danger to computer security is malware. The antivirus company AV-Test Institute reports that more than 5 million malware samples are created each day. A malware classification method is frequently required to prioritize these occurrences, because security teams cannot address all of that malware at once. Malware's variety, volume, and sophistication are all growing at an alarming rate, and hackers and attackers routinely design systems that automatically rearrange and encrypt their code to escape discovery. Traditional machine learning approaches, in which classifiers learn from a hand-crafted feature vector, are ineffective for classifying malware. Recently, deep convolutional neural networks (CNNs) have successfully identified and classified malware. This research proposes a smart system for categorizing malware, introducing a novel deep learning model for multiclass classification of malware families. The malware file is converted to a grayscale image, which is then classified using a convolutional neural network. To evaluate the performance of our technique, we used a Microsoft malware dataset of 10,000 samples with nine distinct classes. The findings stood out among deep learning models, with 99.97% accuracy across the nine malware types.
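The file-to-grayscale-image conversion can be sketched in a few lines: each byte of the binary becomes one gray level, laid out row by row. The fixed row width below is an illustrative assumption; in practice the width is usually chosen as a function of file size, and the grid is then resized to the CNN's input resolution.

```python
import math

def bytes_to_grayscale(data: bytes, width: int = 16):
    """Map a binary's raw bytes onto a fixed-width 2-D grid of gray levels
    (0-255), zero-padding the final row."""
    rows = math.ceil(len(data) / width)
    padded = data + b"\x00" * (rows * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(rows)]

img = bytes_to_grayscale(bytes(range(40)), width=16)  # 40-byte toy "binary"
```

Structurally similar binaries produce visually similar textures, which is what lets an image CNN separate malware families.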


Subject(s)
Computer Security , Hand , Humans , Machine Learning , Neural Networks, Computer , Upper Extremity
19.
Comput Intell Neurosci ; 2022: 7897669, 2022.
Article in English | MEDLINE | ID: mdl-35378808

ABSTRACT

Brain tumors are difficult to treat and cause substantial fatalities worldwide. To identify brain tumors, medical professionals visually analyze the images and mark out the tumor regions, which is time-consuming and prone to error. Researchers have proposed automated methods in recent years to detect brain tumors early, but these approaches suffer from low accuracy and high false-positive rates. An efficient tumor identification and classification approach is required to extract robust features and perform accurate disease classification. This paper proposes a novel multiclass brain tumor classification method based on deep feature fusion. The MR images are preprocessed using min-max normalization, and extensive data augmentation is then applied to overcome the lack of data. The deep CNN features obtained from transfer-learned architectures such as AlexNet, GoogLeNet, and ResNet18 are fused into a single feature vector and then fed to Support Vector Machine (SVM) and K-nearest neighbor (KNN) classifiers to predict the final output. The fused feature vector contains more information than the individual vectors, boosting the proposed method's classification performance. The proposed framework is trained and evaluated on 15,320 Magnetic Resonance Images (MRIs). The study shows that the fused feature vector performs better than the individual vectors. Moreover, the proposed technique outperformed existing systems, achieving an accuracy of 99.7%; hence, it can be used in a clinical setup to classify brain tumors from MRIs.
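The min-max normalization and fusion steps can be sketched as follows. The random vectors are stand-ins for the AlexNet, GoogLeNet, and ResNet18 features (the paper uses the real networks); only the normalization and the concatenation-based fusion are illustrated.

```python
import numpy as np

def min_max_normalize(img):
    """Rescale pixel intensities to [0, 1] -- the preprocessing step named
    in the paper."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

def fuse(*feature_vectors):
    """Concatenate the per-network deep feature vectors into one fused vector."""
    return np.concatenate(feature_vectors)

rng = np.random.default_rng(3)
alex = rng.standard_normal(8)   # stand-in for AlexNet features
goog = rng.standard_normal(8)   # stand-in for GoogLeNet features
res = rng.standard_normal(8)    # stand-in for ResNet18 features
fused = fuse(alex, goog, res)

mri = rng.integers(0, 256, (8, 8)).astype(float)  # synthetic MRI patch
norm = min_max_normalize(mri)
```

The fused vector, being three networks long, carries complementary information and is what the SVM and KNN classifiers receive.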


Subject(s)
Brain Neoplasms , Machine Learning , Brain/pathology , Brain Neoplasms/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Support Vector Machine
20.
Comput Intell Neurosci ; 2022: 4239536, 2022.
Article in English | MEDLINE | ID: mdl-35498201

ABSTRACT

Stress is the body's response to environmental factors such as physical and emotional challenges or demands. Stress is a major cause of illness and has become a hot topic for many researchers. It can be brought about by a wide range of normal life events that are hard to avoid, and it generally refers to two things: the psychological perception of pressure, and the body's response to it, which involves multiple systems, from metabolism to muscles to memory. Many methods and tools are being developed to reduce stress in humans; stress can be a short-term issue or a long-term problem, depending on what changes in one's life. The emphasis of this article is to reduce the effects of stress by developing a stress-releasing game and verifying its results through the Profile of Mood States (POMS) and POMS-2 surveys. Because games are associated with stress levels, parameters associated with reducing stress, such as sounds, visuals, and colors, were used to develop a game for stress reduction in players. The survey research aims to determine whether the purpose-built game affects the player's stress level using a reliable psychological survey instrument. The survey collected a variety of information from its participants over six months, recording different aspects of a person's psychology and reactions by calculating the mean, standard deviation, degrees of freedom, zero error, and probability value (%). The POMS and POMS-2 results indicate that the custom-built game is effective in reducing stress.


Subject(s)
Video Games , Culture , Emotions , Humans , Muscles , Upper Extremity