ABSTRACT
The drastic shape deformation that accompanies the structural phase transition in thermosalient materials offers great potential for their application as actuators and sensors. The microscopic origin of this fascinating effect has so far remained obscure, while for technological applications it is important to learn how to drive transitions from one phase to another. Here, we present a combined computational and experimental study in which we have successfully identified the order parameter for the thermosalient phase transition in the molecular crystal 2,7-di([1,1'-biphenyl]-4-yl)-fluorenone. Molecular dynamics simulations reveal that the transition barrier vanishes at the transition temperature. The simulations further show that two low-frequency vibrational-librational modes are directly related to the order parameter that describes this phase transition, which is supported by experimental Raman spectroscopy studies. By applying a computational THz pulse with the proper frequency and amplitude, we predict that we can photoinduce this phase transition on a picosecond timescale.
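The abstract does not give the free-energy form, but a vanishing transition barrier can be pictured with a schematic Landau expansion in the order parameter η (an illustration, not the paper's model): the barrier separating the two wells shrinks to zero as the temperature T approaches the transition temperature T_c.

```latex
F(\eta, T) = F_0 + \tfrac{1}{2}\,a\,(T - T_c)\,\eta^2 + \tfrac{1}{4}\,b\,\eta^4,
\qquad a, b > 0,
\qquad
\Delta F_{\mathrm{barrier}} = \frac{a^2\,(T_c - T)^2}{4b} \;\longrightarrow\; 0
\quad \text{as } T \to T_c .
```

Below T_c the wells sit at η = ±√(a(T_c − T)/b), separated by a barrier at η = 0 whose height shrinks quadratically in (T_c − T); first-order variants add odd or sixth-order terms but show the same vanishing-barrier behaviour near the transition.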
ABSTRACT
In this article, power quality disturbances, such as voltage and current unbalances and series and shunt faults, are monitored in three-phase synchronous machines. Because imbalances affect machine efficiency and service life, it is essential to determine whether the three-phase voltages and currents are balanced or unbalanced. An integrated detection algorithm based on the coherence estimator is proposed to detect unbalanced voltages/currents and various abnormal conditions precisely and quickly. Most existing detection methods require more than one cycle to determine contingency conditions, whereas the proposed technique detects these situations rapidly within a discrete protection system. The suggested scheme acquires instantaneous measurements of the three-phase voltages and currents at the synchronous machine terminals to calculate fifteen coherence coefficients, which are used to detect and assess any unbalance or fault accurately and swiftly. A new set of imbalance and disturbance indicators based on the coherence estimators is developed in this study. Multiple test cases were conducted to validate the capabilities of the proposed algorithm using a real power system comprising a motor-generator set, a three-phase load, and measurement transformers. The experimental results revealed relay reliability and accuracy percentages of 98.28% and 98.52%, respectively. The quantitative findings of the imbalance and disturbance assessment are recorded.
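The abstract does not define the coherence estimator, but the count of fifteen coefficients matches the number of unordered pairs among the six terminal signals (three voltages, three currents): C(6, 2) = 15. A minimal sketch, assuming a simple normalized cross-correlation as the per-window coherence measure (the function names and test signal are illustrative):

```python
import numpy as np
from itertools import combinations

def coherence_coefficient(x, y):
    """Normalized cross-correlation over one data window; a simple
    stand-in for the paper's coherence estimator, whose exact
    definition is not given in the abstract."""
    return np.abs(np.dot(x, y)) / np.sqrt(np.dot(x, x) * np.dot(y, y))

def unbalance_indicators(va, vb, vc, ia, ib, ic):
    """Compute the 15 pairwise coefficients among the six terminal
    signals; C(6, 2) = 15 matches the count in the abstract."""
    signals = {"va": va, "vb": vb, "vc": vc, "ia": ia, "ib": ib, "ic": ic}
    return {f"{p}-{q}": coherence_coefficient(signals[p], signals[q])
            for p, q in combinations(signals, 2)}

# Balanced three-phase example: coefficients stay near a fixed
# baseline; an unbalance or fault shifts them.
t = np.linspace(0, 0.02, 400)               # one 50 Hz cycle
va = np.sin(2 * np.pi * 50 * t)
vb = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
vc = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)
print(unbalance_indicators(va, vb, vc, va, vb, vc))
```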
ABSTRACT
Background and Aims: Electronic gaming machines (EGMs) are a significant source of gambling spend due to their widespread use. Skill-based gambling machines (SGMs) represent an innovative adaptation, merging EGMs' chance-based rewards with video game-like skill elements. This study aimed to explore the appeal and behavioural consequences of playing SGMs in comparison to traditional reel-based EGMs, particularly focusing on illusions of control, betting behaviour, and the subjective experience of gamblers. Methods: Participants (N = 1,260) were recruited online and engaged in an online task simulating either an SGM or a reel-based EGM, with outcomes represented as influencing their survey compensation. The study examined the effect of SGMs relative to EGMs on bet size, persistence, enjoyment, illusions of control, and game immersion, as well as the influence of demographics and gambling problem severity. Results: SGMs particularly appealed to younger adults, regular EGM players, and people with more gambling problems. Despite identical payout structures, people assigned to play SGMs showed greater illusions of control, believing in the influence of skill on game outcomes and that practice could improve results. However, there was no significant difference in overall betting intensity between SGM and EGM players, although specific demographic groups showed faster betting speeds in SGMs. Discussion and Conclusions: SGMs, despite not inherently encouraging higher betting intensity, attract vulnerable groups and create illusions of control, posing new regulatory challenges. The visual and interactive features of SGMs, while appealing, might contribute to these perceptions, indicating a need for careful regulation and further research on their long-term impacts on gambling behaviour and harm.
ABSTRACT
As the primary grain crop in China, wheat holds a significant position in the country's agricultural production, circulation, and consumption. However, the presence of imperfect grains has greatly impacted wheat quality and, subsequently, food security. To detect perfect wheat grains and six types of imperfect grains, a method for the fast and non-destructive identification of imperfect wheat grains using hyperspectral images was proposed. The main contents and results are as follows: (1) We collected wheat grain hyperspectral data. Seven types of wheat grain samples, each containing 300 grains, were prepared, a hyperspectral imaging system for imperfect wheat grains was constructed, and visible/near-infrared hyperspectral data from 2,100 wheat grains were collected. The Savitzky-Golay algorithm was used to preprocess the hyperspectral images of wheat grains, retaining 261 effective spectral bands within the range of 420.61-980.43 nm. (2) The Successive Projections Algorithm was used to reduce the 261-dimensional hyperspectral data to 33 dimensions. Principal Component Analysis was used to extract the optimal spectral wavelengths, selecting the hyperspectral images at 647.57 nm, 591.78 nm, and 568.36 nm to establish the dataset. (3) Particle Swarm Optimization was used to optimize the Support Vector Machine, Convolutional Neural Network, and MobileNet V2 models established to recognize the seven types of wheat grains. The comprehensive recognition rates were 93.71%, 95.14%, and 97.71%, respectively. The results indicate that a larger model with more parameters may not necessarily yield better performance. The research shows that the MobileNet V2 network model exhibits superior recognition efficiency, and the integration of hyperspectral image technology with the classification model can accurately identify imperfect wheat grains.
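A minimal sketch of the preprocessing and wavelength-selection steps (1)-(2) using SciPy and scikit-learn; the Savitzky-Golay window and polynomial order, the random placeholder data, and the loading-based band-ranking rule are assumptions not stated in the abstract:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

# Hypothetical shapes: 2100 grains x 261 bands (420.61-980.43 nm),
# matching the counts in the abstract; the data itself is illustrative.
rng = np.random.default_rng(0)
spectra = rng.random((2100, 261))
wavelengths = np.linspace(420.61, 980.43, 261)

# Savitzky-Golay smoothing along the spectral axis (window and order
# are assumed; the abstract does not state them).
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)

# PCA to find the bands with the largest loadings on the leading
# components: one simple way to pick "optimal wavelengths" as described.
pca = PCA(n_components=3).fit(smoothed)
top_bands = np.abs(pca.components_).sum(axis=0).argsort()[::-1][:3]
print("selected wavelengths:", np.sort(wavelengths[top_bands]))
```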
Subjects
Algorithms; Hyperspectral Imaging; Triticum; Triticum/chemistry; Hyperspectral Imaging/methods; Edible Grain/chemistry; Principal Component Analysis; Support Vector Machine; Image Processing, Computer-Assisted/methods; Spectroscopy, Near-Infrared/methods
ABSTRACT
Due to the complexity of determining the initial rotor flux and detecting errors, conventional rotor flux observation methods are easily affected by direct current (DC) components and harmonics. To address this issue, this paper proposes an in-phase filter (IPF)-based rotor flux observation strategy for sensorless control of permanent magnet synchronous machines (PMSMs). The core components of the IPF are a double second-order generalized integrator (DSOGI) and a phase angle compensation transfer function (PACTF). The DSOGI provides an accurate electrical angular frequency, while the PACTF applies a phase correction to the vq' signals. By employing the IPF structure, accurate observations of rotor flux, electrical speed, and rotor position are achieved, which can be effectively used in the sensorless control of PMSMs, eliminating the need for magnitude and phase compensations. Finally, the proposed observation strategy is applied to an experimental PMSM bench, and its effectiveness is illustrated by experimental results. From these results, it can be concluded that the IPF is significantly better than a low-pass filter (LPF) and is overall about 5% more accurate than an observer based on the cascaded second-order generalized integrator (CSOGI).
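The DSOGI is built from two single SOGI quadrature signal generators. Below is a minimal sketch of one discrete-time SOGI-QSG; the forward-Euler discretization and the gain k are illustrative choices, not the paper's implementation. Its band-pass in-phase output v' rejects the DC offset and harmonics mentioned above, while qv' provides the 90°-shifted signal that a DSOGI combines across the αβ axes.

```python
import numpy as np

def sogi_qsg(v, omega, dt, k=1.41):
    """Minimal second-order generalized integrator (SOGI) quadrature
    signal generator, the building block of the DSOGI."""
    v_f = np.zeros_like(v)    # filtered in-phase output v'
    qv_f = np.zeros_like(v)   # quadrature output qv'
    for n in range(1, len(v)):
        dv = omega * (k * (v[n - 1] - v_f[n - 1]) - qv_f[n - 1])
        dqv = omega * v_f[n - 1]
        v_f[n] = v_f[n - 1] + dt * dv
        qv_f[n] = qv_f[n - 1] + dt * dqv
    return v_f, qv_f

# 50 Hz test signal with a DC offset and a 5th harmonic, the kind of
# disturbance the abstract says conventional observers struggle with.
dt, f = 1e-4, 50.0
t = np.arange(0, 0.2, dt)
v = np.sin(2 * np.pi * f * t) + 0.2 + 0.1 * np.sin(2 * np.pi * 5 * f * t)
v_filtered, v_quadrature = sogi_qsg(v, 2 * np.pi * f, dt)
```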
ABSTRACT
RATIONALE, AIMS AND OBJECTIVES: This research aims to develop an effective algorithm for diagnosing COVID-19 in chest X-rays using the transfer learning method and support vector machines. METHOD: Data were collected from 10 clinics, including both large city hospitals and smaller medical institutions, ensuring a diverse range of geographical and demographic information in the sample. An extensive dataset of 10,000 chest X-ray images was collected: 5,000 images represent normal cases, 3,993 represent pneumonia cases, and 1,007 represent COVID-19 cases. Machine learning methods were applied to develop a classification model, and the results were compared with seven state-of-the-art models and a lightweight CNN architecture. RESULTS: The proposed method achieves high accuracy: 0.95 for COVID-19, 0.89 for pneumonia, and 0.92 for normal images (p < 0.05). Comparison with other models demonstrates statistically significant superiority of our method in accuracy across all three classes; the EfficientNet-B0 model surpasses our method only in accuracy for normal images (p < 0.01), underscoring the advantage of our method on the remaining classes. Our method also demonstrates high sensitivity: 0.96 for COVID-19, 0.88 for pneumonia, and 0.93 for normal images (p < 0.05), outperforming most of the compared models. Correlation analysis showed Pearson coefficients of 0.92, 0.89, and 0.94 for COVID-19, pneumonia, and normal images, respectively, confirming a high degree of consistency between predicted and true class labels. In addition, the model was validated on external datasets to assess its generalizability; this validation confirmed its effectiveness in a variety of clinical settings. CONCLUSION: This study confirms the importance of applying machine learning methods in medical applications and opens new perspectives for early diagnosis of infectious diseases. The practical application of the obtained results can enhance the efficiency of diagnosis, help control the spread of COVID-19, and contribute to the development of innovative methods in medical practice.
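A minimal sketch of the transfer-learning-plus-SVM pipeline the abstract describes: a pretrained CNN used as a frozen feature extractor feeding an SVM classifier. The backbone choice (ResNet50), image size, and the random placeholder arrays are assumptions; the paper's actual architecture and data differ.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Pretrained CNN as a frozen feature extractor (backbone is assumed).
base = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                      weights="imagenet")

def extract_features(images):
    """images: float array (n, 224, 224, 3), raw pixel range 0-255."""
    x = tf.keras.applications.resnet50.preprocess_input(images)
    return base.predict(x, verbose=0)

# Hypothetical arrays standing in for the 10,000-image X-ray dataset.
images = np.random.rand(64, 224, 224, 3) * 255
labels = np.random.randint(0, 3, 64)   # 0=normal, 1=pneumonia, 2=COVID-19

feats = extract_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, test_size=0.2)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)    # SVM on CNN features
print("accuracy:", clf.score(X_te, y_te))
```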
ABSTRACT
Chemical systems displaying directional motions are relevant to the operation of artificial molecular machines. Herein we present the functioning of a molecule capable of transporting a cyclic species in a preferential direction. Our system is based on a linear, non-symmetric, positively charged molecule. This cation integrates two different reactive regions into its structure. One end features a bulky ester group that can be exchanged for a smaller substituent; the other end contains an acid/base-responsive moiety that plays a dual role, as part of the recognition motif and as a terminal group. In the acidic state, a dibenzo-24-crown-8 ether slides onto the linear component, attracted by the positively charged recognition site. It does this selectively through the end that contains the azepanium group, since the other side is sterically hindered. After base addition, the intermolecular interactions are lost; however, the macrocycle is unable to escape from the linear component, since the energy barrier to slide over the neutral azepane is too large. A metastable mechanically interlocked molecule is therefore formed. A second reaction, now on the ester functionality, exchanges the bulky mesityl for a methyl group, small enough to allow macrocycle dissociation, completing the directional transit of the ring along the track.
ABSTRACT
Artificial intelligence (AI) has a wide and increasing range of applications across various sectors. In medicine, AI has already made an impact in numerous fields, rapidly transforming healthcare delivery through its growing applications in diagnosis, treatment, and overall patient care. Equally, AI is swiftly transforming the landscape of kidney transplantation (KT), offering innovative solutions for longstanding problems that have eluded resolution through traditional approaches. The purpose of this review is to explore the present and future applications of artificial intelligence in KT, with a focus on pre-transplant evaluation, surgical assistance, outcomes, and post-transplant care. We discuss its great potential and the inevitable limitations that accompany these technologies. We conclude that by fostering collaboration between AI technologies and medical practitioners, we can pave the way for a future where advanced, personalised care becomes the standard in KT and beyond.
ABSTRACT
An aluminum alloy AA7075 workpiece is studied under a dry finishing turning operation. This work investigates the promising potential of deep adaptive learning enhanced artificial intelligence process models for L18 (6¹ × 3³) Taguchi orthogonal array experiments and the major cost-saving potential in machining process optimization. Six different tool inserts are used as a categorical parameter along with three continuous operational parameters, i.e., depth of cut, feed rate, and cutting speed, to study the effect of these parameters on workpiece surface roughness and tool life. The data obtained from the L18 (6¹ × 3³) orthogonal array experimental design in the dry finishing turning process are used to train the AI models. Multi-layer perceptron based artificial neural networks (MLP-ANNs), support vector machines (SVMs), and decision trees are compared for their ability to learn from the low-resolution experimental design. The AI models can be used with a low-resolution experimental design to obtain causal relationships between input and output variables. The best-performing operational input ranges are identified for the output parameters. AI response surfaces indicate different tool-life behavior for alloy-based coated tool inserts and non-alloy-based coated tool inserts. The AI-Taguchi hybrid modelling and optimization technique achieved 26% experimental savings (obtaining the causal relationships with 26% fewer experiments) compared to a conventional Taguchi design combined with full factorial experimentation on two screened three-level factors.
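A minimal sketch of fitting and cross-validating the three surrogate model families on an L18-style design matrix (one six-level categorical factor plus three continuous factors); the encoding, hyperparameters, and synthetic responses are illustrative, not the paper's data:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
tool = rng.integers(0, 6, 18)           # six-level categorical factor
ops = rng.random((18, 3))               # depth of cut, feed rate, speed
X = np.column_stack([tool, ops])        # 18 runs, as in an L18 array
y = rng.random(18)                      # e.g., surface roughness (synthetic)

pre = ColumnTransformer([("tool", OneHotEncoder(), [0]),
                         ("ops", StandardScaler(), [1, 2, 3])])
models = {"MLP-ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000),
          "SVM": SVR(),
          "DT": DecisionTreeRegressor(max_depth=4)}
for name, m in models.items():
    score = cross_val_score(make_pipeline(pre, m), X, y, cv=3).mean()
    print(name, round(score, 3))        # R^2 under 3-fold CV
```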
ABSTRACT
OBJECTIVES: Pedestrian gap acceptance (PGA) theory is the basic concept for pedestrian dilemma zone (PDZ) analysis and modeling, and gap acceptance behavior also depends on dilemma behavior. Uncontrolled intersections are among the major locations where pedestrians face a dilemma, and there is a possibility of pedestrian-vehicle interaction due to incorrect decisions taken by pedestrians when the vehicle lies within the limits of the PDZ. Delineating and modeling the spatial boundaries of the pedestrian dilemma stage improves PGA analysis. METHODS: The present study quantifies and models the pedestrian dilemma zone (PDZ) boundaries at uncontrolled X-intersections under mixed traffic conditions. Video data were collected from four four-legged uncontrolled intersections in India. Pedestrian and vehicle information was extracted using DataFromSky software and manually from video. The Cumulative Gap Distribution (GCD) and Support Vector Machine (SVM) methods were used to estimate the boundaries of the PDZ, and a binary logistic regression model was developed to estimate the PDZ boundary limits. RESULTS: The lower boundary limits of the PDZ using the GCD and SVM methods are 9.0 m and 6.0 m, respectively, and the upper boundary limits are 16.5 m and 18.5 m, respectively. The GCD method overestimated the lower limit and underestimated the upper limit compared with the SVM method. The binary logistic regression model results confirmed that pedestrian age, gender, and crossing speed have a negative correlation, and the location of the pedestrian crossing, vehicle type, and approaching speed have a positive correlation, with the boundary limits of the PDZ. CONCLUSIONS: The present study concludes that the SVM better estimated the PDZ boundary limits, with the largest margin, compared to the GCD method. The boundary limits shift away from the intersection for female and older pedestrians compared to male and younger pedestrians, respectively. The size advantage of two-wheelers (2Ws) is the reason the PDZ boundary limits shift closer to the crosswalk. The lower approaching speeds of vehicles at uncontrolled intersections are the reason pedestrians accept gaps at shorter distances.
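A minimal sketch of the SVM boundary-estimation idea: fit a classifier to pedestrians' accept/reject decisions versus vehicle distance and read off the distances where the acceptance probability transitions. The synthetic data, single feature, and probability thresholds are assumptions; the study uses richer features and real trajectories.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
distance = rng.uniform(0, 30, 500).reshape(-1, 1)   # metres to vehicle
# Synthetic rule: crossings mostly accepted beyond ~16 m, rejected
# below ~9 m, and mixed (the dilemma zone) in between.
p_accept = np.clip((distance.ravel() - 9.0) / (16.5 - 9.0), 0, 1)
accepted = rng.random(500) < p_accept

clf = SVC(kernel="linear", probability=True).fit(distance, accepted)
grid = np.linspace(0, 30, 301).reshape(-1, 1)
p = clf.predict_proba(grid)[:, 1]
# Treat the distances where P(accept) crosses 0.1 and 0.9 as the lower
# and upper PDZ limits (the thresholds are an assumption).
lower = grid[np.argmax(p > 0.1)][0]
upper = grid[np.argmax(p > 0.9)][0]
print(f"PDZ approx. {lower:.1f}-{upper:.1f} m")
```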
ABSTRACT
Controlled quantum machines have matured significantly. A natural next step is to increasingly grant them autonomy, freeing them from time-dependent external control. For example, autonomy could pare down the classical control wires that heat and decohere quantum computers; and an autonomous quantum refrigerator recently reset superconducting qubits to near their ground states, as is necessary before a computation. Which fundamental conditions are necessary for realizing useful autonomous quantum machines? Inspired by recent quantum thermodynamics and chemistry, we posit conditions analogous to DiVincenzo's criteria for quantum computing. Furthermore, we illustrate the criteria with multiple autonomous quantum machines (refrigerators, computers, clocks, etc.) and multiple candidate platforms (neutral atoms, molecules, superconducting qubits, etc.). Our criteria are intended to foment and guide the development of useful autonomous quantum machines.
ABSTRACT
Background. Virtual reality (VR) simulates real-life events and scenarios and is widely utilized in education, entertainment, and medicine. VR can be presented in two dimensions (2D) or three dimensions (3D), with 3D VR offering a more realistic and immersive experience. Previous research has shown that electroencephalogram (EEG) profiles induced by 3D VR differ from those of 2D VR in various aspects, including brain rhythm power, activation, and functional connectivity. However, studies focused on classifying EEG in 2D and 3D VR contexts remain limited. Methods. A 56-channel EEG was recorded while visual stimuli were presented in 2D and 3D VR. The recorded EEG signals were classified using two machine learning approaches: traditional machine learning and deep learning. In the traditional approach, features such as power spectral density (PSD) and common spatial patterns (CSP) were extracted, and three classifiers were used: support vector machines (SVM), K-nearest neighbors (KNN), and random forests (RF). For the deep learning approach, a specialized convolutional neural network, EEGNet, was employed. The classification performance of these methods was then compared. Results. In terms of accuracy, precision, recall, and F1-score, the deep learning method outperformed the traditional machine learning approaches. Specifically, the classification accuracy using the EEGNet deep learning model reached up to 97.86%. Conclusions. EEGNet-based deep learning significantly outperforms conventional machine learning methods in classifying EEG signals induced by 2D and 3D VR. Given EEGNet's design for EEG-based brain-computer interfaces (BCI), this superior classification performance suggests that it can enhance the application of 3D VR in BCI systems.
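A minimal sketch of the traditional pipeline (PSD features plus an SVM); the channel count matches the 56-channel montage, but the sampling rate, epoch length, frequency bands, and random data are assumptions, and EEGNet would replace this pipeline entirely in the deep learning arm.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 250                                          # Hz, assumed sampling rate
rng = np.random.default_rng(3)
epochs = rng.standard_normal((200, 56, 2 * fs))   # trials x channels x samples
labels = rng.integers(0, 2, 200)                  # 0 = 2D VR, 1 = 3D VR

def psd_features(x):
    """Mean band power per channel in theta, alpha, and beta bands."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.concatenate([pxx[:, (f >= lo) & (f < hi)].mean(axis=1)
                           for lo, hi in bands])

X = np.array([psd_features(e) for e in epochs])
print(cross_val_score(SVC(), X, labels, cv=5).mean())
```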
Subjects
Deep Learning; Electroencephalography; Virtual Reality; Humans; Electroencephalography/methods; Male; Adult; Female; Support Vector Machine; Machine Learning; Young Adult; Brain/physiology; Neural Networks, Computer; Brain-Computer Interfaces; Algorithms
ABSTRACT
This article considers the problem of classifying individuals in a dataset of diverse psychosis spectrum conditions, including persons with subsyndromal psychotic-like experiences (PLEs) and healthy controls. This task is more challenging than the traditional problem of distinguishing patients with a diagnosed disorder from controls using brain network features, since the neurobiological differences between PLE individuals and healthy persons are less pronounced. Further, examining a transdiagnostic sample compared to controls is concordant with contemporary approaches to understanding the full spectrum of neurobiology of psychoses. We consider both support vector machines (SVMs) and graph convolutional networks (GCNs) for classification, with a variety of edge selection methods for processing the inputs. We also employ the MultiVERSE algorithm to generate network embeddings of the functional and structural networks for each subject, which are used as inputs for the SVMs. The best models among SVMs and GCNs yielded accuracies >63%. Investigation of network connectivity between persons with PLE and controls identified a region within the right inferior parietal cortex, called the PGi, as a central region for communication among modules (network hub). Class activation mapping revealed that the PLE group had salient regions in the dorsolateral prefrontal, orbital and polar frontal cortices, and the lateral temporal cortex, whereas the controls did not. Our study demonstrates the potential usefulness of deep learning methods to distinguish persons with subclinical psychosis and diagnosable disorders from controls. In the long term, this could help improve accuracy and reliability of clinical diagnoses, provide neurobiological bases for making diagnoses, and initiate early intervention strategies.
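A minimal sketch of the SVM arm on raw connectivity (one of several input variants the study considers): vectorize each subject's symmetric connectivity matrix by its upper triangle and cross-validate a linear SVM. The region count and data are illustrative, and the MultiVERSE embeddings and GCNs are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_subjects, n_regions = 120, 90
conn = rng.random((n_subjects, n_regions, n_regions))
conn = (conn + conn.transpose(0, 2, 1)) / 2     # symmetrize each matrix
labels = rng.integers(0, 2, n_subjects)         # 0 = control, 1 = PLE/spectrum

iu = np.triu_indices(n_regions, k=1)            # unique edges only
X = conn[:, iu[0], iu[1]]                       # subjects x edges
print(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
```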
ABSTRACT
Efficiently solving combinatorial optimization problems (COPs) such as Max-Cut is challenging because the resources required increase exponentially with the problem size. This study proposes a hardware-friendly method for solving the Max-Cut problem by implementing a spiking neural network (SNN)-based Boltzmann machine (BM) in neuromorphic hardware systems. To implement the hardware-oriented version of the spiking Boltzmann machine (sBM), the stochastic dynamics of leaky integrate-and-fire (LIF) neurons with random walk noise are analyzed, and an innovative algorithm based on overlapping time windows is proposed. The simulation results demonstrate the effective convergence and high accuracy of the proposed method for large-scale Max-Cut problems. The proposed method is validated through successful hardware implementation on a 6-transistor/2-resistor (6T2R) neuromorphic chip with phase change memory (PCM) synapses. In addition, as an expansion of the algorithm, several annealing techniques and bias split methods are proposed to improve convergence, along with circuit design ideas for efficient evaluation of sampling convergence using cell arrays and spiking systems. Overall, the results of the proposed methods demonstrate the potential of energy-efficient and hardware-implementable approaches using SNNs to solve COPs. To the best of the author's knowledge, this is the first study to solve the Max-Cut problem using an SNN neuromorphic hardware chip.
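A software-level sketch of the Boltzmann-machine sampling idea behind the sBM, with the LIF/random-walk hardware dynamics abstracted into Metropolis bit flips under an annealed temperature (the annealing schedule and parameters are assumptions):

```python
import numpy as np

def max_cut_bm(W, steps=20000, T0=2.0, Tmin=0.05):
    """Anneal an Ising-form Boltzmann machine; minimizing
    E = sum_{i<j} W_ij s_i s_j maximizes the cut weight."""
    n = W.shape[0]
    s = np.random.choice([-1, 1], n)            # one spin per graph node
    for t in range(steps):
        T = max(Tmin, T0 * (1 - t / steps))     # linear annealing schedule
        i = np.random.randint(n)
        dE = -2 * s[i] * (W[i] @ s)             # energy change of flipping s_i
        if dE <= 0 or np.random.rand() < np.exp(-dE / T):
            s[i] = -s[i]                        # stochastic (Metropolis) flip
    cut = W[np.ix_(s == 1, s == -1)].sum()      # total weight crossing the cut
    return s, cut

W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)          # triangle graph
print(max_cut_bm(W))                            # best cut for a triangle is 2
```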
ABSTRACT
The rapid growth of cloud computing has led to the widespread adoption of heterogeneous virtualized environments, offering scalable and flexible resources to meet diverse user demands. However, the increasing complexity and variability of workload characteristics pose significant challenges in optimizing energy consumption, and many scheduling algorithms have been suggested to address this. This paper therefore proposes a self-attention-based progressive generative adversarial network optimized with the Dwarf Mongoose Algorithm for Energy- and Deadline-Aware Scheduling in heterogeneous virtualized cloud computing (SAPGAN-DMA-DAS-HVCC). A self-attention-based progressive generative adversarial network (SAPGAN) is proposed to schedule activities in a cloud environment with an objective function combining makespan and energy consumption, and the Dwarf Mongoose Algorithm is used to optimize the weight parameters of the SAPGAN. The proposed SAPGAN-DMA-DAS-HVCC approach attains 32.77%, 34.83%, and 35.76% better right-skewed makespan and 31.52%, 33.28%, and 29.14% lower cost when compared with existing models, namely task scheduling in a heterogeneous cloud environment utilizing the mean grey wolf optimization approach, the energy- and performance-efficient task scheduling algorithm for heterogeneous virtualized clouds, and energy- and makespan-aware scheduling of deadline-sensitive tasks in the cloud environment, respectively.
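The SAPGAN and Dwarf Mongoose components are specific to the paper, but the scheduling objective itself is simple to state. A minimal sketch, assuming a weighted sum of makespan and energy for a candidate task-to-VM assignment, with illustrative VM speeds and powers; the generative search over assignments is not reproduced here.

```python
import numpy as np

def objective(assign, task_len, vm_speed, vm_power, alpha=0.5):
    """assign[i] = VM index of task i; task_len in instructions,
    vm_speed in instructions/s, vm_power in watts. Returns a weighted
    sum of makespan (s) and energy (J); alpha is an assumed weight."""
    busy = np.zeros(len(vm_speed))
    for t, vm in enumerate(assign):
        busy[vm] += task_len[t] / vm_speed[vm]  # per-VM busy time
    makespan = busy.max()                       # finish time of slowest VM
    energy = (busy * vm_power).sum()            # energy while busy
    return alpha * makespan + (1 - alpha) * energy

assign = np.array([0, 1, 1, 2, 0])              # candidate schedule
print(objective(assign,
                task_len=np.array([4e9, 2e9, 3e9, 1e9, 5e9]),
                vm_speed=np.array([2e9, 1e9, 3e9]),
                vm_power=np.array([120.0, 80.0, 150.0])))
```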
ABSTRACT
Brushed DC motors and generators (DCMs) are extensively used in various industrial applications, including the automotive industry, where they are critical for electric vehicles (EVs) due to their high torque, power, and efficiency. Despite their advantages, DCMs are prone to premature failure due to sparking between brushes and commutators, which can lead to significant economic losses. This study proposes two approaches for determining the temporal and frequency evolution of Shannon entropy in armature current and stray flux signals. One approach indirectly achieves this through prior analysis using the Short-Time Fourier Transform (STFT), while the other applies the Stockwell Transform (S-Transform) directly. Experimental results show that increased sparking activity generates significant low-frequency harmonics, which are more pronounced compared to mid and high-frequency ranges, leading to a substantial rise in system entropy. This finding enables the introduction of fault-severity indicators or Key Performance Indicators (KPIs) that relate the current condition of commutation quality to a baseline established under healthy conditions. The proposed technique can be used as a predictive maintenance tool to detect and assess sparking phenomena in DCMs, providing early warnings of component failure and performance degradation, thereby enhancing the reliability and availability of these machines.
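A minimal sketch of the STFT route: compute the spectrogram of an armature-current signal, normalize each time frame into a probability distribution, and take its Shannon entropy, which should rise when broadband sparking harmonics appear. The window length and the synthetic "sparking" disturbance are assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 10_000                                 # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
current = np.sin(2 * np.pi * 50 * t)        # clean 50 Hz armature current
# Crude stand-in for sparking: bursty broadband noise in the second half.
current[fs // 2:] += 0.3 * np.random.randn(fs // 2)

f, times, Z = stft(current, fs=fs, nperseg=512)
P = np.abs(Z) ** 2
P /= P.sum(axis=0, keepdims=True)           # probability distribution per frame
entropy = -(P * np.log2(P + 1e-12)).sum(axis=0)   # Shannon entropy, in bits
half = len(times) // 2
print("entropy rise:", entropy[half:].mean() - entropy[:half].mean())
```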
ABSTRACT
It is generally assumed that the brain uses something akin to sparse distributed representations. These representations, however, are high-dimensional and consequently they affect classification performance of traditional Machine Learning models due to the "curse of dimensionality". In tasks for which there is a vast amount of labeled data, Deep Networks seem to solve this issue with many layers and a non-Hebbian backpropagation algorithm. The brain, however, seems to be able to solve the problem with few layers. In this work, we hypothesize that this happens by using Hebbian learning. Actually, the Hebbian-like learning rule of Restricted Boltzmann Machines learns the input patterns asymmetrically. It exclusively learns the correlation between non-zero values and ignores the zeros, which represent the vast majority of the input dimensionality. By ignoring the zeros the "curse of dimensionality" problem can be avoided. To test our hypothesis, we generated several sparse datasets and compared the performance of a Restricted Boltzmann Machine classifier with some Backprop-trained networks. The experiments using these codes confirm our initial intuition as the Restricted Boltzmann Machine shows a good generalization performance, while the Neural Networks trained with the backpropagation algorithm overfit the training data.
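A minimal sketch of the comparison using scikit-learn, with a BernoulliRBM feature layer plus a logistic readout standing in for the RBM classifier (an approximation; the paper's exact setup may differ) against a backprop-trained MLP on synthetic ~5%-sparse binary data:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = (rng.random((1000, 500)) < 0.05).astype(float)   # ~5% active units
# Synthetic label depending only on a subset of the active units.
y = (X[:, :50].sum(axis=1) > X[:, :50].sum(axis=1).mean()).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

rbm_clf = Pipeline([("rbm", BernoulliRBM(n_components=64, n_iter=20)),
                    ("lr", LogisticRegression(max_iter=1000))]).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_tr, y_tr)
print("RBM:", rbm_clf.score(X_te, y_te), "MLP:", mlp.score(X_te, y_te))
```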
ABSTRACT
PURPOSE: The objective of this research is to explore the applicability of machine learning and fully homomorphic encryption (FHE) to private pathological assessment, with a focus on the inference phase of support vector machines (SVM) for the classification of confidential medical data. METHODS: A framework is introduced that utilizes the Cheon-Kim-Kim-Song (CKKS) FHE scheme, facilitating the execution of SVM inference on encrypted datasets. This framework ensures the privacy of patient data and negates the necessity of decryption during the analytical process. Additionally, an efficient feature extraction technique is presented for the transformation of medical imagery into vectorial representations. RESULTS: The system's evaluation across various datasets substantiates its practicality and efficacy. The proposed method delivers classification accuracy and performance on par with traditional, non-encrypted SVM inference, while upholding a 128-bit security level against established cryptographic attacks targeting the CKKS scheme. The secure inference process executes within mere seconds. CONCLUSION: The findings of this study underscore the viability of FHE in enhancing the security and efficiency of bioinformatics analyses, potentially benefiting fields such as cardiology, oncology, and medical imagery. The implications of this research are significant for the future of privacy-preserving machine learning, promoting progress in diagnostic procedures, tailored medical treatments, and clinical investigations.
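A minimal sketch of encrypted linear-SVM scoring under CKKS, written here with the TenSEAL library as an assumed implementation vehicle (the paper's framework may differ). Only the decision function f(x) = ⟨w, x⟩ + b is evaluated on ciphertext; the weights, features, and encryption parameters are placeholders.

```python
import numpy as np
import tenseal as ts   # assumed CKKS implementation, not the paper's

# CKKS context; parameter choices are illustrative.
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

w = np.array([0.4, -1.2, 0.7, 0.1])    # plaintext linear-SVM weights
b = -0.05                              # plaintext bias
x = [0.9, 0.1, 0.3, 0.5]               # confidential patient feature vector

enc_x = ts.ckks_vector(context, x)     # client encrypts the features
enc_score = enc_x.dot(w.tolist()) + b  # server scores on ciphertext only
print("decrypted score:", enc_score.decrypt()[0])   # client decrypts
```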
ABSTRACT
This study presents a comparative analysis of various Machine Learning (ML) techniques for predicting water consumption using a comprehensive dataset from Kocaeli Province, Turkey. Accurate prediction of water consumption is crucial for effective water resource management and planning, especially considering the significant impact of the COVID-19 pandemic on water usage patterns. A total of four ML models, Artificial Neural Networks (ANN), Random Forest (RF), Support Vector Machines (SVM), and Gradient Boosting Machines (GBM), were evaluated. Additionally, optimization techniques such as Particle Swarm Optimization (PSO) and the Second-Order Optimization (SOO) Levenberg-Marquardt (LM) algorithm were employed to enhance the performance of the ML models. These models incorporate historical data from previous months to enhance model accuracy and generalizability, allowing for robust predictions that account for both short-term fluctuations and long-term trends. The performance of each model was assessed using cross-validation. The R2 and correlation values obtained in this study for the best-performing models are highlighted in the results section. For instance, the GBM model achieved an R2 value of 0.881, indicating a strong capability in capturing the underlying patterns in the data. This study is one of the first to conduct a comprehensive analysis of water consumption prediction using machine learning algorithms on a large-scale dataset of 5000 subscribers, including the unique conditions imposed by the COVID-19 pandemic. The results highlight the strengths and limitations of each technique, providing insights into their applicability for water consumption prediction. This study aims to enhance the understanding of ML applications in water management and offers practical recommendations for future research and implementation.
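A minimal sketch of the model comparison with lagged monthly features and leakage-free time-series cross-validation; the synthetic series, lag depth, and default hyperparameters are assumptions, and the PSO/Levenberg-Marquardt tuning is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score, TimeSeriesSplit

rng = np.random.default_rng(6)
monthly = 100 + np.cumsum(rng.normal(0, 3, 120))   # 10 years, synthetic
k = 3                                              # months of history used
X = np.column_stack([monthly[i:len(monthly) - k + i] for i in range(k)])
y = monthly[k:]                                    # next month's consumption

cv = TimeSeriesSplit(n_splits=5)                   # no look-ahead leakage
models = {"ANN": MLPRegressor(max_iter=2000), "RF": RandomForestRegressor(),
          "SVM": SVR(), "GBM": GradientBoostingRegressor()}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=cv, scoring="r2").mean().round(3))
```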
ABSTRACT
Lung cancer is the most common cause of death among all cancer-related diseases. A lung scan examination of the patient is the primary diagnostic technique; this scan analysis pertains to an MRI, CT, or X-ray. The automated classification of lung cancer is difficult due to the multiple steps involved in imaging patients' lungs. In this manuscript, human lung cancer classification and a comprehensive analysis using different machine learning techniques are proposed. Initially, the input images are gathered from a lung cancer dataset. The proposed method processes these images using image-processing techniques, and machine learning techniques are then utilized for categorization. Seven different classifiers are used: the k-nearest neighbors (KNN), support vector machine (SVM), decision tree (DT), multinomial naive Bayes (MNB), stochastic gradient descent (SGD), random forest (RF), and multi-layer perceptron (MLP) classifiers, which classify the lung cancer as malignant or benign. The performance of the proposed approach is examined using performance metrics such as positive predictive value, accuracy, sensitivity, and F-score. Among them, the MLP classifier provides 25.34%, 45.39%, 15.39%, 41.28%, 22.17%, and 12.12% higher accuracy than the KNN, SVM, DT, MNB, SGD, and RF classifiers, respectively. RESEARCH HIGHLIGHTS: Lung cancer is a leading cause of cancer-related death; imaging (MRI, CT, and X-ray) aids diagnosis. Automated classification of lung cancer faces challenges due to complex imaging steps. This study proposes human lung cancer classification using diverse machine learning techniques. Input images from a lung cancer dataset undergo image processing and machine learning. Classifiers such as k-nearest neighbors, support vector machine, decision tree, multinomial naive Bayes, stochastic gradient descent, random forest, and multi-layer perceptron (MLP) classify cancer types; MLP excels in accuracy.
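A minimal sketch of the evaluation step only: the four reported metrics computed for one classifier's predictions, under an assumed malignant = 1 / benign = 0 encoding (the labels below are placeholders, not study data).

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth labels (placeholder)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # one classifier's predictions

print("accuracy   :", accuracy_score(y_true, y_pred))
print("PPV        :", precision_score(y_true, y_pred))  # positive predictive value
print("sensitivity:", recall_score(y_true, y_pred))
print("F-score    :", f1_score(y_true, y_pred))
```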