ABSTRACT
Alignment of each optical element at a synchrotron beamline takes days, even weeks, for each experiment, costing valuable beam time. Evolutionary algorithms (EAs), efficient heuristic search methods based on Darwinian evolution, can be used for multi-objective optimization problems in many application areas. In this study, the flux and spot size of a synchrotron beam are optimized for two different experimental setups that include optical elements such as lenses and mirrors. Calculations were carried out with the X-ray Tracer beamline simulator using swarm intelligence (SI) algorithms, and for comparison the same setups were optimized with EAs. The EAs and SI algorithms used in this study for the two experimental setups are the Genetic Algorithm (GA), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). In one setup the lens position is optimized, while in the other the focal distances of the Kirkpatrick-Baez mirrors are optimized. First, mono-objective evolutionary algorithms were used, and the spot size or flux values were checked separately. After comparing the mono-objective algorithms, the multi-objective evolutionary algorithm NSGA-II was run for both objectives: minimum spot size and maximum flux. Every algorithm configuration was run several times, as in a Monte Carlo simulation, since these algorithms generate random solutions and the simulator output is also stochastic. The results show that the PSO algorithm gives the best values over all setups.
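The optimization loop itself is compact; below is a minimal particle swarm optimization sketch of the kind of search described above. The objective is a stand-in: in the real workflow each evaluation would be a run of the X-ray Tracer simulator returning flux and spot size for a candidate lens position or pair of Kirkpatrick-Baez focal distances. The function `evaluate_setup` and the weighted scalarization of the two goals are assumptions for illustration, not the authors' code.

```python
# Minimal PSO sketch; evaluate_setup is a hypothetical stand-in for a ray-tracing run.
import numpy as np

rng = np.random.default_rng(42)

def evaluate_setup(x):
    """Placeholder for a simulator call: returns (flux, spot_size) for parameters x."""
    flux = -np.sum((x - 0.3) ** 2) + 1.0        # flux peaks near x = 0.3
    spot_size = np.sum((x - 0.5) ** 2) + 0.01   # spot is smallest near x = 0.5
    return flux, spot_size

def objective(x, w_flux=0.5, w_spot=0.5):
    """Scalarize the two goals: maximize flux, minimize spot size."""
    flux, spot = evaluate_setup(x)
    return -w_flux * flux + w_spot * spot

def pso(dim=2, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, objective(gbest)

best_x, best_val = pso()
print("best parameters:", best_x, "objective:", best_val)
```

Because both the search and the simulator are stochastic, such a run would be repeated several times and the results averaged, as described in the abstract.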
ABSTRACT
Data from omics studies have been used for the prediction and classification of various diseases in biomedical and bioinformatics research. In recent years, Machine Learning (ML) algorithms have been used in many different fields related to healthcare systems, especially for disease prediction and classification tasks. Integrating molecular omics data with ML algorithms offers a great opportunity to evaluate clinical data. RNA sequencing (RNA-seq) analysis has emerged as the gold standard for transcriptomics analysis and is now widely used in clinical research. In the present work, RNA-seq data of extracellular vesicles (EV) from healthy individuals and colon cancer patients are analyzed. Our aim is to develop models for the prediction and classification of colon cancer stages. Five canonical ML classifiers and three Deep Learning (DL) classifiers are used to predict colon cancer in an individual from processed RNA-seq data. The data classes are formed on the basis of both colon cancer stage and cancer presence (healthy or cancer). The canonical ML classifiers, which are k-Nearest Neighbor (kNN), Logistic Model Tree (LMT), Random Tree (RT), Random Committee (RC), and Random Forest (RF), are tested with both forms of the data. In addition, to compare performance with the canonical ML models, One-Dimensional Convolutional Neural Network (1-D CNN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM) DL models are employed. Hyper-parameter optimization of the DL models is carried out with the genetic algorithm (GA), a meta-heuristic optimization method. The best accuracy in cancer prediction is obtained with the RC, LMT, and RF canonical ML algorithms at 97.33%, while RT and kNN reach 95.33%. The best accuracy in cancer stage classification is achieved with RF at 97.33%, followed by LMT, RC, kNN, and RT with 96.33%, 96%, 94.66%, and 94%, respectively. According to the experiments with DL algorithms, the best accuracy in cancer prediction is obtained with the 1-D CNN at 97.67%, while BiLSTM and LSTM reach 94.33% and 93.67%, respectively. In classification of the cancer stages, the best accuracy is achieved with BiLSTM at 98%, with the 1-D CNN and LSTM at 97% and 94.33%, respectively. The results reveal that canonical ML and DL models may each outperform the other for different numbers of features.
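A minimal sketch of the canonical-ML comparison described above is given below, assuming a preprocessed RNA-seq feature table (rows: samples, columns: transcript counts) with a "label" column holding either cancer presence or the cancer stage. The file name, column names, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Cross-validated comparison of two of the canonical classifiers on a hypothetical
# preprocessed RNA-seq feature table.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("ev_rnaseq_features.csv")          # hypothetical preprocessed table
X, y = data.drop(columns=["label"]), data["label"]

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
```

The same labels can be swapped between the binary (healthy/cancer) and multi-class (stage) formulations without changing the loop.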
Subject(s)
Colon Neoplasms, RNA, Humans, RNA/genetics, Prognosis, Base Sequence, RNA-Seq, Machine Learning, Colon Neoplasms/diagnosis, Colon Neoplasms/genetics
ABSTRACT
Clouds play a pivotal role in determining the weather, affecting everyone's daily life. The cloud type can indicate whether the weather will be sunny or rainy and even serve as a warning of severe and stormy conditions. Grouped into ten distinct classes, clouds provide valuable information about both typical and exceptional weather patterns, whether short- or long-term in nature. This study aims to anticipate cloud formations and classify them based on their shapes and colors, allowing preemptive measures against potentially hazardous situations. To address this challenge, a solution using image processing and deep learning technologies is proposed to classify cloud images. Several models, including MobileNet V2, Inception V3, EfficientNetV2L, VGG-16, Xception, ConvNeXtSmall, and ResNet-152 V2, were employed for the classification computations. Among them, Xception yielded the best outcome, with an accuracy of 97.66%. Integrating artificial intelligence technologies that can accurately detect and classify cloud types into weather forecasting systems can significantly improve forecast accuracy. This research presents an approach to studying clouds that harnesses the power of image processing and deep learning. The ability to classify clouds based on their visual characteristics opens new avenues for enhanced weather prediction and preparedness, ultimately contributing to the overall accuracy and reliability of weather forecasts.
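The classification setup described above is a standard transfer-learning pipeline; a minimal Keras sketch follows, using a pretrained Xception backbone with a new ten-class head for the cloud types. The directory layout, image size, and training settings are assumptions for illustration, not the study's configuration.

```python
# Transfer learning with a frozen Xception backbone and a ten-class softmax head.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "clouds/train", image_size=(299, 299), batch_size=32)   # hypothetical path
val_ds = tf.keras.utils.image_dataset_from_directory(
    "clouds/val", image_size=(299, 299), batch_size=32)

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                                        # freeze pretrained weights

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # ten cloud classes
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping `Xception` for any of the other listed backbones only changes the base model and its matching `preprocess_input` call.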
ABSTRACT
Deep learning and diagnostic applications in oral and dental health have received significant attention recently. In this review, studies applying deep learning to diagnose anomalies and diseases in dental image material were systematically compiled, and their datasets, methodologies, test processes, explainable artificial intelligence methods, and findings were analyzed. Tests and results in studies involving human-artificial intelligence comparisons are discussed in detail to draw attention to the clinical importance of deep learning. In addition, the review critically evaluates the literature to guide and further develop future studies in this field. An extensive literature search was conducted for the 2019-May 2023 range using the Medline (PubMed) and Google Scholar databases to identify eligible articles, and 101 studies were shortlisted, including applications for diagnosing dental anomalies (n = 22) and diseases (n = 79) using deep learning for classification, object detection, and segmentation tasks. According to the results, the most commonly used task type was classification (n = 51), the most commonly used dental image material was panoramic radiographs (n = 55), and the most frequently used performance metrics were sensitivity/recall/true positive rate (n = 87) and accuracy (n = 69). Dataset sizes ranged from 60 to 12,179 images. Although some studies used individual or at least individualized architectures, most used standardized architectures such as pre-trained CNNs, Faster R-CNN, YOLO, and U-Net. Few studies used an explainable AI method (n = 22) or applied tests comparing human and artificial intelligence (n = 21). Deep learning is promising for better diagnosis and treatment planning in dentistry, based on the high-performance results reported by the studies. Nevertheless, its safety should be demonstrated using a more reproducible and comparable methodology, including tests with information about clinical applicability, by defining a standard set of tests and performance metrics.
ABSTRACT
Human microbiota refers to the trillions of microorganisms that inhabit our bodies and have been found to have a substantial impact on human health and disease. By sampling the microbiota, it is possible to generate massive quantities of data for analysis using Machine Learning algorithms. In this study, we employed several modern Machine Learning techniques to predict Inflammatory Bowel Disease from raw sequence data. The dataset was obtained from NCBI as preprocessed graph representations and converted into a structured form. Seven well-known Machine Learning frameworks, including Random Forest, Support Vector Machines, Extreme Gradient Boosting, Light Gradient Boosting Machine, Gaussian Naïve Bayes, Logistic Regression, and k-Nearest Neighbor, were used, and Grid Search was employed for hyperparameter optimization. The performance of the Machine Learning models was evaluated using metrics such as accuracy, precision, F-score, kappa, and area under the receiver operating characteristic curve. Additionally, McNemar's test was conducted to assess the statistical significance of the experiments. The data were constructed using k-mer lengths of 3, 4, and 5. The Light Gradient Boosting Machine model outperformed the other models with 67.24%, 74.63%, and 76.47% accuracy for k-mer lengths of 3, 4, and 5, respectively, and also demonstrated the best performance on every metric. The study showed promising results for predicting disease from raw sequence data. Finally, McNemar's test found statistically significant differences between the different Machine Learning approaches.
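A minimal sketch of the k-mer pipeline described above is shown below: raw reads are turned into fixed-length k-mer count vectors and classified with LightGBM. The sequences and labels are random placeholders standing in for the real NCBI-derived data, and k = 4 is only one of the three lengths used in the study.

```python
# k-mer count features over the A/C/G/T alphabet, classified with LightGBM.
from itertools import product
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

K = 4
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

def kmer_counts(seq):
    """Count occurrences of every length-K substring of the sequence."""
    vec = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        idx = INDEX.get(seq[i:i + K].upper())
        if idx is not None:                      # skip k-mers with non-ACGT symbols
            vec[idx] += 1
    return vec

# Placeholder data standing in for the real reads: 40 random sequences, binary labels.
rng = np.random.default_rng(0)
sequences = ["".join(rng.choice(list("ACGT"), size=150)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.vstack([kmer_counts(s) for s in sequences])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = LGBMClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

With random placeholder data the reported accuracy is meaningless; the block only illustrates the shape of the feature construction and classifier call.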
ABSTRACT
BACKGROUND: Pedodontists and general practitioners may need support in planning the early orthodontic treatment of patients with mixed dentition, especially in borderline cases. Machine learning algorithms can help make treatment decisions for such cases consistently. OBJECTIVE: This study aimed to use machine learning algorithms to facilitate the decision between serial extraction and expansion of the maxillary and mandibular dental arches in the early treatment of borderline patients with moderate to severe crowding. METHODS: A dataset of 116 patients who had previously been treated by senior orthodontists and divided into two groups according to their treatment modality was examined. Machine Learning algorithms including Multilayer Perceptron, Linear Logistic Regression, k-Nearest Neighbors, Naïve Bayes, and Random Forest were trained on this dataset and evaluated with several metrics: accuracy, precision, recall, and the kappa statistic. RESULTS: The 12 most important features were determined with a feature selection algorithm. While all algorithms achieved over 90% accuracy, Random Forest yielded 95% accuracy with high reliability (kappa = 0.90). CONCLUSION: Machine learning methods for deciding between extraction and non-extraction treatment in the early treatment of patients in the mixed dentition can be particularly useful for pedodontists and general practitioners.
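A minimal sketch of the pipeline described above follows: feature selection down to the 12 most informative measurements, then a Random Forest decision between the two treatment modalities, scored with accuracy and kappa. The file and column names are illustrative assumptions; only the feature count (12) and the algorithm choice come from the abstract.

```python
# Feature selection + Random Forest for the extraction-vs-expansion decision,
# evaluated with cross-validated accuracy and Cohen's kappa.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.pipeline import make_pipeline

data = pd.read_csv("early_treatment_cases.csv")        # hypothetical dataset
X, y = data.drop(columns=["treatment"]), data["treatment"]

model = make_pipeline(SelectKBest(f_classif, k=12),
                      RandomForestClassifier(n_estimators=300, random_state=0))
pred = cross_val_predict(model, X, y, cv=10)
print("accuracy:", accuracy_score(y, pred))
print("kappa   :", cohen_kappa_score(y, pred))
```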
Subject(s)
Mixed Dentition, Machine Learning, Humans, Bayes Theorem, Reproducibility of Results, Algorithms
ABSTRACT
Endoscopic procedures for diagnosing gastrointestinal tract findings depend on specialist experience and are subject to inter-observer variability. This variability can cause minor lesions to be missed and prevent early diagnosis. In this study, deep learning-based hybrid stacking ensemble modeling is proposed for detecting and classifying gastrointestinal system findings, aiming at early diagnosis with high accuracy and sensitivity, reduced specialist workload, and greater objectivity in endoscopic diagnosis. In the first level of the proposed bi-level stacking ensemble approach, predictions are obtained by applying 5-fold cross-validation to three new CNN models. A machine learning classifier selected at the second level is trained on the obtained predictions to produce the final classification result. The performance of the stacking models was compared with that of the deep learning models, and McNemar's statistical test was applied to support the results. According to the experimental results, the stacking ensemble models performed significantly better, with 98.42% ACC and 98.19% MCC on the KvasirV2 dataset and 98.53% ACC and 98.39% MCC on the HyperKvasir dataset. This study is the first to offer a learning-oriented approach that efficiently evaluates CNN features and provides objective and reliable results supported by statistical testing, in comparison with state-of-the-art studies on the subject. The proposed approach improves the performance of deep learning models and outperforms the state-of-the-art studies in the literature.
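The second level of such a bi-level stacking scheme can be summarized in a few lines, as sketched below: out-of-fold class probabilities from the first-level CNNs (obtained via 5-fold cross-validation) become the features of a second-level machine learning classifier. The CNN probability arrays here are random placeholders; in the real pipeline they would come from the three trained CNN models, and the meta-classifier would be the one selected in the study.

```python
# Second-level (meta) classifier trained on concatenated first-level CNN probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_classes = 1000, 8                      # e.g. the eight KvasirV2 classes

# Placeholder out-of-fold probability predictions of three first-level CNNs.
cnn_probs = [rng.dirichlet(np.ones(n_classes), size=n_samples) for _ in range(3)]
y = rng.integers(0, n_classes, size=n_samples)      # placeholder ground-truth labels

# Concatenate the CNN probabilities and train the meta-classifier on them.
meta_features = np.hstack(cnn_probs)
meta_clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(meta_clf, meta_features, y, cv=5, scoring="accuracy")
print("meta-classifier accuracy: %.4f +/- %.4f" % (scores.mean(), scores.std()))
```

Generating the first-level predictions out-of-fold is what keeps the meta-classifier from simply memorizing the training data.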
ABSTRACT
Artificial Intelligence has driven technological progress in recent years, developing rapidly with the growth of academic work on Machine Learning and the high demand for this field in industry. Alongside this steady advance of technology, the pandemic, which has been part of our lives since early 2020, has led social media to occupy a larger place in people's lives. Social media posts have therefore become an excellent data source for sentiment analysis. The main contribution of this study is based on Natural Language Processing, one of the machine learning topics in the literature. Sentiment analysis is a solid example of a machine learning task in human-machine interaction: it is essential to make the computer understand people's emotional states with classifiers. There are a limited number of Turkish-language studies in the literature. Turkish has linguistic features that differ from English, and since it is an agglutinative language, sentiment analysis in Turkish is challenging. This paper performs sentiment analysis with several machine learning algorithms on Turkish-language datasets collected from Twitter. In this research, besides using the public dataset of Beyaz (2021) to obtain more general results, another dataset was created to understand the impact of the pandemic on people and to learn about public opinion. This custom dataset, namely SentimentSet (Balli 2021), consists of Turkish tweets filtered with words such as pandemic and corona and manually labeled as positive, negative, or neutral; SentimentSet could also serve as a benchmark dataset in future research. Results show classification accuracy of up to ~87% with the test data of both datasets and the trained models, and up to ~84% with a small "Sample Test Data" set generated by the same methods as the SentimentSet dataset. These results indicate that sentiment analysis in Turkish depends on language-specific characteristics.
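A minimal sketch of a sentiment-classification setup of the kind described above is shown below: TF-IDF features over Turkish tweets and a linear classifier, evaluated with cross-validation. The CSV layout ("text" and "label" columns with positive/negative/neutral values) and the choice of LinearSVC are assumptions used for illustration; the SentimentSet and Beyaz (2021) datasets have their own published formats and the study compares several algorithms.

```python
# TF-IDF + linear SVM baseline for three-class tweet sentiment classification.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

tweets = pd.read_csv("sentimentset_tweets.csv")      # hypothetical file
X, y = tweets["text"], tweets["label"]

model = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2), min_df=2),
    LinearSVC())
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print("accuracy: %.4f" % scores.mean())
```

For an agglutinative language such as Turkish, a morphological analyzer or stemmer is often added before vectorization to reduce the vocabulary explosion caused by suffixation.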
Subject(s)
Natural Language Processing, Social Media, Artificial Intelligence, Humans, Machine Learning, Public Opinion
ABSTRACT
It is necessary to know the manufacturer and model of a previously implanted shoulder prosthesis before a Total Shoulder Arthroplasty revision, which may need to be performed repeatedly as repair or replacement becomes necessary. In cases where the patient's previous records cannot be found, where the records are not clear, or where the surgery was conducted abroad, the specialist must identify the implant manufacturer and model during preoperative X-ray controls. In this study, an automated, objective auxiliary expert system based on hybrid machine learning models is proposed for classifying the manufacturers of shoulder implants from X-ray images. In the proposed system, ten different hybrid models, each consisting of a combination of deep learning and machine learning algorithms, were created and statistically tested. According to the experimental results, an accuracy of 95.07% was achieved with the DenseNet201 + Logistic Regression model, one of the proposed hybrid machine learning models (p < 0.05). The proposed hybrid machine learning algorithms achieve low cost and high performance compared with other studies in the literature. These results lead the authors to believe that the proposed system could be used in hospitals as an automatic and objective system for assisting orthopedists in the rapid and effective determination of shoulder implant types before revision surgery.
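A common way to build such a hybrid is to use the pretrained CNN as a fixed feature extractor and feed its pooled features to a classical classifier; the sketch below illustrates this for DenseNet201 + Logistic Regression, the best-performing pair reported above. The dataset path, image size, and preprocessing are assumptions for illustration, not the study's exact pipeline.

```python
# DenseNet201 as a frozen feature extractor, Logistic Regression as the classifier.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

ds = tf.keras.utils.image_dataset_from_directory(
    "shoulder_implants", image_size=(224, 224), batch_size=32, shuffle=False)

backbone = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))

features, labels = [], []
for images, y in ds:
    x = tf.keras.applications.densenet.preprocess_input(images)
    features.append(backbone(x, training=False).numpy())   # pooled deep features
    labels.append(y.numpy())
X, y = np.vstack(features), np.concatenate(labels)

clf = LogisticRegression(max_iter=2000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Swapping the backbone or the classical classifier reproduces the other hybrid combinations mentioned in the abstract.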
ABSTRACT
PURPOSE: In this study, the dose rates required for optimal treatment of tumoral tissues with proton therapy in the treatment of defective tumours seen in mandibles have been calculated. We aimed to protect the surrounding soft and hard tissues from unnecessary radiation and to prevent radiation complications. Bragg curves of protons at therapeutic energies were computed for two different mandible plate phantoms (molar and premolar) and compared with similar calculations in the literature; the results were within acceptable deviation values. METHODS: Mandibular tooth plate phantoms were modelled for the molar and premolar areas, and a Monte Carlo simulation was then used to calculate the Bragg curve, lateral straggle/range, and recoil values of protons remaining within the therapeutic energy range. The mass and atomic densities of all the jawbone layers were selected, and the effect of layer type and thickness on the Bragg curve, lateral straggle/range, and recoil was investigated. As protons moved through layers of different density, lateral straggle and increases in range were observed. A range of energies was used for the treatment of tumours at different depths in the mandible phantom. RESULTS: Simulations revealed that as the cortical bone thickness increased, the Bragg peak position decreased by 0.47-3.3%. An increase in the number of layers results in a decrease in the Bragg peak position. Finally, as the proton energy increased, the amplitude of the second peak and its effect on the Bragg peak position decreased. CONCLUSION: These findings should guide the selection of appropriate energy levels in the treatment of tumour structures without damaging surrounding tissues.
ABSTRACT
For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low-cost and physically unobtrusive sensors, namely a camera and an infrared sensor. The system is built around corner features and depth values from the Kinect's infrared sensor. Obstacles are found in camera images using corner detection, while input from the depth sensor provides the corresponding distance; the combination is both efficient and robust. The system not only identifies obstacles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately.
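The core obstacle-detection step described above can be sketched in a few lines: corners found in the camera image are looked up in the aligned depth map, and a simple rule suggests stop, move left, or move right. The synthetic input frames and the 1.0 m distance threshold are assumptions for illustration; the real system reads the Kinect's infrared depth stream and uses its own decision logic.

```python
# Corner detection + depth lookup with a simple left/right/stop navigation rule.
import cv2
import numpy as np

def navigation_hint(gray_image, depth_m, threshold_m=1.0):
    """Return 'stop', 'move left', 'move right' or 'clear' from corners and depth."""
    corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return "clear"
    h, w = gray_image.shape
    near_left = near_right = 0
    for x, y in corners.reshape(-1, 2):
        if depth_m[int(y), int(x)] < threshold_m:     # obstacle closer than threshold
            if x < w / 2:
                near_left += 1
            else:
                near_right += 1
    if near_left and near_right:
        return "stop"
    if near_left:
        return "move right"
    if near_right:
        return "move left"
    return "clear"

# Synthetic frame and depth map standing in for the camera / Kinect input.
frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
depth = np.full((480, 640), 3.0, dtype=np.float32)
print(navigation_hint(frame, depth))
```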
ABSTRACT
When matching images for applications such as mosaicking and homography estimation, the distribution of features across the overlap region affects the accuracy of the result. This paper uses the spatial statistics of these features, measured by Ripley's K-function, to assess whether feature matches are clustered together or spread around the overlap region. A comparison of the performances of a dozen state-of-the-art feature detectors is then carried out using analysis of variance and a large image database. Results show that SFOP introduces significantly less aggregation than the other detectors tested. When the detectors are rank-ordered by this performance measure, the order is broadly similar to those obtained by other means, suggesting that the ordering reflects genuine performance differences. Experiments on stitching images into mosaics confirm that better coverage values yield better quality outputs.
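The measurement underlying this comparison is straightforward to compute; below is a minimal, edge-correction-free estimate of Ripley's K-function for feature locations in the overlap region. Values well above pi*r^2 indicate clustering, values below it indicate a regular spread. The edge correction, the ANOVA comparison framework, and the detectors themselves are omitted here.

```python
# Naive estimate of Ripley's K-function for 2-D point locations (no edge correction).
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, radii, area):
    """points: (n, 2) feature coordinates; radii: radii at which to evaluate K."""
    n = len(points)
    dists = pdist(points)                      # all pairwise distances
    lam = n / area                             # point intensity
    # Each unordered pair counts twice in the classical estimator.
    return np.array([2.0 * np.sum(dists < r) / (lam * n) for r in radii])

# Example: 200 uniformly random points in the unit square (expected K(r) ~ pi * r^2).
rng = np.random.default_rng(1)
pts = rng.random((200, 2))
radii = np.array([0.05, 0.1, 0.2])
print(ripley_k(pts, radii, area=1.0))
print(np.pi * radii ** 2)                      # reference values under complete spatial randomness
```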