1.
Sensors (Basel) ; 22(24)2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36560259

ABSTRACT

Inertial sensor-based human activity recognition (HAR) has a range of healthcare applications, as it can indicate the overall health status or functional capabilities of people with impaired mobility. Typically, artificial intelligence models achieve high recognition accuracy when trained on rich and diverse inertial datasets. However, obtaining such datasets may not be feasible in neurological populations because, for example, impaired mobility prevents patients from performing many daily activities. This study proposes a novel framework to overcome the challenge of creating rich and diverse HAR datasets for neurological populations. The framework produces images from numerical inertial time-series data (initial state) and then artificially augments the number of produced images (enhanced state) to achieve a larger dataset. Convolutional neural network (CNN) architectures are used with this image input; CNNs also support transfer learning, which lets limited datasets benefit from models trained on big data. Two benchmark public datasets were first used to verify the framework. The approach was then tested on limited local datasets of healthy subjects (HS), a Parkinson's disease (PD) population, and stroke survivors (SS) to further investigate validity. The experimental results show that with data augmentation, recognition accuracies increased in HS, SS, and PD by 25.6%, 21.4%, and 5.8%, respectively, compared with no augmentation. Data augmentation also improved the detection of stair ascent and stair descent by 39.1% and 18.0%, respectively, in the limited local datasets. The findings further suggest that CNN architectures with a small number of deep layers can achieve high accuracy. This study has the potential to reduce the burden on participants and researchers when only limited datasets can be accrued.
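The framework's first step, turning a numerical inertial window into an image and then augmenting it, can be sketched roughly as below; the window length, min-max normalization, and the two augmentations (flip, additive noise) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def series_to_image(window, size=16):
    """Min-max normalize a 1-D inertial window and reshape it into a
    square grayscale image with values in [0, 255]."""
    w = np.asarray(window, dtype=float)[: size * size]
    w = (w - w.min()) / (w.max() - w.min() + 1e-12)
    return (w.reshape(size, size) * 255).astype(np.uint8)

def augment(img, rng):
    """Two simple augmentations: horizontal flip and additive Gaussian noise."""
    flipped = img[:, ::-1]
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    return [flipped, noisy]

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256))  # stand-in for accelerometer data
img = series_to_image(signal)
extra = augment(img, rng)
print(img.shape, len(extra))  # (16, 16) 2
```

Each augmented image keeps the original class label, so the enhanced dataset grows without new recordings.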


Subjects
Artificial Intelligence; Neural Networks, Computer; Humans; Machine Learning; Human Activities; Recognition, Psychology
2.
Appl Soft Comput ; 98: 106912, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33230395

ABSTRACT

Coronavirus disease 2019 (COVID-19), which emerged in Wuhan, China in 2019 and spread rapidly all over the world from the beginning of 2020, has infected millions of people and caused many deaths. For this pandemic, which is still in effect, mobilization has started all over the world, and various restrictions and precautions have been taken to prevent the spread of the disease. In addition, infected people must be identified in order to control the infection. However, due to the inadequate number of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, chest computed tomography (CT) has become a popular tool to assist the diagnosis of COVID-19. In this study, two deep learning architectures are proposed that automatically detect positive COVID-19 cases using chest CT images. Lung segmentation (preprocessing) of the CT images given as input to these architectures is performed automatically with Artificial Neural Networks (ANN). Since both architectures contain the AlexNet architecture, the proposed method is a transfer-learning application. The second architecture, however, is a hybrid structure, as it contains a Bidirectional Long Short-Term Memory (BiLSTM) layer that also takes temporal properties into account. While the COVID-19 classification accuracy of the first architecture is 98.14%, this value is 98.70% for the second, hybrid architecture. The results prove that the proposed architectures show outstanding success in infection detection, and the study therefore contributes to previous work in terms of both deep architectural design and high classification success.

3.
J Sci Food Agric ; 100(2): 817-824, 2020 Jan 30.
Article in English | MEDLINE | ID: mdl-31646637

ABSTRACT

BACKGROUND: In this study, artificial intelligence models that identify sunn pest-damaged wheat grains (SDG) and healthy wheat grains (HWG) are presented. Svevo durum wheat cultivated in Konya province, Turkey is used, with 150 HWG and 150 SDG used for classification. With a purpose-built imaging setup, photos of the 300 wheat grains are obtained. Seventeen visual features of each wheat grain are extracted by image-processing techniques and grouped into three visual-parameter categories: dimension, texture and pattern. Artificial bee colony (ABC) optimization-based artificial neural network (ANN) and extreme learning machine (ELM) algorithms are implemented to classify the damaged wheat grains. RESULTS: A correlation-based feature selection (CFS) technique is also utilized to find the most effective of the 17 features. In the classification process using five selected features, the mean absolute error (MAE) and root mean square error (RMSE) for the ABC-based ANN are 0.00174 and 0.00433, respectively. The proposed technique is integrated into graphical user interface (GUI) software to construct an effective detection system for practical use. CONCLUSION: The results indicate that, thanks to the modified ANN algorithm and the implemented CFS algorithm, the detection accuracy for damaged wheat grains is considerably increased. © 2019 Society of Chemical Industry.
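A minimal, filter-style sketch of correlation-driven feature selection in the spirit of CFS (full CFS also penalizes inter-feature correlation, omitted here for brevity); the toy data and the indices of the informative features are fabricated for illustration.

```python
import numpy as np

def select_features(X, y, k=5):
    """Rank features by absolute Pearson correlation with the class label
    and return the indices of the k strongest."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)            # toy labels: HWG = 0, SDG = 1
X = rng.normal(size=(200, 17))         # 17 visual features, as in the study
X[:, 3] += 2.0 * y                     # make feature 3 informative
X[:, 11] -= 1.5 * y                    # make feature 11 informative
selected = select_features(X, y, k=5)
print(selected)
```

The five selected indices would then feed the ABC-optimized ANN in place of all 17 features.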


Subjects
Artificial Intelligence; Seeds/chemistry; Triticum/chemistry; Algorithms; Image Processing, Computer-Assisted; Machine Learning; Neural Networks, Computer; Seeds/classification; Seeds/parasitology; Triticum/classification; Triticum/parasitology; Turkey
4.
J Sci Food Agric ; 100(15): 5577-5585, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32608512

ABSTRACT

BACKGROUND: Wheat, an essential nutrient, is an important food source for human beings because it is used in flour and feed production. Wheat plays an important role in macaroni and bread production, and the types used for these two foods differ: durum and bread wheat, respectively. Reliable separation of these two wheat types is important for product quality. This article differs from the traditional methods available for the identification of bread and durum wheat species. In this study, ultraviolet (UV) and white-light (WL) images of wheat are obtained for both species. The wheat types in these images are classified by various machine learning (ML) methods. The images are then fused by a wavelet-based image fusion method. RESULTS: The highest accuracy calculated using only the UV or only the WL images is 94.8276%, obtained by the Support Vector Machine (SVM) and multilayer perceptron (MLP) algorithms, respectively. For the fused images, however, the accuracy is 98.2759%, with both MLP and SVM achieving the same success. CONCLUSION: Wavelet-based fusion increased the classification accuracy of all three learning algorithms. It is concluded that the identification ability of the resulting fused image is higher than that of the two raw images. © 2020 Society of Chemical Industry.
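Wavelet-based fusion of a UV and a WL image can be sketched with a hand-rolled one-level Haar transform; the averaged approximation and max-magnitude detail rule below are common fusion choices but are assumptions, not necessarily the paper's exact wavelet or rule.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation plus the three
    detail sub-bands (horizontal, vertical, diagonal)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def fuse(img_uv, img_wl):
    """Average the approximations; keep the larger-magnitude detail
    coefficient from either image."""
    bands_uv, bands_wl = haar2d(img_uv), haar2d(img_wl)
    fused = [(bands_uv[0] + bands_wl[0]) / 2.0]
    for cu, cw in zip(bands_uv[1:], bands_wl[1:]):
        fused.append(np.where(np.abs(cu) >= np.abs(cw), cu, cw))
    return ihaar2d(*fused)

rng = np.random.default_rng(2)
uv, wl = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(fuse(uv, wl).shape)  # (8, 8)
```

The fused image, carrying edge detail from whichever modality is sharper at each location, is what the ML classifiers would then be trained on.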


Subjects
Hyperspectral Imaging/methods; Triticum/chemistry; Algorithms; Bread/analysis; Flour/analysis; Machine Learning; Triticum/classification
5.
J Sci Food Agric ; 97(12): 3994-4000, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28194800

ABSTRACT

BACKGROUND: A computer vision-based classifier using an adaptive neuro-fuzzy inference system (ANFIS) is designed for classifying wheat grains as bread or durum. To train and test the classifier, images of 200 wheat grains (100 bread and 100 durum) are taken with a high-resolution camera. Visual feature data related to dimension (four features), color (three) and texture (five) are acquired for each grain using image-processing techniques (IPTs) as inputs to the classifier. In addition to these main data, nine features are reproduced from the main features to ensure a varied population. Four sub-sets containing the categorized and reproduced features are thus constituted to examine their effects on classification. To simplify the classifier, the visual features most effective on the results are investigated. RESULTS: The data sets are compared with each other with regard to classification accuracy. A simplified classifier with seven selected features achieves the best results. In testing, the simplified classifier computes the output with 99.46% accuracy and sorts the wheat grains with 100% accuracy. CONCLUSION: A system that classifies wheat grains with high accuracy is designed. Integrated into industrial applications, the proposed classifier can automatically classify a variety of wheat grains. © 2017 Society of Chemical Industry.


Subjects
Neural Networks, Computer; Photography/methods; Triticum/chemistry; Fuzzy Logic; Triticum/classification
6.
J Sci Food Agric ; 97(8): 2588-2593, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27718230

ABSTRACT

BACKGROUND: A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains as bread or durum is presented. Images of 100 bread and 100 durum wheat grains are taken with a high-resolution camera and subjected to pre-processing. The main visual features (four dimensional, three color and five texture) are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The sets of visual features are used as input parameters of the ANN model, which is built with four different input data subsets to classify the wheat grains as bread or durum. Of the 200 wheat grains, 180 are used to train the ANN and 20 to test its accuracy. RESULTS: The seven input parameters most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result, a mean absolute error (MAE) of 9.8 × 10⁻⁶, is achieved by the simplified ANN model. CONCLUSION: This shows that the proposed computer vision-based classifier can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.


Subjects
Image Processing, Computer-Assisted/methods; Seeds/classification; Triticum/chemistry; Algorithms; Neural Networks, Computer; Seeds/chemistry; Triticum/classification
7.
Diagnostics (Basel) ; 13(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36832284

ABSTRACT

Diabetes, one of the most common diseases worldwide, has become an increasingly serious global threat in recent years. Early detection, however, greatly inhibits the progression of the disease. This study proposes a new deep learning-based method for the early detection of diabetes. Like many other medical datasets, the PIMA dataset used in the study contains only numerical values, so the direct application of popular convolutional neural network (CNN) models to such data is limited. This study converts the numerical data into images based on feature importance in order to exploit the strong representational power of CNN models for early diabetes diagnosis. Three classification strategies are then applied to the resulting diabetes image data. In the first, the diabetes images are fed into the ResNet18 and ResNet50 CNN models. In the second, deep features from the ResNet models are fused and classified with support vector machines (SVM). In the third, selected fusion features are classified by SVM. The results demonstrate the effectiveness of diabetes images for the early diagnosis of diabetes.

8.
Sci Rep ; 13(1): 15899, 2023 09 23.
Article in English | MEDLINE | ID: mdl-37741865

ABSTRACT

Biotic stress imposed by pathogens, including fungal, bacterial, and viral agents, can cause heavy damage leading to yield reduction in maize. The identification of resistance genes therefore paves the way to the development of disease-resistant cultivars and is essential for reliable maize production. Identifying different gene expression patterns can deepen our understanding of maize resistance to disease. This study presents machine learning and deep learning-based applications for classifying genes expressed under normal and biotic stress conditions in maize. The machine learning algorithms used are Naive Bayes (NB), K-Nearest Neighbor (KNN), Ensemble, Support Vector Machine (SVM), and Decision Tree (DT). A Bidirectional Long Short-Term Memory (BiLSTM) network with a Recurrent Neural Network (RNN) architecture is proposed for gene classification with deep learning. To increase the performance of these algorithms, features are selected from the raw gene features using the Relief feature selection algorithm. The findings indicate the efficacy of BiLSTM over the other machine learning algorithms. Several top genes ((S)-beta-macrocarpene synthase, zealexin A1 synthase, polyphenol oxidase I, chloroplastic, pathogenesis-related protein 10, CHY1, chitinase chem 5, barwin, and uncharacterized LOC100273479) were shown to be differentially upregulated under biotic stress conditions.
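The Relief feature-weighting idea mentioned above can be sketched minimally as follows, on fabricated expression data; the random sampling scheme and L1 distance are simplifying assumptions of this sketch, not the study's exact variant.

```python
import numpy as np

def relief(X, y, n_iter=100, rng=None):
    """Basic Relief: for sampled instances, reward features that differ
    from the nearest miss (other class) and penalize features that differ
    from the nearest hit (same class)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                      # exclude the instance itself
        hit = np.where(y == y[i], dist, np.inf).argmin()
        miss = np.where(y != y[i], dist, np.inf).argmin()
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 150)           # toy labels: normal vs. biotic stress
X = rng.normal(size=(150, 10))        # 10 toy gene-expression features
X[:, 0] += 3.0 * y                    # feature 0 separates the classes
weights = relief(X, y, rng=rng)
print(weights.argmax())               # expected: 0, the informative feature
```

Genes with the highest weights would be retained before training the classifiers.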


Subjects
Artificial Intelligence; Zea mays; Zea mays/genetics; Bayes Theorem; Transcriptome; Algorithms
9.
Foods ; 11(19)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36230030

ABSTRACT

Food processing allows for maintaining the quality of perishable products and extending their shelf life. Nondestructive procedures combining image analysis and machine learning can be used to control the quality of processed foods. This study aimed to develop an innovative approach to distinguishing fresh and lacto-fermented red bell pepper samples using selected image textures and machine learning algorithms. Before processing, pieces of fresh pepper and samples subjected to spontaneous lacto-fermentation were imaged using a digital camera. Texture parameters were extracted from images converted to the color channels L, a, b, R, G, B, X, Y, and Z. The selected textures were used to build models for classifying fresh and lacto-fermented samples using algorithms from the groups Lazy, Functions, Trees, Bayes, Meta, and Rules. The highest average classification accuracy, 99%, was reached by models developed on sets of selected textures for the Lab color space using the IBk (instance-based K-nearest learner) algorithm from the Lazy group, for the RGB color space using SMO (sequential minimal optimization) from Functions, and for the XYZ color space and color channel X using IBk (Lazy) and SMO (Functions). The results confirmed differences in the image features of fresh and lacto-fermented red bell pepper and revealed the effectiveness of texture-based models built with machine learning algorithms for evaluating the changes in pepper flesh structure caused by processing.
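Texture parameters of this kind are commonly derived from gray-level co-occurrence matrices; the sketch below computes three standard co-occurrence features for one channel and is an illustrative choice, not necessarily the exact texture set the authors extracted.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Quantize a grayscale channel, build the horizontal co-occurrence
    matrix, and derive three common texture features."""
    q = np.floor(img.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # horizontal pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return {
        "contrast": float((glcm * (i - j) ** 2).sum()),
        "energy": float((glcm ** 2).sum()),
        "homogeneity": float((glcm / (1.0 + np.abs(i - j))).sum()),
    }

flat = np.full((16, 16), 128, dtype=np.uint8)                    # uniform patch
noisy = np.random.default_rng(4).integers(0, 256, (16, 16)).astype(np.uint8)
print(glcm_features(flat)["contrast"], glcm_features(noisy)["contrast"])
```

A uniform patch yields zero contrast while a rough patch does not, which is what lets such features separate fresh from fermented flesh structure.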

10.
Diagnostics (Basel) ; 12(12)2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36552933

ABSTRACT

Lung and colon cancers are among the leading causes of mortality and morbidity. They may develop concurrently in both organs and negatively impact human life, and if cancer is not diagnosed in its early stages there is a great likelihood that it will spread between the two organs. The histopathological detection of such malignancies is one of the most crucial components of effective treatment. Although the process is lengthy and complex, deep learning (DL) techniques have made it feasible to complete it more quickly and accurately, enabling researchers to study many more patients in a short time and at far lower cost. Earlier studies relied on DL models that require substantial computational ability and resources, and most depended on individual DL models to extract high-dimensional features or to perform diagnoses. In this study, however, a framework based on multiple lightweight DL models is proposed for the early detection of lung and colon cancers. The framework utilizes several transformation methods that perform feature reduction and provide a better representation of the data. Histopathology scans are fed into the ShuffleNet, MobileNet, and SqueezeNet models, and the number of deep features acquired from these models is subsequently reduced using principal component analysis (PCA) and the fast Walsh-Hadamard transform (FWHT). A discrete wavelet transform (DWT) is then used to fuse the FWHT-reduced features obtained from the three DL models, while the three models' PCA features are concatenated. Finally, the features diminished by the PCA and FWHT-DWT reduction and fusion processes are fed to four distinct machine learning algorithms, reaching a highest accuracy of 99.6%. The results obtained with the proposed lightweight-DL framework show that it can distinguish lung and colon cancer variants with fewer features and less computational complexity than existing methods. They also show that using transformation methods to reduce features can offer a superior interpretation of the data, thus improving the diagnosis procedure.
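The PCA-based feature-reduction step can be sketched with plain SVD, reducing and concatenating deep features from two toy models; the feature dimensions and component counts below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def pca_reduce(F, k):
    """Project feature vectors (rows of F) onto their top-k principal
    components via SVD-based PCA."""
    Fc = F - F.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:k].T                          # scores on top-k components

rng = np.random.default_rng(5)
deep_a = rng.normal(size=(50, 128))   # toy stand-in for ShuffleNet features
deep_b = rng.normal(size=(50, 256))   # toy stand-in for MobileNet features
reduced = np.hstack([pca_reduce(deep_a, 10), pca_reduce(deep_b, 10)])
print(reduced.shape)  # (50, 20)
```

The concatenated low-dimensional matrix is what would be handed to the downstream ML classifiers.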

11.
Comput Biol Med ; 142: 105244, 2022 03.
Article in English | MEDLINE | ID: mdl-35077936

ABSTRACT

The 2019 coronavirus outbreak, called COVID-19, which originated in Wuhan, has negatively affected the lives of millions of people, and many have died from the infection. To prevent the spread of the disease, which is still in effect, various restriction decisions have been taken all over the world, and the number of COVID-19 tests has been increased to quarantine infected people. However, due to problems in the supply of RT-PCR tests and the ease of obtaining computed tomography and X-ray images, imaging-based methods have become very popular in the diagnosis of COVID-19, and studies using such images to classify COVID-19 have increased. This paper presents a classification method for chest images in the COVID-19 Radiography Database using features extracted by popular Convolutional Neural Network (CNN) models (AlexNet, ResNet18, ResNet50, Inceptionv3, DenseNet201, InceptionResNetv2, MobileNetv2, GoogleNet). The determination of the hyperparameters of the Machine Learning (ML) algorithms by Bayesian optimization and ANN-based image segmentation are the two main contributions of this study. First, lung segmentation is performed automatically from the raw image with Artificial Neural Networks (ANNs). To ensure data diversity, data augmentation is applied to the COVID-19 class, which has fewer samples than the other two classes. These images are then given as input to the CNN models, and the features extracted from each model are passed to four ML algorithms, namely Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), Naive Bayes (NB), and Decision Tree (DT), for classification. To achieve the best classification accuracy, the hyperparameters of each ML algorithm are determined by Bayesian optimization. With these hyperparameters, the highest accuracy of 96.29% is obtained with the DenseNet201 model and the SVM algorithm. The Sensitivity, Precision, Specificity, MCC, and F1-Score for this configuration are 0.9642, 0.9642, 0.9812, 0.9641 and 0.9453, respectively. These results show that ML methods with optimal hyperparameters can produce successful results.
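A toy one-dimensional Bayesian-optimization loop, with a Gaussian-process surrogate and expected-improvement acquisition, can illustrate the hyperparameter-tuning step; the quadratic objective below merely mimics cross-validated accuracy as a function of one hyperparameter and is entirely an assumption of this sketch.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    """GP regression posterior mean and std at query points xq."""
    Kinv = np.linalg.inv(rbf(xs, xs) + noise * np.eye(len(xs)))
    Kq = rbf(xq, xs)
    mu = Kq @ Kinv @ ys
    var = 1.0 - np.einsum('ij,jk,ik->i', Kq, Kinv, Kq)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    z = (mu - best) / sd
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + sd * pdf

def objective(log_c):
    """Stand-in for CV accuracy vs. a log-scaled hyperparameter."""
    return 0.9 - 0.1 * (log_c - 1.0) ** 2   # peak at log_c = 1

xs = np.array([-2.0, 0.0, 3.0])             # initial hyperparameter samples
ys = np.array([objective(x) for x in xs])
grid = np.linspace(-3, 4, 141)
for _ in range(10):                          # BO loop: fit GP, maximize EI
    mu, sd = gp_posterior(xs, ys, grid)
    nxt = grid[np.argmax(expected_improvement(mu, sd, ys.max()))]
    xs = np.append(xs, nxt)
    ys = np.append(ys, objective(nxt))
print(xs[np.argmax(ys)])
```

In the study's setting the objective would instead train each ML algorithm and return its validation accuracy.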


Subjects
COVID-19; Deep Learning; Bayes Theorem; COVID-19 Testing; Humans; Neural Networks, Computer; SARS-CoV-2
12.
Front Public Health ; 10: 855994, 2022.
Article in English | MEDLINE | ID: mdl-35734764

ABSTRACT

Artificial intelligence researchers have conducted various studies to reduce the spread of COVID-19. Unlike other studies, this paper does not address early infection diagnosis but rather the prevention of COVID-19 transmission in social environments. One line of such work concerns social distancing, a measure proven to reduce person-to-person transmission. In this study, the Robot Operating System (ROS) simulates a shopping mall using Gazebo, and customers are monitored by a Turtlebot and an Unmanned Aerial Vehicle (UAV, DJI Tello). Through analysis of frames captured by the Turtlebot, a particular person is identified and followed through the shopping mall. The Turtlebot is a wheeled robot that follows people without contact and is used as a shopping cart; a customer therefore never touches a cart that someone else has handled, which also makes shopping easier. The UAV detects people from above and determines the distance between them, so a warning system can be created by detecting places where social distancing is neglected. A Histogram of Oriented Gradients (HOG) with Support Vector Machine (SVM) detector is applied by the Turtlebot to detect humans, and a Kalman filter is used for human tracking. SegNet is used for semantically detecting people and measuring distance from the UAV. This paper proposes a new robotic system to prevent infection and demonstrates its feasibility.
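The HOG half of the HOG-SVM detector can be sketched as below (unsigned gradients, per-cell histograms, no block normalization, for brevity); a real pipeline would typically use OpenCV's built-in HOG people detector, and the toy patch here is an assumption for illustration.

```python
import numpy as np

def hog(img, cell=8, bins=9):
    """Simplified HOG: per-pixel gradient magnitude and orientation,
    then a magnitude-weighted orientation histogram per cell."""
    img = img.astype(float)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned gradients
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            a = ang[r:r + cell, c:c + cell].ravel()
            m = mag[r:r + cell, c:c + cell].ravel()
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            feats.append(np.bincount(idx, weights=m, minlength=bins))
    return np.concatenate(feats)

patch = np.tile(np.arange(16), (16, 1)) * 16.0    # purely horizontal gradient
desc = hog(patch)
print(desc.shape)  # (36,)
```

The resulting descriptor vector is what a linear SVM would score to decide whether the window contains a person.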


Subjects
COVID-19; Robotics; Artificial Intelligence; COVID-19/prevention & control; Female; Humans; Male
13.
Front Public Health ; 10: 984099, 2022.
Article in English | MEDLINE | ID: mdl-36187621

ABSTRACT

Workplace accidents can cause catastrophic losses to a company, including human injuries and fatalities. Occupational injury reports may provide a detailed description of how an incident occurred, so the narrative is useful information for extracting, classifying and analyzing occupational injury. This study provides a systematic review of text mining and Natural Language Processing (NLP) applications for extracting text narratives from occupational injury reports. A systematic search was conducted through multiple databases, including Scopus, PubMed, and Science Direct. Only original studies that examined the application of machine and deep learning-based NLP models for occupational injury analysis were included. A total of 27 of 210 articles were reviewed, following the Preferred Reporting Items for Systematic Reviews (PRISMA). The review highlighted that various machine and deep learning-based NLP models, such as K-means, Naïve Bayes, Support Vector Machine, Decision Tree, and K-Nearest Neighbors, have been applied to predict occupational injury, and deep neural networks have also been used to classify accident types and identify causal factors. However, deep learning models remain underused for extracting occupational injury reports, as these techniques are relatively recent and are only beginning to make inroads into decision-making in occupational safety and health as a whole. Nevertheless, there is promising potential to explore the application of NLP and text-based analytics in occupational injury research. The improvement of data-balancing techniques and the development of an automated decision-support system for occupational injury using deep learning-based NLP models are therefore the recommendations for future research.
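One of the models the review highlights, Naive Bayes over injury narratives, can be sketched end-to-end with a bag-of-words representation; the four narratives and two accident classes below are fabricated for illustration.

```python
import numpy as np
from collections import Counter

def train_nb(docs, labels, alpha=1.0):
    """Multinomial Naive Bayes over a bag-of-words, Laplace-smoothed."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    classes = sorted(set(labels))
    counts = np.zeros((len(classes), len(vocab)))
    prior = np.zeros(len(classes))
    for d, y in zip(docs, labels):
        c = classes.index(y)
        prior[c] += 1
        for w, n in Counter(d.split()).items():
            counts[c, idx[w]] += n
    log_prior = np.log(prior / prior.sum())
    log_lik = np.log((counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True))
    return idx, classes, log_prior, log_lik

def predict(doc, idx, classes, log_prior, log_lik):
    """Score each class; unknown words are simply skipped."""
    scores = log_prior.copy()
    for w in doc.split():
        if w in idx:
            scores += log_lik[:, idx[w]]
    return classes[int(np.argmax(scores))]

docs = ["worker fell from ladder", "hand caught in press machine",
        "slipped on wet floor and fell", "finger crushed by machine press"]
labels = ["fall", "machinery", "fall", "machinery"]
idx, classes, lp, ll = train_nb(docs, labels)
print(predict("employee fell on stairs", idx, classes, lp, ll))  # fall
```

Real systems would add the data-balancing and richer text features (e.g. TF-IDF) that the review recommends.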


Subjects
Occupational Injuries; Bayes Theorem; Data Mining/methods; Humans; Machine Learning; Natural Language Processing