Results 1 - 20 of 24
1.
Neural Netw ; 169: 685-697, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37972512

ABSTRACT

With the growing exploration of marine resources, underwater image enhancement has gained significant attention. Recent advances in convolutional neural networks (CNN) have greatly impacted underwater image enhancement techniques. However, conventional CNN-based methods typically employ a single network structure, which may compromise robustness in challenging conditions. Additionally, commonly used UNet networks generally force fusion from low to high resolution for each layer, leading to inaccurate contextual information encoding. To address these issues, we propose a novel network called Cascaded Network with Multi-level Sub-networks (CNMS), which encompasses the following key components: (a) a cascade mechanism based on local modules and global networks for extracting feature representations with richer semantics and enhanced spatial precision, (b) information exchange between different resolution streams, and (c) a triple attention module for extracting attention-based features. CNMS selectively cascades multiple sub-networks through triple attention modules to extract distinct features from underwater images, bolstering the network's robustness and improving generalization capabilities. Within the sub-network, we introduce a Multi-level Sub-network (MSN) that spans multiple resolution streams, combining contextual information from various scales while preserving the original underwater images' high-resolution spatial details. Comprehensive experiments on multiple underwater datasets demonstrate that CNMS outperforms state-of-the-art methods in image enhancement tasks.
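For illustration only, the following minimal PyTorch sketch shows the general idea of parallel resolution streams that exchange information before fusion, as described above; it is not the authors' CNMS/MSN code, and the module name, channel count, and layer choices are assumptions.

```python
# Illustrative two-stream block: a full-resolution stream and a half-resolution
# stream exchange information before a 1x1 fusion (not the authors' architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamBlock(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.high = nn.Conv2d(channels, channels, 3, padding=1)   # full-resolution stream
        self.low = nn.Conv2d(channels, channels, 3, padding=1)    # half-resolution stream
        self.fuse = nn.Conv2d(2 * channels, channels, 1)          # 1x1 fusion after exchange

    def forward(self, x):
        h = F.relu(self.high(x))
        l = F.relu(self.low(F.avg_pool2d(x, 2)))
        # exchange: upsample the low-resolution features and concatenate them with
        # the high-resolution stream, combining context with spatial detail
        l_up = F.interpolate(l, size=h.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([h, l_up], dim=1))

x = torch.randn(1, 32, 128, 128)           # dummy 32-channel feature map
print(TwoStreamBlock()(x).shape)           # torch.Size([1, 32, 128, 128])
```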


Subjects
Psychological Generalization , Image Enhancement , Neural Networks (Computer) , Semantics , Computer-Assisted Image Processing
3.
Front Plant Sci ; 14: 1158933, 2023.
Article in English | MEDLINE | ID: mdl-37025141

ABSTRACT

Plants play a crucial role in supplying food globally. Various environmental factors lead to plant diseases, which result in significant production losses. However, manual detection of plant diseases is a time-consuming and error-prone process and can be an unreliable way of identifying and preventing their spread. Adopting advanced technologies such as Machine Learning (ML) and Deep Learning (DL) can help overcome these challenges by enabling early identification of plant diseases. In this paper, the recent advancements in the use of ML and DL techniques for the identification of plant diseases are explored. The research focuses on publications between 2015 and 2022, and the experiments discussed in this study demonstrate the effectiveness of these techniques in improving the accuracy and efficiency of plant disease detection. This study also addresses the challenges and limitations of using ML and DL for plant disease identification, such as issues with data availability, imaging quality, and the differentiation between healthy and diseased plants. The research provides valuable insights for plant disease detection researchers, practitioners, and industry professionals by offering solutions to these challenges, providing a comprehensive understanding of the current state of research in this field, highlighting the benefits and limitations of these methods, and proposing potential solutions to overcome the obstacles to their implementation.

4.
Micromachines (Basel) ; 14(2)2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36838056

ABSTRACT

We propose a novel approach based on a complementary split-ring resonator metamaterial in a two-port MIMO antenna, yielding high gain and multiband performance in a miniature size. We have also analyzed a circular disk metasurface design. The designs additionally use a defected ground structure, obtained by reducing the width of the ground plane to 8 mm and etching away all other parts of the ground plane. The electrical length of the proposed design is 0.5λ × 0.35λ × 0.02λ. The design results are also investigated for different complementary split-ring resonator ring sizes: the inner and outer ring diameters are varied to find the optimized solution for enhanced output performance parameters. Good isolation is achieved for both bands, and the gain and directivity results are presented. The results are compared in terms of isolation, gain, structure size, and the number of ports. The compact, multiband, high-gain, and high-isolation design can be applied to WiMAX, WLAN, and satellite communication applications.

5.
Diagnostics (Basel) ; 13(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36766680

ABSTRACT

This study uses machine learning to perform the hearing test (audiometry) process autonomously with EEG signals. Sounds with different amplitudes and wavelengths, as used in standard hearing tests, are presented to the test subject in random order through an interface designed with the MATLAB GUI. The subject indicated when a sound played through the headphones was heard and took no action when it was not. Simultaneously, EEG (electroencephalography) signals were recorded, capturing the waves produced in the brain by the sounds the subject did and did not hear. The EEG data generated at the end of the test were pre-processed, and then feature extraction was performed. The heard/unheard information received from the MATLAB interface was combined with the EEG signals to determine which sounds the subject heard and which they did not. During the waiting periods between sounds, no sound was presented, so these intervals were marked as not heard in the EEG signals. Brain signals were measured with a Brain Products Vamp 16 EEG device, and the raw EEG data were created using the Brain Vision Recorder program and MATLAB. After the dataset was built from the signals produced by heard and unheard sounds, machine learning was carried out in the Python programming language: the raw data created in MATLAB were imported into Python, pre-processing was completed, and classification algorithms were applied. Each raw EEG record was vectorized with the Count Vectorizer method, and the importance of each EEG signal within the full dataset was calculated using TF-IDF (Term Frequency-Inverse Document Frequency). The resulting dataset was classified according to whether the subject could hear the sound. Naïve Bayes, Light Gradient Boosting Machine (LGBM), support vector machine (SVM), decision tree, k-NN, logistic regression, and random forest classifiers were applied; these algorithms were chosen because they have shown superior performance in ML, have succeeded in analyzing EEG signals, and can be used online. In the analysis of the EEG signals, LGBM proved to be the best method, achieving the most successful predictions with a success rate of 84%. This study shows that hearing tests can also be performed using brain waves detected by an EEG device. Although a completely independent hearing test can be created, an audiologist or doctor may still be needed to evaluate the results.
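For readers unfamiliar with applying text-style vectorization to signals, the following sketch (not the authors' code) shows a Count Vectorizer + TF-IDF + LGBM pipeline in Python; it assumes the EEG epochs have already been discretized into symbolic token strings, and the tokens, labels, and parameters are purely illustrative.

```python
# Minimal Count Vectorizer -> TF-IDF -> LightGBM pipeline on toy "EEG token" strings.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from lightgbm import LGBMClassifier

epochs = ["a3 b1 b1 c7 a3", "c7 c7 d2 a3 b1", "b1 b1 a3 d2 d2", "d2 c7 c7 b1 a3"]
labels = [1, 0, 1, 0]                      # 1 = heard, 0 = not heard (dummy labels)

clf = Pipeline([
    ("counts", CountVectorizer()),         # token counts per epoch
    ("tfidf", TfidfTransformer()),         # TF-IDF weighting of tokens
    ("lgbm", LGBMClassifier(n_estimators=50, min_child_samples=1)),
])

X_train, X_test, y_train, y_test = train_test_split(epochs, labels, test_size=0.5, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(X_test))
```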

6.
Diagnostics (Basel) ; 13(2)2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36673072

ABSTRACT

Melanoma is known worldwide as a malignant tumor and the fastest-growing type of skin cancer. It is a life-threatening disease with a high mortality rate, and automatic melanoma detection improves early detection and the survival rate. To this end, we present a multi-task learning approach for melanoma recognition in dermoscopy images. First, an effective pre-processing approach based on max pooling, contrast, and shape filters is used to eliminate hair details and to perform image enhancement. Next, the lesion region is segmented from the enhanced images with a VGGNet-based FCN architecture, and the detected lesions are cropped. The cropped images are then converted to the input size of the classifier model using the very deep super-resolution neural network approach, minimizing the loss of image resolution. Finally, a deep learning network based on pre-trained convolutional neural networks is developed for melanoma classification. We used the International Skin Imaging Collaboration dataset, a publicly available dermoscopic skin lesion dataset, in the experimental studies. The accuracy, specificity, precision, and sensitivity obtained for segmentation of the lesion region were 96.99%, 92.53%, 97.65%, and 98.41%, respectively, while the corresponding classification measures were 97.73%, 99.83%, 99.83%, and 95.67%.

7.
Micromachines (Basel) ; 13(12)2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36557431

ABSTRACT

The high-yield optical wireless network (OWN) is a promising framework for strengthening 5G and 6G mobility. In addition, highly directional, narrow-bandwidth laser beams are particularly attractive for high-rate data transmission over standard optical fibers. Therefore, in this paper, the performance of a vertical cavity surface emitting laser (VCSEL) is evaluated using a machine learning (ML) technique, aiming to purify the optical beam and enable the OWN to support high-speed, multi-user data transmission. The ML technique is applied to a designed VCSEL array to optimize paths for DC injection, AC signal modulation, and multi-user transmission. The mathematical model of the VCSEL narrow beam, the OWN, and the energy loss through nonlinear interference in an optical wireless network is studied. The mathematical model is then confirmed with a simulation model using the bit error rate (BER), laser power, current, and fiber-length performance metrics. The results indicate that the presented methodology produces a narrower VCSEL beam, mitigates nonlinear interference in the OWN, and increases energy efficiency.

8.
Micromachines (Basel) ; 13(12)2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36557460

ABSTRACT

In this manuscript, we propose a split-ring resonator loaded multiple-input multiple-output (MIMO) antenna design for the frequency range of 1 to 25 GHz. The proposed antenna is numerically investigated and fabricated to analyze the different antenna parameters. We provide statistics on a wide range of antenna parameters for five different designs, including a simple circular patch antenna, a single-split-ring antenna, and a double-split-ring antenna. Reflectance, gain, directivity, efficiency, peak gain, and electric field distribution are analyzed for all proposed antennas. The maximum achievable bandwidth is 5.28 GHz, achieved by the double-split-ring resonator structure with a return loss of -20.84 dB. The radiation patterns of all the antennas under different port excitation conditions are presented to characterize the antenna radiation behavior. We found that the split-ring resonators shape the radiation into beams in different directions, with maximum and minimum half-power beam widths of 75° and 2°, respectively, among the different antenna designs. The split-ring resonator geometries in the patch antenna convert wide-beam radiation patterns into several narrow-beam radiation patterns, and each antenna's bandwidth, gain, and return loss performance differs significantly from the others. Overall, the proposed antennas may be applied to a wide range of communication applications, including Wi-Fi, WiMAX, and 5G.

9.
Inf Sci (N Y) ; 592: 389-401, 2022 May.
Article in English | MEDLINE | ID: mdl-36532848

ABSTRACT

Chest X-ray (CXR) imaging is a low-cost, easy-to-use imaging alternative that can be used to diagnose/screen pulmonary abnormalities due to infectious diseases such as COVID-19, pneumonia, and tuberculosis (TB). Not limited to the binary decisions (with respect to healthy cases) that are reported in the state-of-the-art literature, we also consider non-healthy CXR screening using a lightweight deep neural network (DNN) with a reduced number of epochs and parameters. On three diverse, publicly accessible, and fully categorized datasets, for non-healthy versus healthy CXR screening, the proposed DNN produced the following accuracies: 99.87% on COVID-19 versus healthy, 99.55% on pneumonia versus healthy, and 99.76% on TB versus healthy. When considering non-healthy CXR screening, we obtained the following accuracies: 98.89% on COVID-19 versus pneumonia, 98.99% on COVID-19 versus TB, and 100% on pneumonia versus TB. To analyze more precisely how well the proposed DNN works, we considered well-known DNNs such as ResNet50, ResNet152V2, MobileNetV2, and InceptionV3. Our results are comparable with the current state of the art, and as the proposed network is lightweight, it could potentially be used for mass screening in resource-constrained regions.
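A minimal Keras sketch of a lightweight CNN for binary healthy-versus-non-healthy CXR screening is given below; the input size, layer widths, and depth are illustrative assumptions, not the proposed DNN.

```python
# Small binary-classification CNN with few parameters (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_light_cnn(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),        # keeps the parameter count small
        layers.Dense(1, activation="sigmoid"),  # healthy vs. non-healthy output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

build_light_cnn().summary()
```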

10.
Front Plant Sci ; 13: 1064854, 2022.
Article in English | MEDLINE | ID: mdl-36507379

ABSTRACT

Bacteriosis is one of the most prevalent and deadly infections affecting peach crops globally. Timely detection of Bacteriosis is essential for lowering pesticide use and preventing crop loss, yet distinguishing and detecting Bacteriosis or shot hole in a peach leaf takes time and effort. In this paper, we propose a novel LightWeight Convolutional Neural Network (LWNet) model based on the Visual Geometry Group (VGG-19) architecture for detecting and classifying images into Bacteriosis and healthy images. First, a dataset is developed consisting of 10000 images: 4500 Bacteriosis and 5500 healthy images. Second, the images are preprocessed with different steps to prepare them for the identification of Bacteriosis and healthy leaves. These preprocessing steps include image resizing, noise removal, image enhancement, background removal, and augmentation techniques, which enhance the performance of leaf classification and help to achieve good results. Finally, the proposed LWNet model is trained for leaf classification. The proposed model is compared with four different CNN models: LeNet, AlexNet, VGG-16, and the plain VGG-19 model. The proposed model obtains an accuracy of 99%, which is higher than LeNet, AlexNet, VGG-16, and the plain VGG-19 model. These results indicate that the proposed model is more effective for the detection of Bacteriosis in peach leaf images than the existing models.
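The preprocessing steps listed above (resizing, noise removal, enhancement) could look roughly like the following OpenCV sketch; the file path, target size, and filter parameters are assumptions, and background removal and augmentation are omitted.

```python
# Illustrative leaf-image preprocessing: resize, denoise, and enhance contrast.
import cv2

def preprocess_leaf(path: str, size=(224, 224)):
    img = cv2.imread(path)                              # BGR image from disk
    img = cv2.resize(img, size)                         # image resizing
    img = cv2.medianBlur(img, 3)                        # simple noise removal
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)          # enhance luminance channel only
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0).apply(l)         # contrast enhancement (CLAHE)
    return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

# processed = preprocess_leaf("peach_leaf.jpg")          # hypothetical file name
```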

11.
Sensors (Basel) ; 22(19)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36236264

ABSTRACT

There are many inherent issues in managing cloud infrastructure and the cloud platform. The cloud platform manages cloud software, the legal issues involved in making contracts, cloud software services, and legal contract-based segmentation. In this paper, we tackle these issues directly with feasible solutions. For these constraints, the Averaged One-Dependence Estimators (AODE) classifier and the SELECT Applicable Only to Parallel Server (SELECT-APSL ASA) method are proposed to separate location-related data; ASA is made up of the AODE classifier and SELECT Applicable Only to Parallel Server. The AODE classifier separates smart city data based on a hybrid data obfuscation technique, in which 50% of the raw data is managed and 50% of the hospital data is masked using the proposed transmission. The analysis of energy consumption before the cryptosystem shows total packet delivery of about 71.66% compared with existing algorithms, while the analysis after the cryptosystem shows 47.34% consumption compared to existing state-of-the-art algorithms. The average energy consumption before data obfuscation decreased by 2.47%, and the average energy consumption after data obfuscation was reduced by 9.90%. The makespan time before data obfuscation decreased by 33.71%, and, compared to existing state-of-the-art algorithms, the makespan time after data obfuscation decreased by 1.3%. These results show the strength of our methodology.


Subjects
Algorithms , Cloud Computing , Software
12.
Neural Netw ; 156: 193-204, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36274526

ABSTRACT

Facial reenactment aims to animate a source face image into a new pose using a driving facial picture. Existing strategies are either designed with one or more specific identities in mind or face identity-preservation challenges in one- or few-shot settings. Multiple pictures of the same subject have been used in previous research to model facial reenactment. In contrast, this paper presents a novel model of one-shot, many-to-many facial reenactment that uses only one image of a face. The proposed model produces a face that represents the target representation of the same source identity. The technique can simulate motion from a single image by decomposing an object into two layers. Using this bi-layer design with a Convolutional Neural Network (CNN), we name our model Bi-Layer Graph Convolutional Layers (BGCLN); it is used to create the optical flow representation of the latent vector, which yields the precise structure and shape of the optical flow. Comprehensive studies suggest that our technique can produce high-quality results and outperform most recent techniques in both qualitative and quantitative comparisons. Our proposed system can perform facial reenactment at 15 fps, which is approximately real time. Our code is publicly available at https://github.com/usaeed786/BGCLN.


Subjects
Neural Networks (Computer)
13.
Comput Intell Neurosci ; 2022: 7935346, 2022.
Article in English | MEDLINE | ID: mdl-36059415

ABSTRACT

Recent improvements in technology have had a significant impact on a wide range of image processing applications, including medical imaging. Classification, detection, and segmentation are all important aspects of medical imaging technology, and there is an enormous need for the segmentation of diagnostic images, which can be applied to a wide variety of medical research applications. It is therefore important to develop an effective segmentation technique based on deep learning algorithms for optimal identification of regions of interest and rapid segmentation. To cover this gap, a pipeline for image segmentation using a traditional Convolutional Neural Network (CNN) as well as Swarm Intelligence (SI) for optimal identification of the desired area is proposed. Six modules are examined and evaluated: Fuzzy C-means (FCM), K-means, FCM combined with Particle Swarm Optimization (PSO), K-means combined with PSO, FCM combined with CNN, and K-means combined with CNN. Experiments are carried out on various types of images: Magnetic Resonance Imaging (MRI) for brain data analysis, dermoscopic images for skin, microscopic images for blood leukemia, and computed tomography (CT) scans for the lungs. After combining all of the datasets, we constructed five subsets of data containing 50, 100, 500, 1000, and 2000 images, respectively. Each model was executed and trained on the selected subsets of the datasets. From the experimental analysis, it is observed that K-means with CNN performs better than the others, achieving 96.45% segmentation accuracy with an average time of 9.09 seconds.
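As a minimal illustration of one of the six modules, the sketch below performs plain intensity-based K-means segmentation with scikit-learn; the synthetic image and cluster count are assumptions, and the PSO/CNN refinements described in the abstract are not included.

```python
# Intensity-based K-means segmentation of a single image (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    pixels = image.reshape(-1, 1).astype(np.float32)     # one intensity feature per pixel
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(image.shape)                   # label map = segmentation mask

demo = np.random.randint(0, 256, (64, 64))               # stand-in for an MRI/CT slice
mask = kmeans_segment(demo)
print(np.unique(mask))                                   # [0 1 2]
```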


Subjects
Computer-Assisted Image Processing , Neural Networks (Computer) , Algorithms , Computer-Assisted Image Processing/methods , Intelligence , Magnetic Resonance Imaging/methods
14.
Comput Math Methods Med ; 2022: 8717263, 2022.
Article in English | MEDLINE | ID: mdl-35924113

ABSTRACT

Speech is a form of biometric that combines both physiological and behavioral features and is beneficial for remote-access transactions over telecommunication networks; at present, this task is among the most challenging for researchers. People's mental status in the form of emotions is quite complex, and its complexity depends on internal behavior. Emotion and facial behavior are essential characteristics through which human internal thought can be predicted, and speech is one of the mechanisms through which various internal reflections can be extracted, by focusing on the vocal tract, the flow of voice, voice frequency, and so on. Emotions in human voice specimens of different ages can be predicted through a deep learning approach using feature extraction, which can help build a strong intelligent healthcare system and provide data to doctors in medical institutes and hospitals to understand the physiological behavior of humans. Healthcare is a data-intensive clinical area in which many details are accessed, generated, and circulated periodically. Healthcare systems with existing approaches such as tracing and tracking continually expose their constraints in controlling patient data privacy and security, since the majority of the work in a healthcare system involves exchanging or using highly confidential and personal data. A key issue is the modeling of approaches that guarantee the value of health-related data while protecting privacy and observing high behavioral standards. This will encourage large-scale adoption, especially as healthcare information collection is expected to continue well beyond the current pandemic. This research therefore seeks a privacy-preserving, secure, and sustainable system based on Blockchain technology. Sharing healthcare data among institutions is a very challenging task: storing records in centralized form is a prime target for cyber attackers and creates a single consolidated view of patients' records, which causes problems when sharing information over a network. Accordingly, this paper presents a Blockchain-based approach for sharing patient data in a secure manner. Finally, the proposed model was analyzed using different feature extraction approaches to determine which performs better, in terms of error rate and accuracy, for different variations of voice specimens. The proposed method increases the rate of correct evidence collection, minimizes loss and authentication issues, and, by using feature extraction based on text validation, increases the sustainability of the healthcare system.


Subjects
Blockchain , Computer Communication Networks , Computer Security , Confidentiality , Delivery of Health Care , Humans , Privacy
15.
Comput Biol Med ; 148: 105810, 2022 09.
Article in English | MEDLINE | ID: mdl-35868049

ABSTRACT

This paper focuses on Coronavirus Disease 2019 (COVID-19) X-ray image segmentation technology. We present a new multilevel image segmentation method based on a swarm intelligence algorithm (SIA) to enhance the segmentation of COVID-19 X-rays. The paper first introduces an improved ant colony optimization algorithm, XMACO, built on directional crossover (DX) and directional mutation (DM) strategies: the DX strategy improves the quality of the population search, which enhances the convergence speed of the algorithm, and the DM strategy increases the diversity of the population so that it can jump out of local optima (LO). Furthermore, we design an image segmentation model (MIS-XMACO) by incorporating two-dimensional (2D) histograms, 2D Kapur's entropy, and a nonlocal mean strategy, and we apply this model to COVID-19 X-ray image segmentation. Benchmark function experiments based on the IEEE CEC2014 and IEEE CEC2017 function sets demonstrate that XMACO has a faster convergence speed and higher convergence accuracy than competing models and can avoid falling into LO. Other SIAs and image segmentation models were used to ensure the validity of the experiments. Analysis of the experimental results shows that the proposed MIS-XMACO model produces more stable and superior segmentation results than other models at different threshold levels.
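For orientation, the sketch below implements classic one-dimensional Kapur's-entropy thresholding with an exhaustive search standing in for the XMACO optimizer; it is a simplified baseline, not the 2D-histogram, nonlocal-mean MIS-XMACO model.

```python
# Bilevel Kapur's-entropy thresholding: pick the threshold that maximizes the
# summed entropies of the background and foreground histogram classes.
import numpy as np

def kapur_threshold(image: np.ndarray) -> int:
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        a = p[:t][p[:t] > 0] / pa
        b = p[t:][p[t:] > 0] / pb
        h = -(a * np.log(a)).sum() - (b * np.log(b)).sum()   # entropy of both classes
        if h > best_h:
            best_t, best_h = t, h
    return best_t

demo = np.random.randint(0, 256, (128, 128))                  # stand-in for an X-ray image
print(kapur_threshold(demo))
```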


Subjects
COVID-19 , Algorithms , Entropy , Humans , Mutation , X-Rays
16.
Comput Math Methods Med ; 2022: 2733965, 2022.
Article in English | MEDLINE | ID: mdl-35693266

ABSTRACT

Lung cancer has emerged as a major cause of death across all demographics worldwide, largely driven by the proliferation of smoking habits. However, early detection and diagnosis of lung cancer through technological improvements can save the lives of millions of individuals affected globally. Computerized tomography (CT) scan imaging is a proven and popular technique in the medical field, but diagnosing cancer from CT scans alone is a difficult task even for doctors and experts. This is why computer-assisted diagnosis has revolutionized disease diagnosis, especially cancer detection. This study looks at 20 CT scan images of lungs. In a preprocessing step, we compared median, Gaussian, 2D convolution, and mean filters on the medical CT images and established that the median filter is the most appropriate. Next, we improved image contrast by applying adaptive histogram equalization. Finally, the preprocessed, higher-quality images were subjected to two clustering algorithms, fuzzy c-means and k-means, and the performance of these algorithms was compared; fuzzy c-means showed the highest accuracy of 98%. Features were extracted using the Gray Level Co-occurrence Matrix (GLCM). For classification, a comparison of three algorithms (bagging, gradient boosting, and an ensemble of SVM, MLPNN, DT, logistic regression, and KNN) was performed. Gradient boosting performed best among the three, with an accuracy of 90.9%.
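A rough sketch of the preprocessing and texture-feature steps named above (median filtering, adaptive histogram equalization, GLCM features) is shown below; the synthetic image, GLCM offsets, and selected properties are assumptions, and it requires OpenCV and scikit-image (>= 0.19 for the graycomatrix spelling).

```python
# Median filter + CLAHE preprocessing, then GLCM texture features for a classifier.
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops

ct = np.random.randint(0, 256, (128, 128), dtype=np.uint8)    # stand-in CT slice
ct = cv2.medianBlur(ct, 3)                                     # median filter (chosen best above)
ct = cv2.createCLAHE(clipLimit=2.0).apply(ct)                  # adaptive histogram equalization

glcm = graycomatrix(ct, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop)[0, 0] for prop in ("contrast", "homogeneity", "energy")}
print(features)                                                # inputs to the classifiers
```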


Subjects
Early Detection of Cancer , Lung Neoplasms , Algorithms , Humans , Lung Neoplasms/diagnostic imaging , Machine Learning , X-Ray Computed Tomography/methods
17.
Comput Math Methods Med ; 2022: 1556025, 2022.
Article in English | MEDLINE | ID: mdl-35529266

ABSTRACT

Due to the proliferation of COVID-19, the world is in a terrible condition and human life is at risk. The SARS-CoV-2 virus has had a significant impact on public health and on social and financial issues. Thousands of individuals are infected on a regular basis in India, which is among the countries most seriously impacted by the pandemic. Despite modern medical and technical advances, predicting the spread of the virus has been extremely difficult. Predictive models have been used by health systems such as hospitals to gain insight into the influence of COVID-19 outbreaks and the resources they may require, while minimizing the dangers of transmission. As a result, the main focus of this research is on building a COVID-19 predictive analytics technique. On the Indian dataset, Prophet, ARIMA, and stacked LSTM-GRU models were employed to forecast the number of confirmed and active cases. State-of-the-art models such as the recurrent neural network (RNN), gated recurrent unit (GRU), long short-term memory (LSTM), linear regression, polynomial regression, autoregressive integrated moving average (ARIMA), and Prophet were used to compare the outcomes of the prediction. After the predictive analysis, the stacked LSTM-GRU model's forecast was found to be more consistent than the existing models, with better prediction results. Although the stacked model requires a large dataset for training, it helps create a higher level of abstraction in the final results and maximizes the model's memory capacity, while the GRU helps resolve the vanishing gradient problem. The study findings reveal that the proposed stacked LSTM-GRU model outperforms all the other models in terms of R-squared and RMSE. This forecasting aids in determining the future transmission paths of the virus.
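A minimal Keras sketch of a stacked LSTM-GRU forecaster of the kind compared above is given below; the window length, layer sizes, and synthetic case-count series are illustrative assumptions.

```python
# Stacked LSTM -> GRU next-step forecaster on a synthetic cumulative-case series.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def make_windows(series, window=14):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X[..., np.newaxis], y                        # (samples, window, 1)

series = np.cumsum(np.random.poisson(50, 200)).astype("float32")   # stand-in case counts
X, y = make_windows(series)

model = models.Sequential([
    layers.Input(shape=(X.shape[1], 1)),
    layers.LSTM(64, return_sequences=True),             # LSTM layer feeds the GRU layer
    layers.GRU(32),
    layers.Dense(1),                                     # next-day case count
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X[-1:], verbose=0))
```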


Subjects
Acquired Immunodeficiency Syndrome , COVID-19 , COVID-19/epidemiology , Forecasting , Humans , India/epidemiology , Pandemics , SARS-CoV-2
18.
Front Plant Sci ; 13: 1095547, 2022.
Article in English | MEDLINE | ID: mdl-36589071

ABSTRACT

Plants are the primary source of food for the world's population. Diseases in plants can cause yield loss, which can be mitigated by continual monitoring; however, monitoring plant diseases manually is difficult and prone to errors. Using computer vision and artificial intelligence (AI) for the early identification of plant illnesses can prevent the negative consequences of diseases at the very beginning and overcome the limitations of continuous manual monitoring. This research focuses on the development of an automatic system capable of performing the segmentation of leaf lesions and the detection of disease without requiring human intervention. For lesion region segmentation, we propose a context-aware 3D Convolutional Neural Network (CNN) model based on the CANet architecture that considers the ambiguity of plant lesion placement in the subregions of the plant leaf image. A deep CNN is employed to recognize the subtype of leaf lesion using the segmented lesion area. Finally, the plant's survival is predicted using a hybrid method combining a CNN and linear regression. To evaluate the efficacy and effectiveness of the proposed plant disease detection scheme and survival prediction, we utilized the Plant Village benchmark dataset, which is composed of several photos of plant leaves affected by particular diseases. The segmentation model's performance for plant leaf lesion segmentation is evaluated using the DICE and IoU metrics; the proposed lesion segmentation model achieved an average accuracy of 92% with an IoU of 90%. In comparison, the lesion subtype recognition model achieves accuracies of 91.11%, 93.01%, and 99.04% for pepper, potato, and tomato plants, respectively. The higher accuracy of the proposed model indicates that it can be utilized for real-time disease detection in unmanned aerial vehicles, and offline to offer crop health updates and reduce the risk of low yield.
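The DICE and IoU measures used above can be computed as in the short NumPy sketch below; the random masks are for illustration only.

```python
# Dice and IoU overlap measures between a predicted mask and a ground-truth mask.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)

pred = np.random.rand(64, 64) > 0.5    # dummy predicted lesion mask
gt = np.random.rand(64, 64) > 0.5      # dummy ground-truth lesion mask
print(dice(pred, gt), iou(pred, gt))
```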

19.
Comput Intell Neurosci ; 2021: 9619079, 2021.
Article in English | MEDLINE | ID: mdl-34912449

ABSTRACT

In the USA, almost 5.4 million people are diagnosed with skin cancer each year. Melanoma is one of the most dangerous types of skin cancer, and its survival rate is 5%. The incidence of skin cancer has risen over the last couple of years, and early identification can help reduce the mortality rate. Dermoscopy is a technology used for the acquisition of skin images, but the manual inspection process is time-consuming and costly. Recent developments in deep learning have shown significant performance on classification tasks. In this research work, a new automated framework is proposed for multiclass skin lesion classification. The framework consists of a series of steps. In the first step, augmentation is performed using three operations: 90° rotation, left-right flip, and up-down flip. In the second step, two deep models, ResNet-50 and ResNet-101, are fine-tuned by updating their layers. In the third step, transfer learning is applied to train both fine-tuned deep models on the augmented dataset. In the succeeding stage, features are extracted and fused using a modified serial-based approach. Finally, the fused vector is further refined by selecting the best features using the skewness-controlled SVR approach. The final selected features are classified using several machine learning algorithms, and the best is selected based on accuracy. In the experimental process, the augmented HAM10000 dataset is used, achieving an accuracy of 91.7%. Moreover, the performance on the augmented dataset is better than on the original imbalanced dataset. In addition, the proposed method is compared with some recent studies and shows improved performance.
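A minimal transfer-learning sketch in the spirit of the fine-tuning step described above: a pretrained torchvision ResNet-50 with its final layer replaced for the seven HAM10000 classes. Freezing the backbone and the training snippet are assumptions, not the authors' exact configuration (requires torchvision >= 0.13 for the weights argument).

```python
# Fine-tuning sketch: pretrained ResNet-50 backbone, new 7-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)  # downloads ImageNet weights
for p in model.parameters():
    p.requires_grad = False                        # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 7)      # new head for 7 lesion classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                    # dummy batch of dermoscopy images
y = torch.randint(0, 7, (4,))                      # dummy class labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```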


Subjects
Deep Learning , Melanoma , Computers , Computer-Assisted Diagnosis , Humans , Neural Networks (Computer)
20.
Behav Neurol ; 2021: 2560388, 2021.
Article in English | MEDLINE | ID: mdl-34966463

ABSTRACT

The excessive number of COVID-19 cases reported worldwide so far, together with a high rate of false alarms in its diagnosis using the conventional polymerase chain reaction method, has led to an increased number of high-resolution computed tomography (CT) examinations. The manual inspection of these scans, besides being slow, is susceptible to human error, especially because of the uncanny resemblance between the CT scans of COVID-19 and those of pneumonia, and therefore demands a proportional increase in the number of expert radiologists. Artificial intelligence-based computer-aided diagnosis of COVID-19 using CT scans has recently been introduced and has proven effective in terms of accuracy and computation time. In this work, a similar framework for classification of COVID-19 using CT scans is proposed. The proposed method includes four core steps: (i) preparing a database of three classes, COVID-19, pneumonia, and normal; (ii) modifying three pretrained deep learning models, VGG16, ResNet50, and ResNet101, for the classification of COVID-19-positive scans; (iii) proposing an activation function and improving the firefly algorithm for feature selection; and (iv) fusing the optimal selected features using a descending-order serial approach and classifying them using multiclass supervised learning algorithms. We demonstrate that, when performed on a publicly available dataset, this system attains an improved accuracy of 97.9% with a computational time of almost 34 seconds.
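For step (iv), serial feature fusion can be pictured as concatenating per-image feature vectors from the different backbones; in the sketch below, random arrays stand in for the deep features, and sorting by variance is only a stand-in for the paper's descending-order criterion.

```python
# Serial fusion sketch: concatenate backbone features, rank them, classify with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
f_vgg16 = rng.normal(size=(100, 512))              # dummy VGG16 features
f_res50 = rng.normal(size=(100, 2048))             # dummy ResNet50 features
f_res101 = rng.normal(size=(100, 2048))            # dummy ResNet101 features
labels = rng.integers(0, 3, 100)                   # COVID-19 / pneumonia / normal

fused = np.concatenate([f_vgg16, f_res50, f_res101], axis=1)   # serial fusion
order = np.argsort(fused.var(axis=0))[::-1]        # rank features in descending order
fused = fused[:, order[:1024]]                     # keep the top-ranked features

clf = SVC().fit(fused, labels)
print(clf.score(fused, labels))
```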


Subjects
COVID-19 , Deep Learning , Artificial Intelligence , Computers , Humans , SARS-CoV-2 , X-Ray Computed Tomography