Results 1 - 20 of 33
1.
J Xray Sci Technol ; 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38701131

ABSTRACT

BACKGROUND: The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE: This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS: Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS: Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS: The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION: Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
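As a concrete illustration of the fusion-based models the review surveys, the sketch below shows a minimal two-branch late-fusion classifier for two imaging modalities (e.g., PET and MRI slices). The architecture, channel sizes, and class count are illustrative assumptions, not a model taken from any reviewed study.

```python
# Minimal sketch of a late-fusion tumor classifier for two imaging modalities.
# Architecture and sizes are illustrative placeholders only.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One small CNN encoder per modality; real work would use pretrained backbones.
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.pet_branch = encoder()
        self.mri_branch = encoder()
        # Late fusion: concatenate per-modality features, then classify.
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, pet, mri):
        feats = torch.cat([self.pet_branch(pet), self.mri_branch(mri)], dim=1)
        return self.head(feats)

model = FusionClassifier()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```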

2.
Sensors (Basel) ; 23(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36679541

ABSTRACT

Coronavirus Disease 2019 (COVID-19) is still a threat to global health and safety, and it is anticipated that deep learning (DL) will be the most effective way of detecting COVID-19 and other chest diseases such as lung cancer (LC), tuberculosis (TB), pneumothorax (PneuTh), and pneumonia (Pneu). However, data sharing across hospitals is hampered by patients' right to privacy, leading to unexpected results from deep neural network (DNN) models. Federated learning (FL) is a game-changing concept since it allows clients to train models together without sharing their source data with anybody else. Few studies, however, focus on improving the model's accuracy and stability, whereas most existing FL-based COVID-19 detection techniques aim to optimize secondary objectives such as latency, energy usage, and privacy. In this work, we design a novel model named decision-making-based federated learning network (DMFL_Net) for medical diagnostic image analysis to distinguish COVID-19 from four distinct chest disorders: LC, TB, PneuTh, and Pneu. The proposed DMFL_Net model gathers data from a variety of hospitals, builds the model using DenseNet-169, and produces accurate predictions from information that is kept secure and released only to authorized individuals. Extensive experiments were carried out with chest X-rays (CXR), and the performance of the proposed model was compared with two transfer learning (TL) models, i.e., VGG-19 and VGG-16, in terms of accuracy (ACC), precision (PRE), recall (REC), specificity (SPF), and F1-measure. Additionally, the DMFL_Net model was also compared with the default FL configurations. The proposed DMFL_Net + DenseNet-169 model achieves an accuracy of 98.45%, outperforms other approaches in classifying COVID-19 against the four chest diseases, and successfully protects the privacy of the data among diverse clients.
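The paper's decision-making selection logic is not reproduced here, but the following minimal federated-averaging (FedAvg) round illustrates the core FL idea the abstract relies on: each hospital trains a local copy and only model weights, never raw chest X-rays, are aggregated. The model, data, and client count below are synthetic stand-ins.

```python
# Minimal FedAvg round: clients train local copies, the server averages weights
# proportionally to client dataset size. Generic sketch only; not DMFL_Net's
# decision-making selection logic. Data and model below are synthetic stand-ins.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model, loader, epochs=1, lr=1e-2):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_round(global_model, client_loaders):
    updates = [local_update(global_model, dl) for dl in client_loaders]
    total = sum(n for _, n in updates)
    avg = {k: sum(sd[k].float() * (n / total) for sd, n in updates)
           for k in updates[0][0]}
    global_model.load_state_dict(avg)
    return global_model

# Two synthetic "hospitals", each holding private feature vectors and labels.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))
clients = [DataLoader(TensorDataset(torch.randn(40, 64), torch.randint(0, 5, (40,))),
                      batch_size=8) for _ in range(2)]
model = fedavg_round(model, clients)
```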


Subject(s)
COVID-19 , Lung Neoplasms , Humans , X-Rays , COVID-19/diagnostic imaging , Radiography , Thorax/diagnostic imaging , Hospitals
3.
Sensors (Basel) ; 23(20)2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37896548

ABSTRACT

Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is a challenging and time-consuming method due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer and enabling the treatment of patients at an early stage, this systematic literature review (SLR) presents various federated learning (FL) and transfer learning (TL) techniques that have been widely applied. This study explores the FL and TL classifiers by evaluating them in terms of the performance metrics reported in research studies, which include true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). This study was assembled by systematically reviewing well-reputed studies published in eminent venues between January 2018 and July 2023. The existing literature was compiled through a systematic search of seven well-reputed databases. A total of 86 articles were included in this SLR. This SLR contains the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the many malignant and non-malignant cancer classes. The results of this SLR highlight the limitations and challenges of recent research. Consequently, future directions and opportunities are outlined to help interested researchers in the automated classification of melanoma and nonmelanoma skin cancers.
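For reference, the metrics this review compares (TPR, TNR, AUC, ACC) can be computed as below for a binary melanoma-vs-nonmelanoma classifier; the labels and scores are toy values, not results from any reviewed study.

```python
# Computing the metrics the review reports (TPR, TNR, AUC, ACC) for a binary
# classifier; the labels and scores below are toy values.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TPR", tp / (tp + fn), "TNR", tn / (tn + fp))
print("AUC", roc_auc_score(y_true, y_score), "ACC", accuracy_score(y_true, y_pred))
```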


Subject(s)
Melanoma , Skin Neoplasms , Humans , Prospective Studies , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Melanoma/diagnosis , Skin/pathology , Machine Learning
4.
Sensors (Basel) ; 23(9)2023 May 04.
Article in English | MEDLINE | ID: mdl-37177670

ABSTRACT

Hundreds of people are injured or killed in road accidents. These accidents are caused by several intrinsic and extrinsic factors, including the attentiveness of the driver towards the road and its associated features. These features include approaching vehicles, pedestrians, and static fixtures such as road lanes and traffic signs. If a driver is made aware of these features in a timely manner, a large share of these accidents can be avoided. This study proposes a computer vision-based solution for detecting and recognizing traffic objects and signs, helping drivers and paving the way for self-driving cars. A real-world roadside dataset was collected under varying lighting and road conditions, and individual frames were annotated. Two deep learning models, YOLOv7 and Faster RCNN, were trained on this custom-collected dataset to detect the aforementioned road features. The models produced mean Average Precision (mAP) scores of 87.20% and 75.64%, respectively, along with class accuracies of over 98.80%; all of these were state-of-the-art results. The proposed model provides an excellent benchmark to build on to help improve traffic situations and enable future technological advances, such as Advanced Driver Assistance Systems (ADAS) and self-driving cars.
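A hedged sketch of how such a detector is typically set up: fine-tuning a torchvision Faster R-CNN on a custom road-scene dataset. The class count, image size, and target boxes below are placeholders and do not reflect the paper's actual dataset or training configuration.

```python
# Illustrative fine-tuning setup for a Faster R-CNN detector on a custom
# road-scene dataset (class count and sample target are placeholders).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 5  # e.g., background + vehicle, pedestrian, lane marker, traffic sign
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Training step sketch: targets follow the torchvision detection format.
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100., 120., 200., 260.]]),
            "labels": torch.tensor([4])}]
model.train()
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
loss.backward()
```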


Subject(s)
Automobile Driving , Deep Learning , Pedestrians , Humans , Accidents, Traffic/prevention & control , Attention
5.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298412

ABSTRACT

Sensor fusion is the process of merging data from many sources, such as radar, lidar and camera sensors, to provide less uncertain information than that collected from a single source [...].


Subject(s)
Algorithms , Deep Learning , Radar , Vision, Ocular , Computers
6.
Sensors (Basel) ; 22(15)2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35957209

ABSTRACT

Skin cancer is a deadly disease, and its early diagnosis enhances the chances of survival. Deep learning algorithms for skin cancer detection have become popular in recent years. A novel framework based on deep learning is proposed in this study for the multiclassification of skin cancer types such as Melanoma, Melanocytic Nevi, Basal Cell Carcinoma and Benign Keratosis. The proposed model, named SCDNet, combines Vgg16 with a convolutional neural network (CNN) for the classification of different types of skin cancer. Moreover, the accuracy of the proposed method is also compared with four state-of-the-art pre-trained classifiers in the medical domain, namely Resnet 50, Inception v3, AlexNet and Vgg19. The performance of the proposed SCDNet classifier, as well as the four state-of-the-art classifiers, is evaluated using the ISIC 2019 dataset. The accuracy rate of the proposed SCDNet is 96.91% for the multiclassification of skin cancer, whereas the accuracy rates for Resnet 50, Alexnet, Vgg19 and Inception-v3 are 95.21%, 93.14%, 94.25% and 92.54%, respectively. The results showed that the proposed SCDNet performed better than the competing classifiers.
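A rough analogue of this kind of Vgg16-based transfer learning is sketched below: the convolutional features are frozen and the classifier head is replaced for the four lesion classes. This is not the authors' exact SCDNet; the layer choices are assumptions.

```python
# Rough analogue of a Vgg16-based transfer-learning classifier for the four
# lesion classes; not the authors' exact SCDNet, layer choices are assumptions.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.vgg16(weights="DEFAULT")
for p in backbone.features.parameters():
    p.requires_grad = False  # freeze the pretrained convolutional features

# Replace the final classifier layer for four classes:
# melanoma, melanocytic nevi, basal cell carcinoma, benign keratosis.
backbone.classifier[6] = nn.Linear(backbone.classifier[6].in_features, 4)

logits = backbone(torch.randn(1, 3, 224, 224))  # -> shape (1, 4)
```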


Subject(s)
Deep Learning , Melanoma , Skin Neoplasms , Dermoscopy/methods , Humans , Melanoma/diagnostic imaging , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology
7.
Sensors (Basel) ; 20(15)2020 Jul 27.
Article in English | MEDLINE | ID: mdl-32726915

ABSTRACT

Image-to-image conversion based on deep learning techniques is a topic of interest in the fields of robotics and computer vision. A series of typical tasks, such as converting semantic labels to building photos, edge maps to photos, and rainy images to de-rained images, can be seen as paired image-to-image conversion problems. In such problems, the image generation network learns from the information in the form of input images. The input images and the corresponding targeted images must share the same basic structure to perfectly generate target-oriented output images. However, the shared basic structure between paired images is not as ideal as assumed, which can significantly affect the output of the generating model. Therefore, we propose a novel Input-Perceptual and Reconstruction Adversarial Network (IP-RAN) as an all-purpose framework for imperfect paired image-to-image conversion problems. We demonstrate, through the experimental results, that our IP-RAN method significantly outperforms the current state-of-the-art techniques.
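For orientation, a generic paired image-to-image translation objective (pix2pix-style) is sketched below: an adversarial term plus an L1 reconstruction term. IP-RAN's input-perceptual components are not reproduced here; the weighting and tensor shapes are illustrative assumptions.

```python
# Generic paired image-to-image translation objective (pix2pix-style):
# an adversarial term plus an L1 reconstruction term. IP-RAN's additional
# input-perceptual terms are not reproduced; lambda_rec is an assumed weight.
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake_img, target_img, lambda_rec=100.0):
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))  # fool the critic
    rec = F.l1_loss(fake_img, target_img)                      # stay close to the target
    return adv + lambda_rec * rec

loss = generator_loss(torch.randn(2, 1, 30, 30),   # patch-discriminator logits
                      torch.rand(2, 3, 64, 64),    # generated image
                      torch.rand(2, 3, 64, 64))    # paired ground-truth image
```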

8.
Sensors (Basel) ; 18(2)2018 Feb 03.
Article in English | MEDLINE | ID: mdl-29401681

ABSTRACT

A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been based chiefly on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to varying environmental light, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method that uses a near-infrared (NIR) camera sensor, accounts for driver head and eye movement, and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.
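A textbook-style PCCR illustration, for context: gaze is estimated from the vector between the corneal glint and the pupil center, mapped to screen coordinates by a polynomial fitted during calibration, the very step the paper's deep learning method is designed to avoid. The mapping order and sample values below are assumptions.

```python
# Textbook-style PCCR sketch: the gaze feature is the vector from the corneal
# glint to the pupil center; a polynomial fitted from calibration points maps it
# to screen coordinates. The paper's DL method avoids this calibration step.
import numpy as np

def pccr_vector(pupil_center, glint_center):
    return np.asarray(pupil_center, float) - np.asarray(glint_center, float)

def fit_mapping(vectors, screen_points):
    # Second-order polynomial mapping [1, x, y, xy, x^2, y^2] -> screen (u, v).
    x, y = vectors[:, 0], vectors[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs  # shape (6, 2), one column per screen coordinate

# Nine hypothetical calibration targets on a 1920x1080 screen.
vecs = np.random.rand(9, 2)
screen = np.random.rand(9, 2) * np.array([1920.0, 1080.0])
coeffs = fit_mapping(vecs, screen)
```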


Subject(s)
Machine Learning , Automobile Driving , Automobiles , Eye Movements , Fixation, Ocular , Head Movements , Humans
9.
Sensors (Basel) ; 18(5)2018 May 10.
Article in English | MEDLINE | ID: mdl-29748495

ABSTRACT

The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris segmentation in visible-light environments is challenging because of visible-light noise. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues arising with visible-light and near-infrared-light camera sensors in challenging situations, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.
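A minimal dense block in the spirit of densely connected segmentation networks is sketched below: each layer receives the concatenation of all preceding feature maps, which is the improved information and gradient flow the abstract refers to. The growth rate and depth are illustrative, not IrisDenseNet's actual configuration.

```python
# Minimal dense block: every layer sees the concatenation of all previous
# feature maps. Growth rate and depth are illustrative only.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth), nn.ReLU(),
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1))
            for i in range(n_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)  # in_ch + n_layers * growth channels

out = DenseBlock(8)(torch.randn(1, 8, 64, 64))  # -> (1, 72, 64, 64)
```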

10.
Sensors (Basel) ; 18(6)2018 May 24.
Article in English | MEDLINE | ID: mdl-29795038

ABSTRACT

Autonomous landing of an unmanned aerial vehicle or a drone is a challenging problem for the robotics research community. Previous researchers have attempted to solve this problem by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate an unmanned aerial vehicle's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using our remote-marker-based tracking algorithm based on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network named lightDenseYOLO that extracts learned features from an input image to predict a marker's location from the drone's visible-light camera sensor. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of both accuracy and processing time.

11.
Sensors (Basel) ; 17(4)2017 Apr 14.
Article in English | MEDLINE | ID: mdl-28420114

ABSTRACT

Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, serve as a game interface, and play a pivotal role in the human-computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used eye blinking for this purpose as well as dwell-time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. Experimental results, together with usability tests and on-screen keyboard trials, show that the proposed method is better than previous methods.

12.
Sensors (Basel) ; 16(9)2016 Aug 31.
Article in English | MEDLINE | ID: mdl-27589768

ABSTRACT

Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth of field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
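For context, the standard lens relations involved in such a design are sketched below: angular field of view from sensor width and focal length, and an approximate depth of field via the hyperfocal distance. The numbers are illustrative, not measurements from the paper.

```python
# Standard lens formulas used when sizing a gaze-tracking camera: angular field
# of view from sensor size and focal length, and an approximate depth of field.
# Example numbers below are illustrative, not taken from the paper.
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def depth_of_field_mm(focal_mm, f_number, subject_dist_mm, coc_mm=0.03):
    # Approximation via the hyperfocal distance H.
    H = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = H * subject_dist_mm / (H + (subject_dist_mm - focal_mm))
    far = H * subject_dist_mm / (H - (subject_dist_mm - focal_mm))
    return far - near if far > 0 else float("inf")

print(field_of_view_deg(4.8, 8.0))        # ~33.4 degrees horizontal FOV
print(depth_of_field_mm(8.0, 2.0, 700.0)) # DOF around a 70 cm viewing distance
```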


Subject(s)
Empirical Research , Fixation, Ocular/physiology , Head Movements/physiology , Photography/instrumentation , Equipment Design , Humans , Imaging, Three-Dimensional , Ultrasonics
13.
Front Plant Sci ; 15: 1356260, 2024.
Article in English | MEDLINE | ID: mdl-38545388

ABSTRACT

Accurate and rapid plant disease detection is critical for enhancing long-term agricultural yield. Disease infection poses the most significant challenge in crop production, potentially leading to economic losses. Viruses, fungi, bacteria, and other infectious organisms can affect numerous plant parts, including roots, stems, and leaves. Traditional techniques for plant disease detection are time-consuming, require expertise, and are resource-intensive. Therefore, automated leaf disease diagnosis using artificial intelligence (AI) together with Internet of Things (IoT) sensor methodologies is considered for analysis and detection. This research examines diseases of four crops: tomato, chilli, potato, and cucumber. It also highlights the most prevalent diseases and infections in these four types of vegetables, along with their symptoms. This review provides detailed predetermined steps to predict plant diseases using AI. The predetermined steps include image acquisition, preprocessing, segmentation, feature selection, and classification. Machine learning (ML) and deep learning (DL) detection models are discussed. A comprehensive examination of various existing ML- and DL-based studies for detecting diseases of these four crops is provided, including the datasets used to evaluate these studies. We also provide a list of plant disease detection datasets. Finally, different ML and DL application problems are identified and discussed, along with future research prospects, by combining AI with IoT platforms like smart drones for field-based disease detection and monitoring. This work will help other practitioners in surveying different plant disease detection strategies and the limitations of present systems.
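A simplified classical pipeline mirroring the review's predetermined steps (acquisition, preprocessing, segmentation, feature extraction, classification) is sketched below; the HSV thresholds and histogram features are placeholders rather than a method from any surveyed study.

```python
# Simplified classical pipeline mirroring the review's predetermined steps:
# acquisition -> preprocessing -> segmentation -> feature extraction -> classification.
# Thresholds and features are illustrative placeholders.
import cv2
import numpy as np
from sklearn.svm import SVC

def leaf_features(image_bgr):
    img = cv2.resize(image_bgr, (256, 256))                  # preprocessing
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))    # crude leaf segmentation
    hist = cv2.calcHist([hsv], [0, 1], mask, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()               # feature vector

# Classification step: features from labeled leaf images feed a standard classifier, e.g.
# X = np.stack([leaf_features(img) for img in images]); clf = SVC().fit(X, labels)
```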

14.
Comput Biol Med ; 168: 107836, 2024 01.
Article in English | MEDLINE | ID: mdl-38086139

ABSTRACT

Nurses, often considered the backbone of global health services, are disproportionately vulnerable to COVID-19 due to their front-line roles. They conduct essential patient tests, including blood pressure, temperature, and complete blood counts. The pandemic-induced loss of nursing staff has resulted in critical shortages. To address this, robotic solutions offer promising avenues: we developed an ensemble deep learning (DL) model that uses seven different models to detect patients. Detected images are then used as input for the soft robot, which performs basic assessment tests. In this study, we introduce a deep learning-based approach for nursing soft robots and propose a novel deep learning model named Deep Ensemble of Adaptive Architectures. Our method is twofold: firstly, an ensemble deep learning technique detects COVID-19 patients; secondly, a soft robot performs basic assessment tests on the identified patients. We evaluate the performance of various deep learning-based object detectors for patient detection, examining implementations of You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), Region-Based Convolutional Neural Network (RCNN), and Region-Based Fully Convolutional Network (R-FCN) on a proprietary dataset comprising 32,668 hospital surveillance images. Our results indicate that while YOLO and VGG facilitate rapid detection, Faster-RCNN (Inception ResNet-v2) and our proposed Ensemble-DL achieve the highest accuracy. Ensemble-DL offers accurate results in a reasonable timeframe, making it apt for patient detection on embedded platforms. Through real-world experiments, our method outperforms baseline approaches (including Faster-RCNN, R-FCN variants, CNN+LSTM, etc.) in terms of both precision and recall. Achieving an impressive accuracy of 98.32%, our deep learning-based model for nursing soft robots presents a significant advancement in the identification and assessment of COVID-19 patients, ultimately enhancing healthcare efficiency and patient care.


Subject(s)
COVID-19 , Deep Learning , Humans , Pandemics , Neural Networks, Computer
15.
Front Plant Sci ; 15: 1402835, 2024.
Article in English | MEDLINE | ID: mdl-38988642

ABSTRACT

The agricultural sector is pivotal to food security and economic stability worldwide. Corn holds particular significance in the global food industry, especially in developing countries where agriculture is a cornerstone of the economy. However, corn crops are vulnerable to various diseases that can significantly reduce yields. Early detection and precise classification of these diseases are crucial to prevent damage and ensure high crop productivity. This study leverages the VGG16 deep learning (DL) model to classify corn leaves into four categories: healthy, blight, gray spot, and common rust. Despite the efficacy of DL models, they often face challenges related to the explainability of their decision-making processes. To address this, Layer-wise Relevance Propagation (LRP) is employed to enhance the model's transparency by generating intuitive and human-readable heat maps of input images. The proposed VGG16 model, augmented with LRP, outperformed previous state-of-the-art models in classifying corn leaf diseases. Simulation results demonstrated that the model not only achieved high accuracy but also provided interpretable results, highlighting critical regions in the images used for classification. By generating human-readable explanations, this approach ensures greater transparency and reliability in model performance, aiding farmers in improving their crop yields.
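To make the explanation step concrete, the sketch below implements a simplified epsilon-rule LRP pass for a small fully connected ReLU network, showing how the relevance of one output class is redistributed back to the input as a heat map. Applying LRP to a full VGG16 additionally requires rules for convolution and pooling layers; the layer sizes here are assumptions.

```python
# Simplified epsilon-rule LRP for a small fully connected ReLU network: output
# relevance for one class is propagated back to the input as a heat map.
# Layer sizes are assumptions; a full VGG16 also needs conv/pool rules.
import torch
import torch.nn as nn

@torch.no_grad()
def lrp_dense(layers, x, target_class, eps=1e-6):
    # Forward pass, keeping the input of every linear layer.
    activations = [x]
    for layer in layers[:-1]:
        activations.append(torch.relu(layer(activations[-1])))
    logits = layers[-1](activations[-1])

    # Relevance starts at the chosen class score and flows backwards.
    R = torch.zeros_like(logits)
    R[:, target_class] = logits[:, target_class]
    for layer, a in zip(reversed(layers), reversed(activations)):
        z = layer(a) + eps            # stabilized pre-activations
        s = R / z                     # per-neuron relevance share
        c = s @ layer.weight          # redistribute to the layer's inputs
        R = a * c                     # input contribution (relevance)
    return R                          # relevance map over the input features

layers = [nn.Linear(784, 128), nn.Linear(128, 4)]
heatmap = lrp_dense(layers, torch.rand(1, 784), target_class=2)
```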

16.
Front Bioeng Biotechnol ; 12: 1392807, 2024.
Article in English | MEDLINE | ID: mdl-39104626

ABSTRACT

Radiologists encounter significant challenges when segmenting and characterizing brain tumors in patients, information that assists in treatment planning. The utilization of artificial intelligence (AI), especially deep learning (DL), has emerged as a useful tool in healthcare, aiding radiologists in their diagnostic processes. This empowers radiologists to understand the biology of tumors better and provide personalized care to patients with brain tumors. The segmentation of brain tumors using multi-modal magnetic resonance imaging (MRI) images has received considerable attention. In this survey, we first discuss multi-modal imaging and the available magnetic resonance imaging modalities and their properties. Subsequently, we discuss the most recent DL-based models for brain tumor segmentation using multi-modal MRI. We divide this section into three parts based on the architecture: the first covers models that use a convolutional neural network (CNN) backbone, the second covers vision transformer-based models, and the third covers hybrid models that use both convolutional neural networks and transformers in the architecture. In addition, an in-depth statistical analysis is performed of recent publications, frequently used datasets, and evaluation metrics for segmentation tasks. Finally, open research challenges are identified and promising future directions for brain tumor segmentation are suggested to improve diagnostic accuracy and treatment outcomes for patients with brain tumors. This aligns with public health goals to use health technologies for better healthcare delivery and population health management.

17.
Bioengineering (Basel) ; 10(2)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36829697

ABSTRACT

Due to the rapid rate of SARS-CoV-2 dissemination, an informed and effective strategy must be employed to isolate COVID-19 cases. When it comes to identifying COVID-19, one of the most significant obstacles that researchers must overcome is the rapid propagation of the virus, in addition to the dearth of trustworthy testing models. This problem continues to be the most difficult one for clinicians to deal with. The use of AI in image processing has made the formerly insurmountable challenge of identifying COVID-19 cases more manageable. In the real world, data sharing between hospitals must be handled while still honoring the privacy concerns of the organizations involved. When training a global deep learning (DL) model, it is crucial to handle fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and builds a global model using blockchain-based federated learning (FL). The data is validated through the use of blockchain technology (BCT), and FL trains the model on a global scale while maintaining the secrecy of the organizations. The proposed framework is divided into three parts. First, we provide a method of data normalization that can handle the diversity of data collected from five different sources using several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble the capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests employing chest CT scans were undertaken, comparing the classification performance of the proposed model to that of five DL algorithms for predicting COVID-19 while protecting the privacy of the data for a variety of users. Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in their diagnosis of COVID-19.
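As an example of the kind of cross-scanner harmonization the first component addresses, the sketch below applies a standard CT intensity normalization (Hounsfield-unit windowing and rescaling). This is a common generic approach, not necessarily the paper's normalization scheme; the window values are assumptions.

```python
# Generic CT intensity normalization across scanners: clip Hounsfield units to a
# fixed window and rescale to [0, 1]; per-volume z-scoring is another common choice.
# This is a standard approach, not necessarily the paper's normalization scheme.
import numpy as np

def normalize_ct(volume_hu, window=(-1000.0, 400.0)):
    lo, hi = window
    clipped = np.clip(volume_hu.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)

normalized = normalize_ct(np.random.randint(-1024, 2000, size=(4, 64, 64)))
```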

18.
Environ Pollut ; 335: 122241, 2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37482338

ABSTRACT

To mitigate the impact of dust on human health and the environment, it is crucial to create a model and map that identify the areas susceptible to dust. The present study focused on identifying dust occurrences in the Bushehr province of Iran between 2002 and 2022 using moderate-resolution imaging spectroradiometer (MODIS) imagery. Subsequently, an ensemble machine learning model was improved to prepare a dust susceptibility map (DSM). The study employed three evolutionary algorithms, differential evolution (DE), genetic algorithm (GA), and flower pollination algorithm (FPA), to enhance the random forest (RF) ensemble model. A spatial database was created for modeling, including 519 dust occurrence points (extracted from MODIS imagery) and 15 factors affecting dust (slope, bulk density, aspect, clay, altitude, sand, rainfall, lithology, soil order, distance to river, soil texture, normalized difference vegetation index (NDVI), soil water content, land cover, and wind speed). By utilizing the differential evolution (DE) algorithm, we determined the significance of these factors in impacting dust occurrences. The results indicated that altitude, wind speed, and land cover were the most influential factors, while the distance to the river, bulk density, and soil texture had less impact on dust occurrence. Data were preprocessed using multicollinearity analysis and the frequency ratio (FR) approach. For this research, three RF-based meta-heuristic optimization algorithms, namely RF-FPA, RF-GA, and RF-DE, were created for DSM. The predictive effectiveness of the constructed models, assessed by root-mean-square error (RMSE), area under the receiver operating characteristic curve (AUC-ROC), and coefficient of determination (R2), ranked from best to worst as RF-DE (RMSE = 0.131, AUC-ROC = 0.988, and R2 = 0.93), RF-GA (RMSE = 0.141, AUC-ROC = 0.986, and R2 = 0.919), RF-FPA (RMSE = 0.157, AUC-ROC = 0.981, and R2 = 0.9), and RF (RMSE = 0.173, AUC-ROC = 0.964, and R2 = 0.878). The results showed that combining evolutionary algorithms with an RF model improves the accuracy of dust susceptibility modeling.
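A hedged sketch of the RF-DE idea: differential evolution searches random forest hyperparameters that maximize cross-validated AUC. Synthetic data stands in for the spatial dust database, and the hyperparameter bounds and search budget are assumptions.

```python
# Sketch of the RF-DE idea: differential evolution searches random forest
# hyperparameters that maximize cross-validated AUC. Synthetic data stands in
# for the spatial dust database; bounds and budget are assumptions.
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=15, random_state=0)

def neg_auc(params):
    n_estimators, max_depth, min_leaf = (int(round(p)) for p in params)
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth,
                                 min_samples_leaf=min_leaf, random_state=0)
    return -cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

result = differential_evolution(neg_auc, bounds=[(20, 150), (3, 15), (1, 10)],
                                maxiter=3, seed=0)
print(result.x, -result.fun)  # tuned hyperparameters and the best AUC found
```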


Subject(s)
Dust , Satellite Imagery , Humans , Time Factors , Algorithms , Machine Learning
19.
Cancers (Basel) ; 15(7)2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37046840

ABSTRACT

Skin cancer is one of the most lethal kinds of human illness. In the present state of the health care system, skin cancer identification is a time-consuming procedure, and if it is not diagnosed early, it can threaten human life. To attain a high prospect of complete recovery, early detection of skin cancer is crucial. In the last several years, the application of deep learning (DL) algorithms for the detection of skin cancer has grown in popularity. Based on a DL model, this work aims to build a multi-classification technique for diagnosing skin cancers such as melanoma (MEL), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and melanocytic nevi (MN). In this paper, we propose a novel model, a deep learning-based skin cancer classification network (DSCC_Net) that is based on a convolutional neural network (CNN), and evaluate it on three publicly available benchmark datasets (i.e., ISIC 2020, HAM10000, and DermIS). For skin cancer diagnosis, the classification performance of the proposed DSCC_Net model is compared with six baseline deep networks, including ResNet-152, Vgg-16, Vgg-19, Inception-V3, EfficientNet-B0, and MobileNet. In addition, we used SMOTE Tomek to handle the minority-class imbalance that exists in this dataset. The proposed DSCC_Net obtained a 99.43% AUC, along with 94.17% accuracy, a recall of 93.76%, a precision of 94.28%, and an F1-score of 93.93% in categorizing the four distinct types of skin cancer diseases. The rates of accuracy for ResNet-152, Vgg-19, MobileNet, Vgg-16, EfficientNet-B0, and Inception-V3 are 89.32%, 91.68%, 92.51%, 91.12%, 89.46% and 91.82%, respectively. The results showed that our proposed DSCC_Net model performs better than the baseline models, thus offering significant support to dermatologists and health experts in diagnosing skin cancer.
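The SMOTE Tomek resampling step mentioned in the abstract can be illustrated as below with imbalanced-learn; the synthetic feature vectors stand in for image-derived features, and the class ratio is an assumption.

```python
# Illustration of handling minority-class imbalance with SMOTE-Tomek; the
# synthetic features stand in for image embeddings or pixel features.
from collections import Counter
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_res))  # classes are rebalanced before training
```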

20.
PLoS One ; 18(4): e0284992, 2023.
Article in English | MEDLINE | ID: mdl-37099592

ABSTRACT

Regular monitoring of the number of various fish species in a variety of habitats is essential for marine conservation efforts and marine biology research. To address the shortcomings of existing manual underwater video fish sampling methods, a plethora of computer-based techniques have been proposed. However, there is no perfect approach for the automated identification and categorization of fish species. This is primarily due to the difficulties inherent in capturing underwater videos, such as ambient changes in luminance, fish camouflage, dynamic environments, water color, poor resolution, shape variation of moving fish, and tiny differences between certain fish species. This study proposes a novel Fish Detection Network (FD_Net) for the detection of nine different fish species in camera-captured images; it is based on an improved YOLOv7 algorithm that exchanges Darknet53 for MobileNetv3 and substitutes depthwise separable convolution for the 3 x 3 filters in the augmented feature extraction network bottleneck attention module (BNAM). The mean average precision (mAP) is 14.29% higher than that of the initial version of YOLOv7. The feature extraction network is an improved version of DenseNet-169, and the loss function is an ArcFace loss. Widening the receptive field and improving the capability of feature extraction are achieved by incorporating dilated convolution into the dense block, removing the max-pooling layer from the trunk, and incorporating the BNAM into the dense block of the DenseNet-169 neural network. The results of several comparison and ablation experiments demonstrate that our proposed FD_Net achieves a higher detection mAP than YOLOv3, YOLOv3-TL, YOLOv3-BL, YOLOv4, YOLOv5, Faster-RCNN, and the most recent YOLOv7 model, and is more accurate for target fish species detection tasks in complex environments.
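The depthwise separable substitution mentioned in the abstract can be sketched as follows: a 3 x 3 depthwise convolution (optionally dilated, as in the modified dense blocks) followed by a 1 x 1 pointwise convolution. Channel counts are placeholders, not the paper's configuration.

```python
# Depthwise separable 3x3 convolution, optionally dilated: a per-channel spatial
# convolution followed by a 1x1 pointwise convolution. Channel counts are placeholders.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

y = DepthwiseSeparableConv(32, 64, dilation=2)(torch.rand(1, 32, 56, 56))  # -> (1, 64, 56, 56)
```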


Subject(s)
Algorithms , Neural Networks, Computer , Animals , Fishes , In Situ Hybridization, Fluorescence , Marine Biology