Results 1 - 20 of 90
1.
Sensors (Basel) ; 24(9)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732978

ABSTRACT

The application of machine learning (ML) models to multimodal data analysis has grown remarkably over the past decade [...].

2.
J Environ Manage ; 351: 119943, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38169263

ABSTRACT

Acid mine drainage (AMD) is recognized as a major environmental challenge in the Western United States, particularly in Colorado, leading to severe subsurface contamination issues. Given Colorado's arid climate and dependence on groundwater, an accurate assessment of AMD-induced contamination is crucial. While machine learning (ML)-based inversion algorithms have previously been used to reconstruct ground electrical properties (GEP) such as relative dielectric permittivity (RDP) from ground penetrating radar (GPR) data for contamination assessment, their inherent non-linear nature can introduce significant uncertainty and non-uniqueness into the reconstructed models. This is a challenge that traditional ML methods are not explicitly designed to address. In this study, a probabilistic hybrid technique is introduced that combines a DeepLabv3+ architecture-based deep convolutional neural network (DCNN) with an ensemble prediction-based Monte Carlo (MC) dropout method. Different MC dropout rates (1%, 5%, and 10%) were initially evaluated using 1D and 2D synthetic GPR data for accurate and reliable RDP model prediction. The optimal rate was chosen based on minimal prediction uncertainty and the closest alignment of the mean or median model with the true RDP model. Notably, with the optimal MC dropout rate, prediction accuracy of over 95% was achieved for both the 1D and 2D cases. Motivated by these results, the hybrid technique was applied to field GPR data collected over an AMD-impacted wetland near Silverton, Colorado. The field results underscored the hybrid technique's ability to predict an accurate subsurface RDP distribution for estimating the spatial extent of AMD-induced contamination. Notably, this technique not only provides a precise assessment of subsurface contamination but also ensures consistent interpretations of subsurface conditions by different environmentalists examining the same GPR data.
In conclusion, the hybrid technique presents a promising avenue for future environmental studies in regions affected by AMD or other contaminants that alter the natural distribution of GEP.
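The ensemble-prediction MC dropout idea described above can be sketched in a few lines: dropout is kept active at inference, and the spread of many stochastic forward passes quantifies uncertainty. The toy linear "network", its weights, and the inputs below are invented stand-ins for illustration, not the authors' DCNN:

```python
import random
import statistics

def forward_with_dropout(x, weights, drop_rate, rng):
    """One stochastic forward pass of a toy linear 'network': each weight is
    dropped with probability drop_rate; survivors are rescaled (inverted
    dropout) so the expected output matches the deterministic pass."""
    scale = 1.0 / (1.0 - drop_rate)
    kept = [w * scale if rng.random() >= drop_rate else 0.0 for w in weights]
    return sum(w * xi for w, xi in zip(kept, x))

def mc_dropout_predict(x, weights, drop_rate=0.05, n_passes=500, seed=42):
    """Keep dropout active at inference and aggregate many stochastic passes:
    the ensemble mean is the prediction, the spread its uncertainty."""
    rng = random.Random(seed)
    preds = [forward_with_dropout(x, weights, drop_rate, rng)
             for _ in range(n_passes)]
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = mc_dropout_predict([1.0, 2.0, 3.0], [0.5, -0.2, 0.1])
```

With 500 passes the ensemble mean stays close to the deterministic output (0.4 here), while the standard deviation gives the prediction uncertainty the abstract uses for selecting the dropout rate.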


Subject(s)
Groundwater, Wetlands, Colorado, Environmental Monitoring/methods, Mining
3.
J Environ Manage ; 352: 120091, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38228048

ABSTRACT

Water is a vital resource supporting a broad spectrum of ecosystems and human activities. The quality of river water has declined in recent years due to the discharge of hazardous materials and toxins. Deep learning and machine learning have gained significant attention for analysing time-series data. However, these methods often suffer from high complexity and significant forecasting errors, primarily due to non-linear datasets and hyperparameter settings. To address these challenges, we have developed an innovative HDTO-DeepAR approach for predicting water quality indicators. This proposed approach is compared with standalone algorithms, including DeepAR, BiLSTM, GRU and XGBoost, using performance metrics such as MAE, MSE, MAPE, and NSE. The NSE of the hybrid approach ranges from 0.80 to 0.96; given these values' proximity to 1, the model appears efficient. The PICP values (ranging from 95% to 98%) indicate that the model is highly reliable in forecasting water quality indicators. Experimental results reveal a close resemblance between the model's predictions and actual values, providing valuable insights for predicting future trends. The comparative study shows that the suggested model surpasses all existing, well-known models.
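The two headline metrics in this abstract, NSE and PICP, are simple to reproduce; this standalone sketch uses invented observation, forecast, and interval values purely for illustration:

```python
def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 is a perfect forecast, 0 means the
    forecast is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var

def picp(observed, lower, upper):
    """Prediction interval coverage probability: fraction of observations
    falling inside their forecast interval."""
    hits = sum(1 for o, lo, hi in zip(observed, lower, upper) if lo <= o <= hi)
    return hits / len(observed)

obs  = [1.0, 2.0, 3.0, 4.0]            # invented observations
pred = [1.1, 1.9, 3.2, 3.9]            # invented point forecasts
score = nse(obs, pred)                 # close to 1 for a good forecast
cover = picp(obs, [0.5, 1.5, 2.5, 3.5], [1.5, 2.5, 3.5, 4.5])
```

An NSE near 1 (0.986 here) and a PICP near the nominal interval level are exactly the behaviours the abstract reports for the hybrid model.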


Subject(s)
Ecosystem, Quality Indicators, Health Care, Humans, Algorithms, Fresh Water, Hazardous Substances, Water Quality, Forecasting
4.
Sensors (Basel) ; 23(21)2023 Oct 28.
Article in English | MEDLINE | ID: mdl-37960482

ABSTRACT

Road network extraction is a significant challenge in remote sensing (RS). Automated techniques for interpreting RS imagery offer a cost-effective solution for obtaining road network data quickly, surpassing traditional visual interpretation methods. However, the diverse characteristics of road networks, such as varying lengths, widths, materials, and geometries across different regions, pose a formidable obstacle for road extraction from RS imagery. The issue of road extraction can be defined as a task that involves capturing contextual and complex elements while also preserving boundary information and producing high-resolution road segmentation maps for RS data. The objective of the proposed Archimedes tuning process quantum dilated convolutional neural network for road extraction (ATP-QDCNNRE) technology is to tackle the aforementioned issues by enhancing the efficacy of image segmentation outcomes on remote sensing imagery, coupled with the Archimedes optimization algorithm (AOA). The findings of this study demonstrate the enhanced road-extraction capabilities achieved by the ATP-QDCNNRE method when used with remote sensing imagery. The ATP-QDCNNRE method employs deep learning (DL) and a hyperparameter tuning process to generate high-resolution road segmentation maps. The basis of this approach lies in the QDCNN model, which incorporates quantum computing (QC) concepts and dilated convolutions to enhance the network's ability to capture both local and global contextual information. Dilated convolutions also enhance the receptive field while maintaining spatial resolution, allowing fine road features to be extracted. ATP-based hyperparameter modifications improve QDCNNRE road extraction. To evaluate the effectiveness of the ATP-QDCNNRE system, benchmark databases are used to assess its simulation results.
The experimental results show that ATP-QDCNNRE performed with an intersection over union (IoU) of 75.28%, mean intersection over union (MIoU) of 95.19%, F1 of 90.85%, precision of 87.54%, and recall of 94.41% in the Massachusetts road dataset. These findings demonstrate the superior efficiency of this technique compared to more recent methods.
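Intersection over union, the headline metric above, reduces to two counts over the binary segmentation masks; a minimal sketch with an invented five-pixel mask:

```python
def iou(pred, truth):
    """Intersection over union for binary segmentation masks (flat 0/1 lists)."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # two empty masks agree perfectly

score = iou([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```

Here two of the four pixels in the union are shared, giving an IoU of 0.5; real evaluations apply the same count over full-resolution road masks.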

5.
Sensors (Basel) ; 23(14)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37514877

ABSTRACT

Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
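The histogram-equalization step used above to remove systematic brightness and contrast differences can be sketched directly; the four-pixel "image" is invented for illustration, and a real pipeline would operate on full-resolution radiographs:

```python
def equalize(pixels, levels=256):
    """Histogram equalization for a flat list of integer gray levels: map each
    level through the normalized cumulative histogram so intensities spread
    over the full range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:               # cumulative distribution function
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                 # constant image: nothing to stretch
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

out = equalize([52, 55, 61, 59])     # narrow band -> full 0..255 range
```

The narrow input band (52-61) is stretched across the full intensity range, which is what removes scanner-specific brightness bias between data sources.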


Asunto(s)
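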
Aprendizaje Profundo , Neoplasias Pulmonares , Humanos , Redes Neurales de la Computación , Rayos X , Detección Precoz del Cáncer , Neoplasias Pulmonares/diagnóstico por imagen , Pulmón
6.
Ecotoxicol Environ Saf ; 232: 113271, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35121252

ABSTRACT

This study evaluates state-of-the-art machine learning models in predicting the most sustainable arsenic mitigation preference. A Gaussian distribution-based Naïve Bayes (NB) classifier scored the highest Area Under the Curve (AUC) of the Receiver Operating Characteristic curve (0.82), followed by Nu Support Vector Classification (0.80) and K-Neighbors (0.79). Ensemble classifiers scored higher than 70% AUC, with Random Forest the top performer (0.77), and the Decision Tree model ranked fourth with an AUC of 0.77. The multilayer perceptron model also achieved high performance (AUC = 0.75). Most linear classifiers underperformed, with the Ridge classifier at the top (AUC = 0.73) and the perceptron at the bottom (AUC = 0.57). A Bernoulli distribution-based Naïve Bayes classifier was the poorest model (AUC = 0.50). The Gaussian NB was also the most robust ML model, with the smallest variation in Kappa score between training (0.58) and test (0.64) data. The results suggest that nonlinear or ensemble classifiers could more accurately capture the complex relationships of socio-environmental data and help develop accurate and robust prediction models of sustainable arsenic mitigation. Furthermore, Gaussian NB is the best option when data is scarce.
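The AUC figures compared above can be computed without plotting a ROC curve at all, since AUC equals the probability that a randomly chosen positive outranks a randomly chosen negative; a minimal sketch with invented labels and classifier scores:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank statistic: the probability that a random positive
    example scores above a random negative one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```

Three of the four positive-negative pairs are ranked correctly, giving an AUC of 0.75; 0.5 would correspond to the chance-level Bernoulli NB result reported above.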


Subject(s)
Arsenic, Bayes Theorem, Machine Learning, Neural Networks, Computer, ROC Curve, Support Vector Machine
7.
Sensors (Basel) ; 22(24)2022 Dec 10.
Article in English | MEDLINE | ID: mdl-36560047

ABSTRACT

The intelligent transportation system, especially autonomous vehicles, has attracted considerable interest among researchers owing to advances in modern artificial intelligence (AI) techniques, especially deep learning. As a result of increased road accidents over the last few decades, major industries are moving to design and develop autonomous vehicles. Understanding the surrounding environment is essential for understanding the behavior of nearby vehicles to enable the safe navigation of autonomous vehicles in crowded traffic environments. Several datasets are available for autonomous vehicles, but they focus only on structured driving environments. To develop an intelligent vehicle that drives in real-world traffic environments, which are unstructured by nature, a dataset focused on unstructured traffic environments is needed. The Indian Driving Lite dataset (IDD-Lite), focused on an unstructured driving environment, was released as an online competition in NCPPRIPG 2019. This study proposes an explainable inception-based U-Net model with Grad-CAM visualization for semantic segmentation, combining an inception-based module as an encoder for automatic feature extraction with a decoder for reconstruction of the segmentation feature map. The black-box nature of deep neural networks has hindered consumer trust; Grad-CAM is therefore used to interpret the deep-learning-based inception U-Net model and increase consumer trust. The proposed inception U-Net with Grad-CAM model achieves 0.622 intersection over union (IoU) on the Indian Driving Dataset (IDD-Lite), outperforming state-of-the-art (SOTA) deep-neural-network-based segmentation models.


Subject(s)
Artificial Intelligence, Autonomous Vehicles, Humans, Intelligence, Neural Networks, Computer
8.
J Environ Manage ; 308: 114589, 2022 Apr 15.
Article in English | MEDLINE | ID: mdl-35121456

ABSTRACT

Soil erosion is one of the prominent climate hazards that negatively impact countries' economies and livelihoods. According to the global climate index, Sri Lanka has ranked among the ten countries most threatened by climate change during the last three years (2018-2020). However, few studies have been conducted to simulate the impact of soil erosion vulnerability under climate scenarios. This study aims to assess and predict soil erosion susceptibility in the Central Highlands of Sri Lanka using climate change projection scenarios: Representative Concentration Pathways (RCP). The potential soil erosion susceptibility was projected to 2040 under climate change scenarios RCP 2.6 and RCP 8.5. Five models, widely applied for hazard assessment, were selected: the revised universal soil loss equation (RUSLE), frequency ratio (FR), artificial neural networks (ANN), support vector machine (SVM) and adaptive network-based fuzzy inference system (ANFIS). Eight geo-environmental factors were selected as inputs to model soil erosion susceptibility. Results of the five models demonstrate that soil erosion vulnerability (soil erosion rates) will increase by 4%-22% compared to the current soil erosion rate (2020). The predictions indicate average soil erosion will increase to 10.50 t/ha/yr and 12.4 t/ha/yr under the RCP 2.6 and RCP 8.5 climate scenarios in 2040, respectively. The ANFIS and SVM model predictions showed the highest accuracy (89%) on soil erosion susceptibility for this study area. The soil erosion susceptibility maps provide a good understanding of future soil erosion vulnerability (spatial distribution) and can be utilized to develop climate resilience.


Subject(s)
Climate Change, Soil Erosion, Environmental Monitoring, Soil, Sri Lanka
9.
Sensors (Basel) ; 21(14)2021 Jul 11.
Article in English | MEDLINE | ID: mdl-34300478

ABSTRACT

Urban vegetation mapping is critical in many applications, e.g., preserving biodiversity, maintaining ecological balance, and minimizing the urban heat island effect. It is still challenging to extract accurate vegetation covers from aerial imagery using traditional classification approaches, because urban vegetation categories have complex spatial structures and similar spectral properties. Deep neural networks (DNNs) have shown significant improvements in remote sensing image classification outcomes during the last few years. These methods are promising in this domain, yet unreliable for various reasons, such as the use of irrelevant descriptor features in the building of the models and a lack of quality in the labeled images. Explainable AI (XAI) can help us gain insight into these limits and, as a result, adjust the training dataset and model as needed. Thus, in this work, we explain how an explanation model called Shapley additive explanations (SHAP) can be utilized for interpreting the output of a DNN model designed for classifying vegetation covers. We aim not only to produce high-quality vegetation maps, but also to rank the input parameters and select appropriate features for classification. Therefore, we test our method on vegetation mapping from aerial imagery based on spectral and textural features; texture features can help overcome the limitations of poor spectral resolution in aerial imagery for vegetation mapping. The model was capable of obtaining an overall accuracy (OA) of 94.44% for vegetation cover mapping. The conclusions derived from SHAP plots demonstrate the high contribution of features such as Hue, Brightness, GLCM_Dissimilarity, GLCM_Homogeneity, and GLCM_Mean to the output of the proposed model for vegetation mapping. Therefore, the study indicates that existing vegetation mapping strategies based only on spectral characteristics are insufficient to appropriately classify vegetation covers.


Subject(s)
Hot Temperature, Neural Networks, Computer, Cities
10.
Sensors (Basel) ; 21(13)2021 Jun 30.
Article in English | MEDLINE | ID: mdl-34209169

ABSTRACT

Building-damage mapping using remote sensing images plays a critical role in providing quick and accurate information for first responders after major earthquakes. In recent years, there has been increasing interest in generating post-earthquake building-damage maps automatically using different artificial intelligence (AI)-based frameworks. These frameworks are promising, yet not reliable, for several reasons, including but not limited to the site-specific design of the methods, the lack of transparency in the AI model, the lack of quality in the labelled images, and the use of irrelevant descriptor features in building the AI model. Using explainable AI (XAI) can lead us to gain insight into these limitations and, therefore, to modify the training dataset and the model accordingly. This paper proposes the use of SHAP (Shapley additive explanations) to interpret the outputs of a multilayer perceptron (MLP), a machine learning model, and to analyse the impact of each feature descriptor included in the model for building-damage assessment, in order to examine the reliability of the model. In this study, a post-event satellite image from the 2018 Palu earthquake was used. The results show that the MLP can classify collapsed and non-collapsed buildings with an overall accuracy of 84% after removing redundant features. Further, spectral features are found to be more important than texture features in distinguishing collapsed and non-collapsed buildings. Finally, we argue that constructing an explainable model would help us understand the model's decisions to classify buildings as collapsed or non-collapsed and open avenues to build a transferable AI model.
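SHAP approximates Shapley values from cooperative game theory; for a handful of features they can be computed exactly by brute force, which makes the attribution idea concrete. The linear model and zero baseline below are invented for illustration, not the paper's MLP:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance: features absent from a coalition
    are replaced by their baseline value (brute force, small n only)."""
    n = len(x)
    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)
    phis = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coal in combinations(others, size):
                # classic Shapley weight |S|! (n-|S|-1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += w * (value(set(coal) | {i}) - value(set(coal)))
        phis.append(phi)
    return phis

# linear model: each feature's attribution is exactly w_i * (x_i - baseline_i)
model = lambda z: 2 * z[0] + 3 * z[1] - z[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

For a linear model the attributions recover the coefficients, and they always sum to the difference between the prediction and the baseline output, the property SHAP plots rely on.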


Subject(s)
Artificial Intelligence, Earthquakes, Machine Learning, Neural Networks, Computer, Reproducibility of Results
11.
Sensors (Basel) ; 21(20)2021 Oct 18.
Article in English | MEDLINE | ID: mdl-34696109

ABSTRACT

In Australia, droughts are recurring events that tremendously affect environmental, agricultural and socio-economic activities. Southern Queensland is one of the most drought-prone regions in Australia. Consequently, comprehensive drought vulnerability mapping is essential to help develop and implement drought mitigation strategies. This study aimed to prepare a comprehensive drought vulnerability map that combines drought categories using geospatial techniques and to assess the spatial extent of drought vulnerability in southern Queensland. A total of 14 drought-influencing criteria were selected for three drought categories: meteorological, hydrological and agricultural. The criterion-specific spatial layers were prepared and weighted using the fuzzy analytical hierarchy process. Individual drought vulnerability maps were prepared from their specific indices. Finally, the overall drought vulnerability map was generated by combining the indices using spatial analysis. Results revealed that approximately 79.60% of the southern Queensland region is moderately to extremely vulnerable to drought. The findings of this study were validated successfully through the receiver operating characteristic curve (ROC) and area under the curve (AUC) approach using historical drought records. Results can be helpful for decision makers to develop and apply proactive drought mitigation strategies.
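The criterion-weighting step can be illustrated with the crisp (non-fuzzy) analytic hierarchy process, of which the fuzzy variant used above is a generalization; the 3x3 pairwise comparison matrix here is invented:

```python
from math import prod

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise comparison matrix via the
    row geometric mean method, normalized to sum to 1."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# invented, perfectly consistent judgments: A is 2x B and 4x C in importance
matrix = [[1.0, 2.0, 4.0],
          [0.5, 1.0, 2.0],
          [0.25, 0.5, 1.0]]
weights = ahp_weights(matrix)
```

For this consistent matrix the weights come out as 4/7, 2/7 and 1/7; a fuzzy AHP replaces the crisp judgments with triangular fuzzy numbers before the same kind of aggregation.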


Subject(s)
Agriculture, Droughts, Australia, Hydrology, Queensland
12.
Sensors (Basel) ; 21(13)2021 Jul 04.
Article in English | MEDLINE | ID: mdl-34283116

ABSTRACT

Facial recognition has significant applications in security, especially in surveillance technologies. In surveillance systems, recognizing faces captured far away from the camera under various lighting conditions, such as daytime and nighttime, is a challenging task. A system capable of recognizing face images in both daytime and nighttime and at various distances is called Cross-Spectral Cross Distance (CSCD) face recognition. In this paper, we propose a phase-based CSCD face recognition approach. We employed homomorphic filtering as photometric normalization and Band Limited Phase Only Correlation (BLPOC) for image matching. Different from state-of-the-art methods, we directly utilized the phase component of an image, without the need for a feature extraction process. The experiment was conducted using the Long-Distance Heterogeneous Face Database (LDHF-DB). The proposed method was evaluated in three scenarios: (i) cross-spectral face verification at 1 m, (ii) cross-spectral face verification at 60 m, and (iii) cross-spectral face verification where the probe images (near-infrared (NIR) face images) were captured at 1 m and the gallery data (face images) were captured at 60 m. The proposed CSCD method resulted in the best recognition performance among the CSCD baseline approaches, with an Equal Error Rate (EER) of 5.34% and a Genuine Acceptance Rate (GAR) of 93%.


Subject(s)
Facial Recognition, Algorithms, Databases, Factual, Face, Lighting
13.
Sensors (Basel) ; 21(21)2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34770715

ABSTRACT

Iris biometric detection provides contactless authentication, preventing the spread of contagious diseases such as COVID-19. However, these systems are prone to spoofing attacks attempted with the help of contact lenses, replayed video, and print attacks, making them vulnerable and unsafe. This paper proposes an iris liveness detection (ILD) method to mitigate spoofing attacks, using global-level features from Thepade's sorted block truncation coding (TSBTC) and local-level features from the gray-level co-occurrence matrix (GLCM) of the iris image. Thepade's SBTC extracts global color texture content as features, and the GLCM extracts local fine-texture details. The fusion of global and local content presentation may help distinguish between live and non-live iris samples, and this fusion of Thepade's SBTC with GLCM features is considered in the experimental validation of the proposed method. The features are used to train nine assorted machine learning classifiers, including naïve Bayes (NB), decision tree (J48), support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), and ensembles (SVM + RF + NB, SVM + RF + RT, RF + SVM + MLP, J48 + RF + MLP) for ILD. Accuracy, precision, recall, and F-measure are used to evaluate the performance of the proposed ILD variants. The experimentation was carried out on four standard benchmark datasets, and the proposed model showed improved results with the feature fusion approach, giving 99.68% accuracy with the RF + J48 + MLP ensemble of classifiers, immediately followed by the RF algorithm, which gave 95.57%. Improved iris liveness detection will strengthen person validation and thus human-computer interaction and security in the cyber-physical space.
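The local-texture half of the feature fusion, GLCM statistics, can be sketched directly; the tiny 2x3 two-level "iris patch" and the single horizontal pixel offset are invented for illustration:

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset:
    m[i][j] is the probability that a pixel of level i has a neighbour of
    level j at offset (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for y in range(len(image) - dy):
        for x in range(len(image[0]) - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
            pairs += 1
    return [[c / pairs for c in row] for row in m]

def glcm_stats(m):
    """Two common GLCM texture features: dissimilarity and homogeneity."""
    dis = sum(p * abs(i - j) for i, row in enumerate(m) for j, p in enumerate(row))
    hom = sum(p / (1 + abs(i - j)) for i, row in enumerate(m) for j, p in enumerate(row))
    return dis, hom

patch = [[0, 0, 1],
         [0, 1, 1]]
m = glcm(patch, levels=2)
dis, hom = glcm_stats(m)
```

A real ILD pipeline would quantize the iris image to more gray levels and pool several offsets, then feed these statistics (with the TSBTC global features) to the classifiers.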


Subject(s)
COVID-19, Bayes Theorem, Biometry, Humans, Iris, SARS-CoV-2, Support Vector Machine
14.
Sensors (Basel) ; 21(21)2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34770722

ABSTRACT

Studies relating to trends of vegetation, snowfall and temperature in the north-western Himalayan region of India are generally focused on specific areas. Therefore, a proper understanding of regional changes in climate parameters over large time periods is generally absent, which increases the complexity of making appropriate conclusions related to climate change-induced effects in the Himalayan region. This study provides a broad overview of changes in patterns of vegetation, snow covers and temperature in Uttarakhand state of India through bulk processing of remotely sensed Moderate Resolution Imaging Spectroradiometer (MODIS) data, meteorological records and simulated global climate data. Additionally, regression using machine learning algorithms such as Support Vectors and Long Short-term Memory (LSTM) network is carried out to check the possibility of predicting these environmental variables. Results from 17 years of data show an increasing trend of snow-covered areas during pre-monsoon and decreasing vegetation covers during monsoon since 2001. Solar radiation and cloud cover largely control the lapse rate variations. Mean MODIS-derived land surface temperature (LST) observations are in close agreement with global climate data. Future studies focused on climate trends and environmental parameters in Uttarakhand could fairly rely upon the remotely sensed measurements and simulated climate data for the region.


Subject(s)
Environmental Monitoring, Satellite Imagery, Algorithms, Climate Change, Machine Learning
15.
Sensors (Basel) ; 21(19)2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34640976

ABSTRACT

Lung cancer is the leading cause of cancer death and morbidity worldwide. Many studies have shown machine learning models to be effective in detecting lung nodules from chest X-ray images. However, these techniques have yet to be embraced by the medical community due to several practical, ethical, and regulatory constraints stemming from the "black-box" nature of deep learning models. Additionally, most lung nodules visible on chest X-rays are benign; therefore, the narrow task of computer vision-based lung nodule detection cannot be equated to automated lung cancer detection. Addressing both concerns, this study introduces a novel hybrid deep learning and decision tree-based computer vision model, which presents lung cancer malignancy predictions as interpretable decision trees. The deep learning component of this process is trained using a large publicly available dataset on pathological biomarkers associated with lung cancer. These models are then used to infer biomarker scores for chest X-ray images from two independent datasets for which malignancy metadata are available. Next, multi-variate predictive models were mined by fitting shallow decision trees to the malignancy-stratified datasets and interrogating a range of metrics to determine the best model. The best decision tree model achieved a sensitivity and specificity of 86.7% and 80.0%, respectively, with a positive predictive value of 92.9%. Decision trees mined using this method may be considered a starting point for refinement into clinically useful multi-variate lung cancer malignancy models, implemented as a workflow augmentation tool to improve the efficiency of human radiologists.


Subject(s)
Lung Neoplasms, Humans, Lung, Lung Neoplasms/diagnostic imaging, Sensitivity and Specificity, Thorax, X-Rays
16.
J Environ Manage ; 283: 111979, 2021 Apr 01.
Article in English | MEDLINE | ID: mdl-33482453

ABSTRACT

Droughts are slow-moving natural hazards that gradually spread over large areas, capable of extending to continental scales and leading to severe socio-economic damage. A key challenge is developing accurate drought forecast models and understanding a model's capability to examine different drought characteristics. Traditionally, forecasting techniques have used various time-series approaches and machine learning models; however, deep learning methods have not been tested extensively despite their potential to improve our understanding of drought characteristics. The present study uses a deep learning approach, specifically Long Short-Term Memory (LSTM), to predict a commonly used drought measure, the Standardized Precipitation Evapotranspiration Index (SPEI), at two different time scales (SPEI 1, SPEI 3). The model was compared with other common machine learning methods, Random Forests and Artificial Neural Networks, and applied over the New South Wales (NSW) region of Australia, using hydro-meteorological variables as predictors. The drought index and predictor data were collected from the Climatic Research Unit (CRU) dataset spanning 1901 to 2018. We analysed the LSTM forecast results in terms of several drought characteristics (drought intensity, drought category, and spatial variation) to better understand how drought forecasting was improved. Evaluation of the drought intensity forecasting capabilities of the model was based on three statistical metrics: the Coefficient of Determination (R2), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The model achieved R2 values of more than 0.99 for both the SPEI 1 and SPEI 3 cases. The variation in forecast drought categories was studied using a multi-class Receiver Operating Characteristic-based Area Under Curve (ROC-AUC) approach; the analysis revealed AUC values of 0.83 and 0.82 for SPEI 1 and SPEI 3, respectively.
The spatial variation between observed and forecast values was analysed for the summer months of 2016-2018. The findings show an improvement relative to machine learning models for a lead time of 1 month in terms of different drought characteristics. The results from this work can be used for drought mitigation purposes, and different models need to be tested to further enhance forecasting capabilities.
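The three drought-intensity metrics used above (R2, RMSE, MAE) are standard and easy to reproduce; the observed and forecast values below are invented for illustration:

```python
from math import sqrt

def evaluate(observed, forecast):
    """Coefficient of determination (R^2), RMSE and MAE for a forecast."""
    n = len(observed)
    mean_obs = sum(observed) / n
    sse = sum((o - f) ** 2 for o, f in zip(observed, forecast))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - sse / sst            # 1 = perfect, 0 = no better than the mean
    rmse = sqrt(sse / n)            # penalizes large errors more heavily
    mae = sum(abs(o - f) for o, f in zip(observed, forecast)) / n
    return r2, rmse, mae

r2, rmse, mae = evaluate([-1.0, 0.0, 1.0], [-0.9, 0.1, 0.9])
```

For SPEI-style standardized indices the errors are in index units, so an RMSE of 0.1 is small relative to the usual drought-category thresholds.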


Subject(s)
Droughts, Memory, Short-Term, Australia, Forecasting, Neural Networks, Computer, New South Wales
17.
Sensors (Basel) ; 20(9)2020 May 03.
Article in English | MEDLINE | ID: mdl-32375265

ABSTRACT

In hilly areas across the world, landslides have been an increasing menace, causing loss of lives and property. The damage caused by landslides in the recent past calls for attention from authorities for disaster risk reduction measures. Development of an effective landslide early warning system (LEWS) is an important risk reduction approach by which the authorities and the general public can be forewarned of future landslide events. The Indian Himalayas are among the most landslide-prone areas in the world, and attempts have been made to determine the rainfall thresholds for the possible occurrence of landslides in the region. The established thresholds proved effective in predicting most of the landslide events; the major drawback observed is the increased number of false alarms. For an LEWS to be successfully operational, it is essential to reduce the number of false alarms using physical monitoring. Therefore, to improve the efficiency of the LEWS and to make the thresholds serviceable, the slopes are monitored using a sensor network. In this study, micro-electro-mechanical systems (MEMS)-based tilt sensors and volumetric water content sensors were used to monitor active slopes in Chibo, in the Darjeeling Himalayas. The Internet of Things (IoT)-based network uses wireless modules for communication from individual sensors to the data logger and from the data logger to an internet database. The slopes are on the banks of mountain rivulets (jhoras) known as the sinking zones of Kalimpong. The locality is highly affected by surface displacements in the monsoon season due to incessant rains and improper drainage. Real-time field monitoring of the study area is being conducted for the first time to evaluate the applicability of tilt sensors in the region. The sensors are embedded within the soil to measure tilting angles and moisture content at shallow depths.
The slopes were monitored continuously during three monsoon seasons (2017-2019), and the data from the sensors were compared with field observations and rainfall data. The relationships among change in tilt rate, volumetric water content, and rainfall are explored in the study, and the records demonstrate the importance of considering long-term rainfall conditions rather than immediate rainfall events when developing rainfall thresholds for the region.

18.
Sensors (Basel) ; 20(6)2020 Mar 19.
Article in English | MEDLINE | ID: mdl-32204505

ABSTRACT

Four state-of-the-art metaheuristic algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and ant colony optimization (ACO), are applied to an adaptive neuro-fuzzy inference system (ANFIS) for spatial prediction of landslide susceptibility in Qazvin Province (Iran). To this end, the landslide inventory map, composed of 199 identified landslides, is divided into training and testing landslides with a 70:30 ratio. To create the spatial database, thirteen landslide conditioning factors are considered within a geographic information system (GIS). Notably, the spatial interaction between the landslides and the mentioned conditioning factors is analyzed by means of frequency ratio (FR) theory. After the optimization process, it was shown that the DE-based model reaches the best response more quickly than the other ensembles. The landslide susceptibility maps were developed, and the accuracy of the models was evaluated by a ranking system based on the calculated area under the receiver operating characteristic curve (AUROC), mean absolute error (MAE), and mean square error (MSE) accuracy indices. According to the results, GA-ANFIS with a total ranking score (TRS) of 24 presented the most accurate prediction, followed by PSO-ANFIS (TRS = 17), DE-ANFIS (TRS = 13), and ACO-ANFIS (TRS = 6). Given these results, the developed landslide susceptibility maps can be applied to future planning and decision making in the area.

19.
Sensors (Basel) ; 20(2)2020 Jan 07.
Article in English | MEDLINE | ID: mdl-31936038

ABSTRACT

Gully erosion causes substantial losses; predicting it with accurate models is therefore essential to avoid damage from gully development and to support sustainable development. This research investigates the predictive performance of seven multiple-criteria decision-making (MCDM), statistical, and machine learning (ML)-based models, and their ensembles, for gully erosion susceptibility mapping (GESM). A case study of the Dasjard River watershed, Iran, uses a database of 306 gully head cuts and 15 conditioning factors. The database was divided 70:30 to train and verify the models. Their performance was assessed with the area under the prediction rate curve (AUPRC), the area under the success rate curve (AUSRC), accuracy, and kappa. Results show that slope is key to gully formation. The maximum entropy (ME) ML model has the best performance (AUSRC = 0.947, AUPRC = 0.948, accuracy = 0.849 and kappa = 0.699). The second best is the random forest (RF) model (AUSRC = 0.965, AUPRC = 0.932, accuracy = 0.812 and kappa = 0.624). By contrast, the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) model was the least effective (AUSRC = 0.871, AUPRC = 0.867, accuracy = 0.758 and kappa = 0.516). RF increased the performance of the statistical index (SI) and frequency ratio (FR) statistical models. Furthermore, combining a generalized linear model (GLM) with functional data analysis (FDA) improved their performance. The results demonstrate that combining geographic information systems (GIS) with remote sensing (RS)-based ML models can successfully map gully erosion susceptibility, particularly in low-income and developing regions. This method can aid the analyses and decisions of natural resource managers and local planners, reducing damage by focusing attention and resources on the areas most prone to damaging gully erosion.
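Cohen's kappa, one of the accuracy indices used to rank the GESM models, corrects raw accuracy for chance agreement. A minimal sketch from a binary confusion matrix, with invented counts:

```python
# Sketch: Cohen's kappa from a binary confusion matrix (counts are
# hypothetical, not from the study). Kappa discounts the agreement a
# classifier would achieve by chance alone.

def cohens_kappa(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    po = (tp + tn) / n                          # observed agreement (accuracy)
    p_pos = ((tp + fp) / n) * ((tp + fn) / n)   # chance agreement on positives
    p_neg = ((fn + tn) / n) * ((fp + tn) / n)   # chance agreement on negatives
    pe = p_pos + p_neg
    return (po - pe) / (1 - pe)

# 80% accuracy on a balanced sample yields kappa = 0.6, since half the
# agreement is expected by chance.
print(round(cohens_kappa(tp=40, fp=10, fn=10, tn=40), 3))  # 0.6
```

This is why the study reports kappa alongside accuracy: a model can score high accuracy on an imbalanced susceptibility map while its kappa stays low.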

20.
Sensors (Basel) ; 20(16)2020 Aug 05.
Article in English | MEDLINE | ID: mdl-32764354

ABSTRACT

Earthquake prediction is a popular topic among earth scientists; however, the task is challenging and exhibits uncertainty, so probability assessment is indispensable. In recent decades, the volume of seismic data has grown exponentially, adding scalability issues to probability assessment models. Several machine learning methods, such as deep learning, have been applied to large-scale image, video, and text processing; however, they have rarely been utilized in earthquake probability assessment. The present research therefore leveraged advances in deep learning to generate scalable earthquake probability maps using a convolutional neural network (CNN). Nine indicators served as inputs: proximity to faults, fault density, lithology with an amplification factor value, slope angle, elevation, magnitude density, epicenter density, distance from the epicenter, and peak ground acceleration (PGA) density. The outputs 0 and 1 corresponded to the non-earthquake and earthquake classes, respectively. The proposed classification model was tested at the country level on datasets gathered to update the probability map for the Indian subcontinent, using statistical measures such as overall accuracy (OA), F1 score, recall, and precision. The OA values of the model on the training and testing datasets were 96% and 92%, respectively. The model also achieved precision, recall, and F1 score values of 0.88, 0.99, and 0.93, respectively, for the positive (earthquake) class on the testing dataset. The model predicted two classes, with very-high-probability (712,375 km2) and high-probability (591,240.5 km2) areas comprising 19.8% and 16.43% of the mapped zones, respectively. Results indicated that the proposed model is superior to traditional methods for earthquake probability assessment in terms of accuracy. Aside from facilitating the prediction of pixel values for probability assessment, the proposed model can also help urban planners and disaster managers make appropriate decisions regarding future plans and earthquake management.
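The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall, and plugging in the abstract's precision (0.88) and recall (0.99) recovers its F1 of ~0.93. A one-function sketch:

```python
# Sketch: F1 as the harmonic mean of precision and recall, used here to
# check the metrics reported for the positive (earthquake) class.

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.88, 0.99), 2))  # 0.93
```

The high recall (0.99) at a lower precision (0.88) suggests the model is tuned to miss very few true earthquake pixels at the cost of some false alarms, a sensible trade-off for hazard mapping.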
