Results 1 - 20 of 36
1.
Commun Psychol ; 2(1): 1, 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-39242855

ABSTRACT

The use of language is innately political, often a vehicle of cultural identity and the basis for nation building. Here, we examine language choice and tweeting activity of Ukrainian citizens based on 4,453,341 geo-tagged tweets from 62,712 users before and during the Russian war in Ukraine, from January 2020 to October 2022. Using statistical models, we disentangle sample effects, arising from the in- and outflux of users on Twitter (now X), from behavioural effects, arising from behavioural changes of the users. We observe a steady shift from the Russian language towards Ukrainian already before the war, which drastically speeds up with its outbreak. We attribute these shifts in large part to users' behavioural changes. Notably, our analysis shows that more than half of the Russian-tweeting users switch towards Ukrainian with the Russian invasion. We interpret these findings as users' conscious choice towards a more Ukrainian (online) identity and self-definition of being Ukrainian.
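The distinction between sample effects and behavioural effects can be illustrated with a toy calculation in pandas (a deliberately simplified sketch under assumed column names and data, not the statistical models used in the study): restricting the comparison to users observed both before and after a cut-off removes the in-/outflux component, so the remaining shift is behavioural.

```python
import pandas as pd

# Hypothetical tweet table: one row per tweet, with user id, language, and period.
tweets = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 4],
    "lang":    ["ru", "uk", "ru", "ru", "uk", "uk"],
    "period":  ["pre", "post", "pre", "post", "pre", "post"],
})

# The overall Ukrainian share per period mixes sample effects (who is present)
# with behavioural effects (how the same users tweet).
overall = tweets.groupby("period")["lang"].apply(lambda s: (s == "uk").mean())

# Restricting to users observed in both periods removes the in-/outflux
# (sample) component, leaving a crude estimate of the behavioural shift.
periods_per_user = tweets.groupby("user_id")["period"].nunique()
stayers = periods_per_user[periods_per_user == 2].index
behavioural = (
    tweets[tweets["user_id"].isin(stayers)]
    .groupby("period")["lang"]
    .apply(lambda s: (s == "uk").mean())
)
print(overall, behavioural, sep="\n")
```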

2.
Sci Rep ; 14(1): 19290, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39164356

ABSTRACT

The impact of climate change and urbanization has increased the risk of flooding. During the UN Climate Change Conference 28 (COP 28), an agreement was reached to establish "The Loss and Damage Fund" to assist low-income countries impacted by climate change. However, allocating the resources required for post-flood reconstruction and reimbursement is challenging due to the limited availability of data and the absence of a comprehensive tool. Here, we propose a novel resource allocation framework based on remote sensing and geospatial data near the flood peak, such as buildings and population. The quantification of resource distribution utilizes an exposure index for each municipality, which interacts with various drivers, including flood hazard drivers, building exposure, and population exposure. The proposed framework assesses the flood extent using pre- and post-flood Sentinel-1 Synthetic Aperture Radar (SAR) data. To demonstrate the effectiveness of this framework, an analysis was conducted on the flood that occurred in the Thessaly region of Greece in September 2023. The study revealed that the municipality of Palamas has the highest need for resource allocation, with an exposure index rating of 5/8. Any government can use this framework for rapid decision-making and to expedite post-flood recovery.
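A minimal sketch of the two building blocks described above, under assumed thresholds and equal driver weights (the study's actual drivers, weighting, and SAR processing chain are more involved): backscatter-drop change detection on pre-/post-flood Sentinel-1 data, and a normalized per-municipality exposure index.

```python
import numpy as np

# Hypothetical pre- and post-flood Sentinel-1 backscatter (dB) for one municipality.
rng = np.random.default_rng(0)
pre_db = rng.normal(-12.0, 2.0, size=(100, 100))
post_db = pre_db.copy()
post_db[30:60, 20:80] -= 6.0  # open water after the flood appears darker

# Simple change detection: a backscatter drop beyond a threshold marks flooded cells.
flood_mask = (pre_db - post_db) > 3.0

# Exposure index from normalized drivers; values and weights are placeholders.
drivers = {
    "flood_fraction": flood_mask.mean(),   # share of the municipality flooded
    "building_exposure": 0.40,             # share of building footprint flooded
    "population_exposure": 0.25,           # share of population in flooded cells
}
weights = {name: 1.0 for name in drivers}  # equal weights as an assumption
exposure_index = sum(weights[k] * v for k, v in drivers.items()) / sum(weights.values())
print(f"exposure index: {exposure_index:.2f}")
```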

3.
Water Res ; 264: 122162, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39126745

ABSTRACT

Large-scale hydrodynamic models generally rely on fixed-resolution spatial grids and fixed model parameters, and they incur a high computational cost. This limits their ability to accurately forecast flood crests and issue time-critical hazard warnings. In this work, we build a fast, stable, accurate, resolution-invariant, and geometry-adaptive flood modeling and forecasting framework that can perform at large scales, namely FloodCast. The framework comprises two main modules: multi-satellite observation and hydrodynamic modeling. In the multi-satellite observation module, a real-time unsupervised change detection method and a rainfall processing and analysis tool are proposed to harness the full potential of multi-satellite observations in large-scale flood prediction. In the hydrodynamic modeling module, a geometry-adaptive physics-informed neural solver (GeoPINS) is introduced, benefiting from the absence of a requirement for training data in physics-informed neural networks (PINNs) and featuring a fast, accurate, and resolution-invariant architecture with Fourier neural operators. To adapt to complex river geometries, we reformulate PINNs in a geometry-adaptive space. GeoPINS demonstrates impressive performance on popular partial differential equations across regular and irregular domains. Building upon GeoPINS, we propose a sequence-to-sequence GeoPINS model to handle long-term temporal series and extensive spatial domains in large-scale flood modeling. This model employs sequence-to-sequence learning and hard-encoding of boundary conditions. Next, we establish a benchmark dataset for the 2022 Pakistan flood using a widely accepted finite difference numerical solution to assess various flood simulation methods. Finally, we validate the model in three dimensions - flood inundation range, depth, and transferability of spatiotemporal downscaling - utilizing SAR-based flood data, traditional hydrodynamic benchmarks, and concurrent optical remote sensing images. Traditional hydrodynamics and sequence-to-sequence GeoPINS exhibit exceptional agreement during high water levels, while comparative assessments with SAR-based flood depth data show that sequence-to-sequence GeoPINS outperforms traditional hydrodynamics, with smaller simulation errors. The experimental results for the 2022 Pakistan flood demonstrate that the proposed method enables high-precision, large-scale flood modeling with an average MAPE of 14.93% and an average Mean Absolute Error (MAE) of 0.0610 m for 14-day water depth simulations, while facilitating real-time flood hazard forecasting using reliable precipitation data.


Subject(s)
Floods; Forecasting; Models, Theoretical; Hydrodynamics; Neural Networks, Computer; Rivers
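For readers unfamiliar with physics-informed neural networks, the sketch below shows the core mechanism that GeoPINS builds on: a network trained by minimizing the residual of a governing PDE, with no labelled data required. It uses a toy 1-D advection equation rather than the shallow-water equations and omits the Fourier neural operator, geometry-adaptive, and sequence-to-sequence components, so it is an illustrative assumption rather than the paper's solver.

```python
import torch

# Toy physics-informed residual for u_t + c * u_x = 0 on collocation points.
c = 1.0
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(x, t):
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    return u_t + c * u_x

x = torch.rand(256, 1)   # random collocation points in space
t = torch.rand(256, 1)   # and time
loss = pde_residual(x, t).pow(2).mean()  # physics loss; no labelled data needed
loss.backward()          # gradients flow into the network parameters
```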
4.
Biomed Environ Sci ; 37(5): 511-520, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38843924

ABSTRACT

Objective: This study employs the Geographically and Temporally Weighted Regression (GTWR) model to assess the impact of meteorological elements and imported cases on dengue fever outbreaks, emphasizing the spatial-temporal variability of these factors in border regions. Methods: We conducted a descriptive analysis of dengue fever's temporal-spatial distribution in Yunnan border areas. Utilizing annual data from 2013 to 2019, with each county in the Yunnan border serving as a spatial unit, we constructed a GTWR model to investigate the determinants of dengue fever and their spatio-temporal heterogeneity in this region. Results: The GTWR model, proving more effective than Ordinary Least Squares (OLS) analysis, identified significant spatial and temporal heterogeneity in factors influencing dengue fever's spread along the Yunnan border. Notably, the GTWR model revealed a substantial variation in the relationship between indigenous dengue fever incidence, meteorological variables, and imported cases across different counties. Conclusion: In the Yunnan border areas, local dengue incidence is affected by temperature, humidity, precipitation, wind speed, and imported cases, with these factors' influence exhibiting notable spatial and temporal variation.


Subject(s)
Dengue; Dengue/epidemiology; China/epidemiology; Humans; Spatio-Temporal Analysis; Incidence; Disease Outbreaks; Spatial Regression
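A simplified illustration of the GTWR idea (hypothetical data, kernel, and bandwidths, not the calibration used in the study): every observation is weighted by a spatio-temporal kernel centred on the target county-year, and local coefficients come from a weighted least-squares fit.

```python
import numpy as np

def gtwr_coefficients(X, y, coords, times, u, t, h_s=50.0, h_t=2.0):
    """Local weighted least squares at location u and time t with a Gaussian
    spatio-temporal kernel; h_s (km) and h_t (years) are assumed bandwidths."""
    d_s = np.linalg.norm(coords - u, axis=1)      # spatial distances
    d_t = np.abs(times - t)                       # temporal distances
    w = np.exp(-(d_s / h_s) ** 2) * np.exp(-(d_t / h_t) ** 2)
    Xw = X * w[:, None]
    beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
    return beta                                   # local coefficients

# Hypothetical inputs: intercept plus 3 meteorological drivers for 20 county-years.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 3))])
y = rng.normal(size=20)
coords = rng.uniform(0, 200, size=(20, 2))
times = rng.integers(2013, 2020, size=20)
print(gtwr_coefficients(X, y, coords, times, u=np.array([100.0, 100.0]), t=2016))
```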
5.
Zhen Ci Yan Jiu ; 49(2): 155-163, 2024 Feb 25.
Article in English, Chinese | MEDLINE | ID: mdl-38413036

ABSTRACT

OBJECTIVES: To investigate the mechanism of electroacupuncture (EA) at "Neiguan" (PC6) in improving myocardial electrical remodeling in rats with acute myocardial infarction (AMI) by enhancing transient outward potassium current. METHODS: A total of 30 male SD rats were randomly divided into control, model and EA groups, with 10 rats in each group. The AMI model was established by subcutaneous injection with isoprenaline (ISO, 85 mg/kg). EA was applied to left PC6 for 20 min, once daily for 5 days. Electrocardiogram (ECG) was recorded after treatment. TTC staining was used to observe myocardial necrosis. HE staining was used to observe the pathological morphology of myocardial tissue and measure the cross-sectional area of myocardium. Potassium ion-related genes in myocardial tissue were detected by RNA sequencing. The mRNA and protein expressions of Kchip2 and Kv4.2 in myocardial tissue were detected by real-time fluorescence quantitative PCR and Western blot, respectively. RESULTS: Compared with the control group, cardiomyocyte cross-sectional area in the model group was significantly increased (P<0.01), the ST segment was significantly elevated (P<0.01), and QT, QTc, QTd and QTcd were all significantly increased (P<0.05, P<0.01). After EA treatment, cardiomyocyte cross-sectional area was significantly decreased (P<0.01), the ST segment was significantly reduced (P<0.01), and the QT, QTc, QTcd and QTd were significantly decreased (P<0.01, P<0.05). RNA sequencing identified a total of 20 potassium ion-related genes co-expressed by the 3 groups. Among them, Kchip2 expression was up-regulated most notably in the EA group. Compared with the control group, the mRNA and protein expressions of Kchip2 and Kv4.2 in the myocardial tissue of the model group were significantly decreased (P<0.01, P<0.05), while they were increased in the EA group (P<0.01, P<0.05). CONCLUSIONS: EA may improve myocardial electrical remodeling in rats with myocardial infarction, which may be related to its up-regulation of the expressions of Kchip2 and Kv4.2.


Subject(s)
Atrial Remodeling; Electroacupuncture; Myocardial Infarction; Myocardial Ischemia; Rats; Male; Animals; Myocardial Ischemia/therapy; Rats, Sprague-Dawley; Acupuncture Points; Myocardium/metabolism; Myocardial Infarction/genetics; Myocardial Infarction/therapy; Potassium/metabolism; RNA, Messenger/metabolism
6.
EClinicalMedicine ; 67: 102391, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38274117

ABSTRACT

Background: Clinical appearance and high-frequency ultrasound (HFUS) are indispensable for diagnosing skin diseases by providing internal and external information. However, their complex combination brings challenges for primary care physicians and dermatologists. Thus, we developed a deep multimodal fusion network (DMFN) model combining analysis of clinical close-up and HFUS images for binary and multiclass classification in skin diseases. Methods: Between Jan 10, 2017, and Dec 31, 2020, the DMFN model was trained and validated using 1269 close-ups and 11,852 HFUS images from 1351 skin lesions. The monomodal convolutional neural network (CNN) model was trained and validated with the same close-up images for comparison. Subsequently, we conducted a prospective, multicenter study in China. Both CNN models were tested prospectively on 422 cases from 4 hospitals and compared with the results from human raters (general practitioners, general dermatologists, and dermatologists specialized in HFUS). The performance of binary classification (benign vs. malignant) and multiclass classification (the specific diagnoses of 17 types of skin diseases), measured by the area under the receiver operating characteristic curve (AUC), was evaluated. This study is registered with www.chictr.org.cn (ChiCTR2300074765). Findings: The performance of the DMFN model (AUC, 0.876) was superior to that of the monomodal CNN model (AUC, 0.697) in the binary classification (P = 0.0063), and also better than that of general practitioners (AUC, 0.651; P = 0.0025) and general dermatologists (AUC, 0.838; P = 0.0038). By integrating close-up and HFUS images, the DMFN model attained an almost identical performance in comparison to dermatologists (AUC, 0.876 vs. AUC, 0.891; P = 0.0080). For the multiclass classification, the DMFN model (AUC, 0.707) exhibited superior prediction performance compared with general dermatologists (AUC, 0.514; P = 0.0043) and dermatologists specialized in HFUS (AUC, 0.640; P = 0.0083), respectively. Compared to dermatologists specialized in HFUS, the DMFN model showed better or comparable performance in diagnosing 9 of the 17 skin diseases. Interpretation: The DMFN model combining analysis of clinical close-up and HFUS images exhibited satisfactory performance in the binary and multiclass classification compared with the dermatologists. It may be a valuable tool for general dermatologists and primary care providers. Funding: This work was supported in part by the National Natural Science Foundation of China and the Clinical Research Project of Shanghai Skin Disease Hospital.
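The fusion of a clinical close-up branch and an HFUS branch can be sketched as a two-branch network with feature concatenation (a minimal assumed architecture for illustration; the published DMFN is more elaborate and trained on paired lesion data).

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Late-fusion sketch: separate encoders for close-up and HFUS images,
    concatenated features, shared classification head."""
    def __init__(self, n_classes=17):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.clinical = encoder()
        self.hfus = encoder()
        self.head = nn.Linear(64, n_classes)   # 32 features per branch

    def forward(self, clinical_img, hfus_img):
        z = torch.cat([self.clinical(clinical_img), self.hfus(hfus_img)], dim=1)
        return self.head(z)

model = TwoBranchFusion()
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 17])
```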

7.
Article in English | MEDLINE | ID: mdl-37721886

ABSTRACT

Image classification plays an important role in remote sensing. Earth observation (EO) has inevitably arrived in the big data era, but the high demand for computational power has already become a bottleneck for analyzing large amounts of remote sensing data with sophisticated machine learning models. Exploiting quantum computing might help tackle this challenge by leveraging quantum properties. This article introduces a hybrid quantum-classical convolutional neural network (QC-CNN) that applies quantum computing to effectively extract high-level critical features from EO data for classification purposes. In addition, the adoption of the amplitude encoding technique reduces the required quantum bit resources. The complexity analysis indicates that the proposed model can accelerate the convolutional operation in comparison with its classical counterpart. The model's performance is evaluated on different EO benchmarks, including Overhead-MNIST, So2Sat LCZ42, PatternNet, RSI-CB256, and NaSC-TG2, using the TensorFlow Quantum platform; it achieves better performance and higher generalizability than its classical counterpart, which verifies the validity of the QC-CNN model on EO data classification tasks.
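The amplitude-encoding step mentioned above can be illustrated in a few lines: 2**n pixel values are packed into the amplitudes of an n-qubit state, which is where the saving in quantum bit resources comes from. Circuit construction in TensorFlow Quantum is omitted; this sketch only shows the encoding arithmetic.

```python
import numpy as np

def amplitude_encode(patch):
    """Map an image patch to a unit-norm amplitude vector of length 2**n,
    i.e., the state that n qubits would carry under amplitude encoding."""
    flat = patch.astype(float).ravel()
    n_qubits = int(np.ceil(np.log2(flat.size)))
    padded = np.zeros(2 ** n_qubits)
    padded[:flat.size] = flat
    norm = np.linalg.norm(padded)
    state = padded / norm if norm > 0 else padded
    return state, n_qubits

state, n_qubits = amplitude_encode(np.random.rand(4, 4))
print(n_qubits, np.isclose(np.sum(state ** 2), 1.0))  # 4 qubits, unit norm
```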

8.
EClinicalMedicine ; 60: 102027, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37333662

ABSTRACT

Background: Identifying patients with clinically significant prostate cancer (csPCa) before biopsy helps reduce unnecessary biopsies and improve patient prognosis. The diagnostic performance of traditional transrectal ultrasound (TRUS) for csPCa is relatively limited. This study aimed to develop a high-performance convolutional neural network (CNN) model (P-Net) based on a TRUS video of the entire prostate and investigate its efficacy in identifying csPCa. Methods: Between January 2021 and December 2022, this study prospectively evaluated 832 patients from four centres who underwent prostate biopsy and/or radical prostatectomy. All patients had a standardised TRUS video of the whole prostate. A two-dimensional CNN (2D P-Net) and three-dimensional CNN (3D P-Net) were constructed using the training cohort (559 patients) and tested on the internal validation cohort (140 patients) as well as on the external validation cohort (133 patients). The performance of 2D P-Net and 3D P-Net in predicting csPCa was assessed in terms of the area under the receiver operating characteristic curve (AUC), biopsy rate, and unnecessary biopsy rate, and compared with the TRUS 5-point Likert score system as well as multiparametric magnetic resonance imaging (mp-MRI) prostate imaging reporting and data system (PI-RADS) v2.1. Decision curve analyses (DCAs) were used to determine the net benefits associated with their use. The study is registered at https://www.chictr.org.cn with the unique identifier ChiCTR2200064545. Findings: The diagnostic performance of 3D P-Net (AUC: 0.85-0.89) was superior to the TRUS 5-point Likert score system (AUC: 0.71-0.78, P = 0.003-0.040), and similar to the mp-MRI PI-RADS v2.1 score system interpreted by experienced radiologists (AUC: 0.83-0.86, P = 0.460-0.732) and 2D P-Net (AUC: 0.79-0.86, P = 0.066-0.678) in the internal and external validation cohorts. The biopsy rate decreased from 40.3% (TRUS 5-point Likert score system) and 47.6% (mp-MRI PI-RADS v2.1 score system) to 35.5% (2D P-Net) and 34.0% (3D P-Net). The unnecessary biopsy rate decreased from 38.1% (TRUS 5-point Likert score system) and 35.2% (mp-MRI PI-RADS v2.1 score system) to 32.0% (2D P-Net) and 25.8% (3D P-Net). 3D P-Net yielded the highest net benefit according to the DCAs. Interpretation: 3D P-Net based on a prostate grayscale TRUS video achieved satisfactory performance in identifying csPCa and potentially reducing unnecessary biopsies. Further studies on how AI models can best be integrated into routine practice, as well as randomized controlled trials demonstrating the value of these models in real clinical applications, are warranted. Funding: The National Natural Science Foundation of China (Grants 82202174 and 82202153), the Science and Technology Commission of Shanghai Municipality (Grants 18441905500 and 19DZ2251100), Shanghai Municipal Health Commission (Grants 2019LJ21 and SHSLCZDZK03502), Shanghai Science and Technology Innovation Action Plan (21Y11911200), Fundamental Research Funds for the Central Universities (ZD-11-202151), and the Scientific Research and Development Fund of Zhongshan Hospital of Fudan University (Grant 2022ZSQD07).
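The decision curve analyses mentioned above rest on the standard net-benefit quantity, sketched here with hypothetical labels and predicted probabilities (not the study's data): true positives are credited and false positives are penalized by the odds of the decision threshold.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit at a probability threshold, the quantity plotted in a
    decision curve analysis."""
    pred = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

# Hypothetical biopsy decisions: csPCa labels and model probabilities.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
p = np.clip(0.3 * y + rng.uniform(0, 0.7, size=200), 0, 1)
print([round(net_benefit(y, p, t), 3) for t in (0.1, 0.2, 0.3)])
```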

9.
ISPRS J Photogramm Remote Sens ; 196: 178-196, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36824311

ABSTRACT

High-resolution satellite images can provide abundant, detailed spatial information for land cover classification, which is particularly important for studying the complicated built environment. However, due to the complex land cover patterns, the costly training sample collections, and the severe distribution shifts of satellite imagery caused by, e.g., geographical differences or acquisition conditions, few studies have applied high-resolution images to land cover mapping in detailed categories at large scale. To fill this gap, we present a large-scale land cover dataset, Five-Billion-Pixels. It contains more than 5 billion labeled pixels of 150 high-resolution Gaofen-2 (4 m) satellite images, annotated in a 24-category system covering artificially constructed, agricultural, and natural classes. In addition, we propose a deep-learning-based unsupervised domain adaptation approach that can transfer classification models trained on a labeled dataset (referred to as the source domain) to unlabeled data (referred to as the target domain) for large-scale land cover mapping. Specifically, we introduce an end-to-end Siamese network employing dynamic pseudo-label assignment and a class balancing strategy to perform adaptive domain joint learning. To validate the generalizability of our dataset and the proposed approach across different sensors and different geographical regions, we carry out land cover mapping on five megacities in China and six cities in five other Asian countries using PlanetScope (3 m), Gaofen-1 (8 m), and Sentinel-2 (10 m) satellite images. Over a total study area of 60,000 km2, the experiments show promising results even though the input images are entirely unlabeled. The proposed approach, trained with the Five-Billion-Pixels dataset, enables high-quality and detailed land cover mapping across the whole country of China and some other Asian countries at meter resolution.
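One ingredient of the described approach, confidence-based pseudo-label assignment with a class-balancing adjustment, can be sketched as follows (the thresholding scheme and numbers are assumptions, not the paper's exact strategy).

```python
import torch

def assign_pseudo_labels(probs, base_threshold=0.9):
    """Keep predictions above a class-dependent confidence threshold; classes
    that are rarely predicted get a relaxed threshold as a crude balancing step."""
    conf, labels = probs.max(dim=1)
    n_classes = probs.shape[1]
    class_freq = torch.bincount(labels, minlength=n_classes).float()
    class_freq = class_freq / class_freq.sum().clamp(min=1.0)
    thresholds = base_threshold * (0.5 + 0.5 * class_freq / class_freq.max().clamp(min=1e-8))
    keep = conf >= thresholds[labels]
    return labels[keep], keep

probs = torch.softmax(torch.randn(1000, 24), dim=1)  # 24-category system
pseudo_labels, mask = assign_pseudo_labels(probs)
print(pseudo_labels.shape, mask.float().mean())      # kept labels and keep rate
```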

10.
ISPRS J Photogramm Remote Sens ; 195: 192-203, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36726963

ABSTRACT

Remote sensing (RS) image scene classification has received increasing attention for its broad application prospects. Conventional fully-supervised approaches usually require a large amount of manually-labeled data. As more and more RS images become available, how to make full use of these unlabeled data is becoming an urgent topic. Semi-supervised learning, which uses a few labeled data to guide the self-training of numerous unlabeled data, is an intuitive strategy. However, it is hard to apply to cross-dataset (i.e., cross-domain) scene classification due to the significant domain shift among different datasets. To this end, semi-supervised domain adaptation (SSDA), which can reduce the domain shift and further transfer knowledge from a fully-labeled RS scene dataset (source domain) to a limited-labeled RS scene dataset (target domain), would be a feasible solution. In this paper, we propose an SSDA method termed bidirectional sample-class alignment (BSCA) for RS cross-domain scene classification. BSCA consists of two alignment strategies, unsupervised alignment (UA) and supervised alignment (SA), both of which can contribute to decreasing domain shift. UA concentrates on reducing the maximum mean discrepancy across domains, with no demand for class labels. In contrast, SA aims to achieve distribution alignment both from source samples to the associated target class centers and from target samples to the associated source class centers, with awareness of their classes. To validate the effectiveness of the proposed method, extensive ablation, comparison, and visualization experiments are conducted on an RS-SSDA benchmark built upon four widely-used RS scene classification datasets. Experimental results indicate that, in comparison with some state-of-the-art methods, our BSCA achieves superior cross-domain classification performance with compact feature representation and a low-entropy classification boundary. Our code will be available at https://github.com/hw2hwei/BSCA.
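The unsupervised alignment (UA) branch reduces the maximum mean discrepancy (MMD) between domains; a generic kernel-MMD estimate is sketched below (the kernel choice and bandwidth are assumptions, not the paper's implementation).

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Biased MMD estimate with a Gaussian (RBF) kernel between two feature sets."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

source = torch.randn(64, 128)          # source-domain features
target = torch.randn(64, 128) + 0.5    # shifted target-domain features
print(rbf_mmd(source, target).item())  # larger values indicate larger domain shift
```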

11.
J Integr Med ; 21(1): 89-98, 2023 01.
Article in English | MEDLINE | ID: mdl-36424268

ABSTRACT

OBJECTIVE: This study explores the effects of electroacupuncture (EA) at the governing vessel (GV) on proteomic changes in the hippocampus of rats with cognitive impairment. METHODS: Healthy male rats were randomly divided into 3 groups: sham, model and EA. Cognitive impairment was induced by left middle cerebral artery occlusion in the model and EA groups. Rats in the EA group were treated with EA at Shenting (GV24) and Baihui (GV20) for 7 d. Neurological deficit was scored using the Longa scale, learning and memory ability was assessed using the Morris water maze (MWM) test, and proteomic profiles in the hippocampus were analyzed using protein-labeling technology based on isobaric tags for relative and absolute quantitation (iTRAQ). Western blot (WB) analysis was used to detect the proteins and validate the iTRAQ results. RESULTS: Compared with the model group, the neurological deficit score was significantly reduced and the escape latency in the MWM test was significantly shortened, while the number of platform crossings increased in the EA group. A total of 2872 proteins were identified by iTRAQ. Differentially expressed proteins (DEPs) were identified between the groups: 92 proteins were upregulated and 103 were downregulated in the model group compared with the sham group, while 142 proteins were upregulated and 126 were downregulated in the EA group compared with the model group. Most of the DEPs were involved in oxidative phosphorylation, glycolipid metabolism and synaptic transmission. Furthermore, we verified 4 DEPs using WB technology. Although the WB results were not exactly the same as the iTRAQ results, the expression trends of the DEPs were consistent. The upregulation of heat-shock protein β1 (Hspb1) was the highest in the EA group compared to the model group. CONCLUSION: EA can induce proteomic changes in the hippocampus of rats with cognitive impairment. Hspb1 may be involved in the molecular mechanism by which acupuncture improves cognitive impairment.


Subject(s)
Cognitive Dysfunction; Electroacupuncture; Rats; Male; Animals; Rats, Sprague-Dawley; Proteomics; Cognitive Dysfunction/therapy; Hippocampus
12.
Sci Data ; 9(1): 715, 2022 11 19.
Article in English | MEDLINE | ID: mdl-36402846

ABSTRACT

Obtaining a dynamic population distribution is key to many decision-making processes such as urban planning, disaster management and, most importantly, helping governments better allocate socio-technical supply. Achieving these objectives requires good population data. The traditional method of collecting population data through the census is expensive and tedious. In recent years, statistical and machine learning methods have been developed to estimate population distribution. Most of these methods use datasets that are either developed on a small scale or not yet publicly available. Thus, the development and evaluation of new methods become challenging. We fill this gap by providing a comprehensive dataset for population estimation in 98 European cities. The dataset comprises a digital elevation model, local climate zone, land use proportions, nighttime lights in combination with multi-spectral Sentinel-2 imagery, and data from the Open Street Map initiative. We anticipate that it will be a valuable addition to the research community for the development of sophisticated approaches in the field of population estimation.

13.
Remote Sens Environ ; 269: 112794, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35115734

ABSTRACT

Urbanization is the second largest mega-trend right after climate change. Accurate measurements of urban morphological and demographic figures are at the core of many international endeavors to address issues of urbanization, such as the United Nations' call for "Sustainable Cities and Communities". In many countries, however, particularly developing countries, such a database does not yet exist. Here, we demonstrate a novel deep learning and big data analytics approach to fuse freely available global radar and multi-spectral satellite data acquired by the Sentinel-1 and Sentinel-2 satellites. Via this approach, we created the first-ever global, quality-controlled classification of urban local climate zones covering all cities across the globe with a population greater than 300,000 and made it available to the community (https://doi.org/10.14459/2021mp1633461). Statistical analysis of the data quantifies a global inequality problem: approximately 40% of the area defined as compact or light/large low-rise accommodates about 60% of the total population, whereas approximately 30% of the area defined as sparsely built accommodates only about 10% of the total population. Beyond this, patterns of urban morphology were discovered from the global classification map, confirming a relationship between morphology, geographical region, and the related cultural heritage. We expect the open access of our dataset to encourage research on the global change process of urbanization, as researchers from many disciplines can use this baseline as a spatial reference in their work. In addition, it can serve as a unique dataset for stakeholders such as the United Nations to improve their spatial assessments of urbanization.

14.
IEEE Trans Image Process ; 31: 678-690, 2022.
Article in English | MEDLINE | ID: mdl-34914588

ABSTRACT

Building extraction in very high resolution (VHR) remote sensing images (RSIs) remains a challenging task due to occlusion and boundary ambiguity problems. Although conventional convolutional neural network (CNN) based methods are capable of exploiting local texture and context information, they fail to capture the shape patterns of buildings, which is a necessary constraint in human recognition. To address this issue, we propose an adversarial shape learning network (ASLNet) to model the building shape patterns that improve the accuracy of building segmentation. In the proposed ASLNet, we introduce the adversarial learning strategy to explicitly model the shape constraints, as well as a CNN shape regularizer to strengthen the embedding of shape features. To assess the geometric accuracy of building segmentation results, we introduce several object-based quality assessment metrics. Experiments on two open benchmark datasets show that the proposed ASLNet improves both the pixel-based accuracy and the object-based quality measurements by a large margin. The code is available at: https://github.com/ggsDing/ASLNet.


Subject(s)
Image Processing, Computer-Assisted; Remote Sensing Technology; Humans; Neural Networks, Computer
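The adversarial shape-learning idea in the abstract above can be sketched with a small mask discriminator whose feedback acts as a shape loss for the segmentation network (a schematic illustration with placeholder masks and losses, not the published ASLNet design).

```python
import torch
import torch.nn as nn

# Discriminator that judges whether a building mask looks like a reference shape.
discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
bce = nn.BCEWithLogitsLoss()

pred_mask = torch.sigmoid(torch.randn(4, 1, 64, 64))   # stand-in segmentation output
true_mask = (torch.rand(4, 1, 64, 64) > 0.5).float()   # stand-in reference shapes

# Discriminator step: real shapes vs. predicted shapes.
d_loss = bce(discriminator(true_mask), torch.ones(4, 1)) + \
         bce(discriminator(pred_mask.detach()), torch.zeros(4, 1))

# Segmentation step: the adversarial shape loss rewards building-like masks.
shape_loss = bce(discriminator(pred_mask), torch.ones(4, 1))
print(d_loss.item(), shape_loss.item())
```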
15.
ISPRS J Photogramm Remote Sens ; 178: 68-80, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34433999

ABSTRACT

As remote sensing (RS) data obtained from different sensors become widely and openly available, multimodal data processing and analysis techniques have been garnering increasing interest in the RS and geoscience community. However, due to the gap between different modalities in terms of imaging sensors, resolutions, and contents, embedding their complementary information into a consistent, compact, accurate, and discriminative representation remains, to a great extent, challenging. To this end, we propose a shared and specific feature learning (S2FL) model. S2FL is capable of decomposing multimodal RS data into modality-shared and modality-specific components, enabling the information blending of multi-modalities more effectively, particularly for heterogeneous data sources. Moreover, to better assess multimodal baselines and the newly-proposed S2FL model, three multimodal RS benchmark datasets, i.e., Houston2013 (hyperspectral and multispectral data), Berlin (hyperspectral and synthetic aperture radar (SAR) data), and Augsburg (hyperspectral, SAR, and digital surface model (DSM) data), are released and used for land cover classification. Extensive experiments conducted on the three datasets demonstrate the superiority and advancement of our S2FL model in the task of land cover classification in comparison with previously-proposed state-of-the-art baselines. Furthermore, the baseline codes and datasets used in this paper will be made freely available at https://github.com/danfenghong/ISPRS_S2FL.

16.
ISPRS J Photogramm Remote Sens ; 177: 89-102, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34219969

ABSTRACT

Aerial scene recognition is a fundamental visual task and has attracted increasing research interest in recent years. Most current research focuses on categorizing an aerial image into a single scene-level label, while in real-world scenarios a single image often contains multiple scenes. Therefore, in this paper, we take a step towards a more practical and challenging task, namely multi-scene recognition in single images. Moreover, we note that manually producing annotations for such a task is extraordinarily time- and labor-consuming. To address this, we propose a prototype-based memory network that recognizes multiple scenes in a single image by leveraging massive well-annotated single-scene images. The proposed network consists of three key components: 1) a prototype learning module, 2) a prototype-inhabiting external memory, and 3) a multi-head attention-based memory retrieval module. More specifically, we first learn the prototype representation of each aerial scene from single-scene aerial image datasets and store it in an external memory. Afterwards, a multi-head attention-based memory retrieval module is devised to retrieve scene prototypes relevant to a query multi-scene image for the final predictions. Notably, only a limited number of annotated multi-scene images are needed in the training phase. To facilitate progress in aerial scene recognition, we produce a new multi-scene aerial image (MAI) dataset. Experimental results on different dataset configurations demonstrate the effectiveness of our network. Our dataset and codes are publicly available.
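The retrieval step, querying an external memory of scene prototypes with multi-head attention, can be sketched as follows (dimensions and the classifier head are assumptions; the published network additionally learns the prototypes from single-scene data).

```python
import torch
import torch.nn as nn

class PrototypeRetrieval(nn.Module):
    """Multi-head attention over an external memory of scene prototypes: the
    query image feature attends to the stored prototypes and the attended
    summary feeds a multi-label scene classifier."""
    def __init__(self, n_scenes=20, dim=256, heads=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_scenes, dim))  # scene prototypes
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, n_scenes)

    def forward(self, query_feat):                 # (batch, dim) image feature
        q = query_feat.unsqueeze(1)                # (batch, 1, dim)
        mem = self.memory.unsqueeze(0).expand(q.size(0), -1, -1)
        retrieved, _ = self.attn(q, mem, mem)      # attend to the prototypes
        return torch.sigmoid(self.classifier(retrieved.squeeze(1)))

model = PrototypeRetrieval()
scores = model(torch.randn(8, 256))
print(scores.shape)  # torch.Size([8, 20]) multi-label scene scores
```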

17.
Math Biosci ; 339: 108648, 2021 09.
Article in English | MEDLINE | ID: mdl-34216635

ABSTRACT

Non-pharmaceutical interventions (NPIs) are important to mitigate the spread of infectious diseases as long as no vaccines or effective medical treatments are available. We assess the effectiveness of the sets of non-pharmaceutical interventions that were in place during the course of the Coronavirus disease 2019 (Covid-19) pandemic in Germany. Our results are based on hybrid models, combining SIR-type models on local scales with spatial resolution. In order to account for the age dependence of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), we include realistic prepandemic and recently recorded contact patterns between age groups. Non-pharmaceutical interventions enter the models through changed contact patterns, improved isolation, or reduced infectiousness when, e.g., masks are worn. In order to account for spatial heterogeneity, we use a graph approach and include high-quality information on commuting activities combined with traveling information from social networks. The remaining uncertainty is accounted for by a large number of randomized simulation runs. Based on the derived factors for the effectiveness of different non-pharmaceutical interventions over the past months, we provide different forecast scenarios for the upcoming period.


Subject(s)
COVID-19; Communicable Disease Control; Models, Statistical; Social Network Analysis; Spatial Analysis; Age Factors; COVID-19/prevention & control; COVID-19/transmission; Communicable Disease Control/methods; Communicable Disease Control/standards; Communicable Disease Control/statistics & numerical data; Germany; Humans
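The effect of an NPI in an SIR-type model can be illustrated by scaling the transmission rate with a contact-reduction factor (a single-compartment toy model with assumed parameters; the study couples many such local models on a commuter graph with age-resolved contact patterns).

```python
import numpy as np

def simulate_sir(beta, gamma, contact_reduction, days, N=83e6, I0=1e4):
    """Daily Euler steps of an SIR model in which an NPI scales transmission."""
    S, I, R = N - I0, I0, 0.0
    beta_eff = beta * (1.0 - contact_reduction)
    infectious = []
    for _ in range(days):
        new_inf = beta_eff * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        infectious.append(I)
    return np.array(infectious)

# Hypothetical comparison: no intervention vs. a 40% contact reduction.
for reduction in (0.0, 0.4):
    peak = simulate_sir(beta=0.3, gamma=0.1, contact_reduction=reduction, days=200).max()
    print(f"contact reduction {reduction:.0%}: peak infectious ~ {peak:,.0f}")
```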
18.
IEEE Trans Cybern ; 51(7): 3602-3615, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33175688

ABSTRACT

Conventional nonlinear subspace learning techniques (e.g., manifold learning) usually introduce some drawbacks in explainability (explicit mapping) and cost effectiveness (linearization), generalization capability (out-of-sample), and representability (spatial-spectral discrimination). To overcome these shortcomings, a novel linearized subspace analysis technique with spatial-spectral manifold alignment is developed for a semisupervised hyperspectral dimensionality reduction (HDR), called joint and progressive subspace analysis (JPSA). The JPSA learns a high-level, semantically meaningful, joint spatial-spectral feature representation from hyperspectral (HS) data by: 1) jointly learning latent subspaces and a linear classifier to find an effective projection direction favorable for classification; 2) progressively searching several intermediate states of subspaces to approach an optimal mapping from the original space to a potential more discriminative subspace; and 3) spatially and spectrally aligning a manifold structure in each learned latent subspace in order to preserve the same or similar topological property between the compressed data and the original data. A simple but effective classifier, that is, nearest neighbor (NN), is explored as a potential application for validating the algorithm performance of different HDR approaches. Extensive experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely used HS datasets: 1) Indian Pines (92.98%) and 2) the University of Houston (86.09%) in comparison with previous state-of-the-art HDR methods. The demo of this basic work (i.e., ECCV2018) is openly available at https://github.com/danfenghong/ECCV2018_J-Play.

19.
ISPRS J Photogramm Remote Sens ; 170: 1-14, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33299267

ABSTRACT

Existing techniques for 3-D reconstruction of buildings from SAR images are mostly based on multibaseline SAR interferometry, such as persistent scatterer interferometry (PSI) and SAR tomography (TomoSAR). However, these techniques require tens of images for a reliable reconstruction, which limits their application in scenarios such as emergency response. Therefore, alternatives that use a single SAR image and building footprints from GIS data show great potential for 3-D reconstruction. The combination of GIS data and SAR images requires a precise registration, which is challenging due to the unknown terrain height and the difficulty of finding and extracting correspondences. In this paper, we propose a framework to automatically register GIS building footprints to a SAR image by exploiting features representing the intersection of the ground and visible building facades, specifically the near-range boundaries in the building polygons and the double-bounce lines in the SAR image. Based on those features, the two datasets are registered progressively at multiple resolutions, allowing the algorithm to cope with variations in the local terrain. The proposed framework was tested in Berlin using one TerraSAR-X High Resolution SpotLight image and GIS building footprints of the area. Compared to the ground truth, the proposed algorithm reduced the average distance error from 5.91 m before registration to -0.08 m, and the standard deviation from 2.77 m to 1.12 m. Such accuracy, better than half of the typical urban floor height (3 m), is significant for precise building height reconstruction at large scale. The proposed registration framework has great potential for assisting SAR image interpretation in typical urban areas and building model reconstruction from SAR images.

20.
ISPRS J Photogramm Remote Sens ; 167: 12-23, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32904376

ABSTRACT

This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large number of multi-modal Earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment and poor discriminative information, as well as the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, which learns to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task using large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well because it propagates labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.
