Results 1 - 20 of 33
1.
Biomed Environ Sci ; 37(5): 511-520, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38843924

ABSTRACT

Objective: This study employed the Geographically and Temporally Weighted Regression (GTWR) model to assess the impact of meteorological factors and imported cases on dengue fever outbreaks, emphasizing the spatial-temporal variability of these factors in border regions. Methods: We conducted a descriptive analysis of the temporal-spatial distribution of dengue fever in the Yunnan border areas. Using annual data from 2013 to 2019, with each county along the Yunnan border serving as a spatial unit, we constructed a GTWR model to investigate the determinants of dengue fever and their spatio-temporal heterogeneity in this region. Results: The GTWR model, which outperformed Ordinary Least Squares (OLS) analysis, identified significant spatial and temporal heterogeneity in the factors influencing the spread of dengue fever along the Yunnan border. Notably, the model revealed substantial variation in the relationship between indigenous dengue fever incidence, meteorological variables, and imported cases across counties. Conclusion: In the Yunnan border areas, local dengue incidence is affected by temperature, humidity, precipitation, wind speed, and imported cases, and the influence of these factors exhibits notable spatial and temporal variation.
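The core of a GTWR fit can be sketched as locally weighted least squares: each observation is weighted by a kernel over its spatial and temporal distance from the regression point, so coefficients vary over space and time. A minimal illustrative sketch on synthetic data (function names, bandwidths, and data are hypothetical, not the authors' implementation):

```python
import numpy as np

def gtwr_fit(X, y, coords, times, target_coord, target_time,
             h_space=1.0, h_time=1.0):
    """Fit local coefficients at one space-time point via weighted OLS.
    Gaussian kernels downweight observations far away in space or time."""
    d_space = np.linalg.norm(coords - target_coord, axis=1)
    d_time = np.abs(times - target_time)
    w = np.exp(-(d_space / h_space) ** 2) * np.exp(-(d_time / h_time) ** 2)
    Xb = np.column_stack([np.ones(len(X)), X])          # add intercept
    # solve (Xb^T W Xb) beta = Xb^T W y without forming the diagonal W
    beta = np.linalg.solve(Xb.T @ (Xb * w[:, None]), Xb.T @ (w * y))
    return beta  # [intercept, slope_1, ..., slope_p]

# toy data: incidence driven by one covariate whose slope varies over space
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
times = rng.integers(2013, 2020, size=200).astype(float)
x = rng.normal(size=200)
slope = 1.0 + 0.2 * coords[:, 0]                        # spatially varying slope
y = slope * x + rng.normal(scale=0.1, size=200)
beta = gtwr_fit(x[:, None], y, coords, times,
                np.array([5.0, 5.0]), 2016.0, h_space=3.0, h_time=3.0)
```

The locally estimated slope recovers the true slope near the regression point (about 2.0 at column coordinate 5), which is exactly the kind of spatially varying coefficient the GTWR analysis reports.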


Subjects
Dengue, Dengue/epidemiology, China/epidemiology, Humans, Spatio-Temporal Analysis, Incidence, Disease Outbreaks, Spatial Regression
2.
Zhen Ci Yan Jiu ; 49(2): 155-163, 2024 Feb 25.
Article in English, Chinese | MEDLINE | ID: mdl-38413036

ABSTRACT

OBJECTIVES: To investigate the mechanism by which electroacupuncture (EA) at "Neiguan" (PC6) improves myocardial electrical remodeling in rats with acute myocardial infarction (AMI) by enhancing the transient outward potassium current. METHODS: A total of 30 male SD rats were randomly divided into control, model and EA groups, with 10 rats in each group. The AMI model was established by subcutaneous injection of isoprenaline (ISO, 85 mg/kg). EA was applied to the left PC6 for 20 min, once daily for 5 days. An electrocardiogram (ECG) was recorded after treatment. TTC staining was used to observe myocardial necrosis. HE staining was used to observe the pathological morphology of myocardial tissue and measure the cross-sectional area of cardiomyocytes. Potassium ion-related genes in myocardial tissue were detected by RNA sequencing. The mRNA and protein expression of Kchip2 and Kv4.2 in myocardial tissue was detected by real-time fluorescence quantitative PCR and Western blot, respectively. RESULTS: Compared with the control group, the cardiomyocyte cross-sectional area in the model group was significantly increased (P<0.01), the ST segment was significantly elevated (P<0.01), and QT, QTc, QTd and QTcd were all significantly increased (P<0.05, P<0.01). After EA treatment, the cardiomyocyte cross-sectional area was significantly decreased (P<0.01), the ST segment elevation was significantly reduced (P<0.01), and QT, QTc, QTcd and QTd were significantly decreased (P<0.01, P<0.05). RNA sequencing identified a total of 20 potassium ion-related genes co-expressed by the 3 groups. Among them, Kchip2 expression was up-regulated most notably in the EA group. Compared with the control group, the mRNA and protein expression of Kchip2 and Kv4.2 in the myocardial tissue of the model group was significantly decreased (P<0.01, P<0.05), while both were increased in the EA group (P<0.01, P<0.05).
CONCLUSIONS: EA may improve myocardial electrical remodeling in rats with myocardial infarction, possibly by up-regulating the expression of Kchip2 and Kv4.2.


Subjects
Atrial Remodeling, Electroacupuncture, Myocardial Infarction, Myocardial Ischemia, Rats, Male, Animals, Myocardial Ischemia/therapy, Rats, Sprague-Dawley, Acupuncture Points, Myocardium/metabolism, Myocardial Infarction/genetics, Myocardial Infarction/therapy, Potassium/metabolism, RNA, Messenger/metabolism
3.
EClinicalMedicine ; 67: 102391, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38274117

ABSTRACT

Background: Clinical appearance and high-frequency ultrasound (HFUS) are indispensable for diagnosing skin diseases, providing external and internal information respectively. However, their complex combination poses challenges for primary care physicians and dermatologists. We therefore developed a deep multimodal fusion network (DMFN) model that combines analysis of clinical close-up and HFUS images for binary and multiclass classification of skin diseases. Methods: Between Jan 10, 2017, and Dec 31, 2020, the DMFN model was trained and validated using 1269 close-up and 11,852 HFUS images from 1351 skin lesions. A monomodal convolutional neural network (CNN) model was trained and validated with the same close-up images for comparison. Subsequently, we conducted a prospective, multicenter study in China. Both models were tested prospectively on 422 cases from 4 hospitals and compared with the results from human raters (general practitioners, general dermatologists, and dermatologists specialized in HFUS). Performance in binary classification (benign vs. malignant) and multiclass classification (specific diagnoses across 17 types of skin diseases) was evaluated using the area under the receiver operating characteristic curve (AUC). This study is registered with www.chictr.org.cn (ChiCTR2300074765). Findings: In the binary classification, the performance of the DMFN model (AUC, 0.876) was superior to that of the monomodal CNN model (AUC, 0.697; P = 0.0063), the general practitioners (AUC, 0.651; P = 0.0025), and the general dermatologists (AUC, 0.838; P = 0.0038). By integrating close-up and HFUS images, the DMFN model attained performance nearly identical to that of dermatologists (AUC, 0.876 vs. 0.891; P = 0.0080).
For the multiclass classification, the DMFN model (AUC, 0.707) exhibited superior prediction performance compared with general dermatologists (AUC, 0.514; P = 0.0043) and dermatologists specialized in HFUS (AUC, 0.640; P = 0.0083). Compared to dermatologists specialized in HFUS, the DMFN model showed better or comparable performance in diagnosing 9 of the 17 skin diseases. Interpretation: The DMFN model combining analysis of clinical close-up and HFUS images exhibited satisfactory performance in binary and multiclass classification compared with the dermatologists. It may be a valuable tool for general dermatologists and primary care providers. Funding: This work was supported in part by the National Natural Science Foundation of China and the Clinical Research Project of Shanghai Skin Disease Hospital.
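As background for the AUC figures quoted above: the AUC can be computed from raw classifier scores via the rank-based (Mann-Whitney) formulation, i.e., the probability that a random positive case scores higher than a random negative one. A minimal sketch, ignoring tied scores for simplicity:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U): probability that a random
    positive sample scores above a random negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

perfect = auc([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])  # positives always on top
```

Here `perfect` is 1.0; a classifier whose scores are unrelated to the labels hovers around 0.5, which is why the AUC values in the abstract are read against that baseline.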

4.
Article in English | MEDLINE | ID: mdl-37721886

ABSTRACT

Image classification plays an important role in remote sensing. Earth observation (EO) has inevitably entered the big data era, and the high demand for computing power has become a bottleneck for analyzing large amounts of remote sensing data with sophisticated machine learning models. Exploiting quantum computing might contribute to a solution to this challenge by leveraging quantum properties. This article introduces a hybrid quantum-classical convolutional neural network (QC-CNN) that applies quantum computing to effectively extract high-level critical features from EO data for classification purposes. In addition, adopting the amplitude encoding technique reduces the required qubit resources. The complexity analysis indicates that the proposed model can accelerate the convolutional operation in comparison with its classical counterpart. The model's performance is evaluated on different EO benchmarks, including Overhead-MNIST, So2Sat LCZ42, PatternNet, RSI-CB256, and NaSC-TG2, through the TensorFlow Quantum platform. It achieves better performance and higher generalizability than its classical counterpart, which verifies the validity of the QC-CNN model on EO data classification tasks.
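Amplitude encoding stores a length-2^n feature vector in the amplitudes of an n-qubit state, which is the qubit saving the abstract refers to. A purely classical sketch of the encoding step, showing only the normalization and the logarithmic qubit count (illustrative, not the QC-CNN implementation):

```python
import numpy as np

def amplitude_encode(x):
    """Map a length-2**n feature vector to the amplitude vector of an
    n-qubit state: amplitudes are the L2-normalized features, so n
    qubits hold 2**n values."""
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.log2(len(x)))
    assert 2 ** n_qubits == len(x), "length must be a power of two"
    amps = x / np.linalg.norm(x)   # amplitudes must have unit L2 norm
    return amps, n_qubits

amps, n = amplitude_encode([3.0, 0.0, 4.0, 0.0])  # 4 features -> 2 qubits
```

A 256-dimensional image patch would thus need only 8 qubits, versus 256 qubits for one-value-per-qubit encodings.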

5.
EClinicalMedicine ; 60: 102027, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37333662

ABSTRACT

Background: Identifying patients with clinically significant prostate cancer (csPCa) before biopsy helps reduce unnecessary biopsies and improve patient prognosis. The diagnostic performance of traditional transrectal ultrasound (TRUS) for csPCa is relatively limited. This study aimed to develop a high-performance convolutional neural network (CNN) model (P-Net) based on a TRUS video of the entire prostate and to investigate its efficacy in identifying csPCa. Methods: Between January 2021 and December 2022, this study prospectively evaluated 832 patients from four centres who underwent prostate biopsy and/or radical prostatectomy. All patients had a standardised TRUS video of the whole prostate. A two-dimensional CNN (2D P-Net) and a three-dimensional CNN (3D P-Net) were constructed using the training cohort (559 patients) and tested on the internal validation cohort (140 patients) as well as on the external validation cohort (133 patients). The performance of 2D P-Net and 3D P-Net in predicting csPCa was assessed in terms of the area under the receiver operating characteristic curve (AUC), biopsy rate, and unnecessary biopsy rate, and compared with the TRUS 5-point Likert score system as well as the multiparametric magnetic resonance imaging (mp-MRI) Prostate Imaging Reporting and Data System (PI-RADS) v2.1. Decision curve analyses (DCAs) were used to determine the net benefits associated with their use. The study is registered at https://www.chictr.org.cn with the unique identifier ChiCTR2200064545. Findings: The diagnostic performance of 3D P-Net (AUC: 0.85-0.89) was superior to the TRUS 5-point Likert score system (AUC: 0.71-0.78, P = 0.003-0.040), and similar to the mp-MRI PI-RADS v2.1 score system interpreted by experienced radiologists (AUC: 0.83-0.86, P = 0.460-0.732) and 2D P-Net (AUC: 0.79-0.86, P = 0.066-0.678) in the internal and external validation cohorts.
The biopsy rate decreased from 40.3% (TRUS 5-point Likert score system) and 47.6% (mp-MRI PI-RADS v2.1 score system) to 35.5% (2D P-Net) and 34.0% (3D P-Net). The unnecessary biopsy rate decreased from 38.1% (TRUS 5-point Likert score system) and 35.2% (mp-MRI PI-RADS v2.1 score system) to 32.0% (2D P-Net) and 25.8% (3D P-Net). 3D P-Net yielded the highest net benefit according to the DCAs. Interpretation: 3D P-Net, based on a grayscale TRUS video of the prostate, achieved satisfactory performance in identifying csPCa and potentially reducing unnecessary biopsies. More studies to determine how AI models can better integrate into routine practice, and randomized controlled trials to show the value of these models in real clinical applications, are warranted. Funding: The National Natural Science Foundation of China (Grants 82202174 and 82202153), the Science and Technology Commission of Shanghai Municipality (Grants 18441905500 and 19DZ2251100), Shanghai Municipal Health Commission (Grants 2019LJ21 and SHSLCZDZK03502), Shanghai Science and Technology Innovation Action Plan (21Y11911200), Fundamental Research Funds for the Central Universities (ZD-11-202151), and the Scientific Research and Development Fund of Zhongshan Hospital of Fudan University (Grant 2022ZSQD07).
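The decision curve analysis mentioned above scores a biopsy strategy by its net benefit, which credits true positives and penalizes false positives weighted by the odds of the probability threshold at which one would opt for biopsy. A minimal sketch of the standard formula (the counts below are hypothetical, not from the study):

```python
def net_benefit(tp, fp, n, threshold):
    """Decision-curve net benefit at a given probability threshold:
    fraction of true positives minus the fraction of false positives,
    the latter discounted by the threshold odds t/(1-t)."""
    return tp / n - fp / n * threshold / (1 - threshold)

# hypothetical tally: 30 cancers caught, 10 unnecessary biopsies, 100 patients
nb_model = net_benefit(tp=30, fp=10, n=100, threshold=0.20)
nb_biopsy_none = net_benefit(tp=0, fp=0, n=100, threshold=0.20)
```

A strategy is worthwhile at a threshold only if its net benefit exceeds both "biopsy no one" (always 0) and "biopsy everyone"; plotting net benefit over a range of thresholds gives the decision curve.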

6.
ISPRS J Photogramm Remote Sens ; 195: 192-203, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36726963

ABSTRACT

Remote sensing (RS) image scene classification has attracted increasing attention for its broad application prospects. Conventional fully-supervised approaches usually require a large amount of manually-labeled data. As more and more RS images become available, how to make full use of these unlabeled data has become an urgent topic. Semi-supervised learning, which uses a few labeled samples to guide the self-training on numerous unlabeled data, is an intuitive strategy, but it is hard to apply to cross-dataset (i.e., cross-domain) scene classification due to the significant domain shift among different datasets. To this end, semi-supervised domain adaptation (SSDA), which can reduce the domain shift and further transfer knowledge from a fully-labeled RS scene dataset (the source domain) to a limited-labeled RS scene dataset (the target domain), is a feasible solution. In this paper, we propose an SSDA method termed bidirectional sample-class alignment (BSCA) for RS cross-domain scene classification. BSCA consists of two alignment strategies, unsupervised alignment (UA) and supervised alignment (SA), both of which contribute to decreasing domain shift. UA concentrates on reducing the maximum mean discrepancy across domains, with no demand for class labels. In contrast, SA aims to align distributions both from source samples to the associated target class centers and from target samples to the associated source class centers, with awareness of their classes. To validate the effectiveness of the proposed method, extensive ablation, comparison, and visualization experiments are conducted on an RS-SSDA benchmark built upon four widely-used RS scene classification datasets. Experimental results indicate that, in comparison with some state-of-the-art methods, BSCA achieves superior cross-domain classification performance with compact feature representation and a low-entropy classification boundary.
Our code will be available at https://github.com/hw2hwei/BSCA.
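UA's criterion, the maximum mean discrepancy (MMD), measures the distance between the kernel mean embeddings of two samples, so minimizing it pulls source and target feature distributions together without any labels. A minimal NumPy sketch of the biased squared MMD with an RBF kernel on synthetic data (not the BSCA implementation; the kernel bandwidth is an arbitrary choice here):

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased squared maximum mean discrepancy with an RBF kernel:
    near zero when the two samples come from the same distribution."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
same = mmd2_rbf(rng.normal(size=(100, 3)), rng.normal(size=(100, 3)))
shifted = mmd2_rbf(rng.normal(size=(100, 3)),
                   rng.normal(loc=2.0, size=(100, 3)))
```

`same` stays close to zero while `shifted` is clearly larger, which is why minimizing MMD over learned features serves as a label-free alignment objective.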

7.
ISPRS J Photogramm Remote Sens ; 196: 178-196, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36824311

ABSTRACT

High-resolution satellite images can provide abundant, detailed spatial information for land cover classification, which is particularly important for studying the complicated built environment. However, due to complex land cover patterns, costly training sample collection, and the severe distribution shifts of satellite imagery caused by, e.g., geographical differences or acquisition conditions, few studies have applied high-resolution images to land cover mapping in detailed categories at large scale. To fill this gap, we present a large-scale land cover dataset, Five-Billion-Pixels. It contains more than 5 billion labeled pixels from 150 high-resolution Gaofen-2 (4 m) satellite images, annotated in a 24-category system covering artificial-constructed, agricultural, and natural classes. In addition, we propose a deep-learning-based unsupervised domain adaptation approach that can transfer classification models trained on a labeled dataset (the source domain) to unlabeled data (the target domain) for large-scale land cover mapping. Specifically, we introduce an end-to-end Siamese network employing dynamic pseudo-label assignment and a class balancing strategy to perform adaptive domain joint learning. To validate the generalizability of our dataset and the proposed approach across different sensors and different geographical regions, we carry out land cover mapping on five megacities in China and six cities in five other Asian countries, using PlanetScope (3 m), Gaofen-1 (8 m), and Sentinel-2 (10 m) satellite images. Over a total study area of 60,000 km2, the experiments show promising results even though the input images are entirely unlabeled. The proposed approach, trained with the Five-Billion-Pixels dataset, enables high-quality and detailed land cover mapping across the whole of China and several other Asian countries at meter resolution.

8.
J Integr Med ; 21(1): 89-98, 2023 01.
Article in English | MEDLINE | ID: mdl-36424268

ABSTRACT

OBJECTIVE: This study explores the effects of electroacupuncture (EA) at the governing vessel (GV) on proteomic changes in the hippocampus of rats with cognitive impairment. METHODS: Healthy male rats were randomly divided into 3 groups: sham, model and EA. Cognitive impairment was induced by left middle cerebral artery occlusion in the model and EA groups. Rats in the EA group were treated with EA at Shenting (GV24) and Baihui (GV20) for 7 d. Neurological deficit was scored using the Longa scale, learning and memory ability was assessed using the Morris water maze (MWM) test, and proteomic profiles in the hippocampus were analyzed using protein-labeling technology based on the isobaric tag for relative and absolute quantitation (iTRAQ). Western blot (WB) analysis was used to detect selected proteins and validate the iTRAQ results. RESULTS: Compared with the model group, the neurological deficit score was significantly reduced and the escape latency in the MWM test was significantly shortened in the EA group, while the number of platform crossings increased. A total of 2872 proteins were identified by iTRAQ. Differentially expressed proteins (DEPs) were identified between groups: 92 proteins were upregulated and 103 downregulated in the model group compared with the sham group, while 142 proteins were upregulated and 126 downregulated in the EA group compared with the model group. Most of the DEPs were involved in oxidative phosphorylation, glycolipid metabolism and synaptic transmission. Furthermore, we verified 4 DEPs using WB. Although the WB results were not exactly the same as the iTRAQ results, the expression trends of the DEPs were consistent. The upregulation of heat-shock protein β1 (Hspb1) was the highest in the EA group compared to the model group. CONCLUSION: EA can induce proteomic changes in the hippocampus of rats with cognitive impairment.
Hspb1 may be involved in the molecular mechanism by which acupuncture improves cognitive impairment.


Subjects
Cognitive Dysfunction, Electroacupuncture, Rats, Male, Animals, Rats, Sprague-Dawley, Proteomics, Cognitive Dysfunction/therapy, Hippocampus
9.
Sci Data ; 9(1): 715, 2022 11 19.
Article in English | MEDLINE | ID: mdl-36402846

ABSTRACT

Obtaining a dynamic population distribution is key to many decision-making processes such as urban planning and disaster management, and helps governments better allocate socio-technical supply. Achieving these objectives requires good population data. The traditional method of collecting population data through a census is expensive and tedious. In recent years, statistical and machine learning methods have been developed to estimate population distribution, but most of them use data sets that are either small-scale or not yet publicly available, which makes the development and evaluation of new methods challenging. We fill this gap by providing a comprehensive data set for population estimation in 98 European cities. The data set comprises a digital elevation model, local climate zones, land use proportions, and nighttime lights, in combination with multi-spectral Sentinel-2 imagery and data from the OpenStreetMap initiative. We anticipate that it will be a valuable addition for the research community in developing sophisticated approaches in the field of population estimation.

10.
Remote Sens Environ ; 269: 112794, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35115734

ABSTRACT

Urbanization is the second-largest mega-trend right after climate change. Accurate measurements of urban morphological and demographic figures are at the core of many international endeavors to address issues of urbanization, such as the United Nations' call for "Sustainable Cities and Communities". In many countries, however - particularly developing countries - this database does not yet exist. Here, we demonstrate a novel deep learning and big data analytics approach to fuse freely available global radar and multi-spectral satellite data acquired by the Sentinel-1 and Sentinel-2 satellites. Via this approach, we created the first-ever global, quality-controlled urban local climate zone classification covering all cities across the globe with a population greater than 300,000, and made it available to the community (https://doi.org/10.14459/2021mp1633461). Statistical analysis of the data quantifies a global inequality problem: approximately 40% of the area classified as compact or lightweight/large low-rise accommodates about 60% of the total population, whereas approximately 30% of the area classified as sparsely built accommodates only about 10% of the total population. Beyond that, patterns of urban morphology were discovered from the global classification map, confirming a morphologic relationship to geographical region and related cultural heritage. We expect the open access of our dataset to encourage research on the global change process of urbanization, as a multidisciplinary crowd of researchers will use this baseline for a spatial perspective in their work. In addition, it can serve as a unique dataset for stakeholders such as the United Nations to improve their spatial assessments of urbanization.

11.
IEEE Trans Image Process ; 31: 678-690, 2022.
Article in English | MEDLINE | ID: mdl-34914588

ABSTRACT

Building extraction in very-high-resolution (VHR) remote sensing images (RSIs) remains a challenging task due to occlusion and boundary ambiguity problems. Although conventional convolutional neural network (CNN)-based methods are capable of exploiting local texture and context information, they fail to capture the shape patterns of buildings, which are a necessary constraint in human recognition. To address this issue, we propose an adversarial shape learning network (ASLNet) that models building shape patterns to improve the accuracy of building segmentation. In the proposed ASLNet, we introduce an adversarial learning strategy to explicitly model the shape constraints, as well as a CNN shape regularizer to strengthen the embedding of shape features. To assess the geometric accuracy of building segmentation results, we introduce several object-based quality assessment metrics. Experiments on two open benchmark datasets show that the proposed ASLNet improves both the pixel-based accuracy and the object-based quality measurements by a large margin. The code is available at: https://github.com/ggsDing/ASLNet.


Subjects
Image Processing, Computer-Assisted, Remote Sensing Technology, Humans, Neural Networks, Computer
12.
ISPRS J Photogramm Remote Sens ; 178: 68-80, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34433999

ABSTRACT

As remote sensing (RS) data obtained from different sensors become widely and openly available, multimodal data processing and analysis techniques have been garnering increasing interest in the RS and geoscience community. However, due to the gap between different modalities in terms of imaging sensors, resolutions, and contents, embedding their complementary information into a consistent, compact, accurate, and discriminative representation remains, to a great extent, challenging. To this end, we propose a shared and specific feature learning (S2FL) model. S2FL is capable of decomposing multimodal RS data into modality-shared and modality-specific components, enabling more effective blending of information from multiple modalities, particularly for heterogeneous data sources. Moreover, to better assess multimodal baselines and the newly-proposed S2FL model, three multimodal RS benchmark datasets, i.e., Houston2013 (hyperspectral and multispectral data), Berlin (hyperspectral and synthetic aperture radar (SAR) data), and Augsburg (hyperspectral, SAR, and digital surface model (DSM) data), are released and used for land cover classification. Extensive experiments conducted on the three datasets demonstrate the superiority and advancement of our S2FL model in the task of land cover classification in comparison with previously-proposed state-of-the-art baselines. Furthermore, the baseline codes and datasets used in this paper are made freely available at https://github.com/danfenghong/ISPRS_S2FL.

13.
ISPRS J Photogramm Remote Sens ; 177: 89-102, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34219969

ABSTRACT

Aerial scene recognition is a fundamental visual task that has attracted increasing research interest in the last few years. Most current research focuses on categorizing an aerial image with a single scene-level label, while in real-world scenarios there often exist multiple scenes in a single image. Therefore, in this paper, we take a step toward a more practical and challenging task, namely multi-scene recognition in single images. Since manually producing annotations for such a task is extraordinarily time- and labor-consuming, we propose a prototype-based memory network that recognizes multiple scenes in a single image by leveraging massive well-annotated single-scene images. The proposed network consists of three key components: 1) a prototype learning module, 2) a prototype-inhabiting external memory, and 3) a multi-head attention-based memory retrieval module. More specifically, we first learn the prototype representation of each aerial scene from single-scene aerial image datasets and store it in an external memory. Afterwards, a multi-head attention-based memory retrieval module is devised to retrieve scene prototypes relevant to a query multi-scene image for the final predictions. Notably, only a limited number of annotated multi-scene images are needed in the training phase. To facilitate progress in aerial scene recognition, we also produce a new multi-scene aerial image (MAI) dataset. Experimental results on various dataset configurations demonstrate the effectiveness of our network. Our dataset and codes are publicly available.
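The retrieval step can be sketched as attention over stored prototypes: similarities between the query feature and the memory keys become attention weights, which are accumulated per scene class into multi-label scores. A toy single-head sketch (all names, values, and the single-head simplification are illustrative; the paper uses multi-head attention):

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def retrieve(query, memory_keys, memory_labels, temperature=0.1):
    """Attend over stored scene prototypes: cosine similarities become
    attention weights, accumulated per scene class as label scores."""
    sims = memory_keys @ query / (
        np.linalg.norm(memory_keys, axis=1) * np.linalg.norm(query))
    attn = softmax(sims / temperature)
    scores = np.zeros(memory_labels.max() + 1)
    for w, lab in zip(attn, memory_labels):
        scores[lab] += w
    return scores

# two prototypes per scene class; the query is close to class 0's prototypes
keys = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
scores = retrieve(np.array([1.0, 0.05]), keys, labels)
```

The attention mass concentrates on the prototypes resembling the query, so the per-class scores indicate which scenes are present; in the multi-scene setting several classes can receive substantial weight at once.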

14.
Math Biosci ; 339: 108648, 2021 09.
Article in English | MEDLINE | ID: mdl-34216635

ABSTRACT

Non-pharmaceutical interventions (NPIs) are important for mitigating the spread of infectious diseases as long as no vaccination or effective medical treatment is available. We assess the effectiveness of the sets of non-pharmaceutical interventions that were in place during the course of the Coronavirus disease 2019 (Covid-19) pandemic in Germany. Our results are based on hybrid models, combining SIR-type models on local scales with spatial resolution. In order to account for the age dependence of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), we include realistic prepandemic and recently recorded contact patterns between age groups. Non-pharmaceutical interventions are modeled through changed contact patterns, improved isolation, or reduced infectiousness, e.g., when masks are worn. In order to account for spatial heterogeneity, we use a graph approach, and we include high-quality information on commuting activities combined with traveling information from social networks. The remaining uncertainty is accounted for by a large number of randomized simulation runs. Based on the derived factors for the effectiveness of different non-pharmaceutical interventions over the past months, we provide different forecast scenarios for the upcoming time.
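The local building block of such hybrid models, an SIR-type compartment model, can be sketched in a few lines; an NPI enters as a reduced contact rate. A forward-Euler toy example with normalized population fractions (parameter values are illustrative, not those fitted in the paper):

```python
def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Forward-Euler integration of the SIR model on population
    fractions; an NPI is modeled as a reduced contact rate beta."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# same disease, with and without a contact-reducing intervention
no_npi = sir(beta=0.30, gamma=0.10, s0=0.99, i0=0.01, days=100)
with_npi = sir(beta=0.12, gamma=0.10, s0=0.99, i0=0.01, days=100)
```

Cutting the contact rate lowers the reproduction number (beta/gamma drops from 3.0 to 1.2) and thus the cumulative number of recovered, i.e., ever-infected, individuals; the full models in the paper extend this with age groups and a commuting graph.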


Subjects
COVID-19, Communicable Disease Control, Models, Statistical, Social Network Analysis, Spatial Analysis, Age Factors, COVID-19/prevention & control, COVID-19/transmission, Communicable Disease Control/methods, Communicable Disease Control/standards, Communicable Disease Control/statistics & numerical data, Germany, Humans
15.
IEEE Trans Cybern ; 51(7): 3602-3615, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33175688

ABSTRACT

Conventional nonlinear subspace learning techniques (e.g., manifold learning) usually introduce drawbacks in explainability (explicit mapping), cost effectiveness (linearization), generalization capability (out-of-sample), and representability (spatial-spectral discrimination). To overcome these shortcomings, a novel linearized subspace analysis technique with spatial-spectral manifold alignment is developed for semisupervised hyperspectral dimensionality reduction (HDR), called joint and progressive subspace analysis (JPSA). JPSA learns a high-level, semantically meaningful, joint spatial-spectral feature representation from hyperspectral (HS) data by: 1) jointly learning latent subspaces and a linear classifier to find an effective projection direction favorable for classification; 2) progressively searching several intermediate states of subspaces to approach an optimal mapping from the original space to a potentially more discriminative subspace; and 3) spatially and spectrally aligning a manifold structure in each learned latent subspace in order to preserve the same or similar topological properties between the compressed data and the original data. A simple but effective classifier, nearest neighbor (NN), is explored as a potential application for validating the algorithm performance of different HDR approaches. Extensive experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely used HS datasets: 1) Indian Pines (92.98%) and 2) the University of Houston (86.09%), in comparison with previous state-of-the-art HDR methods. The demo of the baseline work (ECCV 2018) is openly available at https://github.com/danfenghong/ECCV2018_J-Play.
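The NN classifier used for validation simply assigns each test sample the label of its closest training sample, optionally after projecting both into a reduced subspace, which is how dimensionality-reduction quality is compared across HDR methods. A minimal sketch (the projection matrix `P` stands in for a learned subspace and is hypothetical):

```python
import numpy as np

def nn_classify(train_X, train_y, test_X, P=None):
    """1-nearest-neighbour classification in an (optionally projected)
    feature space; P plays the role of a learned subspace projection."""
    if P is not None:
        train_X, test_X = train_X @ P, test_X @ P
    # pairwise squared Euclidean distances: (n_test, n_train)
    d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    return train_y[d.argmin(axis=1)]

train_X = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [5.0, 5.0]])
train_y = np.array([0, 0, 1, 1])
pred = nn_classify(train_X, train_y, np.array([[0.5, 0.2], [4.6, 4.9]]))
```

Because the classifier itself has no tunable parameters, the classification accuracies reported (e.g., 92.98% on Indian Pines) reflect the quality of the learned subspace rather than of the classifier.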

16.
ISPRS J Photogramm Remote Sens ; 170: 1-14, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33299267

ABSTRACT

Existing techniques for 3-D reconstruction of buildings from SAR images are mostly based on multibaseline SAR interferometry, such as PSI and SAR tomography (TomoSAR). However, these techniques require tens of images for a reliable reconstruction, which limits their application in various scenarios, such as emergency response. Therefore, alternatives that use a single SAR image and building footprints from GIS data show great potential for 3-D reconstruction. The combination of GIS data and SAR images requires a precise registration, which is challenging due to the unknown terrain height and the difficulty of finding and extracting correspondences. In this paper, we propose a framework to automatically register GIS building footprints to a SAR image by exploiting the features representing the intersection of the ground and visible building facades: specifically, the near-range boundaries in the building polygons and the double-bounce lines in the SAR image. Based on those features, the two data sets are registered progressively at multiple resolutions, allowing the algorithm to cope with variations in the local terrain. The proposed framework was tested in Berlin using one TerraSAR-X High Resolution SpotLight image and GIS building footprints of the area. Compared to the ground truth, the proposed algorithm reduced the average distance error from 5.91 m before registration to -0.08 m, and the standard deviation from 2.77 m to 1.12 m. Such accuracy, better than half of the typical urban floor height (3 m), is significant for precise building height reconstruction on a large scale. The proposed registration framework has great potential for assisting SAR image interpretation in typical urban areas and building model reconstruction from SAR images.

17.
ISPRS J Photogramm Remote Sens ; 167: 12-23, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32904376

ABSTRACT

This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation imagery, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, is openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment and poor discriminative information, as well as the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, learning to transfer more discriminative information from a small-scale hyperspectral image (HSI) into a classification task using large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features on top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.

18.
Comput Med Imaging Graph ; 84: 101765, 2020 09.
Artigo em Inglês | MEDLINE | ID: mdl-32810817

RESUMO

Dermoscopic images are widely used for melanoma detection. Many existing works based on traditional classification methods and deep learning models have been proposed for automatic skin lesion analysis. The traditional classification methods use hand-crafted features as input. However, owing to the strong visual similarity between different classes of skin lesions and complex skin conditions, hand-crafted features are not discriminative enough and fail in many cases. Recently, deep convolutional neural networks (CNNs) have gained popularity because they can automatically learn optimal features during the training phase. Different from existing works, this paper proposes a novel mid-level feature learning method for the skin lesion classification task. In this method, skin lesion segmentation is first performed to detect the regions of interest (ROIs) in skin lesion images. Next, pretrained neural networks, including ResNet and DenseNet, are used as feature extractors for the ROI images. Instead of using the extracted features directly as the input of classifiers, the proposed method obtains mid-level feature representations by exploiting the relationships among different image samples through distance metric learning. The learned feature representation is a soft discriminative descriptor with greater tolerance of hard samples, and hence more robust to large intra-class differences and inter-class similarities. Experimental results demonstrate the advantages of the proposed mid-level features, and the proposed method achieves state-of-the-art performance compared with existing CNN-based methods.
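The notion of a distance-based mid-level descriptor can be sketched as follows: instead of feeding a deep feature vector to the classifier directly, describe each sample by its (softened) distances to a set of reference samples, so the representation encodes relationships between samples rather than raw activations. This is an illustrative simplification under assumed names; the paper's metric is learned, whereas here a plain Euclidean distance stands in for it.

```python
import numpy as np

def midlevel_descriptor(feat, refs, temperature=1.0):
    # Distances from one deep feature to a set of reference features,
    # converted to a soft-assignment descriptor via a softmax over
    # negative distances (closer reference -> larger weight).
    d = np.linalg.norm(np.asarray(refs) - np.asarray(feat), axis=1)
    logits = -d / temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

Because the descriptor is a smooth distribution over references rather than a hard nearest-neighbor assignment, hard samples near class boundaries degrade gracefully instead of flipping to the wrong class outright.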


Assuntos
Aprendizado de Máquina , Melanoma , Humanos , Melanoma/diagnóstico por imagem , Redes Neurais de Computação
19.
ISPRS J Photogramm Remote Sens ; 166: 333-346, 2020 Aug.
Artigo em Inglês | MEDLINE | ID: mdl-32747852

RESUMO

Optical remote sensing imagery is at the core of many Earth observation activities. The regular, consistent, and global-scale nature of satellite data is exploited in many applications, such as cropland monitoring, climate change assessment, land-cover and land-use classification, and disaster assessment. However, one main problem severely affects the temporal and spatial availability of surface observations: cloud cover. The task of removing clouds from optical images has been the subject of study for decades. The advent of the Big Data era in satellite remote sensing opens new possibilities for tackling the problem with powerful data-driven deep learning methods. In this paper, a deep residual neural network architecture is designed to remove clouds from multispectral Sentinel-2 imagery. SAR-optical data fusion is used to exploit the synergistic properties of the two imaging systems to guide the image reconstruction. Additionally, a novel cloud-adaptive loss is proposed to maximize the retention of original information. The network is trained and tested on a globally sampled dataset comprising real cloudy and cloud-free images. The proposed setup makes it possible to remove even optically thick clouds by reconstructing an optical representation of the underlying land surface structure.
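One plausible reading of a cloud-adaptive loss is a mask-weighted reconstruction objective: over cloudy pixels the network output is compared with a cloud-free reference, while over clear pixels it is compared with the original input, so information already present is retained. The sketch below encodes that idea with an L1 penalty; the function name, the L1 choice, and the weighting scheme are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def cloud_adaptive_loss(pred, clear_ref, cloudy_input, cloud_mask, lam=1.0):
    # Cloudy pixels (mask == 1): match the cloud-free reference.
    # Clear pixels (mask == 0): stay close to the original input,
    # so existing surface information is not altered.
    cloudy_term = np.abs(pred - clear_ref) * cloud_mask
    clear_term = np.abs(pred - cloudy_input) * (1.0 - cloud_mask)
    return (cloudy_term.sum() + lam * clear_term.sum()) / cloud_mask.size
```

A prediction that reproduces the reference under clouds and the input elsewhere incurs zero loss, which is exactly the retention behavior the abstract emphasizes.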

20.
ISPRS J Photogramm Remote Sens ; 163: 152-170, 2020 May.
Artigo em Inglês | MEDLINE | ID: mdl-32377033

RESUMO

Human settlement extent (HSE) information is a valuable indicator of worldwide urbanization as well as the resulting human pressure on the natural environment. Therefore, mapping HSE is critical for various environmental issues at local, regional, and even global scales. This paper presents a deep-learning-based framework to automatically map HSE from multispectral Sentinel-2 data using regionally available geo-products as training labels. A straightforward, simple, yet effective fully convolutional network-based architecture, Sen2HSE, is implemented as an example for semantic segmentation within the framework. The framework is validated against both manually labelled check points distributed evenly over the test areas and the OpenStreetMap building layer. The HSE mapping results were extensively compared with several baseline products in order to thoroughly evaluate the effectiveness of the proposed HSE mapping framework. The HSE mapping capability is consistently demonstrated over 10 representative areas across the world. We also present one regional-scale and one country-wide HSE mapping example from our framework to show the potential for upscaling. The results of this study contribute to generalizing the applicability of CNN-based approaches for large-scale urban mapping to cases where no up-to-date and accurate ground truth is available, as well as to the subsequent monitoring of global urbanization.
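The check-point validation step can be sketched as a simple accuracy computation: compare the predicted HSE map against a set of manually labelled reference points at known raster positions. This is a generic illustration with assumed names; the paper's evaluation also involves the OpenStreetMap building layer and baseline product comparisons, which are not modeled here.

```python
import numpy as np

def check_point_accuracy(pred_map, check_points):
    # check_points: list of (row, col, reference_label) tuples
    # sampled evenly over the test area.
    hits = sum(int(pred_map[r, c] == y) for r, c, y in check_points)
    return hits / len(check_points)
```

Reporting accuracy over evenly distributed check points avoids the spatial bias that a single densely labelled patch would introduce.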
