Results 1 - 20 of 33
1.
Sensors (Basel) ; 24(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275491

ABSTRACT

In maritime transportation, a ship's draft survey serves as a primary method for weighing bulk cargo. The accuracy of the ship's draft reading determines the fairness of bulk cargo transactions. Human visual-based draft reading methods face issues such as safety concerns, high labor costs, and subjective interpretation. Therefore, some image processing methods are used to achieve automatic draft reading. However, due to the limitations in the spectral characteristics of RGB images, existing image processing methods are susceptible to water surface environmental interference, such as reflections. To solve this issue, we obtained and annotated 524 multispectral images of a ship's draft as the research dataset, marking the first application of combined NIR information and RGB images to automatic draft reading tasks. Additionally, a dual-branch backbone named BIF is proposed to extract and combine spectral information from RGB and NIR images. The backbone network can be combined with existing segmentation and detection heads to perform waterline segmentation and draft detection. By replacing the original ResNet-50 backbone of YOLOv8, we achieved a mAP of 99.2% in the draft detection task. Similarly, combining UPerNet with our dual-branch backbone, the mIoU of the waterline segmentation task was improved from 98.9% to 99.3%. The error of the draft reading is less than ±0.01 m, confirming the efficacy of our method for automatic draft reading tasks.
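A minimal sketch of the dual-branch RGB+NIR fusion idea described in this abstract, assuming a PyTorch environment; the branch widths, layer counts, and fusion by channel concatenation are illustrative assumptions, not the authors' BIF architecture.

```python
import torch
import torch.nn as nn

class DualBranchBackbone(nn.Module):
    """Toy two-branch feature extractor: one stem for RGB, one for NIR,
    fused by channel concatenation. Illustrative sketch only."""
    def __init__(self, out_channels=64):
        super().__init__()
        def stem(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(32),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, out_channels, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
        self.rgb_branch = stem(3)   # 3-channel RGB input
        self.nir_branch = stem(1)   # 1-channel NIR input
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, kernel_size=1)

    def forward(self, rgb, nir):
        f_rgb = self.rgb_branch(rgb)
        f_nir = self.nir_branch(nir)
        return self.fuse(torch.cat([f_rgb, f_nir], dim=1))

# Example: a 512x512 RGB image paired with its NIR channel
rgb = torch.randn(1, 3, 512, 512)
nir = torch.randn(1, 1, 512, 512)
features = DualBranchBackbone()(rgb, nir)
print(features.shape)  # torch.Size([1, 64, 128, 128])
```

In practice, such a fused feature map would be handed to a segmentation or detection head, as the abstract describes.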

2.
Sensors (Basel) ; 24(16)2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39205020

ABSTRACT

(1) Background: Yield-monitoring systems are widely used in grain crops but are less advanced for hay and forage. Current commercial systems are generally limited to weighing individual bales, limiting the spatial resolution of maps of hay yield. This study evaluated an Uncrewed Aerial Vehicle (UAV)-based imaging system to estimate hay yield. (2) Methods: Data were collected from three 0.4 ha plots and a 35 ha hay field of red clover and timothy grass in September 2020. A multispectral camera on the UAV captured images at 30 m (20 mm pixel⁻¹) and 50 m (35 mm pixel⁻¹) heights. Eleven Vegetation Indices (VIs) and five texture features were calculated from the images to estimate biomass yield. Multivariate regression models (VIs and texture features vs. biomass) were evaluated. (3) Results: Model R² values ranged from 0.31 to 0.68. (4) Conclusions: Despite strong correlations between standard VIs and biomass, challenges such as variable image resolution and clarity affected accuracy. Further research is needed before UAV-based yield estimation can provide accurate, high-resolution hay yield maps.
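A hedged sketch of the vegetation-index regression workflow summarized above, assuming scikit-learn and NumPy; the band values, the choice of NDVI and a simple ratio as predictors, and the plain linear model are illustrative assumptions rather than the paper's exact features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def ndvi(nir, red):
    """Normalized Difference Vegetation Index per sample."""
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical per-plot mean reflectances and measured biomass (t/ha)
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.15, size=60)
nir = rng.uniform(0.30, 0.60, size=60)
biomass = 2.0 + 8.0 * ndvi(nir, red) + rng.normal(0, 0.5, size=60)

X = np.column_stack([ndvi(nir, red), nir / red])   # two simple VIs as predictors
r2_scores = cross_val_score(LinearRegression(), X, biomass, cv=5, scoring="r2")
print("cross-validated R2:", r2_scores.mean().round(2))
```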


Subject(s)
Biomass , Remote Sensing Technology , Remote Sensing Technology/methods , Unmanned Aerial Devices , Crops, Agricultural/growth & development
3.
Sensors (Basel) ; 24(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001041

ABSTRACT

Hyperspectral imaging was used to predict the total polyphenol content in low-temperature-stressed tomato seedlings for the development of a multispectral image sensor. The spectral data with a full width at half maximum (FWHM) of 5 nm were merged to obtain FWHMs of 10 nm, 25 nm, and 50 nm using a commercial bandpass filter. Using the permutation importance method and regression coefficients, we developed least absolute shrinkage and selection operator (Lasso) regression models by setting the band number to ≥11, ≤10, and ≤5 for each FWHM. The regression model using 56 bands with an FWHM of 5 nm resulted in an R² of 0.71, an RMSE of 3.99 mg/g, and an RE of 9.04%, whereas the model developed using the spectral data of only 5 bands with an FWHM of 25 nm (at 519.5 nm, 620.1 nm, 660.3 nm, 719.8 nm, and 980.3 nm) provided an R² of 0.62, an RMSE of 4.54 mg/g, and an RE of 10.3%. These results show that a multispectral image sensor can be developed to predict the total polyphenol content of tomato seedlings subjected to low-temperature stress, paving the way for energy saving and low-temperature stress damage prevention in vegetable seedling production.
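A minimal sketch of Lasso-based band selection for a content-regression task of this kind, assuming scikit-learn; the band count, synthetic spectra, and informative-band positions are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_samples, n_bands = 120, 56              # e.g., 56 merged bands at 5 nm FWHM
X = rng.normal(size=(n_samples, n_bands))
true_coef = np.zeros(n_bands)
true_coef[[10, 22, 30, 41, 52]] = [1.5, -2.0, 0.8, 1.2, -0.5]   # only a few informative bands
y = X @ true_coef + rng.normal(scale=0.3, size=n_samples)       # e.g., polyphenol content

lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)    # bands retained by the L1 penalty
print("selected band indices:", selected)
print("R2 on training data:", round(lasso.score(X, y), 2))
```

The bands surviving the L1 penalty are the natural candidates for a reduced multispectral sensor, which is the reasoning the abstract follows.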


Subject(s)
Hyperspectral Imaging , Polyphenols , Seedlings , Solanum lycopersicum , Solanum lycopersicum/chemistry , Solanum lycopersicum/growth & development , Polyphenols/analysis , Seedlings/chemistry , Hyperspectral Imaging/methods , Cold Temperature
4.
Lab Anim Res ; 40(1): 24, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877529

ABSTRACT

BACKGROUND: Immune profiling has become an important tool for identifying predictive, prognostic and response biomarkers for immune checkpoint inhibitors from the tumor microenvironment (TME). We aimed to build a multiplex immunofluorescence (mIF) panel to apply to formalin-fixed, paraffin-embedded tissues from mouse tumors and to explore the programmed cell death protein 1/programmed cell death 1 ligand 1 (PD-1/PD-L1) axis. RESULTS: An automated eight-color mIF panel was evaluated to study the TME using seven antibodies, including cytokeratin 19, CD3e, CD8a, CD4, PD-1, PD-L1, F4-80 and DAPI, and was then applied to six mouse lung adenocarcinoma samples. Cell phenotypes were quantified by software to explore the co-localization and spatial distribution of immune cells within the TME. This mouse panel was successfully optimized and applied to a small cohort of mouse lung adenocarcinoma cases. Image analysis showed a sparse immune cell expression pattern in this cohort. From the spatial analysis we found that T cells and macrophages expressing PD-L1 were close to the malignant cells and other immune cells. CONCLUSIONS: Comprehensive immune profiling using mIF in translational studies improves our ability to correlate the PD-1/PD-L1 axis and the spatial distribution of lymphocytes and macrophages in mouse lung cancer to provide new cues for immunotherapy that can be translated to human tumors for cancer intervention.

5.
Front Plant Sci ; 15: 1333089, 2024.
Article in English | MEDLINE | ID: mdl-38601301

ABSTRACT

Timely and accurate estimation of the cotton seedling emergence rate is of great significance to cotton production. This study explored the feasibility of drone-based remote sensing in monitoring cotton seedling emergence. The visible and multispectral images of cotton seedlings with 2 - 4 leaves in 30 plots were synchronously obtained by drones. The acquired images included cotton seedlings, bare soil, mulching films, and PE drip tapes. After constructing 17 visible VIs and 14 multispectral VIs, three strategies were used to separate cotton seedlings from the images: (1) Otsu's thresholding was performed on each vegetation index (VI); (2) Key VIs were extracted based on the results of (1), and the Otsu-intersection method and three machine learning methods were used to classify cotton seedlings, bare soil, mulching films, and PE drip tapes in the images; (3) Machine learning models were constructed using all VIs and validated. Finally, the models constructed based on two modeling strategies [Otsu-intersection (OI) and machine learning (Support Vector Machine (SVM), Random Forest (RF), and K-nearest neighbor (KNN))] showed higher accuracy. Therefore, these models were selected to estimate the cotton seedling emergence rate, and the estimates were compared with the manually measured emergence rate. The results showed that multispectral VIs, especially NDVI, RVI, SAVI, EVI2, OSAVI, and MCARI, had higher crop seedling extraction accuracy than visible VIs. After fusing all VIs or the key VIs extracted based on Otsu's thresholding, the binary image purity was greatly improved. Among the fusion methods, the Key VIs-OI and All VIs-KNN methods yielded less noise and smaller errors, with an RMSE (root mean squared error) as low as 2.69% and an MAE (mean absolute error) as low as 2.15%. Therefore, fusing multiple VIs can increase crop image segmentation accuracy. This study provides a new method for rapidly monitoring the crop seedling emergence rate in the field, which is of great significance for the development of modern agriculture.
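A small sketch of Otsu's thresholding applied to a vegetation index, the first of the three strategies above, assuming scikit-image; the synthetic NDVI raster stands in for the drone imagery.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
# Synthetic NDVI raster: soil/film background around 0.1, seedlings around 0.6
ndvi = rng.normal(0.1, 0.05, size=(200, 200))
ndvi[60:80, 60:140] = rng.normal(0.6, 0.05, size=(20, 80))

t = threshold_otsu(ndvi)            # global Otsu threshold on the VI histogram
seedling_mask = ndvi > t            # binary image of candidate seedling pixels
coverage = seedling_mask.mean()     # fraction of pixels classified as seedlings
print(f"Otsu threshold: {t:.2f}, seedling cover: {coverage:.1%}")
```

Intersecting such binary masks from several indices (the "Otsu-intersection" idea) suppresses pixels that only one index misclassifies, which is why the fused masks in the abstract are purer.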

6.
Front Plant Sci ; 14: 1124939, 2023.
Article in English | MEDLINE | ID: mdl-37426958

ABSTRACT

The field of computer vision has shown great potential for the identification of crops at large scales based on multispectral images. However, the challenge in designing crop identification networks lies in striking a balance between accuracy and a lightweight framework. Furthermore, there is a lack of accurate recognition methods for non-large-scale crops. In this paper, we propose an improved encoder-decoder framework based on DeepLab v3+ to accurately identify crops with different planting patterns. The network employs ShuffleNet v2 as the backbone to extract features at multiple levels. The decoder module integrates a convolutional block attention mechanism that combines channel and spatial attention to fuse attention features across the channel and spatial dimensions. We establish two datasets, DS1 and DS2, where DS1 is obtained from areas with large-scale crop planting, and DS2 is obtained from areas with scattered crop planting. On DS1, the improved network achieves a mean intersection over union (mIoU) of 0.972, an overall accuracy (OA) of 0.981, and a recall of 0.980, indicating significant improvements of 7.0%, 5.0%, and 5.7%, respectively, compared to the original DeepLab v3+. On DS2, the improved network improves the mIoU, OA, and recall by 5.4%, 3.9%, and 4.4%, respectively. Notably, the number of parameters and giga floating-point operations (GFLOPs) required by the proposed Deep-agriNet are significantly smaller than those of DeepLab v3+ and other classic networks. Our findings demonstrate that Deep-agriNet performs better in identifying crops at different planting scales, and can serve as an effective tool for crop identification in various regions and countries.
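An illustrative PyTorch sketch of a convolutional block attention module of the kind mentioned above (channel attention followed by spatial attention); the reduction ratio and 7x7 kernel are conventional defaults, not values taken from the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style sketch)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: pool over space, share an MLP, combine, gate with sigmoid
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: pool over channels, convolve, gate with sigmoid
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```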

7.
Sensors (Basel) ; 23(14)2023 Jul 17.
Article in English | MEDLINE | ID: mdl-37514768

ABSTRACT

Rice lodging causes a loss of yield and leads to lower-quality rice. In Japan, Koshihikari is the most popular rice variety, and it has been widely cultivated for many years despite its susceptibility to lodging. Reducing basal fertilizer is recommended when the available nitrogen in soil (SAN) exceeds the optimum level (80-200 mg N kg⁻¹). However, many commercial farmers prefer to simultaneously apply one-shot basal fertilizer at transplant time. This study investigated the relationship between the rice lodging and SAN content by assessing their spatial distributions from unmanned aircraft system (UAS) images in a Koshihikari paddy field where one-shot basal fertilizer was applied. We analyzed the severity of lodging using the canopy height model and spatially clarified a heavily lodged area and a non-lodged area. For the SAN assessment, we selected green and red band pixel digital numbers from multispectral images and developed a SAN estimating equation by regression analysis. The estimated SAN values were rasterized and compiled into a 1 m mesh to create a soil fertility map. The heavily lodged area roughly coincided with the higher SAN area. A negative correlation was observed between the rice inclination angle and the estimated SAN, and rice lodging occurred even within the optimum SAN level. These results show that the amount of one-shot basal fertilizer applied to Koshihikari should be reduced when absorbable nitrogen (SAN + fertilizer nitrogen) exceeds 200 mg N kg⁻¹.
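A hedged sketch of fitting a SAN estimating equation from two band digital numbers and gridding the result onto a coarser mesh, assuming NumPy; the linear form of the equation, the synthetic values, and the 5x5-pixel mesh cells are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical calibration samples: green/red digital numbers and lab-measured SAN (mg N kg^-1)
green = rng.uniform(0.2, 0.6, 40)
red = rng.uniform(0.1, 0.5, 40)
san = 150 + 120 * green - 180 * red + rng.normal(0, 10, 40)

# Least-squares fit of SAN ~ a*green + b*red + c
A = np.column_stack([green, red, np.ones_like(green)])
coef, *_ = np.linalg.lstsq(A, san, rcond=None)

# Apply the equation pixel-wise, then aggregate to a coarser mesh by block averaging
g_img = rng.uniform(0.2, 0.6, (100, 100))
r_img = rng.uniform(0.1, 0.5, (100, 100))
san_img = coef[0] * g_img + coef[1] * r_img + coef[2]
mesh = san_img.reshape(20, 5, 20, 5).mean(axis=(1, 3))   # 5x5 pixel blocks per mesh cell
print(mesh.shape, round(float(mesh.mean()), 1))
```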

8.
Biomolecules ; 13(7)2023 07 11.
Article in English | MEDLINE | ID: mdl-37509140

ABSTRACT

A quantitative histology of maize stems is needed to study the role of tissues and of their chemical composition in plant development and end-use quality. In the present work, a new methodology is proposed to show and quantify the spatial variability of tissue composition in plant organs and to statistically compare different samples while accounting for biological variability. Multispectral UV/visible autofluorescence imaging was used to acquire a macroscale image series based on the fluorescence of phenolic compounds in the cell wall. A series of 40 multispectral large images of whole internode sections taken from four maize inbred lines were compared. The series consisted of more than 1 billion pixels and 11 autofluorescence channels. Principal Component Analysis was adapted (named large PCA), and score image montages at different scales were built. Large PCA score distributions were proposed as quantitative features to compare the inbred lines. Variations in the tissue fluorescence were clearly displayed in the score images. General intensity variations were identified. Rind vascular bundles were differentiated from other tissues due to their lignin fluorescence after visible excitation, while variations within the pith parenchyma were shown via UV fluorescence. They depended on the inbred line, as revealed by the first four large PCA score distributions. Autofluorescence macroscopy combined with an adapted analysis of a series of large images is promising for investigating the spatial heterogeneity of tissue composition between and within organ sections. The method is easy to implement and can be easily extended to other multi- and hyperspectral imaging techniques. The score distributions enable a global comparison of the images and an analysis of the inbred lines' effect. The interpretation of the tissue autofluorescence needs to be further investigated using complementary spatially resolved techniques.
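A minimal sketch of principal component analysis on an image series too large to fit in memory at once, assuming scikit-learn's IncrementalPCA as a stand-in for the "large PCA" adaptation; batch sizes and the synthetic 11-channel pixels are placeholders.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

n_channels = 11                        # e.g., 11 autofluorescence channels per pixel
ipca = IncrementalPCA(n_components=4)  # keep the first four score images

rng = np.random.default_rng(4)
# Feed the pixels tile by tile so the full billion-pixel series never sits in memory
for _ in range(10):
    tile = rng.normal(size=(50_000, n_channels))
    ipca.partial_fit(tile)

# Project one tile to get its scores (flattened here); reshape to (rows, cols, 4) in practice
scores = ipca.transform(rng.normal(size=(50_000, n_channels)))
print(scores.shape, ipca.explained_variance_ratio_.round(3))
```

Histograms of these per-pixel scores are the kind of "score distributions" that can then be compared across inbred lines.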


Subject(s)
Zea mays , Principal Component Analysis
9.
Plants (Basel) ; 12(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37176880

ABSTRACT

The accurate, timely, and non-destructive estimation of maize total above-ground biomass (TAB) and theoretical biochemical methane potential (TBMP) at different phenological stages is a substantial part of agricultural remote sensing. The combination of UAV data and machine learning (ML) may be successfully applied to predicting maize TAB and TBMP; however, in the Nordic-Baltic region, these technologies are not fully exploited. Therefore, in this study, during the maize growing period, we tracked unmanned aerial vehicle (UAV)-based multispectral bands (blue, red, green, red edge, and infrared) at the main phenological stages. In the next step, we calculated UAV-based vegetation indices, which were combined with field measurements and different ML models, including generalized linear models, random forest, and support vector machines. The results showed that the best ML predictions were obtained during the maize blister (R2) to dough (R4) growth period, when the prediction models managed to explain 88-95% of TAB and 88-97% of TBMP variation. However, for practical usage by farmers, the earliest suitable timing for adequate TAB and TBMP prediction in the Nordic-Baltic area is stage V7-V10. We conclude that UAV techniques in combination with ML models were successfully applied for maize TAB and TBMP estimation, but similar research should be continued for further improvements.

10.
Sensors (Basel) ; 23(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36850938

ABSTRACT

The aim of fusing hyperspectral and multispectral images is to overcome the limitation of remote sensing hyperspectral sensors by improving their spatial resolution. This process, also known as hypersharpening, generates an unobserved high-spatial-resolution hyperspectral image. To this end, several hypersharpening methods have been developed; however, most of them do not consider the spectral variability phenomenon, and neglecting it may cause errors that reduce the spatial and spectral quality of the sharpened products. Recently, new approaches have been proposed to tackle this problem, particularly those based on spectral unmixing and using parametric models. Nevertheless, the reported methods need a large number of parameters to address spectral variability, which inevitably yields a higher computation time compared to standard hypersharpening methods. In this paper, a new hypersharpening method addressing spectral variability is introduced; it combines a spectral bundles-based method, namely Automated Extraction of Endmember Bundles (AEEB), with the sparsity-based method called Sparse Unmixing by Variable Splitting and Augmented Lagrangian (SUnSAL). This new method, called Hyperspectral Super-resolution with Spectra Bundles dealing with Spectral Variability (HSB-SV), was tested on both synthetic and real data. Experimental results showed that HSB-SV provides sharpened products with higher spectral and spatial reconstruction fidelity and very low computational complexity compared to other methods dealing with spectral variability, which are the main contributions of the designed method.

11.
Front Neurosci ; 16: 1031546, 2022.
Article in English | MEDLINE | ID: mdl-36325480

ABSTRACT

The surface spectral reflectance of an object is the key factor for high-fidelity color reproduction and material analysis, and spectral acquisition is the basis of its applications. Based on the theoretical imaging model of a digital camera, the spectral reflectance of any pixel in the image can be obtained through spectral reconstruction technology. This technology can avoid the application limitations of spectral cameras in open scenarios and obtain high-spatial-resolution multispectral images. However, the current spectral reconstruction algorithms are sensitive to exposure variations in the test images. That is, when the exposure of the test image is different from that of the training image, the reconstructed spectral curve of the test object will deviate from the real spectrum to varying degrees, which prevents the spectral data of the target object from being accurately reconstructed. This article proposes an optimized method for spectral reconstruction based on data augmentation and attention mechanisms within the current deep learning-based spectral reconstruction framework. The proposed method is exposure invariant and adapts to open environments in which the light changes easily and the illumination is non-uniform. Thus, the robustness and reconstruction accuracy of the spectral reconstruction model in practical applications are improved. The experiments show that the proposed method can accurately reconstruct the shape of the spectral reflectance curve of the test object under different test exposure levels, and its spectral reconstruction error at different exposure levels is significantly lower than that of existing methods, which verifies the proposed method's effectiveness and superiority.
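A small sketch of exposure-style data augmentation for spectral-reconstruction training of the kind the abstract alludes to, assuming NumPy; the scaling range is an assumption and the clipping mimics camera saturation.

```python
import numpy as np

def random_exposure(rgb, low=0.5, high=1.5, rng=None):
    """Scale an RGB image by a random exposure factor and clip to the sensor range."""
    rng = rng or np.random.default_rng()
    factor = rng.uniform(low, high)
    return np.clip(rgb * factor, 0.0, 1.0), factor

rng = np.random.default_rng(5)
rgb = rng.uniform(0.0, 0.8, size=(64, 64, 3))     # placeholder training image in [0, 1]
augmented, f = random_exposure(rgb, rng=rng)
print(f"exposure factor {f:.2f}, max value after clipping {augmented.max():.2f}")
```

Training on such exposure-perturbed copies encourages the reconstruction network to recover the same spectral shape regardless of the test image's exposure.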

12.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684864

ABSTRACT

The deployment of any UAV application in precision agriculture involves the development of several tasks, such as path planning and route optimization, image acquisition, handling emergencies, and mission validation, to cite a few. UAV applications are also subject to common constraints, such as weather conditions, zonal restrictions, and so forth. The development of such applications requires the advanced software integration of different utilities, and this situation may dissuade practitioners from undertaking projects in the field of precision agriculture. This paper proposes the development of a Web and MATLAB-based application that integrates several services in the same environment. The first group of services deals with UAV mission creation and management. It provides several pieces of flight-condition information, such as weather conditions, the Kp index, air navigation maps, and aeronautical information services including Notices to Airmen (NOTAM). The second group deals with route planning and converts selected field areas on the map into an optimized UAV route, handling sub-routes for long journeys. The third group deals with multispectral image processing and vegetation index calculation and visualization. From a software development point of view, the app integrates several monolithic and independent programs around the MATLAB Runtime package with an automated and transparent data flow. Its main feature is the design of a plethora of executable MATLAB programs, especially for UAV route planning and optimization, image processing, and vegetation index calculation, and running them remotely.


Subject(s)
Agriculture , Remote Sensing Technology , Agriculture/methods , Data Collection , Image Processing, Computer-Assisted , Remote Sensing Technology/methods
13.
Spectrochim Acta A Mol Biomol Spectrosc ; 278: 121307, 2022 Oct 05.
Article in English | MEDLINE | ID: mdl-35567823

ABSTRACT

Multispectral transmission imaging provides a possibility for early breast cancer screening. Due to the strong scattering effect of the light source and the absorption characteristics of the material itself, the image signal is weak. The frame accumulation and demodulation technique can improve the accuracy of the image, but it brings a lot of redundant data. This paper proposes the "Two-dimensional Terraced Compression Method" and applies it to detecting heterogeneity contours in transmission images. An experiment was designed to prove its effectiveness. Four kinds of LEDs with different central wavelengths are respectively modulated as the light source to obtain the image sequences, and the Fast Fourier Transform (FFT) and frame accumulation are used to obtain single-wavelength images. The image is first low-pass filtered; the gray-level minimum in the image is then found, followed by the connected area within the influence domain of the gradient threshold. If the connected area meets the area threshold, it is used as an effective growth point, and the gray values in the connected area are reassigned. Otherwise, it is marked as an isolated point and the search returns to the next minimum. Finally, terraced compression is implemented on the image. This method not only reduces the redundancy of gray levels but also greatly improves the gradient information of the image; used as an image preprocessing algorithm, it can be combined with nonlinear filtering to detect the contour of heterogeneity.
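A hedged sketch of the frame-accumulation and FFT demodulation step mentioned above (recovering a single-wavelength image from a stack of frames captured under a sinusoidally modulated LED), assuming NumPy; the modulation frequency, frame count, and noise level are illustrative, and the terraced-compression step itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n_frames, h, w = 128, 64, 64
f_mod = 8                      # modulation frequency in cycles per acquisition window

# Synthetic stack: a weak modulated transmission signal buried in noise
t = np.arange(n_frames)
signal = rng.uniform(0.2, 1.0, size=(h, w))                 # per-pixel transmission amplitude
frames = (signal[None] * (1 + 0.5 * np.sin(2 * np.pi * f_mod * t / n_frames))[:, None, None]
          + rng.normal(0, 0.3, size=(n_frames, h, w)))

# FFT along the time axis and read out the magnitude at the modulation frequency;
# accumulating many frames raises the SNR of this demodulated single-wavelength image
spectrum = np.fft.fft(frames, axis=0)
demodulated = np.abs(spectrum[f_mod]) / n_frames
corr = np.corrcoef(demodulated.ravel(), signal.ravel())[0, 1]
print(demodulated.shape, round(float(corr), 2))
```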


Subject(s)
Algorithms , Physical Phenomena
14.
Sensors (Basel) ; 22(7)2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35408324

ABSTRACT

Sugarcane is the main industrial crop for sugar production, and its growth status is closely related to fertilizer, water, and light input. Unmanned aerial vehicle (UAV)-based multispectral imagery is widely used for high-throughput phenotyping, since it can rapidly predict crop vigor at field scale. This study focused on the potential of drone multispectral images in predicting canopy nitrogen concentration (CNC) and irrigation levels for sugarcane. An experiment was carried out in a sugarcane field with three irrigation levels and five fertilizer levels. Multispectral images at an altitude of 40 m were acquired during the elongating stage. Partial least square (PLS), backpropagation neural network (BPNN), and extreme learning machine (ELM) were adopted to establish CNC prediction models based on various combinations of band reflectance and vegetation indices. The simple ratio pigment index (SRPI), normalized pigment chlorophyll index (NPCI), and normalized green-blue difference index (NGBDI) were selected as model inputs due to their higher grey relational degree with the CNC and lower correlation between one another. The PLS model based on the five-band reflectance and the three vegetation indices achieved the best accuracy (Rv = 0.79, RMSEv = 0.11). Support vector machine (SVM) and BPNN were then used to classify the irrigation levels based on five spectral features which had high correlations with irrigation levels. SVM reached a higher accuracy of 80.6%. The results of this study demonstrated that high resolution multispectral images could provide effective information for CNC prediction and water irrigation level recognition for sugarcane crop.
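A minimal sketch of partial least squares regression from band reflectances and vegetation indices to canopy nitrogen concentration, assuming scikit-learn; the predictor set, the ratio indices, and the synthetic data are placeholders rather than the paper's SRPI/NPCI/NGBDI inputs.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n = 150
bands = rng.uniform(0.05, 0.6, size=(n, 5))           # five band reflectances per sample
vis = np.column_stack([                               # three simple indices as extra predictors
    bands[:, 3] / bands[:, 0],
    (bands[:, 3] - bands[:, 0]) / (bands[:, 3] + bands[:, 0]),
    (bands[:, 1] - bands[:, 2]) / (bands[:, 1] + bands[:, 2]),
])
X = np.hstack([bands, vis])
cnc = 1.2 + 2.5 * vis[:, 1] + rng.normal(0, 0.1, n)   # synthetic canopy N concentration

X_tr, X_te, y_tr, y_te = train_test_split(X, cnc, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
print("R2:", round(pls.score(X_te, y_te), 2),
      "RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
```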


Subject(s)
Saccharum , Edible Grain , Fertilizers , Nitrogen , Water
15.
Front Plant Sci ; 13: 730190, 2022.
Article in English | MEDLINE | ID: mdl-35283875

ABSTRACT

Corn seed materials of different quality were imaged, and a method for defect detection was developed based on a watershed algorithm combined with a two-pathway convolutional neural network (CNN) model. In this study, RGB and near-infrared (NIR) images were acquired with a multispectral camera to train the model, which proved effective in identifying defective and defect-free seeds, with an average accuracy of 95.63%, an average recall of 95.29%, and an F1 score (the harmonic mean of precision and recall) of 95.46%. Our proposed method was superior to the traditional method that employs a one-pathway CNN with 3-channel RGB images. At the same time, the influence of different parameter settings on the model training was studied. Finally, the application of the object detection method to corn seed defect detection, which may provide an effective tool for high-throughput quality control of corn seeds, was discussed.
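A compact sketch of watershed-based separation of touching objects, the segmentation stage named above, assuming scikit-image and SciPy; the synthetic blobs stand in for corn seeds, and the two-pathway CNN classification stage is not shown.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic binary image with two touching round "seeds"
yy, xx = np.mgrid[0:120, 0:120]
mask = ((yy - 60) ** 2 + (xx - 45) ** 2 < 25 ** 2) | ((yy - 60) ** 2 + (xx - 80) ** 2 < 25 ** 2)

# Distance transform, one marker per local maximum, then watershed to split the seeds
distance = ndi.distance_transform_edt(mask)
peaks = peak_local_max(distance, labels=mask.astype(int), min_distance=10)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=mask)
print("number of separated seeds:", labels.max())
```

Each separated region can then be cropped and passed to a classifier that decides whether the seed is defective.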

16.
Sensors (Basel) ; 21(18)2021 Sep 08.
Article in English | MEDLINE | ID: mdl-34577211

ABSTRACT

Current advancements in sensor technology bring new possibilities in multi- and hyperspectral imaging. Real-life use cases which can benefit from such imagery span various domains, including precision agriculture, chemistry, biology, medicine, land cover applications, management of natural resources, detecting natural disasters, and more. To extract value from such highly dimensional data capturing up to hundreds of spectral bands in the electromagnetic spectrum, researchers have been developing a range of image processing and machine learning analysis pipelines to process this kind of data as efficiently as possible. To this end, multi- and hyperspectral analysis has bloomed and become an exciting research area which can enable the faster adoption of this technology in practice, also when such algorithms are deployed in hardware-constrained and extreme execution environments, e.g., on-board imaging satellites.


Subject(s)
Algorithms , Hyperspectral Imaging , Agriculture , Image Processing, Computer-Assisted , Machine Learning
17.
J Biomed Opt ; 26(9)2021 09.
Article in English | MEDLINE | ID: mdl-34541836

ABSTRACT

SIGNIFICANCE: Effective vein visualization is critically important for several clinical procedures, such as venous blood sampling and intravenous injection. Existing technologies using infrared devices or ultrasound rely on professional equipment and are not suitable for daily medical care. A regression-based vein visualization method is proposed. AIM: We visualize veins from conventional RGB images to provide assistance in venipuncture procedures as well as the clinical diagnosis of some venous insufficiencies. APPROACH: The RGB images taken by digital cameras are first transformed to spectral reflectance images using Wiener estimation. Multiple regression analysis is then applied to derive the relationship between spectral reflectance and the concentrations of pigments. Monte Carlo simulation is adopted to obtain prior information. Finally, vein patterns are visualized from the spatial distribution of pigments. To minimize the effect of illumination on skin color, light correction and shading removal operations are performed in advance. RESULTS: Experimental results from the inner forearms of 60 subjects show the effectiveness of the regression-based method. Subjective and objective evaluations demonstrate that the clarity and completeness of vein patterns can be improved by light correction and shading removal. CONCLUSIONS: Vein patterns can be successfully visualized from RGB images without any professional equipment. The proposed method can assist in venipuncture procedures. It also shows promising potential for use in the clinical diagnosis and treatment of some venous insufficiencies.
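A minimal sketch of Wiener estimation of spectral reflectance from RGB values, the first step of the approach above, assuming NumPy; the training spectra, the 3x31 camera sensitivity matrix, and the small ridge term are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)
n_train, n_bands = 200, 31                                 # reflectance sampled at 31 wavelengths
R = rng.uniform(0.0, 1.0, size=(n_train, n_bands))         # training reflectance spectra
S = rng.uniform(0.0, 1.0, size=(3, n_bands))               # assumed camera spectral sensitivities
C = R @ S.T                                                # corresponding camera RGB responses

# Wiener estimation matrix: W = K_rc * K_cc^-1, with a small ridge term for stability
K_rc = R.T @ C / n_train
K_cc = C.T @ C / n_train
W = K_rc @ np.linalg.inv(K_cc + 1e-6 * np.eye(3))

# Estimate the reflectance spectrum of a new pixel from its RGB response
r_true = rng.uniform(0.0, 1.0, size=n_bands)
r_est = W @ (S @ r_true)
print("correlation with true spectrum:", round(float(np.corrcoef(r_true, r_est)[0, 1]), 2))
```

The estimated per-pixel spectra are what the subsequent regression against pigment concentrations would operate on.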


Subject(s)
Lighting , Veins , Forearm , Humans , Skin Pigmentation , Ultrasonography , Veins/diagnostic imaging
18.
Sensors (Basel) ; 21(16)2021 Aug 10.
Article in English | MEDLINE | ID: mdl-34450836

ABSTRACT

Since multispectral images (MSIs) and RGB images (RGBs) have significantly different definitions and severely imbalanced information entropies, the spectrum transformation between them, especially reconstructing MSIs from RGBs, is a big challenge. We propose a new approach, the Taiji Generative Neural Network (TaijiGNN), to address the above-mentioned problems. TaijiGNN consists of two generators, G_MSI and G_RGB. These two generators establish two cycles by connecting one generator's output with the other's input. One cycle translates the RGBs into the MSIs and converts the MSIs back to the RGBs; the other cycle does the reverse. The cycles turn the problem of comparing two different-domain images into comparing same-domain images. In the same domain, there are neither domain definition problems nor severely underconstrained challenges, such as reconstructing MSIs from RGBs. Moreover, based on several investigations and validations, we designed an effective multilayer perceptron (MLP) to substitute for the convolutional neural network (CNN) when implementing the generators, making them simple and high performance. Furthermore, we removed the two identity losses of the traditional CycleGAN to fit the spectral image translation, and added two consistency losses comparing paired images to improve the two generators' training effectiveness. In addition, during the training process, similar to the polarities Yang and Yin in the ancient Chinese philosophy of Taiji, the two generators update their neural network parameters by interacting with and complementing each other until they both converge and the system reaches a dynamic balance. Several qualitative and quantitative experiments were conducted on two classical datasets, CAVE and ICVL, to evaluate the performance of our proposed approach. Promising results were obtained with a well-designed, simple MLP requiring a minimal amount of training data. Specifically, on the CAVE dataset, we needed only half of the dataset for training to achieve comparable state-of-the-art results; for the ICVL dataset, we used only one-fifth of the dataset to train the model, yet obtained state-of-the-art results.
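A hedged PyTorch sketch of two small MLP generators trained with paired and cycle-consistency losses between RGB and multispectral pixel vectors; the layer sizes, the 31-band assumption, and the plain L1 losses are illustrative simplifications, not the TaijiGNN design.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

G_MSI = mlp(3, 31)    # RGB pixel -> 31-band spectrum
G_RGB = mlp(31, 3)    # 31-band spectrum -> RGB pixel
opt = torch.optim.Adam(list(G_MSI.parameters()) + list(G_RGB.parameters()), lr=1e-3)
l1 = nn.L1Loss()

# Toy paired pixel vectors standing in for CAVE/ICVL samples
rgb = torch.rand(256, 3)
msi = torch.rand(256, 31)

for step in range(200):
    opt.zero_grad()
    fake_msi = G_MSI(rgb)
    fake_rgb = G_RGB(msi)
    # Paired (supervised) losses plus the two cycle-consistency losses
    loss = (l1(fake_msi, msi) + l1(fake_rgb, rgb)
            + l1(G_RGB(fake_msi), rgb) + l1(G_MSI(fake_rgb), msi))
    loss.backward()
    opt.step()
print("final loss:", round(loss.item(), 3))
```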


Subject(s)
Tai Ji , Neural Networks, Computer
19.
Sensors (Basel) ; 21(11)2021 May 21.
Article in English | MEDLINE | ID: mdl-34064128

ABSTRACT

The spectral mismatch between a multispectral (MS) image and its corresponding panchromatic (PAN) image affects the pansharpening quality, especially for WorldView-2 data. To handle this problem, a pansharpening method based on graph regularized sparse coding (GRSC) and adaptive coupled dictionary is proposed in this paper. Firstly, the pansharpening process is divided into three tasks according to the degree of correlation among the MS and PAN channels and the relative spectral response of WorldView-2 sensor. Then, for each task, the image patch set from the MS channels is clustered into several subsets, and the sparse representation of each subset is estimated through the GRSC algorithm. Besides, an adaptive coupled dictionary pair for each task is constructed to effectively represent the subsets. Finally, the high-resolution image subsets for each task are obtained by multiplying the estimated sparse coefficient matrix by the corresponding dictionary. A variety of experiments are conducted on the WorldView-2 data, and the experimental results demonstrate that the proposed method achieves better performance than the existing pansharpening algorithms in both subjective analysis and objective evaluation.

20.
Plant Methods ; 17(1): 12, 2021 Feb 04.
Article in English | MEDLINE | ID: mdl-33541365

ABSTRACT

BACKGROUND: Pyropia is an economically advantageous genus of red macroalgae, which has been cultivated in the coastal areas of East Asia for over 300 years. Estimating macroalgae biomass in a high-throughput way would greatly benefit cultivation management and research on breeding and phenomics. However, the conventional method is labour-intensive, time-consuming, manually destructive, and prone to human error. Nowadays, high-throughput phenotyping using unmanned aerial vehicle (UAV)-based spectral imaging is widely used for terrestrial crops, grassland, and forest, but no such application in marine aquaculture has been reported. RESULTS: In this study, multispectral images of cultivated Pyropia yezoensis were taken using a UAV system in the north of Haizhou Bay on the midwestern coast of the Yellow Sea. The exposure period of P. yezoensis was utilized to prevent the significant shielding effect of seawater on the reflectance spectrum. The vegetation indices of normalized difference vegetation index (NDVI), ratio vegetation index (RVI), difference vegetation index (DVI) and normalized difference of red edge (NDRE) were derived and indicated no significant difference between the time when P. yezoensis was completely exposed to the air and 1 h later. Regression models of the vegetation indices against P. yezoensis biomass per unit area were established and validated. The quadratic model of DVI (Biomass = -5.550DVI² + 105.410DVI + 7.530) showed higher accuracy than the other indices or index combinations, with the highest coefficient of determination (R²), root mean square error (RMSE), and relative estimated accuracy (Ac) values of 0.925, 8.06, and 74.93%, respectively. The regression model was further validated by consistently predicting the biomass with a high R² value of 0.918, an RMSE of 8.80, and an Ac of 82.25%. CONCLUSIONS: This study suggests that the biomass of Pyropia can be effectively estimated using UAV-based spectral imaging with high accuracy and consistency. It also implies that multispectral aerial imaging has the potential to assist digital management and phenomics research on cultivated macroalgae in a high-throughput way.
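A one-line check of the quadratic DVI model quoted above, assuming NumPy; the coefficients are taken from the abstract, while the example DVI values are arbitrary.

```python
import numpy as np

def biomass_from_dvi(dvi):
    """Quadratic model from the abstract: Biomass = -5.550*DVI^2 + 105.410*DVI + 7.530."""
    dvi = np.asarray(dvi, dtype=float)
    return -5.550 * dvi ** 2 + 105.410 * dvi + 7.530

print(biomass_from_dvi([0.1, 0.3, 0.5]))   # predicted biomass per unit area for sample DVI values
```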
