ABSTRACT
In this study, a second-level subsystem mathematical model is established under the condition that the layout of the sewage collection branch, trunk, and main pipe network projects is fixed. The sewage collection branch and trunk pipe networks are taken as the research object, with the minimum annual cost of the sewage collection pipe network as the objective function, the longitudinal slope and economic flow rate of each pipe section as constraints, and the diameter of each pipe section as the decision variable. A first-level subsystem mathematical model is then established with the sewage collection branch, trunk, and main pipe networks as the research object, and a large-system mathematical model is established in the same manner. This model can be solved with the large-system secondary decomposition-dynamic programming aggregation method, yielding the optimal diameter for each pipe section. A regional sewage collection pipe network project in Taizhou City was used as an example for comparative analysis before and after optimization, and the results verify that the optimization method proposed in this study can solve this complex large-system optimization problem.
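Since the abstract does not give the cost or hydraulic formulas, the following is a minimal per-section sketch (not the paper's two-level decomposition-aggregation scheme): for one pipe section it enumerates commercial diameters, keeps those whose full-flow velocity stays within an assumed economic range and whose Manning slope stays within assumed longitudinal-slope limits, and returns the cheapest feasible choice under a hypothetical annual-cost function.

```python
import math

DIAMETERS = [0.3, 0.4, 0.5, 0.6, 0.8]          # commercial diameters, m (assumed)
V_MIN, V_MAX = 0.6, 3.0                        # assumed economic flow-rate range, m/s
S_MIN, S_MAX = 0.001, 0.02                     # assumed longitudinal slope limits
N_MANNING = 0.013                              # assumed Manning roughness

def annual_cost(d, length):
    """Hypothetical annual cost: construction cost scaled by a recovery factor."""
    return 800.0 * d ** 1.5 * length * 0.08

def best_diameter(q_design, length):
    """Cheapest feasible commercial diameter for one pipe section."""
    best = None
    for d in DIAMETERS:
        v = q_design / (math.pi * d ** 2 / 4)            # continuity: v = Q / A
        if not V_MIN <= v <= V_MAX:
            continue
        r = d / 4                                        # hydraulic radius of a full pipe
        slope = (N_MANNING * v / r ** (2 / 3)) ** 2      # Manning: v = R^(2/3) S^(1/2) / n
        if not S_MIN <= slope <= S_MAX:
            continue
        cost = annual_cost(d, length)
        if best is None or cost < best[1]:
            best = (d, cost)
    return best

print(best_diameter(q_design=0.15, length=120.0))        # e.g. 150 L/s over 120 m
```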
Subjects
Theoretical Models, Sewage, Cities
ABSTRACT
The spectral power distributions (SPDs) of outdoor light sources are not constant over time and atmospheric conditions, which causes variation in the appearance of a scene and common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or solar zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision. It can be used in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of these applications demonstrate that our calculation method has practical value in computer vision, establishing a bridge between images and physical environmental information, e.g., time, location, and weather conditions.
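As a rough illustration of the transmittance idea only (not the paper's model), the sketch below attenuates a flat stand-in exoatmospheric spectrum by a Rayleigh-like lambda^-4 scattering term along an air-mass path set by the solar zenith angle; the proportionality constant, the plane-parallel air-mass approximation, and the flat input spectrum are all assumptions.

```python
import numpy as np

wl_um = np.arange(0.40, 0.701, 0.01)        # visible wavelengths, micrometers
E0 = np.ones_like(wl_um)                    # stand-in exoatmospheric spectrum

def relative_spd(zenith_deg, k_rayleigh=0.009):
    """Relative SPD after Rayleigh-like attenuation along the solar path (toy model)."""
    air_mass = 1.0 / np.cos(np.radians(zenith_deg))   # plane-parallel approximation
    tau = k_rayleigh * wl_um ** -4                    # lambda^-4 scattering optical depth
    spd = E0 * np.exp(-tau * air_mass)                # Beer-Lambert transmittance
    return spd / spd.max()

print(relative_spd(20.0)[:3])   # high sun: blue end only mildly attenuated
print(relative_spd(70.0)[:3])   # low sun: blue end attenuated more, i.e. a redder SPD
```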
RESUMO
In this paper, we propose a novel, effective, and fast method to obtain a color illumination-invariant and shadow-free image from a single outdoor image. Unlike state-of-the-art shadow-free image methods, which require either shadow detection or statistical learning, we set up a linear equation set for each pixel value vector based on physically-based shadow invariants, deduce a pixel-wise orthogonal decomposition of its solutions, and then obtain an illumination invariant vector for each pixel value vector in an image. The illumination invariant vector is the unique particular solution of the linear equation set that is orthogonal to its free solutions. With this illumination invariant vector and the Lab color space, we propose an algorithm to generate a shadow-free image that well preserves the texture and color information of the original image. A series of experiments on a diverse set of outdoor images and comparisons with state-of-the-art methods validate our method.
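The orthogonal-decomposition step admits a compact numerical reading: for an underdetermined system A x = b, the minimum-norm particular solution pinv(A) b is the unique solution orthogonal to the free solutions (the null space of A). The sketch below illustrates only that step; the actual construction of A and b from the paper's physically-based shadow invariants is not reproduced, so the 1x3 constraint and right-hand side here are hypothetical.

```python
import numpy as np

A = np.array([[1.0, -1.0, 0.0]])   # hypothetical 1x3 invariant constraint for one pixel
b = np.array([0.3])                # hypothetical right-hand side

x_inv = np.linalg.pinv(A) @ b      # minimum-norm particular solution (the invariant vector)

# Free solutions: an orthonormal basis of null(A), taken from the SVD.
_, s, Vt = np.linalg.svd(A)
rank = np.linalg.matrix_rank(A)
null_basis = Vt[rank:]

print(x_inv)                       # the illumination-invariant vector for this pixel
print(null_basis @ x_inv)          # ~0: orthogonal to every free solution
```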
ABSTRACT
In the field of image descattering, the image formation models employed for restoration approaches are often simplified. In these models, the scattering distribution is assumed to be uniform in homogeneous media when the transmission is fixed. Through specifically designed experiments, we discover that scattering exhibits non-uniform characteristics even in homogeneous media. Neglecting non-uniform scattering limits the accuracy with which these models represent the scattering distribution, rendering existing image descattering approaches inadequate. To tackle these issues, this paper proposes a novel image formation model for image descattering that considers more physical parameters, such as the zenith angle, azimuth angle, scattering phase function, and camera focal length. Our model describes the light transfer process in scattering media more accurately. For image descattering, we introduce corresponding algorithms for parameter estimation in our model and simultaneous restoration from degraded images. Experimental evaluations demonstrate the effectiveness of our proposed model in various tasks, including physical parameter estimation, pure-scattering removal, image dehazing, and underwater image restoration. In terms of parameter calculation, our results are close to the real values; in terms of underwater image restoration, our work outperforms the state-of-the-art methods; in terms of image dehazing, our work improves the performance of existing methods when our model replaces previous models.
ABSTRACT
Division-of-focal-plane color polarization cameras have become mainstream in polarimetric imaging because they directly capture a color polarization mosaic image in a single snapshot, which makes image demosaicking an essential task. Current color polarization demosaicking (CPDM) methods are prone to unsatisfactory results because it is difficult to recover the missing 15 or 14 of every 16 pixels in color polarization mosaic images. To address this problem, a non-locally regularized convolutional sparse regularization model, which has advantages in denoising and edge preservation, is proposed to recover more information for the CPDM task, and the CPDM task is transformed into an energy function solved by ADMM optimization. Finally, the optimized model generates informative and clear results. The experimental results, including reconstructed synthetic and real-world scenes, demonstrate that our proposed method outperforms the current state-of-the-art methods in terms of quantitative measurements and visual quality. The source code is available at https://github.com/roydon-luo/NLCSR-CPDM.
ABSTRACT
Spectral reflectance is regarded as the "fingerprint" of an object and is illumination invariant. It has many applications in color reproduction, imaging, computer vision, and computer graphics. In previous reflectance reconstruction methods, spectral reflectance has been treated equally over the whole wavelength range. However, human eyes or the sensors in an imaging device usually have different weights at different wavelengths. We propose a novel method to reconstruct reflectance that considers a wavelength-sensitive function (WSF) constructed from sensor sensitivity functions (or color matching functions). Our main idea is to achieve more accurate reconstruction at wavelengths where the sensors have high sensitivity, which in turn yields better imaging or color reproduction performance. In our method, we generate a matrix through the Hadamard product of the reflectance matrix and the WSF matrix and then obtain the reconstructed reflectance by applying the singular value decomposition to the generated matrix. The experimental results show that our method reduces the mean-square error by 47% and the Lab error by 55% compared with the classical principal component analysis method.
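A minimal numpy sketch of this weighted-basis idea, assuming random stand-in reflectance samples, a Gaussian-shaped stand-in WSF, and an (assumed) recovery step that divides the weighting back out; the paper's real data and exact recovery procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_wl, n_basis = 200, 31, 8                 # e.g. 400-700 nm in 10 nm steps
R = rng.uniform(0.05, 0.95, (n_samples, n_wl))        # stand-in reflectance samples
w = 0.2 + np.exp(-0.5 * ((np.arange(n_wl) - 15) / 6.0) ** 2)   # stand-in WSF

M = R * w                                             # Hadamard product with the WSF
U, s, Vt = np.linalg.svd(M, full_matrices=False)
B = Vt[:n_basis]                                      # basis emphasizing sensitive bands

coeff = M @ B.T                                       # project weighted reflectances
R_rec = (coeff @ B) / w                               # undo the weighting (assumed step)
print(np.mean((R - R_rec) ** 2))                      # reconstruction MSE on the samples
```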
ABSTRACT
Conducting research on the construction of a collaborative ability evaluation system for the joint graduation design of new engineering specialty groups based on digital technology holds great practical relevance. Based on a comprehensive analysis of the current situation of joint graduation design for college graduates in China and elsewhere and of the construction of collaborative ability evaluation systems, and combined with the talent training program of the joint graduation design, this paper adopts the Delphi method and the analytic hierarchy process (AHP) to establish a hierarchical structure model of the collaborative ability evaluation system for joint graduation design. In this system, collaborative abilities in the areas of cognition, behavior, and emergency management serve as the criterion-level evaluation indices, while collaborative abilities with regard to targets, knowledge, relationships, software, workflow, organization, culture, learning, and conflict serve as the index-level evaluation indices. Pairwise comparison judgment matrices of the evaluation indices are constructed at both the criterion level and the index level. By calculating the maximum eigenvalue and the corresponding eigenvector of each judgment matrix, the weights of the evaluation indices are obtained and the indices are ranked. Finally, the related research content is evaluated. The results show that the key evaluation indicators that need to be considered in the collaborative ability evaluation system for joint graduation design are easy to determine, and these indicators provide a theoretical reference for the reform of graduation design teaching in new engineering specialty groups.
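The eigenvector weighting step of the AHP is standard and can be sketched directly; the 3x3 judgment matrix below for the three criterion-level indices is purely hypothetical (the real pairwise judgments come from the Delphi survey), while the random-index values are Saaty's commonly used constants.

```python
import numpy as np

# Hypothetical 3x3 pairwise judgment matrix for cognition / behavior / emergency management.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 3.0],
              [1 / 5, 1 / 3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal (maximum) eigenvalue
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                             # normalized criterion weights

n = A.shape[0]
CI = (lam_max - n) / (n - 1)                # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}  # Saaty's random indices
CR = CI / RI[n]                             # consistency ratio; judgments acceptable if CR < 0.1

print(w, lam_max, CR)
```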
ABSTRACT
The current shadow removal pipeline relies on detected shadow masks, which have limitations for penumbras and tiny shadows, and this reliance results in an excessively long pipeline. To address these issues, we propose a shadow imaging bilinear model and design a novel three-branch residual (TBR) network for shadow removal. Our bilinear model reveals the single-image shadow removal process and explains why simply increasing the brightness of shadow areas cannot remove shadows without artifacts. We considerably shorten the shadow removal pipeline by modeling illumination compensation and developing a single-stage shadow removal network without additional detection and refinement networks. Specifically, our network consists of three task branches, i.e., shadow image reconstruction, shadow matte estimation, and shadow removal. To merge these three branches and enhance the shadow removal branch, we design a model-based TBR module. Multiple TBR modules are cascaded to generate an intensive information flow and facilitate feature integration among the three branches. Thus, our network ensures the fidelity of nonshadow areas and restores the light intensity of shadow areas through three-branch collaboration. Extensive experiments demonstrate that our method outperforms the state-of-the-art methods. The model and code are available at https://github.com/nachifur/TBRNet.
ABSTRACT
Anchor-based and anchor-free Siamese trackers have achieved astonishing advances. However, their parallel regression and classification branches lack information links and interaction about the tracked target, and their independent optimization may lead to task misalignment, such as reliable classification predictions with imprecise localization and vice versa. To address this problem, we develop a general Siamese dense regression tracker (SDRT) with both task and feature alignment. It consists of two cooperative, mutually guiding core branches: dense local regression with RepPoint representation, and global and local multi-classifier fusion with aligned features. They complement and boost each other to constrain well-localized results to also be well classified. Specifically, the dense local regression with RepPoint representation directly estimates and averages multiple dense local bounding box offsets for accurate localization. The refined bounding boxes are then used to learn global and local affine-aligned features for reliable multi-classifier fusion. The classification scores in turn guide the assignment of positive bounding boxes for the regression task. These mutual guidance operations substantially bridge the connection between classification and regression, since the assigned labels of one task depend on the prediction quality of the other. The proposed tracking module is general and can boost both anchor-based and anchor-free Siamese trackers to some extent. Extensive tracking comparisons on six tracking benchmarks verify its favorable and competitive performance against state-of-the-art tracking modules.
ABSTRACT
Optimizing the locations of sewage treatment plants has enormous practical significance. In this study, a large-system mathematical model was developed for optimizing the locations of sewage treatment plants within a system and designing the associated pumping station pipe network. The head loss of pipe segments in the pipe network served as the coupling constraint, the economic flow rate of pipe segments defined the feasible-region constraints of the decision variables, and the design variables were the sewage treatment plant locations, the design head of the pumping stations, the pipeline economic life, and the diameters of the divided pipe segments. The minimum total annual cost of the sewage treatment plant(s) and the pumping station pipe network was the objective function. A large-system quadratic orthogonal test-based selection method was used together with a discrete enumeration comparison and selection method to determine the pipeline economic life, and a dynamic programming method was used to determine the diameters of the divided pipe segments. By comparing the total annual cost of the sewage treatment plants and the associated pumping station pipe network corresponding to different pipeline economic lifetimes, the optimal solution with the minimum total annual cost can be identified. The sewage treatment plant and pumping station pipe network in Taizhou, China, was used as an example to compare and analyze the optimization results. The new optimization method would have produced a much lower annual cost than that of the existing system. This study provides valuable theoretical references for the layout design of urban sewage treatment plants under different pipeline economic lifetimes.
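The lifetime comparison step can be sketched with the standard capital recovery factor, CRF = i(1+i)^n / ((1+i)^n - 1): annualize the capital cost for each candidate economic life, add the annual operation and maintenance cost, and keep the minimum. The capital and O&M figures below are hypothetical placeholders for the optimized plant-plus-network costs.

```python
def capital_recovery_factor(i, n_years):
    """CRF = i(1+i)^n / ((1+i)^n - 1)."""
    return i * (1 + i) ** n_years / ((1 + i) ** n_years - 1)

def total_annual_cost(capital, annual_om, i, n_years):
    return capital * capital_recovery_factor(i, n_years) + annual_om

# Hypothetical candidates: economic life -> (capital cost, annual O&M cost).
candidates = {10: (9.0e6, 6.0e5), 20: (1.1e7, 4.5e5), 30: (1.3e7, 4.0e5)}
i_rate = 0.05                                   # assumed discount rate

costs = {n: total_annual_cost(cap, om, i_rate, n) for n, (cap, om) in candidates.items()}
best_life = min(costs, key=costs.get)
print(best_life, round(costs[best_life]))       # economic life with the minimum annual cost
```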
Subjects
Theoretical Models, Sewage, China, Sewage/analysis
ABSTRACT
State-of-the-art multi-object tracking (MOT) methods follow the tracking-by-detection paradigm, where object trajectories are obtained by associating per-frame outputs of object detectors. In crowded scenes, however, detectors often fail to obtain accurate detections due to heavy occlusions and high crowd density. In this paper, we propose a new MOT paradigm, tracking-by-counting, tailored for crowded scenes. Using crowd density maps, we jointly model detection, counting, and tracking of multiple targets as a network flow program, which simultaneously finds the global optimal detections and trajectories of multiple targets over the whole video. This is in contrast to prior MOT methods that either ignore the crowd density and thus are prone to errors in crowded scenes, or rely on a suboptimal two-step process using heuristic density-aware point-tracks for matching targets. Our approach yields promising results on public benchmarks of various domains including people tracking, cell tracking, and fish tracking.
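For orientation only, the toy sketch below poses plain tracking-by-detection (not the paper's joint detection-counting-tracking formulation) as a min-cost network flow with networkx; the detections, costs, and the fixed number of trajectories are invented for illustration.

```python
import networkx as nx

n_tracks = 2                          # assume two targets persist through the toy video
G = nx.DiGraph()
G.add_node("S", demand=-n_tracks)     # source supplies n_tracks units of flow
G.add_node("T", demand=n_tracks)      # sink absorbs them

# Two frames, two detections each; integer costs stand in for -log detection scores
# (networkx's network simplex is most reliable with integer weights).
detections = {0: ["f0_d0", "f0_d1"], 1: ["f1_d0", "f1_d1"]}
det_cost = {"f0_d0": -5, "f0_d1": -4, "f1_d0": -5, "f1_d1": -3}

for dets in detections.values():
    for d in dets:
        G.add_edge(d + "_in", d + "_out", capacity=1, weight=det_cost[d])
        G.add_edge("S", d + "_in", capacity=1, weight=0)    # a trajectory may start here
        G.add_edge(d + "_out", "T", capacity=1, weight=0)   # or end here

# Transition edges between consecutive frames; costs would come from appearance/motion affinities.
transition_cost = {("f0_d0", "f1_d0"): 1, ("f0_d0", "f1_d1"): 6,
                   ("f0_d1", "f1_d0"): 6, ("f0_d1", "f1_d1"): 2}
for (a, b), c in transition_cost.items():
    G.add_edge(a + "_out", b + "_in", capacity=1, weight=c)

flow = nx.min_cost_flow(G)            # globally optimal association for this toy graph
links = [(u, v) for u, nbrs in flow.items() for v, f in nbrs.items() if f > 0]
print(links)
```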
ABSTRACT
In this paper, we propose a retinex-based decomposition model for a hazy image and a novel end-to-end image dehazing network. In the model, the illumination of the hazy image is decomposed into natural illumination for the haze-free image and residual illumination caused by haze. Based on this model, we design a deep retinex dehazing network (RDN) to jointly estimate the residual illumination map and the haze-free image. Our RDN consists of a multiscale residual dense network for estimating the residual illumination map and a U-Net with channel and spatial attention mechanisms for image dehazing. The multiscale residual dense network can simultaneously capture global contextual information from small-scale receptive fields and local detailed information from large-scale receptive fields to precisely estimate the residual illumination map caused by haze. In the dehazing U-Net, we apply the channel and spatial attention mechanisms in the skip connection of the U-Net to achieve a trade-off between overdehazing and underdehazing by automatically adjusting the channel-wise and pixel-wise attention weights. Compared with scattering model-based networks, fully data-driven networks, and prior-based dehazing methods, our RDN can avoid the errors associated with the simplified scattering model and provide better generalization ability with no dependence on prior information. Extensive experiments show the superiority of the RDN to various state-of-the-art methods.
ABSTRACT
PURPOSE: Radiotherapy is the mainstay of treatment for brain metastasis (BM). The objective of this study was to evaluate the overall survival (OS) of patients with BM from lung cancer treated with different radiotherapy modalities. METHODS: Patients with BM from lung cancer who underwent radiotherapy between July 2007 and November 2017 were collected, and their baseline demographics, clinicopathological characteristics, and treatments were recorded. Survival was estimated by the Kaplan-Meier method and compared using the log-rank test. Univariate and multivariate analyses of the prognostic factors were performed using the Cox proportional hazards regression model. RESULTS: A total of 144 patients were enrolled, of whom 77 underwent whole-brain radiotherapy (WBRT), 39 underwent whole-brain radiotherapy with consecutive boost (WBRT + boost), and 28 underwent simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT). OS in the SIB-IMRT group was significantly longer than that in the WBRT group (median OS 14 (95% confidence interval [CI] 8.8-19.1) vs. 7 (95% CI 5.5-8.5) months, log-rank p < 0.001) and the WBRT + boost group (median OS 14 (95% CI 8.8-19.1) vs. 11 (95% CI 8.3-13.7) months, log-rank p = 0.037). Multivariable analysis showed that the mortality risk of patients treated with SIB-IMRT decreased by 56%, 59%, 64%, and 64% in the unadjusted model (hazard ratio [HR] = 0.44; 95% CI 0.28-0.70, p < 0.001), model 1 (HR = 0.41; 95% CI 0.26-0.65, p < 0.001), model 2 (HR = 0.36; 95% CI 0.21-0.61, p < 0.001), and model 3 (HR = 0.36; 95% CI 0.21-0.61, p < 0.001), respectively. CONCLUSIONS: For patients with BM from lung cancer, SIB-IMRT appears to be associated with a more favorable prognosis.
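The reported analysis chain (Kaplan-Meier estimates, log-rank comparison, Cox regression) maps directly onto the lifelines package; the sketch below assumes a hypothetical data frame with columns os_months, death, a sib_imrt indicator, and numeric covariates, since the study's data are not available here.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def analyze(df: pd.DataFrame):
    """Assumed columns: os_months, death (0/1), sib_imrt (0/1), plus numeric covariates."""
    # Kaplan-Meier estimates per treatment group
    fitters = {}
    for name, g in df.groupby("sib_imrt"):
        km = KaplanMeierFitter()
        km.fit(g["os_months"], event_observed=g["death"], label=f"sib_imrt={name}")
        fitters[name] = km

    # Log-rank test: SIB-IMRT group vs. the rest
    a, b = df[df["sib_imrt"] == 1], df[df["sib_imrt"] == 0]
    lr = logrank_test(a["os_months"], b["os_months"],
                      event_observed_A=a["death"], event_observed_B=b["death"])

    # Multivariable Cox proportional hazards regression
    cph = CoxPHFitter()
    cph.fit(df, duration_col="os_months", event_col="death")
    return fitters, lr.p_value, cph.hazard_ratios_
```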
Subjects
Brain Neoplasms/radiotherapy, Brain Neoplasms/secondary, Cranial Irradiation/methods, Lung Neoplasms/pathology, Intensity-Modulated Radiotherapy/methods, Adult, Aged, Brain Neoplasms/mortality, Female, Humans, Male, Middle Aged
ABSTRACT
Many impressive correlation filter trackers pay limited attention to tracking reliability and localization accuracy. To address these issues, we propose a reliable and accurate cross-correlation particle filter tracker based on graph-regularized multi-kernel multi-subtask learning. Specifically, multiple non-linear kernels are assigned to multi-channel features with reliable feature selection, and each kernel space corresponds to one type of reliable and discriminative feature. We then define the trace of each target subregion with one feature as a single view, and their multi-view cooperation and interdependencies are exploited to jointly learn multi-kernel subtask cross-correlation particle filters that complement and boost each other. The learned filters consist of two complementary parts: a weighted combination of base kernels and a reliable integration of base filters. The former is associated with feature reliability through an importance map, and the weighted information reflects the different contributions to accurate localization. The latter finds reliable target subtasks via the response map to exclude distractive subtasks and background. In addition, the proposed tracker constructs a Laplacian graph regularization via the cross similarity of different subtasks, which not only exploits the intrinsic structure among subtasks and preserves their spatial layout, but also maintains their temporal-spatial consistency. Comprehensive experiments on five datasets demonstrate its remarkable and competitive performance against state-of-the-art methods.
ABSTRACT
According to the dichromatic reflection model, previous methods for specular reflection separation in image processing often separate specular reflection from a single image using patch-based priors. Owing to the lack of global information, these methods often cannot completely separate the specular component of an image and are inclined to degrade image textures. In this paper, we derive a global color-lines constraint from the dichromatic reflection model to effectively recover specular and diffuse reflection. Our key observation is that each image pixel lies along a color line in normalized RGB space and that the different color lines, representing distinct diffuse chromaticities, intersect at one point, namely, the illumination chromaticity. Pixels along the same color line spread over the entire image, and their distances to the illumination chromaticity reflect the amount of their specular reflection components. With the global (non-local) information from these color lines, our method can effectively separate the specular and diffuse reflection components of a single image in a pixelwise way, and it is suitable for real-time applications. Our experimental results on synthetic and real images show that our method performs better than state-of-the-art methods at separating specular reflection.
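Once a pixel's diffuse chromaticity (its color line) and the illumination chromaticity are known, the dichromatic model I = m_d * Lambda + m_s * Gamma reduces to a per-pixel least-squares problem in the two mixing coefficients. The sketch below shows only that final step with hypothetical chromaticities; the paper's contribution is estimating them globally from the color lines.

```python
import numpy as np

pix = np.array([0.80, 0.55, 0.35])       # observed pixel value (RGB), hypothetical
Lam = np.array([0.55, 0.30, 0.15])       # assumed diffuse chromaticity (the pixel's color line)
Gam = np.array([1 / 3, 1 / 3, 1 / 3])    # assumed illumination chromaticity

M = np.stack([Lam, Gam], axis=1)         # 3x2 dichromatic mixing matrix
(m_d, m_s), *_ = np.linalg.lstsq(M, pix, rcond=None)

diffuse = m_d * Lam                      # diffuse reflection component
specular = m_s * Gam                     # specular reflection component
print(diffuse, specular)
```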
ABSTRACT
Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
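One standard form of graph-based propagation that a superpixel Laplacian propagation framework might use is f = (I - alpha * D^{-1/2} W D^{-1/2})^{-1} (1 - alpha) y; the toy affinities, CNN scores, and alpha below are hypothetical, and the paper's exact formulation may differ.

```python
import numpy as np

n = 5                                            # number of superpixels (toy)
rng = np.random.default_rng(0)
W = rng.uniform(0, 1, (n, n))
W = (W + W.T) / 2                                # symmetric superpixel affinities
np.fill_diagonal(W, 0)

D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
S = D_inv_sqrt @ W @ D_inv_sqrt                  # normalized affinity matrix
y = rng.uniform(0, 1, n)                         # CNN-predicted superpixel saliency (stand-in)
alpha = 0.9

f = np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * y)   # propagated, spatially consistent saliency
print(f)
```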
ABSTRACT
Shadows, a common phenomenon in most outdoor scenes, cause many problems in image processing and computer vision. In this paper, we present a novel method for extracting shadows from a single outdoor image. The proposed tricolor attenuation model (TAM), which describes the attenuation relationship between a shadow and its nonshadow background, is derived from image formation theory. The parameters of the TAM are fixed by using the spectral power distributions (SPDs) of daylight and skylight, which are estimated according to Planck's blackbody irradiance law. Based on the TAM, a multistep shadow detection algorithm is proposed to extract shadows. Compared with previous methods, the algorithm can be applied to single images acquired in real, complex scenes without prior knowledge. The experimental results validate the performance of the model.
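Planck's law itself is fully specified, so the relative blackbody spectra that the TAM parameters rely on can be sketched directly; only the correlated color temperatures chosen below (about 5800 K for direct daylight and a higher value for bluish skylight) are illustrative assumptions.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23          # Planck constant, speed of light, Boltzmann constant

def planck_spd(wavelength_nm, T):
    """Blackbody spectral radiance at temperature T (Planck's law)."""
    lam = wavelength_nm * 1e-9
    return (2 * h * c ** 2 / lam ** 5) / np.expm1(h * c / (lam * k * T))

wl = np.arange(400, 701, 10)                      # visible range, nm
daylight = planck_spd(wl, 5800.0)                 # assumed direct-sunlight temperature
skylight = planck_spd(wl, 12000.0)                # assumed (bluer) skylight temperature
print(daylight / daylight.max())
print(skylight / skylight.max())
```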