Results 1 - 19 of 19
1.
JCO Clin Cancer Inform ; 8: e2300166, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38885475

ABSTRACT

PURPOSE: The RECIST guidelines provide a standardized approach for evaluating the response of cancer to treatment, allowing for consistent comparison of treatment efficacy across different therapies and patients. However, collecting such information from electronic health records manually can be extremely labor-intensive and time-consuming because of the complexity and volume of clinical notes. The aim of this study is to apply natural language processing (NLP) techniques to automate this process, minimizing manual data collection efforts and improving the consistency and reliability of the results. METHODS: We proposed a complex, hybrid NLP system that automates the process of extracting, linking, and summarizing anticancer therapy and associated RECIST-like responses from narrative clinical text. The system consists of multiple machine learning-/deep learning-based and rule-based modules for diverse NLP tasks such as named entity recognition, assertion classification, relation extraction, and text normalization, to address different challenges associated with anticancer therapy and response information extraction. We then evaluated the system performance on two independent test sets from different institutions to demonstrate its effectiveness and generalizability. RESULTS: The system used domain-specific language models, BioBERT and BioClinicalBERT, for high-performance identification of therapy mentions and extraction and categorization of RECIST responses. The best-performing model achieved a 0.66 score in linking therapy and RECIST response mentions, with end-to-end performance peaking at 0.74 after relation normalization, indicating substantial efficacy with room for improvement. CONCLUSION: We developed, implemented, and tested an information extraction system from clinical notes for cancer treatment and efficacy assessment information. We expect this system will support future cancer research, particularly oncologic studies that focus on efficiently assessing the effectiveness and reliability of cancer therapeutics.
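As a rough illustration of the named entity recognition step only, the sketch below runs an off-the-shelf Hugging Face token-classification pipeline over a clinical sentence. This is not the authors' system: the checkpoint named here is a general-purpose placeholder, and the paper's models were BioBERT/BioClinicalBERT fine-tuned on institutional corpora with a therapy/response label schema.

```python
# A minimal sketch of transformer-based NER on clinical text, assuming the
# Hugging Face `transformers` library is installed. The model below is a
# general-purpose placeholder, not the fine-tuned clinical models the
# abstract describes.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",      # placeholder checkpoint
    aggregation_strategy="simple",    # merge word pieces into entity spans
)

note = ("The patient received carboplatin and pemetrexed; "
        "follow-up imaging showed a partial response.")
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))
```

A full system, as the abstract notes, would follow this with assertion classification, relation extraction linking each therapy to its response, and normalization to RECIST-like categories.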


Subject(s)
Electronic Health Records, Natural Language Processing, Neoplasms, Response Evaluation Criteria in Solid Tumors, Humans, Neoplasms/therapy, Machine Learning, Data Mining/methods, Algorithms, Deep Learning
2.
Brain Res ; 1826: 148736, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38141801

ABSTRACT

Oxygen-glucose deprivation (OGD) is a widely used in vitro model for studying hypoxic-ischemic cerebrovascular disease. This paper investigates the protection conferred by OGD preconditioning against OGD-induced cellular damage and inflammatory responses in vitro, to provide a theoretical basis for using OGD preconditioning to improve the prevention and prognosis of ischemic stroke. OGD and OGD-preconditioning models were established in cultured PC12 cells, followed by the addition of A23187 (a calcium ionophore) or CsA (a calcium antagonist). Cell viability was assessed by MTT assay, apoptosis by Hoechst 33258 staining, TNF-α and IL-1β mRNA levels by RT-qPCR and ELISA, and CaN, NFAT, and COX-2 levels by RT-qPCR and Western blot. Upon OGD, cell viability decreased while apoptosis, inflammatory cytokines, and CaN, NFAT, and COX-2 levels increased markedly; OGD preconditioning significantly increased cell viability and reduced apoptosis, inflammation, and activation of the Ca2+/CaN/NFAT pathway. Treatment with A23187 decreased cell viability, promoted apoptosis, and significantly increased TNF-α, IL-1β, CaN, NFAT, and COX-2 levels, whereas CsA treatment produced the opposite effects. In vitro, OGD preconditioning acts through the Ca2+/CaN/NFAT pathway to protect against OGD-induced cellular damage and inflammatory responses.


Subject(s)
Ischemic Preconditioning, Oxygen, Rats, Animals, Oxygen/metabolism, Calcium, Tumor Necrosis Factor-alpha, Glucose, Calcimycin, Cyclooxygenase 2, Apoptosis, PC12 Cells, Cell Survival
3.
IEEE Trans Image Process ; 32: 6441-6456, 2023.
Article in English | MEDLINE | ID: mdl-37991912

ABSTRACT

Fully perceiving the surrounding world is a vital capability for autonomous robots. To achieve this goal, a multi-camera system is usually mounted on the data-collection platform and structure from motion (SfM) is used for scene reconstruction. However, although incremental SfM achieves high-precision modeling, it is inefficient and prone to scene drift in large-scale reconstruction tasks. In this paper, we propose a tailored incremental SfM framework for multi-camera systems, where the internal relative poses between cameras can not only be calibrated automatically but also serve as an additional constraint to improve the system robustness. Previous multi-camera based modeling work has mainly focused on stereo setups or multi-camera systems with known calibration information, but we allow arbitrary configurations and only require images as input. First, one camera is selected as the reference camera, and the other cameras in the multi-camera system are denoted as non-reference cameras. Based on the pose relationship between the reference and non-reference cameras, the non-reference camera pose can be derived from the reference camera pose and the internal relative poses. Then, a two-stage multi-camera based camera registration module is proposed, where the internal relative poses are computed first by local motion averaging, and then the rigid units are registered incrementally. Finally, a multi-camera based bundle adjustment is put forth to iteratively refine the reference camera poses and the internal relative poses. Experiments demonstrate that our system achieves higher accuracy and robustness on benchmark data compared to the state-of-the-art SfM and SLAM (simultaneous localization and mapping) methods.
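As context for the pose relationship described in the abstract, a minimal sketch follows: if poses are stored as 4x4 homogeneous matrices, the non-reference camera pose is obtained by composing the reference camera pose with the calibrated internal relative pose. The numeric values are hypothetical.

```python
# A minimal sketch, assuming world-from-camera poses as 4x4 homogeneous matrices.
import numpy as np

def make_pose(R, t):
    """Build a 4x4 rigid transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example values.
T_world_ref = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))  # reference camera in world frame
T_ref_cam2 = make_pose(np.eye(3), np.array([0.0, 0.5, 0.0]))   # internal relative pose (rig calibration)

# The non-reference camera pose follows by composition.
T_world_cam2 = T_world_ref @ T_ref_cam2
print(T_world_cam2)
```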

4.
IEEE Trans Image Process ; 32: 3521-3535, 2023.
Article in English | MEDLINE | ID: mdl-37339022

ABSTRACT

Inspired by active learning and 2D-3D semantic fusion, we propose a novel framework for 3D scene semantic segmentation based on rendered 2D images, which can efficiently achieve semantic segmentation of any large-scale 3D scene with only a few 2D image annotations. In our framework, we first render perspective images at certain positions in the 3D scene. Then we continuously fine-tune a pre-trained network for image semantic segmentation and project all dense predictions to the 3D model for fusion. In each iteration, we evaluate the 3D semantic model, re-render images in several representative areas where the 3D segmentation is not stable, and send them to the network for training after annotation. Through this iterative process of rendering, segmentation, and fusion, the framework effectively generates difficult-to-segment image samples in the scene while avoiding complex 3D annotation, thereby achieving label-efficient 3D scene segmentation. Experiments on three large-scale indoor and outdoor 3D datasets demonstrate the effectiveness of the proposed method compared with other state-of-the-art methods.
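A minimal sketch of the fusion step only, under simplifying assumptions: per-pixel class probabilities from the 2D network are accumulated onto the 3D vertices they project to, and each vertex takes the argmax. The `pixel_to_vertex` array is a hypothetical stand-in for the renderer's pixel-to-vertex correspondence.

```python
# Sketch of projecting 2D dense predictions onto 3D vertices and fusing them.
import numpy as np

def fuse_view(vertex_scores, pixel_probs, pixel_to_vertex):
    """Accumulate per-pixel class probabilities onto the vertices they hit."""
    for p, v in enumerate(pixel_to_vertex):
        if v >= 0:                        # -1 marks pixels that hit no vertex
            vertex_scores[v] += pixel_probs[p]
    return vertex_scores

num_vertices, num_classes, num_pixels = 5, 3, 8
scores = np.zeros((num_vertices, num_classes))
probs = np.random.dirichlet(np.ones(num_classes), size=num_pixels)  # fake network output
pix2vert = np.array([0, 0, 1, 2, 2, 3, -1, 4])                      # fake correspondences

labels = fuse_view(scores, probs, pix2vert).argmax(axis=1)           # fused per-vertex labels
print(labels)
```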


Subject(s)
Three-Dimensional Imaging, Semantics, Three-Dimensional Imaging/methods, Problem-Based Learning, Computer-Assisted Image Processing/methods
5.
Front Genet ; 14: 947144, 2023.
Article in English | MEDLINE | ID: mdl-36968607

ABSTRACT

Background: HAR1 is a 118-bp segment that lies in a pair of novel non-coding RNA genes. It shows dramatically accelerated change, with an estimated 18 substitutions in the human lineage since the human-chimpanzee ancestor, compared with the expected 0.27 substitutions based on the slow rate of change in this region in other amniotes. Mutations of HAR1 lead to a different HAR1 secondary structure in humans compared to that in chimpanzees. Methods: We cloned HAR1 into the EF-1α promoter vector to generate transgenic mice. Morris water maze tests and step-down passive avoidance tests were conducted to observe changes in the memory and cognitive abilities of the mice. RNA-seq analysis was performed to identify differentially expressed genes (DEGs) between the experimental and control groups. Systematic bioinformatics analysis was used to confirm the pathways and functions in which the DEGs were involved. Results: The memory and cognitive abilities of the transgenic mice were significantly improved. Gene Ontology (GO) analysis showed that neuron differentiation, dentate gyrus development, nervous system development, cerebral cortex neuron differentiation, cerebral cortex development, and neurogenesis were all significant GO terms related to brain development. The DEGs enriched in these terms included Lhx2, Emx2, Foxg1, Nr2e1 and Emx1, all of which play an important role in regulating the functioning of Cajal-Retzius cells (CRs). The DEGs were also enriched in glutamatergic synapses, synapses, memory, and the positive regulation of long-term synaptic potentiation. In addition, "cellular response to calcium ions" exhibited the second highest rich factor in the GO analysis. Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis showed that the neuroactive ligand-receptor interaction pathway was the most significantly enriched pathway, and the DEGs were also notably enriched in axon guidance and cholinergic synapses. Conclusion: HAR1 overexpression led to improvements in the memory and cognitive abilities of the transgenic mice. A possible mechanism is that the long non-coding RNA (lncRNA) HAR1A affects brain development by regulating the function of CRs. Moreover, HAR1A may be involved in ligand-receptor interaction, axon guidance, and synapse formation, all of which are important in brain development and evolution. Furthermore, the cellular response to calcium may play an important role in these processes.
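For readers unfamiliar with how GO/KEGG enrichment of a DEG list is scored, the sketch below shows the underlying hypergeometric over-representation test with made-up counts; the study used standard enrichment tooling rather than this snippet.

```python
# A minimal sketch of a hypergeometric over-representation test (illustrative counts).
from scipy.stats import hypergeom

M = 20000   # background genes
n = 150     # genes annotated to the term (e.g., "neuron differentiation")
N = 800     # differentially expressed genes
k = 25      # DEGs that carry the annotation

p_value = hypergeom.sf(k - 1, M, n, N)   # P(X >= k) under random draws
print(f"enrichment p-value: {p_value:.3g}")
```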

6.
Front Cardiovasc Med ; 9: 973592, 2022.
Article in English | MEDLINE | ID: mdl-36204576

ABSTRACT

Background: An understanding of the epidemiologic patterns of peripheral artery disease is essential for public health policy-making. We aimed to assess secular trends in the epidemiologic patterns and risk factors of peripheral artery disease from 1990 to 2019 in China. Materials and methods: We extracted data on prevalence, incidence, death, and disability-adjusted life years (DALYs) due to peripheral artery disease from the Global Burden of Disease study 2019. In addition, risk factors for peripheral artery disease were reported. Results: The age-standardized prevalence of peripheral artery disease significantly increased from 1330.42 to 1423.78 per 100,000 population, with an average annual percentage change (AAPC) of 0.16 [95% confidence interval (CI), 0.07 to 0.24] from 1990 to 2019 in China. In addition, the age-standardized mortality rate significantly increased, with an AAPC of 0.62 (95% CI, 0.54 to 0.70), contrasting with the significantly declining trend in age-standardized DALYs (AAPC, -0.45; 95% CI, -0.52 to -0.39) between 1990 and 2019. The age-standardized prevalence was almost three times higher in females than in males [2022.13 (95% CI: 1750 to 2309.13) vs. 744.96 (95% CI: 644.62 to 850.82) per 100,000 population] in 2019. The age-specific incidence significantly increased in the 40-44, 45-49, 50-54, 55-59, and 60-64 year age groups but decreased in the 70-74, 75-79, and 80-84 year age groups. The age and period effects showed that the relative risk of incident peripheral artery disease increased with age and time. The cohort assessment showed that incidence decreased in successive birth cohorts. Smoking was identified as the risk factor contributing the most to age-standardized DALYs of peripheral artery disease in 2019. Conclusion: The burden of peripheral artery disease showed unexpected patterns that varied by age, sex, and year in China. More attention should be given to addressing the increasing incidence among middle-aged individuals and the mortality among males.
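As a hedged illustration of the trend metric quoted above, the sketch below estimates an annual percentage change from a log-linear fit of rate on calendar year using synthetic numbers; the GBD analysis itself derives the AAPC from joinpoint regression over segment-specific APCs.

```python
# A minimal sketch of an annual-percentage-change estimate (synthetic rates).
import numpy as np

years = np.arange(1990, 2020)
rates = 1330.0 * 1.0016 ** (years - 1990)        # synthetic prevalence per 100,000

slope, _ = np.polyfit(years, np.log(rates), 1)   # ln(rate) = a + b * year
apc = (np.exp(slope) - 1.0) * 100.0              # percent change per year
print(f"APC ~ {apc:.2f}% per year")
```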

7.
Article in English | MEDLINE | ID: mdl-36054385

ABSTRACT

Depth completion aims to recover pixelwise depth from incomplete and noisy depth measurements, with or without the guidance of a reference RGB image. This task has attracted considerable research interest due to its importance in various computer vision-based applications, such as scene understanding, autonomous driving, 3-D reconstruction, object detection, pose estimation, and trajectory prediction. As the system input, an incomplete depth map is usually generated by projecting the 3-D points collected by ranging sensors, such as LiDAR in outdoor environments, or obtained directly from RGB-D cameras in indoor areas. However, even if a high-end LiDAR is employed, the obtained depth maps are still very sparse and noisy, especially in the regions near object boundaries, which makes depth completion a challenging problem. To address this issue, conventional image processing-based techniques were initially employed to fill the holes and remove the noise from the relatively dense depth maps obtained by RGB-D cameras, while deep learning-based methods have recently become increasingly popular and inspiring results have been achieved, especially for the challenging case of LiDAR-image-based depth completion. This article systematically reviews and summarizes work on depth completion in terms of input modalities, data fusion strategies, loss functions, and experimental settings, with a particular focus on the key techniques proposed in deep learning-based multi-input methods. On this basis, we conclude by presenting the current status of depth completion and discussing several prospects for future research directions.
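A minimal sketch of how such a sparse input depth map can be formed, assuming points already expressed in the camera frame and a pinhole intrinsic matrix K; the intrinsics and point cloud below are illustrative.

```python
# Sketch of projecting 3-D points into a sparse depth image (pinhole model).
import numpy as np

def project_to_depth(points_cam, K, height, width):
    depth = np.zeros((height, width), dtype=np.float32)      # 0 marks "no measurement"
    z = points_cam[:, 2]
    valid = z > 0                                             # keep points in front of the camera
    uv = (K @ points_cam[valid].T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[inside], u[inside]] = z[valid][inside]            # nearest-point handling omitted for brevity
    return depth

K = np.array([[721.5, 0, 609.5], [0, 721.5, 172.8], [0, 0, 1.0]])   # illustrative intrinsics
pts = np.random.uniform([-10, -2, 1], [10, 2, 60], size=(5000, 3))
print((project_to_depth(pts, K, 352, 1216) > 0).mean())              # fraction of valid pixels
```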

8.
IEEE Trans Image Process ; 31: 2449-2462, 2022.
Article in English | MEDLINE | ID: mdl-35263254

ABSTRACT

With the popularization of smartphones, large collections of high-quality videos have become available, dramatically increasing the scale of scene reconstruction. However, high-resolution video produces more match outliers, and high-frame-rate video brings more redundant images. To solve these problems, a tailor-made framework is proposed to realize accurate and robust structure from motion on monocular videos. The key idea has two parts: one is to use the spatial and temporal continuity of video sequences to improve the accuracy and robustness of reconstruction; the other is to use the redundancy of video sequences to improve the efficiency and scalability of the system. Our technical contributions include an adaptive way to identify accurate loop matching pairs, a cluster-based camera registration algorithm, a local rotation averaging scheme to verify the pose estimates, and a local image extension strategy to restart the incremental reconstruction. In addition, our system can integrate data from different video sequences, allowing multiple videos to be reconstructed simultaneously. Extensive experiments on both indoor and outdoor monocular videos demonstrate that our method outperforms state-of-the-art approaches in robustness, accuracy, and scalability.

9.
IEEE Trans Image Process ; 30: 7458-7471, 2021.
Article in English | MEDLINE | ID: mdl-34449362

ABSTRACT

Urban scene modeling is a challenging task for the photogrammetry and computer vision communities due to its large scale, structural complexity, and topological delicacy. This paper presents an efficient multistep modeling framework for large-scale urban scenes from aerial images. It takes aerial images and a textured 3D mesh model generated by an image-based modeling system as input and outputs compact polygon models with semantics at different levels of detail (LODs). Based on the key observation that urban buildings usually have piecewise planar rooftops and vertical walls, we propose a segment-based modeling method consisting of three major stages: scene segmentation, roof contour extraction, and building modeling. By combining deep neural network predictions with geometric constraints of the 3D mesh, the scene is first segmented into three classes. Then, for each building mesh, 2D line segments are detected and used to slice the ground into polygon cells, followed by assigning each cell a roof plane via an MRF optimization. Finally, the LOD model is obtained by extruding cells to their corresponding planes. Compared with direct modeling in 3D space, we transform the mesh into a uniform 2D image grid representation and perform most of the modeling work in 2D space, which has the advantages of low computational complexity and high robustness. In addition, our method does not require any global prior, such as the Manhattan or Atlanta world assumption, making it flexible enough to model scenes with different characteristics and complexity. Experiments on both single buildings and large-scale urban scenes demonstrate that, by combining 2D photometric with 3D geometric information, the proposed algorithm is robust and efficient in urban scene LOD vectorized modeling compared with state-of-the-art approaches.
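A minimal sketch of the final extrusion step only, under simplifying assumptions: a 2D polygon cell with an assigned roof plane z = ax + by + c is lifted into a prism of roof, floor, and wall faces. The scene segmentation and MRF-based plane assignment that precede this step are not shown, and the geometry handling is deliberately naive.

```python
# Sketch of extruding a 2-D polygon cell to its roof plane (illustrative values).
import numpy as np

def extrude_cell(cell_xy, plane_abc, ground_z=0.0):
    a, b, c = plane_abc
    roof = [(x, y, a * x + b * y + c) for x, y in cell_xy]    # lift cell onto the roof plane
    floor = [(x, y, ground_z) for x, y in cell_xy]            # footprint on the ground
    n = len(cell_xy)
    walls = [(floor[i], floor[(i + 1) % n], roof[(i + 1) % n], roof[i]) for i in range(n)]
    return np.array(roof), np.array(floor), walls

roof, floor, walls = extrude_cell([(0, 0), (10, 0), (10, 6), (0, 6)], (0.0, 0.0, 12.0))
print(roof)
```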

10.
Environ Sci Technol ; 55(8): 5189-5198, 2021 04 20.
Article in English | MEDLINE | ID: mdl-33764763

ABSTRACT

Batteries have the potential to significantly reduce greenhouse gas emissions from on-road transportation. However, environmental and social impacts of producing lithium-ion batteries, particularly cathode materials, and concerns over material criticality are frequently highlighted as barriers to widespread electric vehicle adoption. Circular economy strategies, like reuse and recycling, can reduce impacts and secure regional supplies. To understand the potential for circularity, we undertake a dynamic global material flow analysis of pack-level materials that includes scenario analysis for changing battery cathode chemistries and electric vehicle demand. Results are produced regionwise and through the year 2040 to estimate the potential global and regional circularity of lithium, cobalt, nickel, manganese, iron, aluminum, copper, and graphite, although the analysis is focused on the cathode materials. Under idealized conditions, retired batteries could supply 60% of cobalt, 53% of lithium, 57% of manganese, and 53% of nickel globally in 2040. If the current mix of cathode chemistries evolves to a market dominated by NMC 811, a low cobalt chemistry, there is potential for 85% global circularity of cobalt in 2040. If the market steers away from cathodes containing cobalt, to an LFP-dominated market, cobalt, manganese, and nickel become less relevant and reach circularity before 2040. For each market to benefit from the recovery of secondary materials, recycling and manufacturing infrastructure must be developed in each region.
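As a hedged illustration of the circularity notion discussed above, the sketch below computes the share of a year's cathode-material demand that retired packs could cover. The per-pack contents, fleet numbers, and recovery rate are illustrative, not the study's data.

```python
# A minimal sketch of a material circularity ratio (illustrative figures only).
def circularity(retired_kg_per_pack, new_kg_per_pack, retired_packs, new_packs, recovery_rate=1.0):
    secondary = retired_packs * retired_kg_per_pack * recovery_rate  # recoverable from retired packs
    demand = new_packs * new_kg_per_pack                              # needed by new packs that year
    return min(secondary / demand, 1.0)

# e.g., cobalt: older, cobalt-rich packs retiring while new low-cobalt packs enter service
print(circularity(retired_kg_per_pack=8.0, new_kg_per_pack=3.0,
                  retired_packs=4.0e6, new_packs=16.0e6))
```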


Subject(s)
Electric Power Supplies, Lithium, Cobalt, Electrodes, Ions, Recycling
11.
Sensors (Basel) ; 19(6)2019 Mar 13.
Article in English | MEDLINE | ID: mdl-30871277

ABSTRACT

In this paper, we put forward a new method for surface reconstruction from image-based point clouds. In particular, we introduce a new visibility model for each line of sight to preserve scene details without decreasing the noise-filtering ability. To make the proposed method suitable for point clouds with heavy noise, we introduce a new likelihood energy term into the total energy of the binary labeling problem over Delaunay tetrahedra, and we give its s-t graph implementation. In addition, we further improve the performance of the proposed method with a dense visibility technique, which helps keep object edges sharp. Experimental results show that the proposed method rivals the state-of-the-art methods in terms of accuracy and completeness, and performs better with regard to detail preservation.
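A minimal sketch of the s-t graph machinery that such a binary labeling relies on, using networkx on a toy graph; in the paper the nodes are Delaunay tetrahedra and the edge capacities come from the visibility and likelihood terms, none of which are modeled here.

```python
# Sketch of an s-t min cut on a toy graph (stand-in for the tetrahedra labeling).
import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "t1", capacity=5.0)   # unary (likelihood) terms toward "inside"
G.add_edge("s", "t2", capacity=1.0)
G.add_edge("t1", "t", capacity=1.0)   # unary terms toward "outside"
G.add_edge("t2", "t", capacity=4.0)
G.add_edge("t1", "t2", capacity=2.0)  # pairwise (visibility) term between adjacent tetrahedra

cut_value, (inside, outside) = nx.minimum_cut(G, "s", "t")
print(cut_value, inside - {"s"}, outside - {"t"})
```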

12.
Vis Comput Ind Biomed Art ; 2(1): 10, 2019 Aug 07.
Article in English | MEDLINE | ID: mdl-32240393

ABSTRACT

Image-based 3D modeling is an effective method for reconstructing large-scale scenes, especially city-level scenarios. In the image-based modeling pipeline, obtaining a watertight mesh model from a noisy multi-view stereo point cloud is a key step toward ensuring model quality. However, some state-of-the-art methods rely on the global Delaunay-based optimization formed by all the points and cameras; thus, they encounter scaling problems when dealing with large scenes. To circumvent these limitations, this study proposes a scalable point-cloud meshing approach to aid the reconstruction of city-scale scenes with minimal time consumption and memory usage. Firstly, the entire scene is divided along the x and y axes into several overlapping chunks so that each chunk can satisfy the memory limit. Then, the Delaunay-based optimization is performed to extract meshes for each chunk in parallel. Finally, the local meshes are merged together by resolving local inconsistencies in the overlapping areas between the chunks. We test the proposed method on three city-scale scenes with hundreds of millions of points and thousands of images, and demonstrate its scalability, accuracy, and completeness, compared with the state-of-the-art methods.
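A minimal sketch of the chunking idea, with arbitrary tile size and overlap: points are binned into overlapping tiles along x and y so that each tile fits in memory; meshing would then run per tile before stitching across the overlaps, which is not shown here.

```python
# Sketch of splitting a point cloud into overlapping x-y tiles (illustrative sizes).
import numpy as np

def chunk_points(points, tile=100.0, overlap=10.0):
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    chunks = {}
    for ix in range(int((points[:, 0].max() - x0) // tile) + 1):
        for iy in range(int((points[:, 1].max() - y0) // tile) + 1):
            lo = np.array([x0 + ix * tile - overlap, y0 + iy * tile - overlap])
            hi = lo + tile + 2 * overlap
            mask = np.all((points[:, :2] >= lo) & (points[:, :2] < hi), axis=1)
            if mask.any():
                chunks[(ix, iy)] = points[mask]
    return chunks

pts = np.random.uniform(0, 500, size=(100000, 3))
print({key: len(val) for key, val in chunk_points(pts).items()})
```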

13.
IEEE Trans Image Process ; 26(8): 3775-3788, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28534771

ABSTRACT

This paper aims to bridge two important trends in efficient graph cuts in the literature: one is to decompose a graph into several smaller subgraphs to take advantage of parallel computation; the other is to reuse the solution of the max-flow problem on a residual graph to boost efficiency on another, similar graph. Our proposed parallel dynamic graph cuts algorithm combines the advantages of both and is extremely efficient for certain dynamically changing MRF models in computer vision. The performance of the proposed algorithm is validated on two typical dynamic graph cuts problems: foreground-background segmentation in video, where similar graph cuts problems need to be solved sequentially, and GrabCut, where graph cuts are used iteratively.

14.
IEEE Trans Image Process ; 25(12): 5511-5525, 2016 12.
Article in English | MEDLINE | ID: mdl-27654484

ABSTRACT

Graph cuts are widely used in computer vision. To speed up the optimization process and improve the scalability for large graphs, Strandmark and Kahl introduced a splitting method to split a graph into multiple subgraphs for parallel computation in both shared and distributed memory models. However, this parallel algorithm (the parallel BK-algorithm) does not have a polynomial bound on the number of iterations and is found to be non-convergent in some cases due to the possible multiple optimal solutions of its sub-problems. To remedy this non-convergence problem, in this paper we first introduce a merging method capable of merging any number of adjacent sub-graphs that can hardly reach agreement on their overlapping regions in the parallel BK-algorithm. Based on the pseudo-Boolean representations of graph cuts, our merging method is shown to effectively reuse all the flows computed in these sub-graphs. Through both splitting and merging, we further propose a dynamic parallel and distributed graph cuts algorithm with guaranteed convergence to the globally optimal solutions within a predefined number of iterations. In essence, this paper provides a general framework that allows more sophisticated splitting and merging strategies to be employed to further boost performance. Our dynamic parallel algorithm is validated with extensive experimental results.

15.
IEEE Trans Image Process ; 24(11): 3561-73, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26111397

ABSTRACT

One of the potentially effective means for large-scale 3D scene reconstruction is to reconstruct the scene in a global manner, rather than incrementally, by fully exploiting available auxiliary information on the imaging conditions, such as camera location from the Global Positioning System (GPS), orientation from an inertial measurement unit (or compass), and focal length from EXIF data. However, such auxiliary information, though informative and valuable, is usually too noisy to be directly usable. In this paper, we present an approach that takes advantage of such noisy auxiliary information to improve structure-from-motion solving. More specifically, we introduce two effective iterative global optimization algorithms initialized with such noisy auxiliary information. One is a robust rotation averaging algorithm to deal with a contaminated epipolar graph; the other is a robust scene reconstruction algorithm to deal with noisy GPS data for camera center initialization. We found that by exclusively focusing on the estimated inliers at the current iteration, the optimization process initialized with such noisy auxiliary information converges well and efficiently. Our proposed method is evaluated on real images captured by unmanned aerial vehicles, a StreetView car, and conventional digital cameras. Extensive experimental results show that our method performs similarly to or better than many state-of-the-art reconstruction approaches in terms of reconstruction accuracy and completeness, while being more efficient and scalable for large-scale image data sets.
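A minimal sketch of the "focus on the current inliers" idea applied to a single noisy camera-center estimate built from synthetic GPS-like observations; the paper's robust rotation averaging and scene reconstruction algorithms are considerably more involved.

```python
# Sketch of iterative inlier-focused estimation on synthetic GPS-like points.
import numpy as np

def robust_center(obs, threshold=2.0, iters=10):
    est = obs.mean(axis=0)                       # initialize from all (noisy) observations
    for _ in range(iters):
        inliers = obs[np.linalg.norm(obs - est, axis=1) < threshold]
        if len(inliers) == 0:
            break
        est = inliers.mean(axis=0)               # re-estimate from the current inliers only
    return est

rng = np.random.default_rng(0)
clean = rng.normal([10.0, 5.0, 2.0], 0.3, size=(20, 3))   # observations near the true center
gross = rng.uniform(-50, 50, size=(5, 3))                 # gross outliers
print(robust_center(np.vstack([clean, gross])))
```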

16.
IEEE Trans Image Process ; 23(1): 308-18, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24240002

ABSTRACT

Depth-map merging based 3D modeling is an effective approach for reconstructing large-scale scenes from multiple images. In addition to generating a high-quality depth map at each image, selecting suitable neighboring images for each image is also an important step in the reconstruction pipeline, one to which little attention has been paid in the literature until now. This paper tackles this issue for large-scale scene reconstruction, where many unordered images are captured and used with substantially varying scales and view angles. We formulate neighboring image selection as a combinatorial optimization problem and use a quantum-inspired evolutionary algorithm to seek its optimal solution. Experimental results on a ground-truth data set show that our approach can significantly improve the quality of the depth maps as well as the final 3D reconstruction results, with high computational efficiency.
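A minimal sketch of a quantum-inspired evolutionary step, not the paper's implementation: each candidate neighboring image carries a selection probability (playing the role of a qubit amplitude), binary selections are sampled and scored by a placeholder fitness, and the probabilities are nudged toward the best sample. A real fitness would encode view overlap, scale, and view-angle criteria.

```python
# Sketch of a quantum-inspired evolutionary loop (placeholder fitness).
import numpy as np

rng = np.random.default_rng(1)
num_candidates = 12
prob_one = np.full(num_candidates, 0.5)            # "qubit" probabilities of selecting each image

def fitness(selection):
    # Placeholder: reward selecting about 4 neighbors.
    return -abs(int(selection.sum()) - 4)

best, best_fit = None, -np.inf
for _ in range(100):
    samples = rng.random((20, num_candidates)) < prob_one     # observe 20 binary individuals
    fits = np.array([fitness(s) for s in samples])
    if fits.max() > best_fit:
        best, best_fit = samples[fits.argmax()].copy(), fits.max()
    prob_one += 0.05 * (best.astype(float) - prob_one)         # rotate probabilities toward the best
    prob_one = prob_one.clip(0.05, 0.95)

print(best.astype(int), best_fit)
```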


Subject(s)
Algorithms, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Theoretical Models, Automated Pattern Recognition/methods, Subtraction Technique, Computer Simulation, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
17.
IEEE Trans Image Process ; 22(5): 1901-14, 2013 May.
Article in English | MEDLINE | ID: mdl-23322763

ABSTRACT

In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes that takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate a depth map at each image with acceptable errors, followed by a depth-map refinement process to enforce consistency over neighboring views. Compared to state-of-the-art methods, the proposed method can reconstruct quite accurate and dense point clouds with high computational efficiency. Besides, the proposed method can easily be parallelized at the image level, i.e., each depth map is computed individually, which makes it suitable for large-scale scene reconstruction with high-resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets.

18.
IEEE Trans Image Process ; 19(2): 512-21, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20007027

ABSTRACT

We present a method for recovering the 3-D shape of an inextensible deformable surface from a monocular image sequence. State-of-the-art methods on this problem utilize the L∞ norm of the reprojection residual vectors and formulate the tracking problem as a second-order cone programming (SOCP) problem. Instead of using the L∞ norm, which is sensitive to outliers, we use the L2 norm of the reprojection errors. Generally, using the L2 norm leads to a nonconvex optimization problem that is difficult to minimize. Instead of solving the nonconvex problem directly, we design an iterative L2-norm approximation process to approximate the nonconvex objective function, in which only a linear system needs to be solved at each iteration. Furthermore, we introduce a shape regularization term into this iterative process in order to preserve the inextensibility of the recovered mesh. Compared with previous methods, ours is more robust to image noise, outliers, and large interframe motions, with high computational efficiency. The robustness and accuracy of our approach are evaluated quantitatively on synthetic data and qualitatively on real data.

19.
IEEE Trans Image Process ; 19(3): 782-94, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20028624

ABSTRACT

We present a method for recovering 3-D nonrigid structure from an image pair taken with a stereo rig. More specifically, we focus on recovering the shapes of nearly inextensible deformable surfaces. In our approach, we represent the surface as a 3-D triangulated mesh and formulate the reconstruction problem as an optimization problem consisting of data terms and shape terms. The data terms are model-to-image keypoint correspondences, which can be formulated as second-order cone programming (SOCP) constraints using the L∞ norm. The shape terms are designed to retain the original lengths of mesh edges, which are typically nonconvex constraints. We show that this optimization problem can be turned into a sequence of SOCP feasibility problems in which the nonconvex constraints are approximated by a set of convex constraints. Thanks to an efficient SOCP solver, the reconstruction problem can then be solved reliably and efficiently. As opposed to previous methods, ours neither involves smoothness constraints nor needs an initial estimate, which enables us to recover the shapes of surfaces with smooth, sharp, and other complex deformations from a single image pair. The robustness and accuracy of our approach are evaluated quantitatively on synthetic data and qualitatively on real data.
