Robust Mesh Denoising via Triple Sparsity.
Zhong, Saishang; Xie, Zhong; Liu, Jinqin; Liu, Zheng.
Affiliation
  • Zhong S; Faculty of Information Engineering, China University of Geosciences, Wuhan 430074, China. saishang@cug.edu.cn.
  • Xie Z; National Engineering Research Center of Geographic Information System, China University of Geosciences, Wuhan 430074, China. xiezhong@cug.edu.cn.
  • Liu J; Faculty of Information Engineering, China University of Geosciences, Wuhan 430074, China.
  • Liu Z; National Engineering Research Center of Geographic Information System, China University of Geosciences, Wuhan 430074, China.
Sensors (Basel) ; 19(5)2019 Feb 26.
Article in English | MEDLINE | ID: mdl-30813651
Mesh denoising aims to recover high-quality meshes from noisy inputs scanned from the real world. It is a crucial step in geometry processing, computer vision, computer-aided design, etc. Yet state-of-the-art denoising methods still fall short of handling meshes containing both sharp features and fine details. Moreover, some methods introduce undesired staircase effects in smoothly curved regions. These issues become more severe when a mesh is corrupted by various kinds of noise, including Gaussian, impulsive, and mixed Gaussian-impulsive noise. In this paper, we present a novel optimization method for robust mesh denoising. The proposed method is based on a triple sparsity prior: a double sparse prior on the first-order and second-order variations of the face normal field, and a sparse prior on the residual face normal field. Numerically, we develop an efficient algorithm based on variable splitting and the augmented Lagrangian method to solve the problem. The proposed method not only effectively recovers various features (including sharp features, fine details, and smoothly curved regions) but is also robust against different kinds of noise. We verify the effectiveness of the proposed method on synthetic meshes and a broad variety of scanned data produced by a laser scanner, Kinect v1, Kinect v2, and KinectFusion. Intensive numerical experiments show that our method outperforms all of the compared state-of-the-art methods both qualitatively and quantitatively.
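The triple sparsity prior described in the abstract can be summarized, schematically, as an energy with three sparsity-promoting terms: two on the variations of the face normal field and one on the residual against the noisy input. The sketch below is an illustrative reading only; the symbols (the operators D_1 and D_2, the weights, and the exact norms) are assumptions for exposition and are defined precisely in the paper itself.

```latex
% Schematic triple-sparsity objective (notation assumed, not taken verbatim
% from the paper): n is the face normal field to be recovered, n_in the
% noisy input normals, D_1 and D_2 discrete first- and second-order
% difference operators on faces, and alpha, beta, gamma trade-off weights.
\min_{\mathbf{n}} \;
    \alpha \, \lVert D_1 \mathbf{n} \rVert_1
  + \beta  \, \lVert D_2 \mathbf{n} \rVert_1
  + \gamma \, \lVert \mathbf{n} - \mathbf{n}_{\mathrm{in}} \rVert_1
```

Under this reading, the first two terms correspond to the double sparse prior on first- and second-order variations (preserving sharp features while avoiding staircase artifacts in curved regions), while the sparse residual term replaces a least-squares fidelity and is what lends robustness to impulsive and mixed noise; the non-smooth objective is then amenable to variable splitting with an augmented Lagrangian solver, as the abstract states.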
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Sensors (Basel) Year: 2019 Document type: Article Country of affiliation: China Country of publication: Switzerland