Results 1 - 20 of 39
1.
Article in English | MEDLINE | ID: mdl-38457327

ABSTRACT

We present a general, fast, and practical solution for interpolating novel views of diverse real-world scenes given a sparse set of nearby views. Existing generic novel view synthesis methods rely on time-consuming scene geometry pre-computation or on redundantly sampling the entire space for neural volume rendering, limiting overall efficiency. Instead, we incorporate learned multi-view stereo (MVS) priors into the neural volume rendering pipeline and improve rendering efficiency by reducing the number of sampling points: fewer but more important points are sampled under the guidance of depth probability distributions extracted from the learned MVS architecture. Based on this probability-guided sampling, we develop a neural volume rendering module that effectively integrates source-view information with the learned scene structures. We further propose confidence-aware refinement to improve rendering in uncertain, occluded, and unreferenced regions. Moreover, we build a four-view camera system for holographic display and provide a real-time version of our framework for a free-viewpoint experience, where novel views at a spatial resolution of 512×512 are rendered at around 20 fps on a single RTX 3090 GPU. Experiments show that our method renders 15 to 40 times faster than state-of-the-art baselines, with strong generalization capacity and comparable high-quality novel view synthesis performance.
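The probability-guided sampling described in this abstract can be sketched in a few lines of numpy: instead of sampling the whole near-far range uniformly, draw fewer samples by inverse-transform sampling from a per-ray depth probability distribution. The bins, the Gaussian "depth belief", and the sample count below are illustrative assumptions, not the paper's MVS output.

```python
import numpy as np

def sample_by_depth_prob(bin_centers, probs, n_samples, rng):
    """Draw depth samples along one ray from a per-bin probability
    distribution via inverse-transform sampling."""
    cdf = np.cumsum(probs)
    cdf = cdf / cdf[-1]                      # normalized CDF over depth bins
    u = rng.uniform(0.0, 1.0, n_samples)
    idx = np.searchsorted(cdf, u)            # inverse-transform sampling
    return np.sort(bin_centers[idx])

bins = np.linspace(2.0, 6.0, 64)             # candidate depths (near=2, far=6)
belief = np.exp(-0.5 * ((bins - 4.0) / 0.2) ** 2)   # depth belief peaked at 4.0
depths = sample_by_depth_prob(bins, belief, 16, np.random.default_rng(0))
```

With a peaked distribution, the 16 samples concentrate near the believed surface depth rather than being spread over the whole near-far interval.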

2.
Nat Commun ; 14(1): 7727, 2023 Nov 25.
Article in English | MEDLINE | ID: mdl-38001106

ABSTRACT

Understanding the three-dimensional social behaviors of freely moving large mammals is valuable for both agriculture and life science, yet challenging due to occlusions during close interactions. Existing animal pose estimation methods capture keypoint trajectories but ignore deformable surfaces, which contain geometric information essential for predicting social interactions and for dealing with occlusions. In this study, we develop a Multi-Animal Mesh Model Alignment (MAMMAL) system based on an articulated surface mesh model. Our MAMMAL algorithms automatically align multi-view images to the mesh model and capture the 3D surface motions of multiple animals; they perform better under severe occlusions than traditional triangulation and enable complex social analysis. Using MAMMAL, we quantitatively analyze the locomotion, postures, animal-scene interactions, social interactions, and detailed tail motions of pigs. Furthermore, experiments on mice and Beagle dogs demonstrate the generalizability of MAMMAL across environments and mammal species.


Subject(s)
Three-Dimensional Imaging, Motion Capture, Animals, Swine, Mice, Dogs, Three-Dimensional Imaging/methods, Posture, Algorithms, Motion (Physics), Mammals
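The "traditional triangulation" baseline this entry compares against can be sketched as linear (DLT) triangulation of one keypoint from multiple calibrated views. The cameras and the 3D point below are illustrative toy values.

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """DLT triangulation: each view contributes two linear constraints
    on the homogeneous 3D point; the SVD null space solves them."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                          # null-space solution (homogeneous)
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # shifted camera
X_true = np.array([0.2, 0.1, 5.0])
obs = [(0.2 / 5.0, 0.1 / 5.0), (-0.8 / 5.0, 0.1 / 5.0)]         # exact projections
X_hat = triangulate([P1, P2], obs)
```

With exact observations this recovers the point; under occlusion, missing or wrong 2D detections break it, which is the failure mode the mesh-model alignment addresses.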
3.
Article in English | MEDLINE | ID: mdl-37478036

ABSTRACT

Recent neural rendering methods have made great progress in generating photorealistic human avatars. However, these methods are generally conditioned only on low-dimensional driving signals (e.g., body poses), which are insufficient to encode the complete appearance of a clothed human, so they fail to generate faithful details. To address this problem, we exploit driving-view images (e.g., in telepresence systems) as additional inputs. We propose a novel neural rendering pipeline, Hybrid Volumetric-Textural Rendering (HVTR++), which efficiently synthesizes high-quality 3D human avatars from arbitrary driving poses and views while staying faithful to appearance details. First, we learn to encode the driving signals of pose and view image on a dense UV manifold of the human body surface and extract UV-aligned features, preserving the structure of a skeleton-based parametric model. To handle complicated motions (e.g., self-occlusions), we then leverage the UV-aligned features to construct a 3D volumetric representation based on a dynamic neural radiance field. While this allows us to represent 3D geometry with changing topology, volumetric rendering is computationally heavy. Hence we employ only a rough volumetric representation using a pose- and image-conditioned downsampled neural radiance field (PID-NeRF), which we can render efficiently at low resolutions. In addition, we learn 2D textural features that are fused with the rendered volumetric features in image space. The key advantage of our approach is that we can then convert the fused features into a high-resolution, high-quality avatar with a fast GAN-based textural renderer. We demonstrate that hybrid rendering enables HVTR++ to handle complicated motions, render high-quality avatars under user-controlled poses/shapes, and, most importantly, be efficient at inference time. Our experimental results also demonstrate state-of-the-art quantitative results.
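The low-resolution volumetric step this abstract mentions uses standard NeRF-style compositing; a minimal sketch of that compositing rule for one ray is below. The densities, colors, and step sizes are illustrative toy values.

```python
import numpy as np

def composite(densities, colors, deltas):
    """Alpha-composite color samples along a ray: per-sample opacity
    from density, accumulated with front-to-back transmittance."""
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas                   # contribution of each sample
    return weights @ colors, weights

dens = np.array([50.0, 1.0, 1.0])              # near-opaque first sample
cols = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
rgb, w = composite(dens, cols, np.full(3, 0.5))
```

When the first sample is nearly opaque, almost all of the ray's weight lands on it, which is why sampling near the true surface matters for efficiency.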

4.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15406-15425, 2023 12.
Article in English | MEDLINE | ID: mdl-37494160

ABSTRACT

Estimating human pose and shape from monocular images is a long-standing problem in computer vision. Since the release of statistical body models, 3D human mesh recovery has been drawing broader attention. With the same goal of obtaining well-aligned and physically plausible mesh results, two paradigms have been developed to overcome challenges in the 2D-to-3D lifting process: i) an optimization-based paradigm, where different data terms and regularization terms are exploited as optimization objectives; and ii) a regression-based paradigm, where deep learning techniques are embraced to solve the problem in an end-to-end fashion. Meanwhile, continuous efforts are devoted to improving the quality of 3D mesh labels for a wide range of datasets. Though remarkable progress has been achieved in the past decade, the task is still challenging due to flexible body motions, diverse appearances, complex environments, and insufficient in-the-wild annotations. To the best of our knowledge, this is the first survey that focuses on the task of monocular 3D human mesh recovery. We start with an introduction of body models and then elaborate on recovery frameworks and training objectives, providing in-depth analyses of their strengths and weaknesses. We also summarize datasets, evaluation metrics, and benchmark results. Open issues and future directions are discussed at the end, in the hope of motivating researchers and facilitating their research in this area.


Subject(s)
Algorithms, Surgical Mesh, Humans, Benchmarking, Statistical Models, Motion (Physics)
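The optimization-based paradigm this survey describes can be illustrated with a toy fitting loop: adjust model parameters (here a 2D translation and a scale standing in for pose/shape parameters) by gradient descent on a keypoint reprojection data term. The template, targets, and learning rate are illustrative assumptions.

```python
import numpy as np

# Toy "skeleton" template and observed 2D keypoints (ground truth:
# scale 2.0, translation (1.0, -0.5)).
template = np.array([[0.0, 0.0], [0.0, 1.0], [0.5, 1.5], [-0.5, 1.5]])
observed = 2.0 * template + np.array([1.0, -0.5])

params = np.array([0.0, 0.0, 1.0])      # [tx, ty, scale]
lr = 0.05
for _ in range(2000):
    resid = params[2] * template + params[:2] - observed
    grad_t = 2.0 * resid.sum(axis=0)            # d/dt of sum ||resid||^2
    grad_s = 2.0 * (resid * template).sum()     # d/ds of sum ||resid||^2
    params = params - lr * np.array([grad_t[0], grad_t[1], grad_s])
```

Real optimization-based methods replace this quadratic data term with a full body model, camera projection, and priors, but the loop structure is the same.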
5.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 12287-12303, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37126625

ABSTRACT

We present PyMAF-X, a regression-based approach to recovering a parametric full-body model from a single image. This task is very challenging since even a minor parametric deviation may lead to noticeable misalignment between the estimated mesh and the input image. Moreover, when integrating part-specific estimations into the full-body model, existing solutions tend to either degrade the alignment or produce unnatural wrist poses. To address these issues, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop in our regression network for well-aligned human mesh recovery and extend it as PyMAF-X for the recovery of expressive full-body models. The core idea of PyMAF is to leverage a feature pyramid and rectify the predicted parameters explicitly based on the mesh-image alignment status. Specifically, given the currently predicted parameters, mesh-aligned evidence is extracted from finer-resolution features accordingly and fed back for parameter rectification. To enhance alignment perception, auxiliary dense supervision is employed to provide mesh-image correspondence guidance, while spatial alignment attention is introduced to make our network aware of global contexts. When extending PyMAF for full-body mesh recovery, an adaptive integration strategy is proposed in PyMAF-X to produce natural wrist poses while maintaining the well-aligned performance of the part-specific estimations. The efficacy of our approach is validated on several benchmark datasets for body, hand, face, and full-body mesh recovery, where PyMAF and PyMAF-X effectively improve the mesh-image alignment and achieve new state-of-the-art results. The project page with code and video results can be found at https://www.liuyebin.com/pymaf-x.
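The mesh-alignment feedback loop described in this abstract can be caricatured as a coarse-to-fine closed loop: at each pyramid level, "evidence" (here, a residual sampled on a grid at that level's resolution) is fed back to correct the parameters. The evidence model, resolutions, and step size below are illustrative assumptions, not the paper's network.

```python
import numpy as np

def rectify(params, target, levels=(8, 16, 32), step=0.5):
    """One coarse-to-fine pass: at each level, sample a residual on a
    grid of that resolution and feed it back as a parameter correction."""
    for res in levels:
        grid = np.linspace(0.0, 1.0, res)
        pred = np.outer(grid, params)          # stand-in "rendered" evidence
        gt = np.outer(grid, target)
        residual = (gt - pred).sum(axis=0) / grid.sum()
        params = params + step * residual      # feedback rectification
    return params

params = np.zeros(2)
for _ in range(10):                            # iterate the feedback loop
    params = rectify(params, np.array([1.0, 2.0]))
```

Each pass shrinks the parameter error geometrically, which is the closed-loop behavior the feedback design aims for.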

6.
J Virol Methods ; 316: 114725, 2023 06.
Article in English | MEDLINE | ID: mdl-36965632

ABSTRACT

African swine fever virus (ASFV) infection causes substantial economic losses to the swine industry worldwide, and there are still no safe and effective vaccines or therapeutics available. Granulated virus antigens improve antigen presentation and elicit a stronger antibody response than subunit antigens. In this study, the SpyTag peptide-p10 fusion protein was constructed and displayed on the surface of the T7 phage to build an engineered phage (T7-ST). Meanwhile, ASFV antigen-SpyCatcher C-terminal-fused proteins (antigen-SC) were expressed and purified in an E. coli prokaryotic expression system. Five virus-like particles (VLPs) displaying the main ASFV antigenic proteins P30, P54, P72, CD2v, and K145R were reconstituted via the isopeptide bond between SpyTag and antigen-SC proteins. The stability of the five ASFV VLPs at high temperature and extreme pH was evaluated by transmission electron microscopy (TEM) and plaque analysis. All ASFV VLPs induced a high-titer antigen-specific antibody response in mice. Our results show that granulated antigens displaying ASFV proteins on the surface of the T7 phage provide a robust potential vaccine and diagnostic tool to address the challenge of the ASFV pandemic.


Subject(s)
African Swine Fever Virus, African Swine Fever, Swine, Animals, Mice, Bacteriophage T7/genetics, Antibody Formation, Escherichia coli/genetics, Viral Proteins
7.
IEEE Trans Pattern Anal Mach Intell ; 45(2): 1581-1593, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35439130

ABSTRACT

Garment representation, editing and animation are challenging topics in the area of computer vision and graphics. It remains difficult for existing garment representations to achieve smooth and plausible transitions between different shapes and topologies. In this work, we introduce DeepCloth, a unified framework for garment representation, reconstruction, animation and editing. Our framework contains three components. First, we represent the garment geometry with a "topology-aware UV-position map", which allows for a unified description of various garments with different shapes and topologies by introducing an additional topology-aware UV-mask for the UV-position map. Second, to further enable garment reconstruction and editing, we contribute a method to embed the UV-based representations into a continuous feature space, which enables garment shape reconstruction and editing by optimization and control in the latent space, respectively. Finally, we propose a garment animation method that unifies our neural garment representation with body shape and pose, achieving plausible garment animation results that leverage the dynamic information encoded by our shape and style representation, even under drastic garment editing operations. In conclusion, DeepCloth moves a step forward toward a more flexible and general 3D garment digitization framework. Experiments demonstrate that our method achieves state-of-the-art garment representation performance compared with previous methods.
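A minimal data-structure sketch of the "topology-aware UV-position map" named in this abstract: a fixed UV grid stores a 3D position per texel, and a binary UV-mask selects the texels belonging to the current garment topology. The flat-sheet geometry and masked rows below are illustrative.

```python
import numpy as np

H = W = 8
u, v = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H))
pos_map = np.stack([u, v, np.zeros_like(u)], axis=-1)   # (H, W, 3) positions
mask = np.ones((H, W), dtype=bool)
mask[:2, :] = False            # editing the mask changes garment topology
vertices = pos_map[mask]       # 3D surface samples of the current garment
```

Because the map's resolution and layout are fixed, garments with different topologies become comparable tensors, differing only in their masks and stored positions.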

8.
IEEE Trans Vis Comput Graph ; 29(12): 4891-4905, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35914057

ABSTRACT

In this paper, we propose a controllable, high-quality free-viewpoint video generation method based on a motion graph and neural radiance fields (NeRF). Different from existing pose-driven or time/structure-conditioned NeRF works, we first construct a directed motion graph of the captured sequence. Such a sequence-motion-parameterization strategy not only enables flexible pose control for free-viewpoint video rendering but also avoids redundant calculation of similar poses, thus improving the overall reconstruction efficiency. Moreover, to support body shape control without losing realistic free-viewpoint rendering performance, we improve the vanilla NeRF by combining explicit surface deformation with implicit neural scene representations. Specifically, we train a local surface-guided NeRF for each valid frame on the motion graph, and volumetric rendering is performed only in the local space around the real surface, enabling plausible shape control. To the best of our knowledge, ours is the first method that supports both realistic free-viewpoint video reconstruction and motion graph-based, user-guided motion traversal. The results and comparisons further demonstrate the effectiveness of the proposed method.
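The directed motion graph in this abstract can be sketched concretely: nodes are captured frames, temporal edges follow the sequence, and "jump" edges connect a frame to other frames whose pose is close to its successor's, so traversal can move between similar poses. The 1-D toy poses and the distance threshold are illustrative assumptions.

```python
import numpy as np
from collections import deque

poses = np.array([0.0, 1.0, 2.0, 1.05, 2.05, 3.0])
edges = {i: [] for i in range(len(poses))}
for i in range(len(poses) - 1):
    edges[i].append(i + 1)                              # temporal edge
    for j in range(len(poses)):
        if j != i + 1 and abs(poses[j] - poses[i + 1]) < 0.1:
            edges[i].append(j)                          # jump to a similar pose

def traverse(src, dst):
    """Breadth-first search over the motion graph, standing in for
    user-guided motion traversal between two target frames."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in edges[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = traverse(0, 5)
```

The jump edges are what let playback reach a target pose without replaying the whole captured sequence.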

9.
IEEE Trans Pattern Anal Mach Intell ; 44(6): 3170-3184, 2022 06.
Article in English | MEDLINE | ID: mdl-33434121

ABSTRACT

Modeling 3D humans accurately and robustly from a single image is very challenging, and the key to such an ill-posed problem is the 3D representation of the human models. To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with a free-form deep implicit function. In our PaMIR-based reconstruction framework, a novel deep neural network is proposed to regularize the free-form deep implicit function using the semantic features of the parametric model, which improves the generalization ability under challenging poses and various clothing topologies. Moreover, a novel depth-ambiguity-aware training loss is integrated to resolve depth ambiguities and enable successful surface detail reconstruction even with an imperfect body reference. Finally, we propose a body reference optimization method to improve the parametric model estimation accuracy and to enhance the consistency between the parametric model and the implicit function. With the PaMIR representation, our framework can be easily extended to multi-image input scenarios without the need for multi-camera calibration and pose synchronization. Experimental results demonstrate that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.


Subject(s)
Algorithms, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, Neural Networks (Computer)
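A toy model-conditioned implicit function in the spirit of this entry: the occupancy of a query point blends a semantic prior from a parametric body (a unit sphere standing in for the body model) with a free-form detail term. Shapes and constants are illustrative.

```python
import numpy as np

def occupancy(points, center, radius, detail_fn):
    """Occupancy = inside-body prior (from the parametric model's SDF)
    plus a free-form detail term, clipped to [0, 1]."""
    body_sdf = np.linalg.norm(points - center, axis=-1) - radius
    prior = 1.0 / (1.0 + np.exp(8.0 * body_sdf))    # ~1 inside the body
    return np.clip(prior + detail_fn(points), 0.0, 1.0)

pts = np.array([[0.0, 0.0, 0.0],    # inside the "body"
                [0.0, 0.0, 2.0]])   # far outside
occ = occupancy(pts, np.zeros(3), 1.0, lambda p: np.zeros(len(p)))
```

In the actual method the prior comes from learned semantic features of the body model rather than a raw SDF, but the conditioning role is the same: the body term regularizes the free-form term where image evidence is ambiguous.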
10.
IEEE Trans Vis Comput Graph ; 28(4): 1862-1879, 2022 04.
Article in English | MEDLINE | ID: mdl-32991282

ABSTRACT

We introduce MulayCap, a novel human performance capture method using a monocular video camera that requires no pre-scanning. The method uses "multi-layer" representations for geometry reconstruction and texture rendering, respectively. For geometry reconstruction, we decompose the clothed human into multiple geometry layers, namely a body mesh layer and a garment piece layer. The key technique behind this is a Garment-from-Video (GfV) method for optimizing the garment shape and reconstructing the dynamic cloth to fit the input video sequence, based on a cloth simulation model that is effectively solved with gradient descent. For texture rendering, we decompose each input image frame into a shading layer and an albedo layer, and propose a method for fusing a fixed albedo map and solving for detailed garment geometry using the shading layer. Compared with existing single-view human performance capture systems, our "multi-layer" approach bypasses the tedious and time-consuming scanning step for obtaining a human-specific mesh template. Experimental results demonstrate that MulayCap produces realistic renderings of dynamically changing details not achieved by any previous monocular video camera system. Benefiting from its fully semantic modeling, MulayCap can be applied to various important editing applications, such as cloth editing, re-targeting, relighting, and AR applications.


Subject(s)
Computer Graphics, Three-Dimensional Imaging, Computer Simulation, Humans, Three-Dimensional Imaging/methods, Video Recording/methods
11.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 5430-5444, 2022 09.
Article in English | MEDLINE | ID: mdl-33861692

ABSTRACT

Light field (LF) reconstruction mainly faces two challenges: large disparity and non-Lambertian effects. Typical approaches either address the large-disparity challenge using depth estimation followed by view synthesis or eschew explicit depth information to enable non-Lambertian rendering, but rarely solve both challenges in a unified framework. In this paper, we revisit the classic LF rendering framework to address both challenges by incorporating advanced deep learning techniques. First, we analytically show that the essential issue behind the large-disparity and non-Lambertian challenges is aliasing. Classic LF rendering approaches typically mitigate aliasing with a reconstruction filter in the Fourier domain, which is, however, intractable to implement within a deep learning pipeline. Instead, we introduce an alternative framework that performs anti-aliasing reconstruction in the image domain and analytically show comparable efficacy on the aliasing issue. To explore the full potential, we then embed the anti-aliasing framework into a deep neural network through the design of an integrated architecture and trainable parameters. The network is trained end-to-end on a specially constructed training set including both regular and unstructured LFs. The proposed deep learning pipeline shows substantial superiority in solving both the large-disparity and non-Lambertian challenges compared with other state-of-the-art approaches. Beyond view interpolation for an LF, we also show that the proposed pipeline benefits light field view extrapolation.


Subject(s)
Algorithms, Neural Networks (Computer)
12.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7854-7870, 2022 11.
Article in English | MEDLINE | ID: mdl-34529563

ABSTRACT

In this paper, we propose an efficient method for robust and accurate 3D self-portraits using a single RGBD camera. Our method can generate detailed and realistic 3D self-portraits in seconds and shows the ability to handle subjects wearing extremely loose clothes. To achieve highly efficient and robust reconstruction, we propose PIFusion, which combines learning-based 3D recovery with volumetric non-rigid fusion to generate accurate sparse partial scans of the subject. Meanwhile, a non-rigid volumetric deformation method is proposed to continuously refine the learned shape prior. Moreover, a lightweight bundle adjustment algorithm is proposed to guarantee that all the partial scans can not only "loop" with each other but also remain consistent with the selected live key observations. Finally, to further generate realistic portraits, we propose non-rigid texture optimization to improve the texture quality. Additionally, we also contribute a benchmark for single-view 3D self-portrait reconstruction, an evaluation dataset that contains 10 single-view RGBD sequences of a self-rotating performer wearing various clothes and the corresponding ground-truth 3D models in the first frame of each sequence. The results and experiments based on this dataset show that the proposed method outperforms state-of-the-art methods on accuracy, efficiency, and generality.


Subject(s)
Algorithms, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods
13.
IEEE Trans Vis Comput Graph ; 28(12): 4873-4886, 2022 12.
Article in English | MEDLINE | ID: mdl-34449390

ABSTRACT

Realistic speech-driven 3D facial animation is a challenging problem due to the complex relationship between speech and face. In this paper, we propose a deep architecture, called Geometry-guided Dense Perspective Network (GDPnet), to achieve speaker-independent, realistic 3D facial animation. The encoder is designed with dense connections to strengthen feature propagation and encourage the re-use of audio features, and the decoder is integrated with an attention mechanism to adaptively recalibrate point-wise feature responses by explicitly modeling interdependencies between different neuron units. We also introduce a non-linear face reconstruction representation as guidance for the latent space to obtain more accurate deformation, which helps handle the geometry-related deformation and generalizes well across subjects. Huber and HSIC (Hilbert-Schmidt Independence Criterion) constraints are adopted to promote the robustness of our model and to better exploit non-linear and high-order correlations. Experimental results on a public dataset and a real scanned dataset validate the superiority of our proposed GDPnet compared with state-of-the-art models. The code is available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/GDPnet.


Subject(s)
Three-Dimensional Imaging, Speech, Humans, Speech/physiology, Three-Dimensional Imaging/methods, Algorithms, Face/diagnostic imaging, Face/physiology, Computer Graphics
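The HSIC constraint named in this entry has a simple empirical form: the biased estimator tr(K H L H) / (n-1)^2 with centered Gram matrices, which is near zero for independent variables and larger under dependence. The Gaussian kernels, kernel width, and toy data below are illustrative.

```python
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC with Gaussian kernels:
    tr(K H L H) / (n-1)^2, where H = I - 1/n centers the Grams."""
    n = len(x)
    def gram(z):
        d2 = (z[:, None] - z[None, :]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gram(x) @ H @ gram(y) @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
a = rng.normal(size=200)
dep = hsic(a, a ** 2)                   # nonlinearly dependent pair
ind = hsic(a, rng.normal(size=200))     # independent pair
```

Because the kernels are non-linear, HSIC detects the quadratic dependence between `a` and `a**2` that plain correlation would largely miss.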
14.
IEEE Trans Image Process ; 30: 8999-9013, 2021.
Article in English | MEDLINE | ID: mdl-34705646

ABSTRACT

Typical learning-based light field reconstruction methods demand a large receptive field, constructed by deepening their networks, to capture correspondences between input views. In this paper, we propose a spatial-angular attention network that perceives non-local correspondences in the light field and reconstructs a high-angular-resolution light field in an end-to-end manner. Motivated by the non-local attention mechanism (Wang et al., 2018; Zhang et al., 2019), we introduce a spatial-angular attention module, designed specifically for high-dimensional light field data, that computes the response of each query pixel from all positions on the epipolar plane and generates an attention map capturing correspondences along the angular dimension. A multi-scale reconstruction structure is then proposed to efficiently implement the non-local attention in the low-resolution feature space while preserving high-frequency components in the high-resolution feature space. Extensive experiments demonstrate the superior performance of the proposed spatial-angular attention network for reconstructing sparsely sampled light fields with non-Lambertian effects.
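The non-local attention this abstract builds on is, at its core, scaled dot-product attention: every query position attends to all positions (here standing in for all samples on an epipolar plane), and the normalized weights form the attention map. The feature dimensions and random features are illustrative.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention with a numerically stable softmax;
    returns the attended values and the attention map."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # stability shift
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(1)
q = rng.normal(size=(5, 8))     # 5 query pixels, feature dim 8
k = rng.normal(size=(12, 8))    # 12 positions along the epipolar plane
v = rng.normal(size=(12, 8))
out, attn = attention(q, k, v)
```

Each row of `attn` sums to one, so it can be read directly as a correspondence distribution over the angular dimension.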

15.
IEEE Trans Image Process ; 30: 5239-5251, 2021.
Article in English | MEDLINE | ID: mdl-34010134

ABSTRACT

3D human reconstruction from a single image is a challenging problem. Existing methods have difficulty inferring 3D clothed human models with consistent topologies across various poses. In this paper, we propose an efficient and effective method using a hierarchical graph transformation network. To deal with large deformations and avoid distorted geometries, rather than using Euclidean coordinates directly, 3D human shapes are represented by a vertex-based deformation representation that effectively encodes deformations and copes well with large ones. To infer a 3D human mesh consistent with the input real image, we also use a perspective projection layer to incorporate perceptual image features into the deformation representation. Our model is easy to train, fast to converge, and quick at test time. In addition, we present the D2Human (Dynamic Detailed Human) dataset, which includes variously posed 3D human meshes with consistent topologies and rich geometric details, together with the captured color images and SMPL models; it is useful for training and evaluating deep frameworks, particularly graph neural networks. Experimental results demonstrate that our method achieves more plausible and complete 3D human reconstruction from a single image than several state-of-the-art methods. The code and dataset are available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/MGTnet.


Subject(s)
Three-Dimensional Imaging/methods, Neural Networks (Computer), Posture/physiology, Algorithms, Female, Humans, Male
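The perspective projection layer mentioned in this entry is essentially a differentiable pinhole projection: map 3D mesh vertices to pixel coordinates so per-vertex image features can be sampled. The intrinsics (f, cx, cy) below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def project(vertices, f=500.0, cx=112.0, cy=112.0):
    """Pinhole projection of (N, 3) vertices to (N, 2) pixel coordinates."""
    x, y, z = vertices.T
    return np.stack([f * x / z + cx, f * y / z + cy], axis=-1)

verts = np.array([[0.0, 0.0, 2.0],
                  [0.1, -0.1, 2.0]])
uv = project(verts)   # per-vertex pixel locations for feature sampling
```

Because the operation is built from differentiable arithmetic, gradients from an image-feature loss can flow back to the 3D vertex positions during training.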
16.
IEEE Trans Pattern Anal Mach Intell ; 43(10): 3523-3539, 2021 Oct.
Article in English | MEDLINE | ID: mdl-32191880

ABSTRACT

As an emerging imaging modality, transient imaging, which records the transient information of light transport, has significantly shaped our understanding of scenes. In spite of the great progress made in the computer vision and optical imaging fields, commonly used multi-frequency time-of-flight (ToF) sensors are still afflicted with band-limited modulation frequencies and a long acquisition process. To overcome such barriers, more effective image-formation schemes and reconstruction algorithms are highly desired. In this paper, we propose a compressive transient imaging model, without any a priori knowledge, by constructing a near-tight-frame-based representation of the ToF imaging principle. We prove that the compressibility of sensor measurements can be exhibited in the Fourier domain and preserved in the frame, and that ToF measurements possess multi-scale characteristics. Solving the inverse problems in transient imaging with our proposed model consists of two major steps: a compressed-sensing-based approach for full measurement recovery, which essentially reduces the capture time, and a wavelet-based transient image reconstruction framework, which realizes adaptive transient image reconstruction and achieves highly accurate results. The compressive transient imaging model is suitable for various existing multi-frequency ToF sensors and requires no hardware modifications. Experimental results using synthetic and real online datasets demonstrate its promising performance.

17.
IEEE Trans Vis Comput Graph ; 27(1): 68-82, 2021 Jan.
Article in English | MEDLINE | ID: mdl-31369379

ABSTRACT

While dynamic scene reconstruction has made revolutionary progress, from the earliest setups using many static cameras in a studio environment to the latest egocentric or hand-held moving-camera schemes, it is still restricted by recording volume, user comfort, human labor, and expertise. In this paper, a novel solution is proposed: a real-time and robust dynamic fusion scheme using a single flying depth camera, denoted FlyFusion. By proposing a novel topology compactness strategy for effectively regularizing complex topology changes, and the Geometry And Motion Energy (GAME) metric for guiding viewpoint optimization in the volumetric space, FlyFusion succeeds in enabling intelligent viewpoint selection based on the immediate dynamic reconstruction result. The merit of FlyFusion lies in its concurrent robustness, efficiency, and adaptability in producing fused and denoised 3D geometry and motion of a moving target interacting with different non-rigid objects in a large space.

18.
Article in English | MEDLINE | ID: mdl-33048678

ABSTRACT

Human pose transfer, which aims at transferring the appearance of a given person to a target pose, is very challenging and important in many applications. Previous work ignores the guidance of pose features or uses only local attention mechanisms, leading to implausible and blurry results. We propose a new human pose transfer method using a generative adversarial network (GAN) with simplified cascaded blocks. In each block, we propose a pose-guided non-local attention (PoNA) mechanism with a long-range dependency scheme to select more important regions of image features to transfer. We also design a pre-posed image-guided pose feature update and a post-posed pose-guided image feature update to better utilize the pose and image features. Our network is simple, stable, and easy to train. Quantitative and qualitative results on the Market-1501 and DeepFashion datasets show the efficacy and efficiency of our model. Compared with state-of-the-art methods, our model generates sharper and more realistic images with rich details, while having fewer parameters and running faster. Furthermore, our generated images can help to alleviate data insufficiency for person re-identification.

19.
Zhongguo Zhong Yao Za Zhi ; 45(5): 984-990, 2020 Mar.
Article in Chinese | MEDLINE | ID: mdl-32237436

ABSTRACT

Noni is the dried, mature fruit of Morinda citrifolia, which is widely distributed on the islands of the southern Pacific Ocean and the Indochina Peninsula in Asia. It is edible and has been used as a natural medicine for thousands of years. At present, Noni has been legally introduced into China, but there is no clear standard for its traditional Chinese medicine (TCM) properties and clinical application, which greatly limits its compatible use with traditional Chinese medicine in China. This article applied our pioneering modern research methodology for new herbal medicines from outside China to study the TCM properties of Noni theoretically and to scientifically guide its reasonable compatibility and application with traditional Chinese medicine. The Web of Science and PubMed databases were searched for literature on Noni, with a retrieval date of August 1, 2018 and "Noni" or "Morinda citrifolia" as search terms. A total of 862 articles were retrieved. After reading titles and abstracts and excluding repetitive and irrelevant literature, 251 well-designed, highly credible research articles were selected, including 25 clinical trials, 94 pharmacological experiments, and 51 studies of chemical composition. Through analysis of this literature, led by the clinical trials, supported by the pharmacological experiments, and combined with research progress on chemical components, the medicinal properties were studied under the guidance of TCM theory. The TCM property of Noni is neutral, with sour and sweet flavors. Its channel tropisms include the kidney, liver, and spleen. Its functions include tonifying the kidney and liver, strengthening tendon and bone, and replenishing qi and nourishing yin (yiqi yangyin). Clinically, Noni is used for liver and kidney deficiency, weakness of the waist and knees, weak muscles and bones, qi and yin deficiency, tiredness, and thirst. It is taken as fruit pulp or dry powder, with a dried-product equivalent of 1-4 g. Noni is also distributed in Taiwan and Hainan in China, and it has been cultivated and introduced in Hainan and Yunnan. Assigning Noni a clear set of Chinese medicine properties lays a theoretical foundation for its compatibility with traditional Chinese medicine, which can enrich Chinese medicine resources and promote the development of Chinese medicine.


Subject(s)
Chinese Herbal Drugs/pharmacology, Morinda/chemistry, China, Fruit/chemistry, Traditional Chinese Medicine, Phytotherapy, Plant Extracts, Medicinal Plants/chemistry
20.
Article in English | MEDLINE | ID: mdl-32305917

ABSTRACT

This paper proposes a new method for simultaneous 3D reconstruction and semantic segmentation for indoor scenes. Unlike existing methods that require recording a video using a color camera and/or a depth camera, our method only needs a small number of (e.g., 3~5) color images from uncalibrated sparse views, which significantly simplifies data acquisition and broadens applicable scenarios. To achieve promising 3D reconstruction from sparse views with limited overlap, our method first recovers the depth map and semantic information for each view, and then fuses the depth maps into a 3D scene. To this end, we design an iterative deep architecture, named IterNet, to estimate the depth map and semantic segmentation alternately. To obtain accurate alignment between views with limited overlap, we further propose a joint global and local registration method to reconstruct a 3D scene with semantic information. We also make available a new indoor synthetic dataset, containing photorealistic high-resolution RGB images, accurate depth maps and pixel-level semantic labels for thousands of complex layouts. Experimental results on public datasets and our dataset demonstrate that our method achieves more accurate depth estimation, smaller semantic segmentation errors, and better 3D reconstruction results over state-of-the-art methods.
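The alternating estimation this abstract describes can be caricatured in a toy loop: the depth estimate is smoothed within each semantic region, and the semantic labels are then re-derived from the refined depth. The 1-D "scene", noise values, and the 2.0 m threshold rule are illustrative assumptions, not IterNet's learned networks.

```python
import numpy as np

depth = np.array([1.0, 1.1, 0.9, 3.2, 2.9, 3.1])   # noisy: near wall, far wall
labels = depth > 2.0                                # initial segmentation
for _ in range(3):
    for segment in (labels, ~labels):               # refine depth per region
        if segment.any():
            depth[segment] = depth[segment].mean()
    labels = depth > 2.0                            # refine labels from depth
```

Each task cleans up the other: segmentation gives depth a region to smooth over, and the smoothed depth stabilizes the segmentation, which is the coupling the iterative architecture exploits.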
