Results 1 - 8 of 8
1.
Comput Biol Med ; 166: 107519, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37801919

ABSTRACT

With the increasing popularity of 3D scanning equipment for capturing the oral cavity in dental health applications, the quality of 3D dental models has become vital in oral prosthodontics and orthodontics. However, the acquired point cloud data can often be sparse and missing information. To address this issue, we construct a high-resolution teeth point cloud completion method named TUCNet, which fills in the sparse, incomplete oral point cloud and outputs a dense, complete teeth point cloud. First, we propose a Channel and Spatial Attentive EdgeConv (CSAE) module to fuse local and global contexts during point feature extraction. Second, we propose a CSAE-based point cloud upsampling (CPCU) module to gradually increase the number of points in the point cloud. TUCNet employs a tree-based approach to generate complete point clouds, in which child points are derived from parent points through a splitting process after each CPCU. The CPCU learns the upsampling pattern of each parent point by combining an attention mechanism with a point deconvolution operation. Skip connections between CPCUs summarize the splitting pattern of the previous CPCU layer, which is then used to generate the splitting pattern of the current one. We conduct extensive experiments on the teeth point cloud completion dataset and the PCN dataset. The results show that TUCNet achieves state-of-the-art performance on the teeth dataset and excellent performance on the PCN dataset.
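As a toy illustration of the tree-style splitting described above (not the authors' CPCU; the offsets here are fixed stand-ins for the displacements the network would learn), each parent point spawns several child points:

```python
import numpy as np

def split_points(parents, offsets):
    """Derive child points from parent points, as in tree-based upsampling.
    parents: (N, 3) parent coordinates; offsets: (N, C, 3) per-parent
    displacements (learned by the network, fixed here).
    Returns (N * C, 3) child points."""
    return (parents[:, None, :] + offsets).reshape(-1, 3)

# Two parents, each split into four children around its position
parents = np.zeros((2, 3))
offsets = np.random.default_rng(0).normal(scale=0.05, size=(2, 4, 3))
children = split_points(parents, offsets)
```

Stacking several such splits, each conditioned on features of the previous level, is what progressively densifies the cloud.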

2.
Med Image Anal ; 90: 102975, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37804586

ABSTRACT

Cine magnetic resonance imaging (MRI) is the current gold standard for the assessment of cardiac anatomy and function. However, it typically only acquires a set of two-dimensional (2D) slices of the underlying three-dimensional (3D) anatomy of the heart, thus limiting the understanding and analysis of both healthy and pathological cardiac morphology and physiology. In this paper, we propose a novel fully automatic surface reconstruction pipeline capable of reconstructing multi-class 3D cardiac anatomy meshes from raw cine MRI acquisitions. Its key component is a multi-class point cloud completion network (PCCN) capable of correcting both the sparsity and misalignment issues of the 3D reconstruction task in a unified model. We first evaluate the PCCN on a large synthetic dataset of biventricular anatomies and observe Chamfer distances between reconstructed and gold standard anatomies below or similar to the underlying image resolution for multiple levels of slice misalignment. Furthermore, we find a reduction in reconstruction error compared to a benchmark 3D U-Net by 32% and 24% in terms of Hausdorff distance and mean surface distance, respectively. We then apply the PCCN as part of our automated reconstruction pipeline to 1000 subjects from the UK Biobank study in a cross-domain transfer setting and demonstrate its ability to reconstruct accurate and topologically plausible biventricular heart meshes with clinical metrics comparable to the previous literature. Finally, we investigate the robustness of our proposed approach and observe its capacity to successfully handle multiple common outlier conditions.
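For reference, the Chamfer and Hausdorff distances used to evaluate the reconstructions above can be computed directly in NumPy (a brute-force sketch; real pipelines use accelerated nearest-neighbour search):

```python
import numpy as np

def _sq_dists(a, b):
    """All pairwise squared Euclidean distances, shape (len(a), len(b))."""
    return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)

def chamfer_distance(p, q):
    """Symmetric Chamfer distance: mean squared nearest-neighbour distance,
    summed over both directions."""
    d = _sq_dists(p, q)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance: worst-case nearest-neighbour distance."""
    d = np.sqrt(_sq_dists(p, q))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Chamfer averages over all points and so tolerates isolated outliers, while Hausdorff reports the single worst point, which is why the two are typically quoted together.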


Subject(s)
Heart , Magnetic Resonance Imaging , Humans , Heart/diagnostic imaging , Magnetic Resonance Imaging, Cine/methods , Thorax , Imaging, Three-Dimensional/methods
3.
Entropy (Basel) ; 25(7)2023 Jul 02.
Article in English | MEDLINE | ID: mdl-37509965

ABSTRACT

In this paper, we propose a novel method for point cloud completion called PADPNet. Our approach combines global and local information to infer missing elements of the point cloud. We achieve this by dividing the input point cloud into uniform local regions, called perceptual fields, which can be understood abstractly as special convolution kernels. The set of points in each local region is represented as a feature vector and transformed into N uniform perceptual fields as the input to our transformer model. We also design a geometric density-aware block to better exploit the inductive bias of the point cloud's 3D geometric structure. Our method preserves sharp edges and detailed structures that are often lost in voxel-based or point-based approaches. Experimental results demonstrate that our approach outperforms other methods in reducing the ambiguity of output results. The proposed method has important applications in 3D computer vision and can efficiently recover complete 3D object shapes from missing point clouds.
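The division into local regions can be approximated by grouping each of N chosen centre points with its k nearest neighbours (a hypothetical sketch; the paper's actual partitioning scheme may differ):

```python
import numpy as np

def group_into_fields(points, centers, k):
    """Group the k nearest neighbours of each centre into a local region.
    points: (M, 3); centers: (N, 3). Returns (N, k, 3)."""
    d = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn_idx = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest points
    return points[nn_idx]
```

Each (k, 3) group is then typically pooled into a single feature vector before being fed to the transformer.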

4.
Cells ; 11(24)2022 12 17.
Article in English | MEDLINE | ID: mdl-36552872

ABSTRACT

3D point clouds are gradually becoming more widely used in the medical field; however, they are rarely used for 3D representation of intracranial vessels and aneurysms because data reconstruction is time-consuming. In this paper, we simulate incomplete intracranial vessels (including aneurysms) as acquired in practice from different angles, then propose the Multi-Scope Feature Extraction Network (MSENet) for intracranial aneurysm 3D point cloud completion. MSENet adopts a multi-scope feature extraction encoder to extract global features from the incomplete point cloud. This encoder uses different scopes to fully fuse the neighborhood information of each point. A folding-based decoder is then applied to obtain the complete 3D shape. To enable the decoder to match the original geometric structure intuitively, we feed the original point coordinates into the decoder through residual links. Finally, we merge and sample the complete but coarse point cloud from the decoder to obtain the final refined complete 3D point cloud shape. We conduct extensive experiments on both 3D intracranial aneurysm datasets and the general 3D vision PCN dataset. The results demonstrate the effectiveness of the proposed method on three evaluation metrics compared to the baseline: our model increases the F-score to 0.379 (+21.1%)/0.320 (+7.7%), reduces the Chamfer Distance to 0.998 (-33.8%)/0.974 (-6.4%), and reduces the Earth Mover's Distance to 2.750 (-17.8%)/2.858 (-0.8%).
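The F-score quoted above is the harmonic mean of precision and recall at a distance threshold; a minimal NumPy version (the threshold value here is illustrative, not the one used in the paper):

```python
import numpy as np

def point_cloud_fscore(pred, gt, tau=0.01):
    """F-score at threshold tau: precision = fraction of predicted points
    within tau of the ground truth; recall = the converse."""
    d = np.sqrt(((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1))
    precision = (d.min(axis=1) < tau).mean()
    recall = (d.min(axis=0) < tau).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike Chamfer distance, the F-score saturates at 1.0 and rewards covering the target surface rather than minimizing average error.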


Subject(s)
Intracranial Aneurysm , Humans
5.
Plants (Basel) ; 11(23)2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36501381

ABSTRACT

In this paper, a novel point cloud segmentation and completion framework is proposed to achieve high-quality leaf area measurement of melon seedlings. The input of our algorithm is point cloud data collected by an Azure Kinect camera from the top view of the seedlings, and our method enhances measurement accuracy in two ways based on the acquired data. On the one hand, we propose a neighborhood space-constrained method to effectively filter out hover points and outlier noise, which significantly enhances the quality of the point cloud data. On the other hand, by leveraging a purely linear mixer mechanism, a new network named MIX-Net is developed to achieve segmentation and completion of the point cloud simultaneously. Unlike previous methods that treat these two tasks separately, the proposed network balances them in a more definite and effective way, achieving satisfactory performance on both. The experimental results show that our method outperforms its competitors and provides more accurate measurement results. Specifically, for the seedling segmentation task, our method obtains a 3.1% and 1.7% performance gain compared with PointNet++ and DGCNN, respectively. Meanwhile, the R2 of leaf area measurement improved from 0.87 to 0.93 and the MSE decreased from 2.64 to 2.26 after leaf shading completion.

6.
Front Plant Sci ; 13: 947690, 2022.
Article in English | MEDLINE | ID: mdl-36247622

ABSTRACT

The plant factory is a form of controlled environment agriculture (CEA) that offers a promising solution to the problem of food security worldwide. Plant growth parameters need to be acquired for process control and yield estimation in plant factories. In this paper, we propose a fast and non-destructive framework for extracting growth parameters. First, a ToF camera (Microsoft Kinect V2) is used to obtain a point cloud from the top view, from which the lettuce point cloud is separated. According to the growth characteristics of lettuce, a geometric method is proposed to complete the incomplete lettuce point cloud. The treated point cloud has a high linear correlation with the actual plant height (R2 = 0.961), leaf area (R2 = 0.964), and fresh weight (R2 = 0.911), a significant improvement over the untreated point cloud. The results suggest that our proposed point cloud completion method has the potential to tackle the problem of obtaining plant growth parameters from a single 3D view with occlusion.
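The R2 values above quantify how well the point-cloud-derived estimates track the ground-truth measurements; the coefficient of determination can be computed as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def r_squared(measured, estimated):
    """Coefficient of determination between ground-truth measurements
    and estimates derived from the (completed) point cloud."""
    ss_res = ((measured - estimated) ** 2).sum()    # residual sum of squares
    ss_tot = ((measured - measured.mean()) ** 2).sum()  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

R2 = 1 means the estimates match the measurements exactly; R2 = 0 means they do no better than predicting the mean.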

7.
Sensors (Basel) ; 22(17)2022 Aug 26.
Article in English | MEDLINE | ID: mdl-36080900

ABSTRACT

We propose a conceptually simple, general framework and end-to-end approach to point cloud completion, entitled PCA-Net. This approach differs from the existing methods in that it does not require a "simple" network, such as multilayer perceptrons (MLPs), to generate a coarse point cloud and then a "complex" network, such as auto-encoders or transformers, to enhance local details. It can directly learn the mapping between missing and complete points, ensuring that the structure of the input missing point cloud remains unchanged while accurately predicting the complete points. This approach follows the minimalist design of U-Net. In the encoder, we encode the point clouds into point cloud blocks by iterative farthest point sampling (IFPS) and k-nearest neighbors and then extract the depth interaction features between the missing point cloud blocks by the attention mechanism. In the decoder, we introduce a new trilinear interpolation method to recover point cloud details, with the help of the coordinate space and feature space of low-resolution point clouds, and missing point cloud information. This paper also proposes a method to generate multi-view missing point cloud data using a 3D point cloud hidden point removal algorithm, so that each 3D point cloud model generates a missing point cloud through eight uniformly distributed camera poses. Experiments validate the effectiveness and superiority of PCA-Net in several challenging point cloud completion tasks, and PCA-Net also shows great versatility and robustness in real-world missing point cloud completion.
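The iterative farthest point sampling (IFPS) step mentioned above greedily picks points that maximize coverage of the cloud; a straightforward NumPy version (a sketch of the standard algorithm, not the authors' code):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily select k points, each farthest from those already chosen.
    points: (N, 3). Returns the (k, 3) sampled subset."""
    n = len(points)
    chosen = np.zeros(k, dtype=int)   # start from point 0
    dist = np.full(n, np.inf)         # distance to the nearest chosen point
    for i in range(1, k):
        dist = np.minimum(dist, ((points - points[chosen[i - 1]]) ** 2).sum(-1))
        chosen[i] = dist.argmax()     # pick the point farthest from the set
    return points[chosen]
```

The sampled points then serve as block centres for the k-nearest-neighbour grouping that forms the encoder's input blocks.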


Subject(s)
Algorithms , Neural Networks, Computer , Cluster Analysis , Electric Power Supplies , Research Design
8.
J Imaging ; 8(5)2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35621889

ABSTRACT

Recent advances in depth measurement and its utilization have made point cloud processing more critical. The human head is also essential for communication, and its three-dimensional data are expected to be utilized in this regard. However, a single RGB-Depth (RGBD) camera is prone to occlusion and to depth measurement failure for dark hair colors such as black hair. Point cloud completion, where an entire point cloud is estimated and generated from a partial point cloud, has recently been studied, but only shape has been learned, rather than the completion of colored point clouds. Thus, this paper proposes a machine learning-based completion method for colored point clouds with XYZ location information and International Commission on Illumination (CIE) LAB (L*a*b*) color information. The proposed method uses the color difference between point clouds, based on the Chamfer Distance (CD) or Earth Mover's Distance (EMD) used for point cloud shape evaluation, as a color loss. In addition, an adversarial loss on L*a*b*-Depth images rendered from the output point cloud can improve the visual quality. The experiments examined networks trained on a colored point cloud dataset created by combining two 3D datasets: hairstyles and faces. Experimental results show that using the adversarial loss with the colored point cloud renderer improves evaluation in the image domain.
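A color loss of the kind described, Chamfer-style shape matching plus an L*a*b* difference taken at the shape-nearest neighbours, might look like this (a sketch under an assumed weighting, not the paper's implementation):

```python
import numpy as np

def colored_chamfer_loss(p, q, color_weight=0.5):
    """Chamfer distance on xyz plus squared L*a*b* difference between
    shape-nearest neighbours. p, q: (N, 6) arrays [x, y, z, L, a, b]."""
    d = ((p[:, None, :3] - q[None, :, :3]) ** 2).sum(-1)  # xyz distances only
    nn_pq = d.argmin(axis=1)   # nearest q for every p
    nn_qp = d.argmin(axis=0)   # nearest p for every q
    shape = d.min(axis=1).mean() + d.min(axis=0).mean()
    color = ((p[:, 3:] - q[nn_pq, 3:]) ** 2).sum(-1).mean() \
          + ((q[:, 3:] - p[nn_qp, 3:]) ** 2).sum(-1).mean()
    return shape + color_weight * color
```

Matching colors at the shape-nearest neighbour (rather than jointly over all six coordinates) keeps the geometric and chromatic terms separable, so their balance can be tuned with a single weight.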
