PFGS: High Fidelity Point Cloud Rendering via Feature Splatting
Abstract
Rendering high-fidelity images from sparse point clouds remains challenging. Existing learning-based approaches suffer from hole artifacts, missing details, or expensive computation. In this paper, we propose a novel framework to render high-quality images from sparse points. The method is the first attempt to bridge 3D Gaussian Splatting and point cloud rendering and consists of several cascaded modules. A regressor first estimates Gaussian properties in a point-wise manner; these properties are then used to rasterize neural feature descriptors, obtained from a multiscale extractor, onto 2D planes. The projected feature volume is gradually decoded into the final prediction via a multiscale, progressive decoder. The whole pipeline is trained in two stages and is driven by our well-designed progressive and multiscale reconstruction loss. Experiments on different benchmarks demonstrate the superiority of our method in rendering quality and the necessity of its main components.
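The following is a minimal, hedged sketch of the point-wise Gaussian property regressor mentioned in the abstract: an MLP that maps per-point features to the Gaussian parameters (scale, rotation, opacity) later used for feature splatting. The class name, feature dimensions, and parameterization are illustrative assumptions, not taken from the paper.

```python
# Sketch only, not the authors' implementation: a point-wise MLP that regresses
# per-point 3D Gaussian properties from point features. Dimensions and heads
# (scale, quaternion rotation, opacity) are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianPropertyRegressor(nn.Module):
    """Predicts per-point Gaussian properties from point-wise features."""

    def __init__(self, feat_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(inplace=True),
        )
        # Separate heads for scale (3), rotation quaternion (4), and opacity (1).
        self.scale_head = nn.Linear(hidden_dim, 3)
        self.rot_head = nn.Linear(hidden_dim, 4)
        self.opacity_head = nn.Linear(hidden_dim, 1)

    def forward(self, point_feats: torch.Tensor):
        # point_feats: (N, feat_dim) features for N points.
        h = self.mlp(point_feats)
        scale = torch.exp(self.scale_head(h))              # positive scales
        rotation = F.normalize(self.rot_head(h), dim=-1)   # unit quaternion
        opacity = torch.sigmoid(self.opacity_head(h))      # value in (0, 1)
        return scale, rotation, opacity


if __name__ == "__main__":
    regressor = GaussianPropertyRegressor(feat_dim=32)
    feats = torch.randn(1024, 32)  # hypothetical features for 1024 points
    scale, rotation, opacity = regressor(feats)
    print(scale.shape, rotation.shape, opacity.shape)  # (1024, 3) (1024, 4) (1024, 1)
```

In a full pipeline, these predicted properties would parameterize the Gaussians used to splat the multiscale neural feature descriptors into 2D feature maps, which the progressive decoder then turns into the rendered image.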