Papers
arxiv:2407.03857

PFGS: High Fidelity Point Cloud Rendering via Feature Splatting

Published on Jul 4, 2024
Abstract

Rendering high-fidelity images from sparse point clouds remains challenging. Existing learning-based approaches suffer from hole artifacts, missing details, or expensive computation. In this paper, we propose a novel framework to render high-quality images from sparse points. The method is a first attempt to bridge 3D Gaussian Splatting and point cloud rendering, and consists of several cascaded modules. A regressor first estimates Gaussian properties in a point-wise manner; these estimated properties are then used to rasterize neural feature descriptors, produced by a multiscale extractor, onto 2D planes. The projected feature volume is gradually decoded into the final prediction by a multiscale, progressive decoder. The whole pipeline is trained in two stages and is driven by our well-designed progressive and multiscale reconstruction loss. Experiments on different benchmarks show the superiority of our method in rendering quality and the necessity of its main components.
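The abstract's pipeline (a point-wise regressor predicting Gaussian properties, which then weight how per-point feature descriptors are splatted onto a 2D plane) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the linear regressor standing in for the learned network, and the isotropic-Gaussian splat are all simplifying assumptions for illustration.

```python
import numpy as np

def regress_gaussian_properties(point_feats, W, b):
    """Hypothetical stand-in for the paper's point-wise regressor:
    a linear map from point features to Gaussian properties
    (2D mean offset, isotropic scale, opacity)."""
    out = point_feats @ W + b
    offsets = out[:, :2]                           # screen-space mean refinement
    scales = np.exp(out[:, 2:3])                   # exp keeps the scale positive
    opacity = 1.0 / (1.0 + np.exp(-out[:, 3:4]))   # sigmoid keeps opacity in (0, 1)
    return offsets, scales, opacity

def splat_features(xy, feats, offsets, scales, opacity, H, W_img):
    """Naive feature splatting: each point deposits its feature descriptor
    into the image plane with an isotropic Gaussian weight; depth ordering
    and alpha compositing are omitted for brevity."""
    C = feats.shape[1]
    img = np.zeros((H, W_img, C))
    wsum = np.zeros((H, W_img, 1))
    ys, xs = np.mgrid[0:H, 0:W_img]
    for i in range(len(xy)):
        cx, cy = xy[i] + offsets[i]
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        w = opacity[i, 0] * np.exp(-0.5 * d2 / scales[i, 0] ** 2)
        img += w[..., None] * feats[i]
        wsum += w[..., None]
    # Normalize by accumulated weight so uncovered pixels stay zero.
    return img / np.maximum(wsum, 1e-8)
```

In the paper's pipeline, the resulting feature image would then be passed to the multiscale, progressive decoder to produce the final RGB prediction; here it is simply a weighted feature accumulation.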

