3D Gaussian Splatting has emerged as an efficient photorealistic novel view synthesis method. However, its reliance on sparse Structure-from-Motion (SfM) point clouds often limits scene reconstruction quality. To address this limitation, this paper proposes a novel 3D reconstruction framework, Gaussian Processes enhanced Gaussian Splatting (GP-GS), in which a multi-output Gaussian Process model is developed to enable adaptive and uncertainty-guided densification of sparse SfM point clouds. Specifically, we propose a dynamic sampling and filtering pipeline that adaptively expands the SfM point clouds by leveraging GP-based predictions to infer new candidate points from the input 2D pixels and depth maps. The pipeline uses uncertainty estimates to guide the pruning of high-variance predictions, ensuring geometric consistency and enabling the generation of dense point clouds. These densified point clouds provide high-quality initial 3D Gaussians, enhancing reconstruction performance. Extensive experiments conducted on synthetic and real-world datasets across various scales validate the effectiveness and practicality of the proposed framework.
GP-GS is a novel framework that enhances the initialization of 3D Gaussian Splatting (3DGS) by leveraging Multi-Output Gaussian Processes (MOGP). It improves the rendering quality of novel view synthesis by densifying sparse point clouds reconstructed via Structure-from-Motion (SfM). The method is particularly effective in complex regions with densely packed objects and challenging lighting conditions.
Our pipeline consists of the following steps:
- Multi-View Image Input: We start with multi-view images and extract per-view depth maps using depth estimation models (e.g., Depth Anything); see the depth-extraction sketch after this list.
- SfM Reconstruction: Sparse point clouds are generated from the input images using Structure-from-Motion (SfM).
- Point Cloud Densification:
  - MOGP is trained to take pixel coordinates and depth values as input and predict dense 3D points with position and color information (see the MOGP sketch after this list).
  - A Matérn kernel is used to model smooth spatial variations, and its parameters are optimized via gradient updates.
- Uncertainty-Based Filtering:
  - High-variance noisy points are filtered out using a variance-based thresholding strategy to ensure structured densification (see the filtering sketch after this list).
- Gaussian Initialization and Optimization:
  - The densified points are used to initialize 3D Gaussians, which undergo further optimization to improve geometric accuracy (see the initialization sketch after this list).
- Novel View Rendering:
  - The optimized 3D Gaussians are used for efficient rasterization-based rendering to synthesize high-quality novel views.
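The sketches below illustrate individual pipeline stages under stated assumptions; they are not the repository's actual code. First, a minimal per-view depth-extraction sketch, assuming the Hugging Face `transformers` depth-estimation pipeline with a Depth Anything checkpoint (the model ID and folder layout are assumptions):

```python
# Hedged sketch: extract a relative depth map for each input view with Depth Anything.
# The checkpoint name and directory paths below are assumptions, not repository settings.
from pathlib import Path

import numpy as np
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

image_dir = Path("data/scene/images")   # hypothetical multi-view image folder
depth_dir = Path("data/scene/depths")
depth_dir.mkdir(parents=True, exist_ok=True)

for image_path in sorted(image_dir.glob("*.jpg")):
    image = Image.open(image_path).convert("RGB")
    result = depth_estimator(image)
    # "predicted_depth" is the raw relative depth tensor; save it per view for MOGP input.
    depth = result["predicted_depth"].squeeze().cpu().numpy()
    np.save(depth_dir / f"{image_path.stem}.npy", depth)
```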
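Next, a minimal sketch of the MOGP densification step, assuming a GPyTorch multitask exact GP with a Matérn kernel; the variable names, toy data, and shapes are illustrative, with inputs (pixel coordinates, depth) and outputs (3D position, color) as described above:

```python
# Hedged sketch: multi-output GP mapping (u, v, depth) -> (x, y, z, r, g, b) with GPyTorch.
import torch
import gpytorch


class MultiOutputGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood, num_tasks=6):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.MultitaskMean(
            gpytorch.means.ConstantMean(), num_tasks=num_tasks
        )
        # Matérn kernel models smooth spatial variation over the pixel/depth inputs.
        self.covar_module = gpytorch.kernels.MultitaskKernel(
            gpytorch.kernels.MaternKernel(nu=2.5), num_tasks=num_tasks, rank=1
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)


# Toy training data: N pixels with (u, v, depth) inputs and (x, y, z, r, g, b) targets.
pixel_features = torch.rand(128, 3)
point_attributes = torch.rand(128, 6)

likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=6)
model = MultiOutputGP(pixel_features, point_attributes, likelihood)

# Optimize kernel hyperparameters by gradient updates on the marginal log-likelihood.
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(100):
    optimizer.zero_grad()
    output = model(pixel_features)
    loss = -mll(output, point_attributes)
    loss.backward()
    optimizer.step()
```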
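A minimal sketch of the uncertainty-based filtering step, continuing from the MOGP sketch above; the variance threshold and the query construction are illustrative assumptions:

```python
# Hedged sketch: predict candidate points and prune those with high predictive variance.
model.eval()
likelihood.eval()

# Query features (u, v, depth) for candidate pixels not covered by the sparse SfM points.
query_features = torch.rand(256, 3)

with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(query_features))
    mean = pred.mean       # (256, 6): predicted x, y, z, r, g, b
    var = pred.variance    # (256, 6): per-task predictive variance

# Keep only predictions whose worst per-task variance stays below the threshold.
variance_threshold = 0.05  # illustrative value, not a repository setting
keep = var.max(dim=-1).values < variance_threshold
dense_points = mean[keep]
print(f"kept {int(keep.sum())} of {len(keep)} candidate points")
```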
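Finally, a minimal sketch of seeding 3D Gaussians from the densified points, assuming the common heuristic of deriving each Gaussian's initial scale from its nearest-neighbor distances; all names and values are illustrative:

```python
# Hedged sketch: initialize Gaussian scales, rotations, and opacities from dense points.
import torch

dense_xyz = torch.rand(1000, 3)   # densified point positions (after MOGP + filtering)
dense_rgb = torch.rand(1000, 3)   # densified point colors (seed the Gaussians' base color)

# Pairwise distances; ignore self-distance before taking the 3 nearest neighbors.
dists = torch.cdist(dense_xyz, dense_xyz)
dists.fill_diagonal_(float("inf"))
knn_dists, _ = dists.topk(k=3, largest=False, dim=-1)

# Isotropic, log-parameterized initial scale per Gaussian from the mean neighbor distance.
mean_knn = knn_dists.mean(dim=-1).clamp_min(1e-7)
init_log_scales = torch.log(mean_knn).unsqueeze(-1).repeat(1, 3)

# Identity rotations (unit quaternions) and a low starting opacity complete the seeds.
init_rotations = torch.zeros(dense_xyz.shape[0], 4)
init_rotations[:, 0] = 1.0
init_opacity = torch.full((dense_xyz.shape[0], 1), 0.1)
```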
Installation:
conda env create --file environment.yml
conda activate GP-GS
MOGP:
python MOGP/top_four_contribution.py  # Find the views that contribute most to the SfM point cloud.
python MOGP/mogp_train.py  # Train the MOGP model.
python MOGP/predict.py  # Predict a high-quality dense point cloud.

MOGP for 3D Gaussians Initialization:
python MOGP/rewrite_images_sfm.py
python MOGP/write_points3d.py

3DGS:
python train.py -s <scene path>

Render and Evaluation:
python render.py -m <model path>
python metrics.py -m <model path>

If you find this project useful in your research, please consider citing:
@article{guo2025gp,
title={GP-GS: Gaussian Processes for Enhanced Gaussian Splatting},
author={Guo, Zhihao and Su, Jingxuan and Wang, Shenglin and Fan, Jinlong and Zhang, Jing and Han, Liangxiu and Wang, Peng},
journal={arXiv preprint arXiv:2502.02283},
year={2025}
}
