Modeling reflections from 2D images is essential for photorealistic rendering and novel view synthesis. Recent approaches enhance Gaussian primitives with reflection-related material attributes to enable physically based rendering (PBR) with Gaussian Splatting. However, the material inference often lacks sufficient constraints, especially under limited environment modeling, resulting in illumination aliasing and reduced generalization. In this work, we revisit the problem from a multi-view perspective and show that multi-view consistent material inference with more physically based environment modeling is key to learning accurate reflections with Gaussian Splatting. To this end, we enforce 2D Gaussians to produce multi-view consistent material maps during deferred shading. We also track photometric variations across views to identify highly reflective regions, which serve as strong priors for reflection strength terms. To handle indirect illumination caused by inter-object occlusions, we further introduce an environment modeling strategy through ray tracing with 2DGS, enabling photorealistic rendering of indirect radiance. Experiments on widely used benchmarks show that our method faithfully recovers both illumination and geometry, achieving state-of-the-art rendering quality in novel view synthesis.
Overview of our method. Our 3D representation consists of 2D Gaussian primitives with physical material attributes, including diffuse color, metallic, roughness, and albedo. We adopt a deferred shading architecture, where multi-view material maps are first rendered and then evaluated through a Bidirectional Reflectance Distribution Function (BRDF) to obtain the final shading results.
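To make the deferred shading step concrete, the sketch below shades rendered G-buffer material maps (albedo, metallic, roughness, normals) with a simplified Cook-Torrance BRDF for a single directional light. This is an illustrative sketch only, not the paper's exact shading model: the GGX distribution and Schlick Fresnel terms are standard PBR choices assumed here, the geometry/visibility term is omitted for brevity, and all array shapes and names are hypothetical.

```python
import numpy as np

def ggx_specular(n_dot_h, n_dot_v, n_dot_l, roughness, f0):
    """Simplified Cook-Torrance specular lobe: GGX normal distribution
    and Schlick Fresnel; the geometry term is dropped for brevity."""
    a2 = roughness ** 4
    d = a2 / (np.pi * ((n_dot_h ** 2) * (a2 - 1.0) + 1.0) ** 2)  # GGX NDF
    f = f0 + (1.0 - f0) * (1.0 - n_dot_v) ** 5                   # Schlick Fresnel
    return d * f / (4.0 * np.maximum(n_dot_v * n_dot_l, 1e-4))

def deferred_shade(albedo, metallic, roughness, normal, view_dir, light_dir, light_rgb):
    """Shade per-pixel material maps (H, W, C arrays) under one
    directional light; view_dir/light_dir are unit (3,) vectors."""
    h = view_dir + light_dir
    h = h / np.linalg.norm(h)                                    # half vector
    n_dot_l = np.clip(np.sum(normal * light_dir, axis=-1, keepdims=True), 0.0, 1.0)
    n_dot_v = np.clip(np.sum(normal * view_dir, axis=-1, keepdims=True), 1e-4, 1.0)
    n_dot_h = np.clip(np.sum(normal * h, axis=-1, keepdims=True), 0.0, 1.0)
    # Base reflectance: 0.04 for dielectrics, tinted albedo for metals.
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic
    diffuse = (1.0 - metallic) * albedo / np.pi                  # Lambertian term
    spec = ggx_specular(n_dot_h, n_dot_v, n_dot_l, roughness, f0)
    return (diffuse + spec) * light_rgb * n_dot_l
```

The key design point of deferred shading is that the BRDF is evaluated once per pixel on alpha-blended material maps, rather than once per Gaussian primitive before blending.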
To address the inconsistency of material maps observed from different viewpoints, caused by the alpha blending of Gaussian primitives, we propose two key components: (a) a multi-view material consistency constraint, which enforces appearance consistency for corresponding surface regions across the projections of multi-view material maps; and (b) a reflection strength prior, which traces photometric variations along camera trajectories to provide an explicit optimization target for the reflection strength term.
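The two components above can be sketched in a few lines. This is a minimal illustration under assumed interfaces, not the paper's implementation: the cross-view warping is assumed to be done elsewhere (via known depth and camera poses), the loss is a plain masked L1, and the prior simply normalizes the photometric variance of corresponding surface points across views.

```python
import numpy as np

def material_consistency_loss(mat_a, mat_b_warped, valid_mask):
    """Masked L1 between a material map (H, W, C) and the corresponding
    map warped from a neighboring view; valid_mask is (H, W, 1) in {0, 1}.
    The warp itself (depth + pose reprojection) is assumed done upstream."""
    diff = np.abs(mat_a - mat_b_warped).mean(axis=-1, keepdims=True) * valid_mask
    return diff.sum() / np.maximum(valid_mask.sum(), 1)

def reflection_strength_prior(colors_per_view):
    """colors_per_view: (V, N, 3) observed colors of the same N surface
    points in V views. Diffuse points look alike across views; strong
    photometric variation signals view-dependent (reflective) appearance.
    Returns a per-point prior normalized to [0, 1]."""
    var = colors_per_view.var(axis=0).mean(axis=-1)   # (N,) cross-view variance
    return var / np.maximum(var.max(), 1e-8)
```

In this sketch the prior is purely data-driven: points whose observed color is stable across the camera trajectory receive a near-zero target for the reflection strength term, while highly view-dependent points receive a target near one.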
In addition, to compensate for inter-object reflections caused by self-occlusions, we introduce (c) an environment modeling strategy based on 2DGS ray tracing. This enables the joint optimization of illumination and geometry.
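The idea of gathering indirect radiance by tracing rays against planar 2D Gaussian primitives can be sketched as below. This is an illustrative toy, not the paper's renderer: each primitive is reduced to an isotropic Gaussian disk on a plane, primitives are stored in a flat list (a real pipeline would use a BVH), and all field names are assumptions.

```python
import numpy as np

def ray_2dgs_radiance(origin, direction, primitives, alpha_cut=1e-3):
    """Trace one ray against planar 2D Gaussian primitives and composite
    their radiance front to back. Each primitive is a dict with keys:
    center (3,), normal (3,), scale (float), color (3,), opacity (float)."""
    hits = []
    for p in primitives:
        denom = np.dot(p["normal"], direction)
        if abs(denom) < 1e-8:
            continue                          # ray parallel to the disk's plane
        t = np.dot(p["normal"], p["center"] - origin) / denom
        if t <= 0:
            continue                          # intersection behind the origin
        x = origin + t * direction            # hit point on the primitive's plane
        r2 = np.sum((x - p["center"]) ** 2) / p["scale"] ** 2
        a = p["opacity"] * np.exp(-0.5 * r2)  # isotropic Gaussian falloff in-plane
        if a > alpha_cut:
            hits.append((t, a, np.asarray(p["color"], float)))
    radiance, transmittance = np.zeros(3), 1.0
    for _, a, c in sorted(hits, key=lambda h: h[0]):  # front-to-back compositing
        radiance += transmittance * a * c
        transmittance *= 1.0 - a
    return radiance
```

Because the same 2D Gaussians represent both the scene geometry and the source of indirect radiance, gradients from the composited radiance flow back into primitive positions and opacities, which is what allows illumination and geometry to be optimized jointly.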
We evaluate our method on the ShinyBlender and GlossySynthetic datasets, where it accurately captures environment reflections as well as secondary light reflections.
We evaluate our method on the real-world Ref-Real dataset, where it accurately reconstructs reflections of the surrounding environment on highly reflective surfaces.
We also evaluate our method on the real-world Mip-NeRF 360 dataset, where existing reflective 3DGS approaches often exhibit degraded performance in complex scenes with limited reflective surfaces. More importantly, we demonstrate that accurate material decomposition facilitates learning more precise geometric reconstruction.
@inproceedings{zhang2025materialrefgs,
title={{MaterialRefGS}: Reflective Gaussian Splatting with Multi-view Consistent Material Inference},
author={Zhang, Wenyuan and Tang, Jimin and Zhang, Weiqi and Fang, Yi and Liu, Yu-Shen and Han, Zhizhong},
booktitle={Advances in Neural Information Processing Systems},
year={2025}
}