We introduce an autoregressive human video generation framework that enables interactive multimodal control and low-latency extrapolation in a streaming manner.
We present GAP, which gaussianizes raw point clouds into high-fidelity 3D Gaussians with text guidance via depth-aware diffusion and surface-anchored optimization.
We present NeRFPrior, which adopts a neural radiance field as a prior to learn signed distance fields using volume rendering for indoor scene surface reconstruction.
We propose a general approach that exploits the uncertainty of monocular depth estimates to provide enhanced geometric priors for neural rendering and reconstruction.
We propose a method that seamlessly merges 3DGS with the learning of neural SDFs. To this end, we dynamically align 3D Gaussians on the zero-level set of the neural SDF. Meanwhile, we update the neural SDF by pulling neighboring space onto the aligned 3D Gaussians, which progressively refines the signed distance field near the surface.
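The pulling operation above can be illustrated with a minimal sketch. Assumptions: a toy analytic SDF of a unit sphere stands in for the learned neural SDF, and its gradient is computed in closed form rather than by backpropagation; a query point q is pulled onto the zero-level set via q' = q − f(q) · ∇f(q)/|∇f(q)|.

```python
import numpy as np

def sdf_sphere(q, radius=1.0):
    """Signed distance to a sphere centered at the origin (toy stand-in for a neural SDF)."""
    return np.linalg.norm(q, axis=-1) - radius

def sdf_gradient(q):
    """Closed-form gradient of the sphere SDF: the unit outward direction."""
    n = np.linalg.norm(q, axis=-1, keepdims=True)
    return q / np.clip(n, 1e-8, None)

def pull_to_surface(q):
    """Pull query points onto the zero-level set: q' = q - f(q) * grad / |grad|."""
    d = sdf_sphere(q)[..., None]
    g = sdf_gradient(q)
    g = g / np.clip(np.linalg.norm(g, axis=-1, keepdims=True), 1e-8, None)
    return q - d * g

points = np.array([[0.0, 0.0, 2.0], [0.5, 0.5, 0.5]])
pulled = pull_to_surface(points)
# Each pulled point now lies on the zero-level set (|q'| = 1 for the unit sphere).
```

In the actual method the gradient comes from differentiating the SDF network, and the pull targets are the aligned 3D Gaussians rather than an analytic surface.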
We present a novel differentiable renderer to infer UDFs from multi-view images. Instead of relying on hand-crafted equations, our differentiable renderer is a neural network pre-trained in a data-driven manner, dubbed volume rendering priors.
We introduce a general strategy to speed up training for almost all radiance-field-based methods. The key idea is to reduce redundancy by shooting far fewer rays in the multi-view volume rendering procedure.
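The ray-reduction idea can be sketched as uniform subsampling of the per-image ray grid. Assumptions: the image resolution, keep ratio, and variable names below are illustrative, not the method's actual sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64          # assumed image resolution
keep_ratio = 0.25      # shoot only a quarter of the rays each iteration

all_rays = np.arange(H * W)                               # one index per pixel ray
n_keep = int(all_rays.size * keep_ratio)
batch = rng.choice(all_rays, size=n_keep, replace=False)  # subsampled ray indices
# Only the rays in `batch` are marched through the volume this iteration,
# cutting per-step rendering cost roughly in proportion to keep_ratio.
```

The actual method selects rays to reduce redundancy rather than purely at random; this sketch only shows the mechanics of training on a ray subset.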
Honors and Awards
Comprehensive Excellent Scholarship (综合优秀奖学金) at Tsinghua University, 2024, 2023, 2021, and 2019.
Excellent Undergraduate Thesis (本科优秀毕业论文) at Tsinghua University, 2021.
Excellent Graduate (本科优良毕业生) at Tsinghua University, 2021.
I currently serve as a teaching assistant for the undergraduate course "Fundamentals of Programming" (程序设计基础), taught by Prof. Yu-Shen Liu. I am also serving as an undergraduate advisor (本科生辅导员) at the School of Software.
My hobbies include playing the flute, running, and singing. Feel free to contact me!