NeRF stands for Neural Radiance Fields, a deep learning technique for 3D scene reconstruction and view synthesis from 2D images. It typically requires multiple images of a scene captured from different viewpoints to construct an accurate 3D representation. NeRF has inspired extensions and improvements, such as NeRF-W, that aim to make it more efficient, more accurate, and applicable to a wider range of scenarios, including dynamic scenes and real-time applications. NeRF and its variants have had a significant impact on computer vision, computer graphics, and 3D scene reconstruction.
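To make the core idea concrete, here is a minimal sketch of a NeRF-style model: an MLP maps a 3D point and view direction to color and density, and volume rendering composites samples along each camera ray. The network size, encoding, and sampling bounds are illustrative assumptions, not the configuration of any published model.

```python
# Minimal NeRF-style sketch: MLP (xyz, direction) -> (RGB, density),
# plus classic volume rendering along a single ray.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def render_ray(model, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Volume-render one ray: sample points, query the MLP, composite."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction            # (n_samples, 3)
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)
    delta = torch.full((n_samples,), (far - near) / n_samples)
    alpha = 1.0 - torch.exp(-sigma * delta)          # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10]), 0)[:-1]
    weights = alpha * trans                          # per-sample contribution
    color = (weights[:, None] * rgb).sum(0)          # composited pixel color
    depth = (weights * t).sum(0)                     # expected ray depth
    return color, depth
```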
However, if only a single image is available, 3D priors must be incorporated to improve the quality of the 3D reconstruction. Existing single-image techniques are limited to a narrow field of view, which greatly restricts their scalability to real-world, large-scale 360-degree panoramic scenes. Researchers present PERF, short for Panoramic Neural Radiance Field, a 360-degree novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
A panoramic image is created by capturing multiple images, often sequentially, and stitching them together into a seamless, wide-angle representation of a landscape, cityscape, or any other scene. The team proposes a collaborative RGBD inpainting method that completes the RGB image and depth map beyond the visible regions, using a trained Stable Diffusion model for RGB inpainting and a monocular depth estimator for depth completion, generating novel appearances and 3D shapes that are invisible from the input panorama.
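The sketch below illustrates this kind of RGBD inpainting pipeline using publicly available stand-ins: the Stable Diffusion inpainting pipeline from the diffusers library and the MiDaS depth estimator from torch.hub. The specific checkpoints, prompt, and file names are assumptions for illustration, not PERF's exact setup.

```python
# Hedged sketch of collaborative RGBD inpainting for a masked (invisible) region:
# Stable Diffusion fills the RGB hole, then a monocular depth estimator
# completes depth for the newly generated content.
import torch
import numpy as np
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) RGB inpainting of the invisible region (checkpoint and prompt are illustrative).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
).to(device)
image = Image.open("novel_view_rgb.png").convert("RGB").resize((512, 512))   # hypothetical file
mask = Image.open("invisible_mask.png").convert("L").resize((512, 512))      # hypothetical file
inpainted = pipe(prompt="an indoor scene", image=image, mask_image=mask).images[0]

# 2) Depth completion on the inpainted RGB with a monocular depth estimator (MiDaS as stand-in).
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform
with torch.no_grad():
    inp = transforms(np.array(inpainted)).to(device)
    depth = midas(inp).squeeze().cpu().numpy()   # relative depth; resize to the view as needed
```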
Training a panoramic neural radiance field (NeRF) from a single panorama is challenging due to the lack of 3D information, large-scale object occlusion, the coupling of reconstruction and generation, and geometry conflicts between visible and invisible regions during inpainting. To tackle these issues, PERF follows a three-step process: 1) single-view NeRF training with depth supervision; 2) collaborative RGBD inpainting of regions of interest (ROI); and 3) progressive inpainting-and-erasing generation.
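A minimal sketch of the first step, depth-supervised NeRF training on the visible pixels of the panorama, is shown below. It reuses the render_ray helper from the earlier sketch; the loss weighting and batching are assumptions, not PERF's exact recipe.

```python
# Sketch of single-view NeRF training with depth supervision:
# photometric loss on panorama colors plus a depth loss against estimated depth.
import torch

def training_step(model, rays, gt_rgb, gt_depth, optimizer, lambda_depth=0.1):
    """One optimization step on rays sampled from the input panorama.

    rays:     (N, 6) ray origins and directions
    gt_rgb:   (N, 3) panorama colors for those rays
    gt_depth: (N,)   estimated depth for those rays (e.g. from a depth estimator)
    """
    pred_rgb, pred_depth = [], []
    for o_d in rays:
        c, d = render_ray(model, o_d[:3], o_d[3:])   # volume rendering as sketched above
        pred_rgb.append(c)
        pred_depth.append(d)
    pred_rgb = torch.stack(pred_rgb)
    pred_depth = torch.stack(pred_depth)

    loss_rgb = ((pred_rgb - gt_rgb) ** 2).mean()          # photometric term
    loss_depth = ((pred_depth - gt_depth) ** 2).mean()    # depth supervision term
    loss = loss_rgb + lambda_depth * loss_depth           # lambda_depth is an assumed weight

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```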
To make the predicted depth map of the ROI consistent with the global panoramic scene, they propose an inpainting-and-erasing method: invisible regions are inpainted from a random view, and geometry that conflicts with observations from other reference views is erased, yielding better 3D scene completion.
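The following sketch shows one way the erasing idea can be expressed: newly inpainted pixels are kept only where their completed depth agrees with depth observed from a reference view, and conflicted regions are masked out. The tolerance and masking convention are illustrative assumptions.

```python
# Hedged sketch of erasing conflicted geometry after inpainting from a random view.
import torch

def erase_conflicts(inpaint_depth, reference_depth, inpaint_mask, tol=0.05):
    """Keep only inpainted pixels whose depth agrees with reference observations.

    inpaint_depth:   (H, W) depth completed for the random novel view
    reference_depth: (H, W) depth of the same pixels rendered/warped from a
                     reference view that actually observed them (0 = unobserved)
    inpaint_mask:    (H, W) bool, True where content was newly inpainted
    """
    observed = reference_depth > 0
    # A conflict occurs where the reference view sees geometry that disagrees
    # with the inpainted depth beyond a tolerance.
    conflict = observed & (torch.abs(inpaint_depth - reference_depth) > tol)
    keep = inpaint_mask & ~conflict     # erase conflicted regions
    return keep
```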
The researchers ran experiments on the Replica and PERF-in-the-wild datasets and demonstrate that PERF achieves a new state of the art for single-view panoramic neural radiance fields. They report that PERF can also be applied to panorama-to-3D, text-to-3D, and 3D scene stylization tasks, yielding strong results across several promising applications.
PERF significantly improves the performance of single-shot NeRF, but it depends heavily on the accuracy of the depth estimator and the Stable Diffusion model, so the team says future work will focus on improving both.
Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.
Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive technological advancement, and he is passionate about understanding nature with the help of tools like mathematical models, ML models, and AI.