D-NeRF: neural radiance fields for dynamic scenes
Cite as: hdl:2117/365218
Document type: Conference report
Defense date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Open Access
Except where otherwise noted, content on this work is licensed under a Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Spain
Projects: PERCEPCION-ACCION-APRENDIZAJE INTERACTIVO PARA MODELAR OBJETOS (AEI-PCI2019-103386)
ENTENDER EL MOVIMIENTO HUMANO PARA ADAPTAR EL COMPORTAMIENTO DE UN ROBOT (AEI-TIN2017-90086-R)
Abstract
Neural rendering techniques combining machine learning with geometric reasoning have emerged as one of the most promising approaches for synthesizing novel views of a scene from a sparse set of images. Among them, Neural Radiance Fields (NeRF) stands out: it trains a deep network to map 5D input coordinates (representing spatial location and viewing direction) to a volume density and a view-dependent emitted radiance. However, despite achieving an unprecedented level of photorealism in the generated images, NeRF is only applicable to static scenes, where the same spatial location can be queried from different images. In this paper we introduce D-NeRF, a method that extends neural radiance fields to the dynamic domain, making it possible to reconstruct and render novel images of objects under rigid and non-rigid motions. For this purpose we consider time as an additional input to the system and split the learning process into two main stages: one that encodes the scene into a canonical space, and another that maps this canonical representation into the deformed scene at a particular time. Both mappings are learned with fully connected networks. Once the networks are trained, D-NeRF can render novel images while controlling both the camera view and the time variable, and thus the object's motion. We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
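To make the two-stage pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the idea: one network predicts a time-dependent deformation into a canonical space, and a second network evaluates density and radiance at the deformed point. This is an illustration inferred from the abstract, not the authors' implementation; the class and function names are hypothetical, and the actual D-NeRF additionally uses positional encodings and volume rendering along camera rays.

```python
# Hypothetical sketch of D-NeRF's two-stage mapping (not the paper's code).
# Stage 1: a deformation network maps a 3D point and time t to a
# displacement into a shared canonical space.
# Stage 2: a canonical network maps the displaced point and viewing
# direction to volume density and view-dependent RGB radiance.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256, depth=4):
    """Plain fully connected network, as the abstract describes."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)


class DNeRFSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.deform = mlp(3 + 1, 3)        # (x, t) -> delta_x
        self.canonical = mlp(3 + 3, 1 + 3)  # (x + delta_x, dir) -> (sigma, rgb)

    def forward(self, x, d, t):
        delta_x = self.deform(torch.cat([x, t], dim=-1))
        out = self.canonical(torch.cat([x + delta_x, d], dim=-1))
        sigma = torch.relu(out[..., :1])     # density constrained to >= 0
        rgb = torch.sigmoid(out[..., 1:])    # radiance constrained to [0, 1]
        return sigma, rgb


# Usage: query a batch of sample points at time t = 0.5
x = torch.rand(1024, 3)
d = torch.rand(1024, 3)
t = torch.full((1024, 1), 0.5)
sigma, rgb = DNeRFSketch()(x, d, t)
```

Because time enters only through the deformation network, all observations share a single canonical radiance field, which is what lets the method render the scene at novel combinations of camera view and time.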
Description
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Pumarola, A. [et al.]. D-NeRF: neural radiance fields for dynamic scenes. In: IEEE Conference on Computer Vision and Pattern Recognition. "Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)". Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 10313-10322. DOI 10.1109/CVPR46437.2021.01018.
Publisher version: https://ieeexplore.ieee.org/document/9578753
Collections
- IRI - Institut de Robòtica i Informàtica Industrial, CSIC-UPC - Conference papers and communications [589]
- VIS - Visió Artificial i Sistemes Intel·ligents - Conference papers and communications [296]
- ROBiri - Grup de Percepció i Manipulació Robotitzada de l'IRI - Conference papers and communications [265]
- Doctorat en Automàtica, Robòtica i Visió - Conference papers and communications [180]
Files | Description | Size | Format | View
---|---|---|---|---
2510-D-nerf_-Ne ... lds-for-dynamic-scenes.pdf | | 3.546 MB | PDF | View/Open