diff --git a/_pages/MetaUrban_rebuttal.md b/_pages/MetaUrban_rebuttal.md
index 174298e..2ac70ed 100644
--- a/_pages/MetaUrban_rebuttal.md
+++ b/_pages/MetaUrban_rebuttal.md
@@ -7,9 +7,9 @@ nav: false
 nav_order: 2
 ---
+
-This page displays video demonstrations in response to reviewers’ feedback. Click on any video to play. You can also find specific responses by searching the reviewer's name.
+This page showcases video demonstrations in response to reviewer feedback. Click on any video to play.

-Integration of OmniVerse as the renderer to improve visual realism and PhysX as the physical engine to improve interactive realism.
+Integration of NVIDIA Omniverse as the renderer to improve visual realism, and NVIDIA PhysX as the physics engine to improve interactive realism.

-Preliminary results of harnessing Diffusion Model to improve the visual quality of MetaUrban in 2D space. Input: RGB image, depth map, semantic map and provided by MetaUrban; output: photo-realistic image. (It is an extension of our previous work SimGen)
+Preliminary results of harnessing diffusion models to improve the visual quality of MetaUrban in 2D space. Input: RGB image rendered by MetaUrban; output: photo-realistic image. (It is an extension of our previous work, SimGen.)

-Preliminary results of harnessing Gaussian Splatting to improve the visual quality of MetaUrban in 3D space. Input: monocular videos; output: 3D scene represented by Gaussian Splatting. Integrated within the simulator, it enables training agents with photo-realistic RGB images.
+Preliminary results of harnessing Gaussian splatting to improve the visual quality of MetaUrban in 3D space. Input: monocular videos; output: a 3D scene represented by Gaussian splatting. Integrated within the simulator, it enables training agents with photo-realistic RGB images as observations.