How to use the sparse depth? #53
Thanks for your interest! The point clouds have been projected onto the image plane, so they can be regarded as a sparse depth image. All pixels of the color image and the sparse depth map (not only the "effective" ones with valid measurements) are fed into the network.
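For reference, here is a minimal sketch of how such a projection is commonly done. The function name, the camera intrinsics `K`, and the extrinsics `T` are illustrative assumptions, not this repository's actual code:

```python
import numpy as np

def project_to_sparse_depth(points_xyz, K, T, height, width):
    """Project 3D points (N, 3) into the image plane to form a sparse depth map.

    points_xyz : points in the sensor frame (e.g. LiDAR/radar), shape (N, 3)
    K          : 3x3 camera intrinsic matrix
    T          : 4x4 sensor-to-camera extrinsic transform
    Returns an (H, W) depth image; pixels that receive no point stay 0.
    """
    # Transform points into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    pts_cam = (T @ pts_h.T).T[:, :3]                                    # (N, 3)

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection with the intrinsics.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]

    # Discard projections that fall outside the image.
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[valid], v[valid], z[valid]

    # Scatter depths into an image; unhit pixels remain 0 (the "sparse" part).
    depth = np.zeros((height, width), dtype=np.float32)
    depth[v, u] = z
    return depth
```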
Actually, it does not. You may have heard of special strategies such as sparsity-invariant convolutions, but according to our experiments and analysis, vanilla 2D convolution is enough for acceptable predictions.
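To illustrate the point being made (this is a hedged sketch, not the network's actual configuration): the sparse depth map is simply treated as an extra image channel and passed through ordinary `nn.Conv2d` layers, with no validity mask or sparsity-invariant convolution.

```python
import torch
import torch.nn as nn

class VanillaFusionStem(nn.Module):
    """Toy stem: concatenate RGB and sparse depth, then apply plain 2D convolutions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),  # 3 RGB channels + 1 sparse-depth channel
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb, sparse_depth):
        # rgb: (B, 3, H, W); sparse_depth: (B, 1, H, W) with zeros at empty pixels.
        x = torch.cat([rgb, sparse_depth], dim=1)
        return self.net(x)
```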
Thank you for your contribution!
Thank you for your outstanding contribution!
I want to know how the Color-dominant Branch combines the color image with the point cloud before they are sent to the network. Does the radar point cloud keep only the "effective" points, and does the color image use only the pixels at those same points? If so, how can a dense depth map be obtained?
This question has been bothering me; I hope you can answer it. Thank you again!