AutoDrone aims to assist humans with repetitive tasks. Current state-of-the-art methods use various sensors (video, LiDAR, thermal) and process data locally to generate commands. Our approach instead equips the drone with a webcam, a Raspberry Pi, and a 5G modem: the drone identifies obstacles in its path by transmitting images over 5G to an external server, which infers the depth of the current frame and returns a command. Our results include improved flight stability and successful image transmission with command return based on depth perception. Future work will focus on improving perception precision and automating flight control after obstacle detection.
- Focus 1: Panoptic Segmentation
Segmentation with GroundedSAM
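The report contains the team's exact segmentation pipeline; the flow can be approximated with the public Hugging Face ports of Grounding DINO and SAM, where a text prompt proposes boxes and SAM turns each box into a pixel mask. The sketch below is illustrative only: the checkpoints, prompt, and input file name are assumptions, not the team's configuration.

```python
# Open-vocabulary segmentation sketch: Grounding DINO detects objects from a
# text prompt, then SAM converts each detected box into a segmentation mask.
# Checkpoints, prompt, and "frame.jpg" are illustrative assumptions.
import torch
from PIL import Image
from transformers import (AutoProcessor, AutoModelForZeroShotObjectDetection,
                          SamModel, SamProcessor)

image = Image.open("frame.jpg").convert("RGB")  # hypothetical drone frame

# 1) Text-conditioned detection with Grounding DINO.
det_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
det_model = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny")
prompt = "a chair. a door. a person."  # example obstacle classes
inputs = det_processor(images=image, text=prompt, return_tensors="pt")
with torch.no_grad():
    outputs = det_model(**inputs)
boxes = det_processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, target_sizes=[image.size[::-1]])[0]["boxes"]
if len(boxes) == 0:
    raise SystemExit("no detections for this prompt")

# 2) Box-prompted mask prediction with SAM.
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam_model = SamModel.from_pretrained("facebook/sam-vit-base")
sam_inputs = sam_processor(image, input_boxes=[boxes.tolist()], return_tensors="pt")
with torch.no_grad():
    sam_outputs = sam_model(**sam_inputs)
masks = sam_processor.post_process_masks(
    sam_outputs.pred_masks, sam_inputs["original_sizes"],
    sam_inputs["reshaped_input_sizes"])
print(f"{len(boxes)} detections, mask tensor shape: {masks[0].shape}")
```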
- Focus 2: 3D Reconstruction: Gaussian Splatting; reconstruction with LiDAR
For reconstruction and ego-motion localization in the Fall 2024 semester, we used COLMAP to perform feature detection, matching, and sparse reconstruction. Please follow the official documentation for more details. We recommend installing a GPU-enabled build of the COLMAP GUI following this instruction.
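The team drove COLMAP through its GUI; for reference, the same feature-detection, matching, and sparse-mapping flow can also be scripted with the pycolmap bindings. A minimal sketch follows; the paths are placeholders.

```python
# Sparse reconstruction sketch with pycolmap (COLMAP's Python bindings),
# mirroring the GUI flow: feature extraction, exhaustive matching, then
# incremental mapping. Paths are placeholders.
from pathlib import Path
import pycolmap

image_dir = Path("images")        # hypothetical folder of drone frames
workspace = Path("colmap_out")
workspace.mkdir(exist_ok=True)
database = workspace / "database.db"

pycolmap.extract_features(database, image_dir)  # SIFT keypoints + descriptors
pycolmap.match_exhaustive(database)             # pairwise feature matching
maps = pycolmap.incremental_mapping(database, image_dir, workspace)
print(maps[0].summary())  # cameras, registered images, points, reprojection error
```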
3D Gaussian Splatting representation of Ryon Lab, visualized with the GaussianEditor GUI.
- Focus 3: Depth Perception: Depth_Estimator
Predicted Depth as the camera approaches the obstacles
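The report's Depth_Estimator module is not reproduced in this summary; as a stand-in, the sketch below uses an off-the-shelf DPT/MiDaS checkpoint to show how a single frame becomes a depth map and then a scalar proximity cue. The model name, center-crop heuristic, and input file name are assumptions.

```python
# Monocular depth sketch with an off-the-shelf DPT/MiDaS model as a stand-in
# for the report's Depth_Estimator module. MiDaS predicts *relative inverse*
# depth, so larger values mean nearer surfaces.
from transformers import pipeline
from PIL import Image
import numpy as np

depth = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")
frame = Image.open("frame.jpg").convert("RGB")  # hypothetical webcam frame
result = depth(frame)  # {"predicted_depth": tensor, "depth": PIL image}

# Collapse the map to one scalar the flight logic can threshold on:
# the nearest predicted surface in the central image region.
d = np.array(result["depth"], dtype=np.float32)  # larger = nearer (inverse depth)
h, w = d.shape
center = d[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
print("obstacle proximity (center crop, max inverse depth):", center.max())
```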
- Focus 4: Computation Offload over 5G
Images captured by the webcam on the Raspberry Pi are transmitted over 5G to a server, which performs depth inference and obstacle detection before returning depth data and a command to the drone (a client-side sketch follows the figure below).
Images transmitted over 5G, and depth inference with the returned depth value.
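The transport details (endpoint, payload format, thresholds) are not specified in this summary, so the following Pi-side loop is a hedged sketch: capture a frame, POST the JPEG to the inference server over the 5G link, and react to the returned depth value. The server URL, JSON schema, and 0.5 m threshold are hypothetical.

```python
# Offload-loop sketch for the Raspberry Pi side: capture a webcam frame,
# POST it to the inference server over the 5G link, and act on the reply.
# The endpoint URL, JSON schema, and threshold are assumptions.
import time
import cv2
import requests

SERVER = "http://192.0.2.10:8000/infer"  # hypothetical server on the 5G network
STOP_DISTANCE_M = 0.5                    # hypothetical obstacle threshold

cap = cv2.VideoCapture(0)                # USB webcam on the Pi
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        if not ok:
            continue
        resp = requests.post(SERVER, data=jpeg.tobytes(),
                             headers={"Content-Type": "image/jpeg"}, timeout=2.0)
        reply = resp.json()              # e.g. {"min_depth_m": 0.42, "command": "stop"}
        if reply["min_depth_m"] < STOP_DISTANCE_M:
            print("obstacle ahead, server command:", reply["command"])
        time.sleep(0.1)                  # ~10 Hz capture loop
finally:
    cap.release()
```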
AutoDrone team final report from the ELEC 594 Capstone Project at Rice University, Fall 2024:
ELEC594 Autodrone_project_final_report
This project is licensed under the Electrical and Computer Engineering Department at Rice University.