
Autodrone with Perception and Obstacle Avoidance

Our Auto-Drone

Background

We aim to use the AutoDrone to assist humans with repetitive tasks. Current state-of-the-art (SOTA) methods use various sensors (video, LiDAR, thermal) and process the data locally to generate commands. Our approach instead integrates a webcam, a Raspberry Pi, and a 5G modem to enable the drone's perception. The system lets the drone identify obstacles in its path by transmitting images over 5G to an external server, which infers the depth of the current frame and returns a command. Our solution achieved improved flight stability, successful image transmission, and command return based on depth perception. Future work will focus on improving perception precision and automating flight control after obstacle detection.
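As a rough illustration of this loop, the sketch below shows a Pi-side client that captures webcam frames with OpenCV and posts them to the depth server. The server URL, the endpoint, and the JSON fields are placeholders, not the team's actual interface.

```python
# Minimal capture-and-offload sketch for the Raspberry Pi side.
# SERVER_URL and the response fields are assumptions, not the real interface.
import time

import cv2
import requests

SERVER_URL = "http://192.0.2.10:8000/depth"  # placeholder address

def capture_and_offload():
    cap = cv2.VideoCapture(0)  # default webcam on the Raspberry Pi
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # JPEG-compress the frame to keep the 5G uplink payload small
            ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            if not ok:
                continue
            resp = requests.post(
                SERVER_URL,
                files={"image": ("frame.jpg", buf.tobytes(), "image/jpeg")},
                timeout=2.0,
            )
            resp.raise_for_status()
            result = resp.json()  # e.g. {"min_depth": 1.3, "command": "stop"}
            print(result["command"], result["min_depth"])
            time.sleep(0.1)  # throttle the frame rate
    finally:
        cap.release()

if __name__ == "__main__":
    capture_and_offload()
```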

Methods


Segmentation with GroundedSAM
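A minimal sketch of how a GroundedSAM-style pipeline produces such masks is shown below: GroundingDINO detects boxes from a text prompt and SAM converts them into masks. The checkpoint paths and the "obstacle" prompt are assumptions, not the team's configuration.

```python
# Sketch of text-prompted segmentation with GroundedSAM; paths and the
# "obstacle" prompt are assumptions, not the team's actual setup.
import torch
from groundingdino.util import box_ops
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed config/checkpoint paths -- adjust to wherever the weights live
dino = load_model("GroundingDINO_SwinT_OGC.py",
                  "groundingdino_swint_ogc.pth", device=device)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
predictor = SamPredictor(sam)

image_source, image = load_image("frame.jpg")  # RGB array + normalized tensor
boxes, logits, phrases = predict(
    model=dino,
    image=image,
    caption="obstacle",   # assumed text prompt
    box_threshold=0.35,
    text_threshold=0.25,
    device=device,
)

# GroundingDINO returns normalized cxcywh boxes; SAM expects absolute xyxy
h, w = image_source.shape[:2]
boxes_xyxy = box_ops.box_cxcywh_to_xyxy(boxes) * torch.tensor([w, h, w, h])

predictor.set_image(image_source)
tboxes = predictor.transform.apply_boxes_torch(boxes_xyxy, image_source.shape[:2])
masks, _, _ = predictor.predict_torch(
    point_coords=None,
    point_labels=None,
    boxes=tboxes.to(predictor.device),
    multimask_output=False,
)  # masks: one binary mask per detected box
```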

For reconstruction and ego-motion localization in the Fall 2024 semester, we used COLMAP to perform feature detection, matching, and sparse reconstruction. Please follow the official documentation for more details. We recommend installing the GPU version of the COLMAP GUI following this instruction.
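For readers who prefer scripting over the GUI, the sketch below runs the same sparse pipeline through pycolmap; the paths are placeholders, and the team's actual runs used the COLMAP GUI.

```python
# Sparse reconstruction via pycolmap; paths are placeholders.
import pathlib

import pycolmap

image_dir = pathlib.Path("images")  # placeholder: frames from the drone camera
out_dir = pathlib.Path("sparse")    # placeholder: reconstruction output
out_dir.mkdir(exist_ok=True)
db_path = out_dir / "database.db"

pycolmap.extract_features(db_path, image_dir)  # SIFT feature detection
pycolmap.match_exhaustive(db_path)             # exhaustive feature matching
maps = pycolmap.incremental_mapping(db_path, image_dir, out_dir)  # sparse SfM
maps[0].write(out_dir)                         # save the first reconstruction
```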


3D Gaussian Splatting representation of Ryon Lab, visualized with GaussianEditor GUI.


Predicted Depth as the camera approaches the obstacles
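The README does not name the depth network, so the sketch below uses the publicly available MiDaS small model as a stand-in to show what per-frame monocular depth inference on the server might look like.

```python
# Monocular depth inference sketch; MiDaS small is an assumed stand-in
# for whichever depth model the server actually runs.
import cv2
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# MiDaS small model and its matching input transform, fetched via torch.hub
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
batch = transform(img).to(device)

with torch.no_grad():
    pred = midas(batch)
    # Resize the prediction back to the input resolution
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()

# MiDaS predicts relative inverse depth: larger values mean closer surfaces
print(float(depth.max()))
```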

  • Focus 4: Computation Offload over 5G

We use 5G to transmit images captured by the webcam on the Raspberry Pi to a server for depth inference and obstacle detection; the server then returns the depth data and a command to the drone.
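A minimal server-side sketch of this offload endpoint appears below; Flask, the /depth route, and the 1.0 m stop threshold are illustrative assumptions rather than the team's actual implementation.

```python
# Sketch of the server-side offload endpoint; the route, threshold, and
# response fields are assumptions matching the client sketch above.
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
STOP_THRESHOLD_M = 1.0  # assumed minimum clearance in meters

def infer_depth(image_bgr):
    # Stand-in for the real depth network running on the server;
    # returns a constant map so the endpoint is testable end to end
    h, w = image_bgr.shape[:2]
    return np.full((h, w), 5.0, dtype=np.float32)

@app.route("/depth", methods=["POST"])
def depth_endpoint():
    # Decode the JPEG frame uploaded by the Raspberry Pi client
    data = request.files["image"].read()
    frame = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
    depth_map = infer_depth(frame)
    min_depth = float(depth_map.min())
    # Return the depth estimate plus a simple avoidance command
    command = "stop" if min_depth < STOP_THRESHOLD_M else "continue"
    return jsonify({"min_depth": min_depth, "command": command})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```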


Images transmitted over 5G, and depth inference with the returned depth value.

Report

Autodrone team final report from the ELEC 594 Capstone Project in Fall 2024 at Rice University:
ELEC594 Autodrone_project_final_report

This project is licensed under the Electrical and Computer Engineering Department at Rice University.