Controlling a real robot directly raises many problems: robot hardware is expensive, and code rarely works perfectly on the first try, so we cannot predict what will go wrong. To address this, we build a simulation environment and test the code there first.
- Check Host IP (eth0: inet 000.000.000.000)
# WSL2
$ ifconfig
# Windows
PS > wsl hostname -I
- Run RobotVision_Xycar
# (1) WSL2 Terminal 1
$ roslaunch rosbridge_server rosbridge_websocket.launch
# (2) In a Windows terminal (or double-click the .exe files)
# Unity starts; enter the host IP (the WSL2 IP) as "ws://{000.000.000.000}:9090" and press "Enter".
PS > (your directory)/RobotVision_xycar/Build/Windows/Start/RVS_start.exe
# If the host IP is correct, ROS connects to RVS.
PS > (your directory)/RobotVision_xycar/Build/Windows/Xycar/RVS.exe
# (3) WSL2 Terminal 2
$ roslaunch assignment1 driving.launch
# (1) Ubuntu Terminal 1
$ roslaunch rosbridge_server rosbridge_websocket.launch
# (2) Terminal 2
# Unity starts; enter the Ubuntu IP (localhost) as "ws://localhost:9090" and press "Enter".
$ ~/xycar_ws/src/RobotVision_xycar/Build/Linux/Start/RVS_start.x86_64
# If the host IP is correct, ROS connects to RVS.
$ ~/xycar_ws/src/RobotVision_xycar/Build/Linux/Xycar/RVS.x86_64
# (3) Terminal 3
$ roslaunch assignment1 driving.launch
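For reference, below is a minimal sketch of what a driving node could start from: it only subscribes to the simulator camera and displays the frames with OpenCV. It is not the assignment1 node itself, and the topic name /usb_cam/image_raw is an assumption; check the actual topics with rostopic list once RVS is connected.

#!/usr/bin/env python
# Minimal sketch, not the assignment1 node.
# Assumption: RVS publishes the camera image on /usb_cam/image_raw (verify with rostopic list).
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS Image message into an OpenCV BGR frame.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow('camera', frame)
    cv2.waitKey(1)

rospy.init_node('driving_sketch')
rospy.Subscriber('/usb_cam/image_raw', Image, image_callback)
rospy.spin()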
'F1' : Car camera view (ROS)
'F2' : Top view
'F3' : Car rear view
[1] Image Read
Image : OpenCV-ImageRead.png
Line >> img = cv2.line(img, (0,0), (511, 511), (255, 0, 0), 5)
Rectangle >> img = cv2.rectangle(img, (100,100), (300,500), (0,255,0), 3)
Circle >> img = cv2.circle(img, (300,400), 100, (0,0,255), 2)
Text >> cv2.putText(img, 'Test', (10,50), cv2.FONT_HERSHEY_SIMPLEX, 4, (255,255,255))
Image file read >> img = cv2.imread('picture.jpg', cv2.IMREAD_GRAYSCALE)
Image show >> cv2.imshow('Color', img)
Image save >> cv2.imwrite('new_image.jpg', img)
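Putting the calls above together, a minimal runnable sketch (picture.jpg and new_image.jpg are placeholder file names):

import cv2

img = cv2.imread('picture.jpg', cv2.IMREAD_COLOR)                 # load an image from disk (BGR)
img = cv2.line(img, (0, 0), (511, 511), (255, 0, 0), 5)           # blue diagonal line
img = cv2.rectangle(img, (100, 100), (300, 500), (0, 255, 0), 3)  # green rectangle
img = cv2.circle(img, (300, 400), 100, (0, 0, 255), 2)            # red circle
cv2.putText(img, 'Test', (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)  # white text
cv2.imshow('Color', img)                                          # show the result in a window
cv2.waitKey(0)                                                    # wait for any key press
cv2.imwrite('new_image.jpg', img)                                 # save the result
cv2.destroyAllWindows()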
[2] Gray Scale
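A minimal sketch of grayscale conversion with cv2.cvtColor (picture.jpg is a placeholder file name):

import cv2

img = cv2.imread('picture.jpg')                 # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # 3-channel BGR -> 1-channel grayscale
cv2.imshow('Gray', gray)
cv2.waitKey(0)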
[3] Gaussian Blur
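A minimal sketch of Gaussian blurring, usually applied to the grayscale image to suppress noise (the 5x5 kernel size is an example value):

import cv2

gray = cv2.imread('picture.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder file name
blur = cv2.GaussianBlur(gray, (5, 5), 0)                # 5x5 kernel; sigma 0 means it is derived from the kernel size
cv2.imshow('Blur', blur)
cv2.waitKey(0)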
[4] HSV - Binary
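A minimal sketch of HSV conversion followed by binarization with cv2.inRange; the bounds below are placeholder values that keep bright (high-V) pixels and need tuning for the actual scene:

import cv2

img = cv2.imread('picture.jpg')              # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # BGR -> HSV
lower = (0, 0, 180)                          # placeholder lower bound (H, S, V)
upper = (180, 255, 255)                      # placeholder upper bound (H, S, V)
binary = cv2.inRange(hsv, lower, upper)      # 255 where the pixel is inside the range, else 0
cv2.imshow('Binary', binary)
cv2.waitKey(0)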
[5] ROI
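A minimal sketch of cropping a region of interest with NumPy slicing; the row range below (lower 40% of the frame) is a placeholder choice:

import cv2

img = cv2.imread('picture.jpg')     # placeholder file name
h, w = img.shape[:2]
roi = img[int(h * 0.6):h, 0:w]      # keep only the lower part of the frame
cv2.imshow('ROI', roi)
cv2.waitKey(0)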