Software To-dos/Roadmap (list)
Overview
As it currently stands, ROS2 control and the URDFs/simulations are spun up for the drivetrain and arm, and we are working on the hardware interface so that we can begin integration testing (sending a message from ROS to the firmware board to spin the drivetrain wheels back and forth).
Autonomy and the Cartesian arm controller can be worked on in parallel. Now that ROS2 is working on the Jetson, the ROS2 CAN library is the first priority, followed by integration testing via commands run straight from the Jetson command line/RViz. The science module is not a priority until later stages.
Management
Fix CI/CD on GitHub
Review, then merge all finished-but-unmerged PRs (Keyon's arm-ros-gazebo-control, Orson's arm-control)
Add more links/help to software training
Total Robot
Create Jetson-to-Nucleo CAN interface (Colin)
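A minimal sketch of what this interface might look like from the Jetson side, assuming the Nucleo is reachable over SocketCAN and that we use the python-can library; the channel name and arbitration IDs below are placeholders until the firmware protocol is pinned down:
```python
# Sketch of the Jetson-to-Nucleo CAN link via python-can over SocketCAN.
# "can0" and the arbitration ID are placeholders; the real IDs must match
# whatever the firmware team defines.
import can

def open_bus(channel: str = "can0") -> can.Bus:
    # Assumes the interface was brought up first, e.g.:
    #   sudo ip link set can0 up type can bitrate 500000
    return can.Bus(channel=channel, interface="socketcan")

def send_command(bus: can.Bus, arbitration_id: int, payload: bytes) -> None:
    msg = can.Message(arbitration_id=arbitration_id, data=payload, is_extended_id=False)
    bus.send(msg)

def read_reply(bus: can.Bus, timeout: float = 0.1):
    # Returns a can.Message, or None on timeout.
    return bus.recv(timeout=timeout)

if __name__ == "__main__":
    bus = open_bus()
    send_command(bus, 0x101, bytes([0x01, 0x02]))  # hypothetical ID/payload
    print(read_reply(bus))
```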
Install the SSD in the Jetson when it arrives
Test running on the Jetson (builds run fine, but try running the drivetrain launch file)
[If time permits] Use SW_to_URDF to convert the entire SolidWorks assembly to URDF format. Split the exported URDF into one package per module (arm, drivetrain, science, camera) and add the ros2_control tags to each module. This would give us a very high-fidelity simulation of the robot, but is not required for autonomy.
Communications
Set up communication between the base station and the Jetson via the router (later task)
Ping the electrical team to get the router and networking working.
Xbox controller integration: create a ROS node that publishes the Xbox controller values so that the drivetrain/arm can subscribe to them. Look into http://wiki.ros.org/joy and https://get-help.robotigniteacademy.com/t/ros2-foxy-package-joy-not-found/14287 .
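A rough sketch of the node, assuming the joy package is publishing sensor_msgs/Joy on /joy; the axis indices and output topic are placeholders that need to match the real controller mapping and drivetrain controller:
```python
# Sketch of a teleop node that maps /joy onto a Twist for the drivetrain.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist

class XboxTeleop(Node):
    def __init__(self):
        super().__init__("xbox_teleop")
        self.pub = self.create_publisher(Twist, "cmd_vel", 10)
        self.sub = self.create_subscription(Joy, "joy", self.on_joy, 10)

    def on_joy(self, joy: Joy):
        twist = Twist()
        twist.linear.x = joy.axes[1]   # left stick vertical (assumed index)
        twist.angular.z = joy.axes[0]  # left stick horizontal (assumed index)
        self.pub.publish(twist)

def main():
    rclpy.init()
    rclpy.spin(XboxTeleop())

if __name__ == "__main__":
    main()
```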
Set up camera feed between ROS on jetson and base station
Create the camera video stream publishing node. This node will read from the camera and publish the data to a topic every frame. It will be used both by the autonomy nodes and for streaming video across the network: through web_video_server, the topic should also be reachable across the network so that the base station can connect to it (a sketch follows the link below).
GitHub - RobotWebTools/web_video_server: HTTP Streaming of ROS Image Topics in Multiple Formats
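A sketch of that node, assuming OpenCV can open the camera and cv_bridge is available; the device index, topic name, and frame rate are placeholders:
```python
# Sketch of the camera stream publisher: reads frames with OpenCV and
# publishes sensor_msgs/Image so both the autonomy nodes and
# web_video_server can consume them.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class CameraPublisher(Node):
    def __init__(self):
        super().__init__("camera_publisher")
        self.pub = self.create_publisher(Image, "camera/image_raw", 10)
        self.cap = cv2.VideoCapture(0)  # assumed device index
        self.bridge = CvBridge()
        self.timer = self.create_timer(1.0 / 30.0, self.grab_frame)  # ~30 fps

    def grab_frame(self):
        ok, frame = self.cap.read()
        if not ok:
            self.get_logger().warn("Failed to read frame")
            return
        msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
        msg.header.stamp = self.get_clock().now().to_msg()
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(CameraPublisher())

if __name__ == "__main__":
    main()
```
With web_video_server running on the Jetson, the base station should then be able to view any published image topic in a browser (it serves over HTTP on port 8080 by default).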
Drivetrain
The drivetrain's description package is already complete: the URDF (with ros2_control tags) has been made.
Set up CAN interface for drivetrain
This is currently stubbed out and waiting on the Jetson-to-Nucleo CAN API. Once that is finished, we just need to fill the stubbed functions with the drivetrain actuator read()/write() calls.
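The real read()/write() stubs live in the C++ ros2_control hardware interface; this Python sketch just illustrates the kind of CAN framing they might wrap. The frame IDs, layout, and units are assumptions pending the firmware protocol:
```python
# Hypothetical CAN framing for the drivetrain read()/write() pair.
import struct
import can

CMD_ID = 0x200    # hypothetical "wheel velocity command" frame ID
STATE_ID = 0x201  # hypothetical "wheel encoder state" frame ID

def write_wheel_velocity(bus: can.Bus, wheel: int, rad_per_s: float) -> None:
    # Pack wheel index (1 byte) + velocity (4-byte little-endian float).
    payload = struct.pack("<Bf", wheel, rad_per_s)
    bus.send(can.Message(arbitration_id=CMD_ID, data=payload, is_extended_id=False))

def read_wheel_state(bus: can.Bus, timeout: float = 0.05):
    # Returns (wheel, position_rad), or None on timeout / unrelated frame.
    msg = bus.recv(timeout=timeout)
    if msg is None or msg.arbitration_id != STATE_ID or len(msg.data) < 5:
        return None
    wheel, position = struct.unpack("<Bf", bytes(msg.data[:5]))
    return wheel, position
```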
Integration test for drivetrain
Send a message from ROS, over CAN, to the Nucleo to spin the drivetrain wheels back and forth.
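A throwaway test script for this, assuming the drivetrain controller exposes a Twist command topic (the topic name depends on the controller configuration):
```python
# One-shot integration test: oscillate the commanded velocity so the
# wheels spin back and forth. "cmd_vel" is an assumed topic name.
import math
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class BackAndForth(Node):
    def __init__(self):
        super().__init__("drivetrain_integration_test")
        self.pub = self.create_publisher(Twist, "cmd_vel", 10)
        self.t = 0.0
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        twist = Twist()
        twist.linear.x = 0.5 * math.sin(self.t)  # alternate forward/backward
        self.t += 0.1
        self.pub.publish(twist)

def main():
    rclpy.init()
    rclpy.spin(BackAndForth())

if __name__ == "__main__":
    main()
```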
Arm
The arm's description package is already complete: the URDF (with ros2_control tags) has been made.
Cartesian arm controller
Set up CAN interface for arm
This is currently stubbed out and waiting on the Jetson-to-Nucleo CAN API. Once that is finished, we just need to fill the stubbed functions with the arm actuator read()/write() calls, mirroring the drivetrain sketch above.
Science
Create science package:
Build URDF for science module with ros2_control
Create RVIZ launch file for science module
Create Gazebo launch file for science module
Create hardware interface package for science module
This can be “stubbed” until the general CAN interface between the Jetson and firmware board has been completed.
This should define each motor/actuator and sensor on the science module (e.g. the digger mechanism and raising mechanism) and their methods of control. The drivetrain code shows how this should be done.
Camera
Create camera package:
Build URDF for camera module with ros2_control
Create RVIZ launch file for camera module
Create Gazebo launch file for camera module
Create hardware interface package for camera module
This can be “stubbed” until the general CAN interface between the Jetson and firmware board has been completed.
For the camera, this is just an interface between the Jetson and Nucleo for the two servo motors that control which way the camera module is pointing.
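A tiny sketch of what that command might look like, with the frame ID, units, and byte layout as placeholders pending the firmware protocol:
```python
# Hypothetical camera pan/tilt command: two servo angles in one CAN frame.
import struct
import can

CAMERA_SERVO_ID = 0x300  # placeholder frame ID

def point_camera(bus: can.Bus, pan_deg: float, tilt_deg: float) -> None:
    payload = struct.pack("<ff", pan_deg, tilt_deg)  # 8 bytes fills the frame
    bus.send(can.Message(arbitration_id=CAMERA_SERVO_ID, data=payload,
                         is_extended_id=False))
```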
Autonomy
MVP:
Bring in the ZED2 camera ROS2 package as a new git submodule; write a launch file that starts the camera and publishes the camera messages.
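A sketch of such a launch file; package and launch-file names differ between zed-ros2-wrapper releases, so the names below (zed_wrapper, zed_camera.launch.py, camera_model) are assumptions to verify against the submodule:
```python
# Sketch: include the ZED wrapper's own launch file from ours.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():
    zed_launch = os.path.join(
        get_package_share_directory("zed_wrapper"), "launch", "zed_camera.launch.py"
    )
    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(zed_launch),
            launch_arguments={"camera_model": "zed2"}.items(),
        )
    ])
```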
Write the ArUco marker reader node; it takes in the published camera messages and outputs a Pose identifying where the ArUco marker is.
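A sketch of the node using OpenCV's pre-4.7 cv2.aruco API (OpenCV 4.7+ replaces detectMarkers/estimatePoseSingleMarkers with ArucoDetector and solvePnP); the dictionary, marker size, and camera intrinsics are placeholders, with the real intrinsics to be read from the CameraInfo topic:
```python
# Sketch of the ArUco reader: Image in, PoseStamped out.
import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped
from cv_bridge import CvBridge

MARKER_SIZE_M = 0.20  # assumed marker side length
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # placeholder intrinsics
D = np.zeros(5)  # placeholder distortion

class ArucoReader(Node):
    def __init__(self):
        super().__init__("aruco_reader")
        self.bridge = CvBridge()
        self.pub = self.create_publisher(PoseStamped, "aruco_pose", 10)
        self.sub = self.create_subscription(Image, "camera/image_raw", self.on_image, 10)
        self.dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, self.dictionary)
        if ids is None:
            return
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, MARKER_SIZE_M, K, D)
        pose = PoseStamped()
        pose.header = msg.header
        pose.pose.position.x, pose.pose.position.y, pose.pose.position.z = tvecs[0][0]
        # Orientation from rvec left as a TODO (convert via cv2.Rodrigues).
        self.pub.publish(pose)

def main():
    rclpy.init()
    rclpy.spin(ArucoReader())

if __name__ == "__main__":
    main()
```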
Work through the First-Time Robot Setup guides from the Nav2 library:
URDF
Odometry
Sensors
Footprint
Lifecycle and Composition Nodes
Create a lifecycle node that starts the behaviour tree/action server for NavigateToPose; it takes in a Pose message and navigates to the specified pose automatically.
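A sketch of that entry point, written as a plain node for brevity (it can be converted to a lifecycle node once the bring-up order is settled); the input topic name is a placeholder:
```python
# Sketch: forward an incoming PoseStamped to Nav2's NavigateToPose action.
import rclpy
from rclpy.node import Node
from rclpy.action import ActionClient
from geometry_msgs.msg import PoseStamped
from nav2_msgs.action import NavigateToPose

class NavigateToPoseClient(Node):
    def __init__(self):
        super().__init__("navigate_to_pose_client")
        self.client = ActionClient(self, NavigateToPose, "navigate_to_pose")
        self.sub = self.create_subscription(PoseStamped, "goal_pose_in", self.on_pose, 10)

    def on_pose(self, pose: PoseStamped):
        if not self.client.server_is_ready():
            self.get_logger().warn("Nav2 action server not available yet")
            return
        goal = NavigateToPose.Goal()
        goal.pose = pose
        self.client.send_goal_async(goal)  # fire-and-forget; add result callbacks later

def main():
    rclpy.init()
    rclpy.spin(NavigateToPoseClient())

if __name__ == "__main__":
    main()
```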
OLD:
Set up the ZED2 cameras so that they take the camera & depth data and build a point cloud; publish that point cloud data to RViz/ROS.
Make a test node that publishes that point cloud data.
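A sketch of such a test node using sensor_msgs_py (available in recent ROS2 distros); the topic and frame ID are placeholders:
```python
# Sketch: publish a synthetic PointCloud2 so RViz and downstream nodes
# can be checked without the ZED attached.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2
from std_msgs.msg import Header

class TestCloudPublisher(Node):
    def __init__(self):
        super().__init__("test_cloud_publisher")
        self.pub = self.create_publisher(PointCloud2, "points", 10)
        self.timer = self.create_timer(0.5, self.publish_cloud)

    def publish_cloud(self):
        header = Header(frame_id="zed_camera_link")  # assumed frame
        header.stamp = self.get_clock().now().to_msg()
        # A small 10x10 grid of points one metre ahead of the camera.
        points = [(1.0, 0.1 * i, 0.1 * j) for i in range(10) for j in range(10)]
        self.pub.publish(point_cloud2.create_cloud_xyz32(header, points))

def main():
    rclpy.init()
    rclpy.spin(TestCloudPublisher())

if __name__ == "__main__":
    main()
```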
Ensure that the drivetrain ZED transforms are correct, i.e. the ZED camera is placed at the correct point in space in the URDF.
Create a behaviour tree for what the rover does when it receives stimuli from the environment (e.g. drive forward).
When doing this, research ROS2 Nav2 thoroughly so that the behaviour tree can be integrated smoothly into Nav2.
Look at the Autonomous Navigation Mission for exactly what the robot needs to do.
Create action servers for each action in the behaviour tree
Integrate with the ROS2 Nav2 library