Software Architecture sync-up meeting

Meeting agenda: (just some thoughts from halfway through working on the doc → this is a lot, so I will work through the software with you)

  • System by System discussion

    • Project Comms:

      • What is the current status of the Comms system?
        - Currently upgrading the Jetson to a newer ROS version; further tests will follow once that is done

      • Definition:

        • The communication system mainly covers the link between the ground station and the rover

      • Is the current comms system fully decoupled (meaning the only thing that needs to change when we put on a new comms module is the configuration, i.e. no code change needed)?

        • Yes. Once comms are set up with ROS2 and the initial configuration is done, radios can be interchanged

      • Multiple radio links and switches in between

        • For what? How many? Is this really necessary?

      • RSSI to evaluate the radio signal strength?

        • To do this we would have to write custom ROS2 nodes that read the radio RSSI values over serial (see the sketch at the end of this section)

      • What types of messages go between the ground station and the rover, and what is not being relayed back?

        • Multiple video feeds (front cameras + arm), VectorNav data (GPS location + IMU), and status updates (for autonomy, when the robot reaches its final destination)

      • If we want to debug, can the comms system help us get the data we are interested in?

      • How can we differentiate the comms links (i.e. split traffic between them):

        • By the amount of information being communicated

        • By having different links do different things (easier)
          ??
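
      • Sketch for the RSSI question above: a minimal custom ROS2 node (Python/rclpy) that reads the radio's RSSI over serial and publishes it. This is only an illustration; the serial port, baud rate, and one-integer-per-line format are assumptions that would need to match the actual radio.

```python
# Hypothetical sketch: publish radio RSSI read over serial as a ROS2 topic.
# Assumes the radio prints one integer RSSI value per line on /dev/ttyUSB0.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32
import serial


class RssiPublisher(Node):
    def __init__(self):
        super().__init__('rssi_publisher')
        self.pub = self.create_publisher(Int32, 'radio/rssi', 10)
        self.port = serial.Serial('/dev/ttyUSB0', 57600, timeout=0.1)
        self.timer = self.create_timer(1.0, self.poll_rssi)  # poll once per second

    def poll_rssi(self):
        line = self.port.readline().decode(errors='ignore').strip()
        if not line:
            return
        try:
            msg = Int32(data=int(line))
        except ValueError:
            return  # ignore lines that are not a plain integer
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(RssiPublisher())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```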

    • Project Drivetrain:

      • What modules are involved after the ROS Joy message is received?

        • Joy → XboxControllerNode → xbox_msg → CoordinateNode → twist msg (a simplified sketch of this mapping follows at the end of this section)

      • What data is received by the CAN wrapper (is it fused)?

        • twist msg, not fused

      • How can the autonomy module and sensor feedback interact with the current drivetrain system?

        • Nav2 (does path planning + object avoidance) → twist msg → CAN Wrapper → CAN msgs to motors

      • How does the E-stop come into play with the current system?

        • My understanding is that the E-stop should reset the Jetson and kill power to the main hardware. I don't believe this should rely on software to occur.

      • Do we want to enable a brake feature?

        • Yes, we will have this

      • Motor calibration feature support?

        • ??

      • How frequently are we updating over CAN (regularly, or only upon new data)?

        • Regular Updates: The CAN socket is read regularly in a loop, even if no new data is available. The loop continuously checks for new frames and updates the internal map upon receiving new data.

        • Upon New Data: The internal map (recv_map_) is updated only when new CAN frames are read successfully. The read attempts happen frequently due to the continuous loop and the small sleep interval (see the read-loop sketch at the end of this section).

      • How do we want to implement our own closed-loop control? Velocity based? Position based? Just percentage of motor output?

        • Ideally PID control (a minimal velocity PID sketch follows at the end of this section)

      • Do we want to have a sensorless mode?

        • I don't see a reason why we would want this

      • What are the modes inside the drivetrain? (not too sure what this means)

        • GPS location controlled?

        • Encoder position controlled?

          • GPS, IMU, and encoder data are fused to localize the robot's position and orientation

          • In the case of autonomy, this localization data is used to control the robot

          • In the case of teleoperation, IMU + GPS data are returned to the user but not used for direct control
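
      • Sketch for the teleop chain above (Joy → XboxControllerNode → xbox_msg → CoordinateNode → twist): a simplified single-node version of the axis-to-Twist mapping. The real pipeline splits this across two nodes with a custom xbox_msg in between; the axis indices and scale factors here are assumptions.

```python
# Simplified sketch of the joystick-to-Twist mapping (axis indices are assumptions).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist


class TeleopToTwist(Node):
    def __init__(self):
        super().__init__('teleop_to_twist')
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.sub = self.create_subscription(Joy, 'joy', self.on_joy, 10)

    def on_joy(self, joy: Joy):
        twist = Twist()
        twist.linear.x = 1.5 * joy.axes[1]   # left stick up/down -> forward speed (m/s)
        twist.angular.z = 2.0 * joy.axes[0]  # left stick left/right -> turn rate (rad/s)
        self.pub.publish(twist)


def main():
    rclpy.init()
    rclpy.spin(TeleopToTwist())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```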
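
      • Sketch of the CAN read pattern described above (continuous loop, small sleep, internal map keyed by CAN ID that is updated only when a frame is actually read), using python-can; the channel name and sleep interval are assumptions.

```python
# Sketch of the CAN wrapper read loop: poll continuously, sleep briefly,
# and update the internal map only when a new frame is actually read.
import time
import can

recv_map = {}  # arbitration ID -> latest payload (mirrors recv_map_ in the wrapper)

bus = can.interface.Bus(channel='can0', interface='socketcan')

while True:
    frame = bus.recv(timeout=0.0)      # non-blocking read attempt
    if frame is not None:              # "upon new data": map changes only on a successful read
        recv_map[frame.arbitration_id] = bytes(frame.data)
    time.sleep(0.001)                  # small sleep so the loop doesn't spin at 100% CPU
```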
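
      • Sketch for the closed-loop control question: a minimal velocity PID. The gains are placeholders, and the real loop would take its feedback from the encoders and send its output to the motors through the CAN wrapper.

```python
# Minimal velocity PID sketch; gains and the surrounding I/O are placeholders.
class VelocityPid:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_velocity, measured_velocity, dt):
        error = target_velocity - measured_velocity
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: the returned command would be scaled and sent to a motor over CAN.
pid = VelocityPid(kp=0.8, ki=0.2, kd=0.05)
command = pid.update(target_velocity=1.0, measured_velocity=0.6, dt=0.02)
```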

    • Project Sensory:

      • What's the status of the VN-300?

        • What does the module return?

        • How can we utilize the data

          • sensor fusion needed?

            • Currently, it gives us IMU + GPS data

          • What are the other systems that need to listen to this sensor?

            • What is the way to update them: upon request, or regularly?

            • The other systems that listen to this data are the GUI (for parts of it) and the path planning and object detection models; all of these get updated when the sensor publishes values to the topic, and they read from it (see the subscriber sketch at the end of this section)

      • Ignore the localization board for now?

        • This would be good to have but can we just buy one?

      • How can we define the duties of the four cameras, and what is the communication logic across all four cameras?

      • Encoder integration plan? → any sensor fusion needed?

        • Nico completed localization through sensor data fusion, and encoder integration is not a separate piece of work; we have a YAML config file and ROS2 takes care of the rest.

      • What about gimbal controller?

        • How can we initiate control, or is there just a fixed set of hardcoded operations for the gimbal?

          • Personally, I don't think we need the gimbal or that it is a good use of time at the moment: autonomy will not use the gimbal, and if we have cameras at the front and sides we should be able to see all objects and assets around us for the other missions
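
      • Sketch of how a consumer (GUI, path planning, object detection) would listen to the VN-300 data described above: a minimal node subscribing to an IMU topic and a GPS fix topic. The topic names are assumptions and would need to match what the VectorNav driver actually publishes.

```python
# Sketch of a VN-300 consumer: callbacks fire whenever the driver publishes.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu, NavSatFix


class Vn300Listener(Node):
    def __init__(self):
        super().__init__('vn300_listener')
        # Topic names are assumptions; match them to the actual VectorNav driver.
        self.imu_sub = self.create_subscription(Imu, 'vectornav/imu', self.on_imu, 10)
        self.fix_sub = self.create_subscription(NavSatFix, 'vectornav/gps', self.on_fix, 10)

    def on_imu(self, msg: Imu):
        self.get_logger().debug(f'yaw rate: {msg.angular_velocity.z:.3f} rad/s')

    def on_fix(self, msg: NavSatFix):
        self.get_logger().debug(f'lat/lon: {msg.latitude:.6f}, {msg.longitude:.6f}')


def main():
    rclpy.init()
    rclpy.spin(Vn300Listener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```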

    • Project GUI

      • Same question: what are the inputs and what are the outputs?

        • Inputs: 2x camera feeds from the front of the rover, or the arm camera when doing arm control

      • What user-defined input is available via joystick + emulated interface, or via the command line?

        • It will be done through the joystick + emulated inputs, such as hitting the X key to reset the objective

      • Do we process data on the GUI end or not?

        • No; we should not need a large amount of data processing on the Jetson. The only likely processing is converting encoder values into speed (see the sketch at the end of this section).
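
      • Sketch for the one piece of processing mentioned above (encoder values → speed). The ticks-per-revolution and wheel radius are placeholder numbers.

```python
# Convert encoder tick deltas into a wheel speed for display in the GUI.
# TICKS_PER_REV and WHEEL_RADIUS_M are placeholders for the real drivetrain values.
import math

TICKS_PER_REV = 4096
WHEEL_RADIUS_M = 0.15


def wheel_speed(delta_ticks: int, dt: float) -> float:
    """Linear wheel speed in m/s from an encoder tick delta over dt seconds."""
    revolutions = delta_ticks / TICKS_PER_REV
    return revolutions * 2.0 * math.pi * WHEEL_RADIUS_M / dt


# Example: 512 ticks in 50 ms -> roughly 2.4 m/s
print(f'{wheel_speed(512, 0.05):.2f} m/s')
```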

    • Project ARM

      • What is the model we want to use? https://moveit.picknik.ai/main/doc/concepts/concepts.html

      • Are we treating the encoder, controller, and motor as a closed-box module that the model just interfaces with?

        • yes

      • what are the available control modes for the arm?

        • One control mode: user-operated. We are not doing autonomy for the arm this year.

      • How can we do pipelining for the arm motion?

        • E.g., the user commands something like "pick up an object at point A and follow a path to point B"

        • Do we just give one point at a time, or what? (See the waypoint sketch at the end of this section.)

          • How do we plan to know these points and provide them? Then we would have to localize an object in 3D space, and might as well try to do autonomy for the arm too

        • This question is better worded as: how can we interface with the model? Do we have a single layer to it?

          • Interfacing the model with the arm is done through ROS2, which offers a hardware abstraction layer: you put the configs in a YAML file and it interfaces with the hardware for us.

      • (Note: there isn't a lot for the arm because if the model is good then we don't need to worry, but we need to understand the system's capabilities first.)
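
      • Sketch for the pipelining question above: one way to hand the arm a sequence of waypoints is a single JointTrajectory message sent to a standard ros2_control joint trajectory controller. The joint names, topic name, and waypoint values below are assumptions, not the team's agreed interface.

```python
# Sketch: send a sequence of arm waypoints as one JointTrajectory message.
# Joint names, topic name, and waypoint values are assumptions.
import rclpy
from rclpy.node import Node
from rclpy.duration import Duration
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint


class ArmWaypointSender(Node):
    def __init__(self):
        super().__init__('arm_waypoint_sender')
        self.pub = self.create_publisher(
            JointTrajectory, 'arm_controller/joint_trajectory', 10)

    def send(self, waypoints):
        traj = JointTrajectory()
        traj.joint_names = ['shoulder', 'elbow', 'wrist']  # placeholder joint names
        for i, positions in enumerate(waypoints):
            point = JointTrajectoryPoint()
            point.positions = positions
            point.time_from_start = Duration(seconds=2.0 * (i + 1)).to_msg()
            traj.points.append(point)
        self.pub.publish(traj)


def main():
    rclpy.init()
    node = ArmWaypointSender()
    node.send([[0.0, 0.5, 0.0], [0.3, 0.8, -0.2]])  # pick point A, then point B
    rclpy.spin_once(node, timeout_sec=0.5)           # give the publisher time to send
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```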

    • AUX interface

      • How can the Jetson handle communication with the PDB?

        • Do all board and microcontroller communication over CAN; we can modify the existing wrapper, which means fewer headaches

      • How can the Jetson handle communication with a general AUX board?

        • Can we define a generic custom message scheme? (See the sketch at the end of this section.)

          • CAN
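
      • Sketch of one possible shape for the generic custom message scheme over CAN: the arbitration ID encodes a board ID and a command, and the payload is a packed struct. The ID layout and field encoding are assumptions, not an agreed scheme.

```python
# Sketch of a generic Jetson <-> PDB/AUX message scheme over CAN.
# The ID layout (board in the high bits, command in the low bits) is an assumption.
import struct
import can

BOARD_PDB = 0x1
BOARD_AUX = 0x2


def make_frame(board_id: int, command: int, value: float) -> can.Message:
    """Pack one float payload; arbitration ID = (board_id << 6) | command."""
    return can.Message(
        arbitration_id=(board_id << 6) | (command & 0x3F),
        data=struct.pack('<f', value),
        is_extended_id=False,
    )


def parse_frame(frame: can.Message):
    board_id = frame.arbitration_id >> 6
    command = frame.arbitration_id & 0x3F
    (value,) = struct.unpack('<f', frame.data[:4])
    return board_id, command, value


# Example: command 0x01 to the PDB with a single float argument.
frame = make_frame(BOARD_PDB, 0x01, 12.5)
print(parse_frame(frame))
```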

  • What is the IPC between different threads, and what are the message and thread priorities?

  • Are there any hard timing requirements?

 

What do I want to get out of it?

A system architecture diagram for all the involved systems and how they interact

A detailed document covering the details of each system that are not captured by the diagram