diff --git a/perception/autoware_image_projection_based_fusion/CMakeLists.txt b/perception/autoware_image_projection_based_fusion/CMakeLists.txt index 7c8160d6fe1b7..5074741808018 100644 --- a/perception/autoware_image_projection_based_fusion/CMakeLists.txt +++ b/perception/autoware_image_projection_based_fusion/CMakeLists.txt @@ -17,6 +17,8 @@ endif() # Build non-CUDA dependent nodes ament_auto_add_library(${PROJECT_NAME} SHARED src/camera_projection.cpp + src/fusion_collector.cpp + src/fusion_matching_strategy.cpp src/fusion_node.cpp src/debugger.cpp src/utils/geometry.cpp diff --git a/perception/autoware_image_projection_based_fusion/README.md b/perception/autoware_image_projection_based_fusion/README.md index dcf35e45bbd9d..ab9e1224aa1b1 100644 --- a/perception/autoware_image_projection_based_fusion/README.md +++ b/perception/autoware_image_projection_based_fusion/README.md @@ -2,81 +2,239 @@ ## Purpose -The `autoware_image_projection_based_fusion` is a package to fuse detected obstacles (bounding box or segmentation) from image and 3d pointcloud or obstacles (bounding box, cluster or segmentation). +The `autoware_image_projection_based_fusion` package is designed to enhance obstacle detection accuracy by integrating information from both image-based and LiDAR-based perception. It fuses detected obstacles — such as bounding boxes or segmentation — from 2D images with 3D point clouds or other obstacle representations, including bounding boxes, clusters, or segmentation. This fusion helps to refine obstacle classification and detection in autonomous driving applications. -## Inner-workings / Algorithms +### Fusion algorithms -### Sync Algorithm +The package provides multiple fusion algorithms, each designed for specific use cases. 
Below are the different fusion methods along with their descriptions and detailed documentation links:

-#### matching

+| Fusion Name                    | Description                                                                                                                                                                                               | Detail                                           |
+| ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
+| roi_cluster_fusion             | Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector.                                                                      | [link](./docs/roi-cluster-fusion.md)             |
+| roi_detected_object_fusion     | Updates classification labels of detected objects using ROI information from a 2D object detector.                                                                                                        | [link](./docs/roi-detected-object-fusion.md)     |
+| pointpainting_fusion           | Augments the point cloud by painting each point with additional information from ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy.  | [link](./docs/pointpainting-fusion.md)           |
+| roi_pointcloud_fusion          | Matches the point cloud with ROIs from a 2D object detector to detect unknown-labeled objects.                                                                                                            | [link](./docs/roi-pointcloud-fusion.md)          |
+| segmentation_pointcloud_fusion | Filters out points belonging to less relevant regions, as defined by semantic or instance segmentation from a 2D image segmentation model.                                                                 | [link](./docs/segmentation-pointcloud-fusion.md) |

-The offset between each camera and the lidar is set according to their shutter timing.
-After applying the offset to the timestamp, if the interval between the timestamp of pointcloud topic and the roi message is less than the match threshold, the two messages are matched.
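The offset-and-threshold matching described above can be sketched as follows. This is an illustrative Python snippet, not the package's C++ implementation, and the timestamps are invented for the example:

```python
# Sketch of timestamp matching between a 3D message and a RoI message.
# A RoI matches when its offset-corrected timestamp lies within
# `threshold` seconds of the 3D message timestamp.

def is_matched(msg3d_stamp: float, rois_stamp: float, offset: float, threshold: float) -> bool:
    return abs((rois_stamp - offset) - msg3d_stamp) < threshold

# Camera 0 is triggered ~59 ms after the LiDAR scan starts.
rois_timestamp_offsets = [0.059, 0.010, 0.026, 0.042, 0.076, 0.093]

print(is_matched(100.000, 100.061, rois_timestamp_offsets[0], 0.05))  # True (2 ms residual)
print(is_matched(100.000, 100.200, rois_timestamp_offsets[0], 0.05))  # False
```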
+## Inner Workings / Algorithms -![roi_sync_image1](./docs/images/roi_sync_1.png) +![fusion_algorithm](./docs/images/fusion_algorithm.drawio.svg) + +The fusion process operates on two primary types of input data: + +- **Msg3d**: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR. +- **RoIs** (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes. + +Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion. + +The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion. + +### Step 1: Matching and Creating a Collector + +When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp. + +- If a matching collector is found, the incoming data is added to it. +- If no matching collector exists, a new collector is created and initialized with the reference timestamp. + +### Step 2: Triggering the Timer + +Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either `msg3d_timeout_sec` for msg3d or `rois_timeout_sec` for RoIs. + +The collector will attempt to fuse the collected 3D and 2D data either: + +- When both Msg3d and RoI data are available, or +- When the timer expires. + +If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion. 
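Steps 1 and 2 above can be summarized with a short sketch. This is a simplified Python illustration of the collector lookup only; the helper names and the noise-window matching rule are assumptions for the example (the real node is C++ and also manages timers, mutexes, and per-RoI bookkeeping):

```python
# Simplified sketch of Step 1: match an incoming message to a collector,
# or create a new one when no collector's reference timestamp is close enough.
class Collector:
    def __init__(self, reference_timestamp: float):
        self.reference_timestamp = reference_timestamp
        self.msgs = []  # collected msg3d / RoIs would be stored here

def find_or_create_collector(collectors, stamp, offset, noise_window):
    # Subtract the per-topic offset to obtain the reference timestamp.
    reference = stamp - offset
    for collector in collectors:
        # Reuse an existing collector whose reference timestamp is close enough.
        if abs(collector.reference_timestamp - reference) < noise_window:
            return collector
    # No match: create a new collector (Step 2 would then start its timeout timer).
    new_collector = Collector(reference)
    collectors.append(new_collector)
    return new_collector

collectors = []
c1 = find_or_create_collector(collectors, 100.059, 0.059, 0.01)  # creates a collector at 100.000
c2 = find_or_create_collector(collectors, 100.012, 0.010, 0.01)  # 100.002 matches the same collector
print(c1 is c2, len(collectors))  # True 1
```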
-current default value at autoware.universe for TIER IV Robotaxi are:
- input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0]
- match_threshold_ms: 30.0

+### Step 3: Fusion Process

-#### fusion and timer

+The fusion process consists of three main stages:
+
+1. **Preprocessing** – Preparing the input data for fusion.
+2. **Fusion** – Aligning and merging RoIs with the 3D point cloud.
+3. **Postprocessing** – Refining the fused output based on the algorithm's requirements.
+
+The specific operations performed during these stages may vary depending on the type of fusion being applied.
+
+### Step 4: Publishing the Fused Result
+
+After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.
+
+The figure below shows how the input data is fused in different scenarios.

![roi_sync_image2](./docs/images/roi_sync_2.png)

-The subscription status of the message is signed with 'O'.

+## Parameters

-1.if a pointcloud message is subscribed under the below condition:

+All of the fusion nodes share the common parameters described in the following:

-| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
-| :-----------------: | :--------: | :-------: | :-------: | :-------: |
-| subscription status | | O | O | O |

+{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}

-If the roi msgs can be matched, fuse them and postprocess the pointcloud message.
-Otherwise, fuse the matched roi msgs and cache the pointcloud.

+### Parameter Settings

-2.if a pointcloud message is subscribed under the below condition:

+#### Timeout

-| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
-| :-----------------: | :--------: | :-------: | :-------: | :-------: |
-| subscription status | | O | O | |

+The order in which `RoIs` or the `msg3d` message arrives at the fusion node depends on your system and sensor configuration.
Since the primary goal is to fuse `2D RoIs` with `msg3d` data, `msg3d` is essential for processing. -if the roi msgs can be matched, fuse them and cache the pointcloud. +If `RoIs` arrive earlier, they must wait until `msg3d` is received. You can adjust the waiting time using the `rois_timeout_sec` parameter. -3.if a pointcloud message is subscribed under the below condition: +If `msg3d` arrives first, the fusion process should proceed as quickly as possible, so the waiting time for `msg3d` (`msg3d_timeout_sec`) should be kept minimal. -| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 | -| :-----------------: | :--------: | :-------: | :-------: | :-------: | -| subscription status | O | O | O | | +#### RoIs Offsets -If the roi msg 3 is subscribed before the next pointcloud message coming or timeout, fuse it if matched, otherwise wait for the next roi msg 3. -If the roi msg 3 is not subscribed before the next pointcloud message coming or timeout, postprocess the pointcloud message as it is. +The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the `RoIs` and `msg3d`. Once this offset is known, it should be specified in the parameter `rois_timestamp_offsets`. -The timeout threshold should be set according to the postprocessing time. -E.g, if the postprocessing time is around 50ms, the timeout threshold should be set smaller than 50ms, so that the whole processing time could be less than 100ms. -current default value at autoware.universe for XX1: - timeout_ms: 50.0 +In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The `rois_timestamp_offsets` can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. 
As a result, the `rois_timestamp_offsets` would be `[0.059, 0.010, 0.026, 0.042, 0.076, 0.093]`.

-#### The `build_only` option

![lidar_camera_sync](./docs/images/lidar_camera_sync.svg)

-The `pointpainting_fusion` node has `build_only` option to build the TensorRT engine file from the ONNX file.
-Although it is preferred to move all the ROS parameters in `.param.yaml` file in Autoware Universe, the `build_only` option is not moved to the `.param.yaml` file for now, because it may be used as a flag to execute the build as a pre-task. You can execute with the following command:
+To check the header timestamps of the msg3d and RoIs messages, you can run:

 ```bash
-ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml model_name:=pointpainting model_path:=/home/autoware/autoware_data/image_projection_based_fusion model_param_path:=$(ros2 pkg prefix autoware_image_projection_based_fusion --share)/config/pointpainting.param.yaml build_only:=true
+ros2 topic echo [topic] --field header
 ```

-#### Known Limits
+#### Matching Strategies
+
+We provide two matching strategies for different scenarios:
+
+##### Naive Mode
+
+Use this mode if the concatenation node from the Autoware point cloud preprocessor is not being used. You should also set an appropriate `threshold` value to determine whether the time interval between the `msg3d` and `RoI` messages is within an acceptable range.
+If the interval is less than the match threshold, the messages are considered matched.
+![roi_sync_image1](./docs/images/roi_sync_1.png)
+
+- Example usage:
+
+  ```yaml
+  matching_strategy:
+    type: naive
+    threshold: 0.05
+  ```

-The rclcpp::TimerBase timer could not break a for loop, therefore even if time is out when fusing a roi msg at the middle, the program will run until all msgs are fused.

-### Approximate camera projection

+##### Advanced Mode
+
+If the concatenation node from the Autoware point cloud preprocessor is being used, enable this mode.
+The advanced mode parses diagnostics from the concatenation node to verify whether all point clouds have been successfully concatenated. If concatenation is incomplete, it dynamically adjusts `rois_timestamp_offsets` based on the diagnostic messages.
+Instead of using a fixed threshold, this mode requires setting two parameters:
+
+- `msg3d_noise_window` (a single value)
+- `rois_timestamp_noise_window` (a vector)
+
+These parameters enforce stricter matching between the `RoI` messages and the `msg3d` input.
+
+- Example usage:
+
+  ```yaml
+  matching_strategy:
+    type: advanced
+    msg3d_noise_window: 0.02
+    rois_timestamp_noise_window: [0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
+  ```
+
+#### Approximate camera projection

For algorithms like `pointpainting_fusion`, the computation required to project points onto an unrectified (raw) image can be substantial. To address this, an option is provided to reduce the computational load. Set the [`approximate_camera_projection`](config/fusion_common.param.yaml) parameter to `true` for each camera (ROIs). If the corresponding `point_project_to_unrectified_image` parameter is also set to `true`, the projections will be pre-cached. The cached projections are calculated using a grid, with the cell size specified by the `approximation_grid_cell_width` and `approximation_grid_cell_height` parameters in the [configuration file](config/fusion_common.param.yaml). The centers of the grid cells are used as the projected points.

-### Detail description of each fusion's algorithm is in the following links

+## Debug and Diagnostics
+
+To verify whether the node has successfully fused the msg3d and RoIs messages, you can examine rqt or the `/diagnostics` topic using the following command:
+
+```bash
+ros2 topic echo /diagnostics
+```
+
+Below is an example output when the fusion succeeds:
+
+- msg3d has a value of `True`.
+- Each RoI has a value of `True`.
+- `fusion_success` is `True`.
+- The `level` value is `\0`.
(diagnostic_msgs::msg::DiagnosticStatus::OK)
+
+```bash
+header:
+  stamp:
+    sec: 1722492015
+    nanosec: 848508777
+  frame_id: ''
+status:
+- level: "\0"
+  name: 'pointpainting: pointpainting_fusion_status'
+  message: Fused output is published and include all rois and msg3d
+  hardware_id: pointpainting_fusion_checker
+  values:
+  - key: msg3d/is_fused
+    value: 'True'
+  - key: fused_timestamp
+    value: '1738725170.860273600'
+  - key: reference_timestamp_min
+    value: '1738725170.850771904'
+  - key: reference_timestamp_max
+    value: '1738725170.870771885'
+  - key: /rois0/timestamp
+    value: '1738725170.903310537'
+  - key: /rois0/is_fused
+    value: 'True'
+  - key: /rois1/timestamp
+    value: '1738725170.934378386'
+  - key: /rois1/is_fused
+    value: 'True'
+  - key: /rois2/timestamp
+    value: '1738725170.917550087'
+  - key: /rois2/is_fused
+    value: 'True'
+  - key: fusion_success
+    value: 'True'
+```
+
+Below is an example output when the fusion fails:
+
+- msg3d has a value of `True`.
+- One or more RoIs have a value of `False`.
+- `fusion_success` is `False`.
+- The `level` value is `\x02`.
(diagnostic_msgs::msg::DiagnosticStatus::ERROR)
+
+```bash
+header:
+  stamp:
+    sec: 1722492015
+    nanosec: 848508777
+  frame_id: ''
+status:
+- level: "\x02"
+  name: 'pointpainting: pointpainting_fusion_status'
+  message: Fused output msg is published but misses some ROIs
+  hardware_id: pointpainting_fusion_checker
+  values:
+  - key: msg3d/is_fused
+    value: 'True'
+  - key: fused_timestamp
+    value: '1738725170.860273600'
+  - key: reference_timestamp_min
+    value: '1738725170.850771904'
+  - key: reference_timestamp_max
+    value: '1738725170.870771885'
+  - key: /rois0/is_fused
+    value: 'False'
+  - key: /rois1/timestamp
+    value: '1738725170.934378386'
+  - key: /rois1/is_fused
+    value: 'True'
+  - key: /rois2/timestamp
+    value: '1738725170.917550087'
+  - key: /rois2/is_fused
+    value: 'True'
+  - key: fusion_success
+    value: 'False'
+```
+
+## The `build_only` option

-| Fusion Name | Description | Detail |
-| -------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------------- |
-| roi_cluster_fusion | Overwrite a classification label of clusters by that of ROIs from a 2D object detector. | [link](./docs/roi-cluster-fusion.md) |
-| roi_detected_object_fusion | Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. | [link](./docs/roi-detected-object-fusion.md) |
-| pointpainting_fusion | Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. | [link](./docs/pointpainting-fusion.md) |
-| roi_pointcloud_fusion | Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects | [link](./docs/roi-pointcloud-fusion.md) |
+The `pointpainting_fusion` node has a `build_only` option to build the TensorRT engine file from the ONNX file.
+Although it is preferred to move all the ROS parameters into the `.param.yaml` file in Autoware Universe, the `build_only` option is not moved to the `.param.yaml` file for now, because it may be used as a flag to execute the build as a pre-task. You can execute the build with the following command:
+
+```bash
+ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml model_name:=pointpainting model_path:=/home/autoware/autoware_data/image_projection_based_fusion model_param_path:=$(ros2 pkg prefix autoware_image_projection_based_fusion --share)/config/pointpainting.param.yaml build_only:=true
+```
diff --git a/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml b/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml
index 86db3bad4f8f8..15d1ee7581a9e 100644
--- a/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml
+++ b/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml
@@ -1,12 +1,15 @@
 /**:
   ros__parameters:
-    input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0]
-    timeout_ms: 70.0
-    match_threshold_ms: 50.0
+    # Example of how to set offset parameters (check the readme for more information); both the lidar and the cameras run at 10 Hz.
+ # msg3d's header timestamp: 0.000 + # rois's header timestamp (camera 0-5): 0.059, 0.010, 0.026, 0.042, 0.076, 0.093 + # offset = rois_timestamp- msg3d_timestamp + rois_timestamp_offsets: [0.059, 0.010, 0.026, 0.042, 0.076, 0.093] + rois_timeout_sec: 0.5 + msg3d_timeout_sec: 0.05 image_buffer_size: 15 # projection setting for each ROI whether unrectify image point_project_to_unrectified_image: [false, false, false, false, false, false] - debug_mode: false filter_scope_min_x: -100.0 filter_scope_min_y: -100.0 filter_scope_min_z: -100.0 @@ -21,4 +24,16 @@ approximation_grid_cell_height: 1.0 # debug parameters + debug_mode: false + collector_debug_mode: false publish_processing_time_detail: false + + publish_previous_but_late_output_msg: false + rosbag_length: 10.0 + # matching strategy + matching_strategy: + type: advanced + msg3d_noise_window: 0.02 + rois_timestamp_noise_window: [0.01, 0.01, 0.01, 0.01, 0.01, 0.01] + # type: naive + # threshold: 0.05 diff --git a/perception/autoware_image_projection_based_fusion/docs/images/fusion_algorithm.drawio.svg b/perception/autoware_image_projection_based_fusion/docs/images/fusion_algorithm.drawio.svg new file mode 100644 index 0000000000000..f16b1396b4205 --- /dev/null +++ b/perception/autoware_image_projection_based_fusion/docs/images/fusion_algorithm.drawio.svg @@ -0,0 +1,1341 @@ + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+ Collector +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ RoIs1 +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ RoIs2 +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ RoIs1 +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + +
+
+
+ 0 +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ RoIs0's stamp +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ RoIs1's stamp +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+ 2 +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + + +
+
+
+ arrival time +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + + + +
+
+
+ compare [stamp - offset] with the +
reference timestamp.
+
If match, add to the collector
+
+ if it doesn't match, create new + collector +
+
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ + 2D + + RoIs1 +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ 2D RoIs2 +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ timer +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ add to the collector +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + + +
+
+
+ create a collector +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ set [stamp - offset] as +
reference timestamp
+
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ trigger the timer +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ When timer count to zero +
+ fusion + publish +
+
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ When group has +
+ RoIs 1, 2, 3 and + Msg3D +
+
+ + fusion + publish + +
+
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + + +
+
+
+ 3D msg +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ 2D RoIs0 +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + +
+
+
+ 1 +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ Msg3D +
+
+
+
+ +
+
+
+
+ + + + + + + + +
+
+
+ RoIs2's stamp +
+
+
+
+ +
+
+
+
+ + + + + + + + + +
+
+
+ reference timestamp +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + +
+
+
+ msg +
+
+
+
+ +
+
+
+
+ + + + + + + + + + + + + +
+
+
+ msg's stamp +
+
+
+
+ +
+
+
+
+ + + + + + +
+
+
+
diff --git a/perception/autoware_image_projection_based_fusion/docs/images/lidar_camera_sync.svg b/perception/autoware_image_projection_based_fusion/docs/images/lidar_camera_sync.svg new file mode 100644 index 0000000000000..7c6c91d709f61 --- /dev/null +++ b/perception/autoware_image_projection_based_fusion/docs/images/lidar_camera_sync.svg @@ -0,0 +1,558 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+ scan +
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ Lidar +
+ header time: 0.000 +
+
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ camera 1 +
+ header time: 0.010 +
+
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ camera 2 +
+ header time: 0.026 +
+
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ camera 3 +
+ header time: 0.042 +
+
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ camera 0 +
+ header time: 0.059 +
+
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ camera 5 +
+ header time: 0.093 +
+
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ camera 4 +
+ header time: 0.076 +
+
+
+
+
+ +
+
+
+ + + + + + + + + + + + + + + + + +
+
+
Vehicle direction
+
+
+
+ +
+
+
+ + + + + + + + + + + + + + + +
+
+
+ trigger +
+
+
+
+ +
+
+
+ + + + + + + + + + + + + + + + + + + + + + + +
+
+
+ trigger +
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ trigger +
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ trigger +
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ trigger +
+
+
+
+ +
+
+
+ + + + + + + +
+
+
+ trigger +
+
+
+
+ +
+
+
+
+
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_collector.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_collector.hpp new file mode 100644 index 0000000000000..a75ee786f276b --- /dev/null +++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_collector.hpp @@ -0,0 +1,115 @@ +// Copyright 2025 TIER IV, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +#pragma once + +#include "autoware/image_projection_based_fusion/camera_projection.hpp" + +#include + +#include +#include +#include +#include +#include + +namespace autoware::image_projection_based_fusion +{ +using autoware::image_projection_based_fusion::CameraProjection; + +template +class FusionNode; + +template +struct Det2dStatus +{ + // camera index + std::size_t id{0}; + // camera projection + std::shared_ptr camera_projector_ptr{nullptr}; + bool project_to_unrectified_image{false}; + bool approximate_camera_projection{false}; +}; + +struct FusionCollectorInfoBase +{ + virtual ~FusionCollectorInfoBase() = default; +}; + +struct NaiveCollectorInfo : public FusionCollectorInfoBase +{ + double timestamp; + double threshold; + + explicit NaiveCollectorInfo(double timestamp = 0.0, double threshold = 0.0) + : timestamp(timestamp), threshold(threshold) + { + } +}; + +struct AdvancedCollectorInfo : public FusionCollectorInfoBase +{ + double timestamp; + double noise_window; + + explicit AdvancedCollectorInfo(double timestamp = 0.0, double noise_window = 0.0) + : timestamp(timestamp), noise_window(noise_window) + { + } +}; + +enum class CollectorStatus { Idle, Processing, Finished }; + +template +class FusionCollector +{ +public: + FusionCollector( + std::shared_ptr> && ros2_parent_node, + std::size_t rois_number, const std::vector> & det2d_status_list, + bool debug_mode); + void process_msg3d(const typename Msg3D::ConstSharedPtr msg3d, double msg3d_timeout); + void process_rois( + const std::size_t & rois_id, const typename Msg2D::ConstSharedPtr rois_msg, + double rois_timeout); + void fusion_callback(); + + [[nodiscard]] CollectorStatus get_status(); + + void set_info(std::shared_ptr collector_info); + [[nodiscard]] std::shared_ptr get_info() const; + bool ready_to_fuse(); + bool rois_exists(const std::size_t & rois_id); + bool msg3d_exists(); + void add_camera_projection( + std::size_t rois_id, std::shared_ptr camera_projector_ptr); + void set_period(const 
std::chrono::nanoseconds period); + void reset(); + void show_debug_message(); + +private: + std::shared_ptr> ros2_parent_node_; + rclcpp::TimerBase::SharedPtr timer_; + std::size_t rois_number_; + typename Msg3D::ConstSharedPtr msg3d_{nullptr}; + std::vector> det2d_status_list_; + std::unordered_map id_to_rois_map_; + bool is_first_msg3d_{false}; + bool debug_mode_; + std::mutex fusion_mutex_; + std::shared_ptr fusion_collector_info_; + CollectorStatus status_; +}; + +} // namespace autoware::image_projection_based_fusion diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_matching_strategy.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_matching_strategy.hpp new file mode 100644 index 0000000000000..d53d3e1308774 --- /dev/null +++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_matching_strategy.hpp @@ -0,0 +1,145 @@ +// Copyright 2025 TIER IV, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +#pragma once + +#include "autoware/image_projection_based_fusion/fusion_collector.hpp" + +#include + +#include +#include +#include +#include +#include +#include +#include + +namespace autoware::image_projection_based_fusion +{ + +template +class FusionNode; + +struct MatchingContextBase +{ + virtual ~MatchingContextBase() = default; +}; + +struct Msg3dMatchingContext : public MatchingContextBase +{ + double msg3d_timestamp; + + explicit Msg3dMatchingContext(double msg3d_timestamp = 0.0) : msg3d_timestamp(msg3d_timestamp) {} +}; + +struct RoisMatchingContext : public MatchingContextBase +{ + double rois_timestamp; + std::size_t rois_id; + + explicit RoisMatchingContext(double rois_timestamp = 0.0, std::size_t rois_id = 0) + : rois_timestamp(rois_timestamp), rois_id(rois_id) + { + } +}; + +template +class FusionMatchingStrategy +{ +public: + virtual ~FusionMatchingStrategy() = default; + + [[nodiscard]] virtual std::optional>> + match_rois_to_collector( + const std::list>> & fusion_collectors, + const std::shared_ptr & matching_context) const = 0; + + [[nodiscard]] virtual std::optional>> + match_msg3d_to_collector( + const std::list>> & fusion_collectors, + const std::shared_ptr & matching_context) = 0; + virtual void set_collector_info( + std::shared_ptr> & collector, + const std::shared_ptr & matching_context) = 0; +}; + +template +class NaiveMatchingStrategy : public FusionMatchingStrategy +{ +public: + explicit NaiveMatchingStrategy( + std::shared_ptr> && ros2_parent_node, + const std::unordered_map & id_to_offset_map); + + [[nodiscard]] std::optional>> + match_rois_to_collector( + const std::list>> & fusion_collectors, + const std::shared_ptr & matching_context) const override; + + [[nodiscard]] std::optional>> + match_msg3d_to_collector( + const std::list>> & fusion_collectors, + const std::shared_ptr & matching_context) override; + + void set_collector_info( + std::shared_ptr> & collector, + const std::shared_ptr & matching_context) override; + +private: + 
std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> ros2_parent_node_;
+  std::unordered_map<std::size_t, double> id_to_offset_map_;
+  double threshold_;
+};
+
+template <class Msg3D, class Msg2D, class ExportObj>
+class AdvancedMatchingStrategy : public FusionMatchingStrategy<Msg3D, Msg2D, ExportObj>
+{
+public:
+  explicit AdvancedMatchingStrategy(
+    std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> && ros2_parent_node,
+    const std::unordered_map<std::size_t, double> & id_to_offset_map);
+
+  [[nodiscard]] std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_rois_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<RoisMatchingContext> & matching_context) const override;
+
+  [[nodiscard]] std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_msg3d_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<Msg3dMatchingContext> & matching_context) override;
+  void set_collector_info(
+    std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> & collector,
+    const std::shared_ptr<MatchingContextBase> & matching_context) override;
+
+  double get_concatenated_offset(
+    const double & msg3d_timestamp,
+    const std::optional<std::unordered_map<std::string, std::string>> & concatenated_status);
+
+  double extract_fractional(double timestamp);
+  void update_fractional_timestamp_set(double new_timestamp);
+  double compute_offset(double input_timestamp);
+
+private:
+  std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> ros2_parent_node_;
+  std::unordered_map<std::size_t, double> id_to_offset_map_;
+  std::unordered_map<std::size_t, double> id_to_noise_window_map_;
+  double msg3d_noise_window_;
+  std::set<double> fractional_timestamp_set_;  // Use set to store unique fractional timestamps
+  int success_status_counter_{0};
+  static constexpr int success_threshold{100};
+  bool database_created_{false};
+};
+}  // namespace autoware::image_projection_based_fusion
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp
index 5b889ac9a3e8b..eb900faa4d9c3 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp
@@ -1,4 +1,4 @@
-// Copyright 2022 TIER IV, Inc.
+// Copyright 2025 TIER IV, Inc.
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
 // you may not use this file except in compliance with the License.
@@ -12,17 +12,23 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-#ifndef AUTOWARE__IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_
-#define AUTOWARE__IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_
+#pragma once
+
+#include "autoware/image_projection_based_fusion/fusion_collector.hpp"
+#include "autoware/image_projection_based_fusion/fusion_matching_strategy.hpp"
+#include "autoware/image_projection_based_fusion/fusion_types.hpp"

 #include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -38,44 +44,16 @@
 #include
 #include

-#include
+#include
+#include
 #include
-#include
+#include
 #include
-#include
+#include
 #include

 namespace autoware::image_projection_based_fusion
{
-using autoware_perception_msgs::msg::DetectedObject;
-using autoware_perception_msgs::msg::DetectedObjects;
-using sensor_msgs::msg::CameraInfo;
-using sensor_msgs::msg::Image;
-using PointCloudMsgType = sensor_msgs::msg::PointCloud2;
-using RoiMsgType = tier4_perception_msgs::msg::DetectedObjectsWithFeature;
-using ClusterMsgType = tier4_perception_msgs::msg::DetectedObjectsWithFeature;
-using ClusterObjType = tier4_perception_msgs::msg::DetectedObjectWithFeature;
-using tier4_perception_msgs::msg::DetectedObjectWithFeature;
-using PointCloud = pcl::PointCloud<pcl::PointXYZ>;
-using autoware::image_projection_based_fusion::CameraProjection;
-using autoware_perception_msgs::msg::ObjectClassification;
-
-template <class Msg2D>
-struct Det2dStatus
-{
-  // camera index
-  std::size_t id = 0;
-  // camera projection
-  std::unique_ptr<CameraProjection> camera_projector_ptr;
-  bool project_to_unrectified_image = false;
-  bool approximate_camera_projection = false;
-  // process flags
-  bool is_fused = false;
-  // timing
-  double input_offset_ms = 0.0;
-  // cache
-  std::map<int64_t, typename Msg2D::ConstSharedPtr> cached_det2d_msgs;
-};

 template <class Msg3D, class Msg2D, class ExportObj>
 class FusionNode : public rclcpp::Node
@@ -89,57 +67,83 @@ class FusionNode : public rclcpp::Node
   explicit FusionNode(
     const std::string & node_name, const rclcpp::NodeOptions & options, int queue_size);

+  virtual void preprocess(Msg3D & output_msg);
+  virtual void fuse_on_single_image(
+    const Msg3D & input_msg3d, const Det2dStatus<Msg2D> & det2d_status,
+    const Msg2D & input_rois_msg, Msg3D & output_msg) = 0;
+  void export_process(
+    typename Msg3D::SharedPtr & output_det3d_msg,
+    std::unordered_map<std::size_t, double> id_to_stamp_map,
+    std::shared_ptr<FusionCollectorInfoBase> collector_info);
+  std::optional<std::unordered_map<std::string, std::string>> find_concatenation_status(
+    double timestamp);
+  void show_diagnostic_message(
+    std::unordered_map<std::size_t, double> id_to_stamp_map,
+    std::shared_ptr<FusionCollectorInfoBase> collector_info);
+
 private:
   // Common process methods
-  void cameraInfoCallback(
+  void camera_info_callback(
     const sensor_msgs::msg::CameraInfo::ConstSharedPtr input_camera_info_msg,
-    const std::size_t camera_id);
-
-  // callback for timer
-  void timer_callback();
-  void setPeriod(const int64_t new_period);
-  void exportProcess();
-
-  // 2d detection management methods
-  bool checkAllDet2dFused()
-  {
-    for (const auto & det2d : det2d_list_) {
-      if (!det2d.is_fused) {
-        return false;
-      }
-    }
-    return true;
-  }
+    const std::size_t rois_id);
+
+  void initialize_strategy();
+  void initialize_collector_list();
+  void manage_collector_list();
+  void manage_concatenated_status_map(double current_timestamp);
+
+  static std::string format_timestamp(double timestamp);
+  void check_fusion_status(diagnostic_updater::DiagnosticStatusWrapper & stat);

   // camera projection
   float approx_grid_cell_w_size_;
   float approx_grid_cell_h_size_;

+  bool debug_mode_{false};
+  bool collector_debug_mode_{false};
+
+  std::size_t rois_number_;

   // timer
-  rclcpp::TimerBase::SharedPtr timer_;
-  double timeout_ms_{};
-  double match_threshold_ms_{};
+  double msg3d_timeout_sec_{};
+  double rois_timeout_sec_{};
   std::vector<typename rclcpp::Subscription<Msg2D>::SharedPtr> rois_subs_;
   std::vector<rclcpp::Subscription<sensor_msgs::msg::CameraInfo>::SharedPtr> camera_info_subs_;

-  // cache for fusion
-  int64_t cached_det3d_msg_timestamp_;
-  typename Msg3D::SharedPtr cached_det3d_msg_ptr_;
+  std::unique_ptr<FusionMatchingStrategy<Msg3D, Msg2D, ExportObj>> fusion_matching_strategy_;
+  std::mutex fusion_collectors_mutex_;
+  std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> fusion_collectors_;
+
+  std::unordered_map<std::size_t, double> id_to_offset_map_;
+
+  // timestamp: (key, value)
+  std::unordered_map<double, std::unordered_map<std::string, std::string>> concatenated_status_map_;
+
+  diagnostic_updater::Updater diagnostic_updater_{this};
+  std::shared_ptr<FusionCollectorInfoBase> diagnostic_collector_info_;
+  std::unordered_map<std::size_t, double> diagnostic_id_to_stamp_map_;
+
+  double current_output_msg_timestamp_{0.0};
+  double latest_output_msg_timestamp_{0.0};
+  double rosbag_length_{10.0};
+  bool publish_previous_but_late_output_msg_{false};
+  bool drop_previous_but_late_output_msg_{false};
+  bool publish_output_msg_{false};
+  bool msg3d_fused_{true};
+  static constexpr const int num_of_collectors = 10;
+  bool init_collector_list_{false};

 protected:
-  void setDet2DStatus(std::size_t rois_number);
+  void initialize_det2d_status(std::size_t rois_number);

   // callback for main subscription
-  void subCallback(const typename Msg3D::ConstSharedPtr input_msg);
-  // callback for roi subscription
-  void roiCallback(const typename Msg2D::ConstSharedPtr input_roi_msg, const std::size_t roi_i);
+  void sub_callback(const typename Msg3D::ConstSharedPtr msg3d);
+  // callback for rois subscription
+  void rois_callback(const typename Msg2D::ConstSharedPtr rois_msg, const std::size_t rois_id);
+
+  void diagnostic_callback(const diagnostic_msgs::msg::DiagnosticArray::SharedPtr diagnostic_msg);

   // Custom process methods
-  virtual void preprocess(Msg3D & output_msg);
-  virtual void fuseOnSingleImage(
-    const Msg3D & input_msg, const Det2dStatus<Msg2D> & det2d, const Msg2D & input_roi_msg,
-    Msg3D & output_msg) = 0;
   virtual void postprocess(const Msg3D & processing_msg, ExportObj & output_msg);
   virtual void publish(const ExportObj & output_msg);
@@ -148,10 +152,11 @@ class FusionNode : public rclcpp::Node
   tf2_ros::TransformListener tf_listener_;

   // 2d detection management
-  std::vector<Det2dStatus<Msg2D>> det2d_list_;
+  std::vector<Det2dStatus<Msg2D>> det2d_status_list_;

   // 3d detection subscription
-  typename rclcpp::Subscription<Msg3D>::SharedPtr det3d_sub_;
+  typename rclcpp::Subscription<Msg3D>::SharedPtr msg3d_sub_;
+  rclcpp::Subscription<diagnostic_msgs::msg::DiagnosticArray>::SharedPtr sub_diag_;

   // parameters for out_of_scope filter
   float filter_scope_min_x_;
@@ -161,6 +166,8 @@ class FusionNode : public rclcpp::Node
   float filter_scope_min_z_;
   float filter_scope_max_z_;

+  std::string matching_strategy_;
+
   // output publisher
   typename rclcpp::Publisher<ExportObj>::SharedPtr pub_ptr_;
@@ -179,5 +186,3 @@ class FusionNode : public rclcpp::Node
 };

 }  // namespace autoware::image_projection_based_fusion
-
-#endif  // AUTOWARE__IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_types.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_types.hpp
new file mode 100644
index 0000000000000..e43e38061dee5
--- /dev/null
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_types.hpp
@@ -0,0 +1,38 @@
+// Copyright 2025 TIER IV, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+namespace autoware::image_projection_based_fusion
+{
+using autoware_perception_msgs::msg::DetectedObject;
+using autoware_perception_msgs::msg::DetectedObjects;
+using sensor_msgs::msg::CameraInfo;
+using sensor_msgs::msg::Image;
+using PointCloudMsgType = sensor_msgs::msg::PointCloud2;
+using RoiMsgType = tier4_perception_msgs::msg::DetectedObjectsWithFeature;
+using ClusterMsgType = tier4_perception_msgs::msg::DetectedObjectsWithFeature;
+using ClusterObjType = tier4_perception_msgs::msg::DetectedObjectWithFeature;
+using tier4_perception_msgs::msg::DetectedObjectWithFeature;
+using PointCloud = pcl::PointCloud<pcl::PointXYZ>;
+using autoware_perception_msgs::msg::ObjectClassification;
+}  // namespace autoware::image_projection_based_fusion
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/pointpainting_fusion/node.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/pointpainting_fusion/node.hpp
index 32293d9b85c75..6deaddec7cd7a 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/pointpainting_fusion/node.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/pointpainting_fusion/node.hpp
@@ -50,9 +50,9 @@ class PointPaintingFusionNode : public FusionNode<PointCloudMsgType, RoiMsgType, DetectedObjects>
-  void fuseOnSingleImage(
-    const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus<RoiMsgType> & det2d,
-    const RoiMsgType & input_roi_msg, PointCloudMsgType & painted_pointcloud_msg) override;
+  void fuse_on_single_image(
+    const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus<RoiMsgType> & det2d_status,
+    const RoiMsgType & input_rois_msg, PointCloudMsgType & painted_pointcloud_msg) override;

   void postprocess(
     const PointCloudMsgType & painted_pointcloud_msg, DetectedObjects & output_msg) override;
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_cluster_fusion/node.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_cluster_fusion/node.hpp
index b7bf8738b68a4..31685cbc34c29 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_cluster_fusion/node.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_cluster_fusion/node.hpp
@@ -32,9 +32,9 @@ class RoiClusterFusionNode : public FusionNode<ClusterMsgType, RoiMsgType, ClusterMsgType>
-  void fuseOnSingleImage(
-    const ClusterMsgType & input_cluster_msg, const Det2dStatus<RoiMsgType> & det2d,
-    const RoiMsgType & input_roi_msg, ClusterMsgType & output_cluster_msg) override;
+  void fuse_on_single_image(
+    const ClusterMsgType & input_cluster_msg, const Det2dStatus<RoiMsgType> & det2d_status,
+    const RoiMsgType & input_rois_msg, ClusterMsgType & output_cluster_msg) override;

   void postprocess(const ClusterMsgType & output_cluster_msg, ClusterMsgType & output_msg) override;
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_detected_object_fusion/node.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_detected_object_fusion/node.hpp
index ffe7e176dfa27..f51f5737b937c 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_detected_object_fusion/node.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_detected_object_fusion/node.hpp
@@ -39,12 +39,12 @@ class RoiDetectedObjectFusionNode : public FusionNode<DetectedObjects, RoiMsgType, DetectedObjects>
-  void fuseOnSingleImage(
-    const DetectedObjects & input_object_msg, const Det2dStatus<RoiMsgType> & det2d,
-    const RoiMsgType & input_roi_msg, DetectedObjects & output_object_msg) override;
+  void fuse_on_single_image(
+    const DetectedObjects & input_object_msg, const Det2dStatus<RoiMsgType> & det2d_status,
+    const RoiMsgType & input_rois_msg, DetectedObjects & output_object_msg) override;

   std::map<std::size_t, sensor_msgs::msg::RegionOfInterest> generateDetectedObjectRoIs(
-    const DetectedObjects & input_object_msg, const Det2dStatus<RoiMsgType> & det2d,
+    const DetectedObjects & input_object_msg, const Det2dStatus<RoiMsgType> & det2d_status,
     const Eigen::Affine3d & object2camera_affine);

   void fuseObjectsOnImage(
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_pointcloud_fusion/node.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_pointcloud_fusion/node.hpp
index 77a1745faa7e5..da6c1a0716539 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_pointcloud_fusion/node.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/roi_pointcloud_fusion/node.hpp
@@ -34,9 +34,9 @@ class RoiPointCloudFusionNode : public FusionNode<PointCloudMsgType, RoiMsgType, ClusterMsgType>
   rclcpp::Publisher<PointCloudMsgType>::SharedPtr point_pub_ptr_;
   rclcpp::Publisher<ClusterMsgType>::SharedPtr cluster_debug_pub_;

-  void fuseOnSingleImage(
-    const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus<RoiMsgType> & det2d,
-    const RoiMsgType & input_roi_msg, PointCloudMsgType & output_pointcloud_msg) override;
+  void fuse_on_single_image(
+    const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus<RoiMsgType> & det2d_status,
+    const RoiMsgType & input_rois_msg, PointCloudMsgType & output_pointcloud_msg) override;
   void postprocess(const PointCloudMsgType & pointcloud_msg, ClusterMsgType & output_msg) override;
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/segmentation_pointcloud_fusion/node.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/segmentation_pointcloud_fusion/node.hpp
index 5414f98e142cd..845cba88f3e78 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/segmentation_pointcloud_fusion/node.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/segmentation_pointcloud_fusion/node.hpp
@@ -42,8 +42,8 @@ class SegmentPointCloudFusionNode : public FusionNode<PointCloudMsgType, Image, PointCloudMsgType>
-  void fuseOnSingleImage(
-    const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus<Image> & det2d,
+  void fuse_on_single_image(
+    const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus<Image> & det2d_status,
     const Image & input_mask, PointCloudMsgType & output_pointcloud_msg) override;

   inline void copyPointCloud(
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/utils/utils.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/utils/utils.hpp
index 1555e0123a368..eaff188daa555 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/utils/utils.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/utils/utils.hpp
@@ -63,7 +63,7 @@ struct PointData
   size_t orig_index;
 };

-bool checkCameraInfo(const sensor_msgs::msg::CameraInfo & camera_info);
+bool check_camera_info(const sensor_msgs::msg::CameraInfo & camera_info);

 std::optional<geometry_msgs::msg::TransformStamped> getTransformStamped(
   const tf2_ros::Buffer & tf_buffer, const std::string & target_frame_id,
diff --git a/perception/autoware_image_projection_based_fusion/launch/roi_detected_object_fusion.launch.xml b/perception/autoware_image_projection_based_fusion/launch/roi_detected_object_fusion.launch.xml
index 3a2a74e84ec1e..db463e4fb367a 100644
--- a/perception/autoware_image_projection_based_fusion/launch/roi_detected_object_fusion.launch.xml
+++ b/perception/autoware_image_projection_based_fusion/launch/roi_detected_object_fusion.launch.xml
@@ -1,5 +1,5 @@
-
+
diff --git a/perception/autoware_image_projection_based_fusion/package.xml b/perception/autoware_image_projection_based_fusion/package.xml
index c920d9e23ff10..f5d3606819795 100644
--- a/perception/autoware_image_projection_based_fusion/package.xml
+++ b/perception/autoware_image_projection_based_fusion/package.xml
@@ -23,6 +23,8 @@
   <depend>autoware_point_types</depend>
   <depend>autoware_utils</depend>
   <depend>cv_bridge</depend>
+  <depend>diagnostic_msgs</depend>
+  <depend>diagnostic_updater</depend>
   <depend>image_geometry</depend>
   <depend>image_transport</depend>
   <depend>message_filters</depend>
diff --git a/perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json b/perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json
index 8077f3e2f3e30..2a1966ee1bb57 100644
--- a/perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json
+++ b/perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json
@@ -6,24 +6,26 @@
     "fusion_common": {
       "type": "object",
       "properties": {
-        "input_offset_ms": {
+        "rois_timestamp_offsets": {
           "type": "array",
-          "description": "An array of timestamp offsets for each camera [ms].",
-          "default": [61.67, 111.67, 45.0, 28.33, 78.33, 95.0]
+          "items": {
+            "type": "number"
+          },
+          "description": "The timestamp offset between each RoIs message and the msg3d, in seconds.",
+          "minItems": 2,
+          "default": [0.059, 0.01, 0.026, 0.042, 0.076, 0.093]
         },
-        "timeout_ms": {
+        "rois_timeout_sec": {
           "type": "number",
-          "description": "A timeout value can be assigned within a single frame [ms].",
-          "default": 70.0,
-          "minimum": 1.0,
-          "maximum": 100.0
+          "description": "Timer timeout duration in seconds when the collector is created by a RoIs msg.",
+          "default": 0.5,
+          "minimum": 0.001
         },
-        "match_threshold_ms": {
+        "msg3d_timeout_sec": {
           "type": "number",
-          "description": "A maximum threshold value to synchronize RoIs from multiple cameras [ms].",
-          "default": 50.0,
-          "minimum": 0.0,
-          "maximum": 100.0
+          "description": "Timer timeout duration in seconds when the collector receives the msg3d.",
+          "default": 0.05,
+          "minimum": 0.001
         },
         "image_buffer_size": {
           "type": "integer",
@@ -36,11 +38,6 @@
           "description": "An array of options indicating whether to project point to unrectified image or not.",
           "default": [false, false, false, false, false, false]
         },
-        "debug_mode": {
-          "type": "boolean",
-          "description": "Whether to debug or not.",
-          "default": false
-        },
         "filter_scope_min_x": {
           "type": "number",
           "description": "Minimum x position to be considered for debug [m].",
@@ -85,6 +82,86 @@
           "type": "number",
           "description": "The height of grid cell used in approximated projection [pixel].",
           "default": 1.0
+        },
+        "debug_mode": {
+          "type": "boolean",
+          "description": "Flag to enable or disable debug message output.",
+          "default": false
+        },
+        "collector_debug_mode": {
+          "type": "boolean",
+          "description": "Flag to enable or disable the collector's debug message output.",
+          "default": false
+        },
+        "publish_processing_time_detail": {
+          "type": "boolean",
+          "description": "Flag to publish a detailed message for processing time.",
+          "default": false
+        },
+        "publish_previous_but_late_output_msg": {
+          "type": "boolean",
+          "description": "Flag to indicate whether the current fusion result should be published if its timestamp is earlier than that of the previously published fusion result.",
+          "default": false
+        },
+        "rosbag_length": {
+          "type": "number",
+          "description": "This value determines whether the rosbag has restarted from the beginning. It should be set smaller than the actual length of the bag.",
+          "default": 10.0
+        },
+        "matching_strategy": {
+          "type": "object",
+          "properties": {
+            "type": {
+              "type": "string",
+              "enum": ["naive", "advanced"],
+              "default": "advanced",
+              "description": "Set it to `advanced` to use more accurate but more complicated logic for matching LiDAR and camera data; otherwise, set it to `naive`."
+            },
+            "threshold": {
+              "type": "number",
+              "description": "A maximum threshold value, in seconds, to synchronize RoIs from multiple cameras.",
+              "default": 0.05,
+              "minimum": 0.0,
+              "maximum": 0.1
+            },
+            "msg3d_noise_window": {
+              "type": "number",
+              "description": "msg3d noise window in seconds.",
+              "default": 0.001,
+              "minimum": 0.0
+            },
+            "rois_timestamp_noise_window": {
+              "type": "array",
+              "items": {
+                "type": "number",
+                "minimum": 0.0
+              },
+              "default": [0.005, 0.005, 0.005, 0.005, 0.005, 0.005],
+              "minItems": 2,
+              "description": "List of camera timestamp noise windows in seconds. The noise values should be specified in the same order as the input_topics."
+            }
+          },
+          "required": ["type"],
+          "dependencies": {
+            "type": {
+              "oneOf": [
+                {
+                  "properties": { "type": { "const": "naive" } },
+                  "required": ["threshold"],
+                  "not": {
+                    "required": ["msg3d_noise_window", "rois_timestamp_noise_window"]
+                  }
+                },
+                {
+                  "properties": { "type": { "const": "advanced" } },
+                  "required": ["msg3d_noise_window", "rois_timestamp_noise_window"],
+                  "not": {
+                    "required": ["threshold"]
+                  }
+                }
+              ]
+            }
+          }
         }
       }
     }
diff --git a/perception/autoware_image_projection_based_fusion/src/fusion_collector.cpp b/perception/autoware_image_projection_based_fusion/src/fusion_collector.cpp
new file mode 100644
index 0000000000000..249f8941e2ca1
--- /dev/null
+++ b/perception/autoware_image_projection_based_fusion/src/fusion_collector.cpp
@@ -0,0 +1,306 @@
+// Copyright 2025 TIER IV, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#include "autoware/image_projection_based_fusion/fusion_collector.hpp"
+
+#include "autoware/image_projection_based_fusion/fusion_node.hpp"
+#include "autoware/image_projection_based_fusion/fusion_types.hpp"
+
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+namespace autoware::image_projection_based_fusion
+{
+
+template <class Msg3D, class Msg2D, class ExportObj>
+FusionCollector<Msg3D, Msg2D, ExportObj>::FusionCollector(
+  std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> && ros2_parent_node, std::size_t rois_number,
+  const std::vector<Det2dStatus<Msg2D>> & det2d_status_list, bool debug_mode)
+: ros2_parent_node_(std::move(ros2_parent_node)),
+  rois_number_(rois_number),
+  det2d_status_list_(det2d_status_list),
+  debug_mode_(debug_mode)
+{
+  status_ = CollectorStatus::Idle;
+
+  auto init_timeout_sec = 1.0;  // This will be overwritten when input comes
+  const auto period_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
+    std::chrono::duration<double>(init_timeout_sec));
+
+  timer_ =
+    rclcpp::create_timer(ros2_parent_node_, ros2_parent_node_->get_clock(), period_ns, [this]() {
+      std::lock_guard<std::mutex> fusion_lock(fusion_mutex_);
+      if (status_ == CollectorStatus::Finished) return;
+      fusion_callback();
+    });
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::set_info(
+  std::shared_ptr<FusionCollectorInfoBase> fusion_collector_info)
+{
+  fusion_collector_info_ = std::move(fusion_collector_info);
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+std::shared_ptr<FusionCollectorInfoBase> FusionCollector<Msg3D, Msg2D, ExportObj>::get_info() const
+{
+  return fusion_collector_info_;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::process_msg3d(
+  const typename Msg3D::ConstSharedPtr msg3d, double msg3d_timeout)
+{
+  std::lock_guard<std::mutex> fusion_lock(fusion_mutex_);
+
+  if (status_ == CollectorStatus::Idle) {
+    // Add msg3d to the collector, restart the timer
+    status_ = CollectorStatus::Processing;
+    is_first_msg3d_ = true;
+    const auto period_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
+      std::chrono::duration<double>(msg3d_timeout));
+    set_period(period_ns);
+    timer_->reset();
+  } else if (status_ == CollectorStatus::Processing) {
+    if (msg3d_ != nullptr) {
+      RCLCPP_WARN_STREAM_THROTTLE(
+        ros2_parent_node_->get_logger(), *ros2_parent_node_->get_clock(),
+        std::chrono::milliseconds(10000).count(),
+        "Msg3d already exists in the collector. Check the timestamp of the msg3d.");
+    }
+
+    if (!is_first_msg3d_) {
+      const auto period_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
+        std::chrono::duration<double>(msg3d_timeout));
+      set_period(period_ns);
+      timer_->reset();
+    }
+  }
+
+  msg3d_ = msg3d;
+  if (ready_to_fuse()) {
+    fusion_callback();
+  }
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::process_rois(
+  const std::size_t & rois_id, const typename Msg2D::ConstSharedPtr rois_msg, double rois_timeout)
+{
+  std::lock_guard<std::mutex> fusion_lock(fusion_mutex_);
+
+  if (status_ == CollectorStatus::Idle) {
+    // Add rois_msg to the collector, restart the timer
+    status_ = CollectorStatus::Processing;
+    is_first_msg3d_ = false;
+    const auto period_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
+      std::chrono::duration<double>(rois_timeout));
+    set_period(period_ns);
+    timer_->reset();
+  } else if (status_ == CollectorStatus::Processing) {
+    if (id_to_rois_map_.find(rois_id) != id_to_rois_map_.end()) {
+      RCLCPP_WARN_STREAM_THROTTLE(
+        ros2_parent_node_->get_logger(), *ros2_parent_node_->get_clock(),
+        std::chrono::milliseconds(10000).count(),
+        "ROIs '" << rois_id
+                 << "' already exists in the collector. Check the timestamp of the rois.");
+    }
+  }
+
+  id_to_rois_map_[rois_id] = rois_msg;
+  if (ready_to_fuse()) {
+    fusion_callback();
+  }
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+bool FusionCollector<Msg3D, Msg2D, ExportObj>::ready_to_fuse()
+{
+  return id_to_rois_map_.size() == rois_number_ && msg3d_ != nullptr;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+CollectorStatus FusionCollector<Msg3D, Msg2D, ExportObj>::get_status()
+{
+  std::lock_guard<std::mutex> fusion_lock(fusion_mutex_);
+  return status_;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::fusion_callback()
+{
+  if (debug_mode_) {
+    show_debug_message();
+  }
+
+  // All pointcloud and rois are received or the timer has timed out, cancel the timer and fuse
+  // them.
+  timer_->cancel();
+
+  std::unordered_map<std::size_t, double> id_to_stamp_map;
+  for (const auto & [rois_id, rois_msg] : id_to_rois_map_) {
+    id_to_stamp_map[rois_id] = rclcpp::Time(rois_msg->header.stamp).seconds();
+  }
+
+  if (!msg3d_) {
+    RCLCPP_DEBUG(
+      ros2_parent_node_->get_logger(),
+      "The input 3D message is not in the fusion collector, so the fusion process will be "
+      "skipped.");
+    status_ = CollectorStatus::Finished;
+    ros2_parent_node_->show_diagnostic_message(id_to_stamp_map, fusion_collector_info_);
+    return;
+  }
+
+  typename Msg3D::SharedPtr output_det3d_msg = std::make_shared<Msg3D>(*msg3d_);
+  ros2_parent_node_->preprocess(*output_det3d_msg);
+
+  for (const auto & [rois_id, rois_msg] : id_to_rois_map_) {
+    if (det2d_status_list_[rois_id].camera_projector_ptr == nullptr) {
+      RCLCPP_WARN_THROTTLE(
+        ros2_parent_node_->get_logger(), *ros2_parent_node_->get_clock(), 5000,
+        "no camera info. id is %zu", rois_id);
+      continue;
+    }
+    ros2_parent_node_->fuse_on_single_image(
+      *msg3d_, det2d_status_list_[rois_id], *rois_msg, *output_det3d_msg);
+  }
+
+  ros2_parent_node_->export_process(output_det3d_msg, id_to_stamp_map, fusion_collector_info_);
+  status_ = CollectorStatus::Finished;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+bool FusionCollector<Msg3D, Msg2D, ExportObj>::rois_exists(const std::size_t & rois_id)
+{
+  return id_to_rois_map_.find(rois_id) != id_to_rois_map_.end();
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+bool FusionCollector<Msg3D, Msg2D, ExportObj>::msg3d_exists()
+{
+  return msg3d_ != nullptr;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::add_camera_projection(
+  std::size_t rois_id, std::shared_ptr<CameraProjection> camera_projector_ptr)
+{
+  std::lock_guard<std::mutex> lock(fusion_mutex_);
+  det2d_status_list_[rois_id].camera_projector_ptr = camera_projector_ptr;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::set_period(const std::chrono::nanoseconds period)
+{
+  try {
+    const auto new_period = period.count();
+    if (!timer_) {
+      return;
+    }
+    int64_t old_period = 0;
+    rcl_ret_t ret = rcl_timer_get_period(timer_->get_timer_handle().get(), &old_period);
+    if (ret != RCL_RET_OK) {
+      rclcpp::exceptions::throw_from_rcl_error(ret, "Couldn't get old period");
+    }
+    ret = rcl_timer_exchange_period(timer_->get_timer_handle().get(), new_period, &old_period);
+    if (ret != RCL_RET_OK) {
+      rclcpp::exceptions::throw_from_rcl_error(ret, "Couldn't exchange_period");
+    }
+  } catch (rclcpp::exceptions::RCLError & ex) {
+    RCLCPP_WARN_THROTTLE(
+      ros2_parent_node_->get_logger(), *ros2_parent_node_->get_clock(), 5000, "%s", ex.what());
+  }
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::reset()
+{
+  std::lock_guard<std::mutex> lock(fusion_mutex_);
+
+  status_ = CollectorStatus::Idle;  // Reset status to Idle
+  id_to_rois_map_.clear();
+  msg3d_ = nullptr;
+  fusion_collector_info_ = nullptr;
+  is_first_msg3d_ = false;
+
+  if (timer_ && !timer_->is_canceled()) {
+    timer_->cancel();
+  }
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void FusionCollector<Msg3D, Msg2D, ExportObj>::show_debug_message()
+{
+  auto time_until_trigger = timer_->time_until_trigger();
+  std::stringstream log_stream;
+  log_stream << std::fixed << std::setprecision(6);
+  log_stream << "Collector's fusion callback time: "
+             << ros2_parent_node_->get_clock()->now().seconds() << " seconds\n";
+
+  if (
+    auto advanced_info =
+      std::dynamic_pointer_cast<AdvancedCollectorInfo>(fusion_collector_info_)) {
+    log_stream << "Advanced strategy:\n Fusion collector's reference time min: "
+               << advanced_info->timestamp - advanced_info->noise_window
+               << " to max: " << advanced_info->timestamp + advanced_info->noise_window
+               << " seconds\n";
+  } else if (
+    auto naive_info = std::dynamic_pointer_cast<NaiveCollectorInfo>(fusion_collector_info_)) {
+    log_stream << "Naive strategy:\n Fusion collector's timestamp: " << naive_info->timestamp
+               << " seconds\n";
+  }
+
+  log_stream << "Time until trigger: " << (time_until_trigger.count() / 1e9) << " seconds\n";
+  if (msg3d_) {
+    log_stream << "Msg3d: [" << rclcpp::Time(msg3d_->header.stamp).seconds() << "]\n";
+  } else {
+    log_stream << "Msg3d: [Is empty]\n";
+  }
+  log_stream << "ROIs: [";
+  std::string separator;
+  for (const auto & [id, rois] : id_to_rois_map_) {
+    log_stream << separator;
+    log_stream << "[rois " << id << ", " << rclcpp::Time(rois->header.stamp).seconds() << "]";
+    separator = ", ";
+  }
+
+  log_stream << "]\n";
+
+  RCLCPP_INFO(ros2_parent_node_->get_logger(), "%s", log_stream.str().c_str());
+}
+
+// Explicit instantiation for the supported types
+
+// pointpainting fusion
+template class FusionCollector<PointCloudMsgType, RoiMsgType, DetectedObjects>;
+
+// roi cluster fusion
+template class FusionCollector<ClusterMsgType, RoiMsgType, ClusterMsgType>;
+
+// roi detected-object fusion
+template class FusionCollector<DetectedObjects, RoiMsgType, DetectedObjects>;
+
+// roi pointcloud fusion
+template class FusionCollector<PointCloudMsgType, RoiMsgType, ClusterMsgType>;
+
+// segment pointcloud fusion
+template class FusionCollector<PointCloudMsgType, Image, PointCloudMsgType>;
+
+}  // namespace autoware::image_projection_based_fusion
diff --git a/perception/autoware_image_projection_based_fusion/src/fusion_matching_strategy.cpp b/perception/autoware_image_projection_based_fusion/src/fusion_matching_strategy.cpp
new file mode 100644
index 0000000000000..f417f26f52613
--- /dev/null
+++ b/perception/autoware_image_projection_based_fusion/src/fusion_matching_strategy.cpp
@@ -0,0 +1,433 @@
+// Copyright 2025 TIER IV, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#include "autoware/image_projection_based_fusion/fusion_matching_strategy.hpp"
+
+#include "autoware/image_projection_based_fusion/fusion_collector.hpp"
+#include "autoware/image_projection_based_fusion/fusion_node.hpp"
+#include "autoware/image_projection_based_fusion/fusion_types.hpp"
+
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+namespace autoware::image_projection_based_fusion
+{
+
+template <class Msg3D, class Msg2D, class ExportObj>
+NaiveMatchingStrategy<Msg3D, Msg2D, ExportObj>::NaiveMatchingStrategy(
+  std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> && ros2_parent_node,
+  const std::unordered_map<std::size_t, double> & id_to_offset_map)
+: ros2_parent_node_(std::move(ros2_parent_node)), id_to_offset_map_(id_to_offset_map)
+{
+  if (!ros2_parent_node_) {
+    throw std::runtime_error("ros2_parent_node is nullptr in NaiveMatchingStrategy constructor.");
+  }
+
+  threshold_ =
+    ros2_parent_node_->template declare_parameter<double>("matching_strategy.threshold");
+
+  RCLCPP_INFO(ros2_parent_node_->get_logger(), "Utilize naive matching strategy for fusion nodes.");
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+NaiveMatchingStrategy<Msg3D, Msg2D, ExportObj>::match_rois_to_collector(
+  const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+  const std::shared_ptr<RoisMatchingContext> & matching_context) const
+{
+  double smallest_time_difference = std::numeric_limits<double>::max();
+  std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> closest_collector = nullptr;
+
+  for (const auto & fusion_collector : fusion_collectors) {
+    if (!fusion_collector->rois_exists(matching_context->rois_id)) {
+      if (
+        auto naive_info =
+          std::dynamic_pointer_cast<NaiveCollectorInfo>(fusion_collector->get_info())) {
+        auto offset_it = id_to_offset_map_.find(matching_context->rois_id);
+        if (offset_it == id_to_offset_map_.end()) {
+          RCLCPP_ERROR(
+            ros2_parent_node_->get_logger(), "Missing offset for rois_id: %zu",
+            matching_context->rois_id);
+          continue;
+        }
+
+        double time_difference =
+          std::abs(matching_context->rois_timestamp - naive_info->timestamp - offset_it->second);
+        if (time_difference < smallest_time_difference && time_difference < naive_info->threshold) {
+          smallest_time_difference = time_difference;
+          closest_collector = fusion_collector;
+        }
+      }
+    }
+  }
+
+  return closest_collector;  // Implicitly returns std::nullopt if closest_collector is nullptr
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+NaiveMatchingStrategy<Msg3D, Msg2D, ExportObj>::match_msg3d_to_collector(
+  const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+  const std::shared_ptr<Msg3dMatchingContext> & matching_context)
+{
+  double smallest_time_difference = std::numeric_limits<double>::max();
+  std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> closest_collector = nullptr;
+
+  for (const auto & fusion_collector : fusion_collectors) {
+    if (!fusion_collector->msg3d_exists()) {
+      if (
+        auto naive_info =
+          std::dynamic_pointer_cast<NaiveCollectorInfo>(fusion_collector->get_info())) {
+        double time_difference =
+          std::abs(matching_context->msg3d_timestamp - naive_info->timestamp);
+
+        if (time_difference < smallest_time_difference && time_difference < naive_info->threshold) {
+          smallest_time_difference = time_difference;
+          closest_collector = fusion_collector;
+        }
+      }
+    }
+  }
+
+  return closest_collector;  // Implicitly returns std::nullopt if closest_collector is nullptr
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void NaiveMatchingStrategy<Msg3D, Msg2D, ExportObj>::set_collector_info(
+  std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> & collector,
+  const std::shared_ptr<MatchingContextBase> & matching_context)
+{
+  if (!matching_context) {
+    RCLCPP_ERROR(ros2_parent_node_->get_logger(), "matching_context is nullptr!");
+    return;
+  }
+
+  if (
+    auto msg3d_matching_context =
+      std::dynamic_pointer_cast<Msg3dMatchingContext>(matching_context)) {
+    auto info =
+      std::make_shared<NaiveCollectorInfo>(msg3d_matching_context->msg3d_timestamp, threshold_);
+    collector->set_info(info);
+
+  } else if (
+    auto rois_matching_context = std::dynamic_pointer_cast<RoisMatchingContext>(matching_context)) {
+    auto offset_it = id_to_offset_map_.find(rois_matching_context->rois_id);
+    if (offset_it == id_to_offset_map_.end()) {
+      RCLCPP_ERROR(
+        ros2_parent_node_->get_logger(), "Missing offset for rois_id: %zu",
+        rois_matching_context->rois_id);
+      return;
+    }
+
+    auto info = std::make_shared<NaiveCollectorInfo>(
+      rois_matching_context->rois_timestamp - offset_it->second, threshold_);
+    collector->set_info(info);
+  }
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+AdvancedMatchingStrategy<Msg3D, Msg2D, ExportObj>::AdvancedMatchingStrategy(
+  std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> && ros2_parent_node,
+  const std::unordered_map<std::size_t, double> & id_to_offset_map)
+: ros2_parent_node_(std::move(ros2_parent_node)), id_to_offset_map_(id_to_offset_map)
+{
+  if (!ros2_parent_node_) {
+    throw std::runtime_error(
+      "ros2_parent_node is nullptr in AdvancedMatchingStrategy constructor.");
+  }
+
+  msg3d_noise_window_ =
+    ros2_parent_node_->template declare_parameter<double>("matching_strategy.msg3d_noise_window");
+  auto rois_timestamp_noise_window =
+    ros2_parent_node_->template declare_parameter<std::vector<double>>(
+      "matching_strategy.rois_timestamp_noise_window");
+
+  auto rois_number = id_to_offset_map_.size();
+  if (rois_timestamp_noise_window.size() != rois_number) {
+    throw std::runtime_error(
+      "Mismatch: rois_number (" + std::to_string(rois_number) +
+      ") does not match rois_timestamp_noise_window size (" +
+      std::to_string(rois_timestamp_noise_window.size()) + ").");
+  }
+
+  for (size_t i = 0; i < rois_number; i++) {
+    id_to_noise_window_map_[i] = rois_timestamp_noise_window[i];
+  }
+
+  RCLCPP_INFO(
+    ros2_parent_node_->get_logger(), "Utilize advanced matching strategy for fusion nodes.");
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+AdvancedMatchingStrategy<Msg3D, Msg2D, ExportObj>::match_rois_to_collector(
+  const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+  const std::shared_ptr<RoisMatchingContext> & matching_context) const
+{
+  auto offset_it = id_to_offset_map_.find(matching_context->rois_id);
+  auto noise_it = id_to_noise_window_map_.find(matching_context->rois_id);
+
+  if (offset_it == id_to_offset_map_.end() || noise_it == id_to_noise_window_map_.end()) {
+    RCLCPP_ERROR(
+      ros2_parent_node_->get_logger(), "Missing offset or noise window for rois_id: %zu",
+      matching_context->rois_id);
+    return std::nullopt;
+  }
+
+  double adjusted_timestamp = matching_context->rois_timestamp - offset_it->second;
+  double noise_window = noise_it->second;
+
+  for (const auto & fusion_collector : fusion_collectors) {
+    if (
+      auto advanced_info =
+        std::dynamic_pointer_cast<AdvancedCollectorInfo>(fusion_collector->get_info())) {
+      double reference_timestamp_min = advanced_info->timestamp - advanced_info->noise_window;
+      double reference_timestamp_max = advanced_info->timestamp + advanced_info->noise_window;
+
+      if (
+        adjusted_timestamp < reference_timestamp_max + noise_window &&
+        adjusted_timestamp > reference_timestamp_min - noise_window) {
+        return fusion_collector;
+      }
+    }
+  }
+  return std::nullopt;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+AdvancedMatchingStrategy<Msg3D, Msg2D, ExportObj>::match_msg3d_to_collector(
+  const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+  const std::shared_ptr<Msg3dMatchingContext> & matching_context)
+{
+  auto concatenated_status =
+    ros2_parent_node_->find_concatenation_status(matching_context->msg3d_timestamp);
+
+  double offset = get_concatenated_offset(matching_context->msg3d_timestamp, concatenated_status);
+  double adjusted_timestamp = matching_context->msg3d_timestamp - offset;
+
+  for (const auto & fusion_collector : fusion_collectors) {
+    if (
+      auto advanced_info =
+        std::dynamic_pointer_cast<AdvancedCollectorInfo>(fusion_collector->get_info())) {
+      double reference_timestamp_min = advanced_info->timestamp - advanced_info->noise_window;
+      double reference_timestamp_max = advanced_info->timestamp + advanced_info->noise_window;
+
+      if (
+        adjusted_timestamp < reference_timestamp_max + msg3d_noise_window_ &&
+        adjusted_timestamp > reference_timestamp_min - msg3d_noise_window_) {
+        return fusion_collector;
+      }
+    }
+  }
+  return std::nullopt;
+}
+
+template <class Msg3D, class Msg2D, class ExportObj>
+void AdvancedMatchingStrategy<Msg3D, Msg2D, ExportObj>::set_collector_info(
+  std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> & collector,
+  const std::shared_ptr<MatchingContextBase> & matching_context)
+{
+  if (!matching_context) {
+    RCLCPP_ERROR(ros2_parent_node_->get_logger(), "matching_context is nullptr!");
+    return;
+  }
+
+  if (
+    auto msg3d_matching_context =
+      std::dynamic_pointer_cast<Msg3dMatchingContext>(matching_context)) {
+    auto concatenated_status =
+      ros2_parent_node_->find_concatenation_status(msg3d_matching_context->msg3d_timestamp);
+    double offset =
+
get_concatenated_offset(msg3d_matching_context->msg3d_timestamp, concatenated_status); + + auto info = std::make_shared( + msg3d_matching_context->msg3d_timestamp - offset, msg3d_noise_window_); + collector->set_info(info); + } else if ( + auto rois_matching_context = std::dynamic_pointer_cast(matching_context)) { + auto offset_it = id_to_offset_map_.find(rois_matching_context->rois_id); + auto noise_it = id_to_noise_window_map_.find(rois_matching_context->rois_id); + + if (offset_it == id_to_offset_map_.end() || noise_it == id_to_noise_window_map_.end()) { + RCLCPP_ERROR( + ros2_parent_node_->get_logger(), "Missing offset or noise window for rois_id: %zu", + rois_matching_context->rois_id); + return; + } + + auto info = std::make_shared( + rois_matching_context->rois_timestamp - offset_it->second, noise_it->second); + collector->set_info(info); + } +} + +template +double AdvancedMatchingStrategy::get_concatenated_offset( + const double & msg3d_timestamp, + const std::optional> & concatenated_status) +{ + double offset = 0.0; + + if (concatenated_status.has_value()) { + bool concatenation_success = false; + const auto & status_map = concatenated_status.value(); + + // Find required keys in the map + auto concat_success_it = status_map.find("cloud_concatenation_success"); + + if (concat_success_it != status_map.end()) { + concatenation_success = (concat_success_it->second == "True"); + if (concatenation_success && database_created_) { + return offset; // 0.0 + } + } + + auto ref_min_it = status_map.find("reference_timestamp_min"); + auto ref_max_it = status_map.find("reference_timestamp_max"); + if (ref_min_it != status_map.end() && ref_max_it != status_map.end()) { + try { + double reference_min = std::stod(ref_min_it->second); + double reference_max = std::stod(ref_max_it->second); + + if (!concatenation_success && msg3d_timestamp > reference_max) { + offset = msg3d_timestamp - (reference_min + (reference_max - reference_min) / 2); + } else if (!database_created_ 
&& concatenation_success) { + auto concat_cloud_it = status_map.find("concatenated_cloud_timestamp"); + if (concat_cloud_it != status_map.end()) { + double concatenated_cloud_timestamp = std::stod(concat_cloud_it->second); + update_fractional_timestamp_set(concatenated_cloud_timestamp); + success_status_counter_++; + offset = 0.0; + + if (success_status_counter_ > success_threshold) { + database_created_ = true; + } + } + } + } catch (const std::exception & e) { + RCLCPP_ERROR(ros2_parent_node_->get_logger(), "Failed to parse timestamp: %s", e.what()); + } + } + } else { + if (database_created_) { + offset = compute_offset(msg3d_timestamp); + RCLCPP_DEBUG(ros2_parent_node_->get_logger(), "Using database, computed offset: %f", offset); + } else { + offset = 0.0; // Database not created yet, expect the concatenation is successful + } + } + + return offset; +} + +template +double AdvancedMatchingStrategy::extract_fractional(double timestamp) +{ + return fmod(timestamp, 1.0); +} + +template +void AdvancedMatchingStrategy::update_fractional_timestamp_set( + double timestamp) +{ + double fractional_part = extract_fractional(timestamp); + + // Check if the new timestamp belongs to an existing element within noise tolerance + for (auto it = fractional_timestamp_set_.begin(); it != fractional_timestamp_set_.end(); ++it) { + if (std::abs(fractional_part - *it) < msg3d_noise_window_ * 2) { + // If it belongs to an existing group, average the timestamp + double updated_value = (*it + fractional_part) / 2; + fractional_timestamp_set_.erase(it); + fractional_timestamp_set_.insert(updated_value); + return; + } + } + + fractional_timestamp_set_.insert(fractional_part); +} + +template +double AdvancedMatchingStrategy::compute_offset(double input_timestamp) +{ + if (fractional_timestamp_set_.empty()) { + return 0.0; + } + + double fractional_part = extract_fractional(input_timestamp); + double expected_timestamp = -1.0; + + // Check if input timestamp is within an existing timestamp 
± noise_tolerance
+  for (const auto & timestamp : fractional_timestamp_set_) {
+    if (
+      fractional_part >= timestamp - msg3d_noise_window_ &&
+      fractional_part < timestamp + msg3d_noise_window_) {
+      return 0.0;  // If within range, offset is zero
+    }
+  }
+
+  // Find the closest timestamp ≤ fractional_part
+  auto it = fractional_timestamp_set_.lower_bound(fractional_part);
+  if (it == fractional_timestamp_set_.end()) {
+    --it;
+    expected_timestamp = floor(input_timestamp) + *it;
+  } else if (it == fractional_timestamp_set_.begin()) {
+    // If the fractional part is smaller than all stored timestamps, wrap around and use the
+    // largest stored timestamp from the previous second
+    expected_timestamp = floor(input_timestamp) - 1 + *fractional_timestamp_set_.rbegin();
+  } else {
+    --it;
+    expected_timestamp = floor(input_timestamp) + *it;
+  }
+  return input_timestamp - expected_timestamp;
+}
+
+// pointpainting fusion
+template class NaiveMatchingStrategy;
+
+// roi cluster fusion
+template class NaiveMatchingStrategy;
+
+// roi detected-object fusion
+template class NaiveMatchingStrategy;
+
+// roi pointcloud fusion
+template class NaiveMatchingStrategy;
+
+// segment pointcloud fusion
+template class NaiveMatchingStrategy;
+
+// pointpainting fusion
+template class AdvancedMatchingStrategy;
+
+// roi cluster fusion
+template class AdvancedMatchingStrategy;
+
+// roi detected-object fusion
+template class AdvancedMatchingStrategy;
+
+// roi pointcloud fusion
+template class AdvancedMatchingStrategy;
+
+// segment pointcloud fusion
+template class AdvancedMatchingStrategy;
+
+}  // namespace autoware::image_projection_based_fusion
diff --git a/perception/autoware_image_projection_based_fusion/src/fusion_node.cpp b/perception/autoware_image_projection_based_fusion/src/fusion_node.cpp
index cb79c04ae2d47..d34a07a969a6f 100644
--- a/perception/autoware_image_projection_based_fusion/src/fusion_node.cpp
+++ b/perception/autoware_image_projection_based_fusion/src/fusion_node.cpp
@@ -1,4 +1,4 @@
-// Copyright 2022 TIER IV, Inc.
+// Copyright 2025 TIER IV, Inc. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -12,23 +12,29 @@ // See the License for the specific language governing permissions and // limitations under the License. +#include #define EIGEN_MPL2_ONLY +#include "autoware/image_projection_based_fusion/fusion_collector.hpp" #include "autoware/image_projection_based_fusion/fusion_node.hpp" +#include "autoware/image_projection_based_fusion/fusion_types.hpp" #include #include #include -#include #include #include #include +#include #include #include +#include #include +#include +#include #include #ifdef ROS_DISTRO_GALACTIC @@ -40,9 +46,6 @@ #include #endif -// static int publish_counter = 0; -static double processing_time_ms = 0; - namespace autoware::image_projection_based_fusion { using autoware_utils::ScopedTimeTrack; @@ -53,29 +56,40 @@ FusionNode::FusionNode( : Node(node_name, options), tf_buffer_(this->get_clock()), tf_listener_(tf_buffer_) { // set rois_number - const std::size_t rois_number = - static_cast(declare_parameter("rois_number")); - if (rois_number < 1) { - RCLCPP_ERROR( - this->get_logger(), "minimum rois_number is 1. current rois_number is %zu", rois_number); + rois_number_ = static_cast(declare_parameter("rois_number")); + if (rois_number_ < 1) { + throw std::runtime_error( + "Minimum rois_number_ is 1. Current rois_number_ is " + std::to_string(rois_number_)); } - if (rois_number > 8) { + if (rois_number_ > 8) { RCLCPP_WARN( this->get_logger(), - "Current rois_number is %zu. Large rois number may cause performance issue.", rois_number); + "Current rois_number_ is %zu. 
Large rois number may cause performance issue.", rois_number_); } // Set parameters - match_threshold_ms_ = declare_parameter("match_threshold_ms"); - timeout_ms_ = declare_parameter("timeout_ms"); + msg3d_timeout_sec_ = declare_parameter("msg3d_timeout_sec"); + rois_timeout_sec_ = declare_parameter("rois_timeout_sec"); + + auto rois_timestamp_offsets = declare_parameter>("rois_timestamp_offsets"); + if (rois_timestamp_offsets.size() != rois_number_) { + throw std::runtime_error( + "Mismatch: rois_number (" + std::to_string(rois_number_) + + ") does not match rois_timestamp_offsets size (" + + std::to_string(rois_timestamp_offsets.size()) + ")."); + } + + for (std::size_t i = 0; i < rois_number_; i++) { + id_to_offset_map_[i] = rois_timestamp_offsets[i]; + } std::vector input_rois_topics; std::vector input_camera_info_topics; - input_rois_topics.resize(rois_number); - input_camera_info_topics.resize(rois_number); + input_rois_topics.resize(rois_number_); + input_camera_info_topics.resize(rois_number_); - for (std::size_t roi_i = 0; roi_i < rois_number; ++roi_i) { + for (std::size_t roi_i = 0; roi_i < rois_number_; ++roi_i) { input_rois_topics.at(roi_i) = declare_parameter( "input/rois" + std::to_string(roi_i), "/perception/object_recognition/detection/rois" + std::to_string(roi_i)); @@ -85,38 +99,38 @@ FusionNode::FusionNode( "/sensing/camera/camera" + std::to_string(roi_i) + "/camera_info"); } - // subscribe camera info - camera_info_subs_.resize(rois_number); - for (std::size_t roi_i = 0; roi_i < rois_number; ++roi_i) { - std::function fnc = - std::bind(&FusionNode::cameraInfoCallback, this, std::placeholders::_1, roi_i); - camera_info_subs_.at(roi_i) = this->create_subscription( - input_camera_info_topics.at(roi_i), rclcpp::QoS{1}.best_effort(), fnc); - } + // Subscribe to Camera Info + camera_info_subs_.resize(rois_number_); + for (auto rois_id = 0u; rois_id < rois_number_; ++rois_id) { + auto topic = input_camera_info_topics.at(rois_id); + auto qos = 
rclcpp::QoS{1}.best_effort(); - // subscribe rois - rois_subs_.resize(rois_number); - for (std::size_t roi_i = 0; roi_i < rois_number; ++roi_i) { - std::function roi_callback = - std::bind(&FusionNode::roiCallback, this, std::placeholders::_1, roi_i); - rois_subs_.at(roi_i) = this->create_subscription( - input_rois_topics.at(roi_i), rclcpp::QoS{1}.best_effort(), roi_callback); + camera_info_subs_[rois_id] = this->create_subscription( + topic, qos, [this, rois_id](const sensor_msgs::msg::CameraInfo::ConstSharedPtr msg) { + this->camera_info_callback(msg, rois_id); + }); } - // subscribe 3d detection - std::function sub_callback = - std::bind(&FusionNode::subCallback, this, std::placeholders::_1); - det3d_sub_ = - this->create_subscription("input", rclcpp::QoS(1).best_effort(), sub_callback); + // Subscribe to ROIs + rois_subs_.resize(rois_number_); - // Set timer - const auto period_ns = std::chrono::duration_cast( - std::chrono::duration(timeout_ms_)); - timer_ = rclcpp::create_timer( - this, get_clock(), period_ns, std::bind(&FusionNode::timer_callback, this)); + for (auto rois_id = 0u; rois_id < rois_number_; ++rois_id) { + auto topic = input_rois_topics.at(rois_id); + auto qos = rclcpp::QoS{1}.best_effort(); + + rois_subs_[rois_id] = this->create_subscription( + topic, qos, [this, rois_id](const typename Msg2D::ConstSharedPtr msg) { + this->rois_callback(msg, rois_id); + }); + } + + // Subscribe 3D input msg + msg3d_sub_ = this->create_subscription( + "input", rclcpp::QoS(1).best_effort(), + [this](const typename Msg3D::ConstSharedPtr msg) { this->sub_callback(msg); }); // initialization on each 2d detections - setDet2DStatus(rois_number); + initialize_det2d_status(rois_number_); // parameters for approximation grid approx_grid_cell_w_size_ = declare_parameter("approximation_grid_cell_width"); @@ -130,23 +144,30 @@ FusionNode::FusionNode( filter_scope_min_z_ = declare_parameter("filter_scope_min_z"); filter_scope_max_z_ = declare_parameter("filter_scope_max_z"); 
+ rosbag_length_ = declare_parameter("rosbag_length"); + publish_previous_but_late_output_msg_ = + declare_parameter("publish_previous_but_late_output_msg"); + matching_strategy_ = declare_parameter("matching_strategy.type"); + // debugger - if (declare_parameter("debug_mode", false)) { + debug_mode_ = declare_parameter("debug_mode"); + if (debug_mode_) { std::vector input_camera_topics; - input_camera_topics.resize(rois_number); - for (std::size_t roi_i = 0; roi_i < rois_number; ++roi_i) { + input_camera_topics.resize(rois_number_); + for (std::size_t roi_i = 0; roi_i < rois_number_; ++roi_i) { input_camera_topics.at(roi_i) = declare_parameter( "input/image" + std::to_string(roi_i), "/sensing/camera/camera" + std::to_string(roi_i) + "/image_rect_color"); } - std::size_t image_buffer_size = + auto image_buffer_size = static_cast(declare_parameter("image_buffer_size")); debugger_ = - std::make_shared(this, rois_number, image_buffer_size, input_camera_topics); + std::make_shared(this, rois_number_, image_buffer_size, input_camera_topics); // input topic timing publisher debug_internal_pub_ = std::make_unique(this, get_name()); } + collector_debug_mode_ = declare_parameter("collector_debug_mode"); // time keeper bool use_time_keeper = declare_parameter("publish_processing_time_detail"); @@ -165,18 +186,45 @@ FusionNode::FusionNode( stop_watch_ptr_->tic("cyclic_time"); stop_watch_ptr_->tic("processing_time"); } + + // Diagnostic Updater + diagnostic_updater_.setHardwareID(node_name + "_checker"); + diagnostic_updater_.add(node_name + "_status", this, &FusionNode::check_fusion_status); +} + +template +void FusionNode::initialize_strategy() +{ + if (matching_strategy_ == "naive") { + fusion_matching_strategy_ = std::make_unique>( + std::dynamic_pointer_cast(shared_from_this()), id_to_offset_map_); + } else if (matching_strategy_ == "advanced") { + fusion_matching_strategy_ = std::make_unique>( + std::dynamic_pointer_cast(shared_from_this()), id_to_offset_map_); + // 
subscribe diagnostics + sub_diag_ = this->create_subscription( + "/diagnostics", 10, std::bind(&FusionNode::diagnostic_callback, this, std::placeholders::_1)); + } else { + throw std::runtime_error("Matching strategy must be 'advanced' or 'naive'"); + } } template -void FusionNode::setDet2DStatus(std::size_t rois_number) +void FusionNode::initialize_collector_list() { - // camera offset settings - std::vector input_offset_ms = declare_parameter>("input_offset_ms"); - if (!input_offset_ms.empty() && rois_number > input_offset_ms.size()) { - throw std::runtime_error("The number of offsets does not match the number of topics."); + // Initialize collector list + for (size_t i = 0; i < num_of_collectors; ++i) { + fusion_collectors_.emplace_back(std::make_shared>( + std::dynamic_pointer_cast(shared_from_this()), rois_number_, det2d_status_list_, + collector_debug_mode_)); } + init_collector_list_ = true; +} - // camera projection settings +template +void FusionNode::initialize_det2d_status(std::size_t rois_number) +{ + // Camera projection settings std::vector point_project_to_unrectified_image = declare_parameter>("point_project_to_unrectified_image"); if (rois_number > point_project_to_unrectified_image.size()) { @@ -184,315 +232,504 @@ void FusionNode::setDet2DStatus(std::size_t rois_number "The number of point_project_to_unrectified_image does not match the number of rois " "topics."); } - std::vector approx_camera_projection = + std::vector approximate_camera_projection = declare_parameter>("approximate_camera_projection"); - if (rois_number != approx_camera_projection.size()) { - const std::size_t current_size = approx_camera_projection.size(); - RCLCPP_WARN( - get_logger(), - "The number of elements in approximate_camera_projection should be the same as in " - "rois_number. 
" - "It has %zu elements.", - current_size); - if (current_size < rois_number) { - approx_camera_projection.resize(rois_number); - for (std::size_t i = current_size; i < rois_number; i++) { - approx_camera_projection.at(i) = true; - } - } + if (rois_number != approximate_camera_projection.size()) { + const std::size_t current_size = approximate_camera_projection.size(); + throw std::runtime_error( + "The number of elements in approximate_camera_projection should be the same as rois_number_. " + "It has " + + std::to_string(current_size) + " elements, but rois_number is " + + std::to_string(rois_number) + "."); } - // 2d detection status initialization - det2d_list_.resize(rois_number); - for (std::size_t roi_i = 0; roi_i < rois_number; ++roi_i) { - det2d_list_.at(roi_i).id = roi_i; - det2d_list_.at(roi_i).project_to_unrectified_image = - point_project_to_unrectified_image.at(roi_i); - det2d_list_.at(roi_i).approximate_camera_projection = approx_camera_projection.at(roi_i); - det2d_list_.at(roi_i).input_offset_ms = input_offset_ms.at(roi_i); + // 2D detection status initialization + det2d_status_list_.resize(rois_number_); + for (std::size_t rois_id = 0; rois_id < rois_number_; ++rois_id) { + det2d_status_list_.at(rois_id).id = rois_id; + det2d_status_list_.at(rois_id).project_to_unrectified_image = + point_project_to_unrectified_image.at(rois_id); + det2d_status_list_.at(rois_id).approximate_camera_projection = + approximate_camera_projection.at(rois_id); } } template -void FusionNode::cameraInfoCallback( +void FusionNode::camera_info_callback( const sensor_msgs::msg::CameraInfo::ConstSharedPtr input_camera_info_msg, - const std::size_t camera_id) + const std::size_t rois_id) { - // create the CameraProjection when the camera info arrives for the first time - // assuming the camera info does not change while the node is running - auto & det2d = det2d_list_.at(camera_id); - if (!det2d.camera_projector_ptr && checkCameraInfo(*input_camera_info_msg)) { - 
det2d.camera_projector_ptr = std::make_unique(
+  if (rois_id >= det2d_status_list_.size()) {
+    throw std::out_of_range("rois_id " + std::to_string(rois_id) + " is out of range.");
+  }
+
+  // Create the CameraProjection only when the camera info arrives for the first time.
+  // This assumes the camera info does not change while the node is running.
+  auto & det2d_status = det2d_status_list_.at(rois_id);
+  if (det2d_status.camera_projector_ptr == nullptr && check_camera_info(*input_camera_info_msg)) {
+    det2d_status.camera_projector_ptr = std::make_unique(
+      *input_camera_info_msg, approx_grid_cell_w_size_, approx_grid_cell_h_size_,
-      det2d.project_to_unrectified_image, det2d.approximate_camera_projection);
-    det2d.camera_projector_ptr->initialize();
+      det2d_status.project_to_unrectified_image, det2d_status.approximate_camera_projection);
+    det2d_status.camera_projector_ptr->initialize();
+
+    std::unique_lock fusion_collectors_lock(fusion_collectors_mutex_);
+    for (auto & collector : fusion_collectors_) {
+      collector->add_camera_projection(rois_id, det2d_status.camera_projector_ptr);
+    }
+  }
 }

 template
-void FusionNode::preprocess(Msg3D & ouput_msg __attribute__((unused)))
+void FusionNode::preprocess([[maybe_unused]] Msg3D & output_msg)
 {
-  // do nothing by default
+  // Default implementation: No preprocessing.
+  // This function can be overridden by derived classes if needed.
} template -void FusionNode::exportProcess() +void FusionNode::export_process( + typename Msg3D::SharedPtr & output_det3d_msg, + std::unordered_map id_to_stamp_map, + std::shared_ptr collector_info) { - timer_->cancel(); - ExportObj output_msg; - postprocess(*(cached_det3d_msg_ptr_), output_msg); - publish(output_msg); + postprocess(*(output_det3d_msg), output_msg); + + // Update timestamp + current_output_msg_timestamp_ = rclcpp::Time(output_msg.header.stamp).seconds(); + + // Handle late messages during rosbag replay + if ( + current_output_msg_timestamp_ < latest_output_msg_timestamp_ && + !publish_previous_but_late_output_msg_) { + double timestamp_diff = latest_output_msg_timestamp_ - current_output_msg_timestamp_; + + publish_output_msg_ = (timestamp_diff > rosbag_length_); // Force publish if rosbag looped + drop_previous_but_late_output_msg_ = !publish_output_msg_; // Drop otherwise + } else { + publish_output_msg_ = true; // Publish normally + } + + if (publish_output_msg_) { + publish(output_msg); + } - // add processing time for debug + // Move collected diagnostics info + diagnostic_collector_info_ = std::move(collector_info); + diagnostic_id_to_stamp_map_ = std::move(id_to_stamp_map); + diagnostic_updater_.force_update(); + + // Add processing time for debugging if (debug_publisher_) { - const double cyclic_time_ms = stop_watch_ptr_->toc("cyclic_time", true); - const double pipeline_latency_ms = + auto cyclic_time_ms = stop_watch_ptr_->toc("cyclic_time", true); + auto pipeline_latency_ms = std::chrono::duration( std::chrono::nanoseconds( - (this->get_clock()->now() - cached_det3d_msg_ptr_->header.stamp).nanoseconds())) + (this->get_clock()->now() - output_det3d_msg->header.stamp).nanoseconds())) .count(); + debug_publisher_->publish( "debug/cyclic_time_ms", cyclic_time_ms); debug_publisher_->publish( - "debug/processing_time_ms", - processing_time_ms + stop_watch_ptr_->toc("processing_time", true)); + "debug/processing_time_ms", 
stop_watch_ptr_->toc("processing_time", true)); debug_publisher_->publish( "debug/pipeline_latency_ms", pipeline_latency_ms); - processing_time_ms = 0; } - cached_det3d_msg_ptr_ = nullptr; + + // debug + if (debug_internal_pub_) { + for (std::size_t rois_id = 0; rois_id < rois_number_; ++rois_id) { + auto rois_timestamp = diagnostic_id_to_stamp_map_[rois_id]; + auto timestamp_interval_ms = (rois_timestamp - current_output_msg_timestamp_) * 1000; + debug_internal_pub_->publish( + "debug/roi" + std::to_string(rois_id) + "/timestamp_interval_ms", timestamp_interval_ms); + debug_internal_pub_->publish( + "debug/roi" + std::to_string(rois_id) + "/timestamp_interval_offset_ms", + timestamp_interval_ms - id_to_offset_map_[rois_id] * 1000); + } + } } template -void FusionNode::subCallback( - const typename Msg3D::ConstSharedPtr det3d_msg) +void FusionNode::sub_callback(const typename Msg3D::ConstSharedPtr msg3d) { - std::unique_ptr st_ptr; - if (time_keeper_) st_ptr = std::make_unique(__func__, *time_keeper_); - - if (cached_det3d_msg_ptr_ != nullptr) { - // PROCESS: if the main message is remained (and roi is not collected all) publish the main - // message may processed partially with arrived 2d rois - stop_watch_ptr_->toc("processing_time", true); - exportProcess(); + if (!fusion_matching_strategy_) { + initialize_strategy(); + } - // reset flags - for (auto & det2d : det2d_list_) { - det2d.is_fused = false; - } + if (!init_collector_list_) { + initialize_collector_list(); } - // TIMING: reset timer to the timeout time - auto period = std::chrono::duration_cast( - std::chrono::duration(timeout_ms_)); - try { - setPeriod(period.count()); - } catch (rclcpp::exceptions::RCLError & ex) { - RCLCPP_WARN_THROTTLE(get_logger(), *get_clock(), 5000, "%s", ex.what()); + // Debug logging for message latency + if (collector_debug_mode_) { + auto arrive_time = this->get_clock()->now().seconds(); + auto msg3d_stamp = rclcpp::Time(msg3d->header.stamp).seconds(); + RCLCPP_DEBUG( + 
this->get_logger(), "msg3d timestamp: %lf, arrive time: %lf, latency: %lf", msg3d_stamp, + arrive_time, arrive_time - msg3d_stamp); } - timer_->reset(); + + std::unique_ptr st_ptr; + if (time_keeper_) st_ptr = std::make_unique(__func__, *time_keeper_); stop_watch_ptr_->toc("processing_time", true); + manage_collector_list(); - // PROCESS: preprocess the main message - typename Msg3D::SharedPtr output_msg = std::make_shared(*det3d_msg); - preprocess(*output_msg); - - // PROCESS: fuse the main message with the cached roi messages - // (please ask maintainers before parallelize this loop because debugger is not thread safe) - int64_t timestamp_nsec = - (*output_msg).header.stamp.sec * static_cast(1e9) + (*output_msg).header.stamp.nanosec; - // for loop for each roi - for (auto & det2d : det2d_list_) { - const auto roi_i = det2d.id; - - // check camera info - if (det2d.camera_projector_ptr == nullptr) { - RCLCPP_WARN_THROTTLE( - this->get_logger(), *this->get_clock(), 5000, "no camera info. 
id is %zu", roi_i); - continue; - } - auto & det2d_msgs = det2d.cached_det2d_msgs; - - // check if the roi is collected - if (det2d_msgs.size() == 0) continue; - - // MATCH: get the closest roi message, and remove outdated messages - int64_t min_interval = 1e9; - int64_t matched_stamp = -1; - std::list outdate_stamps; - for (const auto & [roi_stamp, value] : det2d_msgs) { - int64_t new_stamp = timestamp_nsec + det2d.input_offset_ms * static_cast(1e6); - int64_t interval = abs(static_cast(roi_stamp) - new_stamp); - - if (interval <= min_interval && interval <= match_threshold_ms_ * static_cast(1e6)) { - min_interval = interval; - matched_stamp = roi_stamp; - } else if ( - static_cast(roi_stamp) < new_stamp && - interval > match_threshold_ms_ * static_cast(1e6)) { - outdate_stamps.push_back(static_cast(roi_stamp)); - } - } - for (auto stamp : outdate_stamps) { - det2d_msgs.erase(stamp); - } + std::shared_ptr> selected_collector = nullptr; + // Lock fusion collectors list + std::unique_lock fusion_collectors_lock(fusion_collectors_mutex_); - // PROCESS: if matched, fuse the main message with the roi message - if (matched_stamp != -1) { - if (debugger_) { - debugger_->clear(); - } + auto msg3d_timestamp = rclcpp::Time(msg3d->header.stamp).seconds(); - fuseOnSingleImage(*det3d_msg, det2d, *(det2d_msgs[matched_stamp]), *output_msg); - det2d_msgs.erase(matched_stamp); - det2d.is_fused = true; - - // add timestamp interval for debug - if (debug_internal_pub_) { - double timestamp_interval_ms = (matched_stamp - timestamp_nsec) / 1e6; - debug_internal_pub_->publish( - "debug/roi" + std::to_string(roi_i) + "/timestamp_interval_ms", timestamp_interval_ms); - debug_internal_pub_->publish( - "debug/roi" + std::to_string(roi_i) + "/timestamp_interval_offset_ms", - timestamp_interval_ms - det2d.input_offset_ms); - } - } + // Create matching parameters + auto matching_context = std::make_shared(); + matching_context->msg3d_timestamp = msg3d_timestamp; + + // Try to find an existing 
FusionCollector that matches this message + auto fusion_collector = + !fusion_collectors_.empty() + ? fusion_matching_strategy_->match_msg3d_to_collector(fusion_collectors_, matching_context) + : std::nullopt; + + if (fusion_collector && fusion_collector.value()) { + selected_collector = fusion_collector.value(); } - // PROCESS: check if the fused message is ready to publish - cached_det3d_msg_timestamp_ = timestamp_nsec; - cached_det3d_msg_ptr_ = output_msg; - if (checkAllDet2dFused()) { - // if all camera fused, postprocess and publish the main message - exportProcess(); + // If no suitable collector was found, reuse the collector if the status is Idle + if (!selected_collector || selected_collector->get_status() == CollectorStatus::Finished) { + auto it = std::find_if( + fusion_collectors_.begin(), fusion_collectors_.end(), + [](const auto & collector) { return collector->get_status() == CollectorStatus::Idle; }); - // reset flags - for (auto & det2d : det2d_list_) { - det2d.is_fused = false; + if (it != fusion_collectors_.end()) { + selected_collector = *it; } + } + + fusion_collectors_lock.unlock(); + if (selected_collector) { + fusion_matching_strategy_->set_collector_info(selected_collector, matching_context); + selected_collector->process_msg3d(msg3d, msg3d_timeout_sec_); } else { - // if all of rois are not collected, publish the old Msg(if exists) and cache the - // current Msg - processing_time_ms = stop_watch_ptr_->toc("processing_time", true); + // Handle case where no suitable collector is found + RCLCPP_WARN(get_logger(), "No available FusionCollector in IDLE state."); + } + + if (matching_strategy_ == "advanced") { + // remove outdated messages in the concatenated map + manage_concatenated_status_map(msg3d_timestamp); + } + + if (debugger_) { + debugger_->clear(); } } template -void FusionNode::roiCallback( - const typename Msg2D::ConstSharedPtr det2d_msg, const std::size_t roi_i) +void FusionNode::rois_callback( + const typename 
Msg2D::ConstSharedPtr rois_msg, const std::size_t rois_id) { + if (!fusion_matching_strategy_) { + initialize_strategy(); + } + + if (!init_collector_list_) { + initialize_collector_list(); + } + + if (collector_debug_mode_) { + auto arrive_time = this->get_clock()->now().seconds(); + RCLCPP_DEBUG( + this->get_logger(), " rois %zu timestamp: %lf arrive time: %lf seconds, latency: %lf", + rois_id, rclcpp::Time(rois_msg->header.stamp).seconds(), arrive_time, + arrive_time - rclcpp::Time(rois_msg->header.stamp).seconds()); + } + std::unique_ptr st_ptr; if (time_keeper_) st_ptr = std::make_unique(__func__, *time_keeper_); - stop_watch_ptr_->toc("processing_time", true); + manage_collector_list(); - auto & det2d = det2d_list_.at(roi_i); - - int64_t timestamp_nsec = - (*det2d_msg).header.stamp.sec * static_cast(1e9) + (*det2d_msg).header.stamp.nanosec; - // if cached Msg exist, try to match - if (cached_det3d_msg_ptr_ != nullptr) { - int64_t new_stamp = - cached_det3d_msg_timestamp_ + det2d.input_offset_ms * static_cast(1e6); - int64_t interval = abs(timestamp_nsec - new_stamp); - - // PROCESS: if matched, fuse the main message with the roi message - if (interval < match_threshold_ms_ * static_cast(1e6) && det2d.is_fused == false) { - // check camera info - if (det2d.camera_projector_ptr == nullptr) { - RCLCPP_WARN_THROTTLE( - this->get_logger(), *this->get_clock(), 5000, "no camera info. 
id is %zu", roi_i); - det2d.cached_det2d_msgs[timestamp_nsec] = det2d_msg; - return; - } + std::shared_ptr> selected_collector = nullptr; + // Lock fusion collectors list + std::unique_lock fusion_collectors_lock(fusion_collectors_mutex_); - if (debugger_) { - debugger_->clear(); - } - // PROCESS: fuse the main message with the roi message - fuseOnSingleImage(*(cached_det3d_msg_ptr_), det2d, *det2d_msg, *(cached_det3d_msg_ptr_)); - det2d.is_fused = true; - - if (debug_internal_pub_) { - double timestamp_interval_ms = (timestamp_nsec - cached_det3d_msg_timestamp_) / 1e6; - debug_internal_pub_->publish( - "debug/roi" + std::to_string(roi_i) + "/timestamp_interval_ms", timestamp_interval_ms); - debug_internal_pub_->publish( - "debug/roi" + std::to_string(roi_i) + "/timestamp_interval_offset_ms", - timestamp_interval_ms - det2d.input_offset_ms); + auto rois_timestamp = rclcpp::Time(rois_msg->header.stamp).seconds(); + + // Create matching parameters + auto matching_context = std::make_shared(); + matching_context->rois_id = rois_id; + matching_context->rois_timestamp = rois_timestamp; + + // Try to find an existing FusionCollector that matches this message + auto fusion_collector = + !fusion_collectors_.empty() + ? 
fusion_matching_strategy_->match_rois_to_collector(fusion_collectors_, matching_context) + : std::nullopt; + + if (fusion_collector && fusion_collector.value()) { + selected_collector = fusion_collector.value(); + } + + // If no suitable collector was found, reuse the collector if the status is Idle + if (!selected_collector || selected_collector->get_status() == CollectorStatus::Finished) { + auto it = std::find_if( + fusion_collectors_.begin(), fusion_collectors_.end(), + [](const auto & collector) { return collector->get_status() == CollectorStatus::Idle; }); + + if (it != fusion_collectors_.end()) { + selected_collector = *it; + } + } + + fusion_collectors_lock.unlock(); + if (selected_collector) { + fusion_matching_strategy_->set_collector_info(selected_collector, matching_context); + selected_collector->process_rois(rois_id, rois_msg, rois_timeout_sec_); + } else { + // Handle case where no suitable collector is found + RCLCPP_WARN(get_logger(), "No available FusionCollector in IDLE state."); + } + + if (debugger_) { + debugger_->clear(); + } +} + +template +void FusionNode::diagnostic_callback( + const diagnostic_msgs::msg::DiagnosticArray::SharedPtr diagnostic_msg) +{ + for (const auto & status : diagnostic_msg->status) { + // Filter for the concatenate_and_time_sync_node diagnostic message + if ( + status.name == std::string_view("concatenate_data: /sensing/lidar/concatenate_data_status")) { + std::optional concatenate_timestamp_opt; + + // First pass: Locate concatenated_cloud_timestamp + for (const auto & value : status.values) { + if (value.key == std::string_view("concatenated_cloud_timestamp")) { + try { + concatenate_timestamp_opt = std::stod(value.value); + } catch (const std::exception & e) { + RCLCPP_ERROR(get_logger(), "Error parsing concatenated cloud timestamp: %s", e.what()); + } + } } - // PROCESS: if all camera fused, postprocess and publish the main message - if (checkAllDet2dFused()) { - exportProcess(); - // reset flags - for (auto & 
status : det2d_list_) { - status.is_fused = false; + // Second pass: Fill key-value map only if timestamp was valid + if (concatenate_timestamp_opt.has_value()) { + std::unordered_map key_value_map; + for (const auto & value : status.values) { + key_value_map.emplace(value.key, value.value); } + + concatenated_status_map_.emplace( + concatenate_timestamp_opt.value(), std::move(key_value_map)); } - processing_time_ms = processing_time_ms + stop_watch_ptr_->toc("processing_time", true); - return; } } - // store roi msg if not matched - det2d.cached_det2d_msgs[timestamp_nsec] = det2d_msg; } template -void FusionNode::timer_callback() +void FusionNode::manage_concatenated_status_map(double current_timestamp) { - std::unique_ptr st_ptr; - if (time_keeper_) st_ptr = std::make_unique(__func__, *time_keeper_); + constexpr double threshold_seconds = 1.0; // Define threshold as a constant + + // Remove old entries from concatenated_status_map_ + auto it = concatenated_status_map_.begin(); + while (it != concatenated_status_map_.end()) { + if (current_timestamp - it->first > threshold_seconds) { + RCLCPP_DEBUG( + get_logger(), "Removing old concatenation status for timestamp: %.9f", it->first); + it = concatenated_status_map_.erase(it); + } else { + ++it; + } + } +} - using std::chrono_literals::operator""ms; - timer_->cancel(); +template +void FusionNode::postprocess( + [[maybe_unused]] const Msg3D & processing_msg, [[maybe_unused]] ExportObj & output_msg) +{ + // Default implementation: No postprocessing. + // This function can be overridden by derived classes if needed. 
+} - // PROCESS: if timeout, postprocess cached msg - if (cached_det3d_msg_ptr_ != nullptr) { - stop_watch_ptr_->toc("processing_time", true); - exportProcess(); +template +void FusionNode::publish(const ExportObj & output_msg) +{ + if (!pub_ptr_) { + RCLCPP_WARN(get_logger(), "Publish failed: pub_ptr_ is null."); + return; } - // reset flags whether the message is fused or not - for (auto & det2d : det2d_list_) { - det2d.is_fused = false; + if (pub_ptr_->get_subscription_count() < 1) { + RCLCPP_DEBUG(get_logger(), "No subscribers, skipping publish."); + return; } + + pub_ptr_->publish(output_msg); } template -void FusionNode::setPeriod(const int64_t new_period) +void FusionNode::manage_collector_list() { - if (!timer_) { - return; + std::lock_guard collectors_lock(fusion_collectors_mutex_); + + int num_processing_collectors = 0; + + for (auto & collector : fusion_collectors_) { + if (collector->get_status() == CollectorStatus::Finished) { + collector->reset(); + } + + if (collector->get_status() == CollectorStatus::Processing) { + num_processing_collectors++; + } } - int64_t old_period = 0; - rcl_ret_t ret = rcl_timer_get_period(timer_->get_timer_handle().get(), &old_period); - if (ret != RCL_RET_OK) { - rclcpp::exceptions::throw_from_rcl_error(ret, "Couldn't get old period"); + + if (num_processing_collectors == num_of_collectors) { + auto min_it = fusion_collectors_.end(); + constexpr double k_max_timestamp = std::numeric_limits::max(); + double min_timestamp = k_max_timestamp; + + for (auto it = fusion_collectors_.begin(); it != fusion_collectors_.end(); ++it) { + if ((*it)->get_status() == CollectorStatus::Processing) { + auto info = (*it)->get_info(); + double timestamp = k_max_timestamp; + + if (auto naive_info = std::dynamic_pointer_cast(info)) { + timestamp = naive_info->timestamp; + } else if (auto advanced_info = std::dynamic_pointer_cast(info)) { + timestamp = advanced_info->timestamp; + } else { + continue; + } + + if (timestamp < min_timestamp) { + 
min_timestamp = timestamp;
+          min_it = it;
+        }
+      }
+    }
+
+    // Reset the collector with the oldest timestamp if found
+    if (min_it != fusion_collectors_.end()) {
+      RCLCPP_WARN_STREAM_THROTTLE(
+        this->get_logger(), *this->get_clock(), 1000,
+        "Reset the oldest collector because the number of processing collectors ("
+          << num_processing_collectors << ") equals the limit (" << num_of_collectors << ").");
+      (*min_it)->reset();
+    }
   }
-  ret = rcl_timer_exchange_period(timer_->get_timer_handle().get(), new_period, &old_period);
-  if (ret != RCL_RET_OK) {
-    rclcpp::exceptions::throw_from_rcl_error(ret, "Couldn't exchange_period");
+}
+
+template
+std::optional>
+FusionNode::find_concatenation_status(double timestamp)
+{
+  auto it = concatenated_status_map_.find(timestamp);
+  if (it != concatenated_status_map_.end()) {
+    return it->second;
   }
+  return std::nullopt;
 }
 
 template
-void FusionNode::postprocess(
-  const Msg3D & processing_msg __attribute__((unused)),
-  ExportObj & output_msg __attribute__((unused)))
+std::string FusionNode::format_timestamp(double timestamp)
 {
-  // do nothing by default
+  std::ostringstream oss;
+  oss << std::fixed << std::setprecision(9) << timestamp;
+  return oss.str();
 }
 
 template
-void FusionNode::publish(const ExportObj & output_msg)
+void FusionNode::show_diagnostic_message(
+  std::unordered_map id_to_stamp_map,
+  std::shared_ptr collector_info)
 {
-  if (pub_ptr_->get_subscription_count() < 1) {
-    return;
+  msg3d_fused_ = false;
+  diagnostic_collector_info_ = std::move(collector_info);
+  diagnostic_id_to_stamp_map_ = std::move(id_to_stamp_map);
+  diagnostic_updater_.force_update();
+}
+
+template
+void FusionNode::check_fusion_status(
+  diagnostic_updater::DiagnosticStatusWrapper & stat)
+{
+  if (publish_output_msg_ || drop_previous_but_late_output_msg_ || !msg3d_fused_) {
+    stat.add("msg3d/is_fused", msg3d_fused_);
+
+    if (msg3d_fused_) {
+      stat.add("fused_timestamp", format_timestamp(current_output_msg_timestamp_));
+    }
+
+    if (
+      auto 
naive_info = std::dynamic_pointer_cast(diagnostic_collector_info_)) {
+      stat.add("first_input_arrival_timestamp", format_timestamp(naive_info->timestamp));
+    } else if (
+      auto advanced_info =
+        std::dynamic_pointer_cast(diagnostic_collector_info_)) {
+      stat.add(
+        "reference_timestamp_min",
+        format_timestamp(advanced_info->timestamp - advanced_info->noise_window));
+      stat.add(
+        "reference_timestamp_max",
+        format_timestamp(advanced_info->timestamp + advanced_info->noise_window));
+    }
+
+    bool rois_miss = false;
+    bool fusion_success = msg3d_fused_;
+
+    for (std::size_t id = 0; id < rois_number_; ++id) {
+      std::string rois_prefix = "rois" + std::to_string(id);
+      bool input_rois_fused = true;
+
+      auto it = diagnostic_id_to_stamp_map_.find(id);
+      if (it != diagnostic_id_to_stamp_map_.end()) {
+        stat.add(rois_prefix + "/timestamp", format_timestamp(it->second));
+      } else {
+        rois_miss = true;
+        fusion_success = false;
+        input_rois_fused = false;
+      }
+      stat.add(rois_prefix + "/is_fused", input_rois_fused);
+    }
+
+    stat.add("fusion_success", fusion_success);
+
+    int8_t level = diagnostic_msgs::msg::DiagnosticStatus::OK;
+    std::string message = "Fused output is published and includes all rois and msg3d";
+
+    if (drop_previous_but_late_output_msg_) {
+      level = diagnostic_msgs::msg::DiagnosticStatus::ERROR;
+      message =
+        rois_miss
+          ? 
"Fused output msg misses some ROIs and is not published because it arrived too late"
+          : "Fused output msg is not published as it is too late";
+    } else if (rois_miss) {
+      level = diagnostic_msgs::msg::DiagnosticStatus::ERROR;
+      message = "Fused output msg is published but misses some ROIs";
+    } else if (!msg3d_fused_) {
+      level = diagnostic_msgs::msg::DiagnosticStatus::ERROR;
+      message = "Fused output msg is not published as msg3d is missing";
+    }
+
+    stat.summary(level, message);
+
+    // Reset status flags
+    publish_output_msg_ = false;
+    drop_previous_but_late_output_msg_ = false;
+    msg3d_fused_ = true;
+  } else {
+    stat.summary(
+      diagnostic_msgs::msg::DiagnosticStatus::OK,
+      "Fusion node launched successfully, but waiting for input pointcloud");
   }
-  pub_ptr_->publish(output_msg);
 }
 
 // Explicit instantiation for the supported types
diff --git a/perception/autoware_image_projection_based_fusion/src/pointpainting_fusion/node.cpp b/perception/autoware_image_projection_based_fusion/src/pointpainting_fusion/node.cpp
index 4becd1f587a8e..a74a36880ee6d 100644
--- a/perception/autoware_image_projection_based_fusion/src/pointpainting_fusion/node.cpp
+++ b/perception/autoware_image_projection_based_fusion/src/pointpainting_fusion/node.cpp
@@ -157,8 +157,8 @@ PointPaintingFusionNode::PointPaintingFusionNode(const rclcpp::NodeOptions & opt
 
   // subscriber
   std::function sub_callback =
-    std::bind(&PointPaintingFusionNode::subCallback, this, std::placeholders::_1);
-  det3d_sub_ = this->create_subscription(
+    std::bind(&PointPaintingFusionNode::sub_callback, this, std::placeholders::_1);
+  msg3d_sub_ = this->create_subscription(
     "~/input/pointcloud", rclcpp::SensorDataQoS().keep_last(3), sub_callback);
 
   // publisher
@@ -258,9 +258,9 @@ void PointPaintingFusionNode::preprocess(PointCloudMsgType & painted_pointcloud_
     static_cast(painted_pointcloud_msg.data.size() / painted_pointcloud_msg.height);
 }
 
-void PointPaintingFusionNode::fuseOnSingleImage(
+void 
PointPaintingFusionNode::fuse_on_single_image( __attribute__((unused)) const PointCloudMsgType & input_pointcloud_msg, - const Det2dStatus & det2d, const RoiMsgType & input_roi_msg, + const Det2dStatus & det2d_status, const RoiMsgType & input_rois_msg, PointCloudMsgType & painted_pointcloud_msg) { if (painted_pointcloud_msg.data.empty() || painted_pointcloud_msg.fields.empty()) { @@ -269,7 +269,7 @@ void PointPaintingFusionNode::fuseOnSingleImage( return; } - auto num_bbox = (input_roi_msg.feature_objects).size(); + auto num_bbox = (input_rois_msg.feature_objects).size(); if (num_bbox == 0) { return; } @@ -285,8 +285,8 @@ void PointPaintingFusionNode::fuseOnSingleImage( Eigen::Affine3f lidar2cam_affine; { const auto transform_stamped_optional = getTransformStamped( - tf_buffer_, /*target*/ input_roi_msg.header.frame_id, - /*source*/ painted_pointcloud_msg.header.frame_id, input_roi_msg.header.stamp); + tf_buffer_, /*target*/ input_rois_msg.header.frame_id, + /*source*/ painted_pointcloud_msg.header.frame_id, input_rois_msg.header.stamp); if (!transform_stamped_optional) { return; } @@ -314,7 +314,7 @@ dc | dc dc dc dc ||zc| |dc| **/ - auto objects = input_roi_msg.feature_objects; + auto objects = input_rois_msg.feature_objects; int iterations = painted_pointcloud_msg.data.size() / painted_pointcloud_msg.point_step; // iterate points // Requires 'OMP_NUM_THREADS=N' @@ -338,13 +338,13 @@ dc | dc dc dc dc ||zc| p_y = point_camera.y(); p_z = point_camera.z(); - if (det2d.camera_projector_ptr->isOutsideHorizontalView(p_x, p_z)) { + if (det2d_status.camera_projector_ptr->isOutsideHorizontalView(p_x, p_z)) { continue; } // project Eigen::Vector2d projected_point; - if (det2d.camera_projector_ptr->calcImageProjectedPoint( + if (det2d_status.camera_projector_ptr->calcImageProjectedPoint( cv::Point3d(p_x, p_y, p_z), projected_point)) { // iterate 2d bbox for (const auto & feature_object : objects) { @@ -373,7 +373,7 @@ dc | dc dc dc dc ||zc| if (time_keeper_) inner_st_ptr = 
std::make_unique("publish debug message", *time_keeper_); - for (const auto & feature_object : input_roi_msg.feature_objects) { + for (const auto & feature_object : input_rois_msg.feature_objects) { debug_image_rois.push_back(feature_object.feature.roi); } @@ -383,7 +383,7 @@ dc | dc dc dc dc ||zc| debugger_->image_rois_ = debug_image_rois; debugger_->obstacle_points_ = debug_image_points; - debugger_->publishImage(det2d.id, input_roi_msg.header.stamp); + debugger_->publishImage(det2d_status.id, input_rois_msg.header.stamp); } } } diff --git a/perception/autoware_image_projection_based_fusion/src/roi_cluster_fusion/node.cpp b/perception/autoware_image_projection_based_fusion/src/roi_cluster_fusion/node.cpp index 374357b0bc181..1d8bc3fc7656e 100644 --- a/perception/autoware_image_projection_based_fusion/src/roi_cluster_fusion/node.cpp +++ b/perception/autoware_image_projection_based_fusion/src/roi_cluster_fusion/node.cpp @@ -73,14 +73,15 @@ void RoiClusterFusionNode::preprocess(ClusterMsgType & output_cluster_msg) } } -void RoiClusterFusionNode::fuseOnSingleImage( - const ClusterMsgType & input_cluster_msg, const Det2dStatus & det2d, - const RoiMsgType & input_roi_msg, ClusterMsgType & output_cluster_msg) +void RoiClusterFusionNode::fuse_on_single_image( + const ClusterMsgType & input_cluster_msg, const Det2dStatus & det2d_status, + const RoiMsgType & input_rois_msg, ClusterMsgType & output_cluster_msg) { std::unique_ptr st_ptr; if (time_keeper_) st_ptr = std::make_unique(__func__, *time_keeper_); - const sensor_msgs::msg::CameraInfo & camera_info = det2d.camera_projector_ptr->getCameraInfo(); + const sensor_msgs::msg::CameraInfo & camera_info = + det2d_status.camera_projector_ptr->getCameraInfo(); // get transform from cluster frame id to camera optical frame id geometry_msgs::msg::TransformStamped transform_stamped; @@ -129,7 +130,7 @@ void RoiClusterFusionNode::fuseOnSingleImage( } Eigen::Vector2d projected_point; - if 
(det2d.camera_projector_ptr->calcImageProjectedPoint( + if (det2d_status.camera_projector_ptr->calcImageProjectedPoint( cv::Point3d(*iter_x, *iter_y, *iter_z), projected_point)) { const int px = static_cast(projected_point.x()); const int py = static_cast(projected_point.y()); @@ -156,7 +157,7 @@ void RoiClusterFusionNode::fuseOnSingleImage( if (debugger_) debugger_->obstacle_rois_.push_back(roi); } - for (const auto & feature_obj : input_roi_msg.feature_objects) { + for (const auto & feature_obj : input_rois_msg.feature_objects) { int index = -1; bool associated = false; double max_iou = 0.0; @@ -210,7 +211,7 @@ void RoiClusterFusionNode::fuseOnSingleImage( // note: debug objects are safely cleared in fusion_node.cpp if (debugger_) { - debugger_->publishImage(det2d.id, input_roi_msg.header.stamp); + debugger_->publishImage(det2d_status.id, input_rois_msg.header.stamp); } } diff --git a/perception/autoware_image_projection_based_fusion/src/roi_detected_object_fusion/node.cpp b/perception/autoware_image_projection_based_fusion/src/roi_detected_object_fusion/node.cpp index f01353d0265cd..12fd576da97fc 100644 --- a/perception/autoware_image_projection_based_fusion/src/roi_detected_object_fusion/node.cpp +++ b/perception/autoware_image_projection_based_fusion/src/roi_detected_object_fusion/node.cpp @@ -83,9 +83,9 @@ void RoiDetectedObjectFusionNode::preprocess(DetectedObjects & output_msg) ignored_object_flags_map_.insert(std::make_pair(timestamp_nsec, ignored_object_flags)); } -void RoiDetectedObjectFusionNode::fuseOnSingleImage( - const DetectedObjects & input_object_msg, const Det2dStatus & det2d, - const RoiMsgType & input_roi_msg, DetectedObjects & output_object_msg __attribute__((unused))) +void RoiDetectedObjectFusionNode::fuse_on_single_image( + const DetectedObjects & input_object_msg, const Det2dStatus & det2d_status, + const RoiMsgType & input_rois_msg, DetectedObjects & output_object_msg __attribute__((unused))) { std::unique_ptr st_ptr; if (time_keeper_) 
st_ptr = std::make_unique(__func__, *time_keeper_); @@ -93,8 +93,8 @@ void RoiDetectedObjectFusionNode::fuseOnSingleImage( Eigen::Affine3d object2camera_affine; { const auto transform_stamped_optional = getTransformStamped( - tf_buffer_, /*target*/ input_roi_msg.header.frame_id, - /*source*/ input_object_msg.header.frame_id, input_roi_msg.header.stamp); + tf_buffer_, /*target*/ input_rois_msg.header.frame_id, + /*source*/ input_object_msg.header.frame_id, input_rois_msg.header.stamp); if (!transform_stamped_optional) { return; } @@ -102,21 +102,21 @@ void RoiDetectedObjectFusionNode::fuseOnSingleImage( } const auto object_roi_map = - generateDetectedObjectRoIs(input_object_msg, det2d, object2camera_affine); - fuseObjectsOnImage(input_object_msg, input_roi_msg.feature_objects, object_roi_map); + generateDetectedObjectRoIs(input_object_msg, det2d_status, object2camera_affine); + fuseObjectsOnImage(input_object_msg, input_rois_msg.feature_objects, object_roi_map); if (debugger_) { - debugger_->image_rois_.reserve(input_roi_msg.feature_objects.size()); - for (std::size_t roi_i = 0; roi_i < input_roi_msg.feature_objects.size(); ++roi_i) { - debugger_->image_rois_.push_back(input_roi_msg.feature_objects.at(roi_i).feature.roi); + debugger_->image_rois_.reserve(input_rois_msg.feature_objects.size()); + for (std::size_t roi_i = 0; roi_i < input_rois_msg.feature_objects.size(); ++roi_i) { + debugger_->image_rois_.push_back(input_rois_msg.feature_objects.at(roi_i).feature.roi); } - debugger_->publishImage(det2d.id, input_roi_msg.header.stamp); + debugger_->publishImage(det2d_status.id, input_rois_msg.header.stamp); } } std::map RoiDetectedObjectFusionNode::generateDetectedObjectRoIs( - const DetectedObjects & input_object_msg, const Det2dStatus & det2d, + const DetectedObjects & input_object_msg, const Det2dStatus & det2d_status, const Eigen::Affine3d & object2camera_affine) { std::unique_ptr st_ptr; @@ -132,7 +132,8 @@ RoiDetectedObjectFusionNode::generateDetectedObjectRoIs( 
return object_roi_map; } const auto & passthrough_object_flags = passthrough_object_flags_map_.at(timestamp_nsec); - const sensor_msgs::msg::CameraInfo & camera_info = det2d.camera_projector_ptr->getCameraInfo(); + const sensor_msgs::msg::CameraInfo & camera_info = + det2d_status.camera_projector_ptr->getCameraInfo(); const double image_width = static_cast(camera_info.width); const double image_height = static_cast(camera_info.height); @@ -163,7 +164,7 @@ RoiDetectedObjectFusionNode::generateDetectedObjectRoIs( } Eigen::Vector2d proj_point; - if (det2d.camera_projector_ptr->calcImageProjectedPoint( + if (det2d_status.camera_projector_ptr->calcImageProjectedPoint( cv::Point3d(point.x(), point.y(), point.z()), proj_point)) { const double px = proj_point.x(); const double py = proj_point.y(); diff --git a/perception/autoware_image_projection_based_fusion/src/roi_pointcloud_fusion/node.cpp b/perception/autoware_image_projection_based_fusion/src/roi_pointcloud_fusion/node.cpp index e6596a9e28602..c0a87be0a8684 100644 --- a/perception/autoware_image_projection_based_fusion/src/roi_pointcloud_fusion/node.cpp +++ b/perception/autoware_image_projection_based_fusion/src/roi_pointcloud_fusion/node.cpp @@ -49,9 +49,9 @@ RoiPointCloudFusionNode::RoiPointCloudFusionNode(const rclcpp::NodeOptions & opt cluster_debug_pub_ = this->create_publisher("debug/clusters", 1); } -void RoiPointCloudFusionNode::fuseOnSingleImage( - const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus & det2d, - const RoiMsgType & input_roi_msg, +void RoiPointCloudFusionNode::fuse_on_single_image( + const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus & det2d_status, + const RoiMsgType & input_rois_msg, __attribute__((unused)) PointCloudMsgType & output_pointcloud_msg) { std::unique_ptr st_ptr; @@ -65,7 +65,7 @@ void RoiPointCloudFusionNode::fuseOnSingleImage( std::vector debug_image_rois; std::vector debug_image_points; // select ROIs for fusion - for (const auto & feature_obj : 
input_roi_msg.feature_objects) { + for (const auto & feature_obj : input_rois_msg.feature_objects) { if (fuse_unknown_only_) { bool is_roi_label_unknown = feature_obj.object.classification.front().label == autoware_perception_msgs::msg::ObjectClassification::UNKNOWN; @@ -86,7 +86,7 @@ void RoiPointCloudFusionNode::fuseOnSingleImage( if (debugger_) { debugger_->image_rois_ = debug_image_rois; debugger_->obstacle_points_ = debug_image_points; - debugger_->publishImage(det2d.id, input_roi_msg.header.stamp); + debugger_->publishImage(det2d_status.id, input_rois_msg.header.stamp); } return; } @@ -94,8 +94,8 @@ void RoiPointCloudFusionNode::fuseOnSingleImage( geometry_msgs::msg::TransformStamped transform_stamped; { const auto transform_stamped_optional = getTransformStamped( - tf_buffer_, input_roi_msg.header.frame_id, input_pointcloud_msg.header.frame_id, - input_roi_msg.header.stamp); + tf_buffer_, input_rois_msg.header.frame_id, input_pointcloud_msg.header.frame_id, + input_rois_msg.header.stamp); if (!transform_stamped_optional) { return; } @@ -134,7 +134,7 @@ void RoiPointCloudFusionNode::fuseOnSingleImage( } Eigen::Vector2d projected_point; - if (det2d.camera_projector_ptr->calcImageProjectedPoint( + if (det2d_status.camera_projector_ptr->calcImageProjectedPoint( cv::Point3d(transformed_x, transformed_y, transformed_z), projected_point)) { for (std::size_t i = 0; i < output_objs.size(); ++i) { auto & feature_obj = output_objs.at(i); @@ -169,14 +169,14 @@ void RoiPointCloudFusionNode::fuseOnSingleImage( // refine and update output_fused_objects_ updateOutputFusedObjects( - output_objs, clusters, clusters_data_size, input_pointcloud_msg, input_roi_msg.header, + output_objs, clusters, clusters_data_size, input_pointcloud_msg, input_rois_msg.header, tf_buffer_, min_cluster_size_, max_cluster_size_, cluster_2d_tolerance_, output_fused_objects_); // publish debug image if (debugger_) { debugger_->image_rois_ = debug_image_rois; debugger_->obstacle_points_ = 
debug_image_points; - debugger_->publishImage(det2d.id, input_roi_msg.header.stamp); + debugger_->publishImage(det2d_status.id, input_rois_msg.header.stamp); } } diff --git a/perception/autoware_image_projection_based_fusion/src/segmentation_pointcloud_fusion/node.cpp b/perception/autoware_image_projection_based_fusion/src/segmentation_pointcloud_fusion/node.cpp index 0cb3b7cb3c65f..cf726797f0937 100644 --- a/perception/autoware_image_projection_based_fusion/src/segmentation_pointcloud_fusion/node.cpp +++ b/perception/autoware_image_projection_based_fusion/src/segmentation_pointcloud_fusion/node.cpp @@ -62,8 +62,8 @@ void SegmentPointCloudFusionNode::preprocess(__attribute__((unused)) return; } -void SegmentPointCloudFusionNode::fuseOnSingleImage( - const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus & det2d, +void SegmentPointCloudFusionNode::fuse_on_single_image( + const PointCloudMsgType & input_pointcloud_msg, const Det2dStatus & det2d_status, [[maybe_unused]] const Image & input_mask, __attribute__((unused)) PointCloudMsgType & output_cloud) { @@ -77,7 +77,8 @@ void SegmentPointCloudFusionNode::fuseOnSingleImage( return; } - const sensor_msgs::msg::CameraInfo & camera_info = det2d.camera_projector_ptr->getCameraInfo(); + const sensor_msgs::msg::CameraInfo & camera_info = + det2d_status.camera_projector_ptr->getCameraInfo(); std::vector mask_data(input_mask.data.begin(), input_mask.data.end()); cv::Mat mask = perception_utils::runLengthDecoder(mask_data, input_mask.height, input_mask.width); @@ -127,7 +128,7 @@ void SegmentPointCloudFusionNode::fuseOnSingleImage( } Eigen::Vector2d projected_point; - if (!det2d.camera_projector_ptr->calcImageProjectedPoint( + if (!det2d_status.camera_projector_ptr->calcImageProjectedPoint( cv::Point3d(transformed_x, transformed_y, transformed_z), projected_point)) { continue; } diff --git a/perception/autoware_image_projection_based_fusion/src/utils/utils.cpp 
b/perception/autoware_image_projection_based_fusion/src/utils/utils.cpp index 3bc4290fe991c..1ac1a4d11076c 100644 --- a/perception/autoware_image_projection_based_fusion/src/utils/utils.cpp +++ b/perception/autoware_image_projection_based_fusion/src/utils/utils.cpp @@ -22,7 +22,7 @@ namespace autoware::image_projection_based_fusion { -bool checkCameraInfo(const sensor_msgs::msg::CameraInfo & camera_info) +bool check_camera_info(const sensor_msgs::msg::CameraInfo & camera_info) { const bool is_supported_model = (camera_info.distortion_model == sensor_msgs::distortion_models::PLUMB_BOB || @@ -30,7 +30,7 @@ bool checkCameraInfo(const sensor_msgs::msg::CameraInfo & camera_info) if (!is_supported_model) { RCLCPP_ERROR_STREAM( rclcpp::get_logger("image_projection_based_fusion"), - "checkCameraInfo: Unsupported distortion model: " << camera_info.distortion_model); + "check_camera_info: Unsupported distortion model: " << camera_info.distortion_model); return false; } const bool is_supported_distortion_param = @@ -38,7 +38,7 @@ bool checkCameraInfo(const sensor_msgs::msg::CameraInfo & camera_info) if (!is_supported_distortion_param) { RCLCPP_ERROR_STREAM( rclcpp::get_logger("image_projection_based_fusion"), - "checkCameraInfo: Unsupported distortion coefficients size: " << camera_info.d.size()); + "check_camera_info: Unsupported distortion coefficients size: " << camera_info.d.size()); return false; } return true; diff --git a/sensing/autoware_pointcloud_preprocessor/include/autoware/pointcloud_preprocessor/concatenate_data/collector_matching_strategy.hpp b/sensing/autoware_pointcloud_preprocessor/include/autoware/pointcloud_preprocessor/concatenate_data/collector_matching_strategy.hpp index 7a95ca859c165..0f6e9ba8cb4c1 100644 --- a/sensing/autoware_pointcloud_preprocessor/include/autoware/pointcloud_preprocessor/concatenate_data/collector_matching_strategy.hpp +++ 
b/sensing/autoware_pointcloud_preprocessor/include/autoware/pointcloud_preprocessor/concatenate_data/collector_matching_strategy.hpp @@ -76,6 +76,4 @@ class AdvancedMatchingStrategy : public CollectorMatchingStrategy std::unordered_map topic_to_noise_window_map_; }; -std::shared_ptr parse_matching_strategy(rclcpp::Node & node); - } // namespace autoware::pointcloud_preprocessor diff --git a/sensing/autoware_pointcloud_preprocessor/src/concatenate_data/concatenate_and_time_sync_node.cpp b/sensing/autoware_pointcloud_preprocessor/src/concatenate_data/concatenate_and_time_sync_node.cpp index 00116febc7f74..fa3ca93702aad 100644 --- a/sensing/autoware_pointcloud_preprocessor/src/concatenate_data/concatenate_and_time_sync_node.cpp +++ b/sensing/autoware_pointcloud_preprocessor/src/concatenate_data/concatenate_and_time_sync_node.cpp @@ -144,9 +144,17 @@ PointCloudConcatenateDataSynchronizerComponent::PointCloudConcatenateDataSynchro params_.has_static_tf_only); // Diagnostic Updater - diagnostic_updater_.setHardwareID("concatenate_data_checker"); + std::ostringstream hardware_id_stream; + hardware_id_stream << this->get_fully_qualified_name() << "_checker"; + std::string hardware_id = hardware_id_stream.str(); + + std::ostringstream diagnostic_name_stream; + diagnostic_name_stream << this->get_fully_qualified_name() << "_status"; + std::string diagnostic_name = diagnostic_name_stream.str(); + + diagnostic_updater_.setHardwareID(hardware_id); diagnostic_updater_.add( - "concat_status", this, &PointCloudConcatenateDataSynchronizerComponent::check_concat_status); + diagnostic_name, this, &PointCloudConcatenateDataSynchronizerComponent::check_concat_status); } std::string PointCloudConcatenateDataSynchronizerComponent::replace_sync_topic_name_postfix(