diff --git a/perception/autoware_image_projection_based_fusion/CMakeLists.txt b/perception/autoware_image_projection_based_fusion/CMakeLists.txt
index 7c8160d6fe1b7..5074741808018 100644
--- a/perception/autoware_image_projection_based_fusion/CMakeLists.txt
+++ b/perception/autoware_image_projection_based_fusion/CMakeLists.txt
@@ -17,6 +17,8 @@ endif()
# Build non-CUDA dependent nodes
ament_auto_add_library(${PROJECT_NAME} SHARED
src/camera_projection.cpp
+ src/fusion_collector.cpp
+ src/fusion_matching_strategy.cpp
src/fusion_node.cpp
src/debugger.cpp
src/utils/geometry.cpp
diff --git a/perception/autoware_image_projection_based_fusion/README.md b/perception/autoware_image_projection_based_fusion/README.md
index dcf35e45bbd9d..ab9e1224aa1b1 100644
--- a/perception/autoware_image_projection_based_fusion/README.md
+++ b/perception/autoware_image_projection_based_fusion/README.md
@@ -2,81 +2,239 @@
## Purpose
-The `autoware_image_projection_based_fusion` is a package to fuse detected obstacles (bounding box or segmentation) from image and 3d pointcloud or obstacles (bounding box, cluster or segmentation).
+The `autoware_image_projection_based_fusion` package is designed to enhance obstacle detection accuracy by integrating information from both image-based and LiDAR-based perception. It fuses detected obstacles — such as bounding boxes or segmentation — from 2D images with 3D point clouds or other obstacle representations, including bounding boxes, clusters, or segmentation. This fusion helps to refine obstacle classification and detection in autonomous driving applications.
-## Inner-workings / Algorithms
+### Fusion algorithms
-### Sync Algorithm
+The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:
-#### matching
+| Fusion Name | Description | Detail |
+| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
+| roi_cluster_fusion | Assigns classification labels to LiDAR-detected clusters by matching them with Regions of Interest (ROIs) from a 2D object detector. | [link](./docs/roi-cluster-fusion.md) |
+| roi_detected_object_fusion | Updates classification labels of detected objects using ROI information from a 2D object detector. | [link](./docs/roi-detected-object-fusion.md) |
+| pointpainting_fusion | Augments the point cloud by painting each point with additional information from ROIs of a 2D object detector. The enriched point cloud is then processed by a 3D object detector for improved accuracy. | [link](./docs/pointpainting-fusion.md) |
+| roi_pointcloud_fusion          | Matches the point cloud with ROIs from a 2D object detector to detect unknown-labeled objects.                                                                                                             | [link](./docs/roi-pointcloud-fusion.md)          |
+| segmentation_pointcloud_fusion | Filters out points that belong to less relevant regions, as defined by semantic or instance segmentation from a 2D image segmentation model.                                                               | [link](./docs/segmentation-pointcloud-fusion.md) |
-The offset between each camera and the lidar is set according to their shutter timing.
-After applying the offset to the timestamp, if the interval between the timestamp of pointcloud topic and the roi message is less than the match threshold, the two messages are matched.
+## Inner Workings / Algorithms
-
+
+
+The fusion process operates on two primary types of input data:
+
+- **Msg3d**: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
+- **RoIs** (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.
+
+Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.
+
+The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.
+
+### Step 1: Matching and Creating a Collector
+
+When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.
+
+- If a matching collector is found, the incoming data is added to it.
+- If no matching collector exists, a new collector is created and initialized with the reference timestamp.
+
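+Below is a minimal C++ sketch of this matching step (illustration only; `Collector` and `match_or_create` are hypothetical simplifications of the package's actual `FusionCollector` and matching-strategy classes):
+
+```cpp
+#include <cmath>
+#include <list>
+#include <memory>
+
+// Hypothetical, simplified collector used only for this illustration.
+struct Collector
+{
+  double reference_timestamp;
+};
+
+// Step 1: subtract the per-camera offset, then look for an existing collector
+// whose reference timestamp lies within the matching tolerance.
+std::shared_ptr<Collector> match_or_create(
+  std::list<std::shared_ptr<Collector>> & collectors, double msg_timestamp, double offset,
+  double tolerance)
+{
+  const double reference_timestamp = msg_timestamp - offset;
+  for (auto & collector : collectors) {
+    if (std::fabs(collector->reference_timestamp - reference_timestamp) < tolerance) {
+      return collector;  // matching collector found: the incoming data is added to it
+    }
+  }
+  // No match: create a new collector initialized with the reference timestamp.
+  auto new_collector = std::make_shared<Collector>(Collector{reference_timestamp});
+  collectors.push_back(new_collector);
+  return new_collector;
+}
+```
+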
+### Step 2: Triggering the Timer
+
+Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by either `msg3d_timeout_sec` for msg3d or `rois_timeout_sec` for RoIs.
+
+The collector will attempt to fuse the collected 3D and 2D data either:
+
+- When both Msg3d and RoI data are available, or
+- When the timer expires.
+
+If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.
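+
+The timeout selection can be summarized as follows (a minimal sketch; `select_timeout` is a hypothetical helper, while the parameter names come from `fusion_common.param.yaml`):
+
+```cpp
+#include <chrono>
+
+// Step 2: the countdown duration depends on which message type created the collector.
+// If msg3d arrived first, fusion should run as soon as possible, so the wait is short.
+// If RoIs arrived first, the collector waits longer for the essential msg3d.
+std::chrono::duration<double> select_timeout(
+  bool first_message_is_msg3d, double msg3d_timeout_sec, double rois_timeout_sec)
+{
+  const double timeout_sec = first_message_is_msg3d ? msg3d_timeout_sec : rois_timeout_sec;
+  return std::chrono::duration<double>(timeout_sec);
+}
+```
+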
-current default value at autoware.universe for TIER IV Robotaxi are: - input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] - match_threshold_ms: 30.0
+### Step 3: Fusion Process
-#### fusion and timer
+The fusion process consists of three main stages:
+1. **Preprocessing** – Preparing the input data for fusion.
+2. **Fusion** – Aligning and merging RoIs with the 3D point cloud.
+3. **Postprocessing** – Refining the fused output based on the algorithm's requirements.
+
+The specific operations performed during these stages may vary depending on the type of fusion being applied.
+
+### Step 4: Publishing the Fused Result
+
+After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.
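+
+The collector lifecycle can be pictured as a small state machine (a sketch built around the `CollectorStatus` enum from `fusion_collector.hpp`; the transition helper itself is hypothetical):
+
+```cpp
+// States declared in fusion_collector.hpp: Idle -> Processing -> Finished -> Idle.
+enum class CollectorStatus { Idle, Processing, Finished };
+
+// Hypothetical transition helper illustrating the lifecycle described above.
+CollectorStatus next_status(CollectorStatus status)
+{
+  switch (status) {
+    case CollectorStatus::Idle:
+      return CollectorStatus::Processing;  // first message arrives, timer starts
+    case CollectorStatus::Processing:
+      return CollectorStatus::Finished;  // fusion ran: all data arrived or the timer expired
+    case CollectorStatus::Finished:
+    default:
+      return CollectorStatus::Idle;  // fused output published, collector reset for reuse
+  }
+}
+```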
+
+The figure below shows how the input data is fused in different scenarios.
+
+![fusion_algorithm](./docs/images/fusion_algorithm.drawio.svg)
-The subscription status of the message is signed with 'O'.
+## Parameters
-1.if a pointcloud message is subscribed under the below condition:
+All of the fusion nodes share the common parameters described in the following schema:
-| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
-| :-----------------: | :--------: | :-------: | :-------: | :-------: |
-| subscription status | | O | O | O |
+{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}
-If the roi msgs can be matched, fuse them and postprocess the pointcloud message.
-Otherwise, fuse the matched roi msgs and cache the pointcloud.
+### Parameter Settings
-2.if a pointcloud message is subscribed under the below condition:
+#### Timeout
-| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
-| :-----------------: | :--------: | :-------: | :-------: | :-------: |
-| subscription status | | O | O | |
+The order in which `RoIs` or the `msg3d` message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse `2D RoIs` with `msg3d` data, `msg3d` is essential for processing.
-if the roi msgs can be matched, fuse them and cache the pointcloud.
+If `RoIs` arrive earlier, they must wait until `msg3d` is received. You can adjust the waiting time using the `rois_timeout_sec` parameter.
-3.if a pointcloud message is subscribed under the below condition:
+If `msg3d` arrives first, the fusion process should proceed as quickly as possible, so the waiting time for `msg3d` (`msg3d_timeout_sec`) should be kept minimal.
-| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
-| :-----------------: | :--------: | :-------: | :-------: | :-------: |
-| subscription status | O | O | O | |
+#### RoIs Offsets
-If the roi msg 3 is subscribed before the next pointcloud message coming or timeout, fuse it if matched, otherwise wait for the next roi msg 3.
-If the roi msg 3 is not subscribed before the next pointcloud message coming or timeout, postprocess the pointcloud message as it is.
+The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, users must understand the timing offset between the `RoIs` and `msg3d`. Once this offset is known, it should be specified in the parameter `rois_timestamp_offsets`.
-The timeout threshold should be set according to the postprocessing time.
-E.g, if the postprocessing time is around 50ms, the timeout threshold should be set smaller than 50ms, so that the whole processing time could be less than 100ms.
-current default value at autoware.universe for XX1: - timeout_ms: 50.0
+In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area where the camera is facing, the camera is triggered, capturing an image with a corresponding timestamp. The `rois_timestamp_offsets` can then be calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, the `rois_timestamp_offsets` would be `[0.059, 0.010, 0.026, 0.042, 0.076, 0.093]`.
-#### The `build_only` option
+![lidar_camera_sync](./docs/images/lidar_camera_sync.svg)
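+
+For a single camera, the offset is just the difference between the two header timestamps. A quick check using the values from the example above:
+
+```cpp
+// Values from the example above, in seconds.
+const double msg3d_stamp = 0.000;  // LiDAR header timestamp
+const double rois0_stamp = 0.059;  // camera 0 header timestamp
+const double rois0_offset = rois0_stamp - msg3d_stamp;  // -> 0.059, the first entry
+```
+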
-The `pointpainting_fusion` node has `build_only` option to build the TensorRT engine file from the ONNX file.
-Although it is preferred to move all the ROS parameters in `.param.yaml` file in Autoware Universe, the `build_only` option is not moved to the `.param.yaml` file for now, because it may be used as a flag to execute the build as a pre-task. You can execute with the following command:
+To check the header timestamps of the msg3d and RoIs messages, run:
```bash
-ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml model_name:=pointpainting model_path:=/home/autoware/autoware_data/image_projection_based_fusion model_param_path:=$(ros2 pkg prefix autoware_image_projection_based_fusion --share)/config/pointpainting.param.yaml build_only:=true
+ros2 topic echo [topic] --field header
```
-#### Known Limits
+#### Matching Strategies
+
+We provide two matching strategies for different scenarios:
+
+##### Naive Mode
+
+Use this mode if the concatenation node from the Autoware point cloud preprocessor is not being used. Set an appropriate `threshold` value to decide whether the time interval between the `msg3d` and `RoIs` messages is acceptable: if the interval is less than the match threshold, the messages are considered matched. A sketch of this check follows the example below.
+
+
+- Example usage:
+
+  ```yaml
+ matching_strategy:
+ type: naive
+ threshold: 0.05
+ ```
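+
+Conceptually, the naive check reduces to the following (a sketch; `naive_match` is a hypothetical helper, not part of the package's API):
+
+```cpp
+#include <cmath>
+
+// After compensating for the per-camera offset, the two messages match when
+// their interval is below the configured threshold (0.05 s in the example above).
+bool naive_match(double msg3d_stamp, double rois_stamp, double offset, double threshold)
+{
+  const double interval = std::fabs((rois_stamp - offset) - msg3d_stamp);
+  return interval < threshold;
+}
+```
+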
-The rclcpp::TimerBase timer could not break a for loop, therefore even if time is out when fusing a roi msg at the middle, the program will run until all msgs are fused.
+##### Advanced Mode
-### Approximate camera projection
+If the concatenation node from the Autoware point cloud preprocessor is being used, enable this mode.
+The advanced mode parses diagnostics from the concatenation node to verify whether all point clouds have been successfully concatenated. If concatenation is incomplete, it dynamically adjusts `rois_timestamp_offsets` based on diagnostic messages.
+Instead of using a fixed threshold, this mode requires setting two parameters:
+
+- `msg3d_noise_window` (a single value)
+- `rois_timestamp_noise_window` (a vector)
+
+These parameters enforce stricter matching between the `RoI` messages and `msg3d` input.
+
+- Example usage:
+
+  ```yaml
+ matching_strategy:
+ type: advanced
+ msg3d_noise_window: 0.02
+ rois_timestamp_noise_window: [0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
+ ```
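+
+The stricter window check can be sketched as follows (`advanced_match` is a hypothetical helper; the actual strategy additionally adjusts the offset from concatenation diagnostics):
+
+```cpp
+#include <cmath>
+
+// Each input must fall inside its own noise window around the collector's
+// reference timestamp, instead of sharing one fixed threshold.
+bool advanced_match(
+  double reference_timestamp, double input_timestamp, double offset, double noise_window)
+{
+  const double compensated = input_timestamp - offset;
+  return std::fabs(compensated - reference_timestamp) < noise_window;
+}
+```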
+
+#### Approximate Camera Projection
For algorithms like `pointpainting_fusion`, the computation required to project points onto an unrectified (raw) image can be substantial. To address this, an option is provided to reduce the computational load. Set the [`approximate_camera_projection parameter`](config/fusion_common.param.yaml) to `true` for each camera (ROIs). If the corresponding `point_project_to_unrectified_image` parameter is also set to true, the projections will be pre-cached.
The cached projections are calculated using a grid, with the grid size specified by the `approximation_grid_width_size` and `approximation_grid_height_size` parameters in the [configuration file](config/fusion_common.param.yaml). The centers of the grid are used as the projected points.
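+
+The caching idea can be sketched like this (a simplified illustration under the stated assumptions; `GridProjectionCache` and `Projection2d` are hypothetical names, not the package's API):
+
+```cpp
+#include <cstddef>
+#include <vector>
+
+struct Projection2d
+{
+  float u, v;  // projected image coordinates of a grid-cell center
+  bool valid;  // false if the cell center does not project into the image
+};
+
+// Projections are precomputed once at the grid-cell centers and then looked up
+// per point, instead of projecting every 3D point onto the unrectified image.
+class GridProjectionCache
+{
+public:
+  GridProjectionCache(std::size_t cols, std::size_t rows) : cols_(cols), cache_(cols * rows) {}
+
+  // Store the projection precomputed for the center of cell (col, row).
+  void set(std::size_t col, std::size_t row, Projection2d p) { cache_[row * cols_ + col] = p; }
+
+  // A 3D point falling into cell (col, row) reuses the cached center projection.
+  Projection2d lookup(std::size_t col, std::size_t row) const { return cache_[row * cols_ + col]; }
+
+private:
+  std::size_t cols_;
+  std::vector<Projection2d> cache_;
+};
+```
+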
-### Detail description of each fusion's algorithm is in the following links
+## Debug and Diagnostics
+
+To verify whether the node has successfully fused the msg3d and RoIs, examine rqt or the `/diagnostics` topic using the following command:
+
+```bash
+ros2 topic echo /diagnostics
+```
+
+Below is an example output when the fusion succeeds:
+
+- `msg3d/is_fused` has a value of `True`.
+- Each `/roisX/is_fused` has a value of `True`.
+- The `fusion_success` is `True`.
+- The `level` value is `\0`. (diagnostic_msgs::msg::DiagnosticStatus::OK)
+
+```yaml
+header:
+ stamp:
+ sec: 1722492015
+ nanosec: 848508777
+ frame_id: ''
+status:
+- level: "\0"
+ name: 'pointpainting: pointpainting_fusion_status'
+ message: Fused output is published and include all rois and msg3d
+ hardware_id: pointpainting_fusion_checker
+ values:
+ - key: msg3d/is_fused
+ value: 'True'
+ - key: fused_timestamp
+ value: '1738725170.860273600'
+ - key: reference_timestamp_min
+ value: '1738725170.850771904'
+ - key: reference_timestamp_max
+ value: '1738725170.870771885'
+ - key: /rois0/timestamp
+ value: '1738725170.903310537'
+ - key: /rois0/is_fused
+ value: 'True'
+ - key: /rois1/timestamp
+ value: '1738725170.934378386'
+ - key: /rois1/is_fused
+ value: 'True'
+ - key: /rois2/timestamp
+ value: '1738725170.917550087'
+ - key: /rois2/is_fused
+ value: 'True'
+ - key: fusion_success
+ value: 'True'
+```
+
+Below is an example output when the fusion fails:
+
+- `msg3d/is_fused` has a value of `True`.
+- At least one `/roisX/is_fused` entry (here, `/rois0/is_fused`) has a value of `False`.
+- The `fusion_success` is `False`.
+- The `level` value is `\x02`. (diagnostic_msgs::msg::DiagnosticStatus::ERROR)
+
+```yaml
+header:
+ stamp:
+ sec: 1722492015
+ nanosec: 848508777
+ frame_id: ''
+status:
+- level: "\x02"
+ name: 'pointpainting: pointpainting_fusion_status'
+ message: Fused output msg is published but misses some ROIs
+ hardware_id: pointpainting_fusion_checker
+ values:
+ - key: msg3d/is_fused
+ value: 'True'
+ - key: fused_timestamp
+ value: '1738725170.860273600'
+ - key: reference_timestamp_min
+ value: '1738725170.850771904'
+ - key: reference_timestamp_max
+ value: '1738725170.870771885'
+ - key: /rois0/is_fused
+ value: 'False'
+ - key: /rois1/timestamp
+ value: '1738725170.934378386'
+ - key: /rois1/is_fused
+ value: 'True'
+ - key: /rois2/timestamp
+ value: '1738725170.917550087'
+ - key: /rois2/is_fused
+ value: 'True'
+ - key: fusion_success
+ value: 'False'
+```
+
+## The `build_only` option
-| Fusion Name | Description | Detail |
-| -------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------------- |
-| roi_cluster_fusion | Overwrite a classification label of clusters by that of ROIs from a 2D object detector. | [link](./docs/roi-cluster-fusion.md) |
-| roi_detected_object_fusion | Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. | [link](./docs/roi-detected-object-fusion.md) |
-| pointpainting_fusion | Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. | [link](./docs/pointpainting-fusion.md) |
-| roi_pointcloud_fusion | Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects | [link](./docs/roi-pointcloud-fusion.md) |
+The `pointpainting_fusion` node has a `build_only` option for building the TensorRT engine file from the ONNX file.
+Although it is preferable to move all ROS parameters into the `.param.yaml` file in Autoware Universe, the `build_only` option has not been moved there yet, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:
+
+```bash
+ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml model_name:=pointpainting model_path:=/home/autoware/autoware_data/image_projection_based_fusion model_param_path:=$(ros2 pkg prefix autoware_image_projection_based_fusion --share)/config/pointpainting.param.yaml build_only:=true
+```
diff --git a/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml b/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml
index 86db3bad4f8f8..15d1ee7581a9e 100644
--- a/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml
+++ b/perception/autoware_image_projection_based_fusion/config/fusion_common.param.yaml
@@ -1,12 +1,15 @@
/**:
ros__parameters:
- input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0]
- timeout_ms: 70.0
- match_threshold_ms: 50.0
+    # Example of how to set the offset parameters (see the README for details); both the LiDAR and the cameras run at 10 Hz.
+    # msg3d's header timestamp: 0.000
+    # rois's header timestamps (cameras 0-5): 0.059, 0.010, 0.026, 0.042, 0.076, 0.093
+    # offset = rois_timestamp - msg3d_timestamp
+ rois_timestamp_offsets: [0.059, 0.010, 0.026, 0.042, 0.076, 0.093]
+ rois_timeout_sec: 0.5
+ msg3d_timeout_sec: 0.05
image_buffer_size: 15
# projection setting for each ROI whether unrectify image
point_project_to_unrectified_image: [false, false, false, false, false, false]
- debug_mode: false
filter_scope_min_x: -100.0
filter_scope_min_y: -100.0
filter_scope_min_z: -100.0
@@ -21,4 +24,16 @@
approximation_grid_cell_height: 1.0
# debug parameters
+ debug_mode: false
+ collector_debug_mode: false
publish_processing_time_detail: false
+
+ publish_previous_but_late_output_msg: false
+ rosbag_length: 10.0
+ # matching strategy
+ matching_strategy:
+ type: advanced
+ msg3d_noise_window: 0.02
+ rois_timestamp_noise_window: [0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
+ # type: naive
+ # threshold: 0.05
diff --git a/perception/autoware_image_projection_based_fusion/docs/images/fusion_algorithm.drawio.svg b/perception/autoware_image_projection_based_fusion/docs/images/fusion_algorithm.drawio.svg
new file mode 100644
index 0000000000000..f16b1396b4205
--- /dev/null
+++ b/perception/autoware_image_projection_based_fusion/docs/images/fusion_algorithm.drawio.svg
@@ -0,0 +1,1341 @@
+
+
+
+
diff --git a/perception/autoware_image_projection_based_fusion/docs/images/lidar_camera_sync.svg b/perception/autoware_image_projection_based_fusion/docs/images/lidar_camera_sync.svg
new file mode 100644
index 0000000000000..7c6c91d709f61
--- /dev/null
+++ b/perception/autoware_image_projection_based_fusion/docs/images/lidar_camera_sync.svg
@@ -0,0 +1,558 @@
+
+
+
+
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_collector.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_collector.hpp
new file mode 100644
index 0000000000000..a75ee786f276b
--- /dev/null
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_collector.hpp
@@ -0,0 +1,115 @@
+// Copyright 2025 TIER IV, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+
+#include "autoware/image_projection_based_fusion/camera_projection.hpp"
+
+#include <rclcpp/rclcpp.hpp>
+
+#include <chrono>
+#include <memory>
+#include <mutex>
+#include <unordered_map>
+#include <vector>
+
+namespace autoware::image_projection_based_fusion
+{
+using autoware::image_projection_based_fusion::CameraProjection;
+
+template <class Msg3D, class Msg2D, class ExportObj>
+class FusionNode;
+
+template <class Msg2D>
+struct Det2dStatus
+{
+ // camera index
+ std::size_t id{0};
+ // camera projection
+  std::shared_ptr<CameraProjection> camera_projector_ptr{nullptr};
+ bool project_to_unrectified_image{false};
+ bool approximate_camera_projection{false};
+};
+
+struct FusionCollectorInfoBase
+{
+ virtual ~FusionCollectorInfoBase() = default;
+};
+
+struct NaiveCollectorInfo : public FusionCollectorInfoBase
+{
+ double timestamp;
+ double threshold;
+
+ explicit NaiveCollectorInfo(double timestamp = 0.0, double threshold = 0.0)
+ : timestamp(timestamp), threshold(threshold)
+ {
+ }
+};
+
+struct AdvancedCollectorInfo : public FusionCollectorInfoBase
+{
+ double timestamp;
+ double noise_window;
+
+ explicit AdvancedCollectorInfo(double timestamp = 0.0, double noise_window = 0.0)
+ : timestamp(timestamp), noise_window(noise_window)
+ {
+ }
+};
+
+enum class CollectorStatus { Idle, Processing, Finished };
+
+template <class Msg3D, class Msg2D, class ExportObj>
+class FusionCollector
+{
+public:
+ FusionCollector(
+    std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> && ros2_parent_node,
+    std::size_t rois_number, const std::vector<Det2dStatus<Msg2D>> & det2d_status_list,
+ bool debug_mode);
+ void process_msg3d(const typename Msg3D::ConstSharedPtr msg3d, double msg3d_timeout);
+ void process_rois(
+ const std::size_t & rois_id, const typename Msg2D::ConstSharedPtr rois_msg,
+ double rois_timeout);
+ void fusion_callback();
+
+ [[nodiscard]] CollectorStatus get_status();
+
+  void set_info(std::shared_ptr<FusionCollectorInfoBase> collector_info);
+  [[nodiscard]] std::shared_ptr<FusionCollectorInfoBase> get_info() const;
+ bool ready_to_fuse();
+ bool rois_exists(const std::size_t & rois_id);
+ bool msg3d_exists();
+  void add_camera_projection(
+    std::size_t rois_id, std::shared_ptr<CameraProjection> camera_projector_ptr);
+ void set_period(const std::chrono::nanoseconds period);
+ void reset();
+ void show_debug_message();
+
+private:
+  std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> ros2_parent_node_;
+ rclcpp::TimerBase::SharedPtr timer_;
+ std::size_t rois_number_;
+ typename Msg3D::ConstSharedPtr msg3d_{nullptr};
+  std::vector<Det2dStatus<Msg2D>> det2d_status_list_;
+  std::unordered_map<std::size_t, typename Msg2D::ConstSharedPtr> id_to_rois_map_;
+ bool is_first_msg3d_{false};
+ bool debug_mode_;
+ std::mutex fusion_mutex_;
+  std::shared_ptr<FusionCollectorInfoBase> fusion_collector_info_;
+ CollectorStatus status_;
+};
+
+} // namespace autoware::image_projection_based_fusion
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_matching_strategy.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_matching_strategy.hpp
new file mode 100644
index 0000000000000..d53d3e1308774
--- /dev/null
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_matching_strategy.hpp
@@ -0,0 +1,145 @@
+// Copyright 2025 TIER IV, Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+
+#include "autoware/image_projection_based_fusion/fusion_collector.hpp"
+
+#include <rclcpp/rclcpp.hpp>
+
+#include <list>
+#include <memory>
+#include <optional>
+#include <set>
+#include <string>
+#include <unordered_map>
+#include <vector>
+
+namespace autoware::image_projection_based_fusion
+{
+
+template <class Msg3D, class Msg2D, class ExportObj>
+class FusionNode;
+
+struct MatchingContextBase
+{
+ virtual ~MatchingContextBase() = default;
+};
+
+struct Msg3dMatchingContext : public MatchingContextBase
+{
+ double msg3d_timestamp;
+
+ explicit Msg3dMatchingContext(double msg3d_timestamp = 0.0) : msg3d_timestamp(msg3d_timestamp) {}
+};
+
+struct RoisMatchingContext : public MatchingContextBase
+{
+ double rois_timestamp;
+ std::size_t rois_id;
+
+ explicit RoisMatchingContext(double rois_timestamp = 0.0, std::size_t rois_id = 0)
+ : rois_timestamp(rois_timestamp), rois_id(rois_id)
+ {
+ }
+};
+
+template <class Msg3D, class Msg2D, class ExportObj>
+class FusionMatchingStrategy
+{
+public:
+ virtual ~FusionMatchingStrategy() = default;
+
+  [[nodiscard]] virtual std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_rois_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<RoisMatchingContext> & matching_context) const = 0;
+
+  [[nodiscard]] virtual std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_msg3d_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<Msg3dMatchingContext> & matching_context) = 0;
+  virtual void set_collector_info(
+    std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> & collector,
+    const std::shared_ptr<MatchingContextBase> & matching_context) = 0;
+};
+
+template <class Msg3D, class Msg2D, class ExportObj>
+class NaiveMatchingStrategy : public FusionMatchingStrategy<Msg3D, Msg2D, ExportObj>
+{
+public:
+  explicit NaiveMatchingStrategy(
+    std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> && ros2_parent_node,
+    const std::unordered_map<std::size_t, double> & id_to_offset_map);
+
+  [[nodiscard]] std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_rois_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<RoisMatchingContext> & matching_context) const override;
+
+  [[nodiscard]] std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_msg3d_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<Msg3dMatchingContext> & matching_context) override;
+
+  void set_collector_info(
+    std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> & collector,
+    const std::shared_ptr<MatchingContextBase> & matching_context) override;
+
+private:
+  std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> ros2_parent_node_;
+  std::unordered_map<std::size_t, double> id_to_offset_map_;
+  double threshold_;
+};
+
+template <class Msg3D, class Msg2D, class ExportObj>
+class AdvancedMatchingStrategy : public FusionMatchingStrategy<Msg3D, Msg2D, ExportObj>
+{
+public:
+  explicit AdvancedMatchingStrategy(
+    std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> && ros2_parent_node,
+    const std::unordered_map<std::size_t, double> & id_to_offset_map);
+
+  [[nodiscard]] std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_rois_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<RoisMatchingContext> & matching_context) const override;
+
+  [[nodiscard]] std::optional<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>>
+  match_msg3d_to_collector(
+    const std::list<std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>>> & fusion_collectors,
+    const std::shared_ptr<Msg3dMatchingContext> & matching_context) override;
+  void set_collector_info(
+    std::shared_ptr<FusionCollector<Msg3D, Msg2D, ExportObj>> & collector,
+    const std::shared_ptr<MatchingContextBase> & matching_context) override;
+
+  double get_concatenated_offset(
+    const double & msg3d_timestamp,
+    const std::optional<std::unordered_map<std::string, std::string>> & concatenated_status);
+
+  double extract_fractional(double timestamp);
+  void update_fractional_timestamp_set(double new_timestamp);
+  double compute_offset(double input_timestamp);
+
+private:
+  std::shared_ptr<FusionNode<Msg3D, Msg2D, ExportObj>> ros2_parent_node_;
+  std::unordered_map<std::size_t, double> id_to_offset_map_;
+  std::unordered_map<std::size_t, double> id_to_noise_window_map_;
+  double msg3d_noise_window_;
+  std::set<double> fractional_timestamp_set_;  // Use set to store unique fractional timestamps
+  int success_status_counter_{0};
+  static constexpr int success_threshold{100};
+  bool database_created_{false};
+};
+} // namespace autoware::image_projection_based_fusion
diff --git a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp
index 5b889ac9a3e8b..eb900faa4d9c3 100644
--- a/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp
+++ b/perception/autoware_image_projection_based_fusion/include/autoware/image_projection_based_fusion/fusion_node.hpp
@@ -1,4 +1,4 @@
-// Copyright 2022 TIER IV, Inc.
+// Copyright 2025 TIER IV, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -12,17 +12,23 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-#ifndef AUTOWARE__IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_
-#define AUTOWARE__IMAGE_PROJECTION_BASED_FUSION__FUSION_NODE_HPP_
+#pragma once
+
+#include "autoware/image_projection_based_fusion/fusion_collector.hpp"
+#include "autoware/image_projection_based_fusion/fusion_matching_strategy.hpp"
+#include "autoware/image_projection_based_fusion/fusion_types.hpp"
#include
#include
#include
#include
#include
+#include
#include
#include
+#include
+#include
#include
#include
#include
@@ -38,44 +44,16 @@
#include
#include
-#include