Lynx + Ouster OS1-32 — ROS2 topics/TFs disappear after starting navigation

I’m using a Lynx mobile base together with an Ouster OS1-32 LiDAR. I run the LiDAR driver using the husarion/ouster-docker repository on the user PC, launched with:

docker compose -f compose.ouster.yaml up

This works as expected. I also publish a static transform between the LiDAR (os_sensor) and the robot (/lynx/mount_link).
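For completeness, this is roughly how I publish it; a sketch assuming the frame names above and an identity offset (the actual x/y/z/yaw/pitch/roll values are placeholders — the real mounting offset on the robot differs):

```shell
# Hypothetical example: publish a static transform from the robot's mount
# link to the Ouster sensor frame. Replace the zero offsets with the real
# mounting position/orientation of the LiDAR.
ros2 run tf2_ros static_transform_publisher \
  --x 0 --y 0 --z 0 --yaw 0 --pitch 0 --roll 0 \
  --frame-id lynx/mount_link --child-frame-id os_sensor
```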

I want to use the husarion_ugv_autonomy_ros repository for SLAM and navigation. Before launching navigation, I set:
export OBSERVATION_TOPIC=/ouster/points
export OBSERVATION_TOPIC_TYPE=pointcloud
export CAMERA_IMAGE_TOPIC={camera_image_topic}
export CAMERA_INFO_TOPIC={camera_info_topic}
export SLAM=True
export ROBOT_MODEL=lynx
export ROBOT_NAMESPACE=lynx

(I’m not using cameras or docking at the moment, so those topics are left unused.)
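Since the compose file falls back to defaults like ${OBSERVATION_TOPIC:-/ouster/points} when a variable is unset, I sanity-check that the variables are actually exported in the same shell that launches navigation; a small sketch (values mirror the ones above):

```shell
# Hypothetical sanity check: confirm the variables are exported in the
# shell that will run `just start-hardware navigation`, so docker compose
# does not silently fall back to the defaults in the YAML.
export OBSERVATION_TOPIC=/ouster/points
export OBSERVATION_TOPIC_TYPE=pointcloud
export SLAM=True
export ROBOT_MODEL=lynx
export ROBOT_NAMESPACE=lynx
printenv | grep -E '^(OBSERVATION_|SLAM=|ROBOT_)' | sort
```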
Then I start navigation from the user PC:
just start-hardware navigation

And the web interface:
just start-visualization

Problem:

As soon as I start the navigation container, ROS2 communication between the user PC (10.15.20.3) and the robot’s internal PC (10.15.20.2) breaks.

On the user PC (10.15.20.3)

ros2 topic list shows the navigation topics (/lynx/map, etc.)
BUT all core robot topics disappear — such as:

  • e-stop

  • battery status

  • /base_link, /cover_link, /mount_link, and related TFs (from /tf and /tf_static)

The TF graph on the user PC only shows LiDAR frames: os_imu, os_sensor, os_lidar.

On the robot's built-in PC (10.15.20.2)

ros2 topic list shows the topics from husarion_ugv_ros (which runs automatically on startup).
BUT LiDAR topics and navigation topics are missing.

The TF graph on the robot shows the complete Lynx mobile base tree (odom → base_link → mount_link → cover_link → wheels), but no LiDAR frames.

What I verified

  • RMW_IMPLEMENTATION is rmw_cyclonedds_cpp on both machines.

  • ROS_DOMAIN_ID is empty on both.

  • ROS_LOCALHOST_ONLY is empty on both.

  • Ping between machines works.

  • Both run ROS2 Jazzy.

  • Stopping the navigation container does not restore communication.

  • Restarting containers does not help.

  • Full robot shutdown + power-on the next day does restore communication. (An immediate shutdown + power-on, with only about a 1-minute wait, does not help for some reason.)

    • In this state (normal ROS communication), from the user PC (and from the built-in PC as well) I can see topics from both:

      • the robot (via husarion_ugv_ros)

      • the Ouster driver container
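One thing worth ruling out on both PCs: the ros2 CLI caches discovery results in a background daemon, so a stale topic list can outlive container restarts. A diagnostic sketch (plain ROS 2 CLI, nothing Husarion-specific):

```shell
# Restart the ROS 2 CLI daemon so `ros2 topic list` reflects live
# discovery rather than a cached graph, then re-check the topics.
ros2 daemon stop
ros2 daemon start
ros2 topic list
# --no-daemon queries the graph directly, bypassing the cache:
ros2 topic list --no-daemon
```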

Because the issue only appears after launching navigation, I suspect the cause lies in just start-hardware navigation, i.e. in its Docker configuration.

compose.hardware.yaml:

x-common-config:
  &common-config
  network_mode: host
  ipc: host
  env_file: .env

services:
  docking:
    build:
      context: ..
      dockerfile: Dockerfile
    container_name: docking
    <<: *common-config
    volumes:
      - ./config:/config
      - ../husarion_ugv_docking:/ros2_ws/src/husarion_ugv_autonomy_ros/husarion_ugv_docking
    command: >
      ros2 launch husarion_ugv_docking docking.launch.py
        namespace:=${ROBOT_NAMESPACE:-lynx}
        use_wibotic_info:=True
        camera_image_topic:=${CAMERA_IMAGE_TOPIC:-/lynx/camera/color/image_raw}
        camera_info_topic:=${CAMERA_INFO_TOPIC:-/lynx/camera/color/camera_info}
        apriltag_output_dir:=/config/apriltags
        apriltag_size:=0.08
        use_sim:=False

  navigation:
    build:
      context: ..
      dockerfile: Dockerfile
    container_name: navigation
    <<: *common-config
    volumes:
      - ./config:/config
      - ../maps:/maps
      - ../husarion_ugv_navigation:/ros2_ws/src/husarion_ugv_autonomy_ros/husarion_ugv_navigation
    command: >
      ros2 launch husarion_ugv_navigation bringup_launch.py
        namespace:=${ROBOT_NAMESPACE:-lynx}
        observation_topic:=${OBSERVATION_TOPIC:-/ouster/points}
        observation_topic_type:=${OBSERVATION_TOPIC_TYPE:-pointcloud}
        slam:=${SLAM:-True}
        use_sim_time:=False
        robot_model:=${ROBOT_MODEL:-lynx}

I would appreciate any tips on how to overcome this problem.
Thanks in advance!

Hi @Kreja,

This looks strange, and I’d like to ask a few clarifying questions:

  1. Is there another device on the network with a different ROS distribution or DDS settings (e.g., a laptop)?
  2. Is the user computer running the same ROS version inside Docker as the one it has installed natively?
  3. Does what you’re writing about the ROS distribution and RMW_IMPLEMENTATION apply to all running Docker containers?
  4. Is there time synchronization?
  5. When navigation is off, can you get the data from topics? (Sometimes topics are visible, but data are not transmitted.)
  1. Yes, a third PC (with no ROS on it) is on the same network; from it I use SSH to get into the user PC and the built-in PC.

  2. Yes, the user PC, the built-in PC, and all the containers (ouster-ros, navigation, husarion_ugv_ros) are running Jazzy.

  3. Yes.

  4. There was not, but now I set it up, and the problem is still present.

  5. I am not sure if I understand this correctly, but if I do, then yes: when I echo, for example, the /ouster/points topic, I get the point cloud data printed in the terminal.

    I also tried running just start-visualization without running just start-hardware navigation, so I suppose the problem could stem from the WebUI.

In fact, the WebUI is also worth checking with regard to its ROS and RMW settings. Please check:

  1. WebUI configuration:
snap info husarion-webui | grep tracking   # should return jazzy/stable
sudo snap get husarion-webui ros   # expect ros.transport=rmw_cyclonedds_cpp and ros.namespace=lynx
  2. DDS configuration. I noticed another issue in the DDS configuration: the communication interface is explicitly specified as eth0, but your OS may name the Ethernet interface differently. You can delete the CYCLONEDDS_URI configuration.
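If you prefer to keep an explicit configuration rather than deleting it, recent CycloneDDS versions also let you select the interface by IP address instead of by name, which sidesteps the eth0 naming problem. A sketch (the address shown is the user PC's; adjust per machine, and point CYCLONEDDS_URI at this file on each host):

```xml
<!-- Hypothetical cyclonedds.xml: bind by address rather than interface
     name. Use address="10.15.20.2" on the robot's built-in PC. -->
<CycloneDDS>
  <Domain Id="any">
    <General>
      <Interfaces>
        <NetworkInterface address="10.15.20.3"/>
      </Interfaces>
    </General>
  </Domain>
</CycloneDDS>
```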