Combine Publisher, Subscriber & Service in ROS2 Single Node | ROS2 Tutorial


What we are going to learn

  1. Learn how to combine a Publisher, a Subscriber, and a Service in the same ROS2 Node

List of resources used in this post

  1. Use this rosject: https://app.theconstructsim.com/l/5c13606c/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days (C++): https://app.theconstructsim.com/Course/133
  4. Apple Detector: https://shrishailsgajbhar.github.io/post/OpenCV-Apple-detection-counting
  5. Banana Detector: https://github.com/noorkhokhar99/Open-CV-Banana-Detection

Overview

ROS2 (Robot Operating System version 2) is becoming the de facto standard “framework” for programming robots.

In this post, we are going to learn how to combine a Publisher, a Subscriber, and a Service in the same ROS2 Node.

ROS Inside!

ROS Inside


Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5c13606c/.

Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).


Combining Publisher, Subscriber & Service in ROS2 Single Node – Run rosject (example of the RUN button)


After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Creating the required files

In order to interact with ROS2, we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

Open a new Terminal

In this rosject, we cloned the https://bitbucket.org/theconstructcore/fruit_detector repository and placed it inside the ~/ros2_ws/src folder. You can see its contents with the following command in the terminal:

ls ~/ros2_ws/src/fruit_detector/

The following output should be produced:

custom_interfaces  pub_sub_srv_ros2_pkg_example

A new file called pubsubserv_example.py was created inside the fruit_detector/pub_sub_srv_ros2_pkg_example/scripts folder. The command used for creating that file was:

touch ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/scripts/pubsubserv_example.py

You could also have created that file using the Code Editor.
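For reference, the touch command above can also be reproduced from Python with the standard pathlib module (a sketch using the same path as in this rosject):

```python
from pathlib import Path

# Programmatic equivalent of the `touch` command used above: create an
# empty script file, along with any parent folders that do not exist yet.
script = (Path.home() / "ros2_ws" / "src" / "fruit_detector"
          / "pub_sub_srv_ros2_pkg_example" / "scripts" / "pubsubserv_example.py")
script.parent.mkdir(parents=True, exist_ok=True)
script.touch(exist_ok=True)
print(script)
```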

If you want to use the Code Editor, also known as IDE (Integrated Development Environment), you just have to open it as indicated in the image below:

Open the IDE - Code Editor

Open the IDE – Code Editor


The following content was pasted into that file:
#! /usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from custom_interfaces.srv import StringServiceMessage
import os
import cv2
from cv_bridge import CvBridge
import ament_index_python.packages as ament_index

class CombineNode(Node):

    def __init__(self, dummy=True):
        super().__init__('combine_node')

        self._dummy= dummy

        self.pkg_path = self.get_package_path("pub_sub_srv_ros2_pkg_example")
        self.scripts_path = os.path.join(self.pkg_path,"scripts")
        cascade_file_path = os.path.join(self.scripts_path,'haarbanana.xml')

        self.banana_cascade = cv2.CascadeClassifier(cascade_file_path)

        # No image received yet; image_callback will store the latest one here
        self.current_image = None

        self.bridge = CvBridge()

        self.publisher = self.create_publisher(Image, 'image_detected_fruit', 10)
        self.subscription = self.create_subscription(
            Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)

        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)
        

        self.get_logger().info('READY CombineNode')



    def get_package_path(self, package_name):
        try:
            package_share_directory = ament_index.get_package_share_directory(package_name)
            return package_share_directory
        except Exception as e:
            print(f"Error: {e}")
            return None


    def image_callback(self, msg):
        self.get_logger().info('Received an image.')
        self.current_image = msg


    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        
        
        if request.detect == "apple":
            if self._dummy:
                # Generate and publish an image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Generate and publish an image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific request           
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

    def generate_unknown_detection_image(self):

        self.unknown_img_path = os.path.join(self.scripts_path, 'unknown.png')
        self.get_logger().warning("Unknown path=" + str(self.unknown_img_path))
        # Load the stock "unknown" image shipped in the scripts folder
        image = cv2.imread(self.unknown_img_path)
        if image is None:
            self.get_logger().error("Failed to load the unknown image.")
            return None
        self.get_logger().warning("Successfully loaded the unknown image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")


    def generate_apple_detection_image(self):

        self.apple_img_path = os.path.join(self.scripts_path, 'apple.png')
        self.get_logger().warning("Apple path=" + str(self.apple_img_path))
        # Load the stock apple image shipped in the scripts folder
        image = cv2.imread(self.apple_img_path)
        if image is None:
            self.get_logger().error("Failed to load the apple image.")
            return None
        self.get_logger().warning("Successfully loaded the apple image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def generate_banana_detection_image(self):

        self.banana_img_path = os.path.join(self.scripts_path, 'banana.png')
        self.get_logger().warning("Banana path=" + str(self.banana_img_path))
        # Load the stock banana image shipped in the scripts folder
        image = cv2.imread(self.banana_img_path)
        if image is None:
            self.get_logger().error("Failed to load the banana image.")
            return None
        self.get_logger().warning("Successfully loaded the banana image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def publish_image(self, image_msg):
        if image_msg is not None:
            self.publisher.publish(image_msg)


    def detect_and_publish_apple(self):
        if self.current_image is not None:
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")

            # HSV range that captures the apple's red/orange hues
            low_apple_raw = (0.0, 80.0, 80.0)
            high_apple_raw = (20.0, 255.0, 255.0)

            image_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

            mask = cv2.inRange(image_hsv, low_apple_raw, high_apple_raw)

            cnts, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)
            c_num = 0
            min_radius = 10  # ignore tiny blobs that are unlikely to be apples
            for c in cnts:
                ((x, y), r) = cv2.minEnclosingCircle(c)
                if r > min_radius:
                    c_num += 1
                    cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
                    cv2.putText(frame, "#{}".format(c_num), (int(x) - 10, int(y)),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)

            

            # Publish the detected image as a ROS 2 Image message
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)

        else:
            self.get_logger().error("Image NOT found")


    def detect_and_publish_banana(self):
        self.get_logger().warning("detect_and_publish_banana Start")
        if self.current_image is not None:
            self.get_logger().warning("Image found")
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            bananas = self.banana_cascade.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)

            for (x, y, w, h) in bananas:
                cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)
                cv2.putText(frame, 'Banana', (x-10, y-10),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))

            # Publish the detected image as a ROS 2 Image message
            self.get_logger().warning("BananaDetection Image Publishing...")
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)
            self.get_logger().warning("BananaDetection Image Publishing...DONE")
        else:
            self.get_logger().error("Image NOT found")
    


def main(args=None):
    rclpy.init(args=args)
    node = CombineNode(dummy=False)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

After creating that Python file, we also modified the CMakeLists.txt file of the pub_sub_srv_ros2_pkg_example package:

~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/CMakeLists.txt

We basically added scripts/pubsubserv_example.py to the list of files to be installed when we build our ROS2 workspace. In the end, the content of that CMakeLists.txt file is like this:
cmake_minimum_required(VERSION 3.8)
project(pub_sub_srv_ros2_pkg_example)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

find_package(ament_cmake REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(std_srvs REQUIRED)
find_package(custom_interfaces REQUIRED)


if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  set(ament_cmake_copyright_FOUND TRUE)
  set(ament_cmake_cpplint_FOUND TRUE)
  ament_lint_auto_find_test_dependencies()
endif()

# We add it to be able to use other modules of the scripts folder
install(DIRECTORY
  scripts
  rviz
  DESTINATION share/${PROJECT_NAME}
)

install(PROGRAMS
  scripts/example1_dummy.py
  scripts/example1.py
  scripts/example1_main.py
  scripts/pubsubserv_example.py
  DESTINATION lib/${PROJECT_NAME}
)

ament_package()

We then compiled the pub_sub_srv_ros2_pkg_example package specifically, using the following commands:
cd ~/ros2_ws/

source install/setup.bash
colcon build --packages-select pub_sub_srv_ros2_pkg_example

After the package is compiled, we could run that Python script using the following commands:

cd ~/ros2_ws

source install/setup.bash

ros2 run pub_sub_srv_ros2_pkg_example pubsubserv_example.py
After running that script, you will not see any output yet, because we are not printing anything.

But let's list the running nodes in a second terminal by typing ros2 node list. If everything goes well, we should be able to see the combine_node node:

$ ros2 node list

/combine_node

Launching the simulation

So far, we can't see what our node is capable of. Let's launch a simulation so that we can understand our node better. For that, let's run the following command in a third terminal:

ros2 launch box_bot_gazebo garden_main.launch.xml

A simulation similar to the following should appear in a few seconds:
Combine Publisher, Subscriber & Service in ROS2 Single Node - Simulation


After launching the simulation, in the first terminal where we launched our node, we should start seeing messages like the following:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
We will soon understand what these messages mean.

See what the robot sees through rviz2

Now that the simulation is running, we can open rviz2 (ROS Visualization, version 2).

To make it easier for you to see the robot model and the robot camera, a fruit.rviz config file was created at ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz. You can tell rviz2 to load that config file using the following command:

rviz2 --display-config ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz

A new screen should pop up in a few seconds, and you should be able to see what the robot camera sees, as well as the robot model.

The ROS2 topic that we set for the camera is /box_bot_1/box_bot_1_camera/image_raw. You can find this topic if you list the topics in another terminal using ros2 topic list. If you look at the topic that we subscribe to in the __init__ method of the CombineNode class, it is exactly this topic:

self.subscription = self.create_subscription(
    Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)
When a new Image message arrives, the image_callback method is called. It essentially saves the image in an internal variable called current_image:
def image_callback(self, msg): 
    self.get_logger().info('Received an image.') 
    self.current_image = msg
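Note that image_callback only ever keeps the most recent frame. This "latest message" pattern is common in ROS2 nodes, and it is why current_image should start as None: consumers must handle the case where no image has arrived yet. Stripped of the ROS plumbing, the idea can be sketched like this (the class and message values are illustrative, not part of the node):

```python
class LatestImageCache:
    """Minimal sketch of CombineNode's current_image handling (no ROS)."""

    def __init__(self):
        self.current_image = None  # nothing received yet

    def image_callback(self, msg):
        self.current_image = msg  # overwrite: older frames are discarded

    def consume(self):
        if self.current_image is None:
            return "Image NOT found"
        return f"processing {self.current_image}"


cache = LatestImageCache()
print(cache.consume())            # before any callback has fired
cache.image_callback("frame_001")
cache.image_callback("frame_002")
print(cache.consume())            # only the latest frame is kept
```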
In the __init__ method, we also created a service that analyzes the current image and detects whether it contains the requested fruit:
    def __init__(self, dummy=True):
        super().__init__('combine_node')
       
        # ...
        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)

    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        
        
        if request.detect == "apple":
            if self._dummy:
                # Generate and publish an image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Generate and publish an image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific request           
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

By analyzing the code above, we see that when the detect_fruit_service service that we created is called, ROS2 invokes the string_service_callback method, which is responsible for detecting apples and bananas.
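Setting the ROS plumbing aside, the callback is essentially a small dispatcher over the request string, with the dummy flag choosing between stub images and real detection. A standalone sketch of that branching (the returned action names mirror the methods above; the tuple shape is illustrative):

```python
def dispatch_detection(detect, dummy=True):
    """Mirror string_service_callback's branching without any ROS code."""
    if detect == "apple":
        action = "generate_apple_detection_image" if dummy else "detect_and_publish_apple"
    elif detect == "banana":
        action = "generate_banana_detection_image" if dummy else "detect_and_publish_banana"
    else:
        # Any unrecognized request falls back to the "unknown" image
        action = "generate_unknown_detection_image"
    # The service always answers success=True plus an echo of the request
    return action, True, f"Received and processed: {detect}"


print(dispatch_detection("banana", dummy=False))
print(dispatch_detection("strawberry"))
```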
Now, going back to the messages we see in the first terminal:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
These messages basically say that we are correctly receiving the Image messages from the /box_bot_1/box_bot_1_camera/image_raw topic mentioned earlier.
If we list the services, we should find the service we created. Let's try it, by running the following command in a free terminal:
ros2 service list

You should see a huge number of services, and among them, you should be able to find the following one, which we created:

/detect_fruit_service
By the way, the reason we have so many services is that the Gazebo simulator exposes a lot of services of its own, making it easier to interact with Gazebo using ROS2.
Now, let's call that service. To request the detection of an apple, a banana, and a strawberry (the last of which falls into the "unknown" branch of our callback), we could run the following commands, respectively:
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'apple'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'banana'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'strawberry'"
If you don’t understand the commands we have been using so far, I highly recommend you take the ROS2 Basics course: https://app.theconstructsim.com/courses/132.
Alright. After calling the service to detect a banana, we should have an output similar to the following:
requester: making request: custom_interfaces.srv.StringServiceMessage_Request(detect='banana') 

response: custom_interfaces.srv.StringServiceMessage_Response(success=True, message='Received and processed: banana')
This indicates that the service received and processed the banana request.
If you check the logs in the first terminal where we launched our node, you will also see a message similar to the following:
BananaDetection Image Publishing...
So, as you can see, we have a Publisher, a Subscriber, and a Service in the same ROS2 Node.

Congratulations. Now you know how to combine different ROS2 pieces in a single node.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready- Hands-On Training by The Construct cover.png


ROS2 C++ Package Creation Guide | ROS2 Tutorial


What we are going to learn

  1. How to create a ROS2 package
  2. How to create a package with some dependencies
  3. How to create many packages in a ros project
  4. How to compile a ros2 workspace

List of resources used in this post

  1. Use this rosject: https://app.theconstructsim.com/l/5bda8c95/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days Humble (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days Humble (C++): https://app.theconstructsim.com/Course/133

Overview

ROS (Robot Operating System) is becoming the de facto standard “framework” for programming robots. In this post, let’s learn how to create a ROS2 package, essential for giving instructions to robots, using the ros2 command.

ROS Inside!

ROS Inside


Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5bda8c95/.

Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).


ROS2 package creation – Run rosject (example of the RUN button)


After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Creating a ros2 package

In order to create a ROS2 package, we need to have a ROS2 Workspace, and for that, we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

Open a new Terminal

Once inside the first terminal, let’s first run a command that shows the list of available options for ros2:

ros2 -h
The following output should be produced:
ros2 is an extensible command-line tool for ROS 2.
options:
  -h, --help            show this help message and exit

Commands:
  action     Various action related sub-commands
  bag        Various rosbag related sub-commands
  component  Various component related sub-commands
  daemon     Various daemon related sub-commands
  doctor     Check ROS setup and other potential issues
  interface  Show information about ROS interfaces
  launch     Run a launch file
  lifecycle  Various lifecycle related sub-commands
  multicast  Various multicast related sub-commands
  node       Various node related sub-commands
  param      Various param related sub-commands
  pkg        Various package related sub-commands
  run        Run a package specific executable
  security   Various security related sub-commands
  service    Various service related sub-commands
  topic      Various topic related sub-commands
  wtf        Use `wtf` as alias to `doctor`

  Call `ros2 <command> -h` for more detailed usage.

As we can see in the output above, we have a command called pkg, and we can also get help with the ros2 pkg -h command. Let’s try it:
ros2 pkg -h
Running that command produces the following:
Various package related sub-commands
options:
  -h, --help            show this help message and exit

Commands:
  create       Create a new ROS 2 package
  executables  Output a list of package specific executables
  list         Output a list of available packages
  prefix       Output the prefix path of a package
  xml          Output the XML of the package manifest or a specific tag

Since what we want to do is create a package, we could ask for help with the create command shown above. Let’s try it:
ros2 pkg create -h
That gives us the following:
usage: ros2 pkg create [-h] [--package-format {2,3}] [--description DESCRIPTION] [--license LICENSE] [--destination-directory DESTINATION_DIRECTORY] [--build-type {cmake,ament_cmake,ament_python}]
                       [--dependencies DEPENDENCIES [DEPENDENCIES ...]] [--maintainer-email MAINTAINER_EMAIL] [--maintainer-name MAINTAINER_NAME] [--node-name NODE_NAME] [--library-name LIBRARY_NAME]
                       package_name

Create a new ROS 2 package

positional arguments:
  package_name          The package name

options:
  -h, --help            show this help message and exit
  --package-format {2,3}, --package_format {2,3}
                        The package.xml format.
  --description DESCRIPTION
                        The description given in the package.xml
  --license LICENSE     The license attached to this package; this can be an arbitrary string, but a LICENSE file will only be generated if it is one of the supported licenses (pass '?' to get a list)
  --destination-directory DESTINATION_DIRECTORY
                        Directory where to create the package directory
  --build-type {cmake,ament_cmake,ament_python}
                        The build type to process the package with
  --dependencies DEPENDENCIES [DEPENDENCIES ...]
                        list of dependencies
  --maintainer-email MAINTAINER_EMAIL
                        email address of the maintainer of this package
  --maintainer-name MAINTAINER_NAME
                        name of the maintainer of this package
  --node-name NODE_NAME
                        name of the empty executable
  --library-name LIBRARY_NAME
                        name of the empty library

OK, according to the instructions, we should be able to create a package just by using ros2 pkg create PKG_NAME. Let's try to create a package named my_superbot inside the ros2_ws/src folder:
cd ~/ros2_ws/src

ros2 pkg create my_superbot

Assuming that everything went as expected, we should see something like this:

going to create a new package
package name: my_superbot
destination directory: /root/ros2_ws/src
package format: 3
version: 0.0.0
description: TODO: Package description
maintainer: ['root <root@todo.todo>']
licenses: ['TODO: License declaration']
build type: ament_cmake
dependencies: []
creating folder ./my_superbot
creating ./my_superbot/package.xml
creating source and include folder
creating folder ./my_superbot/src
creating folder ./my_superbot/include/my_superbot
creating ./my_superbot/CMakeLists.txt
According to the log messages, we now have a package called my_superbot, with some files inside the my_superbot folder. The most important files are ./my_superbot/package.xml and ./my_superbot/CMakeLists.txt: the former because it defines the package name and metadata, and the latter because it contains the “instructions” on how to compile our package.
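To see that the package name really lives in package.xml, we can parse a trimmed-down, hypothetical manifest (modeled on what ros2 pkg create generates) with Python's standard xml.etree module:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down package.xml like the one ros2 pkg create writes
manifest = """<?xml version="1.0"?>
<package format="3">
  <name>my_superbot</name>
  <version>0.0.0</version>
  <description>TODO: Package description</description>
</package>"""

root = ET.fromstring(manifest)
print(root.find("name").text)     # the name the ROS2 tooling will use
```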
If you now run the ls command, you should be able to see the my_superbot folder, which is essentially your ROS2 package. Also, if you run the tree . command, you should see the folder structure:

tree .

The package structure:
.
└── my_superbot
    ├── CMakeLists.txt
    ├── include
    │   └── my_superbot
    ├── package.xml
    └── src

4 directories, 2 files
This is the simplest and easiest way of creating a ROS2 package.
If you don’t have the tree command installed, you can install it using the commands below:
sudo apt-get update

sudo apt-get install -y tree

Creating a ros2 package with some dependencies

Most of the time, when we create a package, we basically want to reuse or leverage existing tools (or packages).

Let’s remove the package we just created, and create it again, but at this time, specifying some dependencies:
cd ~/ros2_ws/src

rm -rfv my_superbot
Okay, we just removed the package we created earlier. If you remember, previously we executed ros2 pkg create -h, which showed us the option for specifying dependencies:
...

--dependencies DEPENDENCIES [DEPENDENCIES ...] 
     list of dependencies

...
Let’s now create the package with the same name, but this time specifying rclcpp and std_msgs as dependencies:
cd ~/ros2_ws/src


ros2 pkg create my_superbot --dependencies rclcpp std_msgs
If you use the ls or tree commands, like before, you will see that the package has been successfully created. The main differences are in the contents of the package.xml and CMakeLists.txt files.
ls

tree .

Creating many ros2 packages

There is a principle in Software Development called DRY (Don’t repeat yourself). It basically tells us that we have to reuse code, making code easier to maintain.

There is also the Separation of Concerns (SoC) design principle that manages complexity by partitioning the software system so that each partition is responsible for a separate concern, minimizing the overlap of concerns as much as possible.

In a robotics project, we should ideally have different packages for different purposes. Let's remove the package we just created once more, and rather than creating the package directly in the ros2_ws/src folder, let's create a project folder there and then create the packages inside it. Start by removing the existing package:

cd ~/ros2_ws/src

rm -rfv my_superbot
Now, let’s create a folder called superbot_project:
cd ~/ros2_ws/src

mkdir superbot_project

Inside the project folder, we can now create different packages.

cd superbot_project

ros2 pkg create superbot_description

ros2 pkg create superbot_detection

ros2 pkg create superbot_audio
We created 3 packages. If we run tree . or ls -l, we should be able to see the three packages there:
tree .
The output of the tree command:
├── superbot_audio
│   ├── CMakeLists.txt
│   ├── include
│   ├── package.xml
│   └── src
├── superbot_description
│   ├── CMakeLists.txt
│   ├── include
│   │   └── superbot_description
│   ├── package.xml
│   └── src
└── superbot_detection
    ├── CMakeLists.txt
    ├── include
    │   └── superbot_detection
    ├── package.xml
    └── src

12 directories, 6 files

Building our ros2 packages

Now that we have created the packages, even though they don't contain any meaningful code yet, let's learn how to compile the workspace that contains them.

For that, we use the colcon build command in the main workspace folder:

cd ~/ros2_ws/

colcon build
source install/setup.bash
Assuming that everything worked nicely, the output should be similar to the following:
Starting >>> superbot_audio
Starting >>> superbot_description
Starting >>> superbot_detection
Finished <<< superbot_description [0.67s]                                                                                            
Finished <<< superbot_audio [0.69s]
Finished <<< superbot_detection [0.68s]

Summary: 3 packages finished [0.83s]
If we now run the ls command, we should see three new folders there: build, install, and log, in addition to the src folder that we created.
ls

# build  install  log  src
When you compile your workspace, if you want ROS2 to be aware that your packages are compiled and ready to use, you have to tell it where to find them using the source command. That is why we used it after the colcon build command.
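Conceptually, source install/setup.bash extends environment variables such as AMENT_PREFIX_PATH so that ROS2 tools can locate your freshly built packages. A toy Python illustration of that idea (the paths here are hypothetical):

```python
import os

# Toy model of sourcing a workspace: prepend the install prefix to
# AMENT_PREFIX_PATH, the variable ament uses to locate packages.
env = {"AMENT_PREFIX_PATH": "/opt/ros/humble"}    # hypothetical base install
workspace_prefix = "/home/user/ros2_ws/install"   # hypothetical workspace

env["AMENT_PREFIX_PATH"] = workspace_prefix + os.pathsep + env["AMENT_PREFIX_PATH"]
print(env["AMENT_PREFIX_PATH"])
```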

Congratulations. Now you know how to create your own packages in ROS2, and how to compile them.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready- Hands-On Training by The Construct cover.png


Install ROS2 Iron Irwini on Ubuntu 22 | ROS2 Tutorial


What we are going to learn

  1. How to install ROS2 Iron on Ubuntu 22 on your own computer
  2. How to use ROS without having to install anything

List of resources used in this post

  1. Your own computer with Ubuntu 22 installed
  2. The Construct: https://app.theconstructsim.com/
  3. https://docs.ros.org/en/iron/Installation/Ubuntu-Install-Debians.html
  4. https://man7.org/linux/man-pages/man7/locale.7.html
  5. https://help.ubuntu.com/community/Repositories/Ubuntu
  6. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days Humble (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days Humble (C++): https://app.theconstructsim.com/Course/133

Overview

ROS2 is a “framework” for developing robotics applications. Its real-time capabilities, cross-platform support, security features, language flexibility, improved communication, modularity, community support, and industry adoption make it a valuable framework for robotics development.

In this tutorial, we are going to learn how to install it on our own computers.

ROS Inside!

ROS Inside


Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Setting up locales

According to ROS documentation, we need to have support for UTF-8, in order for ROS2 to work properly.

To set up locales for UTF-8 support, we can run the following commands on Ubuntu 22. Let’s start by installing the locales command:

sudo apt update && sudo apt install locales -y

 

According to locale docs:

A locale is a set of language and cultural rules.  These cover
aspects such as language for messages, different character sets,
lexicographic conventions, and so on.  A program needs to be able
to determine its locale and act accordingly to be portable to
different cultures.

Once the locales package is installed, let’s configure UTF-8 in our system:

sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8

 

Now, if we run the locale command we should be able to see UTF-8:

locale

In the output, you should see UTF-8 in all variables that have a value. Something similar to the following:

LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE=pt_BR.UTF-8
LC_NUMERIC=pt_BR.UTF-8
LC_TIME=pt_BR.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=pt_BR.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=pt_BR.UTF-8
LC_NAME=pt_BR.UTF-8
LC_ADDRESS=pt_BR.UTF-8
LC_TELEPHONE=pt_BR.UTF-8
LC_MEASUREMENT=pt_BR.UTF-8
LC_IDENTIFICATION=pt_BR.UTF-8
LC_ALL=
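If you would rather verify the UTF-8 requirement programmatically than eyeball the `locale` output, here is a small stdlib sketch (the sample values below are taken from the output above; the function name is our own):

```python
def utf8_ok(locale_vars: dict) -> bool:
    """Return True if every locale variable that has a value names a
    UTF-8 locale, mirroring the manual check described above."""
    for name, value in locale_vars.items():
        value = value.strip('"')           # locale quotes some values
        if value and "UTF-8" not in value:
            return False
    return True

sample = {
    "LANG": "en_US.UTF-8",
    "LANGUAGE": "",                        # empty values are fine
    "LC_CTYPE": "pt_BR.UTF-8",
    "LC_COLLATE": '"en_US.UTF-8"',
}
print(utf8_ok(sample))   # True
```

Note that mixed languages (en_US and pt_BR above) are fine; what matters for ROS2 is that every set variable uses a UTF-8 character set.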

 

Setting up repositories

Now that the locale is ready for UTF-8, let’s enable the repositories we need for installing ROS.

Let’s start by enabling the Universe repository (which contains community-maintained free and open-source software):

sudo apt install software-properties-common -y
sudo add-apt-repository universe

 

Now that the repository has been added, let’s get the ROS2 GPG key, necessary when downloading the ROS2 packages:

sudo apt update && sudo apt install curl -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg

 

Now we need to add the ROS2 repository to the list of enabled repositories from where we can download packages. The repository is added to the /etc/apt/sources.list.d/ros2.list file using the following command:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list
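That long `echo … | sudo tee` one-liner just assembles a single `deb` line from your CPU architecture and Ubuntu codename and writes it to the file. A Python sketch of the same string assembly (values hard-coded for illustration; on a real system they come from `dpkg --print-architecture` and `/etc/os-release`):

```python
def ros2_apt_line(arch: str, codename: str) -> str:
    """Build the apt source entry that the shell one-liner writes to
    /etc/apt/sources.list.d/ros2.list."""
    keyring = "/usr/share/keyrings/ros-archive-keyring.gpg"
    return (f"deb [arch={arch} signed-by={keyring}] "
            f"http://packages.ros.org/ros2/ubuntu {codename} main")

# On Ubuntu 22.04 the codename is "jammy"
print(ros2_apt_line("amd64", "jammy"))
```

The `signed-by` option ties the repository to the GPG key we downloaded in the previous step, so apt only trusts packages signed with that key.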

Installing ROS 2 development tools

If you are interested in installing ROS, you are probably going to create some packages, build the workspace, etc. For that, it is recommended to install the development tools.

These tools can be installed using the following command:

sudo apt update && sudo apt install ros-dev-tools -y

Upgrading packages, and Installing ROS 2 Iron

Now that we have all the requirements in place, we can install ROS2 Iron. Before we do that, since ROS leverages existing tools, let’s upgrade the packages installed on our system so that the programs already installed are at their most recent versions.

sudo apt update

sudo apt-get upgrade -y

 

Now that we have all base packages upgraded, we can install the ROS Desktop version using the following command:

sudo apt install ros-iron-desktop -y

 

Testing the ROS 2 installation

Now that ROS is installed, let’s run an example of a node named talker that publishes a message to a topic called /chatter.

Before running a node, we need to “enable” the ROS installation. We do that using the source command:

source /opt/ros/iron/setup.bash

 

Now that the current terminal is aware of ROS, we can run the talker with the command below:

source /opt/ros/iron/setup.bash
ros2 run demo_nodes_py talker

 

If everything went ok, you should see an output similar to the following:

[INFO] [1696342010.719351116] [talker]: Publishing: "Hello World: 0"
[INFO] [1696342011.707609990] [talker]: Publishing: "Hello World: 1"
[INFO] [1696342012.707533232] [talker]: Publishing: "Hello World: 2"
[INFO] [1696342013.707451283] [talker]: Publishing: "Hello World: 3"
[INFO] [1696342014.707842625] [talker]: Publishing: "Hello World: 4"
[INFO] [1696342015.706340664] [talker]: Publishing: "Hello World: 5"
[INFO] [1696342016.707204262] [talker]: Publishing: "Hello World: 6"
[INFO] [1696342017.707310619] [talker]: Publishing: "Hello World: 7"
[INFO] [1696342018.707408333] [talker]: Publishing: "Hello World: 8"
[INFO] [1696342019.707478561] [talker]: Publishing: "Hello World: 9"
[INFO] [1696342020.706401798] [talker]: Publishing: "Hello World: 10"
[INFO] [1696342021.707534531] [talker]: Publishing: "Hello World: 11"
[INFO] [1696342022.706507971] [talker]: Publishing: "Hello World: 12"
[INFO] [1696342023.706325651] [talker]: Publishing: "Hello World: 13"
[INFO] [1696342024.706483290] [talker]: Publishing: "Hello World: 14"
...

 

In another terminal, you can also run the listener node, which subscribes to the /chatter topic and prints to the screen what the talker node “said”:

source /opt/ros/iron/setup.bash
ros2 run demo_nodes_py listener

The output should be similar to this:

...
[INFO] [1696342081.719585297] [listener]: I heard: [Hello World: 71]
[INFO] [1696342082.709465778] [listener]: I heard: [Hello World: 72]
[INFO] [1696342083.709447192] [listener]: I heard: [Hello World: 73]
[INFO] [1696342084.709592572] [listener]: I heard: [Hello World: 74]
[INFO] [1696342085.708058493] [listener]: I heard: [Hello World: 75]
[INFO] [1696342086.708537524] [listener]: I heard: [Hello World: 76]
[INFO] [1696342087.708396171] [listener]: I heard: [Hello World: 77]
...
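Conceptually, the talker and listener demonstrate the publish/subscribe pattern: the talker does not know who (if anyone) is listening, it just publishes to /chatter, and every subscriber’s callback fires. Here is a ROS-free toy sketch of that idea (a simplification, not how ROS 2 transports messages):

```python
from collections import defaultdict

class TinyBus:
    """Toy publish/subscribe bus: topics map to lists of callbacks,
    loosely mimicking how ROS 2 topics decouple talker and listener."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Every subscriber on the topic gets the message
        for callback in self._subs[topic]:
            callback(msg)

bus = TinyBus()
heard = []
bus.subscribe("/chatter", lambda msg: heard.append(msg))
for i in range(3):
    bus.publish("/chatter", f"Hello World: {i}")
print(heard)   # ['Hello World: 0', 'Hello World: 1', 'Hello World: 2']
```

The real system adds discovery, serialization, and network transport via DDS, but the decoupling between publisher and subscriber is the same.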

 

 

Using ROS on The Construct (not having to install ROS on your own computer)

Alright, we learned how to install ROS on Ubuntu 22. Turns out that some people may not have a computer with Linux Ubuntu 22 installed, and do not want all the hassle of installing ROS locally.

Fortunately, we have The Construct, a platform that allows us to use ROS 2 online without having to install anything.

In order to use ROS, you just have to first create an account, create a rosject, and then “run” the rosject. Below we have the step-by-step process.

  1. First, create your account at https://app.theconstructsim.com/
  2. Once authenticated, go to the My Rosjects page and click the “Create a new rosject” button: https://app.theconstructsim.com/rosjects/my_rosjects
Create a new rosject


  3. On the Create rosject form, select the ROS Distribution that you want to use (you can choose ROS2 Humble, ROS2 Iron, etc.)
  4. Once the rosject is created, you can just press RUN to start the ROS environment, similar to what we can see in the image below:

RUN rosject

 

After the environment is running, you can just open a terminal and start creating and running your ros programs:

 

Open a new Terminal


And that is basically it

Congratulations. Now you know how to install ROS2 on your own computer, and you also know The Construct, a platform where you can program your ROS projects with ease.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready- Hands-On Training by The Construct cover.png


How to spawn a Gazebo robot using XML launch files


What we are going to learn

  1. How to start Gazebo
  2. How to spawn a robot to Gazebo
  3. How to run the Robot State Publisher node
  4. How to start Rviz configured

List of resources used in this post

  1. Use the rosject: https://app.theconstructsim.com/l/56476c77/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days Humble (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days Humble (C++): https://app.theconstructsim.com/Course/133

Overview

While many examples demonstrate how to spawn Gazebo robots using Python launch files, in this post, we will be learning how to achieve the same result using XML launch files. Let’s get started!

ROS Inside!

ROS Inside


Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject with a simulation for that: https://app.theconstructsim.com/l/56476c77/.

You can download the rosject on your own computer if you want to work locally, but just by copying the rosject (clicking the link), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).

Learn ROS2 Parameters - Run rosject

How to spawn a Gazebo robot using XML launch files – Run rosject (example of the RUN button)

 

After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Compiling the workspace

As you may already know, instead of using a real robot, we are going to use a simulation. In order to spawn that simulated robot, we need to have our workspace compiled, and for that, we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

 

Open a new Terminal


 

Once inside the first terminal, let’s run the commands below to compile the workspace:

cd ~/ros2_ws
colcon build
source install/setup.bash
There may be some warning messages when running “colcon build”. Let’s just ignore those messages for now.
If everything went well, you should have a message saying that 3 packages were compiled:
How to spawn a Gazebo robot using XML launch files - ros2_ws compiled


Starting the Gazebo simulator

Now that our workspace is compiled, let’s run a Gazebo simulation and RViz using regular Python launch files.

For that, run the following command in the terminal:

ros2 launch minimal_diff_drive_robot gazebo_and_rviz.launch.py
Again, you may see some error messages. As long as the simulation appears, you can just ignore those error messages.
Now, in a second terminal, let’s also launch the Joint State Publisher, so that the wheel joints are published and we can properly see the robot in RViz (the Robot Visualization tool).

 

ros2 run joint_state_publisher joint_state_publisher
Now you should be able to see both Gazebo simulator and RViz, similar to what we can see in the image below:
How to spawn a Gazebo robot using XML launch files - Gazebo and RViz launched


In case you want to know, the content of the file used to spawn the robot can be seen with:

cat ~/ros2_ws/src/minimal_diff_drive_robot/launch/gazebo_and_rviz.launch.py

 

Moving the robot around

To make sure everything is working as expected so far, you can also run a new command to move the robot around using the keyboard. For that, open a third terminal, and run the following command:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

Now, to move the robot around just press the keys “i“, “k“, or other keys presented in the terminal where you launched the teleop_twist_keyboard command.

The XML file for spawning the robot

As we mentioned earlier, the code for the Python launch file can be seen with:

cat ~/ros2_ws/src/minimal_diff_drive_robot/launch/gazebo_and_rviz.launch.py

 

And the XML file? It lives in the exact same folder, just with an .xml extension. Its content can be seen with:

cat ~/ros2_ws/src/minimal_diff_drive_robot/launch/gazebo_and_rviz.launch.xml
The command above outputs the following:
<?xml version="1.0"?>

<launch>
  <arg name="model" default="$(find-pkg-share minimal_diff_drive_robot)/urdf/minimal_diff_drive_robot.urdf" />

  <arg name="start_gazebo" default="true" />
  <arg name="start_rviz" default="true" />

  <!-- Start Gazebo -->
  <group if="$(var start_gazebo)">
    <include file="$(find-pkg-share gazebo_ros)/launch/gazebo.launch.py">
      <!--arg name="paused" value="true"/>
      <arg name="use_sim_time" value="true"/>
      <arg name="gui" value="true"/>
      <arg name="recording" value="false"/>
      <arg name="debug" value="false"/>
      <arg name="verbose" value="true"/-->
    </include>

    <!-- Spawn robot in Gazebo -->
    <node name="spawn_robot_urdf" pkg="gazebo_ros" exec="spawn_entity.py"
      args="-file $(var model) -x 0.0 -y 0.0 -z 0.0 -entity my_robot" output="screen" />
  </group>

  <!-- TF description -->
  <node name="robot_state_publisher" pkg="robot_state_publisher" exec="robot_state_publisher" output="screen">
    <param name="robot_description" value="$(command 'cat $(var model)')"/>
    <param name="use_sim_time" value="true" />
  </node>

  <!-- Show in Rviz   -->
  <group if="$(var start_rviz)">
    <node name="rviz" pkg="rviz2" exec="rviz2" args="-d $(find-pkg-share minimal_diff_drive_robot)/config/robot.rviz">
      <param name="use_sim_time" value="true" />
    </node>
  </group>

</launch>

If we look carefully at the output above, we can see that we first launch the Gazebo simulator and, in the same <group>, spawn the robot in Gazebo by calling spawn_entity.py.

Then we launch the Robot State Publisher to be able to see the robot in RViz, and finally, we launch RViz itself.

When launching RViz, we tell it to use a file named config/robot.rviz, as we can see at:

$(find-pkg-share minimal_diff_drive_robot)/config/robot.rviz
That substitution resolves to a file that, in this rosject’s source tree, can be inspected with:
cat ~/ros2_ws/src/minimal_diff_drive_robot/config/robot.rviz
Feel free to check the content of that file, be it through the Code Editor, or in the terminal by checking what the cat command outputs.
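To get a quick structural overview of a launch XML like the one above, you can parse it with the standard library and list the nodes it declares. This is just a convenience sketch (not part of the rosject), using a trimmed-down copy of the launch file:

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of gazebo_and_rviz.launch.xml for illustration
LAUNCH_XML = """
<launch>
  <group>
    <include file="gazebo.launch.py"/>
    <node name="spawn_robot_urdf" pkg="gazebo_ros" exec="spawn_entity.py"/>
  </group>
  <node name="robot_state_publisher" pkg="robot_state_publisher"
        exec="robot_state_publisher"/>
  <group>
    <node name="rviz" pkg="rviz2" exec="rviz2"/>
  </group>
</launch>
"""

root = ET.fromstring(LAUNCH_XML)
# .iter() walks nested <group> elements too, in document order
nodes = [n.attrib["name"] for n in root.iter("node")]
print(nodes)   # ['spawn_robot_urdf', 'robot_state_publisher', 'rviz']
```

This mirrors the reading order described above: spawn the robot, start the Robot State Publisher, then start RViz.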

Spawning the robot in Gazebo using XML launch files

Similar to what we did with Python, you can just run the following command to spawn the robot using the XML launch file.

Please, remember to kill the previous programs by pressing CTRL+C in the terminals where you launched the commands previously.

Assuming all previous programs are now terminated, let’s launch Gazebo using the XML launch file in the first terminal:

ros2 launch minimal_diff_drive_robot gazebo_and_rviz.launch.xml
Now, in the second terminal, let’s launch the Joint State Publisher to be able to correctly see the robot wheels in RViz:
ros2 run joint_state_publisher joint_state_publisher

And on the third terminal, you can start the command to move the robot around:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

And that is basically it

Congratulations. Now you know how to spawn a robot in Gazebo using Python and also using XML launch files.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready- Hands-On Training by The Construct cover.png


[ROS2 Q&A] How to follow waypoints using nav2 #232


What we are going to learn

  1. How to launch a functional nav2 system
  2. How to use nav2 simple commander API
  3. How to launch nav2 waypoint follower module

List of resources used in this post

  1. Use the rosject: https://app.theconstructsim.com/l/4da61f89/
  2. The Construct: https://app.theconstructsim.com/
  3. Nav2 simple commander API: https://github.com/ros-planning/navigation2/tree/main/nav2_simple_commander
    1. https://navigation.ros.org/commander_api/index.html
  4. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days Humble (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days Humble (C++): https://app.theconstructsim.com/Course/133

Overview

In this post, we’ll be learning how to use the Nav2 Simple Commander API to write a program that makes your robot follow waypoints.

What we are going to create is something like a patrolling system, in which the robot patrols a given area.

ROS Inside!

ROS Inside


Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject with a simulation for that: https://app.theconstructsim.com/l/4da61f89/.

You can download the rosject on your own computer if you want to work locally, but just by copying the rosject (clicking the link), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).

Learn ROS2 Parameters - Run rosject

How to follow waypoints using nav2 – Run rosject (example of the RUN button)

 

After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Launching the simulation

As you may imagine, instead of using a real robot, we are going to use a simulation. The simulation package we are using, neo_simulation2 (by Neobotix), comes along with all the new ROS 2 features.

Like its predecessor, the neo_simulation2 package is fully equipped with all the Neobotix robots that are available in the market.

 

By the way, Neobotix is a state-of-the-art manufacturer of mobile robots and robot systems, offering robots and manipulators for all applications with full ROS support. Neobotix products range from small mobile robots to mobile robot arms and several omnidirectional robots, and they specialize in designing customized mobile robots to meet unique requirements.

 

Combining the novelty of ROS 2 and the state-of-the-art Neobotix platforms allows users to learn and develop reliable and robust applications that cater to their needs, both in research and in industry.

Alright, having opened the rosject and explained a little bit about Neobotix, let’s start running some commands in the terminal. For that, let’s open a terminal by clicking the Open a new terminal button.

 

Open a new Terminal


 

Once inside the first terminal, let’s run the commands below to launch the simulation:

cd ros2_ws
source install/setup.bash
ros2 launch neo_simulation2 simulation.launch.py
There will be countless red error messages on this simulation terminal. Let’s just ignore those messages for now.
In case you want to know a bit more about Neobotix robots, here is what they offer.

ROS2 Navigation

In order to move the robot to a desired goal location, pre-defined controllers and planners are available and ready to use. Navigation2, the ROS 2 navigation stack, provides localization, global planning, and local planning algorithms that allow us to quickly test our intended application on the robot.

Almost all the algorithms found in move_base (ROS-1 Navigation stack) are available in Navigation2. All the Neobotix robots in the simulation for ROS-2 are primed and ready with Navigation2.

Once the simulation is started, the ROS 2 navigation stack can be launched. In a second terminal, launch the Localization Server using the following command:

ros2 launch localization_server localization.launch.py
And in a third terminal, we can launch the Path Planner Server:
ros2 launch path_planner_server pathplanner.launch.py

The commands above should have launched the simulation, Localization Server, and Path Planner server.

After some seconds, we should have Gazebo (simulation), RViz (Robot Visualization), and Teleop running now. The simulation should be similar to the following:

Simulation - How to follow waypoints using nav2


 

If the Gazebo simulation doesn’t pop up:

  • Please open Gazebo from the menu bar at the bottom
  • RViz should have loaded as well and can be found in the Graphical Tools
  • Another terminal should also have popped up in the Graphical Tools for teleoperation. Please follow the instructions given in that terminal to move the robot.

To make sure everything is working so far, you can send a 2D Nav Goal in RViz and check that the robot navigates to it.

The files used to launch the Localization Server and Path Planner can be found at the following paths:

ls ~/ros2_ws/src/neobotix_mp_400_navigation/localization_server/launch/localization.launch.py
ls ~/ros2_ws/src/neobotix_mp_400_navigation/path_planner_server/launch/pathplanner.launch.py

These files can also be seen in the Code Editor:

Localization Server and Path Planner - How to follow waypoints using nav2


 

Feel free to localize and send goals to the robot as shown in this video about ROS2 Navigation for Omnidirectional Robots:

 

Global Costmap and Local Costmap in RViz

Assuming you have RViz running, you can add Global and Local costmaps to it. For that, click the Add button on the bottom left side of RViz, then Add by Topic, then select Global Costmap:

Add by topic - Global Costmap - How to follow waypoints using nav2


 

To add Local Costmap, click the Add button on the bottom left side of RViz, then Add by Topic, then select the Map under Local Costmap:

Add by topic - Local Costmap - How to follow waypoints using nav2


 

Assuming everything went well so far, now we are going to test the waypoint follower.

Waypoint follower

If you forked the rosject (by clicking the link we provided earlier), you should already have a package named follow_waypoints in your ros2_ws/src folder, but for documentation purposes, and in case you want to know the baby steps, here is how we created that package.

First, in a fourth terminal we created that package:

cd ~/ros2_ws/src

ros2 pkg create --build-type ament_python follow_waypoints
By listing the content of that ~/ros2_ws/src folder, we see that the package has been created:
ls

# follow_waypoints  neo_local_planner2  neo_simulation2  neobotix_mp_400_navigation

That follow_waypoints package contains a folder with the same name. In that folder, we created a file named follow_waypoints.py:

cd ~/ros2_ws/src/follow_waypoints/follow_waypoints

touch follow_waypoints.py

chmod +x follow_waypoints.py

The touch command was used to create the file, and the chmod +x command was used to give execution permissions to that file (make it executable, basically)

We then pasted some content on the follow_waypoints.py file. You can see the content by opening that file using the Code Editor.

The content we pasted is basically a modified version of https://github.com/ros-planning/navigation2/blob/main/nav2_simple_commander/nav2_simple_commander/example_waypoint_follower.py

Inspection Route - To navigation to - How to follow waypoints using nav2


 

On lines 33 to 36 of the script, we define an inspection_route variable, which is essentially an array of waypoints (positions in the map) that the robot has to visit when patrolling.

#! /usr/bin/env python3
# Copyright 2021 Samsung Research America
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import time
from copy import deepcopy

from geometry_msgs.msg import PoseStamped
from rclpy.duration import Duration
import rclpy

from nav2_simple_commander.robot_navigator import BasicNavigator, NavigationResult


def main():
    rclpy.init()

    navigator = BasicNavigator()

    # Inspection route, probably read in from a file for a real application
    # from either a map or drive and repeat.
    inspection_route = [ # simulation points
        [5.0, 0.0],
        [-5.0, -5.0],
        [-5.0, 5.0]]


    # Set our demo's initial pose
    # initial_pose = PoseStamped()
    # initial_pose.header.frame_id = 'map'
    # initial_pose.header.stamp = navigator.get_clock().now().to_msg()
    # initial_pose.pose.position.x = 3.45
    # initial_pose.pose.position.y = 2.15
    # initial_pose.pose.orientation.z = 1.0
    # initial_pose.pose.orientation.w = 0.0
    # navigator.setInitialPose(initial_pose)

    # Wait for navigation to fully activate
    navigator.waitUntilNav2Active()

    while rclpy.ok():

        # Send our route
        inspection_points = []
        inspection_pose = PoseStamped()
        inspection_pose.header.frame_id = 'map'
        inspection_pose.header.stamp = navigator.get_clock().now().to_msg()
        inspection_pose.pose.orientation.z = 1.0
        inspection_pose.pose.orientation.w = 0.0
        for pt in inspection_route:
            inspection_pose.pose.position.x = pt[0]
            inspection_pose.pose.position.y = pt[1]
            inspection_points.append(deepcopy(inspection_pose))
        nav_start = navigator.get_clock().now()
        navigator.followWaypoints(inspection_points)

        # Do something during our route (e.g. AI to analyze stock information or upload to the cloud)
        # Simply print the current waypoint ID for the demonstration
        i = 0
        while not navigator.isNavComplete():
            i = i + 1
            feedback = navigator.getFeedback()
            if feedback and i % 5 == 0:
                print('Executing current waypoint: ' +
                    str(feedback.current_waypoint + 1) + '/' + str(len(inspection_points)))

        result = navigator.getResult()
        if result == NavigationResult.SUCCEEDED:
            print('Inspection of shelves complete! Returning to start...')
        elif result == NavigationResult.CANCELED:
            print('Inspection of shelving was canceled. Returning to start...')
            exit(1)
        elif result == NavigationResult.FAILED:
            print('Inspection of shelving failed! Returning to start...')

        # go back to start
        # initial_pose.header.stamp = navigator.get_clock().now().to_msg()
        # navigator.goToPose(initial_pose)
        while not navigator.isNavComplete():
            pass


if __name__ == '__main__':
    main()
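One detail in the script worth highlighting: the waypoint loop reuses a single `inspection_pose` object and appends `deepcopy(inspection_pose)` each time. Without the copy, every list entry would alias the same pose, and all waypoints would end up with the coordinates of the last one. A minimal ROS-free illustration (the `Pose` class below is a stand-in for `PoseStamped`, not the real message type):

```python
from copy import deepcopy

class Pose:
    """Stand-in for geometry_msgs PoseStamped (illustration only)."""
    def __init__(self):
        self.x = 0.0
        self.y = 0.0

route = [[5.0, 0.0], [-5.0, -5.0], [-5.0, 5.0]]

pose = Pose()
points = []
for pt in route:
    pose.x, pose.y = pt
    points.append(deepcopy(pose))   # independent copy, as in the script

print([(p.x, p.y) for p in points])
# [(5.0, 0.0), (-5.0, -5.0), (-5.0, 5.0)]

# Without deepcopy, every entry would alias the same object:
aliased = []
for pt in route:
    pose.x, pose.y = pt
    aliased.append(pose)
print([(p.x, p.y) for p in aliased])
# [(-5.0, 5.0), (-5.0, 5.0), (-5.0, 5.0)]
```

This is a classic Python pitfall with mutable message objects, so keep it in mind whenever you build lists of poses for followWaypoints.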


 

In addition to that follow_waypoints.py file, we also had to create the ~/ros2_ws/src/follow_waypoints/config/follow_waypoints.yaml and ~/ros2_ws/src/follow_waypoints/setup.py files.

  • Please check those files. If you want a deeper explanation about them, please check the video available at the end of this post.
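For reference, the reason `ros2 run follow_waypoints follow_waypoints_exe` works later on is an entry point declared in setup.py. Here is a sketch of the relevant stanza; the executable and module names are inferred from the run command used below, so treat them as assumptions and check the actual file in the rosject:

```python
# Hypothetical excerpt of ~/ros2_ws/src/follow_waypoints/setup.py.
# 'console_scripts' maps an executable name (what `ros2 run` sees)
# to "package.module:function".
entry_points = {
    'console_scripts': [
        'follow_waypoints_exe = follow_waypoints.follow_waypoints:main',
    ],
}

exe, target = entry_points['console_scripts'][0].split(' = ')
print(exe)      # follow_waypoints_exe
print(target)   # follow_waypoints.follow_waypoints:main
```

When colcon builds the package, this entry point is what generates the executable that `ros2 run` can find after sourcing the workspace.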

 

Alright, after having created that package and the required configuration files, the next step was compiling the package:

cd ~/ros2_ws

colcon build; source install/setup.bash

 

Then, to see the robot following the waypoints, we can run:

cd ~/ros2_ws

source install/setup.bash

ros2 run follow_waypoints follow_waypoints_exe

 

Looking at the simulation and at RViz, you should be able to see the robot moving.

Congratulations. You just learned how to make a robot follow waypoints using Nav2 (the official ROS 2 navigation package).

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:
