How to Export a 3D Robot Model to ROS2 | Onshape CAD to URDF


What we are going to learn

  1. Learn how to export a robot model from OnShape to URDF so that we can integrate it with ROS2

List of resources used in this post

  1. Use this rosject: https://app.theconstructsim.com/l/5ee7cc96/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. URDF for Robot Modeling in ROS2: https://app.theconstructsim.com/courses/83/
    2. ROS2 Basics in 5 Days (Python): https://app.theconstructsim.com/Course/132
    3. ROS2 Basics in 5 Days (C++): https://app.theconstructsim.com/Course/133
  4. OnShape: https://www.onshape.com/en/

Do you want to master robotics? Robotics Developer Master Class: https://www.theconstruct.ai/robotics-developer/

Overview

ROS2 (Robot Operating System version 2) is widely used in robotics, and it uses robot models in a format called URDF (Unified Robotics Description Format).

OnShape is a 3D CAD (3-dimensional computer-aided design) tool that allows anyone to easily create 3D models using only a Web Browser.

In this post, we are going to learn how to export models from OnShape to URDF, so that the model can be used in ROS2 programs.

ROS Inside!

ROS Inside

Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it, print it, and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Creating an OnShape account

Alright, since we are going to export a model from OnShape, the first thing that you need is an OnShape account.

Feel free to create an account at: https://www.onshape.com/en/

On OnShape you can create your own design, or use any existing design already provided by OnShape.

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5ee7cc96/.

Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).

How to Export a 3D Robot Model to ROS2 | Onshape CAD to URDF – Run rosject (example of the RUN button)

 

After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Installing the onshape-to-robot package

In order to install a package (and interact with ROS2), we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

 

Open a new Terminal

 

In order to install onshape-to-robot, please run the following command in the terminal:

sudo pip install onshape-to-robot
We installed it at the system level.
If you want, you can also install it in a Python Virtual Environment.

If you want to install it in a virtual environment

If for any reason you are using a computer on which you don’t have root access, you can create a virtual environment and install onshape-to-robot there.
First, go to your home directory:
cd
The virtual environment can be created with the following command:
python -m venv onshape_venv
Then, “enable” (activate) the virtual environment:
source onshape_venv/bin/activate
You should now see (onshape_venv) in your Linux prompt. Now you can install onshape-to-robot in this virtual environment:
pip install onshape-to-robot

Install dependencies

To make the export from OnShape to URDF work, we also need to add openscad and meshlab. We can install them with the following commands:

sudo add-apt-repository ppa:openscad/releases
sudo apt-get update
sudo apt-get install openscad -y
sudo apt-get install meshlab -y

Add the OnShape keys to the `keys.sh` file

After installing the dependencies, the next step is to get the API keys from OnShape.
We need this because onshape-to-robot has to authenticate to OnShape in order to access the model that we are going to export.
In order to get those keys, you have to go to https://dev-portal.onshape.com/keys and click on the Create new API key button.
After clicking that button, for the purposes of this tutorial, you can just select all the permissions, and then click Create API key.
Then you will be presented with the keys, something similar to what we have below:
How to Export a 3D Robot Model to ROS2 – Onshape CAD to URDF

Make sure you copy that information in a safe place because the Secret Key won’t be shown again once you click the Close button.
Now, let’s create a keys.sh file with these secrets:
mkdir -pv ~/ros2_ws
cd ~/ros2_ws
touch keys.sh
Then you can open that file and paste the following content to it (these are the keys that we just created):
export ONSHAPE_API=https://cad.onshape.com
export ONSHAPE_ACCESS_KEY=5D7TA69e4CiVC82sOFTXJRWM
export ONSHAPE_SECRET_KEY=2uA7a5DHwNFrsHAA9IliZIIwD2Wxud0LhxOms55kLiQHeYl5
You could also have created that file using the Code Editor.

It is worth mentioning that if you want to use the Code Editor, also known as IDE (Integrated Development Environment), you just have to open it as indicated in the image below:

Open the IDE – Code Editor

 

Now, let’s source that file so that the ONSHAPE variables are available in our terminal:
cd ~/ros2_ws
source keys.sh
Now we should be able to see those environment variables:
echo $ONSHAPE_API 

echo $ONSHAPE_ACCESS_KEY 

echo $ONSHAPE_SECRET_KEY

Creating a ROS2 package where our URDF will be exported to

If you are using the rosject that we shared at the beginning of this post, the ROS2 package is already created.

We are going to write here the steps for creating the package, just in case you want to do the project from scratch yourself.

This is how the package was created. First, we enter the src folder of the ros2_ws (ROS2 Workspace):

cd ~/ros2_ws/src
Then we created a package named quadruped_description that depends on the urdf and xacro packages:
ros2 pkg create --build-type ament_cmake quadruped_description --dependencies urdf xacro
Then we entered the quadruped_description folder and created some other useful folders there: quadruped, rviz, and launch.
cd quadruped_description
Let’s first create the quadruped folder:
mkdir quadruped
Now let’s create a config.json file inside that folder. That file will be used by the onshape-to-robot tool:
touch quadruped/config.json
And create the launch and rviz folders:
mkdir launch rviz
Now let’s open that quadruped/config.json file using the Code Editor and paste the following content to it:
{
  "documentId": "33b91de06ddc91b068fcf725",
  "outputFormat": "urdf",
  "packageName": "quadruped_description/quadruped",
  "robotName": "quadruped_robot",
  "assemblyName": "quadruped_v1"
}
This is the minimal information we need to make this export work.
The first field, documentId, is the ID of the OnShape document that you have created. The ID above is the one that was used when creating this post; for your own projects, you will have a different ID. This ID appears in the URL of your project, as we can see below:
OnShape Document ID – ROS2

Alright, let me put the content of the config.json file here again, so it is easier to understand:
{
"documentId": "33b91de06ddc91b068fcf725",
"outputFormat": "urdf",
"packageName": "quadruped_description/quadruped",
"robotName": "quadruped_robot",
"assemblyName": "quadruped_v1"
}
The packageName points to the folder that we created, where the config.json file is located, and it is the place where our files will be exported.
The robotName defines the name of the robot that will be exported.
The assemblyName is the name of our model on OnShape. In the previous image, at the bottom, you can see our model name on the 6th tab: quadruped_v1.
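Before running the exporter, it can help to sanity-check the config.json file. The helper below is a hypothetical sketch (not part of onshape-to-robot); the required field names are the ones used in this post:

```python
import json

# Hypothetical sanity check for quadruped/config.json before running
# onshape-to-robot. The field names follow the config used in this post.
REQUIRED_FIELDS = ["documentId", "outputFormat", "packageName",
                   "robotName", "assemblyName"]


def check_config(text):
    """Parse a config.json string and return a list of missing fields."""
    config = json.loads(text)
    return [field for field in REQUIRED_FIELDS if field not in config]
```

You could call it with, for example, `check_config(open('quadruped/config.json').read())` and expect an empty list back.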

Prepare our CMakeLists.txt to export the folders we just created

The next thing we need is to modify our ~/ros2_ws/src/quadruped_description/CMakeLists.txt file so that the folders that we just created can be “installed” when we build our package.

The final content of that quadruped_description/CMakeLists.txt file is:

cmake_minimum_required(VERSION 3.8)
project(quadruped_description)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

# find dependencies
find_package(ament_cmake REQUIRED)
find_package(urdf REQUIRED)
find_package(xacro REQUIRED)

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()

install(
  DIRECTORY
    rviz
    launch
    quadruped
  DESTINATION
    share/${PROJECT_NAME}/
)

ament_package()

Finally, run onshape-to-robot command

Ok, now the time has come to finally run the command that converts our model from OnShape to URDF.

For that, let’s first enter the right folder:

cd ~/ros2_ws/src/quadruped_description
Then, let’s run the following command (bear in mind that in this rosject we already ran that command. We are putting the commands here basically for documentation reasons):
onshape-to-robot quadruped
In the command above, we are essentially running onshape-to-robot and telling it to go into the quadruped folder and read the config.json file there.
If everything goes well, now, instead of only the config.json file, we should have many files in that quadruped folder.
In that quadruped folder you should also find a robot.urdf file.
The only modification you need to make to that file is to add the following content as the first line, so that the Code Editor can properly highlight the syntax:
<?xml version="1.0"?>
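You can add that line by hand in the Code Editor, or script it. The sketch below is a small hypothetical helper (not part of onshape-to-robot) that prepends the declaration only if it is not already there:

```python
XML_DECLARATION = '<?xml version="1.0"?>\n'


def ensure_xml_declaration(urdf_text):
    """Prepend the XML declaration to URDF text if it is missing."""
    if urdf_text.lstrip().startswith("<?xml"):
        # Already declared; leave the file untouched
        return urdf_text
    return XML_DECLARATION + urdf_text
```

Applied to the exported file, this would look roughly like reading `quadruped/robot.urdf`, passing its content through `ensure_xml_declaration`, and writing it back.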

Creating .launch and .rviz files to be able to see our model in RViz.

Again, these commands were already executed when we first created the rosject that we shared with you at the beginning of this post.

These are the commands we used to create the launch files:

cd ~/ros2_ws/src/quadruped_description

touch launch/quadruped.launch.py

touch launch/start_rviz.launch.py

touch rviz/quadruped.rviz

To see the content of those files, just open them using the Code Editor.
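For reference, a minimal quadruped.launch.py could look roughly like the sketch below. This is an assumption-laden example, not the exact content of the rosject's file: it simply loads the exported robot.urdf from the installed package share folder and hands it to robot_state_publisher.

```python
# Hedged sketch of launch/quadruped.launch.py (the rosject's file may differ):
# load the exported URDF and publish it via robot_state_publisher.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Path to the URDF installed by the CMakeLists.txt rules shown earlier
    urdf_path = os.path.join(
        get_package_share_directory('quadruped_description'),
        'quadruped', 'robot.urdf')

    with open(urdf_path) as urdf_file:
        robot_description = urdf_file.read()

    return LaunchDescription([
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            output='screen',
            parameters=[{'robot_description': robot_description}],
        ),
    ])
```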

Seeing our robot with ROS2

To see our robot model in ROS2, we first need to build our workspace:

cd ~/ros2_ws/
source install/setup.bash
Build the workspace:
colcon build --packages-select quadruped_description
Source setup.bash so that ROS2 knows how to find our package:
source install/setup.bash
Now, let’s launch quadruped.launch.py:
cd ~/ros2_ws/

source install/setup.bash

ros2 launch quadruped_description quadruped.launch.py

Now in a second terminal, let’s run RViz:
cd ~/ros2_ws/

source install/setup.bash

ros2 launch quadruped_description start_rviz.launch.py

After launching RViz and waiting a few seconds for it to show, you will see that the model is not presented well. We can’t see the joints properly.
Let’s publish Fake Joint States to see the model properly.
For that, let’s run the following commands in a third terminal:
cd ~/ros2_ws/

source install/setup.bash

ros2 run joint_state_publisher_gui joint_state_publisher_gui

If you go to the Graphical Tools again and click Joint State Publisher, you should be able to see the robot model properly:
Joint State Publisher – Export a 3D Robot Model to ROS2 – Onshape CAD to URDF

 

Congratulations. You have now learned how to export a 3D model from OnShape to URDF.

If you want to learn more about URDF, have a look at the URDF for Robot Modeling in ROS2 course listed in the resources at the top of this post.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready – Hands-On Training by The Construct

ROS2 – Combine Publisher, Subscriber, Service with Practical Robot Examples (detect objects) – Part 2

What we are going to learn

  1. Learn how to combine a Publisher, a Subscriber, and a Service in the same ROS2 Node to detect objects in a scene

List of resources used in this post

  1. Use this rosject: https://app.theconstructsim.com/l/5c13606c/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days (C++): https://app.theconstructsim.com/Course/133
  4. Apple Detector: https://shrishailsgajbhar.github.io/post/OpenCV-Apple-detection-counting
  5. Banana Detector: https://github.com/noorkhokhar99/Open-CV-Banana-Detection

Overview

ROS2 (Robot Operating System version 2) is becoming the de facto standard “framework” for programming robots.

In this post, we are going to learn how to combine Publisher, Subscriber, and Service in ROS2 to detect bananas and apples in an image.

ROS Inside!

ROS Inside

Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it, print it, and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5c13606c/.

Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).

Combining Publisher, Subscriber & Service in ROS2 Single Node – Run rosject (example of the RUN button)

 

After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Modifying existing files

In order to interact with ROS2, we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

 

Open a new Terminal

 

In this rosject we cloned the https://bitbucket.org/theconstructcore/fruit_detector repository and placed it inside the ~/ros2_ws/src folder. You can see its content with the following command in the terminal:

ls ~/ros2_ws/src/fruit_detector/
the following output should be produced:
custom_interfaces  pub_sub_srv_ros2_pkg_example
A new file called pubsubserv_example.py was created inside the fruit_detector/pub_sub_srv_ros2_pkg_example/scripts folder.
You can see its content with the following command:
cat ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/scripts/pubsubserv_example.py
You could also have created that file using the Code Editor.

If you want to use the Code Editor, also known as IDE (Integrated Development Environment), you just have to open it as indicated in the image below:

Open the IDE – Code Editor

 

The following content was pasted to that file:
#! /usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from custom_interfaces.srv import StringServiceMessage
import os
import cv2
from cv_bridge import CvBridge
import ament_index_python.packages as ament_index

class CombineNode(Node):

    def __init__(self, dummy=True):
        super().__init__('combine_node')

        self._dummy = dummy
        self.current_image = None  # latest camera image; set by image_callback

        self.pkg_path = self.get_package_path("pub_sub_srv_ros2_pkg_example")
        self.scripts_path = os.path.join(self.pkg_path, "scripts")
        cascade_file_path = os.path.join(self.scripts_path, 'haarbanana.xml')

        self.banana_cascade = cv2.CascadeClassifier(cascade_file_path)

        self.bridge = CvBridge()

        self.publisher = self.create_publisher(Image, 'image_detected_fruit', 10)
        self.subscription = self.create_subscription(
            Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)

        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)

        self.get_logger().info('READY CombineNode')



    def get_package_path(self, package_name):
        try:
            package_share_directory = ament_index.get_package_share_directory(package_name)
            return package_share_directory
        except Exception as e:
            print(f"Error: {e}")
            return None


    def image_callback(self, msg):
        self.get_logger().info('Received an image.')
        self.current_image = msg


    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        
        
        if request.detect == "apple":
            if self._dummy:
                # Generate and publish an image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Generate and publish an image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific request           
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

    def generate_unknown_detection_image(self):

        self.unknown_img_path = os.path.join(self.scripts_path, 'unknown.png')
        self.get_logger().warning("Unknown path=" + str(self.unknown_img_path))
        # Load the placeholder image shown for unknown requests
        image = cv2.imread(self.unknown_img_path)
        if image is None:
            self.get_logger().error("Failed to load the unknown image.")
        else:
            self.get_logger().warning("SUCCESS to load the unknown image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")


    def generate_apple_detection_image(self):

        self.apple_img_path = os.path.join(self.scripts_path,'apple.png')
        self.get_logger().warning("Apple path="+str(self.apple_img_path))
        # Load the placeholder image shown for apple detections in dummy mode
        image = cv2.imread(self.apple_img_path)
        if image is None:
            self.get_logger().error("Failed to load the apple image.")
        else:
            self.get_logger().warning("SUCCESS to load the apple image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def generate_banana_detection_image(self):
        self.banana_img_path = os.path.join(self.scripts_path,'banana.png')
        self.get_logger().warning("Banana path="+str(self.banana_img_path))
        # Load the placeholder image shown for banana detections in dummy mode
        image = cv2.imread(self.banana_img_path)
        if image is None:
            self.get_logger().error("Failed to load the banana image.")
        else:
            self.get_logger().warning("SUCCESS to load the banana image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def publish_image(self, image_msg):
        if image_msg is not None:
            self.publisher.publish(image_msg)


    def detect_and_publish_apple(self):
        if self.current_image is not None:
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")

            # Your apple detection code here (from approach_2.py)
            # HIGH=====
            # (0.0, 238.935, 255.0)
            # LOW=====
            # (1.8, 255.0, 66.045)


            low_apple_raw = (0.0, 80.0, 80.0)
            high_apple_raw = (20.0, 255.0, 255.0)

            image_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

            mask = cv2.inRange(image_hsv, low_apple_raw, high_apple_raw)

            cnts, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)
            c_num = 0
            radius = 10
            for i, c in enumerate(cnts):
                ((x, y), r) = cv2.minEnclosingCircle(c)
                if r > radius:
                    print("OK="+str(r))
                    c_num += 1
                    cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
                    cv2.putText(frame, "#{}".format(c_num), (int(x) - 10, int(y)),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
                else:
                    print(r)

            

            # Publish the detected image as a ROS 2 Image message
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)

        else:
            self.get_logger().error("Image NOT found")


    def detect_and_publish_banana(self):
        self.get_logger().warning("detect_and_publish_banana Start")
        if self.current_image is not None:
            self.get_logger().warning("Image found")
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            bananas = self.banana_cascade.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)

            for (x, y, w, h) in bananas:
                cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)
                cv2.putText(frame, 'Banana', (x-10, y-10),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))

            # Publish the detected image as a ROS 2 Image message
            self.get_logger().warning("BananaDetection Image Publishing...")
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)
            self.get_logger().warning("BananaDetection Image Publishing...DONE")
        else:
            self.get_logger().error("Image NOT found")
    


def main(args=None):
    rclpy.init(args=args)
    node = CombineNode(dummy=False)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
After creating that Python file, we also modified the CMakeLists.txt file of the pub_sub_srv_ros2_pkg_example package:
~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/CMakeLists.txt
We basically added ‘scripts/pubsubserv_example.py‘ to the list of files to be installed when we build our ros2 workspace.
In the end, the content of ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/CMakeLists.txt is like this:
cmake_minimum_required(VERSION 3.8)
project(pub_sub_srv_ros2_pkg_example)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

find_package(ament_cmake REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(std_srvs REQUIRED)
find_package(custom_interfaces REQUIRED)


if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  set(ament_cmake_copyright_FOUND TRUE)
  set(ament_cmake_cpplint_FOUND TRUE)
  ament_lint_auto_find_test_dependencies()
endif()

# We add it to be able to use other modules of the scripts folder
install(DIRECTORY
  scripts
  rviz
  DESTINATION share/${PROJECT_NAME}
)

install(PROGRAMS
  scripts/example1_dummy.py
  scripts/example1.py
  scripts/example1_main.py
  scripts/pubsubserv_example.py
  DESTINATION lib/${PROJECT_NAME}
)

ament_package()

We then compiled just the pub_sub_srv_ros2_pkg_example package using the following commands:
cd ~/ros2_ws/

source install/setup.bash
colcon build --packages-select pub_sub_srv_ros2_pkg_example

After the package is compiled, we could run that python script using the following command:

cd ~/ros2_ws

source install/setup.bash

ros2 run pub_sub_srv_ros2_pkg_example pubsubserv_example.py
After running that script, you won’t see much output beyond the READY CombineNode log message, because the node is just waiting for images and service requests.
But, let’s list the running nodes in a second terminal by typing ros2 node list. If everything goes well, we should be able to see the combine_node node:
$ ros2 node list

/combine_node

Launching the simulation

So far we can’t see what our node is capable of.
Let’s launch a simulation so that we can understand our node better.
For that, let’s run the following command in a third terminal:
ros2 launch box_bot_gazebo garden_main.launch.xml
A simulation similar to the following should appear in a few seconds:
Combine Publisher, Subscriber & Service in ROS2 Single Node – Simulation

After launching the simulation, in the first terminal where we launched our node, we should start seeing messages like the following:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
We will soon understand what these messages mean.

See what the robot sees through rviz2

Now that the simulation is running, we can open rviz2 (ROS Visualization version 2).
To make it easier for you to see the robot model, and the robot camera, a fruit.rviz file was created at ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz.
You can tell rviz2 to load that config file using the following command:
rviz2 --display-config ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz
A new screen should pop up in a few seconds, and you should be able to see what the robot camera sees, as well as the robot model.
The ROS2 Topic that we set for the camera was /box_bot_1/box_bot_1_camera/image_raw. You can find this topic if you list the topics in another terminal using ros2 topic list.
If you look at the topic that we subscribe to at the __init__ method of the CombineNode class, it is exactly this topic:
self.subscription = self.create_subscription( Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)
When a new Image message comes, the image_callback method is called. It essentially saves the image in an internal variable called current_image:
def image_callback(self, msg): 
    self.get_logger().info('Received an image.') 
    self.current_image = msg
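Stripped of the ROS specifics, image_callback implements a simple "latest message wins" cache: each incoming frame overwrites the previous one, and the service reads whatever is most recent. A minimal stand-alone sketch of that pattern (plain Python, no rclpy; class and variable names are ours):

```python
class LatestMessageCache:
    """Keep only the most recent message, as image_callback does."""

    def __init__(self):
        self.current_image = None  # nothing received yet

    def image_callback(self, msg):
        # Each new message overwrites the previous one; older frames are dropped
        self.current_image = msg

    def latest(self):
        return self.current_image


cache = LatestMessageCache()
for frame in ["frame-1", "frame-2", "frame-3"]:
    cache.image_callback(frame)
```

This is why the node checks `if self.current_image is not None` before detecting: until the first image arrives, there is nothing cached to process.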
At the __init__ method we also created a service for analyzing an image and detecting whether or not it contains a banana:
    def __init__(self, dummy=True):
        super().__init__('combine_node')
       
        # ...
        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)

    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        
        
        if request.detect == "apple":
            if self._dummy:
                # Generate and publish an image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Generate and publish an image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific request           
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

By analyzing the code above, we see that when the detect_fruit_service service that we created is called, it calls the string_service_callback method that is responsible for detecting bananas and apples.
Now, going back to the messages we see in the first terminal:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
These messages basically say that we are correctly receiving the Image messages from the /box_bot_1/box_bot_1_camera/image_raw topic mentioned earlier.
If we list the services, we should find a service named /detect_fruit_service. Let’s try it, by running the following command in a free terminal:
ros2 service list

You should see a huge number of services, and among them, you should be able to find the following one, that we created:

/detect_fruit_service
By the way, the reason why we have so many services is that the Gazebo simulator generates a lot of services, making it easier to interact with Gazebo using ROS2.
Now, let’s call that service. In order to detect an apple, a banana, and a strawberry, we could run the following commands respectively:
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'apple'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'banana'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'strawberry'"
If you don’t understand the commands we have been using so far, I highly recommend you take the ROS2 Basics course: https://app.theconstructsim.com/courses/132.
Alright. After calling the service to detect a banana, we should have an output similar to the following:
requester: making request: custom_interfaces.srv.StringServiceMessage_Request(detect='banana') 

response: custom_interfaces.srv.StringServiceMessage_Response(success=True, message='Received and processed: banana')
This indicates that the service correctly received and processed the banana request.
If you check the logs in the first terminal where we launched our node, you will also see a message similar to the following:
BananaDetection Image Publishing...
If you check the RViz (Robot Visualizer) window, when detecting the apple you should see a green circle around the apple, just like in the image below:
ROS2 – Combine Publisher, Subscriber, and Service with Practical Robot Examples – Part 2

Congratulations. Now you know how to combine different ROS2 pieces in a single node.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready – Hands-On Training by The Construct

Combine Publisher, Subscriber & Service in ROS2 Single Node | ROS2 Tutorial

What we are going to learn

  1. Learn how to combine a Publisher, a Subscriber, and a Service in the same ROS2 Node

List of resources used in this post

  1. Use this rosject: https://app.theconstructsim.com/l/5c13606c/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days (C++): https://app.theconstructsim.com/Course/133
  4. Apple Detector: https://shrishailsgajbhar.github.io/post/OpenCV-Apple-detection-counting
  5. Banana Detector: https://github.com/noorkhokhar99/Open-CV-Banana-Detection

Overview

ROS2 (Robot Operating System version 2) is becoming the de facto standard “framework” for programming robots.

In this post, we are going to learn how to combine a Publisher, a Subscriber, and a Service in the same ROS2 Node.

ROS Inside!

ROS Inside

Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5c13606c/.

Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).

Combining Publisher, Subscriber & Service in ROS2 Single Node – Run rosject (example of the RUN button)

 

After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Creating the required files

In order to interact with ROS2, we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

 

Open a new Terminal

 

In this rosject we cloned the https://bitbucket.org/theconstructcore/fruit_detector and placed it inside the ~/ros2_ws/src folder. You can see its content with the following command in the terminal:

ls ~/ros2_ws/src/fruit_detector/
The following output should be produced:
custom_interfaces  pub_sub_srv_ros2_pkg_example
A new file called pubsubserv_example.py was created inside the fruit_detector/pub_sub_srv_ros2_pkg_example/scripts folder.
The command used to create that file was:
touch ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/scripts/pubsubserv_example.py
You could also have created that file using the Code Editor.

If you want to use the Code Editor, also known as IDE (Integrated Development Environment), you just have to open it as indicated in the image below:

Open the IDE – Code Editor

 

The following content was pasted into that file:
#! /usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from custom_interfaces.srv import StringServiceMessage
import os
import cv2
from cv_bridge import CvBridge
import ament_index_python.packages as ament_index

class CombineNode(Node):

    def __init__(self, dummy=True):
        super().__init__('combine_node')

        self._dummy = dummy
        # Initialize the image cache so the service callbacks can safely
        # check it before the first camera message arrives.
        self.current_image = None

        self.pkg_path = self.get_package_path("pub_sub_srv_ros2_pkg_example")
        self.scripts_path = os.path.join(self.pkg_path,"scripts")
        cascade_file_path = os.path.join(self.scripts_path,'haarbanana.xml')

        self.banana_cascade = cv2.CascadeClassifier(cascade_file_path)


        self.bridge = CvBridge()

        self.publisher = self.create_publisher(Image, 'image_detected_fruit', 10)
        self.subscription = self.create_subscription(
            Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)

        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)
        

        self.get_logger().info('READY CombineNode')



    def get_package_path(self, package_name):
        try:
            package_share_directory = ament_index.get_package_share_directory(package_name)
            return package_share_directory
        except Exception as e:
            print(f"Error: {e}")
            return None


    def image_callback(self, msg):
        self.get_logger().info('Received an image.')
        self.current_image = msg


    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        
        
        if request.detect == "apple":
            if self._dummy:
                # Generate and publish an image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Generate and publish an image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific request           
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

    def generate_unknown_detection_image(self):

        self.unknown_img_path = os.path.join(self.scripts_path, 'unknown.png')
        self.get_logger().warning("Unknown path=" + str(self.unknown_img_path))
        # Load a pre-rendered "unknown" placeholder image from the scripts folder
        image = cv2.imread(self.unknown_img_path)
        if image is None:
            self.get_logger().error("Failed to load the unknown image.")
        else:
            self.get_logger().warning("SUCCESS to load the unknown image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")


    def generate_apple_detection_image(self):

        self.apple_img_path = os.path.join(self.scripts_path,'apple.png')
        self.get_logger().warning("Apple path="+str(self.apple_img_path))
        # Load a pre-rendered apple-detection image from the scripts folder
        image = cv2.imread(self.apple_img_path)
        if image is None:
            self.get_logger().error("Failed to load the apple image.")
        else:
            self.get_logger().warning("SUCCESS to load the apple image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def generate_banana_detection_image(self):
        self.banana_img_path = os.path.join(self.scripts_path,'banana.png')
        self.get_logger().warning("Banana path="+str(self.banana_img_path))
        # Load a pre-rendered banana-detection image from the scripts folder
        image = cv2.imread(self.banana_img_path)
        if image is None:
            self.get_logger().error("Failed to load the banana image.")
        else:
            self.get_logger().warning("SUCCESS to load the banana image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def publish_image(self, image_msg):
        if image_msg is not None:
            self.publisher.publish(image_msg)


    def detect_and_publish_apple(self):
        if self.current_image is not None:
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")

            # Apple detection: keep the reddish hues by thresholding in HSV space


            low_apple_raw = (0.0, 80.0, 80.0)
            high_apple_raw = (20.0, 255.0, 255.0)

            image_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

            mask = cv2.inRange(image_hsv, low_apple_raw, high_apple_raw)

            cnts, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)
            c_num = 0
            radius = 10
            for i, c in enumerate(cnts):
                ((x, y), r) = cv2.minEnclosingCircle(c)
                if r > radius:
                    print("OK="+str(r))
                    c_num += 1
                    cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
                    cv2.putText(frame, "#{}".format(c_num), (int(x) - 10, int(y)),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
                else:
                    print(r)

            

            # Publish the detected image as a ROS 2 Image message
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)

        else:
            self.get_logger().error("Image NOT found")


    def detect_and_publish_banana(self):
        self.get_logger().warning("detect_and_publish_banana Start")
        if self.current_image is not None:
            self.get_logger().warning("Image found")
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            bananas = self.banana_cascade.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)

            for (x, y, w, h) in bananas:
                cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)
                cv2.putText(frame, 'Banana', (x-10, y-10),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))

            # Publish the detected image as a ROS 2 Image message
            self.get_logger().warning("BananaDetection Image Publishing...")
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)
            self.get_logger().warning("BananaDetection Image Publishing...DONE")
        else:
            self.get_logger().error("Image NOT found")
    


def main(args=None):
    rclpy.init(args=args)
    node = CombineNode(dummy=False)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
After creating that Python file, we also modified the CMakeLists.txt file of the pub_sub_srv_ros2_pkg_example package:
~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/CMakeLists.txt
We basically added scripts/pubsubserv_example.py to the list of files to be installed when we build our ROS2 workspace.
In the end, the content of ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/CMakeLists.txt is like this:
cmake_minimum_required(VERSION 3.8)
project(pub_sub_srv_ros2_pkg_example)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

find_package(ament_cmake REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(std_srvs REQUIRED)
find_package(custom_interfaces REQUIRED)


if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  set(ament_cmake_copyright_FOUND TRUE)
  set(ament_cmake_cpplint_FOUND TRUE)
  ament_lint_auto_find_test_dependencies()
endif()

# We add it to be able to use other modules of the scripts folder
install(DIRECTORY
  scripts
  rviz
  DESTINATION share/${PROJECT_NAME}
)

install(PROGRAMS
  scripts/example1_dummy.py
  scripts/example1.py
  scripts/example1_main.py
  scripts/pubsubserv_example.py
  DESTINATION lib/${PROJECT_NAME}
)

ament_package()

We then compiled only the pub_sub_srv_ros2_pkg_example package using the following commands:
cd ~/ros2_ws/

source install/setup.bash
colcon build --packages-select pub_sub_srv_ros2_pkg_example

After the package is compiled, we can run that Python script using the following commands:

cd ~/ros2_ws

source install/setup.bash

ros2 run pub_sub_srv_ros2_pkg_example pubsubserv_example.py
After running that script, apart from an initial READY CombineNode log message, you will not see any further output yet, because no images are being published and nobody has called our service.
But let’s try to list the nodes in a second terminal by typing ros2 node list. If everything goes well, we should be able to see the combine_node node:
$ ros2 node list

/combine_node

Launching the simulation

So far we can’t see what our node is capable of.
Let’s launch a simulation so that we can understand our node better.
For that, let’s run the following command in a third terminal:
ros2 launch box_bot_gazebo garden_main.launch.xml
A simulation similar to the following should appear in a few seconds:
Combine Publisher, Subscriber & Service in ROS2 Single Node – Simulation

After launching the simulation, in the first terminal where we launched our node, we should start seeing messages like the following:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
We will soon understand what these messages mean.

See what the robot sees through rviz2

Now that the simulation is running, we can open rviz2 (ROS Visualization version 2).
To make it easier for you to see the robot model, and the robot camera, a fruit.rviz file was created at ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz.
You can tell rviz2 to load that config file using the following command:
rviz2 --display-config ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz
A new screen should pop up in a few seconds, and you should be able to see what the robot camera sees, as well as the robot model.
The ROS2 Topic that we set for the camera was /box_bot_1/box_bot_1_camera/image_raw. You can find this topic if you list the topics in another terminal using ros2 topic list.
If you look at the topic that we subscribe to in the __init__ method of the CombineNode class, it is exactly this topic:
self.subscription = self.create_subscription( Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)
When a new Image message comes, the image_callback method is called. It essentially saves the image in an internal variable called current_image:
def image_callback(self, msg): 
    self.get_logger().info('Received an image.') 
    self.current_image = msg
In the __init__ method we also created a service that analyzes the image and detects whether it contains an apple or a banana:
    def __init__(self, dummy=True):
        super().__init__('combine_node')
       
        # ...
        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)

    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        
        
        if request.detect == "apple":
            if self._dummy:
                # Generate and publish an image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Generate and publish an image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific request           
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

By analyzing the code above, we see that when the detect_fruit_service service that we created is called, it calls the string_service_callback method that is responsible for detecting bananas and apples.
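Stripped of the ROS plumbing, string_service_callback is just a dispatch on the request string. Here is a ROS-free sketch of that routing; the Request and Response dataclasses and the handler names are hypothetical stand-ins, not the real custom_interfaces types:

```python
# ROS-free sketch of the routing inside string_service_callback.
# "Request"/"Response" mimic custom_interfaces/srv/StringServiceMessage.
from dataclasses import dataclass


@dataclass
class Request:
    detect: str


@dataclass
class Response:
    success: bool = False
    message: str = ""


def route(request: Request, handlers: dict) -> Response:
    # Pick the handler for the requested fruit; anything else falls back
    # to "unknown", mirroring the if/elif/else chain in the node.
    handler = handlers.get(request.detect, handlers["unknown"])
    handler()
    return Response(success=True,
                    message=f"Received and processed: {request.detect}")


calls = []
handlers = {
    "apple": lambda: calls.append("apple"),
    "banana": lambda: calls.append("banana"),
    "unknown": lambda: calls.append("unknown"),
}

resp = route(Request(detect="strawberry"), handlers)
# A strawberry request falls through to the "unknown" handler,
# yet the response still reports success, just like the real node.
```

This also explains why calling the service with 'strawberry' later in this post still returns success=True: the callback always responds successfully, regardless of which branch ran.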
Now, going back to the messages we see in the first terminal:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
These messages basically say that we are correctly receiving the Image messages from the /box_bot_1/box_bot_1_camera/image_raw topic mentioned earlier.
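As a side note, the bracketed number in those log lines (e.g. 1699306370.709477898) is a Unix timestamp with nanosecond precision. A small standard-library sketch to make it human-readable (assuming UTC):

```python
from datetime import datetime, timezone


def ros_log_stamp_to_utc(stamp: str) -> datetime:
    """Convert a ROS2 log timestamp like '1699306370.709477898' to UTC."""
    seconds, _, nanos = stamp.partition(".")
    # datetime only keeps microsecond resolution, so truncate the nanoseconds
    micros = int(nanos[:6].ljust(6, "0")) if nanos else 0
    return datetime.fromtimestamp(int(seconds),
                                  tz=timezone.utc).replace(microsecond=micros)


print(ros_log_stamp_to_utc("1699306370.709477898"))  # early November 2023, UTC
```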
If we list the services, we should find a service named /detect_fruit_service. Let’s try it by running the following command in a free terminal:
ros2 service list

You should see a huge number of services, and among them, you should be able to find the following one, that we created:

/detect_fruit_service
By the way, the reason why we have so many services is that the Gazebo simulator generates a lot of services, making it easier to interact with Gazebo using ROS2.
Now, let’s call that service. In order to detect an apple, a banana, and a strawberry, we could run the following commands respectively:
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'apple'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'banana'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'strawberry'"
If you don’t understand the commands we have been using so far, I highly recommend you take the ROS2 Basics course: https://app.theconstructsim.com/courses/132.
Alright. After calling the service to detect a banana, we should have an output similar to the following:
requester: making request: custom_interfaces.srv.StringServiceMessage_Request(detect='banana') 

response: custom_interfaces.srv.StringServiceMessage_Response(success=True, message='Received and processed: banana')
Indicating that the service correctly detected a banana.
If you check the logs in the first terminal where we launched our node, you will also see a message similar to the following:
BananaDetection Image Publishing...
So, as you can see, we have in the same ROS2 Node a Publisher, a Subscriber, and a Service.

Congratulations. Now you know how to combine different ROS2 pieces in a single node.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready – Hands-On Training by The Construct

ROS2 C++ Package Creation Guide | ROS2 Tutorial

What we are going to learn

  1. How to create a ROS2 package
  2. How to create a package with some dependencies
  3. How to create many packages in a ROS2 project
  4. How to compile a ROS2 workspace

List of resources used in this post

  1. Use this rosject: https://app.theconstructsim.com/l/5bda8c95/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days Humble (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days Humble (C++): https://app.theconstructsim.com/Course/133

Overview

ROS (Robot Operating System) is becoming the de facto standard “framework” for programming robots. In this post, let’s learn how to create a ROS2 package, essential for giving instructions to robots, using the ros2 command.

ROS Inside!

ROS Inside

Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5bda8c95/.

Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).

ROS2 package creation – Run rosject (example of the RUN button)

 

After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Creating a ros2 package

In order to create a ROS2 package, we need to have a ROS2 Workspace, and for that, we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

 

Open a new Terminal

 

Once inside the first terminal, let’s first run a command that shows the list of available options for ros2:

ros2 -h
The following output should be produced:
ros2 is an extensible command-line tool for ROS 2.
options:
  -h, --help            show this help message and exit

Commands:
  action     Various action related sub-commands
  bag        Various rosbag related sub-commands
  component  Various component related sub-commands
  daemon     Various daemon related sub-commands
  doctor     Check ROS setup and other potential issues
  interface  Show information about ROS interfaces
  launch     Run a launch file
  lifecycle  Various lifecycle related sub-commands
  multicast  Various multicast related sub-commands
  node       Various node related sub-commands
  param      Various param related sub-commands
  pkg        Various package related sub-commands
  run        Run a package specific executable
  security   Various security related sub-commands
  service    Various service related sub-commands
  topic      Various topic related sub-commands
  wtf        Use `wtf` as alias to `doctor`

  Call `ros2 <command> -h` for more detailed usage.

As we can see in the output above, we have a command called pkg, and we can also get help with the ros2 pkg -h command. Let’s try it:
ros2 pkg -h
Running that command produces the following:
Various package related sub-commands
options:
  -h, --help            show this help message and exit

Commands:
  create       Create a new ROS 2 package
  executables  Output a list of package specific executables
  list         Output a list of available packages
  prefix       Output the prefix path of a package
  xml          Output the XML of the package manifest or a specific tag

Since what we want to do is create a package, we could ask for help with the create command shown above. Let’s try it:
ros2 pkg create -h
That gives us the following:
usage: ros2 pkg create [-h] [--package-format {2,3}] [--description DESCRIPTION] [--license LICENSE] [--destination-directory DESTINATION_DIRECTORY] [--build-type {cmake,ament_cmake,ament_python}]
                       [--dependencies DEPENDENCIES [DEPENDENCIES ...]] [--maintainer-email MAINTAINER_EMAIL] [--maintainer-name MAINTAINER_NAME] [--node-name NODE_NAME] [--library-name LIBRARY_NAME]
                       package_name

Create a new ROS 2 package

positional arguments:
  package_name          The package name

options:
  -h, --help            show this help message and exit
  --package-format {2,3}, --package_format {2,3}
                        The package.xml format.
  --description DESCRIPTION
                        The description given in the package.xml
  --license LICENSE     The license attached to this package; this can be an arbitrary string, but a LICENSE file will only be generated if it is one of the supported licenses (pass '?' to get a list)
  --destination-directory DESTINATION_DIRECTORY
                        Directory where to create the package directory
  --build-type {cmake,ament_cmake,ament_python}
                        The build type to process the package with
  --dependencies DEPENDENCIES [DEPENDENCIES ...]
                        list of dependencies
  --maintainer-email MAINTAINER_EMAIL
                        email address of the maintainer of this package
  --maintainer-name MAINTAINER_NAME
                        name of the maintainer of this package
  --node-name NODE_NAME
                        name of the empty executable
  --library-name LIBRARY_NAME
                        name of the empty library

Ok, according to the instructions, we should be able to create a package just using ros2 pkg create PKG_NAME. Let’s try to create a package named my_superbot inside the ros2_ws/src folder.
cd ~/ros2_ws/src

ros2 pkg create my_superbot

Assuming that everything went as expected, we should see something like this:

going to create a new package
package name: my_superbot
destination directory: /root/ros2_ws/src
package format: 3
version: 0.0.0
description: TODO: Package description
maintainer: ['root <root@todo.todo>']
licenses: ['TODO: License declaration']
build type: ament_cmake
dependencies: []
creating folder ./my_superbot
creating ./my_superbot/package.xml
creating source and include folder
creating folder ./my_superbot/src
creating folder ./my_superbot/include/my_superbot
creating ./my_superbot/CMakeLists.txt
According to the log messages, we now have a package called my_superbot, with some files inside the my_superbot folder. The most important files are ./my_superbot/package.xml and ./my_superbot/CMakeLists.txt. The former (package.xml) because it defines the package name, and the latter (CMakeLists.txt) because it contains the “instructions” on how to compile our package.
If you now run the ls command, you should be able to see the my_superbot folder, which is essentially your ROS2 package.
Also, if you run the “tree .” command, you should see the folder structure:
tree .
The package structure:
.
└── my_superbot
    ├── CMakeLists.txt
    ├── include
    │   └── my_superbot
    ├── package.xml
    └── src

4 directories, 2 files
This is the simplest and easiest way of creating a ROS2 package.
If you don’t have the tree command installed, you can install it using the commands below:
sudo apt-get update

sudo apt-get install -y tree
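The skeleton above is nothing more than a few folders plus two text files. To make that concrete, here is a stdlib-only Python sketch that reproduces the same layout; it is purely illustrative and does not generate the real package.xml or CMakeLists.txt content that ros2 pkg create writes:

```python
from pathlib import Path
import tempfile


def make_skeleton(src_dir: Path, pkg_name: str) -> Path:
    """Recreate the folder layout that `ros2 pkg create` produced above."""
    pkg = src_dir / pkg_name
    (pkg / "src").mkdir(parents=True)                 # creates pkg/ and pkg/src/
    (pkg / "include" / pkg_name).mkdir(parents=True)  # pkg/include/<pkg_name>/
    # Placeholder contents only; the real tool writes full manifests here.
    (pkg / "package.xml").write_text(f"<!-- TODO: manifest for {pkg_name} -->\n")
    (pkg / "CMakeLists.txt").write_text(f"project({pkg_name})\n")
    return pkg


with tempfile.TemporaryDirectory() as tmp:
    pkg = make_skeleton(Path(tmp), "my_superbot")
    print(sorted(p.name for p in pkg.iterdir()))
    # ['CMakeLists.txt', 'include', 'package.xml', 'src']
```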

Creating a ros2 package with some dependencies

Most of the time, when we create a package, we basically want to reuse or leverage existing tools (or packages).

Let’s remove the package we just created, and create it again, this time specifying some dependencies:
cd ~/ros2_ws/src

rm -rfv my_superbot
Okay, we just removed the package we created earlier. If you remember, previously we executed “ros2 pkg create -h”, which showed us how to specify dependencies:
...

--dependencies DEPENDENCIES [DEPENDENCIES ...] 
     list of dependencies

...
Let’s now create the package with the same name, this time specifying the rclcpp and std_msgs dependencies:
cd ~/ros2_ws/src


ros2 pkg create my_superbot --dependencies rclcpp std_msgs
If you use the “ls” or “tree” commands, like before, you will see that the package has been successfully created. The main differences are in the contents of the package.xml and CMakeLists.txt files.
ls

tree .

Creating many ros2 packages

There is a principle in Software Development called DRY (Don’t repeat yourself). It basically tells us that we have to reuse code, making code easier to maintain.

There is also the Separation of Concerns (SoC) design principle that manages complexity by partitioning the software system so that each partition is responsible for a separate concern, minimizing the overlap of concerns as much as possible.

In a robotics project, we should ideally have different packages for different purposes. Let’s remove again the package we just created, and rather than creating the package directly on the ros2_ws/src folder, let’s create a project folder there, and then create the packages inside that project folder. Start by removing the existing package:

cd ~/ros2_ws/src

rm -rfv my_superbot
Now, let’s create a folder called superbot_project:
cd ~/ros2_ws/src

mkdir superbot_project

Inside the project folder, we can now create different packages.

cd superbot_project

ros2 pkg create superbot_description

ros2 pkg create superbot_detection

ros2 pkg create superbot_audio
We created 3 packages. If we run “tree .” or “ls -l”, we should be able to see the three packages there:
tree .
The output of the tree command:
.
├── superbot_audio
│   ├── CMakeLists.txt
│   ├── include
│   ├── package.xml
│   └── src
├── superbot_description
│   ├── CMakeLists.txt
│   ├── include
│   │   └── superbot_description
│   ├── package.xml
│   └── src
└── superbot_detection
    ├── CMakeLists.txt
    ├── include
    │   └── superbot_detection
    ├── package.xml
    └── src

12 directories, 6 files

Building our ros2 packages

Now that we have created the packages, even though they don’t contain any meaningful code yet, let’s learn how to compile the workspace that contains them.

For that, we use the “colcon build” command on the main workspace folder:

cd ~/ros2_ws/

colcon build
source install/setup.bash
Assuming that everything worked nicely, the output should be similar to the following:
Starting >>> superbot_audio
Starting >>> superbot_description
Starting >>> superbot_detection
Finished <<< superbot_description [0.67s]                                                                                            
Finished <<< superbot_audio [0.69s]
Finished <<< superbot_detection [0.68s]

Summary: 3 packages finished [0.83s]
If we now run the “ls” command, we should see three new folders there: build, install, and log, in addition to the src folder that we created.
ls

# build  install  log  src
When you compile your workspace, if you want to make ROS2 aware that your packages are compiled and ready to use, you have to tell it where to find the packages using the “source” command. That is why we used it after the “colcon build” command.
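Conceptually, setup.bash works by prepending the workspace’s install directory to PATH-like environment variables such as AMENT_PREFIX_PATH, so the ros2 tools can locate your packages. The toy sketch below (a hypothetical helper, not the real script) shows that prepend-if-missing behavior:

```python
def prepend_unique(value: str, path: str, sep: str = ":") -> str:
    """Prepend `path` to a PATH-like string unless it is already present.

    ament uses ':' as the separator on Linux; this is a toy model of what
    sourcing setup.bash does to variables like AMENT_PREFIX_PATH.
    """
    parts = [p for p in value.split(sep) if p]
    if path in parts:
        return value  # already sourced; don't duplicate the entry
    return sep.join([path] + parts)


env = "/opt/ros/humble"
env = prepend_unique(env, "/home/user/ros2_ws/install")
print(env)  # '/home/user/ros2_ws/install:/opt/ros/humble'
```

This is also why you should re-source after every build in a fresh terminal: a new shell starts without your workspace on those variables.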

Congratulations. Now you know how to create your own packages in ROS2, and how to compile them.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready – Hands-On Training by The Construct

Install ROS2 Iron Irwini on Ubuntu 22 | ROS2 Tutorial

What we are going to learn

  1. How to install ROS2 Iron on Ubuntu 22 on your own computer
  2. How to use ROS without having to install anything

List of resources used in this post

  1. Your own computer with Ubuntu 22 installed
  2. The Construct: https://app.theconstructsim.com/
  3. https://docs.ros.org/en/iron/Installation/Ubuntu-Install-Debians.html
  4. https://man7.org/linux/man-pages/man7/locale.7.html
  5. https://help.ubuntu.com/community/Repositories/Ubuntu
  6. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days Humble (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days Humble (C++): https://app.theconstructsim.com/Course/133

Overview

ROS2 is a “framework” for developing robotics applications. Its real-time capabilities, cross-platform support, security features, language flexibility, improved communication, modularity, community support, and industry adoption make it a valuable framework for robotic development.

In this tutorial, we are going to learn how to install it on our own computers.

ROS Inside!

ROS Inside

Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Setting up locales

According to ROS documentation, we need to have support for UTF-8, in order for ROS2 to work properly.

To set up locales for UTF-8 support, we can run the following commands on Ubuntu 22. Let’s start by installing the locales command:

sudo apt update && sudo apt install locales -y

 

According to locale docs:

A locale is a set of language and cultural rules.  These cover
aspects such as language for messages, different character sets,
lexicographic conventions, and so on.  A program needs to be able
to determine its locale and act accordingly to be portable to
different cultures.

Once the locales package is installed, let’s configure UTF-8 in our system:

sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8
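As the locale documentation quoted above says, a program determines its locale from the environment and acts accordingly. A tiny Python illustration of what a program sees after this configuration:

```python
import locale

# A program adopts the locale named by the environment (LANG, LC_ALL,
# ...).  On a system configured as above, the preferred text encoding
# reported here is UTF-8.
try:
    locale.setlocale(locale.LC_ALL, "")   # adopt the environment's locale
except locale.Error:
    pass                                  # environment names an unavailable locale
print(locale.getpreferredencoding())
```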

 

Now, if we run the locale command we should be able to see UTF-8:

locale

In the output, you should see UTF-8 in all variables that have a value. Something similar to the following:

LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE=pt_BR.UTF-8
LC_NUMERIC=pt_BR.UTF-8
LC_TIME=pt_BR.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=pt_BR.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=pt_BR.UTF-8
LC_NAME=pt_BR.UTF-8
LC_ADDRESS=pt_BR.UTF-8
LC_TELEPHONE=pt_BR.UTF-8
LC_MEASUREMENT=pt_BR.UTF-8
LC_IDENTIFICATION=pt_BR.UTF-8
LC_ALL=

 

Setting up repositories

Now that the locale is ready for UTF-8, let’s enable the repositories we need for installing ROS.

Let’s start by enabling the Universe Repository (which contains community-maintained free and open-source software):

sudo apt install software-properties-common -y
sudo add-apt-repository universe

 

Now that the repository has been added, let’s get the ROS2 GPG key, necessary when downloading the ROS2 packages:

sudo apt update && sudo apt install curl -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg

 

Now we need to add the ROS2 repository to the list of enabled repositories from where we can download packages. The repository is added to the /etc/apt/sources.list.d/ros2.list file using the following command:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list
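The shell one-liner above reads /etc/os-release to discover the distribution codename and assembles the apt source line. Here is a Python sketch of the same assembly; the sample os-release text below is illustrative:

```python
# Sketch of what the shell one-liner assembles: parse os-release
# key=value pairs, pick the codename, and build the apt source line.
def ros2_apt_line(os_release_text: str, arch: str = "amd64") -> str:
    info = {}
    for line in os_release_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    codename = info.get("UBUNTU_CODENAME") or info.get("VERSION_CODENAME", "")
    return (f"deb [arch={arch} signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] "
            f"http://packages.ros.org/ros2/ubuntu {codename} main")

sample = 'NAME="Ubuntu"\nVERSION_ID="22.04"\nUBUNTU_CODENAME=jammy\n'
print(ros2_apt_line(sample))
# deb [arch=amd64 signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu jammy main
```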

Installing ROS 2 development tools

If you are interested in installing ROS, you are probably going to create some packages, build the workspace, etc. For that, it is recommended to install the development tools.

These tools can be installed using the following command:

sudo apt update && sudo apt install ros-dev-tools -y

Upgrading packages, and Installing ROS 2 Iron

Now that we have all the requirements in place, we can install ROS2 Iron. But before we do that, since ROS leverages existing system tools, let’s upgrade the packages installed on our system so that we have the most recent versions of the programs that are already installed.

sudo apt update

sudo apt-get upgrade -y

 

Now that we have all base packages upgraded, we can install the ROS Desktop version using the next command:

sudo apt install ros-iron-desktop -y

 

Testing the ROS 2 installation

Now that ROS is installed, let’s run an example of a node named talker that publishes a message to a topic called /chatter.

Before running a node, we need to “enable” the ROS installation. We do that using the source command:

source /opt/ros/iron/setup.bash

 

Now that the current terminal is aware of ROS, we can run the talker with the command below:

source /opt/ros/iron/setup.bash
ros2 run demo_nodes_py talker

 

If everything went ok, you should see an output similar to the following:

[INFO] [1696342010.719351116] [talker]: Publishing: "Hello World: 0"
[INFO] [1696342011.707609990] [talker]: Publishing: "Hello World: 1"
[INFO] [1696342012.707533232] [talker]: Publishing: "Hello World: 2"
[INFO] [1696342013.707451283] [talker]: Publishing: "Hello World: 3"
[INFO] [1696342014.707842625] [talker]: Publishing: "Hello World: 4"
[INFO] [1696342015.706340664] [talker]: Publishing: "Hello World: 5"
[INFO] [1696342016.707204262] [talker]: Publishing: "Hello World: 6"
[INFO] [1696342017.707310619] [talker]: Publishing: "Hello World: 7"
[INFO] [1696342018.707408333] [talker]: Publishing: "Hello World: 8"
[INFO] [1696342019.707478561] [talker]: Publishing: "Hello World: 9"
[INFO] [1696342020.706401798] [talker]: Publishing: "Hello World: 10"
[INFO] [1696342021.707534531] [talker]: Publishing: "Hello World: 11"
[INFO] [1696342022.706507971] [talker]: Publishing: "Hello World: 12"
[INFO] [1696342023.706325651] [talker]: Publishing: "Hello World: 13"
[INFO] [1696342024.706483290] [talker]: Publishing: "Hello World: 14"
...

 

In another terminal, you can also run the listener node, which subscribes to the /chatter topic and prints to the screen what the talker node “said”:

source /opt/ros/iron/setup.bash
ros2 run demo_nodes_py listener

The output should be similar to this:

...
[INFO] [1696342081.719585297] [listener]: I heard: [Hello World: 71]
[INFO] [1696342082.709465778] [listener]: I heard: [Hello World: 72]
[INFO] [1696342083.709447192] [listener]: I heard: [Hello World: 73]
[INFO] [1696342084.709592572] [listener]: I heard: [Hello World: 74]
[INFO] [1696342085.708058493] [listener]: I heard: [Hello World: 75]
[INFO] [1696342086.708537524] [listener]: I heard: [Hello World: 76]
[INFO] [1696342087.708396171] [listener]: I heard: [Hello World: 77]
...

 

 

Using ROS on The Construct (not having to install ROS on your own computer)

Alright, we learned how to install ROS on Ubuntu 22. However, some people may not have a computer with Ubuntu 22 installed, and may not want the hassle of installing ROS locally.

Thankfully, we have The Construct, a platform that allows us to use ROS 2 online without having to install anything.

In order to use ROS, you just have to first create an account, create a rosject, and then “run” the rosject. Below we have the step-by-step process.

  1. First, create your account at https://app.theconstructsim.com/
  2. After authenticating, go to the My Rosjects page and click the “Create a new rosject” button: https://app.theconstructsim.com/rosjects/my_rosjects

Create a new rosject

  3. On the Create rosject form, select the ROS Distribution that you want to use (you can choose ROS Humble, ROS Iron, etc.)
  4. Once the rosject is created, you can just press RUN to start the ROS environment, similar to what we can see in the image below:

RUN rosject

 

After the environment is running, you can just open a terminal and start creating and running your ros programs:

 

Open a new Terminal

And that is basically it

Congratulations. Now you know how to install ROS2 on your own computer, and you also know The Construct, a platform where you can program your ROS projects with ease.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready – Hands-On Training by The Construct

How to spawn a Gazebo robot using XML launch files

What we are going to learn

  1. How to start Gazebo
  2. How to spawn a robot to Gazebo
  3. How to run the Robot State Publisher node
  4. How to start Rviz configured

List of resources used in this post

  1. Use the rosject: https://app.theconstructsim.com/l/56476c77/
  2. The Construct: https://app.theconstructsim.com/
  3. ROS2 Courses –▸
    1. ROS2 Basics in 5 Days Humble (Python): https://app.theconstructsim.com/Course/132
    2. ROS2 Basics in 5 Days Humble (C++): https://app.theconstructsim.com/Course/133

Overview

While many examples demonstrate how to spawn Gazebo robots using Python launch files, in this post, we will be learning how to achieve the same result using XML launch files. Let’s get started!

ROS Inside!

ROS Inside

Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:

ROS Inside logo

Opening the rosject

In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject with a simulation for that: https://app.theconstructsim.com/l/56476c77/.

You can download the rosject on your own computer if you want to work locally, but just by copying the rosject (clicking the link), you will have a setup already prepared for you.

After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).

How to spawn a Gazebo robot using XML launch files – Run rosject (example of the RUN button)

 

After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.

Compiling the workspace

As you may already know, instead of using a real robot, we are going to use a simulation. In order to spawn that simulated robot, we need to have our workspace compiled, and for that, we need a terminal.

Let’s open a terminal by clicking the Open a new terminal button.

 

Open a new Terminal

 

Once inside the first terminal, let’s run the commands below to compile the workspace:

cd ~/ros2_ws
colcon build
source install/setup.bash

There may be some warning messages when running “colcon build”. Let’s just ignore those messages for now.

If everything went well, you should have a message saying that 3 packages were compiled:
How to spawn a Gazebo robot using XML launch files – ros2_ws compiled

Starting the Gazebo simulator

Now that our workspace is compiled, let’s run a gazebo simulation and RViz using normal python launch files.

For that, run the following command in the terminal:

ros2 launch minimal_diff_drive_robot gazebo_and_rviz.launch.py

Again, you may see some error messages. As long as the simulation appears, you can just ignore those error messages.

Now, in a second terminal, let’s also launch the Joint State Publisher, so that we can properly see the robot in RViz (the Robot Visualization tool).

 

ros2 run joint_state_publisher joint_state_publisher

Now you should be able to see both the Gazebo simulator and RViz, similar to what we can see in the image below:

How to spawn a Gazebo robot using XML launch files – Gazebo and RViz launched

In case you want to know, the content of the file used to spawn the robot can be seen with:

cat ~/ros2_ws/src/minimal_diff_drive_robot/launch/gazebo_and_rviz.launch.py

 

Moving the robot around

To make sure everything is working as expected so far, you can also run a new command to move the robot around using the keyboard. For that, open a third terminal, and run the following command:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

Now, to move the robot around just press the keys “i“, “k“, or other keys presented in the terminal where you launched the teleop_twist_keyboard command.
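Conceptually, teleop_twist_keyboard maps each key to a velocity pair and publishes it as a geometry_msgs/Twist on /cmd_vel. The bindings below are an illustrative subset sketched in plain Python, not the node’s exact table:

```python
# Illustrative key-to-velocity bindings (subset): each key maps to a
# (linear, angular) pair that would be scaled and published as a Twist.
KEY_BINDINGS = {
    "i": (1.0, 0.0),    # forward
    ",": (-1.0, 0.0),   # backward
    "j": (0.0, 1.0),    # rotate left
    "l": (0.0, -1.0),   # rotate right
    "k": (0.0, 0.0),    # stop
}

def twist_for_key(key: str, speed: float = 0.5, turn: float = 1.0) -> dict:
    linear, angular = KEY_BINDINGS.get(key, (0.0, 0.0))
    return {"linear_x": linear * speed, "angular_z": angular * turn}

print(twist_for_key("i"))   # {'linear_x': 0.5, 'angular_z': 0.0}
```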

The XML file for spawning the robot

As we mentioned earlier, the code for the Python launch file can be seen with:

cat ~/ros2_ws/src/minimal_diff_drive_robot/launch/gazebo_and_rviz.launch.py

 

And what about the XML file? It is in the exact same folder, but with an .xml extension. The content of the file can be seen with:

cat ~/ros2_ws/src/minimal_diff_drive_robot/launch/gazebo_and_rviz.launch.xml

The command above outputs the following:
<?xml version="1.0"?>

<launch>
  <arg name="model" default="$(find-pkg-share minimal_diff_drive_robot)/urdf/minimal_diff_drive_robot.urdf" />

  <arg name="start_gazebo" default="true" />
  <arg name="start_rviz" default="true" />

  <!-- Start Gazebo -->
  <group if="$(var start_gazebo)">
    <include file="$(find-pkg-share gazebo_ros)/launch/gazebo.launch.py">
      <!--arg name="paused" value="true"/>
      <arg name="use_sim_time" value="true"/>
      <arg name="gui" value="true"/>
      <arg name="recording" value="false"/>
      <arg name="debug" value="false"/>
      <arg name="verbose" value="true"/-->
    </include>

    <!-- Spawn robot in Gazebo -->
    <node name="spawn_robot_urdf" pkg="gazebo_ros" exec="spawn_entity.py"
      args="-file $(var model) -x 0.0 -y 0.0 -z 0.0 -entity my_robot" output="screen" />
  </group>

  <!-- TF description -->
  <node name="robot_state_publisher" pkg="robot_state_publisher" exec="robot_state_publisher" output="screen">
    <param name="robot_description" value="$(command 'cat $(var model)')"/>
    <param name="use_sim_time" value="true" />
  </node>

  <!-- Show in Rviz   -->
  <group if="$(var start_rviz)">
    <node name="rviz" pkg="rviz2" exec="rviz2" args="-d $(find-pkg-share minimal_diff_drive_robot)/config/robot.rviz">
      <param name="use_sim_time" value="true" />
    </node>
  </group>

</launch>

If we check the output above carefully, we can see that we start by launching the Gazebo simulator, and in the same <group> we spawn the robot in Gazebo by calling spawn_entity.py.

Then we launch the Robot State Publisher to be able to see the robot in RViz, and finally, we launch RViz itself.

When launching RViz, we tell it to use a file named config/robot.rviz, as we can see at:

$(find-pkg-share minimal_diff_drive_robot)/config/robot.rviz

That substitution translates to the following path:

cat ~/ros2_ws/src/minimal_diff_drive_robot/config/robot.rviz

Feel free to check the content of that file, be it through the Code Editor, or in the terminal by checking what the cat command outputs.
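To make the substitution mechanism concrete, here is a toy Python resolver for the two substitutions used in the launch file above, $(var name) and $(find-pkg-share pkg). The real resolution is done by the ROS2 launch system; this sketch only illustrates the text replacement, and the share path below is illustrative:

```python
import re

# Toy resolver: replace $(var name) from a variables dict and
# $(find-pkg-share pkg) from a package-to-path dict.
def resolve(text: str, variables: dict, package_shares: dict) -> str:
    text = re.sub(r"\$\(var ([^)\s]+)\)",
                  lambda m: variables[m.group(1)], text)
    text = re.sub(r"\$\(find-pkg-share ([^)\s]+)\)",
                  lambda m: package_shares[m.group(1)], text)
    return text

shares = {"minimal_diff_drive_robot":
          "/home/user/ros2_ws/src/minimal_diff_drive_robot"}
print(resolve("$(find-pkg-share minimal_diff_drive_robot)/config/robot.rviz",
              {}, shares))
# /home/user/ros2_ws/src/minimal_diff_drive_robot/config/robot.rviz
```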

Spawning the robot in Gazebo using XML launch files

Similar to what we did with Python, you can just run the following command to spawn the robot using the XML launch file.

Please, remember to kill the previous programs by pressing CTRL+C in the terminals where you launched the commands previously.

Assuming that all previous programs are now terminated, let’s launch Gazebo using the XML launch file in the first terminal:

ros2 launch minimal_diff_drive_robot gazebo_and_rviz.launch.xml

Now, in the second terminal, let’s launch the Joint State Publisher to be able to correctly see the robot wheels in RViz:

ros2 run joint_state_publisher joint_state_publisher

And on the third terminal, you can start the command to move the robot around:

ros2 run teleop_twist_keyboard teleop_twist_keyboard

And that is basically it

Congratulations. Now you know how to spawn a robot in Gazebo using Python and also using XML launch files.

We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.

Keep pushing your ROS Learning.

Related Courses & Training

If you want to learn more about ROS and ROS2, we recommend the following courses:

Get ROS2 Industrial Ready – Hands-On Training by The Construct
