ROS2 (Robot Operating System version 2) is widely used in robotics, and it uses robot models in a format called URDF (Unified Robotics Description Format).
OnShape is a 3D CAD (3-dimensional computer-aided design) tool that allows anyone to easily create 3D models using only a Web Browser.
In this post, we are going to learn how to export models from OnShape to URDF, so that the model can be used in ROS2 programs.
ROS Inside!
ROS Inside
Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it, print it, and attach it to your robot. It is really free. Find it in the link below:
On OnShape you can create your own design, or use any existing design already provided by OnShape.
Opening the rosject
In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5ee7cc96/.
Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.
After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).
How to Export a 3D Robot Model to ROS2 | Onshape CAD to URDF – Run rosject (example of the RUN button)
After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.
Installing the onshape-to-robot package
In order to install a package (and interact with ROS2), we need a terminal.
Let’s open a terminal by clicking the Open a new terminal button.
Open a new Terminal
In order to install onshape-to-robot, please run the following command in the terminal:
sudo pip install onshape-to-robot
We installed it at the system level.
If you want, you can also install it in a Python Virtual Environment.
If you want to install it in a virtual environment
If for any reason you are using a computer on which you don't have root access, you can create a virtual environment and install onshape-to-robot there.
The virtual env can be created with the following commands:

cd
python -m venv onshape_venv

Then, “enable” (activate) the virtual env:

source onshape_venv/bin/activate

You should now see (onshape_venv) in your Linux prompt.

Now you can install onshape-to-robot in this virtual environment:

pip install onshape-to-robot
Install dependencies
To make the export from OnShape to URDF work, we also need to add openscad and meshlab. We can install them with the following commands:
sudo add-apt-repository ppa:openscad/releases
sudo apt-get update
sudo apt-get install openscad -y
sudo apt-get install meshlab -y
Add the OnShape keys to the `keys.sh` file
After installing the dependencies, the next step is to get the API keys from OnShape.
We need these keys because onshape-to-robot has to authenticate to OnShape in order to access the model that we are going to export.
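The content of keys.sh should be similar to the sketch below. The variable names are the ones onshape-to-robot reads; the values are placeholders that you must replace with the API keys generated at https://dev-portal.onshape.com:

# keys.sh - replace the placeholder values with your own OnShape API keys
export ONSHAPE_API=https://cad.onshape.com
export ONSHAPE_ACCESS_KEY=your-access-key
export ONSHAPE_SECRET_KEY=your-secret-key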
We created that keys.sh file (in ~/ros2_ws) in the terminal, but you could also have created it using the Code Editor.
It is worth mentioning that if you want to use the Code Editor, also known as the IDE (Integrated Development Environment), you just have to open it as indicated in the image below:
Open the IDE – Code Editor
Now, let’s source that file so that the ONSHAPE variables are available in our terminal:
cd ~/ros2_ws
source keys.sh
Now we should be able to see those environment variables:
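One simple way to check is grepping the environment:

env | grep ONSHAPE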
Then we enter the quadruped_description folder and create some other useful folders there: quadruped, rviz, and launch:
cd quadruped_description
Let’s first create the quadruped folder:
mkdir quadruped
Now let’s create a config.json file inside that folder. That file will be used by the onshape-to-robot tool:
touch quadruped/config.json
And create the launch and rviz folders:
mkdir launch rviz
Now let’s open that quadruped/config.json file using the Code Editor and paste the following content to it:
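The rosject already ships the final file, but a minimal sketch looks like the following. The documentId value is a placeholder, and the outputFormat field is an assumption based on the onshape-to-robot documentation; the remaining fields are the ones explained below:

{
    "documentId": "your-onshape-document-id",
    "outputFormat": "urdf",
    "packageName": "quadruped_description/quadruped",
    "robotName": "quadruped",
    "assemblyName": "quadruped_v1"
}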
This is the minimal information we need to make this export work.
The first field, documentId, is the ID of the document that you have created. This is the ID that was used when creating this video. For your own projects, you are going to have a different ID. This ID appears in the URL of your project, as we can see below:
OnShape Document ID – ROS2
Alright, let's go over the content of the config.json file again to make it easier to understand.
The packageName points to the folder that we created, where the config.json file is located, and it is the place where our files will be exported.
The robotName defines the name of the robot that will be exported.
The assemblyName is the name of our model on OnShape. In the bottom part of the previous image, you can see our model name on the 6th tab: quadruped_v1
Prepare our CMakeLists.txt to export the folders we just created
The next thing we need is to modify our ~/ros2_ws/src/quadruped_description/CMakeLists.txt file so that the folders that we just created can be “installed” when we build our package.
The final content of that quadruped_description/CMakeLists.txt file is:
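The rosject already contains the final file, so take the following only as a sketch, assuming a standard ament_cmake package that simply installs the three folders:

cmake_minimum_required(VERSION 3.8)
project(quadruped_description)

find_package(ament_cmake REQUIRED)

# Install the exported model, launch files, and RViz config
install(DIRECTORY
  quadruped
  launch
  rviz
  DESTINATION share/${PROJECT_NAME}
)

ament_package()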
Ok, now the time has come to finally run the command that converts our model from OnShape to URDF.
For that, let's first enter the right folder:
cd ~/ros2_ws/src/quadruped_description
Then, let’s run the following command (bear in mind that in this rosject we already ran that command. We are putting the commands here basically for documentation reasons):
onshape-to-robot quadruped
In the command above, we are essentially running onshape-to-robot and telling it to go into the quadruped folder and read the config.json file there.
If everything goes well, now, instead of only the config.json file, we should have many files in that quadruped folder.
In that quadruped folder you should also find a robot.urdf file.
The only modification you need to make to that file is adding the following line at the top, so that the Code Editor can properly highlight the syntax:
<?xml version="1.0"?>
Creating .launch and .rviz files to be able to see our model in RViz
Again, these commands were already executed when we first created the rosject that we shared with you at the beginning of this post.
These are the commands we used to create the launch files:
cd ~/ros2_ws/src/quadruped_description
touch launch/quadruped.launch.py
touch launch/start_rviz.launch.py
touch rviz/quadruped.rviz
To see the content of those files, just open them using the Code Editor.
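As a reference only, a minimal quadruped.launch.py could start robot_state_publisher with the exported robot.urdf, along these lines (the actual file shipped in the rosject may differ):

#! /usr/bin/env python3
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Read the URDF that onshape-to-robot exported to the quadruped folder
    urdf_file = os.path.join(
        get_package_share_directory('quadruped_description'),
        'quadruped', 'robot.urdf')
    with open(urdf_file, 'r') as f:
        robot_description = f.read()

    return LaunchDescription([
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            output='screen',
            parameters=[{'robot_description': robot_description}],
        ),
    ])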
Seeing our robot with ROS2
To see our robot model in ROS2, we first need to build our workspace:

cd ~/ros2_ws
colcon build

Then, source setup.bash so that ROS2 knows how to find our package:

source install/setup.bash
Now, let's launch quadruped.launch.py:
cd ~/ros2_ws/
source install/setup.bash
ros2 launch quadruped_description quadruped.launch.py
Now in a second terminal, let’s run RViz:
cd ~/ros2_ws/
source install/setup.bash
ros2 launch quadruped_description start_rviz.launch.py
After launching RViz and waiting a few seconds for it to show up, you will see that the model is not well presented. We can't see the joints properly.
Let’s publish Fake Joint States to see the model properly.
For that, let’s run the following commands in a third terminal:
cd ~/ros2_ws/
source install/setup.bash
ros2 run joint_state_publisher_gui joint_state_publisher_gui
If you go to the Graphical Tools again and click Joint State Publisher, you should be able to see the robot model properly:
Joint State Publisher – Export a 3D Robot Model to ROS2 – Onshape CAD to URDF
Congratulations. You have now learned how to export a 3D model from OnShape to URDF.
If you want to learn more about URDF, have a look at the course below:
We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.
Youtube video
So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.
Keep pushing your ROS Learning.
Related Courses & Training
If you want to learn more about ROS and ROS2, we recommend the following courses:
ROS2 (Robot Operating System version 2) is becoming the de facto standard “framework” for programming robots.
In this post, we are going to learn how to combine Publisher, Subscriber, and Service in ROS2 to detect bananas and apples in an image.
ROS Inside!
ROS Inside
Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it, print it, and attach it to your robot. It is really free. Find it in the link below:
In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5c13606c/.
Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.
After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).
Combining Publisher, Subscriber & Service in ROS2 Single Node – Run rosject (example of the RUN button)
After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.
Modifying existing files
In order to interact with ROS2, we need a terminal.
Let’s open a terminal by clicking the Open a new terminal button.
The file used in this post is scripts/pubsubserv_example.py, inside the pub_sub_srv_ros2_pkg_example package. We created it in the terminal, but you could also have created that file using the Code Editor.
If you want to use the Code Editor, also known as IDE (Integrated Development Environment), you just have to open it as indicated in the image below:
Open the IDE – Code Editor
The following content was pasted into that file:
#! /usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from custom_interfaces.srv import StringServiceMessage
import os
import cv2
from cv_bridge import CvBridge
import ament_index_python.packages as ament_index
class CombineNode(Node):

    def __init__(self, dummy=True):
        super().__init__('combine_node')
        self._dummy = dummy
        # Keep the last received image; None until the first one arrives,
        # so the detection methods can check for it safely
        self.current_image = None
        self.pkg_path = self.get_package_path("pub_sub_srv_ros2_pkg_example")
        self.scripts_path = os.path.join(self.pkg_path, "scripts")
        cascade_file_path = os.path.join(self.scripts_path, 'haarbanana.xml')
        self.banana_cascade = cv2.CascadeClassifier(cascade_file_path)
        self.bridge = CvBridge()
        self.publisher = self.create_publisher(Image, 'image_detected_fruit', 10)
        self.subscription = self.create_subscription(
            Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)
        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)
        self.get_logger().info('READY CombineNode')

    def get_package_path(self, package_name):
        try:
            package_share_directory = ament_index.get_package_share_directory(package_name)
            return package_share_directory
        except Exception as e:
            print(f"Error: {e}")
            return None

    def image_callback(self, msg):
        self.get_logger().info('Received an image.')
        self.current_image = msg

    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        if request.detect == "apple":
            if self._dummy:
                # Publish a pre-made image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Publish a pre-made image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific fruit was requested
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

    def generate_unknown_detection_image(self):
        # Load a pre-made "unknown" image from the scripts folder
        self.apple_img_path = os.path.join(self.scripts_path, 'unknown.png')
        self.get_logger().warning("Unknown path=" + str(self.apple_img_path))
        image = cv2.imread(self.apple_img_path)
        if image is None:
            self.get_logger().error("Failed to load the unknown image.")
        else:
            self.get_logger().warning("SUCCESS to load the unknown image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def generate_apple_detection_image(self):
        # Load a pre-made apple image from the scripts folder
        self.apple_img_path = os.path.join(self.scripts_path, 'apple.png')
        self.get_logger().warning("Apple path=" + str(self.apple_img_path))
        image = cv2.imread(self.apple_img_path)
        if image is None:
            self.get_logger().error("Failed to load the apple image.")
        else:
            self.get_logger().warning("SUCCESS to load the apple image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def generate_banana_detection_image(self):
        # Load a pre-made banana image from the scripts folder
        self.banana_img_path = os.path.join(self.scripts_path, 'banana.png')
        self.get_logger().warning("Banana path=" + str(self.banana_img_path))
        image = cv2.imread(self.banana_img_path)
        if image is None:
            self.get_logger().error("Failed to load the banana image.")
        else:
            self.get_logger().warning("SUCCESS to load the banana image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def publish_image(self, image_msg):
        if image_msg is not None:
            self.publisher.publish(image_msg)

    def detect_and_publish_apple(self):
        if self.current_image is not None:
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")
            # Detect the apple by filtering its color range in HSV space
            low_apple_raw = (0.0, 80.0, 80.0)
            high_apple_raw = (20.0, 255.0, 255.0)
            image_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(image_hsv, low_apple_raw, high_apple_raw)
            cnts, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
            c_num = 0
            radius = 10
            # Draw a circle around every contour big enough to be the apple
            for i, c in enumerate(cnts):
                ((x, y), r) = cv2.minEnclosingCircle(c)
                if r > radius:
                    print("OK=" + str(r))
                    c_num += 1
                    cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
                    cv2.putText(frame, "#{}".format(c_num), (int(x) - 10, int(y)),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
                else:
                    print(r)
            # Publish the annotated image as a ROS 2 Image message
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)
        else:
            self.get_logger().error("Image NOT found")

    def detect_and_publish_banana(self):
        self.get_logger().warning("detect_and_publish_banana Start")
        if self.current_image is not None:
            self.get_logger().warning("Image found")
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")
            # Detect bananas with a Haar cascade on the grayscale image
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            bananas = self.banana_cascade.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
            for (x, y, w, h) in bananas:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
                cv2.putText(frame, 'Banana', (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))
            # Publish the annotated image as a ROS 2 Image message
            self.get_logger().warning("BananaDetection Image Publishing...")
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)
            self.get_logger().warning("BananaDetection Image Publishing...DONE")
        else:
            self.get_logger().error("Image NOT found")


def main(args=None):
    rclpy.init(args=args)
    node = CombineNode(dummy=False)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
After creating that Python file, we also modified the CMakeLists.txt file of the pub_sub_srv_ros2_pkg_example package:
We basically added ‘scripts/pubsubserv_example.py‘ to the list of files to be installed when we build our ros2 workspace.
In the end, the content of ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/CMakeLists.txt is like this:
cmake_minimum_required(VERSION 3.8)
project(pub_sub_srv_ros2_pkg_example)
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
find_package(ament_cmake REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(std_srvs REQUIRED)
find_package(custom_interfaces REQUIRED)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
set(ament_cmake_copyright_FOUND TRUE)
set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
# We add it to be able to use other modules of the scripts folder
install(DIRECTORY
scripts
rviz
DESTINATION share/${PROJECT_NAME}
)
install(PROGRAMS
scripts/example1_dummy.py
scripts/example1.py
scripts/example1_main.py
scripts/pubsubserv_example.py
DESTINATION lib/${PROJECT_NAME}
)
ament_package()
We then compiled just the pub_sub_srv_ros2_pkg_example package, using the following commands:
cd ~/ros2_ws/
source install/setup.bash
colcon build --packages-select pub_sub_srv_ros2_pkg_example
After the package is compiled, we can run that Python script using the following commands:
cd ~/ros2_ws
source install/setup.bash
ros2 run pub_sub_srv_ros2_pkg_example pubsubserv_example.py
After running that script, you are not going to see much output, because we are not printing anything periodically.
But let's list the running nodes in a second terminal by typing ros2 node list. If everything goes well, we should be able to see the combine_node node:
$ ros2 node list
/combine_node
Launching the simulation
So far we can’t see what our node is capable of.
Let’s launch a simulation so that we can understand our node better.
For that, let’s run the following command in a third terminal:
ros2 launch box_bot_gazebo garden_main.launch.xml
A simulation similar to the following should appear in a few seconds:
Combine Publisher, Subscriber & Service in ROS2 Single Node – Simulation
After launching the simulation, in the first terminal where we launched our node, we should start seeing messages like the following:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
We will soon understand what these messages mean.
See what the robot sees through rviz2
Now that the simulation is running, we can open rviz2 (ROS Visualization version 2).
To make it easier for you to see the robot model, and the robot camera, a fruit.rviz file was created at ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz.
You can tell rviz2 to load that config file using the following command:
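rviz2 -d ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz

(The -d flag is the standard way of passing a config file to rviz2; the exact command used in the rosject may differ slightly.)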
A new screen should pop up in a few seconds, and you should be able to see what the robot camera sees, as well as the robot model.
The ROS2 Topic that we set for the camera was /box_bot_1/box_bot_1_camera/image_raw. You can find this topic if you list the topics in another terminal using ros2 topic list.
If you look at the topic that we subscribe to in the __init__ method of the CombineNode class, it is exactly this topic:
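self.subscription = self.create_subscription(
    Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)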
When a new Image message comes, the image_callback method is called. It essentially saves the image in an internal variable called current_image:
def image_callback(self, msg):
    self.get_logger().info('Received an image.')
    self.current_image = msg
In the __init__ method, we also created a service for analyzing an image and detecting whether or not it contains a banana:

def __init__(self, dummy=True):
    super().__init__('combine_node')
    # ...
    self.string_service = self.create_service(
        StringServiceMessage, 'detect_fruit_service', self.string_service_callback)

def string_service_callback(self, request, response):
    # Handle the string service request
    self.get_logger().info(f'Received string service request: {request.detect}')
    if request.detect == "apple":
        if self._dummy:
            # Publish a pre-made image related to apple detections
            apple_image = self.generate_apple_detection_image()
            self.publish_image(apple_image)
        else:
            self.detect_and_publish_apple()
    elif request.detect == "banana":
        if self._dummy:
            # Publish a pre-made image related to banana detections
            banana_image = self.generate_banana_detection_image()
            self.publish_image(banana_image)
        else:
            self.detect_and_publish_banana()
    else:
        # If no specific fruit was requested
        unknown_image = self.generate_unknown_detection_image()
        self.publish_image(unknown_image)

    # Respond with success and a message
    response.success = True
    response.message = f'Received and processed: {request.detect}'
    return response
By analyzing the code above, we see that when the detect_fruit_service service that we created is called, it calls the string_service_callback method that is responsible for detecting bananas and apples.
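For reference, the StringServiceMessage interface from the custom_interfaces package can be inferred from the fields used in the code (request.detect, response.success, response.message). Its .srv definition should look like this:

string detect
---
bool success
string message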
Now, going back to the messages we see in the first terminal:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
These messages basically say that we are correctly receiving the Image messages from the /box_bot_1/box_bot_1_camera/image_raw topic mentioned earlier.
If we list the services, we should find a service named /detect_fruit_service. Let's try it by running the following command in a free terminal:
ros2 service list
You should see a huge number of services, and among them, you should be able to find the following one, that we created:
/detect_fruit_service
By the way, the reason why we have so many services is that the Gazebo simulator generates a lot of services, making it easier to interact with Gazebo using ROS2.
Now, let’s call that service. In order to detect an apple, a banana, and a strawberry, we could run the following commands respectively:
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'apple'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'banana'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'strawberry'"
Alright. After calling the service to detect a banana, we should have an output similar to the following:
requester: making request: custom_interfaces.srv.StringServiceMessage_Request(detect='banana')
response: custom_interfaces.srv.StringServiceMessage_Response(success=True, message='Received and processed: banana')
Indicating that the service correctly detected a banana.
If you check the logs in the first terminal where we launched our node, you will also see a message similar to the following:
BananaDetection Image Publishing...
If you check the RViz (Robot Visualizer) window when detecting the apple, you should see a green circle around the apple, just like in the image below:
ROS2 – Combine Publisher, Subscriber, and Service with Practical Robot Examples – Part 2
Congratulations. Now you know how to combine different ROS2 pieces in a single node.
We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.
Youtube video
So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.
Keep pushing your ROS Learning.
Related Courses & Training
If you want to learn more about ROS and ROS2, we recommend the following courses:
ROS2 (Robot Operating System version 2) is becoming the de facto standard “framework” for programming robots.
In this post, we are going to learn how to combine a Publisher, a Subscriber, and a Service in the same ROS2 Node.
ROS Inside!
ROS Inside
Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:
In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5c13606c/.
Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.
After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).
Combining Publisher, Subscriber & Service in ROS2 Single Node – Run rosject (example of the RUN button)
After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.
Creating the required files
In order to interact with ROS2, we need a terminal.
Let’s open a terminal by clicking the Open a new terminal button.
The file used in this post is scripts/pubsubserv_example.py, inside the pub_sub_srv_ros2_pkg_example package. We created it in the terminal, but you could also have created that file using the Code Editor.
If you want to use the Code Editor, also known as IDE (Integrated Development Environment), you just have to open it as indicated in the image below:
Open the IDE – Code Editor
The following content was pasted into that file:
#! /usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from custom_interfaces.srv import StringServiceMessage
import os
import cv2
from cv_bridge import CvBridge
import ament_index_python.packages as ament_index
class CombineNode(Node):

    def __init__(self, dummy=True):
        super().__init__('combine_node')
        self._dummy = dummy
        # Keep the last received image; None until the first one arrives,
        # so the detection methods can check for it safely
        self.current_image = None
        self.pkg_path = self.get_package_path("pub_sub_srv_ros2_pkg_example")
        self.scripts_path = os.path.join(self.pkg_path, "scripts")
        cascade_file_path = os.path.join(self.scripts_path, 'haarbanana.xml')
        self.banana_cascade = cv2.CascadeClassifier(cascade_file_path)
        self.bridge = CvBridge()
        self.publisher = self.create_publisher(Image, 'image_detected_fruit', 10)
        self.subscription = self.create_subscription(
            Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)
        self.string_service = self.create_service(
            StringServiceMessage, 'detect_fruit_service', self.string_service_callback)
        self.get_logger().info('READY CombineNode')

    def get_package_path(self, package_name):
        try:
            package_share_directory = ament_index.get_package_share_directory(package_name)
            return package_share_directory
        except Exception as e:
            print(f"Error: {e}")
            return None

    def image_callback(self, msg):
        self.get_logger().info('Received an image.')
        self.current_image = msg

    def string_service_callback(self, request, response):
        # Handle the string service request
        self.get_logger().info(f'Received string service request: {request.detect}')
        if request.detect == "apple":
            if self._dummy:
                # Publish a pre-made image related to apple detections
                apple_image = self.generate_apple_detection_image()
                self.publish_image(apple_image)
            else:
                self.detect_and_publish_apple()
        elif request.detect == "banana":
            if self._dummy:
                # Publish a pre-made image related to banana detections
                banana_image = self.generate_banana_detection_image()
                self.publish_image(banana_image)
            else:
                self.detect_and_publish_banana()
        else:
            # If no specific fruit was requested
            unknown_image = self.generate_unknown_detection_image()
            self.publish_image(unknown_image)

        # Respond with success and a message
        response.success = True
        response.message = f'Received and processed: {request.detect}'
        return response

    def generate_unknown_detection_image(self):
        # Load a pre-made "unknown" image from the scripts folder
        self.apple_img_path = os.path.join(self.scripts_path, 'unknown.png')
        self.get_logger().warning("Unknown path=" + str(self.apple_img_path))
        image = cv2.imread(self.apple_img_path)
        if image is None:
            self.get_logger().error("Failed to load the unknown image.")
        else:
            self.get_logger().warning("SUCCESS to load the unknown image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def generate_apple_detection_image(self):
        # Load a pre-made apple image from the scripts folder
        self.apple_img_path = os.path.join(self.scripts_path, 'apple.png')
        self.get_logger().warning("Apple path=" + str(self.apple_img_path))
        image = cv2.imread(self.apple_img_path)
        if image is None:
            self.get_logger().error("Failed to load the apple image.")
        else:
            self.get_logger().warning("SUCCESS to load the apple image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def generate_banana_detection_image(self):
        # Load a pre-made banana image from the scripts folder
        self.banana_img_path = os.path.join(self.scripts_path, 'banana.png')
        self.get_logger().warning("Banana path=" + str(self.banana_img_path))
        image = cv2.imread(self.banana_img_path)
        if image is None:
            self.get_logger().error("Failed to load the banana image.")
        else:
            self.get_logger().warning("SUCCESS to load the banana image.")
        return self.bridge.cv2_to_imgmsg(image, encoding="bgr8")

    def publish_image(self, image_msg):
        if image_msg is not None:
            self.publisher.publish(image_msg)

    def detect_and_publish_apple(self):
        if self.current_image is not None:
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")
            # Detect the apple by filtering its color range in HSV space
            low_apple_raw = (0.0, 80.0, 80.0)
            high_apple_raw = (20.0, 255.0, 255.0)
            image_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(image_hsv, low_apple_raw, high_apple_raw)
            cnts, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
            c_num = 0
            radius = 10
            # Draw a circle around every contour big enough to be the apple
            for i, c in enumerate(cnts):
                ((x, y), r) = cv2.minEnclosingCircle(c)
                if r > radius:
                    print("OK=" + str(r))
                    c_num += 1
                    cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
                    cv2.putText(frame, "#{}".format(c_num), (int(x) - 10, int(y)),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
                else:
                    print(r)
            # Publish the annotated image as a ROS 2 Image message
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)
        else:
            self.get_logger().error("Image NOT found")

    def detect_and_publish_banana(self):
        self.get_logger().warning("detect_and_publish_banana Start")
        if self.current_image is not None:
            self.get_logger().warning("Image found")
            frame = self.bridge.imgmsg_to_cv2(self.current_image, desired_encoding="bgr8")
            # Detect bananas with a Haar cascade on the grayscale image
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            bananas = self.banana_cascade.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
            for (x, y, w, h) in bananas:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
                cv2.putText(frame, 'Banana', (x - 10, y - 10),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0))
            # Publish the annotated image as a ROS 2 Image message
            self.get_logger().warning("BananaDetection Image Publishing...")
            image_msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
            self.publish_image(image_msg)
            self.get_logger().warning("BananaDetection Image Publishing...DONE")
        else:
            self.get_logger().error("Image NOT found")


def main(args=None):
    rclpy.init(args=args)
    node = CombineNode(dummy=False)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
After creating that Python file, we also modified the CMakeLists.txt file of the pub_sub_srv_ros2_pkg_example package:
We basically added ‘scripts/pubsubserv_example.py‘ to the list of files to be installed when we build our ros2 workspace.
In the end, the content of ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/CMakeLists.txt is like this:
cmake_minimum_required(VERSION 3.8)
project(pub_sub_srv_ros2_pkg_example)
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
find_package(ament_cmake REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(std_srvs REQUIRED)
find_package(custom_interfaces REQUIRED)
if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
set(ament_cmake_copyright_FOUND TRUE)
set(ament_cmake_cpplint_FOUND TRUE)
ament_lint_auto_find_test_dependencies()
endif()
# We add it to be able to use other modules of the scripts folder
install(DIRECTORY
scripts
rviz
DESTINATION share/${PROJECT_NAME}
)
install(PROGRAMS
scripts/example1_dummy.py
scripts/example1.py
scripts/example1_main.py
scripts/pubsubserv_example.py
DESTINATION lib/${PROJECT_NAME}
)
ament_package()
We then compiled just the pub_sub_srv_ros2_pkg_example package, using the following commands:
cd ~/ros2_ws/
source install/setup.bash
colcon build --packages-select pub_sub_srv_ros2_pkg_example
After the package is compiled, we can run that Python script using the following commands:
cd ~/ros2_ws
source install/setup.bash
ros2 run pub_sub_srv_ros2_pkg_example pubsubserv_example.py
After running that script, you are not going to see much output, because we are not printing anything periodically.
But let's list the running nodes in a second terminal by typing ros2 node list. If everything goes well, we should be able to see the combine_node node:
$ ros2 node list
/combine_node
Launching the simulation
So far we can’t see what our node is capable of.
Let’s launch a simulation so that we can understand our node better.
For that, let’s run the following command in a third terminal:
ros2 launch box_bot_gazebo garden_main.launch.xml
A simulation similar to the following should appear in a few seconds:
Combine Publisher, Subscriber & Service in ROS2 Single Node – Simulation
After launching the simulation, in the first terminal where we launched our node, we should start seeing messages like the following:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
We will soon understand what these messages mean.
See what the robot sees through rviz2
Now that the simulation is running, we can open rviz2 (ROS Visualization version 2).
To make it easier for you to see the robot model, and the robot camera, a fruit.rviz file was created at ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz.
You can tell rviz2 to load that config file using the following command:
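rviz2 -d ~/ros2_ws/src/fruit_detector/pub_sub_srv_ros2_pkg_example/rviz/fruit.rviz

(The -d flag is the standard way of passing a config file to rviz2; the exact command used in the rosject may differ slightly.)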
A new screen should pop up in a few seconds, and you should be able to see what the robot camera sees, as well as the robot model.
The ROS2 Topic that we set for the camera was /box_bot_1/box_bot_1_camera/image_raw. You can find this topic if you list the topics in another terminal using ros2 topic list.
If you look at the topic that we subscribe to in the __init__ method of the CombineNode class, it is exactly this topic:
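self.subscription = self.create_subscription(
    Image, '/box_bot_1/box_bot_1_camera/image_raw', self.image_callback, 10)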
When a new Image message comes, the image_callback method is called. It essentially saves the image in an internal variable called current_image:
def image_callback(self, msg):
    self.get_logger().info('Received an image.')
    self.current_image = msg
In the __init__ method, we also created a service for analyzing an image and detecting whether or not it contains a banana:

def __init__(self, dummy=True):
    super().__init__('combine_node')
    # ...
    self.string_service = self.create_service(
        StringServiceMessage, 'detect_fruit_service', self.string_service_callback)

def string_service_callback(self, request, response):
    # Handle the string service request
    self.get_logger().info(f'Received string service request: {request.detect}')
    if request.detect == "apple":
        if self._dummy:
            # Publish a pre-made image related to apple detections
            apple_image = self.generate_apple_detection_image()
            self.publish_image(apple_image)
        else:
            self.detect_and_publish_apple()
    elif request.detect == "banana":
        if self._dummy:
            # Publish a pre-made image related to banana detections
            banana_image = self.generate_banana_detection_image()
            self.publish_image(banana_image)
        else:
            self.detect_and_publish_banana()
    else:
        # If no specific fruit was requested
        unknown_image = self.generate_unknown_detection_image()
        self.publish_image(unknown_image)

    # Respond with success and a message
    response.success = True
    response.message = f'Received and processed: {request.detect}'
    return response
By analyzing the code above, we see that when the detect_fruit_service service that we created is called, it calls the string_service_callback method that is responsible for detecting bananas and apples.
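For reference, the StringServiceMessage interface from the custom_interfaces package can be inferred from the fields used in the code (request.detect, response.success, response.message). Its .srv definition should look like this:

string detect
---
bool success
string message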
Now, going back to the messages we see in the first terminal:
...
[INFO] [1699306370.709477898] [combine_node]: Received an image.
[INFO] [1699306373.374917545] [combine_node]: Received an image.
[INFO] [1699306376.390623360] [combine_node]: Received an image.
[INFO] [1699306379.277906884] [combine_node]: Received an image
...
These messages basically say that we are correctly receiving the Image messages from the /box_bot_1/box_bot_1_camera/image_raw topic mentioned earlier.
If we list the services, we should find a service named /detect_fruit_service. Let's try it by running the following command in a free terminal:
ros2 service list
You should see a huge number of services, and among them, you should be able to find the following one, that we created:
/detect_fruit_service
By the way, the reason why we have so many services is that the Gazebo simulator generates a lot of services, making it easier to interact with Gazebo using ROS2.
Now, let’s call that service. In order to detect an apple, a banana, and a strawberry, we could run the following commands respectively:
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'apple'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'banana'"
ros2 service call /detect_fruit_service custom_interfaces/srv/StringServiceMessage "detect: 'strawberry'"
Alright. After calling the service to detect a banana, we should have an output similar to the following:
requester: making request: custom_interfaces.srv.StringServiceMessage_Request(detect='banana')
response: custom_interfaces.srv.StringServiceMessage_Response(success=True, message='Received and processed: banana')
Indicating that the service correctly detected a banana.
If you check the logs in the first terminal where we launched our node, you will also see a message similar to the following:
BananaDetection Image Publishing...
So, as you can see, we have in the same ROS2 Node a Publisher, a Subscriber, and a Service.
Congratulations. Now you know how to combine different ROS2 pieces in a single node.
We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.
Youtube video
So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.
Keep pushing your ROS Learning.
Related Courses & Training
If you want to learn more about ROS and ROS2, we recommend the following courses:
ROS (Robot Operating System) is becoming the de facto standard “framework” for programming robots. In this post, let’s learn how to create a ROS2 package, essential for giving instruction to robots, using the ros2 command.
ROS Inside!
ROS Inside
Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:
In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject for that: https://app.theconstructsim.com/l/5bda8c95/.
Just by copying the rosject (clicking the link above), you will have a setup already prepared for you.
After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).
ROS2 package creation – Run rosject (example of the RUN button)
After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.
Creating a ros2 package
In order to create a ROS2 package, we need to have a ROS2 Workspace, and for that, we need a terminal.
Let’s open a terminal by clicking the Open a new terminal button.
Open a new Terminal
Once inside the first terminal, let’s first run a command that shows the list of available options for ros2:
ros2 -h
the following output should be produced:
ros2 is an extensible command-line tool for ROS 2.
options:
-h, --help show this help message and exit
Commands:
action Various action related sub-commands
bag Various rosbag related sub-commands
component Various component related sub-commands
daemon Various daemon related sub-commands
doctor Check ROS setup and other potential issues
interface Show information about ROS interfaces
launch Run a launch file
lifecycle Various lifecycle related sub-commands
multicast Various multicast related sub-commands
node Various node related sub-commands
param Various param related sub-commands
pkg Various package related sub-commands
run Run a package specific executable
security Various security related sub-commands
service Various service related sub-commands
topic Various topic related sub-commands
wtf Use `wtf` as alias to `doctor`
Call `ros2 <command> -h` for more detailed usage.
As we can see in the output above, we have a command called pkg, and we can also get help with the ros2 pkg -h command. Let’s try it:
ros2 pkg -h
Running that command produces the following:
Various package related sub-commands
options:
-h, --help show this help message and exit
Commands:
create Create a new ROS 2 package
executables Output a list of package specific executables
list Output a list of available packages
prefix Output the prefix path of a package
xml Output the XML of the package manifest or a specific tag
Since what we want to do is create a package, we could ask for help with the create command shown above. Let’s try it:
ros2 pkg create -h
That gives us the following:
usage: ros2 pkg create [-h] [--package-format {2,3}] [--description DESCRIPTION] [--license LICENSE] [--destination-directory DESTINATION_DIRECTORY] [--build-type {cmake,ament_cmake,ament_python}]
[--dependencies DEPENDENCIES [DEPENDENCIES ...]] [--maintainer-email MAINTAINER_EMAIL] [--maintainer-name MAINTAINER_NAME] [--node-name NODE_NAME] [--library-name LIBRARY_NAME]
package_name
Create a new ROS 2 package
positional arguments:
package_name The package name
options:
-h, --help show this help message and exit
--package-format {2,3}, --package_format {2,3}
The package.xml format.
--description DESCRIPTION
The description given in the package.xml
--license LICENSE The license attached to this package; this can be an arbitrary string, but a LICENSE file will only be generated if it is one of the supported licenses (pass '?' to get a list)
--destination-directory DESTINATION_DIRECTORY
Directory where to create the package directory
--build-type {cmake,ament_cmake,ament_python}
The build type to process the package with
--dependencies DEPENDENCIES [DEPENDENCIES ...]
list of dependencies
--maintainer-email MAINTAINER_EMAIL
email address of the maintainer of this package
--maintainer-name MAINTAINER_NAME
name of the maintainer of this package
--node-name NODE_NAME
name of the empty executable
--library-name LIBRARY_NAME
name of the empty library
Ok, according to the instructions, we should be able to create a package just using ros2 pkg create PKG_NAME. Let’s try to create a package named my_superbot inside the ros2_ws/src folder.
cd ~/ros2_ws/src
ros2 pkg create my_superbot
Assuming that everything went as expected, we should see something like this:
According to the log messages, we now have a package called my_superbot, with some files inside the my_superbot folder. The most important files are ./my_superbot/package.xml and ./my_superbot/CMakeLists.txt. The former (package.xml) because it defines the package name, and the latter (CMakeLists.txt) because it contains the “instructions” on how to compile our package.
If you now run the ls command, you should be able to see the my_superbot folder, which is essentially your ROS2 package.
Also, if you run the “tree .” command, you should see the folder structure:
This is the simplest and easiest way of creating a ROS2 package.
If you don’t have the tree command installed, you can install it using the commands below:
sudo apt-get update
sudo apt-get install -y tree
Creating a ros2 package with some dependencies
Most of the time, when we create a package, we basically want to reuse or leverage existing tools (or packages).
Let’s remove the package we just created, and create it again, but at this time, specifying some dependencies:
cd ~/ros2_ws/src
rm -rfv my_superbot
Okay, we just removed the package we created earlier. If you remember, previously we executed “ros2 pkg create -h“, which showed us the option for specifying dependencies:
...
--dependencies DEPENDENCIES [DEPENDENCIES ...]
list of dependencies
...
Let’s now create the package with the same name, but at this time, specifying rclcpp and std_msgs dependencies:
cd ~/ros2_ws/src
ros2 pkg create my_superbot --dependencies rclcpp std_msgs
If you use the “ls” or “tree” commands, like before, you will see that the package has been successfully created. The main differences are in the contents of the package.xml and CMakeLists.txt files.
ls
tree .
Creating many ros2 packages
There is a principle in Software Development called DRY (Don’t repeat yourself). It basically tells us that we have to reuse code, making code easier to maintain.
There is also the Separation of Concerns (SoC) design principle that manages complexity by partitioning the software system so that each partition is responsible for a separate concern, minimizing the overlap of concerns as much as possible.
In a robotics project, we should ideally have different packages for different purposes. Let’s remove again the package we just created, and rather than creating the package directly on the ros2_ws/src folder, let’s create a project folder there, and then create the packages inside that project folder. Start by removing the existing package:
cd ~/ros2_ws/src
rm -rfv my_superbot
Now, let’s create a folder called superbot_project:
cd ~/ros2_ws/src
mkdir superbot_project
Inside the project folder, we can now create different packages.
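For example, two packages for different concerns could be created like this (the package names below are purely illustrative):

cd ~/ros2_ws/src/superbot_project
ros2 pkg create superbot_description --build-type ament_cmake
ros2 pkg create superbot_bringup --build-type ament_cmake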
Now that we have created the packages, even though they don't contain any meaningful code yet, let's learn how to compile the workspace that contains them.
For that, we use the “colcon build” command on the main workspace folder:
cd ~/ros2_ws/
colcon build
source install/setup.bash
Assuming that everything worked nicely, the output should be similar to the following:
If we now run the “ls” command, we should see three new folders there: build, install, and log, in addition to the src folder that we created.
ls
# build install log src
When you compile your workspace, if you want to make ROS2 aware that your packages are compiled and ready to use, you have to tell it where to find the packages using the “source” command. That is why we used it after the “colcon build” command.
Congratulations. Now you know how to create your own packages in ROS2, and how to compile them.
We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.
Youtube video
So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.
Keep pushing your ROS Learning.
Related Courses & Training
If you want to learn more about ROS and ROS2, we recommend the following courses:
ROS2 is a “framework” for developing robotics applications. Its real-time capabilities, cross-platform support, security features, language flexibility, improved communication, modularity, community support, and industry adoption make it a valuable framework for robotic development.
In this tutorial, we are going to learn how to install it on our own computers.
ROS Inside!
ROS Inside
Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:
The first step of the installation is configuring the locales. According to the Linux manual, a locale is a set of language and cultural rules. These cover aspects such as language for messages, different character sets, lexicographic conventions, and so on. A program needs to be able to determine its locale and act accordingly to be portable to different cultures.
Once the locales package is installed, let’s configure UTF-8 in our system:
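Following the official ROS2 installation guide, the locale setup should be along these lines:

sudo apt update && sudo apt install locales -y  # install the locales package if it is missing
sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8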
Now we need to add the ROS2 repository to the list of enabled repositories from where we can download packages. The repository is added to the /etc/apt/sources.list.d/ros2.list file using the following command:
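Based on the official ROS2 installation instructions, the commands should be similar to these (the ROS GPG key is downloaded first so that apt can verify the packages):

sudo apt install software-properties-common curl -y
sudo add-apt-repository universe
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null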
If you are interested in installing ROS, you are probably going to create some packages, build the workspace, etc. For that, it is recommended to install the development tools.
These tools can be installed using the following command:
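Per the official instructions, that is the ros-dev-tools meta-package:

sudo apt update
sudo apt install ros-dev-tools -y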
Now that we have all the requirements in place, we can install ROS2 Iron, but before we do that, since ROS leverages existing tools, let’s upgrade the packages installed on our system in order to have the most recent changes on the programs that are already installed.
sudo apt update
sudo apt-get upgrade -y
Now that we have all base packages upgraded, we can install the ROS2 Iron Desktop version using the next command:
sudo apt install ros-iron-desktop -y
Testing the ROS 2 installation
Now that ROS is installed, let’s run an example of a node named talker that publishes a message to a topic called /chatter.
Before running a node, we need to “enable” the ROS installation. We do that using the source command:
source /opt/ros/iron/setup.bash
Now that the current terminal is aware of ROS, we can run the talker with the command below:
source /opt/ros/iron/setup.bash
ros2 run demo_nodes_py talker
If everything went OK, you should see the talker publishing a new “Hello World” message roughly every second.
In another terminal, you can also run the listener node, which subscribes to the /chatter topic and prints to the screen what the talker node “said”:
source /opt/ros/iron/setup.bash
ros2 run demo_nodes_py listener
The output should be similar to this:
...
[INFO] [1696342081.719585297] [listener]: I heard: [Hello World: 71]
[INFO] [1696342082.709465778] [listener]: I heard: [Hello World: 72]
[INFO] [1696342083.709447192] [listener]: I heard: [Hello World: 73]
[INFO] [1696342084.709592572] [listener]: I heard: [Hello World: 74]
[INFO] [1696342085.708058493] [listener]: I heard: [Hello World: 75]
[INFO] [1696342086.708537524] [listener]: I heard: [Hello World: 76]
[INFO] [1696342087.708396171] [listener]: I heard: [Hello World: 77]
...
Using ROS on The Construct (not having to install ROS on your own computer)
Alright, we learned how to install ROS on Ubuntu 22. Turns out that some people may not have a computer with Linux Ubuntu 22 installed, and do not want all the hassle of installing ROS locally.
Thank God we have The Construct, a platform that allows us to use ROS 2 online without having to install anything.
In order to use ROS, you just have to first create an account, create a rosject, and then “run” the rosject. Below we have the step-by-step process.
On the Create rosject form, you select the ROS Distribution that you want to use (you can choose ROS Humble, ROS Iron, etc)
Once the rosject is created, you can just press RUN to start the ROS environment. Something similar to what we can see in the image below:
RUN rosject
After the environment is running, you can just open a terminal and start creating and running your ros programs:
Open a new Terminal
And that is basically it
Congratulations. Now you know how to install ROS2 on your own computer, and you also know The Construct, a platform where you can program your ROS projects with ease.
We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.
Youtube video
So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.
Keep pushing your ROS Learning.
Related Courses & Training
If you want to learn more about ROS and ROS2, we recommend the following courses:
While many examples demonstrate how to spawn Gazebo robots using Python launch files, in this post, we will be learning how to achieve the same result using XML launch files. Let’s get started!
ROS Inside!
ROS Inside
Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:
In order to follow this tutorial, we need to have ROS2 installed in our system, and ideally a ros2_ws (ROS2 Workspace). To make your life easier, we have already prepared a rosject with a simulation for that: https://app.theconstructsim.com/l/56476c77/.
You can download the rosject on your own computer if you want to work locally, but just by copying the rosject (clicking the link), you will have a setup already prepared for you.
After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).
How to spawn a Gazebo robot using XML launch files – Run rosject (example of the RUN button)
After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.
Compiling the workspace
As you may already know, instead of using a real robot, we are going to use a simulation. In order to spawn that simulated robot, we need to have our workspace compiled, and for that, we need a terminal.
Let’s open a terminal by clicking the Open a new terminal button.
Open a new Terminal
Once inside the first terminal, let's run the commands below to compile the workspace:
cd ~/ros2_ws
colcon build
source install/setup.bash
There may be some warning messages when running “colcon build”. Let’s just ignore those messages for now.
If everything went well, you should have a message saying that 3 packages were compiled:
How to spawn a Gazebo robot using XML launch files – ros2_ws compiled
Starting the Gazebo simulator
Now that our workspace is compiled, let's run a Gazebo simulation and RViz using normal Python launch files.
For that, run the following command in the terminal:
To make sure everything is working as expected so far, you can also run a new command to move the robot around using the keyboard. For that, open a third terminal, and run the following command:
ros2 run teleop_twist_keyboard teleop_twist_keyboard
Now, to move the robot around just press the keys “i“, “k“, or other keys presented in the terminal where you launched the teleop_twist_keyboard command.
The XML launch file for spawning the robot
As we mentioned earlier, the code for the Python launch file can be seen with:
If we look carefully at the output above, we can see that we start by launching the Gazebo simulator, and in the same <group> we spawn the robot in Gazebo by calling spawn_entity.py.
Then we launch the Robot State Publisher to be able to see the robot in RViz, and finally, we launch RViz itself.
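The rosject already contains the real file, so take the following only as a sketch of what such an XML launch file may look like (the package and entity names are illustrative, not the ones used in the rosject):

<launch>
  <!-- Start Gazebo and spawn the robot published on /robot_description -->
  <group>
    <include file="$(find-pkg-share gazebo_ros)/launch/gazebo.launch.py"/>
    <node pkg="gazebo_ros" exec="spawn_entity.py"
          args="-topic robot_description -entity my_robot"/>
  </group>
  <!-- Publish TF from the URDF so RViz can display the robot -->
  <node pkg="robot_state_publisher" exec="robot_state_publisher">
    <param name="robot_description"
           value="$(command 'xacro $(find-pkg-share my_robot_description)/urdf/my_robot.urdf.xacro')"/>
  </node>
  <!-- RViz with the config file mentioned below -->
  <node pkg="rviz2" exec="rviz2"
        args="-d $(find-pkg-share my_robot_description)/config/robot.rviz"/>
</launch>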
When launching RViz, we tell it to use a file named config/robot.rviz, as we can see in the rviz2 node at the end of the sketch above.
Now, in the second terminal, let’s launch the Joint State Publisher to be able to correctly see the robot wheels in RViz:
ros2 run joint_state_publisher joint_state_publisher
And on the third terminal, you can start the command to move the robot around:
ros2 run teleop_twist_keyboard teleop_twist_keyboard
And that is basically it
Congratulations. Now you know how to spawn a robot in Gazebo using Python and also using XML launch files.
We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section.
Youtube video
So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content ~every day.
Keep pushing your ROS Learning.
Related Courses & Training
If you want to learn more about ROS and ROS2, we recommend the following courses: