In this post, we will see how to get an RGB camera working in ROS2 and RVIZ2. You might already be used to doing this in ROS1 and RVIZ1, where it’s easy-peasy. In ROS2, however, it’s a bit trickier, and you are about to learn how to crack the code.
Step 1: Grab a copy of the ROS Project containing the code
Click here to get your own copy of the project (PS: If you don’t have an account on the ROS Development Studio, you would need to create one. Once you create an account or log in, we will copy the project to your workspace).
That done, open the project using the Run button. This might take a few moments, please be patient.
You should now see a notebook with detailed instructions. This post includes a summary of these instructions and some other tips.
Step 2: Explore the source code using the IDE
Open the IDE by clicking on the icon as shown above. You should now see something similar to the image below:
All the files used in the simulation are in the ros2_ws/src directory. Explore the files. Double-click to open a file in the editor. You will refer back to some of the files later on.
Step 3: Study the main files needed to get an RGB camera working in ROS2 and RVIZ2
There are two main things you need to do to get this working:
Add the camera to the URDF file (and launch the URDF file) so we can have the camera in Gazebo
Create a robot state publisher node for the robot so we can visualize it in Rviz2
Let’s see these steps in detail.
All the code for the simulation is in ros2_ws/src/box_bot/. Let’s examine the main files related to launching the camera, so you can understand how to implement yours.
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.actions import IncludeLaunchDescription
from launch.conditions import IfCondition
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    pkg_box_bot_gazebo = get_package_share_directory('box_bot_gazebo')
    pkg_box_bot_description = get_package_share_directory('box_bot_description')

    # Start World
    start_world = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(pkg_box_bot_gazebo, 'launch', 'start_world_launch.py'),
        )
    )

    # Spawn the robot
    spawn_robot_world = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(pkg_box_bot_description, 'launch', 'spawn_robot_launch_v3.launch.py'),
        )
    )

    return LaunchDescription([
        start_world,
        spawn_robot_world
    ])
The launch file above does two things:
Starts the world, by including the launch file that spawns the world in Gazebo.
Spawns the robot, by including the launch file that spawns the robot. This is where our interest lies. Let’s see how this is done, in the launch file ros2_ws/src/box_bot/box_bot_description/launch/spawn_robot_launch_v3.launch.py:
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    use_sim_time = LaunchConfiguration('use_sim_time', default='true')

    # Get the path to the URDF file
    urdf = os.path.join(get_package_share_directory('box_bot_description'), 'robot/', 'box_bot.urdf')
    assert os.path.exists(urdf), "The box_bot.urdf doesn't exist in " + str(urdf)

    # Open the URDF file
    with open(urdf, 'r') as infp:
        robot_desc = infp.read()

    return LaunchDescription([
        DeclareLaunchArgument(
            'use_sim_time',
            default_value='false',
            description='Use simulation (Gazebo) clock if true'),
        Node(package='box_bot_description', executable='spawn_box_bot.py',
             arguments=[urdf], output='screen'),
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            name='robot_state_publisher',
            output='screen',
            parameters=[{'use_sim_time': use_sim_time, 'robot_description': robot_desc}],
            arguments=[urdf]),
    ])
The launch file above does two things:
Creates a node that spawns the box bot with the camera, taking the path to the URDF file as its argument. The node is implemented in the spawn_box_bot.py file, which we’ll look at next.
Creates a robot_state_publisher node, which takes the contents of the URDF file as the robot_description parameter. This is what Rviz2 will use.
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import os
import sys
import rclpy
from gazebo_msgs.srv import SpawnEntity


def main(args=None):
    rclpy.init(args=args)
    node = rclpy.create_node('minimal_client')
    cli = node.create_client(SpawnEntity, '/spawn_entity')

    content = ""
    if sys.argv[1] is not None:
        with open(sys.argv[1], 'r') as content_file:
            content = content_file.read()

    req = SpawnEntity.Request()
    req.name = "box_bot"
    req.xml = content
    req.robot_namespace = ""
    req.reference_frame = "world"

    while not cli.wait_for_service(timeout_sec=1.0):
        node.get_logger().info('service not available, waiting again...')

    future = cli.call_async(req)
    rclpy.spin_until_future_complete(node, future)

    if future.result() is not None:
        node.get_logger().info(
            'Result ' + str(future.result().success) + " " + future.result().status_message)
    else:
        node.get_logger().info('Service call failed %r' % (future.exception(),))

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
The script simply reads the URDF file passed to it as an argument and calls the /spawn_entity service to spawn the robot in Gazebo.
Now, let’s see the URDF file itself. The part that adds the camera is labelled Camera, somewhere in the middle of the file. Locate the file in the IDE:
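In case you don’t have the ROSject open, here is a minimal sketch of what such a camera block typically looks like. This is not the exact block from the project: the link name, plugin name, and numeric values are illustrative assumptions, based on the standard ROS2 gazebo_ros_camera plugin and the /rgb_cam/image_raw topic we will subscribe to later:

<!-- Camera (illustrative sketch; link name and values are assumptions) -->
<gazebo reference="camera_link">
  <sensor type="camera" name="rgb_cam">
    <update_rate>30.0</update_rate>
    <camera name="rgb_cam">
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.05</near>
        <far>8.0</far>
      </clip>
    </camera>
    <!-- ROS2 version of the Gazebo camera plugin; with the sensor named
         rgb_cam it publishes /rgb_cam/image_raw by default -->
    <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
      <frame_name>camera_link</frame_name>
    </plugin>
  </sensor>
</gazebo>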
Step 4: Launch the Simulation and Rviz2 to see the RGB camera
Open a web shell and run the following command:
user:~$ cd ~/ros2_ws
user:~/ros2_ws$ source install/setup.bash
user:~/ros2_ws$ ros2 launch box_bot_gazebo box_bot_launch.py
Open the Gazebo app (if it does not open automatically). You should see a simulation similar to this one:
Next, let’s launch Rviz2 to see the camera. In another web shell, type:
rviz2
Now open the graphical tools app (if it does not open automatically).
You should now see the Rviz2 window. Set the Fixed Frame to base_link and click Add to add an Image display.
Expand the Image display, then expand the Topic property. Select the /rgb_cam/image_raw topic and set the Reliability to Best Effort. There you go!
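If no image shows up, it’s worth confirming that the camera topic is actually being published. These are standard ros2 CLI commands you can run in another web shell:

user:~/ros2_ws$ ros2 topic list | grep rgb_cam
user:~/ros2_ws$ ros2 topic hz /rgb_cam/image_raw

You should see /rgb_cam/image_raw in the list, and the second command should report a steady publishing rate while the simulation is running.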
Step 5: Consolidate your learning
Do you understand how to get an RGB camera working in ROS2 and RVIZ2? Are you able to implement a camera in your own simulation? If not, please go over the post again and maybe watch the video below. Let us know what worked for you in the comments.
Extra Step: Watch the video to see the sights-and-sounds version of how to get an RGB camera working in ROS2 and RVIZ2
Did you like this post? Do you have any questions about the explanations? Whatever the case, please leave a comment on the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS2 topics, please let us know in the comments area and we will do a video or post about it.
In this post, we will see how to implement Real Time in ROS2, using the ROS2 Pendulum Real Time demo.
Step 1: Grab a copy of the ROS Project containing the code
Click here to get your own copy of the project (PS: If you don’t have an account on the ROS Development Studio, you would need to create one. Once you create an account or log in, we will copy the project to your workspace).
That done, open the project using the Run button. This might take a few moments, please be patient.
You should now see a notebook with detailed instructions. This post includes a summary of these instructions and some other tips.
Step 2: Start the Simulation
Start a web shell from the bottom bar of the screen as shown above and run the following command:
cd ~/ros2_ws
source install/local_setup.bash
ros2 launch pendulum_bringup pendulum_bringup.launch.py rviz:=True
See the robot by clicking on the Graphical Tools icon on the bottom bar:
You should now see something like this (if you cannot see it, please ensure the ros2 launch command is still running and restart it if necessary):
You might need to drag the window into focus and then maximize it.
Move the robot by executing the following in a new terminal:
If you want to run the simulation on your local PC, then you need to run the commands given in the Setup section.
Step 3: Explore the source code using the IDE
Open the IDE by clicking on the icon as shown above. You should now see something similar to the image below:
All the files used in the simulation are in the ros2_ws/src directory. Explore the files. Double-click to open a file in the editor. You will refer back to some of the files later on.
Step 4: Watch the video to understand how the simulation implemented Real Time in ROS2 with the Pendulum
Here you go:
Step 5: Consolidate your learning
Do you understand how to implement Real Time in ROS2 after watching the video? If not, consider going over the video again.
Can you try to implement the simulation on your local PC?
Did you like this post? Do you have any questions about the explanations? Whatever the case, please leave a comment on the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS2 topics, please let us know in the comments area and we will do a video or post about it.
In this post, we will see how to implement Gazebo joints control in ROS2. The way this is done has changed a bit from ROS1.
Step 1: Grab a copy of the ROS Project containing the code
Click here to get your own copy of the project (PS: If you don’t have an account on the ROS Development Studio, you would need to create one. Once you create an account or log in, we will copy the project to your workspace).
That done, open the project using the Run button. This might take a few moments, please be patient.
You should now see a notebook with detailed instructions. This post includes a summary of these instructions and some other tips.
Step 2: Start the Simulation and get the robot moving
Start a web shell from the bottom bar of the screen as shown above and run the following command:
ros2 launch box_car_gazebo box_bot_launch.py
Start another tab in the web shell and run the following command:
ros2 run teleop_twist_keyboard teleop_twist_keyboard --ros-args --remap cmd_vel:=/box_bot/cmd_vel
See the robot by clicking on the Gazebo icon on the bottom bar:
You should now see something like this (if you cannot see it, please ensure the ros2 launch command is still running and restart it if necessary):
Put the Gazebo window and the shell window with the teleop command side by side, focus on the shell, and move the robot with the keyboard. PS: use the z key to reduce the speed before moving the robot forward or backward; otherwise, it runs off too quickly!
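Under the hood, teleop is just publishing geometry_msgs/Twist messages on /box_bot/cmd_vel. You can do the same directly with the standard ros2 CLI; for example, a single command to nudge the robot forward slowly (the velocity value here is arbitrary):

user:~$ ros2 topic pub --once /box_bot/cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.2}, angular: {z: 0.0}}"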
Step 3: Explore the source code using the IDE
Open the IDE by clicking on the icon as shown above. You should now see something similar to the image below:
All the files used in the simulation are in the ros2_ws/src directory. Explore the files. Double-click to open a file in the editor. You will refer back to some of the files later on.
Step 4: Watch the video to understand how we implemented gazebo joints control in ROS2
Here you go:
Step 5: Consolidate your learning
Do you understand how to implement Gazebo joints control in ROS2 after watching the video? If not, consider going over the video again.
If you are familiar with ROS control in ROS1, can you see how it differs in ROS2?
Did you like this post? Do you have any questions about the explanations? Whatever the case, please leave a comment on the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS2 topics, please let us know in the comments area and we will do a video or post about it.
In this post, we will see how to make a robot follow another robot. We’ll make the iRobot follow the big turtle all around the world when it moves, using ROS TF broadcaster and listener nodes.
PS: This ROS project is part of our ROS Mini Challenge series, which gives you an opportunity to win an amazing ROS Developers T-shirt! This challenge is already solved. For updates on future challenges, please stay tuned to our Twitter channel.
Step 1: Grab a copy of the ROS Project containing the code for the challenge
Click here to get your own copy of the project. If you don’t have an account on the ROS Development Studio, you will need to create one. Once you create an account or log in, we will copy the project to your workspace. That done, open your ROSject using the Open button. This might take a few moments, please be patient.
You should now see a notebook with detailed instructions about the challenge. This post includes a summary of these instructions as well as the solution to the challenge.
PS: Please ignore the Claim your Prize! section because…well…you are late to the party 🙂
Step 2: Start the Simulation and get the robots moving
Click on the Simulations menu and then Choose launch file… . In the dialog that appears, select rotw7.launch under the turtle_tf_3d package, then click the Launch button. You should see a Gazebo window popup showing the simulation.
Get the robots moving. Pick a Shell from the Tools menu and run the following commands:
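The exact commands are in the notebook; they boil down to launching the TF broadcaster and listener nodes, presumably via the launch file mentioned in Step 3 below (the package and file names here are taken from there, so treat this as an educated guess):

user:~$ roslaunch rotw7_pkg irobot_follow_turtle.launch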
At this point, you should already see the iRobot moving towards the big turtle.
Nothing happened? Heck, we gotta fix this! Let’s do that in the next section.
Step 3: Let’s find the problem
So the robots didn’t move as we expected. And we had this error message:
[INFO] [1580892397.791963, 77.216000]: Retrieveing Model indexes
[INFO] [1580892397.860043, 77.241000]: Robot Name=irobot, is NOT in model_state, trying again
The error message above says it cannot find the model name specified in the code, so let’s check that. Fire up the IDE from the Tools menu and browse to the directory catkin_ws/src/rotw7_pkg/scripts. We have two Python scripts in there:
turtle_tf_broadcaster.py
turtle_tf_listener.py
The robot model names are specified on line 19 of turtle_tf_broadcaster.py, in the publisher_of_tf function:
robot_name_list = ["irobot","turtle"]
Let’s check if we can find these robots in the simulation, using a Gazebo service:
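One way to list the models currently in the world is the get_world_properties service (assuming the standard gazebo_ros services are available):

user:~$ rosservice call /gazebo/get_world_properties

Check the model_names field of the response: if the simulation reports names like turtle1 and turtle2 rather than irobot and turtle, then robot_name_list must be updated to match the actual model names.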
But, which of the turtles is the follower, and what is the correct topic for its “cmd_vel”? We have a hint from the launch file irobot_follow_turtle.launch:
So the follower is turtle2. Now, let’s check what its “cmd_vel” topic is. It’s specified as /cmd_vel in the code, but is this true? Let’s check the list of topics:
user:~$ rostopic list
#...
/turtle1/cmd_vel
/turtle2/cmd_vel
Probably, it’s /turtle2/cmd_vel. How do we know? Let’s publish to both /cmd_vel and /turtle2/cmd_vel and see which works.
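A quick test with the standard rostopic CLI (the velocity values here are arbitrary):

user:~$ rostopic pub -1 /cmd_vel geometry_msgs/Twist -- '[0.3, 0.0, 0.0]' '[0.0, 0.0, 0.0]'
user:~$ rostopic pub -1 /turtle2/cmd_vel geometry_msgs/Twist -- '[0.3, 0.0, 0.0]' '[0.0, 0.0, 0.0]'

Only the second command moves the follower, so the listener code should publish to /turtle2/cmd_vel instead of /cmd_vel.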
You should now see the iRobot moving towards the big turtle. Now you can start moving the Turtle using the keyboard. Pick another Shell from the Tools menu and run the following command:
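The exact teleop command is given in the notebook; a generic equivalent, assuming the big turtle listens on /turtle1/cmd_vel, is the standard keyboard teleop with a remap:

user:~$ rosrun teleop_twist_keyboard teleop_twist_keyboard.py cmd_vel:=/turtle1/cmd_vel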
Move the big turtle around with the keyboard, and you should see that the iRobot follows it. Done, that’s an example of how to make a robot follow another robot.
Extra: Video of this post
We made a video showing how we solved this challenge and made the iRobot follow another robot. If you prefer “sights and sounds” to “black and white”, here you go:
Did you like this post? Do you have any questions about the explanations? Whatever the case, please leave a comment on the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS or ROS2 topics, please let us know in the comments area and we will do a video or post about it.
In this post, we will see how to make a robot detect and avoid an obstacle in its front. We will move the robot forward until it detects there’s an obstacle (the wall) closer than 1 meter. Then, we will stop the robot so that it does not collide with the wall.
PS: This ROS project is part of our ROS Mini Challenge series, which gives you an opportunity to win an amazing ROS Developers T-shirt! This challenge is already solved. For updates on future challenges, please stay tuned to our Twitter channel.
Step 1: Grab a copy of the ROS Project that contains the code for the challenge
Click here to get your own copy of the project. If you don’t have an account on the ROS Development Studio, you will need to create one. Once you create an account or log in, we will copy the project to your workspace. That done, open your ROSject using the Open button. This might take a few moments, please be patient.
You should now see a notebook with detailed instructions about the challenge. This post includes a summary of these instructions as well as the solution to the challenge.
PS: Please ignore the Claim your Prize! section because…well…you are late to the party 🙂
Step 2: Start the Simulation and get the robot moving
Click on the Simulations menu and then Choose launch file… . In the dialog that appears, select rotw5.launch under the rosbot_gazebo package, then click the Launch button. You should see a Gazebo window popup showing the simulation: a ROSbot in front of a wall.
Get the robot moving. Pick a Shell from the Tools menu and run the following commands:
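Judging from Step 3 below, the command in question is the package’s detect_wall script:

user:~$ rosrun rotw5_pkg detect_wall.py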
Yay, the robot moved. But, wait, it didn’t detect and avoid the obstacle – it crashed into the wall! We don’t want that, so let’s fix it in the next section!
Step 3: Identify the problem, man!
So the robot didn’t stop as we expected. As the challenge hinted, something must be wrong with the code, especially the part that should detect the wall and stop the robot. Let’s see.
Fire up the IDE from the Tools menu and browse to catkin_ws/src/rotw5_pkg/src/detect_wall.py:
#! /usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


def callback(msg):
    # If the distance to an obstacle in front of the robot is bigger
    # than 1 meter, the robot will move forward
    if msg.ranges[360] > 1:
        move.linear.x = 0.5
        move.angular.z = 0.0

    # If the distance to an obstacle in front of the robot is smaller
    # than 1 meter, the robot will stop
    if msg.ranges[360] < 1:
        move.linear.x = 0.0
        move.angular.z = 0.0

    pub.publish(move)


rospy.init_node('rotw5_node')
sub = rospy.Subscriber('/scan', LaserScan, callback)  # We subscribe to the laser's topic
pub = rospy.Publisher('/cmd_vel', Twist)
move = Twist()
rospy.spin()
Now, this is the part that should stop the robot. It says that if there’s an obstacle less than 1 meter away, we should set both linear and angular velocities to zero, stopping the robot:
#If the distance to an obstacle in front of the robot is smaller than 1 meter, the robot will stop
if msg.ranges[360] < 1:
    move.linear.x = 0.0
    move.angular.z = 0.0
Obviously, this is not being executed, because the condition msg.ranges[360] < 1 is never True! Let’s test this assumption:
In the IDE, add the following lines to detect_wall.py, after the comment shown below, then save.
#If the distance to an obstacle in front of the robot is smaller than 1 meter, the robot will stop
print "Number of readings: ", print len(msg.ranges)
print "Reading at position 360:", msg.ranges[360]
From the Shell: Press Ctrl + C to end the current program. Then stop the robot by publishing zero velocities to the /cmd_vel topic. Press Ctrl + C again and then call the Gazebo service that resets the world.
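For reference, a plausible version of those housekeeping commands, using standard rostopic/rosservice calls (the exact ones are in the notebook):

user:~$ rostopic pub -1 /cmd_vel geometry_msgs/Twist -- '[0.0, 0.0, 0.0]' '[0.0, 0.0, 0.0]'
user:~$ rosservice call /gazebo/reset_world

Then run the program again: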
user:~$ rosrun rotw5_pkg detect_wall.py
Number of readings: 720
Reading at position 360: inf
# ...
Number of readings: 720
Reading at position 360: 11.7952680588
You should notice that several readings at 360 are printed, and none of them is less than 1. Whew!
Step 4: Now let’s solve the problem
We saw from the previous section that our problem was the condition msg.ranges[360] < 1, but how did we arrive at this condition?
We knew the laser sensor has 720 scan points stored in the ranges array, as seen in the output of our print statement in Step 3.
We assumed that the laser had a range of 180 degrees, covering the front of the robot only, so that position 360 is at the front-middle of the robot. But this is false, and we can confirm this by running the following command:
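The command in question is presumably a one-shot echo of the scan topic, where we look at the angle_min and angle_max fields:

user:~$ rostopic echo -n1 /scan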
The command prints out a single message from the /scan topic. We can deduce the coverage from the angle_max and angle_min values: angle_max - angle_min = 2π rad = 360 degrees! Buff! So position 360 of the 720 scan points could be somewhere at the back of the robot. No wonder it didn’t detect the wall.
Now, we have to figure out which position is the front-middle of the robot. Depending on how the sensor was installed, this could be any position. For this robot, the sensor was installed with position 0 at the front-center, so the readings at the ends of the array (position 0 or position 719, since the 720 indices run from 0 to 719) represent the front-center. Let’s use position 0 and test again.
In the IDE, change the front-middle position from 360 to 0:
#! /usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


def callback(msg):
    # If the distance to an obstacle in front of the robot is bigger
    # than 1 meter, the robot will move forward
    if msg.ranges[0] > 1:
        move.linear.x = 0.5
        move.angular.z = 0.0

    # If the distance to an obstacle in front of the robot is smaller
    # than 1 meter, the robot will stop
    print "Number of ranges: ", len(msg.ranges)
    print "Reading at position 0:", msg.ranges[0]
    if msg.ranges[0] < 1:
        move.linear.x = 0.0
        move.angular.z = 0.0

    pub.publish(move)


rospy.init_node('rotw5_node')
sub = rospy.Subscriber('/scan', LaserScan, callback)  # We subscribe to the laser's topic
pub = rospy.Publisher('/cmd_vel', Twist)
move = Twist()
rospy.spin()
From the Shell: Press Ctrl + C to end the current program. Then stop the robot by publishing zero velocities to the /cmd_vel topic. Press Ctrl + C again and then call the Gazebo service that resets the world.
Run the program again and observe the output and see if the robot detects and avoids the obstacle:
user:~$ rosrun rotw5_pkg detect_wall.py
Number of ranges: 720
Reading at position 0: 2.49150013924
Number of ranges: 720
#...
Number of ranges: 720
Reading at position 0: 1.92249274254
#...
Number of ranges: 720
Reading at position 0: 0.991922438145
Yay, finally we are able to make the robot detect and avoid the obstacle!
Extra: Video of this post
We made a video showing how this challenge was solved. If you prefer “sights and sounds” to “black and white”, here you go:
Did you like this post? Do you have questions about what was explained? Whatever the case, please leave a comment on the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS or ROS2 topics, please let us know in the comments area and we will do a video or post about it.
In this post, we will see how to configure MoveIt to make a manipulator robot execute a trajectory based on a Position Goal. We send a Position Goal to the manipulator, and the robot computes and executes a trajectory plan in order to reach this goal.
PS: This ROS project is part of our ROS Mini Challenge series, which gives you an opportunity to win an amazing ROS Developers T-shirt! This challenge is already solved. For updates on future challenges, please stay tuned to our Twitter channel.
Step 1: Grab a copy of the ROS Project that contains the simulation
Click here to get your own copy of the project. If you don’t have an account on the ROS Development Studio, you will need to create one. Once you create an account or log in, we will copy the project to your workspace. That done, open your ROSject using the Open button. This might take a few moments, please be patient.
You should now see a notebook with detailed instructions about the challenge. Please ignore the Claim your Prize! section because…well…you are late to the party 🙂
Step 2: Start the Simulation and run the MoveIt program to configure the manipulator
Click on the Simulations menu and then Choose launch file… . In the dialog that appears, select main.launch from the shadow_gazebo package, then click the Launch button. You should see a Gazebo window popup showing the simulation: a manipulator robot.
Start the MoveIt package – pick a Shell from the Tools menu and run the following command:
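The command (it is the same one we will run again in Step 4 after fixing the problem) is:

user:~$ roslaunch myrobot_moveit_config myrobot_planning_execution.launch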
Once you see the green text that says “You can start planning now”, pick the Graphical Tools app from the Tools menu.
At first, you will probably see the window a bit displaced.
To center it, just click on the Resize opened app button. You should end up with something like this:
In the window that appears, go to the Planning tab and then to the Select Goal State dropdown under Query.
Select “start” and click Update.
Click on the Plan button under Commands in order to calculate a suitable trajectory for reaching the goal position.
Finally, click on the Execute button under Commands in order to execute the position goal (and the plan).
You should see the manipulator on the right moving in order to execute the trajectory, as shown below (in the Graphical Tools and the Gazebo windows).
But this didn’t happen, did it? Let’s fix that in the next section!
Step 3: Let’s solve the problem!
What the heck was the problem?! Hmm, I wish I had paid more attention to the Shell…because it left a message for us there, in red letters! Let’s go see it:
[ERROR] [1580760680.575679474]: Unable to identify any set of controllers that can actuate the specified joints: [ elbow_joint shoulder_lift_joint shoulder_pan_joint wrist_1_joint wrist_2_joint wrist_3_joint ]
[ERROR] [1580760680.575917519]: Known controllers and their joints:
[ INFO] [1580760680.671585828]: ABORTED: Solution found but controller failed during execution
We didn’t set up a controller for moving the robot arm. Whew!
We need to examine the package to find the fix, so fire up the IDE from the Tools menu and browse to the catkin_ws/src/myrobot_moveit_config package. In the launch subdirectory, we have a file named smart_grasping_sandbox_moveit_controller_manager.launch.xml:
The first line of the configuration above refers to a my_controllers.yaml file in the config subdirectory. Let’s open that file up:
controller_list:
Hi there!
Huh, we expected this file to list the controllers, but all we get here is “Hi there!”! We don’t need that – so this must be the wrong file! We need to find a file containing the controllers we need, so let’s check out some other files there, for example controllers.yaml:
WOW, this might be what we need (ssh! Top secret: it is!), so let’s copy the contents of controllers.yaml into my_controllers.yaml and save!
Step 4: Test the Fix
Now get back to the Shell. Stop the program we ran in Step 2 using Ctrl + C.
^C[rviz_rosdscomputer_8081_1478977940469499802-4] killing on exit
[move_group-3] killing on exit
[joint_state_publisher-2] killing on exit
[rosout-1] killing on exit
[master] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
user:~$ roslaunch myrobot_moveit_config myrobot_planning_execution.launch
After this, repeat everything in Step 2. Oops, yeah, it didn’t work yet!! Instead, we got another error:
[ERROR] [1580771100.563075907, 98.692000000]: Unable to identify any set of controllers that can actuate the specified joints: [ elbow_joint shoulder_lift_joint shoulder_pan_joint wrist_1_joint wrist_2_joint wrist_3_joint ]
[ERROR] [1580771100.563656482, 98.692000000]: Known controllers and their joints:
controller 'hand_controller' controls joints:
H1_F1J1
H1_F1J2
H1_F1J3
H1_F2J1
H1_F2J2
H1_F2J3
H1_F3J1
H1_F3J2
H1_F3J3
The controllers again! Now let’s examine the error message more closely.
The controllers for the parts that need to move could not be found. But these controllers are listed in my_controllers.yaml.
Some other controllers listed in the same file were found.
Perhaps there’s some misconfiguration – let’s have a second look at this my_controllers.yaml file:
We have two controllers: arm_controller and hand_controller.
They both have an action_ns called follow_joint_trajectory.
But they have different types! They should be the same since they have the same “action namespace”!
The type for arm_controller is probably wrong because it’s the one we couldn’t find!
Now, change the type for arm_controller to FollowJointTrajectory and repeat step 2! Now the manipulator should execute the trajectory successfully – you should see the robot arm moving in both the Graphical Tools and the Gazebo windows.
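For reference, the corrected my_controllers.yaml should end up looking roughly like this. This is a sketch reconstructed from the controller names and joints shown in the error output above; the exact file in the ROSject may differ in details such as the default flag:

controller_list:
  - name: arm_controller
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    default: true
    joints:
      - shoulder_pan_joint
      - shoulder_lift_joint
      - elbow_joint
      - wrist_1_joint
      - wrist_2_joint
      - wrist_3_joint
  - name: hand_controller
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    default: true
    joints:
      - H1_F1J1
      - H1_F1J2
      - H1_F1J3
      - H1_F2J1
      - H1_F2J2
      - H1_F2J3
      - H1_F3J1
      - H1_F3J2
      - H1_F3J3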
This is one example of how to make a manipulator robot execute a trajectory based on a Position Goal. I hope you found it useful.
Extra: Video of this post
We made a video showing how this challenge was solved. If you prefer “sights and sounds” to “black and white”, here you go:
Did you like this post? Do you have questions about what was explained? Whatever the case, please leave a comment on the comments section below, so we can interact and learn from each other.
If you want to learn about other ROS or ROS2 topics, please let us know in the comments area and we will do a video or post about it.