Hey ROS developers! In this post, we will spawn our robot into the Gazebo simulator. Based on the YouTube video series, we’ll show in this written format the steps to achieve the final result of the series!
In this post, #4, I’m going to create the transmissions and controllers for some of our joints, using the same XACRO files we have been working with. By the end of the post, we’ll have part of our model controlled by Gazebo controllers!
Step 1 – Simplifying the robot
It turns out we have a fairly complex robot, with 6 links and 5 joints. I want to make it simpler in order to show how to use the Gazebo controller plugin.
Let’s start by opening the file mrm.xacro and keeping only the first links of the robot (the rest we will comment out). It will look like the snippet below:
The first main modification is the group tag, where we define a namespace (mrm) for our robot processes. Everything inside this tag inherits the namespace, which is necessary for the controller plugin.
Then, in the Load controllers section, we have a new parameter that loads the config.yml file we have just created.
Finally, there is a new node to run the controller process.
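To make the structure concrete, here is a sketch of how these three pieces typically fit together in a roslaunch file. The controller names are taken from the logs shown below, and the namespace and config file name come from this post; the rest is an illustrative assumption, not the exact file used in the series:

```xml
<launch>
  <!-- Everything inside this group inherits the /mrm namespace -->
  <group ns="mrm">
    <!-- Load controllers: read the controller configuration onto the parameter server -->
    <rosparam file="$(find mrm_description)/config/config.yml" command="load" />
    <!-- New node to run the controller process -->
    <node name="controller_spawner" pkg="controller_manager" type="spawner"
          respawn="false" output="screen"
          args="joint_state_controller joint1_position_controller joint2_position_controller" />
  </group>
</launch>
```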
Step 7 – Testing!
Great! Now that the controllers are configured, you can spawn the robot again! Use the same commands as before, but this time you will notice the last link won’t fall down. This is the important result: the controllers are already actuating.
Check the shell: you can see the controllers being loaded in the logs!
process[mrm/mybot_spawn-1]: started with pid [4817]
process[mrm/controller_spawner-2]: started with pid [4822]
[INFO] [1567187158.002693, 0.000000]: Controller Spawner: Waiting for service /mrm/controller_manager/load_controller
the rosdep view is empty: call 'sudo rosdep init' and 'rosdep update'
SpawnModel script started
[INFO] [1567187165.292282, 0.000000]: Loading model XML from ros parameter
[INFO] [1567187165.303925, 0.000000]: Waiting for service /gazebo/spawn_urdf_model
[INFO] [1567187165.312998, 0.000000]: Calling service /gazebo/spawn_urdf_model
[INFO] [1567187165.504207, 904.010000]: Spawn status: SpawnModel: Successfully spawned entity
[mrm/mybot_spawn-1] process has finished cleanly
log file: /home/user/.ros/log/cfa21f58-cb4b-11e9-9ca9-0a2504d14030/mrm-mybot_spawn-1*.log
[INFO] [1567187166.829017, 904.050000]: Controller Spawner: Waiting for service /mrm/controller_manager/switch_controller
[INFO] [1567187166.831181, 904.050000]: Controller Spawner: Waiting for service /mrm/controller_manager/unload_controller
[INFO] [1567187166.848968, 904.070000]: Loading controller: joint_state_controller
[INFO] [1567187167.000380, 904.220000]: Loading controller: joint1_position_controller
[INFO] [1567187167.268639, 904.500000]: Loading controller: joint2_position_controller
[INFO] [1567187167.345849, 904.570000]: Controller Spawner: Loaded controllers: joint_state_controller, joint1_position_controller, joint2_position_controller
[INFO] [1567187167.355672, 904.590000]: Started controllers: joint_state_controller, joint1_position_controller, joint2_position_controller
We also have new ROS topics related to the controllers:
Great! If you missed any of the steps or something just didn’t work, you can check the original project by clicking this link: http://www.rosject.io/l/c1ed2ee/
Hey ROS developers! In this post, we will spawn our robot into the Gazebo simulator. Based on the YouTube video series, we’ll show in this written format the steps to achieve the final result of the series!
In this post, #3, I’m going to create the inertial properties for each of our links, using the same XACRO files we have been working with. By the end of the post, we’ll have the complete model of the robot inside a Gazebo world!
We added the inertial property, which passes the mass and inertia values of the real robot (or robot design) to the simulator. These values are used by Gazebo’s physics engine.
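For the simple shapes used in this robot, the inertia values can also be computed from standard closed-form formulas instead of a CAD package. A small standalone sketch (plain Python; the function names are my own, not part of the series):

```python
def box_inertia(m, x, y, z):
    """Diagonal inertia tensor of a solid box with mass m and side lengths x, y, z."""
    ixx = m * (y**2 + z**2) / 12.0
    iyy = m * (x**2 + z**2) / 12.0
    izz = m * (x**2 + y**2) / 12.0
    return ixx, iyy, izz

def cylinder_inertia(m, r, h):
    """Diagonal inertia tensor of a solid cylinder (axis along z) with mass m, radius r, height h."""
    ixx = iyy = m * (3 * r**2 + h**2) / 12.0
    izz = m * r**2 / 2.0
    return ixx, iyy, izz

# Example: a 1 kg cylinder link, radius 0.35 m, length 0.4 m
print(cylinder_inertia(1.0, 0.35, 0.4))
```

The three returned values map directly to the ixx, iyy and izz attributes of the inertia tag (the off-diagonal terms are zero for these symmetric shapes).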
And the collision property is used to detect when an object collides with another during the simulation. Note that its values are the same as those of the visual part. That’s because we have a simple visual shape for these links. Visual and collision may have different values when we want to simplify the collisions of a more complex mesh we may have for a link (we’ll see an example by the end of the series).
Finally, if you are copying the new values manually, don’t forget the new entries we have added to the params attribute of the xacro:macro tag.
Box links
Now for the second type of link we have: the box. We need to make very similar modifications, but with quite different parameters, due to the shape we are dealing with now.
Again, we have added new parameters to the macro tag, which are in the value of the params attribute.
And we added the inertial and collision tags, for the very same reasons.
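As an illustration of the resulting structure (the parameter names here are mine, not necessarily the series’ exact ones), a box-link macro with visual, collision and inertial blocks could look like:

```xml
<xacro:macro name="m_link_box" params="name origin_xyz origin_rpy size mass ixx ixy ixz iyy iyz izz">
  <link name="${name}">
    <inertial>
      <mass value="${mass}" />
      <origin xyz="${origin_xyz}" rpy="${origin_rpy}" />
      <inertia ixx="${ixx}" ixy="${ixy}" ixz="${ixz}" iyy="${iyy}" iyz="${iyz}" izz="${izz}" />
    </inertial>
    <collision>
      <origin xyz="${origin_xyz}" rpy="${origin_rpy}" />
      <geometry>
        <box size="${size}" />
      </geometry>
    </collision>
    <visual>
      <origin xyz="${origin_xyz}" rpy="${origin_rpy}" />
      <geometry>
        <box size="${size}" />
      </geometry>
    </visual>
  </link>
</xacro:macro>
```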
Step 2 – Adjusting the main XACRO file – mrm.xacro
Let’s open the main file we have to describe the entire robot: ~/simulation_ws/src/mrm_description/urdf/mrm.xacro
The changes we need to make are related only to the new parameters we added to the MACROs. The tedious part is that we’ll do it link by link (which is also the most important part when you have many different links from a real robot). Let’s start!
Base_link
Starting from the base_link (or ${link_00_name}, the one which is a box):
The values used to fill the 7 new parameters were taken from a CAD software. It’s not something we will cover in this series, we are just bringing pre-calculated values to the practical part of simulating a robot.
Link_01
For the next link named link_01 (or ${link_01_name}), let’s do the same (very similar, at least):
Just before trying to simulate it, let’s make sure we didn’t break anything we had before. Let’s open it in RViz, using the same launch file we used before:
From a web shell, execute the following:
roslaunch mrm_description rviz.launch
And you should have a screen like the one below:
Great!
Step 4 – Creating a launch file
Before anything related to the simulation, we need a launch file to spawn the robot. Let’s create one:
The file will be ~/simulation_ws/src/mrm_description/launch/spawn.launch
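As a reference, a minimal spawn launch file for this kind of setup usually looks like the sketch below. The node name mybot_spawn matches the one seen in the spawn logs of this series; the rest is an assumption about the typical gazebo_ros pattern, not the exact file:

```xml
<launch>
  <!-- Load the robot description generated from the xacro file -->
  <param name="robot_description"
         command="$(find xacro)/xacro --inorder $(find mrm_description)/urdf/mrm.xacro" />
  <!-- Spawn the robot into the running Gazebo world -->
  <node name="mybot_spawn" pkg="gazebo_ros" type="spawn_model" output="screen"
        args="-urdf -param robot_description -model mrm" />
</launch>
```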
What follows is the ROS story of Shahram Najam from Pakistan. Shahram is a ROS student at the Robot Ignite Academy, and he wanted to share his ROS engagement with all of us.
Why ROS?
So, for this article, it would be appropriate to give a brief background of who I am and what my credentials are. My name is Shahram and I am an undergrad student at one of the most renowned universities of Pakistan, NED University of Engineering and Technology, majoring in Electronics with a specialization in Robotics.
One of the most common questions I come across (which, evidently, I had in my mind as a beginner as well) is “What is ROS?” and “Why should I learn ROS?”. These are the questions that I will be addressing in my brief article today, sharing the personal motivation and experience that led me down the mysterious pathways of ROS.
What is ROS?
Ever since its inception back in 2007, ROS’s popularity and importance as a universal framework have grown exponentially, and now the ROS community comprises researchers and hobbyists alike. Everyone, irrespective of their background, is mesmerized by the “esoteric world” of ROS. So it would only be fair to introduce ROS in the simplest of manners.
ROS is an acronym for “Robot Operating System”. It is obviously not your traditional operating system like Microsoft Windows, Linux or macOS; rather, it is a “meta-operating system” for a robot (aerial robots/drones, humanoid robots, UGVs, etc.), which provides the relevant services one would expect from an OS to control, and communicate between, the various components of the robot so that they function seamlessly. In short, ROS is a flexible and universal framework for writing and publishing software for robots. Due to its vast community, there are readily available libraries and tools that can be used as out-of-the-box solutions across a wide variety of platforms for building robust, composite robot behavior. In other words, the concept of ROS was conceived with one thing in mind: making robotic software reusable across various platforms without being interlaced with any specific hardware.
Why ROS is the best option to become a robotics developer?
In the modern day and age of computers, practically everything is being automated, and behind each of these systems there is robust software. Those who have been part of software development will agree that it can be quite time-consuming, which is why most of us try to use readily available frameworks that offer tools and conventions to streamline and speed up development while being unobtrusive, modular and efficient.
Likewise, before the advent of ROS, most of the time in research projects was invested in re-implementing software infrastructure, such as drivers for embedded electronics, actuators and sensor networks, or defining communication protocols, and only a brief period was actually dedicated to research on intelligent algorithms for intelligent robots.
ROS is nothing short of a miracle and a time-saver for any robotics developer in this era of robotics. The ROS community has skyrocketed, and now, on a daily basis, state-of-the-art robotics tools, libraries and packages are published and open-sourced by researchers and industry. These can be used as modular building blocks for a robust project foundation, and to simulate a project’s behavior without the constraints of hardware availability and cost. This is a win-win situation for any robotics developer, irrespective of their background: a hobbyist looking to make a savvy line-follower robot using image processing and a sensor network; a researcher implementing a state-of-the-art mobility or control technique, presented just days before at a conference, for further improvement; or an industry looking to roll out the latest update seamlessly to its network of assembly-line robots without stopping production. The question isn’t “Why should I become a ROS developer?”; rather, it should be “Why shouldn’t I become a ROS developer?”, given the strong community support and the easily reproducible state-of-the-art tools available.
Why do I want to be a ROS developer & What do I plan on using ROS for?
Well, like many others, I have always been fascinated by robots, and this fascination and alacrity led me to the world of embedded systems for robotics. As my capstone research project in my final year, I decided to work on Pakistan’s first-ever human-sized humanoid robot, “I-Force”, due to my obsession with human-machine interaction. I instantly ran into a world of problems when the software development phase began. It was quite difficult to mix and match the various tools for actuator control, image processing, etc., and to implement state-of-the-art algorithms. This was the time when I was desperately looking for a way out of robotics, to change my specialization to something else in the domain of electronics. That’s when I came across the esoteric yet wonderful world of ROS.
The more I read about ROS, the more I fell in love with it, so I decided to incorporate ROS into my humanoid robot “I-Force”. In its current state, it is able to detect an object (without context) using a low-level image processing algorithm, localize it in 3D space, use that location to determine parameters like actuator position and sweep velocity through inverse kinematics, and pick the object up with the end-effector, which in my case is a claw. As a ROS developer, I plan to port the entire system onto ROS and add AI-aided decision-making to learn optimal strategies for detecting objects with context in a high-dimensional state space, implementing Reinforcement Learning (RL) with the various tools that ROS has to offer. The sheer amount of data required makes it impractical to “teach” the robot these capabilities directly, so a ROS simulation could be used for training, transferring the useful “insight” from state-of-the-art deep learning and RL models to the actual robot (this approach is already widely used for training warehouse robots, UGVs, drones, etc.).
Why did I consider Robot Ignite Academy as the best place to learn ROS?
While searching for a comprehensive step-by-step tutorial for learning ROS from the ground up, I came across various books, video tutorials, and blog posts, but none were as comprehensive and exhaustive as Robot Ignite Academy. The way clear paths are defined, with a series of courses offering hands-on exposure to reach a specific goal rather than just chugging down theory, is truly an amazing concept, and all of this without the need for any ROS installation on a local system.
Based on the YouTube video series, we’ll learn in this post the steps to achieve the final result of the series!
In this post, #2, I’m going to show how to use XACROs in order to simplify a URDF file. By the end of the post, we’ll have the complete model of the robot, which includes 6 links in total, and visualize it in RViz!
Before anything else, if you want to use the logo above on your own robot or computer, feel free to download it and attach it to your robot. It is really free. Find it in the link below:
In order to follow this tutorial, we need to have ROS installed in our system, and ideally a ROS Workspace (it can be named simulation_ws). To make your life easier, we have already prepared a rosject with a simulation for that: https://app.theconstructsim.com/l/bc1c398/.
You can download the rosject on your own computer if you want to work locally, but just by copying the rosject (clicking the link), you will have a setup already prepared for you.
After the rosject has been successfully copied to your own area, you should see a Run button. Just click that button to launch the rosject (below you have a rosject example).
My Robotic Manipulator #2: Basic URDF & RViz – Run rosject (example of the RUN button)
After pressing the Run button, you should have the rosject loaded. Now, let’s head to the next section to get some real practice.
Open a terminal
In order to create files, and launch ROS nodes, we need a terminal. Let’s open a terminal by clicking the Open a new terminal button.
Open a new Terminal
Next step – Create some MACROs to help
Having opened the first terminal, let’s now proceed with the other parts of this tutorial.
We have already created some links and a joint in the previous post. Now we need to create more of them, but let’s try to make it simpler, making our XACRO file cleaner.
Let’s create a new file, ~/simulation_ws/src/mrm_description/urdf/links_joints.xacro running the following commands in this first terminal:
In this file, we are going to create XML MACROs. They will help us create links and joints. Please open the file in the Code Editor, also known as the IDE (Integrated Development Environment).
In order to open the IDE, just click the Code Editor button, as we can see in the image below:
Open the IDE – Code Editor
Once the Code Editor is open, just navigate to simulation_ws/src/mrm_description/urdf/links_joints.xacro, and paste the following content to it:
We have created 3 MACROs, each one with a name defined in the name attribute of its xacro:macro tag. The params attribute is used for defining the parameters we want to make available in a MACRO.
These parameters are used in the following format: ${param_name}
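As a toy illustration of the pattern (not the exact macros of this series), a macro named m_joint taking three parameters would be defined like this:

```xml
<xacro:macro name="m_joint" params="name parent child">
  <joint name="${name}" type="fixed">
    <parent link="${parent}" />
    <child link="${child}" />
  </joint>
</xacro:macro>
```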
Let’s check how to use them in the next section.
Using our XML MACROs
In our main file, ~/simulation_ws/src/mrm_description/urdf/mrm.xacro, we have to include this new XACRO file. Let’s override its content with the following one!
After these changes, we are now using the MACROs we have just defined, and we should have the very same model that we had before. Let’s check it in RViz:
roslaunch mrm_description rviz.launch
The RViz window should be similar to the one in the previous post, when we did not use our own macros:
Using multiple MACRO files
In order to get used to working with MACRO files, let’s create another one. This new file will only define the names of the links and joints we are going to have!
Let us create a new file ~/simulation_ws/src/mrm_description/urdf/robot_parameters.xacro:
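A parameters file like this typically just declares one xacro:property per name. The property names below are illustrative guesses in the style of this series, not the exact file content:

```xml
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro">
  <xacro:property name="link_00_name" value="base_link" />
  <xacro:property name="link_01_name" value="link_01" />
  <xacro:property name="joint_01_name" value="base_link__link_01" />
  <!-- one property per link/joint name of the robot -->
</robot>
```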
Let's also include it in the main file mrm.xacro (just appending a new include to the INCLUDE section):
<!-- BGN - Include -->
<xacro:include filename="$(find mrm_description)/urdf/links_joints.xacro" />
<xacro:include filename="$(find mrm_description)/urdf/robot_parameters.xacro" />
<!-- END - Include -->
Finally, replace the names in the links and joints with the values we imported from the parameters file. You should have a description part similar to the one below:
If you want, you can explore and play with the parameters (radius and length, for example) in order to customize your own model.
Checking the robot in RViz
Alright! With everything in place, it’s time to see the robot in RViz.
Our rviz.launch may be using joint_state_publisher, but we need joint_state_publisher_gui. To solve this, let’s open that launch/rviz.launch file using the Code Editor, and replace its content with the following one:
Now, in the first terminal, let’s launch RViz again. Remember to kill any running process that you may have in the first terminal by pressing CTRL+C first.
roslaunch mrm_description rviz.launch
If everything went ok, you should have something similar to the following robot in RViz, after moving the joints using the Joint State Publisher:
Congratulations. We have finished the basics of a robot model using Xacro files.
In the next post, we’ll create the inertia of the robot. A very important step to have it working in Gazebo!
We hope this post was really helpful to you. If you want a live version of this post with more details, please check the video in the next section. Although the video was recorded in a previous version of The Construct platform, it still adds a lot of value:
Youtube video
So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our youtube channel. We are publishing new content ~every day.
This is the last post of the series, actually a bonus post, to show something ready-to-use that ROS provides: the GMapping package, which creates maps while our robot navigates a given environment. We are going to show how to configure a launch file to use the same robot we have been using throughout this series.
Let’s start!
Step 1 – Configuring the environment
First things first: we will use the same robot in the last world we worked with. If you don’t have the ROSject, just make a copy by clicking here.
Next, launch a simulation, as we have done before:
And spawn the robot using the command below in a shell:
roslaunch m2wr_description spawn y:=8
You should see the robot there, waiting to be used:
Step 2 – Configuring GMapping launch file
We need to create a new launch file, and I chose to put it in the motion_plan package (~/catkin_ws/src/motion_plan/launch/gmapping.launch).
The content must be the same as below, and I’ll explain the major points:
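Since the exact file isn’t reproduced here, the sketch below shows a typical slam_gmapping configuration for this kind of robot. The scan topic and base frame are assumptions based on the m2wr robot used in this series; check them against your own model:

```xml
<launch>
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <!-- Laser topic of the robot (assumed from this series) -->
    <remap from="scan" to="/m2wr/laser/scan" />
    <!-- Frames: robot base and odometry (assumed names) -->
    <param name="base_frame" value="link_chassis" />
    <param name="odom_frame" value="odom" />
    <!-- How often (seconds) the map is updated -->
    <param name="map_update_interval" value="5.0" />
  </node>
</launch>
```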
If you don’t know this package yet: it is used to send velocity commands to the robot using the keyboard. You can check the instructions in the shell itself:
Step 4 – Results
After driving the robot around the environment, you should have something similar to the image below:
The robot is placed more or less at the center of the world and we have almost the entire map generated.
It depends on the hardware, the odometry and the way you have navigated. But we have a map! Don’t we?!
Related Courses
ROS Navigation
RTAB-Map Course
Conclusion
After configuring the launch file, it’s quite easy to use GMapping. You can take advantage of this package to generate maps of different environments!
Don’t forget: If you have suggestions, doubts or just like this post, please, leave a comment and let us know your opinion.
Yes, we have skipped post #11! That’s because it covered an adjustment from ROS Indigo to ROS Kinetic. It’s no longer necessary, because we have been using Kinetic from the start.
In this post, we implement the Bug 2 algorithm on our existing robot, reusing some algorithms we have already implemented to get it all working!
At the end of this post, we’ll have the robot working as shown in the image below:
Step 1 – Importing libraries
We start by creating a script file: ~/catkin_ws/src/motion_plan/scripts/bug2.py
(Don’t forget to make the file executable, e.g. with chmod +x.)
Open the file in the editor and start coding. We start from the necessary libraries:
#! /usr/bin/env python
# import ros stuff
import rospy
# import ros message
from geometry_msgs.msg import Point
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Odometry
from tf import transformations
from gazebo_msgs.msg import ModelState
from gazebo_msgs.srv import SetModelState
# import ros service
from std_srvs.srv import *
import math
Now we define some global variables to store the goal and the current status of the robot and the algorithm:
srv_client_go_to_point_ = None
srv_client_wall_follower_ = None
yaw_ = 0
yaw_error_allowed_ = 5 * (math.pi / 180) # 5 degrees
position_ = Point()
initial_position_ = Point()
initial_position_.x = rospy.get_param('initial_x')
initial_position_.y = rospy.get_param('initial_y')
initial_position_.z = 0
desired_position_ = Point()
desired_position_.x = rospy.get_param('des_pos_x')
desired_position_.y = rospy.get_param('des_pos_y')
desired_position_.z = 0
regions_ = None
state_desc_ = ['Go to point', 'wall following']
state_ = 0
count_state_time_ = 0 # seconds the robot is in a state
count_loop_ = 0
# 0 - go to point
# 1 - wall following
As in the Bug 1 algorithm, we store the state of the robot. I put a description of each state in a list called state_desc_. There are also 2 new parameters being read: initial_x and initial_y. They are used to restore the robot’s position in case you want to restart the algorithm, which is very useful for testing it many times in a row!
Step 2 – Defining callbacks
Here we define the callbacks, the very same as before; no big deal here. But don’t miss this part of the code:
The first helper function is change_state. We had one like this in Bug 1, but we have different rules and states now. Let’s check it:
def change_state(state):
    global state_, state_desc_
    global srv_client_wall_follower_, srv_client_go_to_point_
    global count_state_time_
    count_state_time_ = 0
    state_ = state
    log = "state changed: %s" % state_desc_[state]
    rospy.loginfo(log)
    if state_ == 0:
        resp = srv_client_go_to_point_(True)
        resp = srv_client_wall_follower_(False)
    if state_ == 1:
        resp = srv_client_go_to_point_(False)
        resp = srv_client_wall_follower_(True)
We have only 2 states: wall_follower and go_to_point. The difference is that we are not driving the robot in a straight line to the desired point, but along the original line: the one from the initial position to the desired one. This is implemented in the main function.
The next one: distance_to_line:
def distance_to_line(p0):
    # p0 is the current position
    # p1 and p2 points define the line
    global initial_position_, desired_position_
    p1 = initial_position_
    p2 = desired_position_
    # point-to-line distance equation
    up_eq = math.fabs((p2.y - p1.y) * p0.x - (p2.x - p1.x) * p0.y + (p2.x * p1.y) - (p2.y * p1.x))
    lo_eq = math.sqrt(pow(p2.y - p1.y, 2) + pow(p2.x - p1.x, 2))
    distance = up_eq / lo_eq
    return distance
This function calculates the distance from the robot to the initial line. It is super important for this algorithm and is used all the time during its execution.
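The formula is the standard point-to-line distance. A self-contained check outside ROS (the Point here is a stand-in for geometry_msgs/Point, and the helper takes the line endpoints as arguments instead of globals):

```python
import math
from collections import namedtuple

Point = namedtuple('Point', 'x y')  # stand-in for geometry_msgs/Point

def distance_to_line(p0, p1, p2):
    """Distance from point p0 to the infinite line through p1 and p2."""
    up_eq = math.fabs((p2.y - p1.y) * p0.x - (p2.x - p1.x) * p0.y
                      + (p2.x * p1.y) - (p2.y * p1.x))
    lo_eq = math.sqrt((p2.y - p1.y) ** 2 + (p2.x - p1.x) ** 2)
    return up_eq / lo_eq

# A point 4 m above a horizontal line is 4 m away from it
print(distance_to_line(Point(3, 4), Point(0, 0), Point(10, 0)))  # 4.0
```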
Finally, we’ll use again the normalize_angle function:
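For reference, a sketch of how this helper is commonly implemented in this series: it wraps an angle into [-pi, pi], assuming the input is within one extra turn (between -3π and 3π), which holds for the angle errors we compute here:

```python
import math

def normalize_angle(angle):
    # Wrap the angle into [-pi, pi] by subtracting one full turn
    # in the direction of the angle's sign (single wrap only).
    if math.fabs(angle) > math.pi:
        angle = angle - (2 * math.pi * angle) / (math.fabs(angle))
    return angle

print(normalize_angle(3 * math.pi / 2))  # -> -pi/2
```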
Part 1 – ROS node, callbacks, services and initial state
def main():
    global regions_, position_, desired_position_, state_, yaw_, yaw_error_allowed_
    global srv_client_go_to_point_, srv_client_wall_follower_
    global count_state_time_, count_loop_
    rospy.init_node('bug2')
    sub_laser = rospy.Subscriber('/m2wr/laser/scan', LaserScan, clbk_laser)
    sub_odom = rospy.Subscriber('/odom', Odometry, clbk_odom)
    rospy.wait_for_service('/go_to_point_switch')
    rospy.wait_for_service('/wall_follower_switch')
    rospy.wait_for_service('/gazebo/set_model_state')
    srv_client_go_to_point_ = rospy.ServiceProxy('/go_to_point_switch', SetBool)
    srv_client_wall_follower_ = rospy.ServiceProxy('/wall_follower_switch', SetBool)
    srv_client_set_model_state = rospy.ServiceProxy('/gazebo/set_model_state', SetModelState)
    # set robot position
    model_state = ModelState()
    model_state.model_name = 'm2wr'
    model_state.pose.position.x = initial_position_.x
    model_state.pose.position.y = initial_position_.y
    resp = srv_client_set_model_state(model_state)
    # initialize going to the point
    change_state(0)
    rate = rospy.Rate(20)
This is necessary boilerplate, following the same pattern we have used so far.
Part 2 – The loop logic
    while not rospy.is_shutdown():
        if regions_ == None:
            continue
        distance_position_to_line = distance_to_line(position_)
        if state_ == 0:
            if regions_['front'] > 0.15 and regions_['front'] < 1:
                change_state(1)
        elif state_ == 1:
            if count_state_time_ > 5 and \
               distance_position_to_line < 0.1:
                change_state(0)
        count_loop_ = count_loop_ + 1
        if count_loop_ == 20:
            count_state_time_ = count_state_time_ + 1
            count_loop_ = 0
        rospy.loginfo("distance to line: [%.2f], position: [%.2f, %.2f]", distance_to_line(position_), position_.x, position_.y)
        rate.sleep()

if __name__ == "__main__":
    main()
We are constantly calculating the distance to the original line. Depending on the presence of obstacles, we switch between wall_follower and go_to_point.
The point used as the goal is always the nearest point on the line.
Step 5 – Launch it!
Let’s create a launch file and see it working!
Create a new file at ~/catkin_ws/src/motion_plan/launch/bug2.launch. The content is:
Remember! If you missed something in any of the steps above, you can always clone our ROSject! Use this link to get your copy: http://www.rosject.io/l/b550d08/
If you like it or have a suggestion to improve it, leave a comment!