ROS Q&A | Treating quaternions for 2D navigation

Are you having trouble making your robot navigate because of the kind of data you have to handle? It's very common to get stuck when we have to work with quaternions instead of RPY angles. Quaternions are used in robotics to represent the rotation of a rigid body, about three axes, with respect to a coordinate frame. But sometimes that's more than we need.

In this post, we are going to see how to extract only the data we need from a quaternion for a 2D application. This is very helpful, since robotics algorithms work with quaternions, while user interfaces usually work with RPY angles.
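
For a planar robot, the only angle we actually need is the yaw (the rotation around the Z axis). For a quaternion (x, y, z, w) it can be computed directly as

yaw = atan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))

but, as we will see below, the TF library already does this conversion for us.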

Let's start with a robot to make things concrete. In this post, we are going to use the Turtlebot simulation, available in ROS Development Studio (RDS).

Turtlebot Simulation

The robot's sensors provide an /odom topic, which publishes messages of the nav_msgs/Odometry type. Below is an example of a single message:

Odometry message
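
For reference, the nav_msgs/Odometry message has the following structure; the part we need here is pose.pose, which holds the position and the orientation quaternion:

std_msgs/Header header
string child_frame_id
geometry_msgs/PoseWithCovariance pose
geometry_msgs/TwistWithCovariance twist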

Now, unless your path is described using quaternions, we need to convert this quaternion to RPY. To do that, we will use the TF library. Below, you can see the program in charge of this task:


#include <ros/ros.h>
#include <tf/tf.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/Pose2D.h>

ros::Publisher pub_pose_;

void odometryCallback_(const nav_msgs::Odometry::ConstPtr& msg) {
  geometry_msgs::Pose2D pose2d;
  pose2d.x = msg->pose.pose.position.x;
  pose2d.y = msg->pose.pose.position.y;

  // Build a tf quaternion from the odometry orientation and extract roll, pitch, yaw
  tf::Quaternion q(
    msg->pose.pose.orientation.x,
    msg->pose.pose.orientation.y,
    msg->pose.pose.orientation.z,
    msg->pose.pose.orientation.w);
  tf::Matrix3x3 m(q);
  double roll, pitch, yaw;
  m.getRPY(roll, pitch, yaw);

  pose2d.theta = yaw;
  pub_pose_.publish(pose2d);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "conversion_node");

  ros::NodeHandle nh_;

  ros::Subscriber sub_odom_ = nh_.subscribe("odom", 1, odometryCallback_);
  pub_pose_ = nh_.advertise<geometry_msgs::Pose2D>("pose2d", 1);

  ros::spin();

  return 0;
}

First, we create a node and subscribe to the /odom topic. In the callback of this subscriber, we get the information that matters to us: the X, Y and yaw data. To obtain the yaw angle, we take the quaternion and convert it to RPY (we need a Matrix3x3 object as an intermediate step). Since our robot has only three degrees of freedom (linear in X and Y, angular in Z), we do not consider roll and pitch, so the Pose2D message type is enough for us. This is a simplification for a ground robot navigating in a planar scenario.
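
By the way, if the yaw is really all you need, tf also provides a shortcut that skips the intermediate matrix. This one-liner is a small variation of the callback above (same message layout assumed):

pose2d.theta = tf::getYaw(msg->pose.pose.orientation);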

Go to point program for ground robots using ActionLib


Navigation is one of the most challenging tasks in robotics. To reach a goal or follow a trajectory, the robot must know the environment and localize itself using sensors and a map.

But when the robot already has this information, it is possible to start navigating: defining points to follow in order to reach a goal. That's the point we are starting from in this post. Using a very popular ground robot, the Turtlebot 2, we are going to perform a navigation task, or at least part of it.

In order to program the robot, RDS (ROS Development Studio) is going to be used. Before starting, make sure you are logged in with your credentials, can see the public simulation list (as below), and run the Kobuki Gazebo simulation.

RDS Public Simulation List

At this point, you should have the simulation running (image below). On the left side of the screen, you have a Jupyter Notebook with the basics and some instructions about the robot. In the center, the simulation. On the right, an IDE and a terminal. It's possible to set each one to full-screen mode or rearrange the screen as you wish. Feel free to explore it!

Kobuki simulation running in RDS

So, you have already noticed the simulation is running, the robot is ready, and you might have sent some velocity commands already. Now let's move on to our action server, which sends the velocity commands so that the robot reaches a goal. First, clone this repository [https://marcoarruda@bitbucket.org/TheConstruct/theconstruct_navigation.git] to your workspace (yes, your cloud workspace, using RDS!). Done? Let's explore it for a while. Open the file src/theconstruct_navigation/go_to_point2d/src/go_to_point2d_server.cpp. You should see something like this:
go_to_point2d_server.cpp (action server code)

So, let's start from the main function of the file. There we create a node (ros::init()) and a GoToPoint2DAction object. That's the name of the class created at the beginning of the file. Once this object is created, all the methods and behaviors of the class are up and running.

Now, looking inside the class, we can see that there are some methods and attributes. The attributes are used only inside the class; the interface between the object and our ROS node is its public methods.

When the object is instantiated, it registers the callbacks required by the actionlib library (goalCB, which receives the goal, i.e. the point we want to send the robot to, and preemptCB, which allows us to interrupt the task). It also reads some parameters from the launch file and, finally, creates a publisher for the velocity topic and subscribes to the odometry topic, which is used to localize the robot.
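
To make that structure easier to picture, here is a minimal, hypothetical sketch of such an action server. It is not the repository's code: the topic names, parameter names and goal handling below are assumptions, and the real definitions live in go_to_point2d_server.cpp and GoToPoint2D.action.

#include <ros/ros.h>
#include <actionlib/server/simple_action_server.h>
#include <boost/bind.hpp>
#include <geometry_msgs/Twist.h>
#include <nav_msgs/Odometry.h>
#include <theconstruct_navigation/GoToPoint2DAction.h>  // generated from GoToPoint2D.action

class GoToPoint2DAction {
public:
  GoToPoint2DAction(std::string name)
    : as_(nh_, name, false), action_name_(name) {
    // Mandatory actionlib callbacks: new goal and preemption
    as_.registerGoalCallback(boost::bind(&GoToPoint2DAction::goalCB, this));
    as_.registerPreemptCallback(boost::bind(&GoToPoint2DAction::preemptCB, this));

    // Parameters from the launch file (illustrative name)
    ros::NodeHandle pnh("~");
    pnh.param("max_linear_vel", max_linear_vel_, 0.3);

    // Velocity publisher and odometry subscriber
    pub_vel_ = nh_.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    sub_odom_ = nh_.subscribe("odom", 1, &GoToPoint2DAction::odomCB, this);

    as_.start();
  }

  void goalCB() {
    // Accept the new goal; its fields (target x/y, etc.) are defined in GoToPoint2D.action
    goal_ = as_.acceptNewGoal();
  }

  void preemptCB() {
    // Interrupt the current task
    as_.setPreempted();
  }

  void odomCB(const nav_msgs::Odometry::ConstPtr& msg) {
    // Localize the robot and, while a goal is active, compute and publish velocity commands
  }

protected:
  ros::NodeHandle nh_;
  actionlib::SimpleActionServer<theconstruct_navigation::GoToPoint2DAction> as_;
  std::string action_name_;
  theconstruct_navigation::GoToPoint2DGoalConstPtr goal_;
  ros::Publisher pub_vel_;
  ros::Subscriber sub_odom_;
  double max_linear_vel_;
};

int main(int argc, char **argv)
{
  ros::init(argc, argv, "go_to_point2d_server");
  GoToPoint2DAction server("go_to_point2d");
  ros::spin();
  return 0;
}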

Let's compile it! Using the terminal, enter the catkin_ws directory (cd catkin_ws) and compile the workspace (catkin_make). It may take a few minutes, because we are generating the message files. The action message is defined in theconstruct_navigation/action/GoToPoint2D.action. You can open it and see what the action expects and delivers.

Finally, let's run the action server, using the launch file to set the parameters:

roslaunch go_to_point2d go_to_point2d_server.launch

Did the robot move? No? Great! The server is waiting for goals, so it must not send any command until we create an action client and send requests. First, let's take a look at the launch file:

go_to_point2d_server.launch

Notice that we have some parameters that define the limits of the robot's operation. The first three set the maximum and minimum linear velocity, plus a gain used to compute the robot's speed on a straight line, since that speed depends on the distance between the robot and the goal point.

The next three parameters do the same, but for the angular velocity.

Finally, the last three parameters establish a tolerance for the robot. The robot's odometry and yaw measurement are not perfect, so we have to accept some error. The tolerance cannot be too small, otherwise the robot will never reach its goal; if it is too big, the robot will stop far away from the goal (it depends on the robot's perception).
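
The exact control law lives in the server code, but just to illustrate how this kind of gain/limit/tolerance parameters are typically combined, here is a small, hypothetical helper (the names are made up for the example):

#include <algorithm>
#include <cmath>

// Illustrative only: forward speed for a robot at (x, y) heading towards (goal_x, goal_y).
double linearCommand(double x, double y, double goal_x, double goal_y,
                     double gain, double min_vel, double max_vel, double tolerance)
{
  double distance = std::hypot(goal_x - x, goal_y - y);
  if (distance < tolerance)
    return 0.0;  // close enough: stop
  // Speed proportional to the remaining distance, clamped to [min_vel, max_vel]
  return std::min(max_vel, std::max(min_vel, gain * distance));
}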

Now that we have the basic idea of how this package works, let's use it! In order to create an action client and send a goal to the server, we are going to use the Jupyter notebook and create a node in Python. You can use the following code and watch the robot going to the points:

Python action client (Jupyter notebook)

Restart the notebook kernel before running it, because we have just compiled a new package. Execute the cells one by one, in order, and you'll see the robot going to the point!
If you have any doubts about how to do it, please leave a comment. You can also check this video, where all the steps described in this post are carried out:
[ROS Q&A] How to test ROS algorithms using ROS Development Studio

Related ROS Answers Forum question: actionlib status update problem
