Creating a Restaurant Robot Simulation, Part 1 – #ROS Projects

What we are going to learn

  • How to add modifications to the 3D model.
  • How to create a load sensor plugin for gazebo.
  • How to set up navigation, create a map and save table waypoints.

In summary, we will learn how to create a simulation of a robot that serves in a restaurant.

List of resources used in this post

 

Open the ROSject

In order to see the simulation, you have to click on the ROSject link (http://www.rosject.io/l/ded2912/) to get a copy of the ROSject. You can now open the ROSject by clicking on the Open ROSject button.

Open ROSject – Barista Robot

 

Launching the simulation

With the ROSject open, you can launch the simulation by clicking Simulations, then Choose Launch File, and then selecting the main_simple.launch file from the barista_gazebo package.

If everything went well, you should now have the simulation running, like in the image below:

Barista Robot in ROSDS

 

The launch file we just selected can be found at ~/simulation_ws/src/barista/barista_gazebo/launch/main_simple.launch and has the following content:

<?xml version="1.0" encoding="UTF-8"?>
<launch>
  <arg name="pause" default="false"/>
  <arg name="x" default="0"/>
  <arg name="y" default="0"/>
  <arg name="z" default="0.0"/>
  <arg name="roll" default="0"/>
  <arg name="pitch" default="0"/>
  <arg name="yaw" default="0" />

  <include file="$(find barista_worlds)/launch/start_world_simple10x10.launch">
    <arg name="pause" value="$(arg pause)" />

    <arg name="put_robot_in_world" value="true" />
    <arg name="put_robot_in_world_package" value="barista_gazebo" />
    <arg name="put_robot_in_world_launch" value="put_robot_in_world.launch" />

    <arg name="x" value="$(arg x)" />
    <arg name="y" value="$(arg y)" />
    <arg name="z" value="$(arg z)" />
    <arg name="roll" value="$(arg roll)"/>
    <arg name="pitch" value="$(arg pitch)"/>
    <arg name="yaw" value="$(arg yaw)"/>
  </include>
  
  <include file="$(find barista_description)/launch/rviz_localization.launch"/>


</launch>

As we can see in the launch file, it spawns the robot and opens RViz (ROS Visualization). You can see RViz by opening the Graphical Tools (Tools -> Graphical Tools).

Barista robot in RViz, in ROSDS

Where is the Barista robot defined?

If you look at the launch file mentioned earlier, you can see that we use the put_robot_in_world.launch file to spawn the robot. The content of the file is:

<launch>
  <arg name="base" default="barista"/>
  <arg name="stacks" default="hexagons"/>
  <arg name="3d_sensor" default="asus_xtion_pro"/>

  <arg name="x" default="0"/>
  <arg name="y" default="0"/>
  <arg name="z" default="0.5"/>
  <arg name="roll" default="0"/>
  <arg name="pitch" default="0"/>
  <arg name="yaw" default="0" />
  
  <arg name="urdf_file" default="$(find xacro)/xacro.py '$(find barista_description)/urdf/barista/$(arg base)_$(arg stacks)_$(arg 3d_sensor).urdf.xacro'" />
  <param name="robot_description" command="$(arg urdf_file)" />

  <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher">
    <param name="publish_frequency" type="double" value="30.0" />
  </node>

  <!-- Gazebo model spawner -->
  <node name="spawn_$(arg base)_model" pkg="gazebo_ros" type="spawn_model"
        args="-urdf -param robot_description -model $(arg base) -x $(arg x) -y $(arg y) -z $(arg z) -R $(arg roll) -P $(arg pitch) -Y $(arg yaw)"/>

  <!-- Diagnostics publication to simulate the kobuki mobile base -->
  <node name="sim_diagnostics_pub_node" pkg="barista_gazebo" type="sim_diagnostics_pub.py"
        respawn="false" output="screen">
  </node>
  <!-- Bumper/cliff to pointcloud (not working, as it needs sensors/core messages) -->
  <!-- include file="$(find turtlebot_bringup)/launch/includes/kobuki/bumper2pc.launch.xml"/-->
</launch>

We are not going to explain this file in detail because it would take too long. If you need a deeper understanding of URDF files, we highly recommend the URDF for Robot Modeling course on Robot Ignite Academy.

In this file, we essentially load the barista_hexagons_asus_xtion_pro.urdf.xacro file located in the ~/simulation_ws/src/barista/barista_description/urdf/barista/ folder.

At the end of that file we can see that we basically spawn the robot and load the sensors:

<?xml version="1.0"?>
<!--
    - Base      : barista
    - Stacks    : hexagons
    - 3d Sensor : laser-hokuyo
-->    
<robot name="barista" xmlns:xacro="http://ros.org/wiki/xacro">

  <xacro:include filename="$(find turtlebot_description)/urdf/common_properties.urdf.xacro"/>
  <xacro:include filename="$(find turtlebot_description)/urdf/turtlebot_properties.urdf.xacro"/>
  <!-- Bases -->
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_kobuki.urdf.xacro" />
  <!-- Stacks -->
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_hexagons.urdf.xacro"/>
  <!-- 3D Sensors -->
  <!-- Barista Mods -->
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_mod.urdf.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_hokuyo.urdf.xacro" />

  <!-- Load Sensor and Common Macros and properties -->
  <xacro:include filename="$(find barista_description)/urdf/barista/macros.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/properties.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/materials.xacro" />
  <xacro:include filename="$(find barista_description)/urdf/barista/barista_loadsensor.xacro" />
  
  <barista_kobuki/>
  <stack_hexagons parent="base_link"/>
  <barista_mod  bottom_parent="plate_middle_link" top_parent="plate_top_link"/>
  <barista_hokuyo  parent="plate_middle_link" x_hok="0.116647" y_hok="0.0" z_hok="0.045"/>

  <barista_loadsensor  parent="barista_top_link"
                       x_loadsensor="0.014395"
                       y_loadsensor="0.0"
                       z_loadsensor="${0.082804+(loadsensor_height/2.0)}"
                       r="${loadsensor_radius}"
                       l="${loadsensor_height}"
                       mass="${loadsensor_mass}"/>
  
</robot>

The barista_loadsensor macro is defined in the barista_loadsensor.xacro file of the barista_description package, and has the following content:

<?xml version="1.0" ?>
<!-- 
  This is the load sensor used for detecting elements positioned on top
 -->
<robot name="loadsensor" xmlns:xacro="http://ros.org/wiki/xacro">


  <xacro:macro name="barista_loadsensor" params="parent x_loadsensor y_loadsensor z_loadsensor r l mass">
    
    <link name="loadsensor_link">
	    <inertial>
		    <mass value="${mass}"/>
		    <origin xyz="0 0 0" rpy="0 0 0"/>
		    <cylinder_inertia mass="${mass}" r="${r}" l="${l}" />
	    </inertial>
	    <collision>
		    <origin xyz="0 0 0" rpy="0 0 0"/>
		    <geometry>
			    <cylinder length="${l}" radius="${r}"/>
		    </geometry>
	    </collision>
	    <visual>
		    <origin xyz="0 0 0" rpy="0 0 0"/>
		    <geometry>
			    <cylinder length="${l}" radius="${r}"/>
		    </geometry>
			<material name="red"/>
	    </visual>
    </link>

    <joint name="loadsensor_joint" type="fixed">
    	<origin xyz="${x_loadsensor} ${y_loadsensor} ${z_loadsensor}" rpy="0 0 0"/>
    	<parent link="${parent}"/>
    	<child link="loadsensor_link"/>
    </joint>
    

    <gazebo reference="loadsensor_link">
      <mu1 value="2000.0"/>
      <mu2 value="1000.0"/>
      <kp value="${kp}" />
      <kd value="${kd}" />
		<material>Gazebo/Red</material>
      <sensor name="loadsensor_link_contactsensor_sensor" type="contact">
        <always_on>true</always_on>
        <contact>
          <collision>base_footprint_fixed_joint_lump__loadsensor_link_collision_10</collision>
        </contact>
        <plugin name="loadsensor_link_plugin" filename="libgazebo_ros_bumper.so">
          <bumperTopicName>loadsensor_link_contactsensor_state</bumperTopicName>
          <frameName>loadsensor_link</frameName>
        </plugin>
      </sensor>
  	</gazebo>
  </xacro:macro>
  
  
  
</robot>

An important part of this file is the section that contains:

<plugin name="loadsensor_link_plugin" filename="libgazebo_ros_bumper.so">
  <bumperTopicName>loadsensor_link_contactsensor_state</bumperTopicName>
  <frameName>loadsensor_link</frameName>
</plugin>

The libgazebo_ros_bumper.so plugin configures everything related to the contacts (collisions) of the load sensor.

You can see that it sets the topic /loadsensor_link_contactsensor_state to publish the data related to the load sensor.

You can see the data being published with:

rostopic echo /loadsensor_link_contactsensor_state

As you will see, this topic alone publishes a lot of data.
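If you only need a simple “is there something on the tray?” signal, a small node can reduce that stream to a boolean. Here is a minimal sketch (not part of the ROSject), assuming the topic and message type set up by the gazebo_ros_bumper plugin above (gazebo_msgs/ContactsState):

#! /usr/bin/env python
# Minimal sketch: reduce the verbose contact stream to a simple load flag.
# Assumes the gazebo_ros_bumper plugin publishes gazebo_msgs/ContactsState
# on /loadsensor_link_contactsensor_state, as configured in the xacro above.
import rospy
from gazebo_msgs.msg import ContactsState

def contacts_callback(msg):
    # The "states" list is empty when nothing is touching the load sensor
    loaded = len(msg.states) > 0
    rospy.loginfo("Load detected: %s", loaded)

rospy.init_node('load_sensor_monitor')
rospy.Subscriber('/loadsensor_link_contactsensor_state', ContactsState, contacts_callback)
rospy.spin()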

Since we have to show data in the web app that we are going to create, we use the /diagnostics topic to simulate some elements, such as the battery, in the format that the real robot uses.

Showing the Battery status

In the barista_gazebo package, we created the sim_pick_go_return_demo.launch file. There you can see that we launch the battery_charge_pub.py node, which subscribes to the /diagnostics topic and publishes basic battery status information on the /battery_charge topic.

Let’s launch that file to start the main systems:

roslaunch barista_gazebo sim_pick_go_return_demo.launch

If you open a new shell, and type the command below, you should get the battery load:

rostopic echo /battery_charge

Bear in mind that in this case, the value is simulated, given that we are still not using a real robot.
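For reference, a node like battery_charge_pub.py could look roughly like the sketch below. This is a hedged reconstruction, not the actual file: the std_msgs/Float32 message type and the "battery" diagnostics key are assumptions.

#! /usr/bin/env python
# Rough sketch of a /diagnostics -> /battery_charge bridge (hypothetical
# reconstruction; the real battery_charge_pub.py may differ).
import rospy
from diagnostic_msgs.msg import DiagnosticArray
from std_msgs.msg import Float32

pub = rospy.Publisher('/battery_charge', Float32, queue_size=1)

def diagnostics_callback(msg):
    for status in msg.status:
        for kv in status.values:
            if 'battery' in kv.key.lower():  # hypothetical key name
                pub.publish(Float32(data=float(kv.value)))

rospy.init_node('battery_charge_pub')
rospy.Subscriber('/diagnostics', DiagnosticArray, diagnostics_callback)
rospy.spin()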

In order to test that the load sensor works, the easiest way is to check the video we created: https://youtu.be/Uv9yF5zifz4?t=1348.

ROS Navigation

If you look at the sim_pick_go_return_demo.launch file we mentioned earlier, you can see that it launches the navigation system, more specifically in the sections shown below:

<!-- Start the navigation systems -->
<include file="$(find costa_robot)/launch/localization_demo.launch">
  <arg name="map_name" value="simple10x10"/>
  <arg name="real_barista" value="false"/>
</include>

and

<!-- We start the waypoint generator to be able to reset tables on the fly -->
<node pkg="ros_waypoint_generator"
      type="ros_waypoint_generator"
      name="ros_waypoint_generator_node">
</node>

To start creating the map, please consider reloading the simulation (Simulations -> Change Simulation -> Choose Launch file -> main_simple.launch).

The files related to mapping can be found in the costa_robot package. There you can find the following files:

  • gmapping_demo.launch
  • localization_demo.launch
  • save_map.launch
  • start_mapserver.launch

To create a map, after making sure the simulation is already running, we run the following:

roslaunch costa_robot gmapping_demo.launch

Now we have the laser scans we need to create the map. To see the lasers, please check RViz using Tools -> Graphical Tools.

You can now move the robot around by running the command below in a new shell (Tools -> Shell), to generate the map:

roslaunch turtlebot_teleop sim_keyboard_teleop.launch

While the robot moves around, the map is being generated.

Then, to save the generated map, we run:

roslaunch costa_robot save_map.launch map_name:="new_map"

The new map should have been generated at ~/simulation_ws/src/barista/costa_robot/map/generated/new_map.pgm.

Now that we have the map, let’s generate the Waypoints. For that, we first launch the localization using the generated map.

roslaunch costa_robot localization_demo.launch map_name:="new_map" real_barista:="false"

Now we generate the waypoints:

roslaunch barista_gazebo start_save_waypoints_system.launch map_name:="new_map" real_barista:="false"

In order to save a waypoint, we click Publish Point in RViz, click where the robot is, and then run:

rosservice call /waypoints_save_server "name: 'HOME'"

This saves the current location of the robot as HOME. We can now move the robot around and save the new location. To move the robot around, remember that we use:

roslaunch turtlebot_teleop sim_keyboard_teleop.launch

Now we click Publish Point in RViz again, click where the robot is located, and save the waypoint as TABLE1, for example:

rosservice call /waypoints_save_server "name: 'TABLE1'"

Great! So far, we know how to create the map, start the localization, and save the waypoints.
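With those pieces in place, a delivery script could send the robot to a saved table through the move_base action server. The sketch below is illustrative only: the coordinates are placeholders for the values stored with the waypoints, and the standard action name (move_base) is an assumption about this setup.

#! /usr/bin/env python
# Illustrative sketch: drive the robot to a table pose via move_base.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('go_to_table')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0  # placeholder for the saved TABLE1 pose
goal.target_pose.pose.position.y = 1.5
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Navigation finished with state: %s", client.get_state())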

Youtube video

It may happen that you couldn’t reproduce some of the steps mentioned here. If that is the case, remember that we have the live version of this post on YouTube. Also, if you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

How to Install ROS 2 Crystal over Ubuntu 18.04 and ROS Melodic

Hey ROS Developers!

In this post, we are going, step by step, through installing ROS 2 Crystal on a fresh Ubuntu 18.04.

I’m using a regular desktop computer, the one I use for my day-to-day ROS development.

This tutorial is a report of my personal experience as a ROS developer.

All the steps were taken from the official tutorial: https://index.ros.org/doc/ros2/Installation/Crystal/Linux-Install-Binary

Let’s do it!

Step 1 – Configuration

The first step is adding the ROS repository to your Ubuntu sources.list.

user:~$ sudo apt update && sudo apt install curl gnupg2 lsb-release
(multiple lines from apt update)...
user:~$ curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
OK
user:~$ sudo sh -c 'echo "deb [arch=amd64,arm64] http://packages.ros.org/ros2/ubuntu `lsb_release -cs` main" > /etc/apt/sources.list.d/ros2-latest.list'

Great!

Step 2 – Download

A bit different from ROS 1 (Melodic): we don’t install using sudo apt; instead, we download a release archive.

It was taken from: https://github.com/ros2/ros2/releases/tag/release-crystal-20191212

In my case, for an Ubuntu 18.04 desktop, I have chosen the amd64 Linux archive (ros2-crystal-20191212-linux-xenial-amd64.tar.bz2, as extracted below).

Save it into your ~/Downloads folder.

Now, let’s create a new folder to keep the files:

user:~$ mkdir ~/ros2_crystal
user:~$ cd ~/ros2_crystal

Extract the files there:

user:~/ros2_crystal$ tar xf ros2-crystal-20191212-linux-xenial-amd64.tar.bz2
Installing and initializing rosdep

Let’s update Ubuntu’s package index once more:

user:~$ sudo apt update

We need to install the python-rosdep package, which is in charge of managing ROS dependencies:

user:~$ sudo apt install -y python-rosdep

Then we initialize rosdep and update it:

user:~$ sudo rosdep init
user:~$ rosdep update

Finally, we install some dependencies for this specific version:

user:~$ cd ~/ros2_crystal
user:~/ros2_crystal$ source /opt/ros/melodic/setup.bash
user:~/ros2_crystal$ CHOOSE_ROS_DISTRO=crystal
user:~/ros2_crystal$ ROS_PYTHON_VERSION=3
user:~/ros2_crystal$ rosdep install --from-paths ros2-linux/share --ignore-src --rosdistro $CHOOSE_ROS_DISTRO -y --skip-keys "console_bridge fastcdr fastrtps libopensplice67 libopensplice69 osrf_testing_tools_cpp poco_vendor rmw_connext_cpp rosidl_typesupport_connext_c rosidl_typesupport_connext_cpp rti-connext-dds-5.3.1 tinyxml_vendor tinyxml2_vendor urdfdom urdfdom_headers"

 

Installing the Autocomplete

In the case of this tutorial, we are working with Ubuntu 18.04 and ROS 1 Melodic, so the autocomplete installation goes like this:

user:~$ sudo apt install python3-pip
user:~$ sudo pip3 install argcomplete

 

Trying it!

Finally, let’s try some examples!

The installation already provides some demo nodes ready to be used.

Open a shell, let’s call it Shell 1:

user:~$ . ~/ros2_crystal/ros2-linux/setup.bash
user:~$ ros2 run demo_nodes_cpp talker

You should see the following output:

[INFO] [talker]: Publishing: 'Hello World: 1'
[INFO] [talker]: Publishing: 'Hello World: 2'
[INFO] [talker]: Publishing: 'Hello World: 3'
[INFO] [talker]: Publishing: 'Hello World: 4'
[INFO] [talker]: Publishing: 'Hello World: 5'
[INFO] [talker]: Publishing: 'Hello World: 6'
[INFO] [talker]: Publishing: 'Hello World: 7'
[INFO] [talker]: Publishing: 'Hello World: 8'
[INFO] [talker]: Publishing: 'Hello World: 9'

And in a second shell, Shell 2:

user:~$ . ~/ros2_crystal/ros2-linux/setup.bash
user:~$ ros2 run demo_nodes_py listener

The output should be something like:

[INFO] [listener]: I heard: [Hello World: 8]
[INFO] [listener]: I heard: [Hello World: 9]
[INFO] [listener]: I heard: [Hello World: 10]
[INFO] [listener]: I heard: [Hello World: 11]
[INFO] [listener]: I heard: [Hello World: 12]
[INFO] [listener]: I heard: [Hello World: 13]
[INFO] [listener]: I heard: [Hello World: 14]
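If you want to go one step further than the prebuilt demos, here is a minimal rclpy talker sketch of my own, written against the Crystal-era API (where create_publisher still accepts a default QoS). Run it with python3 in a shell where the ROS 2 setup.bash has been sourced:

# my_talker.py - minimal rclpy publisher sketch (Crystal-era API)
import rclpy
from std_msgs.msg import String

def main():
    rclpy.init()
    node = rclpy.create_node('my_talker')
    pub = node.create_publisher(String, 'chatter')  # default QoS in Crystal
    msg = String()
    count = [0]

    def timer_callback():
        count[0] += 1
        msg.data = 'Hello World: %d' % count[0]
        node.get_logger().info('Publishing: "%s"' % msg.data)
        pub.publish(msg)

    node.create_timer(0.5, timer_callback)  # publish twice per second
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

With the demo listener from Shell 2 still running, you should see it hearing these messages as well.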

Conclusion

Finally, this is the basic installation process, achieved with the following environment:

  • Ubuntu 18.04
  • ROS 1 Melodic
  • ROS 2 Crystal

If you like this kind of post, don’t forget to share it.

Leave a comment so we can take your feedback into account to improve our blog posts.

Cheers!

 

Jupyter ROS: embedding ROS Data Visualization on Jupyter notebooks

What we are going to learn

Learn how to embed ROS data visualization inside a Jupyter notebook.

List of resources used in this post

Opening the ROSject

In today’s post, we are going to learn how to see ROS Data inside a Jupyter Notebook using the Jupyter ROS package. Let’s start by clicking on the ROSject link (http://www.rosject.io/l/d4b0981/). You should now have a copy of the ROSject.

Now, let’s open the ROSject by clicking the Open ROSject button.

Open ROSject – Jupyter ROS Demo on ROSDS

When you open the ROSject, a default notebook will automatically open with the instructions on how to show ROS data in Jupyter.

The demos are possible thanks to the Jupyter-ROS project developed by Wolf Vollprecht from Quantstack. All credit goes to him and his team.

What follows is the list of ROS-enabled notebook demos that work off-the-shelf using the ROSDS integrated simulations. The original notebook demos were created by Wolf. At The Construct, we have only added the explanations required to understand the code and launch the simulations, so you can have a live demo without having to install and configure your computer.

The ROSject provides everything already installed and configured. Hence, you don’t need to install Jupyter ROS, you only need to learn how to use it… and then use it! (for instructions about how to install on your own computer, please check the author’s repo)

You can freely use all the material from the ROSject to create your own notebooks with ROS.

List of demos

When you open the notebook, the demos available are:

  • ROS 3D Grid: how to include an interactive 3D grid inside a notebook.
  • ROS Robot: how to add the robot model to the 3D grid.
  • ROS Laser Scan: how to show the robot laser on the 3D grid.
  • ROS TEB Demo: how to add interactive markers to the 3D viewer.

Where to find the notebooks inside the ROSject

All the notebook files are contained in the ROSject.

In order to see the actual files, use the IDE (top menu, Tools->IDE). Then navigate through the folders up to notebook_ws->notebooks.

Jupyter ROS – List of files in ROSDS

 

How to modify the provided notebooks directly in ROSDS

Use the Jupyter notebook tool to edit the notebooks included in this ROSject (or to create new ones). In case you close the notebook, you can re-open it by going to the top menu and selecting Tools->Jupyter Notebook.

Once you have the notebook open, edit at will. Check the Jupyter documentation to understand all the possible options.

ROS Laser Scan demo

In the notebooks of the ROSject, we have four demos, but in this post we are only going to walk through the Laser Scan demo, to keep things simple.

In order to get this working, we are going to use the simulation of a Turtlebot.

Remember that you have to have the ROS Bridge running in order for Jupyter ROS to work. If you haven’t started it yet, launch it now.

  1. Go to the ROSDS tab in your browser.
  2. On the top menu, select Tools->Shell.
  3. In the opened shell, type the following command:

roslaunch ~/notebook_ws/notebooks/launch/bridge.launch --screen

Launching the simulation

We’ll need something publishing the URDF and the TF. For that, we are going to use a Gazebo simulation. Let’s start a Turtlebot 2 simulation.

  1. Go to the ROSDS tab and go to its top menu.
  2. Select Simulations.
  3. On the panel that appears, click on the label that says Select world.
  4. Then, on the drop-down menu that appears, move down until you see the AR Drone world.
  5. On the panel that appears, click on the label that says Select robot.
  6. Then, on the drop-down menu that appears, move down until you see the Turtlebot 2.
  7. Click on it and then press Start simulation.
Turtlebot in a green area in ROSDS

 

A new window should appear and load the simulation. Once loaded, you should see something similar to this:

Turtlebot opened in a green area in ROSDS

First, start the demo

First, import the required classes from Jupyter ROS.

Click on the next cell and then press Shift+Enter to run the Python code. IMPORTANT: importing these classes can take some time! You will know that the code is still running because the number on the left of the cell has changed to a * character. Do not move to the next step until the * is gone.

try:
    from sidecar import Sidecar
except:
    pass
    
from jupyros import ros3d

Second, create an instance of the viewer, connect to ROS and get the TF

Click on the next cell and then press Shift+Enter to run the Python code.

v = ros3d.Viewer()
rc = ros3d.ROSConnection()
tf_client = ros3d.TFClient(ros=rc, fixed_frame='/base_footprint')

Third, connect to the laser topic

Click on the next cell and then press Shift+Enter to run the Python code.

laser_view = ros3d.LaserScan(topic="/kobuki/laser/scan", ros=rc, tf_client=tf_client)

Fourth, get the 3D grid and the robot model

Click on the next cell and then press Shift+Enter to run the Python code.

g = ros3d.GridModel()
import os
urdf = ros3d.URDFModel(ros=rc, tf_client=tf_client, path=os.environ.get('JUPYROS_ASSETS_URL', 'http://localhost:3000'))
v.objects = [g, laser_view, urdf]

Fifth, visualize everything

Click on the next cell and then press Shift+Enter to run the Python code.

try:
    sc = Sidecar(title='Sidecar Output')
    with sc:
        display(v)
except:
    display(v)

Sixth, change the visualization style (if required)

Click on the next cell and then press Shift+Enter to run the Python code.

v.layout.height= '1000px'
g.color = '#CCC'
laser_view.point_size = 0.1
laser_view.static_color = "green"

Move the robot around and watch the laser change!

Open a new shell (Tools -> Shell) and type the following command to make the robot move around:

rostopic pub /cmd_vel geometry_msgs/Twist "linear:
  x: 0.2
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.3"

That is it. If you did everything correctly, you should see the robot moving around in the Jupyter notebook.
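As an alternative to the rostopic pub command, the same motion can be commanded from Python (even from a notebook cell) with a small rospy publisher. A sketch, assuming the same /cmd_vel topic used above:

# Sketch: publish velocity commands to /cmd_vel from Python.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('notebook_mover', anonymous=True)
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

twist = Twist()
twist.linear.x = 0.2
twist.angular.z = 0.3

rate = rospy.Rate(10)
for _ in range(100):  # publish for roughly 10 seconds
    pub.publish(twist)
    rate.sleep()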

Learn more about Jupyter ROS

If you want to learn more about Jupyter ROS, here is the interview we did with Wolf for the ROS Developers Podcast, where he explains the ins and outs of Jupyter ROS.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

Overlaying ROS Workspaces

Overlaying workspaces is one of the critical topics of ROS, something that can be confusing even for those who have been working with ROS for some time. In this post, we are going to clarify how it works and how to manage it.


 

ROSDS Initial environment

As usual, we are going to work with ROSDS, the ROS Development Studio. When you create a new ROSject, you get two workspaces of your own in the environment from scratch, plus one more designed to store the ROSDS public simulations, and the pre-installed ROS packages. Let’s check this out!

Open a web shell and type the following command:

user:~$ echo $ROS_PACKAGE_PATH

You should see something like:

/home/user/catkin_ws/src:/home/user/simulation_ws/src:/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

The workspaces are separated by “:”. Let’s check them one by one, following the order (which is very important!):

  1. /home/user/catkin_ws/src
  2. /home/user/simulation_ws/src
  3. /home/simulations/public_sim_ws/src
  4. /opt/ros/kinetic/share

The order was pre-defined by The Construct engineering team to make it suitable for working with both public simulations and custom simulations from the user. The order means:

ROS commands are going to look for packages starting from the workspace /home/user/catkin_ws/src, then /home/user/simulation_ws/src, and so on. Remember, you canNOT have two packages with the same name in the same workspace, but you CAN have packages with the same name in different workspaces! If the same package exists in two or more workspaces, the first one found will be used.

So, if you want to overwrite a simulation from /home/simulations/public_sim_ws/src, you can do so just by creating/cloning the package with the same name in /home/user/simulation_ws/src.
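A quick way to check which copy of a package “wins” is to ask rospkg from Python, since it resolves packages along the same ordered path. A small sketch (the package name here is just an example):

# Sketch: see which workspace a package is resolved from.
import rospkg

rospack = rospkg.RosPack()
# get_path() returns the first match along $ROS_PACKAGE_PATH
print(rospack.get_path('turtlebot_gazebo'))  # example package name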


 

Re-defining the $ROS_PACKAGE_PATH

“What if I want to add a new workspace to $ROS_PACKAGE_PATH? This is an ENVIRONMENT VARIABLE, so is it just a matter of exporting it the way I want?” WRONG!

The ENVIRONMENT VARIABLE is generated by the devel/setup.bash file of each workspace. This means the variable is just one of the results of sourcing a workspace!

If you need to re-define your $ROS_PACKAGE_PATH, you need to do it in a safe, correct way, so to speak. It goes as described below:

  1. Source the installation path of ROS: source /opt/ros/kinetic/setup.bash
  2. Recompile the workspace you want right on top of the installation folder: cd ~/my_ros_workspace && catkin_make
  3. Source its devel/setup.bash: source devel/setup.bash

At this moment, your $ROS_PACKAGE_PATH should be:

/home/user/my_ros_workspace/src:/opt/ros/kinetic/share

You now have only one workspace defined, on top of the ROS installation.


 

Re-defining the $ROS_PACKAGE_PATH – Practical example

Let’s try it in ROSDS. First, I will create a new workspace:

user:~$ mkdir -p my_ros_workspace/src
user:~$ cd my_ros_workspace/src
user:~/my_ros_workspace/src$ catkin_init_workspace
user:~/my_ros_workspace/src$ cd ..
user:~/my_ros_workspace$ source /opt/ros/kinetic/setup.bash
user:~/my_ros_workspace$ catkin_make

Let’s source the devel/setup.bash of the new workspace and check our $ROS_PACKAGE_PATH:

user:~/my_ros_workspace$ source devel/setup.bash
user:~/my_ros_workspace$ echo $ROS_PACKAGE_PATH

And the result should be:

/home/user/my_ros_workspace/src:/opt/ros/kinetic/share

Even though we have many other workspaces defined, our $ROS_PACKAGE_PATH considers only one workspace! That’s because we have sourced just this workspace’s devel/setup.bash.


 

Overlaying workspaces

Now, let’s do something more advanced. We want to have more workspaces in our $ROS_PACKAGE_PATH. But let’s check something first:

Execute the following:

user:~$ source catkin_ws/devel/setup.bash
user:~$ echo $ROS_PACKAGE_PATH

You should see:

/home/user/catkin_ws/src:/home/user/simulation_ws/src:/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

You still have the previous $ROS_PACKAGE_PATH defined, but only for catkin_ws. The conclusion is that each workspace’s setup.bash defines its own $ROS_PACKAGE_PATH.

Now, we are going to add public_sim_ws to our new workspace. This is the way to make the public simulations provided by The Construct available to your new workspace.

user:~$ source /home/simulations/public_sim_ws/devel/setup.bash
user:~$ echo $ROS_PACKAGE_PATH

You should have:

/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

This means ROS is only considering public_sim_ws. Let’s put our new workspace on top of it.

user:~$ cd my_ros_workspace/
user:~/my_ros_workspace$ rm -rf build devel
user:~/my_ros_workspace$ catkin_make
user:~/my_ros_workspace$ source devel/setup.bash

And you should have a completely new $ROS_PACKAGE_PATH:

/home/user/my_ros_workspace/src:/home/simulations/public_sim_ws/src:/opt/ros/kinetic/share

Conclusion

This is how you can work with multiple workspaces. Bear in mind that ROS defines the priority rules, but you are the one in charge of configuring the order of your workspaces!

Don’t forget to check the official reference: http://wiki.ros.org/catkin/Tutorials/workspace_overlaying

And if you like this kind of post, don’t forget to share it.

Leave a comment for us to know your personal feedback!

Cheers!

ROS Mini Challenge #9 – Fusing data to improve robot localization with ROS

What we are going to learn

Learn how to improve robot localization with data from different sensors

List of resources used in this post

Where to find the code

Once you open the ROSject (buff.ly/2RifSjn), you will get a copy of it. You just have to click Open. Once open, inside the catkin_ws folder you will find all the ROS packages associated with this challenge. You can have a look at them using the IDE, for instance.

To see the full code, open the IDE by going to the top menu and selecting Tools->IDE.

Code Editor (IDE) – ROSDS

Launching the simulation

Go to the top menu and select Simulations. On the menu that appears, click on the Choose launch file… button.
Choose launch file to open simulation in ROSDS

 

Now, from the launch files list, select the launch file named rotw9.launch from the sumit_xl_course_basics package.

ROS Mini Challenge #9 – Select launch file

Finally, click on the Launch button to start the simulation. In the end, you should get something like this:
Summit XL gazebo simulation in ROSDS, ROSject of the week

 

The problem to be solved

As you can see, this ROSject contains one package inside its catkin_ws workspace: rotw9_pkg. This package contains a launch file and a configuration file, which are used to start an EKF localization node. This node is provided by the robot_localization package, and its main purpose is to fuse different sensor data inputs in order to improve the localization of a robot.

In our case, we are going to fuse Odometry data (which has been tweaked) with IMU data (which is correct). So, the main purpose of this challenge is to improve the Summit XL odometry data, since the initial one is a little bit distorted.

In order to test that it works as expected, first of all, we need to know the odometry topics. For this, we can use the following command:

rostopic list | grep odom

This command will give us the following topics:

/noisy_odom
/robotnik_base_control/odom

For this challenge, we are going to use the /noisy_odom topic, which is basically the original odometry data with some extra noise added to it.

You can have a look at the current odometry using RViz (rosrun rviz rviz) and adding an Odometry display. You should get something similar to this:
Noisy Odometry example in ROSDS

As you can see, the Odometry readings are not very stable.
So now let’s start our node in order to correct the odometry readings with the following command:

roslaunch rotw9_pkg start_ekf_localization.launch

This command will generate a new Odometry topic, named /odometry/filtered, which will contain the resulting Odometry data (fusing the /noisy_odom data with the IMU data).
If you visualize this new Odometry, you will get something like this. As you can see, the new Odometry is much more stable than the original one. Great!
Correct (expected) Odometry data in ROSDS

NOTE: In order to properly visualize the differences between the 2 Odometry readings, you should modify the arrow size and color on the different displays.
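If you prefer numbers over arrows, you can also compare the two odometry streams directly from Python. A minimal sketch (topic names as listed above):

#! /usr/bin/env python
# Sketch: print the x/y position from both odometry topics side by side.
import rospy
from nav_msgs.msg import Odometry

def make_callback(label):
    def callback(msg):
        p = msg.pose.pose.position
        rospy.loginfo("%s: x=%.3f y=%.3f", label, p.x, p.y)
    return callback

rospy.init_node('odom_compare')
rospy.Subscriber('/noisy_odom', Odometry, make_callback('noisy'))
rospy.Subscriber('/odometry/filtered', Odometry, make_callback('filtered'))
rospy.spin()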

Solving the ROS Mini Challenge

Ok, so… where’s the problem? If you have tried to reproduce the steps described above, you have already seen that it DOES NOT WORK: the /odometry/filtered topic is not shown as expected in RViz (the red arrow).

Let’s start by having a look at the files inside the rotw9_pkg package, to figure out where the error (or errors) are.

If we look at the start_ekf_localization.launch file, everything seems fine:

<launch>

  <!-- Run the EKF Localization node -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization">
    <rosparam command="load" file="$(find rotw9_pkg)/config/ekf_localization.yaml"/>
  </node>

</launch>

Since the launch file loads the rotw9_pkg/config/ekf_localization.yaml file, let’s have a look at it:

#Configuration for robot odometry EKF
#
frequency: 50

two_d_mode: true

publish_tf: false

odom_frame: odom
base_link_frame: base_link
world_frame: odom

odom0: /noisy_odom
odom0_config: [true, true, false,
               false, false, true,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: false

imu0: /imu/data
imu0_config: [false, false, false,
              false, false, true,
              false, false, false,
              false, false, true,
              true, false, false]
imu0_differential: false

process_noise_covariance: [0.05, 0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0.05, 0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0.06, 0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0.03, 0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0.03, 0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0.06, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0.025, 0,     0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0.025, 0,    0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0.04, 0,    0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0.01, 0,    0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0.01, 0,    0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0.02, 0,    0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0.01, 0,    0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0.01, 0,
                           0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0.015]

initial_estimate_covariance: [1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                              0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                              0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                              0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                              0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                              0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,
                              0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,
                              0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,
                              0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,
                              0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,
                              0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,
                              0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,
                              0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,
                              0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,
                              0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9]

We are not going to explain the configuration in the YAML file in depth; otherwise, the post would be really long. So, in case you want to understand it better, we highly recommend the Fuse Sensor Data to Improve Localization course in Robot Ignite Academy (www.robotigniteacademy.com/en/).

Ok, let’s go straight to the point.

In the YAML file above, the parameters we are going to focus on are:

odom_frame: odom
base_link_frame: base_link
world_frame: odom

We see that there are frames named odom and base_link. We have to make sure these frames exist in the simulation. Let’s check by generating a TF tree with the commands below:

cd ~/catkin_ws/src
rosrun tf view_frames

The commands above generate a file named frames.pdf that contains the full TF Tree of the simulation, with all the connections, the frame names, etc.

You can easily download the frames.pdf using the IDE (Code Editor) by selecting it and clicking Download.
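As a quick alternative to generating the PDF, the frame names can also be listed from Python with the tf listener. A sketch (we will stick with frames.pdf below):

#! /usr/bin/env python
# Sketch: print all TF frame names currently being broadcast.
import rospy
import tf

rospy.init_node('frame_lister')
listener = tf.TransformListener()
rospy.sleep(2.0)  # give the listener time to fill its buffer
print(listener.getFrameStrings())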

After opening that file, we can see the names of the frames:

TF Tree for Summit XL Simulation

As we can see, the odom frame is actually named summit_xl_a_odom, and the base_link frame is named summit_xl_a_base_link. Let’s then fix the YAML file with the correct values and save it:

odom_frame: summit_xl_a_odom
base_link_frame: summit_xl_a_base_link
world_frame: summit_xl_a_odom

Now we can try to launch our localization node again:

roslaunch rotw9_pkg start_ekf_localization.launch

Unfortunately, the red arrow in RViz (topic /odometry/filtered) is still not as we expect.

Let’s now check the odom0_config settings in the YAML file:

odom0: /noisy_odom
odom0_config: [true, true, false,
               false, false, true,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: false

The error is actually in that matrix.

The first row is for the X, Y and Z positions. The second is for ROLL, PITCH and YAW. The third is for the linear velocities, the fourth for the angular velocities, and the final row is for the linear accelerations.

The problem is that we are fusing the X and Y positions from the noisy odometry, which is exactly where the noise was added. More details on this can be found in the documentation of the robot_localization package. If we just set those values to false, we have the following:

odom0: /noisy_odom
odom0_config: [false, false, false,
               false, false, true,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: false

If we now try to launch our localization node again, we should see the correct values in RViz:

roslaunch rotw9_pkg start_ekf_localization.launch

Remember that if you need a deeper understanding on fusing sensor data, the Fuse Sensor Data to Improve Localization course in Robot Ignite Academy (www.robotigniteacademy.com/en/) can help you.

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.

 

ROS Mini Challenge #8 – Extracting the highest value from laser readings

What we are going to learn

Learn how to read data from a laser topic and extract the highest value.

List of resources used in this post

 

Where to find the code

Once you open the ROSject (http://buff.ly/2PU4ucB), you will get a copy of it. You just have to click Open. Once open, inside the catkin_ws folder you will find all the ROS packages associated with this challenge. You can have a look at them using the IDE, for instance.

To see the full code, open the IDE by going to the top menu and selecting Tools->IDE.

Code Editor (IDE) – ROSDS

Launching the simulation

Go to the top menu and select Simulations. On the menu that appears, click on the Choose launch file… button.

Choose launch file to open simulation in ROSDS

Now, from the launch files list, select the launch file named rotw5.launch from the rosbot_gazebo package.

 

Finally, click on the Launch button to start the simulation. In the end, you should get something like this:

Turtlebot robot inside a wall in ROSDS

 

The problem to be solved

As you can see, this ROSject contains one package inside its catkin_ws workspace: rotw8_pkg. This package contains a very simple Python script, which defines a Python class named Challenge, with some functions in it.

The main purpose of this script is to get the highest value from all the laser readings, and also the position in the array of this particular reading.

In order to test that it works as expected, all you have to do is the following:

First, make sure you source your workspace so that ROS can find your package:

source ~/catkin_ws/devel/setup.bash

 

Now, run the script with the following command:

rosrun rotw8_pkg rotw8_code.py

If everything is correct, you will get a message like this in the shell:

 

ROS Mini Challenge #8 – Expected output

NOTE: The maximum value and its position in the array may vary a little from the ones in the image above (but not much).

Solving the ROS Mini Challenge

Ok, so… where’s the problem? If you have tried to reproduce the steps described above, you have already seen that it DOES NOT WORK: the values you get are not the correct ones at all. So… what’s going on?

When you run rosrun rotw8_pkg rotw8_code.py, the output is not the expected one:

$ rosrun rotw8_pkg rotw8_code.py
Highest value is 4.50035715103 and it is in the position 0 of the array.

The difference is almost 2 meters, and position 0 (zero) is really different from 202, which is more or less the expected one. The reason we expect something like position 202 is that position 0 of the scan is on the right of the robot, position 719 is on the left, and position 360 is right in front, so the highest distance should be close to the exit, which is around position 202. You may understand this better by looking at the image below:

Laser range positions in the turtlebot simulation in ROSDS
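As a side note, an index in the ranges array can be converted into an angle using the metadata of the LaserScan message itself. A small sketch (assuming, as the figure suggests, that the 720 readings span 180 degrees):

import math

def index_to_degrees(scan, i):
    # scan is a sensor_msgs/LaserScan message; the returned angle is
    # relative to the scanner frame (position 0 = angle_min).
    return math.degrees(scan.angle_min + i * scan.angle_increment)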

 

If we check the ~/catkin_ws/src/rotw8_pkg/src/rotw8_code.py file, its original code is:

#! /usr/bin/env python

import rospy
import time
from sensor_msgs.msg import LaserScan

class Challenge:
    def __init__(self):
        self.sub = rospy.Subscriber("/kobuki/laser/scan", LaserScan, self.laser_callback)
        self.laser_msg = LaserScan()

    def laser_callback(self, msg):
        self.laser_msg = msg

    def get_laser_full(self):
        time.sleep(1)
        return [self.laser_msg.ranges[0], self.laser_msg.ranges[719]]

    def get_highest_lowest(self):
        l = self.get_laser_full()
        i = 0
        maxim = -1
        for value in l:
            if value >= maxim and str(value) != "inf":
                maxim = value
                max_pos = i
            i = i + 1

        print "Highest value is " + str(maxim) + " and it is in the position " + str(max_pos) + " of the array."

if __name__ == '__main__':
    rospy.init_node('rotw8_node', anonymous=True)
    challenge_object = Challenge()
    try:
        challenge_object.get_highest_lowest()

    except rospy.ROSInterruptException:
        pass

If we look carefully, we can notice that the problem is in the get_laser_full function:

    def get_laser_full(self):
        time.sleep(1)
        return [self.laser_msg.ranges[0], self.laser_msg.ranges[719]]

As we can see, it only returns two values of the laser scan.

To solve the problem, we just replace this function with:

    def get_laser_full(self):
        time.sleep(1)
        return self.laser_msg.ranges

If you now save the file and run the rosrun rotw8_pkg rotw8_code.py command again, you should get the desired output:

$ rosrun rotw8_pkg rotw8_code.py
Highest value is 7.23973226547 and it is in the position 203 of the array.

Hey, now the value is the correct one. The problem was really easy, wasn’t it?
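As a side note, the same search can be written more compactly with Python built-ins. A sketch of an equivalent get_highest_lowest (it assumes import math is added at the top of the script):

    def get_highest_lowest(self):
        ranges = self.get_laser_full()
        # Pair each reading with its array position, skipping "inf" values
        finite = [(i, r) for i, r in enumerate(ranges) if not math.isinf(r)]
        max_pos, maxim = max(finite, key=lambda pair: pair[1])
        print "Highest value is " + str(maxim) + " and it is in the position " + str(max_pos) + " of the array."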

If you are still struggling to understand the code, or you want to master your ROS Skills, I highly recommend you take some courses in Robot Ignite Academy: http://www.robotigniteacademy.com/en/

Youtube video

So this is the post for today. Remember that we have the live version of this post on YouTube. If you liked the content, please consider subscribing to our YouTube channel. We are publishing new content almost every day.

Keep pushing your ROS Learning.
