ROS Developers LIVE-Class #18: Let’s Simulate a World in Gazebo Simulator


In this ROS LIVE-Class we’re going to create a world in the Gazebo simulator for the differential drive manipulator we built in the previous class, so the robot can navigate around and interact with objects.

The model of the robot was created using URDF. However, the model of the environment will be created using SDF.

We will see:
▸ How to create the world for the robot using SDF
▸ How to add models of any object you may think of
▸ How to spawn ROS-based robots in the world
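
For reference, here is a minimal sketch of the kind of SDF world we will build in the class. The ground_plane and sun models are standard Gazebo models; my_room is a hypothetical model of our own, used only for illustration:

<?xml version="1.0"?>
<sdf version="1.6">
  <world name="live_class_world">
    <!-- Standard Gazebo models for the ground and the light -->
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <include>
      <uri>model://sun</uri>
    </include>
    <!-- A hypothetical model of our own, placed one meter ahead -->
    <include>
      <uri>model://my_room</uri>
      <pose>1 0 0 0 0 0</pose>
    </include>
  </world>
</sdf>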

Part 1

Part 2

Every Wednesday at 18:00 CET/CEST. 

This is a LIVE Class on how to develop with ROS. In Live Classes you practice with me in real time, while I explain, using the provided free ROS material.

IMPORTANT: Remember to be on time for the class, because at the beginning of the class we will share the code with the attendees.

IMPORTANT 2: In order to start practicing quickly, we are using the ROS Development Studio for the practice. You will need a free account to attend the class. Go to http://rds.theconstructsim.com and create an account prior to the class.

// RELATED LINKS
▸ Gazebo simulator
▸ SDF format

ROS Developers LIVE-Class #19: Let’s Use Gazebo Plugins


In this ROS LIVE-Class we’re going to show how to create a Gazebo plugin for a simulated robot world. The plugin will allow us to switch the light of the world on and off by means of a ROS topic.
The plugin will be created using C++.

We will use the simulation we created in the previous Live Class: a wheeled robot in a home room. Do not worry, you will receive the code of the simulation at the beginning of this Live Class (it is only provided to those who attend the class live; the code will be shared during the first 5 minutes, so be on time).

We will see:
▸ Which types of Gazebo plugins exist
▸ How to create a world plugin that allows you to control anything in the simulated world (very useful for Reinforcement Learning)
▸ How to add ROS to the plugin so we can use it from our ROS programs
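
As a small preview of the last point, once the plugin exposes a ROS topic, controlling the light from a ROS program is almost a one-liner. A sketch in Python, assuming the plugin subscribes to a hypothetical /gazebo_light_switch topic of type std_msgs/Bool:

#!/usr/bin/env python
# Sketch: switch the simulated light from ROS.
# The topic name /gazebo_light_switch is an assumption; the real
# name will be whatever the plugin registers during the class.
import rospy
from std_msgs.msg import Bool

rospy.init_node('light_switcher')
pub = rospy.Publisher('/gazebo_light_switch', Bool, queue_size=1)
rospy.sleep(1.0)               # give the connection time to establish
pub.publish(Bool(data=False))  # False = light off, True = light on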

Every Tuesday at 18:00 CET/CEST.
This is a LIVE Class on how to develop with ROS. In Live Classes you practice with the teacher in real time, while he explains the lesson, using the provided free ROS material.

IMPORTANT: Remember to be on time for the class, because at the beginning of the class we will share the code with the attendees.

IMPORTANT 2: In order to start practicing quickly, we are using the ROS Development Studio for the practice. You will need a free account to attend the class. Go to http://rds.theconstructsim.com and create an account prior to the class. Otherwise you will not be able to practice during the class.

// RELATED LINKS

▸ The ROS Development Studio: https://www.theconstruct.ai/rds-ros-development-studio/

[ROS Projects] OpenAI with Hopper Robot in Gazebo Step-by-Step


In this series, we are going to show you how to build a hopper robot in ROS and make it learn to hop using a reinforcement learning algorithm. The hopper robot simulation was built in the previous post. In case you didn’t follow it, you can find the post here.

Part 1

Use OpenAI to make a Hopper robot learn in the Gazebo simulator, using the ROS Development Studio. We will use Q-learning and Gym for that.

Step 1. Create a training package

Let’s create a package for the training:

cd ~/simulation_ws/src/loco_motion
catkin_create_pkg my_hopper_training rospy

Then we create a launch file called main.launch inside the my_hopper_training/launch directory with the following content:

<!--
    Date of creation: 5/II/2018
    Application created by: Miguel Angel Rodriguez <duckfrost@theconstructsim.com>
    The Construct https://www.theconstruct.ai
    License LGPLV3 << Basically means you can do whatever you want with this!
-->

<launch>

    <!-- Load the parameters for the algorithm -->
    <rosparam command="load" file="$(find my_hopper_training)/config/qlearn_params.yaml" />

    <!-- Launch the training system -->
    <node pkg="my_hopper_training" name="monoped_gym" type="start_training_v2.py" output="screen"/>
</launch>

To implement reinforcement learning, we’ll use the Q-learning algorithm. We’ll save its parameters in a file called qlearn_params.yaml under the my_hopper_training/config directory, with the following content:

# Algorithm Parameters
alpha: 0.1
gamma: 0.8
epsilon: 0.9
epsilon_discount: 0.999 # 1098 eps to reach 0.1
nepisodes: 100000
nsteps: 1000

# Environment Parameters
desired_pose:
    x: 0.0
    y: 0.0
    z: 1.0
desired_force: 7.08 # In Newtons, normal contact force when standing still with 9.81 gravity
desired_yaw: 0.0 # Desired yaw in radians for the hopper to stay
max_height: 3.0   # in meters
min_height: 0.5   # in meters
max_incl: 1.57       # in rads
running_step: 0.001   # in seconds
joint_increment_value: 0.05  # in radians
done_reward: -1000.0 # reward
alive_reward: 100.0 # reward

weight_r1: 1.0 # Weight for joint positions ( joints in the zero is perfect )
weight_r2: 0.0 # Weight for joint efforts ( no efforts is perfect )
weight_r3: 1.0 # Weight for contact force similar to desired ( weight of monoped )
weight_r4: 1.0 # Weight for orientation ( vertical is perfect )
weight_r5: 1.0 # Weight for distance from desired point ( on the point is perfect )
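
These weight_r* parameters suggest that the environment computes its reward as a weighted sum of five components. Here is a sketch of that combination (the exact formula lives inside the environment code, so take this as an illustration of the idea, not the literal implementation):

# Sketch: combining the five reward components with the YAML weights.
# r1..r5 are the per-aspect rewards (joint positions, joint efforts,
# contact force, orientation, distance from the desired point).
def total_reward(r1, r2, r3, r4, r5, w):
    return (w["weight_r1"] * r1 + w["weight_r2"] * r2 +
            w["weight_r3"] * r3 + w["weight_r4"] * r4 +
            w["weight_r5"] * r5)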

In this post, we’ll focus on explaining the training script. Let’s create it under the my_hopper_training/src directory and call it start_training_v2.py, with the following content:

#!/usr/bin/env python

'''
    Original Training code made by Ricardo Tellez <rtellez@theconstructsim.com>
    Modified by Miguel Angel Rodriguez <duckfrost@theconstructsim.com>
    Visit our website at ec2-54-246-60-98.eu-west-1.compute.amazonaws.com
'''
import gym
import time
import numpy
import random
import qlearn
from gym import wrappers
from std_msgs.msg import Float64
# ROS packages required
import rospy
import rospkg

# import our training environment
import monoped_env


if __name__ == '__main__':
    
    rospy.init_node('monoped_gym', anonymous=True, log_level=rospy.INFO)

    # Create the Gym environment
    env = gym.make('Monoped-v0')
    rospy.logdebug ( "Gym environment done")
    reward_pub = rospy.Publisher('/monoped/reward', Float64, queue_size=1)
    episode_reward_pub = rospy.Publisher('/monoped/episode_reward', Float64, queue_size=1)

    # Set the logging system
    rospack = rospkg.RosPack()
    pkg_path = rospack.get_path('my_hopper_training')
    outdir = pkg_path + '/training_results'
    env = wrappers.Monitor(env, outdir, force=True)
    rospy.logdebug("Monitor Wrapper started")
    
    last_time_steps = numpy.ndarray(0)

    # Loads parameters from the ROS param server
    # Parameters are stored in a yaml file inside the config directory
    # They are loaded at runtime by the launch file
    Alpha = rospy.get_param("/alpha")
    Epsilon = rospy.get_param("/epsilon")
    Gamma = rospy.get_param("/gamma")
    epsilon_discount = rospy.get_param("/epsilon_discount")
    nepisodes = rospy.get_param("/nepisodes")
    nsteps = rospy.get_param("/nsteps")

    # Initialises the algorithm that we are going to use for learning
    qlearn = qlearn.QLearn(actions=range(env.action_space.n),
                    alpha=Alpha, gamma=Gamma, epsilon=Epsilon)
    initial_epsilon = qlearn.epsilon

    start_time = time.time()
    highest_reward = 0
    
    # Starts the main training loop: the one about the episodes to do
    for x in range(nepisodes):
        rospy.loginfo ("STARTING Episode #"+str(x))
        
        cumulated_reward = 0
        cumulated_reward_msg = Float64()
        episode_reward_msg = Float64()
        done = False
        if qlearn.epsilon > 0.05:
            qlearn.epsilon *= epsilon_discount
        
        # Initialize the environment and get first state of the robot
        rospy.logdebug("env.reset...")
        # Now we return directly the stringified observations, called state
        state = env.reset()

        rospy.logdebug("env.get_state...==>"+str(state))
        
        # for each episode, we test the robot for nsteps
        for i in range(nsteps):

            # Pick an action based on the current state
            action = qlearn.chooseAction(state)
            
            # Execute the action in the environment and get feedback
            rospy.logdebug("###################### Start Step...["+str(i)+"]")
            rospy.logdebug("haa+,haa-,hfe+,hfe-,kfe+,kfe- >> [0,1,2,3,4,5]")
            rospy.logdebug("Action to Perform >> "+str(action))
            nextState, reward, done, info = env.step(action)
            rospy.logdebug("END Step...")
            rospy.logdebug("Reward ==> " + str(reward))
            cumulated_reward += reward
            if highest_reward < cumulated_reward:
                highest_reward = cumulated_reward

            rospy.logdebug("env.get_state...[distance_from_desired_point,base_roll,base_pitch,base_yaw,contact_force,joint_states_haa,joint_states_hfe,joint_states_kfe]==>" + str(nextState))

            # Make the algorithm learn based on the results
            qlearn.learn(state, action, reward, nextState)

            # We publish the cumulated reward
            cumulated_reward_msg.data = cumulated_reward
            reward_pub.publish(cumulated_reward_msg)

            if not(done):
                state = nextState
            else:
                rospy.logdebug ("DONE")
                last_time_steps = numpy.append(last_time_steps, [int(i + 1)])
                break

            rospy.logdebug("###################### END Step...["+str(i)+"]")

        m, s = divmod(int(time.time() - start_time), 60)
        h, m = divmod(m, 60)
        episode_reward_msg.data = cumulated_reward
        episode_reward_pub.publish(episode_reward_msg)
        rospy.loginfo( ("EP: "+str(x+1)+" - [alpha: "+str(round(qlearn.alpha,2))+" - gamma: "+str(round(qlearn.gamma,2))+" - epsilon: "+str(round(qlearn.epsilon,2))+"] - Reward: "+str(cumulated_reward)+"     Time: %d:%02d:%02d" % (h, m, s)))

    rospy.loginfo ( ("\n|"+str(nepisodes)+"|"+str(qlearn.alpha)+"|"+str(qlearn.gamma)+"|"+str(initial_epsilon)+"*"+str(epsilon_discount)+"|"+str(highest_reward)+"| PICTURE |"))

    l = last_time_steps.tolist()
    l.sort()

    rospy.loginfo("Overall score: {:0.2f}".format(last_time_steps.mean()))
    rospy.loginfo("Best 100 score: {:0.2f}".format(reduce(lambda x, y: x + y, l[-100:]) / len(l[-100:])))

    env.close()

We won’t go into detail explaining the Q-learn algorithm (you can find a tutorial here if you are interested). Simply copy and paste the following code into a file called qlearn.py and put it under the my_hopper_training/src directory:

'''
Q-learning approach for different RL problems
as part of the basic series on reinforcement learning @

 
Inspired by https://gym.openai.com/evaluations/eval_kWknKOkPQ7izrixdhriurA
 
        @author: Victor Mayoral Vilches <victor@erlerobotics.com>
'''

import random

class QLearn:
    def __init__(self, actions, epsilon, alpha, gamma):
        self.q = {}
        self.epsilon = epsilon  # exploration constant
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.actions = actions

    def getQ(self, state, action):
        return self.q.get((state, action), 0.0)

    def learnQ(self, state, action, reward, value):
        '''
        Q-learning:
            Q(s, a) += alpha * (reward(s, a) + gamma * max(Q(s', a')) - Q(s, a))
        '''
        oldv = self.q.get((state, action), None)
        if oldv is None:
            self.q[(state, action)] = reward
        else:
            self.q[(state, action)] = oldv + self.alpha * (value - oldv)

    def chooseAction(self, state, return_q=False):
        q = [self.getQ(state, a) for a in self.actions]
        maxQ = max(q)

        if random.random() < self.epsilon:
            minQ = min(q); mag = max(abs(minQ), abs(maxQ))
            # add random values to all the actions, recalculate maxQ
            q = [q[i] + random.random() * mag - .5 * mag for i in range(len(self.actions))] 
            maxQ = max(q)

        count = q.count(maxQ)
        # In case there're several state-action max values 
        # we select a random one among them
        if count > 1:
            best = [i for i in range(len(self.actions)) if q[i] == maxQ]
            i = random.choice(best)
        else:
            i = q.index(maxQ)

        action = self.actions[i]        
        if return_q: # if they want it, give it!
            return action, q
        return action

    def learn(self, state1, action1, reward, state2):
        maxqnew = max([self.getQ(state2, a) for a in self.actions])
        self.learnQ(state1, action1, reward, reward + self.gamma*maxqnew)

In the training script, we are basically doing the following steps:

  1. Create the training environment
  2. Read the Q-learn parameters from the parameter server
  3. Try to get the highest reward with the Q-learn algorithm by deciding, at each timestep, which action to take based on the current state (see the toy example below)
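
To see the QLearn class in isolation, here is a toy sketch that trains it on a trivial two-state problem, with no ROS or Gazebo involved (the states and dynamics are made up for illustration):

# Toy usage of the QLearn class on a made-up 2-state, 2-action problem.
import qlearn

agent = qlearn.QLearn(actions=range(2), epsilon=0.9, alpha=0.1, gamma=0.8)
state = "s0"
for step in range(100):
    action = agent.chooseAction(state)
    # toy dynamics: taking action 1 in s0 moves to s1 and pays off
    reward = 1.0 if (state == "s0" and action == 1) else 0.0
    state2 = "s1" if action == 1 else "s0"
    agent.learn(state, action, reward, state2)
    state = state2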

That’s it for today. For the next post, we are going to explain how to build the gym training environment.

 

Edit by: Tony Huang

 

Here you will find all the code:
https://bitbucket.org/theconstructcore/hopper/src/master/

Or directly use the project in the ROS Development Studio:
https://rds.theconstructsim.com/tc_projects/use_project_share_link/f162963c-5651-460a-bab0-b1cd45607103

Check out this OpenAI course in the Robot Ignite Academy to learn the basics step by step:
https://wp.me/P9Rthq-1UZ

 

All about Gazebo ROS (Gazebo 9)


Let’s see how to install the Gazebo 9 simulator to work with your ROS system. We are going to see how to replace the default version of Gazebo that comes with the installation of ROS, and whether previously existing simulations work (or not) with this new version of the simulator.

How to install Gazebo ROS (Gazebo 9) in an existing ROS environment

I presume that you already have a ROS distribution on your system. If you do, you probably installed the version of Gazebo that came by default with that ROS distribution. If you check the Gazebo documentation, you will see that the following are the default Gazebo versions that install automatically with each ROS distribution:

  • ROS Indigo: Gazebo 2.x
  • ROS Kinetic:  Gazebo 7.x
  • ROS Lunar: Gazebo 7.x

Let’s see now how you can replace the default Gazebo version with the newest one (9.x as of 3rd May 2018).

First, uninstall the default Gazebo

If you want to install the latest version, you first have to remove the default installed Gazebo (which was probably installed when you installed ROS). That is easy because, independently of the ROS distro, the same commands apply to all distributions to remove the default Gazebo installation:

$ sudo apt-get remove ros-ROS_DISTRO-gazebo*
$ sudo apt-get remove libgazebo*
$ sudo apt-get remove gazebo*

(replace ROS_DISTRO with your distro name, for example kinetic)

After the uninstall, no Gazebo files will remain on your system, nor will the ROS-related packages. Let’s now install the new Gazebo 9.

Update the repository

You will need to add the osrfoundation repo to your Linux package system in order to get the new packages of Gazebo.

$ sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
$ wget http://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -

Then update the package list:

$ sudo apt-get update

The integration of Gazebo with ROS is performed by means of the ros-<ROS_VERSION>-gazebo9 series of packages. The list of ROS–Gazebo packages that Open Robotics usually offers is the following (in our case, we used ROS_VERSION=kinetic):

  • ros-kinetic-gazebo9-dev
  • ros-kinetic-gazebo9-plugins
  • ros-kinetic-gazebo9-ros-control
  • ros-kinetic-gazebo9-msgs
  • ros-kinetic-gazebo9-ros
  • ros-kinetic-gazebo9-ros-pkgs

Install Gazebo 9

A very simple command will do it:

$ sudo apt-get install ros-kinetic-gazebo9-*

That command will install all dependencies. To test if everything is properly working, just type:

$ gazebo

A window like this should appear on your screen.


Gazebo 9 robot simulator start screen

Related post: Launching Husarion ROSbot navigation demo in Gazebo simulation

Testing Gazebo with a battery of ROS based simulations

If you are reading this post, it is because you are interested in the Gazebo/ROS combo. So your next question should be: will this new version work with our previously working ROS-based simulations? The answer is… it depends. It depends on which Gazebo version your simulation was made for, and on which parts of Gazebo the simulation uses. We have done the following experiments with some of our simulations.

Testing with a robotic arm simulation

Let’s do a simple example: let’s launch a WAM arm robot simulation, which includes several models: a Kinect, a laser, and an arm with joint controllers. The simulation was created for Gazebo 7.x.

First, you need to create a catkin_ws:

$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws
$ catkin_make

You can clone and compile the Wam simulation from The Construct public simulations repo with the following commands:

$ cd ~/catkin_ws/src
$ git clone https://TheConstruct@bitbucket.org/theconstructcore/iri_wam.git -b kinetic
$ cd ..
$ catkin_make
$ roslaunch iri_wam_gazebo main.launch

The result is the simulation running, showing just some warnings related to xacro namespace redefinitions:

inconsistent namespace redefinitions for xmlns:xacro:
 old: http://ros.org/wiki/xacro
 new: http://www.ros.org/wiki/xacro (/home/ricardo/catkin_ws/src/iri_wam/iri_wam_description/xacro/iri_wam_1.urdf.xacro)

That warning can be resolved by changing, in all the affected files, the xacro namespace definition from this:

xmlns:xacro="http://ros.org/wiki/xacro"

to this:

xmlns:xacro="http://www.ros.org/wiki/xacro"
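
If many files are affected, you can apply the fix to the whole package tree with a single command (a sketch; run it from the package root and review the changes afterwards):

$ grep -rl 'xmlns:xacro="http://ros.org/wiki/xacro"' . | xargs sed -i 's|"http://ros.org/wiki/xacro"|"http://www.ros.org/wiki/xacro"|g'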

There was no problem executing any of those files. Bear in mind that the simulation includes joint controllers as well as a couple of sensor plugins, and still no modification was required (remember, it was originally created for Gazebo 7.x).


Gazebo 9 simulation of the Wam robot with ROS

Testing with a wheeled robot simulation

The next simulation we tested was the Summit XL robot simulation by Robotnik. We used the following commands:

$ cd ~/catkin_ws/src 
$ git clone https://TheConstruct@bitbucket.org/theconstructcore/summit_xl.git -b kinetic
$ cd ..
$ catkin_make

In this case, we also had no problem when launching the simulation with the following command:

$ roslaunch sumit_xl_course_basics main.launch

Summit XL robot simulation running in Gazebo 9

Testing with a full environment simulation

In this case, we decided to test a simulation created by the Gazebo team themselves, which they used for a competition, and which was created for Gazebo 8. It is also an interesting simulation because it includes a complete biped robot with several sensors, in a full office environment with people moving around and plenty of stuff. Have a look at it here.

$ cd ~/catkin_ws/src 
$ hg clone https://TheConstruct@bitbucket.org/osrf/servicesim  
$ cd .. 
$ catkin_make
$ roslaunch servicesim servicesim.launch

The simulation worked nicely off-the-shelf.

ServiceSim running in Gazebo 9


Rviz showing the data produced by ServiceSim in Gazebo 9

Problems when working with ROS with Gazebo 9

Gazebo is still, and will always be, a standalone program, completely independent from ROS. This means that the interplay between them is not as smooth as it could be.

No ROS controllers provided

One of the problems I see with Gazebo 9 when working with ROS is that Gazebo provides a lot of interesting robot models through its Ignition Fuel library; however, none of the models includes ROS controllers. So in case you want to use the models in a ROS-based setup, you need to create the controllers yourself. One example of this is the beautiful simulation of the autonomous car environment created by the Gazebo team. The simulation is perfect for working with autonomous cars, but the only support it has is for Gazebo topics.
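
For reference, bringing one of those models under ROS control typically means writing the ros_control configuration (and the matching transmissions in the model) yourself. A minimal sketch of such a controller YAML, with hypothetical robot and joint names:

# Sketch: ros_control configuration you would have to write yourself
# for a Fuel model (robot and joint names are hypothetical).
my_robot:
  joint_state_controller:
    type: joint_state_controller/JointStateController
    publish_rate: 50
  left_wheel_velocity_controller:
    type: velocity_controllers/JointVelocityController
    joint: left_wheel_joint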

Use of SDF format instead of URDF

An additional problem with the models is that they have been created in the SDF format. SDF is the default format for creating models and whole simulations in Gazebo 9, but that format is not supported by ROS. This makes it more difficult to use the models in Gazebo + ROS simulations, since ROS requires a URDF description of the model to show it in Rviz. (Just in case you want to convert SDF models into URDF, check the following tutorial about it.)

 

You may be wondering, then, why use SDF instead of URDF for defining the simulations. One of the reasons for using SDF in Gazebo instead of URDF (as indicated by Louise Poubel in this interview of the ROS Developers Podcast) is that SDF overcomes some of the limitations of URDF, for example the creation of closed loops in a robot model. URDF does not allow you to create a robot whose kinematic chain splits into two at some point and then joins again; SDF handles that with ease (watch this video to understand the problem).
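
To make the difference concrete: in SDF a joint can connect any two links, so the loop can be closed explicitly. A sketch of the loop-closing joint that URDF cannot express (the link names are hypothetical, and the rest of the linkage is omitted):

<!-- Sketch: closing a kinematic loop in SDF. In URDF the model must
     remain a tree, so this joint would be rejected. -->
<joint name="loop_closure" type="revolute">
  <parent>link_d</parent>
  <child>link_a</child>
  <axis>
    <xyz>0 0 1</xyz>
  </axis>
</joint>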

Based on that, could it be that the most convenient solution would be to change ROS to support SDF instead of changing Gazebo to support URDF?

Related post: My Robotic Manipulator – Part #1 – Basic URDF and RViz

What about ROS plugins?

The ROS plugins for Gazebo 9 are the plugins that provide access to the different sensors, actuators, and other functionalities of the simulator through a ROS interface. The ROS plugin packages are provided as a set of ROS packages separate from the main Gazebo 9 distribution. Usually, those packages arrive some weeks after a new Gazebo version has been released. The good news is that the packages for Gazebo 9 are already available (good job, Jose Luis Rivero 😉), and you already installed them at the beginning of this post.

If you were using the standard plugins provided by ROS in your simulations, it is very likely that they will still work off-the-shelf. On the other hand, if you created your own plugins using the Gazebo API, chances are that they may not work, and you may need to adapt them to small changes in the plugin API.

Conclusion

With Gazebo 9, the simulator reaches a very mature version where quite detailed simulations can be created. Just check, for example, the impressive simulation of an autonomous cars environment created by OSRF. With every new version we find new features but, more important than that, we find more stability (that is, fewer crashes).

If you want to know which features will be included in future versions of Gazebo and when they are going to be released, just check the Gazebo roadmap.


[ROS Q&A] 117 – How to Launch a ROS Industrial Robots Simulation

 

In this video we will see how to launch a complex industrial environment with several robots in it, including ROS industrial robots and service robots.

The simulation contains a UR5 industrial robot and a couple of mobile bases. It also includes many types of sensors, such as lasers and cameras, and even a conveyor belt simulation.

This amazing simulation was created by the OSRF for their ARIAC competition 2017, using the Gazebo simulator.

 

Related post: RDP 006: Using ROS for Industrial Projects With Carlos Rosales

Step 1. Create a project in ROS Development Studio (ROSDS)

ROSDS helps you follow our tutorial at a fast pace without having to set up the environment locally. If you don’t have an account yet, you can create a free account here. You can get the shared project through this link.

Step 2. Run the simulation

We prebuilt the package for this project. You can run the simulation with the following commands:

source ~/simulation_ws/install/setup.bash
rosrun osrf_gear gear.py --development-mode -f /home/user/simulation_ws/install/share/osrf_gear/config/sample.yaml

You can then open the Gazebo simulation from Tools->Gazebo. You should see the whole simulation of a warehouse.

Step 3. Demo

We prepared a demo, which you can simply launch with the following command:

./catkin_ws/src/demo.sh

After executing it, you should see the robots moving around.

We will use this simulation in the upcoming ROS Developers Live Class #20 to play with robots in industrial environments:

Related post: ROS Developers LIVE-Class #20: Simulate an Industrial Environment

 

Edit by: Tony Huang

RELATED LINKS
▸ OSRF: https://www.osrfoundation.org/
▸ ARIAC competition: http://gazebosim.org/ariac

▸ Robot Ignite Academy
▸ ROS Industrial online course
▸ ROS Development Studio


 
