OpenAI has released the Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. The toolkit is a huge opportunity to speed up progress in the creation of better RL algorithms, since it provides an easy way of comparing them under the same conditions, independently of where the algorithm is executed.
The toolkit is mainly aimed at the creation of RL algorithms for a general abstract agent. Here, we are interested in applying it to the control of robots (of course!). Specifically, we are interested in ROS based robots. That is why, in this post, we describe how to apply the OpenAI Gym to the control of a drone that runs with ROS.
Let’s see an example of training.
The drone training example
In this example, we are going to train a ROS based drone to reach a location in space while flying as low as possible (maybe to avoid being detected), but avoiding the obstacles in its way.
To develop the algorithm we are going to use the ROS Development Studio (RDS). That is an environment that allows you to program with ROS and its simulations from a web browser, without having to install anything on your computer, so all the required packages for ROS, OpenAI Gym and Gazebo simulations come already installed. To follow the rest of the post you have two options:
Either you install everything on your computer. If you need to install those packages, have a look here for the ROS installation, here for the OpenAI installation, and here to download the Gazebo simulation (for Indigo + Gazebo 7).
Or you follow the instructions below for developing the training program with RDS; as you will see, the steps are the same for a local installation.
Get the prepared code for drone training
We have prepared the training code already for you, so you don’t have to build everything from scratch. The goal of this post is to show you how this code works, and how you would modify it for your own case (different robot, or different task).
In order to get the code, just open the RDS (http://rds.theconstructsim.com) and create a new project. You can call it openai_with_ros_example. Then open the project by clicking on the Open Project button.
Once you have the environment open, go to the Tools menu and open a Linux Shell. Inside the shell go to the catkin_ws/src directory. This is the place where ROS code must be put in the RDS in order to build, test, debug and execute it against robot simulations. Once there, clone the following git repo which contains the code to train the drone with OpenAI:
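For reference, the commands look like this (the repo URL is, to the best of my knowledge, The Construct's public Bitbucket repo for this example; check our public repos if it has moved):

$ cd ~/catkin_ws/src
$ git clone https://bitbucket.org/theconstructcore/drone_training.git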
Now, you should have a ROS package named drone_training inside the catkin_ws. Let’s test it right now!
Testing what you have installed
The first thing is to test what you got, so we can see what we are trying to understand. For this, follow the next steps:
Launch the Parrot drone simulation. On RDS, you can find it as Parrot AR.Drone in the Simulations menu. After launching, you should see a window like this.
Parrot Drone with ROS simulation
Let’s launch our package so it will start training the Parrot drone. For that, type the following in the previous shell:
> roslaunch drone_training main.launch
You should see the drone start moving and doing some strange things. It actually looks like the drone is drunk! That makes perfect sense.
What is happening is that the robot is learning. It is exploring its space of actions and observing what it senses based on the actions that it takes. That is exactly how the reinforcement learning problem works. Basically, the robot is performing the classical RL loop of the figure:
How a reinforcement learning problem works (image from StackOverflow)
The agent (the drone plus the learning algorithm) decides to take an action from the pool of available actions (for example, move forward), and executes it in the environment (the drone moves forward). The result of that action brings the agent closer to its target (to fly to a given location) or not. If the robot is closer, it gets a good reward. If it is further away, it gets a bad reward. In any case, the agent perceives the current state of itself and the environment (where it is located now), and then feeds the reward, previous state, new state and action taken to the learning algorithm (so it learns the results of its actions). Then the process repeats for the number of steps the robot is allowed to experiment. When the steps are done, the final reward is obtained and the robot starts again from the initial position, now with an improved algorithm. The whole process is repeated again and again for a given number of episodes (usually high).
Now, let’s see how all that works together in the code. Let’s see its structure:
The drone_training package for OpenAI training with ROS
This package is just an example of how you can interface OpenAI with ROS robots. There are other ways of doing it, and in future posts we will explore them.
The package contains the following directories:
launch directory. It contains the main.launch file that we used to launch the whole training thing.
config directory. It contains a configuration file qlearn_params.yaml with the desired parameters for the training. Usually, you need to tune the parameters by trying several times, so it is a good practice to keep them structured in a file or files for easier modification/review.
training_results directory. It will contain the results of our training for later analysis.
utils directory. Contains a Python file plot_results.py that we will use to plot the training results.
src directory. The crux of the matter. It contains the code that makes possible the training of the drone. Let’s have a deeper look at this one.
The src directory
The launch file will launch the start_training.py file. That is the file that orchestrates the training. Let’s see what it does step by step:
Initializes the ROS node
rospy.init_node('drone_gym', anonymous=True)
Of course, the first thing is to declare this code as a ROS node.
Creates the Gym environment
env = gym.make('QuadcopterLiveShow-v0')
That is the main class that OpenAI provides. Every experiment in OpenAI must be defined within an environment. By organizing experiments like that, different developers can test different algorithms always against the same environment. Hence we can compare whether an algorithm is better than another, always under the same conditions.
The environment defines the actions available to the agent, how to compute the reward based on its actions and results, and how to obtain the state of the world of the agent.
Every environment in OpenAI must define the following things:
A function _reset that sets the training environment to its initial state.
A function _step that makes the changes in the environment based on the last action taken, and then observes what the new state of the environment is. Based on those two things, it generates a reward.
A function _seed used for initializing the random number generator.
A _render function used to show on screen what is happening in the environment.
The task that the agent has to solve in this environment.
The number of possible actions in the environment. In our case, we are allowing the robot to take 5 actions.
A way to compute the reward obtained by the agent. Those are the points given to the agent on each step, based on how well or how badly it has done in the environment at solving the task at hand.
A way to determine whether the task at hand has been solved.
We are going to see how to write the code of the environment below.
Loading the parameters of the algorithm from the ROS param server
In this case we are using a Qlearning reinforcement learning algorithm, but you can use any other available algorithm (including deep learning) or encode your own.
That is the key part that we want to test: how good this algorithm is at solving the task at hand.
Implementation of the training loop
The training loop is the one that repeats the learning cycle explained above. That is where the learning code is executed. It basically consists of two main loops:
First, we have a loop over the number of episodes the robot will be tested for. Each episode is one attempt the robot is allowed to make at solving the task.
Second, we have the loop over the number of steps. In each episode, we allow the robot to take a given number of actions (the number of steps, the number of iterations of the reinforcement cycle). If the robot consumes all the steps, we consider that it has not solved the task and hence, a new episode must start.
The number of episodes loop
It starts with the code:
for x in range(nepisodes):
Remember that the number of episodes is a parameter from the config file. What the loop basically does is reset the environment (initialize the robot) so a new trial can start from the original position. It also gets the initial state observation, required by the learning algorithm to generate the first action.
observation = env.reset()
The number of steps loop
It starts with the code:
for i in range(nsteps):
and basically what it does is:
Make the learning algorithm choose an action based on the current state
action = qlearn.chooseAction(state)
Execute the action in the environment
observation, reward, done, info = env.step(action)
Get the new state after the action
nextState = ''.join(map(str, observation))
Learn from that result
qlearn.learn(state, action, reward, nextState)
And that is all. That simple. The loops will repeat based on the parameter values, and once they finish, the log files will be generated in the training_results directory.
About the learning algorithm
In this example, we are using the Qlearn reinforcement learning algorithm. That is a classical reinforcement learning algorithm. You can find a description of it here.
The code for the Qlearn algorithm is provided in the qlearn.py file. It has been taken from Victor Mayoral's git, and you can find the original code here (thanks Victor for such good work!).
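At its core, Qlearn keeps a table of Q values, one per (state, action) pair, and updates it after every step with the classical rule Q(s,a) += alpha * (reward + gamma * max Q(s',a') - Q(s,a)). A minimal sketch of that update, using a dictionary-backed table (the exact code in qlearn.py may differ):

class QLearn(object):
    def __init__(self, actions, alpha=0.2, gamma=0.9):
        self.q = {}            # Q-table: (state, action) -> value
        self.actions = actions
        self.alpha = alpha     # learning rate
        self.gamma = gamma     # discount factor

    def learn(self, state, action, reward, next_state):
        old_q = self.q.get((state, action), 0.0)
        max_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        # Move the old estimate towards the observed reward plus discounted future value
        self.q[(state, action)] = old_q + self.alpha * (reward + self.gamma * max_next - old_q)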
You could change this algorithm for another one that you may have developed and that is going to be the next hit in artificial intelligence. Just create the code (like qlearn.py) with the same inputs and outputs, and then substitute the call in the start_training.py file. That is the greatness of the OpenAI framework: you can just plug in your algorithm, without changing anything else, and the whole learning system will still work. By doing this, you can compare your algorithm with others under the exact same conditions.
Additionally, we have included in the repo another classic reinforcement learning algorithm called Sarsa (sarsa.py).
Here is your first homework!
Replace the learning algorithm inside the start_training.py file with the Sarsa algorithm, and watch whether there is any difference in learning speed or improved behavior.
The Gym environment
As I said, the environment defines the actions available to the agent, how to compute the reward based on its actions and results, and how to obtain the state of the world of the agent after those actions have been performed.
OpenAI provides a standardized way of creating an environment. Basically, you must create an environment class which inherits from gym.Env. That inheritance requires you to implement within that class the functions _seed, _reset and _step (explained above).
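A minimal skeleton of such a class, assuming the Gym API of that time (underscore-prefixed methods) and with the bodies reduced to placeholders:

import gym
from gym import spaces
from gym.utils import seeding

class QuadCopterEnv(gym.Env):

    def __init__(self):
        # Here the real class connects to the ROS topics, reads the
        # parameters from the param server and connects to Gazebo
        self.action_space = spaces.Discrete(5)  # the 5 actions of our task

    def _seed(self, seed=None):
        # Initialize the random number generator
        self.np_random, seed = seeding.np_random(seed)
        return [seed]

    def _reset(self):
        # Reset the simulation and the topics, return the initial observation
        observation = []  # placeholder
        return observation

    def _step(self, action):
        # Apply the action, observe, compute the reward, decide if done
        state, reward, done = [], 0.0, False  # placeholders
        return state, reward, done, {}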
In our case, we have created a class named QuadCopterEnv. You can find the code in the myquadcopter_env.py file.
The code starts by registering the class into the pool of available environments of OpenAI.
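A sketch of what that registration call looks like in myquadcopter_env.py (the id and entry point match the names used in this post; the timestep limit value is illustrative):

from gym.envs.registration import register

register(
    id='QuadcopterLiveShow-v0',              # the id used by gym.make()
    entry_point='myquadcopter_env:QuadCopterEnv',
    timestep_limit=100,                      # illustrative value
)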
Then the class starts initializing the topics it needs to connect to, gets the configuration parameters from the ROS param server, and connects to the Gazebo simulation.
Now it is time for the definition of each of the mandatory functions for an environment.
In the function _seed, we initialize the random seed required to generate random numbers. Those are used by the learning algorithm when generating random actions.
In the function _reset we initialize the whole environment to a known initial state, so all the episodes start under the same conditions. Very simple code: it just resets the simulation and clears the topics. The only special thing is that at the end of the function we pause the simulation, because we do not want the robot to keep running while we are doing other computational tasks. Otherwise, we could not guarantee the same initial conditions for all the episodes, since they would largely depend on the execution time of other algorithms on the training computer.
The main function is the _step function. This is the one that is called during the training loops. This function receives as a parameter the action selected by the learning algorithm. Remember that this parameter is just the number of the action selected, not the actual action. The learning algorithm doesn't know which actions we have for this task; it just knows the number of actions available, and it picks one of them based on its current learning status. So what we receive here is just the number of the action selected by the learning algorithm.
The first thing we do is convert that number into the actual action command (a Twist movement command) that we will send to the ROS robot.
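A sketch of that conversion (the speed values and the meaning of each action number are illustrative; check the _step code in myquadcopter_env.py for the real ones):

from geometry_msgs.msg import Twist

vel_cmd = Twist()
if action == 0:    # move forward
    vel_cmd.linear.x = 0.3
elif action == 1:  # turn left
    vel_cmd.angular.z = 0.3
elif action == 2:  # turn right
    vel_cmd.angular.z = -0.3
elif action == 3:  # go up
    vel_cmd.linear.z = 0.3
elif action == 4:  # go down
    vel_cmd.linear.z = -0.3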
The next step is to send the action to the robot. For that, we need to unpause the simulator, send the command, wait some time for the command to execute, take an observation of the state of the environment after the execution, and pause the simulator again.
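Sketched with the stock Gazebo services (the /gazebo/unpause_physics and /gazebo/pause_physics services are standard; the publisher, waiting time and observation helper names are hypothetical):

import time
import rospy
from std_srvs.srv import Empty

# Inside _step, conceptually (vel_pub, running_step and take_observation are hypothetical names)
unpause = rospy.ServiceProxy('/gazebo/unpause_physics', Empty)
pause = rospy.ServiceProxy('/gazebo/pause_physics', Empty)

unpause()                                      # let the simulation run
self.vel_pub.publish(vel_cmd)                  # send the command to the drone
time.sleep(self.running_step)                  # give it time to execute
data_pose, data_imu = self.take_observation()  # observe the result
pause()                                        # freeze the world while we compute the reward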
Then, we process the current state of the robot/environment to calculate the reward. For the reward, we take into account how close the drone is to the desired position, but also other factors like the inclination of the drone or its height. Additionally, we reward moving forward over turning.
Finally, we return the current state, the reward obtained and a flag indicating whether this episode must be considered done (either because the drone achieved the goal, or because it violated the height or inclination conditions).
state = [data_pose.position.x]
return state, reward, done, {}
Basically, that's it. The code above calls additional functions that you can check by looking into the QuadCopterEnv class. Those are the functions that do the dirty job of calculating the actual values, but we don't need to review them here, because they are beyond the scope of this post.
One function that we may need to cover, though, is the function that computes the reward. Its code is the following:
def process_data(self, data_position, data_imu):
    done = False
    euler = tf.transformations.euler_from_quaternion([data_imu.orientation.x,
                                                      data_imu.orientation.y,
                                                      data_imu.orientation.z,
                                                      data_imu.orientation.w])
    roll = euler[0]
    pitch = euler[1]
    yaw = euler[2]

    pitch_bad = not(-self.max_incl < pitch < self.max_incl)
    roll_bad = not(-self.max_incl < roll < self.max_incl)
    altitude_bad = data_position.position.z > self.max_altitude

    if altitude_bad or pitch_bad or roll_bad:
        rospy.loginfo("(Drone flight status is wrong) >>> (" + str(altitude_bad) + "," + str(pitch_bad) + "," + str(roll_bad) + ")")
        done = True
        reward = -200
    else:
        reward = self.improved_distance_reward(data_position)

    return reward, done
That code basically does two things:
First, it detects whether the robot has surpassed the previously defined operation limits of height and inclination. If that is the case, it considers the episode done and gives a big negative reward.
Otherwise, it computes the reward based on the distance to the goal.
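The distance-based part comes from improved_distance_reward. That function is not shown here; a hypothetical sketch of the idea (the reward grows as the drone approaches the goal; the formula and the desired_pose attribute name are assumptions):

import math

def improved_distance_reward(self, data_position):
    # Hypothetical sketch: the closer the drone is to the goal, the larger the reward
    dist = math.sqrt((data_position.position.x - self.desired_pose.position.x) ** 2 +
                     (data_position.position.y - self.desired_pose.position.y) ** 2)
    return 100.0 / (1.0 + dist)  # close to the goal -> large reward, far away -> small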
How to configure the test
You can find a yaml file in the config directory containing the different parameters required to configure the learning task. I have divided the parameters into two types:
Parameters related to the learning algorithm being used: those are the parameters that my algorithm needs. In this case, they are specific to the Qlearn algorithm. You would define here the ones your algorithm needs, and then read them in the start_training.py file.
Parameters related to the environment: those are parameters that affect the way the reward is obtained, and hence, they affect the environment. Those include the goal position, or the conditions under which an episode is considered aborted due to unstable drone conditions (too much altitude or too much inclination).
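As an illustration, such a file could look like the following (the max_altitude and max_incl names match the attributes used in process_data above; the rest of the names and all the values are illustrative):

# Parameters of the Qlearn algorithm (illustrative values)
alpha: 0.1      # learning rate
gamma: 0.8      # discount factor
epsilon: 0.9    # initial exploration rate
nepisodes: 500  # number of episodes to run
nsteps: 1000    # steps allowed per episode

# Parameters of the environment (illustrative values)
desired_pose: {x: 5.0, y: 0.0, z: 1.0}  # goal position
max_altitude: 2.0                        # abort the episode above this height (m)
max_incl: 0.7                            # abort the episode beyond this roll/pitch (rad)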
How to plot the results of the training
Plotting results is very important because you can visually identify whether your system is learning or not, and how fast. If you are able to identify early that your system is not learning properly, you can modify the parameters (or even the conditions of the experiment) and retry fast.
In order to plot the results, I have provided a Python script in the utils directory that does the job. I did not create the code myself; I took it from somewhere else, but I cannot remember from where (if you are the author and want the credit, just contact me). To launch that code, just type in the utils directory:
> python plot_results.py
The script will take the results generated in the training_results directory and generate a plot with all the rewards obtained for each episode. In order to watch the plot, you must open the Graphic Tools window (Tools->Graphic Tools). You should see something like this:
For this post, I have run the code provided to you (as of the 8 Feb 2018 version) for 500 episodes, and the results are not very good, as you can see in the following figure.
In that figure, you can see that there is no progress in the reward, episode after episode. Furthermore, the variations in the reward values look completely random. This means that the algorithm is actually not learning anything about the problem it is trying to solve. What can the reasons be? Well, I can think of a few. The goal for the engineer is to devise ways to modify the learning setup so the learning can actually be accomplished. Some possible reasons why it is not learning:
The reward function is not properly created. If the reward is too complex, the system may not be able to capture small baby steps of improvement.
The state provided to the learning algorithm is continuous, and Qlearning is not well suited for that. As you can see in the code, we are returning the current robot position on the x axis as the state of the environment. That is a continuous value. I would suggest discretizing the state into different zones (let's say 10 zones), each zone meaning getting closer and closer to the goal point (see the sketch after this list).
The parameters of the learning algorithm are not correct.
The experiment is not completely sound per se. Take into account that the goal position of the robot is fixed, and that the obstacles cannot be detected by the robot (it can only infer their position after crashing into them many times over several episodes).
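For the second reason, a minimal sketch of how the x position could be discretized into zones before handing it to Qlearn (the zone count and goal distance are illustrative):

def discretize_x(x, goal_x=5.0, n_zones=10):
    # Map the continuous x position into one of n_zones bins between start (0) and goal
    fraction = max(0.0, min(1.0, x / goal_x))  # clamp progress to [0, 1]
    return int(fraction * (n_zones - 1))       # zone 0 = start, zone n_zones-1 = at the goal

# The state fed to Qlearn would then be, e.g., str(discretize_x(data_pose.position.x))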
What is clear is that the structure of the training environment is correct (by structure, I mean the organization of the whole learning system). That is a good point, since it allows us to start looking for ways to improve the learning from within a learning structure that already works.
How to improve the learning
This example is massively improvable (as the plot of the results shows ;-). Here is a list of suggestions for improving the system so it learns faster and finds better solutions:
Take the observations/state in (x, y, z) instead of only x. Also, discretize the state space.
Make the robot detect obstacles with its sonar sensor and use it to avoid them.
Make the robot go to random points, not only to a fixed point.
That is your homework!
Apply any of those improvements, and send me your plots of the improved reward evolution, and videos showing how the drone has learnt to do the task. We will publish them on our social channels, giving you credit for it.
ROS Developers Live Show about this OpenAI with ROS example
We recently did a live class showing how everything explained above works in real time, with many people attending at the same time and doing the exercises with me. It may clarify all the content above for you. Have a look here:
We do a ROS Developers Live Show every Wednesday at 18:00 CET. You may want to subscribe to our Youtube channel in order to stay notified of future Live Shows.
Additionally, we have created an online course where you can learn all the material above and other things about OpenAI for robotics. It is online with all the simulations integrated and requires just a web browser (you can do the exercises with ROS even from Windows!). You can find it here: OpenAI Gym for Robotics 101 (additionally, in case you like it, you can use the discount coupon 2AACEE38 for a 10% discount).
Conclusion
OpenAI is a very good framework for training robots to do things using the latest techniques in artificial intelligence. Also, as you have seen, it is not difficult to integrate with ROS based robots. This makes the tandem OpenAI+ROS a killer combination for robot development!
If you still have doubts, write your questions below the post and we will try to answer them all. Happy robot training!
In this video, we are going to explore macros for URDF files, using XACRO files. At the end of this video, we will have the same model organized in different files, in an organized way.
Learn how to build the Sentinel robots from The Matrix for the Gazebo simulator, and learn advanced XACRO techniques. In this first set of videos you will build a basic geometric version. Here is the git for the code: https://bitbucket.org/theconstructcore/sentinel
Video 2
Second part, where we talk about how XACRO works in depth.
Video 3
The third video in the series, where you learn how to build the Sentinel of The Matrix film, which is an octopus-like robot. Learn how to build it in Gazebo and use ROS to move it around. The next set of videos will be about adding the meshes. Remember to post your crazy robot projects under this video and I'll pick a winner in the next videos ;). Have fun with robotics.
Video 4
In this fourth video, you will learn how to add meshes to the Sentinel-Octopus model of The Matrix Revolutions that we built in previous videos for the Gazebo simulator. We will use, as always, the ROS Development Studio for this, but also Blender for the mesh import/scaling and Thingiverse for the download.
We would love to see your results following this tutorial, and your other projects, in the comments below.
Video 6
Learn how to add textures to your STL files to make this Sentinel look as close as possible to the ones in The Matrix. Also add emissive materials for the elements that generate light.
The drone market is growing more and more each year, and so does the need to improve the way we control drones. One of the most important topics here is, of course, how to navigate them. In this series of videos we are going to have a look at how to implement in ROS one of the approaches that allows us to perform Localization and Mapping on drones in a quite easy way: LSD-SLAM. LSD-SLAM is a direct monocular SLAM technique, developed by TUM, which allows you to localize and create maps with drones using just a monocular camera. Hope you enjoy it!
You will learn step by step through 4 video tutorials:
Part 1 – Setup the whole environment
How to perform LSD-SLAM with a ROS based Parrot AR.Drone in a Gazebo simulation. In this 1st video, you're going to:
Learn the definition of LSD-SLAM (Large-Scale Direct Monocular SLAM)
Install the LSD-SLAM packages and compile them
Set up the whole environment in order to have all the packages we need for performing LSD-SLAM with a Parrot AR.Drone.
Step 1. Create the project in ROS Development Studio (RDS)
You can simply build the project in RDS without any configuration on the local machine. If you don't have an account yet, please register here.
Step 2. Download and compile the simulation environment
Let's clone the drone simulation under simulation_ws. You can find the shell at Tools->Shell:
$ cd ~/simulation_ws/src
$ git clone https://bitbucket.org/theconstructcore/tum_ardrone_sim
Now we have to compile the package before using it.
Since the package is for ROS Indigo, we have to configure something first. Please replace the following part in the file ~/simulation_ws/src/tum_ardrone_sim/tum_ardrone/src/UINode/RosThread.h
Part 2 – Solve compilation errors & Launch the nodes for performing LSD SLAM
In this 2nd video of the series, we solve some compilation errors we got in the previous video, and we try to launch the nodes for performing LSD-SLAM.
Part 3 – Launch the LSD-SLAM ROS node in a Hector Quadrotor simulation
In this 3rd video of the series, we successfully manage to launch the LSD-SLAM ROS node in a Hector Quadrotor simulation.
Part 4 – Perform LSD-SLAM in a small village environment
In this 4th video of the series, we successfully launch the LSD-SLAM ROS nodes in a Hector Quadrotor simulation, and we perform some LSD-SLAM in a small village environment.
Many people would like to teach a MOOC about robotics; however, its preparation can take very long, especially if one wants to provide in the course something more than just a list of facts and concepts.
If the course is based on ROS, the teacher will have access to many concepts working off the shelf. Using ROS speeds up the creation of the MOOC, since it allows the teacher to demo those concepts without endless hours of preparation. Furthermore, it permits embedding student practice in the course itself.
In this article, we are going to show you a way to organize and speed up the development of your robotics MOOC when it is based on ROS. The robotics subject doesn't matter, as long as it is about programming robots to do things. We are leaving MOOCs about robotics hardware out of this tutorial.
Introduction
When we talk about robotics MOOCs here, we are talking about courses that teach some theoretical subject of robotics (inverse kinematics, SLAM, visual servoing…) but that make the student practice with real robots at the same time. We believe that it is mandatory to practice with robots at the same time in order to really understand the theory. For this reason, we are going to use the ROS infrastructure as our practical framework.
While explaining the method, we are going to build an example MOOC. Please do the example with me, so you can get practice building those courses. In case you have questions, please post them in the comments below the post.
Steps to build a robotics MOOC based on ROS
This is the list of steps we have defined to build a robotics MOOC using ROS as the base system:
Step 0: setting up the environment
The first step is about deciding which distribution structure we will have for our course. This step is required in order to create the rest of the material according to the environment selected. Take into account that, at the end, the student will have to access your MOOC somehow. That access is the distribution structure.
Distribution platforms can be based on Youtube, on any of the available MOOC academies, or just on your MOOC files offered for download from some personal page.
The whole structure
In this article, we propose using a structure based on Jupyter notebooks, because they integrate very well with ROS for practicing. We believe that teaching robotics must be practice based, and providing videos alone is not the proper way to teach (even if you can include them as additional material; more about that below).
Since we are going to use ROS and we want the student to practice, we are going to use robot simulations. Gazebo will be the simulator used here.
Now, having decided that we are going to use Jupyter notebooks+Gazebo simulations, we need to decide the way we are going to pack this and provide it to the student. You have two options for this:
Either you make the students install on their computers the software required to open your content (ROS, Gazebo and any other library you may need for your course), by means of a direct installation on their machines, a virtual machine, or a Docker image.
Or you use an online platform that already provides the full environment to your students.
Since we want to go fast and want to make it easy for the students to access the material, for this course we are going to use the second option. For the online environment, we are going to use the free tier of the ROS Development Studio (also known as RDS). Go now and create a free account at rds.theconstructsim.com, which we are going to use for the rest of the article.
In case you would prefer to build a virtual machine or Docker image, you will have to look for installation instructions on the internet. Even if that is your case, I would ask you to follow the rest of the article now, so you can learn the rest of the steps, which are independent of the installation option selected.
Step 1: decide the subject you are going to teach
You must decide the subject of robotics that you want to teach. Remember that we are talking about learning with ROS based robots from the point of view of programming robots (not about robotics hardware).
For the sake of a full example, for the rest of the article we are going to teach how to make robots navigate autonomously.
Also, you should decide in which programming language you are going to teach the course. We heavily recommend you do the course in Python (unless your subject explicitly requires the use of C++). We do not recommend using C++ for teaching robotics concepts, because your students will have a lot of compilation problems that will slow down the learning of the robotics subject (which is what really matters here).
We decided to do the example of this article in Python.
Step 2: decide which units our course will cover
We must list the units for each of the subjects to teach and the exercises that we will use in each unit. Additionally we must provide a project that the students must complete during the whole course.
In our case we are going to include the following units:
Odometry based navigation. Basic concepts of robot navigation. Exercise: make the robot move doing a perfect 1m square.
Sensors for robot navigation. Sensors used on navigation. Exercise: make the robot move around selecting the direction that has less obstacles.
SLAM: How to build a map of the environment. Exercise: make the robot build the map of an environment using the ROS navigation stack.
Monte Carlo Localization. Robot localization on a map. Exercise: make the robot localize itself in the map built in the previous unit.
Rapidly-exploring Random Trees (RRT). Robot path planning. Exercise: make the robot create paths on the map built in unit 3 while the robot is localized.
Dynamic Window Approach. Obstacle avoidance. Exercise: make the robot move autonomously using the map, localization and path planning while avoiding new obstacles in its way.
Project. Make a robot patrol a zone. For that, the student must create the map, localize the robot, and create a package that sends a sequence of points the robot has to visit, one after another, endlessly.
We are going to create an RDS project for each one of the units. Let’s start with the first unit now:
Go now to the RDS and create the project for the first Unit. Just press the button Create New Project.
Call the project Unit1_Odometry_Based_Navigation. You can add a description of the project if you want. When done, press Create.
You have now created the shell of the Unit.
Step 3: decide which robots to use
The next step is to decide which robots we are going to use for each of the units. It is convenient to provide different robots for the students to practice with. What we recommend is to use one type of robot per unit, if possible. This is interesting for the student, because they will get as many different points of view of the same concepts as possible.
As indicated, we will use simulations of the robots for demonstration and practice. The selected robots have to be suited to the concepts being taught. For example, if you are giving a course about manipulators, you will need to use simulations of robots that have arms and grippers.
For our example of robot navigation, we are going to use wheeled robots that have odometry and laser data. We decided the following robot assignment for each unit:
Unit 1 – Turtlebot 3
Unit 2 – Husky
Unit 3 – Turtlebot 2
Unit 4 – Jackal
Unit 5 – Summit XL
Unit 6 – RB-1
Project – Turtlebot 2
Since our course is based on ROS, we are going to use Gazebo simulations for our units. In order to get the simulations, you could build them yourself but, actually, unless you are covering a very exotic subject, you can find many Gazebo simulations of robots on the internet, ready to be used.
For instance, you can download all of The Construct's simulations from our public repo, which contains many simulations of different types of robots. The simulations in the repo are guaranteed to work with ROS Indigo and Gazebo 7 (as of January 2018). Our public repo is available here: https://bitbucket.org/account/user/theconstructcore/projects/PS
Now let's start populating our course with the simulation for Unit 1 (Turtlebot 3).
For that, let’s open the first unit we created previously on RDS. Then, let’s assign to this Unit 1 the simulation that we are going to use for it. In this case, it is going to be the Turtlebot 3 from Robotis. For the Turtlebot 3 we are going to download the simulation from the official documentation page of Robotis. The steps are the following:
Press the Open Project option of the project you just created on the previous step.
When the RDS main desktop appears, open a shell (top menu->Tools->Shell) and go to the simulations_ws/src directory (cd simulations_ws/src). You must always place your simulations in that directory.
Clone the Turtlebot 3 simulation from the official repo into that directory. For this robot, this step requires cloning three different repos:
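If you want to do it from the shell directly, the commands would look like this (these are the standard Robotis GitHub repos for the Turtlebot 3; double-check them against the official documentation page):

$ cd ~/simulations_ws/src
$ git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
$ git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git
$ git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git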
Now, go to the simulations_ws (cd ..) and compile it with catkin_make.
Before we launch the simulation, we need to make a small modification to the launch file. Please open a webshell (top menu->Tools->Shell). Then go to the turtlebot3_gazebo/launch directory (roscd turtlebot3_gazebo/launch) and open the turtlebot3_world.launch file (vi turtlebot3_world.launch). On line number 2 of that file you have to set the actual value of the parameter model. Remove the current content of the default value, and set it to burger.
We can now launch the simulation and see how it looks. To launch the simulation in RDS, select the Simulations option on the top menu, and then press the Select Launch File option. On the pop-up that appears, we need to select the actual launch file that we want to launch. For our example we will use the launch file that we modified in the previous step: turtlebot3_gazebo/turtlebot3_world.launch.
By selecting that launch file, a new window will appear, loading the Turtlebot 3 simulation.
NOTE: In case you have problems with the instructions to install the simulation, please watch the following video that shows step by step how to do it.
Step 4: create the notebook
Let's create the notebook that will contain the explanatory text of the unit. That is the text that teaches the student about the unit's subject. For the notebooks we use Jupyter notebooks. Jupyter notebooks allow us to embed text together with Python code, images, videos, and results obtained in real time from the simulation.
Write the notebook on RDS
You should now open the Jupyter notebook and start writing your unit content.
For the sake of speed, let's use a notebook file I previously created for this course. Let's use this file as if we had already written the notebook content for the students. You can also use it as a way of learning the type of content you can embed in a notebook.
You can get this file from the link of the webinar where we explained all the details of this post (https://youtu.be/Z8d1TY8gJ3Q). Go to the webinar link, look at the notes of the video, and look for the Notebook example for download link. Please download the file from the link, and then let's do the following steps together to set that file as the notebook of the first Unit.
Go to the link of the webinar.
Download the zipped ipynb file that contains the notebook of Unit 1.
Go to the RDS and open the IDE Jupyter notebook in RDS.
Select Upload and upload the notebook file.
Uncompress it using a webshell. Go to the notebook_ws and type unzip <file_name>.
Open the uploaded file from the Jupyter notebook tool of RDS.
Now you have the notebook of Unit 1.
NOTE: In this example we have used an already-created notebook for the sake of simplicity and demonstration purposes. However, you can create your own notebook inside the RDS from scratch (instead of using the file provided here). To do that, you only need to open the Jupyter notebook (top menu->Tools->Jupyter Notebook). On the new window that appears, just press the New button, and then the Python 2 option under the Notebooks section. A new, empty notebook will appear that you can populate. Play with all the options there. You can rename the file and add all the content you want. Remember that all the notebooks that you create there are going to be stored in the notebooks_ws, so they will be shipped with the whole project.
When creating the material for your own course, you will write all the content yourself. Let me show you some of the things you can do with a notebook, by following the content of the file already provided:
In this cell we have the text explaining some of the context to the user.
In this cell we have included a video from Youtube, which permits including dynamic explanations and examples of what we mean, so it is easier for the student to understand the concepts.
In those cells we have code that can be copied and executed in the Linux shells.
In this cell we have some code that directly affects the simulation of the robot.
Some important points to take into consideration:
You can modify the whole notebook at will, or you can create new ones.
Your Jupyter notebooks must be stored in the notebook_ws workspace.
Your notebooks must contain exercises for the student.
Now let’s save the current status of the whole unit. For that, press the save icon on the top menu. This saves everything from all the workspaces of the project.
Step 5: provide some example code to the students
We may want to provide some already-written code to the students, so they can modify it or use it as an example for their exercises.
In that case, the code must be put in the catkin_ws, and referenced in the notebook.
For Unit 1 of our MOOC, the student has to do an exercise moving the robot in a perfect 1-meter square. In order to help them a little, we provide a code example that reads the odometry and moves the robot forward.
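A minimal sketch of what such a node could look like (this is not the actual basics.py from the package, just the idea: subscribe to the odometry and publish a forward velocity; /odom and /cmd_vel are the standard Turtlebot 3 topic names):

#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

def odom_callback(msg):
    # Print the current position so the student sees the odometry readings
    pos = msg.pose.pose.position
    rospy.loginfo("x: %.2f, y: %.2f", pos.x, pos.y)

rospy.init_node('basics')
rospy.Subscriber('/odom', Odometry, odom_callback)
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

rate = rospy.Rate(10)
cmd = Twist()
cmd.linear.x = 0.2  # move forward slowly
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()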
Let's add this example code into the project of Unit 1. In order to get the example code, go to the webinar page again and download the tar.gz file containing the code for the student (look for the link that says Example of package with ROS code for the course). Follow the next steps to include the package in Unit 1.
Download the file from the webinar page.
Go to the IDE of RDS (top menu->Tools->Code Editor), and select the catkin_ws workspace. Then, browse the directories until you open the src directory.
Right-click on that directory and select Upload. Then upload the compressed file you downloaded from the webinar.
Now the code file is in the RDS, inside your project. Use a webshell and visit the catkin_ws/src directory. Once there, uncompress the file with tar xvfzp <name_of_file>. You now have the example code in your catkin_ws, ready to be executed on the robot. Since it is Python code, you do not need to compile it.
To launch that code, type rosrun t3_basics basics.py. The Turtlebot 3 robot should start moving.
You can now delete the compressed file you uploaded.
Once you have the code done, go and save everything again. The Save icon will save all your changes made in any of your three workspaces: catkin_ws, simulation_ws or notebook_ws.
At this point, we have all the material for Unit 1 done.
Step 6: continue with next unit
Move on to the next Unit until you have all of them done. We recommend that you create one project per Unit, but it is also possible to have all the Units in a single project. The drawback of putting everything together is that the final file will be too large and confusing, since the code, notebooks and simulations provided for each unit will be mixed all around the project. So it is not recommended at all. Additionally, by separating the Units into projects, you can share each Unit separately at different times (for example, the student does not get Unit 2 until the exercise of Unit 1 is sent).
Step 7: share with students
At this point, you should have the whole MOOC built. You can now share the MOOC with your students. In order to share, you have three options:
First option: you share the units inside the RDS platform. Sharing inside the RDS is very simple and prevents having to make students install and configure their equipment. It is as easy as having a list of all the participants and pushing the share button. For sharing, do the following:
Go to the page that lists all your RDS projects (world icon on the top menu).
Press the Share option on your project.
On the dialog that appears, paste the emails of the students and press Share Files.
That’s it! Your students will receive the units in their own area in RDS and will be able to open and execute the exact same thing that you did.
Second option: download your project from the RDS and put it at a link somewhere (maybe on your University server) so anybody with the link can download it. You can download your content from RDS at any point in time. To do that, just use the IDE to go to the workspace you would like to download, right-click, and select Download Files. This will download the whole workspace, which you can put on a local machine or elsewhere to work in the same way as it was working on RDS.
Third option: if you want your course to be provided automatically to the whole world (with no work on your side), you can send us your packages to courses@robotingiteacademy.com and we will publish it in our academy (the Robot Ignite Academy). If your course is included in our catalogue of courses, you will receive a payment based on the use of your course.
Done!
Are the instructions clear? Did it work for you? In case you had trouble with the written instructions, we have created a webinar explaining all those steps. You can watch how the steps are performed by following the video of the webinar, below.
Conclusion
Traditionally, MOOC courses have been based on video tutorials where a person explains the subject, and the student… does what he can to understand. Here we propose a more dynamic and interesting way of creating your MOOC, by building interactive elements.
As you can see, working with ROS, Gazebo and Python notebooks is a powerful and fast way of creating an engaging MOOC for the students. All the tools explained here are free, so there is no excuse to not start creating your MOOC right now.
Additionally, if you are still interested in the video path, you can record yourself while explaining through the notebook and simulations. That is the way we have done it at the Robot Ignite Academy for more than a year already, with huge success, because the teacher supports his explanation with the notebook, and the students engage in the explanation with practice while listening to the teacher.
As an example, in the video above you can see how I'm teaching Robot Navigation at LaSalle University using the Robot Ignite Academy. Every student is practicing at the same time I explain the lesson to them. Additionally, they can do that from any place, with any computer. As you can see in the video, we brought the whole class to the cafeteria where our Barista robot is serving coffees, and they were able to keep practising, programming and testing prior to sending their programs to the real robot.
Much as we would like them to, engineering students do not receive proper ROS training during their undergraduate period.
This is a problem when students get engaged in the development of their MSc thesis inside one of the labs of the University, or want to start their PhD. The students must dedicate a long time to getting up to speed with ROS before they can really use the code that is already there.
The typical option for the lab is to provide the student with a computer and a link to the ROS Wiki tutorials. Hence the student will spend days and weeks trying to get the most out of them.
Here we propose a smooth learning path for your interns in order to maximize their learning speed. This path is inspired by the book The First 20 Hours, and basically consists of four steps:
1. Deconstruct ROS
Deconstructing ROS means identifying the different parts that compose ROS. This work has to be done by somebody who already knows ROS. We have done that work and identified the following main parts:
Installation and setup
Basic organization of the development environment with ROS (catkin workspaces, compilers issues, CMakeLists, IDE configuration)
Basic subjects: packages, roscore, rosparam server
Topics: publishers, subscribers, messages, how to create your own messages
Services: clients, servers, service messages, how to create your own service messages
Actions: clients, servers, action messages, how to create your own action messages
How to make a robot navigate: mapping, localization, path planning and obstacle avoidance
How to make a robot perceive: blob detection with OpenCV, object recognition, point cloud usage, people detection and recognition
Gazebo simulations
URDF robot creation
Robot Control
How to make a robot manipulate objects: MoveIt! usage, combining perception with manipulation, grasping
The previous points cover a global knowledge of ROS. This does not mean that your students will have to learn it all. Those points just express the most common subjects required for the creation of an autonomous robot (like, for example, the ones used in the Robocup@home competition).
2. Remove stuff
That is, we eliminate as many things as possible, things that are not really necessary right now. The idea is to get 80% of the results with 20% of the effort.
C++ or Python?
For the moment, we eliminate the necessity of using C++ for ROS. Using C++ in ROS introduces three problems:
First, the C++ version of ROS includes a lot more concepts, like the node_handle, the callback_queue, or having to deal with the threads of each callback (if you want them to run in parallel). Python handles all that by itself.
Second, by using Python, the student needs to know just the minimum of CMake handling. Making a proper CMakeLists.txt in C++ is a nightmare. Instead, in Python you almost never need to touch the default one.
Third, using C++ for ANYTHING makes the development a lot harder, and hence slower. And here, speed is the key. The faster the student can close the loop of doing something and experiencing the result, the faster their brain will make the connection that makes them learn the concept. With C++, compilation problems are a first-order obstacle.
I know what you must be thinking: but C++ is our development language!! They need to learn in that language.
I understand
However, starting their development in C++ is a bad idea, because it makes the pill a lot bigger to swallow. Even if you need the student to use ROS with C++, I recommend you start with Python (even if the student doesn't know Python yet!!!)
Unless the student is a master of C++, the following path:
Learning Python -> Learning ROS with Python -> Learning ROS with C++
is faster than the path:
Learning ROS with C++
Even if the student already knows C++, the additional amount of knowledge required to get ROS working in that language makes progress very slow.
Installation
The student doesn't need to know how to install ROS. Starting with the installation is a waste of time, since at that point the student knows nothing about ROS and may run into trouble with the installation. ROS installation is a stupid step that has nothing to do with ROS, intelligence or knowledge. It is just an experience that the student will go through much faster once he already knows ROS.
So we suggest you spare your students that step.
How to create a catkin_ws
Again, this is a concept that is difficult to grasp if you don't know ROS and don't understand the problems of programming with it. Trying to make the student understand why he has to create a catkin_ws, how to do it and where, is a waste of time. The student will not understand, and will keep coming back to this point again and again, slowing down the progress.
Other points to remove
Other points to avoid are: configuring IDEs for programming with ROS (you must provide it already done), what ROS is (who cares), and how to set up Gazebo (provide it). Basically, you have to remove the barriers, physical, emotional and mental, that make the work of learning ROS a lot harder.
So at this point what we propose is that you have a system already set up and working for the student, so that he can concentrate on learning ROS with Python.
3. Learn ROS
Here is where the student has to dedicate the time to learn. But the learning has to be done intelligently; that means following a proper order and with the option of self-correcting at each step. The faster the self-correction happens, the faster the learning will be.
We propose providing your students with the following sequence of ROS tutorials (from wiki.ros.org), which we have found to be optimal for faster learning:
In order to learn fast, the student has to practice. Copy + paste of the code of the wiki does not count as practice. Practice means providing the student with exercises to be solved (without the solution provided). The exercises have to be tied to a simulation, so the student can see the results of his efforts quickly and with meaning. Providing just a bare example, for instance a topic publishing some text, is not good enough to get the student engaged. And engagement here is key if we want the student to learn quickly. The student will be a lot more motivated if he can see the result of his efforts on a robot. We recommend providing simulated robots, since they offer the possibility of testing quickly.
Practice should include an exam, if possible. Students learn the most under the stress of an evaluation. Sorry, but it works like that (I did not make the rules).
Super important: in order to apply the method and speed up learning, the student has to practice for at least 20 hours in a row. Dedicating 20 hours focused on the important points will create a momentum that will increase the speed of learning.
Summary
So how can you get your students up to speed with ROS fast? This is a summary of what we have said above:
Provide a fully set up environment: a computer with the complete ROS + Gazebo + development IDE installed.
Provide the student with the optimal list of tutorials to follow (either the one above or your own), but do not just point the student to the ROS wiki.
Provide a full list of exercises that the student has to solve, without providing the solution.
I know that all that is a lot of work, but if you invest in it once, you will speed up the process of integrating new students into your lab, and your results will improve.
Another option is to use the services of the Robot Ignite Academy. In our academy we provide everything already done for your students, organised in the proposed manner, including development environments, robot simulations, exercises and exams. Everything already works and requires only a web browser. No installation required; any computer will work. Give it a try!