Webinar | How to develop with ROS… fast!

In this webinar we are going to show how to develop your ROS programs quickly.
We will do a live demo of the ROS Development Studio (RDS), the online web environment that allows ROS development from any type of computer.

We will show:

1- How to develop a ROS program with RDS using a web browser
2- How to test the program in any of the provided simulations
3- How to debug the program using Rviz and other ROS tools
4- How RDS integrates with git
5- How to use the RDS to create a shareable demo of your product or research result, that others can use off-the-shelf

The presentation will be 20 minutes plus 10 minutes of questions.

Please start watching from 02:20.

QUESTIONS ASKED DURING THE WEBINAR THAT WERE NOT RECEIVED BY THE SPEAKER:

* Vamsi Tungala: why are simulation time and real time different?
Speaker: Because real time is the time we live by, the clock time. Simulation time is the time inside the simulation. It may happen that the simulation is very complex, so simulating one second of the robot takes 10 seconds of real time. It can also happen that you use very fast computers and simulate faster than real time.
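The relationship between the two clocks can be captured in a small illustrative sketch (plain Python, not ROS API; the function name is ours) of the "real-time factor" that simulators such as Gazebo report:

```python
def real_time_factor(sim_elapsed_s, wall_elapsed_s):
    """Ratio of simulated seconds to wall-clock seconds.

    A factor below 1 means the simulation runs slower than real time;
    above 1 means it runs faster than real time.
    """
    return sim_elapsed_s / wall_elapsed_s

# A complex simulation: 1 simulated second takes 10 real seconds.
print(real_time_factor(1.0, 10.0))   # 0.1 (10x slower than real time)

# A fast computer: 30 simulated seconds pass in 10 real seconds.
print(real_time_factor(30.0, 10.0))  # 3.0 (3x faster than real time)
```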

* Mohamed Abdelkader Zahana: do you need __main__ ?
Speaker: No. No main was required for that example. It was just a simple example. For more complex environments a main definition may be required.

* Sergio Polimante: what is the best way, especially free ways, to learn fast and solid content [of ROS]?
Speaker: I would recommend watching this video covering the most common methods of learning ROS: https://www.youtube.com/watch?v=udHlvH6TGvo

* Lisset Salinas Pinacho: As this is a web interface, where do the files go? Are they kept on the server?
Speaker: You can leave them there, push them to your git repository, or download them using the download option. In the near future there will also be an option to execute on the real robot directly from the platform.

Webinar | How to train your team with ROS for self-driving cars

The rapid development of autonomous cars has created a large demand for self-driving car engineers. Among the required skills, knowing how to program with ROS is becoming an important one. In this webinar you will learn how to get started with self-driving cars using ROS.

RELATED LINKS:

* Autoware autonomous cars software
* ADA’s car ROS interface
* Five ways to learn ROS
* Robot Ignite Academy for learning ROS online
* ROS for autonomous cars tutorial
* Duckietown project
* Gazebo simulation of autonomous cars
* Robotics worldwide mailing list
* ROS real time report by TUM and BMW
* ROS Developers LIVE-Class
* ROS in 5 days series of books

ROS Webinars: How to Teach ROS?

Learn a method to teach ROS fast with no hassle.

AIMS & SCOPE

The aim of this one-hour webinar is to show you how to change your classes from passive listening to active practising. Move away from a slide-based teaching method to a notebook-based one, where direct interaction with robots is embedded in the method itself.

WHO SHOULD ATTEND

Teachers who may need to prepare a syllabus for a summer/winter ROS course, for a future semester, or for a robotics programming course.
We are not going to teach ROS, but how to teach ROS for fast learning.

HOW TO IMPLEMENT THE THEORY FOR A ROS COURSE

The Notebook-Simulation Approach:
1. Creating the Notebooks
2. Embedding ROS Code in the Notebook
3. Embedding Real-time Graphics in the Notebook
4. Embedding Controls in the Notebook
5. Creating the Gazebo simulations
6. Connecting simulations to notebooks
7. Including projects
8. Including exams
9. Teaching Schedule


How to Start with Self-Driving Cars Using ROS


Self-driving cars are inevitable.

In recent years, self-driving car research has become the main direction of automotive companies. BMW, Bosch, Google, Baidu, Toyota, GE, Tesla, Ford, Uber, and Volvo are investing in autonomous driving research. Many new companies have also appeared in the autonomous car industry: Drive.ai, Cruise, nuTonomy, and Waymo, to name a few (read this post for a list of 260 companies involved in the self-driving industry).

The rapid development of this field has created a large demand for autonomous car engineers. Among the required skills, knowing how to program with ROS is becoming an important one. You just have to visit the robotics-worldwide list to see the large number of job offers for working or researching in autonomous cars that demand knowledge of ROS.

Why ROS is interesting for Autonomous Cars

Robot Operating System (ROS) is a mature and flexible framework for robotics programming. ROS provides the required tools to easily access sensor data, process it, and generate an appropriate response for the motors and other actuators of the robot. The whole ROS system has been designed to be fully distributed in terms of computation, so different computers can take part in the control processes and act together as a single entity (the robot).

Due to those characteristics, ROS is a perfect tool for self-driving cars. After all, an autonomous vehicle can be considered just another type of robot, so the same types of programs can be used to control it. ROS is interesting for autonomous cars because:

  1. There is a lot of code for autonomous cars already created. Autonomous cars require algorithms that can build a map, localize the robot using lidars or GPS, plan paths along maps, avoid obstacles, process point clouds or camera data to extract information, and so on. All the kinds of algorithms required for the navigation of wheeled robots are almost directly applicable to autonomous cars. Hence, since those algorithms have already been created in ROS, self-driving cars can just use them off-the-shelf.
  2. Visualization tools are already available. ROS provides a suite of graphical tools that allow easy recording and visualization of data captured by the sensors, and represent the status of the vehicle in a comprehensible manner. It also provides a simple way to create additional visualizations for particular needs. This is tremendously useful when developing the control software and trying to debug the code.
  3. It is relatively simple to start an autonomous car project with ROS on board. You can start right now with a simple wheeled robot equipped with a pair of wheels, a camera, a laser scanner, and the ROS navigation stack, and you are set up in a few hours. That can serve as a basis to understand how the whole thing works. Then you can move to more professional setups, for example buying a car that is already prepared for autonomous car experiments, with full ROS support (like the Dataspeed Inc. Lincoln MKZ DBW kit).

Self-driving cars companies have realized those advantages and have started to use ROS in their developments. Examples of companies using ROS include BMW (watch their presentation at ROSCON 2015), Bosch or nuTonomy.


Weak points of using ROS

ROS is not all nice and good. At present, ROS presents two important drawbacks for autonomous vehicles:

  1. Single point of failure. All ROS applications rely on a software component called the roscore. That component, provided by ROS itself, is in charge of handling all the coordination between the different parts of the ROS application. If the component fails, the whole ROS system goes down. This implies that it does not matter how well your ROS application has been constructed: if the roscore dies, your application dies.
  2. ROS is not secure. The current version of ROS does not implement any security mechanism to prevent third parties from getting into the ROS network and reading the communication between nodes. This implies that anybody with access to the car's network can get into the ROS messaging and hijack the car's behavior.

Both drawbacks are expected to be solved in the newest version of ROS, ROS 2. Open Robotics, the creators of ROS, recently released a second beta of ROS 2, which can be tested here. A release version is expected by the end of 2017.

In any case, we believe that the ROS-based path to self-driving vehicles is the way to go. That is why we propose a low-budget learning path for becoming a self-driving car engineer, based on the ROS framework.

Our low-cost path for becoming a self-driving car engineer

Step 1

The first thing you need to do is learn ROS. ROS is quite a complex framework and requires dedication and effort to learn. Watch the following video for a list of the 5 best methods to learn ROS. Learning basic ROS will help you understand how to create programs with the framework and how to reuse programs made by others.


Step 2

Next, you need to get familiar with the basic concepts of robot navigation with ROS. Learning how the ROS navigation stack works will introduce you to basic navigation concepts like mapping, path planning, and sensor fusion. There is no better way to learn this than taking the ROS Navigation in 5 days course developed by Robot Ignite Academy.
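To give a flavor of what a planner inside the navigation stack does, here is a minimal illustrative sketch (plain Python, not actual navigation stack code) of path planning on an occupancy grid using breadth-first search:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid.

    grid: list of rows, 0 = free cell, 1 = obstacle.
    Returns the list of (row, col) cells from start to goal, or None.
    """
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through the predecessors to rebuild the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# A 3x3 map with a wall in the middle row: the path must go around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The real navigation stack uses costmaps and more sophisticated planners, but the idea of searching a grid built from sensor data is the same.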

Step 3

The third step is to learn the basic ROS applications for autonomous cars: how to use the sensors available in any standard autonomous car, how to navigate using a GPS, how to build an obstacle detection algorithm based on the sensor data, and how to interface ROS with the CAN bus protocol used in all the cars in the industry.

The following video tutorial is ideal for starting to learn ROS applied to autonomous vehicles from zero. The course teaches how to program a car with ROS for autonomous navigation using an autonomous car simulation. The video is available for free, but if you want to get the most out of it, we recommend doing the exercises at the same time by enrolling in the Robot Ignite Academy (additionally, in case you like it, you can use the discount coupon 99B9A9D8 for a 10% discount).
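As a taste of the kind of exercise involved, here is a tiny illustrative sketch (plain Python, not a course solution; the function name is ours) of obstacle detection from laser scanner ranges:

```python
def obstacle_ahead(ranges, safety_distance=1.5):
    """Return True if any laser reading is closer than safety_distance.

    ranges: list of distances in meters, one per laser beam;
    float('inf') means nothing was detected along that beam.
    """
    return any(r < safety_distance for r in ranges)

# Hypothetical scan readings: the object at 0.8 m triggers a stop.
print(obstacle_ahead([5.0, 3.2, 0.8, float('inf')]))  # True
print(obstacle_ahead([5.0, 3.2, 2.1, float('inf')]))  # False
```

In a real ROS node the ranges would come from a sensor_msgs/LaserScan message, but the decision logic is this simple at its core.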

Step 4

After the basic ROS for Autonomous Cars course, you should learn more advanced subjects like obstacle and traffic signal identification, road following, and coordination of vehicles at crossroads. For that purpose, our recommendation is the Duckietown project. That project provides complete instructions to physically build a small-scale town, with lanes, traffic lights, and traffic signals, where you can practice algorithms for real (even if at a small scale). It also provides instructions to build the autonomous cars that populate the town. The cars are based on differential drives with a single camera as sensor, which is why they achieve a very low cost (around $100 per car).


Image by Duckietown project

Due to its low cost, and the good hands-on experience it provides, the Duckietown project is ideal for starting to practice autonomous car concepts like vision-based line following, detection of other cars, and traffic signal based behavior. Still, if your budget is below even that cost, you can use a Gazebo simulation of the Duckietown and still be able to practice most of the content.

Step 5

Then, if you really want to go pro, you need to practice with real-life data. For that purpose, we propose installing and learning the Autoware project. This project provides real data obtained from real cars on real streets, by means of ROS bags. ROS bags are logs containing data captured from sensors, which ROS programs can consume as if they were connected to the real car. By using those bags, you will be able to test algorithms as if you had an autonomous car to practice with (the only limitation is that the data is always the same, restricted to the situation that happened when it was recorded).
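Conceptually, a ROS bag is a time-ordered log of (topic, message, timestamp) records. As a plain-Python illustration (the real rosbag API differs, but its read_messages(topics=...) call works in the same spirit), filtering a recorded log by topic looks like this:

```python
# Illustrative sketch: a "bag" modeled as a list of (topic, message, t)
# records, with made-up sample data.
log = [
    ('/scan', [2.0, 1.9, 2.1], 0.00),
    ('/gps',  (48.137, 11.575), 0.05),
    ('/scan', [2.0, 1.8, 2.0], 0.10),
]

def read_messages(bag, topics):
    """Yield only the records published on the requested topics,
    preserving the recorded order."""
    for topic, msg, t in bag:
        if topic in topics:
            yield topic, msg, t

# Feed only the laser scans to an algorithm under test.
scans = [msg for topic, msg, t in read_messages(log, ['/scan'])]
print(len(scans))  # 2
```

Your algorithm cannot tell whether those messages come from a live robot or from a log, which is exactly why bags are so useful for testing.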


Image by the Autoware project

The Autoware project is a huge, impressive project that, apart from the ROS bags, provides multiple state-of-the-art algorithms for localization, mapping, and obstacle detection and identification using deep learning. It is complex, but definitely worth studying for a deeper understanding of ROS with autonomous vehicles. I recommend watching the Autoware ROSCON 2017 presentation for an overview of the system (available in October 2017).

Step 6

The final step is to start implementing your own ROS algorithms for autonomous cars and test them in different, close-to-real situations. The previous step provided you with real-life situations, but always fixed to the moment the bags were recorded. Now it is time to test your algorithms in a wider variety of situations. You can mix already existing algorithms from all the steps above, but at some point you will see that those implementations lack things required for your goals. You will have to start developing your own algorithms, and you will need lots of tests. For this purpose, one of the best options is to use a Gazebo simulation of an autonomous car as a testbed for your ROS algorithms. Recently, Open Robotics released a simulation of cars for the Gazebo 8 simulator.


Image by Open Robotics

That ROS-based simulation contains a Prius car model equipped with a 16-beam lidar on the roof, 8 ultrasonic sensors, 4 cameras, and 2 planar lidars, which you can use to practice and create your own self-driving car algorithms. By using the simulation, you can put the car in as many different situations as you want, check whether your algorithm works in those situations, and repeat as many times as needed until it works.

Conclusion

Autonomous cars are an exciting field whose demand for experienced engineers is increasing year after year. ROS is one of the best options for quickly jumping into the subject, so learning ROS for self-driving vehicles is becoming an important skill for engineers. We have presented here a full path to learn ROS for autonomous vehicles while keeping the budget low. Now it is time to put in the effort and learn. Money is not an excuse anymore. Go for it!

Course

ROS Autonomous Vehicles 101

Introduction to Autonomous Vehicles in the ROS Ecosystem

The Need For Robotics Standards


Last week I was talking to a lead engineer at a Singapore company that is building a benchmarking system for robot solutions. Having seen my ROSCON 2016 presentation about robot benchmarking, he asked me how I would benchmark solutions that are not ROS compatible. I said that I wouldn't. I would not dedicate time to benchmarking solutions that are not ROS based. Instead, I suggested, I would use the time to polish the ROS-based benchmarking and suggest that vendors adopt that middleware in their products.

Benchmarks are necessary and they need standards

Benchmarks are necessary in order to improve any field. With a benchmark, different solutions to a single problem can be compared, and hence a direction for improvement can be traced. To date, robotics lacks such a benchmarking system.

I strongly believe that in order to create a benchmark for robotics we need a standard at the level of programming.

By having a standard at the level of programming, manufacturers can build their own hardware solutions at will, as long as they are programmable with the standard. That is the approach taken by devices that plug into a computer. Manufacturers create the product on their own terms, and then provide a Windows driver that allows any computer in the world (running the Windows standard) to communicate with the product. Once this computer-to-product communication is in place, you can create (Windows) programs that compare the same type of device (from different manufacturers) for performance, quality, noise, or whatever your benchmark is trying to measure.

You see? Different types of devices, different types of hardware. But all of them can be compared through the same benchmarking system, because it relies on the Windows standard.

Software development for robots also needs standards

The need for standards is not only about comparing solutions but also about speeding up robotics development. With a robotics standard, developers can concentrate on building solutions that do not have to be re-implemented whenever the robot hardware changes. Indeed, given the middleware structure, developers can dissociate themselves from the hardware so much that they can spend almost 100% of their time in the software realm while developing for robotics (something software developers like very much: being away from the hardware. Actually, this is one of the reasons why so few good software developers exist in robotics; most of us are a mix of hardware and software… now you understand the current state of AI for robotics ;-)).

We need the same type of standard for robotics. We need a kind of operating system that allows us to compare different robotics solutions. We need the Windows of the PCs, the Android of the phones, the CAN of the buses…


A few standard proposals and a winner

But you already know that. I'm not the first one to state this. Actually, many people have already tried to create such a standard. Some examples include Player, ROS, YARP, OROCOS, Urbi, MIRA, or JdE Robot, to name a few.

Personally, I don't care which one the standard is. It could be ROS, it could be YARP, or it could be another that has not been created yet. The only thing I really care about is that we adopt a standard as soon as possible.

And it looks like the developers have decided. Robotics developers prefer ROS as their common middleware to program robots with.

No other middleware for robotics has had such large adoption. Some data:

                                      ROS        YARP       OROCOS
Number of Google pages:               243,000    37,000     42,000
Citations of the middleware paper:    3,638      463        563
Alexa ranking:                        14,118     1,505,000  668,293

Note 1: Only showing the current big three players

Note 2: Very simple comparison. Difficult to compare in other terms since data is not available

Note 3: Data measured in August 2017. May vary at the time you are reading this. Links provided on the numbers themselves, so you can check by yourself.

So it is not only the feeling that we roboticists have; the numbers also indicate that ROS is becoming the standard for robotics programming.


Why ROS?

The question is then: why has ROS emerged on top of all the other contestants? None of them is worse than ROS in terms of features. Actually, you can find features in each of the other middlewares that outperform ROS. If that is so, why or how has ROS achieved the status of standard?

A simple answer from my point of view: excellent learning tutorials and debugging tools.


There is a video where Leila Takayama, an early developer of ROS, explains when she realized that the key to having ROS used worldwide would be to provide tools that simplify the reuse of ROS code. None of the other projects has such a clear and structured set of tutorials, and even fewer of those middlewares include debugging tools in their packages. Lacking those two essential points prevents new people from adopting those middlewares (even if I understand the developers of OROCOS and YARP for not providing them… who wants to write tutorials or build debugging tools… nobody! 😉).

Additionally, it is not only about tutorials and debugging tools. The ROS creators also built a good package management system. As a result, developers worldwide can use each other's packages in a (relatively) easy way. This created an explosion in available ROS packages, providing almost anything off-the-shelf for your brand new ROSified robot.

Now, the rate at which contributions are made to the ROS ecosystem is so high that ROS is almost unstoppable in terms of growth.


What about companies?

In the beginning, ROS was mostly used by students at universities. However, as ROS has matured and the number of packages has increased, companies are realizing that adopting ROS is also good for them, because they can use the code developed by others. On top of that, it is easier for them to hire engineers who already know the middleware (otherwise they would need to teach newcomers their own middleware).

Based on that principle, many companies have jumped onto the ROS train, developing their products from scratch to be ROS compatible. Examples include Robotnik, Fetch Robotics, Savioke, Pal Robotics, Yujin Robots, The Construct, Rapyuta Robotics, Erle Robotics, Shadow Robot, or Clearpath, to name a few of the sponsors of the next ROSCON 😉 . By creating ROS-compatible products, they decreased their development time by several orders of magnitude.

To take things further, two Spanish companies have revolutionized the standardization of robotics products around the ROS middleware. On one side, Robotnik has created the ROS Components shop, where anyone can buy ROS-compatible devices, from mobile bases to sensors and actuators. On the other side, Erle Robotics (now Acutronic Robotics) is developing Hardware ROS. H-ROS is a standardized software and hardware infrastructure to easily create reusable and reconfigurable robot hardware parts. ROS is taking over hardware standardization too! And this time it is driven by companies, not research. That must mean something…


Finally, it looks like industrial robot manufacturers have understood the value that a standard can provide to their business. Even if they do not build their industrial robots ROS-enabled from scratch, they are adopting ROS-Industrial, the industrial flavour of ROS, which allows them to ROSify their industrial robots and reuse all the software created for manipulators in the ROS ecosystem.

But are all companies getting on the ROS bus? Not all of them!

Some companies like Jibo, Aldebaran, or Google still do not rely on ROS for programming their robots. Some of them rely on their own middleware, created before ROS existed (that is the case for Aldebaran). Others, though, are creating their own middleware from scratch. Their reasons: they do not believe ROS is good enough, they have already created a middleware, or they do not want their products to depend on someone else's middleware. Those companies have very fair reasons to go their own way. However, will that make them competitive? (If we judge from previous history, such as mobile phones and VCRs, the answer may be no.)

So is ROS the standard for programming robots?

It is still too soon to answer that question. It looks like ROS is becoming the standard, but many things can change. It is unlikely that another middleware will take the current number-one title from ROS, but something could still wipe ROS off the map (maybe Google will release its middleware to the public, like they did with Android, and take the sector by storm?).

Still, ROS has its problems, like a total lack of security or the instability of some important packages. Even if the OSRF group is working hard to build a better ROS system (for instance, ROS 2 is in beta phase with many improvements at its roots), some hard work is still required for some basic things (like the ROS controllers for real robots).


Given those numbers, at The Construct we believe that ROS IS THE STANDARD (that is why we are devoted to creating the best ROS learning tutorials in the world). Actually, it was thanks to this standardization that two Barcelona students were able to create an autonomous robot product for coffee shops in only three months, starting from zero knowledge of robotics (see the Barista robot).

This is the future that is coming, and it is good. In that future, thanks to standards, almost anyone will be able to design, build, and program their own robotics product, in a similar way to how PC stores build computers today.

So my advice, as I said to the Singapore engineer, is to bet on ROS. Right now, it is the best option for a robotics standard.

Go to point program for ground robots using ActionLib



Navigation is one of the most challenging tasks in robotics. To reach a goal or follow a trajectory, the robot must know the environment and localize itself within it, using its sensors and a map.

But when we have a robot that already has this information, it is possible to start navigating: defining points to follow in order to reach a goal. That's the point we are starting from in this post. Using a very popular ground robot, the TurtleBot 2, we are going to perform a navigation task, or part of it.

In order to program the robot, RDS (ROS Development Studio) is going to be used. Before starting, make sure you are logged in with your credentials, are able to see the public simulation list (as below), and can run the Kobuki Gazebo simulation.

RDS Public Simulation List


At this point, you should have the simulation running. On the left side of the screen, you have a Jupyter notebook with the basics and some instructions about the robot; in the center, the simulation; on the right, an IDE and a terminal. It is possible to set each one to full-screen mode or reorganize the screen as you wish. Feel free to explore!


So, you have already noticed that the simulation is running, the robot is ready, and you might have sent some velocity commands already. Now let's look at our action server, which sends the velocity commands that make the robot achieve a goal. First, clone this repository [https://marcoarruda@bitbucket.org/TheConstruct/theconstruct_navigation.git] to your workspace (yes, your cloud workspace, using RDS!). Done? Let's explore it for a while. Open the file src/theconstruct_navigation/go_to_point2d/src/go_to_point2d_server.cpp and let's walk through it.

So, let’s start from the main function of the file. You can see there we are creating a node (ROS::init()) and a GoToPoint2DAction object. That’s the name of the class created at the beginning of the file. Once this variable is created, all methods and behaviors of the class will be working.

Now, taking a look inside the class, we can see that there are some methods and attributes. The attributes are used only inside the class; the interface between the object and our ROS node is the set of public methods.

When the object is instantiated, it registers the mandatory callbacks for the actionlib library (goalCB, the one that receives the goal points we want to send the robot, and preembCB, which allows us to interrupt the task). It also reads some parameters from the launch file and, finally, creates a publisher for the velocity topic and subscribes to the odometry topic, which is used to localize the robot.

Let’s compile it! Using the terminal, enter into the directory catkin_ws (cd catkin_ws) and compile the workspace (catkin_make). It may take some minutes, because we are generating the message files. The action message is defined at the folder theconstruct_navigation/action/GoToPoint2D.action. You can explore there and see what it expects and delivers.

Finally, let’s run the action server. Use the launch file to set the parameters:
roslaunch go_to_point2d go_to_point2d_server.launch. Did the robot move? No? Great! The server is waiting for the messages, so it must not send any command until we create an action client and send the requests. First, let’s take a look in the launch file:

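The launch file screenshot is not reproduced here, but a sketch of what such a launch file might look like, with illustrative parameter names and values mirroring the nine parameters discussed below (not necessarily the names used in the actual package), is:

```xml
<launch>
  <node pkg="go_to_point2d" type="go_to_point2d_server"
        name="go_to_point2d_server" output="screen">
    <!-- Linear velocity limits and proportional gain (illustrative names) -->
    <param name="max_linear_vel"  value="0.3" />
    <param name="min_linear_vel"  value="0.05" />
    <param name="linear_gain"     value="0.5" />
    <!-- Angular velocity limits and gain -->
    <param name="max_angular_vel" value="1.0" />
    <param name="min_angular_vel" value="0.1" />
    <param name="angular_gain"    value="1.5" />
    <!-- Tolerances for declaring the goal reached -->
    <param name="x_tolerance"     value="0.1" />
    <param name="y_tolerance"     value="0.1" />
    <param name="yaw_tolerance"   value="0.2" />
  </node>
</launch>
```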

Notice that we have some parameters to define the limits of the robot's operation. The first 3 set the maximum and minimum linear velocity, plus a gain used to compute the robot's speed in a straight line, since the speed depends on the distance between the robot and the goal point.

The next 3 parameters set the same limits, but for the angular velocity.

Finally, the last 3 parameters establish a tolerance for the robot. The robot's odometry and yaw measurements are not perfect, so we need to allow for some error. The tolerance cannot be too small, otherwise the robot will never reach its goal; if it is too big, the robot will stop far from the goal (how far depends on the robot's perception).
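Those three groups of parameters can be summarized in a small illustrative sketch (plain Python, not the actual C++ server; the parameter values are made up, like those in a launch file) of how such a server turns distance to the goal into a clamped, proportional velocity command:

```python
def linear_velocity(distance, gain=0.5, v_min=0.05, v_max=0.3, tolerance=0.1):
    """Proportional speed command toward the goal.

    The speed grows with the distance (gain * distance) but is clamped
    between v_min and v_max; inside the tolerance radius the robot stops.
    """
    if distance < tolerance:
        return 0.0  # goal reached, within the allowed odometry error
    return min(v_max, max(v_min, gain * distance))

print(linear_velocity(2.0))   # 0.3  -> far away: clamped to the max speed
print(linear_velocity(0.3))   # 0.15 -> proportional region: 0.5 * 0.3
print(linear_velocity(0.05))  # 0.0  -> inside the tolerance: stop
```

The angular command works the same way, with the yaw error taking the place of the distance.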

Now that we have a basic idea of how this package works, let's use it! In order to create an action client and send a goal to the server, we are going to use the Jupyter notebook and create a node in Python. Run the client code provided in the notebook and you will see the robot going to the points.


Restart the notebook kernel before running it, because we have compiled a new package. Execute the cells one by one, in order, and you'll see the robot going to the point!
If you have any doubts about how to do it, please leave a comment. You can also check this video, where all the steps described in this post are performed:
[ROS Q&A] How to test ROS algorithms using ROS Development Studio

Related ROS Answers Forum question: actionlib status update problem
