How to learn ROS?


ROS (Robot Operating System) is a widely used robot software platform that aims to bring together the efforts of the world’s robotics research community, and it is completely open source. Anyone can install ROS and immediately get access to all the resources it integrates.

Over the past several years, ROS has been growing faster than ever. It is used not only in laboratories but also in commercial and service industries. If you subscribe to a mailing list such as Robotics Worldwide, you will see that around 70% of robotics job offerings require ROS.

However, this system is huge and complex. There are over 3,000 packages in the ROS ecosystem, and they are updated every day. Learning ROS takes a lot of effort, and it is relatively hard for a beginner.

If you Google it, you will find a wide variety of ROS learning resources, each implementing a different learning method. But which kind of learning method is most effective for you? Check the following five ROS learning methods and find the best one for you:

 

  1. Official tutorials: ROS Wiki


The official ROS tutorial website provided by OSRF is very comprehensive, and it is available in multiple languages. It includes details for ROS installation, documentation of ROS, ROS courses & events, etc., and it’s completely free. You just need to follow the ROS tutorials provided on the ROS Wiki page and get started.

This type of tutorial belongs to the traditional academic learning materials. They start by describing concepts one by one, following a well-defined hierarchy. It is good material, but it is easy to get lost while reading, and it takes time to grasp the core ideas of ROS.

 

  2. ROS Video Tutorials


ROS video tutorials provide a unique presentation that shows how programs are created and run in a practical way. They let you watch a professional or instructor carry out a ROS project, which alleviates a beginner’s fear of starting to learn ROS to a certain degree.

But there is a drawback: anyone can create a video, and since no qualification is required to publish content, its credibility can be shaky.

One ROS video tutorial course, provided by Dr. Anis Koubaa from Prince Sultan University, is a great starting point for learning ROS. The course combines a guided tutorial, different examples, and exercises of increasing difficulty, along with an autonomous robot.

 

  3. Integrated ROS learning platform – Robot Ignite Academy

The integrated learning platform is a more dynamic way of ROS learning. Compared to other learning methods, it provides a more comprehensive learning platform.

This method encourages students to set aside as many theoretical concepts as possible and concentrate on doing things with the robots. You will learn ROS by practicing.

You will follow a step-by-step ROS tutorial and program the robots while observing the program’s result on the robot simulation in real time. The whole platform runs in a web page, so you don’t have to install anything. You just connect using a web browser from any type of computer and start learning.

Well, perhaps the only drawback is that it is not free. You can try the platform for free at www.robotigniteacademy.com or watch their free video tutorials on YouTube.

 

 

  4. Face-to-face ROS training


The face-to-face instructional course is the traditional way of teaching. It builds a strong foundation of ROS in students.

ROS training is usually a short course that requires you to focus on learning ROS in a particular environment for a set period of time. Interaction with teachers and colleagues allows you to get feedback directly. The guidance and supervision of instructors definitely encourage better results.

The following are some of the institutions that hold offline ROS training or summer courses on a regular basis:

 

  5. ROS Books


ROS books are published by experienced roboticists. They extract the essence of ROS and present a lot of practical examples.

Books are good tools for learning ROS; however, they require strong self-discipline and concentration to achieve the desired result. They are only as good as the person using them: distractions can easily affect your progress unless you have the self-discipline to pay full attention at all times.

Some recommended readings:

  • Programming Robots with ROS: by combining real-world examples with valuable knowledge from the ROS community, this practical book provides a set of motivating recipes for solving specific robotics use cases.
  • ROS IN 5 DAYS Book Collection: a book collection associated with online ROS courses, giving you the basic tools and knowledge to understand and create any basic ROS-related project.
  • ROS Robotics by Example: this book will help you boost your knowledge of ROS and give you advanced practical experience you can apply to your ROS robot platforms.

 

Do you have any other better ROS learning method? Please, comment to share it with us!

 

Teaching ROS Fast To Many Students


Lecturer Steffen Pfiffner of the University of Weingarten in Germany is teaching ROS (Robot Operating System) to 26 students at the same time at a very fast pace.

His students, all enrolled in the Master in Computer Science at the University of Weingarten, use only a web browser. They connect to a web page containing the lessons, a ROS development environment, and several ROS-based simulated robots. Using the browser, Pfiffner and his colleague Benjamin Stähle are able to teach ROS programming to many students at once, each engaged in their own learning experience and moving at their own pace, with the teacher providing support when a student gets stuck. This is what Robot Ignite Academy is made for.

“With Ignite Academy our students can jump right into ROS without all the hardware and software setup problems. And the best: they can do this from everywhere.”

— says Pfiffner

Robot Ignite Academy provides a web service which contains the teaching material in text and video format, the simulations of several ROS based robots that the students must learn to program, and the development environment required to build ROS programs and test them on the simulated robot.


Students’ Point of View

Students bring their own laptops to the class and connect to the online platform (http://robotignite.academy). From that moment, their laptop becomes a ROS development machine, ready to develop programs for simulations of many real robots.

The Academy provides the text, videos, and examples that the student has to follow. It then asks the student to create her own ROS program and make the robot perform a specific action. The student develops the ROS programs as if she were at a typical ROS development computer.
The main advantage is that students can use a Windows, Linux, or Mac machine to learn ROS. They don’t even have to install ROS on their computers; the only requisite for the laptop is a browser. So students don’t have to deal with all the installation problems that frustrate them (and the teachers!), especially when they are starting out.
After class, students can continue learning at home, in the library, or even at the beach if wifi is available! All their code, learning material, and simulations are stored online, so they can access them from anywhere, anytime, using any computer, and keep learning and practicing.

Teachers’ Point of View

But the advantage of using the platform is not only for the students but also for the teachers. Teachers do not have to create the material and keep it updated. They do not have to prepare the simulations and make them work on so many different computers. They don’t even have to prepare the exams, which are already provided by the platform!

So what are the teachers there for?!!?

For the most important part of the whole process: to teach.
By making use of the provided material, the teacher can concentrate on guiding the students: explaining the most confusing parts, answering questions, suggesting modifications according to the level of each student, and adapting the pace to the different types of students. But above all, providing support to the student. That is teaching!

This new method of teaching ROS is taking off among universities and high schools that want to provide the latest and most practical teaching to their students. The method, developed by Robot Ignite Academy, combines a new way of teaching based on practice with an online learning platform. Those two points combined make teaching ROS a smooth experience and skyrocket the students’ knowledge of the subject.

As user Walace Rosa indicates in his Youtube comment about Robot Ignite Academy:


The method is becoming very popular in robotics circles, and many teachers are using it, even for younger students. For example, High School Mundet in Barcelona is using it to teach ROS to 15-year-old students.


High School Students of Institut A.G. Mundet, Barcelona, learning ROS

Next events where Robot Ignite Academy will be used to teach ROS together with teachers:

  • 1 week ROS course in Barcelona for SMART-E project team members. This is a private course given by Robot Ignite Academy in Barcelona for 15 members of the SMART-E project who need to get up to speed with ROS fast. From the 8th to the 12th of May 2017.
  • 1 day ROS course for the Col·legi d’Enginyers de Barcelona. The 17th of May 2017.
  • 3 months course for University of La Salle in Barcelona within the Master on Automatics, Domotics and Robotics. From 10th of May to 29th of June 2017.
  • 1 weekend ROS course for teenagers in Bilbao, Spain. The 20th and 21st of May 2017.
  • If you are interested in attending any of those events, do not hesitate to contact us.
  • … or we can also organize a special event like those for you and your team.

    In any case, the best thing you can do is try the Academy by yourself. You are the best judge.


RDS | Smart Grasping System available on ROS Development Studio


Would you like to make a robot grasp something, but think it is impossible for you just because you can’t buy a robot arm?

I’m here to tell you that you can definitely achieve this without buying a real robot.

  • How is that possible?

The Smart Grasping Sandbox built by Shadow Robotics is now available for everybody on the ROS Development Studio – a system that allows you to create and test your robot programs through simulations using only a web browser.

  • Wait, what is the Smart Grasping Sandbox?

The Smart Grasping Sandbox is a public simulation of Shadow’s Smart Grasping System with the UR10 robot from Universal Robots. It allows you to make a robot grasp something without having to learn everything related to machine learning, and since it is available on the ROS Development Studio, you can test it without the hassle of installing all the requirements.

As Ugo Cupcic said:

“I don’t want to have to specify every aspect of a problem — I’d rather the system learn the best way to approach a given problem itself”.

  • Using the Development Environment for Grasping

In order to use the Smart Grasping Sandbox on ROS Development Studio – also known as RDS – just go to http://rds.theconstructsim.com and sign in. Once logged in, go to the Public Simulations, select the Smart Grasping Sandbox simulation, and press the red Launch this simulation button.


After pressing Launch this simulation, a new screen will appear asking you to select the launch file. Just keep the default main.launch and press the red run button.

Select the launch file

After a few seconds, the simulation will be loaded and you will be able to interact with it.

Once the simulation is loaded, you can see the instructions about how to control it on the left side.

In the center you have the simulation itself, and on the right side you can see the Integrated Development Environment and the Web Shell, which allow you to modify the code and send Linux commands to the simulation, respectively.

You can modify the simulation or the control programs, or even create your own manipulation control programs and make the grasping system learn using deep learning. This is up to you as a developer!

You can also see what the robot sees using the online ROS 3D visualizer (RViz) by typing the following command in the Web Shell:

$ rosrun rviz rviz

After running RViz, press the red Open ROS Graphic Tools button on the bottom left side of the screen, and a new tab should appear with RViz.

  • Conclusion

Here we have seen how to use a simulation to program a robot to grasp something.

The simulation uses ROS as the middleware to control the robot.

If you want to learn ROS or master your ROS skills, we recommend you give Robot Ignite Academy a try.

If you are a robot builder and want to have your own simulation available for everybody on RDS, or you want to have a course specifically for your robot, contact us through the email info@theconstructsim.com.

Regarding the Smart Grasping Sandbox, you can also watch the following video showing the system in action on the ROS Development Studio:

—–

Simulating Husky Robot: The Easy Way



This is the second robot in a row that we are presenting for easy simulation inside The Construct.

Husky is a wheeled robot created by Clearpath Robotics for all-terrain use, and it comes in different configurations.

We have included this robot in our list of easy launches. Hence, in order to simulate the robot, you only need to download this file, which contains a couple of useful launch files inside a ROS package, upload it to your account in The Construct, and then launch the main.launch file.

We have also included a launch file that will allow you to control the robot with the keyboard. Additionally, you can modify the launch files to configure the robot in the different configurations available (UR5 arm, Kinect, laser, etc.). Check out the launch files and experiment with them.

Watch the following short tutorial to learn how to launch the robot in The Construct.

 

Would you like to have any other robot for easy simulation? Let us know in the comments and we will include them in our system!

Simulating Tiago robot: the easy way



Many people ask us to explain how to launch the Tiago robot from PAL Robotics in The Construct. Actually, there is a very easy way of doing it, and thanks to Jordi Pagés from PAL Robotics and José Capriles from The Construct, it is even possible to have Tiago grasping and navigating around a home with a couple of launch files.

Want to discover how? Then download this file (which contains the basic launch files for Tiago in The Construct) and watch the following video.

NOTE: the downloaded file does not contain the model of Tiago; that model is already installed in The Construct. This file only contains interesting launch files that allow you to quickly get started with Tiago in The Construct.

How to create a ROS Sensor Plugin for Gazebo


 

There are magnificent tutorials about how to create plugins for Gazebo on the GazeboSim webpage. There are even some tutorials about how to create plugins for Gazebo + ROS. Those tutorials show that there are several types of plugins (world, model, sensor, system, visual), and they explain how to create a world-type plugin.

Recently I needed to create a plugin for a light detector. Reading the tutorials, I missed a concrete example of how to create a sensor plugin, so I had to investigate a little bit. The result is the content of this post.

 

How to: light sensor plugin in Gazebo

Following the indications provided in the Gazebo answers forum, I decided to build a very simple light detector sensor based on a camera. Instead of using a raytracing algorithm from the lights, the idea is to use a camera to capture an image, use the image to calculate its illuminance, and then publish that illuminance value through a ROS topic.

Since the plugin is meant to be used with ROS, the whole plugin should be compilable in a ROS environment. Hence, be sure that you have installed the following packages on your Linux system:

  • ros-<your_ros_version>-<your_gazebo_version>-ros. (in my case it is ros-jade-gazebo6-ros)
  • ros-<your_ros_version>-<your_gazebo_version>-plugins (in my case it is ros-jade-gazebo6-plugins)

This tutorial has two parts: in the first we explain how to create the plugin, and in the second, how to test that it works.

 

Creating the plugin

Creating a ROS package for the plugin

The first thing is to create a package in our catkin workspace that will allow us to compile the plugin without problems.

cd ~/catkin_ws/src
catkin_create_pkg gazebo_light_sensor_plugin gazebo_ros gazebo_plugins roscpp

Creating the plugin code

For this purpose, since we are using a camera to capture the light, we are going to create a plugin class that inherits from CameraPlugin. The code that follows was created using the original gazebo ROS camera plugin as a guideline.

Create a file called light_sensor_plugin.h inside the include directory of your package, including the following code:

#ifndef GAZEBO_ROS_LIGHT_SENSOR_HH
#define GAZEBO_ROS_LIGHT_SENSOR_HH

#include <string>

// library for processing camera data for gazebo / ros conversions
#include <gazebo/plugins/CameraPlugin.hh>

#include <gazebo_plugins/gazebo_ros_camera_utils.h>

namespace gazebo
{
  class GazeboRosLight : public CameraPlugin, GazeboRosCameraUtils
  {
    /// \brief Constructor
    public: GazeboRosLight();

    /// \brief Destructor
    public: ~GazeboRosLight();

    /// \brief Load the plugin
    /// \param _parent the parent sensor
    /// \param _sdf SDF root element
    public: void Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf);

    /// \brief Update the controller
    protected: virtual void OnNewFrame(const unsigned char *_image,
    unsigned int _width, unsigned int _height,
    unsigned int _depth, const std::string &_format);

    ros::NodeHandle _nh;
    ros::Publisher _sensorPublisher;

    double _fov;
    double _range;
  };
}
#endif

As you can see, the code includes a node handle to connect to the roscore. It also defines a publisher that will publish messages containing the illuminance value. Two parameters have been defined: fov (field of view) and range. At present only fov is used; it indicates the number of pixels around the center of the image that will be taken into account when calculating the illuminance.

The next step is to create a file named light_sensor_plugin.cpp in the src directory of your package, containing the following code:

#include <gazebo/common/Plugin.hh>
#include <ros/ros.h>
#include "gazebo_light_sensor_plugin/light_sensor_plugin.h"

#include "gazebo_plugins/gazebo_ros_camera.h"

#include <string>

#include <gazebo/sensors/Sensor.hh>
#include <gazebo/sensors/CameraSensor.hh>
#include <gazebo/sensors/SensorTypes.hh>

#include <sensor_msgs/Illuminance.h>

namespace gazebo
{
  // Register this plugin with the simulator
  GZ_REGISTER_SENSOR_PLUGIN(GazeboRosLight)

  ////////////////////////////////////////////////////////////////////////////////
  // Constructor
  GazeboRosLight::GazeboRosLight():
  _nh("light_sensor_plugin"),
  _fov(6),
  _range(10)
  {
    _sensorPublisher = _nh.advertise<sensor_msgs::Illuminance>("lightSensor", 1);
  }

  ////////////////////////////////////////////////////////////////////////////////
  // Destructor
  GazeboRosLight::~GazeboRosLight()
  {
    ROS_DEBUG_STREAM_NAMED("camera","Unloaded");
  }

  void GazeboRosLight::Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf)
  {
    // Make sure the ROS node for Gazebo has already been initialized
    if (!ros::isInitialized())
    {
      ROS_FATAL_STREAM("A ROS node for Gazebo has not been initialized, unable to load plugin. "
        << "Load the Gazebo system plugin 'libgazebo_ros_api_plugin.so' in the gazebo_ros package)");
      return;
    }

    CameraPlugin::Load(_parent, _sdf);
    // copying from CameraPlugin into GazeboRosCameraUtils
    this->parentSensor_ = this->parentSensor;
    this->width_ = this->width;
    this->height_ = this->height;
    this->depth_ = this->depth;
    this->format_ = this->format;
    this->camera_ = this->camera;

    GazeboRosCameraUtils::Load(_parent, _sdf);
  }

  ////////////////////////////////////////////////////////////////////////////////
  // Update the controller
  void GazeboRosLight::OnNewFrame(const unsigned char *_image,
    unsigned int _width, unsigned int _height, unsigned int _depth,
    const std::string &_format)
  {
    static int seq=0;

    this->sensor_update_time_ = this->parentSensor_->GetLastUpdateTime();

    if (!this->parentSensor->IsActive())
    {
      if ((*this->image_connect_count_) > 0)
      // do this first so there's chance for sensor to run once after activated
        this->parentSensor->SetActive(true);
    }
    else
    {
      if ((*this->image_connect_count_) > 0)
      {
        common::Time cur_time = this->world_->GetSimTime();
        if (cur_time - this->last_update_time_ >= this->update_period_)
        {
          this->PutCameraData(_image);
          this->PublishCameraInfo();
          this->last_update_time_ = cur_time;

          sensor_msgs::Illuminance msg;
          msg.header.stamp = ros::Time::now();
          msg.header.frame_id = "";
          msg.header.seq = seq;

          // top-left pixel of the fov x fov window centered in the image
          int startingPix = _width * ( (int)(_height/2) - (int)(_fov/2) ) + (int)(_width/2) - (int)(_fov/2);

          double illum = 0;
          for (int i=0; i<_fov ; ++i)
          {
            int index = startingPix + i*_width;
            for (int j=0; j<_fov ; ++j)
              illum += _image[index+j];
          }

          msg.illuminance = illum/(_fov*_fov);
          msg.variance = 0.0;

          _sensorPublisher.publish(msg);

          seq++;
        }
      }
    }
  }
}

That is the code that calculates the illuminance in a very simple way: it just adds the values of all the pixels in the fov window of the camera and then divides by the total number of pixels in the window.

Create a proper CMakeLists.txt

Replace the content of the automatically created CMakeLists.txt with the code below:

cmake_minimum_required(VERSION 2.8.3)
project(gazebo_light_sensor_plugin)

find_package(catkin REQUIRED COMPONENTS
  gazebo_plugins
  gazebo_ros
  roscpp
)

find_package (gazebo REQUIRED)

catkin_package(
  INCLUDE_DIRS include
  CATKIN_DEPENDS gazebo_plugins gazebo_ros roscpp
)

###########
## Build ##
###########

set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")

link_directories(${GAZEBO_LIBRARY_DIRS})
include_directories(include)
include_directories( ${catkin_INCLUDE_DIRS} 
                     ${Boost_INCLUDE_DIR} 
                     ${GAZEBO_INCLUDE_DIRS}
)

add_library(${PROJECT_NAME} src/light_sensor_plugin.cpp)

## Specify libraries to link a library or executable target against
target_link_libraries( ${PROJECT_NAME} ${catkin_LIBRARIES} ${GAZEBO_LIBRARIES} CameraPlugin )

Update the package.xml and compile

Now you need to include the following line in your package.xml, between the <export></export> tags:

<gazebo_ros plugin_path="${prefix}/lib" gazebo_media_path="${prefix}" />
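Wrapped in its tags, the export section of package.xml would look something like this sketch (the rest of the file stays as catkin_create_pkg generated it):

```xml
<export>
  <!-- tell gazebo_ros where to find the compiled plugin library -->
  <gazebo_ros plugin_path="${prefix}/lib" gazebo_media_path="${prefix}" />
</export>
```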

Now you are ready to compile the plugin. Compilation should generate the library containing the plugin inside your build directory.

> roscd
> cd ..
> catkin_make

Testing the Plugin

Let’s create a world file containing the plugin and launch it to see how it works.

Create a world file

You need a world file that includes the plugin. Here is an example. Create a worlds directory inside your plugin package, and save the following code in a file called light.world. This world file just loads the camera with its plugin, so it may be a bit bare, but it is enough for your tests. Feel free to add more elements and models to the world file (like, for example, in the picture at the top of this post).

<?xml version="1.0" ?>
<sdf version="1.4">
 <world name="default">
 <include>
   <uri>model://ground_plane</uri>
 </include>

 <include>
   <uri>model://sun</uri>
 </include>

 <!-- reference to your plugin -->
 <model name='camera'>
   <pose>0 -1 0.05 0 -0 0</pose>
   <link name='link'>
     <inertial>
       <mass>0.1</mass>
       <inertia>
         <ixx>1</ixx>
         <ixy>0</ixy>
         <ixz>0</ixz>
         <iyy>1</iyy>
         <iyz>0</iyz>
         <izz>1</izz>
       </inertia>
     </inertial>
     <collision name='collision'>
       <geometry>
         <box>
            <size>0.1 0.1 0.1</size>
         </box>
       </geometry>
       <max_contacts>10</max_contacts>
       <surface>
         <contact>
           <ode/>
         </contact>
         <bounce/>
         <friction>
           <ode/>
         </friction>
       </surface>
     </collision>
     <visual name='visual'>
       <geometry>
         <box>
           <size>0.1 0.1 0.1</size>
         </box>
       </geometry>
     </visual>
     <sensor name='camera' type='camera'>
       <camera name='__default__'>
         <horizontal_fov>1.047</horizontal_fov>
         <image>
           <width>320</width>
           <height>240</height>
         </image>
         <clip>
           <near>0.1</near>
           <far>100</far>
         </clip>
       </camera>
       <plugin name="gazebo_light_sensor_plugin" filename="libgazebo_light_sensor_plugin.so">
         <cameraName>camera</cameraName>
         <alwaysOn>true</alwaysOn>
         <updateRate>10</updateRate>
         <imageTopicName>rgb/image_raw</imageTopicName>
         <depthImageTopicName>depth/image_raw</depthImageTopicName>
         <pointCloudTopicName>depth/points</pointCloudTopicName>
         <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
         <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
         <frameName>camera_depth_optical_frame</frameName>
         <baseline>0.1</baseline>
         <distortion_k1>0.0</distortion_k1>
         <distortion_k2>0.0</distortion_k2>
         <distortion_k3>0.0</distortion_k3>
         <distortion_t1>0.0</distortion_t1>
         <distortion_t2>0.0</distortion_t2>
         <pointCloudCutoff>0.4</pointCloudCutoff>
         <robotNamespace>/</robotNamespace>
       </plugin>
     </sensor>
     <self_collide>0</self_collide>
     <kinematic>0</kinematic>
     <gravity>1</gravity>
   </link>
 </model>
 </world>
</sdf>

 

Create a launch file

Now the final step: create a launch file that will bring everything up for you. Save the following code as main.launch inside the launch directory of your package.

<launch>
  <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="verbose" value="true"/>
    <arg name="world_name" value="$(find gazebo_light_sensor_plugin)/worlds/light.world"/>
    <!-- more default parameters can be changed here -->
  </include>
</launch>

 

Ready to run!

Now launch the world. Be sure that a roscore is running on your machine, and that the GAZEBO_PLUGIN_PATH environment variable includes the path to the new plugin.
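For example, assuming the default catkin workspace location (an assumption; adjust the path if your workspace lives elsewhere), the variable can be extended like this:

```shell
# Append the workspace's devel/lib directory (where catkin_make leaves
# libgazebo_light_sensor_plugin.so) to Gazebo's plugin search path.
# $HOME/catkin_ws is an assumed location; change it to match your setup.
export GAZEBO_PLUGIN_PATH=${GAZEBO_PLUGIN_PATH}:$HOME/catkin_ws/devel/lib
echo $GAZEBO_PLUGIN_PATH
```

You can add the export line to your ~/.bashrc so it is set in every new shell.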

Now execute the following command:

roslaunch gazebo_light_sensor_plugin main.launch

You can see what the camera is observing by running the following command:

rosrun image_view image_view image:=/camera/rgb/image_raw

After running that command, a small window will appear on your screen showing what the camera is capturing. Of course, since your world is almost completely empty, the view will not be very interesting.


Try adding some stuff in front of the camera and check that it is actually working.


Now it is time to check the illuminance value by watching the published topic (/light_sensor_plugin/lightSensor). Just type the following and you are done:

rostopic echo /light_sensor_plugin/lightSensor

You should see the topic messages being published on your screen.


 

Conclusion

Now you have a plugin for your Gazebo simulations that can (very roughly) measure the detected light. It can be improved in many ways, but it serves as a starting point for understanding the complex world of Gazebo plugins.

You can use it in your desktop Gazebo or even inside the ROS Development Studio. It is also independent of the ROS version you are using (just install the proper packages).

 

Do you have any interesting modifications for this plugin? What about computing the variance? Or computing illuminance by raytracing to the lights? Please share your mods here!
