[ROS Projects] – ROS with Raspberry Pi 3 using Gazebo Face Simulation #Part 4

In this video, you will learn how to use the face landmarks published in Part 2 to make the MagicMirror face move in simulation.

Want to start developing for ROS now? Check out RDS:
https://goo.gl/Lud9hQ

Want to learn more about how to move a robot in ROS? Check out the Robot Ignite course on URDF robot generation:
https://goo.gl/9kUsS9

[ROS Projects] – ROS with Raspberry Pi 3 using Gazebo Face Simulation #Part 3

In this video, you will learn how to use the sound_play package to work with sound in ROS. You can use it to play sounds from sound files or to do text-to-speech. You will also learn how to play sounds in a Python notebook in ROS Development Studio, and, at the end, how to apply all of this on your Raspberry Pi.
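
For reference, here is a minimal Python sketch of the sound_play client API (a soundplay_node must be running; the spoken text and the WAV path are just examples):

#!/usr/bin/env python
# Minimal sound_play sketch; requires a running soundplay_node.
import rospy
from sound_play.libsoundplay import SoundClient

rospy.init_node('sound_demo')
sc = SoundClient()
rospy.sleep(1)  # give the sound client time to connect

sc.say('Hello from the Raspberry Pi')    # text to speech
sc.playWave('/path/to/some_sound.wav')   # play a WAV file (example path)
rospy.sleep(2)  # let the sounds finish before the node exits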

Many thanks to the following resources:
Play Sound in Python Notebooks:
https://gist.github.com/aisipos/8640905

ROS sound play package:
http://wiki.ros.org/sound_play

Work with pyaudio sound streams:
https://www.swharden.com/wp/2016-07-19-realtime-audio-visualization-in-python/

ROS Q&A | Treating quaternions for 2D navigation

Are you having trouble making your robot navigate because of the kind of data you have to handle? It’s very common to get stuck when we have to work with quaternions instead of RPY angles. Quaternions are used in robotics to represent the rotation of a rigid body, about the three axes, with respect to a coordinate frame. But sometimes they are more than we need.

In this post, we are going to see how to extract only the data we need from a quaternion for a 2D application. This is very helpful, since robotics algorithms work with quaternions, while user interfaces usually work with RPY angles.
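
In Python, for instance, the tf package already provides a one-line conversion; here is a minimal sketch (the quaternion values are just an example, a 45-degree rotation about Z):

from tf.transformations import euler_from_quaternion

# Quaternion given as [x, y, z, w]
roll, pitch, yaw = euler_from_quaternion([0.0, 0.0, 0.383, 0.924])
print(yaw)  # ~0.785 rad, i.e. 45 degrees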

To make things concrete, let’s start with a robot. In this post, we are going to use the Turtlebot simulation, available in ROS Development Studio (RDS).

Turtlebot Simulation

The robot’s sensors provide an /odom topic, which publishes messages of the nav_msgs/Odometry type. Below is an example of a single message:

Odometry message

Now, unless you have a path described using quaternions, we need to convert the quaternion to RPY. To do that, we will use the TF library. Below, you can see the program in charge of this task:


#include <ros/ros.h>
#include <tf/tf.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/Pose2D.h>

ros::Publisher pub_pose_;

void odometryCallback_(const nav_msgs::Odometry::ConstPtr& msg) {
  geometry_msgs::Pose2D pose2d;
  pose2d.x = msg->pose.pose.position.x;
  pose2d.y = msg->pose.pose.position.y;

  // Convert the orientation quaternion to roll, pitch and yaw
  tf::Quaternion q(
      msg->pose.pose.orientation.x,
      msg->pose.pose.orientation.y,
      msg->pose.pose.orientation.z,
      msg->pose.pose.orientation.w);
  tf::Matrix3x3 m(q);
  double roll, pitch, yaw;
  m.getRPY(roll, pitch, yaw);

  // For a planar robot, yaw is the only angle we need
  pose2d.theta = yaw;
  pub_pose_.publish(pose2d);
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "conversion_node");

  ros::NodeHandle nh_;

  ros::Subscriber sub_odom_ = nh_.subscribe("odom", 1, odometryCallback_);
  pub_pose_ = nh_.advertise<geometry_msgs::Pose2D>("pose2d", 1);

  ros::spin();

  return 0;
}

First, we create a node and subscribe to the /odom topic. In the subscriber’s callback, we extract the information that matters to us: the X, Y and yaw data. To obtain the yaw angle, we take the quaternion data and convert it to RPY (we need a Matrix3x3 object as an intermediate step). Since our robot has only three degrees of freedom (linear in X and Y, angular in Z), we do not consider roll and pitch, so the Pose2D message type is enough for us. This is a simplification for a ground robot navigating in a planar scenario.
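
To try it out, build the node and echo the converted pose. The package name my_navigation below is just a placeholder; use the package where you put this node:

rosrun my_navigation conversion_node
rostopic echo /pose2d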

Go to point program for ground robots using ActionLib


Navigation is one of the most challenging tasks in robotics. To reach a goal or follow a trajectory, the robot must know the environment and localize itself using its sensors and a map.

But when the robot already has this information, it’s possible to start navigating: defining points to follow in order to reach a goal. That’s where we are starting from in this post. Using a very popular ground robot, the Turtlebot 2, we are going to perform a navigation task, or at least part of one.

In order to program the robot, RDS (ROS Development Studio) is going to be used. Before starting, make sure you are logged in with your credentials, can see the public simulation list (as below), and can run the Kobuki Gazebo simulation.

RDS Public Simulation List

At this point, you should have the simulation running (image below). On the left side of the screen, you have a Jupyter notebook with the basics and some instructions about the robot. In the center, the simulation. On the right, an IDE and a terminal. It’s possible to put each one in full-screen mode or reorganize the screen as you wish. Feel free to explore!

[Image: Kobuki simulation running in RDS]

So, you have already noticed the simulation is running, the robot is ready, and you might have sent some velocity commands already. Now let’s look at our action server, which sends the velocity commands needed for the robot to reach a goal. First, clone this repository [https://marcoarruda@bitbucket.org/TheConstruct/theconstruct_navigation.git] to your workspace (yes, your cloud workspace, using RDS!). Done? Let’s explore it for a while. Open the file src/theconstruct_navigation/go_to_point2d/src/go_to_point2d_server.cpp. You should see something like this:
[Image: go_to_point2d_server.cpp in the RDS IDE]

So, let’s start from the main function of the file. You can see there that we are creating a node (ros::init()) and a GoToPoint2DAction object. That’s the name of the class defined at the beginning of the file. Once this variable is created, all the methods and behaviors of the class are up and running.

Now, taking a look inside the class, we can see that there are some methods and attributes. The attributes are used only inside the class; the interface between the object and our ROS node is the public methods.

When the object is instantiated, it registers the mandatory callbacks for the ActionLib library (goalCB, the one that receives the goal, i.e. the points we want to send the robot to, and preemptCB, which allows us to interrupt the task). It also gets some parameters from the launch file. And finally, it creates a publisher for the velocity topic and subscribes to the odometry topic, which is used to localize the robot.

Let’s compile it! Using the terminal, enter the catkin_ws directory (cd catkin_ws) and compile the workspace (catkin_make). It may take a few minutes, because we are generating the message files. The action message is defined in theconstruct_navigation/action/GoToPoint2D.action. You can explore it and see what the action expects and delivers.

Finally, let’s run the action server, using the launch file to set the parameters:

roslaunch go_to_point2d go_to_point2d_server.launch

Did the robot move? No? Great! The server is waiting for goals, so it should not send any command until we create an action client and send a request. First, let’s take a look at the launch file:

[Image: go_to_point2d_server.launch]

Notice that we have some parameters to define the limits of the robot’s operation. The first three set the maximum and minimum linear velocity, plus a gain used to set the robot’s speed in a straight line, since the speed depends on the distance between the robot and the goal point.

The next three parameters do the same, but for the angular velocity.

Finally, the last three parameters establish a tolerance for the robot. The robot’s odometry and yaw measurements are not perfect, so we need to accept some error. The tolerance cannot be too small, otherwise the robot will never reach its goal; if it is too large, the robot will stop very far from the goal (it depends on the robot’s perception). An illustrative sketch of such a launch file follows.
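
As an illustration only, the launch file might look like the sketch below; the node and parameter names here are hypothetical, so check the actual file in the repository:

<launch>
  <node pkg="go_to_point2d" type="go_to_point2d_server" name="go_to_point2d_server" output="screen">
    <!-- hypothetical parameter names, for illustration only -->
    <param name="max_linear_vel" value="0.5"/>
    <param name="min_linear_vel" value="0.05"/>
    <param name="linear_gain" value="1.0"/>
    <param name="max_angular_vel" value="1.0"/>
    <param name="min_angular_vel" value="0.1"/>
    <param name="angular_gain" value="2.0"/>
    <param name="x_tolerance" value="0.1"/>
    <param name="y_tolerance" value="0.1"/>
    <param name="yaw_tolerance" value="0.05"/>
  </node>
</launch>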

Now that we have the basic idea of how this package works, let’s use it! In order to create an action client and send a goal to the server, we are going to use the Jupyter notebook and create a node in Python. You can use the following code and see the robot going to the points:

[Image: Python notebook with the action client code]
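
If the notebook is not at hand, here is a minimal sketch of what such a client can look like. The action name ('go_to_point2d') and the goal fields (x, y, yaw) are assumptions; check GoToPoint2D.action and the server code for the real names:

#!/usr/bin/env python
# Minimal action-client sketch; action name and goal fields are assumptions.
import rospy
import actionlib
from theconstruct_navigation.msg import GoToPoint2DAction, GoToPoint2DGoal

rospy.init_node('go_to_point2d_client')

client = actionlib.SimpleActionClient('go_to_point2d', GoToPoint2DAction)
client.wait_for_server()

goal = GoToPoint2DGoal()
goal.x = 1.0    # hypothetical field names
goal.y = 0.5
goal.yaw = 0.0

client.send_goal(goal)
client.wait_for_result()
print(client.get_result())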

Restart the notebook kernel before running it, because we have compiled a new package. Execute the cells one by one, in order, and you’ll see the robot going to the point!
If you have any doubts about how to do it, please leave a comment. You can also check this video, where all the steps described in this post are performed:
[ROS Q&A] How to test ROS algorithms using ROS Development Studio

Related ROS Answers Forum question: actionlib status update problem

How to create a ROS Sensor Plugin for Gazebo


There are magnificent tutorials about how to create plugins for Gazebo on the GazeboSim webpage. There are even some tutorials about how to create plugins for Gazebo + ROS, which show that there are several types of plugins (world, model, sensor, system, visual), and indicate how to create a plugin for the world type.

Recently I needed to create a plugin for a light detector. Reading the tutorials, I missed a concrete example of how to create a sensor plugin. Hence, I had to investigate a little bit. The result is the content of this post.

 

How to: light sensor plugin in Gazebo

Following the indications provided on the Gazebo answers forum, I decided to build a very simple light detector sensor based on a camera. Instead of using a raytracing algorithm from the lights, the idea is to use a camera to capture an image, use the image to calculate the illuminance, and then publish that illuminance value through a ROS topic.

Since the plugin is meant to be used with ROS, the whole plugin should be compilable in a ROS environment. Hence, make sure you have the following packages installed on your Linux system (an example install command follows the list):

  • ros-<your_ros_version>-<your_gazebo_version>-ros (in my case, ros-jade-gazebo6-ros)
  • ros-<your_ros_version>-<your_gazebo_version>-plugins (in my case, ros-jade-gazebo6-plugins)
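
On my setup, for example, they could be installed with:

sudo apt-get install ros-jade-gazebo6-ros ros-jade-gazebo6-plugins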

This tutorial has two parts: in the first one we explain how to create the plugin, and in the second, how to test that it works.

 

Creating the plugin

Creating a ROS package for the plugin

The first thing is to create the package in our catkin workspace that will allow us to compile the plugin without problems.

cd ~/catkin_ws/src
catkin_create_pkg gazebo_light_sensor_plugin gazebo_ros gazebo_plugins roscpp

Creating the plugin code

For this purpose, since we are using a camera to capture the light, we are going to create a plugin class that inherits from CameraPlugin. The code that follows was created using the code of the official gazebo_ros camera plugin as a guideline.

Create a file called light_sensor_plugin.h inside the include directory of your package, with the following code:

#ifndef GAZEBO_ROS_LIGHT_SENSOR_HH
#define GAZEBO_ROS_LIGHT_SENSOR_HH

#include <string>

// library for processing camera data for gazebo / ros conversions
#include <gazebo/plugins/CameraPlugin.hh>

#include <gazebo_plugins/gazebo_ros_camera_utils.h>

namespace gazebo
{
  class GazeboRosLight : public CameraPlugin, GazeboRosCameraUtils
  {
    /// \brief Constructor
    /// \param parent The parent entity, must be a Model or a Sensor
    public: GazeboRosLight();

    /// \brief Destructor
    public: ~GazeboRosLight();

    /// \brief Load the plugin
    /// \param take in SDF root element
    public: void Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf);

    /// \brief Update the controller
    protected: virtual void OnNewFrame(const unsigned char *_image,
    unsigned int _width, unsigned int _height,
    unsigned int _depth, const std::string &_format);

    ros::NodeHandle _nh;
    ros::Publisher _sensorPublisher;

    double _fov;
    double _range;
  };
}
#endif

As you can see, the code includes a node handle to connect to the roscore. It also defines a publisher that will publish messages containing the illuminance value. Two parameters have been defined: fov (field of view) and range. At present, only fov is used: it indicates the number of pixels around the center of the image that will be taken into account to calculate the illuminance.

The next step is to create a file named light_sensor_plugin.cpp in the src directory of your package, containing the following code:

#include <gazebo/common/Plugin.hh>
#include <ros/ros.h>
#include "gazebo_light_sensor_plugin/light_sensor_plugin.h"

#include "gazebo_plugins/gazebo_ros_camera.h"

#include <string>

#include <gazebo/sensors/Sensor.hh>
#include <gazebo/sensors/CameraSensor.hh>
#include <gazebo/sensors/SensorTypes.hh>

#include <sensor_msgs/Illuminance.h>

namespace gazebo
{
  // Register this plugin with the simulator
  GZ_REGISTER_SENSOR_PLUGIN(GazeboRosLight)

  ////////////////////////////////////////////////////////////////////////////////
  // Constructor
  GazeboRosLight::GazeboRosLight():
  _nh("light_sensor_plugin"),
  _fov(6),
  _range(10)
  {
    _sensorPublisher = _nh.advertise<sensor_msgs::Illuminance>("lightSensor", 1);
  }

  ////////////////////////////////////////////////////////////////////////////////
  // Destructor
  GazeboRosLight::~GazeboRosLight()
  {
    ROS_DEBUG_STREAM_NAMED("camera","Unloaded");
  }

  void GazeboRosLight::Load(sensors::SensorPtr _parent, sdf::ElementPtr _sdf)
  {
    // Make sure the ROS node for Gazebo has already been initialized
    if (!ros::isInitialized())
    {
      ROS_FATAL_STREAM("A ROS node for Gazebo has not been initialized, unable to load plugin. "
        << "Load the Gazebo system plugin 'libgazebo_ros_api_plugin.so' in the gazebo_ros package)");
      return;
    }

    CameraPlugin::Load(_parent, _sdf);
    // copying from CameraPlugin into GazeboRosCameraUtils
    this->parentSensor_ = this->parentSensor;
    this->width_ = this->width;
    this->height_ = this->height;
    this->depth_ = this->depth;
    this->format_ = this->format;
    this->camera_ = this->camera;

    GazeboRosCameraUtils::Load(_parent, _sdf);
  }

  ////////////////////////////////////////////////////////////////////////////////
  // Update the controller
  void GazeboRosLight::OnNewFrame(const unsigned char *_image,
    unsigned int _width, unsigned int _height, unsigned int _depth,
    const std::string &_format)
  {
    static int seq=0;

    this->sensor_update_time_ = this->parentSensor_->GetLastUpdateTime();

    if (!this->parentSensor->IsActive())
    {
      if ((*this->image_connect_count_) > 0)
      // do this first so there's chance for sensor to run once after activated
        this->parentSensor->SetActive(true);
    }
    else
    {
      if ((*this->image_connect_count_) > 0)
      {
        common::Time cur_time = this->world_->GetSimTime();
        if (cur_time - this->last_update_time_ >= this->update_period_)
        {
          this->PutCameraData(_image);
          this->PublishCameraInfo();
          this->last_update_time_ = cur_time;

          sensor_msgs::Illuminance msg;
          msg.header.stamp = ros::Time::now();
          msg.header.frame_id = "";
          msg.header.seq = seq;

          int startingPix = _width * ( (int)(_height/2) - (int)( _fov/2)) - (int)(_fov/2);

          double illum = 0;
          for (int i=0; i<_fov ; ++i)
          {
            int index = startingPix + i*_width;
            for (int j=0; j<_fov ; ++j)
              illum += _image[index+j];
          }

          msg.illuminance = illum/(_fov*_fov);
          msg.variance = 0.0;

          _sensorPublisher.publish(msg);

          seq++;
        }
      }
    }
  }
}

That is the code that calculates the illuminance in a very simple way: it just adds up the values of the pixels in the fov window around the image center and then divides by the total number of pixels.
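
To see what that double loop computes, here is a rough NumPy equivalent, purely as an illustration (it assumes a single-channel image; the plugin itself walks the raw pixel buffer):

import numpy as np

def illuminance(image, fov):
    # Mean pixel value of a fov x fov window around the image center
    h, w = image.shape
    top = h // 2 - fov // 2
    left = w // 2 - fov // 2
    window = image[top:top + fov, left:left + fov]
    return window.mean()

# Example: a 240x320 image with every pixel at 128 gives illuminance 128.0
print(illuminance(np.full((240, 320), 128.0), fov=6))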

Create a proper CMakeLists.txt

Substitute the code of the automatically created CMakeLists.txt with the code below:

cmake_minimum_required(VERSION 2.8.3)
project(gazebo_light_sensor_plugin)

find_package(catkin REQUIRED COMPONENTS
  gazebo_plugins
  gazebo_ros
  roscpp
)

find_package (gazebo REQUIRED)

catkin_package(
  INCLUDE_DIRS include
  CATKIN_DEPENDS gazebo_plugins gazebo_ros roscpp
)

###########
## Build ##
###########

set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")

link_directories(${GAZEBO_LIBRARY_DIRS})
include_directories(include)
include_directories( ${catkin_INCLUDE_DIRS} 
                     ${Boost_INCLUDE_DIR} 
                     ${GAZEBO_INCLUDE_DIRS}
)

add_library(${PROJECT_NAME} src/light_sensor_plugin.cpp)

## Specify libraries to link a library or executable target against
target_link_libraries( ${PROJECT_NAME} ${catkin_LIBRARIES} ${GAZEBO_LIBRARIES} CameraPlugin )

Update the package.xml and compile

Now you need to include the following line in your package.xml, between the <export></export> tags:

<gazebo_ros plugin_path="${prefix}/lib" gazebo_media_path="${prefix}" />

Now you are ready to compile the plugin. Compilation should generate the library containing the plugin inside your build directory (with catkin_make, the library ends up in devel/lib).

> roscd
> cd ..
> catkin_make

Testing the Plugin

Let’s create a world file containing the plugin and launch it to see how it works.

Create a world file

You need a world file that includes the plugin. Here is an example. Create a worlds directory inside your plugin package, and save the following code in a file named light.world. This world file just loads the camera with its plugin, so it may look a bit bare, but it is enough for your tests. Feel free to add more elements and models to the world file (like, for example, in the picture at the top of this post).

<?xml version="1.0" ?>
<sdf version="1.4">
 <world name="default">
 <include>
   <uri>model://ground_plane</uri>
 </include>

 <include>
   <uri>model://sun</uri>
 </include>

 <!-- reference to your plugin -->
 <model name='camera'>
   <pose>0 -1 0.05 0 -0 0</pose>
   <link name='link'>
     <inertial>
       <mass>0.1</mass>
       <inertia>
         <ixx>1</ixx>
         <ixy>0</ixy>
         <ixz>0</ixz>
         <iyy>1</iyy>
         <iyz>0</iyz>
         <izz>1</izz>
       </inertia>
     </inertial>
     <collision name='collision'>
       <geometry>
         <box>
            <size>0.1 0.1 0.1</size>
         </box>
       </geometry>
       <max_contacts>10</max_contacts>
       <surface>
         <contact>
           <ode/>
         </contact>
         <bounce/>
         <friction>
           <ode/>
         </friction>
       </surface>
     </collision>
     <visual name='visual'>
       <geometry>
         <box>
           <size>0.1 0.1 0.1</size>
         </box>
       </geometry>
     </visual>
     <sensor name='camera' type='camera'>
       <camera name='__default__'>
         <horizontal_fov>1.047</horizontal_fov>
         <image>
           <width>320</width>
           <height>240</height>
         </image>
         <clip>
           <near>0.1</near>
           <far>100</far>
         </clip>
       </camera>
       <plugin name="gazebo_light_sensor_plugin" filename="libgazebo_light_sensor_plugin.so">
         <cameraName>camera</cameraName>
         <alwaysOn>true</alwaysOn>
         <updateRate>10</updateRate>
         <imageTopicName>rgb/image_raw</imageTopicName>
         <depthImageTopicName>depth/image_raw</depthImageTopicName>
         <pointCloudTopicName>depth/points</pointCloudTopicName>
         <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
         <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
         <frameName>camera_depth_optical_frame</frameName>
         <baseline>0.1</baseline>
         <distortion_k1>0.0</distortion_k1>
         <distortion_k2>0.0</distortion_k2>
         <distortion_k3>0.0</distortion_k3>
         <distortion_t1>0.0</distortion_t1>
         <distortion_t2>0.0</distortion_t2>
         <pointCloudCutoff>0.4</pointCloudCutoff>
         <robotNamespace>/</robotNamespace>
       </plugin>
     </sensor>
     <self_collide>0</self_collide>
     <kinematic>0</kinematic>
     <gravity>1</gravity>
   </link>
 </model>
 </world>
</sdf>

 

Create a launch file

Now the final step: create a launch file that will bring everything up for you. Save the following code as main.launch inside the launch directory of your package.

<launch>
  <!-- We resume the logic in empty_world.launch, changing only the name of the world to be launched -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="verbose" value="true"/>
    <arg name="world_name" value="$(find gazebo_light_sensor_plugin)/worlds/light.world"/>
    <!-- more default parameters can be changed here -->
  </include>
</launch>

 

Ready to run!

Now launch the world. Make sure that a roscore is running on your machine, and that the GAZEBO_PLUGIN_PATH environment variable includes the path to the new plugin.
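
For example, with a standard catkin workspace (the exact path is an assumption; adjust it to your setup):

export GAZEBO_PLUGIN_PATH=~/catkin_ws/devel/lib:$GAZEBO_PLUGIN_PATH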

Now execute the following command:

roslaunch gazebo_light_sensor_plugin main.launch

You can see what the camera is observing by running the following command:

rosrun image_view image_view image:=/camera/rgb/image_raw

After running that command, a small window will appear on your screen showing what the camera is capturing. Of course, since your world is practically empty, you will only see something as ugly as this:

[Image: camera view of the empty world]

Try adding some objects in front of the camera and see that it is actually working.

[Image: camera view with some objects in front of the camera]

Now it is time to check the illuminance value by watching the published topic (/light_sensor_plugin/lightSensor). Just type the following and you are done:

rostopic echo /light_sensor_plugin/lightSensor

You should see the topic messages being published on your screen, something like this:

[Image: rostopic echo output of /light_sensor_plugin/lightSensor]
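
Each message is of type sensor_msgs/Illuminance, whose definition is:

Header header          # timestamp set by the plugin
float64 illuminance    # the averaged pixel value computed by the plugin
float64 variance       # left at 0.0 in this plugin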

 

Conclusion

Now you have a plugin for your Gazebo simulations that can (very roughly) measure the detected light. It can be improved in many ways, but it serves as a starting point for understanding the complex world of Gazebo plugins.

You can use it in your desktop Gazebo or even inside the ROS Development Studio. It is also independent of the ROS version you are using (just install the proper packages).

 

Do you have any interesting modifications for this plugin? What about computing the variance? Or computing the illuminance by raytracing to the lights? Please share your mods here!
