[ROS Q&A] 123 – Roslaunch can’t locate node, but rosrun works fine

In this video, we are going to answer a question found at ROS answers (https://answers.ros.org/question/291235/roslaunch-cant-locate-node-but-rosrun-works-fine/). The person is asking why he can run a Python script with rosrun, but not when using roslaunch.

Let’s try this out and solve this problem in ROS Development Studio. You can register a free account and follow the tutorial below! 

Step 1: Create a package

After creating a new project in RDS, open a new shell from the tools tab and type the following command into the shell to create a ROS package.

cd catkin_ws/src/
catkin_create_pkg video_qa rospy

Step 2: Create a test file for the code

It’s easier to do this with the IDE, which you can find in the Tools tab.

Now right-click and select New File to create a new file named test under the /video_qa/src folder.

Paste the following code into the test file.

#! /usr/bin/env python

import rospy

# Simple test node: print a message once per second until the node is shut down
rospy.init_node('test')
rate = rospy.Rate(1)

while not rospy.is_shutdown():
    print("Hello there")
    rate.sleep()

Step 3: Create a launch file

According to the standard ROS package structure, launch files should be placed in a launch directory. Let’s create it with the IDE and add a test.launch file under the /video_qa/launch folder.

Copy and paste the following code for testing. 

<launch>
    <node pkg="video_qa" type="test" name="test" output="screen">
    </node>
</launch>

Step 4: Start roscore

To run the code with roslaunch, we need roscore running. You can do that either by typing roscore into the shell or by starting a simulation from the Simulations tab.

Step 5: rosrun and roslaunch

First, we try to directly execute the test file using rosrun video_qa test, but we get an error message:

[rosrun] Couldn’t find executable named test below /home/user/catkin_ws/src/video_qa

That’s because the file does not have execute permission. We can make it executable by typing:

chmod +x video_qa/src/test

Then we run rosrun video_qa test again. It works!

Then we try roslaunch video_qa test.launch. Strangely, it also works…WHY?

It turns out that if we name the file test.py while keeping type="test" in the launch file, we reproduce the problem from the question: roslaunch cannot locate the node. To solve it, we rename the file to test.py and also change the type attribute in the launch file to test.py.

TAKEAWAY:

Always name your Python file with its extension (e.g. test.py). The type attribute in the launch file refers to the name of the executable: for compiled code that is the binary produced during compilation, but a Python script is not compiled, so the executable is the script file itself. If you use Python, put the file name with its extension (e.g. test.py) in the type attribute, and remember to make the file executable with chmod +x.

I hope you enjoyed today's ROS Q&A video. If you are interested in learning ROS, please check the links below for the Robot Ignite Academy.

 

// RELATED LINKS
▸ ROS answers question: https://answers.ros.org/question/291235/roslaunch-cant-locate-node-but-rosrun-works-fine/
▸ ROS Development Studio: https://goo.gl/273g12
▸ Robot Ignite Academy: https://goo.gl/LsMMqh
▸ ROS Basics in 5 days online course: https://goo.gl/TDVG1j

[ROS Q&A] 122 – How to show laser data on Rviz

 

In this video, we are going to answer a question found at ROS answers (https://goo.gl/ws2Whw). The person is asking why he cannot show the laser data of a fake laser, given that he is publishing the proper frame and even a static transform from the laser to the map.
A very simple and surprising answer!

Step 1. Create project

Instead of installing and configuring a ROS environment locally, we can easily reproduce this question in ROS Development Studio. You can register an account for free now! After signing up and creating a new project, we can clone the repo provided in the question into the workspace. Open a Shell window in the Tools tab and type the following command:

$ cd catkin_ws/src 

$ git clone https://github.com/erenaud3/fakeLaserScan 

You can use IDE tools to check the source tree.

 

Step 2. Compile and Run

It seems the code is mostly correct. Let’s compile it first by typing the following commands in the shell.

$ roscd; cd ..

$ catkin_make

$ source devel/setup.bash

Then we launch the fakeLaserScan.launch file in the package.

$ roslaunch beginner_tutorials fakeLaserScan.launch

Step 3. Check if the topic is publishing correctly

To check if the node we just launched is really publishing something, we open another shell and type:

$ rostopic echo -n1 /fakeScan

The fakeScan topic is publishing.

Let’s check it further with RViz by typing:

$ rosrun rviz rviz

1. In order to see RViz running in the RDS, open the Graphical Tools in the Tools tab.

2. Add a LaserScan visualization.

3. The visualization is not showing due to some problems with the frame.

4. Let's change the fixed frame to fake_laser_frame.

Now everything looks fine, but the laser scan is still not showing.

How come?

It turns out, in this particular example, it’s not showing simply because the size of the visualization is too small to be seen. Let’s change the size.

Takeaway Today:

  1. You can check if a topic is publishing correctly by typing rostopic echo /TOPICNAME (if you prefer to do the check from code, see the Python sketch right after this list).
  2. If you want to check the topic in RViz, remember to select the correct topic type, the correct fixed frame, and a proper size!
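
As mentioned in the first takeaway, you can also do the publishing check from a small rospy node instead of the command line. Here is a minimal sketch (an illustration, not part of the original video) that waits for a single message on /fakeScan, assuming the topic publishes sensor_msgs/LaserScan as in the repo above:

#! /usr/bin/env python

# Minimal check node: waits for one message on /fakeScan, the rospy
# equivalent of "rostopic echo -n1 /fakeScan".
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node('fake_scan_check')
try:
    msg = rospy.wait_for_message('/fakeScan', LaserScan, timeout=5.0)
    print("Got a scan with %d ranges in frame %s" % (len(msg.ranges), msg.header.frame_id))
except rospy.ROSException:
    print("No message received on /fakeScan within 5 seconds")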

 

If you want to learn more about ROS, we have ROS courses for different levels available in Robot Ignite Academy. You can preview any of them for free.

 

Edit by Tony Huang

// RELATED LINKS
▸ ROS answers question: https://goo.gl/ws2Whw
▸ ROS Development Studio: https://goo.gl/Mx18Zn
▸ Robot Ignite Academy: https://goo.gl/XFkCpk
▸ ROS Navigation in 5 days online course: https://goo.gl/AV4hv4

My Robotic Manipulator #2: URDF & XACRO

URDF & XACRO

In this video we'll improve the URDF code of our robotic manipulator using XACRO. Starting from the URDF model, we will finish the robot, creating all of its joints and links, and visualize it using RViz. By the end of the video, we will have the finished model, a launch file to visualize it, an RViz configuration, and some XACRO macros to help you in the development of the robot.



[ROS Q&A] 121 – How to Write Robot Poses to a File

In this video we are going to see how to write robot Poses into a file. This will allow you to keep track of the different positions that a mobile robot is in while it is moving.

 

▸ Get the code of the video by clicking on this link: https://goo.gl/DBdKoz
(You need an account on RDS. Once you click on the link, the whole code will appear on your RDS account ready to use)

Follow these steps to reproduce the project as shown in the video

Step 1

  • Go to https://rds.theconstructsim.com/
  • Create a new project. Choose an informative name and add some description in the description field (for this tutorial we are using the project name poses_to_file).
  • Open the project. This will bring you to the RDS environment with two menu options: Tools and Simulations.
  • Start the IDE by selecting the IDE option from the Tools menu. The IDE is a very useful tool; it lets you navigate and edit files/directories. Try navigating around and verify that the directory layout of the project is as shown below:
    . 
    ├── ai_ws
    ├── catkin_ws
    │   ├── build
    │   ├── devel
    │   └── src
    ├── notebook_ws
    │   ├── default.ipynb
    │   └── images
    └── simulation_ws
        ├── build
        ├── devel
        └── src

Note that we use the simulation_ws directory to contain all files related to simulations.
Files not related to simulations go into catkin_ws (Python scripts, launch files, etc.).

 


Step 2

We will create a new catkin package inside the catkin_ws/src directory:

$ cd catkin_ws/src

$ catkin_create_pkg poses_to_file rospy

Next we will create a Python script to write the poses to a file. We will create this script inside the newly created package.

$ cd poses_to_file/src

$ touch poses_to_file.py

We have now created an empty script file named poses_to_file.py. We also need to make it executable. Use the following command to make it executable:

$ chmod +x poses_to_file.py

Verify the permissions by using the ls Unix command:

$ ls -l


Step 3

We will now write the program inside the python script.

The script contains a class named SavePoses with the following member functions:

  • __init__
  • sub_callback
  • write_to_file

The role of the __init__ function is to initialize a Pose object (_pose) and a dictionary (poses_dict) to hold the poses. We also create the subscriber to the /odom topic there.

The function sub_callback is the callback for the subscriber we create in __init__. Inside this callback we receive the current odometry message and copy its pose into the _pose variable.

The function write_to_file is tasked with the following:

  • initializing the dictionary keys pose1, pose2 and pose3
  • opening a file in write mode
  • iterating through the keys of the dictionary and writing a formatted string containing the value corresponding to each key

The complete code is as follows:

#! /usr/bin/env python

import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Pose
import time


class SavePoses(object):
    def __init__(self):
        
        self._pose = Pose()
        self.poses_dict = {"pose1":self._pose, "pose2":self._pose, "pose3":self._pose}
        self._pose_sub = rospy.Subscriber('/odom', Odometry , self.sub_callback)
        self.write_to_file()

    def sub_callback(self, msg):
        
        self._pose = msg.pose.pose
    
    def write_to_file(self):
        
        # record three poses, 5 seconds apart, while the robot is moving
        time.sleep(5)
        self.poses_dict["pose1"] = self._pose
        rospy.loginfo("Written pose1")
        time.sleep(5)
        self.poses_dict["pose2"] = self._pose
        rospy.loginfo("Written pose2")
        time.sleep(5)
        self.poses_dict["pose3"] = self._pose
        rospy.loginfo("Written pose3")
            
        
        # write each recorded pose to the file in a readable format
        with open('poses.txt', 'w') as file:
            
            for key, value in self.poses_dict.items():
                if value:
                    file.write(str(key) + ':\n----------\n' + str(value) + '\n===========\n')
                    
        rospy.loginfo("Written all Poses to poses.txt file")
        


if __name__ == "__main__":
    rospy.init_node('spot_recorder', log_level=rospy.INFO) 
    save_spots_object = SavePoses()
    #rospy.spin() # maintain the node alive

Before we run this script, we need to start a simulation that will publish odometry messages on the /odom topic. Use the Simulations menu and select Turtlebot 2 to start a simulation. Once the simulation is running you will see something like the following:

Now we can run the script with the following command:

$ rosrun poses_to_file poses_to_file.py

Let the script finish writing the poses to the file. You will find a new file called poses.txt inside the src directory (the script opens poses.txt with a relative path, so it is created in the directory from which you ran rosrun). This file will contain the poses as expected.

That's about it. Enjoy learning ROS with Robot Ignite Academy.

 

// RELATED LINKS

▸ ROS Development Studio: https://goo.gl/Mx18Zn
▸ Robot Ignite Academy: https://goo.gl/XFkCpk
▸ ROS Navigation in 5 days online course: https://goo.gl/AV4hv4

[ROS Q&A] 120 – How To Convert a PointCloud Into a Laser Scan

In this video we are going to see how to convert a PointCloud into a laser scan. You can later use that laser scan as if it came from a lidar for robot navigation. This allows you to do navigation with a very cheap sensor.

▸ Get the code of the video by clicking on this link: https://goo.gl/z3fNCs
(You need an account on ROSDS. Once you click on the link, the whole code will appear on your ROSDS account ready to use. Watch the video for additional instructions)

Below are the step by step instructions for creating the project on ROSDS.

NOTE: if you are not using ROSDS but your local ROS installation, you need to have the pointcloud_to_laserscan ROS package installed (find the link in the related links section below).

Step 1

  • Head to ROS Development Studio and create a new project.
  • Provide a suitable project name and some useful description (we are using the name “pointcloud to laser”).
  • Open the project (this will take a few seconds).
  • Once the project is loaded, run the IDE from the Tools menu. Also verify that the initial directory structure looks like the following:

.
├── ai_ws
├── catkin_ws
│   ├── build
│   ├── devel
│   └── src
├── notebook_ws
│   ├── default.ipynb
│   └── images
└── simulation_ws
    ├── build
    ├── devel
    └── src

Step 2

Simulate a TurtleBot 2 by choosing the Turtlebot 2 option in the Simulations menu.

Next we will create a package named pc2l (which stands for PointCloud2Laser). Open a shell, browse to the ~/catkin_ws/src directory and create the package:

$ cd ~/catkin_ws/src

$ catkin_create_pkg pc2l

Next, we need a launch file that will start a node to convert the pointcloud data into laser scan data (using the pointcloud_to_laserscan package). We will create a launch directory and then a launch file named start.launch inside it. We can use the IDE to do this.

Create a new file (inside the launch folder) named start.launch and enter the following contents (note: the pointcloud_to_laserscan package needs to be installed on the system):

<launch>
    <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
      <remap from="cloud_in" to="/camera/depth/points"/>
      <remap from="scan" to="/camera/scan" />
      <rosparam>
        target_frame: camera_link
        transform_tolerance: 0.01
        min_height: 0.0
        max_height: 1.0
        
        angle_min: -1.5708
        angle_max: 1.5708
        angle_increment: 0.0087
        scan_time: 0.3333
        range_min: 0.45
        range_max: 4.0
        use_inf: true
        
        # concurrency_level affects the number of pointclouds queued for processing and the number of threads used
        # 0: Detect number of cores
        # 1: Single threaded
        # 2: inf : Parallelism level
        concurrency_level: 1
      </rosparam>
    </node>
</launch>

This launch file sets up various parameters, which are briefly described below:

cloud_in : the topic that publishes the pointcloud data (remapped here to /camera/depth/points).
scan : the topic (created by pointcloud_to_laserscan) that contains the converted scan data (remapped here to /camera/scan).
target_frame : the frame of reference for the laser scan.
min_height and max_height : the height range taken into account when converting the pointcloud.
angle_min and angle_max : the range of angles over which the laser scan is composed.
angle_increment : the angular step between consecutive beams of a scan.
scan_time : the time interval between two published scans.
range_min and range_max : the range of distances captured in the scans.
use_inf : a point detected farther than range_max is encoded as an inf point.
concurrency_level : controls how many threads are used for the pointcloud-to-laserscan processing.
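
If you want to sanity-check the converted scan without opening RViz, here is a minimal rospy sketch (an illustration, not part of the original video) that grabs one message from the /camera/scan topic remapped in the launch file above and prints its basic properties, so you can compare them with the parameters you set:

#! /usr/bin/env python

# Grab one converted scan from /camera/scan and print its properties.
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node('pc2l_check')
scan = rospy.wait_for_message('/camera/scan', LaserScan, timeout=10.0)
print("frame           : %s" % scan.header.frame_id)
print("angle range     : %.4f to %.4f rad" % (scan.angle_min, scan.angle_max))
print("angle increment : %.4f rad (%d beams)" % (scan.angle_increment, len(scan.ranges)))
print("range limits    : %.2f to %.2f m" % (scan.range_min, scan.range_max))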

Step 3

Now we can run the launch file, visualize the converted laser scan data, and compare it with the laser scan data provided by the laser scanner mounted on the robot.

In the Shell type the following command to run the launch file

$ roslaunch pc2l start.launch

This command will start publishing a new topic named /camera/scan. To verify it, start a new shell and run the command:

$ rostopic list

To visualize the data published on the /camera/scan topic we will run RViz. In the shell, type the following command:

$ rosrun rviz rviz

To see the RViz window, open the Graphical Tools from the Tools menu.

 

Once RViz loads, we fix the errors by choosing an appropriate value for the Fixed Frame parameter. Then we add the following new displays:

  • LaserScan display 1: select the topic /camera/scan
  • LaserScan display 2: select the topic /kobuki/laser/scan
  • RobotModel: add a robot model to visualize the Turtlebot inside the RViz window

After we are done with all the settings, we will see the following:

There are two laser scans: one from /camera/scan (white colored) and the other from /kobuki/laser/scan. We are done at this point.

// RELATED LINKS

▸ Point Cloud To LaserScan ROS package: http://wiki.ros.org/pointcloud_to_laserscan
▸ ROS Development Studio: https://goo.gl/tnjCD5
▸ Robot Ignite Academy: https://goo.gl/8EhkWn
▸ ROS Navigation in 5 days online course: https://goo.gl/mkRYiR

[ROS Q&A] 119 – ROS Mapping Tutorial. How To Provide a Map

 

In this ROS Mapping tutorial video we will see how to provide a previously created and saved map through topics, either using the command line or a ROS launch file.

The topic explained in this video is part of the ROS Navigation in 5 Days course that you can find in the Robot Ignite Academy. You will find the links to both the academy and the navigation course right below.

Follow the steps to recreate the project as shown in the video

  • Go to Robot Ignite Academy. You will need to sign up if you have not registered already.
  • In the home page, choose the course ROS Navigation in 5 Days

 

Once the course page opens, select the Start Course option.

You should be greeted with a workspace like the following (it will take a while to load the full page):

  • Open the Units menu from the bottom left of the screen and select Unit 2: Mapping.

  • Start the gmapping package by executing the following command in a SHELL (there are four SHELL tabs at the bottom of the screen):

    $ roslaunch turtlebot_navigation_gazebo gmapping_demo.launch

  • We will need an RVIZ window to see the map visualization. Open a graphical interface window (see image).

  • Enter the following command in another SHELL to start the RVIZ visualization

    $ rosrun rviz rviz

The above command will load RVIZ in the graphical interface window as shown. We will add the following displays to the RVIZ window (as shown in the image):

  • RobotModel
  • LaserScan
  • Map

  • Make the following settings:
    • Map: set Topic to /map
    • LaserScan: set Topic to /kobuki/laser/scan

Once the above settings are made, you should start to see the visualization in the RVIZ window

  • Now that the mapping package is working, we will move the robot around to capture more data. To do so we will use keyboard teleoperation; execute the following command in a SHELL:

$ roslaunch turtlebot_teleop keyboard_teleop.launch

Use the keys I (forward), J (left), L (right) and M (backwards) to move the robot around. As you move the robot around, the map in RVIZ should grow.

  • To save the current map, execute the following command in a new SHELL

$ cd ~/catkin_ws/src

$ mkdir maps

$ cd maps

$ rosrun map_server map_saver -f my_map

$ ls

You should see two new files created inside the ~/catkin_ws/src/maps directory

  • my_map.pgm
  • my_map.yaml
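
The .pgm file is the map image and the .yaml file holds its metadata. If you are curious about what map_server will read from the YAML file, here is a minimal Python sketch (an illustration, assuming the standard map_server fields image, resolution and origin) that prints the metadata; run it from the maps directory:

#! /usr/bin/env python

# Print the map metadata that map_server reads from my_map.yaml.
import yaml

with open('my_map.yaml') as f:
    map_info = yaml.safe_load(f)

print("image file : %s" % map_info['image'])
print("resolution : %s m/pixel" % map_info['resolution'])
print("origin     : %s" % (map_info['origin'],))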

Now we can load this map to the map_server with the following command

$ rosrun map_server map_server my_map.yaml

The above command will start a map_server node with the map data in it. There is an alternative way to provide this map: using a launch file. To do that, let's create a new package called provide_map. Execute the following commands in a SHELL:

$ cd ~/catkin_ws/src

$ catkin_create_pkg provide_map

$ cd provide_map

$ mkdir launch

$ mv ../maps .

$ cd launch

$ touch provide_map.launch

The above commands create a new package named provide_map, create a directory named launch inside the new package, move the maps directory (where we saved the maps) into the package, and lastly create a launch file named provide_map.launch. Put the following content inside the launch file:

<launch> 
  <arg name="map_file" value="/home/user/catkin_ws/src/provide_map/maps/my_map.yaml">
  </arg> 
  <node pkg="map_server" type="map_server" name="map_server" args="$(arg map_file)"> 
  </node> 
</launch>

 

To test that everything works well, we can run this launch file. Close the running instance of RVIZ and, in the same SHELL, use the following command:

$ roslaunch provide_map provide_map.launch

You should see the RVIZ window load with the map. With this, we finish this part.
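
If you also want to verify from code that the map is really being provided, here is a minimal rospy sketch (an illustration, not part of the original video) that reads the occupancy grid from the /map topic, which map_server publishes as a latched topic:

#! /usr/bin/env python

# Read the occupancy grid published by map_server on /map and print its size.
import rospy
from nav_msgs.msg import OccupancyGrid

rospy.init_node('map_check')
grid = rospy.wait_for_message('/map', OccupancyGrid, timeout=10.0)
print("map size   : %d x %d cells" % (grid.info.width, grid.info.height))
print("resolution : %.3f m/cell" % grid.info.resolution)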

 

// RELATED LINKS

▸ Robot Ignite Academy: https://goo.gl/e7AQtb
▸ ROS Navigation in 5 Days online course: https://goo.gl/U5MGrs
