[ROS Projects] OpenAI with Moving Cube Robot in Gazebo Step-by-Step Part2


 

In this new ROS project you will learn, step by step, how to create a moving cube robot that learns to move using the OpenAI environment.

This second video covers the basics of reinforcement learning and how to connect to the various systems of the robot to get the state, perform actions, and calculate rewards.

ROS Development Studio
OpenAI Course

Moving Cube Git: https://bitbucket.org/theconstructcore/moving_cube/src/master/
Moving Cube Training AI Git: https://bitbucket.org/theconstructcore/moving_cube_training/src/master/

If you haven’t followed along, please check the previous post at the link below.

[irp posts=”9744″ name=”[ROS Projects] OpenAI with Moving Cube Robot in Gazebo Step-by-Step Part1″]

 

Step 1. Clone the simulation

To make sure we are working with the same project, please run the following commands:

cd ~/simulation_ws/src
git clone https://bitbucket.org/theconstructcore/moving_cube.git

NOTICE: Please delete the previously created code if you have problems compiling.

In the /moving_cube/moving_cube_description/urdf/moving_cube.urdf file, please uncomment the following part. We’ll need this part to publish the odom topic.

    <gazebo>
        <plugin name="p3d_base_controller" filename="libgazebo_ros_p3d.so">
            <robotNamespace>/moving_cube</robotNamespace>
            <alwaysOn>true</alwaysOn>
            <updateRate>50.0</updateRate>
            <bodyName>cube_body</bodyName>
            <topicName>odom</topicName>
            <gaussianNoise>0.01</gaussianNoise>
            <frameName>world</frameName>
            <xyzOffsets>0 0 0</xyzOffsets>
            <rpyOffsets>0 0 0</rpyOffsets>
        </plugin>
    </gazebo>
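
Once the simulation is running, you can check that the plugin is actually publishing the odometry. Here is a minimal Python sketch for that check (it simply mirrors what the utility class created in the next step does; it is not part of the project code):

#!/usr/bin/env python
# Quick check that the p3d plugin is publishing /moving_cube/odom (illustrative sketch only).
import rospy
from nav_msgs.msg import Odometry

rospy.init_node('odom_check', anonymous=True)
msg = rospy.wait_for_message('/moving_cube/odom', Odometry, timeout=5.0)
rospy.loginfo("Got odometry, position: " + str(msg.pose.pose.position))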

Step 2. Create a training package

To train the robot, let’s create a package for training under the catkin_ws/src

cd ~/catkin_ws/src
catkin_create_pkg my_moving_cube_training_pkg rospy

Then we’ll create a script folder inside the package and put a file called cube_rl_utils.py inside it with the following content

#!/usr/bin/env python

import time
import rospy
import math
import copy
import numpy
from std_msgs.msg import Float64
from sensor_msgs.msg import JointState
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Point
from tf.transformations import euler_from_quaternion

class CubeRLUtils(object):

    def __init__(self):

        self.check_all_sensors_ready()
        
        rospy.Subscriber("/moving_cube/joint_states", JointState, self.joints_callback)
        rospy.Subscriber("/moving_cube/odom", Odometry, self.odom_callback)

        self._roll_vel_pub = rospy.Publisher('/moving_cube/inertia_wheel_roll_joint_velocity_controller/command', Float64, queue_size=1)
        
        self.check_publishers_connection()
        
    
    def check_all_sensors_ready(self):
        self.disk_joints_data = None
        while self.disk_joints_data is None and not rospy.is_shutdown():
            try:
                self.disk_joints_data = rospy.wait_for_message("/moving_cube/joint_states", JointState, timeout=1.0)
                rospy.loginfo("Current moving_cube/joint_states READY=>"+str(self.disk_joints_data))

            except:
                rospy.logerr("Current moving_cube/joint_states not ready yet, retrying for getting joint_states")

        self.cube_odom_data = None
        while self.cube_odom_data is None and not rospy.is_shutdown():
            try:
                self.cube_odom_data = rospy.wait_for_message("/moving_cube/odom", Odometry, timeout=1.0)
                rospy.loginfo("Current /moving_cube/odom READY=>" + str(self.cube_odom_data))

            except:
                rospy.logerr("Current /moving_cube/odom not ready yet, retrying for getting odom")
        rospy.loginfo("ALL SENSORS READY")
        
    def check_publishers_connection(self):
        """
        Checks that all the publishers are working
        :return:
        """
        rate = rospy.Rate(10)  # 10hz
        while (self._roll_vel_pub.get_num_connections() == 0 and not rospy.is_shutdown()):
            rospy.loginfo("No susbribers to _roll_vel_pub yet so we wait and try again")
            try:
                rate.sleep()
            except rospy.ROSInterruptException:
                # This is to avoid error when world is rested, time when backwards.
                pass
        rospy.loginfo("_base_pub Publisher Connected")

        rospy.loginfo("All Publishers READY")
        
    def joints_callback(self, data):
        self.joints = data

    def odom_callback(self, data):
        self.odom = data

    
    # Reinforcement Learning Utility Code
    def move_joints(self, roll_speed):
        
        joint_speed_value = Float64()
        joint_speed_value.data = roll_speed
        rospy.loginfo("Single Disk Roll Velocity>>"+str(joint_speed_value))
        self._roll_vel_pub.publish(joint_speed_value)
        
    def get_cube_state(self):
        
        # We convert from quaternions to euler
        orientation_list = [self.odom.pose.pose.orientation.x,
                            self.odom.pose.pose.orientation.y,
                            self.odom.pose.pose.orientation.z,
                            self.odom.pose.pose.orientation.w]
                            
        roll, pitch, yaw = euler_from_quaternion(orientation_list)
        
        # We get the distance from the origin
        start_position = Point()
        start_position.x = 0.0
        start_position.y = 0.0
        start_position.z = 0.0
        
        distance = self.get_distance_from_point(start_position,
                                                self.odom.pose.pose.position)
        
        cube_state = [
                        round(self.joints.velocity[0],1),
                        round(distance,1),
                        round(roll,1),
                        round(pitch,1),
                        round(yaw,1)
                    ]
        
        return cube_state

    
    def observation_checks(self, cube_state):
        
        
        # Maximum distance permitted to travel from the origin, in meters
        max_distance=2.0
        
        if (cube_state[1] > max_distance): 
            rospy.logerr("Cube Too Far==>"+str(cube_state[1]))
            done = True
        else:
            rospy.loginfo("Cube NOT Too Far==>"+str(cube_state[1]))
            done = False
        
        return done

    def get_distance_from_point(self, pstart, p_end):
        """
        Given a Vector3 Object, get distance from current position
        :param p_end:
        :return:
        """
        a = numpy.array((pstart.x, pstart.y, pstart.z))
        b = numpy.array((p_end.x, p_end.y, p_end.z))

        distance = numpy.linalg.norm(a - b)

        return distance

    def get_reward_for_observations(self, state):

        # We reward it for lower speeds and distance traveled
        
        speed = state[0]
        distance = state[1]

        # Positive Reinforcement
        reward_distance = distance * 10.0
        # Negative Reinforcement for magnitude of speed
        reward_for_efective_movement = -1 * abs(speed) 
        
        reward = reward_distance + reward_for_efective_movement

        rospy.loginfo("Reward_distance="+str(reward_distance))
        rospy.loginfo("Reward_for_efective_movement= "+str(reward_for_efective_movement))
        
        return reward



def cube_rl_systems_test():
    
    rospy.init_node('cube_rl_systems_test_node', anonymous=True, log_level=rospy.INFO)
    cube_rl_utils_object = CubeRLUtils()
    
    rospy.loginfo("Moving to Speed==>80")
    cube_rl_utils_object.move_joints(roll_speed=80.0)
    time.sleep(2)
    rospy.loginfo("Moving to Speed==>-80")
    cube_rl_utils_object.move_joints(roll_speed=-80.0)
    time.sleep(2)
    rospy.loginfo("Moving to Speed==>0.0")
    cube_rl_utils_object.move_joints(roll_speed=0.0)
    time.sleep(2)
    
    cube_state = cube_rl_utils_object.get_cube_state()
    done = cube_rl_utils_object.observation_checks(cube_state)
    reward = cube_rl_utils_object.get_reward_for_observations(cube_state)
    
    rospy.loginfo("Done==>"+str(done))
    rospy.loginfo("Reward==>"+str(reward))
    

if __name__ == "__main__":
    cube_rl_systems_test()

In this post, we’ll focus on the cube_rl_systems_test() function. It uses the class to move the cube, get the observation, calculate the reward, and check whether the episode is done. To run it, you have to start the simulation first: please go to Simulations -> Select launch file -> main.launch.

NOTICE: You have to unpause the simulation by clicking the play arrow in the simulation window.

Then run the following commands to execute the script:

cd ~/catkin_ws/src/my_moving_cube_training_pkg/script
chmod +x cube_rl_utils.py
cd ~/catkin_ws
source devel/setup.bash
rosrun my_moving_cube_training_pkg cube_rl_utils.py

You should see the cube moving around while the reward and the done state are calculated.

 

 

Edit by: Tony Huang

[irp posts=”9976″ name=”ROS Projects OpenAI with Moving Cube Robot in Gazebo Step-by-Step Part3″]

[ROS Projects] OpenAI with Moving Cube Robot in Gazebo Step-by-Step Part1


 

In this new ROS project you will learn, step by step, how to create a moving cube robot that learns to move using the OpenAI environment.

 

Part 1

This first video is for learning the creation of the URDF and control systems.

Moving Cube Git: https://bitbucket.org/theconstructcore/moving_cube/src/master/

Step 1. Create a project in ROS Development Studio(ROSDS)

We’ll use ROSDS throughout this project in order to avoid setting up the environment, managing packages, etc. You can create a free account here if you don’t have one yet.

Step 2. Create package

Since this is a simulation, let’s create a package called my_moving_cube_description under the simulation_ws.

cd ~/simulation_ws/src
catkin_create_pkg my_moving_cube_description rospy

We’ll start by building the URDF description of the robot. To do that, we’ll create a new folder called urdf under the my_moving_cube_description directory and create a file called my_moving_cube.urdf inside it with the following initial content. The robot tag indicates the name of the robot – my_moving_cube.

<robot name="my_moving_cube">
...
</robot>

Then let’s create the first link inside the robot. It includes three parts:

  1. inertial: defines the physical properties of the link. You can calculate the inertia of an object with this tool: rosrun spawn_robot_tools_pkg inertial_calculator.py (see the short sanity check after the link snippet below).
  2. collision: defines the collision properties used when the object interacts with other objects in the simulation.
  3. visual: defines how the object is visually rendered in the simulation.
    <link name="cube_body">
        <inertial>
            <origin xyz="0 0 0" rpy="0 0 0"/>
            <mass value="0.5" />
            <inertia ixx="0.00333333333333" ixy="0.0" ixz="0.0" iyy="0.00333333333333" iyz="0.0" izz="0.00333333333333"/>
        </inertial>
        <collision>
            <origin xyz="0 0 0" rpy="0 0 0"/>
            <geometry>
                <box size="0.2 0.2 0.2"/>
            </geometry>
        </collision>
	    <visual>
	      <geometry>
	        <box size="0.2 0.2 0.2"/>
	      </geometry>
	    </visual>
    </link>
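
As a quick sanity check, the inertia values above match the standard solid-box formula; the short Python sketch below (not part of the project code) reproduces them. The same kind of check with the solid-cylinder formulas reproduces the values used for the inertia wheel link later in this post.

# Sanity check for the cube_body inertia values (illustrative sketch only).
# Solid box of mass m and side lengths x, y, z:
#   ixx = m*(y^2 + z^2)/12, iyy = m*(x^2 + z^2)/12, izz = m*(x^2 + y^2)/12
m = 0.5   # mass in kg, as in the <mass> tag
s = 0.2   # side length in m, as in the <box size> tag
ixx = iyy = izz = m * (s ** 2 + s ** 2) / 12.0
print(ixx)  # 0.00333333333333..., matching the <inertia> tag above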

You also need to define the material properties after the link if you want to use it in Gazebo. (NOTICE: the gazebo reference must have the same name as the link.)

    <gazebo reference="cube_body">
        <kp>1000000.0</kp>
        <kd>1000000.0</kd>
        <mu1>1000000.0</mu1>
        <mu2>1000000.0</mu2>
        <material>Gazebo/Black</material>
    </gazebo>

Then we can create a spawn_moving_cube.launch file under the my_moving_cube_description/launch directory with the following content to spawn the cube.

<?xml version="1.0" encoding="UTF-8"?>
<launch>
    <include file="$(find spawn_robot_tools_pkg)/launch/spawn_robot_urdf.launch">
        <arg name="x" default="0.0" />
        <arg name="y" default="0.0" />
        <arg name="z" default="0.11" />
        <arg name="roll" default="0"/>
        <arg name="pitch" default="0"/>
        <arg name="yaw" default="0.0" />
        <arg name="urdf_robot_file" default="$(find my_moving_cube_description)/urdf/my_moving_cube.urdf" />
        <arg name="robot_name" default="my_moving_cube" />
    </include>
</launch>

Now, go to Simulations -> –Empty– to launch an empty world. Then go to Tools -> Shell to run the command

roslaunch my_moving_cube_description spawn_moving_cube.launch

You should now see the cube appear in the empty world like this:
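
If you want to confirm from code that the model was actually spawned, a small sketch like the one below (using the standard /gazebo/model_states topic; not part of the project code) lists the models currently in the world:

#!/usr/bin/env python
# List the models currently spawned in Gazebo (illustrative sketch only).
import rospy
from gazebo_msgs.msg import ModelStates

rospy.init_node('list_models', anonymous=True)
models = rospy.wait_for_message('/gazebo/model_states', ModelStates, timeout=5.0)
print(models.name)  # should include the cube model spawned above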

Similarly, we can add another link called inertia_wheel_roll:

    <link name="inertia_wheel_roll">
        <inertial>
            <origin xyz="0 0 0" rpy="0 0 0"/>
            <mass value="0.5" />
            <inertia ixx="0.000804166666667" ixy="0.0" ixz="0.0" iyy="0.000804166666667" iyz="0.0" izz="0.0016"/>
        </inertial>
        <collision>
            <origin xyz="0 0 0" rpy="0 0 0"/>
            <geometry>
                <cylinder radius="0.08" length="0.01"/>
            </geometry>
        </collision>
	    <visual>
	      <geometry>
	        <cylinder radius="0.08" length="0.01"/>
	      </geometry>
	    </visual>
    </link>
    
    <gazebo reference="inertia_wheel_roll">
        <kp>1000.0</kp>
        <kd>1000.0</kd>
        <mu1>0.5</mu1>
        <mu2>0.5</mu2>
        <material>Gazebo/Red</material>
    </gazebo>

Then we also need to define the joint that connects these two links:

    <joint name="inertia_wheel_roll_joint" type="continuous">
        <origin xyz="0.1 0.0 0.0" rpy="0 1.57 0"/>
        <parent link="cube_body"/>
        <child  link="inertia_wheel_roll"/>
        <limit effort="200" velocity="1000.0"/>
        <axis xyz="0 0 1"/>
    </joint>

If you launch it again, you should see the red cylinder appear.

Step 3. Make the robot move

First, we need to include the gazebo_ros_control plugin in the URDF:

    <gazebo>
        <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
            <robotNamespace>/my_moving_cube</robotNamespace>
        </plugin>
    </gazebo>

We’ll then add a transmission block to actuate the robot. (NOTICE: the joint name must match the joint defined above.)

    <transmission name="inertia_wheel_roll_joint_trans">
      <type>transmission_interface/SimpleTransmission</type>
      <joint name="inertia_wheel_roll_joint">
        <hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
      </joint>
      <actuator name="inertia_wheel_roll_jointMotor">
        <hardwareInterface>hardware_interface/EffortJointInterface</hardwareInterface>
        <mechanicalReduction>1</mechanicalReduction>
      </actuator>
    </transmission>

Then we create a moving_cube.yaml file under my_moving_cube_description/config to define the parameters for the controllers:

# .yaml config file
#
# The PID gains and controller settings must be saved in a yaml file that gets loaded
# to the param server via the roslaunch file (moving_cube_control.launch).

my_moving_cube:
  # Publish all joint states -----------------------------------
  # Creates the /joint_states topic necessary in ROS
  joint_state_controller:
    type: joint_state_controller/JointStateController
    publish_rate: 30

  # Effort Controllers ---------------------------------------
  inertia_wheel_roll_joint_velocity_controller:
    type: effort_controllers/JointVelocityController
    joint: inertia_wheel_roll_joint
    pid: {p: 1.0, i: 0.0, d: 0.0}

Finally, create a new launch file called moving_cube_control.launch under the launch folder to launch the controllers:

<?xml version="1.0" encoding="UTF-8"?>
<launch>

  <rosparam file="$(find my_moving_cube_description)/config/moving_cube.yaml"
            command="load"/>

  <node name="robot_state_publisher_moving_cube" pkg="robot_state_publisher" type="robot_state_publisher"
        respawn="false" output="screen">
            <param name="publish_frequency" type="double" value="30.0" />
            <param name="ignore_timestamp" type="bool" value="true" />
            <param name="tf_prefix" type="string" value="moving_cube" />
            <remap from="/joint_states" to="/moving_cube/joint_states" />
        </node>


  <node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false"
        output="screen" args="--namespace=/my_moving_cube
                              joint_state_controller
                              inertia_wheel_roll_joint_velocity_controller">
  </node>

</launch>

Let’s spawn the cube and launch the controller

roslaunch my_moving_cube_description spawn_moving_cube.launch
roslaunch my_moving_cube_description moving_cube_control.launch

Then we can kick the robot and make it move by publishing a command to the controller topic like this:

rostopic pub /my_moving_cube/inertia_wheel_roll_joint_velocity_controller/command std_msgs/Float64 "data: 80.0"
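
If you prefer to send the same command from a Python node instead of the command line, a minimal sketch (assuming the controllers launched above are running) could look like this:

#!/usr/bin/env python
# Publish a single velocity command to the inertia wheel controller (illustrative sketch only).
import rospy
from std_msgs.msg import Float64

rospy.init_node('kick_cube', anonymous=True)
pub = rospy.Publisher('/my_moving_cube/inertia_wheel_roll_joint_velocity_controller/command',
                      Float64, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect to the controller
pub.publish(Float64(data=80.0))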

Congratulations! Your cube robot moved a bit!

 

Edit by: Tony Huang

 

[irp posts=”9860″ name=”ROS Projects OpenAI with Moving Cube Robot in Gazebo Step-by-Step Part2″]

ROS Developers LIVE-Class #22: How to create an OpenAI environment for your robotic problem


In this ROS LIVE-Class we’re going to create an OpenAI environment that can interact with your simulated robot in Gazebo, using ROS.

We will see:

  • How to define the actions required for the task
  • How to define the observation space
  • How to build the _step function that is executed at each training step.

Every Tuesday at 18:00 CET/CEST.

This is a LIVE Class on how to develop with ROS. In Live Classes you practice with me at the same time that I explain, with the provided free ROS material.

IMPORTANT: Remember to be on time for the class because at the beginning of the class we will share the code with the attendants.

IMPORTANT 2: in order to start practicing quickly, we are using the ROS Development Studio for doing the practice. You will need a free account to attend the class. Go to http://rds.theconstructsim.com and create an account prior to the class.

[irp posts=”9725″ name=”ROS Developers LIVE-Class #21: A Basic Example of OpenAI with ROS”]


ROS Developers LIVE-Class #21: A Basic Example of OpenAI with ROS


 

In this ROS LIVE-Class we’re going to learn how to create our own Q-learning training for a cart pole in Gazebo using both OpenAI and ROS.

We will see:

  • How to create a Python training program that uses the OpenAI infrastructure
  • How to create the environment that allows us to get the observations and take actions on a Gazebo simulation
  • How to interface everything with ROS

Every Tuesday at 18:00 CET/CEST.

This is a LIVE Class on how to develop with ROS. In Live Classes you practice with me at the same time that I explain, with the provided free ROS material.

IMPORTANT: Remember to be on time for the class because at the beginning of the class we will share the code with the attendants.

IMPORTANT 2: in order to start practicing quickly, we are using the ROS Development Studio for doing the practice. You will need a free account to attend the class. Go to ROS Development Studio and create an account prior to the class.

[irp posts=”9720″ name=”ROS Developers LIVE-Class #22: How to create an OpenAI environment for your robotic problem”]

RELATED LINKS & RESOURCES

▸ OpenAI libraries: https://gym.openai.com/
ROS Development Studio
Next ROS Developers Live Classes

***



[ROS Projects] OpenAI with Hopper Robot in Gazebo Step-by-Step #Part2


We continue setting up OpenAI Gym to make a Hopper robot learn in the Gazebo simulator, using the ROS Development Studio.

If you didn’t follow the previous post, here are the links:

[ROS Projects] Create a Hopper Robot in Gazebo Step-by-Step

[ROS Projects] OpenAI with Hopper Robot in Gazebo Step-by-Step

Got a suggestion for the next steps of this project? We would love to hear it in the comments below :).

Part 2

Step 1. Environment setup

We’ll start by explaining how to create the gym environment for training. You can create a file called monoped_env.py under the my_hopper_training/src directory with the following content

#!/usr/bin/env python
'''
    By Miguel Angel Rodriguez <duckfrost@theconstructsim.com>
    Visit our website at ec2-54-246-60-98.eu-west-1.compute.amazonaws.com
'''
import gym
import rospy
import numpy as np
import time
from gym import utils, spaces
from geometry_msgs.msg import Pose
from gym.utils import seeding
from gym.envs.registration import register
from gazebo_connection import GazeboConnection
from joint_publisher import JointPub
from monoped_state import MonopedState
from controllers_connection import ControllersConnection

#register the training environment in the gym as an available one
reg = register(
    id='Monoped-v0',
    entry_point='monoped_env:MonopedEnv',
    timestep_limit=50,
    )


class MonopedEnv(gym.Env):

    def __init__(self):
        
        # We assume that a ROS node has already been created
        # before initialising the environment

        # gets training parameters from param server
        self.desired_pose = Pose()
        self.desired_pose.position.x = rospy.get_param("/desired_pose/x")
        self.desired_pose.position.y = rospy.get_param("/desired_pose/y")
        self.desired_pose.position.z = rospy.get_param("/desired_pose/z")
        self.running_step = rospy.get_param("/running_step")
        self.max_incl = rospy.get_param("/max_incl")
        self.max_height = rospy.get_param("/max_height")
        self.min_height = rospy.get_param("/min_height")
        self.joint_increment_value = rospy.get_param("/joint_increment_value")
        self.done_reward = rospy.get_param("/done_reward")
        self.alive_reward = rospy.get_param("/alive_reward")
        self.desired_force = rospy.get_param("/desired_force")
        self.desired_yaw = rospy.get_param("/desired_yaw")

        self.weight_r1 = rospy.get_param("/weight_r1")
        self.weight_r2 = rospy.get_param("/weight_r2")
        self.weight_r3 = rospy.get_param("/weight_r3")
        self.weight_r4 = rospy.get_param("/weight_r4")
        self.weight_r5 = rospy.get_param("/weight_r5")

        # establishes connection with the simulator
        self.gazebo = GazeboConnection()

        self.controllers_object = ControllersConnection(namespace="monoped")

        self.monoped_state_object = MonopedState(   max_height=self.max_height,
                                                    min_height=self.min_height,
                                                    abs_max_roll=self.max_incl,
                                                    abs_max_pitch=self.max_incl,
                                                    joint_increment_value=self.joint_increment_value,
                                                    done_reward=self.done_reward,
                                                    alive_reward=self.alive_reward,
                                                    desired_force=self.desired_force,
                                                    desired_yaw=self.desired_yaw,
                                                    weight_r1=self.weight_r1,
                                                    weight_r2=self.weight_r2,
                                                    weight_r3=self.weight_r3,
                                                    weight_r4=self.weight_r4,
                                                    weight_r5=self.weight_r5
                                                )

        self.monoped_state_object.set_desired_world_point(self.desired_pose.position.x,
                                                          self.desired_pose.position.y,
                                                          self.desired_pose.position.z)

        self.monoped_joint_pubisher_object = JointPub()
        


        """
        For this version, we consider 6 actions
        1-2) Increment/Decrement haa_joint
        3-4) Increment/Decrement hfe_joint
        5-6) Increment/Decrement kfe_joint
        """
        self.action_space = spaces.Discrete(6)
        self.reward_range = (-np.inf, np.inf)

        self._seed()

    # A function to initialize the random generator
    def _seed(self, seed=None):
        self.np_random, seed = seeding.np_random(seed)
        return [seed]
        
    # Resets the state of the environment and returns an initial observation.
    def _reset(self):

        # 0st: We pause the Simulator
        rospy.logdebug("Pausing SIM...")
        self.gazebo.pauseSim()

        # 1st: resets the simulation to initial values
        rospy.logdebug("Reset SIM...")
        self.gazebo.resetSim()

        # 2nd: We set the gravity to 0.0 so that the robot doesn't fall while the joints are reset
        # It also UNPAUSES the simulation
        rospy.logdebug("Remove Gravity...")
        self.gazebo.change_gravity(0.0, 0.0, 0.0)

        # EXTRA: Reset the joint state controllers because a sim reset doesn't reset TFs, generating time problems
        rospy.logdebug("reset_monoped_joint_controllers...")
        self.controllers_object.reset_monoped_joint_controllers()

        # 3rd: resets the robot to initial conditions
        rospy.logdebug("set_init_pose...")
        self.monoped_joint_pubisher_object.set_init_pose()

        # 5th: Check all subscribers work.
        # Get the state of the Robot defined by its RPY orientation, distance from
        # desired point, contact force and JointState of the three joints
        rospy.logdebug("check_all_systems_ready...")
        self.monoped_state_object.check_all_systems_ready()
        rospy.logdebug("get_observations...")
        observation = self.monoped_state_object.get_observations()

        # 6th: We restore the gravity to original
        rospy.logdebug("Restore Gravity...")
        self.gazebo.change_gravity(0.0, 0.0, -9.81)

        # 7th: pauses simulation
        rospy.logdebug("Pause SIM...")
        self.gazebo.pauseSim()

        # Get the State Discrete Stringuified version of the observations
        state = self.get_state(observation)

        return state

    def _step(self, action):

        # Given the action selected by the learning algorithm,
        # we perform the corresponding movement of the robot

        # 1st, decide which action corresponds to which joint is incremented
        next_action_position = self.monoped_state_object.get_action_to_position(action)

        # We move it to that pos
        self.gazebo.unpauseSim()
        self.monoped_joint_pubisher_object.move_joints(next_action_position)
        # Then we send the command to the robot and let it go
        # for running_step seconds
        time.sleep(self.running_step)
        self.gazebo.pauseSim()

        # We now process the latest data saved in the class state to calculate
        # the state and the rewards. This way we guarantee that they work
        # with the same exact data.
        # Generate State based on observations
        observation = self.monoped_state_object.get_observations()

        # finally we get an evaluation based on what happened in the sim
        reward,done = self.monoped_state_object.process_data()

        # Get the State Discrete Stringuified version of the observations
        state = self.get_state(observation)

        return state, reward, done, {}

    def get_state(self, observation):
        """
        We retrieve the Stringuified-Discrete version of the given observation
        :return: state
        """
        return self.monoped_state_object.get_state_as_string(observation)

This script creates the gym environment with the following parts:

1. Register the training environment

You always need to do this step to inform the gym package that a new training environment has been created.
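
The registration only makes the id Monoped-v0 known to gym; the training script then creates the environment by that id. Here is a minimal, hedged sketch of that usage (the training script itself is not listed in this post, and we assume it sits next to monoped_env.py so the import works):

#!/usr/bin/env python
# How the registered environment would typically be instantiated (illustrative sketch only).
import rospy
import gym
import monoped_env  # importing the module executes the register() call above

rospy.init_node('monoped_training_sketch', anonymous=True)  # the env assumes a ROS node exists
env = gym.make('Monoped-v0')

state = env.reset()                           # runs _reset(): pause, reset sim, remove gravity, ...
action = env.action_space.sample()            # one of the 6 discrete actions
state, reward, done, info = env.step(action)  # runs _step(): move joints, wait, compute reward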

2. Load the desired pose

Parameters related to the training are loaded from the parameter server in this step.

3. Connect with Gazebo

In order to connect the gym environment with the Gazebo simulation, we create a script called gazebo_connection.py in the same directory with the following content:

#!/usr/bin/env python

import rospy
from std_srvs.srv import Empty
from gazebo_msgs.msg import ODEPhysics
from gazebo_msgs.srv import SetPhysicsProperties, SetPhysicsPropertiesRequest
from std_msgs.msg import Float64
from geometry_msgs.msg import Vector3

class GazeboConnection():
    
    def __init__(self):
        
        self.unpause = rospy.ServiceProxy('/gazebo/unpause_physics', Empty)
        self.pause = rospy.ServiceProxy('/gazebo/pause_physics', Empty)
        self.reset_proxy = rospy.ServiceProxy('/gazebo/reset_simulation', Empty)
        self.reset_world_proxy = rospy.ServiceProxy('/gazebo/reset_world', Empty)

        # Setup the Gravity Controle system
        service_name = '/gazebo/set_physics_properties'
        rospy.logdebug("Waiting for service " + str(service_name))
        rospy.wait_for_service(service_name)
        rospy.logdebug("Service Found " + str(service_name))

        self.set_physics = rospy.ServiceProxy(service_name, SetPhysicsProperties)
        self.init_values()
        # We always pause the simulation, important for legged robots learning
        self.pauseSim()

    def pauseSim(self):
        rospy.wait_for_service('/gazebo/pause_physics')
        try:
            self.pause()
        except rospy.ServiceException as e:
            print ("/gazebo/pause_physics service call failed")
        
    def unpauseSim(self):
        rospy.wait_for_service('/gazebo/unpause_physics')
        try:
            self.unpause()
        except rospy.ServiceException as e:
            print ("/gazebo/unpause_physics service call failed")
        
    def resetSim(self):
        rospy.wait_for_service('/gazebo/reset_simulation')
        try:
            self.reset_proxy()
        except rospy.ServiceException as e:
            print ("/gazebo/reset_simulation service call failed")

    def resetWorld(self):
        rospy.wait_for_service('/gazebo/reset_world')
        try:
            # Reset only the world (model poses), using the dedicated reset_world proxy
            self.reset_world_proxy()
        except rospy.ServiceException as e:
            print ("/gazebo/reset_world service call failed")

    def init_values(self):

        rospy.wait_for_service('/gazebo/reset_simulation')
        try:
            # reset_proxy.call()
            self.reset_proxy()
        except rospy.ServiceException as e:
            print ("/gazebo/reset_simulation service call failed")

        self._time_step = Float64(0.001)
        self._max_update_rate = Float64(1000.0)

        self._gravity = Vector3()
        self._gravity.x = 0.0
        self._gravity.y = 0.0
        self._gravity.z = 0.0

        self._ode_config = ODEPhysics()
        self._ode_config.auto_disable_bodies = False
        self._ode_config.sor_pgs_precon_iters = 0
        self._ode_config.sor_pgs_iters = 50
        self._ode_config.sor_pgs_w = 1.3
        self._ode_config.sor_pgs_rms_error_tol = 0.0
        self._ode_config.contact_surface_layer = 0.001
        self._ode_config.contact_max_correcting_vel = 0.0
        self._ode_config.cfm = 0.0
        self._ode_config.erp = 0.2
        self._ode_config.max_contacts = 20

        self.update_gravity_call()

    def update_gravity_call(self):

        self.pauseSim()

        set_physics_request = SetPhysicsPropertiesRequest()
        set_physics_request.time_step = self._time_step.data
        set_physics_request.max_update_rate = self._max_update_rate.data
        set_physics_request.gravity = self._gravity
        set_physics_request.ode_config = self._ode_config

        rospy.logdebug(str(set_physics_request.gravity))

        result = self.set_physics(set_physics_request)
        rospy.logdebug("Gravity Update Result==" + str(result.success) + ",message==" + str(result.status_message))

        self.unpauseSim()

    def change_gravity(self, x, y, z):
        self._gravity.x = x
        self._gravity.y = y
        self._gravity.z = z

        self.update_gravity_call()

This script is responsible for resetting, pausing and unpausing the simulation on behalf of the training script.
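
Outside the environment class you can also use the connection object directly. Here is a hedged sketch of the calls monoped_env.py makes on it (assuming a ROS node has been initialised and Gazebo is running):

# How the environment uses GazeboConnection (illustrative sketch only).
from gazebo_connection import GazeboConnection

gazebo = GazeboConnection()              # waits for /gazebo/set_physics_properties and pauses the sim
gazebo.resetSim()                        # reset the simulation to its initial values
gazebo.change_gravity(0.0, 0.0, 0.0)     # remove gravity while the joints are being reset
gazebo.unpauseSim()                      # let the simulation run during a training step
gazebo.pauseSim()                        # freeze it again while state and reward are computed
gazebo.change_gravity(0.0, 0.0, -9.81)   # restore gravity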

4. Connect with the controllers

Since TF does not handle simulation resets well (it generates timing problems), we have to manually reset the controllers with the controllers_connection.py script. We won't go into detail here; if you want to learn more about controllers, please check our controller 101 course.
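
As a reference, this is all the environment asks of that script, as you can see in monoped_env.py above (illustrative sketch only):

# How the environment uses ControllersConnection (illustrative sketch only).
from controllers_connection import ControllersConnection

controllers_object = ControllersConnection(namespace="monoped")
# Called right after gazebo.resetSim(), because a simulation reset does not reset TF:
controllers_object.reset_monoped_joint_controllers()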

5. Generate the state object

The state of the robot is handled by the MonopedState class in the monoped_state.py script, which monoped_env.py imports at the top. We won't reproduce the whole script here; you can find it in the repository linked at the end of this post.
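
As a reference, these are the calls the environment makes on the state object in monoped_env.py above (illustrative sketch only):

# Summary of the calls monoped_env.py makes on the MonopedState object (illustrative sketch only).
def read_state(monoped_state_object):
    """monoped_state_object is the MonopedState instance built in MonopedEnv.__init__ above."""
    monoped_state_object.check_all_systems_ready()                  # wait until all sensor topics publish
    observation = monoped_state_object.get_observations()           # RPY, distance to target, contact force, joints
    reward, done = monoped_state_object.process_data()              # reward and episode-termination flag
    state = monoped_state_object.get_state_as_string(observation)   # discretised, stringified state
    return state, reward, done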


6. Reset

The _reset function sets the environment back to its initial state. This includes steps like pausing the simulation, resetting the simulation and the controllers, removing and restoring gravity, etc., as shown in the code above.

7. Step

In this part, the action selected by the learning algorithm is translated into joint positions and published with the following joint_publisher.py script. The simulation then runs for running_step seconds, and the reward is calculated based on the new observation.

#!/usr/bin/env python

import rospy
import math
from std_msgs.msg import String
from std_msgs.msg import Float64

class JointPub(object):
    def __init__(self):

        self.publishers_array = []
        self._haa_joint_pub = rospy.Publisher('/monoped/haa_joint_position_controller/command', Float64, queue_size=1)
        self._hfe_joint_pub = rospy.Publisher('/monoped/hfe_joint_position_controller/command', Float64, queue_size=1)
        self._kfe_joint_pub = rospy.Publisher('/monoped/kfe_joint_position_controller/command', Float64, queue_size=1)
        
        self.publishers_array.append(self._haa_joint_pub)
        self.publishers_array.append(self._hfe_joint_pub)
        self.publishers_array.append(self._kfe_joint_pub)

        self.init_pos = [0.0,0.0,0.0]

    def set_init_pose(self):
        """
        Sets joints to initial position [0,0,0]
        :return:
        """
        self.check_publishers_connection()
        self.move_joints(self.init_pos)


    def check_publishers_connection(self):
        """
        Checks that all the publishers are working
        :return:
        """
        rate = rospy.Rate(10)  # 10hz
        while (self._haa_joint_pub.get_num_connections() == 0):
            rospy.logdebug("No subscribers to _haa_joint_pub yet so we wait and try again")
            try:
                rate.sleep()
            except rospy.ROSInterruptException:
                # This is to avoid errors when the world is reset and time goes backwards.
                pass
        rospy.logdebug("_haa_joint_pub Publisher Connected")

        while (self._hfe_joint_pub.get_num_connections() == 0):
            rospy.logdebug("No subscribers to _hfe_joint_pub yet so we wait and try again")
            try:
                rate.sleep()
            except rospy.ROSInterruptException:
                # This is to avoid errors when the world is reset and time goes backwards.
                pass
        rospy.logdebug("_hfe_joint_pub Publisher Connected")

        while (self._kfe_joint_pub.get_num_connections() == 0):
            rospy.logdebug("No subscribers to _kfe_joint_pub yet so we wait and try again")
            try:
                rate.sleep()
            except rospy.ROSInterruptException:
                # This is to avoid errors when the world is reset and time goes backwards.
                pass
        rospy.logdebug("_kfe_joint_pub Publisher Connected")

        rospy.logdebug("All Publishers READY")

    def joint_mono_des_callback(self, msg):
        rospy.logdebug(str(msg.joint_state.position))

        self.move_joints(msg.joint_state.position)

    def move_joints(self, joints_array):

        i = 0
        for publisher_object in self.publishers_array:
          joint_value = Float64()
          joint_value.data = joints_array[i]
          rospy.logdebug("JointsPos>>"+str(joint_value))
          publisher_object.publish(joint_value)
          i += 1


    def start_loop(self, rate_value = 2.0):
        rospy.logdebug("Start Loop")
        pos1 = [0.0,0.0,1.6]
        pos2 = [0.0,0.0,-1.6]
        position = "pos1"
        rate = rospy.Rate(rate_value)
        while not rospy.is_shutdown():
          if position == "pos1":
            self.move_joints(pos1)
            position = "pos2"
          else:
            self.move_joints(pos2)
            position = "pos1"
          rate.sleep()

    def start_sinus_loop(self, rate_value = 2.0):
        rospy.logdebug("Start Loop")
        w = 0.0
        x = 2.0*math.sin(w)
        #pos_x = [0.0,0.0,x]
        #pos_x = [x, 0.0, 0.0]
        pos_x = [0.0, x, 0.0]
        rate = rospy.Rate(rate_value)
        while not rospy.is_shutdown():
            self.move_joints(pos_x)
            w += 0.05
            x = 2.0 * math.sin(w)
            #pos_x = [0.0, 0.0, x]
            #pos_x = [x, 0.0, 0.0]
            pos_x = [0.0, x, 0.0]
            rate.sleep()


if __name__=="__main__":
    rospy.init_node('joint_publisher_node')
    joint_publisher = JointPub()
    rate_value = 50.0
    #joint_publisher.start_loop(rate_value)
    joint_publisher.start_sinus_loop(rate_value)
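
As a usage reference, this is essentially what the environment does with JointPub during reset and step (a hedged sketch, assuming a ROS node and the three position controllers are running):

#!/usr/bin/env python
# Using JointPub directly (illustrative sketch only).
import rospy
from joint_publisher import JointPub

rospy.init_node('joint_pub_sketch', anonymous=True)
joint_publisher = JointPub()
joint_publisher.set_init_pose()                # waits for subscribers, then sends [0.0, 0.0, 0.0]
joint_publisher.move_joints([0.0, 0.5, -1.0])  # haa, hfe, kfe target positions (example values)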

Step 2. Training

Now you have all the code you need to start training. Let’s run the simulation first. Go to Simulations -> Select launch file -> my_legged_robot_sims->main.launch to launch the hopper robot simulation in Gazebo.

Then you can run the training with

roslaunch my_hopper_training main.launch

The algorithm works and starts training the robot to perform the task we want; however, it still requires a lot of tuning to work properly.

You can have multiple instances and train robots with different parameters in parallel with our paid program in ROSDS. Please check it here if you are interested.

 

Edit by: Tony Huang

 

[irp]

Here you will find all the code:
https://bitbucket.org/theconstructcore/hopper/src/master/

Or directly use the ROS Development Studio project:
https://rds.theconstructsim.com/tc_projects/use_project_share_link/0a40a3bf-e0be-4119-8364-ac443c68545a

Check Out this OpenAI course in RobotIgnite Academy for learning the basics step by step: https://wp.me/P9Rthq-1UZ


