
The random CartPole agent
Although the environment is much more complex than our first example in The anatomy of the agent section, the code of the agent is much shorter. This is the power of reusability, abstractions, and third-party libraries!
So, here is the code (you can find it in Chapter02/02_cartpole_random.py):
import gym

if __name__ == "__main__":
    env = gym.make("CartPole-v0")
    total_reward = 0.0
    total_steps = 0
    obs = env.reset()
Here, we create the environment and initialize the step counter and the reward accumulator. On the last line, we reset the environment to obtain the first observation (which we won't use, since our agent chooses actions randomly):
    while True:
        action = env.action_space.sample()
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        total_steps += 1
        if done:
            break

    print("Episode done in %d steps, total reward %.2f" % (
        total_steps, total_reward))
In this loop, we sample a random action, then ask the environment to execute it and return to us the next observation (obs), the reward, and the done flag. If the episode is over, we stop the loop and show how many steps we've taken and how much reward has been accumulated. If you run this example, you'll see something like this (not exactly, due to the agent's randomness):
rl_book_samples/Chapter02$ python 02_cartpole_random.py
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
Episode done in 12 steps, total reward 12.00
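The (obs, reward, done, info) contract that the loop relies on can be sketched without Gym at all. The following is a minimal illustration using a hypothetical ToyEnv stand-in (not part of Gym), which ends every episode after a fixed number of steps and pays +1 reward per step, just as CartPole does:

```python
import random

class ToyEnv:
    """Hypothetical stand-in that mimics the Gym step() contract:
    every call returns (observation, reward, done, info)."""
    def __init__(self, episode_len=10):
        self.episode_len = episode_len
        self.steps_left = episode_len

    def reset(self):
        self.steps_left = self.episode_len
        return 0.0  # dummy observation

    def sample_action(self):
        return random.choice([0, 1])  # two discrete actions, like CartPole

    def step(self, action):
        self.steps_left -= 1
        done = self.steps_left <= 0
        return 0.0, 1.0, done, {}  # obs, reward, done flag, info dict

env = ToyEnv()
obs = env.reset()
total_reward = 0.0
total_steps = 0
while True:
    obs, reward, done, _ = env.step(env.sample_action())
    total_reward += reward
    total_steps += 1
    if done:
        break
print("Episode done in %d steps, total reward %.2f" % (total_steps, total_reward))
```

The loop body is identical to the CartPole version; only the environment object differs, which is exactly the kind of reusability the Gym interface is designed for.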
As with the interactive session, the warning is not related to our code, but to Gym's internals. On average, our random agent survives 12–15 steps before the pole falls and the episode ends. Most of the environments in Gym have a "reward boundary," which is the average reward that the agent should gain during 100 consecutive episodes to "solve" the environment. For CartPole, this boundary is 195, which means that, on average, the agent must hold the pole for 195 time steps or longer. From this perspective, our random agent's performance looks poor. However, don't be disappointed too early; we are just at the beginning, and soon we will solve CartPole and many other much more interesting and challenging environments.
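The "solved" check itself is simple arithmetic. The sketch below illustrates it with a stand-in for the rollout (the random_episode_reward function is hypothetical; it just draws episode rewards in the 10–20 range our random agent typically achieves, since CartPole pays +1 per step). In Gym, the boundary value is exposed as env.spec.reward_threshold:

```python
import random

# CartPole-v0's reward boundary (Gym exposes it as env.spec.reward_threshold).
REWARD_BOUNDARY = 195.0

def random_episode_reward():
    # Hypothetical stand-in for a real rollout: the random agent earns +1 per
    # step and typically survives on the order of 12-15 steps before the pole
    # falls, so we draw a reward in a plausible range.
    return float(random.randint(10, 20))

# "Solving" the environment means averaging at least the boundary
# over 100 consecutive episodes.
rewards = [random_episode_reward() for _ in range(100)]
mean_reward = sum(rewards) / len(rewards)
print("mean reward over 100 episodes: %.2f" % mean_reward)
print("solved:", mean_reward >= REWARD_BOUNDARY)
```

Unsurprisingly, the random agent falls far short of the 195 boundary; closing that gap is what the learning methods in the following chapters are for.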