Implementing Deep Q-Learning Using TensorFlow



This article demonstrates how to apply reinforcement learning in a more complex environment than previously shown. We will implement deep Q-learning using Keras on top of TensorFlow.

Note: The following demo requires a graphics rendering library. PyOpenGL is recommended for Windows, and OpenGL for Ubuntu.
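If the packages used below are missing, they can usually be installed with pip (the package names are an assumption based on the classic Gym and keras-rl stack this tutorial targets; h5py is only needed for saving weights later):

pip install gym keras keras-rl h5py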

Step 1: Import required libraries

import numpy as np
import gym

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

Step 2: Create Environment

Note: We will use a preloaded environment from the OpenAI Gym module, which contains many environments for different purposes. The list of available environments can be viewed on the Gym website.
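The registered environment IDs can also be listed directly from code (a minimal sketch, assuming the classic gym registry API):

# Print the IDs of all registered Gym environments
import gym
for env_spec in gym.envs.registry.all():
    print(env_spec.id)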

The 'MountainCar-v0' environment will be used here. In this environment, the car (agent) is stuck between two mountains and must drive up one of them. The car's engine is not strong enough to climb the hill on its own, so the agent needs to build momentum to reach the top.

# Create the environment
environment_name = 'MountainCar-v0'
env = gym.make(environment_name)
np.random.seed(0)
env.seed(0)

# Retrieve the number of possible actions
num_actions = env.action_space.n
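As a quick sanity check, the environment's spaces can be inspected; the comments below reflect MountainCar-v0's standard definition (an observation is the car's position and velocity, and there are three discrete actions):

# Inspect the observation and action spaces
print(env.observation_space)  # Box(2,): position and velocity
print(env.action_space)       # Discrete(3): push left, no push, push right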

 

Step 3: Building the training agent

The training agent is a deep neural network, which we will build with the Sequential class from Keras. The network takes an observation as input and outputs one Q-value per action, which is why its final layer has num_actions units with a linear activation.

agent = Sequential()
# window_length = 1 in the replay memory, so each input is a single observation
agent.add(Flatten(input_shape=(1,) + env.observation_space.shape))
agent.add(Dense(16))
agent.add(Activation('relu'))
# One linear output per action: each output is that action's Q-value
agent.add(Dense(num_actions))
agent.add(Activation('linear'))
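Before training, the architecture can be checked with a one-line usage sketch:

# Print a layer-by-layer summary of the network
agent.summary()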

Step 4: Finding the optimal strategy

# Build the model that learns the optimal strategy
strategy = EpsGreedyQPolicy()  # pick a random action with probability eps, else the greedy one
memory = SequentialMemory(limit=10000, window_length=1)  # replay buffer of past transitions
dqn = DQNAgent(model=agent, nb_actions=num_actions,
               memory=memory, nb_steps_warmup=10,
               target_model_update=1e-2, policy=strategy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

 
# Train the agent and visualize the learning process
dqn.fit(env, nb_steps=5000, visualize=True, verbose=2)

The agent tries different approaches to reach the top and thus gains knowledge from each episode.
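Once training has finished, the learned weights can be saved so the agent does not need to be retrained from scratch (a minimal sketch; the file name is illustrative and saving requires h5py):

# Save the trained weights to disk
dqn.save_weights('dqn_MountainCar-v0_weights.h5f', overwrite=True)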

Step 5: Test the trained agent

# Test the trained agent
dqn.test(env, nb_episodes=5, visualize=True)

The agent now applies its accumulated knowledge to reach the top.
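If the weights were saved earlier, a later session can restore them and re-run the evaluation without retraining (assuming the illustrative file name from the save step):

# Reload the saved weights and evaluate again
dqn.load_weights('dqn_MountainCar-v0_weights.h5f')
dqn.test(env, nb_episodes=5, visualize=True)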