
Implementing Deep Q-Learning Using TensorFlow


This article demonstrates how to apply reinforcement learning in a richer environment than in the previous examples. We will implement Deep Q-Learning with TensorFlow's Keras API and the keras-rl library.

Note: the following demo requires a graphics rendering library. On Windows, PyOpenGL is recommended; on Ubuntu, OpenGL.

Step 1: Import required libraries

import numpy as np
import gym

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

Step 2: Create Environment

Note: we will use a preloaded environment from the OpenAI Gym module, which contains many environments for different purposes. The full list of environments can be viewed on the Gym website.
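If you want to check which environments are registered in your local installation, a minimal sketch such as the one below can be used (this assumes the classic gym API, where the registry exposes an all() method; newer gym/gymnasium releases store the specs in a dict-like registry instead):

import gym

# Print the IDs of every environment registered with this gym installation
for spec in gym.envs.registry.all():
    print(spec.id)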

The 'MountainCar-v0' environment will be used here. In it, the car (the agent) is stuck between two mountains and must drive up one of them. The car's engine is not strong enough to climb the hill directly, so the car has to build up momentum first.

# Create environment
environment_name = 'MountainCar-v0'
env = gym.make(environment_name)
np.random.seed(0)
env.seed(0)

# Retrieve the number of possible actions
num_actions = env.action_space.n
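Before building the network, it can also help to inspect the environment's spaces. The short sketch below only prints them; for MountainCar-v0 the observation is a two-dimensional Box (car position and velocity) and there are three discrete actions (push left, no push, push right):

# Inspect the environment's interfaces
print(env.observation_space)        # Box(2,): position and velocity of the car
print(env.observation_space.shape)  # (2,)
print(env.action_space)             # Discrete(3): push left, no push, push right
print(num_actions)                  # 3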

 

Step 3: Building the training agent

The training agent is a deep neural network, which we build with the Sequential class from the Keras module.

agent = Sequential()
agent.add(Flatten(input_shape=(1,) + env.observation_space.shape))
agent.add(Dense(16))
agent.add(Activation('relu'))
agent.add(Dense(num_actions))
agent.add(Activation('linear'))
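To verify the architecture before handing the model to the DQN agent, you can print its layer structure with the standard Keras summary() method:

# Print the layer structure and parameter counts of the network
agent.summary()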

Step 4: Finding the optimal strategy

# Building a model to find the optimal strategy
strategy = EpsGreedyQPolicy()
memory = SequentialMemory(limit=10000, window_length=1)
dqn = DQNAgent(model=agent, nb_actions=num_actions,
               memory=memory, nb_steps_warmup=10,
               target_model_update=1e-2, policy=strategy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

# Learning visualization
dqn.fit(env, nb_steps=5000, visualize=True, verbose=2)

The agent tries different approaches to reach the top and thus gains knowledge from each episode.
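If you want to keep the result of training, keras-rl agents can persist their weights to disk; the filename below is only an example:

# Save the trained weights so the agent can be reused later
# (the filename is arbitrary; .h5f is the convention used by keras-rl)
dqn.save_weights('dqn_MountainCar-v0_weights.h5f', overwrite=True)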

Step 5: Test the trained agent

# Testing the trained agent
dqn.test(env, nb_episodes=5, visualize=True)

The agent now applies the knowledge it has learned to reach the top.
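As a closing step, you can restore previously saved weights into the agent before testing and release the rendering resources when you are done; both calls are part of the keras-rl and gym APIs, and the filename simply matches the example above:

# Restore previously saved weights (only needed if training ran in another session)
dqn.load_weights('dqn_MountainCar-v0_weights.h5f')

# Release the rendering window and environment resources
env.close()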
