
Deep Deterministic Policy Gradient (DDPG)

Author: amifunny
Date created: 2020/06/04
Last modified: 2024/03/23
Description: Implementing DDPG algorithm on the Inverted Pendulum Problem.



Introduction

Deep Deterministic Policy Gradient (DDPG) is a model-free, off-policy algorithm for learning continuous actions.

It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces.

This tutorial closely follows this paper - Continuous control with deep reinforcement learning.


Problem

We are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right.

What makes this problem challenging for Q-learning algorithms is that the actions are continuous instead of discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from an infinite number of actions ranging from -2 to +2.


Quick theory

Just like the Actor-Critic method, we have two networks:

  1. Actor - It proposes an action given a state.
  2. Critic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action.

DDPG uses two more techniques not present in the original DQN:

First, it uses two Target networks.

Why? Because they add stability to training. In short, we are learning from estimated targets, and the Target networks are updated slowly, hence keeping our estimated targets stable.

Conceptually, this is like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better", as opposed to saying "I'm going to re-learn how to play this entire game after every move".
See this StackOverflow answer.
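
Concretely, the Target weights are only nudged toward the trained weights by a small amount at every step (see `update_target` further below): `target_weights = tau * original_weights + (1 - tau) * target_weights`, with `tau` much smaller than 1.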

Second, it uses Experience Replay.

We store a list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn from sampling all of our experience accumulated so far.
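
As a minimal, framework-free sketch of this idea (illustrative only; the Buffer class defined later uses pre-allocated NumPy arrays instead of a deque):

from collections import deque
import random

replay = deque(maxlen=100000)  # bounded memory of past transitions
for t in range(200):  # dummy transitions, just to fill the buffer for this sketch
    replay.append((t, 0.0, -1.0, t + 1))  # (state, action, reward, next_state)
batch = random.sample(replay, k=64)  # learn from a random sample, not only the latest step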

Now, let's see how it is implemented.

import os

os.environ["KERAS_BACKEND"] = "tensorflow"

import keras
from keras import layers

import tensorflow as tf
import gymnasium as gym
import numpy as np
import matplotlib.pyplot as plt

We use Gymnasium to create the environment. We will use the `upper_bound` parameter to scale our actions later.

# Specify the `render_mode` parameter to show the attempts of the agent in a pop up window.
env = gym.make("Pendulum-v1", render_mode="human")

num_states = env.observation_space.shape[0]
print("状态空间的大小 ->  {}".format(num_states))
num_actions = env.action_space.shape[0]
print("动作空间的大小 ->  {}".format(num_actions))

upper_bound = env.action_space.high[0]
lower_bound = env.action_space.low[0]

print("动作的最大值 ->  {}".format(upper_bound))
print("动作的最小值 ->  {}".format(lower_bound))
状态空间的大小 ->  3
动作空间的大小 ->  1
动作的最大值 ->  2.0
动作的最小值 ->  -2.0

To implement better exploration by the Actor network, we use noisy perturbations, specifically an Ornstein-Uhlenbeck process for generating noise, as described in the paper. It samples noise from a correlated normal distribution.

class OUActionNoise:
    def __init__(self, mean, std_deviation, theta=0.15, dt=1e-2, x_initial=None):
        self.theta = theta
        self.mean = mean
        self.std_dev = std_deviation
        self.dt = dt
        self.x_initial = x_initial
        self.reset()

    def __call__(self):
        # Formula taken from https://www.wikipedia.org/wiki/Ornstein-Uhlenbeck_process
        x = (
            self.x_prev
            + self.theta * (self.mean - self.x_prev) * self.dt
            + self.std_dev * np.sqrt(self.dt) * np.random.normal(size=self.mean.shape)
        )
        # Store x into x_prev
        # Makes next noise dependent on current one
        self.x_prev = x
        return x

    def reset(self):
        if self.x_initial is not None:
            self.x_prev = self.x_initial
        else:
            self.x_prev = np.zeros_like(self.mean)
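
As a quick check (not part of the original example), you can draw a few hundred samples from this process and plot them; consecutive values drift smoothly instead of jumping around independently:

ou = OUActionNoise(mean=np.zeros(1), std_deviation=0.2 * np.ones(1))
samples = np.array([ou() for _ in range(200)])  # each call depends on the previous value
plt.plot(samples)
plt.title("Ornstein-Uhlenbeck noise samples")
plt.show()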

The Buffer class implements Experience Replay.


Algorithm

Critic loss - Mean squared error of y - Q(s, a), where y is the expected return as seen by the Target network, and Q(s, a) is the action value predicted by the Critic network. y is a moving target that the critic model tries to achieve; we make this target stable by updating the Target model slowly.

Actor loss - This is computed using the mean of the value given by the Critic network for the actions taken by the Actor network. We seek to maximize this quantity.

Hence we update the Actor network so that it produces actions that get the maximum predicted value as seen by the Critic, for a given state.
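
In symbols, using the Target networks for the next-state estimates, the quantities computed in the update below are roughly:

y = r + gamma * target_critic(s', target_actor(s'))
critic_loss = mean((y - critic_model(s, a))^2)
actor_loss  = -mean(critic_model(s, actor_model(s)))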

class Buffer:
    def __init__(self, buffer_capacity=100000, batch_size=64):
        # Number of "experiences" to store at max
        self.buffer_capacity = buffer_capacity
        # Num of tuples to train on.
        self.batch_size = batch_size

        # This tells us the number of times record() was called.
        self.buffer_counter = 0

        # Instead of a list of tuples as in the exp.replay concept,
        # we use different np.arrays for each tuple element.
        self.state_buffer = np.zeros((self.buffer_capacity, num_states))
        self.action_buffer = np.zeros((self.buffer_capacity, num_actions))
        self.reward_buffer = np.zeros((self.buffer_capacity, 1))
        self.next_state_buffer = np.zeros((self.buffer_capacity, num_states))

    # Takes (s, a, r, s') observation tuple as input
    def record(self, obs_tuple):
        # Set index to zero if buffer_capacity is exceeded,
        # replacing old records
        index = self.buffer_counter % self.buffer_capacity

        self.state_buffer[index] = obs_tuple[0]
        self.action_buffer[index] = obs_tuple[1]
        self.reward_buffer[index] = obs_tuple[2]
        self.next_state_buffer[index] = obs_tuple[3]

        self.buffer_counter += 1

    # Eager execution is turned on by default in TensorFlow 2. Decorating with tf.function allows
    # TensorFlow to build a static graph out of the logic and computations in our function.
    # This provides a large speedup for blocks of code that contain many small TensorFlow operations such as this one.
    @tf.function
    def update(
        self,
        state_batch,
        action_batch,
        reward_batch,
        next_state_batch,
    ):
        # Training and updating Actor & Critic networks.
        # See Pseudo Code.
        with tf.GradientTape() as tape:
            target_actions = target_actor(next_state_batch, training=True)
            y = reward_batch + gamma * target_critic(
                [next_state_batch, target_actions], training=True
            )
            critic_value = critic_model([state_batch, action_batch], training=True)
            critic_loss = keras.ops.mean(keras.ops.square(y - critic_value))

        critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables)
        critic_optimizer.apply_gradients(
            zip(critic_grad, critic_model.trainable_variables)
        )

        with tf.GradientTape() as tape:
            actions = actor_model(state_batch, training=True)
            critic_value = critic_model([state_batch, actions], training=True)
            # Used `-value` as we want to maximize the value given by the critic for our actions
            actor_loss = -keras.ops.mean(critic_value)

        actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables)
        actor_optimizer.apply_gradients(
            zip(actor_grad, actor_model.trainable_variables)
        )

    # We compute the loss and update parameters
    def learn(self):
        # Get sampling range
        record_range = min(self.buffer_counter, self.buffer_capacity)
        # Randomly sample indices
        batch_indices = np.random.choice(record_range, self.batch_size)

        # Convert to tensors
        state_batch = keras.ops.convert_to_tensor(self.state_buffer[batch_indices])
        action_batch = keras.ops.convert_to_tensor(self.action_buffer[batch_indices])
        reward_batch = keras.ops.convert_to_tensor(self.reward_buffer[batch_indices])
        reward_batch = keras.ops.cast(reward_batch, dtype="float32")
        next_state_batch = keras.ops.convert_to_tensor(
            self.next_state_buffer[batch_indices]
        )

        self.update(state_batch, action_batch, reward_batch, next_state_batch)


# This updates target parameters slowly,
# based on rate `tau`, which is much less than one.
def update_target(target, original, tau):
    target_weights = target.get_weights()
    original_weights = original.get_weights()

    for i in range(len(target_weights)):
        target_weights[i] = original_weights[i] * tau + target_weights[i] * (1 - tau)

    target.set_weights(target_weights)

Here we define the Actor and Critic networks. These are basic Dense models with ReLU activation.

Note: We need the initialization of the Actor's last layer to be between -0.003 and 0.003, as this prevents us from getting 1 or -1 output values in the initial stages, which would squash our gradients to zero, since we use the tanh activation.

def get_actor():
    # Initialize weights between -3e-3 and 3e-3
    last_init = keras.initializers.RandomUniform(minval=-0.003, maxval=0.003)

    inputs = layers.Input(shape=(num_states,))
    out = layers.Dense(256, activation="relu")(inputs)
    out = layers.Dense(256, activation="relu")(out)
    outputs = layers.Dense(1, activation="tanh", kernel_initializer=last_init)(out)

    # Our upper bound is 2.0 for Pendulum.
    outputs = outputs * upper_bound
    model = keras.Model(inputs, outputs)
    return model


def get_critic():
    # State as input
    state_input = layers.Input(shape=(num_states,))
    state_out = layers.Dense(16, activation="relu")(state_input)
    state_out = layers.Dense(32, activation="relu")(state_out)

    # Action as input
    action_input = layers.Input(shape=(num_actions,))
    action_out = layers.Dense(32, activation="relu")(action_input)

    # Both are passed through separate layers before concatenating
    concat = layers.Concatenate()([state_out, action_out])

    out = layers.Dense(256, activation="relu")(concat)
    out = layers.Dense(256, activation="relu")(out)
    outputs = layers.Dense(1)(out)

    # Outputs a single value for a given state-action pair
    model = keras.Model([state_input, action_input], outputs)

    return model
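
As an optional sanity check (not in the original example), you can pass dummy inputs through freshly built networks to confirm the output shapes:

dummy_state = np.zeros((1, num_states), dtype="float32")
dummy_action = np.zeros((1, num_actions), dtype="float32")
print(get_actor()(dummy_state).shape)  # (1, 1): one action per state
print(get_critic()([dummy_state, dummy_action]).shape)  # (1, 1): one Q-value per state-action pair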

policy() returns an action sampled from our Actor network, plus some noise for exploration.

def policy(state, noise_object):
    sampled_actions = keras.ops.squeeze(actor_model(state))
    noise = noise_object()
    # Adding noise to action
    sampled_actions = sampled_actions.numpy() + noise

    # We make sure action is within bounds
    legal_action = np.clip(sampled_actions, lower_bound, upper_bound)

    return [np.squeeze(legal_action)]

Training hyperparameters

std_dev = 0.2
ou_noise = OUActionNoise(mean=np.zeros(1), std_deviation=float(std_dev) * np.ones(1))

actor_model = get_actor()
critic_model = get_critic()

target_actor = get_actor()
target_critic = get_critic()

# Making the weights equal initially
target_actor.set_weights(actor_model.get_weights())
target_critic.set_weights(critic_model.get_weights())

# Learning rates for Actor-Critic models
critic_lr = 0.002
actor_lr = 0.001

critic_optimizer = keras.optimizers.Adam(critic_lr)
actor_optimizer = keras.optimizers.Adam(actor_lr)

total_episodes = 100
# Discount factor for future rewards
gamma = 0.99
# Used to update target networks
tau = 0.005

buffer = Buffer(50000, 64)

Now we implement our main training loop, and iterate over episodes. We sample actions using policy() and train with learn() at each time step, along with updating the Target networks at a rate `tau`.

# To store reward history of each episode
ep_reward_list = []
# To store average reward history of last few episodes
avg_reward_list = []

# Takes about 4 min to train
for ep in range(total_episodes):
    prev_state, _ = env.reset()
    episodic_reward = 0

    while True:
        tf_prev_state = keras.ops.expand_dims(
            keras.ops.convert_to_tensor(prev_state), 0
        )

        action = policy(tf_prev_state, ou_noise)
        # Receive state and reward from environment.
        state, reward, done, truncated, _ = env.step(action)

        buffer.record((prev_state, action, reward, state))
        episodic_reward += reward

        buffer.learn()

        update_target(target_actor, actor_model, tau)
        update_target(target_critic, critic_model, tau)

        # End this episode when `done` or `truncated` is True
        if done or truncated:
            break

        prev_state = state

    ep_reward_list.append(episodic_reward)

    # Mean of last 40 episodes
    avg_reward = np.mean(ep_reward_list[-40:])
    print("剧集 * {} * 平均奖励是 ==> {}".format(ep, avg_reward))
    avg_reward_list.append(avg_reward)

# Plotting graph
# Episodes versus Avg. Rewards
plt.plot(avg_reward_list)
plt.xlabel("Episode")
plt.ylabel("Avg. Episodic Reward")
plt.show()

Episode * 0 * Avg Reward is ==> -1020.8244931732263

Episode * 1 * Avg Reward is ==> -1338.2811167733332

Episode * 2 * Avg Reward is ==> -1450.0427316158366

Episode * 3 * Avg Reward is ==> -1529.0751774957375

Episode * 4 * Avg Reward is ==> -1560.3468658090717

Episode * 5 * Avg Reward is ==> -1525.6201906715812

Episode * 6 * Avg Reward is ==> -1522.0047531836371

Episode * 7 * Avg Reward is ==> -1507.4391205141226

Episode * 8 * Avg Reward is ==> -1443.4147334537984

Episode * 9 * Avg Reward is ==> -1452.0432974943765

Episode * 10 * Avg Reward is ==> -1344.1960761302823

Episode * 11 * Avg Reward is ==> -1327.0472948059835

Episode * 12 * Avg Reward is ==> -1332.4638031402194

Episode * 13 * Avg Reward is ==> -1287.4884456842617

Episode * 14 * Avg Reward is ==> -1257.3643575644046

Episode * 15 * Avg Reward is ==> -1210.9679762262906

Episode * 16 * Avg Reward is ==> -1165.8684037899104

Episode * 17 * Avg Reward is ==> -1107.6228192573426

Episode * 18 * Avg Reward is ==> -1049.4192654959388

Episode * 19 * Avg Reward is ==> -1003.3255480245641

Episode * 20 * Avg Reward is ==> -961.6386918013155

Episode * 21 * Avg Reward is ==> -929.1847739440876

Episode * 22 * Avg Reward is ==> -894.356849609832

Episode * 23 * Avg Reward is ==> -872.3450419603026

Episode * 24 * Avg Reward is ==> -842.5992147531034

Episode * 25 * Avg Reward is ==> -818.8730806655396

Episode * 26 * Avg Reward is ==> -793.3147256249664

Episode * 27 * Avg Reward is ==> -769.6124209263007

Episode * 28 * Avg Reward is ==> -747.5122117563488

Episode * 29 * Avg Reward is ==> -726.8111953151997

Episode * 30 * Avg Reward is ==> -707.3781885286952

Episode * 31 * Avg Reward is ==> -688.9993520703357

Episode * 32 * Avg Reward is ==> -672.0164054875188

Episode * 33 * Avg Reward is ==> -652.3297236089893

Episode * 34 * Avg Reward is ==> -633.7305579653394

Episode * 35 * Avg Reward is ==> -622.6444438529929

Episode * 36 * Avg Reward is ==> -612.2391199605028

Episode * 37 * Avg Reward is ==> -599.2441039477458

Episode * 38 * Avg Reward is ==> -593.713500114108

Episode * 39 * Avg Reward is ==> -582.062487157142

Episode * 40 * Avg Reward is ==> -556.559275313473

Episode * 41 * Avg Reward is ==> -518.053376711216

Episode * 42 * Avg Reward is ==> -482.2191305356082

Episode * 43 * Avg Reward is ==> -441.1561293090619

Episode * 44 * Avg Reward is ==> -402.0403515001418

Episode * 45 * Avg Reward is ==> -371.3376110030464

Episode * 46 * Avg Reward is ==> -336.8145387714556

Episode * 47 * Avg Reward is ==> -301.7732070717081

Episode * 48 * Avg Reward is ==> -281.4823965447058

Episode * 49 * Avg Reward is ==> -243.2750024568545

Episode * 50 * Avg Reward is ==> -236.6512197943394

Episode * 51 * Avg Reward is ==> -211.20860968588096

Episode * 52 * Avg Reward is ==> -176.31339260650844

Episode * 53 * Avg Reward is ==> -158.77021134671222

Episode * 54 * Avg Reward is ==> -146.76749516161257

Episode * 55 * Avg Reward is ==> -133.93793525539664

Episode * 56 * Avg Reward is ==> -129.24881351771964

Episode * 57 * Avg Reward is ==> -129.49219614666802

Episode * 58 * Avg Reward is ==> -132.53205721511375

Episode * 59 * Avg Reward is ==> -132.60389802731262

Episode * 60 * Avg Reward is ==> -132.62344822194035

Episode * 61 * Avg Reward is ==> -133.2372468795715

Episode * 62 * Avg Reward is ==> -133.1046546040286

Episode * 63 * Avg Reward is ==> -127.17488349564069

Episode * 64 * Avg Reward is ==> -130.02349725294775

Episode * 65 * Avg Reward is ==> -127.32475296620544

Episode * 66 * Avg Reward is ==> -126.99528350924034

Episode * 67 * Avg Reward is ==> -126.65903554713267

Episode * 68 * Avg Reward is ==> -126.63950221408372

Episode * 69 * Avg Reward is ==> -129.4066259498526

Episode * 70 * Avg Reward is ==> -129.34372109952105

Episode * 71 * Avg Reward is ==> -132.29705860930432

Episode * 72 * Avg Reward is ==> -132.00732697620566

Episode * 73 * Avg Reward is ==> -138.01483877165032

Episode * 74 * Avg Reward is ==> -145.33430273020608

Episode * 75 * Avg Reward is ==> -145.32777005464345

Episode * 76 * Avg Reward is ==> -142.4835146046417

Episode * 77 * Avg Reward is ==> -139.59338840338395

Episode * 78 * Avg Reward is ==> -133.04552232142163

Episode * 79 * Avg Reward is ==> -132.93288588036899

Episode * 80 * Avg Reward is ==> -136.16012471382237

Episode * 81 * Avg Reward is ==> -139.21305348031393

Episode * 82 * Avg Reward is ==> -133.23691621529298

Episode * 83 * Avg Reward is ==> -135.92990594024982

Episode * 84 * Avg Reward is ==> -136.03027429930435

Episode * 85 * Avg Reward is ==> -135.97360824863455

Episode * 86 * Avg Reward is ==> -136.10527880830494

Episode * 87 * Avg Reward is ==> -139.05391439010512

Episode * 88 * Avg Reward is ==> -142.56133171606365

Episode * 89 * Avg Reward is ==> -161.33989090345662

Episode * 90 * Avg Reward is ==> -170.82788477632195

Episode * 91 * Avg Reward is ==> -170.8558841498521

Episode * 92 * Avg Reward is ==> -173.9910213401168

Episode * 93 * Avg Reward is ==> -176.87631595893498

Episode * 94 * Avg Reward is ==> -170.97863292694336

Episode * 95 * Avg Reward is ==> -173.88549953443538

Episode * 96 * Avg Reward is ==> -170.7028462286189

Episode * 97 * Avg Reward is ==> -173.47564018610032

Episode * 98 * Avg Reward is ==> -173.42104867150212

Episode * 99 * Avg Reward is ==> -173.2394285933109

(Plot: average episodic reward versus episode)

If training proceeds correctly, the average episodic reward will increase with time.

Feel free to try different learning rates, `tau` values, and architectures for the Actor and Critic networks.

The Inverted Pendulum problem has low complexity, but DDPG works great on many other problems.

Another great environment to try this on is LunarLander-v2 continuous, but it will take more episodes to obtain good results.
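
As a hedged sketch of how you might switch environments (the exact environment id and keyword arguments depend on your Gymnasium version, and the Box2D extra must be installed; this is an assumption, not part of the original example):

# Hypothetical: swap in the continuous LunarLander task, then rebuild the
# networks and the buffer with the new dimensions before training again.
env = gym.make("LunarLander-v2", continuous=True)
num_states = env.observation_space.shape[0]
num_actions = env.action_space.shape[0]
upper_bound = env.action_space.high[0]
lower_bound = env.action_space.low[0]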

# Save the weights
actor_model.save_weights("pendulum_actor.weights.h5")
critic_model.save_weights("pendulum_critic.weights.h5")

target_actor.save_weights("pendulum_target_actor.weights.h5")
target_critic.save_weights("pendulum_target_critic.weights.h5")
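
To reuse a trained policy later, you can rebuild the architecture and load the saved weights back in (a minimal sketch using the actor file saved above):

# Restore the trained actor into a freshly built network.
reloaded_actor = get_actor()
reloaded_actor.load_weights("pendulum_actor.weights.h5")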

Before training:

(animation of the agent before training)

After 100 episodes:

(animation of the agent after 100 episodes)