How to use the stable_baselines.a2c.a2c.A2CRunner class in stable-baselines

To help you get started, we’ve selected a few stable-baselines examples, based on popular ways it is used in public projects.


From harvard-edge/quarl, stable-baselines/stable_baselines/a2c/a2c.py:
def _make_runner(self) -> AbstractEnvRunner:
    return A2CRunner(self.env, self, n_steps=self.n_steps, gamma=self.gamma)
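
The _make_runner hook is what the model uses to build its rollout collector, so subclassing A2C and overriding it is one way to change how experience is gathered. A minimal sketch, assuming the stable-baselines 2.x layout where A2C exposes _make_runner; the CustomA2C class and the 0.95 discount factor are illustrative, not part of the library:

from stable_baselines import A2C
from stable_baselines.a2c.a2c import A2CRunner
from stable_baselines.common.runners import AbstractEnvRunner


class CustomA2C(A2C):
    # Illustrative subclass: same pattern as the excerpt above, but the
    # runner is handed a different discount factor than the model default.
    def _make_runner(self) -> AbstractEnvRunner:
        return A2CRunner(self.env, self, n_steps=self.n_steps, gamma=0.95)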
From the same file, the A2CRunner constructor:
def __init__(self, env, model, n_steps=5, gamma=0.99):
    """
    A runner to learn the policy of an environment for an A2C model

    :param env: (Gym environment) The environment to learn from
    :param model: (Model) The model to learn
    :param n_steps: (int) The number of steps to run for each environment
    :param gamma: (float) Discount factor
    """
    super(A2CRunner, self).__init__(env=env, model=model, n_steps=n_steps)
    self.gamma = gamma
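
A runner can also be constructed directly around an existing model to collect a batch of experience outside of learn(). A minimal sketch, assuming the stable-baselines 2.x API where the runner expects a vectorized environment and run() returns the collected rollout; the environment name and the n_steps and gamma values are illustrative:

import gym

from stable_baselines import A2C
from stable_baselines.a2c.a2c import A2CRunner
from stable_baselines.common.vec_env import DummyVecEnv

# The runner operates on a VecEnv, so wrap a single Gym environment.
env = DummyVecEnv([lambda: gym.make("CartPole-v1")])
model = A2C("MlpPolicy", env, verbose=0)

# Collect n_steps transitions per environment; gamma is the discount factor
# the runner uses when computing bootstrapped returns.
runner = A2CRunner(env, model, n_steps=5, gamma=0.99)
rollout = runner.run()  # batch of observations, actions, rewards, values and related arrays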

stable-baselines is a fork of OpenAI Baselines that provides implementations of reinforcement learning algorithms, released under the MIT license.