
Python tensorflow.make_template Function Code Examples


This article collects typical usage examples of Python's tensorflow.make_template function. If you have been wondering what make_template does, how to call it, or what real-world uses look like, the curated examples below should help.



Twenty code examples of the make_template function are shown below, sorted by popularity.
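
Before the examples, here is a minimal sketch of what tf.make_template does (a sketch assuming TensorFlow 1.x graph mode; the toy_net function and its layer sizes are hypothetical, for illustration only). The key property: every call to the returned template reuses a single set of variables, with no manual tf.variable_scope(..., reuse=True) bookkeeping.

import tensorflow as tf

def toy_net(x):
    # Variables are created on the first call and reused on later calls.
    h = tf.layers.dense(x, 32, activation=tf.nn.relu, name="hidden")
    return tf.layers.dense(h, 1, name="out")

net = tf.make_template("toy_net", toy_net)

x1 = tf.placeholder(tf.float32, [None, 8])
x2 = tf.placeholder(tf.float32, [None, 8])
y1 = net(x1)  # first call: creates variables under scope "toy_net"
y2 = net(x2)  # second call: shares those same variables

print(len(net.trainable_variables))  # 4: two kernels and two biases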

Example 1: __init__

    def __init__(self, arch, is_training=False):
        '''
        Variational auto-encoder implemented in 2D convolutional neural nets
        Input:
            `arch`: network architecture (`dict`)
            `is_training`: currently unused; kept for historical reasons (`BatchNorm`)
        '''
        self.arch = arch
        self._sanity_check()
        self.is_training = is_training

        with tf.name_scope('SpeakerRepr'):
            self.y_emb = self._l2_regularized_embedding(
                self.arch['y_dim'],
                self.arch['z_dim'],
                'y_embedding')

        self._generate = tf.make_template(
            'Generator',
            self._generator)

        self._encode = tf.make_template(
            'Encoder',
            self._encoder)

        self.generate = self.decode  # for VAE-GAN extension
Author: QianQQ | Project: Voice-Conversion | Lines: 26 | Source: vae.py


Example 2: _build_networks

  def _build_networks(self):
    """Builds the Q-value network computations needed for acting and training.

    These are:
      self.online_convnet: For computing the current state's Q-values.
      self.target_convnet: For computing the next state's target Q-values.
      self._net_outputs: The actual Q-values.
      self._q_argmax: The action maximizing the current state's Q-values.
      self._replay_net_outputs: The replayed states' Q-values.
      self._replay_next_target_net_outputs: The replayed next states' target
        Q-values (see Mnih et al., 2015 for details).
    """
    # Calling online_convnet will generate a new graph as defined in
    # self._network_template using whatever input is passed, but will always
    # share the same weights.
    self.online_convnet = tf.make_template('Online', self._network_template)
    self.target_convnet = tf.make_template('Target', self._network_template)
    self._net_outputs = self.online_convnet(self.state_ph)
    # TODO(bellemare): Ties should be broken. They are unlikely to happen when
    # using a deep network, but may affect performance with a linear
    # approximation scheme.
    self._q_argmax = tf.argmax(self._net_outputs.q_values, axis=1)[0]

    self._replay_net_outputs = self.online_convnet(self._replay.states)
    self._replay_next_target_net_outputs = self.target_convnet(
        self._replay.next_states)
Author: veronicachelu | Project: dopamine | Lines: 26 | Source: dqn_agent.py


Example 3: __init__

  def __init__(self, trainable=False,
               state_preprocess_net=lambda states: states,
               action_embed_net=lambda actions, *args, **kwargs: actions,
               ndims=None):
    self.trainable = trainable
    self._scope = tf.get_variable_scope().name
    self._ndims = ndims
    self._state_preprocess_net = tf.make_template(
        self.STATE_PREPROCESS_NET_SCOPE, state_preprocess_net,
        create_scope_now_=True)
    self._action_embed_net = tf.make_template(
        self.ACTION_EMBED_NET_SCOPE, action_embed_net,
        create_scope_now_=True)
Author: Exscotticus | Project: models | Lines: 13 | Source: agent.py


Example 4: __init__

    def __init__(self, corpus, **opts):
        self.corpus = corpus

        self.opts = opts

        self.global_step = get_or_create_global_step()
        self.increment_global_step_op = tf.assign(self.global_step, self.global_step + 1, name="increment_global_step")

        self.corpus_size = get_corpus_size(self.corpus["train"])
        self.corpus_size_valid = get_corpus_size(self.corpus["valid"])

        self.word2idx, self.idx2word = build_vocab(self.corpus["train"])
        self.vocab_size = len(self.word2idx)

        self.generator_template = tf.make_template(GENERATOR_PREFIX, generator)
        self.discriminator_template = tf.make_template(DISCRIMINATOR_PREFIX, discriminator)

        self.enqueue_data, _, source, target, sequence_length = \
            prepare_data(self.corpus["train"], self.word2idx, num_threads=7, **self.opts)

        # TODO: option to either do pretrain or just generate?
        self.g_tensors_pretrain = self.generator_template(
            source, target, sequence_length, self.vocab_size, **self.opts)

        self.enqueue_data_valid, self.input_ph, source_valid, target_valid, sequence_length_valid = \
            prepare_data(self.corpus["valid"], self.word2idx, num_threads=1, **self.opts)

        self.g_tensors_pretrain_valid = self.generator_template(
            source_valid, target_valid, sequence_length_valid, self.vocab_size, **self.opts)

        self.decoder_fn = prepare_custom_decoder(
            sequence_length, self.g_tensors_pretrain.embedding_matrix, self.g_tensors_pretrain.output_projections)

        self.g_tensors_fake = self.generator_template(
            source, target, sequence_length, self.vocab_size, decoder_fn=self.decoder_fn, **self.opts)

        self.g_tensors_fake_valid = self.generator_template(
            source_valid, target_valid, sequence_length_valid, self.vocab_size, decoder_fn=self.decoder_fn, **self.opts)

        # TODO: using the rnn outputs from pretraining as "real" instead of target embeddings (aka professor forcing)
        self.d_tensors_real = self.discriminator_template(
            self.g_tensors_pretrain.rnn_outputs, sequence_length, is_real=True, **self.opts)

        # TODO: check to see if sequence_length is correct
        self.d_tensors_fake = self.discriminator_template(
            self.g_tensors_fake.rnn_outputs, None, is_real=False, **self.opts)

        self.g_tvars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=GENERATOR_PREFIX)
        self.d_tvars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=DISCRIMINATOR_PREFIX)
Author: drakh | Project: text-gan-tensorflow | Lines: 49 | Source: model.py


Example 5: _initialize_policy

  def _initialize_policy(self):
    """Initialize the policy.

    Run the policy network on dummy data to initialize its parameters for later
    reuse and to analyze the policy distribution. Initializes the attributes
    `self._network` and `self._policy_type`.

    Raises:
      ValueError: Invalid policy distribution.

    Returns:
      Parameters of the policy distribution and policy state.
    """
    with tf.device('/gpu:0' if self._use_gpu else '/cpu:0'):
      network = functools.partial(
          self._config.network, self._config, self._batch_env.action_space)
      self._network = tf.make_template('network', network)
      output = self._network(
          tf.zeros_like(self._batch_env.observ)[:, None],
          tf.ones(len(self._batch_env)))
    if output.policy.event_shape != self._batch_env.action.shape[1:]:
      message = 'Policy event shape {} does not match action shape {}.'
      message = message.format(
          output.policy.event_shape, self._batch_env.action.shape[1:])
      raise ValueError(message)
    self._policy_type = type(output.policy)
    is_tensor = lambda x: isinstance(x, tf.Tensor)
    policy_params = tools.nested.filter(is_tensor, output.policy.parameters)
    set_batch_dim = lambda x: utility.set_dimension(x, 0, len(self._batch_env))
    tools.nested.map(set_batch_dim, policy_params)
    if output.state is not None:
      tools.nested.map(set_batch_dim, output.state)
    return policy_params, output.state
Author: shamanez | Project: agents | Lines: 33 | Source: ppo.py


Example 6: __init__

        def __init__(self, *args, **kwargs):
            self.func = func
            self.args = args
            self.kwargs = kwargs
            self.name = self.kwargs.get("name", self.func.__name__)

            self._template = tf.make_template(self.name, self.func, create_scope_now_=True)
            self._unique_name = self._template.variable_scope.name.split("/")[-1]
            self._summary_added = False
Author: drakh | Project: text-gan-tensorflow | Lines: 9 | Source: layers.py


Example 7: test_variable_reuse_with_template

    def test_variable_reuse_with_template(self):
        tmpl1 = tf.make_template("test", tf.contrib.layers.legacy_fully_connected, num_output_units=8)
        output1 = tmpl1(self.input)
        output2 = tmpl1(self.input)

        with tf.Session() as sess:
            tf.initialize_all_variables().run()
            out_value1, out_value2 = sess.run([output1, output2])
        self.assertAllClose(out_value1, out_value2)
Author: ninotoshi | Project: tensorflow | Lines: 9 | Source: layers_test.py


Example 8: build_model

  def build_model(self):
    sc = predictron_arg_scope()
    with tf.variable_scope('state'):
      with slim.arg_scope(sc):
        state = slim.conv2d(self.inputs, 32, [3, 3], scope='conv1')
        state = layers.batch_norm(state, activation_fn=tf.nn.relu, scope='conv1/preact')
        state = slim.conv2d(state, 32, [3, 3], scope='conv2')
        state = layers.batch_norm(state, activation_fn=tf.nn.relu, scope='conv2/preact')

    iter_template = tf.make_template('iter', self.iter_func, unique_name_='iter')

    rewards_arr = []
    gammas_arr = []
    lambdas_arr = []
    values_arr = []

    for k in range(self.max_depth):
      state, reward, gamma, lambda_, value = iter_template(state)
      rewards_arr.append(reward)
      gammas_arr.append(gamma)
      lambdas_arr.append(lambda_)
      values_arr.append(value)

    _, _, _, _, value = iter_template(state)
    # K + 1 elements
    values_arr.append(value)

    bs = tf.shape(self.inputs)[0]
    # [batch_size, K * maze_size]
    self.rewards = tf.pack(rewards_arr, axis=1)
    # [batch_size, K, maze_size]
    self.rewards = tf.reshape(self.rewards, [bs, self.max_depth, self.maze_size])
    # [batch_size, K + 1, maze_size]
    self.rewards = tf.concat_v2(values=[tf.zeros(shape=[bs, 1, self.maze_size], dtype=tf.float32), self.rewards],
                                axis=1, name='rewards')

    # [batch_size, K * maze_size]
    self.gammas = tf.pack(gammas_arr, axis=1)
    # [batch_size, K, maze_size]
    self.gammas = tf.reshape(self.gammas, [bs, self.max_depth, self.maze_size])
    # [batch_size, K + 1, maze_size]
    self.gammas = tf.concat_v2(values=[tf.ones(shape=[bs, 1, self.maze_size], dtype=tf.float32), self.gammas],
                               axis=1, name='gammas')

    # [batch_size, K * maze_size]
    self.lambdas = tf.pack(lambdas_arr, axis=1)
    # [batch_size, K, maze_size]
    self.lambdas = tf.reshape(self.lambdas, [-1, self.max_depth, self.maze_size])

    # [batch_size, (K + 1) * maze_size]
    self.values = tf.pack(values_arr, axis=1)
    # [batch_size, K + 1, maze_size]
    self.values = tf.reshape(self.values, [-1, (self.max_depth + 1), self.maze_size])

    self.build_preturns()
    self.build_lambda_preturns()
Author: b-kartal | Project: predictron | Lines: 56 | Source: predictron.py


Example 9: test_variable_reuse_with_template

    def test_variable_reuse_with_template(self):
        tmpl1 = tf.make_template("test", tf.learn.fully_connected, num_output_nodes=8)
        output1 = tmpl1(self.input)
        output2 = tmpl1(self.input)

        with tf.Session() as sess:
            tf.initialize_all_variables().run()
            out_value1, out_value2 = sess.run([output1, output2])
        self.assertAllClose(out_value1, out_value2)
        assert_summary_scope(r"test(_\d)?/fully_connected")
Author: DeepThoughtTeam | Project: tensorflow | Lines: 10 | Source: learn_test.py


Example 10: testBijectorConditionKwargs

  def testBijectorConditionKwargs(self):
    batch_size = 3
    x_ = np.linspace(-1.0, 1.0, (batch_size * 4 * 2)).astype(
        np.float32).reshape((batch_size, 4 * 2))

    conditions = {
        "a": tf.random_normal((batch_size, 4), dtype=tf.float32),
        "b": tf.random_normal((batch_size, 2), dtype=tf.float32),
    }

    def _condition_shift_and_log_scale_fn(x0, output_units, a, b):
      x = tf.concat((x0, a, b), axis=-1)
      out = tf.layers.dense(
          inputs=x,
          units=2 * output_units)
      shift, log_scale = tf.split(out, 2, axis=-1)
      return shift, log_scale

    condition_shift_and_log_scale_fn = tf.make_template(
        "real_nvp_condition_template", _condition_shift_and_log_scale_fn)

    nvp = tfb.RealNVP(
        num_masked=4,
        validate_args=True,
        is_constant_jacobian=False,
        shift_and_log_scale_fn=condition_shift_and_log_scale_fn)

    x = tf.constant(x_)

    forward_x = nvp.forward(x, **conditions)
    # Use identity to invalidate cache.
    inverse_y = nvp.inverse(tf.identity(forward_x), **conditions)
    forward_inverse_y = nvp.forward(inverse_y, **conditions)
    fldj = nvp.forward_log_det_jacobian(x, event_ndims=1, **conditions)
    # Use identity to invalidate cache.
    ildj = nvp.inverse_log_det_jacobian(
        tf.identity(forward_x), event_ndims=1, **conditions)
    self.evaluate(tf.global_variables_initializer())
    [
        forward_x_,
        inverse_y_,
        forward_inverse_y_,
        ildj_,
        fldj_,
    ] = self.evaluate([
        forward_x,
        inverse_y,
        forward_inverse_y,
        ildj,
        fldj,
    ])
    self.assertEqual("real_nvp", nvp.name)
    self.assertAllClose(forward_x_, forward_inverse_y_, rtol=1e-6, atol=0.)
    self.assertAllClose(x_, inverse_y_, rtol=1e-6, atol=0.)
    self.assertAllClose(ildj_, -fldj_, rtol=1e-6, atol=0.)
Author: asudomoeva | Project: probability | Lines: 55 | Source: real_nvp_test.py


Example 11: initialize_graph

  def initialize_graph(self, input_statistics):
    """Save templates for components, which can then be used repeatedly.

    This method is called every time a new graph is created. It's safe to
    start adding ops to the current default graph here, but the graph should
    be constructed from scratch.

    Args:
      input_statistics: A math_utils.InputStatistics object.
    """
    super(_LSTMModel, self).initialize_graph(input_statistics=input_statistics)
    self._lstm_cell = tf.nn.rnn_cell.LSTMCell(num_units=self._num_units)
    # Create templates so we don't have to worry about variable reuse.
    self._lstm_cell_run = tf.make_template(
        name_="lstm_cell",
        func_=self._lstm_cell,
        create_scope_now_=True)
    # Transforms LSTM output into mean predictions.
    self._predict_from_lstm_output = tf.make_template(
        name_="predict_from_lstm_output",
        func_=lambda inputs: tf.layers.dense(inputs=inputs, units=self.num_features),
        create_scope_now_=True)
Author: Lagogoy | Project: Deep-Learning-21-Examples | Lines: 20 | Source: train_lstm_multivariate.py


Example 12: __init__

    def __init__(self, mode=None, batch_size=hp_default.batch_size, queue=True):
        self.mode = mode
        self.batch_size = batch_size
        self.queue = queue
        self.is_training = self.get_is_training(mode)

        # Input
        self.x_mfcc, self.y_ppgs, self.y_spec, self.y_mel, self.num_batch = self.get_input(mode, batch_size, queue)

        # Networks
        self.net_template = tf.make_template('net', self._net2)
        self.ppgs, self.pred_ppg, self.logits_ppg, self.pred_spec, self.pred_mel = self.net_template()
Author: QianQQ | Project: Voice-Conversion | Lines: 12 | Source: models.py


Example 13: __init__

  def __init__(self,
               f,
               g,
               num_layers=1,
               f_side_input=None,
               g_side_input=None,
               use_efficient_backprop=True):

    if isinstance(f, list):
      assert len(f) == num_layers
    else:
      f = [f] * num_layers

    if isinstance(g, list):
      assert len(g) == num_layers
    else:
      g = [g] * num_layers

    scope_prefix = "revblock/revlayer_%d/"
    f_scope = scope_prefix + "f"
    g_scope = scope_prefix + "g"

    f = [
        tf.make_template(f_scope % i, fn, create_scope_now_=True)
        for i, fn in enumerate(f)
    ]
    g = [
        tf.make_template(g_scope % i, fn, create_scope_now_=True)
        for i, fn in enumerate(g)
    ]

    self.f = f
    self.g = g

    self.num_layers = num_layers
    self.f_side_input = f_side_input or []
    self.g_side_input = g_side_input or []

    self._use_efficient_backprop = use_efficient_backprop
Author: chqiwang | Project: tensor2tensor | Lines: 39 | Source: rev_block.py


Example 14: __init__

  def __init__(self, name):
    """
    Initialize the module. Each subclass must call this constructor with a name.

    Args:
      name: Name of this module. Used for `tf.make_template`.
    """
    self.name = name
    self._template = tf.make_template(name, self._build, create_scope_now_=True)
    # Docstrings for the class should be the docstring for the _build method
    self.__doc__ = self._build.__doc__
    # pylint: disable=E1101
    self.__call__.__func__.__doc__ = self._build.__doc__
Author: AbhinavJain13 | Project: seq2seq | Lines: 13 | Source: graph_module.py
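
For context, here is a hedged sketch of how this module pattern is typically used (assumptions: the enclosing class is seq2seq's GraphModule, whose __call__ delegates to self._template; the ToyModule subclass and its sizes are hypothetical):

import tensorflow as tf

class ToyModule(GraphModule):
  def __init__(self, units):
    super(ToyModule, self).__init__("toy_module")
    self.units = units

  def _build(self, inputs):
    # All variables end up under the "toy_module" template scope.
    return tf.layers.dense(inputs, self.units)

layer = ToyModule(16)
a = layer(tf.zeros([4, 8]))  # creates the variables
b = layer(tf.ones([4, 8]))   # reuses them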


Example 15: test_all_ckpt

def test_all_ckpt(modelPath, fileOrDir,flags):
    tf.reset_default_graph()

    tf.logging.warning(modelPath)
    tem = [f for f in os.listdir(modelPath) if 'data' in f]
    ckptFiles = sorted([r.split('.data')[0] for r in tem])
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    with tf.Session(config=config) as sess:
        input_tensor = tf.placeholder(tf.float32, shape=(1, None, None, 1))
        shared_model = tf.make_template('shared_model', model)
        output_tensor, weights = shared_model(input_tensor)
        output_tensor = tf.clip_by_value(output_tensor, 0., 1.)
        output_tensor = output_tensor * 255

        saver = tf.train.Saver()
        sess.run(tf.global_variables_initializer())

        original_ycbcr, gt_y, fileName_list = prepare_test_data(fileOrDir)

        for ckpt in ckptFiles:
            epoch = int(ckpt.split('_')[-1].split('.')[0])
            if flags==0:
                if epoch != 555:
                    continue
            elif flags==1:
                if epoch!= 555:
                    continue
            else:
                if epoch != 555:
                    continue

            tf.logging.warning("epoch:%d\t"%epoch)
            saver.restore(sess,os.path.join(modelPath,ckpt))
            total_imgs = len(fileName_list)
            for i in range(total_imgs):
                imgY = original_ycbcr[i][0]
                out = sess.run(output_tensor, feed_dict={input_tensor: imgY})
                out = np.reshape(out, (out.shape[1], out.shape[2]))
                out = np.around(out)
                out = out.astype('int')
                out = out.tolist()
                return out
Author: IVC-Projects | Project: cnn_In-loop_filter | Lines: 43 | Source: TEST.py


Example 16: define_train

def define_train(hparams, environment_spec, event_dir):
  """Define the training setup."""
  if isinstance(environment_spec, str):
    env_lambda = lambda: gym.make(environment_spec)
  else:
    env_lambda = environment_spec
  policy_lambda = hparams.network
  env = env_lambda()
  action_space = env.action_space

  batch_env = utils.define_batch_env(env_lambda, hparams.num_agents)

  policy_factory = tf.make_template(
      "network",
      functools.partial(policy_lambda, action_space, hparams))

  with tf.variable_scope("train"):
    memory, collect_summary = collect.define_collect(
        policy_factory, batch_env, hparams, eval_phase=False)
  ppo_summary = ppo.define_ppo_epoch(memory, policy_factory, hparams)
  summary = tf.summary.merge([collect_summary, ppo_summary])

  with tf.variable_scope("eval"):
    eval_env_lambda = env_lambda
    if event_dir and hparams.video_during_eval:
      # Some environments reset automatically when they reach a done state.
      # For those we record only every second episode.
      d = 2 if env_lambda().metadata.get("semantics.autoreset") else 1
      eval_env_lambda = lambda: gym.wrappers.Monitor(  # pylint: disable=g-long-lambda
          env_lambda(), event_dir, video_callable=lambda i: i % d == 0)
    wrapped_eval_env_lambda = lambda: utils.EvalVideoWrapper(eval_env_lambda())
    _, eval_summary = collect.define_collect(
        policy_factory,
        utils.define_batch_env(wrapped_eval_env_lambda, hparams.num_eval_agents,
                               xvfb=hparams.video_during_eval),
        hparams, eval_phase=True)
  return summary, eval_summary
Author: chqiwang | Project: tensor2tensor | Lines: 37 | Source: rl_trainer_lib.py


Example 17: testMakeLogJointFnTemplate

  def testMakeLogJointFnTemplate(self):
    """Test `make_log_joint_fn` on program returned by tf.make_template."""
    def variational():
      loc = tf.get_variable("loc", [])
      qz = ed.Normal(loc=loc, scale=0.5, name="qz")
      return qz

    def true_log_joint(loc, qz):
      log_prob = tf.reduce_sum(tfd.Normal(loc=loc, scale=0.5).log_prob(qz))
      return log_prob

    qz_value = 1.23
    variational_template = tf.make_template("variational", variational)

    log_joint = ed.make_log_joint_fn(variational_template)
    expected_log_prob = log_joint(qz=qz_value)
    loc = tf.trainable_variables("variational")[0]
    actual_log_prob = true_log_joint(loc, qz_value)

    with self.test_session() as sess:
      sess.run(tf.initialize_all_variables())
      actual_log_prob_, expected_log_prob_ = sess.run(
          [actual_log_prob, expected_log_prob])
      self.assertEqual(actual_log_prob_, expected_log_prob_)
Author: lewisKit | Project: probability | Lines: 24 | Source: program_transformations_test.py


Example 18: masked_autoregressive_default_template

def masked_autoregressive_default_template(hidden_layers,
                                           shift_only=False,
                                           activation=tf.nn.relu,
                                           log_scale_min_clip=-5.,
                                           log_scale_max_clip=3.,
                                           log_scale_clip_gradient=False,
                                           name=None,
                                           *args,  # pylint: disable=keyword-arg-before-vararg
                                           **kwargs):
  """Build the Masked Autoregressive Density Estimator (Germain et al., 2015).

  This will be wrapped in a make_template to ensure the variables are only
  created once. It takes the input and returns the `loc` ("mu" in [Germain et
  al. (2015)][1]) and `log_scale` ("alpha" in [Germain et al. (2015)][1]) from
  the MADE network.

  Warning: This function uses `masked_dense` to create randomly initialized
  `tf.Variables`. It is presumed that these will be fit, just as you would any
  other neural architecture which uses `tf.layers.dense`.

  #### About Hidden Layers

  Each element of `hidden_layers` should be greater than the `input_depth`
  (i.e., `input_depth = tf.shape(input)[-1]` where `input` is the input to the
  neural network). This is necessary to ensure the autoregressivity property.

  #### About Clipping

  This function also optionally clips the `log_scale` (but possibly not its
  gradient). This is useful because if `log_scale` is too small/large it might
  underflow/overflow making it impossible for the `MaskedAutoregressiveFlow`
  bijector to implement a bijection. Additionally, the `log_scale_clip_gradient`
  `bool` indicates whether the gradient should also be clipped. The default does
  not clip the gradient; this is useful because it still provides gradient
  information (for fitting) yet solves the numerical stability problem. I.e.,
  `log_scale_clip_gradient = False` means
  `grad[exp(clip(x))] = grad[x] exp(clip(x))` rather than the usual
  `grad[clip(x)] exp(clip(x))`.

  Args:
    hidden_layers: Python `list`-like of non-negative integer, scalars
      indicating the number of units in each hidden layer. Default: `[512, 512]`.
    shift_only: Python `bool` indicating if only the `shift` term shall be
      computed. Default: `False`.
    activation: Activation function (callable). Explicitly setting to `None`
      implies a linear activation.
    log_scale_min_clip: `float`-like scalar `Tensor`, or a `Tensor` with the
      same shape as `log_scale`. The minimum value to clip by. Default: -5.
    log_scale_max_clip: `float`-like scalar `Tensor`, or a `Tensor` with the
      same shape as `log_scale`. The maximum value to clip by. Default: 3.
    log_scale_clip_gradient: Python `bool` indicating that the gradient of
      `tf.clip_by_value` should be preserved. Default: `False`.
    name: A name for ops managed by this function. Default:
      "masked_autoregressive_default_template".
    *args: `tf.layers.dense` arguments.
    **kwargs: `tf.layers.dense` keyword arguments.

  Returns:
    shift: `Float`-like `Tensor` of shift terms (the "mu" in
      [Germain et al.  (2015)][1]).
    log_scale: `Float`-like `Tensor` of log(scale) terms (the "alpha" in
      [Germain et al. (2015)][1]).

  Raises:
    NotImplementedError: if rightmost dimension of `inputs` is unknown prior to
      graph execution.

  #### References

  [1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE:
       Masked Autoencoder for Distribution Estimation. In _International
       Conference on Machine Learning_, 2015. https://arxiv.org/abs/1502.03509
  """
  name = name or "masked_autoregressive_default_template"
  with tf.name_scope(name, values=[log_scale_min_clip, log_scale_max_clip]):
    def _fn(x):
      """MADE parameterized via `masked_autoregressive_default_template`."""
      # TODO(b/67594795): Better support of dynamic shape.
      input_depth = x.shape.with_rank_at_least(1)[-1].value
      if input_depth is None:
        raise NotImplementedError(
            "Rightmost dimension must be known prior to graph execution.")
      input_shape = (
          np.int32(x.shape.as_list())
          if x.shape.is_fully_defined() else tf.shape(x))
      for i, units in enumerate(hidden_layers):
        x = masked_dense(
            inputs=x,
            units=units,
            num_blocks=input_depth,
            exclusive=True if i == 0 else False,
            activation=activation,
            *args,  # pylint: disable=keyword-arg-before-vararg
            **kwargs)
      x = masked_dense(
          inputs=x,
          units=(1 if shift_only else 2) * input_depth,
          num_blocks=input_depth,
          activation=None,
          *args,  # pylint: disable=keyword-arg-before-vararg
#......... remainder of this function omitted .........
Author: lewisKit | Project: probability | Lines: 101 | Source: masked_autoregressive.py
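
As a usage sketch for the template above (hedged; it mirrors the example in the TensorFlow Probability docstring for MaskedAutoregressiveFlow and assumes a TFP 0.x / TF 1.x setup), the returned template is handed to the bijector, which calls it repeatedly while sharing one set of MADE variables:

import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

dims = 2
maf = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.masked_autoregressive_default_template(
            hidden_layers=[512, 512])),
    event_shape=[dims])
x = maf.sample(5)  # first use builds the MADE network; later uses reuse it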


Example 19: real_nvp_default_template

def real_nvp_default_template(hidden_layers,
                              shift_only=False,
                              activation=tf.nn.relu,
                              name=None,
                              *args,  # pylint: disable=keyword-arg-before-vararg
                              **kwargs):
  """Build a scale-and-shift function using a multi-layer neural network.

  This will be wrapped in a make_template to ensure the variables are only
  created once. It takes the `d`-dimensional input x[0:d] and returns the `D-d`
  dimensional outputs `loc` ("mu") and `log_scale` ("alpha").

  The default template does not support conditioning and will raise an
  exception if `condition_kwargs` are passed to it. To use conditioning in
  real nvp bijector, implement a conditioned shift/scale template that
  handles the `condition_kwargs`.

  Arguments:
    hidden_layers: Python `list`-like of non-negative integer, scalars
      indicating the number of units in each hidden layer. Default: `[512, 512]`.
    shift_only: Python `bool` indicating if only the `shift` term shall be
      computed (i.e. NICE bijector). Default: `False`.
    activation: Activation function (callable). Explicitly setting to `None`
      implies a linear activation.
    name: A name for ops managed by this function. Default:
      "real_nvp_default_template".
    *args: `tf.layers.dense` arguments.
    **kwargs: `tf.layers.dense` keyword arguments.

  Returns:
    shift: `Float`-like `Tensor` of shift terms ("mu" in
      [Papamakarios et al.  (2016)][1]).
    log_scale: `Float`-like `Tensor` of log(scale) terms ("alpha" in
      [Papamakarios et al. (2016)][1]).

  Raises:
    NotImplementedError: if rightmost dimension of `inputs` is unknown prior to
      graph execution, or if `condition_kwargs` is not empty.

  #### References

  [1]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked
       Autoregressive Flow for Density Estimation. In _Neural Information
       Processing Systems_, 2017. https://arxiv.org/abs/1705.07057
  """

  with tf.name_scope(name, "real_nvp_default_template"):

    def _fn(x, output_units, **condition_kwargs):
      """Fully connected MLP parameterized via `real_nvp_template`."""
      if condition_kwargs:
        raise NotImplementedError(
            "Conditioning not implemented in the default template.")

      for units in hidden_layers:
        x = tf.layers.dense(
            inputs=x,
            units=units,
            activation=activation,
            *args,  # pylint: disable=keyword-arg-before-vararg
            **kwargs)
      x = tf.layers.dense(
          inputs=x,
          units=(1 if shift_only else 2) * output_units,
          activation=None,
          *args,  # pylint: disable=keyword-arg-before-vararg
          **kwargs)
      if shift_only:
        return x, None
      shift, log_scale = tf.split(x, 2, axis=-1)
      return shift, log_scale

    return tf.make_template("real_nvp_default_template", _fn)
Author: asudomoeva | Project: probability | Lines: 73 | Source: real_nvp.py
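
Similarly, a hedged usage sketch for real_nvp_default_template (again following the pattern in the TFP docstrings; import aliases assumed):

import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

nvp = tfd.TransformedDistribution(
    distribution=tfd.MultivariateNormalDiag(loc=[0., 0., 0.]),
    bijector=tfb.RealNVP(
        num_masked=2,
        shift_and_log_scale_fn=tfb.real_nvp_default_template(
            hidden_layers=[512, 512])))
x = nvp.sample(3)  # the shift/scale network is created once and shared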


Example 20: _build_networks

  def _build_networks(self):
    """Builds the IQN computations needed for acting and training.

    These are:
      self.online_convnet: For computing the current state's quantile values.
      self.target_convnet: For computing the next state's target quantile
        values.
      self._net_outputs: The actual quantile values.
      self._q_argmax: The action maximizing the current state's Q-values.
      self._replay_net_outputs: The replayed states' quantile values.
      self._replay_next_target_net_outputs: The replayed next states' target
        quantile values.
    """
    # Calling online_convnet will generate a new graph as defined in
    # self._network_template using whatever input is passed, but will always
    # share the same weights.
    self.online_convnet = tf.make_template('Online', self._network_template)
    self.target_convnet = tf.make_template('Target', self._network_template)

    # Compute the Q-values which are used for action selection in the current
    # state.
    self._net_outputs = self.online_convnet(self.state_ph,
                                            self.num_quantile_samples)
    # Shape of self._net_outputs.quantile_values:
    # num_quantile_samples x num_actions.
    # e.g. if num_actions is 2, it might look something like this:
    # Vals for Quantile .2  Vals for Quantile .4  Vals for Quantile .6
    #    [[0.1, 0.5],         [0.15, -0.3],          [0.15, -0.2]]
    # Q-values = [(0.1 + 0.15 + 0.15)/3, (0.5 + 0.15 + -0.2)/3].
    self._q_values = tf.reduce_mean(self._net_outputs.quantile_values, axis=0)
    self._q_argmax = tf.argmax(self._q_values, axis=0)

    self._replay_net_outputs = self.online_convnet(self._replay.states,
                                                   self.num_tau_samples)
    # Shape: (num_tau_samples x batch_size) x num_actions.
    self._replay_net_quantile_values = self._replay_net_outputs.quantile_values
    self._replay_net_quantiles = self._replay_net_outputs.quantiles

    # Do the same for next states in the replay buffer.
    self._replay_net_target_outputs = self.target_convnet(
        self._replay.next_states, self.num_tau_prime_samples)
    # Shape: (num_tau_prime_samples x batch_size) x num_actions.
    vals = self._replay_net_target_outputs.quantile_values
    self._replay_net_target_quantile_values = vals

    # Compute Q-values which are used for action selection for the next states
    # in the replay buffer.
    outputs_q = self.target_convnet(
        self._replay.next_states, self.num_quantile_samples)
    # Shape: (num_quantile_samples x batch_size) x num_actions.
    target_quantile_values_q = outputs_q.quantile_values
    # Shape: num_quantile_samples x batch_size x num_actions.
    target_quantile_values_q = tf.reshape(target_quantile_values_q,
                                          [self.num_quantile_samples,
                                           self._replay.batch_size,
                                           self.num_actions])
    # Shape: batch_size x num_actions.
    self._replay_net_target_q_values = tf.squeeze(tf.reduce_mean(
        target_quantile_values_q, axis=0))

    # Compute the argmax over target net Q-values using different quantile
    # inputs.
    outputs_action = self.target_convnet(self._replay.next_states,
                                         self.num_quantile_samples)

    # Shape: (num_quantile_samples x batch_size) x num_actions.
    target_quantile_values_action = outputs_action.quantile_values
    # Shape: num_quantile_samples x batch_size x num_actions.
    target_quantile_values_action = tf.reshape(target_quantile_values_action,
                                               [self.num_quantile_samples,
                                                self._replay.batch_size,
                                                self.num_actions])
    # Shape: batch_size x num_actions.
    target_q_values_action = tf.squeeze(tf.reduce_mean(
        target_quantile_values_action, axis=0))
    self._replay_next_qt_argmax = tf.argmax(target_q_values_action, axis=1)
Author: veronicachelu | Project: dopamine | Lines: 76 | Source: implicit_quantile_agent.py



Note: The tensorflow.make_template examples above were compiled by 纯净天空 from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by many programmers; copyright of each snippet belongs to its original author, and redistribution or use should follow the corresponding project's License. Do not repost without permission.

