
Python tensorflow.scatter_update Function Code Examples


This article collects typical usage examples of the Python function tensorflow.scatter_update. If you have been wondering what scatter_update does, how to call it, or where to find concrete usage examples, the curated code samples below should help.



The following presents 20 code examples of the scatter_update function, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.
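
Before diving into the excerpts, here is a minimal, self-contained sketch of the basic call, assuming the TensorFlow 1.x graph API that these examples use: tf.scatter_update(ref, indices, updates) writes updates into the specified rows of a variable in place and returns the updated reference.

import tensorflow as tf

# Variable holding four rows; scatter_update mutates it in place.
var = tf.Variable([1.0, 2.0, 3.0, 4.0])
# Write 10.0 into row 0 and 30.0 into row 2; the other rows are untouched.
update_op = tf.scatter_update(var, [0, 2], [10.0, 30.0])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update_op))  # [10.  2. 30.  4.]

Note that tf.scatter_update belongs to the 1.x API; in TensorFlow 2.x code the same effect is typically achieved via tf.compat.v1.scatter_update or the Variable.scatter_update method (which takes a tf.IndexedSlices).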

Example 1: loop_body

 def loop_body(j):
   ns1 = tf.scatter_update(select1, j, 10.0)
   ns2 = tf.scatter_update(select2, j, 10.0)
   nj = tf.add(j, 1)
   op = control_flow_ops.group(ns1, ns2)
   nj = control_flow_ops.with_dependencies([op], nj)
   return [nj]
Developer: hypatiad | Project: tensorflow | Lines: 7 | Source: control_flow_ops_py_test.py


Example 2: replace

  def replace(self, episodes, length, rows=None):
    """Replace full episodes.

    Args:
      episodes: Tuple of transition quantities with batch and time dimensions.
      length: Batch of sequence lengths.
      rows: Episodes to replace, defaults to all.

    Returns:
      Operation.
    """
    rows = tf.range(self._capacity) if rows is None else rows
    assert rows.shape.ndims == 1
    assert_capacity = tf.assert_less(
        rows, self._capacity, message='capacity exceeded')
    with tf.control_dependencies([assert_capacity]):
      assert_max_length = tf.assert_less_equal(
          length, self._max_length, message='max length exceeded')
    replace_ops = []
    with tf.control_dependencies([assert_max_length]):
      for buffer_, elements in zip(self._buffers, episodes):
        replace_op = tf.scatter_update(buffer_, rows, elements)
        replace_ops.append(replace_op)
    with tf.control_dependencies(replace_ops):
      return tf.scatter_update(self._length, rows, length)
Developer: AndrewMeadows | Project: bullet3 | Lines: 25 | Source: memory.py


Example 3: _forward

    def _forward(self, obs_prob_list):
        
        with tf.name_scope('init_scaling_factor'):
            self.scale = tf.Variable(tf.zeros([self.N], tf.float64)) #scale factors
        
        with tf.name_scope('forward_first_step'):
            # initialize with state starting priors
            init_prob = tf.mul(self.T0, tf.squeeze(obs_prob_list[0]))

            # scaling factor at t=0
            self.scale = tf.scatter_update(self.scale, 0, 1.0 / tf.reduce_sum(init_prob))

            # scaled belief at t=0
            self.forward = tf.scatter_update(self.forward, 0, self.scale[0] * init_prob)

        # propagate belief
        for step, obs_prob in enumerate(obs_prob_list[1:]):
            with tf.name_scope('time_step-%s' %step):
                # previous state probability
                prev_prob = tf.expand_dims(self.forward[step, :], 0)
                # transition prior
                prior_prob = tf.matmul(prev_prob, self.T)
                # forward belief propagation
                forward_score = tf.mul(prior_prob, tf.squeeze(obs_prob))

                forward_prob = tf.squeeze(forward_score)
                # scaling factor
                self.scale = tf.scatter_update(self.scale, step+1, 1.0 / tf.reduce_sum(forward_prob))
                # Update forward matrix
                self.forward = tf.scatter_update(self.forward, step+1, self.scale[step+1] * forward_prob)
Developer: aliziaei | Project: HiddenMarkovModel_TensorFlow | Lines: 30 | Source: HiddenMarkovModel.py


Example 4: build_init_cell

    def build_init_cell(self):
        with tf.variable_scope("init_cell"):
            # always zero
            dummy = tf.placeholder(tf.float32, [1, 1], name='dummy')

            # memory
            M_init_linear = tf.tanh(Linear(dummy, self.mem_size * self.mem_dim, name='M_init_linear'))
            M_init = tf.reshape(M_init_linear, [self.mem_size, self.mem_dim])

            # read weights
            read_w_init = tf.Variable(tf.zeros([self.read_head_size, self.mem_size]))
            read_init = tf.Variable(tf.zeros([self.read_head_size, 1, self.mem_dim]))

            for idx in xrange(self.read_head_size):
                # initialize bias distribution with `tf.range(mem_size-2, 0, -1)`
                read_w_linear_idx = Linear(dummy, self.mem_size, is_range=True,
                                           name='read_w_linear_%s' % idx)
                read_w_init = tf.scatter_update(read_w_init, [idx], tf.nn.softmax(read_w_linear_idx))

                read_init_idx = tf.tanh(Linear(dummy, self.mem_dim, name='read_init_%s' % idx))
                read_init = tf.scatter_update(read_init, [idx], tf.reshape(read_init_idx, [1, 1, self.mem_dim]))

            # write weights
            write_w_init = tf.Variable(tf.zeros([self.write_head_size, self.mem_size]))
            for idx in xrange(self.write_head_size):
                write_w_linear_idx = Linear(dummy, self.mem_size, is_range=True,
                                            name='write_w_linear_%s' % idx)
                write_w_init = tf.scatter_update(write_w_init, [idx], tf.nn.softmax(write_w_linear_idx))

            # controller state
            output_init = tf.Variable(tf.zeros([self.controller_layer_size, self.controller_dim]))
            hidden_init = tf.Variable(tf.zeros([self.controller_layer_size, self.controller_dim]))

            for idx in xrange(self.controller_layer_size):
                output_init = tf.scatter_update(output_init, [idx], tf.reshape(
                        tf.tanh(Linear(dummy, self.controller_dim, name='output_init_%s' % idx)),
                        [1, self.controller_dim]
                    )
                )
                hidden_init = tf.scatter_update(hidden_init, [idx], tf.reshape(
                        tf.tanh(Linear(dummy, self.controller_dim, name='hidden_init_%s' % idx)),
                        [1, self.controller_dim]
                    )
                )

            new_output = tf.tanh(Linear(dummy, self.output_dim, name='new_output'))

            inputs = {
                'input': dummy,
            }
            outputs = {
                'new_output': new_output,
                'M': M_init,
                'read_w': read_w_init,
                'write_w': write_w_init,
                'read': tf.reshape(read_init, [self.read_head_size, self.mem_dim]),
                'output': output_init,
                'hidden': hidden_init
            }
            return inputs, outputs
Developer: ramtej | Project: NTM-tensorflow | Lines: 60 | Source: model.py


Example 5: shortlist_insert

 def shortlist_insert():
   larger_ids = tf.boolean_mask(tf.to_int64(ids), larger_scores)
   larger_score_values = tf.boolean_mask(scores, larger_scores)
   shortlist_ids, new_ids, new_scores = self.ops.top_n_insert(
       self.sl_ids, self.sl_scores, larger_ids, larger_score_values)
   u1 = tf.scatter_update(self.sl_ids, shortlist_ids, new_ids)
   u2 = tf.scatter_update(self.sl_scores, shortlist_ids, new_scores)
   return tf.group(u1, u2)
Developer: BloodD | Project: tensorflow | Lines: 8 | Source: topn.py


Example 6: build_update

  def build_update(self):
    """Perform sampling and exchange.
    """
    # Sample by Metropolis-Hastings for each replica.
    replica_sample = []
    replica_accept = []
    for i in range(self.n_replica):
      sample_, accept_ = self._mh_sample(self.replica_vars[i],
                                         self.inverse_temperatures[i])
      replica_sample.append(sample_)
      replica_accept.append(accept_)
    accept = replica_accept[0]

    # Variable to store order of replicas after exchange
    new_replica_idx = tf.Variable(tf.range(self.n_replica))
    new_replica_idx = tf.assign(new_replica_idx, tf.range(self.n_replica))

    # Variable to store ratio of current samples
    replica_ratio = tf.Variable(tf.zeros(
        self.n_replica, dtype=list(self.latent_vars)[0].dtype))
    replica_ratio = self._replica_ratio(replica_ratio, replica_sample)

    # Exchange adjacent replicas at frequency of exchange_freq
    u = tf.random_uniform([])
    exchange = u < self.exchange_freq
    new_replica_idx = tf.cond(
        exchange, lambda: self._replica_exchange(
            new_replica_idx, replica_ratio), lambda: new_replica_idx)

    # New replica sorted by new_replica_idx
    new_replica_sample = []
    for i in range(self.n_replica):
      new_replica_sample.append(
          {z: tf.case({tf.equal(tf.gather(new_replica_idx, i), j):
                      _stateful_lambda(replica_sample[j][z])
                      for j in range(self.n_replica)},
           default=lambda: replica_sample[0][z], exclusive=True) for z, qz in
           six.iteritems(self.latent_vars)})

    assign_ops = []

    # Update Empirical random variables.
    for z, qz in six.iteritems(self.latent_vars):
      variable = qz.get_variables()[0]
      assign_ops.append(tf.scatter_update(variable, self.t,
                                          new_replica_sample[0][z]))

    for i in range(self.n_replica):
      for z, qz in six.iteritems(self.replica_vars[i]):
        variable = qz.get_variables()[0]
        assign_ops.append(tf.scatter_update(variable, self.t,
                                            new_replica_sample[i][z]))

    # Increment n_accept (if accepted).
    assign_ops.append(self.n_accept.assign_add(tf.where(accept, 1, 0)))

    return tf.group(*assign_ops)
Developer: JoyceYa | Project: edward | Lines: 57 | Source: replica_exchange_mc.py


Example 7: _reset_non_empty

 def _reset_non_empty(self, indices):
   op_zero = tf.scatter_update(
       self._time_elapsed, indices,
       tf.gather(tf.zeros((len(self),), tf.int32), indices))
   # pylint: disable=protected-access
   new_values = self._batch_env._reset_non_empty(indices)
   # pylint: enable=protected-access
   assign_op = tf.scatter_update(self._observ, indices, new_values)
   with tf.control_dependencies([op_zero, assign_op]):
     return tf.gather(self.observ, indices)
Developer: kltony | Project: tensor2tensor | Lines: 10 | Source: tf_atari_wrappers.py


Example 8: testBooleanScatterUpdate

  def testBooleanScatterUpdate(self):
    with self.test_session(use_gpu=False) as session:
      var = tf.Variable([True, False])
      update0 = tf.scatter_update(var, 1, True)
      update1 = tf.scatter_update(var, tf.constant(0, dtype=tf.int64), False)
      var.initializer.run()

      session.run([update0, update1])

      self.assertAllEqual([False, True], var.eval())
Developer: 13331151 | Project: tensorflow | Lines: 10 | Source: scatter_ops_test.py


Example 9: build_controller

    def build_controller(self, input, read_prev, output_prev, hidden_prev):
        with tf.variable_scope("controller"):
            output = tf.Variable(tf.zeros([self.controller_layer_size, self.controller_dim]))
            hidden = tf.Variable(tf.zeros([self.controller_layer_size, self.controller_dim]))
            for layer_idx in xrange(self.controller_layer_size):
                if self.controller_layer_size == 1:
                    o_prev = output_prev
                    h_prev = hidden_prev
                else:
                    o_prev = tf.reshape(tf.gather(output_prev, layer_idx), [1, -1])
                    h_prev = tf.reshape(tf.gather(hidden_prev, layer_idx), [1, -1])

                if layer_idx == 0:
                    def new_gate(gate_name):
                        in_modules = [
                            Linear(input, self.controller_dim,
                                   name='%s_gate_1_%s' % (gate_name, layer_idx)),
                            Linear(o_prev, self.controller_dim,
                                   name='%s_gate_2_%s' % (gate_name, layer_idx)),
                        ]
                        if self.read_head_size == 1:
                            in_modules.append(
                                Linear(read_prev, self.controller_dim,
                                       name='%s_gate_3_%s' % (gate_name, layer_idx))
                            )
                        else:
                            for read_idx in xrange(self.read_head_size):
                                vec = tf.reshape(tf.gather(read_prev, read_idx), [1, -1])
                                in_modules.append(
                                    Linear(vec, self.controller_dim,
                                           name='%s_gate_3_%s_%s' % (gate_name, layer_idx, read_idx))
                                )
                        return tf.add_n(in_modules)
                else:
                    def new_gate(gate_name):
                        return tf.add_n([
                            Linear(tf.reshape(tf.gather(output, layer_idx-1), [1, -1]),
                                   self.controller_dim, name='%s_gate_1_%s' % (gate_name, layer_idx)),
                            Linear(o_prev, self.controller_dim,
                                   name='%s_gate_2_%s' % (gate_name, layer_idx)),
                        ])

                # input, forget, and output gates for LSTM
                i = tf.sigmoid(new_gate('input'))
                f = tf.sigmoid(new_gate('forget'))
                o = tf.sigmoid(new_gate('output'))
                update = tf.tanh(new_gate('update'))

                # update the state of the LSTM cell
                hidden = tf.scatter_update(hidden, [layer_idx],
                                           tf.add_n([f * h_prev, i * update]))
                output = tf.scatter_update(output, [layer_idx],
                                           o * tf.tanh(tf.gather(hidden,layer_idx)))

            return output, hidden
Developer: ramtej | Project: NTM-tensorflow | Lines: 55 | Source: model.py


Example 10: viterbi_inference

    def viterbi_inference(self, obs_seq):
        
        # length of observed sequence
        self.N = len(obs_seq)
        
        # shape path Variables
        shape = [self.N, self.S]
        
        # observed sequence
        x = tf.constant(obs_seq, dtype=tf.int32, name='observation_sequence')
        
        with tf.name_scope('Init_viterbi_variables'):
            # Initialize variables
            pathStates, pathScores, states_seq = self.initialize_viterbi_variables(shape)       
        
        with tf.name_scope('Emission_seq_'):
            # log probability of emission sequence
            obs_prob_seq = tf.log(tf.gather(self.E, x))
            obs_prob_list = tf.split(0, self.N, obs_prob_seq)

        with tf.name_scope('Starting_log-priors'):
            # initialize with state starting log-priors
            pathScores = tf.scatter_update(pathScores, 0, tf.log(self.T0) + tf.squeeze(obs_prob_list[0]))
            
        
        with tf.name_scope('Belief_Propagation'):
            for step, obs_prob in enumerate(obs_prob_list[1:]):

                with tf.name_scope('Belief_Propagation_step_%s' %step):
                    # propagate state belief
                    belief = self.belief_propagation(pathScores[step, :])

                    # the inferred state by maximizing global function
                    # and update state and score matrices 
                    pathStates = tf.scatter_update(pathStates, step + 1, tf.argmax(belief, 0))
                    pathScores = tf.scatter_update(pathScores, step + 1, tf.reduce_max(belief, 0) + tf.squeeze(obs_prob))

            with tf.name_scope('Max_Likelyhood_update'):
                # infer most likely last state
                states_seq = tf.scatter_update(states_seq, self.N-1, tf.argmax(pathScores[self.N-1, :], 0))
        
        with tf.name_scope('Backtrack'):
            for step in range(self.N - 1, 0, -1):
                with tf.name_scope('Back_track_step_%s' %step):
                    # for every timestep retrieve inferred state
                    state = states_seq[step]
                    idx = tf.reshape(tf.pack([step, state]), [1, -1])
                    state_prob = tf.gather_nd(pathStates, idx)
                    states_seq = tf.scatter_update(states_seq, step - 1,  state_prob[0])

        return states_seq, tf.exp(pathScores) # turn scores back to probabilities
Developer: aliziaei | Project: HiddenMarkovModel_TensorFlow | Lines: 51 | Source: HiddenMarkovModel.py


Example 11: build_update

  def build_update(self):
    """
    Simulate Langevin dynamics using a discretized integrator. Its
    discretization error goes to zero as the learning rate decreases.
    """
    old_sample = {z: tf.gather(qz.params, tf.maximum(self.t - 1, 0))
                  for z, qz in six.iteritems(self.latent_vars)}

    # Simulate Langevin dynamics.
    learning_rate = self.step_size / tf.cast(self.t + 1, tf.float32)
    grad_log_joint = tf.gradients(self._log_joint(old_sample),
                                  list(six.itervalues(old_sample)))
    sample = {}
    for z, qz, grad_log_p in \
        zip(six.iterkeys(self.latent_vars),
            six.itervalues(self.latent_vars),
            grad_log_joint):
      event_shape = qz.get_event_shape()
      normal = Normal(mu=tf.zeros(event_shape),
                      sigma=learning_rate * tf.ones(event_shape))
      sample[z] = old_sample[z] + 0.5 * learning_rate * grad_log_p + \
          normal.sample()

    # Update Empirical random variables.
    assign_ops = []
    variables = {x.name: x for x in
                 tf.get_default_graph().get_collection(tf.GraphKeys.VARIABLES)}
    for z, qz in six.iteritems(self.latent_vars):
      variable = variables[qz.params.op.inputs[0].op.inputs[0].name]
      assign_ops.append(tf.scatter_update(variable, self.t, sample[z]))

    # Increment n_accept.
    assign_ops.append(self.n_accept.assign_add(1))
    return tf.group(*assign_ops)
Developer: blei-lab | Project: edward | Lines: 34 | Source: sgld.py


Example 12: _forward

    def _forward(self, obs_prob_seq):
        # initialize with state starting priors
        self.forward = tf.scatter_update(self.forward, 0, self.T0)

        # propagate belief
        for step in range(self.N):
            # previous state probability
            prev_prob = tf.reshape(self.forward[step, :], [1, -1])
            # transition prior
            prior_prob = tf.matmul(prev_prob, self.T)
            # forward belief propagation
            forward_score = tf.multiply(prior_prob, tf.cast(obs_prob_seq[step, :], tf.float64))
            # Normalize score into a probability
            forward_prob = tf.reshape(forward_score / tf.reduce_sum(forward_score), [-1])
            # Update forward matrix
            self.forward = tf.scatter_update(self.forward, step + 1, forward_prob)
Developer: MarvinBertin | Project: HiddenMarkovModel_TensorFlow | Lines: 16 | Source: forward_bakward.py


Example 13: _apply_dense

 def _apply_dense(self, grad, var):
     memory = self.get_slot(var, "memory")
     memsum = tf.reduce_mean(memory, [0])
     mem = tf.gather(memory, self.batch_ind)
     delta = grad - mem + memsum
     mem_op = tf.scatter_update(memory, self.batch_ind, grad)
     return tf.group(var.assign_sub(tf.mul(delta, self.learning_rate)), mem_op)
Developer: yk | Project: tfutils | Lines: 7 | Source: optimization.py


Example 14: test_state_grads

def test_state_grads(sess):
    v = tf.Variable([0., 0., 0.])
    x = tf.ones((3,))

    y0 = tf.assign(v, x)
    y1 = tf.assign_add(v, x)

    grad0 = tf.gradients(y0, [v, x])
    grad1 = tf.gradients(y1, [v, x])

    grad_vals = sess.run((grad0, grad1))

    assert np.allclose(grad_vals[0][0], 0)
    assert np.allclose(grad_vals[0][1], 1)
    assert np.allclose(grad_vals[1][0], 1)
    assert np.allclose(grad_vals[1][1], 1)

    v = tf.Variable([0., 0., 0.])
    x = tf.ones((1,))
    y0 = tf.scatter_update(v, [0], x)
    y1 = tf.scatter_add(v, [0], x)

    grad0 = tf.gradients(y0, [v._ref(), x])
    grad1 = tf.gradients(y1, [v._ref(), x])

    grad_vals = sess.run((grad0, grad1))

    assert np.allclose(grad_vals[0][0], [0, 1, 1])
    assert np.allclose(grad_vals[0][1], 1)
    assert np.allclose(grad_vals[1][0], 1)
    assert np.allclose(grad_vals[1][1], 1)
Developer: nengo | Project: nengo_deeplearning | Lines: 31 | Source: test_tensorflow_patch.py


Example 15: encode

    def encode(self, x=None):
        if x is None:
            x = CharLSTMEmbeddings.create_placeholder(self.name)
        self.x = x
        with tf.variable_scope(self.scope, reuse=tf.AUTO_REUSE):
            Wch = tf.get_variable(
                "Wch",
                initializer=tf.constant_initializer(self.weights, dtype=tf.float32, verify_shape=True),
                shape=[self.vsz, self.dsz],
                trainable=True
            )
            ech0 = tf.scatter_update(Wch, tf.constant(Offsets.PAD, dtype=tf.int32, shape=[1]), tf.zeros(shape=[1, self.dsz]))

            shape = tf.shape(x)
            B = shape[0]
            T = shape[1]
            W = shape[2]
            flat_chars = tf.reshape(x, [-1, W])
            word_lengths = tf.reduce_sum(tf.cast(tf.equal(flat_chars, Offsets.PAD), tf.int32), axis=1)
            with tf.control_dependencies([ech0]):
                embed_chars =  tf.nn.embedding_lookup(Wch, flat_chars)

            fwd_lstm = stacked_lstm(self.lstmsz // 2, self.pdrop, self.layers)
            bwd_lstm = stacked_lstm(self.lstmsz // 2, self.pdrop, self.layers)
            _, rnn_state = tf.nn.bidirectional_dynamic_rnn(fwd_lstm, bwd_lstm, embed_chars, sequence_length=word_lengths, dtype=tf.float32)

            result = tf.concat([rnn_state[0][-1].h, rnn_state[1][-1].h], axis=1)
            return tf.reshape(result, [B, T, self.lstmsz])
Developer: dpressel | Project: baseline | Lines: 28 | Source: embeddings.py


Example 16: _reset_non_empty

 def _reset_non_empty(self, indices):
   # pylint: disable=protected-access
   new_values = self._batch_env._reset_non_empty(indices)
   # pylint: enable=protected-access
   initial_frames = getattr(self._batch_env, "history_observations", None)
   if initial_frames is not None:
     # Using history buffer frames for initialization, if they are available.
     with tf.control_dependencies([new_values]):
       # Transpose to [batch, height, width, history, channels] and merge
       # history and channels into one dimension.
       initial_frames = tf.transpose(initial_frames, [0, 2, 3, 1, 4])
       initial_frames = tf.reshape(initial_frames,
                                   (len(self),) + self.observ_shape)
   else:
     inx = tf.concat(
         [
             tf.ones(tf.size(tf.shape(new_values)),
                     dtype=tf.int64)[:-1],
             [self.history]
         ],
         axis=0)
     initial_frames = tf.tile(new_values, inx)
   assign_op = tf.scatter_update(self._observ, indices, initial_frames)
   with tf.control_dependencies([assign_op]):
     return tf.gather(self.observ, indices)
Developer: qixiuai | Project: tensor2tensor | Lines: 25 | Source: tf_atari_wrappers.py


Example 17: add_val_to_col

def add_val_to_col(var, col, val):
    vector_with_zeros = tf.Variable(tf.zeros(var.get_shape()[1]),
                                    dtype=tf.float32)
    vector_with_zeros = tf.scatter_update(vector_with_zeros,[col],[val])
    vector_with_zeros = tf.reshape(vector_with_zeros,
                                   [1,var.get_shape().as_list()[1]])
    return var+vector_with_zeros
Developer: kundajelab | Project: deeplift | Lines: 7 | Source: helper_functions.py


Example 18: build_update

  def build_update(self):
    """Simulate Langevin dynamics using a discretized integrator. Its
    discretization error goes to zero as the learning rate decreases.

    #### Notes

    The updates assume each Empirical random variable is directly
    parameterized by `tf.Variable`s.
    """
    old_sample = {z: tf.gather(qz.params, tf.maximum(self.t - 1, 0))
                  for z, qz in six.iteritems(self.latent_vars)}

    # Simulate Langevin dynamics.
    learning_rate = self.step_size / tf.cast(self.t + 1, tf.float32)
    grad_log_joint = tf.gradients(self._log_joint(old_sample),
                                  list(six.itervalues(old_sample)))
    sample = {}
    for z, grad_log_p in zip(six.iterkeys(old_sample), grad_log_joint):
      qz = self.latent_vars[z]
      event_shape = qz.event_shape
      normal = Normal(loc=tf.zeros(event_shape),
                      scale=learning_rate * tf.ones(event_shape))
      sample[z] = old_sample[z] + \
          0.5 * learning_rate * tf.convert_to_tensor(grad_log_p) + \
          normal.sample()

    # Update Empirical random variables.
    assign_ops = []
    for z, qz in six.iteritems(self.latent_vars):
      variable = qz.get_variables()[0]
      assign_ops.append(tf.scatter_update(variable, self.t, sample[z]))

    # Increment n_accept.
    assign_ops.append(self.n_accept.assign_add(1))
    return tf.group(*assign_ops)
Developer: ekostem | Project: edward | Lines: 35 | Source: sgld.py


Example 19: deconv_pooling_n_filter

def deconv_pooling_n_filter(pool_s, pool_layer_scope, kheight=2, kwidth=2):
    with tf.variable_scope(pool_layer_scope, reuse=True) as scope:
        pool_shape = pool_s.get_shape().as_list()
        # if pool_shape[1] < 4 or pool_shape[2] < 4:
        #    pool_s2 = tf.nn.dropout(pool_s, 0.5)
        #    switches = tf.ones_like(pool_s2)
        #    return pool_s2
        # Recreate 1D switches for scatter update
        dim = 1

        for d in pool_shape:
            dim *= d

        [pool_s2, ind] = tf.nn.max_pool_with_argmax(
            pool_s, ksize=[1, kheight, kwidth, 1], strides=[1, kheight, kwidth, 1], padding="SAME"
        )

        _print_tensor_size(pool_s2)

        # ones_temp = tf.ones_like([(dim // kheight) // kwidth])
        ones_temp = tf.ones_like(ind, dtype=tf.float32)
        # temp_zeros =

        switches = tf.Variable(tf.zeros([dim]), name="switches")

        switches = tf.assign(switches, tf.zeros([dim]))

        # set switches
        switches_out2 = tf.scatter_update(switches, ind, ones_temp)

        # reshape back to batches
        switches_out2 = tf.reshape(switches_out2, pool_shape)

    return pool_s2, switches_out2
Developer: ZijingMao | Project: ROICNN | Lines: 34 | Source: rsvp_quick_deconv.py


Example 20: body

def body(sequence_len, 
        step, 
        feature_pl,
        path_pl,
        flattened_idx_offset,
        contextual_features):
  
  begin = tf.get_variable("begin1",[3],dtype=tf.int32,initializer=tf.constant_initializer(0))
  begin = tf.scatter_update(begin,1,step,use_locking=None)

  step_feature = tf.squeeze(tf.slice(feature_pl,begin,[-1,1,-1]))

  input_idx = tf.slice(path_pl, begin, [-1,1,1])
  input_idx = tf.reshape(input_idx,[-1])
  input_idx_flattened = flattened_idx_offset + input_idx
  max_seq_len = FLAGS.max_seq_len

  begin2 = tf.get_variable("begin2",[3],dtype=tf.int32,initializer=tf.constant_initializer(0))
  begin2 = tf.scatter_update(begin2,1,step,use_locking=None)
  begin2 = tf.scatter_update(begin2,2,1,use_locking=None)

  tf.get_variable_scope().reuse_variables()
    
  contextual_features = tf.get_variable("contextual_features")
                                          # [max_seq_len * max_seq_len, encoding_nn_output_size],
                                          # dtype=tf.float32)

  step_contextual_features = tf.gather(contextual_features,input_idx_flattened)  # use flattened indices1
  
  inputs = tf.concat(1,[step_contextual_features,step_feature])
  updated_contextual_vectors = single_layer_neural_network1(inputs)

  updated_contextual_vectors = tf.tanh(updated_contextual_vectors)
  output_idx = tf.reshape(tf.slice(path_pl, begin2, [-1,1, 1]),[-1])
  output_idx_flattened =  flattened_idx_offset + output_idx
  
  contextual_features =  tf.scatter_add(contextual_features,
                                        output_idx_flattened,
                                        updated_contextual_vectors, use_locking=None)

  with tf.control_dependencies([contextual_features]):
    return (sequence_len, 
            step+1, 
            feature_pl,
            path_pl,
            flattened_idx_offset,
            contextual_features)
Developer: jskDr | Project: Deep-Learning-in-Chemoinformatics | Lines: 47 | Source: ugrnn.py



Note: The tensorflow.scatter_update examples in this article were compiled by 纯净天空 from source-code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their authors, and copyright remains with the original authors. For distribution and use, please refer to each project's License; do not reproduce without permission.

