Python rnn.rnn Function Code Examples


This article collects typical usage examples of the Python function tensorflow.python.ops.rnn.rnn. If you are wondering how exactly to use the Python rnn function, or are looking for working examples of it, the hand-picked code samples below should help.



Twenty code examples of the rnn function are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
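
Before the examples, here is a minimal sketch of the call pattern they all share (TensorFlow 0.x API, as used throughout this page; the sizes and names below are made up for illustration). rnn.rnn takes an RNNCell and a Python list of per-time-step tensors of shape [batch_size x input_size], and returns a tuple (outputs, final_state):

import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

# Hypothetical sizes, for illustration only.
batch_size, n_steps, n_input, n_hidden = 32, 10, 8, 64

# A (batch_size, n_steps, n_input) placeholder, converted into a length-n_steps
# list of (batch_size, n_input) tensors, which is the input format rnn.rnn expects.
x = tf.placeholder(tf.float32, [batch_size, n_steps, n_input])
inputs = [tf.squeeze(t, [1]) for t in tf.split(1, n_steps, x)]

cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, state = rnn.rnn(cell, inputs, dtype=tf.float32)  # per-step outputs and the final state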

Example 1: create_decoder

  def create_decoder(self):
    start_time = time.time()

    with vs.variable_scope("embedding" or scope):
      tokens = self.tokens[:-1]
      embeddings = []
      with tf.device("/cpu:0"):
        sqrt3 = np.sqrt(3)
        embedding = vs.get_variable(
            "embedding", [self.vocab_size, self.embedding_size],
            initializer=tf.random_uniform_initializer(-sqrt3, sqrt3))

        for token in tokens:
          # Create the embedding layer.
          emb = embedding_ops.embedding_lookup(embedding, token)
          emb.set_shape([self.batch_size, self.embedding_size])
          embeddings.append(emb)

    cell = rnn_cell.GRUCell(self.decoder_cell_size)
    cell = rnn_cell.OutputProjectionWrapper(cell, self.vocab_size)
    self.decoder_states = rnn.rnn(
        cell, embeddings, dtype=tf.float32, sequence_length=self.tokens_len)[0]
    self.logits = self.decoder_states

    print('create_decoder graph time %f' % (time.time() - start_time))
Contributor: suriyadeepan | Project: tensorflow | Lines: 25 | Source file: lm.py


Example 2: tied_rnn_seq2seq

def tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell,
                     loop_function=None, dtype=dtypes.float32, scope=None):
  """RNN sequence-to-sequence model with tied encoder and decoder parameters.

  This model first runs an RNN to encode encoder_inputs into a state vector, and
  then runs decoder, initialized with the last encoder state, on decoder_inputs.
  Encoder and decoder use the same RNN cell and share parameters.

  Args:
    encoder_inputs: A list of 2D Tensors [batch_size x cell.input_size].
    decoder_inputs: A list of 2D Tensors [batch_size x cell.input_size].
    cell: rnn_cell.RNNCell defining the cell function and size.
    loop_function: If not None, this function will be applied to i-th output
      in order to generate i+1-th input, and decoder_inputs will be ignored,
      except for the first element ("GO" symbol), see rnn_decoder for details.
    dtype: The dtype of the initial state of the rnn cell (default: tf.float32).
    scope: VariableScope for the created subgraph; default: "tied_rnn_seq2seq".

  Returns:
    A tuple of the form (outputs, state), where:
      outputs: A list of the same length as decoder_inputs of 2D Tensors with
        shape [batch_size x cell.output_size] containing the generated outputs.
      state: The state of each decoder cell in each time-step. This is a list
        with length len(decoder_inputs) -- one item for each time-step.
        It is a 2D Tensor of shape [batch_size x cell.state_size].
  """
  with variable_scope.variable_scope("combined_tied_rnn_seq2seq"):
    scope = scope or "tied_rnn_seq2seq"
    _, enc_state = rnn.rnn(
        cell, encoder_inputs, dtype=dtype, scope=scope)
    variable_scope.get_variable_scope().reuse_variables()
    return rnn_decoder(decoder_inputs, enc_state, cell,
                       loop_function=loop_function, scope=scope)
Contributor: maxkarlovitz | Project: tensorflow | Lines: 33 | Source file: seq2seq.py
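
A hypothetical call sketch for the function above (cell size, batch size, and sequence lengths are made up; in this tied variant the encoder and decoder reuse the same variables):

cell = rnn_cell.GRUCell(128)
# Encoder and decoder inputs: lists of [batch_size x input_size] tensors.
encoder_inputs = [tf.placeholder(tf.float32, [32, 128]) for _ in range(5)]
decoder_inputs = [tf.placeholder(tf.float32, [32, 128]) for _ in range(6)]
outputs, state = tied_rnn_seq2seq(encoder_inputs, decoder_inputs, cell)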


Example 3: _rnn

    def _rnn(self, name, enc_inputs):
        encoder_cell = rnn_cell.EmbeddingWrapper(self.cell, self.dict_size)
        _, encoder_states = rnn.rnn(encoder_cell, enc_inputs, dtype=tf.float32)
        w = tf.get_variable(name + '-w', (self.cell.state_size, self.num_outputs),
                            initializer=tf.random_normal_initializer(stddev=0.1))
        b = tf.get_variable(name + 'b', (self.num_outputs,), initializer=tf.constant_initializer())
        return tf.matmul(encoder_states[-1], w) + b
Contributor: pdsujnow | Project: tgen | Lines: 7 | Source file: tfclassif.py


Example 4: basic_rnn_seq2seq

def basic_rnn_seq2seq(
    encoder_inputs, decoder_inputs, cell, dtype=dtypes.float32, scope=None):
  """Basic RNN sequence-to-sequence model.

  This model first runs an RNN to encode encoder_inputs into a state vector,
  then runs decoder, initialized with the last encoder state, on decoder_inputs.
  Encoder and decoder use the same RNN cell type, but don't share parameters.

  Args:
    encoder_inputs: A list of 2D Tensors [batch_size x cell.input_size].
    decoder_inputs: A list of 2D Tensors [batch_size x cell.input_size].
    cell: rnn_cell.RNNCell defining the cell function and size.
    dtype: The dtype of the initial state of the RNN cell (default: tf.float32).
    scope: VariableScope for the created subgraph; default: "basic_rnn_seq2seq".

  Returns:
    A tuple of the form (outputs, state), where:
      outputs: A list of the same length as decoder_inputs of 2D Tensors with
        shape [batch_size x cell.output_size] containing the generated outputs.
      state: The state of each decoder cell in the final time-step.
        It is a 2D Tensor of shape [batch_size x cell.state_size].
  """
  with variable_scope.variable_scope(scope or "basic_rnn_seq2seq"):
    _, enc_state = rnn.rnn(cell, encoder_inputs, dtype=dtype)
    return rnn_decoder(decoder_inputs, enc_state, cell)
Contributor: maxkarlovitz | Project: tensorflow | Lines: 25 | Source file: seq2seq.py
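
An end-to-end sketch of driving basic_rnn_seq2seq with a feed_dict; every name and size here is hypothetical, and the random data only exercises the graph:

import numpy as np

cell = rnn_cell.BasicLSTMCell(32)
enc = [tf.placeholder(tf.float32, [8, 32]) for _ in range(4)]
dec = [tf.placeholder(tf.float32, [8, 32]) for _ in range(5)]
outputs, state = basic_rnn_seq2seq(enc, dec, cell)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Feed random data for a batch of 8 just to run one forward pass.
    feed = {p: np.random.rand(8, 32).astype(np.float32) for p in enc + dec}
    out_values = sess.run(outputs, feed_dict=feed)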


Example 5: RNN

def RNN(x, weights):

    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, n_steps, n_input)
    # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)
    # pdb.set_trace()
    # Permuting batch_size and n_steps
    x = tf.transpose(x, [1, 0, 2])
    # Reshaping to (n_steps*batch_size, n_input)
    x = tf.reshape(x, [-1, n_input])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    x = tf.split(0, n_steps, x)

    # Define an LSTM cell with TensorFlow
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)

    # Stack two layers into a multi-layer LSTM cell
    lstm_cell = rnn_cell.MultiRNNCell([lstm_cell] * 2)

    #pdb.set_trace()
    # Get lstm cell output
    # https://github.com/tensorflow/tensorflow/blob/r0.8/tensorflow/python/ops/rnn.py
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
    # The sequence_length argument is not needed here and can be omitted for now;
    # the only cost is that computation may be slightly slower. sequence_length is a
    # 1-D tensor with one true length per batch element (see the sketch after this example).
    # outputs, states = rnn.rnn(lstm_cell, x, sequence_length=w, dtype=tf.float32)

    # Linear activation, using rnn inner loop every output
    pred_list = []
    for output_step in outputs:
        reluinput = tf.add(tf.matmul(x_profile, weights['profile_out']), output_step)
        hidden_layer_1 = tf.nn.relu(tf.matmul(reluinput, weights['reluhidden_in']) + weights['reluhidden_in_biases'])   # Question: how is the + (bias add) executed here?
        pred_list.append(tf.matmul(hidden_layer_1, weights['reluhidden_out']))
    # return tf.matmul(outputs[-1], weights['out']), outputs, states
    return pred_list
Contributor: shawnlxh | Project: Blood_Pressure_Prediction | Lines: 34 | Source file: my_lstm.py
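
As noted in the comment inside the example, rnn.rnn can also take a sequence_length argument so that state updates stop past each sequence's true length. A hedged sketch of that call, reusing the example's lstm_cell and x (the placeholder name seq_len is hypothetical):

# Per-example sequence lengths, shape [batch_size].
seq_len = tf.placeholder(tf.int32, [batch_size])
outputs, states = rnn.rnn(lstm_cell, x, sequence_length=seq_len, dtype=tf.float32)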


Example 6: basic_seq2seq

def basic_seq2seq(encoder_inputs, decoder_inputs, cell, input_size, hidden_size, output_size, dtype=dtypes.float32, scope=None, feed_previous=False):

	with variable_scope.variable_scope(scope or "basic_rnn_seq2seq"):


		cell = tf.nn.rnn_cell.InputProjectionWrapper(cell, hidden_size, input_size)
		cell = tf.nn.rnn_cell.OutputProjectionWrapper(cell, output_size)

		_, enc_state = rnn.rnn(cell, encoder_inputs, dtype=dtype)


		if feed_previous:
			def simple_loop_function(prev, _):
				_next = tf.greater_equal(prev, 0.5)
				_next = tf.to_float(_next)
				return _next

			# softmax_w = tf.get_variable("softmax_w", [self.hidden_size, self.output_size])
			# softmax_b = tf.get_variable("softmax_b", [self.output_size])
			# def simple_softmax_function(prev, _):
				
			loop_function = simple_loop_function
		else:
			loop_function = None
		return tf.nn.seq2seq.rnn_decoder(decoder_inputs, enc_state, cell, loop_function=loop_function)
Contributor: kahitomi | Project: autobid | Lines: 25 | Source file: tf_seq2seq.py


Example 7: __call__

  def __call__(self,
               inputs,
               initial_state=None,
               dtype=None,
               sequence_length=None,
               scope=None):
    is_list = isinstance(inputs, list)
    if self._use_dynamic_rnn:
      if is_list:
        inputs = array_ops.pack(inputs)
      outputs, state = rnn.dynamic_rnn(
          self._cell,
          inputs,
          sequence_length=sequence_length,
          initial_state=initial_state,
          dtype=dtype,
          time_major=True,
          scope=scope)
      if is_list:
        # Convert outputs back to list
        outputs = array_ops.unpack(outputs)
    else:  # non-dynamic rnn
      if not is_list:
        inputs = array_ops.unpack(inputs)
      outputs, state = rnn.rnn(self._cell,
                               inputs,
                               initial_state=initial_state,
                               dtype=dtype,
                               sequence_length=sequence_length,
                               scope=scope)
      if not is_list:
        # Convert outputs back to tensor
        outputs = array_ops.pack(outputs)

    return outputs, state
Contributor: MostafaGazar | Project: tensorflow | Lines: 35 | Source file: fused_rnn_cell.py
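
The pack/unpack calls above convert between the list-of-steps format used by rnn.rnn and the time-major tensor expected by dynamic_rnn. A small standalone sketch (shapes made up; tf.pack and tf.unpack are the public aliases of array_ops.pack/unpack in this TensorFlow generation):

steps = [tf.placeholder(tf.float32, [4, 3]) for _ in range(7)]  # 7 steps of [batch=4, depth=3]
packed = tf.pack(steps)       # time-major tensor of shape [7, 4, 3]
unpacked = tf.unpack(packed)  # back to a list of 7 tensors, each [4, 3]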


Example 8: _build_graph

    def _build_graph(self, input_vars, is_training):
        input, nextinput = input_vars

        cell = rnn_cell.BasicLSTMCell(num_units=param.rnn_size)
        cell = rnn_cell.MultiRNNCell([cell] * param.num_rnn_layer)

        self.initial = initial = cell.zero_state(tf.shape(input)[0], tf.float32)

        embeddingW = tf.get_variable('embedding', [param.vocab_size, param.rnn_size])
        input_feature = tf.nn.embedding_lookup(embeddingW, input) # B x seqlen x rnnsize

        input_list = tf.split(1, param.seq_len, input_feature)    #seqlen x (Bx1xrnnsize)
        input_list = [tf.squeeze(x, [1]) for x in input_list]

        # seqlen is 1 in inference. don't need loop_function
        outputs, last_state = rnn.rnn(cell, input_list, initial, scope='rnnlm')
        self.last_state = tf.identity(last_state, 'last_state')
        # seqlen x (Bxrnnsize)
        output = tf.reshape(tf.concat(1, outputs), [-1, param.rnn_size])  # (seqlenxB) x rnnsize
        logits = FullyConnected('fc', output, param.vocab_size, nl=tf.identity)
        self.prob = tf.nn.softmax(logits / param.softmax_temprature)

        xent_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
                logits, symbolic_functions.flatten(nextinput))
        self.cost = tf.reduce_mean(xent_loss, name='cost')
        summary.add_param_summary([('.*/W', ['histogram'])])   # monitor histogram of all W
Contributor: Jothecat | Project: tensorpack | Lines: 26 | Source file: char-rnn.py


Example 9: RNN

    def RNN(x, weights, biases, type):

        # Prepare data shape to match `rnn` function requirements
        # Current data input shape: (batch_size, n_steps, n_input)
        # Required shape: 'n_steps' tensors list of shape (batch_size, n_input)

        # Permuting batch_size and n_steps
        x = tf.transpose(x, [1, 0, 2])
        # Reshaping to (n_steps*batch_size, n_input)
        x = tf.reshape(x, [-1, n_input])
        # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
        x = tf.split(0, n_steps, x)
        # Define a lstm cell with tensorflow
        cell_class_map = {
             "LSTM": rnn_cell.BasicLSTMCell(n_hidden),
             "GRU": rnn_cell.GRUCell(n_hidden),
             "BasicRNN": rnn_cell.BasicRNNCell(n_hidden),
             "LNGRU": LNGRUCell(n_hidden),
             "LNLSTM": LNBasicLSTMCell(n_hidden)}

        lstm_cell = cell_class_map.get(type)
        cell = rnn_cell.MultiRNNCell([lstm_cell] * FLAGS.layers)
        print "Using %s model" % type
        # Get lstm cell output
        outputs, states = rnn.rnn(cell, x, dtype=tf.float32)

        # Linear activation, using rnn inner loop last output
        return tf.matmul(outputs[-1], weights['out']) + biases['out']
Contributor: BenJamesbabala | Project: tf-layer-norm | Lines: 28 | Source file: mnist.py


Example 10: embedding_encoder

def embedding_encoder(encoder_inputs,
                      cell,
                      embedding,
                      num_symbols,
                      embedding_size,
                      bidirectional=False,
                      dtype=None,
                      weight_initializer=None,
                      scope=None):

  with variable_scope.variable_scope(
      scope or "embedding_encoder", dtype=dtype) as scope:
    dtype = scope.dtype
    # Encoder.
    if not embedding:
      embedding = variable_scope.get_variable("embedding", [num_symbols, embedding_size],
              initializer=weight_initializer())
    emb_inp = [embedding_ops.embedding_lookup(embedding, i) for i in encoder_inputs]
    if bidirectional:
      _, output_state_fw, output_state_bw = rnn.bidirectional_rnn(cell, cell, emb_inp,
              dtype=dtype)
      encoder_state = tf.concat(1, [output_state_fw, output_state_bw])
    else:
      _, encoder_state = rnn.rnn(
        cell, emb_inp, dtype=dtype)

    return encoder_state
Contributor: noble6emc2 | Project: Question_Answering | Lines: 27 | Source file: seq2seq.py
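
A hypothetical call sketch for embedding_encoder (vocabulary size, sequence length, batch size, and the initializer are all made up). With bidirectional=True the forward and backward final states are concatenated, so the returned state is twice cell.state_size wide:

cell = rnn_cell.GRUCell(128)
enc_inputs = [tf.placeholder(tf.int32, [32]) for _ in range(15)]  # one token-id tensor per step
enc_state = embedding_encoder(
    enc_inputs, cell, embedding=None, num_symbols=10000, embedding_size=64,
    bidirectional=True,
    weight_initializer=lambda: tf.random_uniform_initializer(-0.1, 0.1))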


Example 11: _tf_enc_embedding_attention_seq2seq

    def _tf_enc_embedding_attention_seq2seq(self, encoder_inputs, cell,
                                    num_encoder_symbols,
                                    embedding_size,
                                    num_heads=1,
                                    dtype=dtypes.float32,
                                    scope=None,
                                    encoder="reverse",
                                    sequence_length=None,
                                    bucket_length=None,
                                    init_backward=False,
                                    bow_emb_size=None,
                                    single_src_embedding=False):
        """Embedding sequence-to-sequence model with attention.
        """
        with tf.variable_scope(scope or "embedding_attention_seq2seq", reuse=True):    
            # Encoder.
            if encoder == "bidirectional":
              encoder_cell_fw = rnn_cell.EmbeddingWrapper(
                cell.get_fw_cell(), embedding_classes=num_encoder_symbols,
                embedding_size=embedding_size)
              embed_scope = None
              if single_src_embedding:
                logging.info("Reuse forward src embedding for backward encoder")
                with variable_scope.variable_scope("BiRNN/FW/EmbeddingWrapper") as es:
                  embed_scope = es

              encoder_cell_bw = rnn_cell.EmbeddingWrapper(
                cell.get_bw_cell(), embedding_classes=num_encoder_symbols,
                embedding_size=embedding_size, embed_scope=embed_scope)
              encoder_outputs, encoder_state, encoder_state_bw = rnn.bidirectional_rnn(encoder_cell_fw, encoder_cell_bw, 
                                 encoder_inputs, dtype=dtype, 
                                 sequence_length=sequence_length,
                                 bucket_length=bucket_length)
              logging.debug("Bidirectional state size=%d" % cell.state_size) # this shows double the size for lstms
            elif encoder == "reverse": 
              encoder_cell = rnn_cell.EmbeddingWrapper(
                cell, embedding_classes=num_encoder_symbols,
                embedding_size=embedding_size)
              encoder_outputs, encoder_state = rnn.rnn(
                encoder_cell, encoder_inputs, dtype=dtype, sequence_length=sequence_length, bucket_length=bucket_length, reverse=True)
              logging.debug("Unidirectional state size=%d" % cell.state_size)
            elif encoder == "bow":
              encoder_outputs, encoder_state = cell.embed(rnn_cell.Embedder, num_encoder_symbols,
                                                  bow_emb_size, encoder_inputs, dtype=dtype)               
        
            # First calculate a concatenation of encoder outputs to put attention on.
            if encoder == "bow":
              top_states = [array_ops.reshape(e, [-1, 1, bow_emb_size])
                  for e in encoder_outputs]
            else:
              top_states = [array_ops.reshape(e, [-1, 1, cell.output_size])
                          for e in encoder_outputs]
            attention_states = array_ops.concat(1, top_states)

            initial_state = encoder_state
            if encoder == "bidirectional" and init_backward:
              initial_state = encoder_state_bw

            return self._tf_enc_embedding_attention_decoder(
                attention_states, initial_state, cell, num_heads=num_heads)     
Contributor: ehasler | Project: tensorflow | Lines: 60 | Source file: tf_seq2seq.py


Example 12: __init__

  def __init__(self, vocabularySize, config_param):
    self.vocabularySize = vocabularySize
    self.config = config_param

    self._inputX = tf.placeholder(tf.int32, [self.config.batch_size, self.config.sequence_size], "InputsX")
    self._inputTargetsY = tf.placeholder(tf.int32, [self.config.batch_size, self.config.sequence_size], "InputTargetsY")


    #Converting Input in an Embedded form
    with tf.device("/cpu:0"): #Tells Tensorflow what GPU to use specifically
      embedding = tf.get_variable("embedding", [self.vocabularySize, self.config.embeddingSize])
      embeddingLookedUp = tf.nn.embedding_lookup(embedding, self._inputX)
      inputs = tf.split(1, self.config.sequence_size, embeddingLookedUp)
      inputTensorsAsList = [tf.squeeze(input_, [1]) for input_ in inputs]


    #Define Tensor RNN
    singleRNNCell = rnn_cell.BasicRNNCell(self.config.hidden_size)
    self.multilayerRNN =  rnn_cell.MultiRNNCell([singleRNNCell] * self.config.num_layers)
    self._initial_state = self.multilayerRNN.zero_state(self.config.batch_size, tf.float32)

    #Defining Logits
    hidden_layer_output, last_state = rnn.rnn(self.multilayerRNN, inputTensorsAsList, initial_state=self._initial_state)
    hidden_layer_output = tf.reshape(tf.concat(1, hidden_layer_output), [-1, self.config.hidden_size])
    self._logits = tf.nn.xw_plus_b(hidden_layer_output, tf.get_variable("softmax_w", [self.config.hidden_size, self.vocabularySize]), tf.get_variable("softmax_b", [self.vocabularySize]))
    self._predictionSoftmax = tf.nn.softmax(self._logits)

    #Define the loss
    loss = seq2seq.sequence_loss_by_example([self._logits], [tf.reshape(self._inputTargetsY, [-1])], [tf.ones([self.config.batch_size * self.config.sequence_size])], self.vocabularySize)
    self._cost = tf.div(tf.reduce_sum(loss), self.config.batch_size)

    self._final_state = last_state
Contributor: killianlevacher | Project: TrumpBSQuoteRNNGenerator | Lines: 32 | Source file: RNN_Model.py


Example 13: apply_lm

def apply_lm(cell, inputs, sequence_length=None, dropout=None, dtype=tf.float32):
    """

    Parameters
    ----------
    cell
    inputs
    sequence_length
    dropout
    dtype

    Returns
    -------

    """
    if dropout is not None:

        for c in cell._cells:
            c.input_keep_prob = 1.0 - dropout

    cell_outputs, cell_state = rnn.rnn(cell=cell,
                                       inputs=inputs,
                                       sequence_length=sequence_length,
                                       dtype=dtype)

    return cell_outputs, cell_state
Contributor: chagge | Project: attentive_lm | Lines: 26 | Source file: lm_ops.py


Example 14: fit

    def fit(self, data_function):
        with tf.Graph().as_default(), tf.Session() as sess:
            n, s, p = data_function.train.X.shape
            X_pl = tf.placeholder(tf.float32, [self.batch_size, s, p])
            Y_pl = tf.placeholder(tf.float32, [self.batch_size, p])
            lstm_cell = rnn_cell.BasicLSTMCell(self.hidden_size)
            cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * self.num_layers)
            outputs, _ = rnn.rnn(cell, [X_pl[:,i,:] for i in xrange(s)],
                dtype = tf.float32)
            
            softmax_w = tf.get_variable("softmax_w", [self.hidden_size, p])
            softmax_b = tf.get_variable("softmax_b", [p])
            logits = tf.matmul(outputs[-1], softmax_w) + softmax_b
            loss = loss_dict['ce'](logits, Y_pl)
            tvars = tf.trainable_variables()
            print([i.get_shape() for i in tvars])
            grads, _ = tf.clip_by_global_norm(tf.gradients(loss,
                tvars), self.max_grad_norm)
            optimizer = tf.train.AdamOptimizer()
            train_op  = optimizer.apply_gradients(zip(grads, tvars))

            initializer = tf.random_uniform_initializer(-self.init_scale,
                    self.init_scale)
            tf.initialize_all_variables().run()
            for i in xrange(self.n_step):
                batch_xs, batch_ys = data_function.train.next_batch(
                                        self.batch_size)
                feed_dict = {X_pl: batch_xs, Y_pl: batch_ys}
                _, loss_value = sess.run([train_op, loss], 
                        feed_dict = feed_dict)
                if i % 100 == 0:
                    PrintMessage(data_function.train.epochs_completed, 
                            loss_value , 0, 0)
Contributor: hduongtrong | Project: ScikitFlow | Lines: 33 | Source file: rnn.py


Example 15: embedding_rnn_seq2seq

def embedding_rnn_seq2seq(encoder_inputs, decoder_inputs, cell,
                          num_encoder_symbols, num_decoder_symbols,
                          embedding_size, output_projection=None,
                          feed_previous=False, dtype=dtypes.float32,
                          scope=None, beam_search=True, beam_size=10):
  """Embedding RNN sequence-to-sequence model.

  This model first embeds encoder_inputs by a newly created embedding (of shape
  [num_encoder_symbols x input_size]). Then it runs an RNN to encode
  embedded encoder_inputs into a state vector. Next, it embeds decoder_inputs
  by another newly created embedding (of shape [num_decoder_symbols x
  input_size]). Then it runs RNN decoder, initialized with the last
  encoder state, on embedded decoder_inputs.

  Args:
    encoder_inputs: A list of 1D int32 Tensors of shape [batch_size].
    decoder_inputs: A list of 1D int32 Tensors of shape [batch_size].
    cell: rnn_cell.RNNCell defining the cell function and size.
    num_encoder_symbols: Integer; number of symbols on the encoder side.
    num_decoder_symbols: Integer; number of symbols on the decoder side.
    embedding_size: Integer, the length of the embedding vector for each symbol.
    output_projection: None or a pair (W, B) of output projection weights and
      biases; W has shape [output_size x num_decoder_symbols] and B has
      shape [num_decoder_symbols]; if provided and feed_previous=True, each
      fed previous output will first be multiplied by W and added B.
    feed_previous: Boolean or scalar Boolean Tensor; if True, only the first
      of decoder_inputs will be used (the "GO" symbol), and all other decoder
      inputs will be taken from previous outputs (as in embedding_rnn_decoder).
      If False, decoder_inputs are used as given (the standard decoder case).
    dtype: The dtype of the initial state for both the encoder and decoder
      rnn cells (default: tf.float32).
    scope: VariableScope for the created subgraph; defaults to
      "embedding_rnn_seq2seq"

  Returns:
    A tuple of the form (outputs, state), where:
      outputs: A list of the same length as decoder_inputs of 2D Tensors with
        shape [batch_size x num_decoder_symbols] containing the generated
        outputs.
      state: The state of each decoder cell in each time-step. This is a list
        with length len(decoder_inputs) -- one item for each time-step.
        It is a 2D Tensor of shape [batch_size x cell.state_size].
  """
  with variable_scope.variable_scope(scope or "embedding_rnn_seq2seq"):
    # Encoder.
    encoder_cell = rnn_cell.EmbeddingWrapper(
        cell, embedding_classes=num_encoder_symbols,
        embedding_size=embedding_size)
    _, encoder_state = rnn.rnn(encoder_cell, encoder_inputs, dtype=dtype)

    # Decoder.
    if output_projection is None:
      cell = rnn_cell.OutputProjectionWrapper(cell, num_decoder_symbols)


    return embedding_rnn_decoder(
          decoder_inputs, encoder_state, cell, num_decoder_symbols,
          embedding_size, output_projection=output_projection,
          feed_previous=feed_previous, beam_search=beam_search, beam_size=beam_size)
Contributor: Vunb | Project: Neural_Conversation_Models | Lines: 59 | Source file: my_seq2seq.py
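
A hypothetical call sketch for the function above. Encoder and decoder inputs are lists of int32 token-id tensors of shape [batch_size]; the vocabulary sizes and lengths are made up, and the beam_search/beam_size flags are specific to this project's modified seq2seq, so this is a sketch only:

cell = rnn_cell.GRUCell(256)
encoder_inputs = [tf.placeholder(tf.int32, [32]) for _ in range(10)]
decoder_inputs = [tf.placeholder(tf.int32, [32]) for _ in range(11)]
outputs, state = embedding_rnn_seq2seq(
    encoder_inputs, decoder_inputs, cell,
    num_encoder_symbols=20000, num_decoder_symbols=20000,
    embedding_size=128, feed_previous=False)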


Example 16: RNN

def RNN(x, weights, biases):

    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input]) # reshape to (n_steps*batch_size, n_input)
    x = tf.split(0, n_steps, x)

    # Define an LSTM cell with TensorFlow
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    return tf.matmul(outputs[-1], weights['out']) + biases['out']
Contributor: saicoco | Project: _practice | Lines: 11 | Source file: tf_rnn.py


Example 17: single_lstm

def single_lstm(name,
                incoming,
                n_units,
                use_peepholes=True,
                return_seq=False,
                return_state=False):
    with tf.name_scope(name) as scope:
        cell = tf.nn.rnn_cell.LSTMCell(n_units, use_peepholes=use_peepholes)
        output, _cell_state = rnn.rnn(cell, incoming, dtype=tf.float32)
        out = output if return_seq else output[-1]
        return (out, _cell_state) if return_state else out
Contributor: Biocodings | Project: Paddle | Lines: 11 | Source file: rnn.py


Example 18: __inner_predict

    def __inner_predict(self, features):
        features = tf.transpose(features, [1, 0, 2])
        features = tf.reshape(features, [-1, self.n_input])
        features = tf.split(0, self.n_steps, features)

        cell = rnn_cell.BasicLSTMCell(self.n_hidden, forget_bias=1.0)
        multi_cell = rnn_cell.MultiRNNCell([cell] * self.n_layers)

        outputs, states = rnn.rnn(multi_cell, features, dtype=tf.float32)

        return tf.matmul(outputs[-1], self.weights['out']) + self.biases['out']
Contributor: mishadev | Project: stuff | Lines: 11 | Source file: nns.py


Example 19: RNN

def RNN(x, weights, biases):

    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(0, n_steps, x)

    cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
    multi_cell = rnn_cell.MultiRNNCell([cell] * n_layers)

    outputs, states = rnn.rnn(multi_cell, x, dtype=tf.float32)

    return tf.matmul(outputs[-1], weights['out']) + biases['out']
Contributor: mishadev | Project: stuff | Lines: 12 | Source file: recurrent_network.py


Example 20: __classOptoRNN__

    def __classOptoRNN__(self,_Z1):

        ''' Recurrent neural network with a classifier (logistic) output layer
            that tries to predict whether there was an optogenetic stimulation
            of a neuron j. The input is the time series of neuron(s) i starting
            at time t, and the output is a binary value whose label is whether
            x was stimulated or not at t-z.
        '''

                #Defining weights
        self.weights = { 
                         'classi_HO_W' : varInit([self.nhidclassi,1],
                                                  'classi_HO_W', std = 0.01 )
                        }

        self.biases  = { 'classi_HO_B': varInit([1], 'classi_HO_B',
                                                std = 1) } 

        self.masks   = { }


        #classiCell = rnn_cell.BasicLSTMCell(self.nhidclassi)
        classiCell = rnn_cell.BasicRNNCell(self.nhidclassi, activation = self.actfct)
        #classiCell = rnn_cell.GRUCell(self.nhidclassi, activation = self.actfct)

        #INITIAL STATE DOES NOT WORK
        #initClassi = tf.zeros([self.batchSize,classiCell.state_size], dtype='float32') 

        if self.multiLayer:
            #Stacking classifier cells
            stackCell = rnn_cell.MultiRNNCell([classiCell] * self.multiLayer)
            S = stackCell.zero_state(self._batchSize, tf.float32)
            with tf.variable_scope("") as scope:
                for i in range(self.seqLen):
                    if i == 1:
                        scope.reuse_variables()
                    O,S = stackCell(_Z1,S)

            predCell = tf.matmul(O, self.weights['classi_HO_W'])  + \
                       self.biases['classi_HO_B']

        else:
            #classi
            O, S = rnn.rnn(classiCell, _Z1, dtype = tf.float32) #Output and state

            #classi to output layer
            predCell = tf.matmul(O[-1], self.weights['classi_HO_W'])  + \
                       self.biases['classi_HO_B']

        return predCell
Contributor: TuragaLab | Project: activeConn | Lines: 52 | Source file: graphs.py



Note: The tensorflow.python.ops.rnn.rnn examples in this article were compiled by 纯净天空 from source-code and documentation platforms such as GitHub/MSDocs. The snippets are selected from open-source projects contributed by their respective developers, and copyright remains with the original authors. Please consult each project's license before using or redistributing the code, and do not republish without permission.

