Python tf_logging.log_first_n Function Code Examples


This article collects typical usage examples of the Python function tensorflow.python.platform.tf_logging.log_first_n. If you are wondering what log_first_n does, how to call it, or what real-world uses look like, the curated code examples below may help.



The following 20 code examples of the log_first_n function are shown, ordered by popularity by default.
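Before the examples, here is a minimal sketch of the call pattern they all share: log_first_n(level, msg, n, *args) logs msg % args at the given level at most n times, keyed on the calling file and line, so a message emitted from a hot loop does not flood the log. The loop and message text below are illustrative only and are not taken from any of the examples that follow.

from tensorflow.python.platform import tf_logging as logging

for step in range(100):
  # Only the first 3 iterations emit a log line; later calls from this
  # call site are silently suppressed by log_first_n.
  logging.log_first_n(logging.WARN,
                      "Step %d: input pipeline is falling behind.", 3, step)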

Example 1: after_run

  def after_run(self, run_context, run_values):
    _ = run_context

    stale_global_step = run_values.results
    if self._timer.should_trigger_for_step(stale_global_step +
                                           self._steps_per_run):
      # get the real value after train op.
      global_step = run_context.session.run(self._global_step_tensor)
      if self._timer.should_trigger_for_step(global_step):
        elapsed_time, elapsed_steps = self._timer.update_last_triggered_step(
            global_step)
        if elapsed_time is not None:
          self._log_and_record(elapsed_steps, elapsed_time, global_step)

    # Check whether the global step has been increased. Here, we do not use the
    # timer.last_triggered_step as the timer might record a different global
    # step value such that the comparison could be unreliable. For simplicity,
    # we just compare the stale_global_step with previously recorded version.
    if stale_global_step == self._last_global_step:
      # Here, we give a warning in the first 5 times if we have observed that
      # the global step has not been increased. For some Optimizers, the global
      # step is not increased each time by design. For example,
      # SyncReplicaOptimizer doesn't increase the global step in worker's main
      # train step.
      logging.log_first_n(
          logging.WARN,
          "It seems that global step (tf.train.get_global_step) has not "
          "been increased. Current value (could be stable): %s vs previous "
          "value: %s. You could increase the global step by passing "
          "tf.train.get_global_step() to Optimizer.apply_gradients or "
          "Optimizer.minimize.", 5, stale_global_step, self._last_global_step)

    self._last_global_step = stale_global_step
Developer ID: aritratony, Project: tensorflow, Lines: 33, Source file: basic_session_run_hooks.py


Example 2: _do_batch_all_reduce_dense

  def _do_batch_all_reduce_dense(self, reduce_op, per_replica_values):
    """All-reduce across all workers in a batch."""

    logging.log_first_n(
        logging.INFO, "Collective batch_all_reduce: %d all-reduces, "
        "num_workers = %d" % (len(per_replica_values), self._num_workers), 10)

    chunked_gv = self._make_gradient_chunks(per_replica_values,
                                            self._all_reduce_merge_scope)

    reduced_gv_list = []
    for chunk in chunked_gv:
      with ops.name_scope("allreduce"):
        for grad_and_vars in chunk:
          # Gradients for the same variable but from different devices.
          scaled_grads = [g for g, _ in grad_and_vars]
          collective_reduced = cross_device_utils.build_collective_reduce(
              scaled_grads, self._num_workers, self._collective_keys, "Add",
              "Id")
          result = []
          for (_, v), g in zip(grad_and_vars, collective_reduced):
            result.append([g, v])
          reduced_gv_list.append(result)

    new_device_grads = [list(x) for x in zip(*reduced_gv_list)]
    return _ungroup_and_make_mirrored(
        new_device_grads,
        per_replica_values[0],
        reduce_op,
        num_between_graph_workers=self._num_workers)
Developer ID: adit-chandra, Project: tensorflow, Lines: 30, Source file: cross_device_ops.py


Example 3: _batch_all_reduce

  def _batch_all_reduce(self, aggregation, per_device_values):
    """All reduce algorithm in a batch."""
    logging.log_first_n(
        logging.INFO, "batch_all_reduce invoked for batches size = %d with "
        "algorithm = %s, num_packs = %d, agg_small_grads_max_bytes = %d and "
        "agg_small_grads_max_group = %d" %
        (len(per_device_values), self._all_reduce_alg, self._num_packs,
         self._agg_small_grads_max_bytes, self._agg_small_grads_max_group), 10)
    destinations = per_device_values[0].devices
    grouped = _group_value_by_device(per_device_values)

    device_grad_packs, tensor_packer = _pack_tensors(
        grouped, self._num_packs, self._agg_small_grads_max_bytes,
        self._agg_small_grads_max_group)

    # The actual aggregation of the repacked gradients. Note that they are
    # sharded among different aggregation trees. So it is important to strike
    # the balance on num_splits.
    if self._all_reduce_alg == "nccl":
      # TODO(yuefengz): merge this into the all-reduce library.
      reduced = cross_tower_utils.aggregate_gradients_using_nccl(
          device_grad_packs)
    else:
      # TODO(yuefengz): check that gpu ids in `destinations` are in ascending
      # order.
      reduced = (
          cross_tower_utils.aggregate_gradients_using_hierarchical_copy(
              destinations, device_grad_packs))

    reduced = _unpack_tensors(reduced, tensor_packer)
    return _ungroup_and_make_mirrored(reduced, per_device_values[0].devices,
                                      aggregation)
Developer ID: AnishShah, Project: tensorflow, Lines: 32, Source file: cross_tower_ops.py


Example 4: _ProcessHealthPillSummary

  def _ProcessHealthPillSummary(self, value, event):
    """Process summaries containing health pills.

    These summaries are distinguished by the fact that they have a Tensor field
    and have a special tag value.

    This method emits ERROR-level messages to the logs if it encounters Tensor
    summaries that it cannot process.

    Args:
      value: A summary_pb2.Summary.Value with a Tensor field.
      event: The event_pb2.Event containing that value.
    """
    elements = np.fromstring(value.tensor.tensor_content, dtype=np.float64)

    # The node_name property of the value object is actually a watch key: a
    # combination of node name, output slot, and a suffix. We capture the
    # actual node name and the output slot with a regular expression.
    match = re.match(r'^(.*):(\d+):DebugNumericSummary$', value.node_name)
    if not match:
      logging.log_first_n(
          logging.ERROR,
          'Unsupported watch key %s for health pills; skipping this sequence.',
          1,
          value.node_name)
      return

    node_name = match.group(1)
    output_slot = int(match.group(2))
    self._ProcessHealthPill(
        event.wall_time, event.step, node_name, output_slot, elements)
Developer ID: brainwy12, Project: tensorflow, Lines: 31, Source file: event_accumulator.py


Example 5: _do_batch_all_reduce_sparse

 def _do_batch_all_reduce_sparse(self, reduce_op, sparse_values):
   """Run batch all-reduce for sparse values."""
   logging.log_first_n(
       logging.WARN,
       "Efficient allreduce is not supported for %d IndexedSlices" %
       len(sparse_values), 10)
   # Use `sparse_values` as destinations to do all-reduces. It is effectively
   # an allgather under the hood but not an efficient one.
   return self._simple_cross_replica_ops.batch_reduce(
       reduce_op, zip(sparse_values, sparse_values))
Developer ID: adit-chandra, Project: tensorflow, Lines: 10, Source file: cross_device_ops.py


Example 6: _reduce

 def _reduce(self, reduce_op, per_replica_value, destinations):
   assert check_destinations(destinations)
   devices = get_devices_from(destinations)
   reduce_to_device = self.reduce_to_device or devices[0]
   logging.log_first_n(
       logging.INFO,
       "Reduce to %s then broadcast to %r." % (reduce_to_device, devices), 10)
   reduced = _simple_reduce(per_replica_value, reduce_to_device,
                            self.accumulation_fn, reduce_op)
   return self.broadcast(reduced, destinations)
Developer ID: Wajih-O, Project: tensorflow, Lines: 10, Source file: cross_device_ops.py


Example 7: gradient

  def gradient(self, target, sources, output_gradients=None):
    """Computes the gradient using operations recorded in context of this tape.

    Args:
      target: Tensor (or list of tensors) to be differentiated.
      sources: a list or nested structure of Tensors or Variables. `target`
        will be differentiated against elements in `sources`.
      output_gradients: a list of gradients, one for each element of
        target. Defaults to None.

    Returns:
      a list or nested structure of Tensors (or IndexedSlices, or None),
      one for each element in `sources`. Returned structure is the same as
      the structure of `sources`.

    Raises:
      RuntimeError: if called inside the context of the tape, or if called more
       than once on a non-persistent tape.
    """
    if self._tape is None:
      raise RuntimeError("GradientTape.gradient can only be called once on "
                         "non-persistent tapes.")
    if self._recording:
      if not self._persistent:
        self._pop_tape()
      else:
        logging.log_first_n(logging.WARN,
                            "Calling GradientTape.gradient on a persistent "
                            "tape inside it's context is significantly less "
                            "efficient than calling it outside the context (it "
                            "causes the gradient ops to be recorded on the "
                            "tape, leading to increased CPU and memory usage). "
                            "Only call GradientTape.gradient inside the "
                            "context if you actually want to trace the "
                            "gradient in order to compute higher order "
                            "derrivatives.", 1)

    flat_sources = nest.flatten(sources)
    flat_sources = [_handle_or_self(x) for x in flat_sources]

    if output_gradients is not None:
      output_gradients = [None if x is None else ops.convert_to_tensor(x)
                          for x in nest.flatten(output_gradients)]

    flat_grad = imperative_grad.imperative_grad(
        self._tape,
        nest.flatten(target),
        flat_sources,
        output_gradients=output_gradients)

    if not self._persistent:
      self._tape = None

    grad = nest.pack_sequence_as(sources, flat_grad)
    return grad
Developer ID: gunan, Project: tensorflow, Lines: 55, Source file: backprop.py


Example 8: run_loop

 def run_loop(self):
     # Count the steps.
     current_step = training_util.global_step(self._sess, self._sv.global_step)
     added_steps = current_step - self._last_step
     self._last_step = current_step
     # Measure the elapsed time.
     current_time = time.time()
     elapsed_time = current_time - self._last_time
     self._last_time = current_time
     # Reports the number of steps done per second
     steps_per_sec = added_steps / elapsed_time
     summary = Summary(value=[Summary.Value(tag=self._summary_tag, simple_value=steps_per_sec)])
     if self._sv.summary_writer:
         self._sv.summary_writer.add_summary(summary, current_step)
     logging.log_first_n(logging.INFO, "%s: %g", 10, self._summary_tag, steps_per_sec)
Developer ID: paolodedios, Project: tensorflow, Lines: 15, Source file: supervisor.py


Example 9: batch_reduce_implementation

  def batch_reduce_implementation(self, reduce_op, value_destination_pairs):
    all_devices_match = _all_devices_match(value_destination_pairs)
    if all_devices_match:
      return self._batch_all_reduce(reduce_op,
                                    [v[0] for v in value_destination_pairs])
    else:
      if not all_devices_match:
        logging.log_first_n(
            logging.WARN, "Efficient batch_reduce is not supported if "
            "destinations are different.", 10)

      return [
          self.reduce_implementation(reduce_op, t, destinations=v)
          for t, v in value_destination_pairs
      ]
Developer ID: adit-chandra, Project: tensorflow, Lines: 15, Source file: cross_device_ops.py


Example 10: _reduce

  def _reduce(self, aggregation, per_device_value, destinations):
    contains_indexed_slices = cross_tower_utils.contains_indexed_slices(
        per_device_value)
    if ((destinations is None or _devices_match(per_device_value, destinations))
        and not context.executing_eagerly()
        and not contains_indexed_slices):
      return self._batch_all_reduce(aggregation, [per_device_value])[0]
    else:
      if contains_indexed_slices:
        logging.log_first_n(
            logging.WARN,
            "Efficient allreduce is not supported for IndexedSlices.", 10)

      devices = get_devices_from(destinations or per_device_value)
      reduce_to_device = devices[0]
      reduced = _simple_reduce(per_device_value, reduce_to_device,
                               math_ops.add_n, aggregation)
      return self.broadcast(reduced, devices)
Developer ID: ChristinaEricka, Project: tensorflow, Lines: 18, Source file: cross_tower_ops.py


Example 11: watch

  def watch(self, tensor):
    """Ensures that `tensor` is being traced by this tape.

    Args:
      tensor: a Tensor or list of Tensors.
    """
    for t in nest.flatten(tensor):
      if not t.dtype.is_floating:
        logging.log_first_n(
            logging.WARN, "The dtype of the watched tensor must be "
            "floating (e.g. tf.float32), got %r", 5, t.dtype)
      if hasattr(t, "handle"):
        # There are many variable-like objects, all of them currently have
        # `handle` attribute that points to a tensor. If this changes, internals
        # of watch_variable need to change as well.
        tape.watch_variable(self._tape, t)
      else:
        tape.watch(self._tape, t)
Developer ID: adit-chandra, Project: tensorflow, Lines: 18, Source file: backprop.py


Example 12: batch_reduce_implementation

  def batch_reduce_implementation(self, reduce_op, value_destination_pairs):
    if cross_device_utils.contains_indexed_slices(value_destination_pairs):
      raise ValueError(
          "`IndexSlices` is not supported for Collective All-Reduce.")

    all_devices_match = _all_devices_match(value_destination_pairs)
    if all_devices_match:
      return self._batch_all_reduce(reduce_op,
                                    [v[0] for v in value_destination_pairs])
    else:
      if not all_devices_match:
        logging.log_first_n(
            logging.WARN, "Efficient batch_reduce is not supported if "
            "destinations are different.", 10)

      return [
          self.reduce_implementation(reduce_op, t, destinations=v)
          for t, v in value_destination_pairs
      ]
Developer ID: kylin9872, Project: tensorflow, Lines: 19, Source file: cross_device_ops.py


Example 13: _batch_reduce

  def _batch_reduce(self, aggregation, value_destination_pairs):
    all_devices_match = _all_devices_match(value_destination_pairs)
    contains_indexed_slices = cross_tower_utils.contains_indexed_slices(
        value_destination_pairs)
    if (all_devices_match and not context.executing_eagerly()
        and not contains_indexed_slices):
      return self._batch_all_reduce(aggregation,
                                    [v[0] for v in value_destination_pairs])
    else:
      if not all_devices_match:
        logging.log_first_n(logging.WARN,
                            "Efficient batch_reduce is not supported if "
                            "destinations are different.",
                            10)

      return [
          self._reduce(aggregation, t, destinations=v)
          for t, v in value_destination_pairs
      ]
Developer ID: AnishShah, Project: tensorflow, Lines: 19, Source file: cross_tower_ops.py


Example 14: _batch_all_reduce

  def _batch_all_reduce(self, reduce_op, per_replica_values):
    """All reduce algorithm in a batch."""
    logging.log_first_n(
        logging.INFO, "Collective batch_all_reduce: %d all-reduces, "
        "num_workers = %d" % (len(per_replica_values), self._num_workers), 10)

    dense_values, dense_indices, sparse_values, sparse_indices = (
        cross_device_utils.split_by_sparsity(per_replica_values))
    if dense_values:
      dense_results = self._do_batch_all_reduce_dense(reduce_op, dense_values)
    else:
      dense_results = []
    if sparse_values:
      sparse_results = self._do_batch_all_reduce_sparse(reduce_op,
                                                        sparse_values)
    else:
      sparse_results = []
    return cross_device_utils.stitch_values(((dense_results, dense_indices),
                                             (sparse_results, sparse_indices)))
Developer ID: adit-chandra, Project: tensorflow, Lines: 19, Source file: cross_device_ops.py


Example 15: _reduce

  def _reduce(self, reduce_op, per_replica_value, destinations):
    contains_indexed_slices = cross_device_utils.contains_indexed_slices(
        per_replica_value)
    if (_devices_match(per_replica_value, destinations)
        and not context.executing_eagerly()
        and not contains_indexed_slices):
      return self._batch_all_reduce(reduce_op, [per_replica_value])[0]
    else:
      if contains_indexed_slices:
        logging.log_first_n(
            logging.WARN,
            "Efficient allreduce is not supported for IndexedSlices.", 10)

      if check_destinations(destinations):
        devices = get_devices_from(destinations)
      else:
        devices = get_devices_from(per_replica_value)
      reduce_to_device = devices[0]
      reduced = _simple_reduce(per_replica_value, reduce_to_device,
                               math_ops.add_n, reduce_op)
      return self.broadcast(reduced, devices)
Developer ID: JonathanRaiman, Project: tensorflow, Lines: 21, Source file: cross_device_ops.py


Example 16: is_whitelisted_for_graph

def is_whitelisted_for_graph(o):
  """Check whether an entity is whitelisted for use in graph mode.

  Examples of whitelisted entities include all members of the tensorflow
  package.

  Args:
    o: A Python entity.
  Returns:
    Boolean
  """
  # TODO(b/120224672): Fix this.
  if isinstance(o, functools.partial):
    # tf_inspect.getmodule(functools.partial(...)) otherwise returns None since
    # functools.partial objects do not have a __module__ attribute.
    m = functools
  else:
    m = tf_inspect.getmodule(o)
  for prefix, in config.DEFAULT_UNCOMPILED_MODULES:
    if m.__name__.startswith(prefix):
      return True

  if hasattr(o, 'autograph_info__'):
    return True

  if inspect_utils.isnamedtuple(o):
    # Due to the way they're constructed, namedtuple types cannot be converted
    # because they don't expose source code. But we assume they are safe for
    # graph mode since they are just containers.
    if tf_inspect.isclass(o) and len(o.__bases__) > 1:
      logging.log_first_n(
          logging.level_warning(),
          'Entity {} looks like a namedtuple subclass. If it has any custom'
          ' methods, they will not be converted by AutoGraph.'.format(o), 1)
    return True

  return False
Developer ID: aeverall, Project: tensorflow, Lines: 37, Source file: conversion.py


Example 17: warn_first_n

def warn_first_n(msg, *args, **kwargs):
  logging.log_first_n(logging.WARNING, msg, *args, **kwargs)
Developer ID: tthhee, Project: tensorflow, Lines: 2, Source file: ag_logging.py


Example 18: LogErrorOnce

 def LogErrorOnce(msg):
   logging.log_first_n(logging.ERROR, msg, 1)
Developer ID: duzy, Project: tensorflow, Lines: 2, Source file: event_accumulator.py


Example 19: disable_partitioned_variables

 def disable_partitioned_variables(getter, *args, **kwargs):
   if kwargs.pop("partitioner", None) is not None:
     tf_logging.log_first_n(
         tf_logging.WARN, "Partitioned variables are disabled when using "
         "DistributionStrategy.", 1)
   return getter(*args, **kwargs)
Developer ID: LiuCKind, Project: tensorflow, Lines: 6, Source file: distribute.py


Example 20: map_fn


#......... part of the code is omitted here .........
    ```

    ```python
    elems=ragged.constant([[1, 2, 3], [4, 5], [6, 7]])
    mean = map_fn(tf.reduce_mean, elems)
    # mean == [2, 4, 6]
    ```

    ```python
    elems=ragged.constant([[1, 2, 3], [4, 5], [6, 7]], dtype=tf.int64)
    out = map_fn(fn=lambda x: x+1, elems=elems,
      dtype=ragged.RaggedTensorType(type=tf.int64, ragged_rank=0))
    # out = ragged.constant([[2, 3, 4], [5, 6], [7, 8]])
    ```
  """
  if not callable(fn):
    raise TypeError("fn must be callable.")

  if isinstance(elems, sparse_tensor.SparseTensor):
    raise TypeError(
        "To perform a map on the values of a sparse tensor use either "
        " SparseTensor(input.indices, fn(input.values), input.dense_shape) or "
        " SparseTensor(input.indices, map_fn(fn, input.values), "
        "input.dense_shape)")

  in_graph_mode = not context.executing_eagerly()
  # Set the default number of parallel_iterations depending on graph/eager mode.
  if in_graph_mode and not parallel_iterations:
    parallel_iterations = 10
  elif not in_graph_mode and not parallel_iterations:
    parallel_iterations = 1

  if not in_graph_mode and parallel_iterations > 1:
    logging.log_first_n(logging.WARN, "Setting parallel_iterations > 1 has no "
                        "effect when executing eagerly. Consider calling map_fn"
                        " with tf.contrib.eager.defun to execute fn in "
                        "parallel.", 1)
    parallel_iterations = 1

  input_is_sequence = nest.is_sequence(elems)
  input_flatten = lambda x: nest.flatten(x) if input_is_sequence else [x]

  def input_pack(x):
    return nest.pack_sequence_as(elems, x) if input_is_sequence else x[0]

  elems_flat = input_flatten(elems)

  with ops.name_scope(name, "map", elems_flat):
    # TODO(akshayka): Remove the in_graph_mode check once caching devices are
    # supported in Eager
    if in_graph_mode:
      # Any get_variable calls in fn will cache the first call locally
      # and not issue repeated network I/O requests for each iteration.
      varscope = vs.get_variable_scope()
      varscope_caching_device_was_none = False
      if varscope.caching_device is None:
        # TODO(ebrevdo): Change to using colocate_with here and in other
        # methods.
        varscope.set_caching_device(lambda op: op.device)
        varscope_caching_device_was_none = True

    elems_flat = [
        ragged_factory_ops.convert_to_tensor_or_ragged_tensor(
            elem, name="elem") for elem in elems_flat
    ]
Developer ID: JonathanRaiman, Project: tensorflow, Lines: 66, Source file: ragged_map_ops.py



Note: The tensorflow.python.platform.tf_logging.log_first_n examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other source-code and documentation platforms. The code snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors. Refer to each project's License before redistributing or using the code. Do not reproduce this article without permission.

