
Java Coder Class Code Examples


This article collects typical usage examples of the Java class com.google.cloud.dataflow.sdk.coders.Coder. If you are wondering what the Coder class is for, how to use it, or are looking for concrete usage examples, the curated code samples below may help.



The Coder class belongs to the com.google.cloud.dataflow.sdk.coders package. Twenty code examples of the Coder class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
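
Before diving into the project examples, here is a minimal sketch of the basic Coder contract: encode a value to bytes, then decode the bytes back. It assumes the Dataflow SDK 1.x StringUtf8Coder and is purely illustrative; it is not taken from any of the projects below.

import com.google.cloud.dataflow.sdk.coders.Coder;
import com.google.cloud.dataflow.sdk.coders.StringUtf8Coder;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class CoderRoundTrip {
  public static void main(String[] args) throws Exception {
    Coder<String> coder = StringUtf8Coder.of();

    // Encode into the "outer" context: the value occupies the whole stream.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    coder.encode("hello", out, Coder.Context.OUTER);

    // A well-behaved Coder round-trips exactly.
    String decoded = coder.decode(
        new ByteArrayInputStream(out.toByteArray()), Coder.Context.OUTER);
    System.out.println(decoded); // prints "hello"
  }
}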

Example 1: translateNode

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
@Override
public void translateNode(TransformTreeNode node, Bound<String> transform, TranslationContext translation) {
	String path = transform.getFilepattern();
	String name = transform.getName(); 
	Coder<?> coder = transform.getDefaultOutputCoder(transform.getOutput());
	
	if (coder != null && coder != TextIO.DEFAULT_TEXT_CODER) {
		throw new UnsupportedOperationException("Currently only supports UTF-8 inputs.");
	}
	
	DataSource<String> source = translation.getExecutionEnvironment().readTextFile(path);
	if (name != null) {
		source = source.name(name);
	}
	
	translation.registerDataSet(source, node);
}
 
Developer: StephanEwen, Project: flink-dataflow, Lines: 18, Source: FlinkTransformTranslators.java


Example 2: gbk

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private static <K, V> TransformEvaluator<GroupByKey.GroupByKeyOnly<K, V>> gbk() {
  return new TransformEvaluator<GroupByKey.GroupByKeyOnly<K, V>>() {
    @Override
    public void evaluate(GroupByKey.GroupByKeyOnly<K, V> transform, EvaluationContext context) {
      @SuppressWarnings("unchecked")
      JavaRDDLike<WindowedValue<KV<K, V>>, ?> inRDD =
          (JavaRDDLike<WindowedValue<KV<K, V>>, ?>) context.getInputRDD(transform);
      @SuppressWarnings("unchecked")
      KvCoder<K, V> coder = (KvCoder<K, V>) context.getInput(transform).getCoder();
      Coder<K> keyCoder = coder.getKeyCoder();
      Coder<V> valueCoder = coder.getValueCoder();

      // Use coders to convert objects in the PCollection to byte arrays, so they
      // can be transferred over the network for the shuffle.
      JavaRDDLike<WindowedValue<KV<K, Iterable<V>>>, ?> outRDD = fromPair(
            toPair(inRDD.map(WindowingHelpers.<KV<K, V>>unwindowFunction()))
          .mapToPair(CoderHelpers.toByteFunction(keyCoder, valueCoder))
          .groupByKey()
          .mapToPair(CoderHelpers.fromByteFunctionIterable(keyCoder, valueCoder)))
          // empty windows are OK here, see GroupByKey#evaluateHelper in the SDK
          .map(WindowingHelpers.<KV<K, Iterable<V>>>windowFunction());
      context.setOutputRDD(transform, outRDD);
    }
  };
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 26, Source: TransformTranslator.java


Example 3: getSideInputs

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private static Map<TupleTag<?>, BroadcastHelper<?>> getSideInputs(
    List<PCollectionView<?>> views,
    EvaluationContext context) {
  if (views == null) {
    return ImmutableMap.of();
  } else {
    Map<TupleTag<?>, BroadcastHelper<?>> sideInputs = Maps.newHashMap();
    for (PCollectionView<?> view : views) {
      Iterable<? extends WindowedValue<?>> collectionView = context.getPCollectionView(view);
      Coder<Iterable<WindowedValue<?>>> coderInternal = view.getCoderInternal();
      @SuppressWarnings("unchecked")
      BroadcastHelper<?> helper =
          BroadcastHelper.create((Iterable<WindowedValue<?>>) collectionView, coderInternal);
      //broadcast side inputs
      helper.broadcast(context.getSparkContext());
      sideInputs.put(view.getTagInternal(), helper);
    }
    return sideInputs;
  }
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 21, Source: TransformTranslator.java


Example 4: createAggregator

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
/**
 * Creates an aggregator and associates it with the specified name.
 *
 * @param named     Name of aggregator.
 * @param combineFn Combine function used in aggregation.
 * @param <IN>      Type of inputs to aggregator.
 * @param <INTER>   Intermediate data type
 * @param <OUT>     Type of aggregator outputs.
 * @return Specified aggregator
 */
public synchronized <IN, INTER, OUT> Aggregator<IN, OUT> createAggregator(
    String named,
    Combine.CombineFn<? super IN, INTER, OUT> combineFn) {
  @SuppressWarnings("unchecked")
  Aggregator<IN, OUT> aggregator = (Aggregator<IN, OUT>) aggregators.get(named);
  if (aggregator == null) {
    @SuppressWarnings("unchecked")
    NamedAggregators.CombineFunctionState<IN, INTER, OUT> state =
        new NamedAggregators.CombineFunctionState<>(
            (Combine.CombineFn<IN, INTER, OUT>) combineFn,
            (Coder<IN>) getCoder(combineFn),
            this);
    accum.add(new NamedAggregators(named, state));
    aggregator = new SparkAggregator<>(named, state);
    aggregators.put(named, aggregator);
  }
  return aggregator;
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 29, Source: SparkRuntimeContext.java
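
A hypothetical call site for the method above, assuming a SparkRuntimeContext instance named runtimeContext and the SDK's Sum.SumIntegerFn; the aggregator name is illustrative:

import com.google.cloud.dataflow.sdk.transforms.Aggregator;
import com.google.cloud.dataflow.sdk.transforms.Sum;

// Hypothetical: values added here are summed under the name "recordsProcessed".
Aggregator<Integer, Integer> records =
    runtimeContext.createAggregator("recordsProcessed", new Sum.SumIntegerFn());
records.addValue(1);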


Example 5: create

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private static <T> TransformEvaluator<Create.Values<T>> create() {
  return new TransformEvaluator<Create.Values<T>>() {
    @SuppressWarnings("unchecked")
    @Override
    public void evaluate(Create.Values<T> transform, EvaluationContext context) {
      StreamingEvaluationContext sec = (StreamingEvaluationContext) context;
      Iterable<T> elems = transform.getElements();
      Coder<T> coder = sec.getOutput(transform).getCoder();
      if (coder != VoidCoder.of()) {
        // actual create
        sec.setOutputRDDFromValues(transform, elems, coder);
      } else {
        // fake create as an input
        // creates a stream with a single batch containing a single null element
        // to invoke following transformations once
        // to support DataflowAssert
        sec.setDStreamFromQueue(transform,
            Collections.<Iterable<Void>>singletonList(Collections.singletonList((Void) null)),
            (Coder<Void>) coder);
      }
    }
  };
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 24, Source: StreamingTransformTranslator.java


Example 6: fromByteFunctionIterable

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
/**
 * A function wrapper for converting a byte array pair to a key-value pair, where
 * values are {@link Iterable}.
 *
 * @param keyCoder Coder to deserialize keys.
 * @param valueCoder Coder to deserialize values.
 * @param <K>   The type of the key being deserialized.
 * @param <V>   The type of the value being deserialized.
 * @return A function that accepts a pair of byte arrays and returns a key-value pair.
 */
static <K, V> PairFunction<Tuple2<ByteArray, Iterable<byte[]>>, K, Iterable<V>>
    fromByteFunctionIterable(final Coder<K> keyCoder, final Coder<V> valueCoder) {
  return new PairFunction<Tuple2<ByteArray, Iterable<byte[]>>, K, Iterable<V>>() {
    @Override
    public Tuple2<K, Iterable<V>> call(Tuple2<ByteArray, Iterable<byte[]>> tuple) {
      return new Tuple2<>(fromByteArray(tuple._1().getValue(), keyCoder),
        Iterables.transform(tuple._2(), new com.google.common.base.Function<byte[], V>() {
          @Override
          public V apply(byte[] bytes) {
            return fromByteArray(bytes, valueCoder);
          }
        }));
    }
  };
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 26, Source: CoderHelpers.java


Example 7: encode

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
@Override
public void encode(Accum value, OutputStream outStream,
    com.google.cloud.dataflow.sdk.coders.Coder.Context context) throws CoderException,
    IOException {

  TSPROTO_CODER.encode(value.lastCandle, outStream, context.nested());
  LIST_CODER.encode(value.candles, outStream, context.nested());

}
 
Developer: GoogleCloudPlatform, Project: data-timeseries-java, Lines: 10, Source: CompleteTimeSeriesAggCombiner.java


Example 8: decode

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
@Override
public Accum decode(InputStream inStream,
    com.google.cloud.dataflow.sdk.coders.Coder.Context context) throws CoderException,
    IOException {
  Accum accum = new Accum();
  accum.lastCandle = TSPROTO_CODER.decode(inStream, context.nested());
  accum.candles = LIST_CODER.decode(inStream, context.nested());
  return accum;
}
 
Developer: GoogleCloudPlatform, Project: data-timeseries-java, Lines: 10, Source: CompleteTimeSeriesAggCombiner.java


Example 9: pCollectionCreateAndVerify

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
/**
 * PCollectionCreateAndVerify asserts that the contents of the collection can be created, then returns the resulting
 * PCollection.
 *
 * @param p     to use with Create.of()
 * @param list  the items to put into the PCollection
 * @param coder the coder for the items, new GATKReadCoder.
 * @param <T>   the item type, e.g., GATKRead
 * @return a PCollection containing the items from list.
 */
public static <T> PCollection<T> pCollectionCreateAndVerify(Pipeline p, Iterable<T> list, Coder<T> coder) {
    for (T value : list) {
        try {
            CoderProperties.coderDecodeEncodeEqual(coder, value);
        } catch (Exception e) {
            throw new GATKException("Coder decode/encode round trip failed for: " + value.getClass().getSimpleName());
        }
    }
    PCollection<T> pCollection = p.apply(Create.of(list).withCoder(coder));
    DataflowAssert.that(pCollection).containsInAnyOrder(list); // Remove me when DavidR's fix is in.
    return pCollection;
}
 
Developer: broadinstitute, Project: gatk-dataflow, Lines: 23, Source: DataflowTestUtils.java


Example 10: create

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private static <T> TransformEvaluator<Create.Values<T>> create() {
  return new TransformEvaluator<Create.Values<T>>() {
    @Override
    public void evaluate(Create.Values<T> transform, EvaluationContext context) {
      Iterable<T> elems = transform.getElements();
      // Use a coder to convert the objects in the PCollection to byte arrays, so they
      // can be transferred over the network.
      Coder<T> coder = context.getOutput(transform).getCoder();
      context.setOutputRDDFromValues(transform, elems, coder);
    }
  };
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 13, Source: TransformTranslator.java


Example 11: deserialize

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private T deserialize() {
  T val;
  try {
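    // Coder.Context(true) is the "outer" context: the value occupies the whole stream.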
    val = coder.decode(new ByteArrayInputStream(bcast.value()),
        new Coder.Context(true));
  } catch (IOException ioe) {
    // this should not ever happen, log it if it does.
    LOG.warn(ioe.getMessage());
    val = null;
  }
  return val;
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 13, Source: BroadcastHelper.java


Example 12: getCoder

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private Coder<?> getCoder(Combine.CombineFn<?, ?, ?> combiner) {
  try {
    if (combiner.getClass() == Sum.SumIntegerFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Integer.class));
    } else if (combiner.getClass() == Sum.SumLongFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Long.class));
    } else if (combiner.getClass() == Sum.SumDoubleFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Double.class));
    } else if (combiner.getClass() == Min.MinIntegerFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Integer.class));
    } else if (combiner.getClass() == Min.MinLongFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Long.class));
    } else if (combiner.getClass() == Min.MinDoubleFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Double.class));
    } else if (combiner.getClass() == Max.MaxIntegerFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Integer.class));
    } else if (combiner.getClass() == Max.MaxLongFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Long.class));
    } else if (combiner.getClass() == Max.MaxDoubleFn.class) {
      return getCoderRegistry().getDefaultCoder(TypeDescriptor.of(Double.class));
    } else {
      throw new IllegalArgumentException("unsupported combiner in Aggregator: "
          + combiner.getClass().getName());
    }
  } catch (CannotProvideCoderException e) {
    throw new IllegalStateException("Could not determine default coder for combiner", e);
  }
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 29, Source: SparkRuntimeContext.java


Example 13: CombineFunctionState

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
public CombineFunctionState(
    Combine.CombineFn<IN, INTER, OUT> combineFn,
    Coder<IN> inCoder,
    SparkRuntimeContext ctxt) {
  this.combineFn = combineFn;
  this.inCoder = inCoder;
  this.ctxt = ctxt;
  this.state = combineFn.createAccumulator();
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 10, Source: NamedAggregators.java


Example 14: writeObject

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private void writeObject(ObjectOutputStream oos) throws IOException {
  oos.writeObject(ctxt);
  oos.writeObject(combineFn);
  oos.writeObject(inCoder);
  try {
    combineFn.getAccumulatorCoder(ctxt.getCoderRegistry(), inCoder)
        .encode(state, oos, Coder.Context.NESTED);
  } catch (CannotProvideCoderException e) {
    throw new IllegalStateException("Could not determine coder for accumulator", e);
  }
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 12, Source: NamedAggregators.java


Example 15: readObject

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
@SuppressWarnings("unchecked")
private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
  ctxt = (SparkRuntimeContext) ois.readObject();
  combineFn = (Combine.CombineFn<IN, INTER, OUT>) ois.readObject();
  inCoder = (Coder<IN>) ois.readObject();
  try {
    state = combineFn.getAccumulatorCoder(ctxt.getCoderRegistry(), inCoder)
        .decode(ois, Coder.Context.NESTED);
  } catch (CannotProvideCoderException e) {
    throw new IllegalStateException("Could not determine coder for accumulator", e);
  }
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 13, Source: NamedAggregators.java


Example 16: createFromQueue

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
private static <T> TransformEvaluator<CreateStream.QueuedValues<T>> createFromQueue() {
  return new TransformEvaluator<CreateStream.QueuedValues<T>>() {
    @Override
    public void evaluate(CreateStream.QueuedValues<T> transform, EvaluationContext context) {
      StreamingEvaluationContext sec = (StreamingEvaluationContext) context;
      Iterable<Iterable<T>> values = transform.getQueuedValues();
      Coder<T> coder = sec.getOutput(transform).getCoder();
      sec.setDStreamFromQueue(transform, values, coder);
    }
  };
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 12, Source: StreamingTransformTranslator.java


Example 17: toByteArray

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
/**
 * Utility method for serializing an object using the specified coder.
 *
 * @param value Value to serialize.
 * @param coder Coder to serialize with.
 * @param <T> type of value that is serialized
 * @return Byte array representing serialized object.
 */
static <T> byte[] toByteArray(T value, Coder<T> coder) {
  ByteArrayOutputStream baos = new ByteArrayOutputStream();
  try {
    coder.encode(value, baos, new Coder.Context(true));
  } catch (IOException e) {
    throw new IllegalStateException("Error encoding value: " + value, e);
  }
  return baos.toByteArray();
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 18, Source: CoderHelpers.java


Example 18: fromByteArray

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
/**
 * Utility method for deserializing a byte array using the specified coder.
 *
 * @param serialized bytearray to be deserialized.
 * @param coder      Coder to deserialize with.
 * @param <T>        Type of object to be returned.
 * @return Deserialized object.
 */
static <T> T fromByteArray(byte[] serialized, Coder<T> coder) {
  ByteArrayInputStream bais = new ByteArrayInputStream(serialized);
  try {
    return coder.decode(bais, new Coder.Context(true));
  } catch (IOException e) {
    throw new IllegalStateException("Error decoding bytes for coder: " + coder, e);
  }
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 17, Source: CoderHelpers.java
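
Examples 17 and 18 form a natural round trip. The following is a hypothetical usage sketch, not taken from the project: it assumes the caller sits in the same package as CoderHelpers (the helpers are package-private) and uses StringUtf8Coder as a stand-in for any Coder<T>:

import com.google.cloud.dataflow.sdk.coders.Coder;
import com.google.cloud.dataflow.sdk.coders.StringUtf8Coder;

// Hypothetical caller, assumed to sit in the same package as CoderHelpers.
Coder<String> coder = StringUtf8Coder.of();
byte[] bytes = CoderHelpers.toByteArray("event-42", coder);
String restored = CoderHelpers.fromByteArray(bytes, coder);
// restored.equals("event-42"): encode/decode is lossless for a well-behaved coder.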


Example 19: toByteFunction

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
/**
 * A function wrapper for converting an object to a bytearray.
 *
 * @param coder Coder to serialize with.
 * @param <T>   The type of the object being serialized.
 * @return A function that accepts an object and returns its coder-serialized form.
 */
static <T> Function<T, byte[]> toByteFunction(final Coder<T> coder) {
  return new Function<T, byte[]>() {
    @Override
    public byte[] call(T t) throws Exception {
      return toByteArray(t, coder);
    }
  };
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 16, Source: CoderHelpers.java


Example 20: fromByteFunction

import com.google.cloud.dataflow.sdk.coders.Coder; // import the required package/class
/**
 * A function wrapper for converting a byte array to an object.
 *
 * @param coder Coder to deserialize with.
 * @param <T>   The type of the object being deserialized.
 * @return A function that accepts a byte array and returns its corresponding object.
 */
static <T> Function<byte[], T> fromByteFunction(final Coder<T> coder) {
  return new Function<byte[], T>() {
    @Override
    public T call(byte[] bytes) throws Exception {
      return fromByteArray(bytes, coder);
    }
  };
}
 
Developer: shakamunyi, Project: spark-dataflow, Lines: 16, Source: CoderHelpers.java
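
Examples 19 and 20 are typically applied on either side of a Spark network boundary. Below is a hypothetical driver snippet; the local JavaSparkContext setup and the same-package assumption for CoderHelpers are illustrative, not from the project:

import com.google.cloud.dataflow.sdk.coders.StringUtf8Coder;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Arrays;

// Hypothetical driver code, assumed to sit in the same package as CoderHelpers.
JavaSparkContext sc = new JavaSparkContext("local[2]", "coder-helpers-demo");
JavaRDD<String> lines = sc.parallelize(Arrays.asList("a", "b", "c"));

// Serialize elements to byte arrays before a network boundary, then restore them.
JavaRDD<byte[]> asBytes = lines.map(CoderHelpers.toByteFunction(StringUtf8Coder.of()));
JavaRDD<String> restored = asBytes.map(CoderHelpers.fromByteFunction(StringUtf8Coder.of()));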



Note: The com.google.cloud.dataflow.sdk.coders.Coder examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors, who retain copyright; consult each project's license before distributing or reusing the code, and do not republish without permission.

