
Java StreamingContext Class Code Examples


This article collects and summarizes typical usage examples of the Java class org.apache.spark.streaming.StreamingContext. If you have been wondering what the StreamingContext class is for, how to use it, or what real-world usage looks like, the curated code examples below should help.



The StreamingContext class belongs to the org.apache.spark.streaming package. Twelve code examples of the class are shown below, sorted by popularity by default.

Example 1: WatermarkSyncedDStream

import org.apache.spark.streaming.StreamingContext; // import the required package/class
public WatermarkSyncedDStream(final Queue<JavaRDD<WindowedValue<T>>> rdds,
                              final Long batchDuration,
                              final StreamingContext ssc) {
  super(ssc, JavaSparkContext$.MODULE$.<WindowedValue<T>>fakeClassTag());
  this.rdds = rdds;
  this.batchDuration = batchDuration;
}
 
Developer: apache | Project: beam | Lines: 8 | Source: WatermarkSyncedDStream.java


Example 2: SourceDStream

import org.apache.spark.streaming.StreamingContext; // import the required package/class
SourceDStream(
    StreamingContext ssc,
    UnboundedSource<T, CheckpointMarkT> unboundedSource,
    SerializablePipelineOptions options,
    Long boundMaxRecords) {
  super(ssc, JavaSparkContext$.MODULE$.<scala.Tuple2<Source<T>, CheckpointMarkT>>fakeClassTag());
  this.unboundedSource = unboundedSource;
  this.options = options;

  SparkPipelineOptions sparkOptions = options.get().as(
      SparkPipelineOptions.class);

  // Reader cache expiration interval. 50% of batch interval is added to accommodate latency.
  this.readerCacheInterval = 1.5 * sparkOptions.getBatchIntervalMillis();

  this.boundReadDuration = boundReadDuration(sparkOptions.getReadTimePercentage(),
      sparkOptions.getMinReadTimeMillis());
  // set initial parallelism once.
  this.initialParallelism = ssc().sparkContext().defaultParallelism();
  checkArgument(this.initialParallelism > 0, "Number of partitions must be greater than zero.");

  this.boundMaxRecords = boundMaxRecords;

  try {
    this.numPartitions =
        createMicrobatchSource()
            .split(sparkOptions)
            .size();
  } catch (Exception e) {
    throw new RuntimeException(e);
  }
}
 
Developer: apache | Project: beam | Lines: 33 | Source: SourceDStream.java


Example 3: FacebookInputDStream

import org.apache.spark.streaming.StreamingContext; // import the required package/class
public FacebookInputDStream(StreamingContext ssc, String accessToken, BatchRequestBuilder[] batchRequestBuilders,
        StorageLevel storageLevel) {
    super(ssc, scala.reflect.ClassTag$.MODULE$.apply(String.class));
    this.accessToken = accessToken;
    this.storageLevel = storageLevel;
    this.batchRequestBuilders = batchRequestBuilders;
}
 
Developer: ogidogi | Project: laughing-octo-sansa | Lines: 8 | Source: FacebookInputDStream.java


Example 4: SparkScheduler

import org.apache.spark.streaming.StreamingContext; // import the required package/class
public SparkScheduler(JobQueue queue) {
    SparkConf conf = new SparkConf();
    conf.setMaster(System.getProperty("resource.runner.spark.host", "local"));
    conf.setAppName("OODT Spark Job");

    // Note: 'location' is computed but never used in this snippet.
    URL location = SparkScheduler.class.getResource('/' + SparkScheduler.class.getName().replace('.', '/') + ".class");
    conf.setJars(new String[]{"../lib/cas-resource-0.8-SNAPSHOT.jar"});
    sc = new SparkContext(conf);
    ssc = new StreamingContext(sc, new Duration(10000));
    this.queue = queue;
}
 
Developer: apache | Project: oodt | Lines: 12 | Source: SparkScheduler.java
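
For context, here is a minimal, hypothetical sketch of the StreamingContext lifecycle that a scheduler like the one above participates in: construct the context with a batch interval, register streams, then start it and block. The class and application names are illustrative, not part of the original project.

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.StreamingContext;

public class StreamingContextLifecycle {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local[2]")         // two local threads: one for receiving, one for processing
                .setAppName("lifecycle-demo"); // illustrative app name
        SparkContext sc = new SparkContext(conf);
        // 10-second batch interval, matching the scheduler example above.
        StreamingContext ssc = new StreamingContext(sc, new Duration(10000));
        // ...define input streams and transformations here; at least one
        // output operation (e.g. print()) must be registered before start()...
        ssc.start();            // begin consuming and processing data
        ssc.awaitTermination(); // block until stop() is called or an error occurs
    }
}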


Example 5: PubsubInputDStream

import org.apache.spark.streaming.StreamingContext; // import the required package/class
public PubsubInputDStream(final StreamingContext _ssc, final String _subscription, final Integer _batchSize,
		final boolean _decodeData) {
	super(_ssc, new PubsubReceiver(_subscription, _batchSize, _decodeData), STRING_CLASS_TAG);
}
 
Developer: SignifAi | Project: Spark-PubSub | Lines: 5 | Source: PubsubInputDStream.java


Example 6: PubsubReceiverInputDStream

import org.apache.spark.streaming.StreamingContext; // import the required package/class
public PubsubReceiverInputDStream(final StreamingContext _ssc, final String _subscription, final Integer _batchSize,
		final boolean _decodeData) {
	super(new PubsubInputDStream(_ssc, _subscription, _batchSize, _decodeData), STRING_CLASS_TAG);
}
 
Developer: SignifAi | Project: Spark-PubSub | Lines: 5 | Source: PubsubReceiverInputDStream.java


Example 7: createStream

import org.apache.spark.streaming.StreamingContext; // import the required package/class
public static ReceiverInputDStream<String> createStream(StreamingContext ssc, String accessToken,
        BatchRequestBuilder[] batchRequestBuilders) {
    return new FacebookInputDStream(ssc, accessToken, batchRequestBuilders, StorageLevel.MEMORY_AND_DISK_2());
}
 
Developer: ogidogi | Project: laughing-octo-sansa | Lines: 5 | Source: FacebookUtils.java


Example 8: launch

import org.apache.spark.streaming.StreamingContext; // import the required package/class
public static <E> DStream<MessageAndMetadata<E>> launch(
        StreamingContext ssc, Properties pros, int numberOfReceivers,
        StorageLevel storageLevel, KafkaMessageHandler<E> messageHandler) {
    JavaStreamingContext jsc = new JavaStreamingContext(ssc);
    return createStream(jsc, pros, numberOfReceivers, storageLevel, messageHandler).dstream();
}
 
Developer: dibbhatt | Project: kafka-spark-consumer | Lines: 7 | Source: ReceiverLauncher.java
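
The launch method above bridges from the Scala StreamingContext to its Java-friendly wrapper. A minimal sketch of that bridging pattern, in both directions (class and app names here are illustrative):

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.StreamingContext;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class ContextBridge {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("bridge-demo");
        // Create the Java-friendly wrapper directly...
        JavaStreamingContext jsc = new JavaStreamingContext(conf, new Duration(5000));
        // ...and recover the underlying Scala StreamingContext when a
        // Scala-based API requires it.
        StreamingContext ssc = jsc.ssc();
        // Wrapping also works in the other direction, as launch(...) above does:
        JavaStreamingContext wrapped = new JavaStreamingContext(ssc);
    }
}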


Example 9: setStreamingContext

import org.apache.spark.streaming.StreamingContext; // import the required package/class
@Override
public void setStreamingContext(StreamingContext context) {
    this.ssc = new JavaStreamingContext(context);
}
 
Developer: apache | Project: oodt | Lines: 5 | Source: StreamingPalindromeExample.java


Example 10: asStreamOf

import org.apache.spark.streaming.StreamingContext; // import the required package/class
/**
 * @param ssc the (Scala-based) Spark Streaming Context
 * @return a Spark Stream, belonging to the provided Context, that will collect NATS Messages
 */
public ReceiverInputDStream<R> asStreamOf(StreamingContext ssc) {
	return ssc.receiverStream(this, scala.reflect.ClassTag$.MODULE$.apply(String.class));
}
 
Developer: Logimethods | Project: nats-connector-spark | Lines: 8 | Source: StandardNatsToSparkConnectorImpl.java
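
Examples 10 and 11 pass an explicit scala.reflect.ClassTag because Scala's implicit ClassTag parameters must be supplied by hand when calling Scala APIs from Java. A minimal sketch of building one (the class name here is illustrative):

import scala.reflect.ClassTag;
import scala.reflect.ClassTag$;

public class ClassTagDemo {
    public static void main(String[] args) {
        // Build the ClassTag explicitly, as the connector above does.
        ClassTag<String> tag = ClassTag$.MODULE$.apply(String.class);
        System.out.println(tag); // prints the runtime class name
    }
}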


Example 11: asStreamOfKeyValue

import org.apache.spark.streaming.StreamingContext; // import the required package/class
/**
 * @param ssc the (Scala-based) Spark Streaming Context
 * @return a Spark Stream, belonging to the provided Context,
 * that will collect NATS Messages as Tuples of (the NATS Subject) / (the NATS Payload)
 */
public ReceiverInputDStream<Tuple2<String, R>> asStreamOfKeyValue(StreamingContext ssc) {
	return ssc.receiverStream(this.storedAsKeyValue(), scala.reflect.ClassTag$.MODULE$.apply(Tuple2.class));
}
 
Developer: Logimethods | Project: nats-connector-spark | Lines: 9 | Source: StandardNatsToSparkConnectorImpl.java


Example 12: setStreamingContext

import org.apache.spark.streaming.StreamingContext; // import the required package/class
/**
 * Set the streaming context to run with.
 * @param context the StreamingContext to use
 */
public void setStreamingContext(StreamingContext context);
 
Developer: apache | Project: oodt | Lines: 6 | Source: StreamingInstance.java
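
Example 9 above is one concrete implementation of this interface method. A minimal sketch of an implementing class, mirroring that pattern (the class name is hypothetical, and the interface's other methods are omitted for brevity):

import org.apache.spark.streaming.StreamingContext;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// Hypothetical implementor following the pattern in Example 9.
public class MyStreamingInstance {
    private JavaStreamingContext ssc;

    public void setStreamingContext(StreamingContext context) {
        // Wrap the Scala context in its Java-friendly counterpart.
        this.ssc = new JavaStreamingContext(context);
    }
}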



Note: The org.apache.spark.streaming.StreamingContext class examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective authors, and copyright remains with them; consult each project's License before distributing or using the code. Do not reproduce without permission.

