
Java DAG Class Code Examples


This article compiles typical usage examples of the Java class com.datatorrent.api.DAG. If you are wondering what the DAG class does, or how to use it in practice, the curated code examples below should help.



The DAG class belongs to the com.datatorrent.api package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps surface better Java code examples.

Example 1: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration configuration) {
    LogLevelProperties props = new LogLevelProperties(configuration);

    //dag.setAttribute(Context.DAGContext.STREAMING_WINDOW_SIZE_MILLIS, props.getWindowMillis());

    // create the operator to receive data from NiFi
    WindowDataManager inManager = new WindowDataManager.NoopWindowDataManager();
    NiFiSinglePortInputOperator nifiInput = getNiFiInput(dag, props, inManager);

    // create the operator to count log levels over a window
    String attributeName = props.getLogLevelAttribute();
    LogLevelWindowCount count = dag.addOperator("count", new LogLevelWindowCount(attributeName));
    dag.setAttribute(count, Context.OperatorContext.APPLICATION_WINDOW_COUNT, props.getAppWindowCount());

    // create the operator to send data back to NiFi
    WindowDataManager outManager = new WindowDataManager.NoopWindowDataManager();
    NiFiSinglePortOutputOperator nifiOutput = getNiFiOutput(dag, props, outManager);

    // configure the dag to get nifi-in -> count -> nifi-out
    dag.addStream("nifi-in-count", nifiInput.outputPort, count.input);
    dag.addStream("count-nifi-out", count.output, nifiOutput.inputPort);
}
 
Developer: bbende | Project: nifi-streaming-examples | Lines: 24 | Source: LogLevelApplication.java


Example 2: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration conf)
{

  /*
   * Define HDFS and S3 as input and output module operators respectively.
   */
  FSInputModule inputModule = dag.addModule("HDFSInputModule", new FSInputModule());
  S3OutputModule outputModule = dag.addModule("S3OutputModule", new S3OutputModule());

  /*
   * Create a stream for Metadata blocks from HDFS to S3 output modules.
   * Note: DAG locality is set to CONTAINER_LOCAL for performance benefits by
   * avoiding any serialization/deserialization of objects.
   */
  dag.addStream("FileMetaData", inputModule.filesMetadataOutput, outputModule.filesMetadataInput);
  dag.addStream("BlocksMetaData", inputModule.blocksMetadataOutput, outputModule.blocksMetadataInput)
          .setLocality(DAG.Locality.CONTAINER_LOCAL);

  /*
   * Create a stream for Data blocks from HDFS to S3 output modules.
   */
  dag.addStream("BlocksData", inputModule.messages, outputModule.blockData).setLocality(DAG.Locality.CONTAINER_LOCAL);
}
 
Developer: DataTorrent | Project: app-templates | Lines: 25 | Source: Application.java


Example 3: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration conf)
{
  //Add S3 as input and redshift as output operators to DAG
  S3RecordReaderModule inputModule = dag.addModule("S3Input", new S3RecordReaderModule());
  setS3FilesToInput(inputModule, conf);

  CsvParser csvParser = dag.addOperator("csvParser", CsvParser.class);
  TransformOperator transform = dag.addOperator("transform", new TransformOperator());
  Map<String, String> expMap = Maps.newHashMap();
  expMap.put("name", "{$.name}.toUpperCase()");
  transform.setExpressionMap(expMap);
  CsvFormatter formatter = dag.addOperator("formatter", new CsvFormatter());
  StringToByteArrayConverterOperator converterOp = dag.addOperator("converter", new StringToByteArrayConverterOperator());
  RedshiftOutputModule redshiftOutput = dag.addModule("RedshiftOutput", new RedshiftOutputModule());

  //Create streams
  dag.addStream("data", inputModule.records, csvParser.in);
  dag.addStream("pojo", csvParser.out, transform.input);
  dag.addStream("transformed", transform.output, formatter.in);
  dag.addStream("string", formatter.out, converterOp.input).setLocality(DAG.Locality.THREAD_LOCAL);
  dag.addStream("writeToJDBC", converterOp.output, redshiftOutput.input);
}
 
Developer: DataTorrent | Project: app-templates | Lines: 24 | Source: Application.java


Example 4: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
public void populateDAG(DAG dag, Configuration conf)
{
  KafkaSinglePortInputOperator kafkaInputOperator = dag.addOperator("kafkaInput", KafkaSinglePortInputOperator.class);
  JsonParser parser = dag.addOperator("parser", JsonParser.class);
  TransformOperator transform = dag.addOperator("transform", new TransformOperator());
  FilterOperator filterOperator = dag.addOperator("filter", new FilterOperator());
  JsonFormatter formatter = dag.addOperator("formatter", JsonFormatter.class);

  StringFileOutputOperator fileOutput = dag.addOperator("fileOutput", new StringFileOutputOperator());
  
  dag.addStream("data", kafkaInputOperator.outputPort, parser.in);
  dag.addStream("pojo", parser.out, filterOperator.input);
  dag.addStream("filtered", filterOperator.truePort, transform.input);
  dag.addStream("transformed", transform.output, formatter.in);
  dag.addStream("string", formatter.out, fileOutput.input);
}
 
Developer: DataTorrent | Project: app-templates | Lines: 17 | Source: Application.java


Example 5: copyShallow

import com.datatorrent.api.DAG; // import the required package/class
private static void copyShallow(DAG from, DAG to) {
  checkArgument(from.getClass() == to.getClass(), "must be same class %s %s",
      from.getClass(), to.getClass());
  Field[] fields = from.getClass().getDeclaredFields();
  AccessibleObject.setAccessible(fields, true);
  for (int i = 0; i < fields.length; i++) {
    Field field = fields[i];
    if (!java.lang.reflect.Modifier.isStatic(field.getModifiers())) {
      try {
        field.set(to, field.get(from));
      } catch (IllegalArgumentException | IllegalAccessException e) {
        throw new RuntimeException(e);
      }
    }
  }
}
 
Developer: apache | Project: beam | Lines: 17 | Source: ApexYarnLauncher.java


Example 6: getApexLauncher

import com.datatorrent.api.DAG; // import the required package/class
@Override
protected Launcher<?> getApexLauncher() {
  return new Launcher<AppHandle>() {
    @Override
    public AppHandle launchApp(StreamingApplication application,
        Configuration configuration, AttributeMap launchParameters)
        throws org.apache.apex.api.Launcher.LauncherException {
      EmbeddedAppLauncher<?> embeddedLauncher = Launcher.getLauncher(LaunchMode.EMBEDDED);
      DAG dag = embeddedLauncher.getDAG();
      application.populateDAG(dag, new Configuration(false));
      String appName = dag.getValue(DAGContext.APPLICATION_NAME);
      Assert.assertEquals("DummyApp", appName);
      return new AppHandle() {
        @Override
        public boolean isFinished() {
          return true;
        }
        @Override
        public void shutdown(org.apache.apex.api.Launcher.ShutdownMode arg0) {
        }
      };
    }
  };
}
 
Developer: apache | Project: beam | Lines: 25 | Source: ApexYarnLauncherTest.java


Example 7: testParDoChaining

import com.datatorrent.api.DAG; // import the required package/class
@Test
public void testParDoChaining() throws Exception {
  Pipeline p = Pipeline.create();
  long numElements = 1000;
  PCollection<Long> input = p.apply(GenerateSequence.from(0).to(numElements));
  PAssert.thatSingleton(input.apply("Count", Count.<Long>globally())).isEqualTo(numElements);

  ApexPipelineOptions options = PipelineOptionsFactory.as(ApexPipelineOptions.class);
  DAG dag = TestApexRunner.translate(p, options);

  String[] expectedThreadLocal = { "/CreateActual/FilterActuals/Window.Assign" };
  Set<String> actualThreadLocal = Sets.newHashSet();
  for (DAG.StreamMeta sm : dag.getAllStreamsMeta()) {
    DAG.OutputPortMeta opm = sm.getSource();
    if (sm.getLocality() == Locality.THREAD_LOCAL) {
      String name = opm.getOperatorMeta().getName();
      String prefix = "PAssert$";
      if (name.startsWith(prefix)) {
        // remove indeterministic prefix
        name = name.substring(prefix.length() + 1);
      }
      actualThreadLocal.add(name);
    }
  }
  Assert.assertThat(actualThreadLocal, Matchers.hasItems(expectedThreadLocal));
}
 
Developer: apache | Project: beam | Lines: 27 | Source: ApexRunnerTest.java


Example 8: getFilteredApacheAggregationCountOper

import com.datatorrent.api.DAG; // import the required package/class
private MultiWindowDimensionAggregation getFilteredApacheAggregationCountOper(String name, DAG dag)
{
  MultiWindowDimensionAggregation oper = dag.addOperator(name, MultiWindowDimensionAggregation.class);
  oper.setWindowSize(3);
  List<int[]> dimensionArrayList = new ArrayList<int[]>();
  int[] dimensionArray1 = {0};
  int[] dimensionArray2 = {1};

  dimensionArrayList.add(dimensionArray1);
  dimensionArrayList.add(dimensionArray2);

  oper.setDimensionArray(dimensionArrayList);

  oper.setTimeBucket(TIME_BUCKETS.m.name());
  oper.setDimensionKeyVal("0"); // aggregate on count
  oper.setWindowSize(2); // 1 sec window

  return oper;
}
 
Developer: apache | Project: apex-malhar | Lines: 20 | Source: Application.java


Example 9: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration conf)
{
  // Setup the operator to get the data from twitter sample stream injected into the system.
  TwitterSampleInput twitterFeed = new TwitterSampleInput();
  twitterFeed = dag.addOperator("TweetSampler", twitterFeed);

  // Setup a node to count the unique Hashtags within a window.
  UniqueCounter<String> uniqueCounter = dag.addOperator("UniqueHashtagCounter", new UniqueCounter<String>());

  // Get the aggregated Hashtag counts and count them over last 5 mins.
  WindowedTopCounter<String> topCounts = dag.addOperator("TopCounter", new WindowedTopCounter<String>());
  topCounts.setTopCount(10);
  topCounts.setSlidingWindowWidth(600);
  topCounts.setDagWindowWidth(1);

  dag.addStream("TwittedHashtags", twitterFeed.hashtag, uniqueCounter.data).setLocality(locality);
  // Count unique Hashtags
  dag.addStream("UniqueHashtagCounts", uniqueCounter.count, topCounts.input);

  TwitterTopCounterApplication.consoleOutput(dag, "topHashtags", topCounts.output, SNAPSHOT_SCHEMA, "hashtag");
}
 
Developer: apache | Project: apex-malhar | Lines: 23 | Source: TwitterTrendingHashtagsApplication.java


Example 10: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration configuration)
{
  WordGenerator inputOperator = new WordGenerator();
  KeyedWindowedOperatorImpl<String, Long, MutableLong, Long> windowedOperator = new KeyedWindowedOperatorImpl<>();
  Accumulation<Long, MutableLong, Long> sum = new SumAccumulation();

  windowedOperator.setAccumulation(sum);
  windowedOperator.setDataStorage(new InMemoryWindowedKeyedStorage<String, MutableLong>());
  windowedOperator.setRetractionStorage(new InMemoryWindowedKeyedStorage<String, Long>());
  windowedOperator.setWindowStateStorage(new InMemoryWindowedStorage<WindowState>());
  windowedOperator.setWindowOption(new WindowOption.TimeWindows(Duration.standardMinutes(1)));
  windowedOperator.setTriggerOption(TriggerOption.AtWatermark().withEarlyFiringsAtEvery(Duration.millis(1000)).accumulatingAndRetractingFiredPanes());
  //windowedOperator.setAllowedLateness(Duration.millis(14000));

  ConsoleOutputOperator outputOperator = new ConsoleOutputOperator();
  dag.addOperator("inputOperator", inputOperator);
  dag.addOperator("windowedOperator", windowedOperator);
  dag.addOperator("outputOperator", outputOperator);
  dag.addStream("input_windowed", inputOperator.output, windowedOperator.input);
  dag.addStream("windowed_output", windowedOperator.output, outputOperator.input);
}
 
Developer: apache | Project: apex-malhar | Lines: 23 | Source: Application.java


Example 11: buildDataset

import com.datatorrent.api.DAG; // import the required package/class
public void buildDataset()
{
  hashMapping1[0] = "prop1:t1.col1:INT";
  hashMapping1[1] = "prop3:t1.col3:STRING";
  hashMapping1[2] = "prop2:t1.col2:DATE";
  hashMapping1[3] = "prop4:t2.col1:STRING";
  hashMapping1[4] = "prop5:t2.col2:INT";

  arrayMapping1[0] = "t1.col1:INT";
  arrayMapping1[1] = "t1.col3:STRING";
  arrayMapping1[2] = "t1.col2:DATE";
  arrayMapping1[3] = "t2.col2:STRING";
  arrayMapping1[4] = "t2.col1:INT";

  attrmap.put(DAG.APPLICATION_ID, "myMongoDBOouputOperatorAppId");

}
 
Developer: apache | Project: apex-malhar | Lines: 18 | Source: MongoDBOutputOperatorTest.java


Example 12: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration entries)
{
  /* Generate random key-value pairs */
  RandomDataGenerator randGen = dag.addOperator("randomgen", new RandomDataGenerator());

  /* Initialize with three partitions to start with */
  UniqueCounter<KeyValPair<String, Object>> uniqCount =
      dag.addOperator("uniqevalue", new UniqueCounter<KeyValPair<String, Object>>());
  MapToKeyHashValuePairConverter<KeyValPair<String, Object>, Integer> converter = dag.addOperator("converter", new MapToKeyHashValuePairConverter());
  uniqCount.setCumulative(false);
  dag.setAttribute(randGen, Context.OperatorContext.PARTITIONER, new StatelessPartitioner<UniqueCounter<KeyValPair<String, Object>>>(3));

  ConsoleOutputOperator output = dag.addOperator("output", new ConsoleOutputOperator());

  dag.addStream("datain", randGen.outPort, uniqCount.data);
  dag.addStream("convert", uniqCount.count, converter.input).setLocality(Locality.THREAD_LOCAL);
  dag.addStream("consoutput", converter.output, output.input);
}
 
Developer: apache | Project: apex-malhar | Lines: 20 | Source: UniqueKeyValCountExample.java


Example 13: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration conf)
{
  CustomRandomEventGenerator randomEventGenerator = dag.addOperator(
      "randomEventGenerator", new CustomRandomEventGenerator());
  randomEventGenerator.setMaxCountOfWindows(MAX_WINDOW_COUNT);
  randomEventGenerator.setTuplesBlastIntervalMillis(TUPLE_BLAST_MILLIS);
  randomEventGenerator.setTuplesBlast(TUPLE_BLAST);

  LOG.debug("Before making output operator");
  MemsqlPOJOOutputOperator memsqlOutputOperator = dag.addOperator("memsqlOutputOperator",
      new MemsqlPOJOOutputOperator());
  LOG.debug("After making output operator");

  memsqlOutputOperator.setBatchSize(DEFAULT_BATCH_SIZE);

  dag.addStream("memsqlConnector",
      randomEventGenerator.integer_data,
      memsqlOutputOperator.input);
}
 
Developer: apache | Project: apex-malhar | Lines: 21 | Source: MemsqlOutputBenchmark.java


Example 14: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration conf)
{
  // Create operators for each step
  // settings are applied by the platform using the config file.
  KafkaSinglePortStringInputOperator kafkaInput = dag.addOperator("kafkaInput", new KafkaSinglePortStringInputOperator());
  DeserializeJSON deserializeJSON = dag.addOperator("deserialize", new DeserializeJSON());
  FilterTuples filterTuples = dag.addOperator("filterTuples", new FilterTuples());
  FilterFields filterFields = dag.addOperator("filterFields", new FilterFields());
  RedisJoin redisJoin = dag.addOperator("redisJoin", new RedisJoin());
  CampaignProcessor campaignProcessor = dag.addOperator("campaignProcessor", new CampaignProcessor());

  // Connect the Ports in the Operators
  dag.addStream("deserialize", kafkaInput.outputPort, deserializeJSON.input).setLocality(DAG.Locality.CONTAINER_LOCAL);
  dag.addStream("filterTuples", deserializeJSON.output, filterTuples.input).setLocality(DAG.Locality.CONTAINER_LOCAL);
  dag.addStream("filterFields", filterTuples.output, filterFields.input).setLocality(DAG.Locality.CONTAINER_LOCAL);
  dag.addStream("redisJoin", filterFields.output, redisJoin.input).setLocality(DAG.Locality.CONTAINER_LOCAL);
  dag.addStream("output", redisJoin.output, campaignProcessor.input);

  dag.setInputPortAttribute(deserializeJSON.input, Context.PortContext.PARTITION_PARALLEL, true);
  dag.setInputPortAttribute(filterTuples.input, Context.PortContext.PARTITION_PARALLEL, true);
  dag.setInputPortAttribute(filterFields.input, Context.PortContext.PARTITION_PARALLEL, true);
  dag.setInputPortAttribute(redisJoin.input, Context.PortContext.PARTITION_PARALLEL, true);
}
 
Developer: yahoo | Project: streaming-benchmarks | Lines: 25 | Source: Application.java


Example 15: testJdbcInputOperator

import com.datatorrent.api.DAG; // import the required package/class
@Test
public void testJdbcInputOperator()
{
  JdbcStore store = new JdbcStore();
  store.setDatabaseDriver(DB_DRIVER);
  store.setDatabaseUrl(URL);

  com.datatorrent.api.Attribute.AttributeMap.DefaultAttributeMap attributeMap = new com.datatorrent.api.Attribute.AttributeMap.DefaultAttributeMap();
  attributeMap.put(DAG.APPLICATION_ID, APP_ID);
  OperatorContext context = mockOperatorContext(OPERATOR_ID, attributeMap);

  TestInputOperator inputOperator = new TestInputOperator();
  inputOperator.setStore(store);
  insertEventsInTable(10);

  CollectorTestSink<Object> sink = new CollectorTestSink<>();
  inputOperator.outputPort.setSink(sink);

  inputOperator.setup(context);
  inputOperator.beginWindow(0);
  inputOperator.emitTuples();
  inputOperator.endWindow();

  Assert.assertEquals("rows from db", 10, sink.collectedTuples.size());
}
 
Developer: apache | Project: apex-malhar | Lines: 26 | Source: JdbcPojoOperatorTest.java


Example 16: setup

import com.datatorrent.api.DAG; // import the required package/class
@BeforeClass
public static void setup() throws Exception
{
  valueCounter = new IntegerUniqueValueCountAppender();
  Class.forName(INMEM_DB_DRIVER).newInstance();
  Connection con = DriverManager.getConnection(INMEM_DB_URL, new Properties());
  Statement stmt = con.createStatement();
  stmt.execute("CREATE TABLE IF NOT EXISTS " + TABLE_NAME + " (col1 INTEGER, col2 INTEGER, col3 BIGINT)");
  com.datatorrent.api.Attribute.AttributeMap.DefaultAttributeMap attributes = new com.datatorrent.api.Attribute.AttributeMap.DefaultAttributeMap();
  attributes.put(DAG.APPLICATION_ID, APP_ID);
  attributes.put(DAG.APPLICATION_PATH, applicationPath);
  attributes.put(OperatorContext.ACTIVATION_WINDOW_ID, 0L);
  valueCounter.setTableName(TABLE_NAME);
  valueCounter.getStore().setDatabaseDriver(INMEM_DB_DRIVER);
  valueCounter.getStore().setDatabaseUrl(INMEM_DB_URL);
  OperatorContext context = mockOperatorContext(OPERATOR_ID, attributes);
  valueCounter.setup(context);
}
 
Developer: apache | Project: apex-malhar | Lines: 19 | Source: DistributedDistinctTest.java


Example 17: starting

import com.datatorrent.api.DAG; // import the required package/class
@Override
protected void starting(Description description)
{
  super.starting(description);
  outputPath = new File(
      "target" + Path.SEPARATOR + description.getClassName() + Path.SEPARATOR + description.getMethodName())
          .getPath();

  Attribute.AttributeMap attributes = new Attribute.AttributeMap.DefaultAttributeMap();
  attributes.put(DAG.DAGContext.APPLICATION_ID, description.getClassName());
  attributes.put(DAG.DAGContext.APPLICATION_PATH, outputPath);
  context = mockOperatorContext(1, attributes);

  underTest = new S3Reconciler();
  underTest.setAccessKey("");
  underTest.setSecretKey("");

  underTest.setup(context);

  MockitoAnnotations.initMocks(this);
  PutObjectResult result = new PutObjectResult();
  result.setETag(outputPath);
  when(s3clientMock.putObject((PutObjectRequest)any())).thenReturn(result);
  underTest.setS3client(s3clientMock);
}
 
Developer: apache | Project: apex-malhar | Lines: 26 | Source: S3ReconcilerTest.java


Example 18: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
/**
 * Create the DAG
 */
@Override
public void populateDAG(DAG dag, Configuration conf)
{

  InputGenerator randomInputGenerator = dag.addOperator("rand", new InputGenerator());
  FaithfulRScript rScriptOp = dag.addOperator("rScriptOp", new FaithfulRScript("com/datatorrent/examples/r/oldfaithful/eruptionModel.R", "eruptionModel", "retVal"));
  ConsoleOutputOperator consoles = dag.addOperator("consoles", new ConsoleOutputOperator());

  Map<String, FaithfulRScript.REXP_TYPE> argTypeMap = new HashMap<String, FaithfulRScript.REXP_TYPE>();

  argTypeMap.put("ELAPSEDTIME", FaithfulRScript.REXP_TYPE.REXP_INT);
  argTypeMap.put("ERUPTIONS", FaithfulRScript.REXP_TYPE.REXP_ARRAY_DOUBLE);
  argTypeMap.put("WAITING", FaithfulRScript.REXP_TYPE.REXP_ARRAY_INT);

  rScriptOp.setArgTypeMap(argTypeMap);

  dag.addStream("ingen_faithfulRscript", randomInputGenerator.outputPort, rScriptOp.faithfulInput).setLocality(locality);
  dag.addStream("ingen_faithfulRscript_eT", randomInputGenerator.elapsedTime, rScriptOp.inputElapsedTime).setLocality(locality);
  dag.addStream("faithfulRscript_console_s", rScriptOp.strOutput, consoles.input).setLocality(locality);

}
 
Developer: apache | Project: apex-malhar | Lines: 25 | Source: OldFaithfulApplication.java


Example 19: testDeduperRedeploy

import com.datatorrent.api.DAG; // import the required package/class
@Test
public void testDeduperRedeploy() throws Exception
{
  com.datatorrent.api.Attribute.AttributeMap.DefaultAttributeMap attributes =
      new com.datatorrent.api.Attribute.AttributeMap.DefaultAttributeMap();
  attributes.put(DAG.APPLICATION_ID, APP_ID);
  attributes.put(DAG.APPLICATION_PATH, applicationPath);
  SimpleEvent simpleEvent = new SimpleEvent();
  simpleEvent.setId(100);
  simpleEvent.setHhmm(System.currentTimeMillis() + "");
  deduper.addEventManuallyToWaiting(simpleEvent);
  deduper.setup(new OperatorContextTestHelper.TestIdOperatorContext(0, attributes));
  eventBucketExchanger.exchange(null, 500, TimeUnit.MILLISECONDS);
  deduper.endWindow();
  deduper.teardown();
}
 
Developer: DataTorrent | Project: Megh | Lines: 17 | Source: DeduperSimpleEventTest.java


Example 20: populateDAG

import com.datatorrent.api.DAG; // import the required package/class
@Override
public void populateDAG(DAG dag, Configuration conf)
{
  Level1Module m1 = dag.addModule("M1", new Level1Module());
  m1.setMemory(1024);
  m1.setPortMemory(1);
  m1.setLevel1ModuleProp(level2ModuleAProp1);

  Level1Module m2 = dag.addModule("M2", new Level1Module());
  m2.setMemory(2048);
  m2.setPortMemory(2);
  m2.setLevel1ModuleProp(level2ModuleAProp2);

  DummyOperator o1 = dag.addOperator("O1", new DummyOperator());
  o1.setOperatorProp(level2ModuleAProp3);

  dag.addStream("M1_M2&O1", m1.mOut, m2.mIn, o1.in).setLocality(DAG.Locality.CONTAINER_LOCAL);

  mIn.set(m1.mIn);
  mOut1.set(m2.mOut);
  mOut2.set(o1.out1);
}
 
Developer: apache | Project: apex-core | Lines: 23 | Source: TestModuleExpansion.java



Note: The com.datatorrent.api.DAG examples in this article were compiled from source code hosted on GitHub, MSDocs, and similar platforms. The snippets were selected from open-source projects, and copyright remains with the original authors. Please consult each project's license before distributing or reusing the code; do not republish without permission.

