Java StreamJob Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.streaming.StreamJob. If you are wondering what the StreamJob class is for and how to use it, the selected code examples below may help.



The StreamJob class belongs to the org.apache.hadoop.streaming package. Five code examples of the class are shown below, ordered by popularity by default.
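Before looking at the collected examples, the following minimal sketch shows the most common way to launch a Hadoop Streaming job programmatically: StreamJob implements the Tool interface, so it can be handed to ToolRunner together with a streaming-style argument list (the same pattern appears in Example 5 below). The input/output paths and the mapper/reducer commands are placeholders, not taken from any of the examples.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.streaming.StreamJob;
import org.apache.hadoop.util.ToolRunner;

public class StreamJobDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder HDFS paths and commands; adjust for your cluster.
    String[] streamingArgs = {
      "-input", "/user/demo/input",
      "-output", "/user/demo/output",
      "-mapper", "/bin/cat",      // identity mapper
      "-reducer", "/usr/bin/wc"   // counts the lines it receives
    };
    // ToolRunner parses generic options (e.g. -D key=value) and then calls
    // StreamJob.run(), which builds and submits the streaming job.
    int exitCode = ToolRunner.run(new Configuration(), new StreamJob(), streamingArgs);
    System.exit(exitCode);
  }
}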

Example 1: addJob

import org.apache.hadoop.streaming.StreamJob; // import the required package/class
public void addJob(int numReducers, boolean mapoutputCompressed,
    boolean outputCompressed, Size size, JobControl gridmix) {
  final String prop = String.format("streamSort.%sJobs.inputFiles", size);
  final String indir = getInputDirsFor(prop, size.defaultPath(VARINFLTEXT));
  final String outdir = addTSSuffix("perf-out/stream-out-dir-" + size);

  StringBuffer sb = new StringBuffer();
  sb.append("-input ").append(indir).append(" ");
  sb.append("-output ").append(outdir).append(" ");
  sb.append("-mapper cat ");
  sb.append("-reducer cat ");
  sb.append("-numReduceTasks ").append(numReducers);
  String[] args = sb.toString().split(" ");

  clearDir(outdir);
  try {
    JobConf jobconf = StreamJob.createJob(args);
    jobconf.setJobName("GridmixStreamingSorter." + size);
    jobconf.setCompressMapOutput(mapoutputCompressed);
    jobconf.setBoolean("mapred.output.compress", outputCompressed);
    Job job = new Job(jobconf);
    gridmix.addJob(job);
  } catch (Exception ex) {
    ex.printStackTrace();
  }
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 27, Source file: GridMixRunner.java
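A note on the pattern above: StreamJob.createJob(args) turns a streaming-style argument list (-input, -output, -mapper, -reducer, -numReduceTasks) into a JobConf, which the caller is still free to tune (job name, map-output and job-output compression) before wrapping it in a Job and adding it to the GridMix JobControl.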


Example 2: getJobId

import org.apache.hadoop.streaming.StreamJob; // import the required package/class
private JobID getJobId(String [] runtimeArgs, String [] otherArgs) 
    throws IOException {
  JobID jobId = null;
  final RunStreamJob runSJ;
  StreamJob streamJob = new StreamJob();
  int counter = 0;
  JTClient jtClient = cluster.getJTClient();
  JobClient jobClient = jtClient.getClient();
  int totalJobs = jobClient.getAllJobs().length;
  String [] args = buildArgs(runtimeArgs, otherArgs);
  cleanup(outputDir, conf);
  conf.setBoolean("mapreduce.job.complete.cancel.delegation.tokens", false);
  runSJ = new RunStreamJob(conf, streamJob, args);
  runSJ.start();
  while (counter++ < 60) {
    if (jobClient.getAllJobs().length - totalJobs == 0) {
      UtilsForTests.waitFor(1000);
    } else if (jobClient.getAllJobs()[0].getRunState() == JobStatus.RUNNING) {
      jobId = jobClient.getAllJobs()[0].getJobID();
      break;
    } else {
      UtilsForTests.waitFor(1000);
    }
  }  
  return jobId;
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 27, Source file: TestStreamingJobProcessTree.java


Example 3: getJobIdOfRunningStreamJob

import org.apache.hadoop.streaming.StreamJob; // import the required package/class
private JobID getJobIdOfRunningStreamJob(String [] runtimeArgs) 
    throws IOException {
  JobID jobId = null;
  StreamJob streamJob = new StreamJob();
  int counter = 0;
  jtClient = cluster.getJTClient();
  client = jtClient.getClient();
  int totalJobs = client.getAllJobs().length;
  String [] streamingArgs = generateArgs(runtimeArgs);
  cleanup(outputDir, conf);
  conf.setBoolean("mapreduce.job.complete.cancel.delegation.tokens", false);
  final RunStreamingJob streamJobThread = new RunStreamingJob(conf,
      streamJob,streamingArgs);
  streamJobThread.start();
  while (counter++ < 60) {
    if (client.getAllJobs().length - totalJobs == 0) {
      UtilsForTests.waitFor(1000);
    } else if (client.getAllJobs()[0].getRunState() == JobStatus.RUNNING) {
      jobId = client.getAllJobs()[0].getJobID();
      break;
    } else {
     UtilsForTests.waitFor(1000);
    }
  }  
  return jobId;
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 27, Source file: TestTaskKillingOfStreamingJob.java
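Examples 2 and 3 follow the same pattern: the streaming job is started on a separate thread (RunStreamJob and RunStreamingJob appear to be small helper threads defined in the enclosing test classes), and the test then polls JobClient.getAllJobs() roughly once per second, for up to 60 seconds, until a job beyond the previously known count reaches the RUNNING state, at which point its JobID is returned.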


Example 4: addJob

import org.apache.hadoop.streaming.StreamJob; // import the required package/class
public void addJob(int numReducers, boolean mapoutputCompressed,
    boolean outputCompressed, Size size, JobControl gridmix) {
  final String prop = String.format("streamSort.%sJobs.inputFiles", size);
  final String indir = 
    getInputDirsFor(prop, size.defaultPath(VARINFLTEXT));
  final String outdir = addTSSuffix("perf-out/stream-out-dir-" + size);

  StringBuffer sb = new StringBuffer();
  sb.append("-input ").append(indir).append(" ");
  sb.append("-output ").append(outdir).append(" ");
  sb.append("-mapper cat ");
  sb.append("-reducer cat ");
  sb.append("-numReduceTasks ").append(numReducers);
  String[] args = sb.toString().split(" ");

  clearDir(outdir);
  try {
    Configuration conf = StreamJob.createJob(args);
    conf.setBoolean(FileOutputFormat.COMPRESS, outputCompressed);
    conf.setBoolean(MRJobConfig.MAP_OUTPUT_COMPRESS, mapoutputCompressed);
    Job job = new Job(conf, "GridmixStreamingSorter." + size);
    ControlledJob cjob = new ControlledJob(job, null);
    gridmix.addJob(cjob);
  } catch (Exception ex) {
    ex.printStackTrace();
  }
}
 
Developer ID: rekhajoshm, Project: mapreduce-fork, Lines of code: 28, Source file: GridMixRunner.java
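Example 4 is the same GridMix helper as Example 1, ported to the newer MapReduce API: the compression flags are set through the FileOutputFormat.COMPRESS and MRJobConfig.MAP_OUTPUT_COMPRESS constants rather than raw property names, and the job is wrapped in a ControlledJob before being handed to JobControl.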


Example 5: executeStreamingJob

import org.apache.hadoop.streaming.StreamJob; // import the required package/class
private void executeStreamingJob(boolean sameUser) throws Exception {
  conf = cluster.getConf();
  if (sameUser == true) {
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    LOG.info("LoginUser:" + ugi);
  } else {
    conf.set("user.name","hadoop1");
    LOG.info("user name changed is :" + conf.get("user.name"));
  }

  if (conf.get("mapred.task.tracker.task-controller").
      equals("org.apache.hadoop.mapred.LinuxTaskController")) {
    StreamJob streamJob = new StreamJob();
    String shellFile = System.getProperty("user.dir") +
      "/src/test/system/scripts/StreamMapper.sh";
    String runtimeArgs [] = {
      "-D", "mapred.job.name=Streaming job",
      "-D", "mapred.map.tasks=1",
      "-D", "mapred.reduce.tasks=1",
      "-D", "mapred.tasktracker.tasks.sleeptime-before-sigkill=3000",
      "-input", inputDir.toString(),
      "-output", outputDir.toString(),
      "-mapper", "StreamMapper.sh",
      "-reducer","/bin/cat",
      "-file", shellFile
    };

    createInput(inputDir, conf);
    cleanup(outputDir, conf);

    // If the job is submitted as the same user, it should pass.
    // If the job is submitted as a different user, it should fail
    // and the test case should assert.
    if (sameUser == true) {
      Assert.assertEquals(0, ToolRunner.run(conf, streamJob, runtimeArgs));
    } else {
      if ((ToolRunner.run(conf, streamJob, runtimeArgs)) != 0) {
        LOG.info("Job failed as expected");
      } else {
        Assert.fail("Job passed with different user");
      }
    }
  }
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 45, Source file: TestLinuxTaskControllerOtherUserStreamingJob.java
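Example 5 only exercises the streaming path when the cluster is configured to use org.apache.hadoop.mapred.LinuxTaskController. The job is submitted through ToolRunner.run(conf, streamJob, runtimeArgs); when the submitting user matches the login user the exit code is asserted to be 0, and when a different user name is injected via the user.name property the test expects a non-zero exit code.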



Note: the org.apache.hadoop.streaming.StreamJob class examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs; the snippets were selected from community-contributed open-source projects. Copyright of the source code remains with the original authors. Please consult each project's license before redistributing or reusing the code; do not reproduce this article without permission.

