Java MRBuilderUtils Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils. If you are unsure what MRBuilderUtils does, how to call it, or what real-world usage looks like, the selected code examples below should help.



The MRBuilderUtils class belongs to the org.apache.hadoop.mapreduce.v2.util package. Twenty code examples of the class are shown below, sorted by popularity by default.
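
Before the examples, here is a minimal, self-contained sketch of how MRBuilderUtils is typically used to build the MapReduce v2 record hierarchy (ApplicationId -> JobId -> TaskId -> TaskAttemptId). It only uses builder calls that appear in the examples below; the class name MRBuilderUtilsDemo is a placeholder for illustration, not part of Hadoop.

import org.apache.hadoop.mapreduce.v2.api.records.JobId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils;
import org.apache.hadoop.yarn.api.records.ApplicationId;

public class MRBuilderUtilsDemo { // hypothetical demo class
  public static void main(String[] args) {
    // Build a JobId from a YARN ApplicationId (cluster timestamp + application sequence number).
    ApplicationId appId = ApplicationId.newInstance(System.currentTimeMillis(), 1);
    JobId jobId = MRBuilderUtils.newJobId(appId, 1);

    // Derive a map TaskId from the JobId, then a TaskAttemptId from the TaskId.
    TaskId taskId = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP);
    TaskAttemptId attemptId = MRBuilderUtils.newTaskAttemptId(taskId, 0);

    System.out.println(attemptId);
  }
}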

Example 1: completeJobTasks

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private static void completeJobTasks(JobImpl job) {
  // complete the map tasks and the reduce tasks so we start committing
  int numMaps = job.getTotalMaps();
  for (int i = 0; i < numMaps; ++i) {
    job.handle(new JobTaskEvent(
        MRBuilderUtils.newTaskId(job.getID(), 1, TaskType.MAP),
        TaskState.SUCCEEDED));
    Assert.assertEquals(JobState.RUNNING, job.getState());
  }
  int numReduces = job.getTotalReduces();
  for (int i = 0; i < numReduces; ++i) {
    job.handle(new JobTaskEvent(
        MRBuilderUtils.newTaskId(job.getID(), 1, TaskType.MAP),
        TaskState.SUCCEEDED));
    Assert.assertEquals(JobState.RUNNING, job.getState());
  }
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: TestJobImpl.java


Example 2: createReq

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private ContainerRequestEvent
    createReq(JobId jobId, int taskAttemptId, int memory, String[] hosts,
        boolean earlierFailedAttempt, boolean reduce) {
  TaskId taskId;
  if (reduce) {
    taskId = MRBuilderUtils.newTaskId(jobId, 0, TaskType.REDUCE);
  } else {
    taskId = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP);
  }
  TaskAttemptId attemptId = MRBuilderUtils.newTaskAttemptId(taskId,
      taskAttemptId);
  Resource containerNeed = Resource.newInstance(memory, 1);
  if (earlierFailedAttempt) {
    return ContainerRequestEvent
        .createContainerRequestEventForFailedContainer(attemptId,
            containerNeed);
  }
  return new ContainerRequestEvent(attemptId, containerNeed, hosts,
      new String[] { NetworkTopology.DEFAULT_RACK });
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: TestRMContainerAllocator.java


Example 3: createTce

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private static TaskAttemptCompletionEvent createTce(int eventId,
    boolean isMap, TaskAttemptCompletionEventStatus status) {
  JobId jid = MRBuilderUtils.newJobId(12345, 1, 1);
  TaskId tid = MRBuilderUtils.newTaskId(jid, 0,
      isMap ? org.apache.hadoop.mapreduce.v2.api.records.TaskType.MAP
          : org.apache.hadoop.mapreduce.v2.api.records.TaskType.REDUCE);
  TaskAttemptId attemptId = MRBuilderUtils.newTaskAttemptId(tid, 0);
  RecordFactory recordFactory =
    RecordFactoryProvider.getRecordFactory(null);
  TaskAttemptCompletionEvent tce = recordFactory
      .newRecordInstance(TaskAttemptCompletionEvent.class);
  tce.setEventId(eventId);
  tce.setAttemptId(attemptId);
  tce.setStatus(status);
  return tce;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 17, Source: TestTaskAttemptListenerImpl.java


Example 4: getReport

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Override
public JobReport getReport() {
  readLock.lock();
  try {
    JobState state = getState();

    // jobFile can be null if the job is not yet inited.
    String jobFile =
        remoteJobConfFile == null ? "" : remoteJobConfFile.toString();

    StringBuilder diagsb = new StringBuilder();
    for (String s : getDiagnostics()) {
      diagsb.append(s).append("\n");
    }

    if (getInternalState() == JobStateInternal.NEW) {
      return MRBuilderUtils.newJobReport(jobId, jobName, username, state,
          appSubmitTime, startTime, finishTime, setupProgress, 0.0f, 0.0f,
          cleanupProgress, jobFile, amInfos, isUber, diagsb.toString());
    }

    computeProgress();
    JobReport report = MRBuilderUtils.newJobReport(jobId, jobName, username,
        state, appSubmitTime, startTime, finishTime, setupProgress,
        this.mapProgress, this.reduceProgress,
        cleanupProgress, jobFile, amInfos, isUber, diagsb.toString());
    return report;
  } finally {
    readLock.unlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: JobImpl.java


Example 5: createTce

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private static TaskAttemptCompletionEvent createTce(int eventId,
    boolean isMap, TaskAttemptCompletionEventStatus status) {
  JobId jid = MRBuilderUtils.newJobId(12345, 1, 1);
  TaskId tid = MRBuilderUtils.newTaskId(jid, 0,
      isMap ? org.apache.hadoop.mapreduce.v2.api.records.TaskType.MAP
          : org.apache.hadoop.mapreduce.v2.api.records.TaskType.REDUCE);
  TaskAttemptId attemptId = MRBuilderUtils.newTaskAttemptId(tid, 0);
  RecordFactory recordFactory = RecordFactoryProvider.getRecordFactory(null);
  TaskAttemptCompletionEvent tce = recordFactory
      .newRecordInstance(TaskAttemptCompletionEvent.class);
  tce.setEventId(eventId);
  tce.setAttemptId(attemptId);
  tce.setStatus(status);
  return tce;
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: TestTaskAttemptListenerImpl.java


Example 6: testRenameMapOutputForReduce

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Test
public void testRenameMapOutputForReduce() throws Exception {
  final JobConf conf = new JobConf();

  final MROutputFiles mrOutputFiles = new MROutputFiles();
  mrOutputFiles.setConf(conf);

  // make sure both dirs are distinct
  //
  conf.set(MRConfig.LOCAL_DIR, localDirs[0].toString());
  final Path mapOut = mrOutputFiles.getOutputFileForWrite(1);
  conf.set(MRConfig.LOCAL_DIR, localDirs[1].toString());
  final Path mapOutIdx = mrOutputFiles.getOutputIndexFileForWrite(1);
  Assert.assertNotEquals("Paths must be different!",
      mapOut.getParent(), mapOutIdx.getParent());

  // make both dirs part of LOCAL_DIR
  conf.setStrings(MRConfig.LOCAL_DIR, localDirs);

  final FileContext lfc = FileContext.getLocalFSFileContext(conf);
  lfc.create(mapOut, EnumSet.of(CREATE)).close();
  lfc.create(mapOutIdx, EnumSet.of(CREATE)).close();

  final JobId jobId = MRBuilderUtils.newJobId(12345L, 1, 2);
  final TaskId tid = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP);
  final TaskAttemptId taid = MRBuilderUtils.newTaskAttemptId(tid, 0);

  LocalContainerLauncher.renameMapOutputForReduce(conf, taid, mrOutputFiles);
}
 
Developer: naver, Project: hadoop, Lines: 30, Source: TestLocalContainerLauncher.java


Example 7: getMockMapTask

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private MapTaskImpl getMockMapTask(long clusterTimestamp, EventHandler eh) {

    ApplicationId appId = ApplicationId.newInstance(clusterTimestamp, 1);
    JobId jobId = MRBuilderUtils.newJobId(appId, 1);

    int partitions = 2;

    Path remoteJobConfFile = mock(Path.class);
    JobConf conf = new JobConf();
    TaskAttemptListener taskAttemptListener = mock(TaskAttemptListener.class);
    Token<JobTokenIdentifier> jobToken =
        (Token<JobTokenIdentifier>) mock(Token.class);
    Credentials credentials = null;
    Clock clock = new SystemClock();
    int appAttemptId = 3;
    MRAppMetrics metrics = mock(MRAppMetrics.class);
    Resource minContainerRequirements = mock(Resource.class);
    when(minContainerRequirements.getMemory()).thenReturn(1000);

    ClusterInfo clusterInfo = mock(ClusterInfo.class);
    AppContext appContext = mock(AppContext.class);
    when(appContext.getClusterInfo()).thenReturn(clusterInfo);

    TaskSplitMetaInfo taskSplitMetaInfo = mock(TaskSplitMetaInfo.class);
    MapTaskImpl mapTask = new MapTaskImpl(jobId, partitions,
        eh, remoteJobConfFile, conf,
        taskSplitMetaInfo, taskAttemptListener, jobToken, credentials, clock,
        appAttemptId, metrics, appContext);
    return mapTask;
  }
 
Developer: naver, Project: hadoop, Lines: 31, Source: TestRecovery.java


Example 8: createMapTaskAttemptImplForTest

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private TaskAttemptImpl createMapTaskAttemptImplForTest(
    EventHandler eventHandler, TaskSplitMetaInfo taskSplitMetaInfo, Clock clock) {
  ApplicationId appId = ApplicationId.newInstance(1, 1);
  JobId jobId = MRBuilderUtils.newJobId(appId, 1);
  TaskId taskId = MRBuilderUtils.newTaskId(jobId, 1, TaskType.MAP);
  TaskAttemptListener taListener = mock(TaskAttemptListener.class);
  Path jobFile = mock(Path.class);
  JobConf jobConf = new JobConf();
  TaskAttemptImpl taImpl =
      new MapTaskAttemptImpl(taskId, 1, eventHandler, jobFile, 1,
          taskSplitMetaInfo, jobConf, taListener, null,
          null, clock, null);
  return taImpl;
}
 
Developer: naver, Project: hadoop, Lines: 15, Source: TestTaskAttempt.java


Example 9: testAbortJobCalledAfterKillingTasks

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Test
public void testAbortJobCalledAfterKillingTasks() throws IOException {
  Configuration conf = new Configuration();
  conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir);
  conf.set(MRJobConfig.MR_AM_COMMITTER_CANCEL_TIMEOUT_MS, "1000");
  InlineDispatcher dispatcher = new InlineDispatcher();
  dispatcher.init(conf);
  dispatcher.start();
  OutputCommitter committer = Mockito.mock(OutputCommitter.class);
  CommitterEventHandler commitHandler =
      createCommitterEventHandler(dispatcher, committer);
  commitHandler.init(conf);
  commitHandler.start();
  JobImpl job = createRunningStubbedJob(conf, dispatcher, 2, null);

  //Fail one task. This should land the JobImpl in the FAIL_WAIT state
  job.handle(new JobTaskEvent(
    MRBuilderUtils.newTaskId(job.getID(), 1, TaskType.MAP),
    TaskState.FAILED));
  //Verify abort job hasn't been called
  Mockito.verify(committer, Mockito.never())
    .abortJob((JobContext) Mockito.any(), (State) Mockito.any());
  assertJobState(job, JobStateInternal.FAIL_WAIT);

  //Verify abortJob is called once and the job failed
  Mockito.verify(committer, Mockito.timeout(2000).times(1))
    .abortJob((JobContext) Mockito.any(), (State) Mockito.any());
  assertJobState(job, JobStateInternal.FAILED);

  dispatcher.stop();
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: TestJobImpl.java


Example 10: createAMInfo

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private static AMInfo createAMInfo(int attempt) {
  ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance(
      ApplicationId.newInstance(100, 1), attempt);
  ContainerId containerId = ContainerId.newContainerId(appAttemptId, 1);
  return MRBuilderUtils.newAMInfo(appAttemptId, System.currentTimeMillis(),
      containerId, NM_HOST, NM_PORT, NM_HTTP_PORT);
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: MockJobs.java


Example 11: makeTaskAttemptId

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
public static TaskAttemptId makeTaskAttemptId(long ts, int appId, int taskId, 
    TaskType taskType, int id) {
  ApplicationId aID = ApplicationId.newInstance(ts, appId);
  JobId jID = MRBuilderUtils.newJobId(aID, id);
  TaskId tID = MRBuilderUtils.newTaskId(jID, taskId, taskType);
  return MRBuilderUtils.newTaskAttemptId(tID, id);
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: TestContainerLauncherImpl.java


Example 12: testTimeout

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
public void testTimeout() throws InterruptedException {
  EventHandler mockHandler = mock(EventHandler.class);
  Clock clock = new SystemClock();
  TaskHeartbeatHandler hb = new TaskHeartbeatHandler(mockHandler, clock, 1);
  
  
  Configuration conf = new Configuration();
  conf.setInt(MRJobConfig.TASK_TIMEOUT, 10); //10 ms
  conf.setInt(MRJobConfig.TASK_TIMEOUT_CHECK_INTERVAL_MS, 10); //10 ms
  
  hb.init(conf);
  hb.start();
  try {
    ApplicationId appId = ApplicationId.newInstance(0l, 5);
    JobId jobId = MRBuilderUtils.newJobId(appId, 4);
    TaskId tid = MRBuilderUtils.newTaskId(jobId, 3, TaskType.MAP);
    TaskAttemptId taid = MRBuilderUtils.newTaskAttemptId(tid, 2);
    hb.register(taid);
    Thread.sleep(100);
    //Events only happen when the task is canceled
    verify(mockHandler, times(2)).handle(any(Event.class));
  } finally {
    hb.stop();
  }
}
 
Developer: naver, Project: hadoop, Lines: 28, Source: TestTaskHeartbeatHandler.java


Example 13: createFailEvent

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private ContainerFailedEvent createFailEvent(JobId jobId, int taskAttemptId,
    String host, boolean reduce) {
  TaskId taskId;
  if (reduce) {
    taskId = MRBuilderUtils.newTaskId(jobId, 0, TaskType.REDUCE);
  } else {
    taskId = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP);
  }
  TaskAttemptId attemptId = MRBuilderUtils.newTaskAttemptId(taskId,
      taskAttemptId);
  return new ContainerFailedEvent(attemptId, host);    
}
 
Developer: naver, Project: hadoop, Lines: 13, Source: TestRMContainerAllocator.java


Example 14: createDeallocateEvent

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
private ContainerAllocatorEvent createDeallocateEvent(JobId jobId,
    int taskAttemptId, boolean reduce) {
  TaskId taskId;
  if (reduce) {
    taskId = MRBuilderUtils.newTaskId(jobId, 0, TaskType.REDUCE);
  } else {
    taskId = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP);
  }
  TaskAttemptId attemptId =
      MRBuilderUtils.newTaskAttemptId(taskId, taskAttemptId);
  return new ContainerAllocatorEvent(attemptId,
      ContainerAllocator.EventType.CONTAINER_DEALLOCATE);
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: TestRMContainerAllocator.java


Example 15: getAMInfos

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Override
public List<AMInfo> getAMInfos() {
  List<AMInfo> amInfos = new LinkedList<AMInfo>();
  for (org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.AMInfo jhAmInfo : jobInfo
      .getAMInfos()) {
    AMInfo amInfo =
        MRBuilderUtils.newAMInfo(jhAmInfo.getAppAttemptId(),
            jhAmInfo.getStartTime(), jhAmInfo.getContainerId(),
            jhAmInfo.getNodeManagerHost(), jhAmInfo.getNodeManagerPort(),
            jhAmInfo.getNodeManagerHttpPort());
 
    amInfos.add(amInfo);
  }
  return amInfos;
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: CompletedJob.java


Example 16: testAverageMergeTime

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Test(timeout = 10000)
public void testAverageMergeTime() throws IOException {
  String historyFileName =
      "job_1329348432655_0001-1329348443227-user-Sleep+job-1329348468601-10-1-SUCCEEDED-default.jhist";
  String confFileName =
      "job_1329348432655_0001_conf.xml";
  Configuration conf = new Configuration();
  JobACLsManager jobAclsMgr = new JobACLsManager(conf);
  Path fulleHistoryPath =
      new Path(TestJobHistoryEntities.class.getClassLoader()
          .getResource(historyFileName)
          .getFile());
  Path fullConfPath =
      new Path(TestJobHistoryEntities.class.getClassLoader()
          .getResource(confFileName)
          .getFile());

  HistoryFileInfo info = mock(HistoryFileInfo.class);
  when(info.getConfFile()).thenReturn(fullConfPath);

  JobId jobId = MRBuilderUtils.newJobId(1329348432655l, 1, 1);
  CompletedJob completedJob =
      new CompletedJob(conf, jobId, fulleHistoryPath, true, "user",
          info, jobAclsMgr);
  JobInfo jobInfo = new JobInfo(completedJob);
  // There are 2 tasks with merge time of 45 and 55 respectively. So average
  // merge time should be 50.
  Assert.assertEquals(50L, jobInfo.getAvgMergeTime().longValue());
}
 
Developer: naver, Project: hadoop, Lines: 30, Source: TestJobInfo.java


Example 17: testWithSingleElement

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
/**
 * Trivial test case that verifies basic functionality of {@link
 * JobIdHistoryFileInfoMap}
 */
@Test(timeout = 2000)
public void testWithSingleElement() throws InterruptedException {
  JobIdHistoryFileInfoMap mapWithSize = new JobIdHistoryFileInfoMap();

  JobId jobId = MRBuilderUtils.newJobId(1, 1, 1);
  HistoryFileInfo fileInfo1 = Mockito.mock(HistoryFileInfo.class);
  Mockito.when(fileInfo1.getJobId()).thenReturn(jobId);

  // add it twice
  assertEquals("Incorrect return on putIfAbsent()",
      null, mapWithSize.putIfAbsent(jobId, fileInfo1));
  assertEquals("Incorrect return on putIfAbsent()",
      fileInfo1, mapWithSize.putIfAbsent(jobId, fileInfo1));

  // check get()
  assertEquals("Incorrect get()", fileInfo1, mapWithSize.get(jobId));
  assertTrue("Incorrect size()", checkSize(mapWithSize, 1));

  // check navigableKeySet()
  NavigableSet<JobId> set = mapWithSize.navigableKeySet();
  assertEquals("Incorrect navigableKeySet()", 1, set.size());
  assertTrue("Incorrect navigableKeySet()", set.contains(jobId));

  // check values()
  Collection<HistoryFileInfo> values = mapWithSize.values();
  assertEquals("Incorrect values()", 1, values.size());
  assertTrue("Incorrect values()", values.contains(fileInfo1));
}
 
Developer: naver, Project: hadoop, Lines: 33, Source: TestJobIdHistoryFileInfoMap.java


Example 18: testAddExisting

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Test (timeout = 1000)
public void testAddExisting() {
  JobListCache cache = new JobListCache(2, 1000);

  JobId jobId = MRBuilderUtils.newJobId(1, 1, 1);
  HistoryFileInfo fileInfo = Mockito.mock(HistoryFileInfo.class);
  Mockito.when(fileInfo.getJobId()).thenReturn(jobId);

  cache.addIfAbsent(fileInfo);
  cache.addIfAbsent(fileInfo);
  assertEquals("Incorrect number of cache entries", 1,
      cache.values().size());
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: TestJobListCache.java


Example 19: testEviction

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Test (timeout = 1000)
public void testEviction() throws InterruptedException {
  int maxSize = 2;
  JobListCache cache = new JobListCache(maxSize, 1000);

  JobId jobId1 = MRBuilderUtils.newJobId(1, 1, 1);
  HistoryFileInfo fileInfo1 = Mockito.mock(HistoryFileInfo.class);
  Mockito.when(fileInfo1.getJobId()).thenReturn(jobId1);

  JobId jobId2 = MRBuilderUtils.newJobId(2, 2, 2);
  HistoryFileInfo fileInfo2 = Mockito.mock(HistoryFileInfo.class);
  Mockito.when(fileInfo2.getJobId()).thenReturn(jobId2);

  JobId jobId3 = MRBuilderUtils.newJobId(3, 3, 3);
  HistoryFileInfo fileInfo3 = Mockito.mock(HistoryFileInfo.class);
  Mockito.when(fileInfo3.getJobId()).thenReturn(jobId3);

  cache.addIfAbsent(fileInfo1);
  cache.addIfAbsent(fileInfo2);
  cache.addIfAbsent(fileInfo3);

  Collection <HistoryFileInfo> values;
  for (int i = 0; i < 9; i++) {
    values = cache.values();
    if (values.size() > maxSize) {
      Thread.sleep(100);
    } else {
      assertFalse("fileInfo1 should have been evicted",
        values.contains(fileInfo1));
      return;
    }
  }
  fail("JobListCache didn't delete the extra entry");
}
 
Developer: naver, Project: hadoop, Lines: 35, Source: TestJobListCache.java


Example 20: testCompletedTask

import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils; // import the required package/class
@Test (timeout=10000)
public void testCompletedTask() throws Exception {
  HistoryFileInfo info = mock(HistoryFileInfo.class);
  when(info.getConfFile()).thenReturn(fullConfPath);
  completedJob =
    new CompletedJob(conf, jobId, fullHistoryPath, loadTasks, "user",
        info, jobAclsManager);
  TaskId mt1Id = MRBuilderUtils.newTaskId(jobId, 0, TaskType.MAP);
  TaskId rt1Id = MRBuilderUtils.newTaskId(jobId, 0, TaskType.REDUCE);
  
  Map<TaskId, Task> mapTasks = completedJob.getTasks(TaskType.MAP);
  Map<TaskId, Task> reduceTasks = completedJob.getTasks(TaskType.REDUCE);
  assertEquals(10, mapTasks.size());
  assertEquals(2, reduceTasks.size());
  
  Task mt1 = mapTasks.get(mt1Id);
  assertEquals(1, mt1.getAttempts().size());
  assertEquals(TaskState.SUCCEEDED, mt1.getState());
  TaskReport mt1Report = mt1.getReport();
  assertEquals(TaskState.SUCCEEDED, mt1Report.getTaskState());
  assertEquals(mt1Id, mt1Report.getTaskId());
  Task rt1 = reduceTasks.get(rt1Id);
  assertEquals(1, rt1.getAttempts().size());
  assertEquals(TaskState.SUCCEEDED, rt1.getState());
  TaskReport rt1Report = rt1.getReport();
  assertEquals(TaskState.SUCCEEDED, rt1Report.getTaskState());
  assertEquals(rt1Id, rt1Report.getTaskId());
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: TestJobHistoryEntities.java



Note: The org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils examples in this article were compiled from source-code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects; copyright remains with the original authors, and distribution and use must comply with each project's license. Do not republish without permission.

