
Java Task Class Code Examples


This article collects typical usage examples of the Java Task class from org.apache.hadoop.mapreduce.v2.app.job. If you are wondering what the Task class is for, how to use it, or what it looks like in real code, the curated examples below should help.



The Task class belongs to the org.apache.hadoop.mapreduce.v2.app.job package. The sections below present 20 code examples of the Task class, ordered by popularity by default. Almost all of them follow the same lookup pattern, sketched immediately below.
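Before diving into the examples, note the shared pattern: obtain a Job from the application context, look the Task up (by TaskId or by iterating getTasks()), then read its report or attempts. The following is a minimal sketch of that flow; it is not taken from any single example, and the TaskLookupSketch class, its printTaskStates helper, and the console output are assumptions made purely for illustration.

import java.util.Map;

import org.apache.hadoop.mapreduce.v2.api.records.JobId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
import org.apache.hadoop.mapreduce.v2.app.AppContext;
import org.apache.hadoop.mapreduce.v2.app.job.Job;
import org.apache.hadoop.mapreduce.v2.app.job.Task;

public class TaskLookupSketch {
  // appContext is assumed to be supplied by the surrounding ApplicationMaster code.
  static void printTaskStates(AppContext appContext) {
    for (Map.Entry<JobId, Job> jobEntry : appContext.getAllJobs().entrySet()) {
      Job job = jobEntry.getValue();
      for (Map.Entry<TaskId, Task> taskEntry : job.getTasks().entrySet()) {
        Task task = taskEntry.getValue();
        // TaskReport exposes state, progress, start/finish times and counters.
        System.out.println(task.getID() + " -> "
            + task.getReport().getTaskState()
            + " (" + task.getReport().getProgress() * 100 + "%)");
      }
    }
  }
}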

Example 1: testTaskId

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
public void testTaskId() throws JSONException, Exception {
  WebResource r = resource();
  Map<JobId, Job> jobsMap = appContext.getAllJobs();
  for (JobId id : jobsMap.keySet()) {
    String jobId = MRApps.toString(id);
    for (Task task : jobsMap.get(id).getTasks().values()) {

      String tid = MRApps.toString(task.getID());
      ClientResponse response = r.path("ws").path("v1").path("history")
          .path("mapreduce").path("jobs").path(jobId).path("tasks").path(tid)
          .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
      assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
      JSONObject json = response.getEntity(JSONObject.class);
      assertEquals("incorrect number of elements", 1, json.length());
      JSONObject info = json.getJSONObject("task");
      verifyHsSingleTask(info, task);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 21 | Source: TestHsWebServicesTasks.java


Example 2: testTaskAttemptsSlash

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
public void testTaskAttemptsSlash() throws JSONException, Exception {
  WebResource r = resource();
  Map<JobId, Job> jobsMap = appContext.getAllJobs();
  for (JobId id : jobsMap.keySet()) {
    String jobId = MRApps.toString(id);
    for (Task task : jobsMap.get(id).getTasks().values()) {

      String tid = MRApps.toString(task.getID());
      ClientResponse response = r.path("ws").path("v1").path("mapreduce")
          .path("jobs").path(jobId).path("tasks").path(tid).path("attempts/")
          .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
      assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
      JSONObject json = response.getEntity(JSONObject.class);
      verifyAMTaskAttempts(json, task);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 19 | Source: TestAMWebServicesAttempts.java


Example 3: canCommit

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
/**
 * Child checking whether it can commit.
 * 
 * <br>
 * Commit is a two-phased protocol. First the attempt informs the
 * ApplicationMaster that it is
 * {@link #commitPending(TaskAttemptID, TaskStatus)}. Then it repeatedly polls
 * the ApplicationMaster whether it {@link #canCommit(TaskAttemptID)}. This is
 * a legacy from the centralized commit protocol handling by the JobTracker.
 */
@Override
public boolean canCommit(TaskAttemptID taskAttemptID) throws IOException {
  LOG.info("Commit go/no-go request from " + taskAttemptID.toString());
  // An attempt is asking if it can commit its output. This can be decided
  // only by the task which is managing the multiple attempts. So redirect the
  // request there.
  org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId attemptID =
      TypeConverter.toYarn(taskAttemptID);

  taskHeartbeatHandler.progressing(attemptID);

  // tell task to retry later if AM has not heard from RM within the commit
  // window to help avoid double-committing in a split-brain situation
  long now = context.getClock().getTime();
  if (now - rmHeartbeatHandler.getLastHeartbeatTime() > commitWindowMs) {
    return false;
  }

  Job job = context.getJob(attemptID.getTaskId().getJobId());
  Task task = job.getTask(attemptID.getTaskId());
  return task.canCommit(attemptID);
}
 
Contributor: naver | Project: hadoop | Lines: 33 | Source: TaskAttemptListenerImpl.java
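The Javadoc above describes the ApplicationMaster side of the two-phase commit. On the child side, the attempt first reports that it is commit-pending and then polls canCommit until this method answers yes. The sketch below shows only the shape of that handshake, not the real Task.commit() implementation; the umbilical, attemptId and taskStatus values are assumed to come from the surrounding child-task code.

import java.io.IOException;

import org.apache.hadoop.mapred.TaskAttemptID;
import org.apache.hadoop.mapred.TaskStatus;
import org.apache.hadoop.mapred.TaskUmbilicalProtocol;

public class CommitHandshakeSketch {
  // Attempt-side half of the two-phase commit protocol (illustrative only).
  static void awaitCommitPermission(TaskUmbilicalProtocol umbilical,
      TaskAttemptID attemptId, TaskStatus taskStatus)
      throws IOException, InterruptedException {
    // Phase 1: tell the ApplicationMaster this attempt is ready to commit its output.
    umbilical.commitPending(attemptId, taskStatus);
    // Phase 2: poll until the AM (TaskAttemptListenerImpl.canCommit above)
    // grants permission to commit.
    while (!umbilical.canCommit(attemptId)) {
      Thread.sleep(1000); // back off between polls
    }
  }
}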


Example 4: dataStatisticsForTask

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
protected DataStatistics dataStatisticsForTask(TaskId taskID) {
  JobId jobID = taskID.getJobId();
  Job job = context.getJob(jobID);

  if (job == null) {
    return null;
  }

  Task task = job.getTask(taskID);

  if (task == null) {
    return null;
  }

  return task.getType() == TaskType.MAP
          ? mapperStatistics.get(job)
          : task.getType() == TaskType.REDUCE
              ? reducerStatistics.get(job)
              : null;
}
 
Contributor: naver | Project: hadoop | Lines: 21 | Source: StartEndTimesBase.java


Example 5: TaskInfo

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
public TaskInfo(Task task) {
  TaskType ttype = task.getType();
  this.type = ttype.toString();
  TaskReport report = task.getReport();
  this.startTime = report.getStartTime();
  this.finishTime = report.getFinishTime();
  this.state = report.getTaskState();
  this.elapsedTime = Times.elapsed(this.startTime, this.finishTime,
    this.state == TaskState.RUNNING);
  if (this.elapsedTime == -1) {
    this.elapsedTime = 0;
  }
  this.progress = report.getProgress() * 100;
  this.status =  report.getStatus();
  this.id = MRApps.toString(task.getID());
  this.taskNum = task.getID().getId();
  this.successful = getSuccessfulAttempt(task);
  if (successful != null) {
    this.successfulAttempt = MRApps.toString(successful.getID());
  } else {
    this.successfulAttempt = "";
  }
}
 
Contributor: naver | Project: hadoop | Lines: 24 | Source: TaskInfo.java


Example 6: testTimedOutTask

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
//All Task attempts are timed out, leading to Job failure
public void testTimedOutTask() throws Exception {
  MRApp app = new TimeOutTaskMRApp(1, 0);
  Configuration conf = new Configuration();
  int maxAttempts = 2;
  conf.setInt(MRJobConfig.MAP_MAX_ATTEMPTS, maxAttempts);
  // disable uberization (requires entire job to be reattempted, so max for
  // subtask attempts is overridden to 1)
  conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false);
  Job job = app.submit(conf);
  app.waitForState(job, JobState.FAILED);
  Map<TaskId,Task> tasks = job.getTasks();
  Assert.assertEquals("Num tasks is not correct", 1, tasks.size());
  Task task = tasks.values().iterator().next();
  Assert.assertEquals("Task state not correct", TaskState.FAILED,
      task.getReport().getTaskState());
  Map<TaskAttemptId, TaskAttempt> attempts =
      tasks.values().iterator().next().getAttempts();
  Assert.assertEquals("Num attempts is not correct", maxAttempts,
      attempts.size());
  for (TaskAttempt attempt : attempts.values()) {
    Assert.assertEquals("Attempt state not correct", TaskAttemptState.FAILED,
        attempt.getReport().getTaskAttemptState());
  }
}
 
Contributor: naver | Project: hadoop | Lines: 27 | Source: TestFail.java


Example 7: testTaskIdCountersSlash

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
public void testTaskIdCountersSlash() throws JSONException, Exception {
  WebResource r = resource();
  Map<JobId, Job> jobsMap = appContext.getAllJobs();
  for (JobId id : jobsMap.keySet()) {
    String jobId = MRApps.toString(id);
    for (Task task : jobsMap.get(id).getTasks().values()) {

      String tid = MRApps.toString(task.getID());
      ClientResponse response = r.path("ws").path("v1").path("history")
          .path("mapreduce").path("jobs").path(jobId).path("tasks").path(tid)
          .path("counters/").accept(MediaType.APPLICATION_JSON)
          .get(ClientResponse.class);
      assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
      JSONObject json = response.getEntity(JSONObject.class);
      assertEquals("incorrect number of elements", 1, json.length());
      JSONObject info = json.getJSONObject("jobTaskCounters");
      verifyHsJobTaskCounters(info, task);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 22 | Source: TestHsWebServicesTasks.java


Example 8: getJobTaskAttempts

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@GET
@Path("/jobs/{jobid}/tasks/{taskid}/attempts")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public TaskAttemptsInfo getJobTaskAttempts(@Context HttpServletRequest hsr,
    @PathParam("jobid") String jid, @PathParam("taskid") String tid) {

  init();
  TaskAttemptsInfo attempts = new TaskAttemptsInfo();
  Job job = getJobFromJobIdString(jid, appCtx);
  checkAccess(job, hsr);
  Task task = getTaskFromTaskIdString(tid, job);

  for (TaskAttempt ta : task.getAttempts().values()) {
    if (ta != null) {
      if (task.getType() == TaskType.REDUCE) {
        attempts.add(new ReduceTaskAttemptInfo(ta, task.getType()));
      } else {
        attempts.add(new TaskAttemptInfo(ta, task.getType(), true));
      }
    }
  }
  return attempts;
}
 
Contributor: naver | Project: hadoop | Lines: 24 | Source: AMWebServices.java


Example 9: verifyHsJobTaskCounters

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
public void verifyHsJobTaskCounters(JSONObject info, Task task)
    throws JSONException {

  assertEquals("incorrect number of elements", 2, info.length());

  WebServicesTestUtils.checkStringMatch("id", MRApps.toString(task.getID()),
      info.getString("id"));
  // just do simple verification that the fields are present - not that
  // the data in the fields is correct
  JSONArray counterGroups = info.getJSONArray("taskCounterGroup");
  for (int i = 0; i < counterGroups.length(); i++) {
    JSONObject counterGroup = counterGroups.getJSONObject(i);
    String name = counterGroup.getString("counterGroupName");
    assertTrue("name not set", (name != null && !name.isEmpty()));
    JSONArray counters = counterGroup.getJSONArray("counter");
    for (int j = 0; j < counters.length(); j++) {
      JSONObject counter = counters.getJSONObject(j);
      String counterName = counter.getString("name");
      assertTrue("name not set",
          (counterName != null && !counterName.isEmpty()));
      long value = counter.getLong("value");
      assertTrue("value  >= 0", value >= 0);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 26 | Source: TestHsWebServicesTasks.java


Example 10: getSingleTaskCounters

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@GET
@Path("/mapreduce/jobs/{jobid}/tasks/{taskid}/counters")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public JobTaskCounterInfo getSingleTaskCounters(
    @Context HttpServletRequest hsr, @PathParam("jobid") String jid,
    @PathParam("taskid") String tid) {

  init();
  Job job = AMWebServices.getJobFromJobIdString(jid, ctx);
  checkAccess(job, hsr);
  TaskId taskID = MRApps.toTaskID(tid);
  if (taskID == null) {
    throw new NotFoundException("taskid " + tid + " not found or invalid");
  }
  Task task = job.getTask(taskID);
  if (task == null) {
    throw new NotFoundException("task not found with id " + tid);
  }
  return new JobTaskCounterInfo(task);
}
 
Contributor: naver | Project: hadoop | Lines: 21 | Source: HsWebServices.java


Example 11: getTaskAttempts

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Override
protected Collection<TaskAttempt> getTaskAttempts() {
  List<TaskAttempt> fewTaskAttemps = new ArrayList<TaskAttempt>();
  String taskTypeStr = $(TASK_TYPE);
  TaskType taskType = MRApps.taskType(taskTypeStr);
  String attemptStateStr = $(ATTEMPT_STATE);
  TaskAttemptStateUI neededState = MRApps
      .taskAttemptState(attemptStateStr);
  for (Task task : super.app.getJob().getTasks(taskType).values()) {
    Map<TaskAttemptId, TaskAttempt> attempts = task.getAttempts();
    for (TaskAttempt attempt : attempts.values()) {
      if (neededState.correspondsTo(attempt.getState())) {
        fewTaskAttemps.add(attempt);
      }
    }
  }
  return fewTaskAttemps;
}
 
Contributor: naver | Project: hadoop | Lines: 19 | Source: AttemptsPage.java


Example 12: computeProgress

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
private void computeProgress() {
  this.readLock.lock();
  try {
    float mapProgress = 0f;
    float reduceProgress = 0f;
    for (Task task : this.tasks.values()) {
      if (task.getType() == TaskType.MAP) {
        mapProgress += (task.isFinished() ? 1f : task.getProgress());
      } else {
        reduceProgress += (task.isFinished() ? 1f : task.getProgress());
      }
    }
    if (this.numMapTasks != 0) {
      mapProgress = mapProgress / this.numMapTasks;
    }
    if (this.numReduceTasks != 0) {
      reduceProgress = reduceProgress / this.numReduceTasks;
    }
    this.mapProgress = mapProgress;
    this.reduceProgress = reduceProgress;
  } finally {
    this.readLock.unlock();
  }
}
 
Contributor: naver | Project: hadoop | Lines: 25 | Source: JobImpl.java
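The averaging above is straightforward: a finished task contributes 1.0, a running task contributes its current fraction, and the sum is divided by the number of tasks of that type. A tiny standalone illustration with hypothetical numbers (not part of JobImpl):

public class ProgressAverageSketch {
  public static void main(String[] args) {
    // Hypothetical per-task map progress: two finished maps (1f each), two running at 50%.
    float[] perTask = { 1f, 1f, 0.5f, 0.5f };
    float sum = 0f;
    for (float p : perTask) {
      sum += p;
    }
    // 4 map tasks -> job-level map progress of 0.75
    System.out.println(sum / perTask.length);
  }
}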


Example 13: testTaskIdDefault

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
public void testTaskIdDefault() throws JSONException, Exception {
  WebResource r = resource();
  Map<JobId, Job> jobsMap = appContext.getAllJobs();
  for (JobId id : jobsMap.keySet()) {
    String jobId = MRApps.toString(id);
    for (Task task : jobsMap.get(id).getTasks().values()) {

      String tid = MRApps.toString(task.getID());
      ClientResponse response = r.path("ws").path("v1").path("history")
          .path("mapreduce").path("jobs").path(jobId).path("tasks").path(tid)
          .get(ClientResponse.class);
      assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
      JSONObject json = response.getEntity(JSONObject.class);
      assertEquals("incorrect number of elements", 1, json.length());
      JSONObject info = json.getJSONObject("task");
      verifyHsSingleTask(info, task);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 21 | Source: TestHsWebServicesTasks.java


Example 14: verifyTaskGeneric

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
public void verifyTaskGeneric(Task task, String id, String state,
    String type, String successfulAttempt, long startTime, long finishTime,
    long elapsedTime, float progress) {

  TaskId taskid = task.getID();
  String tid = MRApps.toString(taskid);
  TaskReport report = task.getReport();

  WebServicesTestUtils.checkStringMatch("id", tid, id);
  WebServicesTestUtils.checkStringMatch("type", task.getType().toString(),
      type);
  WebServicesTestUtils.checkStringMatch("state", report.getTaskState()
      .toString(), state);
  // not easily checked without duplicating logic, just make sure its here
  assertNotNull("successfulAttempt null", successfulAttempt);
  assertEquals("startTime wrong", report.getStartTime(), startTime);
  assertEquals("finishTime wrong", report.getFinishTime(), finishTime);
  assertEquals("elapsedTime wrong", finishTime - startTime, elapsedTime);
  assertEquals("progress wrong", report.getProgress() * 100, progress, 1e-3f);
}
 
Contributor: naver | Project: hadoop | Lines: 21 | Source: TestHsWebServicesTasks.java


Example 15: verifyHsTask

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
public void verifyHsTask(JSONArray arr, Job job, String type)
    throws JSONException {
  for (Task task : job.getTasks().values()) {
    TaskId id = task.getID();
    String tid = MRApps.toString(id);
    Boolean found = false;
    if (type != null && task.getType() == MRApps.taskType(type)) {

      for (int i = 0; i < arr.length(); i++) {
        JSONObject info = arr.getJSONObject(i);
        if (tid.matches(info.getString("id"))) {
          found = true;
          verifyHsSingleTask(info, task);
        }
      }
      assertTrue("task with id: " + tid + " not in web service output", found);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 20 | Source: TestHsWebServicesTasks.java


Example 16: testTaskIdCounters

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
public void testTaskIdCounters() throws JSONException, Exception {
  WebResource r = resource();
  Map<JobId, Job> jobsMap = appContext.getAllJobs();
  for (JobId id : jobsMap.keySet()) {
    String jobId = MRApps.toString(id);
    for (Task task : jobsMap.get(id).getTasks().values()) {

      String tid = MRApps.toString(task.getID());
      ClientResponse response = r.path("ws").path("v1").path("history")
          .path("mapreduce").path("jobs").path(jobId).path("tasks").path(tid)
          .path("counters").accept(MediaType.APPLICATION_JSON)
          .get(ClientResponse.class);
      assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
      JSONObject json = response.getEntity(JSONObject.class);
      assertEquals("incorrect number of elements", 1, json.length());
      JSONObject info = json.getJSONObject("jobTaskCounters");
      verifyHsJobTaskCounters(info, task);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 22 | Source: TestHsWebServicesTasks.java


Example 17: testTaskIdCountersDefault

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
public void testTaskIdCountersDefault() throws JSONException, Exception {
  WebResource r = resource();
  Map<JobId, Job> jobsMap = appContext.getAllJobs();
  for (JobId id : jobsMap.keySet()) {
    String jobId = MRApps.toString(id);
    for (Task task : jobsMap.get(id).getTasks().values()) {

      String tid = MRApps.toString(task.getID());
      ClientResponse response = r.path("ws").path("v1").path("history")
          .path("mapreduce").path("jobs").path(jobId).path("tasks").path(tid)
          .path("counters").get(ClientResponse.class);
      assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
      JSONObject json = response.getEntity(JSONObject.class);
      assertEquals("incorrect number of elements", 1, json.length());
      JSONObject info = json.getJSONObject("jobTaskCounters");
      verifyHsJobTaskCounters(info, task);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 21 | Source: TestHsWebServicesTasks.java


Example 18: getTaskReports

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Override
public GetTaskReportsResponse getTaskReports(
    GetTaskReportsRequest request) throws IOException {
  JobId jobId = request.getJobId();
  TaskType taskType = request.getTaskType();
  
  GetTaskReportsResponse response = 
    recordFactory.newRecordInstance(GetTaskReportsResponse.class);
  
  Job job = verifyAndGetJob(jobId, JobACL.VIEW_JOB, true);
  Collection<Task> tasks = job.getTasks(taskType).values();
  LOG.info("Getting task report for " + taskType + "   " + jobId
      + ". Report-size will be " + tasks.size());

  // Take lock to allow only one call, otherwise heap will blow up because
  // of counters in the report when there are multiple callers.
  synchronized (getTaskReportsLock) {
    for (Task task : tasks) {
      response.addTaskReport(task.getReport());
    }
  }

  return response;
}
 
Contributor: naver | Project: hadoop | Lines: 25 | Source: MRClientService.java


Example 19: getJobTaskAttempts

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@GET
@Path("/mapreduce/jobs/{jobid}/tasks/{taskid}/attempts")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public TaskAttemptsInfo getJobTaskAttempts(@Context HttpServletRequest hsr,
    @PathParam("jobid") String jid, @PathParam("taskid") String tid) {

  init();
  TaskAttemptsInfo attempts = new TaskAttemptsInfo();
  Job job = AMWebServices.getJobFromJobIdString(jid, ctx);
  checkAccess(job, hsr);
  Task task = AMWebServices.getTaskFromTaskIdString(tid, job);
  for (TaskAttempt ta : task.getAttempts().values()) {
    if (ta != null) {
      if (task.getType() == TaskType.REDUCE) {
        attempts.add(new ReduceTaskAttemptInfo(ta, task.getType()));
      } else {
        attempts.add(new TaskAttemptInfo(ta, task.getType(), false));
      }
    }
  }
  return attempts;
}
 
Contributor: naver | Project: hadoop | Lines: 23 | Source: HsWebServices.java


Example 20: testTaskAttempts

import org.apache.hadoop.mapreduce.v2.app.job.Task; // import the required package/class
@Test
public void testTaskAttempts() throws JSONException, Exception {
  WebResource r = resource();
  Map<JobId, Job> jobsMap = appContext.getAllJobs();
  for (JobId id : jobsMap.keySet()) {
    String jobId = MRApps.toString(id);
    for (Task task : jobsMap.get(id).getTasks().values()) {

      String tid = MRApps.toString(task.getID());
      ClientResponse response = r.path("ws").path("v1").path("history")
          .path("mapreduce").path("jobs").path(jobId).path("tasks").path(tid)
          .path("attempts").accept(MediaType.APPLICATION_JSON)
          .get(ClientResponse.class);
      assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
      JSONObject json = response.getEntity(JSONObject.class);
      verifyHsTaskAttempts(json, task);
    }
  }
}
 
Contributor: naver | Project: hadoop | Lines: 20 | Source: TestHsWebServicesAttempts.java



Note: The org.apache.hadoop.mapreduce.v2.app.job.Task examples in this article were collected from source code and documentation hosted on GitHub, MSDocs and similar platforms. The snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors, and any use or redistribution should follow the corresponding project's license. Please do not republish without permission.

