
Java FileNameIndexUtils Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils. If you are wondering what FileNameIndexUtils is for, or how to use it, the curated examples below should help.



FileNameIndexUtils lives in the org.apache.hadoop.mapreduce.v2.jobhistory package; its static helpers encode a job's index metadata into the name of its history ("done") file and decode that metadata back out. Ten code examples of the class are shown below, ordered by popularity.
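
Before diving in, here is a minimal round-trip sketch of the two static helpers the examples below lean on: getDoneFileName encodes a JobIndexInfo into a done-file name, and getIndexInfo decodes one back. The job ID and field values are fabricated, and the 8-argument JobIndexInfo constructor is assumed from the Hadoop 2.x line (newer releases also carry a queue name and job start time); verify against your version.

import java.io.IOException;

import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.TypeConverter;
import org.apache.hadoop.mapreduce.v2.api.records.JobId;
import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils;
import org.apache.hadoop.mapreduce.v2.jobhistory.JobIndexInfo;

public class FileNameIndexUtilsRoundTrip {
  public static void main(String[] args) throws IOException {
    // A fabricated job ID, converted to the MapReduce v2 record type.
    JobId jobId = TypeConverter.toYarn(JobID.forName("job_1317928501754_0001"));

    // Assumed constructor order: submitTime, finishTime, user, jobName,
    // jobId, numMaps, numReduces, jobStatus.
    JobIndexInfo indexInfo = new JobIndexInfo(
        1317928742025L, 1317928754958L, "hadoop", "word count",
        jobId, 1, 1, "SUCCEEDED");

    // Encode the index metadata into a done-file name...
    String doneFileName = FileNameIndexUtils.getDoneFileName(indexInfo);
    System.out.println(doneFileName);

    // ...and recover the metadata from the name alone.
    JobIndexInfo parsed = FileNameIndexUtils.getIndexInfo(doneFileName);
    System.out.println(parsed.getJobId() + " finished at " + parsed.getFinishTime());
  }
}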

Example 1: addDirectoryToJobListCache

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
private void addDirectoryToJobListCache(Path path) throws IOException {
  if (LOG.isDebugEnabled()) {
    LOG.debug("Adding " + path + " to job list cache.");
  }
  List<FileStatus> historyFileList = scanDirectoryForHistoryFiles(path,
      doneDirFc);
  for (FileStatus fs : historyFileList) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Adding in history for " + fs.getPath());
    }
    JobIndexInfo jobIndexInfo = FileNameIndexUtils.getIndexInfo(fs.getPath()
        .getName());
    String confFileName = JobHistoryUtils
        .getIntermediateConfFileName(jobIndexInfo.getJobId());
    String summaryFileName = JobHistoryUtils
        .getIntermediateSummaryFileName(jobIndexInfo.getJobId());
    HistoryFileInfo fileInfo = createHistoryFileInfo(fs.getPath(), new Path(fs
        .getPath().getParent(), confFileName), new Path(fs.getPath()
        .getParent(), summaryFileName), jobIndexInfo, true);
    jobListCache.addIfAbsent(fileInfo);
  }
}
 
Developer: naver | Project: hadoop | Lines: 23 | Source: HistoryFileManager.java
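
For context, getIndexInfo can rebuild a JobIndexInfo from a directory listing alone because the done-file name itself packs the index fields, '-'-delimited with special characters escaped (a space in the job name typically surfaces as '+'). The sketch below decodes a fabricated name in the layout assumed here for early Hadoop 2.x; later releases append queue name and job start time:

import java.io.IOException;

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils;
import org.apache.hadoop.mapreduce.v2.jobhistory.JobIndexInfo;

public class DoneFileNameLayout {
  public static void main(String[] args) throws IOException {
    // Assumed early-2.x layout:
    // <jobId>-<submitTime>-<user>-<jobName>-<finishTime>-<numMaps>-<numReduces>-<status>.jhist
    JobIndexInfo info = FileNameIndexUtils.getIndexInfo(
        "job_1317928501754_0001-1317928742025-hadoop-word+count-1317928754958-1-1-SUCCEEDED.jhist");
    System.out.println(info.getUser());     // hadoop
    System.out.println(info.getNumMaps());  // 1
  }
}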


Example 2: getJobFileInfo

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
/**
 * Searches the job history file FileStatus list for the specified JobId.
 * 
 * @param fileStatusList
 *          fileStatus list of Job History Files.
 * @param jobId
 *          The JobId to find.
 * @return A FileInfo object for the jobId, null if not found.
 * @throws IOException
 */
private HistoryFileInfo getJobFileInfo(List<FileStatus> fileStatusList,
    JobId jobId) throws IOException {
  for (FileStatus fs : fileStatusList) {
    JobIndexInfo jobIndexInfo = FileNameIndexUtils.getIndexInfo(fs.getPath()
        .getName());
    if (jobIndexInfo.getJobId().equals(jobId)) {
      String confFileName = JobHistoryUtils
          .getIntermediateConfFileName(jobIndexInfo.getJobId());
      String summaryFileName = JobHistoryUtils
          .getIntermediateSummaryFileName(jobIndexInfo.getJobId());
      HistoryFileInfo fileInfo = createHistoryFileInfo(fs.getPath(), new Path(
          fs.getPath().getParent(), confFileName), new Path(fs.getPath()
          .getParent(), summaryFileName), jobIndexInfo, true);
      return fileInfo;
    }
  }
  return null;
}
 
Developer: naver | Project: hadoop | Lines: 29 | Source: HistoryFileManager.java


Example 3: addDirectoryToJobListCache

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
private void addDirectoryToJobListCache(Path path) throws IOException {
  if (LOG.isDebugEnabled()) {
    LOG.debug("Adding " + path + " to job list cache.");
  }
  List<FileStatus> historyFileList = scanDirectoryForHistoryFiles(path,
      doneDirFc);
  for (FileStatus fs : historyFileList) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Adding in history for " + fs.getPath());
    }
    JobIndexInfo jobIndexInfo = FileNameIndexUtils.getIndexInfo(fs.getPath()
        .getName());
    String confFileName = JobHistoryUtils
        .getIntermediateConfFileName(jobIndexInfo.getJobId());
    String summaryFileName = JobHistoryUtils
        .getIntermediateSummaryFileName(jobIndexInfo.getJobId());
    HistoryFileInfo fileInfo = new HistoryFileInfo(fs.getPath(), new Path(fs
        .getPath().getParent(), confFileName), new Path(fs.getPath()
        .getParent(), summaryFileName), jobIndexInfo, true);
    jobListCache.addIfAbsent(fileInfo);
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 23 | Source: HistoryFileManager.java


Example 4: getJobFileInfo

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
/**
 * Searches the job history file FileStatus list for the specified JobId.
 * 
 * @param fileStatusList
 *          fileStatus list of Job History Files.
 * @param jobId
 *          The JobId to find.
 * @return A FileInfo object for the jobId, null if not found.
 * @throws IOException
 */
private HistoryFileInfo getJobFileInfo(List<FileStatus> fileStatusList,
    JobId jobId) throws IOException {
  for (FileStatus fs : fileStatusList) {
    JobIndexInfo jobIndexInfo = FileNameIndexUtils.getIndexInfo(fs.getPath()
        .getName());
    if (jobIndexInfo.getJobId().equals(jobId)) {
      String confFileName = JobHistoryUtils
          .getIntermediateConfFileName(jobIndexInfo.getJobId());
      String summaryFileName = JobHistoryUtils
          .getIntermediateSummaryFileName(jobIndexInfo.getJobId());
      HistoryFileInfo fileInfo = new HistoryFileInfo(fs.getPath(), new Path(
          fs.getPath().getParent(), confFileName), new Path(fs.getPath()
          .getParent(), summaryFileName), jobIndexInfo, true);
      return fileInfo;
    }
  }
  return null;
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 29 | Source: HistoryFileManager.java


Example 5: clean

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
/**
 * Clean up older history files.
 * 
 * @throws IOException
 *           on any error trying to remove the entries.
 */
@SuppressWarnings("unchecked")
void clean() throws IOException {
  long cutoff = System.currentTimeMillis() - maxHistoryAge;
  boolean halted = false;
  List<FileStatus> serialDirList = getHistoryDirsForCleaning(cutoff);
  // Sort in ascending order. Relies on YYYY/MM/DD/Serial
  Collections.sort(serialDirList);
  for (FileStatus serialDir : serialDirList) {
    List<FileStatus> historyFileList = scanDirectoryForHistoryFiles(
        serialDir.getPath(), doneDirFc);
    for (FileStatus historyFile : historyFileList) {
      JobIndexInfo jobIndexInfo = FileNameIndexUtils.getIndexInfo(historyFile
          .getPath().getName());
      long effectiveTimestamp = getEffectiveTimestamp(
          jobIndexInfo.getFinishTime(), historyFile);
      if (effectiveTimestamp <= cutoff) {
        HistoryFileInfo fileInfo = this.jobListCache.get(jobIndexInfo
            .getJobId());
        if (fileInfo == null) {
          String confFileName = JobHistoryUtils
              .getIntermediateConfFileName(jobIndexInfo.getJobId());

          fileInfo = createHistoryFileInfo(historyFile.getPath(), new Path(
              historyFile.getPath().getParent(), confFileName), null,
              jobIndexInfo, true);
        }
        deleteJobFromDone(fileInfo);
      } else {
        halted = true;
        break;
      }
    }
    if (!halted) {
      deleteDir(serialDir);
      removeDirectoryFromSerialNumberIndex(serialDir.getPath());
      existingDoneSubdirs.remove(serialDir.getPath());
    } else {
      break; // Don't scan any more directories.
    }
  }
}
 
Developer: naver | Project: hadoop | Lines: 48 | Source: HistoryFileManager.java
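
The cutoff above is driven by maxHistoryAge, which the JobHistory Server reads from its configuration. A hedged sketch of computing the same cutoff from the standard retention key (the 7-day default shown is the commonly documented one; verify for your release):

import org.apache.hadoop.conf.Configuration;

public class HistoryCleanerCutoff {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "mapreduce.jobhistory.max-age-ms" controls how long done files are kept.
    long maxHistoryAge = conf.getLong(
        "mapreduce.jobhistory.max-age-ms", 7L * 24 * 60 * 60 * 1000);
    long cutoff = System.currentTimeMillis() - maxHistoryAge;
    System.out.println("History files with effective timestamp <= " + cutoff
        + " are eligible for deletion");
  }
}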


Example 6: clean

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
/**
 * Clean up older history files.
 * 
 * @throws IOException
 *           on any error trying to remove the entries.
 */
@SuppressWarnings("unchecked")
void clean() throws IOException {
  long cutoff = System.currentTimeMillis() - maxHistoryAge;
  boolean halted = false;
  List<FileStatus> serialDirList = getHistoryDirsForCleaning(cutoff);
  // Sort in ascending order. Relies on YYYY/MM/DD/Serial
  Collections.sort(serialDirList);
  for (FileStatus serialDir : serialDirList) {
    List<FileStatus> historyFileList = scanDirectoryForHistoryFiles(
        serialDir.getPath(), doneDirFc);
    for (FileStatus historyFile : historyFileList) {
      JobIndexInfo jobIndexInfo = FileNameIndexUtils.getIndexInfo(historyFile
          .getPath().getName());
      long effectiveTimestamp = getEffectiveTimestamp(
          jobIndexInfo.getFinishTime(), historyFile);
      if (effectiveTimestamp <= cutoff) {
        HistoryFileInfo fileInfo = this.jobListCache.get(jobIndexInfo
            .getJobId());
        if (fileInfo == null) {
          String confFileName = JobHistoryUtils
              .getIntermediateConfFileName(jobIndexInfo.getJobId());

          fileInfo = new HistoryFileInfo(historyFile.getPath(), new Path(
              historyFile.getPath().getParent(), confFileName), null,
              jobIndexInfo, true);
        }
        deleteJobFromDone(fileInfo);
      } else {
        halted = true;
        break;
      }
    }
    if (!halted) {
      deleteDir(serialDir);
      removeDirectoryFromSerialNumberIndex(serialDir.getPath());
      existingDoneSubdirs.remove(serialDir.getPath());
    } else {
      break; // Don't scan any more directories.
    }
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 48 | Source: HistoryFileManager.java


Example 7: processDoneFiles

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
protected void processDoneFiles(JobId jobId) throws IOException {

    final MetaInfo mi = fileMap.get(jobId);
    if (mi == null) {
      throw new IOException("No MetaInfo found for JobId: [" + jobId + "]");
    }

    if (mi.getHistoryFile() == null) {
      LOG.warn("No file for job-history with " + jobId + " found in cache!");
    }
    if (mi.getConfFile() == null) {
      LOG.warn("No file for jobconf with " + jobId + " found in cache!");
    }

    Path qualifiedSummaryDoneFile = writeSummaryFile(jobId, mi.getJobSummary());

    try {

      // Move historyFile to Done Folder.
      Path qualifiedDoneFile = null;
      if (mi.getHistoryFile() != null) {
        Path historyFile = mi.getHistoryFile();
        Path qualifiedLogFile = stagingDirFS.makeQualified(historyFile);
        String doneJobHistoryFileName =
            getTempFileName(FileNameIndexUtils.getDoneFileName(mi
                .getJobIndexInfo()));
        qualifiedDoneFile =
            doneDirFS.makeQualified(new Path(doneDirPrefixPath,
                doneJobHistoryFileName));
        moveToDoneNow(qualifiedLogFile, qualifiedDoneFile);
      }

      // Move confFile to Done Folder
      Path qualifiedConfDoneFile = null;
      if (mi.getConfFile() != null) {
        Path confFile = mi.getConfFile();
        Path qualifiedConfFile = stagingDirFS.makeQualified(confFile);
        String doneConfFileName =
            getTempFileName(JobHistoryUtils
                .getIntermediateConfFileName(jobId));
        qualifiedConfDoneFile =
            doneDirFS.makeQualified(new Path(doneDirPrefixPath,
                doneConfFileName));
        moveToDoneNow(qualifiedConfFile, qualifiedConfDoneFile);
      }
      
      moveTmpToDone(qualifiedSummaryDoneFile);
      moveTmpToDone(qualifiedConfDoneFile);
      moveTmpToDone(qualifiedDoneFile);

    } catch (IOException e) {
      LOG.error("Error closing writer for JobID: " + jobId);
      throw e;
    }
  }
 
Developer: Nextzero | Project: hadoop-2.6.0-cdh5.4.3 | Lines: 56 | Source: JobHistoryEventHandler.java
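
Note the pattern here: each file is first written under a temporary name (getTempFileName) and only published with moveTmpToDone after all copies succeed, so readers of the done directory never observe half-copied files. A minimal sketch of that publish step, assuming a "_tmp" suffix (the actual suffix is an implementation detail of the Hadoop version):

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpThenRename {
  /** Strip the assumed "_tmp" suffix and rename, publishing the file in one step. */
  static void moveTmpToDone(FileSystem fs, Path tmpPath) throws IOException {
    String name = tmpPath.getName();
    if (name.endsWith("_tmp")) {
      Path donePath = new Path(tmpPath.getParent(),
          name.substring(0, name.length() - "_tmp".length()));
      if (!fs.rename(tmpPath, donePath)) {
        throw new IOException("Could not rename " + tmpPath + " to " + donePath);
      }
    }
  }
}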


Example 8: clean

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
/**
 * Clean up older history files.
 * 
 * @throws IOException
 *           on any error trying to remove the entries.
 */
@SuppressWarnings("unchecked")
void clean() throws IOException {
  // TODO this should be replaced by something that knows about the directory
  // structure and will put less of a load on HDFS.
  long cutoff = System.currentTimeMillis() - maxHistoryAge;
  boolean halted = false;
  // TODO Delete YYYY/MM/DD directories.
  List<FileStatus> serialDirList = findTimestampedDirectories();
  // Sort in ascending order. Relies on YYYY/MM/DD/Serial
  Collections.sort(serialDirList);
  for (FileStatus serialDir : serialDirList) {
    List<FileStatus> historyFileList = scanDirectoryForHistoryFiles(
        serialDir.getPath(), doneDirFc);
    for (FileStatus historyFile : historyFileList) {
      JobIndexInfo jobIndexInfo = FileNameIndexUtils.getIndexInfo(historyFile
          .getPath().getName());
      long effectiveTimestamp = getEffectiveTimestamp(
          jobIndexInfo.getFinishTime(), historyFile);
      if (effectiveTimestamp <= cutoff) {
        HistoryFileInfo fileInfo = this.jobListCache.get(jobIndexInfo
            .getJobId());
        if (fileInfo == null) {
          String confFileName = JobHistoryUtils
              .getIntermediateConfFileName(jobIndexInfo.getJobId());

          fileInfo = new HistoryFileInfo(historyFile.getPath(), new Path(
              historyFile.getPath().getParent(), confFileName), null,
              jobIndexInfo, true);
        }
        deleteJobFromDone(fileInfo);
      } else {
        halted = true;
        break;
      }
    }
    if (!halted) {
      doneDirFc.delete(doneDirFc.makeQualified(serialDir.getPath()), true);
      removeDirectoryFromSerialNumberIndex(serialDir.getPath());
      existingDoneSubdirs.remove(serialDir.getPath());
    } else {
      break; // Don't scan any more directories.
    }
  }
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 51 | Source: HistoryFileManager.java


Example 9: testHistoryParsingForFailedAttempts

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
@Test(timeout = 30000)
public void testHistoryParsingForFailedAttempts() throws Exception {
  LOG.info("STARTING testHistoryParsingForFailedAttempts");
  try {
    Configuration conf = new Configuration();
    conf.setClass(
        CommonConfigurationKeysPublic.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY,
        MyResolver.class, DNSToSwitchMapping.class);
    RackResolver.init(conf);
    MRApp app = new MRAppWithHistoryWithFailedAttempt(2, 1, true, this
        .getClass().getName(), true);
    app.submit(conf);
    Job job = app.getContext().getAllJobs().values().iterator().next();
    JobId jobId = job.getID();
    app.waitForState(job, JobState.SUCCEEDED);

    // make sure all events are flushed
    app.waitForState(Service.STATE.STOPPED);

    String jobhistoryDir = JobHistoryUtils
        .getHistoryIntermediateDoneDirForUser(conf);
    JobHistory jobHistory = new JobHistory();
    jobHistory.init(conf);

    JobIndexInfo jobIndexInfo = jobHistory.getJobFileInfo(jobId)
        .getJobIndexInfo();
    String jobhistoryFileName = FileNameIndexUtils
        .getDoneFileName(jobIndexInfo);

    Path historyFilePath = new Path(jobhistoryDir, jobhistoryFileName);
    FSDataInputStream in = null;
    FileContext fc = null;
    try {
      fc = FileContext.getFileContext(conf);
      in = fc.open(fc.makeQualified(historyFilePath));
    } catch (IOException ioe) {
      LOG.info("Can not open history file: " + historyFilePath, ioe);
      throw (new Exception("Can not open History File"));
    }

    JobHistoryParser parser = new JobHistoryParser(in);
    JobInfo jobInfo = parser.parse();
    Exception parseException = parser.getParseException();
    Assert.assertNull("Caught an expected exception " + parseException,
        parseException);
    int noOffailedAttempts = 0;
    Map<TaskID, TaskInfo> allTasks = jobInfo.getAllTasks();
    for (Task task : job.getTasks().values()) {
      TaskInfo taskInfo = allTasks.get(TypeConverter.fromYarn(task.getID()));
      for (TaskAttempt taskAttempt : task.getAttempts().values()) {
        TaskAttemptInfo taskAttemptInfo = taskInfo.getAllTaskAttempts().get(
            TypeConverter.fromYarn((taskAttempt.getID())));
        // Verify rack-name for all task attempts
        Assert.assertEquals("rack-name is incorrect",
            taskAttemptInfo.getRackname(), RACK_NAME);
        if (taskAttemptInfo.getTaskStatus().equals("FAILED")) {
          noOffailedAttempts++;
        }
      }
    }
    Assert.assertEquals("No of Failed tasks doesn't match.", 2,
        noOffailedAttempts);
  } finally {
    LOG.info("FINISHED testHistoryParsingForFailedAttempts");
  }
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 67 | Source: TestJobHistoryParsing.java
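
Outside a test harness, the parsing flow exercised above boils down to a few lines. A hedged standalone sketch (the history file path is supplied by the caller; in the test it comes from FileNameIndexUtils.getDoneFileName):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser;
import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo;

public class ParseHistoryFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileContext fc = FileContext.getFileContext(conf);
    Path historyFilePath = new Path(args[0]); // path to a .jhist done file

    JobHistoryParser parser = new JobHistoryParser(
        fc.open(fc.makeQualified(historyFilePath)));
    JobInfo jobInfo = parser.parse();
    if (parser.getParseException() != null) {
      throw parser.getParseException();
    }
    System.out.println("Parsed job " + jobInfo.getJobId()
        + " with " + jobInfo.getAllTasks().size() + " tasks");
  }
}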


Example 10: testCountersForFailedTask

import org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils; // import the required package/class
@Test(timeout = 60000)
public void testCountersForFailedTask() throws Exception {
  LOG.info("STARTING testCountersForFailedTask");
  try {
    Configuration conf = new Configuration();
    conf.setClass(
        CommonConfigurationKeysPublic.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY,
        MyResolver.class, DNSToSwitchMapping.class);
    RackResolver.init(conf);
    MRApp app = new MRAppWithHistoryWithFailedTask(2, 1, true, this
        .getClass().getName(), true);
    app.submit(conf);
    Job job = app.getContext().getAllJobs().values().iterator().next();
    JobId jobId = job.getID();
    app.waitForState(job, JobState.FAILED);

    // make sure all events are flushed
    app.waitForState(Service.STATE.STOPPED);

    String jobhistoryDir = JobHistoryUtils
        .getHistoryIntermediateDoneDirForUser(conf);
    JobHistory jobHistory = new JobHistory();
    jobHistory.init(conf);

    JobIndexInfo jobIndexInfo = jobHistory.getJobFileInfo(jobId)
        .getJobIndexInfo();
    String jobhistoryFileName = FileNameIndexUtils
        .getDoneFileName(jobIndexInfo);

    Path historyFilePath = new Path(jobhistoryDir, jobhistoryFileName);
    FSDataInputStream in = null;
    FileContext fc = null;
    try {
      fc = FileContext.getFileContext(conf);
      in = fc.open(fc.makeQualified(historyFilePath));
    } catch (IOException ioe) {
      LOG.info("Can not open history file: " + historyFilePath, ioe);
      throw (new Exception("Can not open History File"));
    }

    JobHistoryParser parser = new JobHistoryParser(in);
    JobInfo jobInfo = parser.parse();
    Exception parseException = parser.getParseException();
    Assert.assertNull("Caught an expected exception " + parseException,
        parseException);
    for (Map.Entry<TaskID, TaskInfo> entry : jobInfo.getAllTasks().entrySet()) {
      TaskId yarnTaskID = TypeConverter.toYarn(entry.getKey());
      CompletedTask ct = new CompletedTask(yarnTaskID, entry.getValue());
      Assert.assertNotNull("completed task report has null counters", ct
          .getReport().getCounters());
    }
  } finally {
    LOG.info("FINISHED testCountersForFailedTask");
  }
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 56 | Source: TestJobHistoryParsing.java



Note: the org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils examples above were collected from open-source projects on GitHub, MSDocs, and similar code-hosting platforms. Copyright remains with the original authors; consult each project's License before using or redistributing the code, and do not republish without permission.

