
Java PersistentLongFile Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.util.PersistentLongFile. If you are wondering what PersistentLongFile does, how to use it, or where to find usage examples, the curated class code examples below should help.



The PersistentLongFile class belongs to the org.apache.hadoop.hdfs.util package. Ten code examples of the class are shown below, sorted by popularity by default.
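
As a quick orientation before the examples, here is a minimal sketch of the instance API they rely on: the constructor takes a backing file plus a default value, get() returns the persisted long, and set() writes a new value through to disk. The file path below is hypothetical, and the assumption that get() falls back to the supplied default while the file is absent follows the readFile contract quoted in Example 10.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.hdfs.util.PersistentLongFile;

public class PersistentLongFileSketch {
  public static void main(String[] args) throws IOException {
    // Hypothetical backing file; any writable path will do.
    File f = new File("/tmp/last-promised-epoch");

    // get() falls back to the default (0 here) while the file is absent.
    PersistentLongFile epoch = new PersistentLongFile(f, 0);
    System.out.println("initial: " + epoch.get());

    // set() persists the new value to disk.
    epoch.set(42L);
    System.out.println("after set: " + epoch.get());
  }
}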

Example 1: assertEpochFilesCopied

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
private static void assertEpochFilesCopied(MiniQJMHACluster jnCluster)
    throws IOException {
  for (int i = 0; i < 3; i++) {
    File journalDir = jnCluster.getJournalCluster().getJournalDir(i, "ns1");
    File currDir = new File(journalDir, "current");
    File prevDir = new File(journalDir, "previous");
    for (String fileName : new String[]{ Journal.LAST_PROMISED_FILENAME,
        Journal.LAST_WRITER_EPOCH }) {
      File prevFile = new File(prevDir, fileName);
      // The prev file may not exist, e.g. if there has never been a
      // writer before the upgrade.
      if (prevFile.exists()) {
        PersistentLongFile prevLongFile = new PersistentLongFile(prevFile, -10);
        PersistentLongFile currLongFile = new PersistentLongFile(new File(currDir,
            fileName), -11);
        assertTrue("Value in " + fileName + " has decreased on upgrade in "
            + journalDir, prevLongFile.get() <= currLongFile.get());
      }
    }
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 22, Source file: TestDFSUpgradeWithHA.java


Example 2: writeTransactionIdFile

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
/**
 * Write the last transaction ID into a separate file.
 * @param sd storage directory
 * @param txid transaction ID to record
 * @throws IOException
 */
void writeTransactionIdFile(StorageDirectory sd, long txid) throws IOException {
  Preconditions.checkArgument(txid >= 0, "bad txid: " + txid);
  
  File txIdFile = getStorageFile(sd, NameNodeFile.SEEN_TXID);
  PersistentLongFile.writeFile(txIdFile, txid);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 12, Source file: NNStorage.java
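
The two static helpers pair up: PersistentLongFile.writeFile() persists a value and PersistentLongFile.readFile() reads it back, falling back to a supplied default when the file is missing (readFile also appears in Example 10 below). A minimal round-trip sketch with a hypothetical path:

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.hdfs.util.PersistentLongFile;

public class SeenTxidRoundTrip {
  public static void main(String[] args) throws IOException {
    File txIdFile = new File("/tmp/seen_txid"); // hypothetical path

    // Write a transaction ID, then read it back; the second argument
    // to readFile is the default used when the file does not exist.
    PersistentLongFile.writeFile(txIdFile, 1234L);
    long txid = PersistentLongFile.readFile(txIdFile, 0);
    System.out.println("seen_txid = " + txid); // prints 1234
  }
}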


Example 3: refreshCachedData

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
/**
 * Reload any data that may have been cached. This is necessary
 * when we first load the Journal, but also after any formatting
 * operation, since the cached data is no longer relevant.
 */
private synchronized void refreshCachedData() {
  IOUtils.closeStream(committedTxnId);
  
  File currentDir = storage.getSingularStorageDir().getCurrentDir();
  this.lastPromisedEpoch = new PersistentLongFile(
      new File(currentDir, LAST_PROMISED_FILENAME), 0);
  this.lastWriterEpoch = new PersistentLongFile(
      new File(currentDir, LAST_WRITER_EPOCH), 0);
  this.committedTxnId = new BestEffortLongFile(
      new File(currentDir, COMMITTED_TXID_FILENAME),
      HdfsConstants.INVALID_TXID);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 18, Source file: Journal.java
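
The doc comment explains the motivation for constructing fresh instances rather than reusing old ones: PersistentLongFile appears to cache its value in memory after the first read, so after a formatting operation the old instances would hand back stale values. A small sketch of that behavior; the caching detail is an assumption inferred from the doc comment, not something these examples demonstrate directly.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.hdfs.util.PersistentLongFile;

public class RefreshSketch {
  public static void main(String[] args) throws IOException {
    File f = new File("/tmp/epoch"); // hypothetical path
    PersistentLongFile.writeFile(f, 1L);

    PersistentLongFile cached = new PersistentLongFile(f, 0);
    System.out.println(cached.get()); // 1; now cached in memory

    // Simulate a re-format: the file is rewritten behind our back.
    PersistentLongFile.writeFile(f, 7L);

    // Assumption: the old instance keeps returning its cached value...
    System.out.println(cached.get()); // still 1 (stale)

    // ...which is why refreshCachedData() builds fresh instances.
    PersistentLongFile fresh = new PersistentLongFile(f, 0);
    System.out.println(fresh.get()); // 7
  }
}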


Example 4: refreshCachedData

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
/**
 * Reload any data that may have been cached. This is necessary
 * when we first load the Journal, but also after any formatting
 * operation, since the cached data is no longer relevant.
 */
private synchronized void refreshCachedData() {
  IOUtils.closeStream(committedTxnId);
  
  File currentDir = storage.getSingularStorageDir().getCurrentDir();
  this.lastPromisedEpoch = new PersistentLongFile(
      new File(currentDir, LAST_PROMISED_FILENAME), 0);
  this.lastWriterEpoch = new PersistentLongFile(
      new File(currentDir, LAST_WRITER_EPOCH), 0);
  this.committedTxnId = new BestEffortLongFile(
      new File(currentDir, COMMITTED_TXID_FILENAME),
      HdfsServerConstants.INVALID_TXID);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 18, Source file: Journal.java


Example 5: refreshCachedData

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
/**
 * Reload any data that may have been cached. This is necessary
 * when we first load the Journal, but also after any formatting
 * operation, since the cached data is no longer relevant.
 * @throws IOException 
 */
private synchronized void refreshCachedData() throws IOException {
  IOUtils.closeStream(committedTxnId);
  
  File currentDir = journalStorage.getSingularStorageDir().getCurrentDir();
  this.lastPromisedEpoch = new PersistentLongFile(
      new File(currentDir, LAST_PROMISED_FILENAME), 0);
  this.lastWriterEpoch = new PersistentLongFile(
      new File(currentDir, LAST_WRITER_EPOCH), 0);
  this.committedTxnId = new BestEffortLongFile(
      new File(currentDir, COMMITTED_TXID_FILENAME),
      HdfsConstants.INVALID_TXID);
  metrics.lastWriterEpoch.set(lastWriterEpoch.get());
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 20, Source file: Journal.java


Example 6: writeTransactionIdFile

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
/**
 * Write the last transaction ID into a separate file.
 *
 * @param sd storage directory
 * @param txid transaction ID to record
 * @throws IOException
 */
void writeTransactionIdFile(StorageDirectory sd, long txid) throws IOException {
  Preconditions.checkArgument(txid >= 0, "bad txid: " + txid);
  
  File txIdFile = getStorageFile(sd, NameNodeFile.SEEN_TXID);
  PersistentLongFile.writeFile(txIdFile, txid);
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines of code: 13, Source file: NNStorage.java


Example 7: doUpgrade

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
public synchronized void doUpgrade(StorageInfo sInfo) throws IOException {
  long oldCTime = storage.getCTime();
  storage.cTime = sInfo.cTime;
  int oldLV = storage.getLayoutVersion();
  storage.layoutVersion = sInfo.layoutVersion;
  LOG.info("Starting upgrade of edits directory: "
      + ".\n   old LV = " + oldLV
      + "; old CTime = " + oldCTime
      + ".\n   new LV = " + storage.getLayoutVersion()
      + "; new CTime = " + storage.getCTime());
  storage.getJournalManager().doUpgrade(storage);
  storage.createPaxosDir();
  
  // Copy over the contents of the epoch data files to the new dir.
  File currentDir = storage.getSingularStorageDir().getCurrentDir();
  File previousDir = storage.getSingularStorageDir().getPreviousDir();
  
  PersistentLongFile prevLastPromisedEpoch = new PersistentLongFile(
      new File(previousDir, LAST_PROMISED_FILENAME), 0);
  PersistentLongFile prevLastWriterEpoch = new PersistentLongFile(
      new File(previousDir, LAST_WRITER_EPOCH), 0);
  
  lastPromisedEpoch = new PersistentLongFile(
      new File(currentDir, LAST_PROMISED_FILENAME), 0);
  lastWriterEpoch = new PersistentLongFile(
      new File(currentDir, LAST_WRITER_EPOCH), 0);
  
  lastPromisedEpoch.set(prevLastPromisedEpoch.get());
  lastWriterEpoch.set(prevLastWriterEpoch.get());
}
 
Developer ID: Seagate, Project: hadoop-on-lustre2, Lines of code: 31, Source file: Journal.java


Example 8: doUpgrade

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
public synchronized void doUpgrade(StorageInfo sInfo) throws IOException {
  long oldCTime = storage.getCTime();
  storage.cTime = sInfo.cTime;
  int oldLV = storage.getLayoutVersion();
  storage.layoutVersion = sInfo.layoutVersion;
  LOG.info("Starting upgrade of edits directory: "
      + ".\n   old LV = " + oldLV
      + "; old CTime = " + oldCTime
      + ".\n   new LV = " + storage.getLayoutVersion()
      + "; new CTime = " + storage.getCTime());
  storage.getJournalManager().doUpgrade(storage);
  storage.createPaxosDir();
  
  // Copy over the contents of the epoch data files to the new dir.
  File currentDir = storage.getSingularStorageDir().getCurrentDir();
  File previousDir = storage.getSingularStorageDir().getPreviousDir();
  
  PersistentLongFile prevLastPromisedEpoch = new PersistentLongFile(
      new File(previousDir, LAST_PROMISED_FILENAME), 0);
  PersistentLongFile prevLastWriterEpoch = new PersistentLongFile(
      new File(previousDir, LAST_WRITER_EPOCH), 0);
  BestEffortLongFile prevCommittedTxnId = new BestEffortLongFile(
      new File(previousDir, COMMITTED_TXID_FILENAME),
      HdfsConstants.INVALID_TXID);

  lastPromisedEpoch = new PersistentLongFile(
      new File(currentDir, LAST_PROMISED_FILENAME), 0);
  lastWriterEpoch = new PersistentLongFile(
      new File(currentDir, LAST_WRITER_EPOCH), 0);
  committedTxnId = new BestEffortLongFile(
      new File(currentDir, COMMITTED_TXID_FILENAME),
      HdfsConstants.INVALID_TXID);

  try {
    lastPromisedEpoch.set(prevLastPromisedEpoch.get());
    lastWriterEpoch.set(prevLastWriterEpoch.get());
    committedTxnId.set(prevCommittedTxnId.get());
  } finally {
    IOUtils.cleanup(LOG, prevCommittedTxnId);
  }
}
 
Developer ID: naver, Project: hadoop, Lines of code: 42, Source file: Journal.java


Example 9: doUpgrade

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
public synchronized void doUpgrade(StorageInfo sInfo) throws IOException {
  long oldCTime = storage.getCTime();
  storage.cTime = sInfo.cTime;
  int oldLV = storage.getLayoutVersion();
  storage.layoutVersion = sInfo.layoutVersion;
  LOG.info("Starting upgrade of edits directory: "
      + ".\n   old LV = " + oldLV
      + "; old CTime = " + oldCTime
      + ".\n   new LV = " + storage.getLayoutVersion()
      + "; new CTime = " + storage.getCTime());
  storage.getJournalManager().doUpgrade(storage);
  storage.createPaxosDir();
  
  // Copy over the contents of the epoch data files to the new dir.
  File currentDir = storage.getSingularStorageDir().getCurrentDir();
  File previousDir = storage.getSingularStorageDir().getPreviousDir();
  
  PersistentLongFile prevLastPromisedEpoch = new PersistentLongFile(
      new File(previousDir, LAST_PROMISED_FILENAME), 0);
  PersistentLongFile prevLastWriterEpoch = new PersistentLongFile(
      new File(previousDir, LAST_WRITER_EPOCH), 0);
  BestEffortLongFile prevCommittedTxnId = new BestEffortLongFile(
      new File(previousDir, COMMITTED_TXID_FILENAME),
      HdfsServerConstants.INVALID_TXID);

  lastPromisedEpoch = new PersistentLongFile(
      new File(currentDir, LAST_PROMISED_FILENAME), 0);
  lastWriterEpoch = new PersistentLongFile(
      new File(currentDir, LAST_WRITER_EPOCH), 0);
  committedTxnId = new BestEffortLongFile(
      new File(currentDir, COMMITTED_TXID_FILENAME),
      HdfsServerConstants.INVALID_TXID);

  try {
    lastPromisedEpoch.set(prevLastPromisedEpoch.get());
    lastWriterEpoch.set(prevLastWriterEpoch.get());
    committedTxnId.set(prevCommittedTxnId.get());
  } finally {
    IOUtils.cleanup(LOG, prevCommittedTxnId);
  }
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 42, Source file: Journal.java
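
Examples 7 through 9 all repeat the same carry-forward pattern for each epoch file: read the value from previous/ and write it to the file of the same name under current/. A hedged helper that captures the pattern; the class and method names here are illustrative, not part of Hadoop:

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.hdfs.util.PersistentLongFile;

public final class EpochFileUpgradeUtil {
  private EpochFileUpgradeUtil() {}

  /**
   * Carry a persisted long forward across an upgrade: read it from
   * previousDir (falling back to defaultVal if absent) and write it
   * under the same file name in currentDir.
   */
  static void carryForward(File previousDir, File currentDir,
      String fileName, long defaultVal) throws IOException {
    PersistentLongFile prev =
        new PersistentLongFile(new File(previousDir, fileName), defaultVal);
    PersistentLongFile curr =
        new PersistentLongFile(new File(currentDir, fileName), defaultVal);
    curr.set(prev.get());
  }
}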


Example 10: readTransactionIdFile

import org.apache.hadoop.hdfs.util.PersistentLongFile; // import the required package/class
/**
 * Determine the last transaction ID noted in this storage directory.
 * This txid is stored in a special seen_txid file since it might not
 * correspond to the latest image or edit log. For example, an image-only
 * directory will have this txid incremented when edits logs roll, even
 * though the edits logs are in a different directory.
 *
 * @param sd StorageDirectory to check
 * @return If file exists and can be read, last recorded txid. If not, 0L.
 * @throws IOException On errors processing file pointed to by sd
 */
static long readTransactionIdFile(StorageDirectory sd) throws IOException {
  File txidFile = getStorageFile(sd, NameNodeFile.SEEN_TXID);
  return PersistentLongFile.readFile(txidFile, 0);
}
 
Developer ID: naver, Project: hadoop, Lines of code: 16, Source file: NNStorage.java



Note: The org.apache.hadoop.hdfs.util.PersistentLongFile examples in this article are compiled from open-source projects hosted on GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from projects contributed by the open-source community; copyright remains with the original authors, and redistribution or use should follow each project's License. Please do not reproduce without permission.

