
Java JournalOutOfSyncException Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException. If you are wondering what the JournalOutOfSyncException class is for, how to use it, or want to see concrete examples of it in real code, the curated class code examples below should help.



The JournalOutOfSyncException class belongs to the org.apache.hadoop.hdfs.qjournal.protocol package. Twelve code examples of the class are shown below, sorted by popularity by default.
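
JournalOutOfSyncException extends IOException and is thrown by a JournalNode when a writer's view of the edit log has diverged from the node's. Before the individual examples, here is a minimal, hypothetical sketch of how a caller typically reacts to it (JournalWriteGuard and sendEditsToJournalNode are illustrative names, not part of Hadoop):

import java.io.IOException;
import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException;

public class JournalWriteGuard {
  // Hypothetical stand-in for any QJournalProtocol write RPC
  // (journal(), finalizeLogSegment(), ...).
  private void sendEditsToJournalNode() throws IOException {
    throw new JournalOutOfSyncException("Journal disabled until next roll");
  }

  public void writeBatch() {
    try {
      sendEditsToJournalNode();
    } catch (JournalOutOfSyncException e) {
      // This JournalNode has diverged from the writer's view of the log;
      // stop sending edits to it until the next log roll re-syncs it.
      System.err.println("JournalNode out of sync: " + e.getMessage());
    } catch (IOException e) {
      // Any other RPC or storage failure.
      System.err.println("Journal write failed: " + e);
    }
  }
}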

Example 1: throwIfOutOfSync

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
private void throwIfOutOfSync()
    throws JournalOutOfSyncException, IOException {
  if (isOutOfSync()) {
    // Even if we're out of sync, it's useful to send an RPC
    // to the remote node in order to update its lag metrics, etc.
    heartbeatIfNecessary();
    throw new JournalOutOfSyncException(
        "Journal disabled until next roll");
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 11, Source file: IPCLoggerChannel.java
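
In IPCLoggerChannel this guard typically runs at the top of each edit-sending call, before the RPC goes out. A simplified sketch of that pattern (doSendEdits() is a hypothetical stand-in for the real RPC body):

private void sendEditsGuarded() throws IOException {
  throwIfOutOfSync();   // fail fast if this logger is known to be behind
  doSendEdits();        // only reached while the journal is in sync
}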


Example 2: checkSync

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * @throws JournalOutOfSyncException if the given expression is not true.
 * The message of the exception is formatted using the 'msg' and
 * 'formatArgs' parameters.
 */
private void checkSync(boolean expression, String msg,
    Object... formatArgs) throws JournalOutOfSyncException {
  if (!expression) {
    throw new JournalOutOfSyncException(String.format(msg, formatArgs));
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 12, Source file: Journal.java
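
Call sites pass a boolean invariant plus a printf-style message and its arguments, as in Example 7 below:

checkSync(nextTxId == endTxId + 1,
    "Trying to finalize in-progress log segment %s to end at " +
    "txid %s but only written up to txid %s",
    startTxId, endTxId, nextTxId - 1);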


Example 3: testFinalizeMissingSegment

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Ensure that finalizing a segment which doesn't exist throws the
 * appropriate exception.
 */
@Test (timeout = 10000)
public void testFinalizeMissingSegment() throws Exception {
  journal.newEpoch(FAKE_NSINFO, 1);
  try {
    journal.finalizeLogSegment(makeRI(1), 1000, 1001);
    fail("did not fail to finalize");
  } catch (JournalOutOfSyncException e) {
    GenericTestUtils.assertExceptionContains(
        "No log file to finalize at transaction ID 1000", e);
  }
}
 
Developer ID: naver, Project: hadoop, Lines: 16, Source file: TestJournal.java
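
The makeRI() helper used above is defined elsewhere in TestJournal; it packs a journal id, epoch, and IPC serial number into a RequestInfo. A sketch of such a helper (the JID constant and the epoch value 1 are assumptions based on the test setup):

private static RequestInfo makeRI(int serial) {
  // journal id, epoch, IPC serial number, committed txid
  return new RequestInfo(JID, 1, serial, 0);
}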


Example 4: testFinalizeMissingSegment

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Ensure that finalizing a segment which doesn't exist throws the appropriate
 * exception.
 */
@Test
public void testFinalizeMissingSegment() throws Exception {
  journal.newEpoch(FAKE_NSINFO, 1);
  try {
    journal.finalizeLogSegment(makeRI(1), 1000, 1001);
    fail("did not fail to finalize");
  } catch (JournalOutOfSyncException e) {
    GenericTestUtils.assertExceptionContains(
        "No log file to finalize at transaction ID 1000", e);
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 16, Source file: TestJournal.java


Example 5: throwIfOutOfSync

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
private void throwIfOutOfSync()
    throws JournalOutOfSyncException, IOException {
  if (isOutOfSync()) {
    // Even if we're out of sync, it's useful to send an RPC
    // to the remote node in order to update its lag metrics, etc.
    heartbeatIfNecessary();
    throw new JournalOutOfSyncException(
        "Journal disabled until next roll");
  } 
  metrics.setOutOfSync(outOfSync);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 12, Source file: IPCLoggerChannel.java


Example 6: journal

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Write a batch of edits to the journal.
 * {@see QJournalProtocol#journal(RequestInfo, long, long, int, byte[])}
 */
synchronized void journal(RequestInfo reqInfo,
    long segmentTxId, long firstTxnId,
    int numTxns, byte[] records) throws IOException {
  checkFormatted();
  checkWriteRequest(reqInfo);

  checkSync(curSegment != null,
      "Can't write, no segment open");
  
  if (curSegmentTxId != segmentTxId) {
    // Sanity check: it is possible that the writer will fail IPCs
    // on both the finalize() and then the start() of the next segment.
    // This could cause us to continue writing to an old segment
    // instead of rolling to a new one, which breaks one of the
    // invariants in the design. If it happens, abort the segment
    // and throw an exception.
    JournalOutOfSyncException e = new JournalOutOfSyncException(
        "Writer out of sync: it thinks it is writing segment " + segmentTxId
        + " but current segment is " + curSegmentTxId);
    abortCurSegment();
    throw e;
  }
    
  checkSync(nextTxId == firstTxnId,
      "Can't write txid " + firstTxnId + " expecting nextTxId=" + nextTxId);
  
  long lastTxnId = firstTxnId + numTxns - 1;
  if (LOG.isTraceEnabled()) {
    LOG.trace("Writing txid " + firstTxnId + "-" + lastTxnId);
  }

  // If the edit has already been marked as committed, we know
  // it has been fsynced on a quorum of other nodes, and we are
  // "catching up" with the rest. Hence we do not need to fsync.
  boolean isLagging = lastTxnId <= committedTxnId.get();
  boolean shouldFsync = !isLagging;
  
  curSegment.writeRaw(records, 0, records.length);
  curSegment.setReadyToFlush();
  StopWatch sw = new StopWatch();
  sw.start();
  curSegment.flush(shouldFsync);
  sw.stop();

  long nanoSeconds = sw.now();
  metrics.addSync(
      TimeUnit.MICROSECONDS.convert(nanoSeconds, TimeUnit.NANOSECONDS));
  long milliSeconds = TimeUnit.MILLISECONDS.convert(
      nanoSeconds, TimeUnit.NANOSECONDS);

  if (milliSeconds > WARN_SYNC_MILLIS_THRESHOLD) {
    LOG.warn("Sync of transaction range " + firstTxnId + "-" + lastTxnId +
             " took " + milliSeconds + "ms");
  }

  if (isLagging) {
    // This batch of edits has already been committed on a quorum of other
    // nodes. So, we are in "catch up" mode. This gets its own metric.
    metrics.batchesWrittenWhileLagging.incr(1);
  }
  
  metrics.batchesWritten.incr(1);
  metrics.bytesWritten.incr(records.length);
  metrics.txnsWritten.incr(numTxns);
  
  highestWrittenTxId = lastTxnId;
  nextTxId = lastTxnId + 1;
}
 
Developer ID: naver, Project: hadoop, Lines: 73, Source file: Journal.java
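
A quick worked example of the bookkeeping above, under assumed values: with firstTxnId = 100 and numTxns = 5 the batch covers txids 100-104; if committedTxnId is already 200 the node is lagging behind the quorum, so the flush skips the fsync.

public class JournalBatchMath {
  public static void main(String[] args) {
    // Worked example of the arithmetic in journal() (values are assumptions).
    long firstTxnId = 100;
    int numTxns = 5;
    long committedTxnId = 200;   // highest txid already committed on a quorum

    long lastTxnId = firstTxnId + numTxns - 1;        // 104
    boolean isLagging = lastTxnId <= committedTxnId;  // true: catching up
    boolean shouldFsync = !isLagging;                 // false: skip the fsync
    long nextTxId = lastTxnId + 1;                    // 105

    System.out.printf("last=%d lagging=%b fsync=%b next=%d%n",
        lastTxnId, isLagging, shouldFsync, nextTxId);
  }
}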


Example 7: finalizeLogSegment

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Finalize the log segment at the given transaction ID.
 */
public synchronized void finalizeLogSegment(RequestInfo reqInfo, long startTxId,
    long endTxId) throws IOException {
  checkFormatted();
  checkRequest(reqInfo);

  boolean needsValidation = true;

  // Finalizing the log that the writer was just writing.
  if (startTxId == curSegmentTxId) {
    if (curSegment != null) {
      curSegment.close();
      curSegment = null;
      curSegmentTxId = HdfsConstants.INVALID_TXID;
    }
    
    checkSync(nextTxId == endTxId + 1,
        "Trying to finalize in-progress log segment %s to end at " +
        "txid %s but only written up to txid %s",
        startTxId, endTxId, nextTxId - 1);
    // No need to validate the edit log if the client is finalizing
    // the log segment that it was just writing to.
    needsValidation = false;
  }
  
  FileJournalManager.EditLogFile elf = fjm.getLogFile(startTxId);
  if (elf == null) {
    throw new JournalOutOfSyncException("No log file to finalize at " +
        "transaction ID " + startTxId);
  }

  if (elf.isInProgress()) {
    if (needsValidation) {
      LOG.info("Validating log segment " + elf.getFile() + " about to be " +
          "finalized");
      elf.scanLog();

      checkSync(elf.getLastTxId() == endTxId,
          "Trying to finalize in-progress log segment %s to end at " +
          "txid %s but log %s on disk only contains up to txid %s",
          startTxId, endTxId, elf.getFile(), elf.getLastTxId());
    }
    fjm.finalizeLogSegment(startTxId, endTxId);
  } else {
    Preconditions.checkArgument(endTxId == elf.getLastTxId(),
        "Trying to re-finalize already finalized log " +
            elf + " with different endTxId " + endTxId);
  }

  // Once logs are finalized, a different length will never be decided.
  // During recovery, we treat a finalized segment the same as an accepted
  // recovery. Thus, we no longer need to keep track of the previously-
  // accepted decision. The existence of the finalized log segment is enough.
  purgePaxosDecision(elf.getFirstTxId());
}
 
Developer ID: naver, Project: hadoop, Lines: 58, Source file: Journal.java
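
For contrast with the missing-segment failure in Examples 3 and 4, a hedged sketch of the happy path a test might drive; the startLogSegment/journal signatures and the QJMTestUtil.createTxnData helper are assumed from the TestJournal code in these same repositories:

journal.newEpoch(FAKE_NSINFO, 1);                    // become writer for epoch 1
journal.startLogSegment(makeRI(1), 1,
    NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION);   // open a segment at txid 1
journal.journal(makeRI(2), 1, 1, 2,                  // write txids 1-2
    QJMTestUtil.createTxnData(1, 2));
journal.finalizeLogSegment(makeRI(3), 1, 2);         // finalize segment [1, 2]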


Example 8: journal

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Write a batch of edits to the journal.
 * {@see QJournalProtocol#journal(RequestInfo, long, long, int, byte[])}
 */
synchronized void journal(RequestInfo reqInfo,
    long segmentTxId, long firstTxnId,
    int numTxns, byte[] records) throws IOException {
  checkFormatted();
  checkWriteRequest(reqInfo);

  checkSync(curSegment != null,
      "Can't write, no segment open");
  
  if (curSegmentTxId != segmentTxId) {
    // Sanity check: it is possible that the writer will fail IPCs
    // on both the finalize() and then the start() of the next segment.
    // This could cause us to continue writing to an old segment
    // instead of rolling to a new one, which breaks one of the
    // invariants in the design. If it happens, abort the segment
    // and throw an exception.
    JournalOutOfSyncException e = new JournalOutOfSyncException(
        "Writer out of sync: it thinks it is writing segment " + segmentTxId
        + " but current segment is " + curSegmentTxId);
    abortCurSegment();
    throw e;
  }
    
  checkSync(nextTxId == firstTxnId,
      "Can't write txid " + firstTxnId + " expecting nextTxId=" + nextTxId);
  
  long lastTxnId = firstTxnId + numTxns - 1;
  if (LOG.isTraceEnabled()) {
    LOG.trace("Writing txid " + firstTxnId + "-" + lastTxnId);
  }

  // If the edit has already been marked as committed, we know
  // it has been fsynced on a quorum of other nodes, and we are
  // "catching up" with the rest. Hence we do not need to fsync.
  boolean isLagging = lastTxnId <= committedTxnId.get();
  boolean shouldFsync = !isLagging;
  
  curSegment.writeRaw(records, 0, records.length);
  curSegment.setReadyToFlush();
  StopWatch sw = new StopWatch();
  sw.start();
  curSegment.flush(shouldFsync);
  sw.stop();

  long nanoSeconds = sw.now();
  metrics.addSync(
      TimeUnit.MICROSECONDS.convert(nanoSeconds, TimeUnit.NANOSECONDS));
  long milliSeconds = TimeUnit.MILLISECONDS.convert(
      nanoSeconds, TimeUnit.NANOSECONDS);

  if (milliSeconds > WARN_SYNC_MILLIS_THRESHOLD) {
    LOG.warn("Sync of transaction range " + firstTxnId + "-" + lastTxnId +
             " took " + milliSeconds + "ms");
  }

  if (isLagging) {
    // This batch of edits has already been committed on a quorum of other
    // nodes. So, we are in "catch up" mode. This gets its own metric.
    metrics.batchesWrittenWhileLagging.incr(1);
  }
  
  metrics.batchesWritten.incr(1);
  metrics.bytesWritten.incr(records.length);
  metrics.txnsWritten.incr(numTxns);
  
  updateHighestWrittenTxId(lastTxnId);
  nextTxId = lastTxnId + 1;
  lastJournalTimestamp = Time.now();
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 74, Source file: Journal.java


Example 9: finalizeLogSegment

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Finalize the log segment at the given transaction ID.
 */
public synchronized void finalizeLogSegment(RequestInfo reqInfo, long startTxId,
    long endTxId) throws IOException {
  checkFormatted();
  checkRequest(reqInfo);

  boolean needsValidation = true;

  // Finalizing the log that the writer was just writing.
  if (startTxId == curSegmentTxId) {
    if (curSegment != null) {
      curSegment.close();
      curSegment = null;
      curSegmentTxId = HdfsServerConstants.INVALID_TXID;
    }
    
    checkSync(nextTxId == endTxId + 1,
        "Trying to finalize in-progress log segment %s to end at " +
        "txid %s but only written up to txid %s",
        startTxId, endTxId, nextTxId - 1);
    // No need to validate the edit log if the client is finalizing
    // the log segment that it was just writing to.
    needsValidation = false;
  }
  
  FileJournalManager.EditLogFile elf = fjm.getLogFile(startTxId);
  if (elf == null) {
    throw new JournalOutOfSyncException("No log file to finalize at " +
        "transaction ID " + startTxId);
  }

  if (elf.isInProgress()) {
    if (needsValidation) {
      LOG.info("Validating log segment " + elf.getFile() + " about to be " +
          "finalized");
      elf.scanLog(Long.MAX_VALUE, false);

      checkSync(elf.getLastTxId() == endTxId,
          "Trying to finalize in-progress log segment %s to end at " +
          "txid %s but log %s on disk only contains up to txid %s",
          startTxId, endTxId, elf.getFile(), elf.getLastTxId());
    }
    fjm.finalizeLogSegment(startTxId, endTxId);
  } else {
    Preconditions.checkArgument(endTxId == elf.getLastTxId(),
        "Trying to re-finalize already finalized log " +
            elf + " with different endTxId " + endTxId);
  }

  // Once logs are finalized, a different length will never be decided.
  // During recovery, we treat a finalized segment the same as an accepted
  // recovery. Thus, we no longer need to keep track of the previously-
  // accepted decision. The existence of the finalized log segment is enough.
  purgePaxosDecision(elf.getFirstTxId());
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 58, Source file: Journal.java


Example 10: journal

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Write a batch of edits to the journal.
 * {@see QJournalProtocol#journal(RequestInfo, long, long, int, byte[])}
 */
synchronized void journal(RequestInfo reqInfo,
    long segmentTxId, long firstTxnId,
    int numTxns, byte[] records) throws IOException {
  checkFormatted();
  checkWriteRequest(reqInfo);

  checkSync(curSegment != null,
      "Can't write, no segment open");
  
  if (curSegmentTxId != segmentTxId) {
    // Sanity check: it is possible that the writer will fail IPCs
    // on both the finalize() and then the start() of the next segment.
    // This could cause us to continue writing to an old segment
    // instead of rolling to a new one, which breaks one of the
    // invariants in the design. If it happens, abort the segment
    // and throw an exception.
    JournalOutOfSyncException e = new JournalOutOfSyncException(
        "Writer out of sync: it thinks it is writing segment " + segmentTxId
        + " but current segment is " + curSegmentTxId);
    abortCurSegment();
    throw e;
  }
    
  checkSync(nextTxId == firstTxnId,
      "Can't write txid " + firstTxnId + " expecting nextTxId=" + nextTxId);
  
  long lastTxnId = firstTxnId + numTxns - 1;
  if (LOG.isTraceEnabled()) {
    LOG.trace("Writing txid " + firstTxnId + "-" + lastTxnId);
  }

  // If the edit has already been marked as committed, we know
  // it has been fsynced on a quorum of other nodes, and we are
  // "catching up" with the rest. Hence we do not need to fsync.
  boolean isLagging = lastTxnId <= committedTxnId.get();
  boolean shouldFsync = !isLagging;
  
  curSegment.writeRaw(records, 0, records.length);
  curSegment.setReadyToFlush();
  Stopwatch sw = new Stopwatch();
  sw.start();
  curSegment.flush(shouldFsync);
  sw.stop();
  
  metrics.addSync(sw.elapsedTime(TimeUnit.MICROSECONDS));
  if (sw.elapsedTime(TimeUnit.MILLISECONDS) > WARN_SYNC_MILLIS_THRESHOLD) {
    LOG.warn("Sync of transaction range " + firstTxnId + "-" + lastTxnId +
             " took " + sw.elapsedTime(TimeUnit.MILLISECONDS) + "ms");
  }

  if (isLagging) {
    // This batch of edits has already been committed on a quorum of other
    // nodes. So, we are in "catch up" mode. This gets its own metric.
    metrics.batchesWrittenWhileLagging.incr(1);
  }
  
  metrics.batchesWritten.incr(1);
  metrics.bytesWritten.incr(records.length);
  metrics.txnsWritten.incr(numTxns);
  
  highestWrittenTxId = lastTxnId;
  nextTxId = lastTxnId + 1;
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 68, Source file: Journal.java


Example 11: journal

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Write a batch of edits to the journal.
 * {@see QJournalProtocol#journal(RequestInfo, long, long, int, byte[])}
 */
synchronized ShortVoid journal(RequestInfo reqInfo, 
    long segmentTxId, long firstTxnId, 
    int numTxns, byte[] records) throws IOException {
  checkJournalStorageFormatted();
  checkWriteRequest(reqInfo);

  if (curSegment == null) {
    checkSync(false, "Can't write, no segment open");
  }
  
  if (curSegmentTxId != segmentTxId) {
    // Sanity check: it is possible that the writer will fail IPCs
    // on both the finalize() and then the start() of the next segment.
    // This could cause us to continue writing to an old segment
    // instead of rolling to a new one, which breaks one of the
    // invariants in the design. If it happens, abort the segment
    // and throw an exception.
    JournalOutOfSyncException e = new JournalOutOfSyncException(
        "Writer out of sync: it thinks it is writing segment " + segmentTxId
        + " but current segment is " + curSegmentTxId);
    abortCurSegment();
    throw e;
  }
    
  if (nextTxId != firstTxnId) {
    checkSync(false, "Can't write txid " + firstTxnId
        + " expecting nextTxId=" + nextTxId);
  }
  
  long lastTxnId = firstTxnId + numTxns - 1;
  if (LOG.isTraceEnabled()) {
    LOG.trace("Writing txid " + firstTxnId + "-" + lastTxnId);
  }

  // If the edit has already been marked as committed, we know
  // it has been fsynced on a quorum of other nodes, and we are
  // "catching up" with the rest. Hence we do not need to fsync.
  boolean isLagging = lastTxnId <= committedTxnId.get();
  boolean shouldFsync = !isLagging;
  
  curSegment.writeRaw(records, 0, records.length);
  curSegment.setReadyToFlush();
  long start = System.nanoTime();
  curSegment.flush(shouldFsync);
  long time = DFSUtil.getElapsedTimeMicroSeconds(start);
  currentSegmentWrittenBytes += records.length;
  
  metrics.addSync(time);
  
  if (time / 1000 > WARN_SYNC_MILLIS_THRESHOLD) {
    LOG.warn("Sync of transaction range " + firstTxnId + "-" + lastTxnId + 
             " took " + (time / 1000) + "ms");
  }

  if (isLagging) {
    // This batch of edits has already been committed on a quorum of other
    // nodes. So, we are in "catch up" mode. This gets its own metric.
    metrics.batchesWrittenWhileLagging.inc(1);
  }
  
  metrics.batchesWritten.inc(1);
  metrics.bytesWritten.inc(records.length);
  metrics.txnsWritten.inc(numTxns);
  
  highestWrittenTxId = lastTxnId;
  metrics.setLastWrittenTxId(highestWrittenTxId);
  metrics.setCurrentTxnsLag(getCurrentLagTxns());
  nextTxId = lastTxnId + 1;
  return ShortVoid.instance;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 75, Source file: Journal.java


Example 12: finalizeLogSegment

import org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException; // import the required package/class
/**
 * Finalize the log segment at the given transaction ID.
 */
public synchronized void finalizeLogSegment(RequestInfo reqInfo, long startTxId,
    long endTxId) throws IOException {
  checkJournalStorageFormatted();
  checkRequest(reqInfo);
  
  boolean needsValidation = true;

  // Finalizing the log that the writer was just writing.
  if (startTxId == curSegmentTxId) {
    if (curSegment != null) {
      curSegment.close();
      curSegment = null;
      curSegmentTxId = HdfsConstants.INVALID_TXID;
      currentSegmentWrittenBytes = 0L;
    }
    
    checkSync(nextTxId == endTxId + 1,
        "Trying to finalize in-progress log segment %s to end at " +
        "txid %s but only written up to txid %s",
        startTxId, endTxId, nextTxId - 1);
    // No need to validate the edit log if the client is finalizing
    // the log segment that it was just writing to.
    needsValidation = false;
  }
  
  FileJournalManager.EditLogFile elf = fjm.getLogFile(startTxId);
  if (elf == null) {
    throw new JournalOutOfSyncException("No log file to finalize at " +
        "transaction ID " + startTxId);
  }

  if (elf.isInProgress()) {
    if (needsValidation) {
      LOG.info("Validating log segment " + elf.getFile() + " about to be " +
          "finalized");
      elf.validateLog();

      checkSync(elf.getLastTxId() == endTxId,
          "Trying to finalize in-progress log segment %s to end at " +
          "txid %s but log %s on disk only contains up to txid %s",
          startTxId, endTxId, elf.getFile(), elf.getLastTxId());
    }
    fjm.finalizeLogSegment(startTxId, endTxId);
  } else {
    Preconditions.checkArgument(endTxId == elf.getLastTxId(),
        "Trying to re-finalize already finalized log " +
            elf + " with different endTxId " + endTxId);
  }

  // Once logs are finalized, a different length will never be decided.
  // During recovery, we treat a finalized segment the same as an accepted
  // recovery. Thus, we no longer need to keep track of the previously-
  // accepted decision. The existence of the finalized log segment is enough.
  purgePaxosDecision(elf.getFirstTxId());
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 59, Source file: Journal.java



Note: the org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException examples in this article were collected from source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by many developers; copyright remains with the original authors, and distribution or use of the code is subject to each project's license. Do not republish without permission.

