Java LastSequenceId Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.LastSequenceId. If you are wondering what LastSequenceId is for, or how it is used in practice, the curated class examples below should help.



The LastSequenceId class belongs to the org.apache.hadoop.hbase.regionserver package. Twelve code examples of the class are shown below, sorted by popularity by default.
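Before diving into the examples: the role of LastSequenceId during log splitting is to report, per region, the highest sequence id that has already been flushed, so that already-persisted edits need not be replayed. The following is a minimal self-contained sketch of that idea; the interface, method signature, and `long` return type here are simplifications for illustration and do not match the real HBase API, which returns per-store sequence ids.

```java
import java.util.HashMap;
import java.util.Map;

public class SequenceIdCheckDemo {
    // Toy stand-in for the real LastSequenceId callback.
    interface ToyLastSequenceId {
        long getLastSequenceId(byte[] encodedRegionName);
    }

    // An edit can be skipped during replay if its sequence id is at or
    // below the last id the region already flushed to disk.
    static boolean canSkip(ToyLastSequenceId checker, String region, long editSeqId) {
        return editSeqId <= checker.getLastSequenceId(region.getBytes());
    }

    public static void main(String[] args) {
        Map<String, Long> flushed = new HashMap<>();
        flushed.put("region-a", 100L);
        ToyLastSequenceId checker = r -> flushed.getOrDefault(new String(r), -1L);

        System.out.println(canSkip(checker, "region-a", 90));   // already flushed: skip
        System.out.println(canSkip(checker, "region-a", 120));  // newer than flush: replay
    }
}
```

This is why every splitter constructor below accepts an `idChecker` parameter: it is consulted while splitting to avoid duplicating edits that the region already persisted.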

Example 1: WALSplitter

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
WALSplitter(final WALFactory factory, Configuration conf, Path rootDir,
    FileSystem fs, LastSequenceId idChecker,
    CoordinatedStateManager csm, RecoveryMode mode) {
  this.conf = HBaseConfiguration.create(conf);
  String codecClassName = conf
      .get(WALCellCodec.WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName());
  this.conf.set(HConstants.RPC_CODEC_CONF_KEY, codecClassName);
  this.rootDir = rootDir;
  this.fs = fs;
  this.sequenceIdChecker = idChecker;
  this.csm = (BaseCoordinatedStateManager)csm;
  this.walFactory = factory;
  this.controller = new PipelineController();

  entryBuffers = new EntryBuffers(controller,
      this.conf.getInt("hbase.regionserver.hlog.splitlog.buffersize",
          128*1024*1024));

  // a larger minBatchSize may slow down recovery because replay writer has to wait for
  // enough edits before replaying them
  this.minBatchSize = this.conf.getInt("hbase.regionserver.wal.logreplay.batch.size", 64);
  this.distributedLogReplay = (RecoveryMode.LOG_REPLAY == mode);

  this.numWriterThreads = this.conf.getInt("hbase.regionserver.hlog.splitlog.writer.threads", 3);
  if (csm != null && this.distributedLogReplay) {
    outputSink = new LogReplayOutputSink(controller, entryBuffers, numWriterThreads);
  } else {
    if (this.distributedLogReplay) {
      LOG.info("ZooKeeperWatcher is passed in as NULL so disabling distributedLogReplay.");
    }
    this.distributedLogReplay = false;
    outputSink = new LogRecoveredEditsOutputSink(controller, entryBuffers, numWriterThreads);
  }

}
 
Author: fengchen8086; project: ditb; lines: 36; source: WALSplitter.java
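The branch at the end of the constructor above encodes a fallback rule: distributed log replay is only honoured when a coordination manager is actually supplied; otherwise the splitter silently downgrades to writing recovered.edits files. A minimal sketch of that decision (class and sink names here mirror the example but this is illustrative code, not the HBase API):

```java
public class SinkSelectionDemo {
    // Mirrors the constructor's output-sink choice: replay mode requires a
    // coordination manager; without one, the splitter falls back.
    static String chooseSink(boolean csmPresent, boolean logReplayMode) {
        if (csmPresent && logReplayMode) {
            return "LogReplayOutputSink";
        }
        // LOG_REPLAY mode without a coordinator is downgraded here.
        return "LogRecoveredEditsOutputSink";
    }

    public static void main(String[] args) {
        System.out.println(chooseSink(true, true));   // LogReplayOutputSink
        System.out.println(chooseSink(false, true));  // fallback sink
        System.out.println(chooseSink(true, false));  // fallback sink
    }
}
```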


Example 2: WALSplitter

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
WALSplitter(final WALFactory factory, Configuration conf, Path rootDir,
    FileSystem fs, LastSequenceId idChecker,
    CoordinatedStateManager csm, RecoveryMode mode) {
  this.conf = HBaseConfiguration.create(conf);
  String codecClassName = conf
      .get(WALCellCodec.WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName());
  this.conf.set(HConstants.RPC_CODEC_CONF_KEY, codecClassName);
  this.rootDir = rootDir;
  this.fs = fs;
  this.sequenceIdChecker = idChecker;
  this.csm = (BaseCoordinatedStateManager)csm;
  this.walFactory = factory;

  entryBuffers = new EntryBuffers(
      this.conf.getInt("hbase.regionserver.hlog.splitlog.buffersize",
          128*1024*1024));

  // a larger minBatchSize may slow down recovery because replay writer has to wait for
  // enough edits before replaying them
  this.minBatchSize = this.conf.getInt("hbase.regionserver.wal.logreplay.batch.size", 64);
  this.distributedLogReplay = (RecoveryMode.LOG_REPLAY == mode);

  this.numWriterThreads = this.conf.getInt("hbase.regionserver.hlog.splitlog.writer.threads", 3);
  if (csm != null && this.distributedLogReplay) {
    outputSink = new LogReplayOutputSink(numWriterThreads);
  } else {
    if (this.distributedLogReplay) {
      LOG.info("ZooKeeperWatcher is passed in as NULL so disabling distributedLogReplay.");
    }
    this.distributedLogReplay = false;
    outputSink = new LogRecoveredEditsOutputSink(numWriterThreads);
  }

}
 
Author: grokcoder; project: pbase; lines: 35; source: WALSplitter.java


Example 3: HLogSplitter

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
HLogSplitter(Configuration conf, Path rootDir,
    FileSystem fs, LastSequenceId idChecker, ZooKeeperWatcher zkw) {
  this.conf = HBaseConfiguration.create(conf);
  String codecClassName = conf
      .get(WALCellCodec.WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName());
  this.conf.set(HConstants.RPC_CODEC_CONF_KEY, codecClassName);
  this.rootDir = rootDir;
  this.fs = fs;
  this.sequenceIdChecker = idChecker;
  this.watcher = zkw;

  entryBuffers = new EntryBuffers(
      this.conf.getInt("hbase.regionserver.hlog.splitlog.buffersize",
          128*1024*1024));

  // a larger minBatchSize may slow down recovery because replay writer has to wait for
  // enough edits before replaying them
  this.minBatchSize = this.conf.getInt("hbase.regionserver.wal.logreplay.batch.size", 64);
  this.distributedLogReplay = HLogSplitter.isDistributedLogReplay(this.conf);

  this.numWriterThreads = this.conf.getInt("hbase.regionserver.hlog.splitlog.writer.threads", 3);
  if (zkw != null && this.distributedLogReplay) {
    outputSink = new LogReplayOutputSink(numWriterThreads);
  } else {
    if (this.distributedLogReplay) {
      LOG.info("ZooKeeperWatcher is passed in as NULL so disabling distributedLogReplay.");
    }
    this.distributedLogReplay = false;
    outputSink = new LogRecoveredEditsOutputSink(numWriterThreads);
  }

}
 
Author: tenggyut; project: HIndex; lines: 33; source: HLogSplitter.java


Example 4: WALSplitter

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
@VisibleForTesting
WALSplitter(final WALFactory factory, Configuration conf, Path rootDir,
    FileSystem fs, LastSequenceId idChecker,
    SplitLogWorkerCoordination splitLogWorkerCoordination) {
  this.conf = HBaseConfiguration.create(conf);
  String codecClassName = conf
      .get(WALCellCodec.WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName());
  this.conf.set(HConstants.RPC_CODEC_CONF_KEY, codecClassName);
  this.rootDir = rootDir;
  this.fs = fs;
  this.sequenceIdChecker = idChecker;
  this.splitLogWorkerCoordination = splitLogWorkerCoordination;

  this.walFactory = factory;
  PipelineController controller = new PipelineController();

  this.splitWriterCreationBounded = conf.getBoolean(SPLIT_WRITER_CREATION_BOUNDED, false);

  entryBuffers = new EntryBuffers(controller,
      this.conf.getInt("hbase.regionserver.hlog.splitlog.buffersize", 128 * 1024 * 1024),
      splitWriterCreationBounded);

  int numWriterThreads = this.conf.getInt("hbase.regionserver.hlog.splitlog.writer.threads", 3);
  if(splitWriterCreationBounded){
    outputSink = new BoundedLogWriterCreationOutputSink(
        controller, entryBuffers, numWriterThreads);
  }else {
    outputSink = new LogRecoveredEditsOutputSink(controller, entryBuffers, numWriterThreads);
  }
}
 
Author: apache; project: hbase; lines: 31; source: WALSplitter.java


Example 5: splitLogFile

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
/**
 * Splits a WAL file into region's recovered-edits directory.
 * This is the main entry point for distributed log splitting from SplitLogWorker.
 * <p>
 * If the log file has N regions then N recovered.edits files will be produced.
 * <p>
 * @return false if it is interrupted by the progress-able.
 */
public static boolean splitLogFile(Path rootDir, FileStatus logfile, FileSystem fs,
    Configuration conf, CancelableProgressable reporter, LastSequenceId idChecker,
    SplitLogWorkerCoordination splitLogWorkerCoordination, final WALFactory factory)
    throws IOException {
  WALSplitter s = new WALSplitter(factory, conf, rootDir, fs, idChecker,
      splitLogWorkerCoordination);
  return s.splitLogFile(logfile, reporter);
}
 
Author: apache; project: hbase; lines: 17; source: WALSplitter.java
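The javadoc above states that a WAL file touching N regions yields N recovered.edits outputs. The core of that behaviour is a group-by-region pass over the log entries, sketched below in a self-contained toy (the `Entry` record and in-memory grouping are simplifications; the real splitter streams edits through EntryBuffers and writer threads rather than materializing them):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SplitByRegionDemo {
    // Toy WAL entry: which region it belongs to, its sequence id, the edit.
    record Entry(String region, long seqId, String edit) {}

    // Group edits by region: one output bucket per region seen in the log.
    static Map<String, List<Entry>> split(List<Entry> walEntries) {
        Map<String, List<Entry>> perRegion = new LinkedHashMap<>();
        for (Entry e : walEntries) {
            perRegion.computeIfAbsent(e.region(), r -> new ArrayList<>()).add(e);
        }
        return perRegion;
    }

    public static void main(String[] args) {
        List<Entry> wal = List.of(
            new Entry("region-a", 1, "put r1"),
            new Entry("region-b", 2, "put r2"),
            new Entry("region-a", 3, "delete r1"));
        Map<String, List<Entry>> out = split(wal);
        System.out.println(out.size());                  // 2 regions -> 2 outputs
        System.out.println(out.get("region-a").size());  // 2 edits for region-a
    }
}
```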


Example 6: HLogSplitter

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
HLogSplitter(Configuration conf, Path rootDir,
    FileSystem fs, LastSequenceId idChecker, ZooKeeperWatcher zkw,
    CoordinatedStateManager csm) {
  this.conf = HBaseConfiguration.create(conf);
  String codecClassName = conf
      .get(WALCellCodec.WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName());
  this.conf.set(HConstants.RPC_CODEC_CONF_KEY, codecClassName);
  this.rootDir = rootDir;
  this.fs = fs;
  this.sequenceIdChecker = idChecker;
  this.watcher = zkw;
  this.csm = csm;

  entryBuffers = new EntryBuffers(
      this.conf.getInt("hbase.regionserver.hlog.splitlog.buffersize",
          128*1024*1024));

  // a larger minBatchSize may slow down recovery because replay writer has to wait for
  // enough edits before replaying them
  this.minBatchSize = this.conf.getInt("hbase.regionserver.wal.logreplay.batch.size", 64);
  this.distributedLogReplay = HLogSplitter.isDistributedLogReplay(this.conf);

  this.numWriterThreads = this.conf.getInt("hbase.regionserver.hlog.splitlog.writer.threads", 3);
  if (zkw != null && csm != null && this.distributedLogReplay) {
    outputSink = new LogReplayOutputSink(numWriterThreads);
  } else {
    if (this.distributedLogReplay) {
      LOG.info("ZooKeeperWatcher is passed in as NULL so disabling distributedLogReplay.");
    }
    this.distributedLogReplay = false;
    outputSink = new LogRecoveredEditsOutputSink(numWriterThreads);
  }

}
 
Author: shenli-uiuc; project: PyroDB; lines: 35; source: HLogSplitter.java


Example 7: HLogSplitter

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
HLogSplitter(Configuration conf, Path rootDir,
    FileSystem fs, LastSequenceId idChecker, ZooKeeperWatcher zkw) {
  this.conf = conf;
  this.rootDir = rootDir;
  this.fs = fs;
  this.sequenceIdChecker = idChecker;
  this.watcher = zkw;

  entryBuffers = new EntryBuffers(
      conf.getInt("hbase.regionserver.hlog.splitlog.buffersize",
          128*1024*1024));

  // a larger minBatchSize may slow down recovery because replay writer has to wait for
  // enough edits before replaying them
  this.minBatchSize = conf.getInt("hbase.regionserver.wal.logreplay.batch.size", 64);
  this.distributedLogReplay = this.conf.getBoolean(HConstants.DISTRIBUTED_LOG_REPLAY_KEY,
    HConstants.DEFAULT_DISTRIBUTED_LOG_REPLAY_CONFIG);

  this.numWriterThreads = conf.getInt("hbase.regionserver.hlog.splitlog.writer.threads", 3);
  if (zkw != null && this.distributedLogReplay) {
    outputSink = new LogReplayOutputSink(numWriterThreads);
  } else {
    if (this.distributedLogReplay) {
      LOG.info("ZooKeeperWatcher is passed in as NULL so disabling distributedLogReplay.");
    }
    this.distributedLogReplay = false;
    outputSink = new LogRecoveredEditsOutputSink(numWriterThreads);
  }
}
 
Author: cloud-software-foundation; project: c5; lines: 30; source: HLogSplitter.java


Example 8: HLogSplitter

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
public HLogSplitter(Configuration conf, Path rootDir, Path srcDir,
    Path oldLogDir, FileSystem fs, LastSequenceId idChecker) {
  this.conf = conf;
  this.rootDir = rootDir;
  this.srcDir = srcDir;
  this.oldLogDir = oldLogDir;
  this.fs = fs;
  this.sequenceIdChecker = idChecker;

  entryBuffers = new EntryBuffers(
      conf.getInt("hbase.regionserver.hlog.splitlog.buffersize",
          128*1024*1024));
  outputSink = new OutputSink();
}
 
Author: daidong; project: DominoHBase; lines: 15; source: HLogSplitter.java


Example 9: splitLogFile

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
/**
 * Splits a WAL file into region's recovered-edits directory.
 * This is the main entry point for distributed log splitting from SplitLogWorker.
 * <p>
 * If the log file has N regions then N recovered.edits files will be produced.
 * <p>
 * @param rootDir
 * @param logfile
 * @param fs
 * @param conf
 * @param reporter
 * @param idChecker
 * @param cp coordination state manager
 * @return false if it is interrupted by the progress-able.
 * @throws IOException
 */
public static boolean splitLogFile(Path rootDir, FileStatus logfile, FileSystem fs,
    Configuration conf, CancelableProgressable reporter, LastSequenceId idChecker,
    CoordinatedStateManager cp, RecoveryMode mode, final WALFactory factory) throws IOException {
  WALSplitter s = new WALSplitter(factory, conf, rootDir, fs, idChecker, cp, mode);
  return s.splitLogFile(logfile, reporter);
}
 
Author: fengchen8086; project: ditb; lines: 23; source: WALSplitter.java


Example 10: splitLogFile

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
/**
 * Splits a HLog file into region's recovered-edits directory.
 * This is the main entry point for distributed log splitting from SplitLogWorker.
 * <p>
 * If the log file has N regions then N recovered.edits files will be produced.
 * <p>
 * @param rootDir
 * @param logfile
 * @param fs
 * @param conf
 * @param reporter
 * @param idChecker
 * @param zkw ZooKeeperWatcher; if it is null, we fall back to the old-style log splitting where we
 *          dump out recovered.edits files for regions to replay.
 * @return false if it is interrupted by the progress-able.
 * @throws IOException
 */
public static boolean splitLogFile(Path rootDir, FileStatus logfile, FileSystem fs,
    Configuration conf, CancelableProgressable reporter, LastSequenceId idChecker,
    ZooKeeperWatcher zkw) throws IOException {
  HLogSplitter s = new HLogSplitter(conf, rootDir, fs, idChecker, zkw);
  return s.splitLogFile(logfile, reporter);
}
 
Author: tenggyut; project: HIndex; lines: 24; source: HLogSplitter.java


Example 11: splitLogFile

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
/**
 * Splits a HLog file into region's recovered-edits directory.
 * This is the main entry point for distributed log splitting from SplitLogWorker.
 * <p>
 * If the log file has N regions then N recovered.edits files will be produced.
 * <p>
 * @param rootDir
 * @param logfile
 * @param fs
 * @param conf
 * @param reporter
 * @param idChecker
 * @param zkw ZooKeeperWatcher; if it is null, we fall back to the old-style log splitting where we
 *          dump out recovered.edits files for regions to replay.
 * @return false if it is interrupted by the progress-able.
 * @throws IOException
 */
public static boolean splitLogFile(Path rootDir, FileStatus logfile, FileSystem fs,
    Configuration conf, CancelableProgressable reporter, LastSequenceId idChecker,
    ZooKeeperWatcher zkw, CoordinatedStateManager cp) throws IOException {
  HLogSplitter s = new HLogSplitter(conf, rootDir, fs, idChecker, zkw,
    cp);
  return s.splitLogFile(logfile, reporter);
}
 
Author: shenli-uiuc; project: PyroDB; lines: 25; source: HLogSplitter.java


Example 12: splitLogFile

import org.apache.hadoop.hbase.regionserver.LastSequenceId; // import the required package/class
/**
 * Splits a HLog file into region's recovered-edits directory
 * <p>
 * If the log file has N regions then N recovered.edits files will be
 * produced.
 * <p>
 * @param rootDir
 * @param logfile
 * @param fs
 * @param conf
 * @param reporter
 * @param idChecker
 * @return false if it is interrupted by the progress-able.
 * @throws IOException
 */
static public boolean splitLogFile(Path rootDir, FileStatus logfile,
    FileSystem fs, Configuration conf, CancelableProgressable reporter,
    LastSequenceId idChecker)
    throws IOException {
  HLogSplitter s = new HLogSplitter(conf, rootDir, null, null /* oldLogDir */, fs, idChecker);
  return s.splitLogFile(logfile, reporter);
}
 
Author: daidong; project: DominoHBase; lines: 23; source: HLogSplitter.java



Note: the examples of the org.apache.hadoop.hbase.regionserver.LastSequenceId class in this article were collected from open-source projects hosted on platforms such as GitHub and MSDocs. Copyright of the source code remains with the original authors; consult each project's license before using or redistributing it. Do not republish without permission.

