
Java HStore Class Code Examples


This article compiles typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.HStore. If you are wondering what exactly the HStore class does, how to use it, or where to find working examples of it, the curated class code examples below may help.



The HStore class belongs to the org.apache.hadoop.hbase.regionserver package. 20 code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.
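For orientation, HStore is the Store implementation that manages the MemStore and the on-disk store files for one column family of one region. Below is a minimal sketch of how a test might obtain an HStore instance, assuming an HBase 2.x mini-cluster started through HBaseTestingUtility; the helper name getStoreWithName mirrors the one called in several examples below, but this body is an assumption rather than any project's original implementation:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.HStore;

private static HStore getStoreWithName(HBaseTestingUtility util, TableName tableName,
    byte[] family) {
  // Take the table's first online region and return its store for the given family.
  HRegion region = util.getMiniHBaseCluster().getRegions(tableName).get(0);
  return region.getStore(family);
}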

Example 1: createCompactJob

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
private CompactJobQueue.CompactJob createCompactJob(final CompactionRequest request,
    final Path writtenPath, HStore store) throws IOException {
  // Check for reference files (name contains "." or isReference()); compacting them is not supported yet, so rebuild instead.
  boolean needToRebuild = false;
  for (StoreFile sf : request.getFiles()) {
    if (sf.getPath().getName().indexOf(".") != -1 || sf.isReference()) {
      needToRebuild = true;
      break;
    }
  }
  CompactJobQueue.CompactJob job;
  if (needToRebuild) {
    job = new CompactJobQueue.RebuildCompactJob(store, request, writtenPath);
  } else {
    job = new CompactJobQueue.NormalCompactJob(store, request, writtenPath);
  }
  return job;
}
 
Developer: fengchen8086, Project: ditb, Lines: 19, Source: DefaultCompactor.java


Example 2: LMDIndexWriter

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
public LMDIndexWriter(HStore store, Path rawDataPath, TimeRangeTracker timeRangeTracker,
    String opType) {
  tableRelation = store.indexTableRelation;
  this.rawDataPath = rawDataPath;
  this.store = store;
  tracker = timeRangeTracker;
  this.opType = opType;
  lmdIndexParameters = store.getLMDIndexParameters();
  int size = 0;
  for (Map.Entry<byte[], TreeSet<byte[]>> entry : tableRelation.getIndexFamilyMap().entrySet()) {
    size += entry.getValue().size();
  }
  dimensions = size;
  int[] mins = new int[dimensions];
  Arrays.fill(mins, 0);
}
 
Developer: fengchen8086, Project: ditb, Lines: 17, Source: LMDIndexWriter.java


Example 3: getFromStoreFile

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
/**
 * Do a small get/scan against one store. This is required because store
 * has no actual methods of querying itself, and relies on StoreScanner.
 */
public static List<Cell> getFromStoreFile(HStore store,
                                              Get get) throws IOException {
  Scan scan = new Scan(get);
  InternalScanner scanner = (InternalScanner) store.getScanner(scan,
      scan.getFamilyMap().get(store.getFamily().getName()),
      // originally MultiVersionConcurrencyControl.resetThreadReadPoint() was called to set
      // readpoint 0.
      0);

  List<Cell> result = new ArrayList<Cell>();
  scanner.next(result);
  if (!result.isEmpty()) {
    // verify that we are on the row we want:
    Cell kv = result.get(0);
    if (!CellUtil.matchingRow(kv, get.getRow())) {
      result.clear();
    }
  }
  scanner.close();
  return result;
}
 
Developer: fengchen8086, Project: ditb, Lines: 26, Source: HBaseTestingUtility.java
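A hedged usage sketch for this helper (the row key and surrounding setup are illustrative, not from the source):

// Hypothetical call site: read one row back through the store's scanner.
Get get = new Get(Bytes.toBytes("row1"));
List<Cell> cells = HBaseTestingUtility.getFromStoreFile(store, get);
// An empty list means the store holds no cells for that row.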


Example 4: createReferences

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
/**
 * @param services Master services instance.
 * @param htd Table descriptor for the parent table.
 * @param parent Parent region.
 * @param daughter Daughter region the reference is written for.
 * @param midkey Split key between the daughters.
 * @param top True if we are to write a 'top' reference.
 * @return Path to reference we created.
 * @throws IOException
 */
private Path createReferences(final MasterServices services,
    final HTableDescriptor htd, final HRegionInfo parent,
    final HRegionInfo daughter, final byte [] midkey, final boolean top)
throws IOException {
  Path rootdir = services.getMasterFileSystem().getRootDir();
  Path tabledir = FSUtils.getTableDir(rootdir, parent.getTable());
  Path storedir = HStore.getStoreHomedir(tabledir, daughter,
    htd.getColumnFamilies()[0].getName());
  Reference ref =
    top? Reference.createTopReference(midkey): Reference.createBottomReference(midkey);
  long now = System.currentTimeMillis();
  // Reference name has this format: StoreFile#REF_NAME_PARSER
  Path p = new Path(storedir, Long.toString(now) + "." + parent.getEncodedName());
  FileSystem fs = services.getMasterFileSystem().getFileSystem();
  ref.write(fs, p);
  return p;
}
 
Developer: fengchen8086, Project: ditb, Lines: 28, Source: TestCatalogJanitor.java


Example 5: preStoreScannerOpen

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
@Override
public KeyValueScanner preStoreScannerOpen(
    final ObserverContext<RegionCoprocessorEnvironment> c, Store store, final Scan scan,
    final NavigableSet<byte[]> targetCols, KeyValueScanner s) throws IOException {
  TableName tn = store.getTableName();
  if (!tn.isSystemTable()) {
    Long newTtl = ttls.get(store.getTableName());
    Integer newVersions = versions.get(store.getTableName());
    ScanInfo oldSI = store.getScanInfo();
    HColumnDescriptor family = store.getFamily();
    ScanInfo scanInfo = new ScanInfo(TEST_UTIL.getConfiguration(),
        family.getName(), family.getMinVersions(),
        newVersions == null ? family.getMaxVersions() : newVersions,
        newTtl == null ? oldSI.getTtl() : newTtl, family.getKeepDeletedCells(),
        oldSI.getTimeToPurgeDeletes(), oldSI.getComparator());
    return new StoreScanner(store, scanInfo, scan, targetCols,
        ((HStore) store).getHRegion().getReadpoint(IsolationLevel.READ_COMMITTED));
  } else {
    return s;
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 22, Source: TestCoprocessorScanPolicy.java
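For this hook to fire, the RegionObserver that declares it must be attached to the table. A hedged sketch of one way to register it in the HBase 1.x style this example uses (MyScanObserver is a placeholder class name, not taken from the source):

// Hypothetical registration so preStoreScannerOpen is invoked for this table.
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("scan_policy_test"));
desc.addFamily(new HColumnDescriptor("cf"));
desc.addCoprocessor(MyScanObserver.class.getName()); // placeholder observer class
TEST_UTIL.getHBaseAdmin().createTable(desc);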


Example 6: testPurgeExpiredFiles

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
@Test
public void testPurgeExpiredFiles() throws Exception {
  Configuration conf = TEST_UTIL.getConfiguration();
  conf.setInt(HStore.BLOCKING_STOREFILES_KEY, 10000);

  TEST_UTIL.startMiniCluster(1);
  try {
    Store store = prepareData();
    assertEquals(10, store.getStorefilesCount());
    TEST_UTIL.getHBaseAdmin().majorCompact(tableName);
    while (store.getStorefilesCount() > 1) {
      Thread.sleep(100);
    }
    assertEquals(1, store.getStorefilesCount());
  } finally {
    TEST_UTIL.shutdownMiniCluster();
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 19, Source: TestFIFOCompactionPolicy.java


Example 7: testCompactionWithoutThroughputLimit

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
private long testCompactionWithoutThroughputLimit() throws Exception {
  Configuration conf = TEST_UTIL.getConfiguration();
  conf.set(StoreEngine.STORE_ENGINE_CLASS_KEY, DefaultStoreEngine.class.getName());
  conf.setInt(CompactionConfiguration.HBASE_HSTORE_COMPACTION_MIN_KEY, 100);
  conf.setInt(CompactionConfiguration.HBASE_HSTORE_COMPACTION_MAX_KEY, 200);
  conf.setInt(HStore.BLOCKING_STOREFILES_KEY, 10000);
  conf.set(CompactionThroughputControllerFactory.HBASE_THROUGHPUT_CONTROLLER_KEY,
    NoLimitCompactionThroughputController.class.getName());
  TEST_UTIL.startMiniCluster(1);
  try {
    Store store = prepareData();
    assertEquals(10, store.getStorefilesCount());
    long startTime = System.currentTimeMillis();
    TEST_UTIL.getHBaseAdmin().majorCompact(tableName);
    while (store.getStorefilesCount() != 1) {
      Thread.sleep(20);
    }
    return System.currentTimeMillis() - startTime;
  } finally {
    TEST_UTIL.shutdownMiniCluster();
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 23, Source: TestCompactionWithThroughputController.java
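The throughput-limited counterpart would differ only in the controller class it configures. A hedged sketch, assuming the stock limiting controller of this HBase line (its tuning keys are left at defaults here):

// Hypothetical counterpart: cap compaction throughput instead of disabling the limit.
conf.set(CompactionThroughputControllerFactory.HBASE_THROUGHPUT_CONTROLLER_KEY,
    PressureAwareCompactionThroughputController.class.getName());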


Example 8: createHtd

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
private HTableDescriptor createHtd(boolean isStripe) throws Exception {
  HTableDescriptor htd = new HTableDescriptor(TABLE_NAME);
  htd.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
  String noSplitsPolicy = DisabledRegionSplitPolicy.class.getName();
  htd.setConfiguration(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, noSplitsPolicy);
  if (isStripe) {
    htd.setConfiguration(StoreEngine.STORE_ENGINE_CLASS_KEY, StripeStoreEngine.class.getName());
    if (initialStripeCount != null) {
      htd.setConfiguration(
          StripeStoreConfig.INITIAL_STRIPE_COUNT_KEY, initialStripeCount.toString());
      htd.setConfiguration(
          HStore.BLOCKING_STOREFILES_KEY, Long.toString(10 * initialStripeCount));
    } else {
      htd.setConfiguration(HStore.BLOCKING_STOREFILES_KEY, "500");
    }
    if (splitSize != null) {
      htd.setConfiguration(StripeStoreConfig.SIZE_TO_SPLIT_KEY, splitSize.toString());
    }
    if (splitParts != null) {
      htd.setConfiguration(StripeStoreConfig.SPLIT_PARTS_KEY, splitParts.toString());
    }
  } else {
    htd.setConfiguration(HStore.BLOCKING_STOREFILES_KEY, "10"); // default
  }
  return htd;
}
 
Developer: fengchen8086, Project: ditb, Lines: 27, Source: StripeCompactionsPerformanceEvaluation.java


Example 9: getFromStoreFile

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
/**
 * Do a small get/scan against one store. This is required because store
 * has no actual methods of querying itself, and relies on StoreScanner.
 */
public static List<Cell> getFromStoreFile(HStore store,
                                              Get get) throws IOException {
  Scan scan = new Scan(get);
  InternalScanner scanner = (InternalScanner) store.getScanner(scan,
      scan.getFamilyMap().get(store.getFamily().getName()),
      // originally MultiVersionConsistencyControl.resetThreadReadPoint() was called to set
      // readpoint 0.
      0);

  List<Cell> result = new ArrayList<Cell>();
  scanner.next(result);
  if (!result.isEmpty()) {
    // verify that we are on the row we want:
    Cell kv = result.get(0);
    if (!CellUtil.matchingRow(kv, get.getRow())) {
      result.clear();
    }
  }
  scanner.close();
  return result;
}
 
Developer: grokcoder, Project: pbase, Lines: 26, Source: HBaseTestingUtility.java


Example 10: preStoreScannerOpen

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
@Override
public KeyValueScanner preStoreScannerOpen(
    final ObserverContext<RegionCoprocessorEnvironment> c, Store store, final Scan scan,
    final NavigableSet<byte[]> targetCols, KeyValueScanner s) throws IOException {
  TableName tn = store.getTableName();
  if (!tn.isSystemTable()) {
    Long newTtl = ttls.get(store.getTableName());
    Integer newVersions = versions.get(store.getTableName());
    ScanInfo oldSI = store.getScanInfo();
    HColumnDescriptor family = store.getFamily();
    ScanInfo scanInfo = new ScanInfo(family.getName(), family.getMinVersions(),
        newVersions == null ? family.getMaxVersions() : newVersions,
        newTtl == null ? oldSI.getTtl() : newTtl, family.getKeepDeletedCells(),
        oldSI.getTimeToPurgeDeletes(), oldSI.getComparator());
    return new StoreScanner(store, scanInfo, scan, targetCols,
        ((HStore) store).getHRegion().getReadpoint(IsolationLevel.READ_COMMITTED));
  } else {
    return s;
  }
}
 
Developer: grokcoder, Project: pbase, Lines: 21, Source: TestCoprocessorScanPolicy.java


Example 11: countDelCellsInDelFiles

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
/**
 * Gets the number of del cells in the del files
 * @param paths the del file paths
 * @return the number of del cells
 */
private int countDelCellsInDelFiles(List<Path> paths) throws IOException {
  List<HStoreFile> sfs = new ArrayList<>();
  int size = 0;
  for (Path path : paths) {
    HStoreFile sf = new HStoreFile(fs, path, conf, cacheConf, BloomType.NONE, true);
    sfs.add(sf);
  }
  List<KeyValueScanner> scanners = new ArrayList<>(StoreFileScanner.getScannersForStoreFiles(sfs,
    false, true, false, false, HConstants.LATEST_TIMESTAMP));
  long timeToPurgeDeletes = Math.max(conf.getLong("hbase.hstore.time.to.purge.deletes", 0), 0);
  long ttl = HStore.determineTTLFromFamily(hcd);
  ScanInfo scanInfo = new ScanInfo(conf, hcd, ttl, timeToPurgeDeletes, CellComparatorImpl.COMPARATOR);
  StoreScanner scanner = new StoreScanner(scanInfo, ScanType.COMPACT_RETAIN_DELETES, scanners);
  List<Cell> results = new ArrayList<>();
  boolean hasMore = true;

  while (hasMore) {
    hasMore = scanner.next(results);
    size += results.size();
    results.clear();
  }
  scanner.close();
  return size;
}
 
Developer: apache, Project: hbase, Lines: 30, Source: TestPartitionedMobCompactor.java


Example 12: selectScannersFrom

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
@Override
protected List<KeyValueScanner> selectScannersFrom(HStore store,
    List<? extends KeyValueScanner> allScanners) {
  List<KeyValueScanner> scanners = super.selectScannersFrom(store, allScanners);
  List<KeyValueScanner> newScanners = new ArrayList<>(scanners.size());
  for (KeyValueScanner scanner : scanners) {
    newScanners.add(new DelegatingKeyValueScanner(scanner) {
      @Override
      public boolean reseek(Cell key) throws IOException {
        if (ON.get()) {
          REQ_COUNT.incrementAndGet();
          if (!THROW_ONCE.get() || REQ_COUNT.get() == 1) {
            if (IS_DO_NOT_RETRY.get()) {
              throw new DoNotRetryIOException("Injected exception");
            } else {
              throw new IOException("Injected exception");
            }
          }
        }
        return super.reseek(key);
      }
    });
  }
  return newScanners;
}
 
Developer: apache, Project: hbase, Lines: 26, Source: TestFromClientSideScanExcpetion.java
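A hedged sketch of how a test could drive this injection, assuming the static flags above are AtomicBoolean/AtomicInteger fields of the test class (the scan itself is illustrative):

// Hypothetical test flow: arm the injected failure, then scan the table.
ON.set(true);
THROW_ONCE.set(true);       // fail only the first reseek
IS_DO_NOT_RETRY.set(false); // plain IOException, so the client may retry
REQ_COUNT.set(0);
try (ResultScanner rs = table.getScanner(new Scan())) {
  for (Result r : rs) {
    // consume results; the client recovers from the retriable injected exception
  }
}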


Example 13: getFromStoreFile

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
/**
 * Do a small get/scan against one store. This is required because store
 * has no actual methods of querying itself, and relies on StoreScanner.
 */
public static List<Cell> getFromStoreFile(HStore store,
                                              Get get) throws IOException {
  Scan scan = new Scan(get);
  InternalScanner scanner = (InternalScanner) store.getScanner(scan,
      scan.getFamilyMap().get(store.getColumnFamilyDescriptor().getName()),
      // originally MultiVersionConcurrencyControl.resetThreadReadPoint() was called to set
      // readpoint 0.
      0);

  List<Cell> result = new ArrayList<>();
  scanner.next(result);
  if (!result.isEmpty()) {
    // verify that we are on the row we want:
    Cell kv = result.get(0);
    if (!CellUtil.matchingRows(kv, get.getRow())) {
      result.clear();
    }
  }
  scanner.close();
  return result;
}
 
Developer: apache, Project: hbase, Lines: 26, Source: HBaseTestingUtility.java


Example 14: createReferences

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
private Path createReferences(final MasterServices services,
    final TableDescriptor td, final HRegionInfo parent,
    final HRegionInfo daughter, final byte [] midkey, final boolean top)
throws IOException {
  Path rootdir = services.getMasterFileSystem().getRootDir();
  Path tabledir = FSUtils.getTableDir(rootdir, parent.getTable());
  Path storedir = HStore.getStoreHomedir(tabledir, daughter,
    td.getColumnFamilies()[0].getName());
  Reference ref =
    top? Reference.createTopReference(midkey): Reference.createBottomReference(midkey);
  long now = System.currentTimeMillis();
  // Reference name has this format: StoreFile#REF_NAME_PARSER
  Path p = new Path(storedir, Long.toString(now) + "." + parent.getEncodedName());
  FileSystem fs = services.getMasterFileSystem().getFileSystem();
  ref.write(fs, p);
  return p;
}
 
Developer: apache, Project: hbase, Lines: 18, Source: TestCatalogJanitor.java


Example 15: prepareData

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
private HStore prepareData() throws IOException {
  Admin admin = TEST_UTIL.getAdmin();
  TableDescriptor desc = TableDescriptorBuilder.newBuilder(tableName)
      .setValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
        FIFOCompactionPolicy.class.getName())
      .setValue(HConstants.HBASE_REGION_SPLIT_POLICY_KEY,
        DisabledRegionSplitPolicy.class.getName())
      .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(family).setTimeToLive(1).build())
      .build();
  admin.createTable(desc);
  Table table = TEST_UTIL.getConnection().getTable(tableName);
  TimeOffsetEnvironmentEdge edge =
      (TimeOffsetEnvironmentEdge) EnvironmentEdgeManager.getDelegate();
  for (int i = 0; i < 10; i++) {
    for (int j = 0; j < 10; j++) {
      byte[] value = new byte[128 * 1024];
      ThreadLocalRandom.current().nextBytes(value);
      table.put(new Put(Bytes.toBytes(i * 10 + j)).addColumn(family, qualifier, value));
    }
    admin.flush(tableName);
    edge.increment(1001);
  }
  return getStoreWithName(tableName);
}
 
Developer: apache, Project: hbase, Lines: 25, Source: TestFIFOCompactionPolicy.java
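The edge.increment(1001) call above only works because a TimeOffsetEnvironmentEdge was installed as HBase's clock source beforehand; each increment pushes that clock past the 1-second TTL without sleeping. A hedged setup sketch (typically placed in the test's @BeforeClass):

// Hypothetical setup: replace the clock HBase reads with an offsettable edge.
TimeOffsetEnvironmentEdge edge = new TimeOffsetEnvironmentEdge();
EnvironmentEdgeManager.injectEdge(edge);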


Example 16: testPurgeExpiredFiles

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
@Test
public void testPurgeExpiredFiles() throws Exception {
  HStore store = prepareData();
  assertEquals(10, store.getStorefilesCount());
  TEST_UTIL.getAdmin().majorCompact(tableName);
  TEST_UTIL.waitFor(30000, new ExplainingPredicate<Exception>() {

    @Override
    public boolean evaluate() throws Exception {
      return store.getStorefilesCount() == 1;
    }

    @Override
    public String explainFailure() throws Exception {
      return "The store file count " + store.getStorefilesCount() + " is still greater than 1";
    }
  });
}
 
Developer: apache, Project: hbase, Lines: 19, Source: TestFIFOCompactionPolicy.java


Example 17: testSanityCheckBlockingStoreFiles

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
@Test
public void testSanityCheckBlockingStoreFiles() throws IOException {
  error.expect(DoNotRetryIOException.class);
  error.expectMessage("Blocking file count 'hbase.hstore.blockingStoreFiles'");
  error.expectMessage("is below recommended minimum of 1000 for column family");
  TableName tableName = TableName.valueOf(getClass().getSimpleName() + "-BlockingStoreFiles");
  TableDescriptor desc = TableDescriptorBuilder.newBuilder(tableName)
      .setValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
        FIFOCompactionPolicy.class.getName())
      .setValue(HConstants.HBASE_REGION_SPLIT_POLICY_KEY,
        DisabledRegionSplitPolicy.class.getName())
      .setValue(HStore.BLOCKING_STOREFILES_KEY, "10")
      .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(family).setTimeToLive(1).build())
      .build();
  TEST_UTIL.getAdmin().createTable(desc);
}
 
Developer: apache, Project: hbase, Lines: 17, Source: TestFIFOCompactionPolicy.java
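The passing counterpart is the same descriptor with a compliant blocking-file count; a hedged sketch built from the code above:

// Hypothetical fix: meet the recommended minimum so table creation succeeds.
TableDescriptor ok = TableDescriptorBuilder.newBuilder(tableName)
    .setValue(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY,
      FIFOCompactionPolicy.class.getName())
    .setValue(HConstants.HBASE_REGION_SPLIT_POLICY_KEY,
      DisabledRegionSplitPolicy.class.getName())
    .setValue(HStore.BLOCKING_STOREFILES_KEY, "1000") // at the recommended minimum
    .addColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(family).setTimeToLive(1).build())
    .build();
TEST_UTIL.getAdmin().createTable(ok);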


Example 18: prepareData

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
private HStore prepareData() throws IOException {
  Admin admin = TEST_UTIL.getAdmin();
  if (admin.tableExists(tableName)) {
    admin.disableTable(tableName);
    admin.deleteTable(tableName);
  }
  Table table = TEST_UTIL.createTable(tableName, family);
  for (int i = 0; i < 10; i++) {
    for (int j = 0; j < 10; j++) {
      byte[] value = new byte[128 * 1024];
      ThreadLocalRandom.current().nextBytes(value);
      table.put(new Put(Bytes.toBytes(i * 10 + j)).addColumn(family, qualifier, value));
    }
    admin.flush(tableName);
  }
  return getStoreWithName(tableName);
}
 
Developer: apache, Project: hbase, Lines: 18, Source: TestCompactionWithThroughputController.java


Example 19: testCompactionWithoutThroughputLimit

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
private long testCompactionWithoutThroughputLimit() throws Exception {
  Configuration conf = TEST_UTIL.getConfiguration();
  conf.set(StoreEngine.STORE_ENGINE_CLASS_KEY, DefaultStoreEngine.class.getName());
  conf.setInt(CompactionConfiguration.HBASE_HSTORE_COMPACTION_MIN_KEY, 100);
  conf.setInt(CompactionConfiguration.HBASE_HSTORE_COMPACTION_MAX_KEY, 200);
  conf.setInt(HStore.BLOCKING_STOREFILES_KEY, 10000);
  conf.set(CompactionThroughputControllerFactory.HBASE_THROUGHPUT_CONTROLLER_KEY,
    NoLimitThroughputController.class.getName());
  TEST_UTIL.startMiniCluster(1);
  try {
    HStore store = prepareData();
    assertEquals(10, store.getStorefilesCount());
    long startTime = System.currentTimeMillis();
    TEST_UTIL.getAdmin().majorCompact(tableName);
    while (store.getStorefilesCount() != 1) {
      Thread.sleep(20);
    }
    return System.currentTimeMillis() - startTime;
  } finally {
    TEST_UTIL.shutdownMiniCluster();
  }
}
 
Developer: apache, Project: hbase, Lines: 23, Source: TestCompactionWithThroughputController.java


Example 20: generateAndFlushData

import org.apache.hadoop.hbase.regionserver.HStore; // import the required package/class
/**
 * Writes Puts to the table and flushes a few times.
 * @return {@link Pair} of (throughput, duration).
 */
private Pair<Double, Long> generateAndFlushData(Table table) throws IOException {
  // Internally, throughput is controlled after every cell write, so keep the value size
  // small for finer control.
  final int NUM_FLUSHES = 3, NUM_PUTS = 50, VALUE_SIZE = 200 * 1024;
  Random rand = new Random();
  long duration = 0;
  for (int i = 0; i < NUM_FLUSHES; i++) {
    // Write about 10 MB (10x the throughput rate) per iteration: 50 puts x 200 KB each.
    for (int j = 0; j < NUM_PUTS; j++) {
      byte[] value = new byte[VALUE_SIZE];
      rand.nextBytes(value);
      table.put(new Put(Bytes.toBytes(i * 10 + j)).addColumn(family, qualifier, value));
    }
    long startTime = System.nanoTime();
    hbtu.getAdmin().flush(tableName);
    duration += System.nanoTime() - startTime;
  }
  HStore store = getStoreWithName(tableName);
  assertEquals(NUM_FLUSHES, store.getStorefilesCount());
  double throughput = (double)store.getStorefilesSize()
      / TimeUnit.NANOSECONDS.toSeconds(duration);
  return new Pair<>(throughput, duration);
}
 
Developer: apache, Project: hbase, Lines: 28, Source: TestFlushWithThroughputController.java
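A brief usage sketch for the returned pair (variable names are illustrative):

// Hypothetical call site: unpack throughput (bytes per second) and total flush time.
Pair<Double, Long> result = generateAndFlushData(table);
double bytesPerSecond = result.getFirst();
long totalFlushNanos = result.getSecond();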



Note: The org.apache.hadoop.hbase.regionserver.HStore class examples in this article were collected from open-source projects on GitHub, MSDocs, and similar source/documentation platforms; the snippets were selected from projects contributed by their respective developers, and copyright remains with the original authors. Consult each project's License before redistributing or reusing the code. Do not reproduce without permission.

