Java Compactor Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.compactions.Compactor. If you are wondering what the Compactor class is for, how it is used, or are looking for concrete examples of it, the curated code examples below may help.



The Compactor class belongs to the org.apache.hadoop.hbase.regionserver.compactions package. Nine code examples of the Compactor class are shown below, sorted by popularity by default.
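Most of the examples below pass a Compactor.CellSink as the destination for flushed or compacted cells. As a point of reference, here is a minimal sketch of a sink that simply collects appended cells in memory. It assumes the nested CellSink interface exposes the single append(Cell) method (declared to throw IOException) that the examples call; the class name CollectingCellSink is invented for illustration, and a real flush would normally use a StoreFile.Writer instead.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.compactions.Compactor;

// Illustrative sink: keeps every appended cell in memory so a test can inspect it.
// In the real flush path the sink is typically a StoreFile.Writer backed by an HFile.
public class CollectingCellSink implements Compactor.CellSink {
  private final List<Cell> cells = new ArrayList<Cell>();

  @Override
  public void append(Cell cell) throws IOException {
    cells.add(cell);
  }

  public List<Cell> getCells() {
    return cells;
  }
}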

Example 1: performFlush

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
/**
 * Performs memstore flush, writing data from scanner into sink.
 *
 * @param scanner           Scanner to get data from.
 * @param sink              Sink to write data to. Could be StoreFile.Writer.
 * @param smallestReadPoint Smallest read point used for the flush.
 */
protected void performFlush(InternalScanner scanner, Compactor.CellSink sink,
    long smallestReadPoint) throws IOException {
  int compactionKVMax =
      conf.getInt(HConstants.COMPACTION_KV_MAX, HConstants.COMPACTION_KV_MAX_DEFAULT);

  ScannerContext scannerContext =
      ScannerContext.newBuilder().setBatchLimit(compactionKVMax).build();

  List<Cell> kvs = new ArrayList<Cell>();
  boolean hasMore;
  do {
    hasMore = scanner.next(kvs, scannerContext);
    if (!kvs.isEmpty()) {
      for (Cell c : kvs) {
        // If we know that this KV is going to be included always, then let us
        // set its memstoreTS to 0. This will help us save space when writing to
        // disk.
        sink.append(c);
      }
      kvs.clear();
    }
  } while (hasMore);
}
 
Developer: fengchen8086, Project: ditb, Lines: 32, Source: StoreFlusher.java
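The batch limit read in Example 1 comes from HConstants.COMPACTION_KV_MAX (the hbase.hstore.compaction.kv.max setting). Below is a minimal sketch of reading and overriding it, assuming a standard HBaseConfiguration; the value 100 is arbitrary and only for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

// Illustrative only: shows where the per-batch cell limit used by the flush loop
// comes from, and how it could be overridden before a flush/compaction runs.
public class CompactionKvMaxExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // hbase.hstore.compaction.kv.max; 100 is an arbitrary value for the example
    conf.setInt(HConstants.COMPACTION_KV_MAX, 100);

    int compactionKVMax =
        conf.getInt(HConstants.COMPACTION_KV_MAX, HConstants.COMPACTION_KV_MAX_DEFAULT);
    System.out.println("compactionKVMax = " + compactionKVMax);
  }
}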


Example 2: performFlush

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
/**
 * Performs memstore flush, writing data from scanner into sink.
 * @param scanner Scanner to get data from.
 * @param sink Sink to write data to. Could be StoreFile.Writer.
 * @param smallestReadPoint Smallest read point used for the flush.
 */
protected void performFlush(InternalScanner scanner,
    Compactor.CellSink sink, long smallestReadPoint) throws IOException {
  int compactionKVMax =
    conf.getInt(HConstants.COMPACTION_KV_MAX, HConstants.COMPACTION_KV_MAX_DEFAULT);
  List<Cell> kvs = new ArrayList<Cell>();
  boolean hasMore;
  do {
    hasMore = scanner.next(kvs, compactionKVMax);
    if (!kvs.isEmpty()) {
      for (Cell c : kvs) {
        // If we know that this KV is going to be included always, then let us
        // set its memstoreTS to 0. This will help us save space when writing to
        // disk.
        sink.append(c);
      }
      kvs.clear();
    }
  } while (hasMore);
}
 
Developer: grokcoder, Project: pbase, Lines: 26, Source: StoreFlusher.java


Example 3: performFlush

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
/**
 * Performs memstore flush, writing data from scanner into sink.
 * @param scanner Scanner to get data from.
 * @param sink Sink to write data to. Could be StoreFile.Writer.
 * @param smallestReadPoint Smallest read point used for the flush.
 * @return Bytes flushed.
 */
protected long performFlush(InternalScanner scanner,
    Compactor.CellSink sink, long smallestReadPoint) throws IOException {
  int compactionKVMax =
    conf.getInt(HConstants.COMPACTION_KV_MAX, HConstants.COMPACTION_KV_MAX_DEFAULT);
  List<Cell> kvs = new ArrayList<Cell>();
  boolean hasMore;
  long flushed = 0;
  do {
    hasMore = scanner.next(kvs, compactionKVMax);
    if (!kvs.isEmpty()) {
      for (Cell c : kvs) {
        // If we know that this KV is going to be included always, then let us
        // set its memstoreTS to 0. This will help us save space when writing to
        // disk.
        KeyValue kv = KeyValueUtil.ensureKeyValue(c);
        if (kv.getMvccVersion() <= smallestReadPoint) {
          // let us not change the original KV. It could be in the memstore
          // changing its memstoreTS could affect other threads/scanners.
          kv = kv.shallowCopy();
          kv.setMvccVersion(0);
        }
        sink.append(kv);
        flushed += MemStore.heapSizeChange(kv, true);
      }
      kvs.clear();
    }
  } while (hasMore);
  return flushed;
}
 
Developer: tenggyut, Project: HIndex, Lines: 37, Source: StoreFlusher.java


Example 4: performFlush

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
  /**
   * Performs memstore flush, writing data from scanner into sink.
   * @param scanner Scanner to get data from.
   * @param sink Sink to write data to. Could be StoreFile.Writer.
   * @param smallestReadPoint Smallest read point used for the flush.
   * @return Bytes flushed.
   */
  protected long performFlush(InternalScanner scanner,
      Compactor.CellSink sink, long smallestReadPoint) throws IOException {
    int compactionKVMax =
      conf.getInt(HConstants.COMPACTION_KV_MAX, HConstants.COMPACTION_KV_MAX_DEFAULT);
    List<Cell> kvs = new ArrayList<Cell>();
    boolean hasMore;
    long flushed = 0;
    do {
      hasMore = scanner.next(kvs, compactionKVMax);
      if (!kvs.isEmpty()) {
        for (Cell c : kvs) {
          // If we know that this KV is going to be included always, then let us
          // set its memstoreTS to 0. This will help us save space when writing to
          // disk.
          KeyValue kv = KeyValueUtil.ensureKeyValue(c);
          if (kv.getMvccVersion() <= smallestReadPoint) {
            // let us not change the original KV. It could be in the memstore
            // changing its memstoreTS could affect other threads/scanners.
            kv = kv.shallowCopy();
            kv.setMvccVersion(0);
          }
          sink.append(kv);
          flushed += MemStore.heapSizeChange(kv, true);
        }
        kvs.clear();
      }
    } while (hasMore);
    return flushed;
  }
 
Developer: cloud-software-foundation, Project: c5, Lines: 37, Source: StoreFlusher.java


Example 5: testCompactionWithCorruptResult

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
@Test
public void testCompactionWithCorruptResult() throws Exception {
  int nfiles = 10;
  for (int i = 0; i < nfiles; i++) {
    createStoreFile(r);
  }
  HStore store = (HStore) r.getStore(COLUMN_FAMILY);

  Collection<StoreFile> storeFiles = store.getStorefiles();
  Compactor tool = store.storeEngine.getCompactor();

  List<Path> newFiles = tool.compactForTesting(storeFiles, false);

  // Now let's corrupt the compacted file.
  FileSystem fs = store.getFileSystem();
  // default compaction policy created one and only one new compacted file
  Path dstPath = store.getRegionFileSystem().createTempName();
  FSDataOutputStream stream = fs.create(dstPath, null, true, 512, (short)3, (long)1024, null);
  stream.writeChars("CORRUPT FILE!!!!");
  stream.close();
  Path origPath = store.getRegionFileSystem().commitStoreFile(
    Bytes.toString(COLUMN_FAMILY), dstPath);

  try {
    ((HStore)store).moveFileIntoPlace(origPath);
  } catch (Exception e) {
    // The complete compaction should fail and the corrupt file should remain
    // in the 'tmp' directory;
    assert (fs.exists(origPath));
    assert (!fs.exists(dstPath));
    System.out.println("testCompactionWithCorruptResult Passed");
    return;
  }
  fail("testCompactionWithCorruptResult failed since no exception was" +
      "thrown while completing a corrupt file");
}
 
Developer: cloud-software-foundation, Project: c5, Lines: 37, Source: TestCompaction.java


Example 6: getCompactor

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
/**
 * @return Compactor to use.
 */
public Compactor getCompactor() {
  return this.compactor;
}
 
Developer: fengchen8086, Project: ditb, Lines: 7, Source: StoreEngine.java
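Example 5 above shows how the Compactor returned here can be driven directly in a test. Below is a minimal sketch of that pattern; it assumes the same HBase version as Examples 5 and 6, is placed in the org.apache.hadoop.hbase.regionserver package so the storeEngine field is accessible (as in the original test), and relies on compactForTesting, the test hook used in Example 5, which may not exist in every version.

package org.apache.hadoop.hbase.regionserver;

import java.io.IOException;
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.compactions.Compactor;

// Illustrative helper mirroring Example 5: run the store's configured compactor
// over its current store files and return the paths of the newly written files.
public class CompactorUsageSketch {
  public static List<Path> compactAllFiles(HStore store) throws IOException {
    Collection<StoreFile> storeFiles = store.getStorefiles();
    Compactor tool = store.storeEngine.getCompactor();
    // compactForTesting is the hook used in Example 5 (not part of the public API)
    return tool.compactForTesting(storeFiles, false);
  }
}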


Example 7: performFlush

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
/**
 * Performs memstore flush, writing data from scanner into sink.
 * @param scanner Scanner to get data from.
 * @param sink Sink to write data to. Could be StoreFile.Writer.
 * @param smallestReadPoint Smallest read point used for the flush.
 */
protected void performFlush(InternalScanner scanner,
    Compactor.CellSink sink, long smallestReadPoint) throws IOException {
  int compactionKVMax =
    conf.getInt(HConstants.COMPACTION_KV_MAX, HConstants.COMPACTION_KV_MAX_DEFAULT);
  List<Cell> kvs = new ArrayList<Cell>();

  // Shen Li: init nextSplitRow
  splitKeyIndex = 0;
  nextSplitRow = store.getRegionInfo().getSplitKey(splitKeyIndex);

  boolean hasMore;
  do {
    hasMore = scanner.next(kvs, compactionKVMax);
    if (!kvs.isEmpty()) {
      for (Cell c : kvs) {
        // If we know that this KV is going to be included always, then let us
        // set its memstoreTS to 0. This will help us save space when writing to
        // disk.
        KeyValue kv = KeyValueUtil.ensureKeyValue(c);
        if (kv.getMvccVersion() <= smallestReadPoint) {
          // let us not change the original KV. It could be in the memstore
          // changing its memstoreTS could affect other threads/scanners.
          kv = kv.shallowCopy();
          kv.setMvccVersion(0);
        }
        // Shen Li: TODO check split boundary. use Store, if exceed boundary,
        // call Store to seal block and reset replica group
        //
        // sink is an instance of StoreFile.Writer which has a
        // HFile.Writer as a member variable
        //
        // HFile.Writer has a FSDataOutputStream member variable
        // which can do seal, and set replica group operations.
        //
        if (shouldSeal(kv)) {
          // the sealCurBlock will flush buffer before seal block
          sink.sealCurBlock();
          sink.setReplicaGroups(getReplicaNamespace(), 
                                getReplicaGroups());
        }
        sink.append(kv);
      }
      kvs.clear();
    }
  } while (hasMore);
}
 
Developer: shenli-uiuc, Project: PyroDB, Lines: 53, Source: StoreFlusher.java


Example 8: performCompaction

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
protected boolean performCompaction(Compactor.FileDetails fd, InternalScanner scanner, CellSink writer,
                                        long smallestReadPoint, boolean cleanSeqId,
                                        CompactionThroughputController throughputController,
                                        boolean major) throws IOException {
        if (LOG.isTraceEnabled())
            SpliceLogUtils.trace(LOG,"performCompaction");
        long bytesWritten = 0;
        long bytesWrittenProgress = 0;

        // Since scanner.next() can return 'false' but still be delivering data,
        // we have to use a do/while loop.
        List<Cell> cells =new ArrayList<>();
        long closeCheckInterval = HStore.getCloseCheckInterval();
        long lastMillis = 0;
        if (LOG.isDebugEnabled()) {
            lastMillis = EnvironmentEdgeManager.currentTime();
        }
        long now = 0;
        boolean hasMore;
        int compactionKVMax = this.conf.getInt(HConstants.COMPACTION_KV_MAX, HConstants.COMPACTION_KV_MAX_DEFAULT);
        ScannerContext scannerContext =
                ScannerContext.newBuilder().setBatchLimit(compactionKVMax).build();

        do {
            hasMore = scanner.next(cells, scannerContext);
            if (LOG.isDebugEnabled()) {
                now = EnvironmentEdgeManager.currentTime();
            }
            // output to writer:
            for (Cell c : cells) {
                if (cleanSeqId && c.getSequenceId() <= smallestReadPoint) {
                    CellUtil.setSequenceId(c, 0);
                }
                writer.append(c);
                int len = KeyValueUtil.length(c);
                ++progress.currentCompactedKVs;
                progress.totalCompactedSize += len;
                if (LOG.isDebugEnabled()) {
                    bytesWrittenProgress += len;
                }
                // check periodically to see if a system stop is requested
                if (closeCheckInterval > 0) {
                    bytesWritten += len;
                    if (bytesWritten > closeCheckInterval) {
                        bytesWritten = 0;
//                        if (!store.areWritesEnabled()) {
//                            progress.cancel();
//                            return false;
//                        }
                    }
                }
            }
            // Log the progress of long running compactions every minute if
            // logging at DEBUG level
            if (LOG.isDebugEnabled()) {
                if ((now - lastMillis) >= 60 * 1000) {
                    LOG.debug("Compaction progress: " + progress + String.format(", rate=%.2f kB/sec",
                            (bytesWrittenProgress / 1024.0) / ((now - lastMillis) / 1000.0)));
                    lastMillis = now;
                    bytesWrittenProgress = 0;
                }
            }
            cells.clear();
        } while (hasMore);
        progress.complete();
        return true;
    }
 
Developer: splicemachine, Project: spliceengine, Lines: 68, Source: SpliceDefaultCompactor.java


Example 9: performFlush

import org.apache.hadoop.hbase.regionserver.compactions.Compactor; // import the required package/class
/**
 * Performs memstore flush, writing data from scanner into sink.
 *
 * @param scanner           Scanner to get data from.
 * @param sink              Sink to write data to. Could be StoreFile.Writer.
 * @param smallestReadPoint Smallest read point used for the flush.
 */
@Override
protected void performFlush(InternalScanner scanner, Compactor.CellSink sink, long smallestReadPoint) throws IOException {
    super.performFlush(scanner, sink, smallestReadPoint);
}
 
Developer: grokcoder, Project: pbase, Lines: 12, Source: ParquetStoreFlusher.java



Note: The org.apache.hadoop.hbase.regionserver.compactions.Compactor class examples in this article were compiled from source code and documentation hosted on platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their authors, and copyright remains with the original authors. Please consult the corresponding project's license before redistributing or using the code; do not reproduce this article without permission.

