Java FilterWrapper Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.filter.FilterWrapper. If you have been wondering what the FilterWrapper class does, how to use it, or where to find working examples, the selected code examples below should help.



The FilterWrapper class belongs to the org.apache.hadoop.hbase.filter package. Five code examples of the FilterWrapper class are presented below, sorted by popularity by default.
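
All five examples share the same pattern: when the Scan carries a user-supplied filter, the region scanner wraps it in a FilterWrapper before using it. The short sketch below isolates just that pattern; the class and method names (FilterWrapperSketch, wrapScanFilter) are illustrative assumptions and not part of HBase itself.

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterWrapper;

public class FilterWrapperSketch {
  // Illustrative helper (not HBase source): wrap the Scan's filter in a
  // FilterWrapper if one is set, as the RegionScannerImpl constructors below do.
  static Filter wrapScanFilter(Scan scan) {
    return scan.hasFilter() ? new FilterWrapper(scan.getFilter()) : null;
  }
}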

Example 1: RegionScannerImpl

import org.apache.hadoop.hbase.filter.FilterWrapper; // import the required package/class
RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region,
    long nonceGroup, long nonce) throws IOException {
  this.region = region;
  this.maxResultSize = scan.getMaxResultSize();
  if (scan.hasFilter()) {
    this.filter = new FilterWrapper(scan.getFilter());
  } else {
    this.filter = null;
  }
  this.comparator = region.getCellComparator();
  /**
   * By default, calls to next/nextRaw must enforce the batch limit. Thus, construct a default
   * scanner context that can be used to enforce the batch limit in the event that a
   * ScannerContext is not specified during an invocation of next/nextRaw
   */
  defaultScannerContext = ScannerContext.newBuilder()
      .setBatchLimit(scan.getBatch()).build();
  this.stopRow = scan.getStopRow();
  this.includeStopRow = scan.includeStopRow();

  // synchronize on scannerReadPoints so that nobody calculates
  // getSmallestReadPoint, before scannerReadPoints is updated.
  IsolationLevel isolationLevel = scan.getIsolationLevel();
  long mvccReadPoint = PackagePrivateFieldAccessor.getMvccReadPoint(scan);
  synchronized (scannerReadPoints) {
    if (mvccReadPoint > 0) {
      this.readPt = mvccReadPoint;
    } else if (nonce == HConstants.NO_NONCE || rsServices == null
        || rsServices.getNonceManager() == null) {
      this.readPt = getReadPoint(isolationLevel);
    } else {
      this.readPt = rsServices.getNonceManager().getMvccFromOperationContext(nonceGroup, nonce);
    }
    scannerReadPoints.put(this, this.readPt);
  }
  initializeScanners(scan, additionalScanners);
}
 
Developer ID: apache, Project: hbase, Lines of code: 38, Source file: HRegion.java


Example 2: RegionScannerImpl

import org.apache.hadoop.hbase.filter.FilterWrapper; // import the required package/class
RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region)
        throws IOException {

    this.region = region;
    this.maxResultSize = scan.getMaxResultSize();
    if (scan.hasFilter()) {
        this.filter = new FilterWrapper(scan.getFilter());
    } else {
        this.filter = null;
    }

    this.batch = scan.getBatch(); // maximum number of values returned per call to next()

    if (Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW) && !scan.isGetScan()) {
        this.stopRow = null;
    } else {
        this.stopRow = scan.getStopRow();
    }
    // If we are doing a get, we want to be [startRow,endRow] normally
    // it is [startRow,endRow) and if startRow=endRow we get nothing.
    this.isScan = scan.isGetScan() ? -1 : 0;

    // synchronize on scannerReadPoints so that nobody calculates
    // getSmallestReadPoint, before scannerReadPoints is updated.
    IsolationLevel isolationLevel = scan.getIsolationLevel();
    synchronized (scannerReadPoints) {
        this.readPt = getReadpoint(isolationLevel);
        scannerReadPoints.put(this, this.readPt);
    }

    // Here we separate all scanners into two lists - scanner that provide data required
    // by the filter to operate (scanners list) and all others (joinedScanners list).
    List<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
    List<KeyValueScanner> joinedScanners = new ArrayList<KeyValueScanner>();
    if (additionalScanners != null) {
        scanners.addAll(additionalScanners);
    }

    for (Map.Entry<byte[], NavigableSet<byte[]>> entry :
            scan.getFamilyMap().entrySet()) {
        Store store = stores.get(entry.getKey());
        KeyValueScanner scanner = store.getScanner(scan, entry.getValue(), this.readPt);
        if (this.filter == null || !scan.doLoadColumnFamiliesOnDemand()
                || this.filter.isFamilyEssential(entry.getKey())) {
            scanners.add(scanner);
        } else {
            joinedScanners.add(scanner);
        }
    }
    initializeKVHeap(scanners, joinedScanners, region);
}
 
Developer ID: grokcoder, Project: pbase, Lines of code: 52, Source file: HRegion.java


Example 3: RegionScannerImpl

import org.apache.hadoop.hbase.filter.FilterWrapper; // import the required package/class
RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region)
    throws IOException {

  this.region = region;
  this.maxResultSize = scan.getMaxResultSize();
  if (scan.hasFilter()) {
    this.filter = new FilterWrapper(scan.getFilter());
  } else {
    this.filter = null;
  }

  this.batch = scan.getBatch();
  if (Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW) && !scan.isGetScan()) {
    this.stopRow = null;
  } else {
    this.stopRow = scan.getStopRow();
  }
  // If we are doing a get, we want to be [startRow,endRow] normally
  // it is [startRow,endRow) and if startRow=endRow we get nothing.
  this.isScan = scan.isGetScan() ? -1 : 0;

  // synchronize on scannerReadPoints so that nobody calculates
  // getSmallestReadPoint, before scannerReadPoints is updated.
  IsolationLevel isolationLevel = scan.getIsolationLevel();
  synchronized(scannerReadPoints) {
    this.readPt = getReadpoint(isolationLevel);
    scannerReadPoints.put(this, this.readPt);
  }

  // Here we separate all scanners into two lists - scanner that provide data required
  // by the filter to operate (scanners list) and all others (joinedScanners list).
  List<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
  List<KeyValueScanner> joinedScanners = new ArrayList<KeyValueScanner>();
  if (additionalScanners != null) {
    scanners.addAll(additionalScanners);
  }

  for (Map.Entry<byte[], NavigableSet<byte[]>> entry :
      scan.getFamilyMap().entrySet()) {
    Store store = stores.get(entry.getKey());
    KeyValueScanner scanner = store.getScanner(scan, entry.getValue(), this.readPt);
    if (this.filter == null || !scan.doLoadColumnFamiliesOnDemand()
      || this.filter.isFamilyEssential(entry.getKey())) {
      scanners.add(scanner);
    } else {
      joinedScanners.add(scanner);
    }
  }
  initializeKVHeap(scanners, joinedScanners, region);
}
 
Developer ID: tenggyut, Project: HIndex, Lines of code: 51, Source file: HRegion.java


Example 4: RegionScannerImpl

import org.apache.hadoop.hbase.filter.FilterWrapper; // import the required package/class
RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners, HRegion region)
    throws IOException {

  this.region = region;
  this.maxResultSize = scan.getMaxResultSize();
  if (scan.hasFilter()) {
    this.filter = new FilterWrapper(scan.getFilter());
  } else {
    this.filter = null;
  }

  this.batch = scan.getBatch();
  if (Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW) && !scan.isGetScan()) {
    this.stopRow = null;
  } else {
    this.stopRow = scan.getStopRow();
  }
  // If we are doing a get, we want to be [startRow,endRow] normally
  // it is [startRow,endRow) and if startRow=endRow we get nothing.
  this.isScan = scan.isGetScan() ? -1 : 0;

  // synchronize on scannerReadPoints so that nobody calculates
  // getSmallestReadPoint, before scannerReadPoints is updated.
  IsolationLevel isolationLevel = scan.getIsolationLevel();
  synchronized(scannerReadPoints) {
    if (isolationLevel == IsolationLevel.READ_UNCOMMITTED) {
      // This scan can read even uncommitted transactions
      this.readPt = Long.MAX_VALUE;
      MultiVersionConsistencyControl.setThreadReadPoint(this.readPt);
    } else {
      this.readPt = MultiVersionConsistencyControl.resetThreadReadPoint(mvcc);
    }
    scannerReadPoints.put(this, this.readPt);
  }

  // Here we separate all scanners into two lists - scanner that provide data required
  // by the filter to operate (scanners list) and all others (joinedScanners list).
  List<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
  List<KeyValueScanner> joinedScanners = new ArrayList<KeyValueScanner>();
  if (additionalScanners != null) {
    scanners.addAll(additionalScanners);
  }

  for (Map.Entry<byte[], NavigableSet<byte[]>> entry :
      scan.getFamilyMap().entrySet()) {
    Store store = stores.get(entry.getKey());
    KeyValueScanner scanner = store.getScanner(scan, entry.getValue());
    if (this.filter == null || !scan.doLoadColumnFamiliesOnDemand()
      || this.filter.isFamilyEssential(entry.getKey())) {
      scanners.add(scanner);
    } else {
      joinedScanners.add(scanner);
    }
  }
  this.storeHeap = new KeyValueHeap(scanners, comparator);
  if (!joinedScanners.isEmpty()) {
    this.joinedHeap = new KeyValueHeap(joinedScanners, comparator);
  }
}
 
Developer ID: cloud-software-foundation, Project: c5, Lines of code: 60, Source file: HRegion.java


Example 5: RegionScannerImpl

import org.apache.hadoop.hbase.filter.FilterWrapper; // import the required package/class
RegionScannerImpl(Scan scan, List<KeyValueScanner> additionalScanners) throws IOException {
  //DebugPrint.println("HRegionScanner.<init>");

  this.maxResultSize = scan.getMaxResultSize();
  if (scan.hasFilter()) {
    this.filter = new FilterWrapper(scan.getFilter());
  } else {
    this.filter = null;
  }

  this.batch = scan.getBatch();
  if (Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW)) {
    this.stopRow = null;
  } else {
    this.stopRow = scan.getStopRow();
  }
  // If we are doing a get, we want to be [startRow,endRow] normally
  // it is [startRow,endRow) and if startRow=endRow we get nothing.
  this.isScan = scan.isGetScan() ? -1 : 0;

  // synchronize on scannerReadPoints so that nobody calculates
  // getSmallestReadPoint, before scannerReadPoints is updated.
  IsolationLevel isolationLevel = scan.getIsolationLevel();
  synchronized(scannerReadPoints) {
    if (isolationLevel == IsolationLevel.READ_UNCOMMITTED) {
      // This scan can read even uncommitted transactions
      this.readPt = Long.MAX_VALUE;
      MultiVersionConsistencyControl.setThreadReadPoint(this.readPt);
    } else {
      this.readPt = MultiVersionConsistencyControl.resetThreadReadPoint(mvcc);
    }
    scannerReadPoints.put(this, this.readPt);
  }

  // Here we separate all scanners into two lists - scanner that provide data required
  // by the filter to operate (scanners list) and all others (joinedScanners list).
  List<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
  List<KeyValueScanner> joinedScanners = new ArrayList<KeyValueScanner>();
  if (additionalScanners != null) {
    scanners.addAll(additionalScanners);
  }

  for (Map.Entry<byte[], NavigableSet<byte[]>> entry :
      scan.getFamilyMap().entrySet()) {
    Store store = stores.get(entry.getKey());
    KeyValueScanner scanner = store.getScanner(scan, entry.getValue());
    if (this.filter == null || !scan.doLoadColumnFamiliesOnDemand()
      || this.filter.isFamilyEssential(entry.getKey())) {
      scanners.add(scanner);
    } else {
      joinedScanners.add(scanner);
    }
  }
  this.storeHeap = new KeyValueHeap(scanners, comparator);
  if (!joinedScanners.isEmpty()) {
    this.joinedHeap = new KeyValueHeap(joinedScanners, comparator);
  }
}
 
Developer ID: daidong, Project: DominoHBase, Lines of code: 59, Source file: HRegion.java



Note: The org.apache.hadoop.hbase.filter.FilterWrapper examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors, and copyright remains with them; consult the corresponding project's license before using or redistributing the code, and do not republish without permission.

