
Java Region Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.Region. If you have been wondering what the Region class is for, how to use it, or where to find practical examples of it, the curated class examples below may help.



The Region class belongs to the org.apache.hadoop.hbase.regionserver package. Twenty code examples of the Region class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
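
Before the individual examples, here is a minimal sketch of the pattern most of them share: obtaining a Region from the RegionCoprocessorEnvironment inside a coprocessor hook and inspecting the table it belongs to. The observer class name and the skip-system-table logic are illustrative assumptions for this sketch rather than part of any single example below.

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.Region; // the class all examples revolve around

// Hypothetical observer showing the Region lookup pattern used throughout the examples.
public class ExampleRegionObserver extends BaseRegionObserver {

  @Override
  public void preOpen(ObserverContext<RegionCoprocessorEnvironment> ctx) throws IOException {
    // The coprocessor environment hands out the Region the hook is running against.
    Region region = ctx.getEnvironment().getRegion();
    if (region == null) {
      return; // defensive null check, as in Example 4 below
    }
    // The region's HRegionInfo identifies the table it belongs to.
    TableName table = region.getRegionInfo().getTable();
    if (table.isSystemTable()) {
      return; // several examples skip system tables (e.g. Examples 1 and 17)
    }
    // From here, region.get(...), region.put(...) and region.getScanner(...)
    // are available for reading and writing data, as the examples below show.
  }
}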

Example 1: postInstantiateDeleteTracker

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Override
public DeleteTracker postInstantiateDeleteTracker(
    ObserverContext<RegionCoprocessorEnvironment> ctx, DeleteTracker delTracker)
    throws IOException {
  // Nothing to do if we are not filtering by visibility
  if (!authorizationEnabled) {
    return delTracker;
  }
  Region region = ctx.getEnvironment().getRegion();
  TableName table = region.getRegionInfo().getTable();
  if (table.isSystemTable()) {
    return delTracker;
  }
  // We are creating a new type of delete tracker here which is able to track
  // the timestamps and also the visibility tags per cell. The covering cells
  // are determined not only based on the delete type and ts but also on the
  // visibility expression matching.
  return new VisibilityScanDeleteTracker();
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 21, Source file: VisibilityController.java


Example 2: testClassLoadingFromLocalFS

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Test
// HBASE-3516: Test CP Class loading from local file system
public void testClassLoadingFromLocalFS() throws Exception {
  File jarFile = buildCoprocessorJar(cpName3);

  // create a table that references the jar
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(cpName3));
  htd.addFamily(new HColumnDescriptor("test"));
  htd.setValue("COPROCESSOR$1", getLocalPath(jarFile) + "|" + cpName3 + "|" +
    Coprocessor.PRIORITY_USER);
  Admin admin = TEST_UTIL.getHBaseAdmin();
  admin.createTable(htd);
  waitForTable(htd.getTableName());

  // verify that the coprocessor was loaded
  boolean found = false;
  MiniHBaseCluster hbase = TEST_UTIL.getHBaseCluster();
  for (Region region: hbase.getRegionServer(0).getOnlineRegionsLocalContext()) {
    if (region.getRegionInfo().getRegionNameAsString().startsWith(cpName3)) {
      found = (region.getCoprocessorHost().findCoprocessor(cpName3) != null);
    }
  }
  assertTrue("Class " + cpName3 + " was missing on a region", found);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 25, Source file: TestClassLoading.java


Example 3: initialize

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
void initialize(RegionCoprocessorEnvironment e) throws IOException {
  final Region region = e.getRegion();
  Configuration conf = e.getConfiguration();
  Map<byte[], ListMultimap<String,TablePermission>> tables =
      AccessControlLists.loadAll(region);
  // For each table, write out the table's permissions to the respective
  // znode for that table.
  for (Map.Entry<byte[], ListMultimap<String,TablePermission>> t:
    tables.entrySet()) {
    byte[] entry = t.getKey();
    ListMultimap<String,TablePermission> perms = t.getValue();
    byte[] serialized = AccessControlLists.writePermissionsAsBytes(perms, conf);
    this.authManager.getZKPermissionWatcher().writeToZookeeper(entry, serialized);
  }
  initialized = true;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 17, Source file: AccessController.java


Example 4: preOpen

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Override
public void preOpen(ObserverContext<RegionCoprocessorEnvironment> e)
    throws IOException {
  RegionCoprocessorEnvironment env = e.getEnvironment();
  final Region region = env.getRegion();
  if (region == null) {
    LOG.error("NULL region from RegionCoprocessorEnvironment in preOpen()");
  } else {
    HRegionInfo regionInfo = region.getRegionInfo();
    if (regionInfo.getTable().isSystemTable()) {
      checkSystemOrSuperUser();
    } else {
      requirePermission("preOpen", Action.ADMIN);
    }
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 17, Source file: AccessController.java


Example 5: testLogRolling

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
/**
 * Tests that logs are deleted
 * @throws IOException
 * @throws org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException
 */
@Test
public void testLogRolling() throws Exception {
  this.tableName = getName();
  // TODO: Why does this write data take for ever?
  startAndWriteData();
  final WAL log = server.getWAL(null);
  LOG.info("after writing there are " + DefaultWALProvider.getNumRolledLogFiles(log) +
      " log files");

  // flush all regions
  for (Region r : server.getOnlineRegionsLocalContext()) {
    r.flush(true);
  }

  // Now roll the log
  log.rollWriter();

  int count = DefaultWALProvider.getNumRolledLogFiles(log);
  LOG.info("after flushing all regions and rolling logs there are " + count + " log files");
  assertTrue(("actual count: " + count), count <= 2);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 27, Source file: TestLogRolling.java


Example 6: checkQuota

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
/**
 * Check the quota for the current (rpc-context) user. Returns the OperationQuota used to get the
 * available quota and to report the data/usage of the operation.
 * @param region the region where the operation will be performed
 * @param numWrites number of writes to perform
 * @param numReads number of short-reads to perform
 * @param numScans number of scans to perform
 * @return the OperationQuota
 * @throws ThrottlingException if the operation cannot be executed because the quota has been exceeded.
 */
private OperationQuota checkQuota(final Region region, final int numWrites, final int numReads,
    final int numScans) throws IOException, ThrottlingException {
  User user = RpcServer.getRequestUser();
  UserGroupInformation ugi;
  if (user != null) {
    ugi = user.getUGI();
  } else {
    ugi = User.getCurrent().getUGI();
  }
  TableName table = region.getTableDesc().getTableName();

  OperationQuota quota = getQuota(ugi, table);
  try {
    quota.checkQuota(numWrites, numReads, numScans);
  } catch (ThrottlingException e) {
    LOG.debug("Throttling exception for user=" + ugi.getUserName() + " table=" + table
        + " numWrites=" + numWrites + " numReads=" + numReads + " numScans=" + numScans + ": "
        + e.getMessage());
    throw e;
  }
  return quota;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 33, Source file: RegionServerQuotaManager.java
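
The OperationQuota returned by checkQuota above is meant to be held around the actual read or write and released afterwards. Below is a minimal sketch of that calling pattern, written as a hypothetical helper in the same class; the method name quotaCheckedGet and the single-short-read parameters are assumptions for illustration, while addGetResult and close come from the OperationQuota interface.

import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.quotas.OperationQuota;
import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class

// Hypothetical caller: reserve quota for one short read, perform it, report
// the size of the returned data, then release the per-operation accounting.
Result quotaCheckedGet(final Region region, final Get get) throws IOException {
  final OperationQuota quota = checkQuota(region, 0, 1, 0); // 0 writes, 1 read, 0 scans
  try {
    Result result = region.get(get);
    quota.addGetResult(result); // accounts for the data actually returned
    return result;
  } finally {
    quota.close(); // always release, even if the read throws
  }
}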


Example 7: reopenRegion

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
Region reopenRegion(final Region closedRegion, Class<?> ... implClasses)
    throws IOException {
  //HRegionInfo info = new HRegionInfo(tableName, null, null, false);
  Region r = HRegion.openHRegion(closedRegion, null);

  // this following piece is a hack. currently a coprocessorHost
  // is secretly loaded at OpenRegionHandler. we don't really
  // start a region server here, so just manually create cphost
  // and set it to region.
  Configuration conf = TEST_UTIL.getConfiguration();
  RegionCoprocessorHost host = new RegionCoprocessorHost(r, null, conf);
  ((HRegion)r).setCoprocessorHost(host);

  for (Class<?> implClass : implClasses) {
    host.load(implClass, Coprocessor.PRIORITY_USER, conf);
  }
  // we need to manually call pre- and postOpen here since the
  // above load() is not the real case for CP loading. A CP is
  // expected to be loaded by default from 1) configuration; or 2)
  // HTableDescriptor. If it's loaded after HRegion initialized,
  // the pre- and postOpen() won't be triggered automatically.
  // Here we have to call pre and postOpen explicitly.
  host.preOpen();
  host.postOpen();
  return r;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 27, Source file: TestCoprocessorInterface.java


Example 8: initHRegion

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
Region initHRegion(byte[] tableName, String callingMethod, Configuration conf,
    byte[]... families) throws IOException {
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(tableName));
  for (byte[] family : families) {
    htd.addFamily(new HColumnDescriptor(family));
  }
  HRegionInfo info = new HRegionInfo(htd.getTableName(), null, null, false);
  Path path = new Path(DIR + callingMethod);
  HRegion r = HRegion.createHRegion(info, path, conf, htd);
  // this following piece is a hack. currently a coprocessorHost
  // is secretly loaded at OpenRegionHandler. we don't really
  // start a region server here, so just manually create cphost
  // and set it to region.
  RegionCoprocessorHost host = new RegionCoprocessorHost(r, null, conf);
  r.setCoprocessorHost(host);
  return r;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 18, Source file: TestRegionObserverScannerOpenHook.java


Example 9: testReplayCallable

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Test (timeout = 240000)
public void testReplayCallable() throws Exception {
  // tests replaying the edits to a secondary region replica using the Callable directly
  openRegion(HTU, rs0, hriSecondary);
  ClusterConnection connection =
      (ClusterConnection) ConnectionFactory.createConnection(HTU.getConfiguration());

  //load some data to primary
  HTU.loadNumericRows(table, f, 0, 1000);

  Assert.assertEquals(1000, entries.size());
  // replay the edits to the secondary using replay callable
  replicateUsingCallable(connection, entries);

  Region region = rs0.getFromOnlineRegions(hriSecondary.getEncodedName());
  HTU.verifyNumericRows(region, f, 0, 1000);

  HTU.deleteNumericRows(table, f, 0, 1000);
  closeRegion(HTU, rs0, hriSecondary);
  connection.close();
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 22, Source file: TestRegionReplicaReplicationEndpointNoMaster.java


Example 10: testRegionObserverScanTimeStacking

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Test
public void testRegionObserverScanTimeStacking() throws Exception {
  byte[] ROW = Bytes.toBytes("testRow");
  byte[] TABLE = Bytes.toBytes(getClass().getName());
  byte[] A = Bytes.toBytes("A");
  byte[][] FAMILIES = new byte[][] { A };

  Configuration conf = HBaseConfiguration.create();
  Region region = initHRegion(TABLE, getClass().getName(), conf, FAMILIES);
  RegionCoprocessorHost h = region.getCoprocessorHost();
  h.load(NoDataFromScan.class, Coprocessor.PRIORITY_HIGHEST, conf);
  h.load(EmptyRegionObsever.class, Coprocessor.PRIORITY_USER, conf);

  Put put = new Put(ROW);
  put.add(A, A, A);
  region.put(put);

  Get get = new Get(ROW);
  Result r = region.get(get);
  assertNull(
    "Got an unexpected number of rows - no data should be returned with the NoDataFromScan coprocessor. Found: "
        + r, r.listCells());
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 24, Source file: TestRegionObserverScannerOpenHook.java


Example 11: getServerHoldingRegion

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Override
public ServerName getServerHoldingRegion(final TableName tn, byte[] regionName)
throws IOException {
  // Assume there is only one master thread which is the active master.
  // If there are multiple master threads, the backup master threads
  // should hold some regions. Please refer to #countServedRegions
  // to see how we find out all regions.
  HMaster master = getMaster();
  Region region = master.getOnlineRegion(regionName);
  if (region != null) {
    return master.getServerName();
  }
  int index = getServerWith(regionName);
  if (index < 0) {
    return null;
  }
  return getRegionServer(index).getServerName();
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 19, Source file: MiniHBaseCluster.java


Example 12: testWALRollWriting

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Test (timeout=300000)
public void testWALRollWriting() throws Exception {
  setUpforLogRolling();
  String className = this.getClass().getName();
  StringBuilder v = new StringBuilder(className);
  while (v.length() < 1000) {
    v.append(className);
  }
  byte[] value = Bytes.toBytes(v.toString());
  HRegionServer regionServer = startAndWriteData(TableName.valueOf("TestLogRolling"), value);
  LOG.info("after writing there are "
      + DefaultWALProvider.getNumRolledLogFiles(regionServer.getWAL(null)) + " log files");

  // flush all regions
  for (Region r : regionServer.getOnlineRegionsLocalContext()) {
    r.flush(true);
  }
  admin.rollWALWriter(regionServer.getServerName());
  int count = DefaultWALProvider.getNumRolledLogFiles(regionServer.getWAL(null));
  LOG.info("after flushing all regions and rolling logs there are " +
      count + " log files");
  assertTrue(("actual count: " + count), count <= 2);
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 24, Source file: TestAdmin2.java


Example 13: setUpBeforeClass

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  cluster = TEST_UTIL.startMiniCluster(1, ServerNum);
  table = TEST_UTIL.createTable(tableName, FAMILY, HBaseTestingUtility.KEYS_FOR_HBA_CREATE_TABLE);
  TEST_UTIL.waitTableAvailable(tableName, 1000);
  TEST_UTIL.loadTable(table, FAMILY);

  for (int i = 0; i < ServerNum; i++) {
    HRegionServer server = cluster.getRegionServer(i);
    for (Region region : server.getOnlineRegions(tableName)) {
      region.flush(true);
    }
  }

  finder.setConf(TEST_UTIL.getConfiguration());
  finder.setServices(cluster.getMaster());
  finder.setClusterStatus(cluster.getMaster().getClusterStatus());
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 19, Source file: TestRegionLocationFinder.java


Example 14: testInternalGetTopBlockLocation

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Test
public void testInternalGetTopBlockLocation() throws Exception {
  for (int i = 0; i < ServerNum; i++) {
    HRegionServer server = cluster.getRegionServer(i);
    for (Region region : server.getOnlineRegions(tableName)) {
      // get the region's HDFS block distribution via the Region and via the
      // RegionLocationFinder; they should produce the same result
      HDFSBlocksDistribution blocksDistribution1 = region.getHDFSBlocksDistribution();
      HDFSBlocksDistribution blocksDistribution2 = finder.getBlockDistribution(region
          .getRegionInfo());
      assertEquals(blocksDistribution1.getUniqueBlocksTotalWeight(),
        blocksDistribution2.getUniqueBlocksTotalWeight());
      if (blocksDistribution1.getUniqueBlocksTotalWeight() != 0) {
        assertEquals(blocksDistribution1.getTopHosts().get(0), blocksDistribution2.getTopHosts()
            .get(0));
      }
    }
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 20, Source file: TestRegionLocationFinder.java


Example 15: testGetTopBlockLocations

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Test
public void testGetTopBlockLocations() throws Exception {
  for (int i = 0; i < ServerNum; i++) {
    HRegionServer server = cluster.getRegionServer(i);
    for (Region region : server.getOnlineRegions(tableName)) {
      List<ServerName> servers = finder.getTopBlockLocations(region.getRegionInfo());
      // test table may have empty region
      if (region.getHDFSBlocksDistribution().getUniqueBlocksTotalWeight() == 0) {
        continue;
      }
      List<String> topHosts = region.getHDFSBlocksDistribution().getTopHosts();
      // rs and datanode may have different host in local machine test
      if (!topHosts.contains(server.getServerName().getHostname())) {
        continue;
      }
      for (int j = 0; j < ServerNum; j++) {
        ServerName serverName = cluster.getRegionServer(j).getServerName();
        assertTrue(servers.contains(serverName));
      }
    }
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 23, Source file: TestRegionLocationFinder.java


Example 16: initHRegion

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
Region initHRegion (TableName tableName, String callingMethod,
    Configuration conf, Class<?> [] implClasses, byte [][] families)
    throws IOException {
  HTableDescriptor htd = new HTableDescriptor(tableName);
  for(byte [] family : families) {
    htd.addFamily(new HColumnDescriptor(family));
  }
  HRegionInfo info = new HRegionInfo(tableName, null, null, false);
  Path path = new Path(DIR + callingMethod);
  HRegion r = HRegion.createHRegion(info, path, conf, htd);

  // this following piece is a hack.
  RegionCoprocessorHost host = new RegionCoprocessorHost(r, null, conf);
  r.setCoprocessorHost(host);

  for (Class<?> implClass : implClasses) {
    host.load(implClass, Coprocessor.PRIORITY_USER, conf);
    Coprocessor c = host.findCoprocessor(implClass.getName());
    assertNotNull(c);
  }

  // Here we have to call pre and postOpen explicitly.
  host.preOpen();
  host.postOpen();
  return r;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 27, Source file: TestCoprocessorInterface.java


Example 17: preScannerOpen

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
@Override
public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan,
    RegionScanner s) throws IOException {
  if (!initialized) {
    throw new VisibilityControllerNotReadyException("VisibilityController not yet initialized!");
  }
  // Nothing to do if authorization is not enabled
  if (!authorizationEnabled) {
    return s;
  }
  Region region = e.getEnvironment().getRegion();
  Authorizations authorizations = null;
  try {
    authorizations = scan.getAuthorizations();
  } catch (DeserializationException de) {
    throw new IOException(de);
  }
  if (authorizations == null) {
    // No Authorizations present for this scan/Get!
    // In case of system tables other than "labels", just scan without visibility check and
    // filtering. Checking visibility labels for the META and NAMESPACE tables is not needed.
    TableName table = region.getRegionInfo().getTable();
    if (table.isSystemTable() && !table.equals(LABELS_TABLE_NAME)) {
      return s;
    }
  }

  Filter visibilityLabelFilter = VisibilityUtils.createVisibilityLabelFilter(region,
      authorizations);
  if (visibilityLabelFilter != null) {
    Filter filter = scan.getFilter();
    if (filter != null) {
      scan.setFilter(new FilterList(filter, visibilityLabelFilter));
    } else {
      scan.setFilter(visibilityLabelFilter);
    }
  }
  return s;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 40, Source file: VisibilityController.java


Example 18: createVisibilityLabelFilter

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
public static Filter createVisibilityLabelFilter(Region region, Authorizations authorizations)
    throws IOException {
  Map<ByteRange, Integer> cfVsMaxVersions = new HashMap<ByteRange, Integer>();
  for (HColumnDescriptor hcd : region.getTableDesc().getFamilies()) {
    cfVsMaxVersions.put(new SimpleMutableByteRange(hcd.getName()), hcd.getMaxVersions());
  }
  VisibilityLabelService vls = VisibilityLabelServiceManager.getInstance()
      .getVisibilityLabelService();
  Filter visibilityLabelFilter = new VisibilityLabelFilter(
      vls.getVisibilityExpEvaluator(authorizations), cfVsMaxVersions);
  return visibilityLabelFilter;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 13, Source file: VisibilityUtils.java


Example 19: addSystemLabel

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
protected void addSystemLabel(Region region, Map<String, Integer> labels,
    Map<String, List<Integer>> userAuths) throws IOException {
  if (!labels.containsKey(SYSTEM_LABEL)) {
    Put p = new Put(Bytes.toBytes(SYSTEM_LABEL_ORDINAL));
    p.addImmutable(LABELS_TABLE_FAMILY, LABEL_QUALIFIER, Bytes.toBytes(SYSTEM_LABEL));
    region.put(p);
    labels.put(SYSTEM_LABEL, SYSTEM_LABEL_ORDINAL);
  }
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 10, Source file: DefaultVisibilityLabelServiceImpl.java


Example 20: loadAll

import org.apache.hadoop.hbase.regionserver.Region; // import the required package/class
/**
 * Loads all of the permission grants stored in a region of the {@code _acl_}
 * table.
 *
 * @param aclRegion
 * @return a map of the permissions for this table.
 * @throws IOException
 */
static Map<byte[], ListMultimap<String,TablePermission>> loadAll(Region aclRegion)
  throws IOException {

  if (!isAclRegion(aclRegion)) {
    throw new IOException("Can only load permissions from "+ACL_TABLE_NAME);
  }

  Map<byte[], ListMultimap<String, TablePermission>> allPerms =
      new TreeMap<byte[], ListMultimap<String, TablePermission>>(Bytes.BYTES_RAWCOMPARATOR);

  // do a full scan of _acl_ table

  Scan scan = new Scan();
  scan.addFamily(ACL_LIST_FAMILY);

  InternalScanner iScanner = null;
  try {
    iScanner = aclRegion.getScanner(scan);

    while (true) {
      List<Cell> row = new ArrayList<Cell>();

      boolean hasNext = iScanner.next(row);
      ListMultimap<String,TablePermission> perms = ArrayListMultimap.create();
      byte[] entry = null;
      for (Cell kv : row) {
        if (entry == null) {
          entry = CellUtil.cloneRow(kv);
        }
        Pair<String,TablePermission> permissionsOfUserOnTable =
            parsePermissionRecord(entry, kv);
        if (permissionsOfUserOnTable != null) {
          String username = permissionsOfUserOnTable.getFirst();
          TablePermission permissions = permissionsOfUserOnTable.getSecond();
          perms.put(username, permissions);
        }
      }
      if (entry != null) {
        allPerms.put(entry, perms);
      }
      if (!hasNext) {
        break;
      }
    }
  } finally {
    if (iScanner != null) {
      iScanner.close();
    }
  }

  return allPerms;
}
 
Developer ID: fengchen8086, Project: ditb, Lines of code: 61, Source file: AccessControlLists.java



Note: The org.apache.hadoop.hbase.regionserver.Region class examples in this article were compiled from source-code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors, and distribution and use should follow the corresponding project's License. Do not reproduce without permission.

