Java RpcRetryingCaller Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.client.RpcRetryingCaller. If you are wondering what RpcRetryingCaller is for, or how to use it in your own code, the curated class examples below may help.



The RpcRetryingCaller class belongs to the org.apache.hadoop.hbase.client package. Seven code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
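
Across all of the examples below, the class is used in the same basic way: build an RpcRetryingCallerFactory from a Configuration, obtain a typed RpcRetryingCaller via newCaller(), and hand it a RetryingCallable (typically a RegionServerCallable or ClientServiceCallable) together with an operation timeout. The following is a minimal sketch of that pattern; the callWithRetry helper name is hypothetical, while the factory and caller calls mirror those used in the examples (exact signatures can differ slightly between HBase versions).

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.RetryingCallable;
import org.apache.hadoop.hbase.client.RpcRetryingCaller;
import org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;

public class RpcRetryingCallerSketch {

  // Hypothetical helper: runs the given callable with the retry policy
  // configured in conf (retry count, pause between retries, and so on).
  static <T> T callWithRetry(Configuration conf, RetryingCallable<T> callable,
      int operationTimeoutMs) throws IOException {
    // The factory reads its retry settings from the supplied configuration.
    RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(conf);
    // Create a caller typed to the callable's return value.
    RpcRetryingCaller<T> caller = factory.<T> newCaller();
    // Execute the RPC; the caller retries transparently until the call succeeds,
    // the configured retries are exhausted, or the operation timeout is exceeded.
    return caller.callWithRetries(callable, operationTimeoutMs);
  }
}

In the test code below, the operation timeout is usually passed as Integer.MAX_VALUE (or omitted entirely in older client versions), so the number of attempts is governed by the retry settings in the configuration.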

Example 1: testNoBulkLoadsWithNoWrites

import org.apache.hadoop.hbase.client.RpcRetryingCaller; // import the required package/class
@Test(timeout=120000)
public void testNoBulkLoadsWithNoWrites() throws Exception {
  Put p = new Put(Bytes.toBytes("to_reject"));
  p.addColumn(
      Bytes.toBytes(SpaceQuotaHelperForTests.F1), Bytes.toBytes("to"), Bytes.toBytes("reject"));
  TableName tableName = writeUntilViolationAndVerifyViolation(SpaceViolationPolicy.NO_WRITES, p);

  // The table is now in violation. Try to do a bulk load
  ClientServiceCallable<Void> callable = generateFileToLoad(tableName, 1, 50);
  RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(TEST_UTIL.getConfiguration());
  RpcRetryingCaller<Void> caller = factory.<Void> newCaller();
  try {
    caller.callWithRetries(callable, Integer.MAX_VALUE);
    fail("Expected the bulk load call to fail!");
  } catch (SpaceLimitingException e) {
    // Pass
    LOG.trace("Caught expected exception", e);
  }
}
 
Developer: apache, Project: hbase, Lines: 20, Source file: TestSpaceQuotas.java
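
Note that callWithRetries is given Integer.MAX_VALUE as the operation timeout above, so how long the caller keeps retrying is determined by the client settings in the Configuration handed to the factory. As a rough illustration only (the newFastFailingFactory helper name is ours; the property names are standard HBase client keys, but verify the values against your HBase version), a test could tighten the retry behaviour like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;

static RpcRetryingCallerFactory newFastFailingFactory() {
  Configuration conf = HBaseConfiguration.create();
  // Maximum number of client-side retries (HConstants.HBASE_CLIENT_RETRIES_NUMBER).
  conf.setInt("hbase.client.retries.number", 3);
  // Base pause between retries, in milliseconds (HConstants.HBASE_CLIENT_PAUSE).
  conf.setInt("hbase.client.pause", 100);
  return new RpcRetryingCallerFactory(conf);
}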


Example 2: doAnAction

import org.apache.hadoop.hbase.client.RpcRetryingCaller; // import the required package/class
public void doAnAction() throws Exception {
  long iteration = numBulkLoads.getAndIncrement();
  Path dir =  UTIL.getDataTestDirOnTestFS(String.format("bulkLoad_%08d",
      iteration));

  // create HFiles for different column families
  FileSystem fs = UTIL.getTestFileSystem();
  byte[] val = Bytes.toBytes(String.format("%010d", iteration));
  final List<Pair<byte[], String>> famPaths = new ArrayList<Pair<byte[], String>>(
      NUM_CFS);
  for (int i = 0; i < NUM_CFS; i++) {
    Path hfile = new Path(dir, family(i));
    byte[] fam = Bytes.toBytes(family(i));
    createHFile(fs, hfile, fam, QUAL, val, 1000);
    famPaths.add(new Pair<byte[], String>(fam, hfile.toString()));
  }

  // bulk load HFiles
  final HConnection conn = UTIL.getHBaseAdmin().getConnection();
  RegionServerCallable<Void> callable =
      new RegionServerCallable<Void>(conn, tableName, Bytes.toBytes("aaa")) {
    @Override
    public Void call(int callTimeout) throws Exception {
      LOG.debug("Going to connect to server " + getLocation() + " for row "
          + Bytes.toStringBinary(getRow()));
      byte[] regionName = getLocation().getRegionInfo().getRegionName();
      BulkLoadHFileRequest request =
        RequestConverter.buildBulkLoadHFileRequest(famPaths, regionName, true);
      getStub().bulkLoadHFile(null, request);
      return null;
    }
  };
  RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(conf);
  RpcRetryingCaller<Void> caller = factory.<Void> newCaller();
  caller.callWithRetries(callable, Integer.MAX_VALUE);

  // Periodically do compaction to reduce the number of open file handles.
  if (numBulkLoads.get() % 10 == 0) {
    // 10 * 50 = 500 open file handles!
    callable = new RegionServerCallable<Void>(conn, tableName, Bytes.toBytes("aaa")) {
      @Override
      public Void call(int callTimeout) throws Exception {
        LOG.debug("compacting " + getLocation() + " for row "
            + Bytes.toStringBinary(getRow()));
        AdminProtos.AdminService.BlockingInterface server =
          conn.getAdmin(getLocation().getServerName());
        CompactRegionRequest request =
          RequestConverter.buildCompactRegionRequest(
            getLocation().getRegionInfo().getRegionName(), true, null);
        server.compactRegion(null, request);
        numCompactions.incrementAndGet();
        return null;
      }
    };
    caller.callWithRetries(callable, Integer.MAX_VALUE);
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 58, Source file: TestHRegionServerBulkLoad.java


Example 3: doAnAction

import org.apache.hadoop.hbase.client.RpcRetryingCaller; // import the required package/class
public void doAnAction() throws Exception {
  long iteration = numBulkLoads.getAndIncrement();
  Path dir =  UTIL.getDataTestDirOnTestFS(String.format("bulkLoad_%08d",
      iteration));

  // create HFiles for different column families
  FileSystem fs = UTIL.getTestFileSystem();
  byte[] val = Bytes.toBytes(String.format("%010d", iteration));
  final List<Pair<byte[], String>> famPaths = new ArrayList<Pair<byte[], String>>(
      NUM_CFS);
  for (int i = 0; i < NUM_CFS; i++) {
    Path hfile = new Path(dir, family(i));
    byte[] fam = Bytes.toBytes(family(i));
    createHFile(fs, hfile, fam, QUAL, val, 1000);
    famPaths.add(new Pair<byte[], String>(fam, hfile.toString()));
  }

  // bulk load HFiles
  final HConnection conn = UTIL.getHBaseAdmin().getConnection();
  TableName tbl = TableName.valueOf(tableName);
  RegionServerCallable<Void> callable =
      new RegionServerCallable<Void>(conn, tbl, Bytes.toBytes("aaa")) {
    @Override
    public Void call() throws Exception {
      LOG.debug("Going to connect to server " + getLocation() + " for row "
          + Bytes.toStringBinary(getRow()));
      byte[] regionName = getLocation().getRegionInfo().getRegionName();
      BulkLoadHFileRequest request =
        RequestConverter.buildBulkLoadHFileRequest(famPaths, regionName, true);
      getStub().bulkLoadHFile(null, request);
      return null;
    }
  };
  RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(conf);
  RpcRetryingCaller<Void> caller = factory.<Void> newCaller();
  caller.callWithRetries(callable);

  // Periodically do compaction to reduce the number of open file handles.
  if (numBulkLoads.get() % 10 == 0) {
    // 10 * 50 = 500 open file handles!
    callable = new RegionServerCallable<Void>(conn, tbl, Bytes.toBytes("aaa")) {
      @Override
      public Void call() throws Exception {
        LOG.debug("compacting " + getLocation() + " for row "
            + Bytes.toStringBinary(getRow()));
        AdminProtos.AdminService.BlockingInterface server =
          conn.getAdmin(getLocation().getServerName());
        CompactRegionRequest request =
          RequestConverter.buildCompactRegionRequest(
            getLocation().getRegionInfo().getRegionName(), true, null);
        server.compactRegion(null, request);
        numCompactions.incrementAndGet();
        return null;
      }
    };
    caller.callWithRetries(callable);
  }
}
 
Developer: tenggyut, Project: HIndex, Lines: 59, Source file: TestHRegionServerBulkLoad.java


Example 4: doAnAction

import org.apache.hadoop.hbase.client.RpcRetryingCaller; // import the required package/class
public void doAnAction() throws Exception {
  long iteration = numBulkLoads.getAndIncrement();
  Path dir =  UTIL.getDataTestDirOnTestFS(String.format("bulkLoad_%08d",
      iteration));

  // create HFiles for different column families
  FileSystem fs = UTIL.getTestFileSystem();
  byte[] val = Bytes.toBytes(String.format("%010d", iteration));
  final List<Pair<byte[], String>> famPaths = new ArrayList<>(NUM_CFS);
  for (int i = 0; i < NUM_CFS; i++) {
    Path hfile = new Path(dir, family(i));
    byte[] fam = Bytes.toBytes(family(i));
    createHFile(fs, hfile, fam, QUAL, val, 1000);
    famPaths.add(new Pair<>(fam, hfile.toString()));
  }

  // bulk load HFiles
  final ClusterConnection conn = (ClusterConnection) UTIL.getAdmin().getConnection();
  Table table = conn.getTable(tableName);
  final String bulkToken = new SecureBulkLoadEndpointClient(table).prepareBulkLoad(tableName);
  RpcControllerFactory rpcControllerFactory = new RpcControllerFactory(UTIL.getConfiguration());
  ClientServiceCallable<Void> callable =
      new ClientServiceCallable<Void>(conn, tableName, Bytes.toBytes("aaa"),
          rpcControllerFactory.newController(), HConstants.PRIORITY_UNSET) {
        @Override
        protected Void rpcCall() throws Exception {
          LOG.debug("Going to connect to server " + getLocation() + " for row " +
              Bytes.toStringBinary(getRow()));
          try (Table table = conn.getTable(getTableName())) {
            boolean loaded = new SecureBulkLoadEndpointClient(table).bulkLoadHFiles(famPaths,
                null, bulkToken, getLocation().getRegionInfo().getStartKey());
          }
          return null;
        }
      };
  RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(conf);
  RpcRetryingCaller<Void> caller = factory.<Void> newCaller();
  caller.callWithRetries(callable, Integer.MAX_VALUE);

  // Periodically do compaction to reduce the number of open file handles.
  if (numBulkLoads.get() % 5 == 0) {
    // 5 * 50 = 250 open file handles!
    callable = new ClientServiceCallable<Void>(conn, tableName, Bytes.toBytes("aaa"),
        rpcControllerFactory.newController(), HConstants.PRIORITY_UNSET) {
      @Override
      protected Void rpcCall() throws Exception {
        LOG.debug("compacting " + getLocation() + " for row "
            + Bytes.toStringBinary(getRow()));
        AdminProtos.AdminService.BlockingInterface server =
          conn.getAdmin(getLocation().getServerName());
        CompactRegionRequest request =
          RequestConverter.buildCompactRegionRequest(
            getLocation().getRegionInfo().getRegionName(), true, null);
        server.compactRegion(null, request);
        numCompactions.incrementAndGet();
        return null;
      }
    };
    caller.callWithRetries(callable, Integer.MAX_VALUE);
  }
}
 
Developer: apache, Project: hbase, Lines: 62, Source file: TestHRegionServerBulkLoadWithOldSecureEndpoint.java


Example 5: doAnAction

import org.apache.hadoop.hbase.client.RpcRetryingCaller; // import the required package/class
@Override
public void doAnAction() throws Exception {
  long iteration = numBulkLoads.getAndIncrement();
  Path dir =  UTIL.getDataTestDirOnTestFS(String.format("bulkLoad_%08d",
      iteration));

  // create HFiles for different column families
  FileSystem fs = UTIL.getTestFileSystem();
  byte[] val = Bytes.toBytes(String.format("%010d", iteration));
  final List<Pair<byte[], String>> famPaths = new ArrayList<>(NUM_CFS);
  for (int i = 0; i < NUM_CFS; i++) {
    Path hfile = new Path(dir, family(i));
    byte[] fam = Bytes.toBytes(family(i));
    createHFile(fs, hfile, fam, QUAL, val, 1000);
    famPaths.add(new Pair<>(fam, hfile.toString()));
  }

  // bulk load HFiles
  final ClusterConnection conn = (ClusterConnection)UTIL.getConnection();
  Table table = conn.getTable(tableName);
  final String bulkToken = new SecureBulkLoadClient(UTIL.getConfiguration(), table).
      prepareBulkLoad(conn);
  ClientServiceCallable<Void> callable = new ClientServiceCallable<Void>(conn,
      tableName, Bytes.toBytes("aaa"),
      new RpcControllerFactory(UTIL.getConfiguration()).newController(), HConstants.PRIORITY_UNSET) {
    @Override
    public Void rpcCall() throws Exception {
      LOG.debug("Going to connect to server " + getLocation() + " for row "
          + Bytes.toStringBinary(getRow()));
      SecureBulkLoadClient secureClient = null;
      byte[] regionName = getLocation().getRegionInfo().getRegionName();
      try (Table table = conn.getTable(getTableName())) {
        secureClient = new SecureBulkLoadClient(UTIL.getConfiguration(), table);
        secureClient.secureBulkLoadHFiles(getStub(), famPaths, regionName,
              true, null, bulkToken);
      }
      return null;
    }
  };
  RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(conf);
  RpcRetryingCaller<Void> caller = factory.<Void> newCaller();
  caller.callWithRetries(callable, Integer.MAX_VALUE);

  // Periodically do compaction to reduce the number of open file handles.
  if (numBulkLoads.get() % 5 == 0) {
    // 5 * 50 = 250 open file handles!
    callable = new ClientServiceCallable<Void>(conn,
        tableName, Bytes.toBytes("aaa"),
        new RpcControllerFactory(UTIL.getConfiguration()).newController(), HConstants.PRIORITY_UNSET) {
      @Override
      protected Void rpcCall() throws Exception {
        LOG.debug("compacting " + getLocation() + " for row "
            + Bytes.toStringBinary(getRow()));
        AdminProtos.AdminService.BlockingInterface server =
          conn.getAdmin(getLocation().getServerName());
        CompactRegionRequest request = RequestConverter.buildCompactRegionRequest(
            getLocation().getRegionInfo().getRegionName(), true, null);
        server.compactRegion(null, request);
        numCompactions.incrementAndGet();
        return null;
      }
    };
    caller.callWithRetries(callable, Integer.MAX_VALUE);
  }
}
 
Developer: apache, Project: hbase, Lines: 66, Source file: TestHRegionServerBulkLoad.java


Example 6: doAnAction

import org.apache.hadoop.hbase.client.RpcRetryingCaller; // import the required package/class
@Override
public void doAnAction() throws Exception {
  long iteration = numBulkLoads.getAndIncrement();
  Path dir =  UTIL.getDataTestDirOnTestFS(String.format("bulkLoad_%08d",
      iteration));

  // create HFiles for different column families
  FileSystem fs = UTIL.getTestFileSystem();
  byte[] val = Bytes.toBytes(String.format("%010d", iteration));
  final List<Pair<byte[], String>> famPaths = new ArrayList<>(NUM_CFS);
  for (int i = 0; i < NUM_CFS; i++) {
    Path hfile = new Path(dir, family(i));
    byte[] fam = Bytes.toBytes(family(i));
    createHFile(fs, hfile, fam, QUAL, val, 1000);
    famPaths.add(new Pair<>(fam, hfile.toString()));
  }

  // bulk load HFiles
  final ClusterConnection conn = (ClusterConnection) UTIL.getAdmin().getConnection();
  RpcControllerFactory rpcControllerFactory = new RpcControllerFactory(UTIL.getConfiguration());
  ClientServiceCallable<Void> callable =
      new ClientServiceCallable<Void>(conn, tableName,
          Bytes.toBytes("aaa"), rpcControllerFactory.newController(), HConstants.PRIORITY_UNSET) {
    @Override
    protected Void rpcCall() throws Exception {
      LOG.info("Non-secure old client");
      byte[] regionName = getLocation().getRegionInfo().getRegionName();
      BulkLoadHFileRequest request =
        RequestConverter.buildBulkLoadHFileRequest(famPaths, regionName, true, null, null);
      getStub().bulkLoadHFile(null, request);
      return null;
  };
  RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(conf);
  RpcRetryingCaller<Void> caller = factory.<Void> newCaller();
  caller.callWithRetries(callable, Integer.MAX_VALUE);

  // Periodically do compaction to reduce the number of open file handles.
  if (numBulkLoads.get() % 5 == 0) {
    // 5 * 50 = 250 open file handles!
    callable = new ClientServiceCallable<Void>(conn, tableName,
        Bytes.toBytes("aaa"), rpcControllerFactory.newController(), HConstants.PRIORITY_UNSET) {
      @Override
      protected Void rpcCall() throws Exception {
        LOG.debug("compacting " + getLocation() + " for row "
            + Bytes.toStringBinary(getRow()));
        AdminProtos.AdminService.BlockingInterface server =
          conn.getAdmin(getLocation().getServerName());
        CompactRegionRequest request =
          RequestConverter.buildCompactRegionRequest(
            getLocation().getRegionInfo().getRegionName(), true, null);
        server.compactRegion(null, request);
        numCompactions.incrementAndGet();
        return null;
      }
    };
    caller.callWithRetries(callable, Integer.MAX_VALUE);
  }
}
 
Developer: apache, Project: hbase, Lines: 61, Source file: TestHRegionServerBulkLoadWithOldClient.java


Example 7: doAnAction

import org.apache.hadoop.hbase.client.RpcRetryingCaller; // import the required package/class
public void doAnAction() throws Exception {
  long iteration = numBulkLoads.getAndIncrement();
  Path dir =  UTIL.getDataTestDirOnTestFS(String.format("bulkLoad_%08d",
      iteration));

  // create HFiles for different column families
  FileSystem fs = UTIL.getTestFileSystem();
  byte[] val = Bytes.toBytes(String.format("%010d", iteration));
  final List<Pair<byte[], String>> famPaths = new ArrayList<Pair<byte[], String>>(
      NUM_CFS);
  for (int i = 0; i < NUM_CFS; i++) {
    Path hfile = new Path(dir, family(i));
    byte[] fam = Bytes.toBytes(family(i));
    createHFile(fs, hfile, fam, QUAL, val, 1000);
    famPaths.add(new Pair<byte[], String>(fam, hfile.toString()));
  }

  // bulk load HFiles
  final HConnection conn = UTIL.getHBaseAdmin().getConnection();
  TableName tbl = TableName.valueOf(tableName);
  RegionServerCallable<Void> callable =
      new RegionServerCallable<Void>(conn, tbl, Bytes.toBytes("aaa")) {
    @Override
    public Void call(int callTimeout) throws Exception {
      LOG.debug("Going to connect to server " + getLocation() + " for row "
          + Bytes.toStringBinary(getRow()));
      byte[] regionName = getLocation().getRegionInfo().getRegionName();
      BulkLoadHFileRequest request =
        RequestConverter.buildBulkLoadHFileRequest(famPaths, regionName, true);
      getStub().bulkLoadHFile(null, request);
      return null;
    }
  };
  RpcRetryingCallerFactory factory = new RpcRetryingCallerFactory(conf);
  RpcRetryingCaller<Void> caller = factory.<Void> newCaller();
  caller.callWithRetries(callable, Integer.MAX_VALUE);

  // Periodically do compaction to reduce the number of open file handles.
  if (numBulkLoads.get() % 10 == 0) {
    // 10 * 50 = 500 open file handles!
    callable = new RegionServerCallable<Void>(conn, tbl, Bytes.toBytes("aaa")) {
      @Override
      public Void call(int callTimeout) throws Exception {
        LOG.debug("compacting " + getLocation() + " for row "
            + Bytes.toStringBinary(getRow()));
        AdminProtos.AdminService.BlockingInterface server =
          conn.getAdmin(getLocation().getServerName());
        CompactRegionRequest request =
          RequestConverter.buildCompactRegionRequest(
            getLocation().getRegionInfo().getRegionName(), true, null);
        server.compactRegion(null, request);
        numCompactions.incrementAndGet();
        return null;
      }
    };
    caller.callWithRetries(callable, Integer.MAX_VALUE);
  }
}
 
Developer: shenli-uiuc, Project: PyroDB, Lines: 59, Source file: TestHRegionServerBulkLoad.java



Note: the org.apache.hadoop.hbase.client.RpcRetryingCaller examples in this article are collected from source code and documentation hosted on GitHub, MSDocs, and similar platforms. The snippets were selected from open-source projects contributed by their respective developers, and copyright remains with the original authors. Please consult each project's license before using or redistributing the code, and do not republish without permission.

