
Java CachePoolInfo Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.CachePoolInfo. If you are wondering what CachePoolInfo is for, how to use it, or what real code that uses it looks like, the curated examples below should help.



The CachePoolInfo class belongs to the org.apache.hadoop.hdfs.protocol package. The sections below show 17 code examples of the class, sorted by popularity.
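Before the numbered examples, here is a minimal end-to-end usage sketch showing the client-side flow the test examples below rely on: build a CachePoolInfo (only the pool name is required; the setters are chainable), register it with addCachePool, then attach a CacheDirectiveInfo to the pool. The Configuration setup, the pool name, and the /data path are illustrative assumptions, not taken from the examples.

import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CacheFlag;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

public class CachePoolInfoSketch {
  public static void main(String[] args) throws Exception {
    // Assumes an HDFS configuration is on the classpath and the default
    // filesystem is HDFS, so the cast below succeeds.
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) new Path("/").getFileSystem(conf);

    // Only the pool name is mandatory; every other field is optional
    // and the setters return the CachePoolInfo for chaining.
    CachePoolInfo info = new CachePoolInfo("demo-pool")
        .setOwnerName("hdfs")
        .setGroupName("hadoop")
        .setMode(new FsPermission((short) 0755))
        .setLimit(64L * 1024 * 1024);   // 64 MB cache limit

    dfs.addCachePool(info);

    // Cache /data in the new pool, as the test examples below do.
    dfs.addCacheDirective(
        new CacheDirectiveInfo.Builder()
            .setPool("demo-pool")
            .setPath(new Path("/data"))
            .build(),
        EnumSet.of(CacheFlag.FORCE));
  }
}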

Example 1: readCachePoolInfo

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
public static CachePoolInfo readCachePoolInfo(DataInput in)
    throws IOException {
  String poolName = readString(in);
  CachePoolInfo info = new CachePoolInfo(poolName);
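  // Each bit of the flags word marks one optional field serialized after it.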
  int flags = readInt(in);
  if ((flags & 0x1) != 0) {
    info.setOwnerName(readString(in));
  }
  if ((flags & 0x2) != 0)  {
    info.setGroupName(readString(in));
  }
  if ((flags & 0x4) != 0) {
    info.setMode(FsPermission.read(in));
  }
  if ((flags & 0x8) != 0) {
    info.setLimit(readLong(in));
  }
  if ((flags & 0x10) != 0) {
    info.setMaxRelativeExpiryMs(readLong(in));
  }
  if ((flags & ~0x1F) != 0) {
    throw new IOException("Unknown flag in CachePoolInfo: " + flags);
  }
  return info;
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: FSImageSerialization.java
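
For context, here is a sketch of the mirror-image writer implied by the flag layout above. It is an assumption-laden reconstruction, not scraped source: the writeString/writeInt/writeLong helpers are assumed to be the FSImageSerialization counterparts of the readString/readInt/readLong calls in readCachePoolInfo, and FsPermission.write is assumed to mirror FsPermission.read.

// Sketch only: writeString/writeInt/writeLong are assumed counterparts of
// the reader helpers used in readCachePoolInfo above.
public static void writeCachePoolInfo(DataOutputStream out, CachePoolInfo info)
    throws IOException {
  writeString(info.getPoolName(), out);

  // Set one bit in the flags word for each optional field that is present.
  int flags =
      (info.getOwnerName() != null ? 0x1 : 0) |
      (info.getGroupName() != null ? 0x2 : 0) |
      (info.getMode() != null ? 0x4 : 0) |
      (info.getLimit() != null ? 0x8 : 0) |
      (info.getMaxRelativeExpiryMs() != null ? 0x10 : 0);
  writeInt(flags, out);

  if (info.getOwnerName() != null) {
    writeString(info.getOwnerName(), out);
  }
  if (info.getGroupName() != null) {
    writeString(info.getGroupName(), out);
  }
  if (info.getMode() != null) {
    info.getMode().write(out);   // FsPermission implements Writable
  }
  if (info.getLimit() != null) {
    writeLong(info.getLimit(), out);
  }
  if (info.getMaxRelativeExpiryMs() != null) {
    writeLong(info.getMaxRelativeExpiryMs(), out);
  }
}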


Example 2: addCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
void addCachePool(CachePoolInfo req, boolean logRetryCache)
    throws IOException {
  checkOperation(OperationCategory.WRITE);
  writeLock();
  boolean success = false;
  String poolInfoStr = null;
  try {
    checkOperation(OperationCategory.WRITE);
    if (isInSafeMode()) {
      throw new SafeModeException(
          "Cannot add cache pool " + req.getPoolName(), safeMode);
    }
    CachePoolInfo info = FSNDNCacheOp.addCachePool(this, cacheManager, req,
        logRetryCache);
    poolInfoStr = info.toString();
    success = true;
  } finally {
    writeUnlock();
    logAuditEvent(success, "addCachePool", poolInfoStr, null, null);
  }
  
  getEditLog().logSync();
}
 
Developer: naver, Project: hadoop, Lines: 24, Source: FSNamesystem.java


Example 3: modifyCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
void modifyCachePool(CachePoolInfo req, boolean logRetryCache)
    throws IOException {
  checkOperation(OperationCategory.WRITE);
  writeLock();
  boolean success = false;
  try {
    checkOperation(OperationCategory.WRITE);
    if (isInSafeMode()) {
      throw new SafeModeException(
          "Cannot modify cache pool " + req.getPoolName(), safeMode);
    }
    FSNDNCacheOp.modifyCachePool(this, cacheManager, req, logRetryCache);
    success = true;
  } finally {
    writeUnlock();
    String poolNameStr = "{poolName: " +
        (req == null ? null : req.getPoolName()) + "}";
    logAuditEvent(success, "modifyCachePool", poolNameStr,
                  req == null ? null : req.toString(), null);
  }

  getEditLog().logSync();
}
 
Developer: naver, Project: hadoop, Lines: 24, Source: FSNamesystem.java


Example 4: checkLimit

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
/**
 * Throws an exception if the CachePool does not have enough capacity to
 * cache the given path at the replication factor.
 *
 * @param pool CachePool where the path is being cached
 * @param path Path that is being cached
 * @param replication Replication factor of the path
 * @throws InvalidRequestException if the pool does not have enough capacity
 */
private void checkLimit(CachePool pool, String path,
    short replication) throws InvalidRequestException {
  CacheDirectiveStats stats = computeNeeded(path, replication);
  if (pool.getLimit() == CachePoolInfo.LIMIT_UNLIMITED) {
    return;
  }
  if (pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > pool
      .getLimit()) {
    throw new InvalidRequestException("Caching path " + path + " of size "
        + stats.getBytesNeeded() / replication + " bytes at replication "
        + replication + " would exceed pool " + pool.getPoolName()
        + "'s remaining capacity of "
        + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
  }
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: CacheManager.java
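
To make the arithmetic concrete, here is a hedged client-side illustration of this check: a pool capped at 1 MB cannot cache a roughly 1 MB file at replication 3, since about 3 MB would be needed. The dfs handle and the /big file are assumed test fixtures, and the InvalidRequestException is assumed to arrive unwrapped from the RemoteException on the client.

// Hypothetical fixtures: dfs is a DistributedFileSystem, /big is a ~1 MB file.
dfs.addCachePool(new CachePoolInfo("small").setLimit(1024L * 1024)); // 1 MB limit
try {
  dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
      .setPool("small")
      .setPath(new Path("/big"))
      .setReplication((short) 3)   // ~3 MB needed > 1 MB limit
      .build());
} catch (InvalidRequestException e) {
  // Expected: "... would exceed pool small's remaining capacity ..."
}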


Example 5: addCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
/**
 * Create a cache pool.
 * 
 * Only the superuser should be able to call this function.
 *
 * @param info    The info for the cache pool to create.
 * @return        Information about the cache pool we created.
 */
public CachePoolInfo addCachePool(CachePoolInfo info)
    throws IOException {
  assert namesystem.hasWriteLock();
  CachePool pool;
  try {
    CachePoolInfo.validate(info);
    String poolName = info.getPoolName();
    pool = cachePools.get(poolName);
    if (pool != null) {
      throw new InvalidRequestException("Cache pool " + poolName
          + " already exists.");
    }
    pool = CachePool.createFromInfoAndDefaults(info);
    cachePools.put(pool.getPoolName(), pool);
  } catch (IOException e) {
    LOG.info("addCachePool of " + info + " failed: ", e);
    throw e;
  }
  LOG.info("addCachePool of {} successful.", info);
  return pool.getInfo(true);
}
 
Developer: naver, Project: hadoop, Lines: 30, Source: CacheManager.java


Example 6: createFromInfoAndDefaults

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
/**
 * Create a new cache pool based on a CachePoolInfo object and the defaults.
 * We will fill in information that was not supplied according to the
 * defaults.
 */
static CachePool createFromInfoAndDefaults(CachePoolInfo info)
    throws IOException {
  UserGroupInformation ugi = null;
  String ownerName = info.getOwnerName();
  if (ownerName == null) {
    ugi = NameNode.getRemoteUser();
    ownerName = ugi.getShortUserName();
  }
  String groupName = info.getGroupName();
  if (groupName == null) {
    if (ugi == null) {
      ugi = NameNode.getRemoteUser();
    }
    groupName = ugi.getPrimaryGroupName();
  }
  FsPermission mode = (info.getMode() == null) ? 
      FsPermission.getCachePoolDefault() : info.getMode();
  long limit = info.getLimit() == null ?
      CachePoolInfo.DEFAULT_LIMIT : info.getLimit();
  long maxRelativeExpiry = info.getMaxRelativeExpiryMs() == null ?
      CachePoolInfo.DEFAULT_MAX_RELATIVE_EXPIRY :
      info.getMaxRelativeExpiryMs();
  return new CachePool(info.getPoolName(),
      ownerName, groupName, mode, limit, maxRelativeExpiry);
}
 
Developer: naver, Project: hadoop, Lines: 31, Source: CachePool.java


Example 7: convert

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
public static CachePoolInfoProto convert(CachePoolInfo info) {
  CachePoolInfoProto.Builder builder = CachePoolInfoProto.newBuilder();
  builder.setPoolName(info.getPoolName());
  if (info.getOwnerName() != null) {
    builder.setOwnerName(info.getOwnerName());
  }
  if (info.getGroupName() != null) {
    builder.setGroupName(info.getGroupName());
  }
  if (info.getMode() != null) {
    builder.setMode(info.getMode().toShort());
  }
  if (info.getLimit() != null) {
    builder.setLimit(info.getLimit());
  }
  if (info.getMaxRelativeExpiryMs() != null) {
    builder.setMaxRelativeExpiry(info.getMaxRelativeExpiryMs());
  }
  return builder.build();
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: PBHelper.java


Example 8: testListCachePools

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
/**
 * Add a list of cache pools, list cache pools,
 * switch active NN, and list cache pools again.
 */
@Test (timeout=60000)
public void testListCachePools() throws Exception {
  final int poolCount = 7;
  HashSet<String> poolNames = new HashSet<String>(poolCount);
  for (int i=0; i<poolCount; i++) {
    String poolName = "testListCachePools-" + i;
    dfs.addCachePool(new CachePoolInfo(poolName));
    poolNames.add(poolName);
  }
  listCachePools(poolNames, 0);

  cluster.transitionToStandby(0);
  cluster.transitionToActive(1);
  cluster.waitActive(1);
  listCachePools(poolNames, 1);
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: TestRetryCacheWithHA.java


Example 9: testListCacheDirectives

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
/**
 * Add a list of cache directives, list cache directives,
 * switch active NN, and list cache directives again.
 */
@Test (timeout=60000)
public void testListCacheDirectives() throws Exception {
  final int poolCount = 7;
  HashSet<String> poolNames = new HashSet<String>(poolCount);
  Path path = new Path("/p");
  for (int i=0; i<poolCount; i++) {
    String poolName = "testListCacheDirectives-" + i;
    CacheDirectiveInfo directiveInfo =
      new CacheDirectiveInfo.Builder().setPool(poolName).setPath(path).build();
    dfs.addCachePool(new CachePoolInfo(poolName));
    dfs.addCacheDirective(directiveInfo, EnumSet.of(CacheFlag.FORCE));
    poolNames.add(poolName);
  }
  listCacheDirectives(poolNames, 0);

  cluster.transitionToStandby(0);
  cluster.transitionToActive(1);
  cluster.waitActive(1);
  listCacheDirectives(poolNames, 1);
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: TestRetryCacheWithHA.java


Example 10: testExceedsCapacity

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
@Test(timeout=60000)
public void testExceedsCapacity() throws Exception {
  // Create a giant file
  final Path fileName = new Path("/exceeds");
  final long fileLen = CACHE_CAPACITY * (NUM_DATANODES*2);
  int numCachedReplicas = (int) ((CACHE_CAPACITY*NUM_DATANODES)/BLOCK_SIZE);
  DFSTestUtil.createFile(dfs, fileName, fileLen, (short) NUM_DATANODES,
      0xFADED);
  dfs.addCachePool(new CachePoolInfo("pool"));
  dfs.addCacheDirective(new CacheDirectiveInfo.Builder().setPool("pool")
      .setPath(fileName).setReplication((short) 1).build());
  waitForCachedBlocks(namenode, -1, numCachedReplicas,
      "testExceeds:1");
  checkPendingCachedEmpty(cluster);
  Thread.sleep(1000);
  checkPendingCachedEmpty(cluster);

  // Try creating a file with giant-sized blocks that exceed cache capacity
  dfs.delete(fileName, false);
  DFSTestUtil.createFile(dfs, fileName, 4096, fileLen, CACHE_CAPACITY * 2,
      (short) 1, 0xFADED);
  checkPendingCachedEmpty(cluster);
  Thread.sleep(1000);
  checkPendingCachedEmpty(cluster);
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: TestCacheDirectives.java


Example 11: convert

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
public static CachePoolInfo convert(CachePoolInfoProto proto) {
  // Pool name is a required field, the rest are optional
  String poolName = Preconditions.checkNotNull(proto.getPoolName());
  CachePoolInfo info = new CachePoolInfo(poolName);
  if (proto.hasOwnerName()) {
    info.setOwnerName(proto.getOwnerName());
  }
  if (proto.hasGroupName()) {
    info.setGroupName(proto.getGroupName());
  }
  if (proto.hasMode()) {
    info.setMode(new FsPermission((short)proto.getMode()));
  }
  if (proto.hasLimit())  {
    info.setLimit(proto.getLimit());
  }
  if (proto.hasMaxRelativeExpiry()) {
    info.setMaxRelativeExpiryMs(proto.getMaxRelativeExpiry());
  }
  return info;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 22, Source: PBHelperClient.java


Example 12: addCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
void addCachePool(CachePoolInfo req, boolean logRetryCache)
    throws IOException {
  writeLock();
  boolean success = false;
  String poolInfoStr = null;
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot add cache pool"
        + (req == null ? null : req.getPoolName()));
    CachePoolInfo info = FSNDNCacheOp.addCachePool(this, cacheManager, req,
        logRetryCache);
    poolInfoStr = info.toString();
    success = true;
  } finally {
    writeUnlock();
    logAuditEvent(success, "addCachePool", poolInfoStr, null, null);
  }
  
  getEditLog().logSync();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 21, Source: FSNamesystem.java


Example 13: modifyCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
void modifyCachePool(CachePoolInfo req, boolean logRetryCache)
    throws IOException {
  writeLock();
  boolean success = false;
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot modify cache pool"
        + (req == null ? null : req.getPoolName()));
    FSNDNCacheOp.modifyCachePool(this, cacheManager, req, logRetryCache);
    success = true;
  } finally {
    writeUnlock();
    String poolNameStr = "{poolName: " +
        (req == null ? null : req.getPoolName()) + "}";
    logAuditEvent(success, "modifyCachePool", poolNameStr,
                  req == null ? null : req.toString(), null);
  }

  getEditLog().logSync();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 21, Source: FSNamesystem.java


Example 14: addCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
@Override //ClientProtocol
public void addCachePool(CachePoolInfo info) throws IOException {
  checkNNStartup();
  namesystem.checkOperation(OperationCategory.WRITE);
  CacheEntry cacheEntry = RetryCache.waitForCompletion(retryCache);
  if (cacheEntry != null && cacheEntry.isSuccess()) {
    return; // Return previous response
  }
  boolean success = false;
  try {
    namesystem.addCachePool(info, cacheEntry != null);
    success = true;
  } finally {
    RetryCache.setState(cacheEntry, success);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 17, Source: NameNodeRpcServer.java


Example 15: modifyCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
@Override // ClientProtocol
public void modifyCachePool(CachePoolInfo info) throws IOException {
  checkNNStartup();
  namesystem.checkOperation(OperationCategory.WRITE);
  CacheEntry cacheEntry = RetryCache.waitForCompletion(retryCache);
  if (cacheEntry != null && cacheEntry.isSuccess()) {
    return; // Return previous response
  }
  boolean success = false;
  try {
    namesystem.modifyCachePool(info, cacheEntry != null);
    success = true;
  } finally {
    RetryCache.setState(cacheEntry, success);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 17, Source: NameNodeRpcServer.java


Example 16: testNoBackingReplica

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
@Test(timeout=60000)
public void testNoBackingReplica() throws Exception {
  // Cache all three replicas for a file.
  final Path filename = new Path("/noback");
  final short replication = (short) 3;
  DFSTestUtil.createFile(dfs, filename, 1, replication, 0x0BAC);
  dfs.addCachePool(new CachePoolInfo("pool"));
  dfs.addCacheDirective(
      new CacheDirectiveInfo.Builder().setPool("pool").setPath(filename)
          .setReplication(replication).build());
  waitForCachedBlocks(namenode, 1, replication, "testNoBackingReplica:1");
  // Pause cache reports while we change the replication factor.
  // This will orphan some cached replicas.
  DataNodeTestUtils.setCacheReportsDisabledForTests(cluster, true);
  try {
    dfs.setReplication(filename, (short) 1);
    DFSTestUtil.waitForReplication(dfs, filename, (short) 1, 30000);
    // The cache locations should drop down to 1 even without cache reports.
    waitForCachedBlocks(namenode, 1, (short) 1, "testNoBackingReplica:2");
  } finally {
    DataNodeTestUtils.setCacheReportsDisabledForTests(cluster, false);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 24, Source: TestCacheDirectives.java


Example 17: addCachePool

import org.apache.hadoop.hdfs.protocol.CachePoolInfo; // import the required package/class
public void addCachePool(CachePoolInfo info) throws IOException {
  checkOpen();
  TraceScope scope = Trace.startSpan("addCachePool", traceSampler);
  try {
    namenode.addCachePool(info);
  } catch (RemoteException re) {
    throw re.unwrapRemoteException();
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: DFSClient.java



Note: the org.apache.hadoop.hdfs.protocol.CachePoolInfo examples in this article were collected from open-source projects hosted on platforms such as GitHub and MSDocs. Copyright of the source code remains with the original authors; when redistributing or using these snippets, consult the corresponding project's license. Do not reproduce without permission.

