
Java NumberReplicas Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas. If you are wondering what NumberReplicas is for, how it works, or how to use it, the curated code examples below should help.



The NumberReplicas class belongs to the org.apache.hadoop.hdfs.server.blockmanagement package. Eleven code examples of the class are presented below, sorted by popularity by default.
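Before the examples: NumberReplicas is essentially a read-only holder of per-category replica counts that BlockManager.countNodes(...) fills in. The following is a minimal, self-contained mock (not the real Hadoop class) that sketches the accessor API the examples below rely on:

```java
// Hypothetical stand-in for org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas.
// It only illustrates the API shape used in the examples below; in real HDFS the
// counts are computed by BlockManager.countNodes(...) from block reports.
public class NumberReplicasSketch {
    private final int live, excess, corrupt, decommissioned, staleNodes;

    public NumberReplicasSketch(int live, int excess, int corrupt,
                                int decommissioned, int staleNodes) {
        this.live = live;
        this.excess = excess;
        this.corrupt = corrupt;
        this.decommissioned = decommissioned;
        this.staleNodes = staleNodes;
    }

    // Read-only accessors, mirroring the names used throughout the examples.
    public int liveReplicas()           { return live; }
    public int excessReplicas()         { return excess; }
    public int corruptReplicas()        { return corrupt; }
    public int decommissionedReplicas() { return decommissioned; }
    public int replicasOnStaleNodes()   { return staleNodes; }

    public static void main(String[] args) {
        // A healthy block with replication factor 3 and no anomalies.
        NumberReplicasSketch n = new NumberReplicasSketch(3, 0, 0, 0, 0);
        System.out.println(n.liveReplicas());    // prints 3
        System.out.println(n.corruptReplicas()); // prints 0
    }
}
```

The tests below assert exactly these counts; the real class has additional categories (e.g. decommissioning, on newer Hadoop versions) that the later examples exercise.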

Example 1: validateNumberReplicas

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
private void validateNumberReplicas(int expectedReplicas) throws IOException {
  NumberReplicas numberReplicas = blockManager.countNodes(block);
  assertThat(numberReplicas.liveReplicas(), is(expectedReplicas));
  assertThat(numberReplicas.excessReplicas(), is(0));
  assertThat(numberReplicas.corruptReplicas(), is(0));
  assertThat(numberReplicas.decommissionedReplicas(), is(0));
  assertThat(numberReplicas.replicasOnStaleNodes(), is(0));
  
  BlockManagerTestUtil.updateState(blockManager);
  assertThat(blockManager.getUnderReplicatedBlocksCount(), is(0L));
  assertThat(blockManager.getExcessBlocksCount(), is(0L));
}
 
Developer: naver, Project: hadoop, Lines of code: 13, Source file: TestReadOnlySharedStorage.java


Example 2: testNormalReplicaOffline

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
/**
 * Verify that the NameNode is able to still use <tt>READ_ONLY_SHARED</tt> replicas even 
 * when the single NORMAL replica is offline (and the effective replication count is 0).
 */
@Test
public void testNormalReplicaOffline() throws Exception {
  // Stop the datanode hosting the NORMAL replica
  cluster.stopDataNode(normalDataNode.getXferAddr());
  
  // Force NameNode to detect that the datanode is down
  BlockManagerTestUtil.noticeDeadDatanode(
      cluster.getNameNode(), normalDataNode.getXferAddr());
  
  // The live replica count should now be zero (since the NORMAL replica is offline)
  NumberReplicas numberReplicas = blockManager.countNodes(block);
  assertThat(numberReplicas.liveReplicas(), is(0));
  
  // The block should be reported as under-replicated
  BlockManagerTestUtil.updateState(blockManager);
  assertThat(blockManager.getUnderReplicatedBlocksCount(), is(1L));
  
  // The BlockManager should be able to heal the replication count back to 1
  // by triggering an inter-datanode replication from one of the READ_ONLY_SHARED replicas
  BlockManagerTestUtil.computeAllPendingWork(blockManager);
  
  DFSTestUtil.waitForReplication(cluster, extendedBlock, 1, 1, 0);
  
  // There should now be 2 *locations* for the block, and 1 *replica*
  assertThat(getLocatedBlock().getLocations().length, is(2));
  validateNumberReplicas(1);
}
 
Developer: naver, Project: hadoop, Lines of code: 32, Source file: TestReadOnlySharedStorage.java


Example 3: testReadOnlyReplicaCorrupt

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
/**
 * Verify that corrupt <tt>READ_ONLY_SHARED</tt> replicas aren't counted 
 * towards the corrupt replicas total.
 */
@Test
public void testReadOnlyReplicaCorrupt() throws Exception {
  // "Corrupt" a READ_ONLY_SHARED replica by reporting it as a bad replica
  client.reportBadBlocks(new LocatedBlock[] { 
      new LocatedBlock(extendedBlock, new DatanodeInfo[] { readOnlyDataNode })
  });

  // There should now be only 1 *location* for the block as the READ_ONLY_SHARED is corrupt
  waitForLocations(1);
  
  // However, the corrupt READ_ONLY_SHARED replica should *not* affect the overall corrupt replicas count
  NumberReplicas numberReplicas = blockManager.countNodes(block);
  assertThat(numberReplicas.corruptReplicas(), is(0));
}
 
Developer: naver, Project: hadoop, Lines of code: 19, Source file: TestReadOnlySharedStorage.java


Example 4: validateNumberReplicas

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
private void validateNumberReplicas(int expectedReplicas) throws IOException {
  NumberReplicas numberReplicas = blockManager.countNodes(storedBlock);
  assertThat(numberReplicas.liveReplicas(), is(expectedReplicas));
  assertThat(numberReplicas.excessReplicas(), is(0));
  assertThat(numberReplicas.corruptReplicas(), is(0));
  assertThat(numberReplicas.decommissionedAndDecommissioning(), is(0));
  assertThat(numberReplicas.replicasOnStaleNodes(), is(0));
  
  BlockManagerTestUtil.updateState(blockManager);
  assertThat(blockManager.getUnderReplicatedBlocksCount(), is(0L));
  assertThat(blockManager.getExcessBlocksCount(), is(0L));
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 13, Source file: TestReadOnlySharedStorage.java


Example 5: testNormalReplicaOffline

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
/**
 * Verify that the NameNode is able to still use <tt>READ_ONLY_SHARED</tt> replicas even 
 * when the single NORMAL replica is offline (and the effective replication count is 0).
 */
@Test
public void testNormalReplicaOffline() throws Exception {
  // Stop the datanode hosting the NORMAL replica
  cluster.stopDataNode(normalDataNode.getXferAddr());
  
  // Force NameNode to detect that the datanode is down
  BlockManagerTestUtil.noticeDeadDatanode(
      cluster.getNameNode(), normalDataNode.getXferAddr());
  
  // The live replica count should now be zero (since the NORMAL replica is offline)
  NumberReplicas numberReplicas = blockManager.countNodes(storedBlock);
  assertThat(numberReplicas.liveReplicas(), is(0));
  
  // The block should be reported as under-replicated
  BlockManagerTestUtil.updateState(blockManager);
  assertThat(blockManager.getUnderReplicatedBlocksCount(), is(1L));
  
  // The BlockManager should be able to heal the replication count back to 1
  // by triggering an inter-datanode replication from one of the READ_ONLY_SHARED replicas
  BlockManagerTestUtil.computeAllPendingWork(blockManager);
  
  DFSTestUtil.waitForReplication(cluster, extendedBlock, 1, 1, 0);
  
  // There should now be 2 *locations* for the block, and 1 *replica*
  assertThat(getLocatedBlock().getLocations().length, is(2));
  validateNumberReplicas(1);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 32, Source file: TestReadOnlySharedStorage.java


Example 6: testReadOnlyReplicaCorrupt

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
/**
 * Verify that corrupt <tt>READ_ONLY_SHARED</tt> replicas aren't counted 
 * towards the corrupt replicas total.
 */
@Test
public void testReadOnlyReplicaCorrupt() throws Exception {
  // "Corrupt" a READ_ONLY_SHARED replica by reporting it as a bad replica
  client.reportBadBlocks(new LocatedBlock[] { 
      new LocatedBlock(extendedBlock, new DatanodeInfo[] { readOnlyDataNode })
  });

  // There should now be only 1 *location* for the block as the READ_ONLY_SHARED is corrupt
  waitForLocations(1);
  
  // However, the corrupt READ_ONLY_SHARED replica should *not* affect the overall corrupt replicas count
  NumberReplicas numberReplicas = blockManager.countNodes(storedBlock);
  assertThat(numberReplicas.corruptReplicas(), is(0));
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 19, Source file: TestReadOnlySharedStorage.java


Example 7: countReplicas

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
private static NumberReplicas countReplicas(final FSNamesystem namesystem,
    final ExtendedBlock block) throws IOException {
  return (NumberReplicas) new HopsTransactionalRequestHandler(
      HDFSOperationType.COUNT_NODES) {
    INodeIdentifier inodeIdentifier;

    @Override
    public void setUp() throws StorageException, IOException {
      inodeIdentifier =
          INodeUtil.resolveINodeFromBlock(block.getLocalBlock());
    }

    @Override
    public void acquireLock(TransactionLocks locks) throws IOException {
      LockFactory lf = LockFactory.getInstance();
      locks
          .add(lf.getIndividualBlockLock(block.getBlockId(), inodeIdentifier))
          .add(lf.getBlockRelated(LockFactory.BLK.RE, LockFactory.BLK.ER,
              LockFactory.BLK.CR));
    }

    @Override
    public Object performTask() throws StorageException, IOException {
      return namesystem.getBlockManager().countNodes(block.getLocalBlock());
    }

  }.handle(namesystem);
}
 
Developer: hopshadoop, Project: hops, Lines of code: 29, Source file: TestProcessCorruptBlocks.java
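Example 7 wraps countNodes in an anonymous subclass that overrides setUp, acquireLock, and performTask, which its framework then runs in order: this is the template-method pattern. The following self-contained sketch (an illustrative mock, not the real Hops HopsTransactionalRequestHandler API) shows the control flow:

```java
// Simplified sketch of the template-method pattern used by the Hops
// transactional handler in Example 7: subclasses supply the three steps,
// handle() fixes the order in which they run. Names mirror the example
// but this is a mock, not the real Hops API.
public abstract class TransactionalHandlerSketch {
    protected abstract void setUp() throws Exception;       // resolve entities the transaction touches
    protected abstract void acquireLock() throws Exception; // declare/take the locks the task needs
    protected abstract Object performTask() throws Exception;

    public final Object handle() throws Exception {
        setUp();
        acquireLock();
        // A real implementation would also begin/commit a transaction and
        // release locks in a finally block.
        return performTask();
    }

    public static void main(String[] args) throws Exception {
        Object result = new TransactionalHandlerSketch() {
            @Override protected void setUp()       { /* e.g. resolve the inode from the block */ }
            @Override protected void acquireLock() { /* e.g. lock the block's row */ }
            @Override protected Object performTask() { return 42; }
        }.handle();
        System.out.println(result); // prints 42
    }
}
```

The benefit of this shape is that locking and transaction management live in one place (handle), while each call site only states what it needs locked and what work to perform.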


Example 8: blockIdCK

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
/**
 * Check block information given a blockId number.
 */
public void blockIdCK(String blockId) {

  if(blockId == null) {
    out.println("Please provide valid blockId!");
    return;
  }

  BlockManager bm = namenode.getNamesystem().getBlockManager();
  try {
    //get blockInfo
    Block block = new Block(Block.getBlockId(blockId));
    //find which file this block belongs to
    BlockInfoContiguous blockInfo = bm.getStoredBlock(block);
    if(blockInfo == null) {
      out.println("Block "+ blockId +" " + NONEXISTENT_STATUS);
      LOG.warn("Block "+ blockId + " " + NONEXISTENT_STATUS);
      return;
    }
    BlockCollection bc = bm.getBlockCollection(blockInfo);
    INode iNode = (INode) bc;
    NumberReplicas numberReplicas= bm.countNodes(block);
    out.println("Block Id: " + blockId);
    out.println("Block belongs to: "+iNode.getFullPathName());
    out.println("No. of Expected Replica: " + bc.getBlockReplication());
    out.println("No. of live Replica: " + numberReplicas.liveReplicas());
    out.println("No. of excess Replica: " + numberReplicas.excessReplicas());
    out.println("No. of stale Replica: " + numberReplicas.replicasOnStaleNodes());
    out.println("No. of decommission Replica: "
        + numberReplicas.decommissionedReplicas());
    out.println("No. of corrupted Replica: " + numberReplicas.corruptReplicas());
    //record datanodes that have corrupted block replica
    Collection<DatanodeDescriptor> corruptionRecord = null;
    if (bm.getCorruptReplicas(block) != null) {
      corruptionRecord = bm.getCorruptReplicas(block);
    }

    //report block replicas status on datanodes
    for(int idx = (blockInfo.numNodes()-1); idx >= 0; idx--) {
      DatanodeDescriptor dn = blockInfo.getDatanode(idx);
      out.print("Block replica on datanode/rack: " + dn.getHostName() +
          dn.getNetworkLocation() + " ");
      if (corruptionRecord != null && corruptionRecord.contains(dn)) {
        out.print(CORRUPT_STATUS+"\t ReasonCode: "+
          bm.getCorruptReason(block,dn));
      } else if (dn.isDecommissioned() ){
        out.print(DECOMMISSIONED_STATUS);
      } else if (dn.isDecommissionInProgress()) {
        out.print(DECOMMISSIONING_STATUS);
      } else {
        out.print(HEALTHY_STATUS);
      }
      out.print("\n");
    }
  } catch (Exception e){
    String errMsg = "Fsck on blockId '" + blockId;
    LOG.warn(errMsg, e);
    out.println(e.getMessage());
    out.print("\n\n" + errMsg);
    LOG.warn("Error in looking up block", e);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 66, Source file: NamenodeFsck.java


Example 9: countReplicas

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
private static NumberReplicas countReplicas(final FSNamesystem namesystem, ExtendedBlock block) {
  return namesystem.getBlockManager().countNodes(block.getLocalBlock());
}
 
Developer: naver, Project: hadoop, Lines of code: 4, Source file: TestProcessCorruptBlocks.java


Example 10: blockIdCK

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
/**
 * Check block information given a blockId number.
 */
public void blockIdCK(String blockId) {

  if(blockId == null) {
    out.println("Please provide valid blockId!");
    return;
  }

  try {
    //get blockInfo
    Block block = new Block(Block.getBlockId(blockId));
    //find which file this block belongs to
    BlockInfo blockInfo = blockManager.getStoredBlock(block);
    if(blockInfo == null) {
      out.println("Block "+ blockId +" " + NONEXISTENT_STATUS);
      LOG.warn("Block "+ blockId + " " + NONEXISTENT_STATUS);
      return;
    }
    final INodeFile iNode = namenode.getNamesystem().getBlockCollection(blockInfo);
    NumberReplicas numberReplicas= blockManager.countNodes(blockInfo);
    out.println("Block Id: " + blockId);
    out.println("Block belongs to: "+iNode.getFullPathName());
    out.println("No. of Expected Replica: " +
        blockManager.getExpectedReplicaNum(blockInfo));
    out.println("No. of live Replica: " + numberReplicas.liveReplicas());
    out.println("No. of excess Replica: " + numberReplicas.excessReplicas());
    out.println("No. of stale Replica: " +
        numberReplicas.replicasOnStaleNodes());
    out.println("No. of decommissioned Replica: "
        + numberReplicas.decommissioned());
    out.println("No. of decommissioning Replica: "
        + numberReplicas.decommissioning());
    out.println("No. of corrupted Replica: " +
        numberReplicas.corruptReplicas());
    //record datanodes that have corrupted block replica
    Collection<DatanodeDescriptor> corruptionRecord = null;
    if (blockManager.getCorruptReplicas(block) != null) {
      corruptionRecord = blockManager.getCorruptReplicas(block);
    }

    //report block replicas status on datanodes
    for(int idx = (blockInfo.numNodes()-1); idx >= 0; idx--) {
      DatanodeDescriptor dn = blockInfo.getDatanode(idx);
      out.print("Block replica on datanode/rack: " + dn.getHostName() +
          dn.getNetworkLocation() + " ");
      if (corruptionRecord != null && corruptionRecord.contains(dn)) {
        out.print(CORRUPT_STATUS + "\t ReasonCode: " +
            blockManager.getCorruptReason(block, dn));
      } else if (dn.isDecommissioned() ){
        out.print(DECOMMISSIONED_STATUS);
      } else if (dn.isDecommissionInProgress()) {
        out.print(DECOMMISSIONING_STATUS);
      } else {
        out.print(HEALTHY_STATUS);
      }
      out.print("\n");
    }
  } catch (Exception e){
    String errMsg = "Fsck on blockId '" + blockId;
    LOG.warn(errMsg, e);
    out.println(e.getMessage());
    out.print("\n\n" + errMsg);
    LOG.warn("Error in looking up block", e);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 69, Source file: NamenodeFsck.java


Example 11: countReplicas

import org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas; // import the required package/class
private static NumberReplicas countReplicas(final FSNamesystem namesystem, ExtendedBlock block) {
  final BlockManager blockManager = namesystem.getBlockManager();
  return blockManager.countNodes(blockManager.getStoredBlock(
      block.getLocalBlock()));
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 6, Source file: TestProcessCorruptBlocks.java



Note: The org.apache.hadoop.hdfs.server.blockmanagement.NumberReplicas examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of each snippet belongs to its original authors; consult the corresponding project's license before redistributing or reusing the code.

