
Java StorageReceivedDeletedBlocks Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks. If you are wondering what the StorageReceivedDeletedBlocks class is for, how to use it, or what real-world usage looks like, the curated class code examples below may help.



The StorageReceivedDeletedBlocks class belongs to the org.apache.hadoop.hdfs.server.protocol package. A total of 17 code examples of the class are shown below, sorted by popularity by default.
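Before the individual examples, here is a minimal, hedged sketch of how such an incremental report is typically assembled, using only the constructor and accessors that appear in the examples on this page. The helper name buildReport and its arguments are illustrative placeholders, not part of any project quoted below.

import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;

public class StorageReceivedDeletedBlocksSketch {
  // Wraps a single received block into the per-storage report array that
  // DatanodeProtocol#blockReceivedAndDeleted expects: one array entry per storage,
  // each carrying the ReceivedDeletedBlockInfo entries for that storage.
  static StorageReceivedDeletedBlocks[] buildReport(Block block, String storageUuid) {
    ReceivedDeletedBlockInfo[] rdBlocks = { new ReceivedDeletedBlockInfo(
        block, ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK, null) };
    // The String-based constructor is the one used by the benchmark examples below;
    // Example 1 shows the variant that carries a full DatanodeStorage via getStorage().
    return new StorageReceivedDeletedBlocks[] {
        new StorageReceivedDeletedBlocks(storageUuid, rdBlocks) };
  }
}

The client-side translators in Examples 1, 6, 9 and 14 iterate exactly this structure via getStorage()/getStorageID() and getBlocks() when converting it to protobuf.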

Example 1: blockReceivedAndDeleted

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
@Override
public void blockReceivedAndDeleted(DatanodeRegistration registration,
    String poolId, StorageReceivedDeletedBlocks[] receivedAndDeletedBlocks)
    throws IOException {
  BlockReceivedAndDeletedRequestProto.Builder builder = 
      BlockReceivedAndDeletedRequestProto.newBuilder()
      .setRegistration(PBHelper.convert(registration))
      .setBlockPoolId(poolId);
  for (StorageReceivedDeletedBlocks storageBlock : receivedAndDeletedBlocks) {
    StorageReceivedDeletedBlocksProto.Builder repBuilder = 
        StorageReceivedDeletedBlocksProto.newBuilder();
    repBuilder.setStorageUuid(storageBlock.getStorage().getStorageID());  // Set for wire compatibility.
    repBuilder.setStorage(PBHelper.convert(storageBlock.getStorage()));
    for (ReceivedDeletedBlockInfo rdBlock : storageBlock.getBlocks()) {
      repBuilder.addBlocks(PBHelper.convert(rdBlock));
    }
    builder.addBlocks(repBuilder.build());
  }
  try {
    rpcProxy.blockReceivedAndDeleted(NULL_CONTROLLER, builder.build());
  } catch (ServiceException se) {
    throw ProtobufHelper.getRemoteException(se);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 25, Source file: DatanodeProtocolClientSideTranslatorPB.java


Example 2: addBlocks

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
private ExtendedBlock addBlocks(String fileName, String clientName)
throws IOException {
  ExtendedBlock prevBlock = null;
  for(int jdx = 0; jdx < blocksPerFile; jdx++) {
    LocatedBlock loc = nameNodeProto.addBlock(fileName, clientName,
        prevBlock, null, INodeId.GRANDFATHER_INODE_ID, null);
    prevBlock = loc.getBlock();
    for(DatanodeInfo dnInfo : loc.getLocations()) {
      int dnIdx = Arrays.binarySearch(datanodes, dnInfo.getXferAddr());
      datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
      ReceivedDeletedBlockInfo[] rdBlocks = { new ReceivedDeletedBlockInfo(
          loc.getBlock().getLocalBlock(),
          ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK, null) };
      StorageReceivedDeletedBlocks[] report = { new StorageReceivedDeletedBlocks(
          datanodes[dnIdx].storage.getStorageID(), rdBlocks) };
      nameNodeProto.blockReceivedAndDeleted(datanodes[dnIdx].dnRegistration, loc
          .getBlock().getBlockPoolId(), report);
    }
  }
  return prevBlock;
}
 
Developer: naver, Project: hadoop, Lines of code: 22, Source file: NNThroughputBenchmark.java


Example 3: waitForBlockReceived

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
private ReceivedDeletedBlockInfo[] waitForBlockReceived(
    final ExtendedBlock fakeBlock,
    final DatanodeProtocolClientSideTranslatorPB mockNN) throws Exception {
  final String fakeBlockPoolId = fakeBlock.getBlockPoolId();
  final ArgumentCaptor<StorageReceivedDeletedBlocks[]> captor =
    ArgumentCaptor.forClass(StorageReceivedDeletedBlocks[].class);
  GenericTestUtils.waitFor(new Supplier<Boolean>() {

    @Override
    public Boolean get() {
      try {
        Mockito.verify(mockNN).blockReceivedAndDeleted(
          Mockito.<DatanodeRegistration>anyObject(),
          Mockito.eq(fakeBlockPoolId),
          captor.capture());
        return true;
      } catch (Throwable t) {
        return false;
      }
    }
  }, 100, 10000);
  return captor.getValue()[0].getBlocks();
}
 
Developer: naver, Project: hadoop, Lines of code: 24, Source file: TestBPOfferService.java


Example 4: testReportBlockReceived

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
/**
 * Ensure that an IBR is generated immediately for a block received by
 * the DN.
 *
 * @throws InterruptedException
 * @throws IOException
 */
@Test (timeout=60000)
public void testReportBlockReceived() throws InterruptedException, IOException {
  try {
    DatanodeProtocolClientSideTranslatorPB nnSpy = spyOnDnCallsToNn();
    injectBlockReceived();

    // Sleep for a very short time, this is necessary since the IBR is
    // generated asynchronously.
    Thread.sleep(2000);

    // Ensure that the received block was reported immediately.
    Mockito.verify(nnSpy, times(1)).blockReceivedAndDeleted(
        any(DatanodeRegistration.class),
        anyString(),
        any(StorageReceivedDeletedBlocks[].class));
  } finally {
    cluster.shutdown();
    cluster = null;
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 28, Source file: TestIncrementalBlockReports.java
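The fixed Thread.sleep(2000) in the test above works but ties the check to a timing guess. As a hedged alternative sketch (assuming the Mockito version on the classpath supports timeout-based verification, available since the 1.8.x line), the same assertion can poll the spy until the call arrives:

// Replaces the sleep-then-verify pair in the test above; nnSpy is the same
// DatanodeProtocolClientSideTranslatorPB spy. timeout(10000) keeps re-checking
// for up to 10 seconds instead of sleeping a fixed 2 seconds.
Mockito.verify(nnSpy, Mockito.timeout(10000).times(1)).blockReceivedAndDeleted(
    Mockito.any(DatanodeRegistration.class),
    Mockito.anyString(),
    Mockito.any(StorageReceivedDeletedBlocks[].class));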


Example 5: processIncrementalBlockReport

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
/**
 * The given node is reporting incremental information about some blocks.
 * This includes blocks that are starting to be received, completed being
 * received, or deleted.
 * 
 * This method must be called with FSNamesystem lock held.
 */
public void processIncrementalBlockReport(final DatanodeID nodeID,
    final StorageReceivedDeletedBlocks srdb) throws IOException {
  assert namesystem.hasWriteLock();
  final DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
  if (node == null || !node.isRegistered()) {
    blockLog.warn("BLOCK* processIncrementalBlockReport"
            + " is received from dead or unregistered node {}", nodeID);
    throw new IOException(
        "Got incremental block report from unregistered or dead node");
  }
  try {
    processIncrementalBlockReport(node, srdb);
  } catch (Exception ex) {
    node.setForceRegistration(true);
    throw ex;
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 25, Source file: BlockManager.java
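The javadoc and the assert namesystem.hasWriteLock() above mean the caller, not this method, takes the FSNamesystem write lock. A hypothetical caller-side sketch follows; the names namesystem, blockManager, nodeID and srdb mirror the snippet above, and this is not necessarily the NameNode's actual call path.

// Hypothetical caller-side sketch: take the FSNamesystem write lock, then hand the
// per-storage report to the BlockManager, matching the hasWriteLock() assertion above.
namesystem.writeLock();
try {
  blockManager.processIncrementalBlockReport(nodeID, srdb);
} finally {
  namesystem.writeUnlock();
}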


Example 6: blockReceivedAndDeleted

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
@Override
public void blockReceivedAndDeleted(DatanodeRegistration registration,
    String poolId, StorageReceivedDeletedBlocks[] receivedAndDeletedBlocks)
    throws IOException {
  BlockReceivedAndDeletedRequestProto.Builder builder = 
      BlockReceivedAndDeletedRequestProto.newBuilder()
      .setRegistration(PBHelper.convert(registration))
      .setBlockPoolId(poolId);
  for (StorageReceivedDeletedBlocks storageBlock : receivedAndDeletedBlocks) {
    StorageReceivedDeletedBlocksProto.Builder repBuilder = 
        StorageReceivedDeletedBlocksProto.newBuilder();
    repBuilder.setStorageUuid(storageBlock.getStorage().getStorageID());  // Set for wire compatibility.
    repBuilder.setStorage(PBHelperClient.convert(storageBlock.getStorage()));
    for (ReceivedDeletedBlockInfo rdBlock : storageBlock.getBlocks()) {
      repBuilder.addBlocks(PBHelper.convert(rdBlock));
    }
    builder.addBlocks(repBuilder.build());
  }
  try {
    rpcProxy.blockReceivedAndDeleted(NULL_CONTROLLER, builder.build());
  } catch (ServiceException se) {
    throw ProtobufHelper.getRemoteException(se);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 25, Source file: DatanodeProtocolClientSideTranslatorPB.java


Example 7: addBlocks

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
private ExtendedBlock addBlocks(String fileName, String clientName)
throws IOException {
  ExtendedBlock prevBlock = null;
  for(int jdx = 0; jdx < blocksPerFile; jdx++) {
    LocatedBlock loc = clientProto.addBlock(fileName, clientName,
        prevBlock, null, HdfsConstants.GRANDFATHER_INODE_ID, null);
    prevBlock = loc.getBlock();
    for(DatanodeInfo dnInfo : loc.getLocations()) {
      int dnIdx = dnInfo.getXferPort() - 1;
      datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
      ReceivedDeletedBlockInfo[] rdBlocks = { new ReceivedDeletedBlockInfo(
          loc.getBlock().getLocalBlock(),
          ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK, null) };
      StorageReceivedDeletedBlocks[] report = { new StorageReceivedDeletedBlocks(
          datanodes[dnIdx].storage.getStorageID(), rdBlocks) };
      dataNodeProto.blockReceivedAndDeleted(datanodes[dnIdx].dnRegistration,
          bpid, report);
    }
  }
  return prevBlock;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 22, Source file: NNThroughputBenchmark.java


Example 8: transferBlocks

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
/**
 * Transfer blocks to another data-node.
 * Just report on behalf of the other data-node
 * that the blocks have been received.
 */
private int transferBlocks( Block blocks[], 
                            DatanodeInfo xferTargets[][],
                            String targetStorageIDs[][]
                          ) throws IOException {
  for(int i = 0; i < blocks.length; i++) {
    DatanodeInfo blockTargets[] = xferTargets[i];
    for(int t = 0; t < blockTargets.length; t++) {
      DatanodeInfo dnInfo = blockTargets[t];
      String targetStorageID = targetStorageIDs[i][t];
      DatanodeRegistration receivedDNReg;
      receivedDNReg = new DatanodeRegistration(dnInfo,
        new DataStorage(nsInfo),
        new ExportedBlockKeys(), VersionInfo.getVersion());
      ReceivedDeletedBlockInfo[] rdBlocks = {
        new ReceivedDeletedBlockInfo(
              blocks[i], ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK,
              null) };
      StorageReceivedDeletedBlocks[] report = { new StorageReceivedDeletedBlocks(
          targetStorageID, rdBlocks) };
      nameNodeProto.blockReceivedAndDeleted(receivedDNReg, nameNode
          .getNamesystem().getBlockPoolId(), report);
    }
  }
  return blocks.length;
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 31, Source file: NNThroughputBenchmark.java


Example 9: blockReceivedAndDeleted

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
@Override
public void blockReceivedAndDeleted(DatanodeRegistration registration,
    String poolId, StorageReceivedDeletedBlocks[] receivedAndDeletedBlocks)
    throws IOException {
  BlockReceivedAndDeletedRequestProto.Builder builder = 
      BlockReceivedAndDeletedRequestProto.newBuilder()
      .setRegistration(PBHelper.convert(registration))
      .setBlockPoolId(poolId);
  for (StorageReceivedDeletedBlocks storageBlock : receivedAndDeletedBlocks) {
    StorageReceivedDeletedBlocksProto.Builder repBuilder = 
        StorageReceivedDeletedBlocksProto.newBuilder();
    repBuilder.setStorageID(storageBlock.getStorageID());
    for (ReceivedDeletedBlockInfo rdBlock : storageBlock.getBlocks()) {
      repBuilder.addBlocks(PBHelper.convert(rdBlock));
    }
    builder.addBlocks(repBuilder.build());
  }
  try {
    rpcProxy.blockReceivedAndDeleted(NULL_CONTROLLER, builder.build());
  } catch (ServiceException se) {
    throw ProtobufHelper.getRemoteException(se);
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines of code: 24, Source file: DatanodeProtocolClientSideTranslatorPB.java


Example 10: blockReceivedAndDeleted

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
@Override
public BlockReceivedAndDeletedResponseProto blockReceivedAndDeleted(
    RpcController controller, BlockReceivedAndDeletedRequestProto request)
    throws ServiceException {
  List<StorageReceivedDeletedBlocksProto> sBlocks = request.getBlocksList();
  StorageReceivedDeletedBlocks[] info = 
      new StorageReceivedDeletedBlocks[sBlocks.size()];
  for (int i = 0; i < sBlocks.size(); i++) {
    StorageReceivedDeletedBlocksProto sBlock = sBlocks.get(i);
    List<ReceivedDeletedBlockInfoProto> list = sBlock.getBlocksList();
    ReceivedDeletedBlockInfo[] rdBlocks = 
        new ReceivedDeletedBlockInfo[list.size()];
    for (int j = 0; j < list.size(); j++) {
      rdBlocks[j] = PBHelper.convert(list.get(j));
    }
    info[i] = new StorageReceivedDeletedBlocks(sBlock.getStorageID(), rdBlocks);
  }
  try {
    impl.blockReceivedAndDeleted(PBHelper.convert(request.getRegistration()),
        request.getBlockPoolId(), info);
  } catch (IOException e) {
    throw new ServiceException(e);
  }
  return VOID_BLOCK_RECEIVED_AND_DELETE_RESPONSE;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines of code: 26, Source file: DatanodeProtocolServerSideTranslatorPB.java


Example 11: transferBlocks

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
/**
 * Transfer blocks to another data-node.
 * Just report on behalf of the other data-node
 * that the blocks have been received.
 */
private int transferBlocks( Block blocks[], 
                            DatanodeInfo xferTargets[][] 
                          ) throws IOException {
  for(int i = 0; i < blocks.length; i++) {
    DatanodeInfo blockTargets[] = xferTargets[i];
    for(int t = 0; t < blockTargets.length; t++) {
      DatanodeInfo dnInfo = blockTargets[t];
      DatanodeRegistration receivedDNReg;
      receivedDNReg = new DatanodeRegistration(dnInfo,
        new DataStorage(nsInfo, dnInfo.getStorageID()),
        new ExportedBlockKeys(), VersionInfo.getVersion());
      ReceivedDeletedBlockInfo[] rdBlocks = {
        new ReceivedDeletedBlockInfo(
              blocks[i], ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK,
              null) };
      StorageReceivedDeletedBlocks[] report = { new StorageReceivedDeletedBlocks(
          receivedDNReg.getStorageID(), rdBlocks) };
      nameNodeProto.blockReceivedAndDeleted(receivedDNReg, nameNode
          .getNamesystem().getBlockPoolId(), report);
    }
  }
  return blocks.length;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines of code: 29, Source file: NNThroughputBenchmark.java


Example 12: addBlocks

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
private ExtendedBlock addBlocks(String fileName, String clientName)
throws IOException {
  ExtendedBlock prevBlock = null;
  for(int jdx = 0; jdx < blocksPerFile; jdx++) {
    LocatedBlock loc = nameNodeProto.addBlock(fileName, clientName,
        prevBlock, null, INodeId.GRANDFATHER_INODE_ID, null);
    prevBlock = loc.getBlock();
    for(DatanodeInfo dnInfo : loc.getLocations()) {
      int dnIdx = Arrays.binarySearch(datanodes, dnInfo.getXferAddr());
      datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
      ReceivedDeletedBlockInfo[] rdBlocks = { new ReceivedDeletedBlockInfo(
          loc.getBlock().getLocalBlock(),
          ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK, null) };
      StorageReceivedDeletedBlocks[] report = { new StorageReceivedDeletedBlocks(
          datanodes[dnIdx].dnRegistration.getStorageID(), rdBlocks) };
      nameNodeProto.blockReceivedAndDeleted(datanodes[dnIdx].dnRegistration, loc
          .getBlock().getBlockPoolId(), report);
    }
  }
  return prevBlock;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines of code: 22, Source file: NNThroughputBenchmark.java


Example 13: waitForBlockReceived

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
private ReceivedDeletedBlockInfo[] waitForBlockReceived(
    ExtendedBlock fakeBlock,
    DatanodeProtocolClientSideTranslatorPB mockNN) throws Exception {
  final ArgumentCaptor<StorageReceivedDeletedBlocks[]> captor =
    ArgumentCaptor.forClass(StorageReceivedDeletedBlocks[].class);
  GenericTestUtils.waitFor(new Supplier<Boolean>() {

    @Override
    public Boolean get() {
      try {
        Mockito.verify(mockNN1).blockReceivedAndDeleted(
          Mockito.<DatanodeRegistration>anyObject(),
          Mockito.eq(FAKE_BPID),
          captor.capture());
        return true;
      } catch (Throwable t) {
        return false;
      }
    }
  }, 100, 10000);
  return captor.getValue()[0].getBlocks();
}
 
Developer: ict-carch, Project: hadoop-plus, Lines of code: 23, Source file: TestBPOfferService.java


Example 14: blockReceivedAndDeleted

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
@Override
public void blockReceivedAndDeleted(DatanodeRegistration registration,
    String poolId, StorageReceivedDeletedBlocks[] receivedAndDeletedBlocks)
    throws IOException {
  BlockReceivedAndDeletedRequestProto.Builder builder =
      BlockReceivedAndDeletedRequestProto.newBuilder()
          .setRegistration(PBHelper.convert(registration))
          .setBlockPoolId(poolId);
  for (StorageReceivedDeletedBlocks storageBlock : receivedAndDeletedBlocks) {
    StorageReceivedDeletedBlocksProto.Builder repBuilder =
        StorageReceivedDeletedBlocksProto.newBuilder();
    repBuilder.setStorageID(storageBlock.getStorageID());
    for (ReceivedDeletedBlockInfo rdBlock : storageBlock.getBlocks()) {
      repBuilder.addBlocks(PBHelper.convert(rdBlock));
    }
    builder.addBlocks(repBuilder.build());
  }
  try {
    rpcProxy.blockReceivedAndDeleted(NULL_CONTROLLER, builder.build());
  } catch (ServiceException se) {
    throw ProtobufHelper.getRemoteException(se);
  }
}
 
Developer: hopshadoop, Project: hops, Lines of code: 24, Source file: DatanodeProtocolClientSideTranslatorPB.java


Example 15: blockReceivedAndDeleted

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
@Override
public BlockReceivedAndDeletedResponseProto blockReceivedAndDeleted(
    RpcController controller, BlockReceivedAndDeletedRequestProto request)
    throws ServiceException {
  List<StorageReceivedDeletedBlocksProto> sBlocks = request.getBlocksList();
  StorageReceivedDeletedBlocks[] info =
      new StorageReceivedDeletedBlocks[sBlocks.size()];
  for (int i = 0; i < sBlocks.size(); i++) {
    StorageReceivedDeletedBlocksProto sBlock = sBlocks.get(i);
    List<ReceivedDeletedBlockInfoProto> list = sBlock.getBlocksList();
    ReceivedDeletedBlockInfo[] rdBlocks =
        new ReceivedDeletedBlockInfo[list.size()];
    for (int j = 0; j < list.size(); j++) {
      rdBlocks[j] = PBHelper.convert(list.get(j));
    }
    info[i] =
        new StorageReceivedDeletedBlocks(sBlock.getStorageID(), rdBlocks);
  }
  try {
    impl.blockReceivedAndDeleted(PBHelper.convert(request.getRegistration()),
        request.getBlockPoolId(), info);
  } catch (IOException e) {
    throw new ServiceException(e);
  }
  return VOID_BLOCK_RECEIVED_AND_DELETE_RESPONSE;
}
 
Developer: hopshadoop, Project: hops, Lines of code: 27, Source file: DatanodeProtocolServerSideTranslatorPB.java


Example 16: transferBlocks

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
/**
 * Transfer blocks to another data-node.
 * Just report on behalf of the other data-node
 * that the blocks have been received.
 */
private int transferBlocks(Block blocks[], DatanodeInfo xferTargets[][])
    throws IOException {
  for (int i = 0; i < blocks.length; i++) {
    DatanodeInfo blockTargets[] = xferTargets[i];
    for (DatanodeInfo dnInfo : blockTargets) {
      DatanodeRegistration receivedDNReg;
      receivedDNReg = new DatanodeRegistration(dnInfo,
          new DataStorage(nsInfo, dnInfo.getStorageID()),
          new ExportedBlockKeys(), VersionInfo.getVersion());
      ReceivedDeletedBlockInfo[] rdBlocks =
          {new ReceivedDeletedBlockInfo(blocks[i],
              ReceivedDeletedBlockInfo.BlockStatus.RECEIVED, null)};
      StorageReceivedDeletedBlocks[] report =
          {new StorageReceivedDeletedBlocks(receivedDNReg.getStorageID(),
              rdBlocks)};
      nameNodeProto.blockReceivedAndDeleted(receivedDNReg,
          nameNode.getNamesystem().getBlockPoolId(), report);
    }
  }
  return blocks.length;
}
 
Developer: hopshadoop, Project: hops, Lines of code: 27, Source file: NNThroughputBenchmark.java


Example 17: addBlocks

import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks; // import the required package/class
private ExtendedBlock addBlocks(String fileName, String clientName)
    throws IOException {
  ExtendedBlock prevBlock = null;
  for (int jdx = 0; jdx < blocksPerFile; jdx++) {
    LocatedBlock loc =
        nameNodeProto.addBlock(fileName, clientName, prevBlock, null);
    prevBlock = loc.getBlock();
    for (DatanodeInfo dnInfo : loc.getLocations()) {
      int dnIdx = Arrays.binarySearch(datanodes, dnInfo.getXferAddr());
      datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
      ReceivedDeletedBlockInfo[] rdBlocks =
          {new ReceivedDeletedBlockInfo(loc.getBlock().getLocalBlock(),
              ReceivedDeletedBlockInfo.BlockStatus.RECEIVED, null)};
      StorageReceivedDeletedBlocks[] report =
          {new StorageReceivedDeletedBlocks(
              datanodes[dnIdx].dnRegistration.getStorageID(), rdBlocks)};
      nameNodeProto.blockReceivedAndDeleted(datanodes[dnIdx].dnRegistration,
          loc.getBlock().getBlockPoolId(), report);
    }
  }
  return prevBlock;
}
 
Developer: hopshadoop, Project: hops, Lines of code: 23, Source file: NNThroughputBenchmark.java



Note: The org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks examples in this article were collected from source-code and documentation platforms such as GitHub/MSDocs. The snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and redistribution and use should follow each project's License. Do not reproduce without permission.

