
Java BlockProto Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto. If you have been wondering what BlockProto is for, or how to use it in practice, the curated examples below should help.



The BlockProto class belongs to the org.apache.hadoop.hdfs.protocol.proto.HdfsProtos package. Twelve code examples of the class are shown below, sorted by popularity.
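All of the examples below lean on the immutable builder idiom that protobuf-generated classes such as BlockProto expose (newBuilder(), chained setters, then build()). As a self-contained sketch of that idiom — SimpleBlock here is a hypothetical stand-in written for illustration, not the real generated class, which requires hadoop-hdfs on the classpath:

```java
// Hypothetical stand-in mimicking the builder API of a protobuf-generated
// message like BlockProto (setBlockId/setNumBytes/setGenStamp, then build()).
final class SimpleBlock {
  private final long blockId;
  private final long numBytes;
  private final long genStamp;

  private SimpleBlock(long blockId, long numBytes, long genStamp) {
    this.blockId = blockId;
    this.numBytes = numBytes;
    this.genStamp = genStamp;
  }

  static Builder newBuilder() { return new Builder(); }

  long getBlockId()  { return blockId; }
  long getNumBytes() { return numBytes; }
  long getGenStamp() { return genStamp; }

  static final class Builder {
    private long blockId, numBytes, genStamp;
    Builder setBlockId(long v)  { blockId = v;  return this; }
    Builder setNumBytes(long v) { numBytes = v; return this; }
    Builder setGenStamp(long v) { genStamp = v; return this; }
    SimpleBlock build() { return new SimpleBlock(blockId, numBytes, genStamp); }
  }
}

class BlockProtoIdiomDemo {
  public static void main(String[] args) {
    // Mirrors Example 12's convert(Block): populate fields, then build
    // an immutable message object.
    SimpleBlock b = SimpleBlock.newBuilder()
        .setBlockId(1).setNumBytes(100).setGenStamp(3).build();
    System.out.println(b.getBlockId() + "," + b.getNumBytes() + "," + b.getGenStamp());
  }
}
```

The real BlockProto follows the same shape, which is why the conversion helpers below (PBHelper.convert and friends) read as a straight field-by-field copy in each direction.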

Example 1: initReplicaRecovery

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
@Override
public ReplicaRecoveryInfo initReplicaRecovery(RecoveringBlock rBlock)
    throws IOException {
  InitReplicaRecoveryRequestProto req = InitReplicaRecoveryRequestProto
      .newBuilder().setBlock(PBHelper.convert(rBlock)).build();
  InitReplicaRecoveryResponseProto resp;
  try {
    resp = rpcProxy.initReplicaRecovery(NULL_CONTROLLER, req);
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }
  if (!resp.getReplicaFound()) {
    // No replica found on the remote node.
    return null;
  } else {
    if (!resp.hasBlock() || !resp.hasState()) {
      throw new IOException("Replica was found but missing fields. " +
          "Req: " + req + "\n" +
          "Resp: " + resp);
    }
  }
  
  BlockProto b = resp.getBlock();
  return new ReplicaRecoveryInfo(b.getBlockId(), b.getNumBytes(),
      b.getGenStamp(), PBHelper.convert(resp.getState()));
}
 
Developer: naver, Project: hadoop, Lines: 27, Source: InterDatanodeProtocolTranslatorPB.java


Example 2: dumpINodeFile

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
private void dumpINodeFile(INodeSection.INodeFile f) {
  o("replication", f.getReplication()).o("mtime", f.getModificationTime())
      .o("atime", f.getAccessTime())
      .o("preferredBlockSize", f.getPreferredBlockSize())
      .o("permission", dumpPermission(f.getPermission()));

  if (f.getBlocksCount() > 0) {
    out.print("<blocks>");
    for (BlockProto b : f.getBlocksList()) {
      out.print("<block>");
      o("id", b.getBlockId()).o("genstamp", b.getGenStamp()).o("numBytes",
          b.getNumBytes());
      out.print("</block>\n");
    }
    out.print("</blocks>\n");
  }

  if (f.hasFileUC()) {
    INodeSection.FileUnderConstructionFeature u = f.getFileUC();
    out.print("<file-under-construction>");
    o("clientName", u.getClientName()).o("clientMachine",
        u.getClientMachine());
    out.print("</file-under-construction>\n");
  }
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: PBImageXmlWriter.java


Example 3: dumpINodeFile

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
private void dumpINodeFile(INodeSection.INodeFile f) {
  o("replication", f.getReplication()).o("mtime", f.getModificationTime())
      .o("atime", f.getAccessTime())
      .o("preferredBlockSize", f.getPreferredBlockSize())
      .o("permission", dumpPermission(f.getPermission()));
  dumpAcls(f.getAcl());
  if (f.getBlocksCount() > 0) {
    out.print("<blocks>");
    for (BlockProto b : f.getBlocksList()) {
      out.print("<block>");
      o("id", b.getBlockId()).o("genstamp", b.getGenStamp()).o("numBytes",
          b.getNumBytes());
      out.print("</block>\n");
    }
    out.print("</blocks>\n");
  }

  if (f.hasFileUC()) {
    INodeSection.FileUnderConstructionFeature u = f.getFileUC();
    out.print("<file-under-construction>");
    o("clientName", u.getClientName()).o("clientMachine",
        u.getClientMachine());
    out.print("</file-under-construction>\n");
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 26, Source: PBImageXmlWriter.java


Example 4: convert

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
public static BlockCommand convert(BlockCommandProto blkCmd) {
  List<BlockProto> blockProtoList = blkCmd.getBlocksList();
  Block[] blocks = new Block[blockProtoList.size()];
  for (int i = 0; i < blockProtoList.size(); i++) {
    blocks[i] = PBHelper.convert(blockProtoList.get(i));
  }
  List<DatanodeInfosProto> targetList = blkCmd.getTargetsList();
  DatanodeInfo[][] targets = new DatanodeInfo[targetList.size()][];
  for (int i = 0; i < targetList.size(); i++) {
    targets[i] = PBHelper.convert(targetList.get(i));
  }
  int action = DatanodeProtocol.DNA_UNKNOWN;
  switch (blkCmd.getAction()) {
  case TRANSFER:
    action = DatanodeProtocol.DNA_TRANSFER;
    break;
  case INVALIDATE:
    action = DatanodeProtocol.DNA_INVALIDATE;
    break;
  case SHUTDOWN:
    action = DatanodeProtocol.DNA_SHUTDOWN;
    break;
  }
  return new BlockCommand(action, blkCmd.getBlockPoolId(), blocks, targets);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 26, Source: PBHelper.java


Example 5: convert

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
public static BlockCommand convert(BlockCommandProto blkCmd) {
  List<BlockProto> blockProtoList = blkCmd.getBlocksList();
  Block[] blocks = new Block[blockProtoList.size()];
  for (int i = 0; i < blockProtoList.size(); i++) {
    blocks[i] = PBHelper.convert(blockProtoList.get(i));
  }
  List<DatanodeInfosProto> targetList = blkCmd.getTargetsList();
  DatanodeInfo[][] targets = new DatanodeInfo[targetList.size()][];
  for (int i = 0; i < targetList.size(); i++) {
    targets[i] = PBHelper.convert(targetList.get(i));
  }
  int action = DatanodeProtocol.DNA_UNKNOWN;
  switch (blkCmd.getAction()) {
    case TRANSFER:
      action = DatanodeProtocol.DNA_TRANSFER;
      break;
    case INVALIDATE:
      action = DatanodeProtocol.DNA_INVALIDATE;
      break;
    case SHUTDOWN:
      action = DatanodeProtocol.DNA_SHUTDOWN;
      break;
  }
  return new BlockCommand(action, blkCmd.getBlockPoolId(), blocks, targets);
}
 
Developer: hopshadoop, Project: hops, Lines: 26, Source: PBHelper.java


Example 6: run

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
private void run(InputStream in) throws IOException {
  INodeSection s = INodeSection.parseDelimitedFrom(in);
  for (int i = 0; i < s.getNumInodes(); ++i) {
    INodeSection.INode p = INodeSection.INode.parseDelimitedFrom(in);
    if (p.getType() == INodeSection.INode.Type.FILE) {
      ++totalFiles;
      INodeSection.INodeFile f = p.getFile();
      totalBlocks += f.getBlocksCount();
      long fileSize = 0;
      for (BlockProto b : f.getBlocksList()) {
        fileSize += b.getNumBytes();
      }
      maxFileSize = Math.max(fileSize, maxFileSize);
      totalSpace += fileSize * f.getReplication();

      int bucket = fileSize > maxSize ? distribution.length - 1 : (int) Math
          .ceil((double)fileSize / steps);
      ++distribution[bucket];

    } else if (p.getType() == INodeSection.INode.Type.DIRECTORY) {
      ++totalDirectories;
    }

    if (i % (1 << 20) == 0) {
      out.println("Processed " + i + " inodes.");
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: FileDistributionCalculator.java


Example 7: testConvertBlock

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
@Test
public void testConvertBlock() {
  Block b = new Block(1, 100, 3);
  BlockProto bProto = PBHelper.convert(b);
  Block b2 = PBHelper.convert(bProto);
  assertEquals(b, b2);
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: TestPBHelper.java


Example 8: testConvertBlock

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
@Test
public void testConvertBlock() {
  Block b = new Block(1, 100, 3);
  BlockProto bProto = PBHelperClient.convert(b);
  Block b2 = PBHelperClient.convert(bProto);
  assertEquals(b, b2);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 8, Source: TestPBHelper.java


Example 9: initReplicaRecovery

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
@Override
public ReplicaRecoveryInfo initReplicaRecovery(RecoveringBlock rBlock)
    throws IOException {
  InitReplicaRecoveryRequestProto req =
      InitReplicaRecoveryRequestProto.newBuilder()
          .setBlock(PBHelper.convert(rBlock)).build();
  InitReplicaRecoveryResponseProto resp;
  try {
    resp = rpcProxy.initReplicaRecovery(NULL_CONTROLLER, req);
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }
  if (!resp.getReplicaFound()) {
    // No replica found on the remote node.
    return null;
  } else {
    if (!resp.hasBlock() || !resp.hasState()) {
      throw new IOException("Replica was found but missing fields. " +
          "Req: " + req + "\n" +
          "Resp: " + resp);
    }
  }
  
  BlockProto b = resp.getBlock();
  return new ReplicaRecoveryInfo(b.getBlockId(), b.getNumBytes(),
      b.getGenStamp(), PBHelper.convert(resp.getState()));
}
 
Developer: hopshadoop, Project: hops, Lines: 28, Source: InterDatanodeProtocolTranslatorPB.java


Example 10: convert

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
public static BlockCommand convert(BlockCommandProto blkCmd) {
  List<BlockProto> blockProtoList = blkCmd.getBlocksList();
  Block[] blocks = new Block[blockProtoList.size()];
  for (int i = 0; i < blockProtoList.size(); i++) {
    blocks[i] = PBHelper.convert(blockProtoList.get(i));
  }
  List<DatanodeInfosProto> targetList = blkCmd.getTargetsList();
  DatanodeInfo[][] targets = new DatanodeInfo[targetList.size()][];
  for (int i = 0; i < targetList.size(); i++) {
    targets[i] = PBHelper.convert(targetList.get(i));
  }

  List<StorageUuidsProto> targetStorageUuidsList = blkCmd.getTargetStorageUuidsList();
  String[][] targetStorageIDs = new String[targetStorageUuidsList.size()][];
  for(int i = 0; i < targetStorageIDs.length; i++) {
    List<String> storageIDs = targetStorageUuidsList.get(i).getStorageUuidsList();
    targetStorageIDs[i] = storageIDs.toArray(new String[storageIDs.size()]);
  }

  int action = DatanodeProtocol.DNA_UNKNOWN;
  switch (blkCmd.getAction()) {
  case TRANSFER:
    action = DatanodeProtocol.DNA_TRANSFER;
    break;
  case INVALIDATE:
    action = DatanodeProtocol.DNA_INVALIDATE;
    break;
  case SHUTDOWN:
    action = DatanodeProtocol.DNA_SHUTDOWN;
    break;
  default:
    throw new AssertionError("Unknown action type: " + blkCmd.getAction());
  }
  return new BlockCommand(action, blkCmd.getBlockPoolId(), blocks, targets,
      targetStorageIDs);
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 37, Source: PBHelper.java


Example 11: getFileSize

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
private long getFileSize(INodeFile f) {
  long size = 0;
  for (BlockProto p : f.getBlocksList()) {
    size += p.getNumBytes();
  }
  return size;
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 8, Source: LsrPBImage.java


Example 12: convert

import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto; // import the dependent package/class
public static BlockProto convert(Block b) {
  return BlockProto.newBuilder().setBlockId(b.getBlockId())
      .setGenStamp(b.getGenerationStamp()).setNumBytes(b.getNumBytes())
      .build();
}
 
Developer: naver, Project: hadoop, Lines: 6, Source: PBHelper.java



Note: the org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto examples in this article are collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of the code remains with the original authors; consult each project's license before redistributing or reusing it. Do not republish without permission.

