Java BlockInfo Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo. If you are wondering what the BlockInfo class is for, how to use it, or want concrete examples of it in action, the curated class code examples below should help.



The BlockInfo class is defined in the org.apache.hadoop.hdfs.server.namenode.BlocksMap package (it is an inner class of BlocksMap). The following 20 code examples demonstrate the class, ordered by popularity.
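
Before the examples, here is a minimal, hedged sketch of constructing a BlockInfo by hand, in the style of the test code in example 19 below (the second constructor argument is the replication factor; variable names are illustrative):

import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo;

// A Block is (blockId, numBytes, generationStamp); BlockInfo wraps it with
// namenode-side bookkeeping such as the owning INode.
Block rawBlock = new Block(0L, 0L, 0L);
BlockInfo blockInfo = new BlockInfo(rawBlock, 1); // 1 = replication factor
long id = blockInfo.getBlockId();                 // inherited from Block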

Example 1: generateINode

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
private INode generateINode(long inodeId) {
  return new INode(inodeId, new PermissionStatus("", "", new FsPermission((short) 0)), 0, 0) {
    @Override
    long[] computeContentSummary(long[] summary) {
      return null;
    }

    @Override
    DirCounts spaceConsumedInTree(DirCounts counts) {
      return null;
    }

    @Override
    public boolean isDirectory() {
      return false;
    }

    @Override
    int collectSubtreeBlocksAndClear(List<BlockInfo> v, 
                                     int blocksLimit, 
                                     List<INode> removedINodes) {
      return 0;
    }
  };
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 26, Source: TestINodeMap.java
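
A hedged usage sketch for this test helper: the anonymous subclass stubs out INode's abstract methods, so the returned object can stand in wherever the test needs a minimal non-directory INode:

INode inode = generateINode(1L);
assert !inode.isDirectory();                              // always a file stub
assert inode.computeContentSummary(new long[4]) == null;  // stubbed out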


Example 2: filterMapWithInode

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
private void filterMapWithInode(INode node) {
  // Must NOT filter with files in WaitingRoom already!
  if (node.getFullPathName().startsWith(wrDir)) return;

  LOG.info("Filtering WaitingRoomMaps with inode " + node.getFullPathName());

  if (node.isDirectory()) {
    INodeDirectory dir = (INodeDirectory) node;
    for (INode child: dir.getChildren()) {
      filterMapWithInode(child);
    }
  } else {
    BlockInfo[] blocks = ((INodeFile)node).getBlocks();

    // Mark all blocks of this file as referenced
    for (BlockInfo block: blocks) {
      blockRefMap.remove(block.getBlockId());
    }
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 21, Source: WaitingRoom.java


Example 3: getAllLocatedBlocks

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
private void getAllLocatedBlocks(INode inode,
    List<LocatedBlocksWithMetaInfo> blocks)
throws IOException {
  if (inode.isDirectory()) {
    INodeDirectory dir = (INodeDirectory) inode;
    for (INode child: dir.getChildren()) {
      getAllLocatedBlocks(child, blocks);
    }
  } else {
    INodeFile file = (INodeFile) inode;
    BlockInfo[] fileBlocks = file.getBlocks();
    List<LocatedBlock> lb = new ArrayList<LocatedBlock>();
    for (BlockInfo block: fileBlocks) {
      // DatanodeInfo is unavailable, so set as empty for now
      lb.add(new LocatedBlock(block, new DatanodeInfo[0]));
    }

    LocatedBlocks locatedBlocks =  new LocatedBlocks(
                           file.computeContentSummary().getLength(), // flength
                           lb, // blks
                           false); // isUnderConstruction

    // Update DatanodeInfo from NN
    blocks.add(namenode.updateDatanodeInfo(locatedBlocks));
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 27, Source: SnapshotNode.java


Example 4: getParityBlocks

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
public BlockInfo[] getParityBlocks(BlockInfo[] blocks) {
  int numBlocks = (blocks.length / numStripeBlocks) * numParityBlocks
      + ((blocks.length % numStripeBlocks == 0) ? 0 : numParityBlocks);
  BlockInfo[] parityBlocks = new BlockInfo[numBlocks];
  int pos = 0;
  int parityEnd = numParityBlocks;
  for (int i = 0; i < numBlocks; i++) {
    parityBlocks[i] = blocks[pos];
    pos++;
    if (pos == parityEnd) {
      pos += numDataBlocks;
      parityEnd += numStripeBlocks;
    }
  }
  return parityBlocks;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 17, Source: RaidCodec.java
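
To make the index arithmetic concrete, here is a hedged walk-through assuming the rs codec used later in example 19 (numParityBlocks = 4, numDataBlocks = 10, so numStripeBlocks = 14), where each stripe stores its parity blocks first, then its data blocks:

// blocks.length = 28 (two full stripes)
int numParityBlocks = 4, numDataBlocks = 10, numStripeBlocks = 14;
int numBlocks = (28 / numStripeBlocks) * numParityBlocks
    + ((28 % numStripeBlocks == 0) ? 0 : numParityBlocks);  // = 8
// loop iterations: i = 0..3 copy blocks[0..3] (stripe 0 parity), then pos
// skips the 10 data blocks; i = 4..7 copy blocks[14..17] (stripe 1 parity)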


Example 5: checkRaidProgress

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
/**
 * Count the number of live replicas of each parity block in the raided file.
 * If any stripe does not have enough parity block replicas, add the stripe
 * to raidEncodingTasks to schedule encoding.
 * If forceAdd is true, always add the stripe to raidEncodingTasks without
 * checking.
 * @param sourceINode the raided source file
 * @param raidEncodingTasks the set of stripes to schedule for encoding
 * @param fs the namesystem, used to count live replicas
 * @param forceAdd whether to add every stripe unconditionally
 * @return true if all parity blocks of the file have enough replicas
 * @throws IOException
 */
public boolean checkRaidProgress(INodeFile sourceINode, 
    LightWeightLinkedSet<RaidBlockInfo> raidEncodingTasks, FSNamesystem fs,
    boolean forceAdd) throws IOException {
  boolean result = true;
  BlockInfo[] blocks = sourceINode.getBlocks();
  for (int i = 0; i < blocks.length;
      i += numStripeBlocks) {
    boolean hasParity = true;
    if (!forceAdd) {
      for (int j = 0; j < numParityBlocks; j++) {
        if (fs.countLiveNodes(blocks[i + j]) < this.parityReplication) {
          hasParity = false;
          break;
        }
      }
    }
    if (!hasParity || forceAdd) {
      raidEncodingTasks.add(new RaidBlockInfo(blocks[i], parityReplication, i));
      result = false; 
    }
  }
  return result;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 37, Source: RaidCodec.java
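
A hedged call sketch (names follow the signature above; codec is assumed to be the owning RaidCodec instance and namesystem an FSNamesystem):

LightWeightLinkedSet<RaidBlockInfo> tasks =
    new LightWeightLinkedSet<RaidBlockInfo>();
// Queue any stripe whose parity blocks lack live replicas; returns true
// only if every stripe already has enough parity replicas.
boolean fullyRaided =
    codec.checkRaidProgress(sourceINode, tasks, namesystem, false);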


Example 6: appendBlocks

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
@Override
public void appendBlocks(INodeFile [] inodes, int totalAddedBlocks, INodeFile inode) {
  int size = this.blocks.length;

  BlockInfo[] newlist = new BlockInfo[size + totalAddedBlocks];
  System.arraycopy(this.blocks, 0, newlist, 0, size);

  for(INodeFile in: inodes) {
    BlockInfo[] blks = in.storage.getBlocks();
    System.arraycopy(blks, 0, newlist, size, blks.length);
    size += blks.length;
  }

  this.blocks = newlist;

  for(BlockInfo bi: this.blocks) {
    bi.setINode(inode);
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 20, Source: INodeRegularStorage.java


Example 7: listMoveToHead

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
/**
 * Remove block from the list and insert
 * into the head of the list of blocks
 * related to the specified DatanodeDescriptor.
 * The head must not be null.
 * @return current block as the new head of the list.
 */
protected BlockInfo listMoveToHead(BlockInfo block, BlockInfo head,
    DatanodeIndex indexes) {
  assert head != null : "Head can not be null";
  if (head == block) {
    return head;
  }
  BlockInfo next = block.getSetNext(indexes.currentIndex, head);
  BlockInfo prev = block.getSetPrevious(indexes.currentIndex, null);

  head.setPrevious(indexes.headIndex, block);
  indexes.headIndex = indexes.currentIndex;
  prev.setNext(prev.findDatanode(this), next);
  if (next != null)
    next.setPrevious(next.findDatanode(this), prev);
  return block;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 24, Source: DatanodeDescriptor.java


Example 8: isSourceBlock

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
@Override
public boolean isSourceBlock(BlockInfo block) {
  int index = 0;
  if (block instanceof RaidBlockInfo) {
    RaidBlockInfo rbi = (RaidBlockInfo)block;
    index = rbi.index; 
  } else {
    if (LOG.isDebugEnabled()) {
      LOG.debug("block: " + block + " is not raid block info");
    }
    for (index = 0; index < blocks.length; index++) {
      if (blocks[index].equals(block)) {
        break;
      }
    }
    if (index == blocks.length) {
      return false; 
    }
  }
  return index % codec.numStripeBlocks >= codec.numParityBlocks;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 22, Source: INodeRaidStorage.java
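
The modulo test on the last line encodes the stripe layout: within each stripe the first numParityBlocks slots hold parity and the rest hold data (source) blocks. A self-contained sketch, again assuming 4 parity + 10 data blocks per stripe:

int numParityBlocks = 4, numStripeBlocks = 14;
for (int index = 0; index < 28; index++) {
  boolean isSource = index % numStripeBlocks >= numParityBlocks;
  // indices 0..3 and 14..17 are parity (false);
  // indices 4..13 and 18..27 are source/data (true)
}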


Example 9: updateNeededReplicationQueue

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
/**
 * Update a block's priority queue in the neededReplications queues
 * 
 * @param blockInfo blockInfo
 * @param delta the change in the number of replicas
 * @param numCurrentReplicas current number of replicas
 * @param numCurrentDecommissionedReplicas current number of decommissioned replicas
 * @param node the node where the replica resides
 * @param fileReplication expected number of replicas
 */
private void updateNeededReplicationQueue(BlockInfo blockInfo, int delta,
    int numCurrentReplicas, int numCurrentDecommissionedReplicas,
     DatanodeDescriptor node, short fileReplication) {
   int numOldReplicas = numCurrentReplicas;
   int numOldDecommissionedReplicas = numCurrentDecommissionedReplicas;
   if (node.isDecommissioned() || node.isDecommissionInProgress()) {
     numOldDecommissionedReplicas -= delta;
   } else {
     numOldReplicas -= delta;
   }   
   if (fileReplication > numOldReplicas) {
     neededReplications.remove(blockInfo, numOldReplicas,
         numOldDecommissionedReplicas, fileReplication);
   }
   if (fileReplication > numCurrentReplicas) {
     neededReplications.add(blockInfo, numCurrentReplicas,
         numCurrentDecommissionedReplicas, fileReplication); 
   }   
 }
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 30, Source: FSNamesystem.java
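
A hedged numeric walk-through of the bookkeeping above: suppose a new live replica just arrived on a healthy node (delta = +1), the file wants 3 replicas, and 2 now exist:

int delta = 1;
int numCurrentReplicas = 2, numCurrentDecommissionedReplicas = 0;
short fileReplication = 3;
int numOldReplicas = numCurrentReplicas - delta;  // 1 (node is healthy)
// fileReplication (3) > numOldReplicas (1): remove the stale queue entry
// fileReplication (3) > numCurrentReplicas (2): re-add at the new priority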


Example 10: set

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
public void set(long inodeId,
    String path,
    short replication,
    long mtime,
    long atime,
    long blockSize,
    BlockInfo[] blocks,
    PermissionStatus permissions,
    String clientName,
    String clientMachine) {
  this.inodeId = inodeId;
  this.path = path;
  this.replication = replication;
  this.mtime = mtime;
  this.atime = atime;
  this.blockSize = blockSize;
  this.blocks = blocks;
  this.permissions = permissions;
  this.clientName = clientName;
  this.clientMachine = clientMachine;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 22, Source: FSEditLogOp.java


Example 11: insertIntoList

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
/**
 * Adds blocks already connected into list, to this descriptor's blocks.
 * The blocks in the input list already have this descriptor inserted to them.
 * Used for parallel initial block reports.
 */
void insertIntoList(BlockInfo head, int headIndex, BlockInfo tail, int tailIndex, int count) {
  if (head == null)
    return;
  
  // connect tail to now-head
  tail.setNext(tailIndex, blockList);
  if (blockList != null)
    blockList.setPrevious(blockList.findDatanode(this), tail);
  
  // create new head
  blockList = head;
  blockList.setPrevious(headIndex, null);
  
  // add new blocks to the count
  numOfBlocks += count;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 22, Source: DatanodeDescriptor.java


Example 12: getBlockInfoInternal

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
private LocatedBlockWithFileName getBlockInfoInternal(long blockId)
    throws IOException {
  Block block = new Block(blockId);
  BlockInfo blockInfo = namesystem.blocksMap.getBlockInfo(block);
  if (null == blockInfo) {
    return null;
  }

  INodeFile inode = blockInfo.getINode();
  if (null == inode) {
    return null;
  }

  String fileName = inode.getFullPathName();
  // get the location info
  List<DatanodeInfo> diList = new ArrayList<DatanodeInfo>();
  for (Iterator<DatanodeDescriptor> it
      = namesystem.blocksMap.nodeIterator(block); it.hasNext();) {
    diList.add(it.next());
  }
  return new LocatedBlockWithFileName(block,
      diList.toArray(new DatanodeInfo[] {}), fileName);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 24, Source: NameNode.java
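
A hedged usage sketch (the getFileName accessor on LocatedBlockWithFileName is an assumption based on the class name, not confirmed by this article):

LocatedBlockWithFileName located = getBlockInfoInternal(blockId);
if (located == null) {
  // unknown block id, or the block no longer belongs to any file
} else {
  String fileName = located.getFileName();  // assumed accessor
}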


Example 13: ReplicationWork

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
public ReplicationWork(BlockInfo block,
                        INodeFile fileINode,
                        int numOfReplicas,
                        DatanodeDescriptor srcNode,
                        List<DatanodeDescriptor> containingNodes,
                        int priority){
  this.block = block;
  this.blockSize = block.getNumBytes();
  this.fileINode = fileINode;
  this.numOfReplicas = numOfReplicas;
  this.srcNode = srcNode;
  this.containingNodes = containingNodes;
  this.priority = priority;

  this.targets = null;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 17, Source: FSNamesystem.java


Example 14: getBlockIndex

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
int getBlockIndex(Block blk, String file) throws IOException {
  BlockInfo[] blocks = getBlocks();
  if (blocks == null) {
    throw new IOException("blocks is null for file " + file);
  }
  // null indicates that the block is currently being added. Return
  // blocks.length as the index in that case.
  if (blk == null) {
    return blocks.length;
  }
  for (int curBlk = 0; curBlk < blocks.length; curBlk++) {
    if (blocks[curBlk].equals(blk)) {
      return curBlk;
    }
  }
  throw new IOException("Cannot locate " + blk + " in file " + file);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 18, Source: INodeFile.java


Example 15: metaSave

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
/**
 * Iterate through all items and print them.
 */
void metaSave(PrintWriter out) {
  synchronized (pendingReplications) {
    out.println("Metasave: Blocks being replicated: " +
                pendingReplications.size());
    Iterator<Map.Entry<BlockInfo, PendingBlockInfo>> iter = 
        pendingReplications.entrySet().iterator();
    while (iter.hasNext()) {
      Map.Entry<BlockInfo, PendingBlockInfo> entry = iter.next();
      PendingBlockInfo pendingBlock = entry.getValue();
      BlockInfo block = entry.getKey();
      out.println(block + 
                  " StartTime: " + new Time(pendingBlock.timeStamp) +
                  " NumReplicaInProgress: " + 
                  pendingBlock.numReplicasInProgress);
    }
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 21, Source: PendingReplicationBlocks.java


Example 16: corruptFileForTesting

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
/**
 * corrupts a file by:
 * 1. removing all targets of the last block
 */
void corruptFileForTesting(String src) throws IOException {
  INodeFile inode = dir.getFileINode(src);

  if (inode.isUnderConstruction()) {
    INodeFileUnderConstruction pendingFile =
      (INodeFileUnderConstruction) inode;
    BlockInfo[] blocks = pendingFile.getBlocks();
    if (blocks != null && blocks.length >= 1) {
      BlockInfo lastBlockInfo = blocks[blocks.length - 1];

      pendingFile.setLastBlock(
        lastBlockInfo,
        new DatanodeDescriptor[0]
      );
    }
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 22, Source: FSNamesystem.java


Example 17: processPendingReplications

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
/**
 * If there were any replication requests that timed out, reap them
 * and put them back into the neededReplication queue
 */
void processPendingReplications() {
  BlockInfo[] timedOutItems = pendingReplications.getTimedOutBlocks();
  if (timedOutItems != null) {
    writeLock();
    try {
      for (int i = 0; i < timedOutItems.length; i++) {
        NumberReplicas num = countNodes(timedOutItems[i]);
        neededReplications.add(
          timedOutItems[i], 
          num.liveReplicas(),
          num.decommissionedReplicas(),
          getReplication(timedOutItems[i]));
      }
    } finally {
      writeUnlock();
    }
    /* If we know the target datanodes where the replication timedout,
     * we could invoke decBlocksScheduled() on it. It's OK for now.
     */
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 26, Source: FSNamesystem.java


Example 18: createINodeRaidFile

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
INodeFile createINodeRaidFile(short replication, RaidCodec codec, 
    long blockSize, BlockInfo[] blocks) {
  return new INodeFile(INodeId.GRANDFATHER_INODE_ID, 
      new PermissionStatus(userName, null,
      FsPermission.getDefault()), blocks, replication,
      1L, 2L, preferredBlockSize, codec);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 8, Source: TestINodeFile.java


Example 19: testEmptyINodeRaidStorage

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
@Test
public void testEmptyINodeRaidStorage() throws IOException {
  INodeFile emptyFile = createINodeRaidFile(replication,
      RaidCodecBuilder.getRSCodec("rs", 4, 10, RaidCodec.FULL_BLOCK, 
          parityReplication, parityReplication), preferredBlockSize, null);
  BlockInfo fakeBlockInfo = new BlockInfo(new Block(0, 0, 0), 1);
  assertEquals(2L, emptyFile.accessTime);
  assertEquals(1L, emptyFile.modificationTime);
  assertEquals(replication, emptyFile.getReplication());
  assertEquals(StorageType.RAID_STORAGE, emptyFile.getStorageType());
  assertEquals(null, emptyFile.getLastBlock());
  assertFalse(emptyFile.isLastBlock(fakeBlockInfo));
  LOG.info("Test getBlockIndex");
  try {
    emptyFile.getBlockIndex(fakeBlockInfo, "");
  } catch (IOException ioe) {
    assertTrue(ioe.getMessage().startsWith("blocks is null for file "));     
  }
  
  LOG.info("Test computeContentSummary");
  long[] summary = new long[]{0L, 0L, 0L, 0L};
  emptyFile.computeContentSummary(summary);
  assertEquals(0, summary[0]);
  assertEquals(1, summary[1]);
  assertEquals(0, summary[3]);
  
  LOG.info("Test collectSubtreeBlocksAndClear");
  ArrayList<BlockInfo> removedBlocks = new ArrayList<BlockInfo>();
  ArrayList<INode> removedINodes = new ArrayList<INode>();
  assertEquals(1, emptyFile.collectSubtreeBlocksAndClear(removedBlocks,
      Integer.MAX_VALUE, removedINodes));
  assertEquals(null, emptyFile.getStorage());
  assertEquals(0, removedBlocks.size());
  assertEquals(1, removedINodes.size());
  assertEquals(emptyFile, removedINodes.get(0));
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 37, Source: TestINodeFile.java


Example 20: update

import org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo; // import the required package/class
synchronized void update(BlockInfo blockInfo, int curReplicas, 
                         int decommissionedReplicas,
                         int curExpectedReplicas,
                         int curReplicasDelta, int expectedReplicasDelta) {
  int oldReplicas = curReplicas-curReplicasDelta;
  int oldExpectedReplicas = curExpectedReplicas-expectedReplicasDelta;
  int curPri = getPriority(blockInfo, curReplicas, decommissionedReplicas, curExpectedReplicas);
  int oldPri = getPriority(blockInfo, oldReplicas, decommissionedReplicas, oldExpectedReplicas);
  if(NameNode.stateChangeLog.isDebugEnabled()) {
    NameNode.stateChangeLog.debug("UnderReplicationBlocks.update " + 
                                  blockInfo +
                                  " curReplicas " + curReplicas +
                                  " curExpectedReplicas " + curExpectedReplicas +
                                  " oldReplicas " + oldReplicas +
                                  " oldExpectedReplicas  " + oldExpectedReplicas +
                                  " curPri  " + curPri +
                                  " oldPri  " + oldPri);
  }
  if(oldPri != LEVEL && oldPri != curPri) {
    remove(blockInfo, oldPri);
  }
  if(curPri != LEVEL && priorityQueues.get(curPri).add(blockInfo)) {
    if (NameNode.stateChangeLog.isDebugEnabled()) {
      NameNode.stateChangeLog.debug(
                                    "BLOCK* NameSystem.UnderReplicationBlock.update:"
                                    + blockInfo
                                    + " has only "+curReplicas
                                    + " replicas and need " + curExpectedReplicas
                                    + " replicas so is added to neededReplications"
                                    + " at priority level " + curPri);
    }
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines: 34, Source: UnderReplicatedBlocks.java



Note: The org.apache.hadoop.hdfs.server.namenode.BlocksMap.BlockInfo examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar code and documentation platforms. Copyright of the source code remains with the original authors; consult each project's License before redistributing or reusing the code. Do not reproduce without permission.

