
Java FsVolumeSpi Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi. If you have been wondering what FsVolumeSpi is for or how to use it, the curated class examples below should help.



FsVolumeSpi belongs to the org.apache.hadoop.hdfs.server.datanode.fsdataset package. The 20 code examples below are sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
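As a quick orientation before the examples, the sketch below condenses the FsVolumeSpi surface that the snippets in this article actually exercise (getBasePath, getStorageID, getAvailable, isTransientStorage). The interface and the in-memory volumes here are simplified hypothetical stand-ins for illustration only, not the real Hadoop types.

```java
import java.util.Arrays;
import java.util.List;

/** Simplified stand-in for the FsVolumeSpi surface used in the examples below. */
public class VolumeApiSketch {
    interface Volume {
        String getBasePath();         // root directory of the volume
        String getStorageID();        // unique ID, used as a map key by the scanners
        long getAvailable();          // free space in bytes
        boolean isTransientStorage(); // true for RAM-disk style volumes
    }

    /** Trivial in-memory volume for illustration (hypothetical helper). */
    static Volume volume(String path, String id, long avail, boolean transient_) {
        return new Volume() {
            public String getBasePath() { return path; }
            public String getStorageID() { return id; }
            public long getAvailable() { return avail; }
            public boolean isTransientStorage() { return transient_; }
        };
    }

    /** Pick the first non-transient volume, echoing the filtering seen in the test examples. */
    static Volume firstPersistent(List<Volume> volumes) {
        for (Volume v : volumes) {
            if (!v.isTransientStorage()) {
                return v;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Volume> vols = Arrays.asList(
            volume("/ram0", "s1", 64L, true),
            volume("/data1", "s2", 1024L, false));
        System.out.println(firstPersistent(vols).getBasePath()); // prints /data1
    }
}
```

Several examples below (6, 16, 17) apply exactly this kind of filtering on isTransientStorage before touching files on a volume.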

Example 1: ScanInfo

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
ScanInfo(long blockId, File blockFile, File metaFile, FsVolumeSpi vol) {
  this.blockId = blockId;
  String condensedVolPath = vol == null ? null :
    getCondensedPath(vol.getBasePath());
  this.blockSuffix = blockFile == null ? null :
    getSuffix(blockFile, condensedVolPath);
  this.blockFileLength = (blockFile != null) ? blockFile.length() : 0; 
  if (metaFile == null) {
    this.metaSuffix = null;
  } else if (blockFile == null) {
    this.metaSuffix = getSuffix(metaFile, condensedVolPath);
  } else {
    this.metaSuffix = getSuffix(metaFile,
        condensedVolPath + blockSuffix);
  }
  this.volume = vol;
}
 
Source: naver/hadoop, DirectoryScanner.java (18 lines)


Example 2: removeVolumeScanner

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Stops and removes a volume scanner.<p/>
 *
 * This function will block until the volume scanner has stopped.
 *
 * @param volume           The volume to remove.
 */
public synchronized void removeVolumeScanner(FsVolumeSpi volume) {
  if (!isEnabled()) {
    LOG.debug("Not removing volume scanner for {}, because the block " +
        "scanner is disabled.", volume.getStorageID());
    return;
  }
  VolumeScanner scanner = scanners.get(volume.getStorageID());
  if (scanner == null) {
    LOG.warn("No scanner found to remove for volumeId {}",
        volume.getStorageID());
    return;
  }
  LOG.info("Removing scanner for volume {} (StorageID {})",
      volume.getBasePath(), volume.getStorageID());
  scanner.shutdown();
  scanners.remove(volume.getStorageID());
  Uninterruptibles.joinUninterruptibly(scanner, 5, TimeUnit.MINUTES);
}
 
Source: naver/hadoop, BlockScanner.java (26 lines)


Example 3: testLocalDirs

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Check that the permissions of the local DN directories are as expected.
 */
@Test
public void testLocalDirs() throws Exception {
  Configuration conf = new Configuration();
  final String permStr = conf.get(
    DFSConfigKeys.DFS_DATANODE_DATA_DIR_PERMISSION_KEY);
  FsPermission expected = new FsPermission(permStr);

  // Check permissions on directories in 'dfs.datanode.data.dir'
  FileSystem localFS = FileSystem.getLocal(conf);
  for (DataNode dn : cluster.getDataNodes()) {
    for (FsVolumeSpi v : dn.getFSDataset().getVolumes()) {
      String dir = v.getBasePath();
      Path dataDir = new Path(dir);
      FsPermission actual = localFS.getFileStatus(dataDir).getPermission();
      assertEquals("Permission for dir: " + dataDir + ", is " + actual +
          ", while expected is " + expected, expected, actual);
    }
  }
}
 
Source: naver/hadoop, TestDiskError.java (23 lines)


Example 4: ScanInfo

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Create a ScanInfo object for a block. This constructor will examine
 * the block data and meta-data files.
 *
 * @param blockId the block ID
 * @param blockFile the path to the block data file
 * @param metaFile the path to the block meta-data file
 * @param vol the volume that contains the block
 */
ScanInfo(long blockId, File blockFile, File metaFile, FsVolumeSpi vol) {
  this.blockId = blockId;
  String condensedVolPath = vol == null ? null :
    getCondensedPath(vol.getBasePath());
  this.blockSuffix = blockFile == null ? null :
    getSuffix(blockFile, condensedVolPath);
  this.blockFileLength = (blockFile != null) ? blockFile.length() : 0; 
  if (metaFile == null) {
    this.metaSuffix = null;
  } else if (blockFile == null) {
    this.metaSuffix = getSuffix(metaFile, condensedVolPath);
  } else {
    this.metaSuffix = getSuffix(metaFile,
        condensedVolPath + blockSuffix);
  }
  this.volume = vol;
}
 
Source: aliyun-beta/aliyun-oss-hadoop-fs, DirectoryScanner.java (27 lines)


Example 5: getStoredReplicas

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
@Override
public Iterator<Replica> getStoredReplicas(String bpid) throws IOException {
  // Reload replicas from the disk.
  ReplicaMap replicaMap = new ReplicaMap(dataset);
  try (FsVolumeReferences refs = dataset.getFsVolumeReferences()) {
    for (FsVolumeSpi vol : refs) {
      FsVolumeImpl volume = (FsVolumeImpl) vol;
      volume.getVolumeMap(bpid, replicaMap, dataset.ramDiskReplicaTracker);
    }
  }

  // Cast ReplicaInfo to Replica, because ReplicaInfo assumes a file-based
  // FsVolumeSpi implementation.
  List<Replica> ret = new ArrayList<>();
  if (replicaMap.replicas(bpid) != null) {
    ret.addAll(replicaMap.replicas(bpid));
  }
  return ret.iterator();
}
 
Source: aliyun-beta/aliyun-oss-hadoop-fs, FsDatasetImplTestUtils.java (20 lines)


Example 6: createReplicas

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
private void createReplicas(List<String> bpList, List<FsVolumeSpi> volumes,
                            FsDatasetTestUtils testUtils) throws IOException {
  // Create every type of replica and add it to the volume map:
  // one ReplicaInfo of each kind under each block pool's corresponding volume.
  long id = 1; // This variable is used as both blockId and genStamp
  for (String bpId: bpList) {
    for (FsVolumeSpi volume: volumes) {
      ExtendedBlock eb = new ExtendedBlock(bpId, id, 1, id);
      testUtils.createFinalizedReplica(volume, eb);
      id++;

      eb = new ExtendedBlock(bpId, id, 1, id);
      testUtils.createRBW(volume, eb);
      id++;

      eb = new ExtendedBlock(bpId, id, 1, id);
      testUtils.createReplicaWaitingToBeRecovered(volume, eb);
      id++;

      eb = new ExtendedBlock(bpId, id, 1, id);
      testUtils.createReplicaInPipeline(volume, eb);
      id++;
    }
  }
}
 
Source: aliyun-beta/aliyun-oss-hadoop-fs, TestWriteToReplica.java (27 lines)


Example 7: testLocalDirs

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Check that the permissions of the local DN directories are as expected.
 */
@Test
public void testLocalDirs() throws Exception {
  Configuration conf = new Configuration();
  final String permStr = conf.get(
    DFSConfigKeys.DFS_DATANODE_DATA_DIR_PERMISSION_KEY);
  FsPermission expected = new FsPermission(permStr);

  // Check permissions on directories in 'dfs.datanode.data.dir'
  FileSystem localFS = FileSystem.getLocal(conf);
  for (DataNode dn : cluster.getDataNodes()) {
    try (FsDatasetSpi.FsVolumeReferences volumes =
        dn.getFSDataset().getFsVolumeReferences()) {
      for (FsVolumeSpi vol : volumes) {
        String dir = vol.getBasePath();
        Path dataDir = new Path(dir);
        FsPermission actual = localFS.getFileStatus(dataDir).getPermission();
        assertEquals("Permission for dir: " + dataDir + ", is " + actual +
            ", while expected is " + expected, expected, actual);
      }
    }
  }
}
 
Source: aliyun-beta/aliyun-oss-hadoop-fs, TestDiskError.java (26 lines)


Example 8: getRemaining

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
long getRemaining() throws IOException {
  long remaining = 0L;
  for (FsVolumeSpi vol : volumes.get()) {
    try (FsVolumeReference ref = vol.obtainReference()) {
      remaining += vol.getAvailable();
    } catch (ClosedChannelException e) {
      // ignore
    }
  }
  return remaining;
}
 
Source: naver/hadoop, FsVolumeList.java (12 lines)
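Example 8's loop can be reproduced in a self-contained form. In the real code, obtainReference() pins the volume so it cannot be removed mid-iteration, and a ClosedChannelException signals a volume that has already been closed; the sketch below models that with a simple closed flag. The Volume class here is a hypothetical stand-in, not the Hadoop type.

```java
import java.nio.channels.ClosedChannelException;
import java.util.Arrays;
import java.util.List;

/** Self-contained sketch of the getRemaining pattern from example 8. */
public class GetRemainingSketch {
    static class Volume {
        final long available;
        final boolean closed; // stands in for a volume removed mid-iteration

        Volume(long available, boolean closed) {
            this.available = available;
            this.closed = closed;
        }

        /** Mirrors vol.obtainReference(): fails if the volume is gone. */
        void obtainReference() throws ClosedChannelException {
            if (closed) {
                throw new ClosedChannelException();
            }
        }

        long getAvailable() { return available; }
    }

    /** Sum free space, skipping volumes that disappear during iteration. */
    static long getRemaining(List<Volume> volumes) {
        long remaining = 0L;
        for (Volume vol : volumes) {
            try {
                vol.obtainReference();
                remaining += vol.getAvailable();
            } catch (ClosedChannelException e) {
                // Volume was removed concurrently; ignore it, as the real code does.
            }
        }
        return remaining;
    }

    public static void main(String[] args) {
        List<Volume> vols = Arrays.asList(
            new Volume(100L, false),
            new Volume(999L, true),   // closed: not counted
            new Volume(200L, false));
        System.out.println(getRemaining(vols)); // prints 300
    }
}
```

Swallowing ClosedChannelException rather than propagating it is the design choice worth noting: a volume vanishing during hot-swap is expected, so the total simply excludes it.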


Example 9: addDifference

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/** Block is not found on the disk */
private void addDifference(LinkedList<ScanInfo> diffRecord,
                           Stats statsRecord, long blockId,
                           FsVolumeSpi vol) {
  statsRecord.missingBlockFile++;
  statsRecord.missingMetaFile++;
  diffRecord.add(new ScanInfo(blockId, null, null, vol));
}
 
Source: naver/hadoop, DirectoryScanner.java (9 lines)


Example 10: isValid

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/** Is the given volume still valid in the dataset? */
private static boolean isValid(final FsDatasetSpi<?> dataset,
    final FsVolumeSpi volume) {
  for (FsVolumeSpi vol : dataset.getVolumes()) {
    if (vol == volume) {
      return true;
    }
  }
  return false;
}
 
Source: naver/hadoop, DirectoryScanner.java (11 lines)


Example 11: reportBadBlocks

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Report a bad block which is hosted on the local DN.
 */
public void reportBadBlocks(ExtendedBlock block) throws IOException{
  BPOfferService bpos = getBPOSForBlock(block);
  FsVolumeSpi volume = getFSDataset().getVolume(block);
  bpos.reportBadBlocks(
      block, volume.getStorageID(), volume.getStorageType());
}
 
Source: naver/hadoop, DataNode.java (10 lines)


Example 12: reportBadBlock

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
private void reportBadBlock(final BPOfferService bpos,
    final ExtendedBlock block, final String msg) {
  FsVolumeSpi volume = getFSDataset().getVolume(block);
  bpos.reportBadBlocks(
      block, volume.getStorageID(), volume.getStorageType());
  LOG.warn(msg);
}
 
Source: naver/hadoop, DataNode.java (8 lines)


Example 13: addVolumeScanner

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Set up a scanner for the given block pool and volume.
 *
 * @param ref              A reference to the volume.
 */
public synchronized void addVolumeScanner(FsVolumeReference ref) {
  boolean success = false;
  try {
    FsVolumeSpi volume = ref.getVolume();
    if (!isEnabled()) {
      LOG.debug("Not adding volume scanner for {}, because the block " +
          "scanner is disabled.", volume.getBasePath());
      return;
    }
    VolumeScanner scanner = scanners.get(volume.getStorageID());
    if (scanner != null) {
      LOG.error("Already have a scanner for volume {}.",
          volume.getBasePath());
      return;
    }
    LOG.debug("Adding scanner for volume {} (StorageID {})",
        volume.getBasePath(), volume.getStorageID());
    scanner = new VolumeScanner(conf, datanode, ref);
    scanner.start();
    scanners.put(volume.getStorageID(), scanner);
    success = true;
  } finally {
    if (!success) {
      // If we didn't create a new VolumeScanner object, we don't
      // need this reference to the volume.
      IOUtils.cleanup(null, ref);
    }
  }
}
 
Source: naver/hadoop, BlockScanner.java (35 lines)


Example 14: handle

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
public void handle(ExtendedBlock block, IOException e) {
  FsVolumeSpi volume = scanner.volume;
  if (e == null) {
    LOG.trace("Successfully scanned {} on {}", block, volume.getBasePath());
    return;
  }
  // If the block does not exist anymore, then it's not an error.
  if (!volume.getDataset().contains(block)) {
    LOG.debug("Volume {}: block {} is no longer in the dataset.",
        volume.getBasePath(), block);
    return;
  }
  // If the block exists, the exception may due to a race with write:
  // The BlockSender got an old block path in rbw. BlockReceiver removed
  // the rbw block from rbw to finalized but BlockSender tried to open the
  // file before BlockReceiver updated the VolumeMap. The state of the
  // block can be changed again now, so ignore this error here. If there
  // is a block really deleted by mistake, DirectoryScan should catch it.
  if (e instanceof FileNotFoundException) {
    LOG.info("Volume {}: verification failed for {} because of " +
            "FileNotFoundException.  This may be due to a race with write.",
        volume.getBasePath(), block);
    return;
  }
  LOG.warn("Reporting bad {} on {}", block, volume.getBasePath());
  try {
    scanner.datanode.reportBadBlocks(block);
  } catch (IOException ie) {
    // This is bad, but not bad enough to shut down the scanner.
    LOG.warn("Cannot report bad " + block.getBlockId(), e);
  }
}
 
Source: naver/hadoop, VolumeScanner.java (33 lines)


Example 15: setVolumeFull

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
private void setVolumeFull(DataNode dn, StorageType type) {
  List<? extends FsVolumeSpi> volumes = dn.getFSDataset().getVolumes();
  for (FsVolumeSpi v : volumes) {
    FsVolumeImpl volume = (FsVolumeImpl) v;
    if (volume.getStorageType() == type) {
      LOG.info("setCapacity to 0 for [" + volume.getStorageType() + "]"
          + volume.getStorageID());
      volume.setCapacityForTesting(0);
    }
  }
}
 
Source: naver/hadoop, TestStorageMover.java (12 lines)


Example 16: ensureLazyPersistBlocksAreSaved

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Make sure at least one non-transient volume has a saved copy of the replica.
 * An infinite loop is used to ensure the async lazy persist tasks are completely
 * done before verification. Caller of ensureLazyPersistBlocksAreSaved expects
 * either a successful pass or timeout failure.
 */
protected final void ensureLazyPersistBlocksAreSaved(
    LocatedBlocks locatedBlocks) throws IOException, InterruptedException {
  final String bpid = cluster.getNamesystem().getBlockPoolId();
  List<? extends FsVolumeSpi> volumes =
    cluster.getDataNodes().get(0).getFSDataset().getVolumes();
  final Set<Long> persistedBlockIds = new HashSet<Long>();

  while (persistedBlockIds.size() < locatedBlocks.getLocatedBlocks().size()) {
    // Take 1 second sleep before each verification iteration
    Thread.sleep(1000);

    for (LocatedBlock lb : locatedBlocks.getLocatedBlocks()) {
      for (FsVolumeSpi v : volumes) {
        if (v.isTransientStorage()) {
          continue;
        }

        FsVolumeImpl volume = (FsVolumeImpl) v;
        File lazyPersistDir = volume.getBlockPoolSlice(bpid).getLazypersistDir();

        long blockId = lb.getBlock().getBlockId();
        File targetDir =
          DatanodeUtil.idToBlockDir(lazyPersistDir, blockId);
        File blockFile = new File(targetDir, lb.getBlock().getBlockName());
        if (blockFile.exists()) {
          // Found a persisted copy for this block and added to the Set
          persistedBlockIds.add(blockId);
        }
      }
    }
  }

  // We should have found a persisted copy for each located block.
  assertThat(persistedBlockIds.size(), is(locatedBlocks.getLocatedBlocks().size()));
}
 
Source: naver/hadoop, LazyPersistTestCase.java (42 lines)


Example 17: verifyDeletedBlocks

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
protected final boolean verifyDeletedBlocks(LocatedBlocks locatedBlocks)
    throws IOException, InterruptedException {

  LOG.info("Verifying replica has no saved copy after deletion.");
  triggerBlockReport();

  while(
    DataNodeTestUtils.getPendingAsyncDeletions(cluster.getDataNodes().get(0))
      > 0L){
    Thread.sleep(1000);
  }

  final String bpid = cluster.getNamesystem().getBlockPoolId();
  List<? extends FsVolumeSpi> volumes =
    cluster.getDataNodes().get(0).getFSDataset().getVolumes();

  // Make sure deleted replica does not have a copy on either finalized dir of
  // transient volume or finalized dir of non-transient volume
  for (FsVolumeSpi v : volumes) {
    FsVolumeImpl volume = (FsVolumeImpl) v;
    File targetDir = (v.isTransientStorage()) ?
        volume.getBlockPoolSlice(bpid).getFinalizedDir() :
        volume.getBlockPoolSlice(bpid).getLazypersistDir();
    if (!verifyBlockDeletedFromDir(targetDir, locatedBlocks)) {
      return false;
    }
  }
  return true;
}
 
Source: naver/hadoop, LazyPersistTestCase.java (30 lines)


Example 18: startCluster

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 *
 * @param blockSize
 * @param perVolumeCapacity limit the capacity of each volume to the given
 *                          value. If negative, then don't limit.
 * @throws IOException
 */
private void startCluster(int blockSize, int numDatanodes, long perVolumeCapacity) throws IOException {
  initConfig(blockSize);

  cluster = new MiniDFSCluster
      .Builder(conf)
      .storagesPerDatanode(STORAGES_PER_DATANODE)
      .numDataNodes(numDatanodes)
      .build();
  fs = cluster.getFileSystem();
  client = fs.getClient();
  cluster.waitActive();

  if (perVolumeCapacity >= 0) {
    for (DataNode dn : cluster.getDataNodes()) {
      for (FsVolumeSpi volume : dn.getFSDataset().getVolumes()) {
        ((FsVolumeImpl) volume).setCapacityForTesting(perVolumeCapacity);
      }
    }
  }

  if (numDatanodes == 1) {
    List<? extends FsVolumeSpi> volumes =
        cluster.getDataNodes().get(0).getFSDataset().getVolumes();
    assertThat(volumes.size(), is(1));
    singletonVolume = ((FsVolumeImpl) volumes.get(0));
  }
}
 
Source: naver/hadoop, TestRbwSpaceReservation.java (35 lines)


Example 19: getVolume

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/** Get the FsVolume on the given basePath */
private FsVolumeImpl getVolume(DataNode dn, File basePath) {
  for (FsVolumeSpi vol : dn.getFSDataset().getVolumes()) {
    if (vol.getBasePath().equals(basePath.getPath())) {
      return (FsVolumeImpl)vol;
    }
  }
  return null;
}
 
Source: naver/hadoop, TestDataNodeHotSwapVolumes.java (10 lines)


Example 20: duplicateBlock

import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; // import the required package/class
/**
 * Duplicate the given block on all volumes.
 * @param blockId
 * @throws IOException
 */
private void duplicateBlock(long blockId) throws IOException {
  synchronized (fds) {
    ReplicaInfo b = FsDatasetTestUtil.fetchReplicaInfo(fds, bpid, blockId);
    for (FsVolumeSpi v : fds.getVolumes()) {
      if (v.getStorageID().equals(b.getVolume().getStorageID())) {
        continue;
      }

      // Volume without a copy of the block. Make a copy now.
      File sourceBlock = b.getBlockFile();
      File sourceMeta = b.getMetaFile();
      String sourceRoot = b.getVolume().getBasePath();
      String destRoot = v.getBasePath();

      String relativeBlockPath = new File(sourceRoot).toURI().relativize(sourceBlock.toURI()).getPath();
      String relativeMetaPath = new File(sourceRoot).toURI().relativize(sourceMeta.toURI()).getPath();

      File destBlock = new File(destRoot, relativeBlockPath);
      File destMeta = new File(destRoot, relativeMetaPath);

      destBlock.getParentFile().mkdirs();
      FileUtils.copyFile(sourceBlock, destBlock);
      FileUtils.copyFile(sourceMeta, destMeta);

      if (destBlock.exists() && destMeta.exists()) {
        LOG.info("Copied " + sourceBlock + " ==> " + destBlock);
        LOG.info("Copied " + sourceMeta + " ==> " + destMeta);
      }
    }
  }
}
 
Source: naver/hadoop, TestDirectoryScanner.java (37 lines)



Note: The org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets are taken from open-source projects; copyright belongs to the original authors, and distribution or use must comply with each project's license. Do not republish without permission.

