
Java FsVolumeImpl Class Code Examples


This article collects typical usage examples of the Java class FsVolumeImpl from the package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl. If you are wondering what FsVolumeImpl does and how to use it, the curated class examples below should help.



The FsVolumeImpl class belongs to the org.apache.hadoop.hdfs.server.datanode.fsdataset.impl package. Thirteen code examples for the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java examples.
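Before the individual examples, here is a minimal sketch of the pattern most of them share: enumerate the DataNode's volumes through the FsVolumeSpi interface and downcast each one to the concrete FsVolumeImpl. The helper name printVolumes is hypothetical; the getVolumes() accessor matches the Hadoop 2.x test code quoted in Examples 1 and 4, while newer releases replace it with getFsVolumeReferences() (see Examples 6 and 7).

import java.util.List;

import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;

// Hypothetical helper: print each volume's storage type and ID by
// downcasting the FsVolumeSpi interface to the concrete FsVolumeImpl.
private void printVolumes(DataNode dn) {
  List<? extends FsVolumeSpi> volumes = dn.getFSDataset().getVolumes();
  for (FsVolumeSpi v : volumes) {
    FsVolumeImpl volume = (FsVolumeImpl) v;
    System.out.println("[" + volume.getStorageType() + "] "
        + volume.getStorageID());
  }
}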

Example 1: setVolumeFull

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
private void setVolumeFull(DataNode dn, StorageType type) {
  List<? extends FsVolumeSpi> volumes = dn.getFSDataset().getVolumes();
  for (FsVolumeSpi v : volumes) {
    FsVolumeImpl volume = (FsVolumeImpl) v;
    if (volume.getStorageType() == type) {
      LOG.info("setCapacity to 0 for [" + volume.getStorageType() + "]"
          + volume.getStorageID());
      volume.setCapacityForTesting(0);
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: TestStorageMover.java


Example 2: before

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
@Before
public void before() {
  BlockScanner.Conf.allowUnitTestSettings = true;
  GenericTestUtils.setLogLevel(BlockScanner.LOG, Level.ALL);
  GenericTestUtils.setLogLevel(VolumeScanner.LOG, Level.ALL);
  GenericTestUtils.setLogLevel(FsVolumeImpl.LOG, Level.ALL);
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: TestBlockScanner.java


Example 3: testNextSorted

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
@Test(timeout=120000)
public void testNextSorted() throws Exception {
  List<String> arr = new LinkedList<String>();
  arr.add("1");
  arr.add("3");
  arr.add("5");
  arr.add("7");
  Assert.assertEquals("3", FsVolumeImpl.nextSorted(arr, "2"));
  Assert.assertEquals("3", FsVolumeImpl.nextSorted(arr, "1"));
  Assert.assertEquals("1", FsVolumeImpl.nextSorted(arr, ""));
  Assert.assertEquals("1", FsVolumeImpl.nextSorted(arr, null));
  Assert.assertEquals(null, FsVolumeImpl.nextSorted(arr, "9"));
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: TestBlockScanner.java
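The assertions above pin down the contract of FsVolumeImpl.nextSorted: given a sorted list, it returns the smallest entry strictly greater than the supplied key, treats null or the empty string as coming before every entry, and returns null when nothing larger exists. Below is a hedged re-implementation of that contract, for illustration only; the actual Hadoop implementation may be written differently.

import java.util.Collections;
import java.util.List;

// Sketch of the behavior exercised by testNextSorted (not the actual
// Hadoop source): return the smallest element strictly greater than prev.
static String nextSortedSketch(List<String> sorted, String prev) {
  if (prev == null || prev.isEmpty()) {
    return sorted.isEmpty() ? null : sorted.get(0);
  }
  int idx = Collections.binarySearch(sorted, prev);
  // binarySearch returns -(insertionPoint) - 1 when the key is absent.
  int next = (idx < 0) ? (-idx - 1) : (idx + 1);
  return next < sorted.size() ? sorted.get(next) : null;
}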


Example 4: getVolume

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
/** Get the FsVolume on the given basePath */
private FsVolumeImpl getVolume(DataNode dn, File basePath) {
  for (FsVolumeSpi vol : dn.getFSDataset().getVolumes()) {
    if (vol.getBasePath().equals(basePath.getPath())) {
      return (FsVolumeImpl)vol;
    }
  }
  return null;
}
 
Developer: naver, Project: hadoop, Lines: 10, Source: TestDataNodeHotSwapVolumes.java


Example 5: testDirectlyReloadAfterCheckDiskError

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
/**
 * Verify that {@link DataNode#checkDiskErrors()} removes all metadata in
 * DataNode upon a volume failure. Thus we can run reconfig on the same
 * configuration to reload the new volume on the same directory as the failed one.
 */
@Test(timeout=60000)
public void testDirectlyReloadAfterCheckDiskError()
    throws IOException, TimeoutException, InterruptedException,
    ReconfigurationException {
  startDFSCluster(1, 2);
  createFile(new Path("/test"), 32, (short)2);

  DataNode dn = cluster.getDataNodes().get(0);
  final String oldDataDir = dn.getConf().get(DFS_DATANODE_DATA_DIR_KEY);
  File dirToFail = new File(cluster.getDataDirectory(), "data1");

  FsVolumeImpl failedVolume = getVolume(dn, dirToFail);
  assertTrue("No FsVolume was found for " + dirToFail,
      failedVolume != null);
  long used = failedVolume.getDfsUsed();

  DataNodeTestUtils.injectDataDirFailure(dirToFail);
  // Call and wait for the DataNode to detect the disk failure.
  long lastDiskErrorCheck = dn.getLastDiskErrorCheck();
  dn.checkDiskErrorAsync();
  while (dn.getLastDiskErrorCheck() == lastDiskErrorCheck) {
    Thread.sleep(100);
  }

  createFile(new Path("/test1"), 32, (short)2);
  assertEquals(used, failedVolume.getDfsUsed());

  DataNodeTestUtils.restoreDataDirFromFailure(dirToFail);
  dn.reconfigurePropertyImpl(DFS_DATANODE_DATA_DIR_KEY, oldDataDir);

  createFile(new Path("/test2"), 32, (short)2);
  FsVolumeImpl restoredVolume = getVolume(dn, dirToFail);
  assertTrue(restoredVolume != null);
  assertTrue(restoredVolume != failedVolume);
  // More data has been written to this volume.
  assertTrue(restoredVolume.getDfsUsed() > used);
}
 
Developer: naver, Project: hadoop, Lines: 43, Source: TestDataNodeHotSwapVolumes.java


Example 6: setVolumeFull

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
private void setVolumeFull(DataNode dn, StorageType type) {
  try (FsDatasetSpi.FsVolumeReferences refs = dn.getFSDataset()
      .getFsVolumeReferences()) {
    for (FsVolumeSpi fvs : refs) {
      FsVolumeImpl volume = (FsVolumeImpl) fvs;
      if (volume.getStorageType() == type) {
        LOG.info("setCapacity to 0 for [" + volume.getStorageType() + "]"
            + volume.getStorageID());
        volume.setCapacityForTesting(0);
      }
    }
  } catch (IOException e) {
    LOG.error("Unexpected exception by closing FsVolumeReference", e);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 16, Source: TestStorageMover.java


Example 7: getVolume

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
/** Get the FsVolume on the given basePath */
private FsVolumeImpl getVolume(DataNode dn, File basePath)
    throws IOException {
  try (FsDatasetSpi.FsVolumeReferences volumes =
    dn.getFSDataset().getFsVolumeReferences()) {
    for (FsVolumeSpi vol : volumes) {
      if (vol.getBasePath().equals(basePath.getPath())) {
        return (FsVolumeImpl) vol;
      }
    }
  }
  return null;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 14, Source: TestDataNodeHotSwapVolumes.java


Example 8: submitSyncFileRangeRequest

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
public void submitSyncFileRangeRequest(FsVolumeImpl volume,
    final FileDescriptor fd, final long offset, final long nbytes,
    final int flags) {
  execute(volume.getCurrentDir(), new Runnable() {
    @Override
    public void run() {
      try {
        NativeIO.POSIX.syncFileRangeIfPossible(fd, offset, nbytes, flags);
      } catch (NativeIOException e) {
        LOG.warn("sync_file_range error", e);
      }
    }
  });
}
 
Developer: bikash, Project: PDHC, Lines: 15, Source: FsDatasetAsyncDiskService.java


Example 9: deleteAsync

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
/**
 * Delete the block file and meta file from the disk asynchronously, adjust
 * dfsUsed statistics accordingly.
 */
void deleteAsync(FsVolumeImpl volume, File blockFile, File metaFile,
    ExtendedBlock block, String trashDirectory) {
  LOG.info("Scheduling " + block.getLocalBlock()
      + " file " + blockFile + " for deletion");
  ReplicaFileDeleteTask deletionTask = new ReplicaFileDeleteTask(
      volume, blockFile, metaFile, block, trashDirectory);
  execute(volume.getCurrentDir(), deletionTask);
}
 
Developer: bikash, Project: PDHC, Lines: 13, Source: FsDatasetAsyncDiskService.java


Example 10: ReplicaFileDeleteTask

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
ReplicaFileDeleteTask(FsVolumeImpl volume, File blockFile,
    File metaFile, ExtendedBlock block, String trashDirectory) {
  this.volume = volume;
  this.blockFile = blockFile;
  this.metaFile = metaFile;
  this.block = block;
  this.trashDirectory = trashDirectory;
}
 
Developer: bikash, Project: PDHC, Lines: 9, Source: FsDatasetAsyncDiskService.java


Example 11: testDirectlyReloadAfterCheckDiskError

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
/**
 * Verify that {@link DataNode#checkDiskErrors()} removes all metadata in
 * DataNode upon a volume failure. Thus we can run reconfig on the same
 * configuration to reload the new volume on the same directory as the failed one.
 */
@Test(timeout=60000)
public void testDirectlyReloadAfterCheckDiskError()
    throws IOException, TimeoutException, InterruptedException,
    ReconfigurationException {
  // The test uses DataNodeTestUtils#injectDataDirFailure() to simulate
  // volume failures which is currently not supported on Windows.
  assumeTrue(!Path.WINDOWS);

  startDFSCluster(1, 2);
  createFile(new Path("/test"), 32, (short)2);

  DataNode dn = cluster.getDataNodes().get(0);
  final String oldDataDir = dn.getConf().get(DFS_DATANODE_DATA_DIR_KEY);
  File dirToFail = new File(cluster.getDataDirectory(), "data1");

  FsVolumeImpl failedVolume = getVolume(dn, dirToFail);
  assertTrue("No FsVolume was found for " + dirToFail,
      failedVolume != null);
  long used = failedVolume.getDfsUsed();

  DataNodeTestUtils.injectDataDirFailure(dirToFail);
  // Call and wait for the DataNode to detect the disk failure.
  long lastDiskErrorCheck = dn.getLastDiskErrorCheck();
  dn.checkDiskErrorAsync();
  while (dn.getLastDiskErrorCheck() == lastDiskErrorCheck) {
    Thread.sleep(100);
  }

  createFile(new Path("/test1"), 32, (short)2);
  assertEquals(used, failedVolume.getDfsUsed());

  DataNodeTestUtils.restoreDataDirFromFailure(dirToFail);
  dn.reconfigurePropertyImpl(DFS_DATANODE_DATA_DIR_KEY, oldDataDir);

  createFile(new Path("/test2"), 32, (short)2);
  FsVolumeImpl restoredVolume = getVolume(dn, dirToFail);
  assertTrue(restoredVolume != null);
  assertTrue(restoredVolume != failedVolume);
  // More data has been written to this volume.
  assertTrue(restoredVolume.getDfsUsed() > used);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 47, Source: TestDataNodeHotSwapVolumes.java


Example 12: onCompleteLazyPersist

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
@Override
public void onCompleteLazyPersist(String bpId, long blockId,
    long creationTime, File[] savedFiles, FsVolumeImpl targetVolume) {
  throw new UnsupportedOperationException();
}
 
Developer: yncxcw, Project: FlexMap, Lines: 6, Source: SimulatedFSDataset.java


Example 13: onCompleteLazyPersist

import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl; // import the required package/class
/**
 * Callback from RamDiskAsyncLazyPersistService upon async lazy persist task end.
 */
public void onCompleteLazyPersist(String bpId, long blockId,
    long creationTime, File[] savedFiles, FsVolumeImpl targetVolume);
 
Developer: yncxcw, Project: FlexMap, Lines: 6, Source: FsDatasetSpi.java



Note: The org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar source/documentation platforms. The snippets were selected from projects contributed by open-source developers, and copyright remains with the original authors. Consult each project's license before distributing or using the code, and do not republish without permission.

