
Java FiTestUtil Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.fi.FiTestUtil. If you are wondering what FiTestUtil is for and how to use it, the curated class examples below should help.



The FiTestUtil class belongs to the org.apache.hadoop.fi package. Twenty code examples of the class are shown below, sorted by popularity by default.
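All of the examples below follow the same fault-injection pattern: initialize a test object, attach an action to a named injection point (e.g. `t.fiReceiverOpWriteBlock.set(a)`), run a write, then assert success. The following is a minimal self-contained sketch of that pattern; the `InjectionPoint`/`Action` names here are simplified stand-ins for illustration, not the real Hadoop FI API.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the Hadoop fault-injection pattern: a test holds
// named injection points, and an Action fires when the instrumented code
// path reaches that point.
public class FiPatternSketch {
    interface Action { void run(String datanodeId); }

    static class InjectionPoint {
        private Action action;
        void set(Action a) { this.action = a; }
        // Called by the instrumented code path; a no-op unless an action is set.
        void fire(String id) { if (action != null) action.run(id); }
    }

    static final InjectionPoint fiReceiverOpWriteBlock = new InjectionPoint();
    static final List<String> log = new ArrayList<>();

    public static void main(String[] args) {
        // Attach an action, analogous to t.fiReceiverOpWriteBlock.set(a).
        fiReceiverOpWriteBlock.set(id -> log.add("fired at " + id));
        // The instrumented code path (e.g. a datanode receiving a block)
        // triggers the injection point.
        fiReceiverOpWriteBlock.fire("datanode-0");
        if (!log.get(0).equals("fired at datanode-0"))
            throw new AssertionError("injection point did not fire");
        System.out.println("OK");
    }
}
```

In the real tests, `DataTransferTestUtil.initTest()` plays the role of resetting this shared state between test methods, and AspectJ advice woven into the datanode code calls the injection points.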

Example 1: run

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
@Override
public void run(NodeBytes nb) throws IOException {
  synchronized (rcv) {
    rcv.add(nb);
    for (NodeBytes n : rcv) {
      long counterPartsBytes = -1;
      NodeBytes counterPart = null;
      if (ack.size() > rcv.indexOf(n)) {
        counterPart = ack.get(rcv.indexOf(n));
        counterPartsBytes = counterPart.bytes;
      }
      assertTrue("FI: Wrong receiving length",
          counterPartsBytes <= n.bytes);
      if(FiTestUtil.LOG.isDebugEnabled()) {
        FiTestUtil.LOG.debug("FI: before compare of Recv bytes. Expected "
            + n.bytes + ", got " + counterPartsBytes);
      }
    }
  }
}
 
Project: naver/hadoop | Lines: 21 | Source: PipelinesTestUtil.java


Example 2: run

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
@Override
public void run(NodeBytes nb) throws IOException {
  synchronized (rcv) {
    rcv.add(nb);
    for (NodeBytes n : rcv) {
      long counterPartsBytes = -1;
      NodeBytes counterPart = null;
      if (ack.size() > rcv.indexOf(n)) {
        counterPart = ack.get(rcv.indexOf(n));
        counterPartsBytes = counterPart.bytes;
      }
      assertTrue("FI: Wrong receiving length",
          counterPartsBytes <= n.bytes);
      if (FiTestUtil.LOG.isDebugEnabled()) {
        FiTestUtil.LOG.debug(
            "FI: before compare of Recv bytes. Expected " + n.bytes +
                ", got " + counterPartsBytes);
      }
    }
  }
}
 
Project: hopshadoop/hops | Lines: 22 | Source: PipelinesTestUtil.java


Example 3: run

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
/**
 * {@inheritDoc}
 */
public void run(NodeBytes nb) throws IOException {
  synchronized (ack) {
    ack.add(nb);
    for (NodeBytes n : ack) {
      NodeBytes counterPart = null;
      long counterPartsBytes = -1;
      if (rcv.size() > ack.indexOf(n)) { 
        counterPart = rcv.get(ack.indexOf(n));
        counterPartsBytes = counterPart.bytes;
      }
      assertTrue("FI: Wrong acknowledged length",
          counterPartsBytes == n.bytes);
      if(FiTestUtil.LOG.isDebugEnabled()) {
        FiTestUtil.LOG.debug(
            "FI: before compare of Acked bytes. Expected " +
            n.bytes + ", got " + counterPartsBytes);
      }
    }
  }
}
 
Project: Nextzero/hadoop-2.6.0-cdh5.4.3 | Lines: 24 | Source: PipelinesTestUtil.java
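Examples 1 through 3 maintain two parallel lists, `rcv` and `ack`, and check a per-index invariant: the acknowledged byte count at an index must never exceed the received byte count there (and, on the ack side, must eventually equal it). The following is a small standalone sketch of the receive-side invariant, using plain `Long` lists in place of the `NodeBytes` entries:

```java
import java.util.List;

public class PipelineInvariantSketch {
    // Returns true iff, at every index present in both lists, the
    // acknowledged byte count does not exceed the received byte count
    // (the per-entry invariant Examples 1-3 assert on the receive side).
    public static boolean ackNeverExceedsRecv(List<Long> rcv, List<Long> ack) {
        for (int i = 0; i < Math.min(rcv.size(), ack.size()); i++) {
            if (ack.get(i) > rcv.get(i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<Long> rcv = List.of(512L, 1024L, 1536L);
        List<Long> ack = List.of(512L, 1024L);  // acks may lag behind receives
        System.out.println(ackNeverExceedsRecv(rcv, ack)); // true
    }
}
```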


Example 4: initLoggers

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void initLoggers() {
  ((Log4JLogger) NameNode.stateChangeLog).getLogger().setLevel(Level.ALL);
  ((Log4JLogger) LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.ALL);
  ((Log4JLogger) DataNode.LOG).getLogger().setLevel(Level.ALL);
  ((Log4JLogger) TestFiPipelines.LOG).getLogger().setLevel(Level.ALL);
  ((Log4JLogger) DFSClient.LOG).getLogger().setLevel(Level.ALL);
  ((Log4JLogger) FiTestUtil.LOG).getLogger().setLevel(Level.ALL);
  ((Log4JLogger) BlockReceiverAspects.LOG).getLogger().setLevel(Level.ALL);
  ((Log4JLogger) DFSClientAspects.LOG).getLogger().setLevel(Level.ALL);
}
 
Project: naver/hadoop | Lines: 11 | Source: TestFiPipelines.java


Example 5: runSlowDatanodeTest

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void runSlowDatanodeTest(String methodName, SleepAction a
                ) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest)DataTransferTestUtil.initTest();
  t.fiCallReceivePacket.set(a);
  t.fiReceiverOpWriteBlock.set(a);
  t.fiStatusRead.set(a);
  write1byte(methodName);
}
 
Project: naver/hadoop | Lines: 10 | Source: TestFiDataTransferProtocol.java


Example 6: runReceiverOpWriteBlockTest

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void runReceiverOpWriteBlockTest(String methodName,
    int errorIndex, Action<DatanodeID, IOException> a) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  t.fiReceiverOpWriteBlock.set(a);
  t.fiPipelineInitErrorNonAppend.set(new VerificationAction(methodName,
      errorIndex));
  write1byte(methodName);
  Assert.assertTrue(t.isSuccess());
}
 
Project: naver/hadoop | Lines: 12 | Source: TestFiDataTransferProtocol.java


Example 7: runStatusReadTest

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void runStatusReadTest(String methodName, int errorIndex,
    Action<DatanodeID, IOException> a) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  t.fiStatusRead.set(a);
  t.fiPipelineInitErrorNonAppend.set(new VerificationAction(methodName,
      errorIndex));
  write1byte(methodName);
  Assert.assertTrue(t.isSuccess());
}
 
Project: naver/hadoop | Lines: 12 | Source: TestFiDataTransferProtocol.java


Example 8: runCallWritePacketToDisk

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void runCallWritePacketToDisk(String methodName,
    int errorIndex, Action<DatanodeID, IOException> a) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest)DataTransferTestUtil.initTest();
  t.fiCallWritePacketToDisk.set(a);
  t.fiPipelineErrorAfterInit.set(new VerificationAction(methodName, errorIndex));
  write1byte(methodName);
  Assert.assertTrue(t.isSuccess());
}
 
Project: naver/hadoop | Lines: 10 | Source: TestFiDataTransferProtocol.java


Example 9: runPipelineCloseTest

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void runPipelineCloseTest(String methodName,
    Action<DatanodeID, IOException> a) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  t.fiPipelineClose.set(a);
  TestFiDataTransferProtocol.write1byte(methodName);
}
 
Project: naver/hadoop | Lines: 9 | Source: TestFiPipelineClose.java


Example 10: runPipelineCloseAck

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void runPipelineCloseAck(String name, int i, DataNodeAction a
    ) throws IOException {
  FiTestUtil.LOG.info("Running " + name + " ...");
  final DataTransferTest t = (DataTransferTest)DataTransferTestUtil.initTest();
  final MarkerConstraint marker = new MarkerConstraint(name);
  t.fiPipelineClose.set(new DatanodeMarkingAction(name, i, marker));
  t.fiPipelineAck.set(new ConstraintSatisfactionAction<DatanodeID, IOException>(a, marker));
  TestFiDataTransferProtocol.write1byte(name);
}
 
Project: naver/hadoop | Lines: 10 | Source: TestFiPipelineClose.java


Example 11: runBlockFileCloseTest

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private static void runBlockFileCloseTest(String methodName,
    Action<DatanodeID, IOException> a) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  t.fiBlockFileClose.set(a);
  TestFiDataTransferProtocol.write1byte(methodName);
}
 
Project: naver/hadoop | Lines: 9 | Source: TestFiPipelineClose.java


Example 12: writeSeveralPackets

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
/**
 * 1. create a file with dfs
 * 2. write MIN_N_PACKET to MAX_N_PACKET packets
 * 3. close the file
 * 4. open the same file
 * 5. read the bytes back and compare the results
 */
private static void writeSeveralPackets(String methodName) throws IOException {
  final Random r = FiTestUtil.RANDOM.get();
  final int nPackets = FiTestUtil.nextRandomInt(MIN_N_PACKET, MAX_N_PACKET + 1);
  final int lastPacketSize = FiTestUtil.nextRandomInt(1, PACKET_SIZE + 1);
  final int size = (nPackets - 1)*PACKET_SIZE + lastPacketSize;

  FiTestUtil.LOG.info("size=" + size + ", nPackets=" + nPackets
      + ", lastPacketSize=" + lastPacketSize);

  final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf
      ).numDataNodes(REPLICATION + 2).build();
  final FileSystem dfs = cluster.getFileSystem();
  try {
    final Path p = new Path("/" + methodName + "/foo");
    final FSDataOutputStream out = createFile(dfs, p);

    final long seed = r.nextLong();
    final Random ran = new Random(seed);
    ran.nextBytes(bytes);
    out.write(bytes, 0, size);
    out.close();

    final FSDataInputStream in = dfs.open(p);
    int totalRead = 0;
    int nRead = 0;
    while ((nRead = in.read(toRead, totalRead, size - totalRead)) > 0) {
      totalRead += nRead;
    }
    Assert.assertEquals("Cannot read file.", size, totalRead);
    for (int i = 0; i < size; i++) {
      Assert.assertTrue("File content differ.", bytes[i] == toRead[i]);
    }
  }
  finally {
    dfs.close();
    cluster.shutdown();
  }
}
 
Project: naver/hadoop | Lines: 46 | Source: TestFiDataTransferProtocol2.java
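The total write size in Example 12 comes from a simple piece of arithmetic: every packet is full-size except the last, which carries between 1 and PACKET_SIZE bytes, giving `size = (nPackets - 1) * PACKET_SIZE + lastPacketSize`. A small sketch of that calculation (the constant values below are illustrative, not the ones used by the Hadoop test):

```java
public class PacketSizeSketch {
    // All packets are full-size except the last, which carries
    // 1..packetSize bytes, as in Example 12's writeSeveralPackets.
    public static int totalSize(int nPackets, int packetSize, int lastPacketSize) {
        return (nPackets - 1) * packetSize + lastPacketSize;
    }

    public static void main(String[] args) {
        // e.g. 3 packets of 1024 bytes, last packet carrying 100 bytes
        int size = totalSize(3, 1024, 100);
        System.out.println(size); // 2*1024 + 100 = 2148
    }
}
```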


Example 13: runTest17_19

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private void runTest17_19(String methodName, int dnIndex)
    throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  initSlowDatanodeTest(t, new SleepAction(methodName, 0, 0, MAX_SLEEP));
  initSlowDatanodeTest(t, new SleepAction(methodName, 1, 0, MAX_SLEEP));
  initSlowDatanodeTest(t, new SleepAction(methodName, 2, 0, MAX_SLEEP));
  t.fiCallWritePacketToDisk.set(new CountdownDoosAction(methodName, dnIndex, 3));
  t.fiPipelineErrorAfterInit.set(new VerificationAction(methodName, dnIndex));
  writeSeveralPackets(methodName);
  Assert.assertTrue(t.isSuccess());
}
 
Project: naver/hadoop | Lines: 14 | Source: TestFiDataTransferProtocol2.java


Example 14: runTest29_30

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private void runTest29_30(String methodName, int dnIndex) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  initSlowDatanodeTest(t, new SleepAction(methodName, 0, 0, MAX_SLEEP));
  initSlowDatanodeTest(t, new SleepAction(methodName, 1, 0, MAX_SLEEP));
  initSlowDatanodeTest(t, new SleepAction(methodName, 2, 0, MAX_SLEEP));
  t.fiAfterDownstreamStatusRead.set(new CountdownOomAction(methodName, dnIndex, 3));
  t.fiPipelineErrorAfterInit.set(new VerificationAction(methodName, dnIndex));
  writeSeveralPackets(methodName);
  Assert.assertTrue(t.isSuccess());
}
 
Project: naver/hadoop | Lines: 13 | Source: TestFiDataTransferProtocol2.java


Example 15: runTest34_35

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
private void runTest34_35(String methodName, int dnIndex) throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  t.fiAfterDownstreamStatusRead.set(new CountdownSleepAction(methodName, dnIndex, 0, 3));
  t.fiPipelineErrorAfterInit.set(new VerificationAction(methodName, dnIndex));
  writeSeveralPackets(methodName);
  Assert.assertTrue(t.isSuccess());
}
 
Project: naver/hadoop | Lines: 10 | Source: TestFiDataTransferProtocol2.java


Example 16: pipeline_Fi_20

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
/**
 * Streaming: Client writes several packets with DN0 very slow. Client
 * finishes write successfully.
 */
@Test
public void pipeline_Fi_20() throws IOException {
  final String methodName = FiTestUtil.getMethodName();
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  initSlowDatanodeTest(t, new SleepAction(methodName, 0, MAX_SLEEP));
  writeSeveralPackets(methodName);
}
 
Project: naver/hadoop | Lines: 14 | Source: TestFiDataTransferProtocol2.java


Example 17: pipeline_Fi_21

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
/**
 * Streaming: Client writes several packets with DN1 very slow. Client
 * finishes write successfully.
 */
@Test
public void pipeline_Fi_21() throws IOException {
  final String methodName = FiTestUtil.getMethodName();
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  initSlowDatanodeTest(t, new SleepAction(methodName, 1, MAX_SLEEP));
  writeSeveralPackets(methodName);
}
 
Project: naver/hadoop | Lines: 14 | Source: TestFiDataTransferProtocol2.java


Example 18: pipeline_Fi_22

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
/**
 * Streaming: Client writes several packets with DN2 very slow. Client
 * finishes write successfully.
 */
@Test
public void pipeline_Fi_22() throws IOException {
  final String methodName = FiTestUtil.getMethodName();
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final DataTransferTest t = (DataTransferTest) DataTransferTestUtil
      .initTest();
  initSlowDatanodeTest(t, new SleepAction(methodName, 2, MAX_SLEEP));
  writeSeveralPackets(methodName);
}
 
Project: naver/hadoop | Lines: 14 | Source: TestFiDataTransferProtocol2.java


Example 19: runDiskErrorTest

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
/** Initializes a test and sets the required actions to be used later by
 * an injected advice.
 * @param conf mini cluster configuration
 * @param methodName name of the test method invoking this method
 * @param block_size required size of the file's block
 * @param a action to be set for the test
 * @throws IOException in case of any errors
 */
private static void runDiskErrorTest (final Configuration conf, 
    final String methodName, final int block_size, DerrAction a, int index,
    boolean trueVerification)
    throws IOException {
  FiTestUtil.LOG.info("Running " + methodName + " ...");
  final HFlushTest hft = (HFlushTest) FiHFlushTestUtil.initTest();
  hft.fiCallHFlush.set(a);
  hft.fiErrorOnCallHFlush.set(new DataTransferTestUtil.VerificationAction(methodName, index));
  TestHFlush.doTheJob(conf, methodName, block_size, (short)3);
  if (trueVerification)
    assertTrue("Some of expected conditions weren't detected", hft.isSuccess());
}
 
Project: naver/hadoop | Lines: 22 | Source: TestFiHFlush.java


Example 20: hFlushFi01_b

import org.apache.hadoop.fi.FiTestUtil; // import the required package/class
/** The test calls
 * {@link #runDiskErrorTest(Configuration, String, int, DerrAction, int, boolean)}
 * to make a number of writes across block boundaries.
 * hflush() is called after each write() during the pipeline's lifetime.
 * Thus, the injected fault ought to be triggered on the 0th datanode.
 */
@Test
public void hFlushFi01_b() throws IOException {
  final String methodName = FiTestUtil.getMethodName();
  Configuration conf = new HdfsConfiguration();
  int customPerChecksumSize = 512;
  int customBlockSize = customPerChecksumSize * 3;
  conf.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, customPerChecksumSize);
  conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, customBlockSize);
  runDiskErrorTest(conf, methodName, 
      customBlockSize, new DerrAction(methodName, 0), 0, true);
}
 
Project: naver/hadoop | Lines: 18 | Source: TestFiHFlush.java



Note: The org.apache.hadoop.fi.FiTestUtil examples in this article were collected from open-source projects hosted on platforms such as GitHub. Copyright in each code fragment remains with its original authors; consult the corresponding project's license before redistributing or reusing the code.

