Java BlockReaderFactory Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.BlockReaderFactory. If you have been wondering what BlockReaderFactory is for, or how to use it in practice, the curated examples below should help.



The BlockReaderFactory class belongs to the org.apache.hadoop.hdfs package. 17 code examples of the class are shown below, sorted by popularity by default. Note that the examples are drawn from several Hadoop forks and versions, so the exact BlockReaderFactory API (static newBlockReader overloads versus builder-style setters) varies between them.
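Taken together, the examples share one shape: resolve the datanode's data-transfer address, build a BlockReader for the target block, then read from it or close it. Here is a minimal composite sketch of the builder-style API used in Examples 6 to 9 below. It is an illustration rather than a drop-in utility: the method name readOneReplica, the client name, and the block-pool id are placeholders, and the imports match the Hadoop 2.x-era private client API shown in Example 7, which differs in other releases.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.BlockReader;
import org.apache.hadoop.hdfs.BlockReaderFactory;
import org.apache.hadoop.hdfs.ClientContext;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.RemotePeerFactory;
import org.apache.hadoop.hdfs.net.Peer;
import org.apache.hadoop.hdfs.net.TcpPeerServer;
import org.apache.hadoop.hdfs.protocol.DatanodeID;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
import org.apache.hadoop.hdfs.server.datanode.CachingStrategy;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.token.Token;

/**
 * Hedged sketch: open and immediately close a BlockReader for one replica of
 * lblock, mirroring Example 7 below. Succeeds silently or throws IOException.
 */
static void readOneReplica(final Configuration conf, DatanodeInfo datanode,
    LocatedBlock lblock) throws IOException {
  ExtendedBlock block = lblock.getBlock();
  InetSocketAddress targetAddr =
      NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
      setInetSocketAddress(targetAddr).
      setBlock(block).
      // the "file name" is synthetic: derived from address, pool id and block id
      setFileName(BlockReaderFactory.getFileName(targetAddr,
          "example-blockpoolid", block.getBlockId())).
      setBlockToken(lblock.getBlockToken()).
      setStartOffset(0).
      setLength(-1).  // -1 means read to the end of the block
      setVerifyChecksum(true).
      setClientName("ExampleClient").
      setDatanodeInfo(datanode).
      setCachingStrategy(CachingStrategy.newDefaultStrategy()).
      setClientCacheContext(ClientContext.getFromConf(conf)).
      setConfiguration(conf).
      setRemotePeerFactory(new RemotePeerFactory() {
        @Override
        public Peer newConnectedPeer(InetSocketAddress addr,
            Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
            throws IOException {
          Peer peer = null;
          Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
          try {
            sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
            sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
            peer = TcpPeerServer.peerFromSocket(sock);
          } finally {
            if (peer == null) {
              IOUtils.closeSocket(sock);  // do not leak the socket on failure
            }
          }
          return peer;
        }
      }).
      build();
  blockReader.close();
}

The RemotePeerFactory callback supplies a plain TCP connection when the factory cannot satisfy the read through a local short-circuit path; Examples 4 and 5 exercise that short-circuit machinery directly.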

Example 1: accessBlock

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode the data node hosting the replica
 * @param lblock the located block to read
 * @throws IOException if the block cannot be accessed
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
    
  s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  String file = BlockReaderFactory.getFileName(targetAddr, 
      "test-blockpoolid",
      block.getBlockId());
  BlockReader blockReader =
    BlockReaderFactory.newBlockReader(new DFSClient.Conf(conf), file, block,
      lblock.getBlockToken(), 0, -1, true, "TestDataNodeVolumeFailure",
      TcpPeerServer.peerFromSocket(s), datanode, null, null, null, false);
  blockReader.close();
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 28 | Source: TestDataNodeVolumeFailure.java


Example 2: accessBlock

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 *
 * @param datanode the data node hosting the replica
 * @param lblock the located block to read
 * @throws IOException if the block cannot be accessed
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
    throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  ExtendedBlock block = lblock.getBlock();

  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  String file = BlockReaderFactory
      .getFileName(targetAddr, "test-blockpoolid", block.getBlockId());
  BlockReaderFactory
      .newBlockReader(conf, s, file, block, lblock.getBlockToken(), 0, -1,
          null);

  // nothing to assert here; if the read fails, newBlockReader throws an exception
}
 
Developer: hopshadoop | Project: hops | Lines: 28 | Source: TestDataNodeVolumeFailure.java


Example 3: accessBlock

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode the data node hosting the replica
 * @param lblock the located block to read
 * @throws IOException if the block cannot be accessed
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
    
  s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  String file = BlockReaderFactory.getFileName(targetAddr, 
      "test-blockpoolid",
      block.getBlockId());
  BlockReader blockReader =
    BlockReaderFactory.newBlockReader(new DFSClient.Conf(conf), file, block,
      lblock.getBlockToken(), 0, -1, true, "TestDataNodeVolumeFailure",
      TcpPeerServer.peerFromSocket(s), datanode, null, null, null, false,
      CachingStrategy.newDefaultStrategy());
  blockReader.close();
}
 
Developer: chendave | Project: hadoop-TCP | Lines: 29 | Source: TestDataNodeVolumeFailure.java


Example 4: testDataXceiverCleansUpSlotsOnFailure

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
@Test(timeout=60000)
public void testDataXceiverCleansUpSlotsOnFailure() throws Exception {
  BlockReaderTestUtil.enableShortCircuitShmTracing();
  TemporarySocketDirectory sockDir = new TemporarySocketDirectory();
  Configuration conf = createShortCircuitConf(
      "testDataXceiverCleansUpSlotsOnFailure", sockDir);
  conf.setLong(
      HdfsClientConfigKeys.Read.ShortCircuit.STREAMS_CACHE_EXPIRY_MS_KEY,
      1000000000L);
  MiniDFSCluster cluster =
      new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
  cluster.waitActive();
  DistributedFileSystem fs = cluster.getFileSystem();
  final Path TEST_PATH1 = new Path("/test_file1");
  final Path TEST_PATH2 = new Path("/test_file2");
  final int TEST_FILE_LEN = 4096;
  final int SEED = 0xFADE1;
  DFSTestUtil.createFile(fs, TEST_PATH1, TEST_FILE_LEN,
      (short)1, SEED);
  DFSTestUtil.createFile(fs, TEST_PATH2, TEST_FILE_LEN,
      (short)1, SEED);

  // The first read should allocate one shared memory segment and slot.
  DFSTestUtil.readFileBuffer(fs, TEST_PATH1);

  // The second read should fail, and we should only have 1 segment and 1 slot
  // left.
  BlockReaderFactory.setFailureInjectorForTesting(
      new TestCleanupFailureInjector());
  try {
    DFSTestUtil.readFileBuffer(fs, TEST_PATH2);
  } catch (Throwable t) {
    GenericTestUtils.assertExceptionContains("TCP reads were disabled for " +
        "testing, but we failed to do a non-TCP read.", t);
  }
  checkNumberOfSegmentsAndSlots(1, 1,
      cluster.getDataNodes().get(0).getShortCircuitRegistry());
  cluster.shutdown();
  sockDir.close();
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 41 | Source: TestShortCircuitCache.java


Example 5: testPreReceiptVerificationDfsClientCanDoScr

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
@Test(timeout=60000)
public void testPreReceiptVerificationDfsClientCanDoScr() throws Exception {
  BlockReaderTestUtil.enableShortCircuitShmTracing();
  TemporarySocketDirectory sockDir = new TemporarySocketDirectory();
  Configuration conf = createShortCircuitConf(
      "testPreReceiptVerificationDfsClientCanDoScr", sockDir);
  conf.setLong(
      HdfsClientConfigKeys.Read.ShortCircuit.STREAMS_CACHE_EXPIRY_MS_KEY,
      1000000000L);
  MiniDFSCluster cluster =
      new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
  cluster.waitActive();
  DistributedFileSystem fs = cluster.getFileSystem();
  BlockReaderFactory.setFailureInjectorForTesting(
      new TestPreReceiptVerificationFailureInjector());
  final Path TEST_PATH1 = new Path("/test_file1");
  DFSTestUtil.createFile(fs, TEST_PATH1, 4096, (short)1, 0xFADE2);
  final Path TEST_PATH2 = new Path("/test_file2");
  DFSTestUtil.createFile(fs, TEST_PATH2, 4096, (short)1, 0xFADE2);
  DFSTestUtil.readFileBuffer(fs, TEST_PATH1);
  DFSTestUtil.readFileBuffer(fs, TEST_PATH2);
  ShortCircuitRegistry registry =
      cluster.getDataNodes().get(0).getShortCircuitRegistry();
  registry.visit(new ShortCircuitRegistry.Visitor() {
    @Override
    public void accept(HashMap<ShmId, RegisteredShm> segments,
                       HashMultimap<ExtendedBlockId, Slot> slots) {
      Assert.assertEquals(1, segments.size());
      Assert.assertEquals(2, slots.size());
    }
  });
  cluster.shutdown();
  sockDir.close();
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 35 | Source: TestShortCircuitCache.java


Example 6: tryRead

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
private static void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr, 
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr,
              Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
              peer = TcpPeerServer.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: naver | Project: hadoop | Lines: 67 | Source: TestBlockTokenWithDFS.java


Example 7: accessBlock

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode the data node hosting the replica
 * @param lblock the located block to read
 * @throws IOException if the block cannot be accessed
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr,
          Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: naver | Project: hadoop | Lines: 50 | Source: TestDataNodeVolumeFailure.java


Example 8: tryRead

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
protected void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DfsClientConf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr,
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setTracer(FsTracer.get(conf)).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr,
              Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsConstants.READ_TIMEOUT);
              peer = DFSUtilClient.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 68 | Source: TestBlockTokenWithDFS.java


Example 9: accessBlock

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode the data node hosting the replica
 * @param lblock the located block to read
 * @throws IOException if the block cannot be accessed
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DfsClientConf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setTracer(FsTracer.get(conf)).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr,
          Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsConstants.READ_TIMEOUT);
          peer = DFSUtilClient.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 51 | Source: TestDataNodeVolumeFailure.java


Example 10: streamBlockInAscii

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
public static void streamBlockInAscii(InetSocketAddress addr, String poolId,
    long blockId, Token<BlockTokenIdentifier> blockToken, long genStamp,
    long blockSize, long offsetIntoBlock, long chunkSizeToView,
    JspWriter out, Configuration conf, DFSClient.Conf dfsConf,
    DataEncryptionKey encryptionKey)
        throws IOException {
  if (chunkSizeToView == 0) return;
  Socket s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(addr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
    
  int amtToRead = (int)Math.min(chunkSizeToView, blockSize - offsetIntoBlock);
    
  // Use the block name for file name.
  String file = BlockReaderFactory.getFileName(addr, poolId, blockId);
  BlockReader blockReader = BlockReaderFactory.newBlockReader(dfsConf, file,
      new ExtendedBlock(poolId, blockId, 0, genStamp), blockToken,
      offsetIntoBlock, amtToRead,  true,
      "JspHelper", TcpPeerServer.peerFromSocketAndKey(s, encryptionKey),
      new DatanodeID(addr.getAddress().getHostAddress(),
          addr.getHostName(), poolId, addr.getPort(), 0, 0), null,
          null, null, false);
      
  final byte[] buf = new byte[amtToRead];
  int readOffset = 0;
  int retries = 2;
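  // Note: readFully either fills the whole requested range or throws, so a
  // single successful pass ends the loop; an IOException spends one retry
  // and re-attempts the same range.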
  while ( amtToRead > 0 ) {
    int numRead = amtToRead;
    try {
      blockReader.readFully(buf, readOffset, amtToRead);
    }
    catch (IOException e) {
      retries--;
      if (retries == 0)
        throw new IOException("Could not read data from datanode");
      continue;
    }
    amtToRead -= numRead;
    readOffset += numRead;
  }
  blockReader.close();
  out.print(HtmlQuoting.quoteHtmlChars(new String(buf, Charsets.UTF_8)));
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 44 | Source: JspHelper.java


Example 11: tryRead

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
private static void tryRead(Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
    s = NetUtils.getDefaultSocketFactory(conf).createSocket();
    s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
    s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

    String file = BlockReaderFactory.getFileName(targetAddr, 
        "test-blockpoolid", block.getBlockId());
    blockReader = BlockReaderFactory.newBlockReader(
        new DFSClient.Conf(conf), file, block, lblock.getBlockToken(), 0, -1,
        true, "TestBlockTokenWithDFS", TcpPeerServer.peerFromSocket(s),
        nodes[0], null, null, null, false);

  } catch (IOException ex) {
    if (ex instanceof InvalidBlockTokenException) {
      assertFalse("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", shouldSucceed);
      return;
    }
    fail("OP_READ_BLOCK failed due to reasons other than access token: "
        + StringUtils.stringifyException(ex));
  } finally {
    if (s != null) {
      try {
        s.close();
      } catch (IOException iex) {
      } finally {
        s = null;
      }
    }
  }
  if (blockReader == null) {
    fail("OP_READ_BLOCK failed due to reasons other than access token");
  }
  assertTrue("OP_READ_BLOCK: access token is valid, "
      + "when it is expected to be invalid", shouldSucceed);
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 45 | Source: TestBlockTokenWithDFS.java


Example 12: streamBlockInAscii

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
public static void streamBlockInAscii(InetSocketAddress addr, String poolId,
    long blockId, Token<BlockTokenIdentifier> blockToken, long genStamp,
    long blockSize, long offsetIntoBlock, long chunkSizeToView, JspWriter out,
    Configuration conf, DataEncryptionKey encryptionKey) throws IOException {
  if (chunkSizeToView == 0) {
    return;
  }
  Socket s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(addr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  int amtToRead =
      (int) Math.min(chunkSizeToView, blockSize - offsetIntoBlock);

  // Use the block name for file name.
  String file = BlockReaderFactory.getFileName(addr, poolId, blockId);
  BlockReader blockReader = BlockReaderFactory.newBlockReader(conf, s, file,
      new ExtendedBlock(poolId, blockId, 0, genStamp), blockToken,
      offsetIntoBlock, amtToRead, encryptionKey);

  byte[] buf = new byte[(int) amtToRead];
  int readOffset = 0;
  int retries = 2;
  while (amtToRead > 0) {
    int numRead = amtToRead;
    try {
      blockReader.readFully(buf, readOffset, amtToRead);
    } catch (IOException e) {
      retries--;
      if (retries == 0) {
        throw new IOException("Could not read data from datanode");
      }
      continue;
    }
    amtToRead -= numRead;
    readOffset += numRead;
  }
  blockReader = null;
  s.close();
  out.print(HtmlQuoting.quoteHtmlChars(new String(buf, Charsets.UTF_8)));
}
 
Developer: hopshadoop | Project: hops | Lines: 42 | Source: JspHelper.java


Example 13: tryRead

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
private static void tryRead(Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
    s = NetUtils.getDefaultSocketFactory(conf).createSocket();
    s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
    s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

    String file = BlockReaderFactory
        .getFileName(targetAddr, "test-blockpoolid", block.getBlockId());
    blockReader = BlockReaderFactory
        .newBlockReader(conf, s, file, block, lblock.getBlockToken(), 0, -1,
            null);

  } catch (IOException ex) {
    if (ex instanceof InvalidBlockTokenException) {
      assertFalse("OP_READ_BLOCK: access token is invalid, " +
          "when it is expected to be valid", shouldSucceed);
      return;
    }
    fail("OP_READ_BLOCK failed due to reasons other than access token: " +
        StringUtils.stringifyException(ex));
  } finally {
    if (s != null) {
      try {
        s.close();
      } catch (IOException iex) {
      } finally {
        s = null;
      }
    }
  }
  if (blockReader == null) {
    fail("OP_READ_BLOCK failed due to reasons other than access token");
  }
  assertTrue("OP_READ_BLOCK: access token is valid, " +
      "when it is expected to be invalid", shouldSucceed);
}
 
Developer: hopshadoop | Project: hops | Lines: 44 | Source: TestBlockTokenWithDFS.java


Example 14: streamBlockInAscii

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
public static void streamBlockInAscii(InetSocketAddress addr, String poolId,
    long blockId, Token<BlockTokenIdentifier> blockToken, long genStamp,
    long blockSize, long offsetIntoBlock, long chunkSizeToView,
    JspWriter out, Configuration conf, DFSClient.Conf dfsConf,
    DataEncryptionKey encryptionKey)
        throws IOException {
  if (chunkSizeToView == 0) return;
  Socket s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(addr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
    
  int amtToRead = (int)Math.min(chunkSizeToView, blockSize - offsetIntoBlock);
    
  // Use the block name for file name.
  String file = BlockReaderFactory.getFileName(addr, poolId, blockId);
  BlockReader blockReader = BlockReaderFactory.newBlockReader(dfsConf, file,
      new ExtendedBlock(poolId, blockId, 0, genStamp), blockToken,
      offsetIntoBlock, amtToRead,  true,
      "JspHelper", TcpPeerServer.peerFromSocketAndKey(s, encryptionKey),
      new DatanodeID(addr.getAddress().getHostAddress(),
          addr.getHostName(), poolId, addr.getPort(), 0, 0, 0), null,
          null, null, false, CachingStrategy.newDefaultStrategy());
      
  final byte[] buf = new byte[amtToRead];
  int readOffset = 0;
  int retries = 2;
  while ( amtToRead > 0 ) {
    int numRead = amtToRead;
    try {
      blockReader.readFully(buf, readOffset, amtToRead);
    }
    catch (IOException e) {
      retries--;
      if (retries == 0)
        throw new IOException("Could not read data from datanode");
      continue;
    }
    amtToRead -= numRead;
    readOffset += numRead;
  }
  blockReader.close();
  out.print(HtmlQuoting.quoteHtmlChars(new String(buf, Charsets.UTF_8)));
}
 
Developer: chendave | Project: hadoop-TCP | Lines: 44 | Source: JspHelper.java


Example 15: tryRead

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
private static void tryRead(Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
    s = NetUtils.getDefaultSocketFactory(conf).createSocket();
    s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
    s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

    String file = BlockReaderFactory.getFileName(targetAddr, 
        "test-blockpoolid", block.getBlockId());
    blockReader = BlockReaderFactory.newBlockReader(
        new DFSClient.Conf(conf), file, block, lblock.getBlockToken(), 0, -1,
        true, "TestBlockTokenWithDFS", TcpPeerServer.peerFromSocket(s),
        nodes[0], null, null, null, false,
        CachingStrategy.newDefaultStrategy());

  } catch (IOException ex) {
    if (ex instanceof InvalidBlockTokenException) {
      assertFalse("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", shouldSucceed);
      return;
    }
    fail("OP_READ_BLOCK failed due to reasons other than access token: "
        + StringUtils.stringifyException(ex));
  } finally {
    if (s != null) {
      try {
        s.close();
      } catch (IOException iex) {
      } finally {
        s = null;
      }
    }
  }
  if (blockReader == null) {
    fail("OP_READ_BLOCK failed due to reasons other than access token");
  }
  assertTrue("OP_READ_BLOCK: access token is valid, "
      + "when it is expected to be invalid", shouldSucceed);
}
 
Developer: chendave | Project: hadoop-TCP | Lines: 46 | Source: TestBlockTokenWithDFS.java


Example 16: tryRead

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
private static void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr, 
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
              peer = TcpPeerServer.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: Seagate | Project: hadoop-on-lustre2 | Lines: 66 | Source: TestBlockTokenWithDFS.java


Example 17: accessBlock

import org.apache.hadoop.hdfs.BlockReaderFactory; // import the required package/class
/**
 * Try to access a block on a data node; throws an exception on failure.
 * @param datanode the data node hosting the replica
 * @param lblock the located block to read
 * @throws IOException if the block cannot be accessed
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: Seagate | Project: hadoop-on-lustre2 | Lines: 49 | Source: TestDataNodeVolumeFailure.java



Note: The org.apache.hadoop.hdfs.BlockReaderFactory examples in this article are collected from source code and documentation hosted on GitHub, MSDocs, and similar platforms. The snippets were selected from open-source projects contributed by their respective authors, and copyright remains with them; consult each project's license before redistributing or reusing the code. Do not reproduce this compilation without permission.

