Java TcpPeerServer Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.net.TcpPeerServer. If you are wondering what TcpPeerServer is for, how to call it, or what real-world usage looks like, the curated class examples below should help.



The TcpPeerServer class belongs to the org.apache.hadoop.hdfs.net package. Seventeen code examples are shown below, sorted by popularity by default.
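
Across the examples, the recurring pattern is the same: open a raw java.net.Socket to a datanode's transfer address, set a read timeout, wrap the socket in a Peer via TcpPeerServer.peerFromSocket (or peerFromSocketAndKey when SASL or data encryption is in play), and close the raw socket yourself only if the wrapping step fails. Below is a minimal, self-contained sketch of that pattern; the helper class and method names are hypothetical, while the Hadoop calls and constants are exactly those used in the examples that follow, assuming a Hadoop 2.x classpath where TcpPeerServer still exposes these static factories.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.net.Peer;
import org.apache.hadoop.hdfs.net.TcpPeerServer;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.net.NetUtils;

public class TcpPeerServerSketch {
  /** Hypothetical helper: connect to a datanode address and wrap the socket in a Peer. */
  public static Peer connect(Configuration conf, InetSocketAddress addr)
      throws IOException {
    Peer peer = null;
    Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
    try {
      sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
      sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
      peer = TcpPeerServer.peerFromSocket(sock); // the Peer now owns the socket
      return peer;
    } finally {
      if (peer == null) {
        IOUtils.closeSocket(sock); // wrapping failed: close the raw socket ourselves
      }
    }
  }
}

The examples that actually read block data then hand the resulting Peer to BlockReaderFactory, either through the older static newBlockReader(...) call (examples 3-6 and 11-14) or through the newer builder API with a RemotePeerFactory callback (examples 8-10 and 15-17).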

Example 1: newConnectedPeer

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
@Override // RemotePeerFactory
public Peer newConnectedPeer(InetSocketAddress addr,
    Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
    throws IOException {
  Peer peer = null;
  boolean success = false;
  Socket sock = null;
  try {
    sock = socketFactory.createSocket();
    NetUtils.connect(sock, addr,
      getRandomLocalInterfaceAddr(),
      dfsClientConf.socketTimeout);
    peer = TcpPeerServer.peerFromSocketAndKey(saslClient, sock, this,
        blockToken, datanodeId);
    peer.setReadTimeout(dfsClientConf.socketTimeout);
    success = true;
    return peer;
  } finally {
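    // success is still false here only if setup threw before returning;
    // release both the half-built peer and the raw socket in that case.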
    if (!success) {
      IOUtils.cleanup(LOG, peer);
      IOUtils.closeSocket(sock);
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 25, Source: DFSClient.java


Example 2: newTcpPeer

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
private Peer newTcpPeer(InetSocketAddress addr) throws IOException {
  Peer peer = null;
  boolean success = false;
  Socket sock = null;
  try {
    sock = dfsClient.socketFactory.createSocket();
    NetUtils.connect(sock, addr,
      dfsClient.getRandomLocalInterfaceAddr(),
      dfsClient.getConf().socketTimeout);
    peer = TcpPeerServer.peerFromSocketAndKey(sock, 
        dfsClient.getDataEncryptionKey());
    success = true;
    return peer;
  } finally {
    if (!success) {
      IOUtils.closeQuietly(peer);
      IOUtils.closeQuietly(sock);
    }
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 21, Source: DFSInputStream.java


Example 3: accessBlock

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Try to access a block on a data node; throws an IOException on failure.
 * @param datanode the datanode hosting the block
 * @param lblock the located block to read
 * @throws IOException if the block cannot be read
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
    
  s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  String file = BlockReaderFactory.getFileName(targetAddr, 
      "test-blockpoolid",
      block.getBlockId());
  BlockReader blockReader =
    BlockReaderFactory.newBlockReader(new DFSClient.Conf(conf), file, block,
      lblock.getBlockToken(), 0, -1, true, "TestDataNodeVolumeFailure",
      TcpPeerServer.peerFromSocket(s), datanode, null, null, null, false);
  blockReader.close();
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 28, Source: TestDataNodeVolumeFailure.java


Example 4: getBlockReader

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Get a BlockReader for the given block.
 */
public BlockReader getBlockReader(LocatedBlock testBlock, int offset, int lenToRead)
    throws IOException {
  InetSocketAddress targetAddr = null;
  Socket sock = null;
  ExtendedBlock block = testBlock.getBlock();
  DatanodeInfo[] nodes = testBlock.getLocations();
  targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
  sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
  sock.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  return BlockReaderFactory.newBlockReader(
    new DFSClient.Conf(conf),
    targetAddr.toString()+ ":" + block.getBlockId(), block,
    testBlock.getBlockToken(), 
    offset, lenToRead,
    true, "BlockReaderTestUtil", TcpPeerServer.peerFromSocket(sock),
    nodes[0], null, null, null, false);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 23, Source: BlockReaderTestUtil.java


Example 5: accessBlock

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Try to access a block on a data node; throws an IOException on failure.
 * @param datanode the datanode hosting the block
 * @param lblock the located block to read
 * @throws IOException if the block cannot be read
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
    
  s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  String file = BlockReaderFactory.getFileName(targetAddr, 
      "test-blockpoolid",
      block.getBlockId());
  BlockReader blockReader =
    BlockReaderFactory.newBlockReader(new DFSClient.Conf(conf), file, block,
      lblock.getBlockToken(), 0, -1, true, "TestDataNodeVolumeFailure",
      TcpPeerServer.peerFromSocket(s), datanode, null, null, null, false,
      CachingStrategy.newDefaultStrategy());
  blockReader.close();
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 29, Source: TestDataNodeVolumeFailure.java


Example 6: getBlockReader

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Get a BlockReader for the given block.
 */
public BlockReader getBlockReader(LocatedBlock testBlock, int offset, int lenToRead)
    throws IOException {
  InetSocketAddress targetAddr = null;
  Socket sock = null;
  ExtendedBlock block = testBlock.getBlock();
  DatanodeInfo[] nodes = testBlock.getLocations();
  targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
  sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
  sock.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
  sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

  return BlockReaderFactory.newBlockReader(
    new DFSClient.Conf(conf),
    targetAddr.toString()+ ":" + block.getBlockId(), block,
    testBlock.getBlockToken(), 
    offset, lenToRead,
    true, "BlockReaderTestUtil", TcpPeerServer.peerFromSocket(sock),
    nodes[0], null, null, null, false, CachingStrategy.newDefaultStrategy());
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 23, Source: BlockReaderTestUtil.java


Example 7: newConnectedPeer

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
@Override // RemotePeerFactory
public Peer newConnectedPeer(InetSocketAddress addr) throws IOException {
  Peer peer = null;
  boolean success = false;
  Socket sock = null;
  try {
    sock = socketFactory.createSocket();
    NetUtils.connect(sock, addr,
      getRandomLocalInterfaceAddr(),
      dfsClientConf.socketTimeout);
    peer = TcpPeerServer.peerFromSocketAndKey(sock, 
        getDataEncryptionKey());
    success = true;
    return peer;
  } finally {
    if (!success) {
      IOUtils.cleanup(LOG, peer);
      IOUtils.closeSocket(sock);
    }
  }
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 22, Source: DFSClient.java


Example 8: tryRead

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
private static void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr, 
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr,
              Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
              peer = TcpPeerServer.peerFromSocket(sock);
            } finally {
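              // peerFromSocket takes ownership of the socket; close it directly
              // only when wrapping failed and peer is still null.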
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: naver, Project: hadoop, Lines: 67, Source: TestBlockTokenWithDFS.java


Example 9: accessBlock

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Try to access a block on a data node; throws an IOException on failure.
 * @param datanode the datanode hosting the block
 * @param lblock the located block to read
 * @throws IOException if the block cannot be read
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr,
          Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: naver, Project: hadoop, Lines: 50, Source: TestDataNodeVolumeFailure.java


Example 10: getBlockReader

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Get a BlockReader for the given block.
 */
public static BlockReader getBlockReader(MiniDFSCluster cluster,
    LocatedBlock testBlock, int offset, int lenToRead) throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = testBlock.getBlock();
  DatanodeInfo[] nodes = testBlock.getLocations();
  targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

  final DistributedFileSystem fs = cluster.getFileSystem();
  return new BlockReaderFactory(fs.getClient().getConf()).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(targetAddr.toString()+ ":" + block.getBlockId()).
    setBlockToken(testBlock.getBlockToken()).
    setStartOffset(offset).
    setLength(lenToRead).
    setVerifyChecksum(true).
    setClientName("BlockReaderTestUtil").
    setDatanodeInfo(nodes[0]).
    setClientCacheContext(ClientContext.getFromConf(fs.getConf())).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setConfiguration(fs.getConf()).
    setAllowShortCircuitLocalReads(true).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr,
          Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.
            getDefaultSocketFactory(fs.getConf()).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeQuietly(sock);
          }
        }
        return peer;
      }
    }).
    build();
}
 
Developer: naver, Project: hadoop, Lines: 48, Source: BlockReaderTestUtil.java


Example 11: streamBlockInAscii

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
public static void streamBlockInAscii(InetSocketAddress addr, String poolId,
    long blockId, Token<BlockTokenIdentifier> blockToken, long genStamp,
    long blockSize, long offsetIntoBlock, long chunkSizeToView,
    JspWriter out, Configuration conf, DFSClient.Conf dfsConf,
    DataEncryptionKey encryptionKey)
        throws IOException {
  if (chunkSizeToView == 0) return;
  Socket s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(addr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
    
  int amtToRead = (int)Math.min(chunkSizeToView, blockSize - offsetIntoBlock);
    
    // Use the block name for file name. 
  String file = BlockReaderFactory.getFileName(addr, poolId, blockId);
  BlockReader blockReader = BlockReaderFactory.newBlockReader(dfsConf, file,
      new ExtendedBlock(poolId, blockId, 0, genStamp), blockToken,
      offsetIntoBlock, amtToRead,  true,
      "JspHelper", TcpPeerServer.peerFromSocketAndKey(s, encryptionKey),
      new DatanodeID(addr.getAddress().getHostAddress(),
          addr.getHostName(), poolId, addr.getPort(), 0, 0), null,
          null, null, false);
      
  final byte[] buf = new byte[amtToRead];
  int readOffset = 0;
  int retries = 2;
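  // readFully either fills the requested range or throws, so a successful
  // pass drives amtToRead to zero; on IOException, retry up to two times.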
  while ( amtToRead > 0 ) {
    int numRead = amtToRead;
    try {
      blockReader.readFully(buf, readOffset, amtToRead);
    }
    catch (IOException e) {
      retries--;
      if (retries == 0)
        throw new IOException("Could not read data from datanode");
      continue;
    }
    amtToRead -= numRead;
    readOffset += numRead;
  }
  blockReader.close();
  out.print(HtmlQuoting.quoteHtmlChars(new String(buf, Charsets.UTF_8)));
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 44, Source: JspHelper.java


Example 12: tryRead

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
private static void tryRead(Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
    s = NetUtils.getDefaultSocketFactory(conf).createSocket();
    s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
    s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

    String file = BlockReaderFactory.getFileName(targetAddr, 
        "test-blockpoolid", block.getBlockId());
    blockReader = BlockReaderFactory.newBlockReader(
        new DFSClient.Conf(conf), file, block, lblock.getBlockToken(), 0, -1,
        true, "TestBlockTokenWithDFS", TcpPeerServer.peerFromSocket(s),
        nodes[0], null, null, null, false);

  } catch (IOException ex) {
    if (ex instanceof InvalidBlockTokenException) {
      assertFalse("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", shouldSucceed);
      return;
    }
    fail("OP_READ_BLOCK failed due to reasons other than access token: "
        + StringUtils.stringifyException(ex));
  } finally {
    if (s != null) {
      try {
        s.close();
      } catch (IOException iex) {
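        // ignore errors while closing the socket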
      } finally {
        s = null;
      }
    }
  }
  if (blockReader == null) {
    fail("OP_READ_BLOCK failed due to reasons other than access token");
  }
  assertTrue("OP_READ_BLOCK: access token is valid, "
      + "when it is expected to be invalid", shouldSucceed);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 45, Source: TestBlockTokenWithDFS.java


Example 13: streamBlockInAscii

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
public static void streamBlockInAscii(InetSocketAddress addr, String poolId,
    long blockId, Token<BlockTokenIdentifier> blockToken, long genStamp,
    long blockSize, long offsetIntoBlock, long chunkSizeToView,
    JspWriter out, Configuration conf, DFSClient.Conf dfsConf,
    DataEncryptionKey encryptionKey)
        throws IOException {
  if (chunkSizeToView == 0) return;
  Socket s = NetUtils.getDefaultSocketFactory(conf).createSocket();
  s.connect(addr, HdfsServerConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
    
  int amtToRead = (int)Math.min(chunkSizeToView, blockSize - offsetIntoBlock);
    
    // Use the block name for file name. 
  String file = BlockReaderFactory.getFileName(addr, poolId, blockId);
  BlockReader blockReader = BlockReaderFactory.newBlockReader(dfsConf, file,
      new ExtendedBlock(poolId, blockId, 0, genStamp), blockToken,
      offsetIntoBlock, amtToRead,  true,
      "JspHelper", TcpPeerServer.peerFromSocketAndKey(s, encryptionKey),
      new DatanodeID(addr.getAddress().getHostAddress(),
          addr.getHostName(), poolId, addr.getPort(), 0, 0, 0), null,
          null, null, false, CachingStrategy.newDefaultStrategy());
      
  final byte[] buf = new byte[amtToRead];
  int readOffset = 0;
  int retries = 2;
  while ( amtToRead > 0 ) {
    int numRead = amtToRead;
    try {
      blockReader.readFully(buf, readOffset, amtToRead);
    }
    catch (IOException e) {
      retries--;
      if (retries == 0)
        throw new IOException("Could not read data from datanode");
      continue;
    }
    amtToRead -= numRead;
    readOffset += numRead;
  }
  blockReader.close();
  out.print(HtmlQuoting.quoteHtmlChars(new String(buf, Charsets.UTF_8)));
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 44, Source: JspHelper.java


Example 14: tryRead

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
private static void tryRead(Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());
    s = NetUtils.getDefaultSocketFactory(conf).createSocket();
    s.connect(targetAddr, HdfsServerConstants.READ_TIMEOUT);
    s.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);

    String file = BlockReaderFactory.getFileName(targetAddr, 
        "test-blockpoolid", block.getBlockId());
    blockReader = BlockReaderFactory.newBlockReader(
        new DFSClient.Conf(conf), file, block, lblock.getBlockToken(), 0, -1,
        true, "TestBlockTokenWithDFS", TcpPeerServer.peerFromSocket(s),
        nodes[0], null, null, null, false,
        CachingStrategy.newDefaultStrategy());

  } catch (IOException ex) {
    if (ex instanceof InvalidBlockTokenException) {
      assertFalse("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", shouldSucceed);
      return;
    }
    fail("OP_READ_BLOCK failed due to reasons other than access token: "
        + StringUtils.stringifyException(ex));
  } finally {
    if (s != null) {
      try {
        s.close();
      } catch (IOException iex) {
      } finally {
        s = null;
      }
    }
  }
  if (blockReader == null) {
    fail("OP_READ_BLOCK failed due to reasons other than access token");
  }
  assertTrue("OP_READ_BLOCK: access token is valid, "
      + "when it is expected to be invalid", shouldSucceed);
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 46, Source: TestBlockTokenWithDFS.java


Example 15: tryRead

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
private static void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr, 
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
              peer = TcpPeerServer.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 66, Source: TestBlockTokenWithDFS.java


Example 16: accessBlock

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Try to access a block on a data node; throws an IOException on failure.
 * @param datanode the datanode hosting the block
 * @param lblock the located block to read
 * @throws IOException if the block cannot be read
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 49, Source: TestDataNodeVolumeFailure.java


Example 17: getBlockReader

import org.apache.hadoop.hdfs.net.TcpPeerServer; // import the required package/class
/**
 * Get a BlockReader for the given block.
 */
public static BlockReader getBlockReader(MiniDFSCluster cluster,
    LocatedBlock testBlock, int offset, int lenToRead) throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = testBlock.getBlock();
  DatanodeInfo[] nodes = testBlock.getLocations();
  targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

  final DistributedFileSystem fs = cluster.getFileSystem();
  return new BlockReaderFactory(fs.getClient().getConf()).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(targetAddr.toString()+ ":" + block.getBlockId()).
    setBlockToken(testBlock.getBlockToken()).
    setStartOffset(offset).
    setLength(lenToRead).
    setVerifyChecksum(true).
    setClientName("BlockReaderTestUtil").
    setDatanodeInfo(nodes[0]).
    setClientCacheContext(ClientContext.getFromConf(fs.getConf())).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setConfiguration(fs.getConf()).
    setAllowShortCircuitLocalReads(true).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.
            getDefaultSocketFactory(fs.getConf()).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeQuietly(sock);
          }
        }
        return peer;
      }
    }).
    build();
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 47, Source: BlockReaderTestUtil.java



Note: The org.apache.hadoop.hdfs.net.TcpPeerServer examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of each snippet remains with its original authors; before redistributing or reusing the code, consult the corresponding project's license. Please do not republish without permission.

