
Java Sender Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.datatransfer.Sender. If you have been wondering what the Sender class does, how it is used, or what real-world calls look like, the curated examples below should help.



The Sender class belongs to the org.apache.hadoop.hdfs.protocol.datatransfer package. Twenty code examples of the class are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
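
Before diving into the individual examples, here is the round-trip pattern they all share: wrap the connection's output stream in a DataOutputStream, construct a Sender over it, issue exactly one data-transfer operation (readBlock, transferBlock, replaceBlock, releaseShortCircuitFds, ...), flush, and parse the datanode's BlockOpResponseProto reply. The sketch below is illustrative only: it assumes the Hadoop 2.7-era signatures used in Example 2, and the class SenderRoundTrip, the transfer method, its parameters, and the fixed 60-second timeout are placeholders rather than HDFS API (note also that the package containing StorageType differs across the Hadoop versions shown below).

import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.Arrays;

import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.datatransfer.Sender;
import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto;
import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.token.Token;

public class SenderRoundTrip {
  /**
   * Hypothetical helper showing the request/response round trip shared by
   * the examples below. A real caller would take xferAddr, block, and
   * targets from a LocatedBlock rather than passing them in directly.
   */
  public static BlockOpResponseProto transfer(String xferAddr,
      ExtendedBlock block, String clientName, DatanodeInfo[] targets)
      throws IOException {
    Socket sock = new Socket();
    try {
      sock.connect(NetUtils.createSocketAddr(xferAddr), 60 * 1000);

      // Buffer the request side; each Sender method writes one framed op.
      DataOutputStream out = new DataOutputStream(
          new BufferedOutputStream(sock.getOutputStream()));
      DataInputStream in = new DataInputStream(sock.getInputStream());

      // One storage type per target; DEFAULT mirrors the test examples.
      StorageType[] types = new StorageType[targets.length];
      Arrays.fill(types, StorageType.DEFAULT);

      new Sender(out).transferBlock(block, new Token<BlockTokenIdentifier>(),
          clientName, targets, types);
      out.flush(); // nothing reaches the wire until the buffer is flushed

      // The datanode answers with a length-delimited BlockOpResponseProto.
      return BlockOpResponseProto.parseDelimitedFrom(in);
    } finally {
      sock.close();
    }
  }
}

The reply is a varint-length-prefixed BlockOpResponseProto; the examples below parse it either by wrapping the stream with PBHelper.vintPrefixed(in) and calling parseFrom, or directly with protobuf's parseDelimitedFrom, which reads an equivalent length prefix itself.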

Example 1: inferChecksumTypeByReading

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/**
 * Infer the checksum type for a replica by sending an OP_READ_BLOCK
 * for the first byte of that replica. This is used for compatibility
 * with older HDFS versions which did not include the checksum type in
 * OpBlockChecksumResponseProto.
 *
 * @param lb the located block
 * @param dn the connected datanode
 * @return the inferred checksum type
 * @throws IOException if an error occurs
 */
private Type inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn)
    throws IOException {
  IOStreamPair pair = connectToDN(dn, dfsClientConf.socketTimeout, lb);

  try {
    DataOutputStream out = new DataOutputStream(new BufferedOutputStream(pair.out,
        HdfsConstants.SMALL_BUFFER_SIZE));
    DataInputStream in = new DataInputStream(pair.in);

    new Sender(out).readBlock(lb.getBlock(), lb.getBlockToken(), clientName,
        0, 1, true, CachingStrategy.newDefaultStrategy());
    final BlockOpResponseProto reply =
        BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    String logInfo = "trying to read " + lb.getBlock() + " from datanode " + dn;
    DataTransferProtoUtil.checkBlockOpStatus(reply, logInfo);

    return PBHelper.convert(reply.getReadOpChecksumInfo().getChecksum().getType());
  } finally {
    IOUtils.cleanup(null, pair.in, pair.out);
  }
}
 
Developer: naver | Project: hadoop | Lines: 33 | Source: DFSClient.java


Example 2: transferRbw

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/** For {@link TestTransferRbw} */
public static BlockOpResponseProto transferRbw(final ExtendedBlock b, 
    final DFSClient dfsClient, final DatanodeInfo... datanodes) throws IOException {
  assertEquals(2, datanodes.length);
  final Socket s = DFSOutputStream.createSocketForPipeline(datanodes[0],
      datanodes.length, dfsClient);
  final long writeTimeout = dfsClient.getDatanodeWriteTimeout(datanodes.length);
  final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
      NetUtils.getOutputStream(s, writeTimeout),
      HdfsConstants.SMALL_BUFFER_SIZE));
  final DataInputStream in = new DataInputStream(NetUtils.getInputStream(s));

  // send the request
  new Sender(out).transferBlock(b, new Token<BlockTokenIdentifier>(),
      dfsClient.clientName, new DatanodeInfo[]{datanodes[1]},
      new StorageType[]{StorageType.DEFAULT});
  out.flush();

  return BlockOpResponseProto.parseDelimitedFrom(in);
}
 
Developer: naver | Project: hadoop | Lines: 21 | Source: DFSTestUtil.java


Example 3: inferChecksumTypeByReading

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/**
 * Infer the checksum type for a replica by sending an OP_READ_BLOCK
 * for the first byte of that replica. This is used for compatibility
 * with older HDFS versions which did not include the checksum type in
 * OpBlockChecksumResponseProto.
 *
 * @param lb the located block
 * @param dn the connected datanode
 * @return the inferred checksum type
 * @throws IOException if an error occurs
 */
private Type inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn)
    throws IOException {
  IOStreamPair pair = connectToDN(dn, dfsClientConf.getSocketTimeout(), lb);

  try {
    DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(pair.out, smallBufferSize));
    DataInputStream in = new DataInputStream(pair.in);

    new Sender(out).readBlock(lb.getBlock(), lb.getBlockToken(), clientName,
        0, 1, true, CachingStrategy.newDefaultStrategy());
    final BlockOpResponseProto reply =
        BlockOpResponseProto.parseFrom(PBHelperClient.vintPrefixed(in));
    String logInfo = "trying to read " + lb.getBlock() + " from datanode " +
        dn;
    DataTransferProtoUtil.checkBlockOpStatus(reply, logInfo);

    return PBHelperClient.convert(
        reply.getReadOpChecksumInfo().getChecksum().getType());
  } finally {
    IOUtilsClient.cleanup(null, pair.in, pair.out);
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 35 | Source: DFSClient.java


Example 4: transferRbw

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/** For {@link TestTransferRbw} */
public static BlockOpResponseProto transferRbw(final ExtendedBlock b, 
    final DFSClient dfsClient, final DatanodeInfo... datanodes) throws IOException {
  assertEquals(2, datanodes.length);
  final Socket s = DataStreamer.createSocketForPipeline(datanodes[0],
      datanodes.length, dfsClient);
  final long writeTimeout = dfsClient.getDatanodeWriteTimeout(datanodes.length);
  final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
      NetUtils.getOutputStream(s, writeTimeout),
      DFSUtilClient.getSmallBufferSize(dfsClient.getConfiguration())));
  final DataInputStream in = new DataInputStream(NetUtils.getInputStream(s));

  // send the request
  new Sender(out).transferBlock(b, new Token<BlockTokenIdentifier>(),
      dfsClient.clientName, new DatanodeInfo[]{datanodes[1]},
      new StorageType[]{StorageType.DEFAULT});
  out.flush();

  return BlockOpResponseProto.parseDelimitedFrom(in);
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 21 | Source: DFSTestUtil.java


Example 5: transferRbw

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/** For {@link TestTransferRbw} */
public static BlockOpResponseProto transferRbw(final ExtendedBlock b, 
    final DFSClient dfsClient, final DatanodeInfo... datanodes) throws IOException {
  assertEquals(2, datanodes.length);
  final Socket s = DFSOutputStream.createSocketForPipeline(datanodes[0],
      datanodes.length, dfsClient);
  final long writeTimeout = dfsClient.getDatanodeWriteTimeout(datanodes.length);
  final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
      NetUtils.getOutputStream(s, writeTimeout),
      HdfsConstants.SMALL_BUFFER_SIZE));
  final DataInputStream in = new DataInputStream(NetUtils.getInputStream(s));

  // send the request
  new Sender(out).transferBlock(b, new Token<BlockTokenIdentifier>(),
      dfsClient.clientName, new DatanodeInfo[]{datanodes[1]});
  out.flush();

  return BlockOpResponseProto.parseDelimitedFrom(in);
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 20 | Source: DFSTestUtil.java


Example 6: replaceBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private boolean replaceBlock( ExtendedBlock block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getXferAddr()), HdfsServerConstants.READ_TIMEOUT); 
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  new Sender(out).replaceBlock(block, BlockTokenSecretManager.DUMMY_TOKEN,
      source.getStorageID(), sourceProxy);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  BlockOpResponseProto proto =
    BlockOpResponseProto.parseDelimitedFrom(reply);
  return proto.getStatus() == Status.SUCCESS;
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 19 | Source: TestBlockReplacement.java


Example 7: replaceBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private boolean replaceBlock( ExtendedBlock block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getXferAddr()), HdfsServerConstants.READ_TIMEOUT); 
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  new Sender(out).replaceBlock(block, StorageType.DEFAULT,
      BlockTokenSecretManager.DUMMY_TOKEN,
      source.getDatanodeUuid(), sourceProxy);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  BlockOpResponseProto proto = BlockOpResponseProto.parseDelimitedFrom(reply);
  while (proto.getStatus() == Status.IN_PROGRESS) {
    proto = BlockOpResponseProto.parseDelimitedFrom(reply);
  }
  return proto.getStatus() == Status.SUCCESS;
}
 
Developer: yncxcw | Project: FlexMap | Lines: 22 | Source: TestBlockReplacement.java


Example 8: transferRbw

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/**
 * For {@link TestTransferRbw}
 */
public static BlockOpResponseProto transferRbw(final ExtendedBlock b,
    final DFSClient dfsClient, final DatanodeInfo... datanodes)
    throws IOException {
  assertEquals(2, datanodes.length);
  final Socket s = DFSOutputStream
      .createSocketForPipeline(datanodes[0], datanodes.length, dfsClient);
  final long writeTimeout =
      dfsClient.getDatanodeWriteTimeout(datanodes.length);
  final DataOutputStream out = new DataOutputStream(
      new BufferedOutputStream(NetUtils.getOutputStream(s, writeTimeout),
          HdfsConstants.SMALL_BUFFER_SIZE));
  final DataInputStream in = new DataInputStream(NetUtils.getInputStream(s));

  // send the request
  new Sender(out).transferBlock(b, new Token<BlockTokenIdentifier>(),
      dfsClient.clientName, new DatanodeInfo[]{datanodes[1]});
  out.flush();

  return BlockOpResponseProto.parseDelimitedFrom(in);
}
 
Developer: hopshadoop | Project: hops | Lines: 24 | Source: DFSTestUtil.java


Example 9: replaceBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private boolean replaceBlock(ExtendedBlock block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(destination.getXferAddr()),
      HdfsServerConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  new Sender(out).replaceBlock(block, BlockTokenSecretManager.DUMMY_TOKEN,
      source.getStorageID(), sourceProxy);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  BlockOpResponseProto proto = BlockOpResponseProto.parseDelimitedFrom(reply);
  return proto.getStatus() == Status.SUCCESS;
}
 
Developer: hopshadoop | Project: hops | Lines: 18 | Source: TestBlockReplacement.java


Example 10: replaceBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private boolean replaceBlock( ExtendedBlock block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getXferAddr()), HdfsServerConstants.READ_TIMEOUT); 
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  new Sender(out).replaceBlock(block, BlockTokenSecretManager.DUMMY_TOKEN,
      source.getDatanodeUuid(), sourceProxy);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  BlockOpResponseProto proto =
    BlockOpResponseProto.parseDelimitedFrom(reply);
  return proto.getStatus() == Status.SUCCESS;
}
 
Developer: Seagate | Project: hadoop-on-lustre2 | Lines: 19 | Source: TestBlockReplacement.java


Example 11: replaceBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private boolean replaceBlock(
    ExtendedBlock block,
    DatanodeInfo source,
    DatanodeInfo sourceProxy,
    DatanodeInfo destination,
    StorageType targetStorageType) throws IOException, SocketException {
  Socket sock = new Socket();
  try {
    sock.connect(NetUtils.createSocketAddr(destination.getXferAddr()),
        HdfsServerConstants.READ_TIMEOUT);
    sock.setKeepAlive(true);
    // sendRequest
    DataOutputStream out = new DataOutputStream(sock.getOutputStream());
    new Sender(out).replaceBlock(block, targetStorageType,
        BlockTokenSecretManager.DUMMY_TOKEN, source.getDatanodeUuid(),
        sourceProxy);
    out.flush();
    // receiveResponse
    DataInputStream reply = new DataInputStream(sock.getInputStream());

    BlockOpResponseProto proto =
        BlockOpResponseProto.parseDelimitedFrom(reply);
    while (proto.getStatus() == Status.IN_PROGRESS) {
      proto = BlockOpResponseProto.parseDelimitedFrom(reply);
    }
    return proto.getStatus() == Status.SUCCESS;
  } finally {
    sock.close();
  }
}
 
Developer: naver | Project: hadoop | Lines: 31 | Source: TestBlockReplacement.java


Example 12: run

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
@Override
public void run() {
  LOG.trace("{}: about to release {}", ShortCircuitCache.this, slot);
  final DfsClientShm shm = (DfsClientShm)slot.getShm();
  final DomainSocket shmSock = shm.getPeer().getDomainSocket();
  final String path = shmSock.getPath();
  boolean success = false;
  try (DomainSocket sock = DomainSocket.connect(path);
       DataOutputStream out = new DataOutputStream(
           new BufferedOutputStream(sock.getOutputStream()))) {
    new Sender(out).releaseShortCircuitFds(slot.getSlotId());
    DataInputStream in = new DataInputStream(sock.getInputStream());
    ReleaseShortCircuitAccessResponseProto resp =
        ReleaseShortCircuitAccessResponseProto.parseFrom(
            PBHelperClient.vintPrefixed(in));
    if (resp.getStatus() != Status.SUCCESS) {
      String error = resp.hasError() ? resp.getError() : "(unknown)";
      throw new IOException(resp.getStatus().toString() + ": " + error);
    }
    LOG.trace("{}: released {}", this, slot);
    success = true;
  } catch (IOException e) {
    LOG.error(ShortCircuitCache.this + ": failed to release " +
        "short-circuit shared memory slot " + slot + " by sending " +
        "ReleaseShortCircuitAccessRequestProto to " + path +
        ".  Closing shared memory segment.", e);
  } finally {
    if (success) {
      shmManager.freeSlot(slot);
    } else {
      shm.getEndpointShmManager().shutdown(shm);
    }
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 35 | Source: ShortCircuitCache.java


Example 13: replaceBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private boolean replaceBlock(
    ExtendedBlock block,
    DatanodeInfo source,
    DatanodeInfo sourceProxy,
    DatanodeInfo destination,
    StorageType targetStorageType) throws IOException, SocketException {
  Socket sock = new Socket();
  try {
    sock.connect(NetUtils.createSocketAddr(destination.getXferAddr()),
        HdfsConstants.READ_TIMEOUT);
    sock.setKeepAlive(true);
    // sendRequest
    DataOutputStream out = new DataOutputStream(sock.getOutputStream());
    new Sender(out).replaceBlock(block, targetStorageType,
        BlockTokenSecretManager.DUMMY_TOKEN, source.getDatanodeUuid(),
        sourceProxy);
    out.flush();
    // receiveResponse
    DataInputStream reply = new DataInputStream(sock.getInputStream());

    BlockOpResponseProto proto =
        BlockOpResponseProto.parseDelimitedFrom(reply);
    while (proto.getStatus() == Status.IN_PROGRESS) {
      proto = BlockOpResponseProto.parseDelimitedFrom(reply);
    }
    return proto.getStatus() == Status.SUCCESS;
  } finally {
    sock.close();
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 31 | Source: TestBlockReplacement.java
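
Examples 7, 11, and 13 all end with the same response-draining loop: newer datanodes emit interim IN_PROGRESS responses while a long-running replaceBlock copies data, and the client must keep reading until a terminal status arrives. A small helper capturing that shared pattern might look like the following sketch (awaitSuccess and ResponseUtil are illustrative names, not HDFS API; parseDelimitedFrom returns null at end of stream):

import java.io.DataInputStream;
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto;
import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.Status;

final class ResponseUtil {
  private ResponseUtil() {}

  /** Skip interim IN_PROGRESS replies and report whether the op succeeded. */
  static boolean awaitSuccess(DataInputStream reply) throws IOException {
    BlockOpResponseProto proto = BlockOpResponseProto.parseDelimitedFrom(reply);
    while (proto != null && proto.getStatus() == Status.IN_PROGRESS) {
      proto = BlockOpResponseProto.parseDelimitedFrom(reply);
    }
    return proto != null && proto.getStatus() == Status.SUCCESS;
  }
}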


Example 14: transfer

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
    final StorageType[] targetStorageTypes,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  //transfer replica to the new datanode
  Socket sock = null;
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock = createSocketForPipeline(src, 2, dfsClient);
    final long writeTimeout = dfsClient.getDatanodeWriteTimeout(2);
    
    OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
    InputStream unbufIn = NetUtils.getInputStream(sock);
    IOStreamPair saslStreams = dfsClient.saslClient.socketSend(sock,
      unbufOut, unbufIn, dfsClient, blockToken, src);
    unbufOut = saslStreams.out;
    unbufIn = saslStreams.in;
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
        HdfsConstants.SMALL_BUFFER_SIZE));
    in = new DataInputStream(unbufIn);

    //send the TRANSFER_BLOCK request
    new Sender(out).transferBlock(block, blockToken, dfsClient.clientName,
        targets, targetStorageTypes);
    out.flush();

    //ack
    BlockOpResponseProto response =
      BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    if (SUCCESS != response.getStatus()) {
      throw new IOException("Failed to add a datanode");
    }
  } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    IOUtils.closeSocket(sock);
  }
}
 
Developer: yncxcw | Project: big-c | Lines: 39 | Source: DFSOutputStream.java


Example 15: inferChecksumTypeByReading

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/**
 * Infer the checksum type for a replica by sending an OP_READ_BLOCK
 * for the first byte of that replica. This is used for compatibility
 * with older HDFS versions which did not include the checksum type in
 * OpBlockChecksumResponseProto.
 *
 * @param lb the located block
 * @param dn the connected datanode
 * @return the inferred checksum type
 * @throws IOException if an error occurs
 */
private Type inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn)
    throws IOException {
  IOStreamPair pair = connectToDN(dn, dfsClientConf.socketTimeout, lb);

  try {
    DataOutputStream out = new DataOutputStream(new BufferedOutputStream(pair.out,
        HdfsConstants.SMALL_BUFFER_SIZE));
    DataInputStream in = new DataInputStream(pair.in);

    new Sender(out).readBlock(lb.getBlock(), lb.getBlockToken(), clientName,
        0, 1, true, CachingStrategy.newDefaultStrategy());
    final BlockOpResponseProto reply =
        BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    
    if (reply.getStatus() != Status.SUCCESS) {
      if (reply.getStatus() == Status.ERROR_ACCESS_TOKEN) {
        throw new InvalidBlockTokenException();
      } else {
        throw new IOException("Bad response " + reply + " trying to read "
            + lb.getBlock() + " from datanode " + dn);
      }
    }
    
    return PBHelper.convert(reply.getReadOpChecksumInfo().getChecksum().getType());
  } finally {
    IOUtils.cleanup(null, pair.in, pair.out);
  }
}
 
Developer: Nextzero | Project: hadoop-2.6.0-cdh5.4.3 | Lines: 40 | Source: DFSClient.java


Example 16: transfer

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  //transfer replica to the new datanode
  Socket sock = null;
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock = createSocketForPipeline(src, 2, dfsClient);
    final long writeTimeout = dfsClient.getDatanodeWriteTimeout(2);
    
    OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
    InputStream unbufIn = NetUtils.getInputStream(sock);
    if (dfsClient.shouldEncryptData()) {
      IOStreamPair encryptedStreams =
          DataTransferEncryptor.getEncryptedStreams(
              unbufOut, unbufIn, dfsClient.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
        HdfsConstants.SMALL_BUFFER_SIZE));
    in = new DataInputStream(unbufIn);

    //send the TRANSFER_BLOCK request
    new Sender(out).transferBlock(block, blockToken, dfsClient.clientName,
        targets);
    out.flush();

    //ack
    BlockOpResponseProto response =
      BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    if (SUCCESS != response.getStatus()) {
      throw new IOException("Failed to add a datanode");
    }
  } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    IOUtils.closeSocket(sock);
  }
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 41 | Source: DFSOutputStream.java


Example 17: inferChecksumTypeByReading

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/**
 * Infer the checksum type for a replica by sending an OP_READ_BLOCK
 * for the first byte of that replica. This is used for compatibility
 * with older HDFS versions which did not include the checksum type in
 * OpBlockChecksumResponseProto.
 *
 * @param lb the located block
 * @param clientName the name of the DFSClient requesting the checksum
 * @param dn the connected datanode
 * @return the inferred checksum type
 * @throws IOException if an error occurs
 */
private static Type inferChecksumTypeByReading(
    String clientName, SocketFactory socketFactory, int socketTimeout,
    LocatedBlock lb, DatanodeInfo dn,
    DataEncryptionKey encryptionKey, boolean connectToDnViaHostname)
    throws IOException {
  IOStreamPair pair = connectToDN(socketFactory, connectToDnViaHostname,
      encryptionKey, dn, socketTimeout);

  try {
    DataOutputStream out = new DataOutputStream(new BufferedOutputStream(pair.out,
        HdfsConstants.SMALL_BUFFER_SIZE));
    DataInputStream in = new DataInputStream(pair.in);

    new Sender(out).readBlock(lb.getBlock(), lb.getBlockToken(), clientName, 0, 1, true);
    final BlockOpResponseProto reply =
        BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    
    if (reply.getStatus() != Status.SUCCESS) {
      if (reply.getStatus() == Status.ERROR_ACCESS_TOKEN) {
        throw new InvalidBlockTokenException();
      } else {
        throw new IOException("Bad response " + reply + " trying to read "
            + lb.getBlock() + " from datanode " + dn);
      }
    }
    
    return PBHelper.convert(reply.getReadOpChecksumInfo().getChecksum().getType());
  } finally {
    IOUtils.cleanup(null, pair.in, pair.out);
  }
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 46 | Source: DFSClient.java


Example 18: transfer

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
                      final Token<BlockTokenIdentifier> blockToken) throws IOException {
  //transfer replica to the new datanode
  Socket sock = null;
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock = createSocketForPipeline(src, 2, dfsClient);
    final long writeTimeout = dfsClient.getDatanodeWriteTimeout(2);

    OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
    InputStream unbufIn = NetUtils.getInputStream(sock);
    if (dfsClient.shouldEncryptData()) {
      IOStreamPair encryptedStreams = DataTransferEncryptor
              .getEncryptedStreams(unbufOut, unbufIn,
                      dfsClient.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
            HdfsConstants.SMALL_BUFFER_SIZE));
    in = new DataInputStream(unbufIn);

    //send the TRANSFER_BLOCK request
    new Sender(out)
            .transferBlock(block, blockToken, dfsClient.clientName, targets);
    out.flush();

    //ack
    BlockOpResponseProto response =
            BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    if (SUCCESS != response.getStatus()) {
      throw new IOException("Failed to add a datanode");
    }
  } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    IOUtils.closeSocket(sock);
  }
}
 
Developer: hopshadoop | Project: hops | Lines: 41 | Source: DFSOutputStream.java


Example 19: newBlockReader

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/**
 * Create a new BlockReader specifically to satisfy a read.
 * This method also sends the OP_READ_BLOCK request.
 *
 * @param sock
 *     An established Socket to the DN. The BlockReader will not close it
 *     normally
 * @param file
 *     File location
 * @param block
 *     The block object
 * @param blockToken
 *     The block token for security
 * @param startOffset
 *     The read offset, relative to block head
 * @param len
 *     The number of bytes to read
 * @param bufferSize
 *     The IO buffer size (not the client buffer size)
 * @param verifyChecksum
 *     Whether to verify checksum
 * @param clientName
 *     Client name
 * @return New BlockReader instance, or null on error.
 */
public static RemoteBlockReader newBlockReader(Socket sock, String file,
    ExtendedBlock block, Token<BlockTokenIdentifier> blockToken,
    long startOffset, long len, int bufferSize, boolean verifyChecksum,
    String clientName) throws IOException {
  // in and out will be closed when sock is closed (by the caller)
  final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
      NetUtils.getOutputStream(sock, HdfsServerConstants.WRITE_TIMEOUT)));
  new Sender(out).readBlock(block, blockToken, clientName, startOffset, len,
      verifyChecksum);
  
  //
  // Get bytes in block, set streams
  //

  DataInputStream in = new DataInputStream(
      new BufferedInputStream(NetUtils.getInputStream(sock), bufferSize));
  
  BlockOpResponseProto status =
      BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
  RemoteBlockReader2.checkSuccess(status, sock, block, file);
  ReadOpChecksumInfoProto checksumInfo = status.getReadOpChecksumInfo();
  DataChecksum checksum =
      DataTransferProtoUtil.fromProto(checksumInfo.getChecksum());
  //Warning when we get CHECKSUM_NULL?
  
  // Read the first chunk offset.
  long firstChunkOffset = checksumInfo.getChunkOffset();
  
  if (firstChunkOffset < 0 || firstChunkOffset > startOffset ||
      firstChunkOffset <= (startOffset - checksum.getBytesPerChecksum())) {
    throw new IOException("BlockReader: error in first chunk offset (" +
        firstChunkOffset + ") startOffset is " +
        startOffset + " for file " + file);
  }

  return new RemoteBlockReader(file, block.getBlockPoolId(),
      block.getBlockId(), in, checksum, verifyChecksum, startOffset,
      firstChunkOffset, len, sock);
}
 
Developer: hopshadoop | Project: hops | Lines: 65 | Source: RemoteBlockReader.java
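
The reader returned by Example 19 is then drained like an ordinary input stream. A minimal, hypothetical consumption loop, assuming the newBlockReader signature shown above (BlockReadSketch and readWholeBlock are illustrative names, not HDFS API):

import java.io.IOException;
import java.net.Socket;

import org.apache.hadoop.hdfs.RemoteBlockReader;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
import org.apache.hadoop.security.token.Token;

class BlockReadSketch {
  /** Read an entire block through the reader created by Example 19. */
  static long readWholeBlock(Socket sock, String file, ExtendedBlock block,
      Token<BlockTokenIdentifier> token, long len, String clientName)
      throws IOException {
    RemoteBlockReader reader = RemoteBlockReader.newBlockReader(
        sock, file, block, token, 0, len, 4096, true, clientName);
    byte[] buf = new byte[8192];
    long total = 0;
    int n;
    // read follows the InputStream contract: non-positive once the block ends.
    while ((n = reader.read(buf, 0, buf.length)) > 0) {
      total += n; // a real caller would hand buf[0..n) to the application
    }
    return total;
  }
}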


Example 20: inferChecksumTypeByReading

import org.apache.hadoop.hdfs.protocol.datatransfer.Sender; // import the required package/class
/**
 * Infer the checksum type for a replica by sending an OP_READ_BLOCK
 * for the first byte of that replica. This is used for compatibility
 * with older HDFS versions which did not include the checksum type in
 * OpBlockChecksumResponseProto.
 *
 * @param lb
 *     the located block
 * @param clientName
 *     the name of the DFSClient requesting the checksum
 * @param dn
 *     the connected datanode
 * @return the inferred checksum type
 * @throws IOException
 *     if an error occurs
 */
private static Type inferChecksumTypeByReading(String clientName,
    SocketFactory socketFactory, int socketTimeout, LocatedBlock lb,
    DatanodeInfo dn, DataEncryptionKey encryptionKey,
    boolean connectToDnViaHostname) throws IOException {
  IOStreamPair pair =
      connectToDN(socketFactory, connectToDnViaHostname, encryptionKey, dn,
          socketTimeout);

  try {
    DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(pair.out, HdfsConstants.SMALL_BUFFER_SIZE));
    DataInputStream in = new DataInputStream(pair.in);

    new Sender(out)
        .readBlock(lb.getBlock(), lb.getBlockToken(), clientName, 0, 1, true);
    final BlockOpResponseProto reply =
        BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    
    if (reply.getStatus() != Status.SUCCESS) {
      if (reply.getStatus() == Status.ERROR_ACCESS_TOKEN) {
        throw new InvalidBlockTokenException();
      } else {
        throw new IOException(
            "Bad response " + reply + " trying to read " + lb.getBlock() +
                " from datanode " + dn);
      }
    }
    
    return PBHelper
        .convert(reply.getReadOpChecksumInfo().getChecksum().getType());
  } finally {
    IOUtils.cleanup(null, pair.in, pair.out);
  }
}
 
Developer: hopshadoop | Project: hops | Lines: 51 | Source: DFSClient.java



Note: the org.apache.hadoop.hdfs.protocol.datatransfer.Sender examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by many developers, and copyright remains with the original authors; consult each project's license before using or redistributing the code, and do not repost without permission.

