
Java DataTransferEncryptor Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor. If you are wondering what DataTransferEncryptor does, or how to use it in practice, the curated examples below should help.



The DataTransferEncryptor class belongs to the org.apache.hadoop.hdfs.protocol.datatransfer package. Seven code examples of the class are shown below, ordered by popularity.
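All seven examples share one pattern: obtain raw socket streams, optionally replace them with an encrypted IOStreamPair from DataTransferEncryptor.getEncryptedStreams, then wrap the result in buffered Data streams. The sketch below illustrates that wrapping order in isolation; it is a minimal, self-contained illustration, not real Hadoop code. The IOStreamPair class here is a stand-in for Hadoop's org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair, and the XOR wrapper is a hypothetical placeholder for the SASL-based encryption that getEncryptedStreams actually performs.

```java
import java.io.*;

public class StreamWrapSketch {
    // Stand-in for Hadoop's IOStreamPair: a paired input/output stream holder.
    static class IOStreamPair {
        final InputStream in;
        final OutputStream out;
        IOStreamPair(InputStream in, OutputStream out) { this.in = in; this.out = out; }
    }

    // Hypothetical placeholder for DataTransferEncryptor.getEncryptedStreams:
    // a symmetric XOR transform, NOT real encryption.
    static IOStreamPair wrapEncrypted(InputStream in, OutputStream out) {
        final int key = 0x5A;
        InputStream ein = new FilterInputStream(in) {
            @Override public int read() throws IOException {
                int b = super.read();
                return b < 0 ? b : (b ^ key);
            }
            @Override public int read(byte[] b, int off, int len) throws IOException {
                int n = super.read(b, off, len);
                for (int i = 0; i < n; i++) b[off + i] ^= key;
                return n;
            }
        };
        OutputStream eout = new FilterOutputStream(out) {
            @Override public void write(int b) throws IOException {
                super.write(b ^ key);
            }
        };
        return new IOStreamPair(ein, eout);
    }

    public static void main(String[] args) throws IOException {
        boolean shouldEncrypt = true; // mirrors dfsClient.shouldEncryptData()
        ByteArrayOutputStream sink = new ByteArrayOutputStream(); // stands in for the socket

        // Writer side: raw stream -> (optional) encrypted stream -> buffered DataOutputStream.
        OutputStream unbufOut = sink;
        if (shouldEncrypt) {
            unbufOut = wrapEncrypted(new ByteArrayInputStream(new byte[0]), unbufOut).out;
        }
        DataOutputStream out = new DataOutputStream(new BufferedOutputStream(unbufOut, 512));
        out.writeUTF("TRANSFER_BLOCK"); // stands in for the request the Sender writes
        out.flush();

        // Reader side: the same wrapping in reverse recovers the request.
        InputStream unbufIn = new ByteArrayInputStream(sink.toByteArray());
        if (shouldEncrypt) {
            unbufIn = wrapEncrypted(unbufIn, new ByteArrayOutputStream()).in;
        }
        DataInputStream in = new DataInputStream(new BufferedInputStream(unbufIn, 512));
        System.out.println(in.readUTF());
    }
}
```

The key point the examples below repeat is the ordering: buffering is applied outside the (possibly) encrypted streams, so encryption sees the raw bytes and buffering amortizes small writes.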

Example 1: transfer

import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor; // import the required package/class
private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  //transfer replica to the new datanode
  Socket sock = null;
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock = createSocketForPipeline(src, 2, dfsClient);
    final long writeTimeout = dfsClient.getDatanodeWriteTimeout(2);
    
    OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
    InputStream unbufIn = NetUtils.getInputStream(sock);
    if (dfsClient.shouldEncryptData()) {
      IOStreamPair encryptedStreams =
          DataTransferEncryptor.getEncryptedStreams(
              unbufOut, unbufIn, dfsClient.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
        HdfsConstants.SMALL_BUFFER_SIZE));
    in = new DataInputStream(unbufIn);

    //send the TRANSFER_BLOCK request
    new Sender(out).transferBlock(block, blockToken, dfsClient.clientName,
        targets);
    out.flush();

    //ack
    BlockOpResponseProto response =
      BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    if (SUCCESS != response.getStatus()) {
      throw new IOException("Failed to add a datanode");
    }
  } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    IOUtils.closeSocket(sock);
  }
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines of code: 41, Source file: DFSOutputStream.java


Example 2: EncryptedPeer

import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor; // import the required package/class
public EncryptedPeer(Peer enclosedPeer, DataEncryptionKey key)
    throws IOException {
  this.enclosedPeer = enclosedPeer;
  IOStreamPair ios = DataTransferEncryptor.getEncryptedStreams(
      enclosedPeer.getOutputStream(), enclosedPeer.getInputStream(), key);
  this.in = ios.in;
  this.out = ios.out;
  this.channel = ios.in instanceof ReadableByteChannel ? 
      (ReadableByteChannel)ios.in : null;
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines of code: 11, Source file: EncryptedPeer.java


Example 3: transfer

import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor; // import the required package/class
private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
                      final Token<BlockTokenIdentifier> blockToken) throws IOException {
  //transfer replica to the new datanode
  Socket sock = null;
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock = createSocketForPipeline(src, 2, dfsClient);
    final long writeTimeout = dfsClient.getDatanodeWriteTimeout(2);

    OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
    InputStream unbufIn = NetUtils.getInputStream(sock);
    if (dfsClient.shouldEncryptData()) {
      IOStreamPair encryptedStreams = DataTransferEncryptor
              .getEncryptedStreams(unbufOut, unbufIn,
                      dfsClient.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
            HdfsConstants.SMALL_BUFFER_SIZE));
    in = new DataInputStream(unbufIn);

    //send the TRANSFER_BLOCK request
    new Sender(out)
            .transferBlock(block, blockToken, dfsClient.clientName, targets);
    out.flush();

    //ack
    BlockOpResponseProto response =
            BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    if (SUCCESS != response.getStatus()) {
      throw new IOException("Failed to add a datanode");
    }
  } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    IOUtils.closeSocket(sock);
  }
}
 
Developer ID: hopshadoop, Project: hops, Lines of code: 41, Source file: DFSOutputStream.java


Example 4: dispatch

import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor; // import the required package/class
private void dispatch() {
  Socket sock = new Socket();
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock.connect(
        NetUtils.createSocketAddr(target.datanode.getXferAddr()),
        HdfsServerConstants.READ_TIMEOUT);
    sock.setKeepAlive(true);
    
    OutputStream unbufOut = sock.getOutputStream();
    InputStream unbufIn = sock.getInputStream();
    if (nnc.getDataEncryptionKey() != null) {
      IOStreamPair encryptedStreams =
          DataTransferEncryptor.getEncryptedStreams(
              unbufOut, unbufIn, nnc.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
        HdfsConstants.IO_FILE_BUFFER_SIZE));
    in = new DataInputStream(new BufferedInputStream(unbufIn,
        HdfsConstants.IO_FILE_BUFFER_SIZE));
    
    sendRequest(out);
    receiveResponse(in);
    bytesMoved.inc(block.getNumBytes());
    LOG.info("Moving block " + block.getBlock().getBlockId() +
          " from "+ source.getDisplayName() + " to " +
          target.getDisplayName() + " through " +
          proxySource.getDisplayName() +
          " is succeeded.");
  } catch (IOException e) {
    LOG.warn("Error moving block "+block.getBlockId()+
        " from " + source.getDisplayName() + " to " +
        target.getDisplayName() + " through " +
        proxySource.getDisplayName() +
        ": "+e.getMessage());
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(in);
    IOUtils.closeSocket(sock);
    
    proxySource.removePendingBlock(this);
    target.removePendingBlock(this);

    synchronized (this) {
      reset();
    }
    synchronized (Balancer.this) {
      Balancer.this.notifyAll();
    }
  }
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines of code: 55, Source file: Balancer.java


Example 5: dispatch

import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor; // import the required package/class
private void dispatch() {
  Socket sock = new Socket();
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock.connect(NetUtils.createSocketAddr(target.datanode.getXferAddr()),
        HdfsServerConstants.READ_TIMEOUT);
    sock.setKeepAlive(true);
    
    OutputStream unbufOut = sock.getOutputStream();
    InputStream unbufIn = sock.getInputStream();
    if (nnc.getDataEncryptionKey() != null) {
      IOStreamPair encryptedStreams = DataTransferEncryptor
          .getEncryptedStreams(unbufOut, unbufIn,
              nnc.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
        HdfsConstants.IO_FILE_BUFFER_SIZE));
    in = new DataInputStream(new BufferedInputStream(unbufIn,
        HdfsConstants.IO_FILE_BUFFER_SIZE));
    
    sendRequest(out);
    receiveResponse(in);
    bytesMoved.inc(block.getNumBytes());
    LOG.info("Moving block " + block.getBlock().getBlockId() +
        " from " + source.getDisplayName() + " to " +
        target.getDisplayName() + " through " +
        proxySource.getDisplayName() +
        " is succeeded.");
  } catch (IOException e) {
    LOG.warn("Error moving block " + block.getBlockId() +
        " from " + source.getDisplayName() + " to " +
        target.getDisplayName() + " through " +
        proxySource.getDisplayName() +
        ": " + e.getMessage());
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(in);
    IOUtils.closeSocket(sock);
    
    proxySource.removePendingBlock(this);
    target.removePendingBlock(this);

    synchronized (this) {
      reset();
    }
    synchronized (Balancer.this) {
      Balancer.this.notifyAll();
    }
  }
}
 
Developer ID: hopshadoop, Project: hops, Lines of code: 54, Source file: Balancer.java


Example 6: transfer

import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor; // import the required package/class
private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  //transfer replica to the new datanode
  Socket sock = null;
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock = createSocketForPipeline(src, 2, dfsClient);
    final long writeTimeout = dfsClient.getDatanodeWriteTimeout(2);
    
    OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
    InputStream unbufIn = NetUtils.getInputStream(sock);
    if (dfsClient.shouldEncryptData() && 
        !dfsClient.trustedChannelResolver.isTrusted(sock.getInetAddress())) {
      IOStreamPair encryptedStreams =
          DataTransferEncryptor.getEncryptedStreams(
              unbufOut, unbufIn, dfsClient.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
        HdfsConstants.SMALL_BUFFER_SIZE));
    in = new DataInputStream(unbufIn);

    //send the TRANSFER_BLOCK request
    new Sender(out).transferBlock(block, blockToken, dfsClient.clientName,
        targets);
    out.flush();

    //ack
    BlockOpResponseProto response =
      BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
    if (SUCCESS != response.getStatus()) {
      throw new IOException("Failed to add a datanode");
    }
  } finally {
    IOUtils.closeStream(in);
    IOUtils.closeStream(out);
    IOUtils.closeSocket(sock);
  }
}
 
Developer ID: Seagate, Project: hadoop-on-lustre2, Lines of code: 42, Source file: DFSOutputStream.java


Example 7: dispatch

import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor; // import the required package/class
private void dispatch() {
  Socket sock = new Socket();
  DataOutputStream out = null;
  DataInputStream in = null;
  try {
    sock.connect(
        NetUtils.createSocketAddr(target.datanode.getXferAddr()),
        HdfsServerConstants.READ_TIMEOUT);
    /* Unfortunately we don't have a good way to know if the Datanode is
     * taking a really long time to move a block, OR something has
     * gone wrong and it's never going to finish. To deal with this 
     * scenario, we set a long timeout (20 minutes) to avoid hanging
     * the balancer indefinitely.
     */
    sock.setSoTimeout(BLOCK_MOVE_READ_TIMEOUT);

    sock.setKeepAlive(true);
    
    OutputStream unbufOut = sock.getOutputStream();
    InputStream unbufIn = sock.getInputStream();
    if (nnc.getDataEncryptionKey() != null) {
      IOStreamPair encryptedStreams =
          DataTransferEncryptor.getEncryptedStreams(
              unbufOut, unbufIn, nnc.getDataEncryptionKey());
      unbufOut = encryptedStreams.out;
      unbufIn = encryptedStreams.in;
    }
    out = new DataOutputStream(new BufferedOutputStream(unbufOut,
        HdfsConstants.IO_FILE_BUFFER_SIZE));
    in = new DataInputStream(new BufferedInputStream(unbufIn,
        HdfsConstants.IO_FILE_BUFFER_SIZE));
    
    sendRequest(out);
    receiveResponse(in);
    bytesMoved.inc(block.getNumBytes());
    LOG.info("Successfully moved " + this);
  } catch (IOException e) {
    LOG.warn("Failed to move " + this + ": " + e.getMessage());
    /* proxy or target may have an issue, insert a small delay
     * before using these nodes further. This avoids a potential storm
     * of "threads quota exceeded" Warnings when the balancer
     * gets out of sync with work going on in datanode.
     */
    proxySource.activateDelay(DELAY_AFTER_ERROR);
    target.activateDelay(DELAY_AFTER_ERROR);
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(in);
    IOUtils.closeSocket(sock);
    
    proxySource.removePendingBlock(this);
    target.removePendingBlock(this);

    synchronized (this) {
      reset();
    }
    synchronized (Balancer.this) {
      Balancer.this.notifyAll();
    }
  }
}
 
Developer ID: Seagate, Project: hadoop-on-lustre2, Lines of code: 62, Source file: Balancer.java



Note: the org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferEncryptor examples in this article were collected from open-source projects hosted on platforms such as GitHub. The snippets were selected from code contributed by the projects' developers; copyright remains with the original authors. For redistribution and use, refer to each project's license. Do not reproduce without permission.

