
Java Op Class Code Examples


This article collects typical usage examples of the Java Op class from org.apache.hadoop.hdfs.protocol.datatransfer. If you have been wondering what the Op class does and how to use it, the curated examples below should help.



The Op class belongs to the org.apache.hadoop.hdfs.protocol.datatransfer package. The following 15 code examples show the class in use, ordered by popularity.
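Before the examples, a minimal round-trip sketch may help: Op is an enum whose values each carry a one-byte wire code (example 9 below uses Op.WRITE_BLOCK.code directly), and the class exposes read/write helpers for that byte. This sketch is not taken from any project below; the in-memory streams are placeholders for a real socket.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.datatransfer.Op;

public class OpRoundTrip {
  public static void main(String[] args) throws IOException {
    // Op.write emits the enum value's single op-code byte.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    Op.TRANSFER_BLOCK.write(out);

    // Op.read consumes one byte and maps it back to the enum value.
    DataInputStream in = new DataInputStream(
        new ByteArrayInputStream(bytes.toByteArray()));
    Op op = Op.read(in);
    System.out.println(op + ", wire code = " + op.code);
  }
}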

Example 1: transferBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void transferBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final StorageType[] targetStorageTypes) throws IOException {
  checkAccess(socketOut, true, blk, blockToken,
      Op.TRANSFER_BLOCK, BlockTokenSecretManager.AccessMode.COPY);
  previousOpClientName = clientName;
  updateCurrentThreadName(Op.TRANSFER_BLOCK + " " + blk);

  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  try {
    datanode.transferReplicaForPipelineRecovery(blk, targets,
        targetStorageTypes, clientName);
    writeResponse(Status.SUCCESS, null, out);
  } catch (IOException ioe) {
    LOG.info("transferBlock " + blk + " received exception " + ioe);
    incrDatanodeNetworkErrors();
    throw ioe;
  } finally {
    IOUtils.closeStream(out);
  }
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: DataXceiver.java


Example 2: transferBlock (same handler in a variant that uses BlockTokenIdentifier.AccessMode instead of BlockTokenSecretManager.AccessMode)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void transferBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final StorageType[] targetStorageTypes) throws IOException {
  checkAccess(socketOut, true, blk, blockToken,
      Op.TRANSFER_BLOCK, BlockTokenIdentifier.AccessMode.COPY);
  previousOpClientName = clientName;
  updateCurrentThreadName(Op.TRANSFER_BLOCK + " " + blk);

  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  try {
    datanode.transferReplicaForPipelineRecovery(blk, targets,
        targetStorageTypes, clientName);
    writeResponse(Status.SUCCESS, null, out);
  } catch (IOException ioe) {
    LOG.info("transferBlock " + blk + " received exception " + ioe);
    incrDatanodeNetworkErrors();
    throw ioe;
  } finally {
    IOUtils.closeStream(out);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 26, Source: DataXceiver.java


Example 3: transferBlock (older variant without the StorageType[] parameter)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void transferBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets) throws IOException {
  checkAccess(socketOut, true, blk, blockToken,
      Op.TRANSFER_BLOCK, BlockTokenSecretManager.AccessMode.COPY);
  previousOpClientName = clientName;
  updateCurrentThreadName(Op.TRANSFER_BLOCK + " " + blk);

  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  try {
    datanode.transferReplicaForPipelineRecovery(blk, targets, clientName);
    writeResponse(Status.SUCCESS, null, out);
  } finally {
    IOUtils.closeStream(out);
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 20, Source: DataXceiver.java


Example 4: transferBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void transferBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken,
    final String clientName,
    final DatanodeInfo[] targets,
    final StorageType[] targetStorageTypes) throws IOException {
  checkAccess(socketOut, true, blk, blockToken,
      Op.TRANSFER_BLOCK, BlockTokenSecretManager.AccessMode.COPY);
  previousOpClientName = clientName;
  updateCurrentThreadName(Op.TRANSFER_BLOCK + " " + blk);

  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  try {
    datanode.transferReplicaForPipelineRecovery(blk, targets,
        targetStorageTypes, clientName);
    writeResponse(Status.SUCCESS, null, out);
  } finally {
    IOUtils.closeStream(out);
  }
}
 
Developer: yncxcw, Project: FlexMap, Lines: 22, Source: DataXceiver.java


Example 5: transferBlock (older variant without StorageType[]; passes null to checkAccess so the reply stream is created on demand)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void transferBlock(final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> blockToken, final String clientName,
    final DatanodeInfo[] targets) throws IOException {
  checkAccess(null, true, blk, blockToken, Op.TRANSFER_BLOCK,
      BlockTokenSecretManager.AccessMode.COPY);
  previousOpClientName = clientName;
  updateCurrentThreadName(Op.TRANSFER_BLOCK + " " + blk);

  final DataOutputStream out = new DataOutputStream(getOutputStream());
  try {
    datanode.transferReplicaForPipelineRecovery(blk, targets, clientName);
    writeResponse(Status.SUCCESS, null, out);
  } finally {
    IOUtils.closeStream(out);
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 18, Source: DataXceiver.java


Example 6: checkAccess

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
private void checkAccess(OutputStream out, final boolean reply, 
    final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> t,
    final Op op,
    final BlockTokenSecretManager.AccessMode mode) throws IOException {
  if (datanode.isBlockTokenEnabled) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Checking block access token for block '" + blk.getBlockId()
          + "' with mode '" + mode + "'");
    }
    try {
      datanode.blockPoolTokenSecretManager.checkAccess(t, null, blk, mode);
    } catch(InvalidToken e) {
      try {
        if (reply) {
          BlockOpResponseProto.Builder resp = BlockOpResponseProto.newBuilder()
            .setStatus(ERROR_ACCESS_TOKEN);
          if (mode == BlockTokenSecretManager.AccessMode.WRITE) {
            DatanodeRegistration dnR = 
              datanode.getDNRegistrationForBP(blk.getBlockPoolId());
            // NB: Unconditionally using the xfer addr w/o hostname
            resp.setFirstBadLink(dnR.getXferAddr());
          }
          resp.build().writeDelimitedTo(out);
          out.flush();
        }
        LOG.warn("Block token verification failed: op=" + op
            + ", remoteAddress=" + remoteAddress
            + ", message=" + e.getLocalizedMessage());
        throw e;
      } finally {
        IOUtils.closeStream(out);
      }
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 37, Source: DataXceiver.java


Example 7: checkAccess (variant using BlockTokenIdentifier.AccessMode)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
private void checkAccess(OutputStream out, final boolean reply, 
    final ExtendedBlock blk,
    final Token<BlockTokenIdentifier> t,
    final Op op,
    final BlockTokenIdentifier.AccessMode mode) throws IOException {
  if (datanode.isBlockTokenEnabled) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Checking block access token for block '" + blk.getBlockId()
          + "' with mode '" + mode + "'");
    }
    try {
      datanode.blockPoolTokenSecretManager.checkAccess(t, null, blk, mode);
    } catch(InvalidToken e) {
      try {
        if (reply) {
          BlockOpResponseProto.Builder resp = BlockOpResponseProto.newBuilder()
            .setStatus(ERROR_ACCESS_TOKEN);
          if (mode == BlockTokenIdentifier.AccessMode.WRITE) {
            DatanodeRegistration dnR = 
              datanode.getDNRegistrationForBP(blk.getBlockPoolId());
            // NB: Unconditionally using the xfer addr w/o hostname
            resp.setFirstBadLink(dnR.getXferAddr());
          }
          resp.build().writeDelimitedTo(out);
          out.flush();
        }
        LOG.warn("Block token verification failed: op=" + op
            + ", remoteAddress=" + remoteAddress
            + ", message=" + e.getLocalizedMessage());
        throw e;
      } finally {
        IOUtils.closeStream(out);
      }
    }
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 37, Source: DataXceiver.java


Example 8: checkAccess (variant that lazily creates the reply stream when out is null)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
private void checkAccess(DataOutputStream out, final boolean reply,
    final ExtendedBlock blk, final Token<BlockTokenIdentifier> t, final Op op,
    final BlockTokenSecretManager.AccessMode mode) throws IOException {
  if (datanode.isBlockTokenEnabled) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Checking block access token for block '" + blk.getBlockId() +
          "' with mode '" + mode + "'");
    }
    try {
      datanode.blockPoolTokenSecretManager.checkAccess(t, null, blk, mode);
    } catch (InvalidToken e) {
      try {
        if (reply) {
          if (out == null) {
            out = new DataOutputStream(
                NetUtils.getOutputStream(s, dnConf.socketWriteTimeout));
          }
          
          BlockOpResponseProto.Builder resp =
              BlockOpResponseProto.newBuilder().setStatus(ERROR_ACCESS_TOKEN);
          if (mode == BlockTokenSecretManager.AccessMode.WRITE) {
            DatanodeRegistration dnR =
                datanode.getDNRegistrationForBP(blk.getBlockPoolId());
            // NB: Unconditionally using the xfer addr w/o hostname
            resp.setFirstBadLink(dnR.getXferAddr());
          }
          resp.build().writeDelimitedTo(out);
          out.flush();
        }
        LOG.warn(
            "Block token verification failed: op=" + op + ", remoteAddress=" +
                remoteAddress + ", message=" + e.getLocalizedMessage());
        throw e;
      } finally {
        IOUtils.closeStream(out);
      }
    }
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 40, Source: DataXceiver.java


Example 9: requestWriteBlock

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
private static void requestWriteBlock(Channel channel, Enum<?> storageType,
    OpWriteBlockProto.Builder writeBlockProtoBuilder) throws IOException {
  OpWriteBlockProto proto = STORAGE_TYPE_SETTER.set(writeBlockProtoBuilder, storageType).build();
  int protoLen = proto.getSerializedSize();
  ByteBuf buffer =
      channel.alloc().buffer(3 + CodedOutputStream.computeRawVarint32Size(protoLen) + protoLen);
  buffer.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  buffer.writeByte(Op.WRITE_BLOCK.code);
  proto.writeDelimitedTo(new ByteBufOutputStream(buffer));
  channel.writeAndFlush(buffer);
}
 
Developer: apache, Project: hbase, Lines: 12, Source: FanOutOneBlockAsyncDFSOutputHelper.java
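The frame built above is: the two-byte DataTransferProtocol.DATA_TRANSFER_VERSION, the one-byte Op.WRITE_BLOCK code, then the varint-length-delimited OpWriteBlockProto (written with writeDelimitedTo). Below is a hedged sketch of how the receiving side could decode that header; the helper name is hypothetical, and the real parsing lives inside the Hadoop datanode.

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtocol;
import org.apache.hadoop.hdfs.protocol.datatransfer.Op;
import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto;

// Hypothetical helper mirroring the frame written by requestWriteBlock above.
static OpWriteBlockProto readWriteBlockRequest(InputStream socketIn)
    throws IOException {
  DataInputStream in = new DataInputStream(socketIn);
  short version = in.readShort();  // two-byte protocol version, written first
  if (version != DataTransferProtocol.DATA_TRANSFER_VERSION) {
    throw new IOException("Unexpected data transfer version: " + version);
  }
  Op op = Op.read(in);             // the one-byte op code
  if (op != Op.WRITE_BLOCK) {
    throw new IOException("Expected WRITE_BLOCK, got " + op);
  }
  return OpWriteBlockProto.parseDelimitedFrom(in);  // varint length + body
}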


Example 10: blockChecksum

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  // client side now can specify a range of the block for checksum
  long requestLength = block.getNumBytes();
  Preconditions.checkArgument(requestLength >= 0);
  long visibleLength = datanode.data.getReplicaVisibleLength(block);
  boolean partialBlk = requestLength < visibleLength;

  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = datanode.data
      .getMetaDataInputStream(block);
  
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));
  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader
        .readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int csize = checksum.getChecksumSize();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = csize <= 0 ? 0 : 
      (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) / csize;

    final MD5Hash md5 = partialBlk && crcPerBlock > 0 ? 
        calcPartialBlockChecksum(block, requestLength, checksum, checksumIn)
          : MD5Hash.digest(checksumIn);
    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType())))
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } catch (IOException ioe) {
    LOG.info("blockChecksum " + block + " received exception " + ioe);
    incrDatanodeNetworkErrors();
    throw ioe;
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: naver, Project: hadoop, Lines: 63, Source: DataXceiver.java
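For context, the MD5-of-CRCs reply that blockChecksum writes for each block is what the HDFS client aggregates when an application requests a file checksum. Here is a hedged sketch of that client-side entry point; the path is hypothetical, and a reachable HDFS configuration is assumed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileChecksumDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();  // assumes HDFS settings on the classpath
    try (FileSystem fs = FileSystem.get(conf)) {
      // Aggregates the per-block replies produced by handlers like
      // blockChecksum above into a single file-level checksum.
      FileChecksum checksum = fs.getFileChecksum(new Path("/data/example.txt"));  // hypothetical path
      if (checksum != null) {  // null on file systems without checksum support
        System.out.println(checksum.getAlgorithmName() + ": " + checksum);
      }
    }
  }
}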


Example 11: blockChecksum (variant using PBHelperClient and a configurable ioFileBufferSize)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenIdentifier.AccessMode.READ);
  // client side now can specify a range of the block for checksum
  long requestLength = block.getNumBytes();
  Preconditions.checkArgument(requestLength >= 0);
  long visibleLength = datanode.data.getReplicaVisibleLength(block);
  boolean partialBlk = requestLength < visibleLength;

  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = datanode.data
      .getMetaDataInputStream(block);
  
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, ioFileBufferSize));
  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader
        .readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int csize = checksum.getChecksumSize();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = csize <= 0 ? 0 : 
      (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) / csize;

    final MD5Hash md5 = partialBlk && crcPerBlock > 0 ? 
        calcPartialBlockChecksum(block, requestLength, checksum, checksumIn)
          : MD5Hash.digest(checksumIn);
    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelperClient.convert(checksum.getChecksumType())))
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } catch (IOException ioe) {
    LOG.info("blockChecksum " + block + " received exception " + ioe);
    incrDatanodeNetworkErrors();
    throw ioe;
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 63, Source: DataXceiver.java


Example 12: blockChecksum (older variant without partial-block checksum support)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = 
    datanode.data.getMetaDataInputStream(block);
  final DataInputStream checksumIn = new DataInputStream(new BufferedInputStream(
      metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));

  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader.readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum(); 
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = (metadataIn.getLength()
        - BlockMetadataHeader.getHeaderSize())/checksum.getChecksumSize();
    
    //compute block checksum
    final MD5Hash md5 = MD5Hash.digest(checksumIn);

    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType()))
        )
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 52, Source: DataXceiver.java


Example 13: blockChecksum

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  // client side now can specify a range of the block for checksum
  long requestLength = block.getNumBytes();
  Preconditions.checkArgument(requestLength >= 0);
  long visibleLength = datanode.data.getReplicaVisibleLength(block);
  boolean partialBlk = requestLength < visibleLength;

  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = datanode.data
      .getMetaDataInputStream(block);
  
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));
  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader
        .readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int csize = checksum.getChecksumSize();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = csize <= 0 ? 0 : 
      (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) / csize;

    final MD5Hash md5 = partialBlk && crcPerBlock > 0 ? 
        calcPartialBlockChecksum(block, requestLength, checksum, checksumIn)
          : MD5Hash.digest(checksumIn);
    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType())))
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: yncxcw, Project: FlexMap, Lines: 59, Source: DataXceiver.java


Example 14: blockChecksum

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(getOutputStream());
  checkAccess(out, true, block, blockToken, Op.BLOCK_CHECKSUM,
      BlockTokenSecretManager.AccessMode.READ);
  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn =
      datanode.data.getMetaDataInputStream(block);
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));

  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header =
        BlockMetadataHeader.readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock =
        (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) /
            checksum.getChecksumSize();
    
    //compute block checksum
    final MD5Hash md5 = MD5Hash.digest(checksumIn);

    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC +
          ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder().setStatus(SUCCESS).setChecksumResponse(
        OpBlockChecksumResponseProto.newBuilder().setBytesPerCrc(bytesPerCRC)
            .setCrcPerBlock(crcPerBlock)
            .setMd5(ByteString.copyFrom(md5.getDigest()))
            .setCrcType(PBHelper.convert(checksum.getChecksumType()))).build()
        .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: hopshadoop, Project: hops, Lines: 49, Source: DataXceiver.java


Example 15: blockChecksum (guards against a zero checksum size when computing crcPerBlock)

import org.apache.hadoop.hdfs.protocol.datatransfer.Op; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = 
    datanode.data.getMetaDataInputStream(block);
  final DataInputStream checksumIn = new DataInputStream(new BufferedInputStream(
      metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));

  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader.readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum(); 
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = checksum.getChecksumSize() > 0 
            ? (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize())/checksum.getChecksumSize()
            : 0;
    
    //compute block checksum
    final MD5Hash md5 = MD5Hash.digest(checksumIn);

    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType()))
        )
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 53, Source: DataXceiver.java



Note: The org.apache.hadoop.hdfs.protocol.datatransfer.Op examples above were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright in the code snippets remains with the original authors; consult each project's license before reusing or redistributing the code. Please do not republish without permission.

