Java LengthInputStream Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream. If you have been wondering what the LengthInputStream class is for, how to use it, or what it looks like in real code, the examples selected below should help.



The LengthInputStream class belongs to the org.apache.hadoop.hdfs.server.datanode.fsdataset package. A total of 12 code examples of the class are shown below, sorted by popularity by default.
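
Before going through the examples, the sketch below (not one of the collected snippets; the file path is a placeholder) illustrates the basic contract of the class as it is used throughout: the constructor wraps an existing InputStream together with an explicitly supplied length, and getLength() / getWrappedStream() hand both back to callers.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream;

public class LengthInputStreamSketch {
  public static void main(String[] args) throws IOException {
    File meta = new File("/tmp/blk_1234_1001.meta"); // hypothetical block meta file

    // Wrap the raw stream together with its known length, as FsDatasetImpl does in Example 2.
    LengthInputStream lin =
        new LengthInputStream(new FileInputStream(meta), meta.length());
    try {
      System.out.println("declared length = " + lin.getLength());
      // The wrapped stream stays reachable for callers that need the concrete type,
      // as in DatanodeUtil.getMetaDataInputStream (Example 1).
      System.out.println("wrapped stream  = " + lin.getWrappedStream());
    } finally {
      lin.close();
    }
  }
}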

Example 1: getMetaDataInputStream

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
/**
 * @return the FileInputStream for the meta data of the given block.
 * @throws FileNotFoundException
 *           if the file is not found.
 * @throws ClassCastException
 *           if the underlying input stream is not a FileInputStream.
 */
public static FileInputStream getMetaDataInputStream(
    ExtendedBlock b, FsDatasetSpi<?> data) throws IOException {
  final LengthInputStream lin = data.getMetaDataInputStream(b);
  if (lin == null) {
    throw new FileNotFoundException("Meta file for " + b + " not found.");
  }
  return (FileInputStream)lin.getWrappedStream();
}
 
Developer: naver | Project: hadoop | Lines: 16 | Source: DatanodeUtil.java


Example 2: getMetaDataInputStream

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override // FsDatasetSpi
public LengthInputStream getMetaDataInputStream(ExtendedBlock b)
    throws IOException {
  File meta = FsDatasetUtil.getMetaFile(getBlockFile(b), b.getGenerationStamp());
  if (meta == null || !meta.exists()) {
    return null;
  }
  if (isNativeIOAvailable) {
    return new LengthInputStream(
        NativeIO.getShareDeleteFileInputStream(meta),
        meta.length());
  }
  return new LengthInputStream(new FileInputStream(meta), meta.length());
}
 
Developer: naver | Project: hadoop | Lines: 15 | Source: FsDatasetImpl.java


Example 3: getMetaDataInputStream

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override // FsDatasetSpi
public synchronized LengthInputStream getMetaDataInputStream(ExtendedBlock b
    ) throws IOException {
  final Map<Block, BInfo> map = getMap(b.getBlockPoolId());
  BInfo binfo = map.get(b.getLocalBlock());
  if (binfo == null) {
    throw new IOException("No such Block " + b );  
  }
  if (!binfo.finalized) {
    throw new IOException("Block " + b + 
        " is being written, its meta cannot be read");
  }
  final SimulatedInputStream sin = binfo.getMetaIStream();
  return new LengthInputStream(sin, sin.getLength());
}
 
Developer: naver | Project: hadoop | Lines: 16 | Source: SimulatedFSDataset.java


Example 4: getMetaDataInputStream

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override // FsDatasetSpi
public LengthInputStream getMetaDataInputStream(ExtendedBlock b)
    throws IOException {
  File meta =
      FsDatasetUtil.getMetaFile(getBlockFile(b), b.getGenerationStamp());
  if (meta == null || !meta.exists()) {
    return null;
  }
  return new LengthInputStream(new FileInputStream(meta), meta.length());
}
 
Developer: hopshadoop | Project: hops | Lines: 11 | Source: FsDatasetImpl.java


Example 5: getMetaDataInputStream

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override // FsDatasetSpi
public synchronized LengthInputStream getMetaDataInputStream(ExtendedBlock b)
    throws IOException {
  final Map<Block, BInfo> map = getMap(b.getBlockPoolId());
  BInfo binfo = map.get(b.getLocalBlock());
  if (binfo == null) {
    throw new IOException("No such Block " + b);
  }
  if (!binfo.finalized) {
    throw new IOException("Block " + b +
        " is being written, its meta cannot be read");
  }
  final SimulatedInputStream sin = binfo.getMetaIStream();
  return new LengthInputStream(sin, sin.getLength());
}
 
Developer: hopshadoop | Project: hops | Lines: 16 | Source: SimulatedFSDataset.java


Example 6: blockChecksum

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  // client side now can specify a range of the block for checksum
  long requestLength = block.getNumBytes();
  Preconditions.checkArgument(requestLength >= 0);
  long visibleLength = datanode.data.getReplicaVisibleLength(block);
  boolean partialBlk = requestLength < visibleLength;

  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = datanode.data
      .getMetaDataInputStream(block);
  
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));
  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader
        .readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int csize = checksum.getChecksumSize();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = csize <= 0 ? 0 : 
      (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) / csize;

    final MD5Hash md5 = partialBlk && crcPerBlock > 0 ? 
        calcPartialBlockChecksum(block, requestLength, checksum, checksumIn)
          : MD5Hash.digest(checksumIn);
    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType())))
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } catch (IOException ioe) {
    LOG.info("blockChecksum " + block + " received exception " + ioe);
    incrDatanodeNetworkErrors();
    throw ioe;
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: naver | Project: hadoop | Lines: 63 | Source: DataXceiver.java
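
As an aside, the crcPerBlock value in Example 6 is simply the number of stored checksum entries, derived from the metadata length that LengthInputStream reports. The following is a minimal illustrative sketch with hypothetical numbers; the 7-byte header size is an assumption, not read from Hadoop here.

public class CrcPerBlockSketch {
  public static void main(String[] args) {
    long metaLength   = 524_295L; // what LengthInputStream.getLength() might report for a .meta file
    long headerSize   = 7L;       // assumed BlockMetadataHeader.getHeaderSize() value
    int  checksumSize = 4;        // CRC32/CRC32C checksum width in bytes
    int  bytesPerCRC  = 512;      // bytes of block data covered by each checksum

    // Same formula as Example 6: guard against a zero checksum size, then divide.
    long crcPerBlock = checksumSize <= 0 ? 0 : (metaLength - headerSize) / checksumSize;
    System.out.println("crcPerBlock = " + crcPerBlock);                           // 131072
    System.out.println("covers " + crcPerBlock * bytesPerCRC + " bytes of data"); // 67108864 (64 MiB)
  }
}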


Example 7: getMetaDataInputStream

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override
public LengthInputStream getMetaDataInputStream(ExtendedBlock b)
    throws IOException {
  return new LengthInputStream(null, 0);
}
 
Developer: naver | Project: hadoop | Lines: 6 | Source: ExternalDatasetImpl.java


Example 8: blockChecksum

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenIdentifier.AccessMode.READ);
  // client side now can specify a range of the block for checksum
  long requestLength = block.getNumBytes();
  Preconditions.checkArgument(requestLength >= 0);
  long visibleLength = datanode.data.getReplicaVisibleLength(block);
  boolean partialBlk = requestLength < visibleLength;

  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = datanode.data
      .getMetaDataInputStream(block);
  
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, ioFileBufferSize));
  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader
        .readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int csize = checksum.getChecksumSize();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = csize <= 0 ? 0 : 
      (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) / csize;

    final MD5Hash md5 = partialBlk && crcPerBlock > 0 ? 
        calcPartialBlockChecksum(block, requestLength, checksum, checksumIn)
          : MD5Hash.digest(checksumIn);
    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelperClient.convert(checksum.getChecksumType())))
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } catch (IOException ioe) {
    LOG.info("blockChecksum " + block + " received exception " + ioe);
    incrDatanodeNetworkErrors();
    throw ioe;
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 63 | Source: DataXceiver.java


Example 9: blockChecksum

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = 
    datanode.data.getMetaDataInputStream(block);
  final DataInputStream checksumIn = new DataInputStream(new BufferedInputStream(
      metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));

  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader.readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum(); 
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = (metadataIn.getLength()
        - BlockMetadataHeader.getHeaderSize())/checksum.getChecksumSize();
    
    //compute block checksum
    final MD5Hash md5 = MD5Hash.digest(checksumIn);

    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType()))
        )
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 52 | Source: DataXceiver.java


Example 10: blockChecksum

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  // client side now can specify a range of the block for checksum
  long requestLength = block.getNumBytes();
  Preconditions.checkArgument(requestLength >= 0);
  long visibleLength = datanode.data.getReplicaVisibleLength(block);
  boolean partialBlk = requestLength < visibleLength;

  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = datanode.data
      .getMetaDataInputStream(block);
  
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));
  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader
        .readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int csize = checksum.getChecksumSize();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = csize <= 0 ? 0 : 
      (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) / csize;

    final MD5Hash md5 = partialBlk && crcPerBlock > 0 ? 
        calcPartialBlockChecksum(block, requestLength, checksum, checksumIn)
          : MD5Hash.digest(checksumIn);
    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType())))
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: yncxcw | Project: FlexMap | Lines: 59 | Source: DataXceiver.java


Example 11: blockChecksum

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(getOutputStream());
  checkAccess(out, true, block, blockToken, Op.BLOCK_CHECKSUM,
      BlockTokenSecretManager.AccessMode.READ);
  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn =
      datanode.data.getMetaDataInputStream(block);
  final DataInputStream checksumIn = new DataInputStream(
      new BufferedInputStream(metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));

  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header =
        BlockMetadataHeader.readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum();
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock =
        (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize()) /
            checksum.getChecksumSize();
    
    //compute block checksum
    final MD5Hash md5 = MD5Hash.digest(checksumIn);

    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC +
          ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder().setStatus(SUCCESS).setChecksumResponse(
        OpBlockChecksumResponseProto.newBuilder().setBytesPerCrc(bytesPerCRC)
            .setCrcPerBlock(crcPerBlock)
            .setMd5(ByteString.copyFrom(md5.getDigest()))
            .setCrcType(PBHelper.convert(checksum.getChecksumType()))).build()
        .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: hopshadoop | Project: hops | Lines: 49 | Source: DataXceiver.java


Example 12: blockChecksum

import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream; // import the required package/class
@Override
public void blockChecksum(final ExtendedBlock block,
    final Token<BlockTokenIdentifier> blockToken) throws IOException {
  final DataOutputStream out = new DataOutputStream(
      getOutputStream());
  checkAccess(out, true, block, blockToken,
      Op.BLOCK_CHECKSUM, BlockTokenSecretManager.AccessMode.READ);
  updateCurrentThreadName("Reading metadata for block " + block);
  final LengthInputStream metadataIn = 
    datanode.data.getMetaDataInputStream(block);
  final DataInputStream checksumIn = new DataInputStream(new BufferedInputStream(
      metadataIn, HdfsConstants.IO_FILE_BUFFER_SIZE));

  updateCurrentThreadName("Getting checksum for block " + block);
  try {
    //read metadata file
    final BlockMetadataHeader header = BlockMetadataHeader.readHeader(checksumIn);
    final DataChecksum checksum = header.getChecksum(); 
    final int bytesPerCRC = checksum.getBytesPerChecksum();
    final long crcPerBlock = checksum.getChecksumSize() > 0 
            ? (metadataIn.getLength() - BlockMetadataHeader.getHeaderSize())/checksum.getChecksumSize()
            : 0;
    
    //compute block checksum
    final MD5Hash md5 = MD5Hash.digest(checksumIn);

    if (LOG.isDebugEnabled()) {
      LOG.debug("block=" + block + ", bytesPerCRC=" + bytesPerCRC
          + ", crcPerBlock=" + crcPerBlock + ", md5=" + md5);
    }

    //write reply
    BlockOpResponseProto.newBuilder()
      .setStatus(SUCCESS)
      .setChecksumResponse(OpBlockChecksumResponseProto.newBuilder()             
        .setBytesPerCrc(bytesPerCRC)
        .setCrcPerBlock(crcPerBlock)
        .setMd5(ByteString.copyFrom(md5.getDigest()))
        .setCrcType(PBHelper.convert(checksum.getChecksumType()))
        )
      .build()
      .writeDelimitedTo(out);
    out.flush();
  } finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(checksumIn);
    IOUtils.closeStream(metadataIn);
  }

  //update metrics
  datanode.metrics.addBlockChecksumOp(elapsed());
}
 
Developer: Seagate | Project: hadoop-on-lustre2 | Lines: 53 | Source: DataXceiver.java



Note: the org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and redistribution and use should follow each project's license. Please do not republish without permission.

