
Java BlockCompressedStreamConstants Class Code Examples


This article collects typical usage examples of the Java class htsjdk.samtools.util.BlockCompressedStreamConstants. If you are wondering what BlockCompressedStreamConstants is for, how to use it, or what real-world code using it looks like, the curated class examples below should help.



The BlockCompressedStreamConstants class belongs to the htsjdk.samtools.util package. Thirteen code examples of the class are shown below, sorted by popularity by default. The class holds constants describing the BGZF (blocked gzip) format used by BAM and bgzipped VCF files; as the examples show, the ones used most often are EMPTY_GZIP_BLOCK (the BGZF end-of-file terminator), MAX_COMPRESSED_BLOCK_SIZE (for sizing read buffers), and the gzip header fields used to recognize block-compressed streams.
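
As a quick orientation before the examples, here is a minimal, self-contained sketch (not taken from any of the projects below) that uses the two constants appearing most often in them: MAX_COMPRESSED_BLOCK_SIZE for sizing a probe/read buffer and EMPTY_GZIP_BLOCK, the 28-byte BGZF end-of-file terminator. The output path is hypothetical and used only for illustration.

import htsjdk.samtools.util.BlockCompressedStreamConstants;

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class BgzfConstantsDemo {
    public static void main(final String[] args) throws IOException {
        // Readers in the examples below size their buffers to hold at least one full compressed BGZF block.
        final int bufferSize = BlockCompressedStreamConstants.MAX_COMPRESSED_BLOCK_SIZE;
        System.out.println("Max compressed BGZF block size: " + bufferSize);

        // Writers in the examples below append this empty gzip block so the result is a
        // well-terminated BGZF/BAM file. "example.bam" is a hypothetical path.
        try (OutputStream out = Files.newOutputStream(Paths.get("example.bam"),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            out.write(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK);
        }
    }
}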

Example 1: createStream

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
private InputStream createStream(final FileInputStream fileStream) throws IOException {
    // if this looks like a block compressed file and it in fact is, we will use it
    // otherwise we will use the file as is
    if (!AbstractFeatureReader.hasBlockCompressedExtension(inputFile)) {
        return fileStream;
    }

    // make a buffered stream to test that this is in fact a valid block compressed file
    final int bufferSize = Math.max(Defaults.BUFFER_SIZE,
            BlockCompressedStreamConstants.MAX_COMPRESSED_BLOCK_SIZE);
    final BufferedInputStream bufferedStream = new BufferedInputStream(fileStream, bufferSize);

    if (!BlockCompressedInputStream.isValidFile(bufferedStream)) {
        throw new TribbleException.MalformedFeatureFile(
                "Input file is not in valid block compressed format.", inputFile.getAbsolutePath());
    }

    final ISeekableStreamFactory ssf = SeekableStreamFactory.getInstance();
    // if we got here, the file is valid, make a SeekableStream for the BlockCompressedInputStream
    // to read from
    final SeekableStream seekableStream =
            ssf.getBufferedStream(ssf.getStreamFor(inputFile.getAbsolutePath()));
    return new BlockCompressedInputStream(seekableStream);
}
 
Developer: react-dev26 | Project: NGB-master | Lines: 25 | Source file: FeatureIterator.java


Example 2: test

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
@Test
public void test() throws IOException {
  Configuration conf = new Configuration();
  Path path = new Path(file.toURI());
  FSDataInputStream fsDataInputStream = path.getFileSystem(conf).open(path);
  BGZFSplitGuesser bgzfSplitGuesser = new BGZFSplitGuesser(fsDataInputStream);
  LinkedList<Long> boundaries = new LinkedList<>();
  long start = 1;
  while (true) {
    long end = file.length();
    long nextStart = bgzfSplitGuesser.guessNextBGZFBlockStart(start, end);
    if (nextStart == end) {
      break;
    }
    boundaries.add(nextStart);
    canReadFromBlockStart(nextStart);
    start = nextStart + 1;
  }
  assertEquals(firstSplit, (long) boundaries.getFirst());
  assertEquals(lastSplit, (long) boundaries.getLast());

  assertEquals("Last block start is terminator gzip block",
      file.length() - BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK.length,
      (long) boundaries.get(boundaries.size() - 1));
}
 
Developer: HadoopGenomics | Project: Hadoop-BAM | Lines: 26 | Source file: TestBGZFSplitGuesser.java


Example 3: mergeBAMBlockStream

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
private ByteArrayInputStream mergeBAMBlockStream(
    final File blockStreamFile,
    final SAMFileHeader header) throws IOException
{
    // assemble a proper BAM file from the block stream shard(s) in
    // order to verify the contents
    final ByteArrayOutputStream bamOutputStream = new ByteArrayOutputStream();

    // write out the bam file header
    new SAMOutputPreparer().prepareForRecords(
        bamOutputStream,
        SAMFormat.BAM,
        header);

    // copy the contents of the block shard(s) written out by the M/R job
    final ByteArrayOutputStream blockOutputStream = new ByteArrayOutputStream();
    Files.copy(blockStreamFile.toPath(), blockOutputStream);
    blockOutputStream.writeTo(bamOutputStream);

    // add the BGZF terminator
    bamOutputStream.write(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK);
    bamOutputStream.close();

    return new ByteArrayInputStream(bamOutputStream.toByteArray());
}
 
Developer: HadoopGenomics | Project: Hadoop-BAM | Lines: 26 | Source file: TestBAMOutputFormat.java


Example 4: mergeSAMInto

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
/**
 * Merges the files in the given directory that have names given by
 * getMergeableWorkFile() into out in the given SAMFormat, using
 * getSAMHeaderMerger().getMergedHeader() as the header. Outputs progress
 * reports if commandName is non-null.
 */
public static void mergeSAMInto(Path out, Path directory, String basePrefix,
    String basePostfix, SAMFormat format, Configuration conf,
    String commandName) throws IOException {

  final OutputStream outs = out.getFileSystem(conf).create(out);

  // First, place the SAM or BAM header.
  //
  // Don't use the returned stream, because we're concatenating directly
  // and don't want to apply another layer of compression to BAM.
  new SAMOutputPreparer().prepareForRecords(outs, format,
      getSAMHeaderMerger(conf).getMergedHeader());

  // Then, the actual SAM or BAM contents.
  mergeInto(outs, directory, basePrefix, basePostfix, conf, commandName);

  // And if BAM, the BGZF terminator.
  if (format == SAMFormat.BAM)
    outs.write(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK);

  outs.close();
}
 
Developer: GenomicParisCentre | Project: eoulsan | Lines: 29 | Source file: HadoopBamUtils.java


Example 5: expandChunks

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
@NotNull
private static List<Chunk> expandChunks(@NotNull final List<Chunk> chunks) {
    final List<Chunk> result = Lists.newArrayList();
    //MIVO: add chunk for header
    // A BGZF virtual file pointer keeps the compressed block address in its upper 48 bits,
    // so shifting a block address left by 16 yields a pointer to offset 0 within that block.
    final long headerEndVirtualPointer = ((long) BlockCompressedStreamConstants.MAX_COMPRESSED_BLOCK_SIZE) << 16;
    result.add(new Chunk(0, headerEndVirtualPointer));
    for (final Chunk chunk : chunks) {
        final long chunkEndBlockAddress = BlockCompressedFilePointerUtil.getBlockAddress(chunk.getChunkEnd());
        // Extend each chunk's end by one maximal compressed block, capped at the largest addressable block.
        final long extendedEndBlockAddress = chunkEndBlockAddress + BlockCompressedStreamConstants.MAX_COMPRESSED_BLOCK_SIZE;
        final long newChunkEnd = extendedEndBlockAddress > MAX_BLOCK_ADDRESS ? MAX_BLOCK_ADDRESS : extendedEndBlockAddress;
        final long chunkEndVirtualPointer = newChunkEnd << 16;
        result.add(new Chunk(chunk.getChunkStart(), chunkEndVirtualPointer));
    }
    return Chunk.optimizeChunkList(result, 0);
}
 
Developer: hartwigmedical | Project: hmftools | Lines: 16 | Source file: BamSlicerApplication.java


Example 6: writeTerminatorBlock

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
private static void writeTerminatorBlock(final OutputStream out, final SAMFormat samOutputFormat) throws IOException {
  if (SAMFormat.CRAM == samOutputFormat) {
    CramIO.issueEOF(CramVersions.DEFAULT_CRAM_VERSION, out); // terminate with CRAM EOF container
  } else {
    out.write(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK); // add the BGZF terminator
  }
}
 
Developer: HadoopGenomics | Project: Hadoop-BAM | Lines: 8 | Source file: SAMFileMerger.java


Example 7: tryBGZIP

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
private static InputStream tryBGZIP(final InputStream in) throws IOException {
    final byte[] buffer = new byte[BlockCompressedStreamConstants.GZIP_BLOCK_PREAMBLE.length];

    final PushbackInputStream push_back = new PushbackInputStream(in, buffer.length + 10);
    final int nReads = push_back.read(buffer);
    push_back.unread(buffer, 0, nReads);

    try {
        // compare the leading bytes against the BGZF gzip header fields
        if (nReads >= buffer.length &&
                buffer[0] == BlockCompressedStreamConstants.GZIP_ID1 &&
                buffer[1] == (byte) BlockCompressedStreamConstants.GZIP_ID2 &&
                buffer[2] == BlockCompressedStreamConstants.GZIP_CM_DEFLATE &&
                buffer[3] == BlockCompressedStreamConstants.GZIP_FLG &&
                buffer[8] == BlockCompressedStreamConstants.GZIP_XFL) {
            return new BlockCompressedInputStream(push_back);
        }
    } catch (final Exception err) {
        // not BGZF; fall back to plain gzip below
    }
    return new GZIPInputStream(push_back);
}
 
Developer: lindenb | Project: jvarkit | Lines: 29 | Source file: IOUtils.java


Example 8: close

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
public void close() throws IOException {
	output.write(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK);
	output.close();
}
 
Developer: genepi | Project: imputationserver | Lines: 5 | Source file: MergedVcfFile.java


Example 9: writeTerminatorBlock

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
private static void writeTerminatorBlock(final OutputStream out) throws IOException {
  out.write(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK); // add the BGZF terminator
}
 
Developer: HadoopGenomics | Project: Hadoop-BAM | Lines: 4 | Source file: VCFFileMerger.java


Example 10: expand

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
@Override
public PCollection<String> expand(PCollectionTuple tuple) {
  final PCollection<HeaderInfo> header = tuple.get(HEADER_TAG);
  final PCollectionView<HeaderInfo> headerView =
      header.apply(View.<HeaderInfo>asSingleton());

  final PCollection<Read> shardedReads = tuple.get(SHARDED_READS_TAG);

  final PCollectionTuple writeBAMFilesResult =
      shardedReads.apply("Write BAM shards", ParDo
        .of(new WriteBAMFn(headerView))
        .withSideInputs(Arrays.asList(headerView))
        .withOutputTags(WriteBAMFn.WRITTEN_BAM_NAMES_TAG, TupleTagList.of(WriteBAMFn.SEQUENCE_SHARD_SIZES_TAG)));

  PCollection<String> writtenBAMShardNames = writeBAMFilesResult.get(WriteBAMFn.WRITTEN_BAM_NAMES_TAG);
  final PCollectionView<Iterable<String>> writtenBAMShardsView =
      writtenBAMShardNames.apply(View.<String>asIterable());

  final PCollection<KV<Integer, Long>> sequenceShardSizes = writeBAMFilesResult.get(WriteBAMFn.SEQUENCE_SHARD_SIZES_TAG);
  final PCollection<KV<Integer, Long>> sequenceShardSizesCombined = sequenceShardSizes.apply(
      Combine.<Integer, Long, Long>perKey(Sum.ofLongs()));
  final PCollectionView<Iterable<KV<Integer, Long>>> sequenceShardSizesView =
      sequenceShardSizesCombined.apply(View.<KV<Integer, Long>>asIterable());

  final PCollection<String> destinationBAMPath = this.pipeline.apply(
      Create.<String>of(this.output));

  final PCollectionView<byte[]> eofForBAM = pipeline.apply(
      Create.<byte[]>of(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK))
      .apply(View.<byte[]>asSingleton());

  final PCollection<String> writtenBAMFile = destinationBAMPath.apply(
      "Combine BAM shards", ParDo
        .of(new CombineShardsFn(writtenBAMShardsView, eofForBAM))
        .withSideInputs(writtenBAMShardsView, eofForBAM));

  final PCollectionView<String> writtenBAMFileView =
      writtenBAMFile.apply(View.<String>asSingleton());

  final PCollection<String> indexShards = header.apply(
      "Generate index shard tasks", ParDo
      .of(new GetReferencesFromHeaderFn()));

  final PCollectionTuple indexingResult = indexShards
      .apply(new BreakFusionTransform<String>())
      .apply(
        "Write index shards", ParDo
          .of(new WriteBAIFn(headerView, writtenBAMFileView, sequenceShardSizesView))
          .withSideInputs(headerView, writtenBAMFileView, sequenceShardSizesView)
          .withOutputTags(WriteBAIFn.WRITTEN_BAI_NAMES_TAG,
              TupleTagList.of(WriteBAIFn.NO_COORD_READS_COUNT_TAG)));

  final PCollection<String> writtenBAIShardNames = indexingResult.get(WriteBAIFn.WRITTEN_BAI_NAMES_TAG);
  final PCollectionView<Iterable<String>> writtenBAIShardsView =
      writtenBAIShardNames.apply(View.<String>asIterable());

  final PCollection<Long> noCoordCounts = indexingResult.get(WriteBAIFn.NO_COORD_READS_COUNT_TAG);

  final PCollection<Long> totalNoCoordCount = noCoordCounts
        .apply(new BreakFusionTransform<Long>())
        .apply(
            Combine.globally(Sum.ofLongs()));

  final PCollection<byte[]> totalNoCoordCountBytes = totalNoCoordCount.apply(
      "No coord count to bytes", ParDo.of(new Long2BytesFn()));
  final PCollectionView<byte[]> eofForBAI = totalNoCoordCountBytes
      .apply(View.<byte[]>asSingleton());

  final PCollection<String> destinationBAIPath = this.pipeline.apply(
      Create.<String>of(this.output + ".bai"));

  final PCollection<String> writtenBAIFile = destinationBAIPath.apply(
      "Combine BAI shards", ParDo
        .of(new CombineShardsFn(writtenBAIShardsView, eofForBAI))
        .withSideInputs(writtenBAIShardsView, eofForBAI));

  final PCollection<String> writtenFileNames = PCollectionList.of(writtenBAMFile).and(writtenBAIFile)
      .apply(Flatten.<String>pCollections());

  return writtenFileNames;
}
 
Developer: googlegenomics | Project: dataflow-java | Lines: 82 | Source file: WriteBAMTransform.java


Example 11: processElement

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
@ProcessElement
public void processElement(DoFn<Read, String>.ProcessContext c, BoundedWindow window)
    throws Exception {

  this.window = window;

  if (headerInfo == null) {
    headerInfo = c.sideInput(headerView);
  }
  final Read read = c.element();

  if (readCount == 0) {

    shardContig = KeyReadsFn.shardKeyForRead(read, 1);
    sequenceIndex = headerInfo.header.getSequenceIndex(shardContig.referenceName);
    final boolean isFirstShard = headerInfo.shardHasFirstRead(shardContig);
    final String outputFileName = options.getOutput();
    shardName = outputFileName + "-" + String.format("%012d", sequenceIndex) + "-"
        + shardContig.referenceName
        + ":" + String.format("%012d", shardContig.start);
    LOG.info("Writing shard file " + shardName);
    final OutputStream outputStream =
        Channels.newOutputStream(
            new GcsUtil.GcsUtilFactory().create(options)
              .create(GcsPath.fromUri(shardName),
                  BAMIO.BAM_INDEX_FILE_MIME_TYPE));
    ts = new TruncatedOutputStream(
        outputStream, BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK.length);
    bw = new BAMBlockWriter(ts, null /*file*/);
    bw.setSortOrder(headerInfo.header.getSortOrder(), true);
    bw.setHeader(headerInfo.header);
    if (isFirstShard) {
      LOG.info("First shard - writing header to " + shardName);
      bw.writeHeader(headerInfo.header);
    }
  }
  SAMRecord samRecord = ReadUtils.makeSAMRecord(read, headerInfo.header);
  if (prevRead != null && prevRead.getAlignmentStart() > samRecord.getAlignmentStart()) {
    LOG.info("Out of order read " + prevRead.getAlignmentStart() + " " +
        samRecord.getAlignmentStart() + " during writing of shard " + shardName +
        " after processing " + readCount + " reads, min seen alignment is " +
        minAlignment + " and max is " + maxAlignment + ", this read is " +
        (samRecord.getReadUnmappedFlag() ? "unmapped" : "mapped") + " and its mate is " +
        (samRecord.getMateUnmappedFlag() ? "unmapped" : "mapped"));
    Metrics.counter(WriteBAMFn.class, "Out of order reads").inc();
    readCount++;
    hadOutOfOrder = true;
    return;
  }
  minAlignment = Math.min(minAlignment, samRecord.getAlignmentStart());
  maxAlignment = Math.max(maxAlignment, samRecord.getAlignmentStart());
  prevRead = samRecord;
  if (samRecord.getReadUnmappedFlag()) {
    if (!samRecord.getMateUnmappedFlag()) {
      samRecord.setReferenceName(samRecord.getMateReferenceName());
      samRecord.setAlignmentStart(samRecord.getMateAlignmentStart());
    }
    unmappedReadCount++;
  }
  bw.addAlignment(samRecord);
  readCount++;
}
 
Developer: googlegenomics | Project: dataflow-java | Lines: 63 | Source file: WriteBAMFn.java


Example 12: init

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
private void init(final InputStream stream, File file, final File indexFile, final boolean eagerDecode, final ValidationStringency validationStringency) {
	if (stream != null && file != null)
		throw new IllegalArgumentException("stream and file are mutually exclusive");
	this.samFile = file;

	try {
		BufferedInputStream bufferedStream;
		// Buffering is required because mark() and reset() are called on the input stream.
		final int bufferSize = Math.max(Defaults.BUFFER_SIZE, BlockCompressedStreamConstants.MAX_COMPRESSED_BLOCK_SIZE);
		if (file != null)
			bufferedStream = new BufferedInputStream(new FileInputStream(file), bufferSize);
		else
			bufferedStream = IOUtil.toBufferedStream(stream);
		if (SamStreams.isBAMFile(bufferedStream)) {
			mIsBinary = true;
			if (file == null || !file.isFile()) {
				// Handle case in which file is a named pipe, e.g. /dev/stdin or created by mkfifo
				mReader = new BAMFileReader(bufferedStream, indexFile, eagerDecode, useAsyncIO, validationStringency, this.samRecordFactory);
			} else {
				bufferedStream.close();
				mReader = new BAMFileReader(file, indexFile, eagerDecode, useAsyncIO, validationStringency, this.samRecordFactory);
			}
		} else if (BlockCompressedInputStream.isValidFile(bufferedStream)) {
			mIsBinary = false;
			mReader = new SAMTextReader(new BlockCompressedInputStream(bufferedStream), validationStringency, this.samRecordFactory);
		} else if (SamStreams.isGzippedSAMFile(bufferedStream)) {
			mIsBinary = false;
			mReader = new SAMTextReader(new GZIPInputStream(bufferedStream), validationStringency, this.samRecordFactory);
		} else if (SamStreams.isCRAMFile(bufferedStream)) {
			if (file == null || !file.isFile()) {
				file = null;
			} else {
				bufferedStream.close();
				bufferedStream = null;
			}
			mReader = new CRAMFileReader(file, bufferedStream);
		} else if (isSAMFile(bufferedStream)) {
			if (indexFile != null) {
				bufferedStream.close();
				throw new RuntimeException("Cannot use index file with textual SAM file");
			}
			mIsBinary = false;
			mReader = new SAMTextReader(bufferedStream, file, validationStringency, this.samRecordFactory);
		} else {
			bufferedStream.close();
			throw new SAMFormatException("Unrecognized file format");
		}

		setValidationStringency(validationStringency);
		mReader.setSAMRecordFactory(this.samRecordFactory);
	} catch (final IOException e) {
		throw new RuntimeIOException(e);
	}
}
 
Developer: NimbleGen | Project: bioinformatics | Lines: 55 | Source file: SAMFileReader.java


Example 13: convertHeaderlessHadoopBamShardToBam

import htsjdk.samtools.util.BlockCompressedStreamConstants; // import the required package/class
/**
 * Converts a headerless Hadoop bam shard (e.g., a part0000, part0001, etc. file produced by
 * {@link org.broadinstitute.hellbender.engine.spark.datasources.ReadsSparkSink}) into a readable bam file
 * by adding a header and a BGZF terminator.
 *
 * This method is not intended for use with Hadoop bam shards that already have a header -- these shards are
 * already readable using samtools. Currently {@link ReadsSparkSink} saves the "shards" with a header for the
 * {@link ReadsWriteFormat#SHARDED} case, and without a header for the {@link ReadsWriteFormat#SINGLE} case.
 *
 * @param bamShard The headerless Hadoop bam shard to convert
 * @param header header for the BAM file to be created
 * @param destination path to which to write the new BAM file
 */
public static void convertHeaderlessHadoopBamShardToBam( final File bamShard, final SAMFileHeader header, final File destination ) {
    try ( FileOutputStream outStream = new FileOutputStream(destination) ) {
        writeBAMHeaderToStream(header, outStream);
        FileUtils.copyFile(bamShard, outStream);
        outStream.write(BlockCompressedStreamConstants.EMPTY_GZIP_BLOCK);
    }
    catch ( IOException e ) {
        throw new UserException("Error writing to " + destination.getAbsolutePath(), e);
    }
}
 
Developer: broadinstitute | Project: gatk | Lines: 24 | Source file: SparkUtils.java



Note: The htsjdk.samtools.util.BlockCompressedStreamConstants examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by various developers; copyright remains with the original authors, and any distribution or use should follow each project's license. Do not republish without permission.

