
Java Header Class Code Examples


This article collects typical usage examples of the Java class org.apache.kafka.common.header.Header. If you are wondering what the Header class does, how to use it, or where to find working examples, the curated code samples below should help.



The Header class belongs to the org.apache.kafka.common.header package. Twenty code examples of the class are shown below, ordered by popularity.
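
Before diving into the examples, here is a minimal, self-contained sketch of the interface itself: Header is a simple key/value pair, RecordHeader is its standard implementation, and RecordHeaders is the mutable collection used throughout the examples below. The class names come from the Kafka client library; the header keys and values ("trace-id", "origin") are purely illustrative.

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;

public class HeaderQuickStart {
    public static void main(String[] args) {
        // Header is the key/value-pair interface; RecordHeader is its standard implementation.
        Header traceId = new RecordHeader("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));

        // RecordHeaders is a mutable header collection that allows duplicate keys.
        RecordHeaders headers = new RecordHeaders();
        headers.add(traceId);
        headers.add(new RecordHeader("origin", "service-a".getBytes(StandardCharsets.UTF_8)));

        for (Header h : headers)
            System.out.println(h.key() + " = " + new String(h.value(), StandardCharsets.UTF_8));
    }
}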

Example 1: tryAppend

import org.apache.kafka.common.header.Header; // import the required package/class
/**
 *  Try to append to a ProducerBatch.
 *
 *  If it is full, we return null and a new batch is created. We also close the batch for record appends to free up
 *  resources like compression buffers. The batch will be fully closed (i.e. the record batch headers will be written
 *  and memory records built) in one of the following cases (whichever comes first): right before send,
 *  if it is expired, or when the producer is closed.
 */
// Operates on the last ProducerBatch of the corresponding deque in the batches map
private RecordAppendResult tryAppend(long timestamp, byte[] key, byte[] value, Header[] headers, Callback callback, Deque<ProducerBatch> deque) {
    // Get the last batch in the deque
    ProducerBatch last = deque.peekLast();
    if (last != null) {
        // Call ProducerBatch.tryAppend: it returns a FutureRecordMetadata if the MemoryRecordsBuilder still has room, or null if it is full
        FutureRecordMetadata future = last.tryAppend(timestamp, key, value, headers, callback, time.milliseconds());
        if (future == null)
            last.closeForRecordAppends();
        else
            return new RecordAppendResult(future, deque.size() > 1 || last.isFull(), false);

    }
    return null;
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 24 | Source: RecordAccumulator.java


Example 2: tryAppendForSplit

import org.apache.kafka.common.header.Header; // import the required package/class
/**
 * This method is only used by {@link #split(int)} when splitting a large batch to smaller ones.
 * @return true if the record has been successfully appended, false otherwise.
 */
private boolean tryAppendForSplit(long timestamp, ByteBuffer key, ByteBuffer value, Header[] headers, Thunk thunk) {
    if (!recordsBuilder.hasRoomFor(timestamp, key, value, headers)) {
        return false;
    } else {
        // No need to get the CRC.
        this.recordsBuilder.append(timestamp, key, value);
        this.maxRecordSize = Math.max(this.maxRecordSize, AbstractRecords.estimateSizeInBytesUpperBound(magic(),
                recordsBuilder.compressionType(), key, value, headers));
        FutureRecordMetadata future = new FutureRecordMetadata(this.produceFuture, this.recordCount,
                                                               timestamp, thunk.future.checksumOrNull(),
                                                               key == null ? -1 : key.remaining(),
                                                               value == null ? -1 : value.remaining());
        // Chain the future to the original thunk.
        thunk.future.chain(future);
        this.thunks.add(thunk);
        this.recordCount++;
        return true;
    }
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 24 | Source: ProducerBatch.java


Example 3: ProducerRecord

import org.apache.kafka.common.header.Header; // import the required package/class
/**
 * Creates a record with a specified timestamp to be sent to a specified topic and partition
 * 
 * @param topic The topic the record will be appended to
 * @param partition The partition to which the record should be sent
 * @param timestamp The timestamp of the record
 * @param key The key that will be included in the record
 * @param value The record contents
 * @param headers the headers that will be included in the record
 */
public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value, Iterable<Header> headers) {
    if (topic == null)
        throw new IllegalArgumentException("Topic cannot be null.");
    if (timestamp != null && timestamp < 0)
        throw new IllegalArgumentException(
                String.format("Invalid timestamp: %d. Timestamp should always be non-negative or null.", timestamp));
    if (partition != null && partition < 0)
        throw new IllegalArgumentException(
                String.format("Invalid partition: %d. Partition number should always be non-negative or null.", partition));
    this.topic = topic;
    this.partition = partition;
    this.key = key;
    this.value = value;
    this.timestamp = timestamp;
    this.headers = new RecordHeaders(headers);
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 27 | Source: ProducerRecord.java
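
For context, a hedged usage sketch of this constructor; the topic name, key, value, and header below are illustrative placeholders, not part of the original example.

import java.util.Collections;
import java.util.List;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;

public class ProducerRecordWithHeaders {
    public static void main(String[] args) {
        // "events" and "trace-id" are placeholder names for this sketch.
        List<Header> headers = Collections.singletonList(
                new RecordHeader("trace-id", "abc-123".getBytes()));
        // A null partition lets the partitioner choose; the timestamp may also be null.
        ProducerRecord<String, String> record = new ProducerRecord<>(
                "events", null, System.currentTimeMillis(), "key-1", "value-1", headers);
        System.out.println(record.headers());
    }
}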


Example 4: hasRoomFor

import org.apache.kafka.common.header.Header; // import the required package/class
/**
 * Check if we have room for a new record containing the given key/value pair. If no records have been
 * appended, then this returns true.
 *
 * Note that the return value is based on the estimate of the bytes written to the compressor, which may not be
 * accurate if compression is used. When this happens, the following append may cause dynamic buffer
 * re-allocation in the underlying byte buffer stream.
 */
public boolean hasRoomFor(long timestamp, ByteBuffer key, ByteBuffer value, Header[] headers) {
    if (isFull())
        return false;

    // We always allow at least one record to be appended (the ByteBufferOutputStream will grow as needed)
    if (numRecords == 0)
        return true;

    final int recordSize;
    if (magic < RecordBatch.MAGIC_VALUE_V2) {
        recordSize = Records.LOG_OVERHEAD + LegacyRecord.recordSize(magic, key, value);
    } else {
        int nextOffsetDelta = lastOffset == null ? 0 : (int) (lastOffset - baseOffset + 1);
        long timestampDelta = baseTimestamp == null ? 0 : timestamp - baseTimestamp;
        recordSize = DefaultRecord.sizeInBytes(nextOffsetDelta, timestampDelta, key, value, headers);
    }

    // Be conservative and not take compression of the new record into consideration.
    return this.writeLimit >= estimatedBytesWritten() + recordSize;
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 29 | Source: MemoryRecordsBuilder.java


Example 5: DefaultRecord

import org.apache.kafka.common.header.Header; // import the required package/class
private DefaultRecord(int sizeInBytes,
                      byte attributes,
                      long offset,
                      long timestamp,
                      int sequence,
                      ByteBuffer key,
                      ByteBuffer value,
                      Header[] headers) {
    this.sizeInBytes = sizeInBytes;
    this.attributes = attributes;
    this.offset = offset;
    this.timestamp = timestamp;
    this.sequence = sequence;
    this.key = key;
    this.value = value;
    this.headers = headers;
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 18 | Source: DefaultRecord.java


Example 6: readHeaders

import org.apache.kafka.common.header.Header; // import the required package/class
private static Header[] readHeaders(ByteBuffer buffer, int numHeaders) {
    Header[] headers = new Header[numHeaders];
    for (int i = 0; i < numHeaders; i++) {
        int headerKeySize = ByteUtils.readVarint(buffer);
        if (headerKeySize < 0)
            throw new InvalidRecordException("Invalid negative header key size " + headerKeySize);

        String headerKey = Utils.utf8(buffer, headerKeySize);
        buffer.position(buffer.position() + headerKeySize);

        ByteBuffer headerValue = null;
        int headerValueSize = ByteUtils.readVarint(buffer);
        if (headerValueSize >= 0) {
            headerValue = buffer.slice();
            headerValue.limit(headerValueSize);
            buffer.position(buffer.position() + headerValueSize);
        }

        headers[i] = new RecordHeader(headerKey, headerValue);
    }

    return headers;
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 24 | Source: DefaultRecord.java
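
As a complement, a hedged sketch of the inverse operation: serializing a single header in the same varint-framed layout that readHeaders parses (varint key length, UTF-8 key bytes, varint value length or -1 for null, value bytes). writeHeader is a hypothetical helper for illustration, not a method of the Kafka API; ByteUtils.writeVarint and Utils.utf8 are the library utilities used elsewhere in DefaultRecord.

import java.nio.ByteBuffer;

import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.utils.ByteUtils;
import org.apache.kafka.common.utils.Utils;

// Hypothetical helper (not part of the Kafka API): writes one header in the
// layout readHeaders expects: varint key length, key bytes, varint value
// length (-1 for a null value), value bytes.
private static void writeHeader(ByteBuffer buffer, Header header) {
    byte[] keyBytes = Utils.utf8(header.key());
    ByteUtils.writeVarint(keyBytes.length, buffer);
    buffer.put(keyBytes);

    byte[] value = header.value();
    if (value == null) {
        ByteUtils.writeVarint(-1, buffer); // -1 marks a null value, mirroring readHeaders
    } else {
        ByteUtils.writeVarint(value.length, buffer);
        buffer.put(value);
    }
}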


Example 7: closeAware

import org.apache.kafka.common.header.Header; // import the required package/class
private Iterator<Header> closeAware(final Iterator<Header> original) {
    return new Iterator<Header>() {
        @Override
        public boolean hasNext() {
            return original.hasNext();
        }

        @Override
        public Header next() {
            return original.next();
        }

        @Override
        public void remove() {
            canWrite();
            original.remove();
        }
    };
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 19 | Source: RecordHeaders.java


Example 8: testSizeInBytes

import org.apache.kafka.common.header.Header; // import the required package/class
@Test
public void testSizeInBytes() {
    Header[] headers = new Header[] {
        new RecordHeader("foo", "value".getBytes()),
        new RecordHeader("bar", (byte[]) null)
    };

    long timestamp = System.currentTimeMillis();
    SimpleRecord[] records = new SimpleRecord[] {
        new SimpleRecord(timestamp, "key".getBytes(), "value".getBytes()),
        new SimpleRecord(timestamp + 30000, null, "value".getBytes()),
        new SimpleRecord(timestamp + 60000, "key".getBytes(), null),
        new SimpleRecord(timestamp + 60000, "key".getBytes(), "value".getBytes(), headers)
    };
    int actualSize = MemoryRecords.withRecords(CompressionType.NONE, records).sizeInBytes();
    assertEquals(actualSize, DefaultRecordBatch.sizeInBytes(Arrays.asList(records)));
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 18 | Source: DefaultRecordBatchTest.java


Example 9: testSerdeNoSequence

import org.apache.kafka.common.header.Header; // import the required package/class
@Test
public void testSerdeNoSequence() throws IOException {
    ByteBuffer key = ByteBuffer.wrap("hi".getBytes());
    ByteBuffer value = ByteBuffer.wrap("there".getBytes());
    long baseOffset = 37;
    int offsetDelta = 10;
    long baseTimestamp = System.currentTimeMillis();
    long timestampDelta = 323;

    ByteBufferOutputStream out = new ByteBufferOutputStream(1024);
    DefaultRecord.writeTo(new DataOutputStream(out), offsetDelta, timestampDelta, key, value, new Header[0]);
    ByteBuffer buffer = out.buffer();
    buffer.flip();

    DefaultRecord record = DefaultRecord.readFrom(buffer, baseOffset, baseTimestamp, RecordBatch.NO_SEQUENCE, null);
    assertNotNull(record);
    assertEquals(RecordBatch.NO_SEQUENCE, record.sequence());
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 19 | Source: DefaultRecordTest.java


Example 10: testHeaders

import org.apache.kafka.common.header.Header; // import the required package/class
@Test
public void testHeaders() throws IOException {
    RecordHeaders headers = new RecordHeaders();
    headers.add(new RecordHeader("key", "value".getBytes()));
    headers.add(new RecordHeader("key1", "key1value".getBytes()));
    headers.add(new RecordHeader("key", "value2".getBytes()));
    headers.add(new RecordHeader("key2", "key2value".getBytes()));


    Iterator<Header> keyHeaders = headers.headers("key").iterator();
    assertHeader("key", "value", keyHeaders.next());
    assertHeader("key", "value2", keyHeaders.next());
    assertFalse(keyHeaders.hasNext());

    keyHeaders = headers.headers("key1").iterator();
    assertHeader("key1", "key1value", keyHeaders.next());
    assertFalse(keyHeaders.hasNext());

    keyHeaders = headers.headers("key2").iterator();
    assertHeader("key2", "key2value", keyHeaders.next());
    assertFalse(keyHeaders.hasNext());

}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 24 | Source: RecordHeadersTest.java
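
A small follow-up sketch: besides headers(key), the Headers interface also provides lastHeader(key), which returns the most recently added header for a key — useful when duplicate keys exist, as in the test above. The class name LastHeaderDemo is just for illustration.

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;

public class LastHeaderDemo {
    public static void main(String[] args) {
        RecordHeaders headers = new RecordHeaders();
        headers.add(new RecordHeader("key", "value".getBytes(StandardCharsets.UTF_8)));
        headers.add(new RecordHeader("key", "value2".getBytes(StandardCharsets.UTF_8)));
        // lastHeader returns the most recently added header for the key
        Header last = headers.lastHeader("key");
        System.out.println(new String(last.value(), StandardCharsets.UTF_8)); // prints "value2"
    }
}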


Example 11: reconsumeLater

import org.apache.kafka.common.header.Header; // import the required package/class
private void reconsumeLater(ConsumerRecord<String, byte[]> consumeRecord) throws InterruptedException, ExecutionException {

    // Copy every header except RETRY_COUNT into headerList
    Headers headers = consumeRecord.headers();
    List<Header> headerList = new ArrayList<Header>(8);
    Iterator<Header> iterator = headers.iterator();
    Integer retryCount = -1;
    boolean hasOrignalHeader = false;
    while (iterator.hasNext()) {
        Header next = iterator.next();
        if (next.key().equals(RETRY_COUNT_KEY)) {
            retryCount = serializer.deserialize(next.value());
            continue;
        }

        if (next.key().equals(ORGINAL_TOPIC)) {
            hasOrignalHeader = true;
        }
        headerList.add(next);
    }

    // Increment RETRY_COUNT and add it back to the headers
    retryCount++;
    headerList.add(new RecordHeader(RETRY_COUNT_KEY, serializer.serialization(retryCount)));

    if (!hasOrignalHeader) {
        headerList.add(new RecordHeader(ORGINAL_TOPIC, serializer.serialization(consumeRecord.topic())));
    }

    // Route the message to the appropriate retry topic based on the retry count
    String retryTopic = calcRetryTopic(consumeRecord.topic(), retryCount);

    ProducerRecord<String, byte[]> record = new ProducerRecord<>(retryTopic,
            consumeRecord.partition() % retryQueuePartitionCount.get(retryTopic), null, consumeRecord.key(),
            consumeRecord.value(), headerList);
    Future<RecordMetadata> publishKafkaMessage = retryQueueMsgProducer.publishKafkaMessage(record);
    publishKafkaMessage.get();
}
 
Author: QNJR-GROUP | Project: EasyTransaction | Lines: 39 | Source: KafkaEasyTransMsgConsumerImpl.java
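
For illustration, a hedged sketch of the consumer-side counterpart: reading the retry count back out of the headers. It uses the standard Headers.lastHeader lookup; the 4-byte big-endian int encoding is an assumption standing in for the project's own serializer, as is the readRetryCount helper itself.

import java.nio.ByteBuffer;

import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;

// Hypothetical helper: extract the retry count written by reconsumeLater above.
// The 4-byte int encoding is an assumption; the real project uses its own serializer.
static int readRetryCount(Headers headers, String retryCountKey) {
    Header header = headers.lastHeader(retryCountKey);
    if (header == null || header.value() == null)
        return 0; // no retries recorded yet
    return ByteBuffer.wrap(header.value()).getInt();
}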


Example 12: publish

import org.apache.kafka.common.header.Header; // import the required package/class
@Override
public EasyTransMsgPublishResult publish(String topic, String tag, String key, Map<String,Object> header, byte[] msgByte) {
	String kafkaTopic = QueueKafkaHelper.getKafkaTopic(topic, tag);
	
	//calculate partition
	TransactionId trxId = (TransactionId) header.get(EasytransConstant.CallHeadKeys.PARENT_TRX_ID_KEY);
	int partition = calcMessagePartition(kafkaTopic, trxId);
	
	List<Header> kafkaHeaderList = new ArrayList<>(header.size());
	for(Entry<String, Object> entry:header.entrySet()){
		kafkaHeaderList.add(new RecordHeader(entry.getKey(),serializer.serialization(entry.getValue())));
	}
	
	ProducerRecord<String, byte[]> record = new ProducerRecord<>(kafkaTopic, partition, null, key, msgByte, kafkaHeaderList);
	Future<RecordMetadata> sendResultFuture = kafkaProducer.send(record);
	try {
		RecordMetadata recordMetadata = sendResultFuture.get();
		log.info("message sent:" + recordMetadata);
	} catch (InterruptedException | ExecutionException e) {
		throw new RuntimeException("message sent error",e);
	}
	
	EasyTransMsgPublishResult easyTransMsgPublishResult = new EasyTransMsgPublishResult();
	easyTransMsgPublishResult.setTopic(topic);
	easyTransMsgPublishResult.setMessageId(key);
	return easyTransMsgPublishResult;
}
 
Author: QNJR-GROUP | Project: EasyTransaction | Lines: 28 | Source: KafkaEasyTransMsgPublisherImpl.java


Example 13: HeadersMapExtractAdapter

import org.apache.kafka.common.header.Header; // import the required package/class
HeadersMapExtractAdapter(Headers headers, boolean second) {
  for (Header header : headers) {
    if (second) {
      if (header.key().startsWith("second_span_")) {
        map.put(header.key().replaceFirst("^second_span_", ""),
            new String(header.value(), StandardCharsets.UTF_8));
      }
    } else {
      map.put(header.key(), new String(header.value(), StandardCharsets.UTF_8));
    }
  }
}
 
Author: opentracing-contrib | Project: java-kafka-client | Lines: 13 | Source: HeadersMapExtractAdapter.java


Example 14: tryAppend

import org.apache.kafka.common.header.Header; // import the required package/class
/**
 * Append the record to the current record set and return the relative offset within that record set
 *
 * @return The RecordSend corresponding to this record or null if there isn't sufficient room.
 */
// Try to append the record to this ProducerBatch
public FutureRecordMetadata tryAppend(long timestamp, byte[] key, byte[] value, Header[] headers, Callback callback, long now) {
    // Check whether the MemoryRecordsBuilder is full
    if (!recordsBuilder.hasRoomFor(timestamp, key, value, headers)) {
        return null;
    } else {
        // Append the data to the MemoryRecords
        Long checksum = this.recordsBuilder.append(timestamp, key, value, headers);
        this.maxRecordSize = Math.max(this.maxRecordSize, AbstractRecords.estimateSizeInBytesUpperBound(magic(),
                recordsBuilder.compressionType(), key, value, headers));
        this.lastAppendTime = now;

        // Create the future object
        FutureRecordMetadata future = new FutureRecordMetadata(this.produceFuture, this.recordCount,
                                                               timestamp, checksum,
                                                               key == null ? -1 : key.length,
                                                               value == null ? -1 : value.length);
        // we have to keep every future returned to the users in case the batch needs to be
        // split to several new batches and resent.
        // A Thunk stores the Callback together with its FutureRecordMetadata;
        // add the user callback and future to thunks
        thunks.add(new Thunk(callback, future));

        // Update the record count
        this.recordCount++;
        return future;
    }
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 35 | Source: ProducerBatch.java


Example 15: estimateSizeInBytesUpperBound

import org.apache.kafka.common.header.Header; // import the required package/class
/**
 * Get an upper bound estimate on the batch size needed to hold a record with the given fields. This is only
 * an estimate because it does not take into account overhead from the compression algorithm.
 */
public static int estimateSizeInBytesUpperBound(byte magic, CompressionType compressionType, ByteBuffer key, ByteBuffer value, Header[] headers) {
    if (magic >= RecordBatch.MAGIC_VALUE_V2)
        return DefaultRecordBatch.estimateBatchSizeUpperBound(key, value, headers);
    else if (compressionType != CompressionType.NONE)
        return Records.LOG_OVERHEAD + LegacyRecord.recordOverhead(magic) + LegacyRecord.recordSize(magic, key, value);
    else
        return Records.LOG_OVERHEAD + LegacyRecord.recordSize(magic, key, value);
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 13 | Source: AbstractRecords.java


Example 16: appendWithOffset

import org.apache.kafka.common.header.Header; // import the required package/class
/**
 * Append a record and return its checksum for message format v0 and v1, or null for v2 and above.
 */
private Long appendWithOffset(long offset, boolean isControlRecord, long timestamp, ByteBuffer key,
                              ByteBuffer value, Header[] headers) {
    try {
        if (isControlRecord != isControlBatch)
            throw new IllegalArgumentException("Control records can only be appended to control batches");

        if (lastOffset != null && offset <= lastOffset)
            throw new IllegalArgumentException(String.format("Illegal offset %s following previous offset %s " +
                    "(Offsets must increase monotonically).", offset, lastOffset));

        if (timestamp < 0 && timestamp != RecordBatch.NO_TIMESTAMP)
            throw new IllegalArgumentException("Invalid negative timestamp " + timestamp);

        if (magic < RecordBatch.MAGIC_VALUE_V2 && headers != null && headers.length > 0)
            throw new IllegalArgumentException("Magic v" + magic + " does not support record headers");

        if (baseTimestamp == null)
            baseTimestamp = timestamp;

        if (magic > RecordBatch.MAGIC_VALUE_V1) {
            appendDefaultRecord(offset, timestamp, key, value, headers);
            return null;
        } else {
            return appendLegacyRecord(offset, timestamp, key, value);
        }
    } catch (IOException e) {
        throw new KafkaException("I/O exception when writing to the append stream, closing", e);
    }
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 33 | Source: MemoryRecordsBuilder.java


Example 17: appendDefaultRecord

import org.apache.kafka.common.header.Header; // import the required package/class
private void appendDefaultRecord(long offset, long timestamp, ByteBuffer key, ByteBuffer value,
                                 Header[] headers) throws IOException {
    ensureOpenForRecordAppend();
    int offsetDelta = (int) (offset - baseOffset);
    long timestampDelta = timestamp - baseTimestamp;
    int sizeInBytes = DefaultRecord.writeTo(appendStream, offsetDelta, timestampDelta, key, value, headers);
    recordWritten(offset, timestamp, sizeInBytes);
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 9 | Source: MemoryRecordsBuilder.java


Example 18: SimpleRecord

import org.apache.kafka.common.header.Header; // import the required package/class
public SimpleRecord(long timestamp, ByteBuffer key, ByteBuffer value, Header[] headers) {
    Objects.requireNonNull(headers, "Headers must be non-null");
    this.key = key;
    this.value = value;
    this.timestamp = timestamp;
    this.headers = headers;
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 8 | Source: SimpleRecord.java


Example 19: sizeInBytes

import org.apache.kafka.common.header.Header; // import the required package/class
public static int sizeInBytes(int offsetDelta,
                              long timestampDelta,
                              ByteBuffer key,
                              ByteBuffer value,
                              Header[] headers) {
    int bodySize = sizeOfBodyInBytes(offsetDelta, timestampDelta, key, value, headers);
    return bodySize + ByteUtils.sizeOfVarint(bodySize);
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 9 | Source: DefaultRecord.java


Example 20: sizeOfBodyInBytes

import org.apache.kafka.common.header.Header; // import the required package/class
private static int sizeOfBodyInBytes(int offsetDelta,
                                     long timestampDelta,
                                     ByteBuffer key,
                                     ByteBuffer value,
                                     Header[] headers) {

    int keySize = key == null ? -1 : key.remaining();
    int valueSize = value == null ? -1 : value.remaining();
    return sizeOfBodyInBytes(offsetDelta, timestampDelta, keySize, valueSize, headers);
}
 
Author: YMCoding | Project: kafka-0.11.0.0-src-with-comment | Lines: 11 | Source: DefaultRecord.java



Note: The org.apache.kafka.common.header.Header class examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects; copyright of the source code belongs to the original authors, and any distribution or use should follow the corresponding project's license. Do not reproduce this article without permission.

