This article collects typical usage examples of the Java class io.confluent.kafka.serializers.KafkaAvroDeserializer. If you are unsure what KafkaAvroDeserializer is for or how to use it, the curated class examples below should help.
The KafkaAvroDeserializer class belongs to the io.confluent.kafka.serializers package. Twelve code examples of the class are shown below, sorted by popularity by default.
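Before the examples, here is a minimal sketch of the pattern most of them share: configure a KafkaAvroDeserializer against a Schema Registry, then decode a byte[] payload back into an Avro GenericRecord. The registry URL and topic name are placeholder assumptions, not values taken from the examples below; Example 2 shows the same round trip end to end.

import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.avro.generic.GenericRecord;

import java.util.Collections;
import java.util.Map;

public class KafkaAvroDeserializerQuickStart {
  public static GenericRecord decode(byte[] payload) {
    // Placeholder registry URL -- replace with your own deployment
    Map<String, Object> config = Collections.singletonMap(
        AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");
    KafkaAvroDeserializer deserializer = new KafkaAvroDeserializer();
    deserializer.configure(config, false); // false = configure as a value (not key) deserializer
    // The topic name determines the subject looked up in the registry
    return (GenericRecord) deserializer.deserialize("some-topic", payload);
  }
}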
Example 1: shouldCreateCorrectRow
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
@Test
public void shouldCreateCorrectRow() {
  KafkaAvroDeserializer kafkaAvroDeserializer = EasyMock.mock(KafkaAvroDeserializer.class);
  EasyMock.expect(kafkaAvroDeserializer.deserialize(EasyMock.anyString(), EasyMock.anyObject()))
      .andReturn(genericRecord);
  expectLastCall();
  replay(kafkaAvroDeserializer);
  KsqlGenericRowAvroDeserializer ksqlGenericRowAvroDeserializer =
      new KsqlGenericRowAvroDeserializer(schema, kafkaAvroDeserializer, false);
  GenericRow genericRow = ksqlGenericRowAvroDeserializer.deserialize("", new byte[]{});
  assertThat("Column number does not match.", genericRow.getColumns().size(), equalTo(6));
  assertThat("Invalid column value.", genericRow.getColumns().get(0), equalTo(1511897796092L));
  assertThat("Invalid column value.", genericRow.getColumns().get(1), equalTo(1L));
  assertThat("Invalid column value.", ((Double[]) genericRow.getColumns().get(4))[0], equalTo(100.0));
  assertThat("Invalid column value.",
      ((Map<String, Double>) genericRow.getColumns().get(5)).get("key1"), equalTo(100.0));
}
Developer: confluentinc | Project: ksql | Lines: 25 | Source: KsqlGenericRowAvroDeserializerTest.java
Example 2: testConfluentSerDes
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
@Test
public void testConfluentSerDes() throws Exception {
  org.apache.avro.Schema schema = new org.apache.avro.Schema.Parser().parse(GENERIC_TEST_RECORD_SCHEMA);
  GenericRecord record = new GenericRecordBuilder(schema).set("field1", "some value").set("field2", "some other value").build();
  Map<String, Object> config = new HashMap<>();
  config.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, rootTarget.getUri().toString());
  KafkaAvroSerializer kafkaAvroSerializer = new KafkaAvroSerializer();
  kafkaAvroSerializer.configure(config, false);
  byte[] bytes = kafkaAvroSerializer.serialize("topic", record);
  KafkaAvroDeserializer kafkaAvroDeserializer = new KafkaAvroDeserializer();
  kafkaAvroDeserializer.configure(config, false);
  GenericRecord result = (GenericRecord) kafkaAvroDeserializer.deserialize("topic", bytes);
  LOG.info(result.toString());
}
Developer: hortonworks | Project: registry | Lines: 20 | Source: ConfluentRegistryCompatibleResourceTest.java
Example 3: KsqlGenericRowAvroDeserializer
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
KsqlGenericRowAvroDeserializer(Schema schema, KafkaAvroDeserializer kafkaAvroDeserializer,
                               boolean isInternal) {
  if (isInternal) {
    this.schema = SchemaUtil.getAvroSerdeKsqlSchema(schema);
  } else {
    this.schema = SchemaUtil.getSchemaWithNoAlias(schema);
  }
  this.kafkaAvroDeserializer = kafkaAvroDeserializer;
}
Developer: confluentinc | Project: ksql | Lines: 12 | Source: KsqlGenericRowAvroDeserializer.java
Example 4: shouldSerializeRowCorrectly
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
@Test
public void shouldSerializeRowCorrectly() {
  SchemaRegistryClient schemaRegistryClient = new MockSchemaRegistryClient();
  KsqlGenericRowAvroSerializer ksqlGenericRowAvroSerializer =
      new KsqlGenericRowAvroSerializer(schema, schemaRegistryClient, new KsqlConfig(new HashMap<>()));
  List columns = Arrays.asList(1511897796092L, 1L, "item_1", 10.0, new Double[]{100.0},
      Collections.singletonMap("key1", 100.0));
  GenericRow genericRow = new GenericRow(columns);
  byte[] serializedRow = ksqlGenericRowAvroSerializer.serialize("t1", genericRow);
  KafkaAvroDeserializer kafkaAvroDeserializer = new KafkaAvroDeserializer(schemaRegistryClient);
  GenericRecord genericRecord = (GenericRecord) kafkaAvroDeserializer.deserialize("t1", serializedRow);
  Assert.assertNotNull(genericRecord);
  assertThat("Incorrect serialization.", genericRecord.get("ordertime".toUpperCase()), equalTo(1511897796092L));
  assertThat("Incorrect serialization.", genericRecord.get("orderid".toUpperCase()), equalTo(1L));
  assertThat("Incorrect serialization.", genericRecord.get("itemid".toUpperCase()).toString(), equalTo("item_1"));
  assertThat("Incorrect serialization.", genericRecord.get("orderunits".toUpperCase()), equalTo(10.0));
  GenericData.Array array = (GenericData.Array) genericRecord.get("arraycol".toUpperCase());
  Map map = (Map) genericRecord.get("mapcol".toUpperCase());
  assertThat("Incorrect serialization.", array.size(), equalTo(1));
  assertThat("Incorrect serialization.", array.get(0), equalTo(100.0));
  assertThat("Incorrect serialization.", map.size(), equalTo(1));
  assertThat("Incorrect serialization.", map.get(new Utf8("key1")), equalTo(100.0));
}
Developer: confluentinc | Project: ksql | Lines: 32 | Source: KsqlGenericRowAvroSerializerTest.java
Example 5: testConfluentAvroDeserializer
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
@Test
public void testConfluentAvroDeserializer() throws IOException, RestClientException {
  WorkUnitState mockWorkUnitState = getMockWorkUnitState();
  mockWorkUnitState.setProp("schema.registry.url", TEST_URL);
  Schema schema = SchemaBuilder.record(TEST_RECORD_NAME)
      .namespace(TEST_NAMESPACE).fields()
      .name(TEST_FIELD_NAME).type().stringType().noDefault()
      .endRecord();
  GenericRecord testGenericRecord = new GenericRecordBuilder(schema).set(TEST_FIELD_NAME, "testValue").build();
  SchemaRegistryClient mockSchemaRegistryClient = mock(SchemaRegistryClient.class);
  when(mockSchemaRegistryClient.getByID(any(Integer.class))).thenReturn(schema);
  Serializer<Object> kafkaEncoder = new KafkaAvroSerializer(mockSchemaRegistryClient);
  Deserializer<Object> kafkaDecoder = new KafkaAvroDeserializer(mockSchemaRegistryClient);
  ByteBuffer testGenericRecordByteBuffer =
      ByteBuffer.wrap(kafkaEncoder.serialize(TEST_TOPIC_NAME, testGenericRecord));
  KafkaSchemaRegistry<Integer, Schema> mockKafkaSchemaRegistry = mock(KafkaSchemaRegistry.class);
  KafkaDeserializerExtractor kafkaDecoderExtractor = new KafkaDeserializerExtractor(mockWorkUnitState,
      Optional.fromNullable(Deserializers.CONFLUENT_AVRO), kafkaDecoder, mockKafkaSchemaRegistry);
  ByteArrayBasedKafkaRecord mockMessageAndOffset = getMockMessageAndOffset(testGenericRecordByteBuffer);
  Assert.assertEquals(kafkaDecoderExtractor.decodeRecord(mockMessageAndOffset), testGenericRecord);
}
Developer: apache | Project: incubator-gobblin | Lines: 31 | Source: KafkaDeserializerExtractorTest.java
Example 6: testConfluentAvroDeserializerForSchemaEvolution
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
@Test
public void testConfluentAvroDeserializerForSchemaEvolution() throws IOException, RestClientException, SchemaRegistryException {
  WorkUnitState mockWorkUnitState = getMockWorkUnitState();
  mockWorkUnitState.setProp("schema.registry.url", TEST_URL);
  Schema schemaV1 = SchemaBuilder.record(TEST_RECORD_NAME)
      .namespace(TEST_NAMESPACE).fields()
      .name(TEST_FIELD_NAME).type().stringType().noDefault()
      .endRecord();
  Schema schemaV2 = SchemaBuilder.record(TEST_RECORD_NAME)
      .namespace(TEST_NAMESPACE).fields()
      .name(TEST_FIELD_NAME).type().stringType().noDefault()
      .optionalString(TEST_FIELD_NAME2).endRecord();
  GenericRecord testGenericRecord = new GenericRecordBuilder(schemaV1).set(TEST_FIELD_NAME, "testValue").build();
  SchemaRegistryClient mockSchemaRegistryClient = mock(SchemaRegistryClient.class);
  when(mockSchemaRegistryClient.getByID(any(Integer.class))).thenReturn(schemaV1);
  Serializer<Object> kafkaEncoder = new KafkaAvroSerializer(mockSchemaRegistryClient);
  Deserializer<Object> kafkaDecoder = new KafkaAvroDeserializer(mockSchemaRegistryClient);
  ByteBuffer testGenericRecordByteBuffer =
      ByteBuffer.wrap(kafkaEncoder.serialize(TEST_TOPIC_NAME, testGenericRecord));
  KafkaSchemaRegistry<Integer, Schema> mockKafkaSchemaRegistry = mock(KafkaSchemaRegistry.class);
  when(mockKafkaSchemaRegistry.getLatestSchemaByTopic(TEST_TOPIC_NAME)).thenReturn(schemaV2);
  KafkaDeserializerExtractor kafkaDecoderExtractor = new KafkaDeserializerExtractor(mockWorkUnitState,
      Optional.fromNullable(Deserializers.CONFLUENT_AVRO), kafkaDecoder, mockKafkaSchemaRegistry);
  when(kafkaDecoderExtractor.getSchema()).thenReturn(schemaV2);
  ByteArrayBasedKafkaRecord mockMessageAndOffset = getMockMessageAndOffset(testGenericRecordByteBuffer);
  GenericRecord received = (GenericRecord) kafkaDecoderExtractor.decodeRecord(mockMessageAndOffset);
  Assert.assertEquals(received.toString(), "{\"testField\": \"testValue\", \"testField2\": null}");
}
Developer: apache | Project: incubator-gobblin | Lines: 40 | Source: KafkaDeserializerExtractorTest.java
Example 7: GenericAvroDeserializer
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
/**
 * Constructor used by Kafka Streams.
 */
public GenericAvroDeserializer() {
  inner = new KafkaAvroDeserializer();
}
Developer: jeqo | Project: talk-kafka-messaging-logs | Lines: 7 | Source: GenericAvroDeserializer.java
Example 8: SpecificAvroDeserializer
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
/**
 * Constructor used by Kafka Streams.
 */
public SpecificAvroDeserializer() {
  inner = new KafkaAvroDeserializer();
}
Developer: jeqo | Project: talk-kafka-messaging-logs | Lines: 7 | Source: SpecificAvroDeserializer.java
Example 9: GenericAvroDeserializer
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
public GenericAvroDeserializer(SchemaRegistryClient client) {
  inner = new KafkaAvroDeserializer(client);
}
Developer: confluentinc | Project: strata-tutorials | Lines: 4 | Source: GenericAvroDeserializer.java
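Examples 7-9 show only the constructors of these thin wrapper classes. For context, here is a minimal sketch of what the complete wrapper typically looks like. It is modeled on the delegate pattern visible above (and on Confluent's avro-serde classes of the same name), not copied from any of the three projects:

import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Deserializer;

import java.util.Map;

public class GenericAvroDeserializer implements Deserializer<GenericRecord> {

  private final KafkaAvroDeserializer inner;

  /** Constructor used by Kafka Streams, which instantiates deserializers reflectively. */
  public GenericAvroDeserializer() {
    inner = new KafkaAvroDeserializer();
  }

  /** Constructor for tests, where a (mock) SchemaRegistryClient is injected directly. */
  GenericAvroDeserializer(SchemaRegistryClient client) {
    inner = new KafkaAvroDeserializer(client);
  }

  @Override
  public void configure(Map<String, ?> configs, boolean isKey) {
    inner.configure(configs, isKey); // forwards schema.registry.url etc. to the delegate
  }

  @Override
  public GenericRecord deserialize(String topic, byte[] bytes) {
    return (GenericRecord) inner.deserialize(topic, bytes);
  }

  @Override
  public void close() {
    inner.close();
  }
}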
Example 10: getAllTopicPartitions
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
/**
 * Describe the specified topic.
 *
 * @api {get} /s2p/:taskId 6. Get partition information for the specific subject/topic
 * @apiVersion 0.1.1
 * @apiName getAllTopicPartitions
 * @apiGroup All
 * @apiPermission none
 * @apiDescription This is where we get partition information for the subject/topic.
 * @apiParam {String} topic Topic name.
 * @apiSuccess {JsonObject[]} info Partition info.
 * @apiSampleRequest http://localhost:8080/api/df/s2p/:taskId
 */
private void getAllTopicPartitions(RoutingContext routingContext) {
  final String topic = routingContext.request().getParam("id");
  if (topic == null) {
    routingContext.response()
        .setStatusCode(ConstantApp.STATUS_CODE_BAD_REQUEST)
        .end(DFAPIMessage.getResponseMessage(9000));
    LOG.error(DFAPIMessage.getResponseMessage(9000, topic));
  } else {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka_server_host_and_port);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, ConstantApp.DF_CONNECT_KAFKA_CONSUMER_GROUP_ID);
    props.put(ConstantApp.SCHEMA_URI_KEY, "http://" + schema_registry_host_and_port);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
    KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, props);
    ArrayList<JsonObject> responseList = new ArrayList<JsonObject>();
    // Look up partition metadata for the single topic
    consumer.partitionsFor(topic, ar -> {
      if (ar.succeeded()) {
        for (PartitionInfo partitionInfo : ar.result()) {
          responseList.add(new JsonObject()
              .put("id", partitionInfo.getTopic())
              .put("partitionNumber", partitionInfo.getPartition())
              .put("leader", partitionInfo.getLeader().getIdString())
              .put("replicas", StringUtils.join(partitionInfo.getReplicas(), ','))
              .put("insyncReplicas", StringUtils.join(partitionInfo.getInSyncReplicas(), ','))
          );
        }
        // Respond once, after all partitions have been collected
        HelpFunc.responseCorsHandleAddOn(routingContext.response())
            .putHeader("X-Total-Count", responseList.size() + "")
            .end(Json.encodePrettily(responseList));
        consumer.close();
      } else {
        LOG.error(DFAPIMessage.logResponseMessage(9030, topic + "-" + ar.cause().getMessage()));
      }
    });
    consumer.exceptionHandler(e -> {
      LOG.error(DFAPIMessage.logResponseMessage(9031, topic + "-" + e.getMessage()));
    });
  }
}
Developer: datafibers-community | Project: df_data_service | Lines: 62 | Source: DFDataProcessor.java
Example 11: pollAllFromTopic
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
/**
 * Poll all available records from a specific topic.
 *
 * @param routingContext
 *
 * @api {get} /avroconsumer 7. List all df tasks using a specific topic
 * @apiVersion 0.1.1
 * @apiName pollAllFromTopic
 * @apiGroup All
 * @apiPermission none
 * @apiDescription This is where we consume data from a specific topic in one poll.
 * @apiSuccess {JsonObject[]} topic Records consumed from the topic.
 * @apiSampleRequest http://localhost:8080/api/df/avroconsumer
 */
private void pollAllFromTopic(RoutingContext routingContext) {
  final String topic = routingContext.request().getParam("id");
  if (topic == null) {
    routingContext.response()
        .setStatusCode(ConstantApp.STATUS_CODE_BAD_REQUEST)
        .end(DFAPIMessage.getResponseMessage(9000));
    LOG.error(DFAPIMessage.getResponseMessage(9000, "TOPIC_IS_NULL"));
  } else {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka_server_host_and_port);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, ConstantApp.DF_CONNECT_KAFKA_CONSUMER_GROUP_ID);
    props.put(ConstantApp.SCHEMA_URI_KEY, "http://" + schema_registry_host_and_port);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
    KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, props);
    ArrayList<JsonObject> responseList = new ArrayList<JsonObject>();
    consumer.handler(record -> {
      // LOG.debug("Processing value=" + record.record().value() + ",offset=" + record.record().offset());
      responseList.add(new JsonObject()
          .put("id", record.record().offset())
          .put("value", new JsonObject(record.record().value().toString()))
          .put("valueString", Json.encodePrettily(new JsonObject(record.record().value().toString())))
      );
      if (responseList.size() >= ConstantApp.AVRO_CONSUMER_BATCH_SIE) {
        // Batch is full: respond and stop consuming
        HelpFunc.responseCorsHandleAddOn(routingContext.response())
            .putHeader("X-Total-Count", responseList.size() + "")
            .end(Json.encodePrettily(responseList));
        consumer.pause();
        consumer.commit();
        consumer.close();
      }
    });
    consumer.exceptionHandler(e -> {
      LOG.error(DFAPIMessage.logResponseMessage(9031, topic + "-" + e.getMessage()));
    });
    // Subscribe to a single topic
    consumer.subscribe(topic, ar -> {
      if (ar.succeeded()) {
        LOG.info(DFAPIMessage.logResponseMessage(1027, "topic = " + topic));
      } else {
        LOG.error(DFAPIMessage.logResponseMessage(9030, topic + "-" + ar.cause().getMessage()));
      }
    });
  }
}
Developer: datafibers-community | Project: df_data_service | Lines: 65 | Source: DFDataProcessor.java
Example 12: start
import io.confluent.kafka.serializers.KafkaAvroDeserializer; // import the package/class this example depends on
@Override
public void start() throws Exception {
  System.out.println("Test");
  Properties props = new Properties();
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
  props.put("schema.registry.url", "http://localhost:8002");
  props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
  props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
  String topic = "test_stock";
  KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, props);
  ArrayList<JsonObject> responseList = new ArrayList<JsonObject>();
  // consumer.handler(record -> { // TODO handler does not work
  //   System.out.println("Processing value=" + record.record().value() +
  //       ",partition=" + record.record().partition() + ",offset=" + record.record().offset());
  //   responseList.add(new JsonObject()
  //       .put("offset", record.record().offset())
  //       .put("value", record.record().value().toString()));
  //   if (responseList.size() >= 10) {
  //     consumer.pause();
  //     consumer.commit();
  //     consumer.close();
  //   }
  // });
  //
  // // Subscribe to a single topic
  // consumer.subscribe(topic, ar -> {
  //   if (ar.succeeded()) {
  //     System.out.println("topic " + topic + " is subscribed");
  //   } else {
  //     System.out.println("Could not subscribe " + ar.cause().getMessage());
  //   }
  // });
  consumer.partitionsFor(topic, ar -> {
    if (ar.succeeded()) {
      for (PartitionInfo partitionInfo : ar.result()) {
        System.out.println(partitionInfo);
      }
    }
  });
}
Developer: datafibers-community | Project: df_data_service | Lines: 50 | Source: AvroConsumerVertx.java
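Examples 10-12 all use the Vert.x Kafka client. For comparison, the same consumption pattern with the plain Apache Kafka consumer looks roughly like the sketch below; the broker address, registry URL, and topic name are placeholder assumptions rather than values from the projects above:

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class PlainAvroConsumer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
    props.put("schema.registry.url", "http://localhost:8081"); // placeholder registry
    // String keys here for simplicity; the examples above use Avro for keys as well
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());

    try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("test_stock"));
      // One poll, mirroring the "poll all in one batch" style of Example 11
      ConsumerRecords<String, GenericRecord> records = consumer.poll(Duration.ofSeconds(5));
      for (ConsumerRecord<String, GenericRecord> record : records) {
        System.out.println("offset=" + record.offset() + ", value=" + record.value());
      }
    }
  }
}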
Note: the io.confluent.kafka.serializers.KafkaAvroDeserializer examples in this article are compiled from GitHub, MSDocs, and other source-code and documentation platforms. The snippets are drawn from open-source projects contributed by their respective developers; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not reproduce without permission.