Project: confluent-kafka-dotnet
Source: https://gitee.com/mirrors/confluent-kafka-dotnet

# Confluent's .NET Client for Apache Kafka™

confluent-kafka-dotnet is Confluent's .NET client for Apache Kafka and the Confluent Platform.

Features:
confluent-kafka-dotnet is derived from Andreas Heider's rdkafka-dotnet. We're fans of his work and were very happy to have been able to leverage rdkafka-dotnet as the basis of this client. Thanks Andreas!

## Referencing

confluent-kafka-dotnet is distributed via NuGet. We provide five packages:

- Confluent.Kafka - the core client library.
- Confluent.SchemaRegistry - the Confluent Schema Registry client.
- Confluent.SchemaRegistry.Serdes.Avro - Avro serializers/deserializers with Schema Registry integration.
- Confluent.SchemaRegistry.Serdes.Protobuf - Protobuf serializers/deserializers with Schema Registry integration.
- Confluent.SchemaRegistry.Serdes.Json - JSON serializers/deserializers with Schema Registry integration.
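In addition to the package manager commands below, the client can be referenced directly from a project file. A minimal sketch (the version shown matches the one used elsewhere in this README):

```xml
<!-- Add inside the project's .csproj file. -->
<ItemGroup>
  <PackageReference Include="Confluent.Kafka" Version="1.8.2" />
</ItemGroup>
```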
To install Confluent.Kafka from within Visual Studio, search for Confluent.Kafka in the NuGet Package Manager UI, or run the following command in the Package Manager Console:

```
Install-Package Confluent.Kafka -Version 1.8.2
```

To add a reference to a dotnet core project, execute the following at the command line:

```
dotnet add package -v 1.8.2 Confluent.Kafka
```

Note: Branch builds

NuGet packages corresponding to all commits to release branches are available from the following NuGet package source (note: this is not a web URL - you should specify it in the NuGet package manager): https://ci.appveyor.com/nuget/confluent-kafka-dotnet. The version suffix of these packages matches the AppVeyor build number. You can see which commit a particular build number corresponds to by looking at the AppVeyor build history.

## Usage

For a step-by-step guide and code samples, see Getting Started with Apache Kafka and .NET on Confluent Developer.

Take a look in the examples directory and at the integration tests for further examples.

For an overview of configuration properties, refer to the librdkafka documentation.

### Basic Producer Examples

You should use the ProduceAsync method if you would like to await the result of each produce request:

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class Program
{
    public static async Task Main(string[] args)
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        // If serializers are not specified, default serializers from
        // `Confluent.Kafka.Serializers` will be automatically used where
        // available. Note: by default strings are encoded as UTF8.
        using (var p = new ProducerBuilder<Null, string>(config).Build())
        {
            try
            {
                var dr = await p.ProduceAsync("test-topic", new Message<Null, string> { Value = "test" });
                Console.WriteLine($"Delivered '{dr.Value}' to '{dr.TopicPartitionOffset}'");
            }
            catch (ProduceException<Null, string> e)
            {
                Console.WriteLine($"Delivery failed: {e.Error.Reason}");
            }
        }
    }
}
```

Note that a server round-trip is slow (3ms at a minimum; actual latency depends on many factors). In highly concurrent scenarios you will achieve high overall throughput out of the producer using the above approach, but there will be a delay on each await call. To produce many messages in rapid succession without awaiting each result, use the Produce method with a delivery handler instead:

```csharp
using System;
using Confluent.Kafka;

class Program
{
    public static void Main(string[] args)
    {
        var conf = new ProducerConfig { BootstrapServers = "localhost:9092" };

        Action<DeliveryReport<Null, string>> handler = r =>
            Console.WriteLine(!r.Error.IsError
                ? $"Delivered message to {r.TopicPartitionOffset}"
                : $"Delivery Error: {r.Error.Reason}");

        using (var p = new ProducerBuilder<Null, string>(conf).Build())
        {
            for (int i = 0; i < 100; ++i)
            {
                p.Produce("my-topic", new Message<Null, string> { Value = i.ToString() }, handler);
            }

            // wait for up to 10 seconds for any inflight messages to be delivered.
            p.Flush(TimeSpan.FromSeconds(10));
        }
    }
}
```

### Basic Consumer Example

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

class Program
{
    public static void Main(string[] args)
    {
        var conf = new ConsumerConfig
        {
            GroupId = "test-consumer-group",
            BootstrapServers = "localhost:9092",
            // Note: The AutoOffsetReset property determines the start offset in the event
            // there are not yet any committed offsets for the consumer group for the
            // topic/partitions of interest. By default, offsets are committed
            // automatically, so in this example, consumption will only start from the
            // earliest message in the topic 'my-topic' the first time you run the program.
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using (var c = new ConsumerBuilder<Ignore, string>(conf).Build())
        {
            c.Subscribe("my-topic");

            CancellationTokenSource cts = new CancellationTokenSource();
            Console.CancelKeyPress += (_, e) =>
            {
                e.Cancel = true; // prevent the process from terminating.
                cts.Cancel();
            };

            try
            {
                while (true)
                {
                    try
                    {
                        var cr = c.Consume(cts.Token);
                        Console.WriteLine($"Consumed message '{cr.Value}' at: '{cr.TopicPartitionOffset}'.");
                    }
                    catch (ConsumeException e)
                    {
                        Console.WriteLine($"Error occurred: {e.Error.Reason}");
                    }
                }
            }
            catch (OperationCanceledException)
            {
                // Ensure the consumer leaves the group cleanly and final offsets are committed.
                c.Close();
            }
        }
    }
}
```

### IHostedService and Web Application Integration

The Web example demonstrates how to integrate Apache Kafka with a web application, including how to implement the consumer poll loop as an IHostedService.

### Exactly Once Processing

The .NET client has full support for transactions and idempotent message production, allowing you to write horizontally scalable stream processing applications with exactly once semantics. The ExactlyOnce example demonstrates this capability by way of an implementation of the classic "word count" problem, also demonstrating how to use the FASTER Key/Value store (similar to RocksDb) to materialize working state that may be larger than available memory, and incremental rebalancing to avoid stop-the-world rebalancing operations and unnecessary reloading of state when you add or remove processing nodes.

### Schema Registry Integration

The three Serdes packages provide serializers and deserializers for Avro, Protobuf and JSON with Confluent Schema Registry integration.

Note: All three serialization formats are supported across Confluent Platform. They each make different tradeoffs, and you should use the one that best matches your requirements. Avro is well suited to the streaming data use-case, but the quality and maturity of the non-Java implementations lags that of Java - this is an important consideration.
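As a minimal sketch of wiring up the Avro serdes, assuming a broker on localhost:9092 and a Schema Registry on localhost:8081 (the topic name and key/value below are illustrative; real applications typically use classes generated from an Avro schema rather than primitives):

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;

class Program
{
    public static async Task Main(string[] args)
    {
        // Illustrative assumption: Schema Registry available on localhost:8081.
        var schemaRegistryConfig = new SchemaRegistryConfig { Url = "http://localhost:8081" };
        var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using (var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig))
        using (var producer =
            new ProducerBuilder<string, long>(producerConfig)
                // The Avro serializers register/validate schemas against Schema
                // Registry automatically. Primitive types keep this sketch
                // self-contained; generated specific record classes are the
                // more typical choice.
                .SetKeySerializer(new AvroSerializer<string>(schemaRegistry))
                .SetValueSerializer(new AvroSerializer<long>(schemaRegistry))
                .Build())
        {
            try
            {
                var dr = await producer.ProduceAsync("avro-topic",
                    new Message<string, long> { Key = "user-42", Value = 42 });
                Console.WriteLine($"Delivered to {dr.TopicPartitionOffset}");
            }
            catch (ProduceException<string, long> e)
            {
                Console.WriteLine($"Delivery failed: {e.Error.Reason}");
            }
        }
    }
}
```

The corresponding deserializers (e.g. AvroDeserializer) are set on a ConsumerBuilder in the same way.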
Protobuf and JSON both have great support in .NET.

### Error Handling

Errors delivered to a client's error handler should be considered informational except when the IsFatal flag is set.

Although calling most methods on the clients will result in a fatal error if the client is in an un-recoverable state, you should generally only need to explicitly check for fatal errors in your error handler, and handle this scenario there.

#### Producer

When using Produce, check the Error field of the DeliveryReport in the delivery handler to determine whether a particular message was successfully delivered. When using ProduceAsync, a delivery failure causes the returned Task to fault with a ProduceException.

#### Consumer

All errors encountered when consuming result in a ConsumeException, with further information about the error available via its Error field.

### 3rd Party

There are numerous libraries that expand on the capabilities provided by Confluent.Kafka, or use Confluent.Kafka to integrate with Kafka. For more information, refer to the 3rd Party Libraries page.

### Confluent Cloud

For a step-by-step guide on using the .NET client with Confluent Cloud see Getting Started with Apache Kafka and .NET on Confluent Developer.

You can also refer to the Confluent Cloud example which demonstrates how to configure the .NET client for use with Confluent Cloud.

### Developer Notes

Instructions on building and testing confluent-kafka-dotnet can be found here.

Copyright (c) 2016-2019 Confluent Inc.
2015-2016 Andreas Heider

KAFKA is a registered trademark of The Apache Software Foundation and has been licensed for use by confluent-kafka-dotnet. confluent-kafka-dotnet has no affiliation with and is not endorsed by The Apache Software Foundation.