Kafka Replaces Zookeeper With Quorum
Thursday, 22 April 2021

Apache Kafka has been updated to version 2.8, with improvements including an early-access version of KIP-500, which lets you run Kafka brokers without Apache ZooKeeper, relying instead on an internal Raft implementation.

This architectural improvement enables support for more partitions per cluster, simpler operation, and tighter security. Apache Kafka is a distributed streaming platform that can be used for building real-time streaming data pipelines between systems or applications.


Kafka began life at LinkedIn before being taken on as an Apache project. It is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system that can be used in place of traditional message brokers.

The ZooKeeper-free version of Kafka is achieved by a move to a self-managed quorum. This is included as an early-access implementation that is not yet feature complete and should not be used in production, but it is possible to start new clusters without ZooKeeper and go through basic produce and consume use cases.
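As a rough illustration of those basic produce and consume use cases, here is a minimal Java sketch that sends one record and reads it back. It assumes a KRaft-mode broker is already running at localhost:9092 with a topic named demo-topic; both values, and the group id, are placeholder assumptions rather than anything mandated by the release.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class BasicProduceConsume {
    public static void main(String[] args) {
        // Broker address and topic name are assumptions for this sketch.
        String bootstrap = "localhost:9092";
        String topic = "demo-topic";

        // Produce a single record.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(topic, "key", "hello from a ZooKeeper-free cluster"));
        }

        // Consume it back.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of(topic));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("Consumed %s = %s%n", record.key(), record.value());
            }
        }
    }
}

From the client's point of view nothing changes; the difference is that no ZooKeeper ensemble needs to be running alongside the broker.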

At a high level, KIP-500 works by moving topic metadata and configurations out of ZooKeeper and into a new internal topic named @metadata. This topic is managed by an internal Raft quorum of "controllers" and is replicated to all brokers in the cluster. The leader of the Raft quorum serves the same role as the controller in clusters today.

Other improvements in the new version include a new Describe Cluster API. Until now, Kafka's AdminClient has used the broker's Metadata API to get information about the cluster, but that API was designed to support the consumer and producer clients. The new API lets the AdminClient query the brokers directly for cluster information, which will make it simpler to add new admin features in the future.
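On the client side this is surfaced through the AdminClient's describeCluster call, which the new broker API now sits behind. A minimal Java sketch, with the broker address as an assumption:

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class DescribeClusterExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        try (Admin admin = Admin.create(props)) {
            // describeCluster returns futures for the cluster id, controller node and broker list.
            DescribeClusterResult result = admin.describeCluster();
            System.out.println("Cluster id: " + result.clusterId().get());
            System.out.println("Controller: " + result.controller().get());
            System.out.println("Brokers:    " + result.nodes().get());
        }
    }
}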

Other improvements include support for mutual TLS authentication on SASL_SSL listeners, improving the ability to secure your environments, and better handling of the logging hierarchy. Log4j uses a hierarchical model for configuring loggers within an application, but until now the Kafka broker's APIs for viewing log levels did not respect this hierarchy.
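To give an idea of what mutual TLS on a SASL_SSL listener means for a client, the sketch below builds client properties that combine SASL credentials with the client's own keystore, so the broker can verify the client certificate as well as the other way round. The broker would additionally need to require client certificates on that listener; the host, file paths, passwords and the choice of SCRAM-SHA-256 are all placeholder assumptions.

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class MutualTlsSaslSslProps {
    // Client properties for a SASL_SSL listener that also requires a client certificate.
    // All paths, passwords and the SCRAM mechanism are placeholder assumptions.
    public static Properties build() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9093");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        // SASL credentials.
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"client\" password=\"client-secret\";");
        // Truststore to verify the broker, plus a keystore so the broker can verify the client (mutual TLS).
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "truststore-secret");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/etc/kafka/client.keystore.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "keystore-secret");
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "key-secret");
        return props;
    }
}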

Log handling has also been improved with the ability to emit JSON with a new auto-generated schema. Kafka brokers' debug-level request/response logs are now JSON-structured so that they can more easily be parsed and consumed by logging toolchains.


More Information

Kafka Website

Related Articles

Apache Kafka 2.7 Updates Broker

Kafka 2.5 Adds New Metrics And Improves Security

Kafka 2 Adds Support For ACLs

Kafka Graphs Framework Extends Kafka Streams

Kafka Webview Released

Comparing Kafka To RabbitMQ

Apache Kafka Adds New Streams API
