November 21, 2023 By Michael Burgess 4 min read

Apache Kafka is a well-known open-source event store and stream processing platform and has grown to become the de facto standard for data streaming. In this article, developer Michael Burgess provides an insight into the concept of schemas and schema management as a way to add value to your event-driven applications on the fully managed Kafka service, IBM Event Streams on IBM Cloud®.

What is a schema?

A schema describes the structure of data.

For example:

A simple Java class modelling an order of some product from an online store might start with fields like:

public class Order {
    private String productName;
    private String productCode;
    private int quantity;
    […]
}

If order objects were being created using this class, and sent to a topic in Kafka, we could describe the structure of those records using a schema such as this Avro schema:

{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"}
  ]
}

Why should you use a schema?

Apache Kafka transfers data without validating the information in the messages. It has no visibility into what kind of data is being sent and received, or what data types the messages might contain. Kafka does not examine the metadata of your messages.

One of the functions of Kafka is to decouple consuming and producing applications, so that they communicate via a Kafka topic rather than directly. This allows them to each work at their own speed, but they still need to agree upon the same data structure; otherwise, the consuming applications have no way to deserialize the data they receive back into something meaningful. The applications all need to share the same assumptions about the structure of the data.

In the scope of Kafka, a schema describes the structure of the data in a message. It defines the fields that need to be present in each message and the type of each field.

This means a schema forms a well-defined contract between a producing application and a consuming application, allowing consuming applications to correctly parse and interpret the data in the messages they receive.
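
To make that contract concrete, here is a minimal sketch of building and serializing an order with Apache Avro's GenericRecord API, using the example schema above. The class name, helper method and field values are illustrative, not part of any particular application:

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class OrderSerializer {

    // The same Avro schema as above, held as a string for this sketch
    private static final String ORDER_SCHEMA =
        "{\"type\": \"record\", \"name\": \"Order\", \"fields\": ["
      + "{\"name\": \"productName\", \"type\": \"string\"},"
      + "{\"name\": \"productCode\", \"type\": \"string\"},"
      + "{\"name\": \"quantity\", \"type\": \"int\"}]}";

    public static byte[] serializeExampleOrder() throws IOException {
        Schema schema = new Schema.Parser().parse(ORDER_SCHEMA);

        // Build a record that conforms to the schema (the contract)
        GenericRecord order = new GenericData.Record(schema);
        order.put("productName", "Widget");
        order.put("productCode", "W-100");
        order.put("quantity", 3);

        // Encode to Avro binary: these are the bytes a producer would
        // send to the Kafka topic
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(order, encoder);
        encoder.flush();
        return out.toByteArray();
    }
}

A consumer reverses the process with a GenericDatumReader against the same schema, which is exactly why both sides need to agree on it.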

What is a schema registry?

A schema registry supports your Kafka cluster by providing a repository for managing and validating schemas within that cluster. It acts as a database for storing your schemas and provides an interface for managing the schema lifecycle and retrieving schemas. A schema registry also validates evolution of schemas.
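
As an illustration, here is a minimal sketch of the producer configuration that wires in a registry-aware serializer. It assumes a Confluent-compatible registry client; the serializer class, bootstrap server and registry URL are placeholders, and the exact client library and connection details for your registry (including Event Streams) will differ:

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerSettings {

    public static Properties registryAwareConfig() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-0.example.com:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // A registry-aware serializer looks up (or registers) the schema
        // in the registry and prefixes each message with a schema ID, so
        // consumers can fetch the matching schema to deserialize
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "https://my-schema-registry.example.com");
        return props;
    }
}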

Optimize your Kafka environment by using a schema registry.

A schema registry is essentially an agreement on the structure of your data within your Kafka environment. By keeping a consistent store of the data formats used by your applications, you avoid common mistakes that can occur when building applications, such as poor data quality and inconsistencies between your producing and consuming applications that may eventually lead to data corruption. A well-managed schema registry is not just a technical necessity; it also contributes to the strategic goal of treating data as a valuable product and helps tremendously on your data-as-a-product journey.

Using a schema registry increases the quality of your data and ensures it remains consistent, by enforcing rules for schema evolution. So as well as ensuring data consistency between produced and consumed messages, a schema registry ensures that your messages remain compatible as schema versions change over time. Over the lifetime of a business, it is very likely that the format of the messages exchanged by the applications supporting the business will need to change. For example, the Order class in the example schema we used earlier might gain a new status field, or the product code field might be replaced by a combination of department number and product number. The result is that the schema of the objects in our business domain is continually evolving, so you need to be able to ensure agreement on the schema of messages in any particular topic at any given time.

There are various patterns for schema evolution:

  • Forward Compatibility: where the producing applications can be updated to a new version of the schema, and all consuming applications will be able to continue to consume messages while waiting to be migrated to the new version.
  • Backward Compatibility: where consuming applications can be migrated to a new version of the schema first, and are able to continue to consume messages produced in the old format while producing applications are migrated.
  • Full Compatibility: when schemas are both forward and backward compatible.

A schema registry is able to enforce rules for schema evolution, allowing you to guarantee either forward, backward or full compatibility of new schema versions and preventing incompatible schema versions from being introduced.
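
As a sketch of what such a check involves, Apache Avro itself ships a SchemaCompatibility utility of the kind a registry can apply. In this hypothetical example, version 2 of the Order schema adds a status field with a default value, which is what makes messages written with version 1 readable by upgraded consumers; the schema strings and the CREATED default are illustrative:

import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaPairCompatibility;

public class CompatibilityCheck {

    public static void main(String[] args) {
        Schema v1 = new Schema.Parser().parse(
            "{\"type\": \"record\", \"name\": \"Order\", \"fields\": ["
          + "{\"name\": \"productName\", \"type\": \"string\"},"
          + "{\"name\": \"productCode\", \"type\": \"string\"},"
          + "{\"name\": \"quantity\", \"type\": \"int\"}]}");

        // v2 adds a status field with a default, so records written
        // with v1 can still be read by consumers using v2
        Schema v2 = new Schema.Parser().parse(
            "{\"type\": \"record\", \"name\": \"Order\", \"fields\": ["
          + "{\"name\": \"productName\", \"type\": \"string\"},"
          + "{\"name\": \"productCode\", \"type\": \"string\"},"
          + "{\"name\": \"quantity\", \"type\": \"int\"},"
          + "{\"name\": \"status\", \"type\": \"string\", \"default\": \"CREATED\"}]}");

        // Can a consumer on v2 (reader) decode messages written with v1 (writer)?
        SchemaPairCompatibility result =
            SchemaCompatibility.checkReaderWriterCompatibility(v2, v1);
        System.out.println(result.getType()); // COMPATIBLE
    }
}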

By providing a repository of versions of schemas used within a Kafka cluster, past and present, a schema registry simplifies adherence to data governance and data quality policies, since it provides a convenient way to track and audit changes to your topic data formats.
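
For example, assuming a registry that exposes a Confluent-compatible REST API (the host and the orders-value subject name below are hypothetical), listing every recorded version of a topic's value schema is a single HTTP call:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SchemaAudit {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Each version in the returned list is one step in the evolution
        // of the schema used for message values on the "orders" topic
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://my-schema-registry.example.com"
                        + "/subjects/orders-value/versions"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. [1, 2, 3]
    }
}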

What’s next?

In summary, a schema registry plays a crucial role in managing schema evolution, versioning and the consistency of data in distributed systems, ultimately supporting interoperability between different components. Event Streams on IBM Cloud provides a Schema Registry as part of its Enterprise plan. Ensure your environment is optimized by utilizing this feature on the fully managed Kafka offering on IBM Cloud to build intelligent and responsive applications that react to events in real time.

  • Provision an instance of Event Streams on IBM Cloud here.
  • Learn how to use the Event Streams Schema Registry here.
  • Learn more about Kafka and its use cases here.
  • For any challenges in set up, see our Getting Started Guide and FAQs.