January 8, 2024 By Matt Sunley 7 min read

In modern enterprises, where operations leave a massive digital footprint, business events allow companies to become more adaptable, able to recognize and respond to opportunities or threats as they occur. They can optimize their supply chains, create delightful, personalized experiences for their customers, proactively identify quality issues or intercept customer churn before it occurs.

As a result, organizations that become more event-driven are able to better differentiate themselves from competitors and ultimately impact their top and bottom lines. 

Becoming a real-time enterprise

Businesses often go on a journey that traverses several stages of maturity when they establish an event-driven architecture (EDA).

Stage 1—Tactical and project-based

To begin with, the potential is demonstrated in tactical projects that individual teams deliver. They often use Apache Kafka, an open technology and the de facto standard, for accessing events from various core systems and applications. This approach then enables them to build new responsive applications.
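To make this concrete, the sketch below shows the kind of responsive application a team might build at this stage: a plain Apache Kafka consumer, written with the standard Java client, that reacts to each event as it arrives. The broker address, topic name (orders) and group ID are placeholders for whatever your environment uses.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class OrderEventListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at your Kafka cluster.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-notifications");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // React to each business event as it arrives, e.g. trigger a notification.
                    System.out.printf("order %s: %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```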

Stage 2—Broader adoption

Increased awareness across IT organizations leads to a transition to standardized methods of creating an event backbone that caters to both existing and new event-driven projects across multiple teams. This approach provides operational efficiency and the ability to create a solution that is resilient and scalable enough to support critical operations. 

Stage 3—Socialization and management

An increase in adoption drives a need for better management of event socialization and exposure. Teams want more visibility and access to events so they can reuse and innovate on the work of others. The importance of events is elevated to be on par with application programming interfaces (APIs), with facilities to describe, advertise and discover events. Self-service access is provided to prevent approval bottlenecks, alongside facilities to retain proper controls over usage.

Stage 4 – Transformative business strategy

A broader range of users are able to access and process event streams to understand their relevance in a business context. They are able to combine event topics to identify patterns, or aggregate them to analyze trends and detect anomalies. Event triggers are used to automate workflows or decisions, allowing businesses to generate notifications so appropriate actions can be taken as swiftly as situations are detected.

IBM® created a composable set of capabilities to support you wherever you are on this event-driven adoption journey. Built on the best open source technologies, each capability emphasizes scalability and is designed for flexibility and compatibility with an entire ecosystem for connectivity, analytics, processing and more. Whether you are starting from scratch or looking to take the next step, IBM can help extend and add value to what you already have.

Establishing an event backbone 

An event backbone is the core of an event-driven enterprise that efficiently makes business events available in the locations they are needed. IBM provides an Event Streams capability built on Apache Kafka that makes events manageable across an entire enterprise. Where Kafka-based infrastructure is already deployed, Event Streams can interoperate seamlessly with events as part of a hybrid brokering environment.
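As an illustration, here is a minimal sketch of an application publishing a business event onto a Kafka-based backbone such as Event Streams, using the standard Java producer client. The bootstrap address, topic name and SCRAM credentials are assumptions; your cluster's security settings may differ.

```java
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address for the event backbone.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "event-backbone.example.com:443");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for the event to be fully replicated

        // Assumed security settings: TLS plus SCRAM credentials issued by the cluster administrator.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
            + "username=\"app-user\" password=\"app-password\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> event = new ProducerRecord<>(
                "orders", "order-1234", "{\"orderId\":\"order-1234\",\"status\":\"CREATED\"}");
            producer.send(event, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("event written to %s-%d at offset %d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush();
        }
    }
}
```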

It enables deployment of infrastructure as code by using operators as part of Kubernetes-based container orchestration platforms to build and operate the many components of an Apache Kafka deployment in a consistent and repeatable way. Kafka clusters can be automatically scaled based on demand, with full encryption and access control. Flexible and customizable Kafka configurations can be automated by using a simple user interface.

It includes a built-in schema registry to validate that event data from applications is structured as expected, improving data quality and reducing errors. Event schemas help reduce integration complexity by establishing agreed formats between collaborating teams, and Event Streams enables schema evolution and adaptability as event-driven adoption accelerates.
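To illustrate what schema validation gives you, the sketch below uses the Apache Avro library directly to serialize an event against a schema; writing a record that does not match the schema fails immediately. In practice the registry stores the schema centrally and the client serializers perform this check automatically. The OrderCreated schema and its fields are hypothetical.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class OrderSchemaCheck {
    // Hypothetical schema for an order event; a registry would hold this centrally.
    private static final String ORDER_SCHEMA =
        "{\"type\":\"record\",\"name\":\"OrderCreated\",\"fields\":["
        + "{\"name\":\"orderId\",\"type\":\"string\"},"
        + "{\"name\":\"amount\",\"type\":\"double\"}]}";

    public static void main(String[] args) throws IOException {
        Schema schema = new Schema.Parser().parse(ORDER_SCHEMA);

        GenericRecord order = new GenericData.Record(schema);
        order.put("orderId", "order-1234");
        order.put("amount", 42.50);

        // Serializing against the schema fails if the record does not match it,
        // which is the kind of check a schema registry enforces for every producer.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(order, encoder);
        encoder.flush();

        System.out.printf("order event serialized to %d Avro bytes%n", out.size());
    }
}
```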

Event Streams establishes a resilient and highly available event backbone by supporting the replication of event data between clusters across multiple zones, such that the infrastructure can tolerate the failure of a zone with no loss of service availability. For disaster recovery, the geo-replication feature can create copies of event data to send to a backup cluster, with the user interface making this configurable in a few clicks. 

An event backbone is only as good as the event data it can access, and Event Streams supports a wide range of connectors to key applications, systems and platforms where event data is generated or consumed. The connector catalog contains an extensive list of key connectors supported by IBM and the community. 
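For reference, connectors in a Kafka Connect environment are typically registered through its REST API. The sketch below is a generic example of that call using Java's built-in HTTP client; the Connect endpoint, connector name and connector class are placeholders, and the configuration options you need depend on the specific connector chosen from the catalog.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Hypothetical Kafka Connect REST endpoint and connector settings.
        String connectUrl = "http://connect.example.com:8083/connectors/orders-source/config";
        String config = "{"
            + "\"connector.class\": \"com.example.OrdersSourceConnector\","  // placeholder class
            + "\"tasks.max\": \"1\","
            + "\"topic\": \"orders\""
            + "}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(connectUrl))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(config))  // PUT .../config creates or updates
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```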

Wrapping up all these features is a comprehensive management interface, which enables smooth monitoring and operations for the Kafka environment and its connected applications. This includes overall system health as well as the ability to drill down into the specifics of individual event payloads, schemas, and publication and consumption rates, assisting in the identification and resolution of any problematic event data.

Governing an event-driven expansion

Many organizations reach a point where use of events is expanding rapidly. New streams of events are being created every day by multiple teams who don’t necessarily have visibility of each other’s activities. Concerns arise over duplication, and over how to improve visibility and promote greater efficiency and reuse. Reuse, of course, can bring its own challenges. How will access be controlled? How will workloads be managed to avoid swamping backend systems? How can breaking changes be avoided that impact many teams?

To address these concerns, many companies begin to treat event interfaces more formally, applying good practices developed as part of API management to ensure that event interfaces are well described and versioned, that access is decoupled and isolated, and that usage is appropriately secured and managed.

IBM provides an Event Endpoint Management capability that enables existing events to be discovered and consumed by any user and manages event sources like APIs to securely reuse them across the enterprise. Not only can this capability manage Event Streams from IBM, but, in keeping with the open approach already described, it can do this for any Kafka-based event-driven applications and backbones you may already have in place.

It allows you to describe your event interfaces in a consistent framework based on the AsyncAPI open standard. This means they can be understood by people, are supported by code generation tools and are consistent with API definitions. Event Endpoint Management produces valid AsyncAPI documents based on event schemas or sample messages.

It provides a catalog for publishing event interfaces for others to discover. This includes lifecycle management, versioning and definition of policy-based controls. For example, to require that users provide valid credentials, it integrates with identity and access management solutions and supports role-based access control.

A user of the catalog can then discover available events, understand those that are relevant to them and easily register for self-service access. Usage analytics enable the event owners to monitor subscribers and revoke access if necessary. This significantly reduces the burden on Kafka administrators, as teams providing topics for reuse can place them in the catalog themselves, and consumers can self-administer their access.

Event Endpoint Management provides an event gateway to ensure consumers are decoupled from the event producers and brokers, that they are isolated from one another and that any changes in data format are managed. It also enforces the policy-based controls, applying these directly to the Kafka protocol itself. This means that it is able to manage any Kafka-compliant implementation that is part of the ecosystem.
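Because the gateway speaks the Kafka protocol, a consuming application connects to it much like any other Kafka endpoint, using the credentials issued when its subscription was approved in the catalog. The sketch below shows one way that configuration might look; the gateway address, SASL mechanism and credentials are assumptions and will depend on how your gateway is configured.

```java
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Collections;
import java.util.Properties;

public class GatewayConsumerConfig {
    public static KafkaConsumer<String, String> create() {
        Properties props = new Properties();
        // The bootstrap address is the event gateway, not the underlying brokers (placeholder host).
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "event-gateway.example.com:443");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "churn-alerts");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Credentials issued when the topic subscription was approved in the catalog (placeholders).
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"subscription-id\" password=\"subscription-secret\";");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("orders"));
        return consumer;
    }
}
```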

Detecting and acting on business situations 

As events are reused in different ways and use cases become more sophisticated, it’s often the case that events need more refinement, or must be combined with other events, to identify the most interesting business situations that should be acted on. Events may be more actionable when augmented with external data, or when they occur along with other events in a particular time period. 

IBM provides an event processing capability that helps users to work with events to inform understanding of the business context. It includes a low-code user interface designed to enable a broad range of users to work with events, and a powerful open-source event processing engine. Again, where you already have Kafka-based infrastructure deployed, event processing can work with events sourced from any Apache Kafka implementation you have in your environment.

The event processing runtime is built on Apache Flink, an open, trusted, secure and scalable way of executing event processing flows. The IBM event processing runtime is fully supported, deployed, and managed using Kubernetes operators. This makes deployment and management simple, either as a shared execution environment or for deployment as part of an app.

The low-code tooling allows users to connect event sources to a sequence of operations that define the way the events should be processed. You can join events from multiple sources to identify situations derived from various events occurring, filter events to remove irrelevant events from the stream, aggregate events to count occurrences over different windows of time, perform calculations using fields within the events to derive new information and much more. Traditionally this type of event processing would have required highly skilled programmers. Using this tooling, users can extensively prototype potentially useful scenarios in a safe, non-destructive environment. 
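The tooling generates and runs the underlying Apache Flink jobs for you, but it can be useful to see what an equivalent hand-written flow looks like. The sketch below is a hypothetical Flink SQL job, expressed through the Java Table API, that filters order events and aggregates them per region over one-minute tumbling windows; the topic, field names and connector settings are assumptions.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class OrderTrendsJob {
    public static void main(String[] args) throws Exception {
        TableEnvironment env = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Source table backed by a Kafka topic (topic, fields and broker address are placeholders).
        env.executeSql(
            "CREATE TABLE orders ("
            + " order_id STRING,"
            + " region STRING,"
            + " amount DOUBLE,"
            + " order_time TIMESTAMP(3),"
            + " WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND"
            + ") WITH ("
            + " 'connector' = 'kafka',"
            + " 'topic' = 'orders',"
            + " 'properties.bootstrap.servers' = 'kafka.example.com:9092',"
            + " 'properties.group.id' = 'order-trends',"
            + " 'scan.startup.mode' = 'latest-offset',"
            + " 'format' = 'json')");

        // A simple sink that prints results; in practice this could be another Kafka topic.
        env.executeSql(
            "CREATE TABLE order_trends ("
            + " region STRING,"
            + " window_start TIMESTAMP(3),"
            + " order_count BIGINT,"
            + " total_amount DOUBLE"
            + ") WITH ('connector' = 'print')");

        // Filter out low-value orders, then count and total them per region over one-minute windows.
        env.executeSql(
            "INSERT INTO order_trends"
            + " SELECT region,"
            + "        TUMBLE_START(order_time, INTERVAL '1' MINUTE) AS window_start,"
            + "        COUNT(*) AS order_count,"
            + "        SUM(amount) AS total_amount"
            + " FROM orders"
            + " WHERE amount > 10"
            + " GROUP BY region, TUMBLE(order_time, INTERVAL '1' MINUTE)").await();
    }
}
```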

It has been designed to enable an intuitive and visual way of processing events, with drag and drop of event sources, destinations and processing operations that are then wired together, with productivity aids and validation at each step. Rapid iteration of solutions is made possible by the ability to click run and immediately see output directly in the editor, then pause the processing to edit the flow before rerunning. Results can be exported or sent as a continuous stream into Kafka.

Event Processing enables collaboration as many solutions can be authored and deployed, allowing multiple team members to share and collaborate within a workspace. Event processing logic generated by the tooling can be exported to tools such as GitHub so it can be shared with others in the organization. Tutorials and in-context help allow new team members to easily get up to speed and start contributing.

Once a solution has been configured in Event Processing, the output can be sent to a number of places for observability and to drive actions, including cloud-native applications, automation platforms or business dashboards that can consume Kafka inputs. 

Ready to take the next step? 

IBM Event Automation, a fully composable event-driven service, enables businesses to drive their efforts wherever they are on their journey. The event streams, event endpoint management and event processing capabilities help lay the foundation of an event-driven architecture for unlocking the value of events. 

Visit our website to request a demo and learn more