Streaming Audio: A Confluent podcast about Apache Kafka®

Engaging Database Partials with Apache Kafka for Distributed System Consistency ft. Pat Helland

When compiling database reports from a variety of systems, obtaining the right data at the moment you need it can be difficult. Pat Helland (Principal Architect, Salesforce) explains how, with cloud connectivity and distributed data pipelines built on the Apache Kafka® platform, you can still produce educated partial answers. After all, you can't get guarantees across a distance, which makes it critical to plan for partial results.

Despite best efforts, managing systems across a distance introduces lag. The secret, according to Helland, is to anticipate these situations and have a plan for when (not if) they happen. Your outputs may be incomplete from time to time, but that doesn't mean they carry no useful information. Although you cannot guarantee that streaming data will be available the moment you need it, you can gather the replicas within a batch and merge them into a consistent result, a property known as convergence. Distributed systems of every size, and across large distances, rely on this kind of reference architecture for database reporting.
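
As a rough illustration of that convergence idea (not taken from the episode), the sketch below merges several replica copies of the same record with a simple last-writer-wins rule, so any node that sees the same set of replicas arrives at the same answer regardless of arrival order. The record type, field names, and values are made up for the example.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical replica snapshot: the same logical record as seen by one replica.
record ReplicaRecord(String key, String value, long versionTimestamp) {}

public class ConvergenceSketch {

    // Last-writer-wins merge: the replica carrying the highest version timestamp
    // determines the converged value. Merging is order-independent, so every
    // reader of the same replica set converges on the same result.
    static Optional<ReplicaRecord> converge(List<ReplicaRecord> replicas) {
        return replicas.stream()
                .max(Comparator.comparingLong(ReplicaRecord::versionTimestamp));
    }

    public static void main(String[] args) {
        List<ReplicaRecord> replicas = List.of(
                new ReplicaRecord("account-42", "balance=100", 1001L),
                new ReplicaRecord("account-42", "balance=120", 1005L),
                new ReplicaRecord("account-42", "balance=110", 1003L));

        converge(replicas).ifPresent(r ->
                System.out.println("Converged value: " + r.value()));
    }
}
```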

Plan for incomplete inputs; they will happen. Regardless of the types of data flowing through a distributed database, repeated monitoring over time lets you make reasonable inferences about what is missing. There is no reason to throw out data from 19 machines just because you are still waiting on the twentieth as a deadline approaches. Make the most of the sources you do have, even in the presence of a partition across the overall distributed database.
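
A minimal sketch of that 19-out-of-20 scenario, with made-up machine names and delays: each machine's report is fetched asynchronously, and whatever has arrived by the deadline is used rather than blocking on the straggler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PartialResultsSketch {

    // Collect whichever per-machine reports complete before the deadline;
    // a straggler simply drops out of this batch instead of stalling it.
    static List<String> collectBeforeDeadline(List<CompletableFuture<String>> reports,
                                              long deadlineMillis) {
        try {
            CompletableFuture.allOf(reports.toArray(new CompletableFuture[0]))
                    .get(deadlineMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            // Deadline hit or a source failed: fall through and keep what we have.
        }
        List<String> available = new ArrayList<>();
        for (CompletableFuture<String> report : reports) {
            if (report.isDone() && !report.isCompletedExceptionally()) {
                available.add(report.join());
            }
        }
        return available;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<CompletableFuture<String>> reports = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            int machine = i;
            long delay = (machine == 19) ? 5_000 : 50;  // one slow straggler
            reports.add(CompletableFuture.supplyAsync(() -> {
                try { Thread.sleep(delay); } catch (InterruptedException ignored) {}
                return "report-from-machine-" + machine;
            }, pool));
        }

        List<String> partial = collectBeforeDeadline(reports, 500);
        System.out.println("Proceeding with " + partial.size() + " of 20 reports");
        pool.shutdownNow();
    }
}
```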

Confluent Cloud and these convergence capabilities have allowed Salesforce to make decisions quickly even when only partial data is available, using systems replicated across multiple databases. This analytical approach is vital for consistency in large enterprises, especially those that depend on multi-cloud functionality.
