Life doesn’t happen in batch mode, which is why application engineers and data architects need to work closely together to bridge the gap between technology built for data in motion and technology built for data at rest. Stream processing is without doubt a big deal these days, and we often find Apache Kafka serving as the central nervous system of company-wide data architectures. However, many real-world use cases simply need an operational data store that is flexible, robust, and scalable enough to meet diverse application requirements and challenges. For that reason, this session explores different options for building rock-solid data integration pipelines between MongoDB and Apache Kafka. The focus lies on configuration-based data-in-motion scenarios that leverage the Kafka Connect framework to lay out streaming ETL pipeline examples, most of which can be realized without writing a single line of code.
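To make the configuration-based approach concrete, here is a minimal sketch of what such a pipeline definition can look like: a MongoDB source connector registered with the Kafka Connect REST API, publishing change events from a collection to a Kafka topic. The connection URI, database, collection, and topic prefix are illustrative placeholders, not values from the session.

```json
{
  "name": "mongo-source-example",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://localhost:27017",
    "database": "inventory",
    "collection": "orders",
    "topic.prefix": "demo"
  }
}
```

Posting this JSON to a Kafka Connect worker (POST /connectors) would start streaming change events from the orders collection to the topic demo.inventory.orders; a matching MongoSinkConnector configuration on the other end could write a Kafka topic back into a MongoDB collection, completing a streaming ETL pipeline without application code.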