This blog post explores the similarities between schemas and APIs, and the importance of being able to modify schemas without the risk of breaking consumer applications. Gwen Shapira discusses the details of what compatibility really means for schemas and events (and why it’s so …
Red Pill Analytics shares how they designed and implemented all the necessary data integration processes required to connect Oracle WMS Cloud with on-prem systems for a Fortune 500 e-commerce and wholesale company seeking to transform the way they manage inventory.
A Confluent Community Catalyst is a person who invests their time and energy relentlessly in the Apache Kafka and Confluent communities. They make it a habit of contributing knowledge, enthusiasm, support, encouragement, mentoring, and code to one of the most innovative communiti …
With Confluent Cloud today, you can now elastically scale production workloads from 0 to 100 MB/s and down instantaneously without ever having to size or provision a cluster, scale production workloads from hundreds of MB/s to tens of GB/s with provisioned capacity, and pay only …
The most challenging goal of any application architecture is simplicity, but it is possible to achieve. Neil Avery explores four pillars for enabling scalable development that works across the event-driven enterprise. These pillars minimize complexity and provide foundational rul …
Both Kafka Connect and KSQL can be managed and interacted with using a REST API, but many people prefer a GUI. Control Center provides the capability to work with multiple clusters of each and is free to use forever under the Confluent Developer License on single-broker Kafka clu …
Every developer who uses Apache Kafka® has used the Kafka consumer at least once. Although it is the simplest way to subscribe to and access events from Kafka, behind the scenes, Kafka consumers handle tricky distributed systems challenges like data consistency, failover and load …
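The load balancing mentioned here boils down to dividing a topic's partitions among the consumers in a group, and redividing them when a consumer joins or fails. A toy sketch of the idea in Python, modeled loosely on Kafka's range-style assignment (this is illustrative only, not the actual client code):

```python
def assign_partitions(num_partitions, consumers):
    """Toy range-style assignment: split partitions as evenly as
    possible across the (sorted) consumers of a group."""
    consumers = sorted(consumers)
    base, extra = divmod(num_partitions, len(consumers))
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        count = base + (1 if i < extra else 0)
        assignment[consumer] = list(range(start, start + count))
        start += count
    return assignment

# Six partitions over two consumers: three each.
print(assign_partitions(6, ["c1", "c2"]))
# If c2 fails, a rebalance hands all six partitions to c1.
print(assign_partitions(6, ["c1"]))
```

A real consumer group also tracks committed offsets per partition, so the survivor resumes from where the failed consumer left off.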
In Confluent Platform 5.2, Control Center has grown a couple of new features that make large deployments a little more pleasant to manage: It has become much better at managing configuration changes among a large number of brokers, and it scales to a larger number of managed part …
We're continuing to challenge ourselves to help developers expand what is possible and create more value with Confluent. To that end, we’re announcing that the PipelineDB team will be joining Confluent. The PipelineDB team brings with them a vast wealth of experience, spanning bo …
With the release of Apache Kafka® 2.1.0, Kafka Streams introduced the processor topology optimization framework at the Kafka Streams DSL layer. This framework opens the door for various optimization techniques from the existing data stream management system (DSMS) and data stream …
By capturing Internet of Things (IoT) event data from farm to fork with Apache Kafka® and Confluent Cloud, BAADER is increasing the efficiency of the food value chain, creating new business opportunities and enabling its partners to optimize their operations.
Funding Circle is a global lending platform where investors lend directly to small businesses in Germany, the Netherlands, the UK and the U.S. (and soon in Canada). A typical borrower repayment triggers actions in several subsystems, and if not done promptly and correctly, can pre …
Over the past few weeks, we tweeted 12 tech tips, each of which showcased a different language along with a simple example of how to write a producer and consumer to Confluent Cloud. Those examples are available to run in GitHub at confluentinc/examples, and we have compiled a li …
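Whatever the language, the client configuration for Confluent Cloud follows the same shape. A minimal sketch in Python, assuming the confluent-kafka client; the bootstrap server, API key, and secret below are placeholders:

```python
# Connection settings for Confluent Cloud (placeholder values).
# These are standard librdkafka-style configuration keys.
conf = {
    "bootstrap.servers": "<BOOTSTRAP_SERVER>:9092",
    "security.protocol": "SASL_SSL",   # Confluent Cloud requires TLS + SASL
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
}

# With the confluent-kafka package installed, producing looks like:
#   from confluent_kafka import Producer
#   p = Producer(conf)
#   p.produce("my-topic", key="k", value="hello")
#   p.flush()
```

The consumer takes the same connection keys plus a `group.id` and an offset-reset policy.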
BAADER is using Confluent Cloud and associated IoT technologies to build a data-driven food value chain for all their partners from farm to fork. And they believe they are the only company in the industry doing so at such massive scale horizontally across all members of the value …
Confluent Control Center integrates with Confluent Schema Registry, allowing you to manage and evolve schemas. Schema evolution requires compatibility checks to ensure that producers can write data and consumers can read that data, even as schemas evolve. This is where Schema Reg …
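Backward compatibility, for example, means a consumer using the new schema can still read data written with the old one, which is why adding a field is only safe when it carries a default. A toy illustration of that one rule (not Schema Registry's actual checker; schemas here are simplified to field-name/default maps):

```python
def is_backward_compatible(old_fields, new_fields):
    """Toy check: the new schema is backward compatible if every field
    it ADDS has a default, so records written with the old schema can
    still be read. Schemas are modeled as {field_name: default}, with
    None standing in for 'no default' in this sketch."""
    added = set(new_fields) - set(old_fields)
    return all(new_fields[f] is not None for f in added)

old = {"id": None, "name": None}
# Adding "email" with a default: old records still readable.
assert is_backward_compatible(old, {**old, "email": "n/a"})
# Adding "email" without a default: old records can't be read.
assert not is_backward_compatible(old, {**old, "email": None})
```

Schema Registry applies checks of this kind (backward, forward, or full, depending on the configured compatibility level) every time a new schema version is registered.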
It seems like there’s a Kafka Summit every other month. Of course there’s not—it’s every fourth month—but hey, close enough. We now have the Kafka Summit New York in the books, and the session videos are available in record time.
Landing data to S3 is ubiquitous and key to almost every AWS architecture. This explains why users have been looking for a reliable way to stream their data from Apache Kafka® to S3 since Kafka Connect became available. Since its initial release, the Kafka Connect S3 connector ha …
If an enterprise has a mission-critical, multi-datacenter Apache Kafka deployment, it’s important to ensure that data is replicated and stays in sync in near real time between core business applications. Confluent Control Center not only manages multiple Kafka deployments but als …
We’re partnering with Google Cloud to make Confluent Cloud, our fully managed offering of Apache Kafka®, available as a native offering on Google Cloud Platform (GCP). This means you will have the ability to use Confluent Cloud’s managed Apache Kafka service with familiar Google …
Microservices that need to take action on a common stream of events all listen to that stream. In the Apache Kafka® world, this means that each of those microservice client applications subscribes to a common Kafka topic. When an event lands in that topic, all the microservices r …
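The fan-out pattern is easy to picture with a toy in-memory topic (hypothetical names; not a Kafka client): the topic is an append-only log, and every subscribed microservice reads the whole log at its own pace by tracking its own offset.

```python
class ToyTopic:
    """Toy pub/sub topic: an append-only log plus one read offset per
    subscriber, so every subscriber sees every event independently."""
    def __init__(self):
        self.log, self.offsets = [], {}

    def subscribe(self, name):
        self.offsets[name] = 0

    def publish(self, event):
        self.log.append(event)

    def poll(self, name):
        events = self.log[self.offsets[name]:]
        self.offsets[name] = len(self.log)
        return events

orders = ToyTopic()
for svc in ("billing", "shipping", "analytics"):
    orders.subscribe(svc)
orders.publish({"order_id": 1})

# Each microservice independently receives the same event.
assert all(orders.poll(svc) == [{"order_id": 1}]
           for svc in ("billing", "shipping", "analytics"))
```

In Kafka terms, each microservice runs as its own consumer group on the shared topic, which is what lets all of them react to the same event without stealing it from one another.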
CASE is one of those Swiss-Army-knife functions of the SQL world. There are numerous uses for it, and now KSQL supports it!
Making sense of the communication and dataflow patterns inside choreographies can be a challenge. At SYSCO AS, distributed tracing has been key for helping us create a clear understanding of how applications are related to each other. This article describes how to instrument Kafk …
Suppress is an optional DSL operator that offers strong guarantees about when exactly it forwards KTable updates downstream. Since it’s an operator, you can use it to control the flow of updates in just the parts of your application that need it, leaving the majority of your appl …
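The core idea can be sketched in plain Python (a toy, not the Kafka Streams API): buffer the running per-key updates and forward only the final value for each key once the window closes, instead of emitting every intermediate result.

```python
def suppress_until_window_closes(updates, window_end, get_time, get_key):
    """Toy suppress: hold intermediate KTable-style updates and emit
    only the last value per key among records at or before window_end."""
    final = {}
    for u in updates:
        if get_time(u) <= window_end:
            final[get_key(u)] = u   # a later update overwrites an earlier one
    return list(final.values())

updates = [
    {"key": "user-1", "count": 1, "ts": 10},
    {"key": "user-1", "count": 2, "ts": 20},  # supersedes the first update
    {"key": "user-2", "count": 1, "ts": 25},
    {"key": "user-1", "count": 3, "ts": 70},  # after the window; not emitted here
]
emitted = suppress_until_window_closes(
    updates, window_end=60,
    get_time=lambda u: u["ts"], get_key=lambda u: u["key"])
assert emitted == [{"key": "user-1", "count": 2, "ts": 20},
                   {"key": "user-2", "count": 1, "ts": 25}]
```

This is why suppress pairs naturally with windowed aggregations: downstream consumers see one final count per key per window rather than a stream of partial counts.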
Event-first thinking represents a new frontier for technology and businesses to go back to first principles of system architecture and design.
Confluent has raised a $125M Series D funding round, led by Sequoia, with participation from our other major investors, Index Ventures and Benchmark.
Confluent Community License FAQ
We’re changing the license for some of the open source components of Confluent Platform from Apache 2.0 to the Confluent Community License. This new license allows you to freely download, modify, and redistribute the code (very much like Apache 2.0 does), but it does not allow yo …
Since the Apache Kafka® 1.1.0 release, there has been a significant increase in the number of partitions that a single Kafka cluster can support, from both a deployment and an availability perspective.
This is an edited and expanded transcript of a talk I gave at Strange Loop 2014. The video recording (embedded below) has been watched over 8,000 times. For those of …
Anyone who can write SQL can now write stream processing applications for fraud detection with Apache Kafka and KSQL. We're going to see how we can take a stream of inbound ATM transactions and easily set up an application to detect transactions that look fraudulent.
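The shape of such a detection rule can be sketched in Python (toy thresholds and hypothetical field names; the post itself expresses this as a KSQL query): flag any account with more than two transactions inside a 60-second window.

```python
from collections import defaultdict

def flag_suspicious(transactions, max_txns=2, window_secs=60):
    """Toy fraud rule: an account is suspicious if more than max_txns
    of its transactions fall within any window_secs-long span."""
    by_account = defaultdict(list)
    for t in sorted(transactions, key=lambda t: t["ts"]):
        by_account[t["account"]].append(t["ts"])
    flagged = set()
    for account, times in by_account.items():
        for start in times:
            in_window = [x for x in times if start <= x <= start + window_secs]
            if len(in_window) > max_txns:
                flagged.add(account)
    return flagged

txns = [
    {"account": "A", "ts": 0}, {"account": "A", "ts": 10},
    {"account": "A", "ts": 30},                      # 3 txns in 60s: flagged
    {"account": "B", "ts": 0}, {"account": "B", "ts": 300},
]
assert flag_suspicious(txns) == {"A"}
```

In KSQL the same logic is a windowed `GROUP BY` with a `HAVING COUNT(*)` threshold, running continuously over the inbound stream rather than over a static table.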
Kafka Streams, Apache Kafka’s stream processing library, allows developers to build sophisticated stateful stream processing applications which you can deploy in an environment of your choice.
Is the reactive, immutable, functional style of microservices enabled by Kafka and its Streams API the right fit for your application?
In this article we’re going to see an example of a powerful design pattern based around event-driven architectures and a streaming platform. We’ll discuss how stream processing with Apache Kafka® and KSQL is the changing face of ETL and why companies are adopting this.
Enabling everyone to run Apache Kafka® on Kubernetes is an important part of our mission to put a streaming platform at the heart of every company. This is why we look forward to releasing an implementation of the Kubernetes Operator API for automated provisioning, management, an …
We are very pleased to announce that the Seattle-based Distributed Masonry team, the innovators behind Onyx Platform and Pyrostore, will be joining Confluent to extend the cloud-native storage and processing capabilities of Confluent Cloud.
In this article we’re going to conclude our fun with syslog data by looking at how we can enrich inbound streams of syslog data with reference information from elsewhere to produce a real-time enriched data stream.
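A stream-table join of that kind is easy to sketch in Python (toy data and field names; not the KSQL engine): each inbound syslog event is enriched with a lookup against reference data keyed by host.

```python
# Reference data: a lookup table keyed by host (toy values).
hosts = {
    "fw-01": {"site": "London", "role": "firewall"},
    "sw-02": {"site": "Oslo",   "role": "switch"},
}

def enrich(event, table):
    """Left-join one syslog event against the reference table,
    falling back to 'unknown' when the host isn't in the table."""
    ref = table.get(event["host"], {"site": "unknown", "role": "unknown"})
    return {**event, **ref}

stream = [
    {"host": "fw-01", "msg": "link down"},
    {"host": "xx-99", "msg": "heartbeat"},
]
enriched = [enrich(e, hosts) for e in stream]
assert enriched[0]["site"] == "London"
assert enriched[1]["role"] == "unknown"
```

In KSQL this is a stream-table `LEFT JOIN`: the syslog stream joins against a table materialized from a reference topic, and the enriched stream is itself written back to a new topic.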
Summary: Confluent is starting to explore the integration of databases with event streams. As part of the first step in this exploration, Martin Kleppmann has made a new open source …