Oct 28, 2016

Simplifying Distributed Systems Using Apache Kafka

With the rising popularity of microservice architectures and the scaling patterns typically used at enterprises, distributed systems are becoming more common and more complex. What was once a simple web server connected to a database can now entail multiple databases, caches, and integrations with other services and systems. By using Apache Kafka, one can take much of the integration complexity out of the system, reduce coupling between components, easily expand functionality without disruption, and scale horizontally.
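To illustrate the decoupling described above, here is a minimal sketch using the plain Java Kafka client: one service publishes order events to a topic, and an independent consumer group reads them, so new consumers can be added later without touching the producer. The topic name (`orders`), broker address, and consumer group id are illustrative assumptions, not details from the talk.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class OrderEventsExample {

    // Producer side: the order service publishes events and knows nothing
    // about who consumes them.
    static void publishOrderCreated(String orderId, String payload) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by orderId keeps all events for one order in the same partition, in order.
            producer.send(new ProducerRecord<>("orders", orderId, payload));
        }
    }

    // Consumer side: e.g. a notification service added later; it only needs its
    // own consumer group to get its own copy of the event stream.
    static void consumeOrders() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "notification-service"); // assumed group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("order %s -> %s%n", record.key(), record.value());
            }
        }
    }
}
```

Because the producer writes to a topic rather than calling downstream services directly, scaling out is a matter of adding partitions and consumer instances rather than changing application code.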

This presentation will cover patterns and concepts that can be used to achieve all of the above. There will be a quick overview of how Apache Kafka works, its differences from other messaging brokers, and why those differences matter. I'll speak about the good, the bad, and what was missed during my experience working with large distributed systems. Finally, I'll briefly mention other technologies that work well with Kafka.

Original slides here.


