Understanding data stream processing

Stream Processing Frameworks

The most popular stateful stream processing frameworks in the open source space are Apache Flink, Apache Kafka, and Apache Spark – all developed at the Apache Software Foundation and therefore available under the Apache license. All are written in Java or Scala and at home in the Java Virtual Machine (JVM). Apache Flink was born as a stream processor. Apache Spark came out of the batch-processing environment and, over time, added stream processing capabilities in the form of processing in micro-batches. Apache Kafka is actually a tool for storing data streams and is very popular as a stream processing source and sink. In recent years, however, Kafka has also been given a number of functions that enable it to operate as a stream processor.

Each stream processing framework has a different programming model, but the tools do have some similarities. For example, all three offer programming interfaces (APIs) at varying levels of abstraction: from interfaces relatively close to the system up to declarative languages such as SQL (see the box entitled "Databases and Stream Processing").

Databases and Stream Processing

Databases and stream processing often appear together, but it is important to distinguish between databases and stream-processing frameworks. In a streaming context, SQL acts only as a declarative description of the program: The application usually runs continuously, processing the unbounded data stream piece by piece and adjusting its internal state and output as new events become available.

To use SQL at all, stream-processing frameworks often define a duality between a data stream and a dynamic table. (If you look at a table and observe all changes over time, you ultimately have a data stream.) Databases often even allow access to this data stream, either through a kind of binary log, or in the form of a Change Data Capture (CDC) data stream.
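
Apache Flink, for example, exposes this duality directly in its Table API. The following is a minimal sketch (assuming Flink 1.13 or later; the Play class and its field names are made up for illustration) that turns a data stream into a dynamic table, runs a continuous aggregation on it, and converts the result back into a CDC-style changelog stream whose rows are marked as inserts or updates:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class StreamTableDuality {

  // Simple POJO for a playback event (hypothetical example type)
  public static class Play {
    public String userid;
    public String title;
    public Play() {}
    public Play(String userid, String title) { this.userid = userid; this.title = title; }
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

    // A (finite, for demonstration purposes) stream of playback events
    DataStream<Play> plays = env.fromElements(
        new Play("alice", "Video A"),
        new Play("alice", "Video B"),
        new Play("bob", "Video A"));

    // Data stream -> dynamic table
    tEnv.createTemporaryView("Plays", tEnv.fromDataStream(plays));

    // Continuous aggregation over the dynamic table
    Table counts = tEnv.sqlQuery(
        "SELECT userid, COUNT(title) AS cnt FROM Plays GROUP BY userid");

    // Dynamic table -> changelog stream: every update to the result is emitted
    // as an insert (+I), update-before (-U), or update-after (+U) row
    tEnv.toChangelogStream(counts).print();

    env.execute("stream-table duality");
  }
}

Running this sketch prints the changing per-user counts as a changelog rather than as a final table, which is exactly the data stream view of the dynamic result table.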

Listing 1 shows how to implement the example of linking video playback events to video ads using the Apache Flink SQL API. In this case, you only have to declare p.playTime and i.impressionTime as event-time attributes, including the watermark strategy, and you end up with quite a compact program that continuously outputs, for each video playback, all ad impressions shown up to one hour before playback. In Flink's lower-level DataStream API (Java or Scala), the code for this scenario would be a little more involved: The programmer has to take care of temporarily buffering the events while reading the data streams until the watermark signals that the input is complete (a rough DataStream sketch follows Listing 1).

Listing 1

SQL example in Flink

SELECT
  p.userid, p.title, p.playTime, COLLECT(DISTINCT i.title) AS impressions
FROM
  Plays p,
  Impressions i
WHERE
  p.userid = i.userid AND
  i.impressionTime BETWEEN p.playTime - INTERVAL '1' HOUR AND p.playTime
GROUP BY p.userid, p.title, p.playTime
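
For comparison, the following rough sketch shows how the same scenario might look in the DataStream API; the Play and Impression classes and the ten-second out-of-orderness bound are made up for illustration. Flink's interval join buffers the events in state internally until the watermark allows them to be released, but the keys, timestamps, and watermark strategy have to be declared explicitly, and a fully manual implementation would use a KeyedCoProcessFunction with state and timers instead:

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class PlaysImpressionsJoin {

  // Hypothetical event types; timestamps are epoch milliseconds
  public static class Play {
    public String userid; public String title; public long playTime;
    public Play() {}
    public Play(String u, String t, long ts) { userid = u; title = t; playTime = ts; }
  }
  public static class Impression {
    public String userid; public String title; public long impressionTime;
    public Impression() {}
    public Impression(String u, String t, long ts) { userid = u; title = t; impressionTime = ts; }
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Event-time setup: allow events to arrive up to 10 seconds out of order
    DataStream<Play> plays = env
        .fromElements(new Play("alice", "Video A", 7_200_000L))
        .assignTimestampsAndWatermarks(
            WatermarkStrategy.<Play>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                .withTimestampAssigner((p, ts) -> p.playTime));

    DataStream<Impression> impressions = env
        .fromElements(new Impression("alice", "Ad X", 5_400_000L))
        .assignTimestampsAndWatermarks(
            WatermarkStrategy.<Impression>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                .withTimestampAssigner((i, ts) -> i.impressionTime));

    // Interval join: for each play, emit impressions shown to the same user
    // up to one hour before playback; Flink buffers the events in state until
    // the watermark says the interval is complete
    plays.keyBy(p -> p.userid)
        .intervalJoin(impressions.keyBy(i -> i.userid))
        .between(Time.hours(-1), Time.seconds(0))
        .process(new ProcessJoinFunction<Play, Impression, String>() {
          @Override
          public void processElement(Play p, Impression i, Context ctx, Collector<String> out) {
            out.collect(p.userid + " played '" + p.title + "' after seeing '" + i.title + "'");
          }
        })
        .print();

    env.execute("plays-impressions interval join");
  }
}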

A special class of APIs known as stateful functions makes it easy and flexible to create event-driven distributed applications that use a stream processor underneath but are equally at home in a serverless environment. A program of this kind is not modeled as a data stream but as a set of stateful functions, one for each object in the system, which can freely interact with one another. Programs written with stateful functions can be implemented in a number of different programming languages, because the functions communicate with each other over HTTP and are completely independent of each other. Ververica was one of the first companies to publish a stateful functions API for this type of modeling and programming in the stream processing environment a year ago, and it is now an official component of Apache Flink.
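
As a minimal sketch, assuming the Java SDK for Flink Stateful Functions (version 3.x) and a made-up namespace, function name, and counter: one logical instance of the function below exists per ID, the runtime persists the declared state for each instance, and messages are routed by function type and ID. Registering the function spec and serving it over HTTP are omitted here.

import java.util.concurrent.CompletableFuture;

import org.apache.flink.statefun.sdk.java.Context;
import org.apache.flink.statefun.sdk.java.StatefulFunction;
import org.apache.flink.statefun.sdk.java.TypeName;
import org.apache.flink.statefun.sdk.java.ValueSpec;
import org.apache.flink.statefun.sdk.java.message.Message;

public class GreeterFn implements StatefulFunction {

  // Hypothetical type name, used when registering the function spec (omitted here)
  static final TypeName TYPE = TypeName.typeNameOf("example", "greeter");

  // Persistent, per-ID state managed by the stream processor underneath
  static final ValueSpec<Integer> SEEN = ValueSpec.named("seen").withIntType();

  @Override
  public CompletableFuture<Void> apply(Context context, Message message) {
    int seen = context.storage().get(SEEN).orElse(0) + 1;
    context.storage().set(SEEN, seen);

    // At this point the function could context.send(...) messages to other
    // functions or to an egress; here it only updates its own state
    System.out.printf("Seen %s %d times%n", context.self().id(), seen);

    return context.done();
  }
}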

Conclusions

Stream processing has become an important tool for processing large volumes of distributed data in real time. The stream processing paradigm offers a straightforward approach to creating distributed real-time applications of arbitrary complexity. Open source frameworks help programmers produce correct results and also handle task distribution, network communication, and fault tolerance in the underlying cluster. Stream processing might seem confusing at first glance, especially event time and watermarks, but once you have internalized the various concepts, you'll be well on your way to building your own stream processing applications.

In order to make stream processing available to an even wider audience, SQL APIs have been created that let software engineers, data engineers, and data scientists benefit from these advantages through the familiar interface of a database. These APIs, as well as the ecosystem that surrounds them, are undergoing continuous development.

The Author

Dr. Nico Kruber is a committer in the Apache Flink project and works as a solutions architect at Ververica, where he helps both customers and the open source community get the most out of Apache Flink. Before his time with Apache Flink, he did his PhD in Distributed Systems at the Zuse Institute Berlin and worked on the distributed, transactional key-value store Scalaris.
