Minborg

Monday, April 8, 2019

Java Stream: Is a Count Always a Count?

It might appear obvious that counting the elements in a Stream takes longer the more elements there are in the Stream. But, in fact, Stream::count can sometimes complete in a single operation, no matter how many elements the Stream contains. Read this article and learn how.

Count Complexity

The Stream::count terminal operation counts the number of elements in a Stream. The complexity of the operation is often O(N), meaning that the number of sub-operations is proportional to the number of elements in the Stream.

In contrast, the List::size method has a complexity of O(1), which means that regardless of the number of elements in the List, the size() method returns in constant time. This can be observed by running the following JMH benchmarks:
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.List;
import java.util.stream.IntStream;

import static java.util.stream.Collectors.toList;

@State(Scope.Benchmark)
public class CountBenchmark {

    private List<Integer> list;

    @Param({"1", "1000", "1000000"})
    private int size;

    @Setup
    public void setup() {
        list = IntStream.range(0, size)
            .boxed()
            .collect(toList());
    }

    @Benchmark
    public long listSize() {
        return list.size();
    }

    @Benchmark
    public long listStreamCount() {
        return list.stream().count();
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
            .include(CountBenchmark.class.getSimpleName())
            .mode(Mode.Throughput)
            .threads(Threads.MAX)
            .forks(1)
            .warmupIterations(5)
            .measurementIterations(5)
            .build();

        new Runner(opt).run();

    }

}

This produced the following output on my laptop (MacBook Pro mid 2015, 2.2 GHz Intel Core i7):

Benchmark                        (size)   Mode  Cnt          Score           Error  Units
CountBenchmark.listSize               1  thrpt    5  966658591.905 ± 175787129.100  ops/s
CountBenchmark.listSize            1000  thrpt    5  862173760.015 ± 293958267.033  ops/s
CountBenchmark.listSize         1000000  thrpt    5  879607621.737 ± 107212069.065  ops/s
CountBenchmark.listStreamCount        1  thrpt    5   39570790.720 ±   3590270.059  ops/s
CountBenchmark.listStreamCount     1000  thrpt    5   30383397.354 ±  10194137.917  ops/s
CountBenchmark.listStreamCount  1000000  thrpt    5        398.959 ±       170.737  ops/s


As can be seen, the throughput of List::size is largely independent of the number of elements in the List, whereas the throughput of Stream::count drops off rapidly as the number of elements grows. But is this really the case for all Stream implementations?
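
As a side note, for a sized source such as an ArrayList the element count is actually known up front, before any traversal takes place. The following standalone sketch (not part of the benchmark above) shows this by asking the stream's Spliterator directly; this is exactly the kind of information a shortcut-taking count() could exploit:

import java.util.List;
import java.util.Spliterator;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class SizedCountSketch {

    public static void main(String[] args) {
        List<Integer> list = IntStream.range(0, 1_000_000)
            .boxed()
            .collect(Collectors.toList());

        // An ArrayList's spliterator reports the SIZED characteristic, so its
        // exact size is known up front, without traversing a single element.
        Spliterator<Integer> spliterator = list.stream().spliterator();

        System.out.println("SIZED: "
            + spliterator.hasCharacteristics(Spliterator.SIZED)); // true
        System.out.println("Exact size: "
            + spliterator.getExactSizeIfKnown());                 // 1000000 (-1 if unknown)
    }

}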

Source Aware Streams

Some stream implementations are actually aware of their sources and can take appropriate shortcuts, merging stream operations into the stream source itself. This can improve performance massively, especially for large streams. The Speedment ORM tool allows databases to be viewed as Stream objects, and these streams can optimize away many stream operations, such as Stream::count, as demonstrated in the benchmark below. I have used the open-source Sakila example database as data input. The Sakila database models a film rental store, with films, actors, rentals and so on.

@State(Scope.Benchmark)
public class SpeedmentCountBenchmark {

    private Speedment app;
    private RentalManager rentals;
    private FilmManager films;

    @Setup
    public void setup() {
        app =  new SakilaApplicationBuilder()
            .withBundle(DataStoreBundle.class)
            .withLogging(ApplicationBuilder.LogType.STREAM)
            .withPassword(ExampleUtil.DEFAULT_PASSWORD)
            .build();

        app.get(DataStoreComponent.class).ifPresent(DataStoreComponent::load);

        rentals = app.getOrThrow(RentalManager.class);
        films = app.getOrThrow(FilmManager.class);

    }

    @TearDown
    public void tearDown() {
        app.close();
    }


    @Benchmark
    public long rentalsCount() {
        return rentals.stream().count();
    }


    @Benchmark
    public long filmsCount() {
        return films.stream().count();
    }


    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
            .include(SpeedmentCountBenchmark.class.getSimpleName())
            .mode(Mode.Throughput)
            .threads(Threads.MAX)
            .forks(1)
            .warmupIterations(5)
            .measurementIterations(5)
            .build();

        new Runner(opt).run();

    }

}

When run, the following output will be produced:

Benchmark                              Mode  Cnt         Score          Error  Units
SpeedmentCountBenchmark.filmsCount    thrpt    5  71037544.648 ± 75915974.254  ops/s
SpeedmentCountBenchmark.rentalsCount  thrpt    5  69750012.675 ± 37961414.355  ops/s


The “rental” table contains over 10,000 rows whereas the “film” table contains only 1,000 rows. Nevertheless, their Stream::count operations complete in almost the same time. Even if a table contained a trillion rows, it would still be counted in roughly the same elapsed time. Thus, this Stream::count implementation has a complexity of O(1), not O(N).

Note: The benchmarks above were run with Speedment's “DataStore” in-JVM-memory acceleration. If run without acceleration, directly against a database, the response time would depend on the underlying database's ability to execute a “SELECT count(*) FROM film” query.
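
To illustrate the general idea of pushing a count down to the source, here is a minimal JDBC sketch (this is not Speedment's actual implementation, and the connection handling is assumed to be provided by the caller): instead of pulling every row into the JVM and counting, the database is asked to do the counting itself.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class FilmCounter {

    // Hypothetical helper: rather than streaming every row over the wire and
    // counting in the JVM, let the database do the counting so that no rows
    // need to be materialized and transferred at all.
    public static long countFilms(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM film")) {
            rs.next();
            return rs.getLong(1);
        }
    }

}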

Summary

It is possible to create Stream implementations that count their elements in a single operation rather than counting each and every element in the stream. This can improve performance significantly, especially for streams with many elements.

Resources

Speedment Stream ORM Initializer: https://www.speedment.com/initializer/
Sakila: https://dev.mysql.com/doc/index-other.html or https://hub.docker.com/r/restsql/mysql-sakila

2 comments:

  1. This is only true on Java 8. Since Java 9, `count` is short-circuited for streams whose size is known (as is the case here) - see this tweet.

    1. Thanks for your perfectly correct comment, Nicolai. I think this is material for a follow-up article where this should be elaborated further.
