Trickle – asynchronous Java made easier

At Spotify, we’re doing more and more Java (Spotify started out as a mostly-Python shop, but performance requirements are changing that), and more and more of that Java is complex, asynchronous code. By ‘complex, asynchronous’, I mean things along the lines of:

  1. Call a search engine for a list of albums matching a certain query string.
  2. Call a search engine for a list of tracks matching the same query string.
  3. When the tracks list is available, call a service to find out how many times they were played.
  4. Combine the results of the three service calls into some data structure and return it.
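
For concreteness, here is roughly what that flow looks like if you just do it blocking (a minimal sketch; the synchronous ‘blockingSearch’ and ‘blockingDecorator’ services are hypothetical stand-ins, since the real services below return ListenableFutures):

  // Hypothetical blocking services, for illustration only. Each call
  // parks the thread until the service responds, one call after another.
  public SomeResult search(String query) {
    List<Album> albums = blockingSearch.searchForAlbums(query);           // blocks
    List<Track> tracks = blockingSearch.searchForTracks(query);           // blocks
    List<DecoratedTrack> decorated = blockingDecorator.decorate(tracks);  // blocks
    return new SomeResult(decorated, albums);
  }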

This is easy to do synchronously, but if you want performance, you don’t want to waste threads on blocking on service calls that take several tens of milliseconds. And if you do it asynchronously, you end up with code like this:

  ListenableFuture<List<Track>> tracks = search.searchForTracks(query);
  final ListenableFuture<List<DecoratedTrack>> decorated =
    Futures.transform(tracks,
      new AsyncFunction<List<Track>, List<DecoratedTrack>>() {
        @Override
        public ListenableFuture<List<DecoratedTrack>> apply(List<Track> tracks) {
          return decorationService.decorate(tracks);
        }
      });
  final ListenableFuture<List<Album>> albums = search.searchForAlbums(query);

  ListenableFuture<List<Object>> allDoneSignal =
    Futures.<Object>allAsList(decorated, albums);
 
  return Futures.transform(allDoneSignal,
    new Function<List<Object>, SomeResult>() {
      @Override
      public SomeResult apply(List<Object> dummy) {         
        return new SomeResult(
          Futures.getUnchecked(decorated),
          Futures.getUnchecked(albums));
      }
  });

It’s not exactly clear what’s going on. To me, some of the problems with the above code are:

  1. There’s a lot of noise due to Java syntax; it’s really hard to see which bits of the code do something useful.
  2. There’s a lot of concurrency management trivia in the way of understanding which service calls relate to which. The overall flow of data is very hard to understand.
  3. The pattern of using Futures.allAsList() to get a future that is a signal that ‘all the underlying futures are done’ is non-obvious, adding conceptual weight.
  4. The final transform doesn’t actually transform its inputs. To preserve type safety, we have to reach out from the enclosing scope to the ‘final’ decorated and albums futures.
  5. It’s easy to introduce subtle bugs: application code can accidentally become blocking by calling ‘get’ on a future that isn’t yet completed, or silently drop a dependency by forgetting to add the future of a newly added service call to the ‘allDoneSignal’ list.

We’ve created a project called Trickle to reduce the level of ‘async pain’. With Trickle, the code above could be written like so:

  // constant definitions
  private static final Input<String> QUERY = Input.named("query");

  // the graph that is wired up below and executed in search()
  private Graph<SomeResult> searchGraph;

  // one-time setup code, like a constructor or something
  private void wireUpGraph() {
    Func1<String, List<Track>> searchTracksFunc = new Func1<String, List<Track>>() {
      @Override
      public ListenableFuture<List<Track>> run(String query) {
        return search.searchForTracks(query);
      }
    };
    Func1<List<Track>, List<DecoratedTrack>> decorateTracksFunc =
      new Func1<List<Track>, List<DecoratedTrack>>() {
        @Override
        public ListenableFuture<List<DecoratedTrack>> run(List<Track> tracks) {
          return decorationService.decorate(tracks);
        }
      };
    Func1<String, List<Album>> searchAlbumsFunc = new Func1<String, List<Album>>() {
      @Override
      public ListenableFuture<List<Album>> run(String query) {
        return search.searchForAlbums(query);
      }
    };
    Func2<List<DecoratedTrack>, List<Album>, SomeResult> combine =
      new Func2<List<DecoratedTrack>, List<Album>, SomeResult>() {
        @Override
        public ListenableFuture<SomeResult> run(List<DecoratedTrack> decorated, List<Album> albums) {
          return Futures.immediateFuture(new SomeResult(decorated, albums));
        }
      };

    Graph<List<Track>> searchTracks = Trickle.call(searchTracksFunc).with(QUERY);
    Graph<List<DecoratedTrack>> decorateTracks = Trickle.call(decorateTracksFunc).with(searchTracks);
    Graph<List<Album>> albums = Trickle.call(searchAlbumsFunc).with(QUERY);
    this.searchGraph = Trickle.call(combine).with(decorateTracks, albums);
  }

  // actual invocation method  
  public ListenableFuture<SomeResult> search(String query) {
    return this.searchGraph.bind(QUERY, query).run();
  }

The code is not shorter, but there are some interesting differences:

  • The dependencies between different calls are shown much more clearly.
  • The individual steps are more clearly separated out, making them more easily testable.
  • Each step (a node in Trickle lingo) is only invoked when the steps it depends on have completed, so you never get a Future as an input to your business logic. This makes it very hard for application code to accidentally block and defeat the concurrency.
  • It’s forward-compatible with lambdas, meaning it’ll look a lot nicer with Java 8.
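
As a rough illustration of that last point, here is a sketch of the same wiring using Java 8 lambdas (untested, and it assumes Func1/Func2 stay single-method interfaces, as they are in the code above):

  // Sketch only: the graph above wired with Java 8 lambdas.
  Graph<List<Track>> searchTracks =
    Trickle.call((String q) -> search.searchForTracks(q)).with(QUERY);
  Graph<List<DecoratedTrack>> decorateTracks =
    Trickle.call((List<Track> ts) -> decorationService.decorate(ts)).with(searchTracks);
  Graph<List<Album>> albums =
    Trickle.call((String q) -> search.searchForAlbums(q)).with(QUERY);
  this.searchGraph =
    Trickle.call((List<DecoratedTrack> d, List<Album> a) ->
        Futures.immediateFuture(new SomeResult(d, a)))
      .with(decorateTracks, albums);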

For more examples and information about how it works, take a look at the README and wiki, because this post is about why, not how, I think you should use Trickle.

Why Trickle?

A danger that you always run as an engineer is that you fall in love with your own ideas and pursue something because you invented it rather than because it’s actually a good thing. Some of us had created systems for managing asynchronous call graphs at previous jobs, and while they had very large limitations (at least the ones I created did), it was clear that they made things quite a lot easier in the contexts where they were used. But even with that experience, we were not sure Trickle was a good idea.

So once we had an API that felt like it was worth trying out, we sent out an email to the backend guild at Spotify, asking for volunteers to compare it with the two best similar frameworks we had been able to find on the interwebs (we ruled out Disruptor and Akka because we felt that introducing them would be an unreasonably large change to our existing ecosystem). The comparison was done by implementing a particular call graph, and we then asked people to fill out a questionnaire measuring a) how easy it was to get started, b) how much the framework got out of the way, allowing you to focus on the core business logic, and c) how clean the resulting code was. Nine people took the survey, and the results were pretty interesting (1 is worst, 5 is best):

Technology          Getting going   Focus on core   Cleanness
ListenableFutures   4.0             3.6             2.7
RxJava              2.8             3.7             3.1
Trickle             3.9             3.8             4.4

The most common comment regarding ListenableFutures (5/9 said this) was: “I already knew it, so it was of course easy to get started”. The most common comment about Trickle (6/9) was “no documentation” – three of the people who said that also said “but it was still very easy to get going”. So Trickle, without documentation, was almost as easy to get going with as raw Futures that most of the people already knew, and it was a clear winner in code cleanness. Given that we considered the cleanness of the resulting code to be the most important criterion, it felt like we were onto something.

Since getting that feedback, we’ve iterated a couple of times more on the API, and to ensure that it is production quality, we’re using it in the service discovery system that our group owns (and which can bring down almost the entire Spotify backend if it fails). We’ve also added some documentation, but not too much – we want to make sure that the API is simple enough that you can use it without needing to read any documentation at all. As you can tell from the links above, we also open-sourced it. Finally, we did a micro benchmark to ensure we hadn’t done anything that would introduce crazy performance limitations. All micro benchmarks are liars, but the results look like we’re in a reasonable zone compared to the others:

Benchmark                                    Mode    Samples   Mean      Mean error   Units
c.s.t.Benchmark.benchmarkExecutorGuava       thrpt   5          68.778    4.066       ops/ms
c.s.t.Benchmark.benchmarkExecutorRx          thrpt   5          20.242    0.710       ops/ms
c.s.t.Benchmark.benchmarkExecutorTrickle     thrpt   5          52.148    1.776       ops/ms
c.s.t.Benchmark.benchmarkImmediateGuava      thrpt   5         890.375   79.594       ops/ms
c.s.t.Benchmark.benchmarkImmediateRx         thrpt   5         312.870    8.643       ops/ms
c.s.t.Benchmark.benchmarkImmediateTrickle    thrpt   5         168.820   13.991       ops/ms

Trickle is significantly slower than plain ListenableFutures (especially when the futures aren’t real futures because the result is immediately available; this case is very fast in Guava, whereas Trickle doesn’t do any optimisations for it). This is not a surprise, since Trickle is built on top of ListenableFutures and does more work. The key result we wanted out of this was that it shouldn’t be orders of magnitude slower than plain ListenableFutures, and it’s not. If a single thread is capable of doing 52k operations/second in Trickle, that’s more than our use case requires, so at least this test didn’t indicate that we had done something very wrong. I’m skeptical about the RxJava performance results; its slowness when using real threads may well be due to some mistake I made when writing the RxJava code.
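
For reference, those numbers are JMH throughput output. A benchmark of the ‘immediate’ Trickle shape could be set up roughly like the sketch below (a hypothetical example, not the actual c.s.t.Benchmark code; the JMH annotations come from org.openjdk.jmh.annotations, and Trickle/Guava imports are as in the examples above):

  // Hypothetical JMH sketch: a one-node graph whose future is already
  // completed when returned, measuring single-threaded throughput.
  @State(Scope.Benchmark)
  @BenchmarkMode(Mode.Throughput)
  @OutputTimeUnit(TimeUnit.MILLISECONDS)
  public class ImmediateTrickleBenchmark {
    private static final Input<String> IN = Input.named("in");
    private Graph<String> graph;

    @Setup
    public void setup() {
      Func1<String, String> shout = new Func1<String, String>() {
        @Override
        public ListenableFuture<String> run(String input) {
          return Futures.immediateFuture(input.toUpperCase());
        }
      };
      graph = Trickle.call(shout).with(IN);
    }

    @Benchmark
    public String immediateTrickle() throws Exception {
      return graph.bind(IN, "hello").run().get();
    }
  }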

Trickle, while still immature, in particular regarding the exact details of the API, is now at a stage where we feel it’s ready for general use. So if you’re using Futures.transform() and Futures.allAsList(), chances are good that you can make your code easier to read and work with by switching to Trickle. Or, if you’re doing things synchronously and wish you could improve the performance of your services, perhaps Trickle could help you get a performance boost!


  1. #1 by Björn Ritzl on February 3, 2014 - 20:00

    We had a discussion at work recently regarding the new Bolts framework (by Facebook/Parse, https://github.com/BoltsFramework/Bolts-Android) for Android and iOS, specifically around Tasks. Tasks try to simplify async requests, much in the same way as Trickle does. I argued that I hardly ever have the need to perform more than one async task at a time, and almost never more than two. Judging from your example, the need to perform a chain of async tasks varies a lot depending on product and project. Mind you, I come from a client perspective, and the needs may be completely different on the server.

    Anyways, I like the look of Bolts and the way it gets rid of nested async tasks. Trickle has a slightly different approach with a separate “wire graph” step, but it becomes super clear how the async calls relate to each other. I’m not sure which I prefer. Is there something similar to Bolts for pure Java?

    • #2 by Petter Måhlén on February 3, 2014 - 21:01

      I was not aware of Bolts, thanks for the pointer! I think Bolts is quite similar to one of Trickle’s predecessors. It looks like Bolt Tasks were first aimed at solving the problem of chaining a series of calls together, whereas Trickle is aimed first of all at solving the problem of type-safe fan-out and fan-in of many asynchronous requests with graceful degradation. For instance, Trickle’s FuncN interfaces allow type-safe fan-in of multiple futures, something that seems a little harder to do with Bolts, if I understand the whenAll method right. On the other hand, I’m sure a lot of people would rather write the more fluent kind of code you get with Bolts. They do both seem to occupy the same kind of space, with slightly different tradeoffs.

  2. #3 by Kristoffer Sjögren on February 26, 2014 - 21:23

    Are you aware that CompletableFuture (and lambdas) is coming in Java 8? http://download.java.net/jdk8/docs/api/java/util/concurrent/package-summary.html

  3. #4 by Kristoffer Sjögren on February 26, 2014 - 21:32

    Are you aware of the additions to the concurrent package in Java 8? Have a look at CompletableFuture. It has built-in lambda and composability/barrier support for executors.

    • #5 by Petter Måhlén on February 27, 2014 - 09:22

      I wasn’t aware of that, thanks for the pointer. The CompletionStage interface looks at first glance as if it might be a good choice. If or when I find the time, I’ll try to play with it and see what the resulting code looks like. The thing that strikes me as a possible complication is that when you need to combine the results of more than 2 asynchronous computations, it looks like you need to do that in multiple steps, increasing noise. But it does look really interesting.
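
      To make that complication concrete (a hypothetical sketch, not code I’ve actually run): there is no three-way thenCombine, so with three results you either chain pairwise steps or use allOf and reach back into the original futures, much like the allAsList pattern in the post:

        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 1);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 2);
        CompletableFuture<Integer> c = CompletableFuture.supplyAsync(() -> 3);

        // Either chain pairwise combine steps...
        CompletableFuture<Integer> viaChaining =
            a.thenCombine(b, Integer::sum).thenCombine(c, Integer::sum);

        // ...or join on all of them and reach back into the futures.
        CompletableFuture<Integer> viaAllOf =
            CompletableFuture.allOf(a, b, c)
                .thenApply(ignored -> a.join() + b.join() + c.join());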

  4. #6 by abersnazea on September 19, 2014 - 23:00

    Hi, as one of the contributors working on RxJava, I would be interested in hearing more about your experience with it and the feedback others had on why it was so hard to get going with RxJava.

    You might also want to take a second look at the performance, as we’ve made huge improvements and added backpressure support.

    • #7 by Petter Måhlén on September 21, 2014 - 10:29

      Hi, first about the performance: I don’t consider that microbenchmark to be useful for anything other than as a sanity check that we’re not doing anything crazy in Trickle, so I wouldn’t worry about the RX result. A couple of comments from our survey regarding RX are:

      “Too many “primitives” in the API so it’s harder to learn than ListenableFutures which have very few primitives.”
      “RxJava have some interesting ideas. I feel that it has the potential if being better then LF, however, when I got into the details of trying to implement the problem at hand, it was still difficult to get it to bend to my will. ”
      “I had to spend some extra time understanding reactive programming paradigm. It wasn’t very easy to figure out what methods to use to chain or combine observables. But once I completed the exercise, it made sense and the code looked clear in my eye.”
      “API is toooooo big.”

      If I were to try to summarise the comments, I think there were two main themes: first, that the concepts around Observables take some time to get used to, and second, that the API is very large and hard to discover/learn (cf. the number of methods in the Observable interface).
