As Andreas pointed out, Consumer::andThen
is an associative function; while the resulting consumer may have a different internal structure, it is still functionally equivalent.
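A minimal sketch with three ad-hoc consumers shows what that means: both groupings invoke the same consumers in the same order, even though the composed objects are nested differently.
Consumer<String> a = s -> System.out.println("a: " + s);
Consumer<String> b = s -> System.out.println("b: " + s);
Consumer<String> c = s -> System.out.println("c: " + s);

Consumer<String> left  = a.andThen(b).andThen(c); // (a andThen b) andThen c
Consumer<String> right = a.andThen(b.andThen(c)); // a andThen (b andThen c)

left.accept("x");  // prints a: x, b: x, c: x
right.accept("x"); // prints a: x, b: x, c: x (same observable behavior)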
But let's debug it:
import java.util.function.Consumer;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public static void main(String[] args) {
    performAllTasks(IntStream.range(0, 10)
        .mapToObj(i -> new DebuggableConsumer(""+i)), new Object());
}
private static <T> void performAllTasks(Stream<Consumer<T>> consumerList, T data) {
    Consumer<T> reduced = consumerList.reduce(Consumer::andThen).orElse(x -> {});
    reduced.accept(data);
    System.out.println(reduced);
}
static class DebuggableConsumer implements Consumer<Object> {
    private final Consumer<Object> first, second;
    private final boolean leaf;

    DebuggableConsumer(String name) {
        this(x -> System.out.println(name), x -> {}, true);
    }
    DebuggableConsumer(Consumer<Object> a, Consumer<Object> b, boolean l) {
        first = a; second = b;
        leaf = l;
    }
    public void accept(Object t) {
        first.accept(t);
        second.accept(t);
    }
    @Override public Consumer<Object> andThen(Consumer<? super Object> after) {
        return new DebuggableConsumer(this, after, false);
    }
    public @Override String toString() {
        if(leaf) return first.toString();
        return toString(new StringBuilder(200), 0, 0).toString();
    }
    // renders the composition structure as a tree, using box-drawing characters
    private StringBuilder toString(StringBuilder sb, int preS, int preEnd) {
        int myHandle = sb.length()-2;
        sb.append(leaf? first: "combined").append('\n');
        if(!leaf) {
            int nPreS=sb.length();
            ((DebuggableConsumer)first).toString(
                sb.append(sb, preS, preEnd).append("\u2502 "), nPreS, sb.length());
            nPreS=sb.length();
            sb.append(sb, preS, preEnd);
            int lastItemHandle=sb.length();
            ((DebuggableConsumer)second).toString(sb.append("  "), nPreS, sb.length());
            sb.setCharAt(lastItemHandle, '\u2514');
        }
        if(myHandle>0) {
            sb.setCharAt(myHandle, '\u251c');
            sb.setCharAt(myHandle+1, '\u2500');
        }
        return sb;
    }
}
will print
0
1
2
3
4
5
6
7
8
9
combined
├─combined
│ ├─combined
│ │ ├─combined
│ │ │ ├─combined
│ │ │ │ ├─combined
│ │ │ │ │ ├─combined
│ │ │ │ │ │ ├─combined
│ │ │ │ │ │ │ ├─combined
│ │ │ │ │ │ │ │ ├─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@378fd1ac
│ │ │ │ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@49097b5d
│ │ │ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@6e2c634b
│ │ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@37a71e93
│ │ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7e6cbb7a
│ │ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7c3df479
│ │ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7106e68e
│ │ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@7eda2dbb
│ └─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@6576fe71
└─SO$DebuggableConsumer$$Lambda$21/0x0000000840069040@76fb509a
whereas changing the reduction code to
private static <T> void performAllTasks(Stream<Consumer<T>> consumerList, T data) {
    Consumer<T> reduced = consumerList.parallel().reduce(Consumer::andThen).orElse(x -> {});
    reduced.accept(data);
    System.out.println(reduced);
}
prints on my machine
0
1
2
3
4
5
6
7
8
9
combined
├─combined
│ ├─combined
│ │ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@49097b5d
│ │ └─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@6e2c634b
│ └─combined
│ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@37a71e93
│ └─combined
│ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7e6cbb7a
│ └─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7c3df479
└─combined
├─combined
│ ├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7106e68e
│ └─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@7eda2dbb
└─combined
├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@6576fe71
└─combined
├─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@76fb509a
└─SO$DebuggableConsumer$$Lambda$22/0x0000000840077c40@300ffa5d
illustrating the point of Andreas’ answer, but also highlighting an entirely different problem. You can push it to the extreme by using, e.g., IntStream.range(0, 100)
in the example code.
The result of the parallel evaluation is actually better than the sequential evaluation, as the sequential evaluation creates an unbalanced tree. When accepting an arbitrary stream of consumers, this can be an actual performance issue or even lead to a StackOverflowError
when trying to evaluate the resulting consumer.
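A hedged illustration of that failure mode (the exact limit depends on the JVM’s thread stack size, so the count below is only an assumption):
// A left-deep chain of ~100_000 andThen calls is likely to blow the stack
// when the compound consumer is finally invoked, because each accept call
// recurses into the next nested consumer.
Consumer<Object> chain = IntStream.range(0, 100_000)
    .mapToObj(i -> (Consumer<Object>) x -> {})
    .reduce(Consumer::andThen)
    .orElse(x -> {});
chain.accept(new Object()); // may throw StackOverflowError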
For any nontrivial number of consumers, you actually want a balanced consumer tree, but using a parallel stream for that is not the right solution, as a) Consumer::andThen
is a cheap operation with no real benefit from parallel evaluation and b) the balancing would depend on unrelated properties, like the nature of the stream source and the number of CPU cores, which determine when the reduction falls back to the sequential algorithm.
Of course, the simplest solution would be
private static <T> void performAllTasks(Stream<Consumer<T>> consumers, T data) {
    consumers.forEachOrdered(c -> c.accept(data));
}
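A usage sketch of this variant (reusing the DebuggableConsumer from above as a stand-in for real tasks): every consumer sees the same data object, in encounter order, and no compound Consumer is ever built.
performAllTasks(
    IntStream.range(0, 10).mapToObj(i -> new DebuggableConsumer("" + i)),
    new Object()); // prints 0..9; no composed Consumer is ever created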
But when you want to construct a compound Consumer
for re-use, you may use
private static final int ITERATION_THRESHOLD = 16; // tune yourself

public static <T> Consumer<T> combineAllTasks(Stream<Consumer<T>> consumers) {
    List<Consumer<T>> consumerList = consumers.collect(Collectors.toList());
    if(consumerList.isEmpty()) return t -> {};
    if(consumerList.size() == 1) return consumerList.get(0);
    if(consumerList.size() < ITERATION_THRESHOLD)
        return balancedReduce(consumerList, Consumer::andThen, 0, consumerList.size());
    return t -> consumerList.forEach(c -> c.accept(t));
}
private static <T> T balancedReduce(List<T> l, BinaryOperator<T> f, int start, int end) {
    if(end-start>2) {
        int mid=(start+end)>>>1;
        return f.apply(balancedReduce(l, f, start, mid), balancedReduce(l, f, mid, end));
    }
    T t = l.get(start++);
    if(start<end) t = f.apply(t, l.get(start));
    assert start==end || start+1==end;
    return t;
}
The code will provide a single Consumer
that just uses a loop when the number of consumers exceeds a threshold. This is the simplest and most efficient solution for a larger number of consumers; in fact, you could drop all the other approaches for smaller numbers and still get reasonable performance…
Note that this still doesn’t hinder parallel processing of the stream of consumers, if their construction really benefits from it.
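A small usage sketch (reusing the DebuggableConsumer from above; the counts are arbitrary): build the compound consumer once, then reuse it.
Consumer<Object> combined = combineAllTasks(
    IntStream.range(0, 8).mapToObj(i -> new DebuggableConsumer("" + i)));
combined.accept(new Object()); // prints 0..7 in order
combined.accept(new Object()); // the compound consumer can be reused
System.out.println(combined);  // 8 < ITERATION_THRESHOLD, so this prints a balanced tree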