I'm using a LongAccumulator
as a shared counter in map operations, but it seems I'm not using it correctly: the counter's state on the worker nodes is not updated. Here's what my counter class looks like:
import java.io.Serializable;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Counter implements Serializable {

    // static so the logger is not dragged into task serialization
    private static final Logger log = LoggerFactory.getLogger(Counter.class);

    private LongAccumulator counter;

    public Counter(JavaSparkContext javaSparkContext) {
        counter = javaSparkContext.sc().longAccumulator();
    }

    public Long increment() {
        log.info("Incrementing counter with id: " + counter.id() + " on thread: " + Thread.currentThread().getName());
        counter.add(1);
        Long value = counter.value();
        log.info("Counter's value with id: " + counter.id() + " is: " + value + " on thread: " + Thread.currentThread().getName());
        return value;
    }
}
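For context, this is roughly how the counter is driven from a map operation; the input data and the transformation below are simplified stand-ins for my actual job:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaSparkContext;

public class CounterUsage {
    public static void main(String[] args) {
        // in the real job the master points at the cluster;
        // local[2] is only here to keep the sketch self-contained
        JavaSparkContext sc = new JavaSparkContext("local[2]", "counter-test");
        Counter counter = new Counter(sc);

        // Counter is Serializable, so it is shipped to the executors with the
        // task closure; each task calls increment() on its own copy.
        List<Integer> doubled = sc.parallelize(Arrays.asList(1, 2, 3, 4), 2)
                .map(x -> {
                    counter.increment();
                    return x * 2;
                })
                .collect();

        sc.stop();
    }
}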
As far as I understand the documentation, this should work fine when the application runs on multiple worker nodes:
Accumulators are variables that are only “added” to through an associative and commutative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric types, and programmers can add support for new types.
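My reading of that paragraph is: each task adds to the accumulator, and once an action completes the driver sees the merged total. A minimal sketch of what I expect to work (the accumulator name and the data here are made up):

import java.util.Arrays;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;

public class AccumulatorExpectation {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "accumulator-expectation");

        // Named accumulator, registered on the driver.
        LongAccumulator acc = sc.sc().longAccumulator("total");

        // Each task adds to its own copy; Spark merges those copies back
        // into the driver-side accumulator as tasks finish.
        sc.parallelize(Arrays.asList(1, 2, 3, 4), 2)
                .foreach(x -> acc.add(1));

        // Reading on the driver after the action: I expect the merged total, 4.
        System.out.println(acc.value());

        sc.stop();
    }
}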
But here is the result when the counter is incremented on two different workers, and it looks like the state is not shared between the nodes:
INFO Counter: Incrementing counter with id: 866 on thread: Executor task launch worker-6
INFO Counter: Counter's value with id: 866 is: 1 on thread: Executor task launch worker-6
INFO Counter: Incrementing counter with id: 866 on thread: Executor task launch worker-0
INFO Counter: Counter's value with id: 866 is: 1 on thread: Executor task launch worker-0
Am I misunderstanding the accumulator concept, or is there a setting I must start the task with?