spill to disk and shuffle write are two different things.
spill to disk
- Data movement from host RAM to host disk - used when there is not enough RAM on your machine, so Spark writes part of its in-memory data to disk.
http://spark.apache.org/faq.html
Does my data need to fit in memory to use Spark?
No. Spark's operators spill data to disk if it does not fit in memory,
allowing it to run well on any sized data. Likewise, cached datasets
that do not fit in memory are either spilled to disk or recomputed on
the fly when needed, as determined by the RDD's storage level.
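To make the FAQ's point concrete, here is a minimal sketch of caching with a storage level that allows spilling (the app name, master, and data sizes are illustrative, not from the original answer):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object SpillSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spill-sketch")
      .master("local[2]")
      .getOrCreate()

    // A dataset that may be larger than the available executor memory.
    val rdd = spark.sparkContext.parallelize(1 to 10000000, numSlices = 100)

    // MEMORY_AND_DISK: partitions that do not fit in RAM are spilled to
    // local disk instead of being dropped.
    rdd.persist(StorageLevel.MEMORY_AND_DISK)

    // With MEMORY_ONLY (the default for RDD.cache()), evicted partitions
    // would instead be recomputed on the fly, as the FAQ describes.
    println(rdd.count())

    spark.stop()
  }
}
```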
shuffle write
- Data movement from executor(s) to other executor(s) - used when data needs to move between executors (e.g. due to a JOIN, groupBy, etc.)
An edge-case example which might help clarify the distinction:
- You have 10 executors
- Each executor with 100GB RAM
- Data size is 1280MB, and is partitioned into 10 partitions
- Each executor holds 128MB of data.
Assuming the data holds a single key, performing groupByKey will bring all the data into one partition. The shuffle write
will be 9 * 128MB (9 executors transfer their data to the last executor), and there won't be any spill to disk,
as the executor has 100GB of RAM and only ~1.25GB of data.
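A scaled-down sketch of that edge case that runs locally (the key name and record counts are made up; the point is that a single key forces all records into one partition after groupByKey):

```scala
import org.apache.spark.sql.SparkSession

object ShuffleSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("shuffle-sketch")
      .master("local[4]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Every record shares the same key, spread over 10 partitions
    // (standing in for the 10 executors in the example above).
    val data = sc.parallelize(1 to 1000, numSlices = 10).map(v => ("onlyKey", v))

    // groupByKey shuffles all records with the same key to a single
    // partition: 9 of the 10 partitions write their data out (the
    // "shuffle write") and one partition receives it all.
    val grouped = data.groupByKey()

    grouped.mapPartitionsWithIndex { (idx, iter) =>
      Iterator((idx, iter.map(_._2.size).sum)) // values per partition
    }.collect().foreach { case (idx, n) => println(s"partition $idx: $n values") }

    spark.stop()
  }
}
```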
Regarding AppendOnlyMap :
As written in the AppendOnlyMap
code (see above), this class is
a low-level implementation of a simple open hash table optimized for
the append-only use case, where keys are never removed, but the value
for each key may be changed.
The fact that two different modules use the same low-level class doesn't mean that they are related at a high level.
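For illustration only, here is a toy open hash table in the spirit of that description - a simplified sketch of the append-only idea, not Spark's actual AppendOnlyMap (which also handles growth, null keys, and destructive in-place sorting). A shuffle aggregator can call changeValue once per incoming record to build per-key aggregates in place:

```scala
// Toy open-addressing hash map supporting only put/changeValue/lookup,
// never removal - because slots are never emptied, linear probing
// stays valid without tombstones.
class ToyAppendOnlyMap[K, V](capacity: Int = 64) {
  private val keys = new Array[Any](capacity)
  private val values = new Array[Any](capacity)

  private def slot(key: K): Int = {
    var pos = (key.hashCode() & 0x7fffffff) % capacity
    var probed = 0
    // Linear probing: walk forward until we find the key or an empty slot.
    while (keys(pos) != null && keys(pos) != key) {
      pos = (pos + 1) % capacity
      probed += 1
      require(probed < capacity, "table full (this toy does not grow)")
    }
    pos
  }

  // Insert or overwrite; keys are never removed.
  def update(key: K, value: V): Unit = {
    val pos = slot(key)
    keys(pos) = key
    values(pos) = value
  }

  // Set a default for a new key, or transform the existing value -
  // the kind of operation shuffle-side aggregation relies on.
  def changeValue(key: K, default: => V)(f: V => V): V = {
    val pos = slot(key)
    val newValue =
      if (keys(pos) == null) default
      else f(values(pos).asInstanceOf[V])
    keys(pos) = key
    values(pos) = newValue
    newValue
  }

  def apply(key: K): Option[V] = {
    val pos = slot(key)
    if (keys(pos) == null) None else Some(values(pos).asInstanceOf[V])
  }
}
```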