java - What is the principle when Spark deals with data bigger than memory capacity? -


As I understand it, Spark caches data in memory and computes on it in memory. But what happens if the data is bigger than memory? I read the source code, but I couldn't find which class schedules the job in that case. Can someone explain the principle of how Spark handles this?

om-nom-nom gave the answer, but for some reason posted it as a comment, so I thought I'd post it as an actual answer:

https://spark.apache.org/docs/latest/scala-programming-guide.html#rdd-persistence
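The gist of the linked page: what Spark does when an RDD doesn't fit in memory depends on its storage level. With the default MEMORY_ONLY level, partitions that don't fit are simply not cached and are recomputed from the lineage each time they're needed; with MEMORY_AND_DISK, partitions that don't fit are spilled to local disk and read back from there. Below is a minimal Scala sketch of the second option, assuming a local run; the app name and input path are placeholders, not anything from the original question.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object PersistenceExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("PersistenceExample").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // Placeholder input; in practice this would be a large file on HDFS, S3, etc.
        val lines = sc.textFile("hdfs:///data/large-input.txt")

        // MEMORY_AND_DISK: partitions that don't fit in memory are spilled to
        // disk and re-read from there when needed, instead of being recomputed.
        val lengths = lines.map(_.length).persist(StorageLevel.MEMORY_AND_DISK)

        // Two actions over the same RDD: the second reuses cached/spilled
        // partitions rather than re-reading and re-mapping the input.
        println(lengths.count())
        println(lengths.sum())

        sc.stop()
      }
    }

Note that data is also processed one partition at a time, so Spark never needs the whole dataset in memory at once; the storage level only controls what happens to partitions you ask it to keep around.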
