java - What is the principle? How does Spark deal with data bigger than memory capacity?


As we know, Spark caches data in memory and computes on it there. But what happens if the data is bigger than memory? I read the source code but couldn't find which class schedules the job in that case. Can someone explain the principle of how Spark handles this?

om-nom-nom gave the answer in a comment; for the record, here it is posted as an actual answer:

https://spark.apache.org/docs/latest/scala-programming-guide.html#rdd-persistence
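In short, the behavior is controlled by the RDD's storage level. With the default MEMORY_ONLY level, partitions that don't fit in memory are simply dropped and recomputed from their lineage when they are needed again; with MEMORY_AND_DISK, partitions that don't fit are spilled to local disk instead of being recomputed. A minimal sketch of both options (the input path is hypothetical):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object PersistenceExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("PersistenceExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical input path; point this at a real dataset.
    val lines = sc.textFile("hdfs:///data/big-input.txt")

    // MEMORY_ONLY (the default used by cache()): partitions that do not
    // fit in memory are dropped and recomputed from lineage on access.
    val cached = lines.map(_.toUpperCase).persist(StorageLevel.MEMORY_ONLY)

    // MEMORY_AND_DISK: partitions that do not fit in memory are spilled
    // to local disk and read back from there instead of being recomputed.
    val spilled = lines.map(_.length).persist(StorageLevel.MEMORY_AND_DISK)

    println(cached.count())
    println(spilled.sum())

    sc.stop()
  }
}
```

Either way, a dataset larger than the cluster's memory is still processed; the trade-off is recomputation cost (MEMORY_ONLY) versus disk I/O (MEMORY_AND_DISK). See the linked RDD persistence docs for the full list of storage levels.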

