When you restart the cluster, the Spark application is initialized again from scratch, and all cache on the cluster is wiped.
You can see this in the cluster driver logs after a restart: Spark initializes, loads all libraries, and connects to the metastore and DBFS.
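To make the cache point concrete, here is a minimal PySpark sketch (the table name is a hypothetical placeholder): anything you cached before the restart has to be rebuilt in the new Spark application.

```python
# Minimal sketch: cached data does not survive a cluster restart.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # on Databricks this returns the existing session

df = spark.read.table("sales_raw")           # hypothetical table name
df.cache()                                   # held in the cluster's executor memory/disk
df.count()                                   # action that materializes the cache

# After a restart the new Spark application starts with an empty cache,
# so the cache()/count() pair must run again before queries are fast.
```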
One thing a quick restart (with a gap of no more than ~5 minutes) does not do is deprovision the underlying VM instances hosting the application. If you think a VM is in a bad state, terminate the cluster, wait about 5 minutes, and start it again. (This does not apply to clusters backed by a pool, since pools keep their VMs even after the cluster is terminated.) A rough sketch of that terminate-wait-start sequence is shown below.
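The sketch below automates the terminate-wait-start cycle with the standard Databricks Clusters REST API (`/api/2.0/clusters/delete` terminates, `/api/2.0/clusters/start` starts). The host, token, and cluster ID are placeholders you would fill in; the 5-minute sleep mirrors the gap described above and is an assumption, not a documented requirement.

```python
# Sketch: terminate a cluster, wait ~5 minutes, then start it on fresh VMs.
import time
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"   # placeholder
TOKEN = "<personal-access-token>"                         # placeholder
CLUSTER_ID = "<cluster-id>"                               # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Terminate the cluster so the cloud VMs are released (not the case for pool-backed clusters).
requests.post(f"{HOST}/api/2.0/clusters/delete",
              headers=HEADERS, json={"cluster_id": CLUSTER_ID}).raise_for_status()

# Give the cloud provider time to actually deprovision the instances.
time.sleep(5 * 60)

# Start the cluster again; it comes up on freshly provisioned VMs.
requests.post(f"{HOST}/api/2.0/clusters/start",
              headers=HEADERS, json={"cluster_id": CLUSTER_ID}).raise_for_status()
```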