Will writing to a Delta table from a Spark Structured Streaming job create a version for every micro-batch of data written?
From the docs:
"As you write into a Delta table or directory, every operation is automatically versioned."
So yes, you are correct: each micro-batch write is committed to the Delta transaction log as a new table version.
You can then read an earlier snapshot of the table (time travel) in two ways:
Using a timestamp
Using a version number
Reference: https://databricks.com/blog/2019/02/04/introducing-delta-time-travel-for-large-scale-data-lakes.html