Because of the way Spark parallelizes writes, each task names its own part file, so you can't choose the output file name at write time. What you can do is rename the file right after the write, using the Hadoop FileSystem API that PySpark exposes through the py4j gateway:
# Get the Hadoop FileSystem classes through the py4j gateway
# (sc is the active SparkContext)
URI = sc._gateway.jvm.java.net.URI
Path = sc._gateway.jvm.org.apache.hadoop.fs.Path
FileSystem = sc._gateway.jvm.org.apache.hadoop.fs.FileSystem

fs = FileSystem.get(URI("s3://{bucket_name}"), sc._jsc.hadoopConfiguration())
file_path = "s3://{bucket_name}/processed/source={source_name}/year={partition_year}/week={partition_week}/"

# coalesce(1) forces a single part file; "compression" (not "codec")
# is the option the JSON writer reads for gzip output
df.coalesce(1).write.format("json") \
    .mode("overwrite") \
    .option("compression", "gzip") \
    .save(file_path)

# rename the single part file Spark created
created_file_path = fs.globStatus(Path(file_path + "part*.gz"))[0].getPath()
fs.rename(
    created_file_path,
    Path(file_path + "{desired_name}.jl.gz"))
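If you need this in more than one place, it can be worth folding the write-and-rename into a small helper that also checks that the glob actually matched and that the rename succeeded (Hadoop's rename() returns False rather than raising when the target already exists). A minimal sketch, assuming the same sc and df as above; the function name write_single_json is mine:

def write_single_json(df, sc, dir_path, file_name):
    """Write df as one gzipped JSON file called file_name inside dir_path."""
    jvm = sc._gateway.jvm
    Path = jvm.org.apache.hadoop.fs.Path
    fs = jvm.org.apache.hadoop.fs.FileSystem.get(
        jvm.java.net.URI(dir_path), sc._jsc.hadoopConfiguration())

    df.coalesce(1).write.format("json") \
        .mode("overwrite") \
        .option("compression", "gzip") \
        .save(dir_path)

    matches = fs.globStatus(Path(dir_path + "part*.gz"))
    if not matches:
        raise RuntimeError("no part file found under " + dir_path)

    target = Path(dir_path + file_name)
    if fs.exists(target):
        fs.delete(target, True)  # clear a stale target so rename can succeed
    if not fs.rename(matches[0].getPath(), target):
        raise RuntimeError("rename failed for " + dir_path + file_name)

Called as write_single_json(df, sc, file_path, "{desired_name}.jl.gz"), it produces the same result as the snippet above, but fails loudly instead of raising an opaque IndexError when no part file was written.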