I'm going to make a video from a series of screenshots (.png files). Each screenshot has an associated timestamp recording when it was taken. The time intervals between screenshots vary, and it is highly desirable to preserve those differences in the video.
Is there a way to use a single ffmpeg command/API, feeding it a sequence of images with time (or frame) offsets, and getting one video file as output?
For now I have to generate a short video file of custom length for each image and then merge them manually:
ffmpeg -y -loop 1 -i image1.png -c:v libx264 -t 1.52 video1.avi
ffmpeg -y -loop 1 -i image2.png -c:v libx264 -t 2.28 video2.avi
...
ffmpeg -y -loop 1 -i imageN.png -c:v libx264 -t 1.04 videoN.avi
ffmpeg -i "concat:video1.avi|video2.avi|...videoN.avi" -c copy output.avi
This works reasonably well while the intervals are large, but the whole approach seems a bit fragile to me.
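For reference, the manual workflow above can at least be automated. The sketch below uses a hypothetical helper function, `emit_commands`, which takes parallel lists of image names and durations (the example file names and durations are placeholders) and prints the same per-image ffmpeg commands and the final concat command, rather than executing them:

```shell
#!/bin/sh
# emit_commands IMAGES DURATIONS
# Given a space-separated list of images and a matching list of durations,
# print one "ffmpeg -loop 1 ... -t DUR" command per image, followed by the
# "concat:" merge command. Dry run: commands are printed, not executed.
emit_commands() {
    images=$1
    durations=$2
    i=1
    concat_list=""
    set -- $durations          # positional params now hold the durations
    for img in $images; do
        dur=$1; shift          # take the duration matching this image
        echo "ffmpeg -y -loop 1 -i $img -c:v libx264 -t $dur video$i.avi"
        # Append to the pipe-separated list for the concat protocol.
        concat_list="${concat_list:+$concat_list|}video$i.avi"
        i=$((i + 1))
    done
    echo "ffmpeg -i \"concat:$concat_list\" -c copy output.avi"
}

emit_commands "image1.png image2.png image3.png" "1.52 2.28 1.04"
```

Piping the output to `sh` would run the commands; keeping it as a dry run makes the generated sequence easy to inspect first.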