The issue you are running into is an artifact of the output image format you have requested. JPEG encoding imposes a large stall on the camera pipeline, so there is significant downtime between the end of one exposure and the start of the next while the encoding takes place.
The quoted 30fps rate can be achieved by setting the output image format on the ImageReader to YUV (YUV_420_888), since that is a more "native" output for the camera. Store the images as they are captured in that format, and then do the JPEG encoding afterwards yourself, separate from the camera's inline processing.
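As a rough sketch of what that setup might look like (assuming you already have an open CameraDevice `camera`, a background Handler `backgroundHandler`, and the capture Size `size` you want; `queueForJpegEncoding` is a hypothetical helper that compresses the frame on a worker thread and closes the Image when done):

```java
ImageReader reader = ImageReader.newInstance(
        size.getWidth(), size.getHeight(),
        ImageFormat.YUV_420_888, /* maxImages */ 4);

reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireNextImage();
    if (image != null) {
        // Hand the YUV buffer to a worker thread for JPEG compression so
        // the camera pipeline never waits on the encoder.
        queueForJpegEncoding(image); // hypothetical helper; must close() the Image
    }
}, backgroundHandler);

camera.createCaptureSession(Collections.singletonList(reader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                try {
                    // TEMPLATE_RECORD favors a steady frame rate over
                    // per-frame quality tuning.
                    CaptureRequest.Builder builder =
                            camera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                    builder.addTarget(reader.getSurface());
                    session.setRepeatingRequest(builder.build(), null, backgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                // Handle configuration failure as appropriate for your app.
            }
        }, backgroundHandler);
```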
For example, on the Nexus 5 the output stall time for JPEG encoding is 243ms, which is what you have been observing. For YUV_420_888 output, it is 0ms. Likewise, because of their large size, RAW_SENSOR outputs introduce a stall time of 200ms.
Note also that even if you remove the stall-time obstacle by choosing a "faster" format, there is still a minimum frame time that depends on the output image size. For the Nexus 5's full-resolution output, though, this is 33ms, which is the ~30fps you were expecting.
The relevant information is in the camera metadata's StreamConfigurationMap object. Check out the getOutputStallDuration(int format, Size size) and getOutputMinFrameDuration(int format, Size size) methods for confirmation.
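Here is a short sketch of how you might confirm those numbers on your own device (assuming `characteristics` is the CameraCharacteristics for the camera you opened; both methods report durations in nanoseconds):

```java
StreamConfigurationMap configs = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

// Taking the first listed size for each format for brevity; query the
// size you actually intend to stream.
Size jpegSize = configs.getOutputSizes(ImageFormat.JPEG)[0];
Size yuvSize = configs.getOutputSizes(ImageFormat.YUV_420_888)[0];

long jpegStallNs = configs.getOutputStallDuration(ImageFormat.JPEG, jpegSize);
long yuvStallNs = configs.getOutputStallDuration(ImageFormat.YUV_420_888, yuvSize);
long minFrameNs = configs.getOutputMinFrameDuration(ImageFormat.YUV_420_888, yuvSize);

Log.d("CameraFormats", "JPEG stall: " + jpegStallNs / 1_000_000 + " ms, "
        + "YUV stall: " + yuvStallNs / 1_000_000 + " ms, "
        + "min frame: " + minFrameNs / 1_000_000 + " ms");
```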