I'm using a VTDecompressionSession to decode an H.264 stream received over the network. I need to copy the YUV buffer out of the image buffer I'm given. I've verified that the typeID of the given imageBuffer equals CVPixelBufferGetTypeID().
But whenever I try to retrieve the base address of the buffer, or of any of its planes, it always returns NULL. The OSStatus iOS passes in is 0, so my assumption is that nothing is wrong there. Maybe I just don't know how to extract the data. Can anyone help?
void decompressionCallback(void * CM_NULLABLE decompressionOutputRefCon,
                           void * CM_NULLABLE sourceFrameRefCon,
                           OSStatus status,
                           VTDecodeInfoFlags infoFlags,
                           CM_NULLABLE CVImageBufferRef imageBuffer,
                           CMTime presentationTimeStamp,
                           CMTime presentationDuration)
{
    CFShow(imageBuffer);
    size_t dataSize = CVPixelBufferGetDataSize(imageBuffer);
    void * decodedBuffer = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(pYUVBuffer, decodedBuffer, dataSize);
}
Edit: here is also a dump of the CVImageBufferRef object. One thing that looks suspicious is that I would expect three planes (Y, U, and V), but there are only two planes. My expectation was to use CVPixelBufferGetBaseAddressOfPlane to extract each plane of data. I'm implementing this to remove a dependency on a separate software codec, so I need to extract each plane this way because the rest of my rendering pipeline requires it.
{type = immutable dict, count = 5, entries =>
 0 : {contents = "PixelFormatDescription"} = {type = immutable dict, count = 10, entries =>
  0 : {contents = "Planes"} = {type = mutable-small, count = 2, values = (
   0 : {type = mutable dict, count = 3, entries =>
    0 : {contents = "FillExtendedPixelsCallback"} = {length = 24, capacity = 24, bytes = 0x000000000000000030139783010000000000000000000000}
    1 : {contents = "BitsPerBlock"} = {value = +8, type = kCFNumberSInt32Type}
    2 : {contents = "BlackBlock"} = {length = 1, capacity = 1, bytes = 0x10} }
   1 : {type = mutable dict, count = 5, entries =>
    2 : {contents = "HorizontalSubsampling"} = {value = +2, type = kCFNumberSInt32Type}
    3 : {contents = "BlackBlock"} = {length = 2, capacity = 2, bytes = 0x8080}
    4 : {contents = "BitsPerBlock"} = {value = +16, type = kCFNumberSInt32Type}
    5 : {contents = "VerticalSubsampling"} = {value = +2, type = kCFNumberSInt32Type}
    6 : {contents = "FillExtendedPixelsCallback"} = {length = 24, capacity = 24, bytes = 0x0000000000000000ac119783010000000000000000000000} }
  )}
  2 : {contents = "IOSurfaceOpenGLESFBOCompatibility"} = {value = true}
  3 : {contents = "ContainsYCbCr"} = {value = true}
  4 : {contents = "IOSurfaceOpenGLESTextureCompatibility"} = {value = true}
  5 : {contents = "ComponentRange"} = {contents = "VideoRange"}
  6 : {contents = "PixelFormat"} = {value = +875704438, type = kCFNumberSInt32Type}
  7 : {contents = "IOSurfaceCoreAnimationCompatibility"} = {value = true}
  9 : {contents = "ContainsAlpha"} = {value = false}
  10 : {contents = "ContainsRGB"} = {value = false}
  11 : {contents = "OpenGLESCompatibility"} = {value = true} }
 2 : {contents = "ExtendedPixelsRight"} = {value = +8, type = kCFNumberSInt32Type}
 3 : {contents = "ExtendedPixelsTop"} = {value = +0, type = kCFNumberSInt32Type}
 4 : {contents = "ExtendedPixelsLeft"} = {value = +0, type = kCFNumberSInt32Type}
 5 : {contents = "ExtendedPixelsBottom"} = {value = +0, type = kCFNumberSInt32Type} }
propagatedAttachments={type = mutable dict, count = 7, entries =>
 0 : {contents = "CVImageBufferChromaLocationTopField"} = Left
 1 : {contents = "CVImageBufferYCbCrMatrix"} = {contents = "ITU_R_601_4"}
 2 : {contents = "ColorInfoGuessedBy"} = {contents = "VideoToolbox"}
 5 : {contents = "CVImageBufferColorPrimaries"} = SMPTE_C
 8 : {contents = "CVImageBufferTransferFunction"} = {contents = "ITU_R_709_2"}
 10 : {contents = "CVImageBufferChromaLocationBottomField"} = Left
 12 : {contents = "CVFieldCount"} = {value = +1, type = kCFNumberSInt32Type} }
nonPropagatedAttachments={type = mutable dict, count = 0, entries => }
Best Answer
So your format is kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', and two planes make sense for 4:2:0 YUV data: the first plane is a full-size, single-channel Y bitmap, and the second is a half-width, half-height, two-channel CbCr bitmap.
You're right that for planar data you should call CVPixelBufferGetBaseAddressOfPlane, although you should also be able to use CVPixelBufferGetBaseAddress and interpret its result as a CVPlanarPixelBufferInfo_YCbCrBiPlanar. So the problem is probably that you're not calling CVPixelBufferLockBaseAddress before CVPixelBufferGetBaseAddress* and CVPixelBufferUnlockBaseAddress afterwards.
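A minimal sketch of that fix, assuming a '420v' buffer and that pYUVBuffer (from the question) is already allocated large enough to hold both planes tightly packed; copyPlanes is a hypothetical helper name, not part of any Apple API:

```c
#include <CoreVideo/CoreVideo.h>
#include <string.h>

static void copyPlanes(CVImageBufferRef imageBuffer, uint8_t *pYUVBuffer)
{
    // Base addresses are only valid between Lock and Unlock; without the
    // lock, CVPixelBufferGetBaseAddress* returns NULL, as in the question.
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t *dst = pYUVBuffer;
    size_t planeCount = CVPixelBufferGetPlaneCount(imageBuffer);
    for (size_t i = 0; i < planeCount; i++) {
        const uint8_t *src =
            CVPixelBufferGetBaseAddressOfPlane(imageBuffer, i);
        size_t stride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, i);
        size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, i);

        // For '420v': plane 0 is 1-byte Y samples, plane 1 is 2-byte
        // interleaved CbCr pairs. Bytes-per-row can include padding, so
        // copy row by row rather than one memcpy of stride * height.
        size_t rowBytes =
            CVPixelBufferGetWidthOfPlane(imageBuffer, i) * (i == 0 ? 1 : 2);
        for (size_t row = 0; row < height; row++) {
            memcpy(dst, src + row * stride, rowBytes);
            dst += rowBytes;
        }
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
}
```

Note that kCVPixelBufferLock_ReadOnly is appropriate here since the callback only reads the decoded frame; locking read-only lets CoreVideo skip flushing caches on unlock.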
You can display the two YUV planes efficiently with Metal or OpenGL by writing some fun YUV -> RGB shader code.
Regarding "ios - CVPixelBufferGetBaseAddress returns null in the VTDecompressionSessionDecodeFrame callback", a similar question was found on Stack Overflow:
https://stackoverflow.com/questions/37887639/