I want to convert a YUV420P image (received from an H.264 stream) to RGB, while also resizing it, using sws_scale.

The size of the original image is 480 × 800. Converting with the same dimensions works fine, but when I try to change the dimensions I get a distorted image, with the following pattern:
- changing to 481 × 800 yields a distorted black-and-white image that looks as if it's cut in the middle
- 482 × 800 is even more distorted
- 483 × 800 is distorted, but in color
- 484 × 800 is OK (scaled correctly)

The pattern continues from there: scaling works correctly only when the difference between the source and destination widths is divisible by 4.
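One observation that may or may not be relevant: for packed 24-bit RGB, a row is width × 3 bytes, and the widths that fail are exactly the ones whose row length in bytes is not a multiple of 4. The check below is just that arithmetic (no FFmpeg calls), shown for completeness:

#include <stdio.h>

/* Row length in bytes for RGB24 at the widths tested above.
   480 and 484 give rows that are a multiple of 4 bytes; 481-483 do not. */
int main(void)
{
    int widths[] = { 480, 481, 482, 483, 484 };
    for (int i = 0; i < 5; i++) {
        int row_bytes = widths[i] * 3;
        printf("width %d -> %d bytes per row (mod 4 = %d)\n",
               widths[i], row_bytes, row_bytes % 4);
    }
    return 0;
}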
Here is a sample of the code I use to decode and convert the image. All of the calls report success.
int srcX = 480;
int srcY = 800;
int dstX = 481; // or 482, 483, etc.
int dstY = 800;

// Point avFrameYUV at the decoded_yuv_frame buffer (srcX x srcY, YUV420P).
AVFrame *avFrameYUV = avcodec_alloc_frame();
avpicture_fill((AVPicture *)avFrameYUV, decoded_yuv_frame, PIX_FMT_YUV420P, srcX, srcY);

AVFrame *avFrameRGB = avcodec_alloc_frame();

// Decode the raw H.264 data into avFrameYUV.
AVPacket avPacket;
av_init_packet(&avPacket);
avPacket.size = read;     // size of the raw data
avPacket.data = raw_data; // raw data before decoding to YUV

int frame_decoded = 0;
int decoded_length = avcodec_decode_video2(g_avCodecContext, avFrameYUV, &frame_decoded, &avPacket);

// Convert (and scale) the decoded frame to packed 24-bit RGB at dstX x dstY.
int size = dstX * dstY * 3;
struct SwsContext *img_convert_ctx = sws_getContext(srcX, srcY, SOURCE_FORMAT,
                                                    dstX, dstY, PIX_FMT_BGR24,
                                                    SWS_BICUBIC, NULL, NULL, NULL);
avpicture_fill((AVPicture *)avFrameRGB, rgb_frame, PIX_FMT_RGB24, dstX, dstY);
sws_scale(img_convert_ctx, avFrameYUV->data, avFrameYUV->linesize, 0, srcY,
          avFrameRGB->data, avFrameRGB->linesize);

// Draw the resulting frame with Windows BitBlt.
DrawBitmap(hdc, dstX, dstY, rgb_frame, size);

sws_freeContext(img_convert_ctx);
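The allocation of decoded_yuv_frame and rgb_frame isn't shown above. For completeness, a minimal way to size buffers for these formats with the same (old) avpicture API would be something like the sketch below (not necessarily how my buffers are actually allocated):

// Sketch only: size the buffers with avpicture_get_size for the formats used above.
int yuv_size = avpicture_get_size(PIX_FMT_YUV420P, srcX, srcY); // 480 x 800 planar YUV
uint8_t *decoded_yuv_frame = (uint8_t *)av_malloc(yuv_size);

int rgb_size = avpicture_get_size(PIX_FMT_RGB24, dstX, dstY);   // dstX x dstY packed RGB
uint8_t *rgb_frame = (uint8_t *)av_malloc(rgb_size);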