I am building a screen recorder. I am using the Windows DXGI API to capture screenshots, and I am encoding the screenshots into a video using libx264. Feeding BGRA images directly to libx264 produces weird colors in the output video, so to get correct colors I am trying to convert the BGRA frames to YUV420p. To speed up encoding, I am also trying to downscale the BGRA image.
So I am getting a 1920x1080 BGRA image and I want to convert it to 1280x720 YUV420p. For this, I am using the FFmpeg swscale library to do both the format conversion and the downscaling.
The problem is that the output video comes out looking like 3 images squeezed into the same frame. Please see this video: https://imgur.com/a/EYimjrJ
I tried BGRA to YUV conversion alone, without any downscaling, and it works fine. But BGRA to YUV with downscaling gives this problem.
What is the cause of this problem, and how do I fix it?
Here is my code snippet:
uint8_t* Image;
x264_picture_t picIn, picOut;
x264_picture_alloc(&picIn, X264_CSP_I420, 1280, 720);
SwsContext* sws = sws_getContext(1920, 1080, AV_PIX_FMT_BGRA, 1280, 720, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
while (true)
{
take_screenshot(&Image);
AVFrame BGRA;
BGRA.linesize[0] = 1280 * 4;
BGRA.data[0] = Image;
sws_scale(sws, BGRA.data, BGRA.linesize, 0, 1080, picIn.img.plane, picIn.img.i_stride);
nal_size = x264_encoder_encode(h, &nals, &nal_count, &picIn, &picOut);
save_to_flv(nals, nal_size, nal_count);
}
Here are my libx264 parameters:
x264_param_default_preset(&param, preset, 0);
param.i_csp = X264_CSP_I420;
param.i_width = 1280;
param.i_height = 720;
param.i_fps_num = 30;
param.i_fps_den = 1;
param.rc.i_bitrate = 2500;
param.i_bframe = 0;
param.b_repeat_headers = 0;
param.b_annexb = 1;
x264_param_apply_profile(&param, 0);
h = x264_encoder_open(&param);
Change BGRA.linesize[0] to 1920*4. You see this 3-images pattern because
1280*3 == 1920*2
With a stride of 1280*4 bytes, every 3 rows that swscale reads span exactly 2 real 1920-pixel rows of the source buffer, so the picture gets split and repeated across the frame.
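For reference, a minimal sketch of the corrected loop body (only the stride line changes; all names are taken from the question):

take_screenshot(&Image);
AVFrame BGRA;
BGRA.linesize[0] = 1920 * 4;   // bytes per SOURCE row: 1920 BGRA pixels * 4 bytes each
BGRA.data[0] = Image;
// the destination planes and strides still come from the 1280x720 picIn, as before
sws_scale(sws, BGRA.data, BGRA.linesize, 0, 1080, picIn.img.plane, picIn.img.i_stride);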
I need to convert YUV to RGB. I also need the RGB values to be in the limited range (16-235).
I am trying to use the sws_scale function for this task.
You can see my code below. But after the conversion, the black pixel I get is (0, 0, 0) instead of (16, 16, 16).
Maybe there is an option to tell the sws_scale function to compute the limited range.
AVFrame* frameRGB = avFrameConvertPixelFormat(_decodedBuffer[i].pAVFrame, AV_PIX_FMT_RGB24);
AVFrame* Decoder::avFrameConvertPixelFormat(const AVFrame* src, AVPixelFormat dstFormat) {
int width = src->width;
int height = src->height;
AVFrame* dst = allocPicture(dstFormat, width, height);
SwsContext* conversion = sws_getContext(width,
height,
(AVPixelFormat)src->format,
width,
height,
dstFormat,
SWS_FAST_BILINEAR,
NULL,
NULL,
NULL);
sws_scale(conversion, src->data, src->linesize, 0, height, dst->data, dst->linesize);
sws_freeContext(conversion);
dst->format = dstFormat;
dst->width = src->width;
dst->height = src->height;
return dst;
}
I also tried converting a YUV pixel to an RGB pixel manually with a formula, and I got the correct result: from YUV (16, 128, 128) I got (16, 16, 16) in RGB.
cmpR = y + 1.402 * (v - 128);
cmpG = y - 0.3441 * (u - 128) - 0.7141 * (v - 128);
cmpB = y + 1.772 * (u - 128);
You may set the source format to a "full scale" (full range) YUVJ format.
As far as I know, sws_scale has no option for selecting Studio RGB as output format.
Changing the input format is the best solution I can think of.
The color conversion formula of "JPEG: YUV -> RGB" is the same as the formula in your post.
Examples for setting the source format:
If src->format is PIX_FMT_YUV420P, set the format to PIX_FMT_YUVJ420P.
If src->format is PIX_FMT_YUV422P, set the format to PIX_FMT_YUVJ422P.
If src->format is PIX_FMT_YUV444P, set the format to PIX_FMT_YUVJ444P.
If PIX_FMT_YUV440P, use PIX_FMT_YUVJ440P.
I know the solution does not cover all the possibilities, and there might be some output pixels exceeding the range [16, 235], so it's not the most general solution...
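For illustration, a minimal sketch of that remapping, using the modern AV_PIX_FMT_* names (formats not listed are passed through unchanged):

static enum AVPixelFormat to_full_range(enum AVPixelFormat fmt)
{
    // Map limited-range planar YUV formats to their full-range (JPEG) counterparts.
    switch (fmt) {
    case AV_PIX_FMT_YUV420P: return AV_PIX_FMT_YUVJ420P;
    case AV_PIX_FMT_YUV422P: return AV_PIX_FMT_YUVJ422P;
    case AV_PIX_FMT_YUV444P: return AV_PIX_FMT_YUVJ444P;
    case AV_PIX_FMT_YUV440P: return AV_PIX_FMT_YUVJ440P;
    default: return fmt;
    }
}

Then pass to_full_range((enum AVPixelFormat)src->format) as the source format argument of sws_getContext.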
YUV to RGB conversion using FFmpeg: a lot of information has already been given for this above. However, for code completeness, I am re-sharing the code with the missing allocPicture() function and the headers and library to include; it works for me like a charm. Thanks to #Валентин Никин & #Rotem for most of the info & code.
Headers:
#include <stdio.h>
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
Link FFmpeg libraries:
libswscale, libavutil
static AVFrame* allocPicture(enum AVPixelFormat pix_fmt, int width, int height)
{
// Allocate a frame
AVFrame* frame = av_frame_alloc();
if (frame == NULL)
{
fprintf(stderr, "av_frame_alloc failed");
return NULL;
}
if (av_image_alloc(frame->data, frame->linesize, width, height, pix_fmt, 1) < 0)
{
fprintf(stderr, "av_image_alloc failed");
}
frame->width = width;
frame->height = height;
frame->format = pix_fmt;
return frame;
}
static AVFrame* avFrameConvertPixelFormat(const AVFrame* src, enum AVPixelFormat dstFormat)
{
int width = src->width;
int height = src->height;
AVFrame* dst = allocPicture(dstFormat, width, height);
struct SwsContext* conversion = sws_getContext(width,
height,
(enum AVPixelFormat)src->format,
width,
height,
dstFormat,
SWS_FAST_BILINEAR | SWS_FULL_CHR_H_INT | SWS_ACCURATE_RND,
NULL,
NULL,
NULL);
sws_scale(conversion, src->data, src->linesize, 0, height, dst->data, dst->linesize);
sws_freeContext(conversion);
dst->format = dstFormat;
dst->width = src->width;
dst->height = src->height;
return dst;
}
// convert yuv420p10le to rgb24 (or any other RGB format); "frame" here is the already-decoded source frame
AVFrame* frameRGB = avFrameConvertPixelFormat(frame, AV_PIX_FMT_RGB24);
I have been following a tutorial on how to use ffmpeg and SDL to make a simple video player with no audio (yet). While working through the tutorial I realized it was out of date and many of the functions it used, for both ffmpeg and SDL, were deprecated. So I searched for an up-to-date solution and found a Stack Overflow answer that completed what the tutorial was missing.
However, it uses YUV420, which is of lower quality. I want to implement YUV444, and after studying chroma subsampling for a bit and looking at the different YUV formats, I am confused as to how to implement it. From what I understand, YUV420 carries a quarter of the chroma information that YUV444 does. YUV444 means every pixel has its own chroma sample and as such is more detailed, while YUV420 means pixels are grouped together and share a chroma sample and is therefore less detailed.
And from what I understand, the different YUV formats (420, 422, 444) also differ in the way they order and group Y, U, and V. All of this is a bit overwhelming because I haven't done much with codecs, conversions, etc. Any help would be much appreciated, and if additional info is needed please let me know before downvoting.
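To make the size difference concrete, here is my rough byte-count sketch for 8-bit planar frames (my own back-of-the-envelope numbers, not from the tutorial):

// For a w x h frame with 8-bit samples:
// YUV420P: Y plane = w*h bytes, U and V planes = (w/2)*(h/2) bytes each -> 1.5 bytes/pixel
// YUV444P: Y, U and V planes = w*h bytes each                           -> 3 bytes/pixel
int w = 1920, h = 1080;
int y_size     = w * h;             // same in both formats
int uv420_size = (w / 2) * (h / 2); // per chroma plane in 4:2:0
int uv444_size = w * h;             // per chroma plane in 4:4:4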
Here is the code from the answer I mentioned concerning the conversion to YUV420:
texture = SDL_CreateTexture(
renderer,
SDL_PIXELFORMAT_YV12,
SDL_TEXTUREACCESS_STREAMING,
pCodecCtx->width,
pCodecCtx->height
);
if (!texture) {
fprintf(stderr, "SDL: could not create texture - exiting\n");
exit(1);
}
// initialize SWS context for software scaling
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
AV_PIX_FMT_YUV420P,
SWS_BILINEAR,
NULL,
NULL,
NULL);
// set up YV12 pixel array (12 bits per pixel)
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
exit(1);
}
uvPitch = pCodecCtx->width / 2;
while (av_read_frame(pFormatCtx, &packet) >= 0) {
// Is this a packet from the video stream?
if (packet.stream_index == videoStream) {
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
// Did we get a video frame?
if (frameFinished) {
AVPicture pict;
pict.data[0] = yPlane;
pict.data[1] = uPlane;
pict.data[2] = vPlane;
pict.linesize[0] = pCodecCtx->width;
pict.linesize[1] = uvPitch;
pict.linesize[2] = uvPitch;
// Convert the image into YUV format that SDL uses
sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
pFrame->linesize, 0, pCodecCtx->height, pict.data,
pict.linesize);
SDL_UpdateYUVTexture(
texture,
NULL,
yPlane,
pCodecCtx->width,
uPlane,
uvPitch,
vPlane,
uvPitch
);
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);
}
}
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
SDL_PollEvent(&event);
switch (event.type) {
case SDL_QUIT:
SDL_DestroyTexture(texture);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(screen);
SDL_Quit();
exit(0);
break;
default:
break;
}
}
// Free the YUV frame
av_frame_free(&pFrame);
free(yPlane);
free(uPlane);
free(vPlane);
// Close the codec
avcodec_close(pCodecCtx);
avcodec_close(pCodecCtxOrig);
// Close the video file
avformat_close_input(&pFormatCtx);
EDIT:
After more research I learned that YUV420 is stored with all the Y bytes first, then the U and V bytes one after another, as illustrated by this image:
(source: wikimedia.org)
However, I also learned that YUV444 is stored in the repeating order U, Y, V, as this picture shows:
I tried changing some things around in code:
// I changed SDL_PIXELFORMAT_YV12 to SDL_PIXELFORMAT_UYVY
// to reflect the order of YUV444
texture = SDL_CreateTexture(
renderer,
SDL_PIXELFORMAT_UYVY,
SDL_TEXTUREACCESS_STREAMING,
pCodecCtx->width,
pCodecCtx->height
);
if (!texture) {
fprintf(stderr, "SDL: could not create texture - exiting\n");
exit(1);
}
// Changed AV_PIX_FMT_YUV420P to AV_PIX_FMT_YUV444P
// for rather obvious reasons
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
AV_PIX_FMT_YUV444P,
SWS_BILINEAR,
NULL,
NULL,
NULL);
// There are as many Y, U and V bytes as pixels, so I just
// made yPlaneSz and uvPlaneSz equal to the number of pixels
yPlaneSz = pCodecCtx->width * pCodecCtx->height;
uvPlaneSz = pCodecCtx->width * pCodecCtx->height;
yPlane = (Uint8*)malloc(yPlaneSz);
uPlane = (Uint8*)malloc(uvPlaneSz);
vPlane = (Uint8*)malloc(uvPlaneSz);
if (!yPlane || !uPlane || !vPlane) {
fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
exit(1);
}
uvPitch = pCodecCtx->width * 2;
while (av_read_frame(pFormatCtx, &packet) >= 0) {
// Is this a packet from the video stream?
if (packet.stream_index == videoStream) {
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
// Rearranged the order of the planes to reflect UYV order
// then set linesize to the number of Y, U and V bytes
// per row
if (frameFinished) {
AVPicture pict;
pict.data[0] = uPlane;
pict.data[1] = yPlane;
pict.data[2] = vPlane;
pict.linesize[0] = pCodecCtx->width;
pict.linesize[1] = pCodecCtx->width;
pict.linesize[2] = pCodecCtx->width;
// Convert the image into YUV format that SDL uses
sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
pFrame->linesize, 0, pCodecCtx->height, pict.data,
pict.linesize);
SDL_UpdateYUVTexture(
texture,
NULL,
yPlane,
1,
uPlane,
uvPitch,
vPlane,
uvPitch
);
//.................................................
But now I get an access violation at the call to SDL_UpdateYUVTexture... I'm honestly not sure what's wrong. I think it may have to do with setting AVPicture pict's data and linesize members improperly, but I'm not positive.
After many hours of scouring the web for possible answers I stumbled upon this post in which someone was asking about YUV444 support for packed or planar mode. The only current format I've found is AYUV which is packed.
The answer they got was a list of all the currently supported formats which did not include AYUV. Therefore SDL does not support YUV444.
The only solution is to use a different library that supports AYUV / YUV444.
I want to make use of hardware acceleration for decoding an h264 encoded MP4 file.
My computing environment:
Hardware: MacPro (2015 model)
Software: FFmpeg (installed by brew)
Here is the output of FFmpeg command:
$ffmpeg -hwaccels
Hardware acceleration methods:
vda
videotoolbox
According to this document, there are two options for my environment, that is, VDA and VideoToolBox. I tried VDA in C++:
Codec = avcodec_find_decoder_by_name("h264_vda");
It kind of worked, but the output pixel format is UYVY422, which I have trouble dealing with (any suggestion on how to render UYVY422 in C++? The ideal format would be yuv420p).
So I want to try VideoToolbox, but there is no such simple thing as the following (it may work for encoding, though):
Codec = avcodec_find_decoder_by_name("h264_videotoolbox");
It seems I should use AVHWAccel, but what is AVHWAccel and how do I use it?
Part of My C++ code:
for( unsigned int i = 0; i < pFormatCtx->nb_streams; i++ ){
if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO){
pCodecCtx = pFormatCtx->streams[i]->codec;
video_stream = pFormatCtx->streams[i];
if( pCodecCtx->codec_id == AV_CODEC_ID_H264 ){
//pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
pCodec = avcodec_find_decoder_by_name("h264_vda");
break;
}
}
}
// open codec
if( pCodec ){
if((ret=avcodec_open2(pCodecCtx, pCodec, NULL)) < 0) {
....
The choice of output pixel format has nothing to do with the decoder.
Your video's pixel format is UYVY422, so that is the format you get after decoding a frame.
As the answer from #halfelf mentioned, you can run swscale after you decode a frame to convert the pixel format to your ideal format, yuv420p, and then render it.
Meanwhile, if you are sure the format is UYVY422, SDL2 can handle the rendering directly for you.
In the example below, my format is yuv420p, and I use swscale to convert to UYVY422 to render with SDL2.
// prepare swscale context, AV_PIX_FMT_UYVY422 is my destination pix format
SwsContext *swsCtx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
codecCtx->width, codecCtx->height, AV_PIX_FMT_UYVY422,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
SDL_Init(SDL_INIT_EVERYTHING);
SDL_Window *window;
SDL_Renderer *render;
SDL_Texture *texture;
SDL_CreateWindowAndRenderer(codecCtx->width,
codecCtx->height, SDL_WINDOW_OPENGL, &window, &render);
texture = SDL_CreateTexture(render, SDL_PIXELFORMAT_UYVY, SDL_TEXTUREACCESS_STREAMING,
codecCtx->width, codecCtx->height);
// ......
// decode the frame
// ......
AVFrame *frameUYVY = av_frame_alloc();
av_image_alloc(frameUYVY->data, frameUYVY->linesize, codecCtx->width, codecCtx->height, AV_PIX_FMT_UYVY422, 32);
SDL_LockTexture(texture, NULL, (void **)frameUYVY->data, frameUYVY->linesize);
// convert the decoded frame to destination frameUYVY (yuv420p -> uyvy422)
sws_scale(swsCtx, frame->data, frame->linesize, 0, frame->height,
frameUYVY->data, frameUYVY->linesize);
SDL_UnlockTexture(texture);
// perform the render
SDL_RenderClear(render);
SDL_RenderCopy(render, texture, NULL, NULL);
SDL_RenderPresent(render);
In your example, if your pixel format is uyvy422, you can skip the swscale part and render directly after decoding with FFmpeg.
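If you do skip the conversion, a minimal sketch of uploading the packed UYVY frame directly (this assumes the decoded frame->format really is AV_PIX_FMT_UYVY422 and the texture was created with SDL_PIXELFORMAT_UYVY):

// UYVY is a packed format, so the whole image lives in data[0] with stride linesize[0]
SDL_UpdateTexture(texture, NULL, frame->data[0], frame->linesize[0]);
SDL_RenderClear(render);
SDL_RenderCopy(render, texture, NULL, NULL);
SDL_RenderPresent(render);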
Decoders don't choose the output pixel format; it is determined by the video itself. The swscale library is used to convert one pixel format to another.
auto sws_ctx = sws_getContext(src_width, src_height, AV_PIX_FMT_UYVY422, dst_width, dst_height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
av_image_alloc(new_data, new_linesize, dst_width, dst_height, AV_PIX_FMT_YUV420P, FRAME_ALIGN);
sws_scale(sws_ctx, frame->data, frame->linesize, 0, src_height, new_data, new_linesize);
And there is no h264_videotoolbox decoder, only an encoder. To list the available encoders/decoders:
ffmpeg -encoders
ffmpeg -decoders
The decoder/encoder names are written in the source, for example at the end of libavcodec/vda_h264_dec.c and libavcodec/videotoolboxenc.c.
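As a quick programmatic check of which names your linked libavcodec actually provides, here is a sketch (it assumes an FFmpeg from the same era as the question, where avcodec_register_all() still exists):

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    avcodec_register_all(); // required on old FFmpeg; deprecated in 4.0 and removed in 5.0
    const char *names[] = { "h264", "h264_vda", "h264_videotoolbox" };
    for (int i = 0; i < 3; i++) {
        const AVCodec *c = avcodec_find_decoder_by_name(names[i]);
        printf("%-22s %s\n", names[i], c ? "available as a decoder" : "no such decoder");
    }
    return 0;
}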
This question is similar to this one and particularly this one but my desired output is different. I'm trying to capture the desktop to video using opencv. The preferred output is an avi file with divx encoding. I'm new to opencv and bitmap programming in general.
As a first step, to make sure the divx codec is present, I create a single frame (cv::Mat) of a solid color (yellow) and write that 100 times to the video file, as shown here:
int main(int argc, char* argv[])
{
cv::Mat frame(1200, 1920, CV_8UC3, cv::Scalar(0, 50000, 50000));
cv::VideoWriter* videoWriter = new cv::VideoWriter(
"C:/videos/desktop.avi",
CV_FOURCC('D','I','V','3'),
5, cv::Size(1920, 1200), true);
int frameCount = 0;
while (frameCount < 100)
{
videoWriter->write(frame);
::Sleep(100);
frameCount++;
}
delete videoWriter;
return 0;
}
This works perfectly - the video file is created and can be played on my Win 10 machine with VLC, Windows Media Player or the Films&TV app. It's 100 frames of solid yellow, but it shows the video is being created properly.
Next step: replace the dummy cv::Mat frame in the code above with a series of screenshots of the desktop. I get a handle to the desktop window using GetDesktopWindow(), and then use the function hwnd2mat (taken from this SO question - thanks!) to convert the bitmap obtained from the desktop handle to a cv::Mat that I can write to my video.
I copied the hwnd2mat function verbatim except I don't scale the image - the desktop bitmap is already 1920x1200, and also the cv::Mat I create is CV_8UC3 instead of CV_8UC4 (CV_8UC4 causes my app to crash).
Here's the code, including a reprint of hwnd2mat:
int main(int argc, char* argv[])
{
cv::VideoWriter* videoWriter = new cv::VideoWriter(
"C:/videos/desktop.avi",
CV_FOURCC('D','I','V','3'),
5, Size(1920, 1200), true);
int frameCount = 0;
while (frameCount < 100)
{
HWND hDsktopWindow = ::GetDesktopWindow();
cv::Mat frame = hwnd2mat(hDsktopWindow);
videoWriter->write(frame);
::Sleep(100);
frameCount++;
}
delete videoWriter;
return 0;
}
cv::Mat hwnd2mat(HWND hwnd)
{
HDC hwindowDC, hwindowCompatibleDC;
int height, width, srcheight, srcwidth;
HBITMAP hbwindow;
cv::Mat src;
BITMAPINFOHEADER bi;
hwindowDC = GetDC(hwnd);
hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);
SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);
RECT windowsize; // get the height and width of the screen
GetClientRect(hwnd, &windowsize);
srcheight = windowsize.bottom;
srcwidth = windowsize.right;
height = windowsize.bottom / 1; //change this to whatever size you want to resize to
width = windowsize.right / 1;
src.create(height, width, CV_8UC3);
// create a bitmap
hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
bi.biSize = sizeof(BITMAPINFOHEADER);
bi.biWidth = width;
bi.biHeight = -height; //this is the line that makes it draw upside down or not
bi.biPlanes = 1;
bi.biBitCount = 32;
bi.biCompression = BI_RGB;
bi.biSizeImage = 0;
bi.biXPelsPerMeter = 0;
bi.biYPelsPerMeter = 0;
bi.biClrUsed = 0;
bi.biClrImportant = 0;
// use the previously created device context with the bitmap
SelectObject(hwindowCompatibleDC, hbwindow);
// copy from the window device context to the bitmap device context
StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0,srcwidth, srcheight, SRCCOPY);
GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS);
// avoid memory leak
DeleteObject(hbwindow); DeleteDC(hwindowCompatibleDC); ReleaseDC(hwnd,hwindowDC);
return src;
}
The result of this is that the video file is created and can be played without errors, but it's just solid grey. It seems like the bitmap of the desktop is not getting copied correctly into the cv::Mat frame. I've tried a zillion combinations of the values in the BITMAPINFOHEADER, but nothing works and I don't know what I'm doing to be honest. I know opencv has conversion functions but again, I don't even really know what I'm trying to convert to/from.
Any help appreciated!
Figured out a way to make it work - I have no idea if this is the best way, so comments or alternative solutions are still welcome.
It seems like for GetDIBits to work, the cv::Mat has to be 4-channel, i.e. CV_8UC4, like the original code of hwnd2mat before I changed it. If it is not CV_8UC4, no data is copied (GetDIBits returns 0 scan lines copied) and that's why my avi was just gray. So the first change was to create the src cv::Mat as 4-channel:
src.create(height, width, CV_8UC4);
But for the divx-encoded avi file that I'm trying to create, the frames should be 3-channel (don't ask me why). I added a call to an opencv conversion function after calling GetDIBits(), as follows:
cv::Mat dst;
dst.create(height, width, CV_8UC3);
cv::cvtColor(src, dst, CV_RGBA2RGB);
And then I return dst from hwnd2mat instead of src. The call to cvtColor removes the alpha channel (the A in RGBA) and dst ends up with just the R,G,B channels.
You can get a bitmap with no alpha channel from GetDIBits and write it straight to cv::VideoWriter. Just change biBitCount to 24 and leave the Mat format as CV_8UC3. This worked for me.
src.create(height, width, CV_8UC3);
bi.biBitCount = 24; // this is where to change
I want to convert YUV420P image (received from H.264 stream) to RGB, while also resizing it, using sws_scale.
The size of the original image is 480 × 800. Converting with the same dimensions works fine.
But when I try to change the dimensions, I get a distorted image, with the following pattern:
changing to 481 × 800 will yield a distorted B&W image which looks like it's cut in the middle
482 × 800 will be even more distorted
483 × 800 is distorted but in color
484 × 800 is ok (scaled correctly).
This pattern continues: scaling only works correctly when the difference between the source and destination widths is divisible by 4.
Here's sample code showing the way I decode and convert the image. All the calls report success.
int srcX = 480;
int srcY = 800;
int dstX = 481; // or 482, 483 etc
int dstY = 800;
AVFrame* avFrameYUV = avcodec_alloc_frame();
avpicture_fill((AVPicture *)avFrameYUV, decoded_yuv_frame, PIX_FMT_YUV420P, srcX , srcY);
AVFrame *avFrameRGB = avcodec_alloc_frame();
AVPacket avPacket;
av_init_packet(&avPacket);
avPacket.size = read; // size of raw data
avPacket.data = raw_data; // raw data before decoding to YUV
int frame_decoded = 0;
int decoded_length = avcodec_decode_video2(g_avCodecContext, avFrameYUV, &frame_decoded, &avPacket);
int size = dstX * dstY * 3;
struct SwsContext *img_convert_ctx = sws_getContext(srcX, srcY, SOURCE_FORMAT, dstX, dstY, PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL, NULL);
avpicture_fill((AVPicture *)avFrameRGB, rgb_frame, PIX_FMT_RGB24, dstX, dstY);
sws_scale(img_convert_ctx, avFrameYUV->data, avFrameYUV->linesize, 0, srcY, avFrameRGB->data, avFrameRGB->linesize);
// draws the resulting frame with windows BitBlt
DrawBitmap(hdc, dstX, dstY, rgb_frame, size);
sws_freeContext(img_convert_ctx);
When you make a bitmap image, the width of the image MUST be a multiple of 4.
So you have to change the width to something like 480, 484, 488, 492, ...
Here is a method to round to a multiple of 4:
#define WIDTHBYTES(bits) (((bits) + 31) / 32 * 4)
int main(void)
{
BITMAPFILEHEADER bmFileHeader;
BITMAPINFOHEADER bmInfoHeader;
int width;
// load the image and fill bmFileHeader, bmInfoHeader and width
// ...
// when you use the macro, pass the parameter like this
// (it returns the row size in bytes, rounded up to a multiple of 4):
int tempWidth = WIDTHBYTES(width * bmInfoHeader.biBitCount);
return 0;
}
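Applied to the scaler setup from the question (srcX, srcY, SOURCE_FORMAT and the flags are the asker's names), a minimal sketch of forcing the destination width up to a multiple of 4 before creating the context:

int dstX = 481;            // requested width
int dstY = 800;
dstX = (dstX + 3) & ~3;    // round up to a multiple of 4: 481 -> 484

struct SwsContext *img_convert_ctx =
    sws_getContext(srcX, srcY, SOURCE_FORMAT, dstX, dstY, PIX_FMT_BGR24,
                   SWS_BICUBIC, NULL, NULL, NULL);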
I hope you solve the problem.