I have an MPEG-2 stream that contains KLV metadata. I stored the KLV in a file using the ffmpeg command line:
ffmpeg -i input.mpg -map data-re -codec copy -f data output.klv
I now want to do this in C++. So, I have the FFmpeg setup …..
Then the main loop:
// Read frames
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream)
    {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a video frame?
        if (frameFinished)
        {
            // Convert the image from its native format to RGB
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);
            QImage myImage(pFrameRGB->data[0], pCodecCtx->width, pCodecCtx->height, QImage::Format_RGB888);
            QPixmap img(QPixmap::fromImage(myImage.scaled(ui->label->width(), ui->label->height(), Qt::KeepAspectRatio)));
            ui->label->setPixmap(img);
            QCoreApplication::processEvents();
        }
    }
    else // klv stream
    {
        // Decode klv data
        qDebug() << packet.buf->size;
        for (int i = 0; i < packet.buf->size; i++)
        {
            qDebug() << packet.buf->data[i];
        }
    }
}
The resulting KLV output is different, so I must be doing something wrong when processing the packet. The frames are good and I'm viewing them in a Qt label, so my FFmpeg setup is working for the images but not for the KLV data.
My bad: this code is working. I was comparing the integer output to the ffmpeg output viewed in Notepad; when I used Notepad++, I could make sense of the ffmpeg output and it does correlate :)
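For anyone who wants to reproduce the command line's output.klv byte-for-byte from C++, here is a minimal sketch of my own (not from the original post): it uses the same pFormatCtx / packet / videoStream variables as the loop above, while klvFile is a hypothetical FILE* opened beforehand with fopen("output_cpp.klv", "wb").

// Minimal sketch: dump every non-video packet's payload verbatim, so the
// resulting file can be compared byte-for-byte with ffmpeg's output.klv.
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index != videoStream && packet.size > 0)
    {
        fwrite(packet.data, 1, packet.size, klvFile);   // raw KLV bytes, no decoding needed
    }
    av_packet_unref(&packet);   // release the packet from av_read_frame (av_free_packet on older builds)
}

Because the bytes are written unchanged, a binary diff against the file produced by the command-line extraction should match.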
Related
I want to convert MP4 format to WAV format with a different sample rate in my C++ application.
First of all, I extracted the audio from the MP4 file with FFmpeg in C++ and converted it to a raw file, but I do not know how to convert that raw file to a WAV file with a different sample rate.
How can I solve this?
#include "ffmpeg.h"
int decode_packet(int *got_frame, int cached)
{
int ret = 0;
int decoded = pkt.size;
*got_frame = 0;
if (pkt.stream_index == video_stream_idx) {
/* decode video frame */
ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
// fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
return ret;
}
if (*got_frame) {
if (frame->width != width || frame->height != height ||
frame->format != pix_fmt) {
/* To handle this change, one could call av_image_alloc again and
* decode the following frames into another rawvideo file. */
// fprintf(stderr, "Error: Width, height and pixel format have to be "
// "constant in a rawvideo file, but the width, height or "
// "pixel format of the input video changed:\n"
// "old: width = %d, height = %d, format = %s\n"
// "new: width = %d, height = %d, format = %s\n",
// width, height, av_get_pix_fmt_name(pix_fmt),
// frame->width, frame->height,
// av_get_pix_fmt_name(frame->format));
return -1;
}
printf("video_frame%s n:%d coded_n:%d\n",
cached ? "(cached)" : "",
video_frame_count++, frame->coded_picture_number);
/* copy decoded frame to destination buffer:
* this is required since rawvideo expects non aligned data */
av_image_copy(video_dst_data, video_dst_linesize,
(const uint8_t **)(frame->data), frame->linesize,
pix_fmt, width, height);
/* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
}
} else if (pkt.stream_index == audio_stream_idx) {
/* decode audio frame */
ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
// fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(ret));
return ret;
}
/* Some audio decoders decode only part of the packet, and have to be
* called again with the remainder of the packet data.
* Sample: fate-suite/lossless-audio/luckynight-partial.shn
* Also, some decoders might over-read the packet. */
decoded = FFMIN(ret, pkt.size);
if (*got_frame) {
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample((AVSampleFormat)frame->format);
// printf("audio_frame%s n:%d nb_samples:%d pts:%s\n",
// cached ? "(cached)" : "",
// audio_frame_count++, frame->nb_samples,
// av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
// fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
//encode function
encode(cOut, frame, &pktout, audio_dst_file);
// av_init_packet(&pktout);
// pktout.data = NULL; // packet data will be allocated by the encoder
// pktout.size = 0;
// /* encode the samples */
// ret = avcodec_encode_audio2(cOut, &pktout, frame, &got_outputOut);
// if (ret < 0) {
// fprintf(stderr, "Error encoding audio frame\n");
// exit(1);
// }
// if (got_outputOut) {
// fwrite(pktout.data, 1, pktout.size, audio_dst_file);
// av_free_packet(&pktout);
// }
}
}
/* If we use frame reference counting, we own the data and need
* to de-reference it when we don't use it anymore */
if (*got_frame && refcount)
av_frame_unref(frame);
return decoded;
}
First you should use libswresample to resample the audio data.
Then you can save the resampled raw audio data in WAV format (raw PCM plus a WAV header).
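As a rough illustration (my own sketch, not code from the original post), here is how libswresample can resample each decoded frame to packed 16-bit stereo at an assumed target rate of 44100 Hz before writing it out. audio_dec_ctx and frame are the variables from the decode_packet code above; OUT_RATE, swr and raw_out are hypothetical names introduced here.

// Resampling sketch: convert each decoded frame to packed S16 stereo @ 44100 Hz.
extern "C" {
#include <libswresample/swresample.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>
#include <libavutil/mathematics.h>
}

static const int OUT_RATE = 44100;   // assumed target sample rate

// One-time setup after the audio decoder has been opened.
SwrContext *swr = swr_alloc_set_opts(NULL,
        AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, OUT_RATE,            // output: packed s16 stereo
        audio_dec_ctx->channel_layout, audio_dec_ctx->sample_fmt,    // input: whatever the decoder produces
        audio_dec_ctx->sample_rate, 0, NULL);                        // (channel_layout may need a fallback if 0)
swr_init(swr);

// Per decoded frame (inside the *got_frame branch), instead of the raw fwrite:
int max_out = (int)av_rescale_rnd(swr_get_delay(swr, audio_dec_ctx->sample_rate) + frame->nb_samples,
                                  OUT_RATE, audio_dec_ctx->sample_rate, AV_ROUND_UP);
uint8_t *out_buf = NULL;
av_samples_alloc(&out_buf, NULL, 2, max_out, AV_SAMPLE_FMT_S16, 0);
int out_samples = swr_convert(swr, &out_buf, max_out,
                              (const uint8_t **)frame->extended_data, frame->nb_samples);
if (out_samples > 0)
    fwrite(out_buf, 1, out_samples * 2 * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16), raw_out);
av_freep(&out_buf);

The resulting raw PCM can then be turned into a WAV file by prepending a standard RIFF/WAVE header that records the sample rate, channel count and bit depth.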
I'm using OpenCV and the v4l2loopback kernel module to emulate video devices:
modprobe v4l2loopback devices=2
Then I check what devices I have:
root@blah:~$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
/dev/video1
Dummy video device (0x0001) (platform:v4l2loopback-001):
/dev/video2
XI100DUSB-SDI (usb-0000:00:14.0-9):
/dev/video0
video0 is my actual camera where I grab frames from, then I plan to process them via OpenCV and write it to video2 (which is a sink I believe).
Here is how I attempt to do so:
int width = 320;
int height = 240;
Mat frame(height, width, CVX_8UC3, Scalar(0, 0, 255));
cvtColor(frame, frame, CVX_BGR2YUV);
int fourcc = CVX_FOURCC('Y', 'U', 'Y', '2');

cout << "Trying to open video for write: " << FLAGS_out_video << endl;
VideoWriter outputVideo = VideoWriter(FLAGS_out_video, fourcc, 30, frame.size());
if (!outputVideo.isOpened()) {
    cerr << "Could not open the output video for write: " << FLAGS_out_video << endl;
}
As far as I know, the video output format should be YUYV (which corresponds to YUY2 in OpenCV); please correct me if I'm wrong. In my code I'm not writing anything into outputVideo yet, just trying to open it for writing, but I keep getting outputVideo.isOpened() == false for some reason, with no additional errors/info in the output:
root@blah:~$ main --uid='' --in_video='0' --out_video='/dev/video2'
Trying to open video for write: /dev/video2
Could not open the output video for write: /dev/video2
I'd appreciate any advice or help on how to debug/resolve this issue. Thank you in advance!
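As an aside for readers who hit the same isOpened() == false result (this is not part of the original question): some OpenCV builds may simply be unable to open a V4L2 output device through cv::VideoWriter, and a common fallback with v4l2loopback is to configure the sink yourself and write() raw YUYV frames to it. A rough sketch under that assumption, with the device path and YUYV format taken from the question; the function names are hypothetical and the frame data must already be laid out as packed YUYV (2 bytes per pixel).

// Hypothetical fallback: push frames to the v4l2loopback sink directly via the V4L2 API.
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int open_loopback_sink(const char *dev, int width, int height) {
    int fd = open(dev, O_WRONLY);                 // e.g. "/dev/video2"
    if (fd < 0) return -1;

    v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    ioctl(fd, VIDIOC_G_FMT, &fmt);                // start from the device's current format
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;  // packed YUYV, i.e. YUY2
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    fmt.fmt.pix.bytesperline = width * 2;
    fmt.fmt.pix.sizeimage = width * height * 2;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { close(fd); return -1; }
    return fd;
}

// Per frame: 'data' must already hold width*height*2 bytes of YUYV pixels.
void push_frame(int fd, const unsigned char *data, int width, int height) {
    write(fd, data, static_cast<size_t>(width) * height * 2);
}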
I am trying to scale a decoded YUV420P frame (1018x700) to RGBA via sws_scale. I am saving the data to a raw video file and then playing the raw video with ffplay to see the result.
Here is my code:
sws_ctx = sws_getContext(video_dec_ctx->width, video_dec_ctx->height, AV_PIX_FMT_YUV420P,
                         video_dec_ctx->width, video_dec_ctx->height, AV_PIX_FMT_BGR32,
                         SWS_LANCZOS | SWS_ACCURATE_RND, 0, 0, 0);

ret = avcodec_decode_video2(video_dec_ctx, yuvframe, got_frame, &pkt);
if (ret < 0) {
    std::cout << "Error in decoding" << std::endl;
    return ret;
} else {
    // the source and destination heights and widths are the same
    int sourceX = video_dec_ctx->width;
    int sourceY = video_dec_ctx->height;
    int destX = video_dec_ctx->width;
    int destY = video_dec_ctx->height;

    // declare destination frame
    AVFrame avFrameRGB;
    avFrameRGB.linesize[0] = destX * 4;
    avFrameRGB.data[0] = (uint8_t*)malloc(avFrameRGB.linesize[0] * destY);

    // scale the frame to avFrameRGB
    sws_scale(sws_ctx, yuvframe->data, yuvframe->linesize, 0, yuvframe->height,
              avFrameRGB.data, avFrameRGB.linesize);

    // write to file
    fwrite(avFrameRGB.data[0], 1, video_dst_bufsize, video_dst_file);
}
Here is the result without scaling (i.e. in YUV420P format).
Here is the result after scaling, played back with ffplay (i.e. in RGBA format).
I run ffplay using the following command ('video' is the raw video file):
ffplay -f rawvideo -pix_fmt bgr32 -video_size 1018x700 video
What should I fix so that the scaling to RGB32 comes out correctly?
I found the solution: the problem was that I was not using the correct buffer size when writing to the file.
fwrite(avFrameRGB.data[0], 1, video_dst_bufsize, video_dst_file);
The variable video_dst_bufsize was being taken from the return value of
video_dst_bufsize = av_image_alloc(yuvframe.data, yuvframe.linesize, destX, destY, AV_PIX_FMT_YUV420P, 1);
The solution is to allocate the destination frame with av_image_alloc as well, take its return value, and use that in the fwrite statement:
video_dst_bufsize_RGB = av_image_alloc(avFrameRGB.data, avFrameRGB.linesize, destX, destY, AV_PIX_FMT_BGR32, 1);
fwrite(avFrameRGB.data[0], 1, video_dst_bufsize_RGB, video_dst_file);
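For completeness (my own consolidation, not code from the original answer), the corrected destination-frame handling might look like this, with av_image_alloc replacing the manual malloc so the data pointers, linesizes and buffer size all stay consistent:

// Corrected destination-frame handling (sketch): let av_image_alloc set up
// data pointers, linesizes and the total buffer size for the BGR32 output.
AVFrame *avFrameRGB = av_frame_alloc();
int video_dst_bufsize_RGB = av_image_alloc(avFrameRGB->data, avFrameRGB->linesize,
                                           destX, destY, AV_PIX_FMT_BGR32, 1);

sws_scale(sws_ctx, yuvframe->data, yuvframe->linesize, 0, yuvframe->height,
          avFrameRGB->data, avFrameRGB->linesize);

fwrite(avFrameRGB->data[0], 1, video_dst_bufsize_RGB, video_dst_file);

av_freep(&avFrameRGB->data[0]);   // free the image buffer allocated by av_image_alloc
av_frame_free(&avFrameRGB);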
Here is the code I used to decode an RTSP stream in a worker thread:
while (1)
{
    // Read a frame
    if (av_read_frame(pFormatCtx, &packet) < 0)
        break; // Frame read failed (e.g. end of stream)

    // Is this a packet from the video stream? -> decode video frame
    if (packet.stream_index == videoStream)
    {
        int frameFinished;
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished)
        {
            if (LastFrameOk == false)
            {
                LastFrameOk = true;
            }

            // Convert the image format (init the context the first time)
            int w = pCodecCtx->width;
            int h = pCodecCtx->height;
            img_convert_ctx = ffmpeg::sws_getCachedContext(img_convert_ctx, w, h, pCodecCtx->pix_fmt,
                                                           w, h, ffmpeg::PIX_FMT_RGB24, SWS_BICUBIC,
                                                           NULL, NULL, NULL);
            if (img_convert_ctx == NULL)
            {
                printf("Cannot initialize the conversion context!\n");
                return false;
            }
            ffmpeg::sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
                              pFrameRGB->data, pFrameRGB->linesize);

            // Convert the frame to QImage
            LastFrame = QImage(w, h, QImage::Format_RGB888);
            for (int y = 0; y < h; y++)
                memcpy(LastFrame.scanLine(y), pFrameRGB->data[0] + y * pFrameRGB->linesize[0], w * 3);
            LastFrameOk = true;
        } // frameFinished
    } // stream_index == videoStream

    av_free_packet(&packet); // Free the packet that was allocated by av_read_frame
}
I followed the FFmpeg tutorial and used a while loop to read packets and decode the video.
But is there a more efficient way to do this, such as an event-triggered function that runs when a packet is received?
I haven't seen any event-driven approach to reading frames, but what is the purpose of reading the RTSP stream? I can give some recommendations for improving performance. First of all, you may add a very short sleep in your loop (e.g. Sleep(1);). In your program, if your purpose is to:
Display images to the user: Don't convert to RGB. After decoding, the resulting frame is in the YUV420P format, which can be displayed directly via the GPU with no CPU usage. Almost all graphics cards support the YUV420P (or YV12) format. Conversion to RGB is a highly CPU-consuming operation, especially for large images.
Record (save) to disk: If you want to record the stream to play it back later, there is no need to decode the frames. You may use OpenRTSP to record directly to disk without any CPU usage.
Process real-time images: You may find alternative algorithms that work on the YUV420P format instead of RGB. The Y plane of a YUV420P frame is effectively a grayscale version of the colored image (see the sketch after this list).
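As a small illustration of that last point (my own sketch, not from the original answer), the Y plane of the decoded pFrame can be wrapped as a grayscale image without any sws_scale call at all; QImage::Format_Grayscale8 is assumed to be available (Qt 5.5+), and pFrame / pCodecCtx are the variables from the decoding loop above.

// Sketch: treat the Y plane of a decoded YUV420P frame as a grayscale image.
int w = pCodecCtx->width;
int h = pCodecCtx->height;

// data[0] is the Y plane; linesize[0] is its stride in bytes.
QImage gray(pFrame->data[0], w, h, pFrame->linesize[0], QImage::Format_Grayscale8);

// Deep-copy before the decoder reuses the frame buffer.
QImage grayCopy = gray.copy();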
Marked question as outdated, as it uses the deprecated avcodec_decode_video2.
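For reference, here is a minimal sketch (mine, not from the thread) of the send/receive API that replaces avcodec_decode_video2 in newer FFmpeg versions (3.1+), reusing the same pCodecCtx / pFrame / packet names as the loops above:

// Sketch of the non-deprecated decode API: feed a packet in,
// then drain every frame the decoder has ready.
int ret = avcodec_send_packet(pCodecCtx, &packet);
if (ret < 0) {
    // feeding failed (bad packet, decoder error, ...)
}
while (ret >= 0) {
    ret = avcodec_receive_frame(pCodecCtx, pFrame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break;              // need more input, or the stream is finished
    if (ret < 0)
        break;              // real decoding error
    // ... use pFrame here (sws_scale, queueing, display, etc.) ...
}
av_packet_unref(&packet);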
I'm currently experiencing artifacts when decoding video using FFmpeg's API. On what I would assume to be intermediate frames, artifacts build slowly, and only from active movement in the frame. These artifacts build for 50-100 frames until, I assume, a keyframe resets them. Frames are then decoded correctly and the artifacts proceed to build again.
One thing that is bothering me is that I have a few video samples at 30 fps (H.264) that work correctly, but all of my 60 fps (H.264) videos exhibit the problem.
I don't currently have enough reputation to post an image, so hopefully this link will work.
http://i.imgur.com/PPXXkJc.jpg
int numBytes;
int frameFinished;
AVFrame* decodedRawFrame;
AVFrame* rgbFrame;

// Enum class for decoding results, used to break the decode loop when a frame is gathered
DecodeResult retResult = DecodeResult::Fail;

decodedRawFrame = av_frame_alloc();
rgbFrame = av_frame_alloc();
if (!decodedRawFrame) {
    fprintf(stderr, "Could not allocate video frame\n");
    return DecodeResult::Fail;
}

numBytes = avpicture_get_size(PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);
uint8_t* buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
avpicture_fill((AVPicture *) rgbFrame, buffer, PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);

AVPacket packet;

while (av_read_frame(mFormatCtx, &packet) >= 0 && retResult != DecodeResult::Success)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == mVideoStreamIndex)
    {
        // Decode video frame
        int decodeValue = avcodec_decode_video2(mCodecCtx, decodedRawFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished) // && rgbFrame->pict_type != AV_PICTURE_TYPE_NONE
        {
            // Convert the image from its native format to RGB
            int SwsFlags = SWS_BILINEAR;
            // Accurate round clears up a problem where the start
            // of videos have green bars on them
            SwsFlags |= SWS_ACCURATE_RND;
            struct SwsContext *ctx = sws_getCachedContext(NULL, mCodecCtx->width, mCodecCtx->height, mCodecCtx->pix_fmt,
                                                          mCodecCtx->width, mCodecCtx->height, PIX_FMT_RGBA,
                                                          SwsFlags, NULL, NULL, NULL);

            sws_scale(ctx, decodedRawFrame->data, decodedRawFrame->linesize, 0, mCodecCtx->height,
                      rgbFrame->data, rgbFrame->linesize);

            //if (count % 5 == 0 && count < 105)
            //    DebugSavePPMImage(rgbFrame, mCodecCtx->width, mCodecCtx->height, count);

            ++count;

            // ViewableFrame is a struct to hold buffer and frame together in a queue
            ViewableFrame frame;
            frame.buffer = buffer;
            frame.frame = rgbFrame;
            mFrameQueue.push(frame);

            retResult = DecodeResult::Success;

            sws_freeContext(ctx);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}

// Check for end of file leftover frames
if (retResult != DecodeResult::Success)
{
    int result = av_read_frame(mFormatCtx, &packet);
    if (result < 0)
        isEoF = true;
    av_free_packet(&packet);
}

// Free the YUV frame
av_frame_free(&decodedRawFrame);
I'm attempting to build a queue of the decoded frames that I then use and free as needed. Is my separation of the frames causing the intermediate frames to be decoded incorrectly? I also break out of the decoding loop once I've successfully gathered a frame (DecodeResult::Success), whereas most examples I've seen tend to loop through the whole video.
All codec contexts, video stream information, and format contexts are set up exactly as shown in the main function of https://github.com/chelyaev/ffmpeg-tutorial/blob/master/tutorial01.c
Any suggestions would be greatly appreciated.
For reference, in case someone finds themselves in a similar position: apparently with some older versions of FFmpeg there's an issue when using sws_scale to convert an image without changing the actual dimensions of the final frame. If instead you create the flags for the SwsContext using:
int SwsFlags = SWS_BILINEAR;   // Whatever you want
SwsFlags |= SWS_ACCURATE_RND;  // Under the hood this forces ffmpeg to use the same logic as if it were scaling
SWS_ACCURATE_RND has a performance penalty, but for regular video it's probably not that noticeable. This will remove the splash of green, or the green bars along the edges of textures, if present.
I also wanted to thank Multimedia Mike and George Y; they were right that the way I was decoding the frames wasn't preserving the packets correctly, and that was what caused the video artifacts building up from previous frames.
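To illustrate that last point (my own sketch, not code from the answer), one way to stop every queued entry from sharing the single rgbFrame/buffer pair is to allocate a fresh destination frame per decoded picture, so each ViewableFrame owns its own pixels; it uses the same variables and (deprecated) avpicture_* calls as the question's code:

// Sketch: give every queued frame its own RGBA buffer instead of reusing
// the shared rgbFrame/buffer for the whole queue.
AVFrame* queuedFrame = av_frame_alloc();
uint8_t* queuedBuffer = (uint8_t*)av_malloc(
    avpicture_get_size(PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height));
avpicture_fill((AVPicture*)queuedFrame, queuedBuffer, PIX_FMT_RGBA,
               mCodecCtx->width, mCodecCtx->height);

sws_scale(ctx, decodedRawFrame->data, decodedRawFrame->linesize, 0, mCodecCtx->height,
          queuedFrame->data, queuedFrame->linesize);

ViewableFrame entry;
entry.buffer = queuedBuffer;   // free with av_free() when the entry is consumed
entry.frame  = queuedFrame;    // free with av_frame_free() when the entry is consumed
mFrameQueue.push(entry);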