I have a video file. How can I get the fps for this video with ffmpeg in C++?
Please post the full code.
This is a simple program I wrote to dump video information to console:
#include <stdio.h>
extern "C" {
#include <libavformat/avformat.h>
}
int main(int argc, const char *argv[])
{
if (argc < 2)
{
printf("No video file.\n");
return -1;
}
av_register_all();
AVFormatContext *pFormatCtx = NULL;
//open video file
if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0)
return -1;
//get stream info
if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
return -1;
av_dump_format(pFormatCtx, 0, argv[1], 0);
}
Compile and run it; the output looks like:
s@ubuntu-vm:~/Desktop/video-info-dump$ ./vdump a.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'a.mp4':
Metadata:
major_brand : isom
minor_version : 1
compatible_brands: isom
creation_time : 2014-04-23 06:18:02
encoder : FormatFactory : www.pcfreetime.com
Duration: 00:07:20.60, start: 0.000000, bitrate: 1354 kb/s
Stream #0:0(und): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 1228 kb/s, 24 fps, 24 tbr, 24k tbn, 24 tbc (default)
Metadata:
creation_time : 2014-04-23 06:18:02
handler_name : video
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 123 kb/s (default)
Metadata:
creation_time : 2014-04-23 06:18:25
handler_name : sound
Recommend a very good tutorial for ffmpeg and SDL.
@zhm's answer was very close; I've made a small update to get the frame rate only. On my end, I also need the bit_rate, which is an int64_t value in the AVFormatContext.
For the FPS, you need to go through the list of streams, probably check whether each one is audio or video, and then access r_frame_rate, which is an AVRational value. It holds a numerator and a denominator; you can simply divide one by the other to get a double, and there is even a helper function (av_q2d()) to do it.
#include <iostream>
extern "C" {
#include <libavformat/avformat.h>
}
int main(int argc, char * argv[])
{
if (argc < 2)
{
printf("No video file.\n");
return -1;
}
av_register_all();
AVFormatContext *pFormatCtx = NULL;
//open video file
if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0)
return -1;
//get stream info
if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
return -1;
// dump the whole thing like ffprobe does
//av_dump_format(pFormatCtx, 0, argv[1], 0);
// get the frame rate of each stream
for (unsigned int idx = 0; idx < pFormatCtx->nb_streams; ++idx)
{
AVStream *s(pFormatCtx->streams[idx]);
std::cout << idx << ". " << s->r_frame_rate.num
<< " / " << s->r_frame_rate.den
<< " = " << av_q2d(s->r_frame_rate)
<< "\n";
}
// get the video bit rate
std::cout << "bit rate " << pFormatCtx->bit_rate << "\n";
return 0;
}
For more information, you may want to take a look at the avformat.h header where the AVFormatContext and AVStream structures are defined.
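If you only care about the video stream, here is a minimal sketch of that idea (print_video_fps is my own helper, not from the answer above; it assumes a libavformat recent enough to have av_find_best_stream() and AVStream::avg_frame_rate):
extern "C" {
#include <libavformat/avformat.h>
}
#include <iostream>
// Sketch: print the frame rate of the best video stream only.
// pFormatCtx is assumed to be opened with avformat_open_input() and
// avformat_find_stream_info() exactly as in the code above.
static void print_video_fps(AVFormatContext *pFormatCtx)
{
    int videoIdx = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (videoIdx < 0) {
        std::cerr << "no video stream found\n";
        return;
    }
    AVStream *video = pFormatCtx->streams[videoIdx];
    // avg_frame_rate is the average frame rate over the whole stream;
    // r_frame_rate is the "real base" frame rate guessed by libavformat.
    std::cout << "avg fps: " << av_q2d(video->avg_frame_rate)
              << ", r_frame_rate: " << av_q2d(video->r_frame_rate) << "\n";
}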
You could execute ffmpeg like this: ffmpeg -i filename, and it will output the frame rate if it's not variable.
Example:
Input #0, matroska,webm, from 'somerandom.mkv':
Duration: 01:16:10.90, start: 0.000000, bitrate: N/A
Stream #0.0: Video: h264 (High), yuv420p, 720x344 [PAR 1:1 DAR 90:43], 25 fps, 25 tbr, 1k tbn, 50 tbc (default)
Stream #0.1: Audio: aac, 48000 Hz, stereo, s16 (default)
This video has a frame rate of 25 fps.
To execute a program you can use the answer in https://stackoverflow.com/a/17703834/58553
Source: https://askubuntu.com/questions/110264/how-to-find-frames-per-second-of-any-video-file
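If you would rather not link against the ffmpeg libraries at all, another option is to run ffprobe from your C++ program and parse its output. A sketch (assuming a POSIX system where popen() is available and ffprobe is on the PATH; probe_fps is my own helper name):
#include <cstdio>
#include <string>
#include <iostream>
// Sketch: ask ffprobe for the frame rate of the first video stream.
// Returns something like "25/1"; an empty string means the call failed.
static std::string probe_fps(const std::string &path)
{
    std::string cmd = "ffprobe -v error -select_streams v:0 "
                      "-show_entries stream=r_frame_rate "
                      "-of default=noprint_wrappers=1:nokey=1 \"" + path + "\"";
    FILE *pipe = popen(cmd.c_str(), "r");
    if (!pipe)
        return "";
    char buf[128];
    std::string out;
    while (fgets(buf, sizeof(buf), pipe))
        out += buf;
    pclose(pipe);
    return out;
}
int main()
{
    std::cout << probe_fps("a.mp4"); // prints e.g. "25/1"
}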
Related
I am trying to write a program to generate frames to be encoded via ffmpeg/libav into an mp4 file with a single h264 stream. I found these two examples and am sort of trying to merge them together to make what I want: [video transcoder] [raw MPEG1 encoder]
I have been able to get video output (a green circle changing size), but no matter how I set the PTS values of the frames or what time_base I specify in the AVCodecContext or AVStream, I'm getting frame rates of about 7000-15000 instead of 60, resulting in a video file that lasts 70 ms instead of 1000 frames / 60 fps ≈ 16.7 seconds. Every time I change some of my code, the frame rate changes a little bit, almost as if it's reading from uninitialized memory. Other references to an issue like this on Stack Overflow seem to be related to incorrectly set PTS values; however, I've tried printing out all the PTS, DTS, and time base values I can find and they all seem normal. Here's my proof-of-concept code (with the error-catching code around the libav calls removed for clarity):
#include <iostream>
#include <opencv2/opencv.hpp>
#include <math.h>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/timestamp.h>
}
using namespace cv;
int main(int argc, char *argv[]) {
const char *filename = "testvideo.mp4";
AVFormatContext *avfc;
avformat_alloc_output_context2(&avfc, NULL, NULL, filename);
AVStream *stream = avformat_new_stream(avfc, NULL);
AVCodec *h264 = avcodec_find_encoder(AV_CODEC_ID_H264);
AVCodecContext *avcc = avcodec_alloc_context3(h264);
av_opt_set(avcc->priv_data, "preset", "fast", 0);
av_opt_set(avcc->priv_data, "crf", "20", 0);
avcc->thread_count = 1;
avcc->width = 1920;
avcc->height = 1080;
avcc->pix_fmt = AV_PIX_FMT_YUV420P;
avcc->time_base = av_make_q(1, 60);
stream->time_base = avcc->time_base;
if(avfc->oformat->flags & AVFMT_GLOBALHEADER)
avcc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
avcodec_open2(avcc, h264, NULL);
avcodec_parameters_from_context(stream->codecpar, avcc);
avio_open(&avfc->pb, filename, AVIO_FLAG_WRITE);
avformat_write_header(avfc, NULL);
Mat frame, nothing = Mat::zeros(1080, 1920, CV_8UC1);
AVFrame *avf = av_frame_alloc();
AVPacket *avp = av_packet_alloc();
int ret;
avf->format = AV_PIX_FMT_YUV420P;
avf->width = 1920;
avf->height = 1080;
avf->linesize[0] = 1920;
avf->linesize[1] = 1920;
avf->linesize[2] = 1920;
for(int x=0; x<1000; x++) {
frame = Mat::zeros(1080, 1920, CV_8UC1);
circle(frame, Point(1920/2, 1080/2), 250*(sin(2*M_PI*x/1000*3)+1.01), Scalar(255), 10);
avf->data[0] = frame.data;
avf->data[1] = nothing.data;
avf->data[2] = nothing.data;
avf->pts = x;
ret = 0;
do {
if(ret == AVERROR(EAGAIN)) {
av_packet_unref(avp);
ret = avcodec_receive_packet(avcc, avp);
if(ret) break; // deal with error
av_write_frame(avfc, avp);
} //else if(ret) deal with error
ret = avcodec_send_frame(avcc, avf);
} while(ret);
}
// flush the rest of the packets
avcodec_send_frame(avcc, NULL);
do {
av_packet_unref(avp);
ret = avcodec_receive_packet(avcc, avp);
if(!ret)
av_write_frame(avfc, avp);
} while(!ret);
av_frame_free(&avf);
av_packet_free(&avp);
av_write_trailer(avfc);
avformat_close_input(&avfc);
avformat_free_context(avfc);
avcodec_free_context(&avcc);
return 0;
}
This is the output of ffprobe run on the output video file
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testvideo.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.76.100
Duration: 00:00:00.07, start: 0.000000, bitrate: 115192 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 115389 kb/s, 15375.38 fps, 15360 tbr, 15360 tbn, 120 tbc (default)
Metadata:
handler_name : VideoHandler
vendor_id : [0][0][0][0]
What might be causing my frame rate to be so high? Thanks in advance for any help.
You are getting a high frame rate because you have failed to set the packet duration.
Set the time_base to a higher resolution (like 1/60000) as described here:
avcc->time_base = av_make_q(1, 60000);
Set avp->duration as described here:
AVRational avg_frame_rate = av_make_q(60, 1); //60 fps
avp->duration = avcc->time_base.den / avcc->time_base.num / avg_frame_rate.num * avg_frame_rate.den; //avp->duration = 1000 (60000/60)
And set the pts accordingly.
Complete code:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <math.h>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/timestamp.h>
}
using namespace cv;
int main(int argc, char* argv[]) {
const char* filename = "testvideo.mp4";
AVFormatContext* avfc;
avformat_alloc_output_context2(&avfc, NULL, NULL, filename);
AVStream* stream = avformat_new_stream(avfc, NULL);
AVCodec* h264 = avcodec_find_encoder(AV_CODEC_ID_H264);
AVCodecContext* avcc = avcodec_alloc_context3(h264);
av_opt_set(avcc->priv_data, "preset", "fast", 0);
av_opt_set(avcc->priv_data, "crf", "20", 0);
avcc->thread_count = 1;
avcc->width = 1920;
avcc->height = 1080;
avcc->pix_fmt = AV_PIX_FMT_YUV420P;
//Set the time_base to a higher resolution like 1/60000
avcc->time_base = av_make_q(1, 60000); //avcc->time_base = av_make_q(1, 60);
stream->time_base = avcc->time_base;
if (avfc->oformat->flags & AVFMT_GLOBALHEADER)
avcc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
avcodec_open2(avcc, h264, NULL);
avcodec_parameters_from_context(stream->codecpar, avcc);
avio_open(&avfc->pb, filename, AVIO_FLAG_WRITE);
avformat_write_header(avfc, NULL);
Mat frame, nothing = Mat::zeros(1080, 1920, CV_8UC1);
AVFrame* avf = av_frame_alloc();
AVPacket* avp = av_packet_alloc();
int ret;
avf->format = AV_PIX_FMT_YUV420P;
avf->width = 1920;
avf->height = 1080;
avf->linesize[0] = 1920;
avf->linesize[1] = 1920;
avf->linesize[2] = 1920;
for (int x = 0; x < 1000; x++) {
frame = Mat::zeros(1080, 1920, CV_8UC1);
circle(frame, Point(1920 / 2, 1080 / 2), (int)(250.0 * (sin(2 * M_PI * x / 1000 * 3) + 1.01)), Scalar(255), 10);
AVRational avg_frame_rate = av_make_q(60, 1); //60 fps
int64_t avp_duration = avcc->time_base.den / avcc->time_base.num / avg_frame_rate.num * avg_frame_rate.den;
avf->data[0] = frame.data;
avf->data[1] = nothing.data;
avf->data[2] = nothing.data;
avf->pts = (int64_t)x * avp_duration; // avp->duration = 1000
ret = 0;
do {
if (ret == AVERROR(EAGAIN)) {
av_packet_unref(avp);
ret = avcodec_receive_packet(avcc, avp);
if (ret) break; // deal with error
////////////////////////////////////////////////////////////////
//avp->duration was zero.
avp->duration = avp_duration; //avp->duration = 1000 (60000/60)
//avp->pts = (int64_t)x * avp->duration;
////////////////////////////////////////////////////////////////
av_write_frame(avfc, avp);
} //else if(ret) deal with error
ret = avcodec_send_frame(avcc, avf);
} while (ret);
}
// flush the rest of the packets
avcodec_send_frame(avcc, NULL);
do {
av_packet_unref(avp);
ret = avcodec_receive_packet(avcc, avp);
if (!ret)
av_write_frame(avfc, avp);
} while (!ret);
av_frame_free(&avf);
av_packet_free(&avp);
av_write_trailer(avfc);
avio_closep(&avfc->pb); // close the output file; avformat_close_input() is meant for input contexts
avformat_free_context(avfc);
avcodec_free_context(&avcc);
return 0;
}
Result of FFprobe:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testvideo.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.76.100
Duration: 00:00:16.65, start: 0.000000, bitrate: 456 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 450 kb/s, 60.06 fps, 60 tbr, 60k tbn, 120k tbc (default)
Metadata:
handler_name : VideoHandler
vendor_id : [0][0][0][0]
Notes:
I don't know why the fps is 60.06 and not 60.
There is a warning message MB rate (734400000) > level limit (16711680) that I didn't fix.
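One thing that might address that warning (a guess on my side, not verified against this exact code): AVCodecContext also has a framerate field, and the libx264 wrapper prefers it over 1/time_base when it is set, so telling the encoder the real frame rate should keep the level check from assuming 60000 fps:
// Sketch (untested here): keep the fine-grained 1/60000 time base for
// timestamps, but tell the encoder the actual frame rate so the
// level/MB-rate check does not assume fps = 1/time_base = 60000.
avcc->time_base = av_make_q(1, 60000);
avcc->framerate = av_make_q(60, 1);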
Though the answer I accepted fixes the problem I was having, here is some more information I've figured out that may be useful:
The time_base field has some restrictions on its value (for example 1/10000 works, but 1/9999 doesn't) based on the container format, and this seems to have been the root problem I was having. When the time base was set to 1/60, the call to avformat_write_header() changed it to 1/15360. Because I had hardcoded the PTS increment to 1, this resulted in the 15360 FPS video. The strange denominator of 15360 seems to result from the given denominator being multiplied by 2 repeatedly until it reaches some minimum value; I have no idea how this algorithm actually works. This SO question led me to it.
By setting the time base to 1/60000 and making the PTS increment by 1000 each frame, the fast video problem was fixed. Setting the packet duration doesn't seem necessary, but is probably a good idea.
The main lesson here is to use whatever time_base libav gives you instead of assuming the value you set stays unchanged. @Rotem's updated code does this, and would therefore "work" with a time base of 1/60, since the PTS and packet duration would actually be based on the 1/15360 value that time_base changes to.
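In code, that lesson boils down to reading stream->time_base back after avformat_write_header() and rescaling into it, rather than assuming the value you requested survived. A minimal sketch of a packet-writing helper along those lines (drain_and_write is my own name, not part of the code above):
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
}
// Sketch: drain the encoder and write packets using whatever time base the
// muxer actually chose. enc is the encoder context (time base 1/60 here),
// st is the stream whose time_base avformat_write_header() may have
// rewritten (e.g. to 1/15360 for mp4).
static int drain_and_write(AVCodecContext *enc, AVStream *st, AVFormatContext *oc, AVPacket *pkt)
{
    int ret;
    while ((ret = avcodec_receive_packet(enc, pkt)) == 0) {
        // rescale pts/dts/duration from the encoder's time base into st->time_base
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        if (pkt->duration == 0)
            pkt->duration = av_rescale_q(1, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(oc, pkt); // takes ownership of the packet reference
    }
    return ret; // AVERROR(EAGAIN) or AVERROR_EOF once the encoder has nothing left
}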
I have some ffmpeg code in C++ that generates an RTMP stream from H264 NALUs and audio samples encoded in AAC. I'm using NGINX to take the RTMP stream and forward it to clients, and that works fine. My issue is that when I use NGINX to convert the RTMP stream to HLS, no HLS chunks or playlist are generated. If I use ffmpeg to copy the RTMP stream and publish a new stream to NGINX, the HLS conversion works.
Here is what I get when I do the stream copy using ffmpeg:
Input #0, flv, from 'rtmp://127.0.0.1/live/beam_0'
Metadata:
Server : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
displayWidth : 1920
displayHeight : 1080
fps : 30
profile :
level :
Duration: 00:00:00.00, start: 5.019000, bitrate: N/A
Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 128 kb/s
Stream #0:1: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 1920x1080 (1920x1088), 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 60 tbc
Output #0, flv, to 'rtmp://localhost/live/copy_stream':
Metadata:
Server : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
displayWidth : 1920
displayHeight : 1080
fps : 30
profile :
level :
encoder : Lavf57.83.100
Stream #0:0: Video: h264 (High), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p(progressive, left), 1920x1080 (0x0), q=2-31, 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 1k tbc
Stream #0:1: Audio: aac ([10][0][0][0] / 0x000A), 44100 Hz, mono, fltp, 128 kb/s
There are not many differences between the two streams, so I don't really get what is going wrong or what I should change in my C++ code. One very weird issue I see is that my audio stream is 48 kHz when I publish it, but here it is recognized as 44100 Hz:
Output #0, flv, to 'rtmp://127.0.0.1/live/beam_0':
Stream #0:0, 0, 1/1000: Video: h264 (libx264), 1 reference frame, yuv420p, 1920x1080, 0/1, q=-1--1, 8000 kb/s, 30 fps, 1k tbn, 1k tbc
Stream #0:1, 0, 1/1000: Audio: aac, 48000 Hz, 1 channels, fltp, 128 kb/s
UPDATE 1:
The output context is created using the following code:
pOutputFormatContext->oformat = av_guess_format("flv", url.toStdString().c_str(), nullptr);
memcpy(pOutputFormatContext->filename, url.toStdString().c_str(), url.length());
avio_open(&pOutputFormatContext->pb, url.toStdString().c_str(), AVIO_FLAG_WRITE);
pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;
pOutputFormatContext->oformat->audio_codec = AV_CODEC_ID_AAC ;
The audio stream is created with:
AVDictionary *opts = nullptr;
//pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_VORBIS);
pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_AAC);
pAudioCodecContext = avcodec_alloc_context3(pAudioCodec);
pAudioCodecContext->thread_count = 1;
pAudioFrame = av_frame_alloc();
av_dict_set(&opts, "strict", "experimental", 0);
pAudioCodecContext->bit_rate = 128000;
pAudioCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
pAudioCodecContext->sample_rate = static_cast<int>(sample_rate) ;
pAudioCodecContext->channels = nb_channels ;
pAudioCodecContext->time_base.num = 1;
pAudioCodecContext->time_base.den = 1000 ;
//pAudioCodecContext->time_base.den = static_cast<int>(sample_rate) ;
pAudioCodecContext->codec_type = AVMEDIA_TYPE_AUDIO;
avcodec_open2(pAudioCodecContext, pAudioCodec, &opts);
pAudioFrame->nb_samples = pAudioCodecContext->frame_size;
pAudioFrame->format = pAudioCodecContext->sample_fmt;
pAudioFrame->channel_layout = pAudioCodecContext->channel_layout;
mAudioSamplesBufferSize = av_samples_get_buffer_size(nullptr, pAudioCodecContext->channels, pAudioCodecContext->frame_size, pAudioCodecContext->sample_fmt, 0);
avcodec_fill_audio_frame(pAudioFrame, pAudioCodecContext->channels, pAudioCodecContext->sample_fmt, (const uint8_t*) pAudioSamples, mAudioSamplesBufferSize, 0);
if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
pAudioCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
pAudioStream = avformat_new_stream(pOutputFormatContext, 0);
pAudioStream->codec = pAudioCodecContext ;
pAudioStream->id = pOutputFormatContext->nb_streams-1;
pAudioStream->time_base.den = pAudioStream->pts.den = pAudioCodecContext->time_base.den;
pAudioStream->time_base.num = pAudioStream->pts.num = pAudioCodecContext->time_base.num;
mAudioPacketTs = 0 ;
The video stream is created with:
pVideoCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);
pVideoCodecContext->codec_type = AVMEDIA_TYPE_VIDEO ;
pVideoCodecContext->thread_count = 1 ;
pVideoCodecContext->width = width;
pVideoCodecContext->height = height;
pVideoCodecContext->bit_rate = 8000000 ;
pVideoCodecContext->time_base.den = 1000 ;
pVideoCodecContext->time_base.num = 1 ;
pVideoCodecContext->gop_size = 10;
pVideoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
pVideoCodecContext->flags = 0x0007 ;
pVideoCodecContext->extradata_size = sizeof(extra_data_buffer);
pVideoCodecContext->extradata = (uint8_t *) av_malloc ( sizeof(extra_data_buffer) );
memcpy ( pVideoCodecContext->extradata, extra_data_buffer, sizeof(extra_data_buffer));
avcodec_open2(pVideoCodecContext,pVideoCodec,0);
pVideoFrame = av_frame_alloc();
AVDictionary *opts = nullptr;
av_dict_set(&opts, "strict", "experimental", 0);
memcpy(pOutputFormatContext->filename, this->mStreamUrl.toStdString().c_str(), this->mStreamUrl.length());
pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;
if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
pVideoStream = avformat_new_stream(pOutputFormatContext, pVideoCodec);
//This following section is because AVFormat complains about parameters being passed throught the context and not CodecPar
pVideoStream->codec = pVideoCodecContext ;
pVideoStream->id = pOutputFormatContext->nb_streams-1;
pVideoStream->time_base.den = pVideoStream->pts.den = pVideoCodecContext->time_base.den;
pVideoStream->time_base.num = pVideoStream->pts.num = pVideoCodecContext->time_base.num;
pVideoStream->avg_frame_rate.num = fps ;
pVideoStream->avg_frame_rate.den = 1 ;
pVideoStream->codec->gop_size = 10 ;
mVideoPacketTs = 0 ;
Then each video packet and audio packet is pushed with correctly scaled pts/dts. I have corrected the 48 kHz issue: it was caused by configuring the stream both through the codec context and through the codec parameters (because of a warning at runtime); a sketch of the codecpar-only approach is below.
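For reference, a minimal sketch of that codecpar-only approach (the helper name add_stream_from_context is mine; it assumes an FFmpeg version new enough to have AVStream::codecpar, i.e. 3.1+), which avoids assigning the deprecated pAudioStream->codec / pVideoStream->codec fields:
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
// Sketch: create an output stream from an already-opened encoder context,
// filling codecpar instead of the deprecated AVStream::codec field.
static AVStream* add_stream_from_context(AVFormatContext *oc, AVCodecContext *enc)
{
    AVStream *st = avformat_new_stream(oc, nullptr);
    if (!st)
        return nullptr;
    st->id = oc->nb_streams - 1;
    st->time_base = enc->time_base; // the muxer may still adjust this in avformat_write_header()
    // copies sample_rate/channels/extradata/etc. so the muxer sees the real settings
    if (avcodec_parameters_from_context(st->codecpar, enc) < 0)
        return nullptr;
    return st;
}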
This RTMP stream still does not work for HLS conversion by NGINX, but if I just use ffmpeg to take the RTMP stream from NGINX and re-publish it with the copy codec, then it works.
Good day.
I want to capture an h264 video stream and pack it into an mp4 container without changes. Since I don't control the source of the stream, I cannot make assumptions about the stream structure; this means I must probe the input stream. (For brevity, the code below omits error handling and memory management.)
AVProbeData probeData = {}; // zero-initialize so unused fields (e.g. mime_type) are not garbage
probeData.buf_size = s->BodySize();
probeData.buf = s->GetBody();
probeData.filename = "";
AVInputFormat* inFormat = av_probe_input_format(&probeData, 1);
This code correctly detects the h264 stream.
Next, I create the input format context,
unsigned char* avio_input_buffer = reinterpret_cast<unsigned char*> (av_malloc(AVIO_BUFFER_SIZE));
AVIOContext* avio_input_ctx = avio_alloc_context(avio_input_buffer, AVIO_BUFFER_SIZE,
0, this, &read_packet, NULL, NULL);
AVFormatContext* ifmt_ctx = avformat_alloc_context();
ifmt_ctx->pb = avio_input_ctx;
int ret = avformat_open_input(&ifmt_ctx, NULL, inFormat, NULL);
set the image size,
ifmt_ctx->streams[0]->codec->width = ifmt_ctx->streams[0]->codec->coded_width = width;
ifmt_ctx->streams[0]->codec->height = ifmt_ctx->streams[0]->codec->coded_height = height;
and create the output format context,
unsigned char* avio_output_buffer = reinterpret_cast<unsigned char*>(av_malloc(AVIO_BUFFER_SIZE));
AVIOContext* avio_output_ctx = avio_alloc_context(avio_output_buffer, AVIO_BUFFER_SIZE,
1, this, NULL, &write_packet, NULL);
AVFormatContext* ofmt_ctx = nullptr;
avformat_alloc_output_context2(&ofmt_ctx, NULL, "mp4", NULL);
ofmt_ctx->pb = avio_output_ctx;
AVDictionary* dict = nullptr;
av_dict_set(&dict, "movflags", "faststart", 0);
av_dict_set(&dict, "movflags", "frag_keyframe+empty_moov", 0);
AVStream* outVideoStream = avformat_new_stream(ofmt_ctx, nullptr);
avcodec_copy_context(outVideoStream->codec, ifmt_ctx->streams[0]->codec);
ret = avformat_write_header(ofmt_ctx, &dict);
Initialization is done. After that, packets are shifted from the h264 stream into the mp4 container. I don't calculate pts and dts, because the source packets have AV_NOPTS_VALUE in them.
AVPacket pkt;
while (...)
{
ret = av_read_frame(ifmt_ctx, &pkt);
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
av_free_packet(&pkt);
}
Then I write the trailer and free the allocated memory. That is all. The code works and I get a playable mp4 file.
Now the problem: the stream characteristics of the resulting file are not completely consistent with the characteristics of the source stream. In particular, the fps and bitrate are higher than they should be.
As a sample, below is the ffplay.exe output for the source stream
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/source.mp4':0/0
Metadata:
major_brand : isom
minor_version : 1
compatible_brands: isom
creation_time : 2014-04-14T13:03:54.000000Z
Duration: 00:00:58.08, start: 0.000000, bitrate: 12130 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661),yuv420p, 1920x1080, 12129 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Switch subtitle stream from #-1 to #-1 vq= 1428KB sq= 0B f=0/0
Seek to 49% ( 0:00:28) of total duration ( 0:00:58) B f=0/0
30.32 M-V: -0.030 fd= 87 aq= 0KB vq= 1360KB sq= 0B f=0/0
and for the resulting stream (which contains part of the source stream)
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/target.mp4':f=0/0
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1iso6mp41
encoder : Lavf57.56.101
Duration: 00:00:11.64, start: 0.000000, bitrate: 18686 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 18683 kb/s, 38.57 fps, 40 tbr, 90k tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Switch subtitle stream from #-1 to #-1 vq= 2309KB sq= 0B f=0/0
5.70 M-V: 0.040 fd= 127 aq= 0KB vq= 2562KB sq= 0B f=0/0
So the question is: what did I miss when copying the stream? I will be grateful for any help.
Best regards
You say "I don't calculate pts and dts", and this is your problem. Frame rate and bit rate are both ratios with time in the denominator. By not writing pts/dts you end up with a video shorter than you want. h.264 does not timestamp every frame; that is the container's job. You must make up timestamps from the known frame rate, or from another reference, as sketched below.
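A minimal sketch of what "make up timestamps from the known frame rate" could look like for the remuxing loop above (stamp_packet and frame_index are my own names; it assumes a constant, known frame rate and no B-frame reordering):
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>
}
// Sketch: synthesize pts/dts for a constant-frame-rate h264 elementary stream
// before writing it to the mp4 muxer. fps is assumed to be known (e.g. 25/1).
static void stamp_packet(AVPacket *pkt, int64_t frame_index, AVRational fps, AVStream *out_stream)
{
    // one frame lasts 1/fps seconds; express that duration in the output stream's time base
    AVRational frame_duration = av_inv_q(fps); // e.g. 1/25 of a second
    pkt->duration = av_rescale_q(1, frame_duration, out_stream->time_base);
    pkt->pts = pkt->dts = frame_index * pkt->duration; // assumes no B-frame reordering
    pkt->stream_index = out_stream->index;
}
Call it on each packet read from the h264 input, incrementing frame_index by one per packet, before av_interleaved_write_frame().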
I am trying to use OpenCV to apply processing to an mj2 file encoded as grayscale uint16 per pixel. Unfortunately, a non-disclosure agreement covers this file (which I did not generate myself), so I cannot post a sample mj2 file.
The description of my .mj2 file as provided by ffmpeg is:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'data/DEVISSAGE_181.mj2':
Metadata:
major_brand : mjp2
minor_version : 0
compatible_brands: mjp2
creation_time : 2015-10-09 08:07:43
Duration: 00:01:03.45, start: 0.000000, bitrate: 14933 kb/s
Stream #0:0: Video: jpeg2000 (mjp2 / 0x32706A6D), gray16le, 1152x288, lossless, 14933 kb/s, SAR 1:4 DAR 1:1, 5.50 fps, 5.50 tbr, 55 tbn, 55 tbc (default)
Metadata:
creation_time : 2015-10-09 08:07:43
handler_name : Video
encoder : Motion JPEG2000
I take it that gray16le confirms the uint16 encoding somehow.
Here is my C++ code:
#include <iostream>
#include "opencv2/opencv.hpp"
int main(int, char**) {
cv::VideoCapture cap("data/DEVISSAGE_181.mj2"); // open the video file
cap.set(CV_CAP_PROP_FORMAT, CV_16UC1);
cv::Mat frame;
cap.read(frame); // get a new frame from file
std::cout << "frame.rows: " << frame.rows << ", frame.cols: " << frame.cols << ", frame.channels(): " << frame.channels() << std::endl;
return 0;
}
The result of running this code is:
frame.rows: 288, frame.cols: 1152, frame.channels(): 3, frame.depth(): 0
This indicates a 3-channel, CV_8U pixel encoding. Why does the cap.set instruction appear to be ignored? What should I do to get the correct encoding?
I'm trying to play a movie using the Qt Multimedia framework (5.0.1), but I only get a black screen with a .mov encoded with H.264.
#include <QApplication>
#include <QWidget>
#include <QVideoWidget>
#include <QMediaPlayer>
#include <QUrl>
#include <QDebug>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QWidget *mainWidget = new QWidget();
mainWidget->setGeometry(0,0, 1920, 1080);
QVideoWidget *widget = new QVideoWidget(mainWidget);
widget->setGeometry(0, 0, 1920, 1080);
QMediaPlayer *player = new QMediaPlayer;
QUrl localUrl = QUrl::fromLocalFile("test_mov.mov");
player->setMedia(localUrl);
qDebug() << "Player error state -> " << player->error();
qDebug() << "Media supported state -> " << QMediaPlayer::hasSupport("video/mov");
player->setVideoOutput(widget);
mainWidget->show();
player->play();
return a.exec();
}
The code compiles correctly and gives the following output on console, while the video widget remains black:
Player error state -> QMediaPlayer::NoError
Media supported state -> 1 // means "Probably supported"
I'm using Qt 5.0.1 on Mac OS X 10.7.5. The file itself plays correctly in a standalone player, and ffmpeg -i test_mov.mov gives
Duration: 00:00:02.52, start: 0.000000, bitrate: 63708 kb/s
Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1920x1080, 63684 kb/s, SAR 1745:1920 DAR 349:216, 25 fps, 25 tbr, 25 tbn, 50 tbc
Does anyone know which formats are supported by Qt Multimedia?
Thank you
In Windows, QT video file formats usually appear with the .mov filename extension.
Other file formats that QuickTime supports natively (to varying degrees) include AIFF, WAV, DV, MP3, and MPEG-1.