FFmpeg / libavcodec: Encoding with x264 ignores bitrate setting - c++

I am trying to write a video using the FFmpeg libraries. So far I can successfully encode videos with the .avi extension, but when I use the .mp4 extension the application completely ignores the bitrate option I specify.
Here's a snippet of the code I use to specify the encoding settings:
//define video stream
AVFormatContext* m_pcOC = nullptr; // member variable in the real code; declared here before it is used
AVOutputFormat* outFmt = av_guess_format(NULL, m_pcFilename.c_str(), NULL);
avformat_alloc_output_context2(&m_pcOC, outFmt, NULL, NULL);
AVStream* m_pcVideoSt = avformat_new_stream(m_pcOC, NULL);
AVCodec* codec = avcodec_find_encoder(codecID);
avcodec_get_context_defaults3(m_pcVideoSt->codec, codec);
//set some parameters
double dBitrate = std::stod(bitrate);
m_pcVideoSt->codec->codec_id = codecID;
m_pcVideoSt->codec->codec_type = AVMEDIA_TYPE_VIDEO;
m_pcVideoSt->codec->bit_rate = dBitrate;
m_pcVideoSt->codec->bit_rate_tolerance = 20000;
m_pcVideoSt->codec->width = m_iOutCols;
m_pcVideoSt->codec->height = m_iOutRows;
m_pcVideoSt->codec->time_base.den = static_cast<int>(dFps);
m_pcVideoSt->codec->time_base.num = 1;
if (m_pcOC->oformat->flags & AVFMT_GLOBALHEADER)
{
m_pcVideoSt->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
/* open the codec */
AVDictionary* pcOpts = nullptr;
int res1 = av_dict_set(&pcOpts, "b", bitrate.c_str(), 0);
int res = avcodec_open2(m_pcVideoSt->codec, codec, &pcOpts);
This is the output I get when creating a .avi file
ffprobe version 2.8.6 Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 5.3.0 (GCC)
configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, avi, from 'testGrey2.avi':
Metadata:
encoder : Lavf56.40.101
Duration: 00:01:20.00, start: 0.000000, **bitrate: 433 kb/s**
Stream #0:0: Video: **mpeg4 (Advanced Simple Profile)** (FMP4 / 0x34504D46), yuv420p, 720x576 [SAR 1:1 DAR 5:4], 428 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
And this is what I get when creating an .mp4 file
ffprobe version 2.8.6 Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 5.3.0 (GCC)
configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testGrey2.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Duration: 00:01:20.00, start: 0.000000, **bitrate: 1542 kb/s**
Stream #0:0(und): **Video: h264 (High)** (avc1 / 0x31637661), yuv420p, 720x576, 1540 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Any ideas on why this is happening? What is the preferred way to specify the bitrate of the output video?
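For reference, here is a minimal sketch of the rate-control fields I would expect libx264 to pick up, assuming the same FFmpeg 2.8-era API as in the snippet above. The rc_max_rate/rc_buffer_size values and the "medium" preset are illustrative choices, not something taken from the code in the question:
AVCodecContext* cc = m_pcVideoSt->codec;
cc->bit_rate       = static_cast<int>(dBitrate); // target average bitrate, in bit/s
cc->rc_max_rate    = cc->bit_rate;               // optional cap on peaks
cc->rc_buffer_size = cc->bit_rate;               // VBV buffer of roughly one second
AVDictionary* opts = nullptr;
av_dict_set(&opts, "preset", "medium", 0);       // libx264 private option
int err = avcodec_open2(cc, codec, &opts);       // consumes the options it recognises
av_dict_free(&opts);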

Related

RTMP streaming using FFMPEG and HLS conversion in NGINX

I have some FFmpeg code in C++ that generates an RTMP stream from H264 NALUs and audio samples encoded in AAC. I am using NGINX to take the RTMP stream and forward it to clients, and that works fine. My issue is that when I use NGINX to convert the RTMP stream to HLS, no HLS chunks or playlist are generated. However, if I use ffmpeg to copy the RTMP stream and publish a new stream to NGINX, the HLS conversion works.
Here is what I get when I do the stream copy using FFmpeg:
Input #0, flv, from 'rtmp://127.0.0.1/live/beam_0'
Metadata:
Server : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
displayWidth : 1920
displayHeight : 1080
fps : 30
profile :
level :
Duration: 00:00:00.00, start: 5.019000, bitrate: N/A
Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 128 kb/s
Stream #0:1: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 1920x1080 (1920x1088), 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 60 tbc
Output #0, flv, to 'rtmp://localhost/live/copy_stream':
Metadata:
Server : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
displayWidth : 1920
displayHeight : 1080
fps : 30
profile :
level :
encoder : Lavf57.83.100
Stream #0:0: Video: h264 (High), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p(progressive, left), 1920x1080 (0x0), q=2-31, 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 1k tbc
Stream #0:1: Audio: aac ([10][0][0][0] / 0x000A), 44100 Hz, mono, fltp, 128 kb/s
There are not many differences between the two streams, so I don't really see what is going wrong or what I should change in my C++ code. One very odd thing I notice is that my audio stream is 48 kHz when I publish it, but here it is recognized as 44100 Hz:
Output #0, flv, to 'rtmp://127.0.0.1/live/beam_0':
Stream #0:0, 0, 1/1000: Video: h264 (libx264), 1 reference frame, yuv420p, 1920x1080, 0/1, q=-1--1, 8000 kb/s, 30 fps, 1k tbn, 1k tbc
Stream #0:1, 0, 1/1000: Audio: aac, 48000 Hz, 1 channels, fltp, 128 kb/s
UPDATE 1:
The output context is created using the following code:
pOutputFormatContext->oformat = av_guess_format("flv", url.toStdString().c_str(), nullptr);
memcpy(pOutputFormatContext->filename, url.toStdString().c_str(), url.length());
avio_open(&pOutputFormatContext->pb, url.toStdString().c_str(), AVIO_FLAG_WRITE);
pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;
pOutputFormatContext->oformat->audio_codec = AV_CODEC_ID_AAC ;
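As an aside, and only as a sketch that may be unrelated to the HLS problem: the more conventional way to create the same flv output context, assuming the same url variable, would be something like this:
AVFormatContext* pOutputFormatContext = nullptr;
// let libavformat pick the flv muxer and fill in its defaults, instead of
// patching the global AVOutputFormat returned by av_guess_format()
avformat_alloc_output_context2(&pOutputFormatContext, nullptr, "flv", url.toStdString().c_str());
if (!(pOutputFormatContext->oformat->flags & AVFMT_NOFILE))
    avio_open(&pOutputFormatContext->pb, url.toStdString().c_str(), AVIO_FLAG_WRITE);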
The audio stream is created with:
AVDictionary *opts = nullptr;
//pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_VORBIS);
pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_AAC);
pAudioCodecContext = avcodec_alloc_context3(pAudioCodec);
pAudioCodecContext->thread_count = 1;
pAudioFrame = av_frame_alloc();
av_dict_set(&opts, "strict", "experimental", 0);
pAudioCodecContext->bit_rate = 128000;
pAudioCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
pAudioCodecContext->sample_rate = static_cast<int>(sample_rate) ;
pAudioCodecContext->channels = nb_channels ;
pAudioCodecContext->time_base.num = 1;
pAudioCodecContext->time_base.den = 1000 ;
//pAudioCodecContext->time_base.den = static_cast<int>(sample_rate) ;
pAudioCodecContext->codec_type = AVMEDIA_TYPE_AUDIO;
avcodec_open2(pAudioCodecContext, pAudioCodec, &opts);
pAudioFrame->nb_samples = pAudioCodecContext->frame_size;
pAudioFrame->format = pAudioCodecContext->sample_fmt;
pAudioFrame->channel_layout = pAudioCodecContext->channel_layout;
mAudioSamplesBufferSize = av_samples_get_buffer_size(nullptr, pAudioCodecContext->channels, pAudioCodecContext->frame_size, pAudioCodecContext->sample_fmt, 0);
avcodec_fill_audio_frame(pAudioFrame, pAudioCodecContext->channels, pAudioCodecContext->sample_fmt, (const uint8_t*) pAudioSamples, mAudioSamplesBufferSize, 0);
if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
pAudioCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
pAudioStream = avformat_new_stream(pOutputFormatContext, 0);
pAudioStream->codec = pAudioCodecContext ;
pAudioStream->id = pOutputFormatContext->nb_streams-1;
pAudioStream->time_base.den = pAudioStream->pts.den = pAudioCodecContext->time_base.den;
pAudioStream->time_base.num = pAudioStream->pts.num = pAudioCodecContext->time_base.num;
mAudioPacketTs = 0 ;
The video stream is created with:
pVideoCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);
pVideoCodecContext->codec_type = AVMEDIA_TYPE_VIDEO ;
pVideoCodecContext->thread_count = 1 ;
pVideoCodecContext->width = width;
pVideoCodecContext->height = height;
pVideoCodecContext->bit_rate = 8000000 ;
pVideoCodecContext->time_base.den = 1000 ;
pVideoCodecContext->time_base.num = 1 ;
pVideoCodecContext->gop_size = 10;
pVideoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
pVideoCodecContext->flags = 0x0007 ;
pVideoCodecContext->extradata_size = sizeof(extra_data_buffer);
pVideoCodecContext->extradata = (uint8_t *) av_malloc ( sizeof(extra_data_buffer) );
memcpy ( pVideoCodecContext->extradata, extra_data_buffer, sizeof(extra_data_buffer));
avcodec_open2(pVideoCodecContext,pVideoCodec,0);
pVideoFrame = av_frame_alloc();
AVDictionary *opts = nullptr;
av_dict_set(&opts, "strict", "experimental", 0);
memcpy(pOutputFormatContext->filename, this->mStreamUrl.toStdString().c_str(), this->mStreamUrl.length());
pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;
if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
pVideoStream = avformat_new_stream(pOutputFormatContext, pVideoCodec);
//The following section is here because avformat complains about parameters being passed through the codec context and not through the codec parameters
pVideoStream->codec = pVideoCodecContext ;
pVideoStream->id = pOutputFormatContext->nb_streams-1;
pVideoStream->time_base.den = pVideoStream->pts.den = pVideoCodecContext->time_base.den;
pVideoStream->time_base.num = pVideoStream->pts.num = pVideoCodecContext->time_base.num;
pVideoStream->avg_frame_rate.num = fps ;
pVideoStream->avg_frame_rate.den = 1 ;
pVideoStream->codec->gop_size = 10 ;
mVideoPacketTs = 0 ;
Then each video packet and audio packet is pushed with correctly scaled pts/dts. I have corrected the 48 kHz issue: it happened because I was configuring the stream through the codec context and then through the codec parameters (because of a warning at runtime).
This RTMP stream still does not work for HLS conversion by NGINX, but if I just use ffmpeg to take the RTMP stream from NGINX and re-publish it with the copy codec, then it works.
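For what it's worth, a minimal sketch of what "configuring the stream through the codec parameters" typically looks like with the Lavf 57 API shown in the log, assuming the streams and codec contexts from the snippets above and that avcodec_open2() has already succeeded:
// copy the opened encoder settings into the per-stream codec parameters,
// which is what the muxer reads in the codecpar-based API
avcodec_parameters_from_context(pVideoStream->codecpar, pVideoCodecContext);
avcodec_parameters_from_context(pAudioStream->codecpar, pAudioCodecContext);
// keep the stream time bases in sync with the encoder time bases
// (the muxer may still rewrite them when the header is written)
pVideoStream->time_base = pVideoCodecContext->time_base;
pAudioStream->time_base = pAudioCodecContext->time_base;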

Why does FFMPEG work with 1080p but doesn't work with 720p size… (code included)

I've uploaded my fully compiled code with its Makefile here:
https://pastebin.com/xtCTj06F
If I set 1280x720 I get the segmentation fault:
[libx264 @ 0x7fdf4d25a600] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
[libx264 @ 0x7fdf4d25a600] profile High, level 3.1
[libx264 @ 0x7fdf4d25a600] 264 - core 148 r2748 97eaef2 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=12 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=12 rc=abr mbtree=1 bitrate=3400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '1.mp4':
Stream #0:0: Video: h264, yuv420p, 1280x720, q=2-31, 3400 kb/s, 20 tbn
Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 64 kb/s
sws_scale BEGIN
./MakeVideo.sh: line 2: 26422 Segmentation fault: 11 ./MakeVideo 1.mp4
There's some problem with this line:
std::cout << "sws_scale BEGIN\n";
sws_scale( sws_context, ( const uint8_t * const * ) &rgb, inLinesize, 0, frame->height, frame->data, frame->linesize );
std::cout << "sws_scale END\n";
but if I set the video size to 1920x1080 or to 320x240, everything works fine.
Is this some sort of magic? Or a bug?
OS X 10.12.3
ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
Your code has numerous issues. For instance:
void ffmpeg_encoder_set_frame_yuv_from_rgb( AVFrame *frame ) {
// frame pointer may be NULL, but is later used without check
uint8_t *rgb = (uint8_t *) malloc( 3 * sizeof( uint8_t ) * frame->width * frame->height );
// malloc may return NULL, but rgb is later used without check
// also rgb is never freed so memory leaks
sws_context = sws_getCachedContext(...
// sws_getCachedContext may return NULL, but sws_context is later used without check
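A hedged sketch of the same helper with those checks added and the leak removed, keeping the structure of the quoted fragment. The pixel formats, the SWS_BICUBIC flag and the global sws_context are assumptions, since only part of the function is quoted here, and the step that fills the rgb buffer is elided as in the original:
void ffmpeg_encoder_set_frame_yuv_from_rgb(AVFrame *frame) {
    if (!frame)
        return;                                  // frame may be NULL
    uint8_t *rgb = (uint8_t *) malloc(3 * sizeof(uint8_t) * frame->width * frame->height);
    if (!rgb)
        return;                                  // malloc may fail
    // ... fill rgb from the source image here ...
    sws_context = sws_getCachedContext(sws_context,
            frame->width, frame->height, AV_PIX_FMT_RGB24,
            frame->width, frame->height, AV_PIX_FMT_YUV420P,
            SWS_BICUBIC, NULL, NULL, NULL);
    if (sws_context) {                           // sws_getCachedContext may fail
        const int inLinesize[1] = { 3 * frame->width };
        const uint8_t *const src[1] = { rgb };
        sws_scale(sws_context, src, inLinesize, 0, frame->height,
                  frame->data, frame->linesize);
    }
    free(rgb);                                   // the original leaked this buffer
}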

FFMpeg: write h264 stream to mp4 container without changes

Good day.
For brevity, the code omits error handling and memory management.
I want to capture an h264 video stream and pack it into an mp4 container without changes. Since I don't control the source of the stream, I cannot make assumptions about its structure, so I must probe the input stream.
AVProbeData probeData;
probeData.buf_size = s->BodySize();
probeData.buf = s->GetBody();
probeData.filename = "";
AVInputFormat* inFormat = av_probe_input_format(&probeData, 1);
This code correctly detects the h264 stream.
Next, I create the input format context,
unsigned char* avio_input_buffer = reinterpret_cast<unsigned char*> (av_malloc(AVIO_BUFFER_SIZE));
AVIOContext* avio_input_ctx = avio_alloc_context(avio_input_buffer, AVIO_BUFFER_SIZE,
0, this, &read_packet, NULL, NULL);
AVFormatContext* ifmt_ctx = avformat_alloc_context();
ifmt_ctx->pb = avio_input_ctx;
int ret = avformat_open_input(&ifmt_ctx, NULL, inFormat, NULL);
set image size,
ifmt_ctx->streams[0]->codec->width = ifmt_ctx->streams[0]->codec->coded_width = width;
ifmt_ctx->streams[0]->codec->height = ifmt_ctx->streams[0]->codec->coded_height = height;
create output format context,
unsigned char* avio_output_buffer = reinterpret_cast<unsigned char*>(av_malloc(AVIO_BUFFER_SIZE));
AVIOContext* avio_output_ctx = avio_alloc_context(avio_output_buffer, AVIO_BUFFER_SIZE,
1, this, NULL, &write_packet, NULL);
AVFormatContext* ofmt_ctx = nullptr;
avformat_alloc_output_context2(&ofmt_ctx, NULL, "mp4", NULL);
ofmt_ctx->pb = avio_output_ctx;
AVDictionary* dict = nullptr;
av_dict_set(&dict, "movflags", "faststart", 0);
av_dict_set(&dict, "movflags", "frag_keyframe+empty_moov", 0);
AVStream* outVideoStream = avformat_new_stream(ofmt_ctx, nullptr);
avcodec_copy_context(outVideoStream->codec, ifmt_ctx->streams[0]->codec);
ret = avformat_write_header(ofmt_ctx, &dict);
Initialization is done. Next, packets are shifted from the h264 stream into the mp4 container. I don't calculate pts and dts, because the source packets have AV_NOPTS_VALUE in them.
AVPacket pkt;
while (...)
{
ret = av_read_frame(ifmt_ctx, &pkt);
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
av_free_packet(&pkt);
}
After that I write the trailer and free the allocated memory. That is all. The code works and I get a playable mp4 file.
Now the problem: the stream characteristics of the resulting file are not completely consistent with those of the source stream. In particular, the fps and bitrate are higher than they should be.
As a sample, below is the ffplay.exe output for the source stream
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/source.mp4':0/0
Metadata:
major_brand : isom
minor_version : 1
compatible_brands: isom
creation_time : 2014-04-14T13:03:54.000000Z
Duration: 00:00:58.08, start: 0.000000, bitrate: 12130 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661),yuv420p, 1920x1080, 12129 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Switch subtitle stream from #-1 to #-1 vq= 1428KB sq= 0B f=0/0
Seek to 49% ( 0:00:28) of total duration ( 0:00:58) B f=0/0
30.32 M-V: -0.030 fd= 87 aq= 0KB vq= 1360KB sq= 0B f=0/0
and for the resulting stream (which contains part of the source stream)
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/target.mp4':f=0/0
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1iso6mp41
encoder : Lavf57.56.101
Duration: 00:00:11.64, start: 0.000000, bitrate: 18686 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 18683 kb/s, 38.57 fps, 40 tbr, 90k tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Switch subtitle stream from #-1 to #-1 vq= 2309KB sq= 0B f=0/0
5.70 M-V: 0.040 fd= 127 aq= 0KB vq= 2562KB sq= 0B f=0/0
So the question is: what did I miss when copying the stream? I will be grateful for any help.
Best regards
"I dont calculate pts and dts": this is your problem. Frame rate and bit rate are both ratios where time is the denominator. By not writing pts/dts you end up with a video shorter than you want. H.264 does not timestamp every frame; that is the container's job. You must make up timestamps from the known frame rate, or from another source.
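For illustration, a minimal sketch of what making up the timestamps could look like in the copy loop from the question, assuming a known constant frame rate fps (a hypothetical variable, not present in the original code):
AVRational frame_rate = { fps, 1 };
AVRational stream_tb = ofmt_ctx->streams[0]->time_base;
// duration of one frame (1/fps seconds) expressed in the output stream time base
int64_t duration = av_rescale_q(1, av_inv_q(frame_rate), stream_tb);
int64_t frame_index = 0;
AVPacket pkt;
while (av_read_frame(ifmt_ctx, &pkt) >= 0)
{
    if (pkt.pts == AV_NOPTS_VALUE)
    {
        pkt.pts = pkt.dts = frame_index * duration;
        pkt.duration = duration;
    }
    pkt.stream_index = 0;            // single video stream in this remux
    av_interleaved_write_frame(ofmt_ctx, &pkt);
    av_free_packet(&pkt);
    ++frame_index;
}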

Can not set the number of channels in VideoCapture in OpenCV

I am trying to use OpenCV to apply processing to an mj2 file encoded as grayscale uint16 per pixel. Unfortunately, a non-disclosure agreement covers this file (which I did not generate myself), so I cannot post a sample mj2 file.
The description of my .mj2 file as provided by ffmpeg is:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'data/DEVISSAGE_181.mj2':
Metadata:
major_brand : mjp2
minor_version : 0
compatible_brands: mjp2
creation_time : 2015-10-09 08:07:43
Duration: 00:01:03.45, start: 0.000000, bitrate: 14933 kb/s
Stream #0:0: Video: jpeg2000 (mjp2 / 0x32706A6D), gray16le, 1152x288, lossless, 14933 kb/s, SAR 1:4 DAR 1:1, 5.50 fps, 5.50 tbr, 55 tbn, 55 tbc (default)
Metadata:
creation_time : 2015-10-09 08:07:43
handler_name : Video
encoder : Motion JPEG2000
I take it that gray16le confirms the uint16 encoding somehow.
Here is my C++ code:
#include<iostream>
#include "opencv2/opencv.hpp"
int main(int, char**) {
cv::VideoCapture cap("data/DEVISSAGE_181.mj2"); // open the video file
cap.set(CV_CAP_PROP_FORMAT, CV_16UC1);
cv::Mat frame;
cap.read(frame); // get a new frame from file
std::cout << "frame.rows: " << frame.rows << ", frame.cols: " << frame.cols << ", frame.channels(): " << frame.channels() << std::endl;
return 0;
}
The result of running this code is:
frame.rows: 288, frame.cols: 1152, frame.channels(): 3, frame.depth(): 0
This indicates a 3-channel, CV_8U pixel encoding. Why does the cap.set instruction appear to be ignored? What should I do to get the correct encoding?
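One thing that might be worth trying (only a sketch, not a verified fix; whether the FFmpeg backend honors this property varies between OpenCV versions) is disabling the automatic BGR conversion, so that VideoCapture is at least allowed to hand back something other than 8-bit, 3-channel frames:
#include <iostream>
#include "opencv2/opencv.hpp"
int main(int, char**) {
    cv::VideoCapture cap("data/DEVISSAGE_181.mj2"); // open the video file
    cap.set(CV_CAP_PROP_CONVERT_RGB, false);        // ask for the decoder's native format
    cv::Mat frame;
    cap.read(frame);
    std::cout << "type: " << frame.type()
              << ", depth: " << frame.depth()
              << ", channels: " << frame.channels() << std::endl;
    return 0;
}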

How to record output frames to a video file in OpenFrameworks OF 0.8.4 OSX

Hi, I'm attempting to record the output of my openFrameworks app.
Since there seems to be no built-in method for this, I was trying to use this extension:
https://github.com/timscaffidi/ofxVideoRecorder
I have it producing a video file, but it looks like one of those scrolling sidewalk adverts: the frames scroll upwards instead of playing like a normal video. Any idea why?
I had this at the end of my setup method.
if(!vidRecorder.isInitialized())
{
vidRecorder.setup(fileName+ofGetTimestampString()+fileExt, ofGetWidth(), ofGetHeight(), 30); // no audio
}
Initially I had the window mode set to fullscreen, which gave me scanlines in the output; then I made it non-fullscreen and got the scrolling frames I already mentioned; then I set it to 640x480 and got more scanlines.
I was using this code in my update loop
ofImage img;
img.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
vidRecorder.addFrame(img.getPixelsRef());
Here is the ffmpeg output.
[warning] ofThread: - name: Thread 4 - Calling startThread with verbose is deprecated.
[warning] ofThread: - name: Thread 2 - Calling startThread with verbose is deprecated.
ffmpeg version 2.6.1 Copyright (c) 2000-2015 the FFmpeg developers
built with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --enable-libvidstab --enable-libx265 --disable-doc --arch=x86_64 --enable-runtime-cpudetect
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, rawvideo, from '/Users/benjgorman/Code/OF_ROOT/addons/ofxTimeline/duplicate_lips/bin/data/ofxvrpipe0':
Duration: N/A, start: 0.000000, bitrate: 566231 kb/s
Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 1024x768, 566231 kb/s, 30 tbr, 30 tbn, 30 tbc
Please use -b:a or -b:v, -b is ambiguous
Output #0, mov, to '/Users/benjgorman/Code/OF_ROOT/addons/ofxTimeline/duplicate_lips/bin/data/testMovie2015-04-16-11-21-31-504.mov':
Metadata:
encoder : Lavf56.25.101
Stream #0:0: Video: mpeg4 (mp4v / 0x7634706D), yuv420p, 1024x768, q=2-31, 800 kb/s, 30 fps, 15360 tbn, 30 tbc
Metadata:
encoder : Lavc56.26.100 mpeg4
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> mpeg4 (native))
Press [q] to stop, [?] for help
frame= 9 fps=0.0 q=14.2 size= 861kB time=00:00:00.30 bitrate=23510.3kbits/s
frame= 19 fps= 18 q=24.8 size= 1330kB time=00:00:00.63 bitrate=17202.4kbits/s
frame= 31 fps= 20 q=24.8 size= 1864kB time=00:00:01.03
//
more frames here omitted for clarity
//
frame= 87 fps= 16 q=24.8 size= 4349kB time=00:00:02.90 bitrate=12286.0kbits/s
frame= 95 fps= 16 q=24.8 size= 4704kB time=00:00:03.16 bitrate=12169.1kbits/s
frame= 103 fps= 16 q=24.8 size= 5061kB time=00:00:03.43 bitrate=12075.1kbits/s
frame= 111 fps= 16 q=24.8 size= 5415kB time=00:00:03.70 bitrate=11990.0kbits/s
frame= 118 fps= 15 q=24.8 size= 5726kB time=00:00:03.93 bitrate=11925.3kbits/s
frame= 130 fps= 16 q=24.8 size= 6261kB time=00:00:04.33 bitrate=11835.3kbits/s
[rawvideo @ 0x10201e000] Invalid buffer size, packet size 1179648 < expected frame_size 2359296
Error while decoding stream #0:0: Invalid argument
frame= 138 fps= 14 q=24.8 size= 6618kB time=00:00:04.60 bitrate=11785.4kbits/s
frame= 138 fps= 14 q=24.8 Lsize= 6620kB time=00:00:04.60 bitrate=11788.8kbits/s
video:6618kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.028923%
Is this an ffmpeg error? Or is there a better solution?
Could you supply the complete code? Are you using vidGrabber.update(); in your update()?
Why don't you use:
ofApp.h
ofVideoGrabber vidGrabber;
ofPtr<ofQTKitGrabber> vidInput;
ofxVideoRecorder vidRecorder;
bool bRecording;
void startRecord();
void stopRecord();
ofApp.cpp
void ofApp::setup(){
vidInput = ofPtr<ofQTKitGrabber>( new ofQTKitGrabber() );
vidGrabber.setGrabber(vidInput);
vidGrabber.initGrabber(1280, 720);
vidRecorder.setFfmpegLocation(ofFilePath::getAbsolutePath("ASSETS/ffmpeg"));
}
void ofApp::update(){
vidGrabber.update();
if(vidGrabber.isFrameNew()){
vidRecorder.addFrame(vidGrabber.getPixelsRef());
}
}
void ofApp::startRecord() {
bRecording = true;
if(bRecording && !vidRecorder.isInitialized()) {
vidRecorder.setup("your-file-name.mp4", vidGrabber.getWidth(), vidGrabber.getHeight(), 30, 44100, 2);
}
}
//--------------------------------------------------------------
void ofApp::stopRecord() {
bRecording = false;
vidRecorder.close();
}
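And then, for example, toggle recording from a key press (just a sketch; the key choice is arbitrary):
void ofApp::keyPressed(int key){
    if(key == 'r'){
        if(!bRecording) startRecord();
        else stopRecord();
    }
}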