Problems in my sample C code using the FFmpeg API

I've been trying to modify FFmpeg's example code HERE to call other filters via its C API. Say the filter is freezedetect=n=-60dB:d=8, which normally runs like this:
ffmpeg -i small.mp4 -vf "freezedetect=n=-60dB:d=8" -map 0:v:0 -f null -
And prints outputs like this:
[freezedetect @ 0x25b91c0] lavfi.freezedetect.freeze_start: 5.005
[freezedetect @ 0x25b91c0] lavfi.freezedetect.freeze_duration: 2.03537
[freezedetect @ 0x25b91c0] lavfi.freezedetect.freeze_end: 7.04037
However, the original example displays the frames rather than this metadata. How can I change the code to print the metadata instead of the frames?
I've been trying to change the display_frame function below into a display_metadata function. The frame variable has a metadata dictionary that looks promising, but my attempts to use it failed. I'm also new to the C language.
Original display_frame function:
static void display_frame(const AVFrame *frame, AVRational time_base)
{
    int x, y;
    uint8_t *p0, *p;
    int64_t delay;

    if (frame->pts != AV_NOPTS_VALUE) {
        if (last_pts != AV_NOPTS_VALUE) {
            /* sleep roughly the right amount of time;
             * usleep is in microseconds, just like AV_TIME_BASE. */
            delay = av_rescale_q(frame->pts - last_pts,
                                 time_base, AV_TIME_BASE_Q);
            if (delay > 0 && delay < 1000000)
                usleep(delay);
        }
        last_pts = frame->pts;
    }

    /* Trivial ASCII grayscale display. */
    p0 = frame->data[0];
    puts("\033c");
    for (y = 0; y < frame->height; y++) {
        p = p0;
        for (x = 0; x < frame->width; x++)
            putchar(" .-+#"[*(p++) / 52]);
        putchar('\n');
        p0 += frame->linesize[0];
    }
    fflush(stdout);
}
My new display_metadata function that needs to be completed:
static void display_metadata(const AVFrame *frame)
{
    // printf("%d\n", frame->height);
    AVDictionary* dic = frame->metadata;
    printf("%d\n", *(dic->count));
    // fflush(stdout);
}
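For what it's worth, AVDictionary is opaque in the public API, so dic->count cannot be dereferenced; entries are reached through av_dict_get (and av_dict_count returns the entry count). A minimal sketch of the completed function, assuming it slots into the same example file (the needed headers are already included there):

static void display_metadata(const AVFrame *frame)
{
    const AVDictionaryEntry *entry = NULL;

    /* An empty key plus AV_DICT_IGNORE_SUFFIX matches every entry, so this
     * walks the whole per-frame dictionary; freezedetect stores its results
     * under keys like "lavfi.freezedetect.freeze_start". */
    while ((entry = av_dict_get(frame->metadata, "", entry, AV_DICT_IGNORE_SUFFIX)))
        printf("%s: %s\n", entry->key, entry->value);
    fflush(stdout);
}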

Related

What's the difference between initializing a vector in Class Header or Class constructor body?

I encountered a strange behavior in my C++ program that I don't understand, and I don't know how to search for more information about it, so I'm asking for advice here in the hope that someone might know.
I have a class Interface with a two-dimensional vector that I initialize in the header:
class Interface {
public:
    // code...
    const unsigned short int SIZE_X_ = 64;
    const unsigned short int SIZE_Y_ = 32;

    std::vector<std::vector<bool>> screen_memory_ =
        std::vector<std::vector<bool>>(SIZE_X_, std::vector<bool>(SIZE_Y_, false));
    // code...
};
Here I expect to have a SIZE_X_ x SIZE_Y_ vector filled with false booleans.
Later in my program I loop at a fixed rate, like so:
void Emulator::loop() {
    const milliseconds intervalPeriodMillis{static_cast<int>((1. / FREQ) * 1000)};

    // Initialize the chrono timepoint & duration objects we'll be
    // using over & over inside our sleep loop
    system_clock::time_point currentStartTime{system_clock::now()};
    system_clock::time_point nextStartTime{currentStartTime};

    while (!stop) {
        currentStartTime = system_clock::now();
        nextStartTime = currentStartTime + intervalPeriodMillis;

        // ---- Stuff happens here ----
        registers_->trigger_timers();
        interface_->toogle_buzzer();
        interface_->poll_events();
        interface_->get_keys();
        romParser_->step();
        romParser_->decode();
        // ---- ------------------ ----

        stop = stop || interface_->requests_close();
        std::this_thread::sleep_until(nextStartTime);
    }
}
But then during the execution I get a segmentation fault
[1] 7585 segmentation fault (core dumped) ./CHIP8 coin.ch8
I checked with the debugger, and some part of the screen_memory_ cannot be accessed anymore. It seems to happen at a random time.
But when I put the initialization of the vector in the constructor body like so:
Interface::Interface(const std::shared_ptr<reg::RegisterManager> & registers, bool hidden)
    : registers_(registers) {
    // code ...
    screen_memory_ =
        std::vector<std::vector<bool>>(SIZE_X_, std::vector<bool>(SIZE_Y_, false));
    // code ...
}
The segmentation fault doesn't happen anymore, so the solution is simply to initialize the vector in the constructor body.
But why? What is happening there?
I don't understand what I did wrong; I'm sure someone knows.
Thanks for your help!
[Edit] I found the source of the bug (or at least what to change so it doesn't give me a segfault anymore).
In my class Interface I use the SDL and SDL_audio libraries to create the display and the buzzer sound. Pay special attention to where I set the callback want_.callback, and to Interface::forward_audio_callback and Interface::audio_callback. Here's the code:
// (c) 2021 Maxandre Ogeret
// Licensed under MIT License
#include "Interface.h"
Interface::Interface(const std::shared_ptr<reg::RegisterManager> & registers, bool hidden)
    : registers_(registers) {
    if (SDL_Init(SDL_INIT_AUDIO != 0) || SDL_Init(SDL_INIT_VIDEO) != 0) {
        throw std::runtime_error("Unable to initialize rendering engine.");
    }

    want_.freq = SAMPLE_RATE;
    want_.format = AUDIO_S16SYS;
    want_.channels = 1;
    want_.samples = 2048;
    want_.callback = Interface::forward_audio_callback;
    want_.userdata = &sound_userdata_;

    if (SDL_OpenAudio(&want_, &have_) != 0) {
        SDL_LogError(SDL_LOG_CATEGORY_AUDIO, "Failed to open audio: %s", SDL_GetError());
    }
    if (want_.format != have_.format) {
        SDL_LogError(SDL_LOG_CATEGORY_AUDIO, "Failed to get the desired AudioSpec");
    }

    window = SDL_CreateWindow("CHIP8", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                              SIZE_X_ * SIZE_MULTIPLIER_, SIZE_Y_ * SIZE_MULTIPLIER_,
                              hidden ? SDL_WINDOW_HIDDEN : 0);
    renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);
    bpp_ = SDL_GetWindowSurface(window)->format->BytesPerPixel;
    SDL_Delay(1000);

    // screen_memory_ = std::vector<std::vector<bool>>(SIZE_X_, std::vector<bool>(SIZE_Y_, false));
}

Interface::~Interface() {
    SDL_CloseAudio();
    SDL_DestroyWindow(window);
    SDL_Quit();
}

// code ...

void Interface::audio_callback(void * user_data, Uint8 * raw_buffer, int bytes) {
    audio_buffer_ = reinterpret_cast<Sint16 *>(raw_buffer);
    sample_length_ = bytes / 2;
    int & sample_nr(*(int *) user_data);

    for (int i = 0; i < sample_length_; i++, sample_nr++) {
        double time = (double) sample_nr / (double) SAMPLE_RATE;
        audio_buffer_[i] = static_cast<Sint16>(
            AMPLITUDE * (2 * (2 * floor(220.0f * time) - floor(2 * 220.0f * time)) + 1));
    }
}

void Interface::forward_audio_callback(void * user_data, Uint8 * raw_buffer, int bytes) {
    static_cast<Interface *>(user_data)->audio_callback(user_data, raw_buffer, bytes);
}
In the function Interface::audio_callback, replacing the assignment to the class variable:
sample_length_ = bytes / 2;
with the creation and assignment of a local int:
int sample_length = bytes / 2;
gives:
void Interface::audio_callback(void * user_data, Uint8 * raw_buffer, int bytes) {
    audio_buffer_ = reinterpret_cast<Sint16 *>(raw_buffer);
    int sample_length = bytes / 2;
    int & sample_nr(*(int *) user_data);

    for (int i = 0; i < sample_length; i++, sample_nr++) {
        double time = (double) sample_nr / (double) SAMPLE_RATE;
        audio_buffer_[i] = (Sint16)(AMPLITUDE * sin(2.0f * M_PI * 441.0f * time)); // render 441 Hz sine wave
    }
}
The class variable sample_length_ is defined and initialized as private in the header, like so:
int sample_length_ = 0;
So I had an idea: I made the variable sample_length_ public, and it works! So the problem was definitely tied to the class variable sample_length_. But that doesn't explain why the segfault disappeared when I moved the initialization of some other variable into the class constructor... Did I hit some undefined behavior with my callback?
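Reading the constructor above, one plausible source of undefined behavior: want_.userdata is set to &sound_userdata_, yet forward_audio_callback casts that same pointer to Interface*. Every member write inside audio_callback (audio_buffer_, sample_length_) then goes through a bogus this pointer and stomps whatever memory happens to sit near sound_userdata_, which would also explain why moving initializations around merely moves the victim. A hypothetical reworking sketch (sample_nr_ is an assumed int member of Interface, replacing the user_data trick; not the code from the post):

// In the constructor: hand SDL the Interface itself, since the forwarder
// casts userdata to Interface*.
want_.userdata = this;

void Interface::forward_audio_callback(void * user_data, Uint8 * raw_buffer, int bytes) {
    static_cast<Interface *>(user_data)->audio_callback(raw_buffer, bytes);
}

void Interface::audio_callback(Uint8 * raw_buffer, int bytes) {
    Sint16 * buffer = reinterpret_cast<Sint16 *>(raw_buffer);  // local, not audio_buffer_
    const int sample_count = bytes / 2;                        // local, not sample_length_
    for (int i = 0; i < sample_count; i++, sample_nr_++) {
        double time = static_cast<double>(sample_nr_) / SAMPLE_RATE;
        buffer[i] = static_cast<Sint16>(AMPLITUDE * sin(2.0 * M_PI * 441.0 * time));
    }
}

With the per-call state kept local, the audio thread never writes through a mistyped pointer, and nothing it touches overlaps the members the main loop uses.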
Thanks for reading!

Audio distorted with VST plugin

I had to plug a simple VST host into pre-existing software that manages ASIO audio streams. Despite the lack of documentation, I managed to do so; however, once I load the plugin I get a badly distorted audio signal back.
The VST I'm using works properly with other VST hosts, so it's probably some kind of bug in my code. However, when I disable "PROCESS" on the plugin (my stream goes through the plugin, it simply does not get processed), the signal comes back exactly as I sent it, without any noise or distortion.
One thing I'm slightly concerned about is the data types involved: the ASIO driver fills an __int32 buffer, while the plugin wants float buffers.
That's really frustrating, as I've reviewed my code countless times and it seems to be fine.
Here is the code of the class I'm using; please note that some numbers are temporarily hard-coded to help debugging.
VSTPlugIn::VSTPlugIn(const char* fullDirectoryName, const char* ID)
    : plugin(NULL)
    , blocksize(128) // TODO
    , sampleRate(44100.0F) // TODO
    , hostID(ID)
{
    this->LoadPlugin(fullDirectoryName);
    this->ConfigurePluginCallbacks();
    this->StartPlugin();

    out = new float*[2];
    for (int i = 0; i < 2; ++i)
    {
        out[i] = new float[128];
        memset(out[i], 0, 128);
    }
}

void VSTPlugIn::LoadPlugin(const char* path)
{
    HMODULE modulePtr = LoadLibrary(path);
    if (modulePtr == NULL)
    {
        printf("Failed trying to load VST from '%s', error %d\n", path, GetLastError());
        plugin = NULL;
    }

    // vst 2.4 export name
    vstPluginFuncPtr mainEntryPoint = (vstPluginFuncPtr)GetProcAddress(modulePtr, "VSTPluginMain");

    // if "VSTPluginMain" was not found, search for "main" (backwards compatibility mode)
    if (!mainEntryPoint)
    {
        mainEntryPoint = (vstPluginFuncPtr)GetProcAddress(modulePtr, "main");
    }

    // Instantiate the plugin
    plugin = mainEntryPoint(hostCallback);
}

void VSTPlugIn::ConfigurePluginCallbacks()
{
    // Check plugin's magic number
    // If incorrect, then the file either was not loaded properly, is not a
    // real VST plugin, or is otherwise corrupt.
    if (plugin->magic != kEffectMagic)
    {
        printf("Plugin's magic number is bad. Plugin will be discarded\n");
        plugin = NULL;
    }

    // Create dispatcher handle
    this->dispatcher = (dispatcherFuncPtr)(plugin->dispatcher);

    // Set up plugin callback functions
    plugin->getParameter = (getParameterFuncPtr)plugin->getParameter;
    plugin->processReplacing = (processFuncPtr)plugin->processReplacing;
    plugin->setParameter = (setParameterFuncPtr)plugin->setParameter;
}

void VSTPlugIn::StartPlugin()
{
    // Set some default properties
    dispatcher(plugin, effOpen, 0, 0, NULL, 0);
    dispatcher(plugin, effSetSampleRate, 0, 0, NULL, sampleRate);
    dispatcher(plugin, effSetBlockSize, 0, blocksize, NULL, 0.0f);

    this->ResumePlugin();
}

void VSTPlugIn::ResumePlugin()
{
    dispatcher(plugin, effMainsChanged, 0, 1, NULL, 0.0f);
}

void VSTPlugIn::SuspendPlugin()
{
    dispatcher(plugin, effMainsChanged, 0, 0, NULL, 0.0f);
}

void VSTPlugIn::ProcessAudio(float** inputs, float** outputs, long numFrames)
{
    plugin->processReplacing(plugin, inputs, out, 128);
    memcpy(outputs, out, sizeof(float) * 128);
}
EDIT: Here's the code I use to interface my software with the VST host
// Copying the outer buffer into the inner container
for (unsigned i = 0; i < bufferLenght; i++)
{
    float f;
    f = ((float) buff[i]) / (float) std::numeric_limits<int>::max();
    if (f > 1) f = 1;
    if (f < -1) f = -1;
    samples[0][i] = f;
}

// DO JOB
for (auto it = inserts.begin(); it != inserts.end(); ++it)
{
    (*it)->ProcessAudio(samples, samples, bufferLenght);
}

// Copying the result back into the buffer
for (unsigned i = 0; i < bufferLenght; i++)
{
    float f = samples[0][i];
    int intval;

    f = f * std::numeric_limits<int>::max();
    if (f > std::numeric_limits<int>::max()) f = std::numeric_limits<int>::max();
    if (f < std::numeric_limits<int>::min()) f = std::numeric_limits<int>::min();

    intval = (int) f;
    buff[i] = intval;
}
where "buff" is defined as "__int32* buff"
I'm guessing that when you call f = std::numeric_limits<int>::max() (and the related min() case on the line below), this might cause an overflow. Have you tried f = std::numeric_limits<int>::max() - 1?
The same goes for the code snippet above with f = ((float) buff[i]) / (float) std::numeric_limits<int>::max(); I'd also subtract one there to avoid a potential overflow later on.
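Both suggestions point at the same trap: (float)INT_MAX rounds up to 2^31, so a full-scale sample can overflow on the round trip. A small sketch of the conversion with the scale pinned at 2^31 and the clamping done in double, assuming 32-bit signed samples (illustrative helper names, not from the post):

#include <cstdint>
#include <limits>

// Map [-2^31, 2^31) onto [-1, 1); the division is done in double so the
// scale factor 2^31 is represented exactly.
static float int32_to_float(int32_t s)
{
    return static_cast<float>(s / 2147483648.0);
}

// Scale back and clamp in double, so a float of exactly 1.0f (which would
// become 2^31 and overflow int) is caught before the cast.
static int32_t float_to_int32(float f)
{
    double scaled = static_cast<double>(f) * 2147483648.0;
    if (scaled >= 2147483647.0)  return std::numeric_limits<int32_t>::max();
    if (scaled <= -2147483648.0) return std::numeric_limits<int32_t>::min();
    return static_cast<int32_t>(scaled);
}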

Get frame time in ffmpeg

I am trying to make a little video player that has a seek bar (with ffmpeg, of course). For that I need a function that, using data from the frame and/or packet, gets me the current time in the video so it can be set on the seek slider.
It should work like this:
my_time = get_cur_time()
seek(my_time + 10)
assert(my_time+10 == get_cur_time())
seek(my_time - 10)
assert(my_time-10 == get_cur_time())
I do understand that ffmpeg does not support precise seeking, so equality here means "something reasonably close".
Here is the code I have used for this thus far:
frame_time = frame->pts*av_q2d(video_dec_ctx->time_base) * 1000;
where frame is AVFrame and video_dec_ctx is AVCodecContext.
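One caveat worth flagging here: after decoding, frame->pts is normally expressed in the stream's time_base, not the codec context's, and the two can disagree. A sketch of the same computation against the stream, reusing the question's variables (best_effort_timestamp is assumed to be available in your FFmpeg version):

// Sketch: current position in milliseconds, measured in the stream's
// time_base; best_effort_timestamp falls back to heuristics when pts
// is AV_NOPTS_VALUE.
int64_t ts = (frame->pts != AV_NOPTS_VALUE) ? frame->pts
                                            : frame->best_effort_timestamp;
double frame_time_ms =
    ts * av_q2d(fmt_ctx->streams[video_stream->index]->time_base) * 1000.0;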
And for seeking:
int fn = ffmpeg::av_rescale(tsms, fmt_ctx->streams[video_stream->index]->time_base.den,
                            fmt_ctx->streams[video_stream->index]->time_base.num);
int frame = fn / 1000;

printf("\t avformat_seek_file to %d\n", frame);
int flags = AVSEEK_FLAG_FRAME;
if (frame < this->frame->pts)
    flags |= AVSEEK_FLAG_BACKWARD;

if (ffmpeg::av_seek_frame(fmt_ctx, video_stream->index, frame, flags))
{
    printf("\nFailed to seek for time %d", frame);
    return false;
}

avcodec_flush_buffers(video_dec_ctx);
int got_frame = 0;
do
    if (av_read_frame(fmt_ctx, &pkt) >= 0) {
        decode_packet_ro(&got_frame, 0);
        av_free_packet(&pkt);
    }
    else
    {
        read_cache = true;
        pkt.data = NULL;
        pkt.size = 0;
        break;
    }
while (!(got_frame && this->frame->pts >= frame));
The code does forward seeking passably, but after any attempt at backward seeking my second assertion fails: after seeking to a previous position, my method of getting the time does not return a position less than the one before the seek. That makes my seek slider behave grossly incorrectly.
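For reference, a pattern that tends to behave better for backward seeks is to rescale the millisecond target into the stream's time_base with av_rescale_q and always pass AVSEEK_FLAG_BACKWARD, so the demuxer lands on the keyframe at or before the target and decoding then runs forward to it. A sketch along those lines, reusing the question's variables:

// Sketch: seek to tsms (milliseconds), landing on the keyframe at or
// before the target, then decode forward until the target is reached.
AVRational stream_tb = fmt_ctx->streams[video_stream->index]->time_base;
int64_t target_ts = av_rescale_q(tsms, AVRational{1, 1000}, stream_tb);

if (av_seek_frame(fmt_ctx, video_stream->index, target_ts,
                  AVSEEK_FLAG_BACKWARD) < 0) {
    printf("Failed to seek to %d ms\n", tsms);
    return false;
}
avcodec_flush_buffers(video_dec_ctx);
// ...then read and decode packets until frame->pts >= target_ts...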

How to write a Live555 FramedSource to allow me to stream H.264 live

I've been trying to write a class that derives from FramedSource in Live555 that will allow me to stream live data from my D3D9 application to an MP4 or similar.
What I do each frame is grab the backbuffer into system memory as a texture, then convert it from RGB -> YUV420P, then encode it using x264, then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource that derived from FramedSource basically by copying the DeviceSource file. Instead of the input being an input file, I've made it a NAL packet which I update each frame.
I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame() should I be grabbing the NAL packet and doing something like
memcpy(fTo, nal->p_payload, nal->i_payload)
I assume that the payload is my frame data in bytes? If anybody has an example of a class derived from FramedSource that's at least close to what I'm trying to do, I would love to see it; this is all new to me and a little tricky to figure out. Live555's documentation is pretty much the code itself, which doesn't exactly make it easy.
Ok, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it so here it is.
You will need your own FramedSource to take each frame, encode, and prepare it for streaming, I will provide some of the source code for this soon.
Essentially throw your FramedSource into the H264VideoStreamDiscreteFramer, then throw this into the H264RTPSink. Something like this
scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);

framedSource = H264FramedSource::createNew(*env, 0, 0);
h264VideoStreamDiscreteFramer
    = H264VideoStreamDiscreteFramer::createNew(*env, framedSource);

// initialise the RTP Sink stuff here, look at
// testH264VideoStreamer.cpp to find out how

videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
Now in your main render loop, hand the backbuffer you've saved to system memory over to your FramedSource so it can be encoded etc. For more info on how to set up the encoding stuff, check out this answer: How does one encode a series of images into H264 using the x264 C API?
My implementation is very much in a hacky state and is yet to be optimised at all; my d3d application runs at around 15fps due to the encoding, ouch, so I will have to look into this. But for all intents and purposes this StackOverflow question is answered, because I was mostly after how to stream it. I hope this helps other people.
As for my FramedSource it looks a little something like this
concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;

EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;

int W = 720;
int H = 960;

H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
                                              unsigned preferredFrameSize,
                                              unsigned playTimePerFrame)
{
    return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}

H264FramedSource::H264FramedSource(UsageEnvironment& env,
                                   unsigned preferredFrameSize,
                                   unsigned playTimePerFrame)
    : FramedSource(env),
      fPreferredFrameSize(fMaxSize),
      fPlayTimePerFrame(playTimePerFrame),
      fLastPlayTime(0),
      fCurIndex(0)
{
    if (referenceCount == 0)
    {
    }
    ++referenceCount;

    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_threads = 1;
    param.i_width = W;
    param.i_height = H;
    param.i_fps_num = 60;
    param.i_fps_den = 1;
    // Intra refresh:
    param.i_keyint_max = 60;
    param.b_intra_refresh = 1;
    // Rate control:
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.f_rf_constant = 25;
    param.rc.f_rf_constant_max = 35;
    param.i_sps_id = 7;
    // For streaming:
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    x264_param_apply_profile(&param, "baseline");

    encoder = x264_encoder_open(&param);

    pic_in.i_type = X264_TYPE_AUTO;
    pic_in.i_qpplus1 = 0;
    pic_in.img.i_csp = X264_CSP_I420;
    pic_in.img.i_plane = 3;
    x264_picture_alloc(&pic_in, X264_CSP_I420, W, H);

    convertCtx = sws_getContext(W, H, PIX_FMT_RGB24, W, H, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

    if (eventTriggerId == 0)
    {
        eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
    }
}
H264FramedSource::~H264FramedSource()
{
    --referenceCount;
    if (referenceCount == 0)
    {
        // Reclaim our 'event trigger'
        envir().taskScheduler().deleteEventTrigger(eventTriggerId);
        eventTriggerId = 0;
    }
}

void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
    uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);

    memcpy(surfaceData, buf, surfaceSizeInBytes);

    int srcstride = W * 3;
    sws_scale(convertCtx, &surfaceData, &srcstride, 0, H, pic_in.img.plane, pic_in.img.i_stride);

    x264_nal_t* nals = NULL;
    int i_nals = 0;
    int frame_size = -1;

    frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
    static bool finished = false;

    if (frame_size >= 0)
    {
        static bool alreadydone = false;
        if (!alreadydone)
        {
            x264_encoder_headers(encoder, &nals, &i_nals);
            alreadydone = true;
        }
        for (int i = 0; i < i_nals; ++i)
        {
            m_queue.push(nals[i]);
        }
    }

    delete [] surfaceData;
    surfaceData = NULL;

    envir().taskScheduler().triggerEvent(eventTriggerId, this);
}
void H264FramedSource::doGetNextFrame()
{
    deliverFrame();
}

void H264FramedSource::deliverFrame0(void* clientData)
{
    ((H264FramedSource*)clientData)->deliverFrame();
}

void H264FramedSource::deliverFrame()
{
    x264_nal_t nalToDeliver;

    if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // This is the first frame, so use the current time:
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Increment by the play time of the previous data:
            unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
            fPresentationTime.tv_sec += uSeconds / 1000000;
            fPresentationTime.tv_usec = uSeconds % 1000000;
        }

        // Remember the play time of this data:
        fLastPlayTime = (fPlayTimePerFrame * fFrameSize) / fPreferredFrameSize;
        fDurationInMicroseconds = fLastPlayTime;
    } else {
        // We don't know a specific play time duration for this data,
        // so just record the current time as being the 'presentation time':
        gettimeofday(&fPresentationTime, NULL);
    }

    if (!m_queue.empty())
    {
        m_queue.wait_and_pop(nalToDeliver);

        uint8_t* newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
        unsigned newFrameSize = nalToDeliver.i_payload;

        // Deliver the data here, truncating if it exceeds the sink's buffer:
        if (newFrameSize > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        }
        else {
            fFrameSize = newFrameSize;
        }

        memcpy(fTo, newFrameDataStart, fFrameSize);

        FramedSource::afterGetting(this);
    }
}
Oh and for those who want to know what my concurrent queue is, here it is, and it works brilliantly http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
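In case that link goes stale: the gist of it is a std::queue guarded by a mutex, with a condition variable so wait_and_pop can block until a producer pushes. A compressed sketch using standard library primitives in place of the article's Boost types:

#include <condition_variable>
#include <mutex>
#include <queue>

// Compressed sketch of the linked thread-safe queue: one mutex guards the
// std::queue, and a condition variable lets wait_and_pop block until the
// producer thread pushes something.
template <typename T>
class concurrent_queue {
    std::queue<T> queue_;
    mutable std::mutex mutex_;
    std::condition_variable cond_;
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cond_.notify_one();  // wake a waiting consumer
    }
    bool empty() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return queue_.empty();
    }
    void wait_and_pop(T& value) {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        value = std::move(queue_.front());
        queue_.pop();
    }
};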
Enjoy and good luck!
The deliverFrame method lacks the following check at its start:
if (!isCurrentlyAwaitingData()) return;
see DeviceSource.cpp in LIVE
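Concretely, the top of deliverFrame would gain the guard like this (the pattern DeviceSource.cpp uses; the rest of the method stays as above):

void H264FramedSource::deliverFrame()
{
    // Do nothing if the sink has not yet asked for data; without this guard
    // fTo may not point at a valid destination buffer.
    if (!isCurrentlyAwaitingData()) return;

    x264_nal_t nalToDeliver;
    // ... rest of the method unchanged ...
}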

How to use x264 for encoding with ffmpeg?

I tried to use ffmpeg for encoding video, but it fails on initialization of the AVCodecContext and AVCodec.
What I do:
_codec = avcodec_find_encoder(CODEC_ID_H264);
_codecContext = avcodec_alloc_context3(_codec);

_codecContext->coder_type = 0;
_codecContext->me_cmp |= 1;
_codecContext->me_method = ME_HEX;
_codecContext->me_subpel_quality = 0;
_codecContext->me_range = 16;
_codecContext->gop_size = 12;
_codecContext->scenechange_threshold = 40;
_codecContext->i_quant_factor = 0.71;
_codecContext->b_frame_strategy = 1;
_codecContext->qcompress = 0.5;
_codecContext->qmin = 2;
_codecContext->qmax = 31;
_codecContext->max_qdiff = 4;
_codecContext->max_b_frames = 3;
_codecContext->refs = 3;
_codecContext->trellis = 1;
_codecContext->width = format.biWidth;
_codecContext->height = format.biHeight;
_codecContext->time_base.num = 1;
_codecContext->time_base.den = 30;
_codecContext->pix_fmt = PIX_FMT_YUV420P;
_codecContext->chromaoffset = 0;
_codecContext->thread_count = 1;
_codecContext->bit_rate = (int)(128000.f * 0.80f);
_codecContext->bit_rate_tolerance = (int)(128000.f * 0.20f);

int error = avcodec_open2(_codecContext, _codec, NULL);
if (error < 0)
{
    std::cout << "Open codec fail. Error " << error << "\n";
    return NULL;
}
This way it fails in avcodec_open2() with:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xae1fdb70 (LWP 30675)]
0xb2eb2cbb in x264_param_default () from /usr/lib/libx264.so.120
If I comment out all the AVCodecContext parameter settings, I get:
[libx264 @ 0xac75edd0] invalid width x height (0x0)
And avcodec_open2 returns a negative value. Which of my steps are wrong?
Thanks for any help. (ffmpeg 0.10 && a libx264 daily snapshot from yesterday)
In my experience you should give FFmpeg as little information as possible when initialising your codec. This may seem counterintuitive, but it means FFmpeg will use its default settings, which are more likely to work than your own guesses. See what I would include below:
AVStream *st;

m_video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
st = avformat_new_stream(_outputCodec, m_video_codec);
_outputCodecContext = st->codec;

_outputCodecContext->codec_id = m_fmt->video_codec;
_outputCodecContext->bit_rate = m_AVIMOV_BPS;       // Bits per second
_outputCodecContext->width = m_AVIMOV_WIDTH;        // Note: resolution must be a multiple of 2!
_outputCodecContext->height = m_AVIMOV_HEIGHT;      // Note: resolution must be a multiple of 2!
_outputCodecContext->time_base.den = m_AVIMOV_FPS;  // Frames per second
_outputCodecContext->time_base.num = 1;
_outputCodecContext->gop_size = m_AVIMOV_GOB;       // Intra frames per x P frames
_outputCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;  // Do not change this: H264 needs YUV format, not RGB
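From there, the open call has the same shape as in the question; a small sketch continuing the answer's naming (av_strerror turns the error code into readable text):

// Sketch: open the encoder with the minimal context configured above and
// report FFmpeg's own error string instead of a bare numeric code.
int error = avcodec_open2(_outputCodecContext, m_video_codec, NULL);
if (error < 0)
{
    char errbuf[128];
    av_strerror(error, errbuf, sizeof(errbuf));
    std::cout << "Open codec fail: " << errbuf << "\n";
    return NULL;
}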
As in previous answers, here is a working example of the FFMPEG library encoding RGB frames to a H264 video:
http://www.imc-store.com.au/Articles.asp?ID=276
An extra thought on your code though:
Have you called the register-all functions, as below?
avcodec_register_all();
av_register_all();
If you don't call these two functions near the start of your code, your subsequent calls to FFmpeg will fail and you'll most likely segfault.
Have a look at the linked example, I tested it on VC++2010 and it works perfectly.