Decoding loop logic from Matroska (mkv, webm) to audio (C++ via libvorbis)

(I'm not fluent in English, so I'll do my best.)
I'm trying to write a simple MKV player in C++. I'm very new to this subject, so I'm discovering what I need little by little. To start with, I'm using the VP8 codec for video and Vorbis for audio.
The video side seems OK for now, but I'm stuck on audio.
I can't figure out the loop logic to decode the audio frames I get from mkvparser with libvorbis.
I looked at this sample and this brief explanation but couldn't make them work in my case, and I didn't find other simple examples.
Here is a chunk of my code:
const mkvparser::Block* const pBlock = m_pMkvContext->pBlockEntry->GetBlock();
const mkvparser::Track* const pTrack = m_pMkvContext->pTracks->GetTrackByNumber( (unsigned long)pBlock->GetTrackNumber() );
if ( pTrack != NULL )
{
    const long long trackType = pTrack->GetType();
    const int frameCount = pBlock->GetFrameCount();
    if ( frameCount > 0 )
    {
        const mkvparser::Block::Frame& oFrame = pBlock->GetFrame( 0 );
        unsigned char* pData = (unsigned char*)malloc( (size_t)oFrame.len );
        oFrame.Read( &m_pMkvContext->oReader, pData );
        if ( trackType == mkvparser::Track::kVideo )
        {
            // I'm OK here
        }
        else if ( trackType == mkvparser::Track::kAudio )
        {
            // what to do here with my audio frame data?
        }
        free( pData );
    }
}
And maybe the way I read frames works for video but not for audio...
Do you know of any good resources about this, or have any advice?
Thanks for your help!
[EDIT]: I forgot to add one of my attempts:
bool MoviePlayer::DecodeAudioData( unsigned char* pData, uint32 iSize )
{
    int ret;
    char* pBuffer = NULL;

    pBuffer = ogg_sync_buffer( &m_pOVContext->oOggSyncState, iSize );
    memcpy( pBuffer, pData, iSize );
    ogg_sync_wrote( &m_pOVContext->oOggSyncState, iSize );

    ret = ogg_sync_pageout( &m_pOVContext->oOggSyncState, &m_pOVContext->oOggPage );
    ret = ogg_stream_init( &m_pOVContext->oOggStreamState, ogg_page_serialno( &m_pOVContext->oOggPage ) );
    ret = ogg_stream_pagein( &m_pOVContext->oOggStreamState, &m_pOVContext->oOggPage );

    int iPacketsCount = ogg_page_packets( &m_pOVContext->oOggPage );
    for ( int i = 0; i < iPacketsCount; ++i )
    {
        ret = ogg_stream_packetout( &m_pOVContext->oOggStreamState, &m_pOVContext->oOggPacket );
        // do something with the packet...
    }
    return true;
}
It crashes at ogg_sync_pageout, because my ogg_page is not correctly initialized.
But since the data does not come from a proper .ogg file as in the examples I found, I don't know how to correctly initialize the Vorbis structures.

https://matroska.org/technical/specs/codecid/index.html
In the A_VORBIS section:
"The private data contains the first three Vorbis packet in order...."
and CodecPrivate is documented here:
https://matroska.org/technical/specs/index.html
"CodecPrivate 3 [63][A2]"

Related

Decoding with OGG/Vorbis gives no sound

I'd like to play an Ogg/Vorbis audio/video file, but right now I can't manage to read the audio from the file.
My algorithm to read audio is:
Initialize required structures:
vorbis_info info;
vorbis_comment comment;
vorbis_dsp_state dsp;
vorbis_block block;
vorbis_info_init(&info);
vorbis_comment_init(&comment);
Read headers:
Call vorbis_synthesis_headerin(&info, &comment, packet); until it returns OV_ENOTVORBIS
vorbis_synthesis_init(&dsp, &info);
vorbis_block_init(&dsp, &block);
Pass the first non-header packet to the function below.
Parse packets; repeat until audioReady == READY:
putPacket(ogg_packet *packet) {
    int ret;
    ret = vorbis_synthesis(&block, packet);
    if( ret == 0 ) {
        ret = vorbis_synthesis_blockin(&dsp, &block);
        audioReady = (ret == 0) ? READY : NOT_READY;
    } else {
        audioReady = NOT_READY;
    }
}
Read audio data:
float** rawData = nullptr;
readSamples = vorbis_synthesis_pcmout(&dsp, &rawData);
if( readSamples == 0 ) {
    audioReady = NOT_READY;
    return;
}
int16_t* newData = new int16_t[readSamples * getChannels()];
int16_t* dst = newData;
for(unsigned int i=0; i<readSamples; ++i) {
    for(unsigned char ch=0; ch<getChannels(); ++ch) {
        *(dst++) = math::clamp<int16_t>(rawData[ch][i]*32767 + 0.5f, -32767, 32767);
    }
}
audioData.push_back({readSamples * getChannels(), newData});
vorbis_synthesis_read(&dsp, static_cast<int>(readSamples));
audioReady = NOT_READY;
This is where it goes wrong: examining the newData contents reveals that it contains a very quiet sound. I doubt it is the right data, which means that somewhere along my algorithm I did something wrong.
I tried to find some examples of similar programs, but all I got were sources with very spaghetti-like code, which seem to implement the same algorithm as mine, yet they do their job. (Here is one such library: https://github.com/icculus/theoraplay )
Is there any reason why I'm getting (almost) silence in my application?
PS: If you are wondering whether I might be getting the Ogg packets wrong, I assure you this part of my code is working correctly, as I'm also reading video data from the same file with the same code, and it shows the video fine.
I've found it: while reading packets I assumed that one Ogg page = one Ogg packet. That's wrong: for audio, one page can contain many packets. To read them properly, the code has to look like this:
do {
    putPacket(&packet);
} while( ogg_stream_packetout(&state, &packet) == 1 );
I made this mistake because for video packets (which I handled first) a page contains only one packet.
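So the full packet-reading loop should roughly look like this (a sketch; sync, state and packet are the usual libogg ogg_sync_state, ogg_stream_state and ogg_packet):

// Sketch: drain every packet of every page that has been submitted to the
// ogg_sync_state, assuming the stream was already initialized with
// ogg_stream_init() for the right serial number.
ogg_page page;
while( ogg_sync_pageout(&sync, &page) == 1 )
{
    ogg_stream_pagein(&state, &page);

    ogg_packet packet;
    while( ogg_stream_packetout(&state, &packet) == 1 )
    {
        putPacket(&packet);   // one page may yield several packets
    }
}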

get packet number in libpcap callback

I'm using libpcap to process the WS output.
My question is: can I access the packet number in the pcap_loop callback, or will I have to use a static variable?
EDIT:
As requested:
long Foo::Main()
{
    handle = pcap_open_dead( DLT_EN10MB, MAX_PACKET_SIZE );
    if( !handle )
    {
    }
    dumper = pcap_dump_open( handle, fileOut.ToString() );
    if( !dumper )
    {
    }
    handle = pcap_open_offline( fileNameStr.ToString(), errbuf );
    if( !handle )
    {
    }
    if( pcap_compile( handle, &fp, FltString.ToString(), 0, net ) == PCAP_ERROR )
    {
    }
    // Set filter for JREAP only
    if( pcap_setfilter( handle, &fp ) == PCAP_ERROR )
    {
    }
    unchar *uncharThis = reinterpret_cast<unchar*>( this );
    // The pcap_loop is implemented like:
    //   for( int i = 0; i < num_of_packets; i++ )
    //       ProcessPackets();
    // where i is the current packet number to process
    int ret_val = pcap_loop( handle, 0, ProcessPackets, uncharThis );
    if( ret_val == PCAP_ERROR )
    {
    }
}
bool Foo::ProcessPackets( unchar *userData, const struct pcap_pkthdr *pkthdr, const unchar *packet )
{
    // This function will be called for every packet in the pcap file
    // that satisfies the filter condition.
    // Inside this function, do I have access to the packet number?
    // Do I have access to the variable `i` from the comment above,
    // or will I have to introduce a static variable here?
}
libpcap does not keep track of the ordinal numbers of packets, so you'll have to maintain a packet count in your code.
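For example (a sketch, not part of libpcap; the struct and names are illustrative): pass a counter through the user pointer that pcap_loop already forwards to the callback, and increment it on every call.

// Sketch: count packets yourself via the user-data pointer.
// pcap_loop() passes the last argument straight through to the callback.
struct CaptureState
{
    unsigned long packetNumber;   // ordinal of the packet being processed
    // ... anything else the callback needs ...
};

static void ProcessPackets( u_char* userData,
                            const struct pcap_pkthdr* pkthdr,
                            const u_char* packet )
{
    CaptureState* state = reinterpret_cast<CaptureState*>( userData );
    ++state->packetNumber;        // 1-based packet number within the capture
    // ... process `packet`, using state->packetNumber as needed ...
}

// In Main():
//   CaptureState state = { 0 };
//   pcap_loop( handle, 0, ProcessPackets, reinterpret_cast<u_char*>( &state ) );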

FFMPEG I/O output buffer

I'm currently having issues trying to encapsulate raw H264 NAL packets into an MP4 container. Instead of writing them to disk, however, I want the result stored in memory. I followed the approach from Raw H264 frames in mpegts container using libavcodec but haven't been successful so far.
First, is this the right way to write to memory? I have a small struct in my header:
struct IOOutput {
    uint8_t* outBuffer;
    int bytesSet;
};
where I initialize the buffer and bytesSet. I then initialize my AVIOContext variable:
AVIOContext* pIOCtx = avio_alloc_context(pBuffer, iBufSize, 1, outptr, NULL, write_packet, NULL);
where outptr is a void pointer to the IOOutput output, and write_packet looks like the following:
int write_packet (void *opaque, uint8_t *buf, int buf_size) {
    IOOutput* out = reinterpret_cast<IOOutput*>(opaque);
    memcpy(out->outBuffer + out->bytesSet, buf, buf_size);
    out->bytesSet += buf_size;
    return buf_size;
}
I then set
fc->pb = pIOCtx;
fc->flags = AVFMT_FLAG_CUSTOM_IO;
on my AVFormatContext *fc variable.
Then, whenever I encode the NAL packets I have from a frame, I write them to the AVFormatContext via av_interleaved_write_frame and then get the MP4 contents via:
void getBufferContent(char* buffer) {
    memcpy(buffer, output.outBuffer, output.bytesSet);
    output.bytesSet = 0;
}
and thus reset the variable bytesSet, so during the next write operation bytes will be inserted at the start of the buffer. Is there a better way to do this? Is this actually a valid way to do it? Does FFmpeg do any reading operations if I only call av_interleaved_write_frame and avformat_write_header in order to add packets?
Thank you very much in advance!
EDIT
Here is the code regarding the muxing process. In my encode function I have something like:
int frame_size = x264_encoder_encode(obj->mEncoder, &obj->nals, &obj->i_nals, obj->pic_in, obj->pic_out);
int total_size = 0;
for(int i = 0; i < obj->i_nals; i++)
{
    if ( !obj->fc ) {
        obj->create( obj->nals[i].p_payload, obj->nals[i].i_payload );
    }
    if ( obj->fc ) {
        obj->write_frame( obj->nals[i].p_payload, obj->nals[i].i_payload );
    }
}
// Here I get the output values
int currentBufferSize = obj->output.bytesSet;
char* mem = new char[currentBufferSize];
obj->getBufferContent(mem);
And the create and write functions look like this
int create(void *p, int len) {
    AVOutputFormat *of = av_guess_format( "mp4", 0, 0 );
    fc = avformat_alloc_context();

    // Add video stream
    AVStream *pst = av_new_stream( fc, 0 );
    vi = pst->index;
    void* outptr = (void*) &output;

    // Create Buffer
    pIOCtx = avio_alloc_context(pBuffer, iBufSize, 1, outptr, NULL, write_packet, NULL);
    fc->oformat = of;
    fc->pb = pIOCtx;
    fc->flags = AVFMT_FLAG_CUSTOM_IO;

    pcc = pst->codec;
    AVCodec c = {0};
    c.type = AVMEDIA_TYPE_VIDEO;
    avcodec_get_context_defaults3( pcc, &c );
    pcc->codec_type = AVMEDIA_TYPE_VIDEO;
    pcc->codec_id = codec_id;
    pcc->bit_rate = br;
    pcc->width = w;
    pcc->height = h;
    pcc->time_base.num = 1;
    pcc->time_base.den = fps;
}

void write_frame( const void* p, int len ) {
    AVStream *pst = fc->streams[ vi ];

    // Init packet
    AVPacket pkt;
    av_init_packet( &pkt );
    pkt.flags |= ( 0 >= getVopType( p, len ) ) ? AV_PKT_FLAG_KEY : 0;
    pkt.stream_index = pst->index;
    pkt.data = (uint8_t*)p;
    pkt.size = len;
    pkt.dts = AV_NOPTS_VALUE;
    pkt.pts = AV_NOPTS_VALUE;
    av_interleaved_write_frame( fc, &pkt );
}
See the AVFormatContext.pb documentation. You set it correctly, but you shouldn't touch AVFormatContext.flags. Also, make sure you set it before calling avformat_write_header().
When you say "it doesn't work", what exactly doesn't work? Is the callback not invoked? Is the data in it not of the expected type/format? Something else? If all you want to do is write raw NAL packets, then you could just take the encoded data directly from the encoder (in the AVPacket); that's the raw NAL data. If you use libx264's API directly, it even gives you each NAL individually, so you don't need to parse it.
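One more remark on the callback shown in the question: nothing stops write_packet from running past the end of outBuffer. A bounds-checked variant could look roughly like this (a sketch; the capacity field is my addition, not part of the question's struct):

// Sketch: same callback as in the question, but refusing to overflow
// the destination buffer. `capacity` is an extra field added to IOOutput.
struct IOOutput {
    uint8_t* outBuffer;
    int      bytesSet;
    int      capacity;    // total size of outBuffer in bytes (new field)
};

static int write_packet( void* opaque, uint8_t* buf, int buf_size )
{
    IOOutput* out = reinterpret_cast<IOOutput*>( opaque );
    if ( out->bytesSet + buf_size > out->capacity )
        return -1;                       // report an I/O error instead of corrupting memory
    memcpy( out->outBuffer + out->bytesSet, buf, buf_size );
    out->bytesSet += buf_size;
    return buf_size;
}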

How can I make this code play the wave file for longer?

I couldn't figure out how to create my own sound player, so I have opted to use the one from ChiliTomatoNoodle's framework.
The issue I'm having, however, is that I have a 180 s wave file that only plays for the first second or so. What do I have to do to make it play longer?
Sound.h:
#pragma once
#include <windows.h>
#include <mmsystem.h>
#include <dsound.h>
#include <stdio.h>

class DSound;

class Sound
{
    friend DSound;
public:
    Sound( const Sound& base );
    Sound();
    ~Sound();
    const Sound& operator=( const Sound& rhs );
    void Play( int attenuation = DSBVOLUME_MAX );
private:
    Sound( IDirectSoundBuffer8* pSecondaryBuffer );
private:
    IDirectSoundBuffer8* pBuffer;
};

class DSound
{
private:
    struct WaveHeaderType
    {
        char chunkId[4];
        unsigned long chunkSize;
        char format[4];
        char subChunkId[4];
        unsigned long subChunkSize;
        unsigned short audioFormat;
        unsigned short numChannels;
        unsigned long sampleRate;
        unsigned long bytesPerSecond;
        unsigned short blockAlign;
        unsigned short bitsPerSample;
        char dataChunkId[4];
        unsigned long dataSize;
    };
public:
    DSound( HWND hWnd );
    ~DSound();
    Sound CreateSound( char* wavFileName );
private:
    DSound();
private:
    IDirectSound8* pDirectSound;
    IDirectSoundBuffer* pPrimaryBuffer;
};
Sound.cpp:
#include "Sound.h"
#include <assert.h>
#pragma comment(lib, "dsound.lib")
#pragma comment(lib, "dxguid.lib")
#pragma comment(lib, "winmm.lib" )
DSound::DSound( HWND hWnd )
: pDirectSound( NULL ),
pPrimaryBuffer( NULL )
{
HRESULT result;
DSBUFFERDESC bufferDesc;
WAVEFORMATEX waveFormat;
result = DirectSoundCreate8( NULL,&pDirectSound,NULL );
assert( !FAILED( result ) );
// Set the cooperative level to priority so the format of the primary sound buffer can be modified.
result = pDirectSound->SetCooperativeLevel( hWnd,DSSCL_PRIORITY );
assert( !FAILED( result ) );
// Setup the primary buffer description.
bufferDesc.dwSize = sizeof(DSBUFFERDESC);
bufferDesc.dwFlags = DSBCAPS_PRIMARYBUFFER | DSBCAPS_CTRLVOLUME;
bufferDesc.dwBufferBytes = 0;
bufferDesc.dwReserved = 0;
bufferDesc.lpwfxFormat = NULL;
bufferDesc.guid3DAlgorithm = GUID_NULL;
// Get control of the primary sound buffer on the default sound device.
result = pDirectSound->CreateSoundBuffer( &bufferDesc,&pPrimaryBuffer,NULL );
assert( !FAILED( result ) );
    // Setup the format of the primary sound buffer.
    // In this case it is a .WAV file recorded at 44,100 samples per second in 16-bit stereo (cd audio format).
    waveFormat.wFormatTag = WAVE_FORMAT_PCM;
    waveFormat.nSamplesPerSec = 44100;
    waveFormat.wBitsPerSample = 16;
    waveFormat.nChannels = 2;
    waveFormat.nBlockAlign = (waveFormat.wBitsPerSample / 8) * waveFormat.nChannels;
    waveFormat.nAvgBytesPerSec = waveFormat.nSamplesPerSec * waveFormat.nBlockAlign;
    waveFormat.cbSize = 0;

    // Set the primary buffer to be the wave format specified.
    result = pPrimaryBuffer->SetFormat( &waveFormat );
    assert( !FAILED( result ) );
}

DSound::~DSound()
{
    if( pPrimaryBuffer )
    {
        pPrimaryBuffer->Release();
        pPrimaryBuffer = NULL;
    }
    if( pDirectSound )
    {
        pDirectSound->Release();
        pDirectSound = NULL;
    }
}
// must be 44.1k 16bit Stereo PCM Wave
Sound DSound::CreateSound( char* wavFileName )
{
    int error;
    FILE* filePtr;
    unsigned int count;
    WaveHeaderType waveFileHeader;
    WAVEFORMATEX waveFormat;
    DSBUFFERDESC bufferDesc;
    HRESULT result;
    IDirectSoundBuffer* tempBuffer;
    IDirectSoundBuffer8* pSecondaryBuffer;
    unsigned char* waveData;
    unsigned char* bufferPtr;
    unsigned long bufferSize;

    // Open the wave file in binary.
    error = fopen_s( &filePtr,wavFileName,"rb" );
    assert( error == 0 );

    // Read in the wave file header.
    count = fread( &waveFileHeader,sizeof( waveFileHeader ),1,filePtr );
    assert( count == 1 );

    // Check that the chunk ID is the RIFF format.
    assert( (waveFileHeader.chunkId[0] == 'R') &&
            (waveFileHeader.chunkId[1] == 'I') &&
            (waveFileHeader.chunkId[2] == 'F') &&
            (waveFileHeader.chunkId[3] == 'F') );

    // Check that the file format is the WAVE format.
    assert( (waveFileHeader.format[0] == 'W') &&
            (waveFileHeader.format[1] == 'A') &&
            (waveFileHeader.format[2] == 'V') &&
            (waveFileHeader.format[3] == 'E') );

    // Check that the sub chunk ID is the fmt format.
    assert( (waveFileHeader.subChunkId[0] == 'f') &&
            (waveFileHeader.subChunkId[1] == 'm') &&
            (waveFileHeader.subChunkId[2] == 't') &&
            (waveFileHeader.subChunkId[3] == ' ') );

    // Check that the audio format is WAVE_FORMAT_PCM.
    assert( waveFileHeader.audioFormat == WAVE_FORMAT_PCM );

    // Check that the wave file was recorded in stereo format.
    assert( waveFileHeader.numChannels == 2 );

    // Check that the wave file was recorded at a sample rate of 44.1 kHz.
    assert( waveFileHeader.sampleRate == 44100 );

    // Ensure that the wave file was recorded in 16 bit format.
    assert( waveFileHeader.bitsPerSample == 16 );

    // Check for the data chunk header.
    assert( (waveFileHeader.dataChunkId[0] == 'd') &&
            (waveFileHeader.dataChunkId[1] == 'a') &&
            (waveFileHeader.dataChunkId[2] == 't') &&
            (waveFileHeader.dataChunkId[3] == 'a') );

    // Set the wave format of secondary buffer that this wave file will be loaded onto.
    waveFormat.wFormatTag = WAVE_FORMAT_PCM;
    waveFormat.nSamplesPerSec = 44100;
    waveFormat.wBitsPerSample = 16;
    waveFormat.nChannels = 2;
    waveFormat.nBlockAlign = (waveFormat.wBitsPerSample / 8) * waveFormat.nChannels;
    waveFormat.nAvgBytesPerSec = waveFormat.nSamplesPerSec * waveFormat.nBlockAlign;
    waveFormat.cbSize = 0;

    // Set the buffer description of the secondary sound buffer that the wave file will be loaded onto.
    bufferDesc.dwSize = sizeof(DSBUFFERDESC);
    bufferDesc.dwFlags = DSBCAPS_CTRLVOLUME;
    bufferDesc.dwBufferBytes = waveFileHeader.dataSize;
    bufferDesc.dwReserved = 0;
    bufferDesc.lpwfxFormat = &waveFormat;
    bufferDesc.guid3DAlgorithm = GUID_NULL;

    // Create a temporary sound buffer with the specific buffer settings.
    result = pDirectSound->CreateSoundBuffer( &bufferDesc,&tempBuffer,NULL );
    assert( !FAILED( result ) );

    // Test the buffer format against the direct sound 8 interface and create the secondary buffer.
    result = tempBuffer->QueryInterface( IID_IDirectSoundBuffer8,(void**)&pSecondaryBuffer );
    assert( !FAILED( result ) );

    // Release the temporary buffer.
    tempBuffer->Release();
    tempBuffer = 0;

    // Move to the beginning of the wave data which starts at the end of the data chunk header.
    fseek( filePtr,sizeof(WaveHeaderType),SEEK_SET );

    // Create a temporary buffer to hold the wave file data.
    waveData = new unsigned char[ waveFileHeader.dataSize ];
    assert( waveData );

    // Read in the wave file data into the newly created buffer.
    count = fread( waveData,1,waveFileHeader.dataSize,filePtr );
    assert( count == waveFileHeader.dataSize );

    // Close the file once done reading.
    error = fclose( filePtr );
    assert( error == 0 );

    // Lock the secondary buffer to write wave data into it.
    result = pSecondaryBuffer->Lock( 0,waveFileHeader.dataSize,(void**)&bufferPtr,(DWORD*)&bufferSize,NULL,0,0 );
    assert( !FAILED( result ) );

    // Copy the wave data into the buffer.
    memcpy( bufferPtr,waveData,waveFileHeader.dataSize );

    // Unlock the secondary buffer after the data has been written to it.
    result = pSecondaryBuffer->Unlock( (void*)bufferPtr,bufferSize,NULL,0 );
    assert( !FAILED( result ) );

    // Release the wave data since it was copied into the secondary buffer.
    delete [] waveData;
    waveData = NULL;

    return Sound( pSecondaryBuffer );
}
Sound::Sound( IDirectSoundBuffer8* pSecondaryBuffer )
    : pBuffer( pSecondaryBuffer )
{}

Sound::Sound()
    : pBuffer( NULL )
{}

Sound::Sound( const Sound& base )
    : pBuffer( base.pBuffer )
{
    pBuffer->AddRef();
}

Sound::~Sound()
{
    if( pBuffer )
    {
        pBuffer->Release();
        pBuffer = NULL;
    }
}

const Sound& Sound::operator=( const Sound& rhs )
{
    this->~Sound();
    pBuffer = rhs.pBuffer;
    pBuffer->AddRef();
    return rhs;
}

// attn is the attenuation value in units of 0.01 dB (larger
// negative numbers give a quieter sound, 0 for full volume)
void Sound::Play( int attn )
{
    attn = max( attn,DSBVOLUME_MIN );
    HRESULT result;

    // check that we have a valid buffer
    assert( pBuffer != NULL );

    // Set position at the beginning of the sound buffer.
    result = pBuffer->SetCurrentPosition( 0 );
    assert( !FAILED( result ) );

    // Set volume of the buffer to attn
    result = pBuffer->SetVolume( attn );
    assert( !FAILED( result ) );

    // Play the contents of the secondary sound buffer.
    result = pBuffer->Play( 0,0,0 );
    assert( !FAILED( result ) );
}
Thanks for your help in advance!
Assuming you have a .wav file, and you are loading the sound file somewhere along the lines of:
yourSound = audio.CreateSound("fileName.WAV"); //Capslock on WAV
yourSound.Play();
With this comes the declaration of the Sound in the header:
Sound yourSound;
Now, because you have probably done this already and it is not working, it likely has to do with your file, as playing sounds of 160+ seconds should not be a problem.
Are you using a .WAV file for the sound? If so, did you happen to convert it (as it is probably a background sound)? If you did, try converting it with this converter:
Converter MP3 -> WAV
Please let me know if this works!
Your buffer is probably only large enough to play the first second or so. What you need to do is set up "notifications". See the documentation.
Notifications are a way to ask the audio hardware to let you know when playback has reached a specific point in the buffer.
The idea is to set up one notification in the middle of the buffer and one at the end of the buffer. When you receive the notification from the middle, you fill the first half of the buffer with more data. When you receive the notification from the end, you fill the second half of the buffer with more data. This way, you can stream an infinite amount of data with a single buffer.
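A rough sketch of wiring that up (untested; the buffer size and names are illustrative, and the streaming secondary buffer must be created with the DSBCAPS_CTRLPOSITIONNOTIFY flag):

// Sketch: register two notification positions (middle and end) on a
// streaming secondary buffer. pStreamBuffer is an IDirectSoundBuffer8*;
// bufferBytes is its size in bytes.
IDirectSoundNotify8* pNotify = NULL;
HRESULT hr = pStreamBuffer->QueryInterface( IID_IDirectSoundNotify8, (void**)&pNotify );
assert( !FAILED( hr ) );

HANDLE hEvents[2];
hEvents[0] = CreateEvent( NULL, FALSE, FALSE, NULL );   // fires at the middle
hEvents[1] = CreateEvent( NULL, FALSE, FALSE, NULL );   // fires at the end

DSBPOSITIONNOTIFY positions[2];
positions[0].dwOffset     = bufferBytes / 2 - 1;
positions[0].hEventNotify = hEvents[0];
positions[1].dwOffset     = bufferBytes - 1;
positions[1].hEventNotify = hEvents[1];

hr = pNotify->SetNotificationPositions( 2, positions );
assert( !FAILED( hr ) );
pNotify->Release();

// Playback loop: play looping, and refill whichever half was just consumed
// (exit condition omitted in this sketch).
pStreamBuffer->Play( 0, 0, DSBPLAY_LOOPING );
for ( ;; )
{
    DWORD which = WaitForMultipleObjects( 2, hEvents, FALSE, INFINITE );
    if ( which == WAIT_OBJECT_0 )
    {
        // first half has been played: Lock( 0, bufferBytes / 2, ... ),
        // copy the next chunk of wave data into it, then Unlock
    }
    else if ( which == WAIT_OBJECT_0 + 1 )
    {
        // second half has been played: refill bytes [bufferBytes/2, bufferBytes)
    }
}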

h264 ffmpeg: How to initialize ffmpeg to decode NALs created with x264

I have encoded some frames using x264 (x264_encoder_encode), and after that I have created AVPackets using a function like this:
bool PacketizeNals( uint8_t* a_pNalBuffer, int a_nNalBufferSize, AVPacket* a_pPacket )
{
    if ( !a_pPacket )
        return false;
    a_pPacket->data = a_pNalBuffer;
    a_pPacket->size = a_nNalBufferSize;
    a_pPacket->stream_index = 0;
    a_pPacket->flags = AV_PKT_FLAG_KEY;
    a_pPacket->pts = int64_t(0x8000000000000000);
    a_pPacket->dts = int64_t(0x8000000000000000);
    return true;
}
I call this function like this:
x264_nal_t* nals;
int num_nals = encode_frame(pic, &nals);
for (int i = 0; i < num_nals; i++)
{
    AVPacket* pPacket = ( AVPacket* )av_malloc( sizeof( AVPacket ) );
    av_init_packet( pPacket );
    if ( PacketizeNals( nals[i].p_payload, nals[i].i_payload, pPacket ) )
    {
        packets.push_back( pPacket );
    }
}
Now what I want to do is decode these AVPackets using avcodec_decode_video2. I think the problem is that I haven't initialized the decoder properly, because to encode I used the "ultrafast" preset and "zerolatency" tune (x264), and I don't know how to tell ffmpeg about these options when decoding.
In some examples I have read, people initialize the decoder from the file where the video is stored, but in this case I have the AVPackets directly.
What I'm doing to try to decode is:
avcodec_init();
avcodec_register_all();
AVCodec* pCodec;
pCodec = avcodec_find_decoder(CODEC_ID_H264);
AVCodecContext* pCodecContext;
pCodecContext = avcodec_alloc_context();
avcodec_open(pCodecContext, pCodec);
pCodecContext->width = 320;
pCodecContext->height = 200;
pCodecContext->extradata = NULL;
unsigned int nNumPackets = packets.size();
int frameFinished = 0;
for ( auto it = packets.begin(); it != packets.end(); it++ )
{
    AVFrame* pFrame;
    pFrame = avcodec_alloc_frame();
    AVPacket* pPacket = *it;
    int iReturn = avcodec_decode_video2( pCodecContext, pFrame, &frameFinished, pPacket );
}
But iReturn is always -1.
Can anyone help me? Sorry if my knowledge in this area is low; I'm new to it.
Thanks.
I have written a simple client/server application that streams raw RGB video, using libx264 for encoding and ffmpeg for decoding.
You can find the code here: https://github.com/filippobrizzi/raw_rgb_straming
It shows how to set up x264 and ffmpeg to encode/decode.
Right now you initialize the decoder with
pCodecContext->extradata = NULL;
This is not correct. You need to allocate memory for this field and copy the data from the encoder's AVCodecContext::extradata into the allocated buffer. AVCodecContext::extradata_size specifies the size of this extradata buffer in bytes.
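Since you are encoding with raw libx264 rather than through an ffmpeg encoder, one way to obtain that extradata is to ask x264 for its stream headers (SPS/PPS/SEI). A sketch of the idea (untested; it assumes Annex-B output and the old API used in the question):

// Sketch: build decoder extradata from x264's stream headers (SPS/PPS/SEI).
// h is the x264_t* encoder handle; pCodecContext is the decoder context
// from the question, configured BEFORE avcodec_open() is called.
x264_nal_t* headerNals = NULL;
int numHeaderNals = 0;
int headerSize = x264_encoder_headers( h, &headerNals, &numHeaderNals );
if ( headerSize > 0 )
{
    pCodecContext->extradata = (uint8_t*)av_malloc( headerSize + FF_INPUT_BUFFER_PADDING_SIZE );
    pCodecContext->extradata_size = 0;
    for ( int i = 0; i < numHeaderNals; ++i )
    {
        memcpy( pCodecContext->extradata + pCodecContext->extradata_size,
                headerNals[i].p_payload, headerNals[i].i_payload );
        pCodecContext->extradata_size += headerNals[i].i_payload;
    }
}
// ... then call avcodec_open( pCodecContext, pCodec ) only after width, height
// and extradata have been set.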
Make sure that you are building correct packets. See how this is done in the ffmpeg: http://ffmpeg.org/doxygen/trunk/libx264_8c_source.html (static int encode_nals(AVCodecContext *ctx, AVPacket *pkt, x264_nal_t *nals, int nnal) and static int X264_frame(AVCodecContext *ctx, AVPacket *pkt, const AVFrame *frame, int *got_packet))
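The gist of those two functions is that all NALs produced for one frame end up in a single AVPacket, not one packet per NAL. A simplified sketch of that idea (this is not the actual ffmpeg code):

// Sketch (simplified): pack every NAL of one encoded frame into a single
// AVPacket, since the decoder expects one access unit per packet.
bool PacketizeFrameNals( x264_nal_t* nals, int numNals, AVPacket* pPacket )
{
    int totalSize = 0;
    for ( int i = 0; i < numNals; ++i )
        totalSize += nals[i].i_payload;

    uint8_t* pBuffer = (uint8_t*)av_malloc( totalSize );
    if ( !pBuffer )
        return false;

    int offset = 0;
    for ( int i = 0; i < numNals; ++i )
    {
        memcpy( pBuffer + offset, nals[i].p_payload, nals[i].i_payload );
        offset += nals[i].i_payload;
    }

    av_init_packet( pPacket );
    pPacket->data = pBuffer;        // Annex-B start codes are already in p_payload
    pPacket->size = totalSize;
    pPacket->stream_index = 0;
    return true;
}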