ALSA - sample rate conversion - C++

I have a text-to-speech application that generates an audio stream (raw data) with a sample rate of 22 kHz.
I have a USB sound card that only supports 44 kHz.
With my asound.conf I can play WAV files containing 22 kHz and 44 kHz audio streams without problems in aplay.
My application uses the alsa-libs and sets the sample rate of the device.
In this case only 44 kHz succeeds, because the hardware supports only that sample rate. But now, when I write the generated audio stream to ALSA, it sounds wrong, because the sample rates don't match. The audio stream (raw data) doesn't contain any header information, so I think ALSA doesn't use any plugin to convert the sample rate; ALSA doesn't know that the stream has a different sample rate.
My question is: what is the right way to tell ALSA that the generated audio stream has a different sample rate, so that the ALSA plugin converts it?
The following code works on the USB sound card only with sampleRate = 44100; otherwise an error occurs (-22, invalid parameters).
void initAlsa()
{
    const char* name = "default";
    alsaAudio = true;
    writeRiffAtClose = false;

    int err = snd_pcm_open(&alsaPlaybackHandle, name, SND_PCM_STREAM_PLAYBACK, 0);
    if (err < 0)
        throw TtsException({"Alsa: cannot open playback audio device ", name, " (", snd_strerror(err), ")"}, 0);

    sampleRate = 44100;
    err = snd_pcm_set_params(alsaPlaybackHandle,            // pcm            PCM handle
                             SND_PCM_FORMAT_S16_LE,         // format         required PCM format
                             SND_PCM_ACCESS_RW_INTERLEAVED, // access         required PCM access
                             2,                             // channels       required PCM channels (stereo)
                             sampleRate,                    // rate           required sample rate in Hz
                             1,                             // soft_resample  0 = disallow alsa-lib resampling, 1 = allow resampling
                             250000);                       // latency        required overall latency in us (0.25 s)
    if (err < 0)
        throw TtsException({"Alsa: cannot set parameters (", err, " = ", snd_strerror(err), ") on ", name}, 0);

    LOG_DEBUG("Alsa audio initialized");
}
The other option is that I manually convert the sample rate before handing the data to ALSA (a naive sketch of what I mean is below), but I keep thinking: why not use the ALSA plugin?
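(For illustration only: since 44100 is exactly twice 22050, a naive manual conversion could just insert one linearly interpolated frame between every two input frames. This is an untested sketch with placeholder names, not code from my application.)

#include <cstdint>
#include <cstddef>

// Upsample interleaved S16 stereo from 22050 Hz to 44100 Hz by inserting one
// linearly interpolated frame after each input frame.  "out" must hold
// 2 * inFrames frames (i.e. 4 * inFrames int16_t samples).
void upsample2x_s16_stereo(const int16_t* in, size_t inFrames, int16_t* out)
{
    for (size_t i = 0; i < inFrames; ++i) {
        const int16_t l  = in[2 * i];
        const int16_t r  = in[2 * i + 1];
        // next input frame, repeating the last frame at the end of the buffer
        const int16_t ln = (i + 1 < inFrames) ? in[2 * (i + 1)]     : l;
        const int16_t rn = (i + 1 < inFrames) ? in[2 * (i + 1) + 1] : r;

        out[4 * i]     = l;                        // original frame
        out[4 * i + 1] = r;
        out[4 * i + 2] = (int16_t)((l + ln) / 2);  // interpolated frame
        out[4 * i + 3] = (int16_t)((r + rn) / 2);
    }
}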
I don't have the option of getting a 44 kHz audio stream from the TTS engine (it's another piece of software).
Or is there another way that I'm not seeing?
Best regards.

Related

Trying to open a stream in PortAudio

I'm using this API: Pa_OpenStream()
// Open line-in stream
err = Pa_OpenStream(&m_stream,
                    &m_inputParameters,
                    &m_outputParameters,
                    44100,              // sample rate
                    128,                // frames per buffer
                    0,                  // paClipOff
                    OmniLineInCallback,
                    NULL);
and I'm getting err = -9993, i.e. paBadIODeviceCombination.
I configured both an input and an output device, and I want to record from the input and transmit to the output playback device.
I don't know why I'm getting this error.
Appreciate your help,
Aviel
Make sure you pass correct parameters to the method. To do that:
Initialize PortAudio via Pa_Initialize().
Check which audio devices are actually available to you through PortAudio. Use Pa_GetDeviceCount() and then Pa_GetDeviceInfo() for each available device. Look at how many inputs and outputs each device actually provides, and don't request more channels than it supports.
Fill the corresponding fields of the PaStreamParameters struct with the correct values.
This is how I open my ASIO/CoreAudio device (I also use the Qt framework, but that does not affect the meaning of the example).
Here is how I initialize the library and find the device I need:
int MyClass::initSoundInterfaces()
{
    int result = -1; // target ASIO/CoreAudio device index
    PaError err = Pa_Initialize();
    const PaDeviceInfo* deviceInfo;
    int numDevices = Pa_GetDeviceCount();

    for (int DevIndex = 0; DevIndex < numDevices; DevIndex++)
    {
        deviceInfo = Pa_GetDeviceInfo(DevIndex);
        QString str = Pa_GetHostApiInfo(deviceInfo->hostApi)->name;
        qDebug() << "DEV: ApiInfo: " << str;
        qDebug() << "defaultSampleRate = " << deviceInfo->defaultSampleRate;
        qDebug() << "maxInputChannels = " << deviceInfo->maxInputChannels;
        qDebug() << "maxOutputChannels = " << deviceInfo->maxOutputChannels;

        QRegExp reg_exp(".*(ASIO|Core.*Audio).*", Qt::CaseInsensitive);
        if (str.contains(reg_exp))
        {
            if (deviceInfo->maxInputChannels > 0
                && deviceInfo->maxOutputChannels > 1)
            {
                result = DevIndex;
                break;
            }
        }
    }
    return result;
}
Then I pass the given device index to the following method to open and start a stream:
bool MyClass::startAudio(int DevIndex)
{
    PaError err = paNoError;

    PaStreamParameters in_param;
    in_param.device = DevIndex;
    g_ChannelCount = min(Pa_GetDeviceInfo(DevIndex)->maxInputChannels,
                         MAX_INPUT_COUNT);
    in_param.channelCount = g_ChannelCount;
    in_param.hostApiSpecificStreamInfo = NULL;
    in_param.sampleFormat = paFloat32 | paNonInterleaved;
    in_param.suggestedLatency = 0;
    // Pa_GetDeviceInfo(DevIndex)->defaultLowInputLatency;

    PaStreamParameters out_param;
    out_param.device = DevIndex;
    out_param.channelCount = 2; // I do not need more than 2 output channels
    out_param.hostApiSpecificStreamInfo = NULL;
    out_param.sampleFormat = paFloat32 | paNonInterleaved; // Not all devices support 32 bits
    out_param.suggestedLatency = 0;
    // Pa_GetDeviceInfo(DevIndex)->defaultLowOutputLatency;

    if (err == paNoError)
    {
        err = Pa_OpenStream(&stream,
                            &in_param,
                            &out_param,
                            nSampleRate,
                            cBufferSize /*paFramesPerBufferUnspecified*/,
                            paNoFlag,
                            process,
                            0);
    }

    err = Pa_StartStream(stream);
    ...
}
OK, when I call Pa_GetDeviceCount() I get many available devices.
Currently I have the onboard sound card and a USB sound card (each of them has input and output devices).
When I configure the input and output of the onboard sound card it works fine,
but when I configure the input of the USB card and the output of the onboard card it returns err = paInvalidDevice.
I also saw that each card appears as several devices that differ in hostApi (paInDevelopment=0, paDirectSound=1, paMME=2).
What is the difference between them, and which device should I choose? Is it OK to mix them, i.e. choose an input device that has paDirectSound and an output that has paInDevelopment?
Another thing I noticed is the sample rate and number of channels: is it OK to have an input with a sample rate of 44100 and an output of 48000?
And one last thing: you configured the variables nSampleRate and cBufferSize according to what?
Thanks,
Aviel
It is because the host API types of the input device and the output device are not the same. For example:
import sounddevice as sd
sd.query_devices()
will list the available devices:
> 1 ADAT (7+8) (RME Fireface UC), MME (2 in, 0 out)
2 SPDIF/ADAT (1+2) (RME Fireface , MME (2 in, 0 out)
3 Analog (1+2) (RME Fireface UC), MME (2 in, 0 out)
...
14 扬声器 (RME Fireface UC), MME (0 in, 8 out)
...
44 ASIO Fireface USB, ASIO (18 in, 18 out)
where I have deleted the irrelevant devices. Here we can see:
device 3 is an input device with host API MME,
device 14 is an output device with host API MME,
device 44 is an I/O device with host API ASIO.
Now, if you want to call the sounddevice.playrec() method, you must make sure the I/O devices you choose use the same kind of API, for example:
import sounddevice as sd
sd.playrec(data, device=(3, 14)) # OK
sd.playrec(data, device=(44, 44)) # OK
sd.playrec(data, device=(3, 44)) # bad
sd.playrec(data, device=(44, 14)) # bad

Live555 truncates encoded data from FFmpeg

I am trying to stream H.264 data using Live555 over RTSP.
I am capturing data using V4L2, encoding it with FFmpeg, and then passing the data to Live555's DeviceSource file, in which I use the H264VideoStreamFramer class.
Below are my codec settings for configuring the encoder's AVCodecContext:
codec = avcodec_find_encoder_by_name(CODEC_NAME);
if (!codec) {
    cerr << "Codec " << codec_name << " not found\n";
    exit(1);
}

c = avcodec_alloc_context3(codec);
if (!c) {
    cerr << "Could not allocate video codec context\n";
    exit(1);
}

pkt = av_packet_alloc();
if (!pkt)
    exit(1);

/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = PIC_HEIGHT;
c->height = PIC_WIDTH;
/* frames per second */
c->time_base = (AVRational){1, FPS};
c->framerate = (AVRational){FPS, 1};

c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P;
c->rtp_payload_size = 30000;

if (codec->id == AV_CODEC_ID_H264)
    av_opt_set(c->priv_data, "preset", "fast", 0);
av_opt_set_int(c->priv_data, "slice-max-size", 30000, 0);

/* open it */
ret = avcodec_open2(c, codec, NULL);
if (ret < 0) {
    cerr << "Could not open codec\n";
    exit(1);
}
I get the encoded data using the avcodec_receive_packet() function, which returns an AVPacket.
I then pass the AVPacket's data into the DeviceSource file; below is a snippet of my Live555 code:
void DeviceSource::deliverFrame() {
    if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet

    u_int8_t* newFrameDataStart = (u_int8_t*) pkt->data;
    unsigned newFrameSize = pkt->size; //%%% TO BE WRITTEN %%%

    // Deliver the data here:
    if (newFrameSize > fMaxSize) { // Condition becomes true many times
        fFrameSize = fMaxSize;
        fNumTruncatedBytes = newFrameSize - fMaxSize;
    } else {
        fFrameSize = newFrameSize;
    }
    gettimeofday(&fPresentationTime, NULL); // If you have a more accurate time - e.g., from an encoder - then use that instead.
    // If the device is *not* a 'live source' (e.g., it comes instead from a file or buffer), then set "fDurationInMicroseconds" here.
    memmove(fTo, newFrameDataStart, fFrameSize);
}
But here my packet's size sometimes exceeds the fMaxSize value, and per LIVE555's logic the frame data is then truncated, so I sometimes get bad frames in VLC.
From the Live555 forum I learned that the encoder should not send packets whose size exceeds fMaxSize, so my question is:
How do I restrict the encoder so that it limits the packet size?
Thanks in advance,
Harshil
You can increase the maximum allowed sample size by changing "maxSize" in the OutPacketBuffer class in MediaSink.cpp; this worked for me. There are cases where we need high-quality video to be streamed, and I don't think we can always restrict the encoder to never produce samples above a particular size without hurting video quality. In fact, the samples are fragmented by live555's UDP sink to match the default MTU (1500), so increasing the maximum sample size limit has no side effects.
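(A hedged alternative to editing MediaSink.cpp, assuming a live555 build where OutPacketBuffer::maxSize is the public static behind this limit: set it at runtime before any sink or RTSP server is created. The value below is only a placeholder.)

#include "liveMedia.hh"   // OutPacketBuffer is declared in MediaSink.hh

int main()
{
    // Raise the limit before constructing sinks, so frames up to ~600 kB are
    // no longer truncated; pick something larger than your biggest encoded frame.
    OutPacketBuffer::maxSize = 600000;

    // ... set up the usage environment, DeviceSource and RTSP server as usual ...
    return 0;
}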

How to read YUV8 data from an AVI file?

I have an AVI file that contains uncompressed gray video data, and I need to extract frames from it. The size of the file is 22 GB.
How do I do that?
I have already tried ffmpeg, but it gives me a "could not find codec parameters for video stream" message - because there is no codec at work, just frames.
Since OpenCV just uses ffmpeg to read video, that rules out OpenCV as well.
The only path that seems to be left is to try to dig into the raw data, but I do not know how.
Edit: this is the code I use to read from the file with OpenCV. The failure occurs inside the second if. Running the ffmpeg binary on the file also fails with the message above (could not find codec parameters, etc.).
/* register all formats and codecs */
av_register_all();

/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
    fprintf(stderr, "Could not open source file %s\n", src_filename);
    ret = 1;
    goto end;
}
fmt_ctx->seek2any = true;

/* retrieve stream information */
int res = avformat_find_stream_info(fmt_ctx, NULL);
if (res < 0) {
    fprintf(stderr, "Could not find stream information\n");
    ret = 1;
    goto end;
}
Edit:
Here is the sample code I used to attempt the extraction: pastebin. The result I get is an unchanging buffer after every call to AVIStreamRead.
If you do not need cross-platform functionality, the Video for Windows (VFW) API is a good alternative (http://msdn.microsoft.com/en-us/library/windows/desktop/dd756808(v=vs.85).aspx). I will not post an entire code block, since there is quite a lot to do, but you should be able to figure it out from the reference link. Basically, you do an AVIFileOpen, then get the video stream via AVIFileGetStream with streamtypeVIDEO, or alternatively do it in one step with AVIStreamOpenFromFile, and then read samples from the stream with AVIStreamRead. If you get to a point where you fail, I can try to help, but it should be pretty straightforward.
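To sketch that call sequence (not a complete program; error handling is omitted, and the file name, buffer size and frame handling are placeholders):

#include <windows.h>
#include <vfw.h>      // link with vfw32.lib
#include <vector>

void dumpFrames()
{
    AVIFileInit();

    PAVISTREAM stream = NULL;
    if (AVIStreamOpenFromFile(&stream, TEXT("input.avi"), streamtypeVIDEO, 0, OF_READ, NULL) == 0)
    {
        AVISTREAMINFO si;
        AVIStreamInfo(stream, &si, sizeof(si));

        std::vector<unsigned char> buffer(1024 * 1024); // large enough for one raw frame
        for (LONG smp = (LONG)si.dwStart; smp < (LONG)(si.dwStart + si.dwLength); ++smp)
        {
            LONG bytesRead = 0, samplesRead = 0;
            AVIStreamRead(stream, smp, 1, buffer.data(), (LONG)buffer.size(),
                          &bytesRead, &samplesRead);
            // ... bytesRead bytes of raw frame data are now in buffer ...
        }
        AVIStreamRelease(stream);
    }
    AVIFileExit();
}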
Also, I am not sure why ffmpeg is failing; I have been doing raw AVI reading with ffmpeg without any codecs involved. Can you post which call to ffmpeg actually fails?
EDIT:
Regarding the issue you are seeing when the read data size is 0: the AVI file has N slots for frames in each second, where N is the fps of the video. In real life the samples won't come at exactly that rate (e.g. with IP surveillance cameras), so the actual data sample indexes can be non-contiguous, like 1, 5, 11, ..., and VFW inserts empty samples between them (that is where you read a sample with a zero size). What you have to do is call AVIStreamRead with NULL as the buffer and 0 as the size until bRead is not 0, or you run past the last sample. When you get an actual size, you can call AVIStreamRead again on that sample index with the buffer pointer and size. I usually work with compressed video, so I don't use the suggested size, but going by your code snippet I would do something like this:
...
bRead = 0;
do
{
    aviOpRes = AVIStreamRead(ppavi, smpS, 1, NULL, 0, &bRead, &smpN);
} while (bRead == 0 && ++smpS < si.dwLength + si.dwStart);

if (smpS >= si.dwLength + si.dwStart)
    break;

PUCHAR tempBuffer = new UCHAR[bRead];
aviOpRes = AVIStreamRead(ppavi, smpS, 1, tempBuffer, bRead, &bRead, &smpN);

/* do whatever you need */

delete[] tempBuffer;   // array delete, since it was allocated with new[]
...
EDIT 2:
Since this may come in handy to someone, or help you choose between VFW and FFMPEG, I also updated your FFMPEG example so that it parses the same file (sorry for the code quality - it lacks error checking - but I guess you can see the logical flow):
/* register all formats and codecs */
av_register_all();

AVFormatContext* fmt_ctx = NULL;

/* open input file, and allocate format context */
const char *src_filename = "E:\\Output.avi";
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
    fprintf(stderr, "Could not open source file %s\n", src_filename);
    abort();
}

/* retrieve stream information */
int res = avformat_find_stream_info(fmt_ctx, NULL);
if (res < 0) {
    fprintf(stderr, "Could not find stream information\n");
    abort();
}

int video_stream_index = 0; /* video stream is usually 0, but better to look it up in case it's not present */
for (; video_stream_index < fmt_ctx->nb_streams; ++video_stream_index)
{
    if (fmt_ctx->streams[video_stream_index]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        break;
}
if (video_stream_index == fmt_ctx->nb_streams)
    abort();

AVPacket *packet = new AVPacket;
while (av_read_frame(fmt_ctx, packet) == 0)
{
    if (packet->stream_index == video_stream_index)
        printf("Sample nr %d\n", packet->pts);
    av_free_packet(packet);
}
Basically you open the context and read packets from it. You will get both audio and video packets, so you should check whether each packet belongs to the stream of interest. FFMPEG saves you the trouble of the empty frames and gives you only the samples that actually contain data.

How do I create a stereo MP3 file with the latest version of ffmpeg?

I'm updating my code from an older version of ffmpeg (53) to a newer one (54/55). Code that used to work has now been deprecated or removed, so I'm having problems updating it.
Previously I could create a stereo MP3 file using a sample format called:
SAMPLE_FMT_S16
That matched up perfectly with my source stream. This has now been replaced with
AV_SAMPLE_FMT_S16
which works fine for mono recordings, but when I try to create a stereo MP3 file it bugs out at avcodec_open2 with:
"Specified sample_fmt is not supported."
Through trial and error I've found that using
AV_SAMPLE_FMT_S16P
...is accepted by avcodec_open2, but when I get through and create the MP3 file the sound is very distorted - it sounds about 2 octaves lower than usual, with a massive hum in the background - here's an example recording:
http://hosting.ispyconnect.com/example.mp3
I've been told by the ffmpeg guys that this is because I now need to manually deinterleave my byte stream before calling:
avcodec_fill_audio_frame
How do I do that? I've tried using the swresample library without success, and I've tried manually feeding L/R data into avcodec_fill_audio_frame, but the results sound exactly the same as without deinterleaving.
Here is my code for encoding:
void add_audio_sample(AudioWriterPrivateData^ data, BYTE* soundBuffer, int soundBufferSize)
{
    libffmpeg::AVCodecContext* c = data->AudioStream->codec;
    memcpy(data->AudioBuffer + data->AudioBufferSizeCurrent, soundBuffer, soundBufferSize);
    data->AudioBufferSizeCurrent += soundBufferSize;

    uint8_t* pSoundBuffer = (uint8_t*) data->AudioBuffer;
    DWORD nCurrentSize = data->AudioBufferSizeCurrent;

    libffmpeg::AVFrame* frame;
    int got_packet;
    int ret;
    int size = libffmpeg::av_samples_get_buffer_size(NULL, c->channels,
                                                     data->AudioInputSampleSize,
                                                     c->sample_fmt, 1);
    while (nCurrentSize >= size) {
        frame = libffmpeg::avcodec_alloc_frame();
        libffmpeg::avcodec_get_frame_defaults(frame);

        frame->nb_samples = data->AudioInputSampleSize;
        ret = libffmpeg::avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt, pSoundBuffer, size, 1);
        if (ret < 0)
        {
            throw gcnew System::IO::IOException("error filling audio");
        }
        //audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;

        libffmpeg::AVPacket pkt = { 0 };
        libffmpeg::av_init_packet(&pkt);

        ret = libffmpeg::avcodec_encode_audio2(c, &pkt, frame, &got_packet);
        if (ret < 0)
            throw gcnew System::IO::IOException("error encoding audio");

        if (got_packet) {
            pkt.stream_index = data->AudioStream->index;
            if (pkt.pts != AV_NOPTS_VALUE)
                pkt.pts = libffmpeg::av_rescale_q(pkt.pts, c->time_base, c->time_base);
            if (pkt.duration > 0)
                pkt.duration = av_rescale_q(pkt.duration, c->time_base, c->time_base);
            pkt.flags |= AV_PKT_FLAG_KEY;
            if (libffmpeg::av_interleaved_write_frame(data->FormatContext, &pkt) != 0)
                throw gcnew System::IO::IOException("unable to write audio frame.");
        }
        nCurrentSize -= size;
        pSoundBuffer += size;
    }
    memcpy(data->AudioBuffer, data->AudioBuffer + data->AudioBufferSizeCurrent - nCurrentSize, nCurrentSize);
    data->AudioBufferSizeCurrent = nCurrentSize;
}
Would love to hear any ideas - I've been trying to get this working for 3 days now :(
You don't want to advance pSoundBuffer if a frame hasn't been fully encoded (i.e. got_packet isn't set to true), as no memory has been written yet. Also, you are allocating a frame on every loop iteration: there's no need for that, you can reuse the same AVFrame over and over. Your code is also leaking, as you never free the AVFrame.
I wrote some code as part of MythTV that encodes audio to AC3.
It also does what you are looking for: deinterleave the content.
https://github.com/MythTV/mythtv/blob/476b2a826d43fca5e658ebe787c3cb1ec2334f98/mythtv/libs/libmyth/audio/audiooutputdigitalencoder.cpp#L178
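For orientation, a minimal sketch of the deinterleaving step itself, assuming interleaved S16 stereo input and an AV_SAMPLE_FMT_S16P frame; the buffer names are placeholders, not the MythTV code:

#include <cstdint>

// Split interleaved L R L R ... samples into the two per-channel planes that
// AV_SAMPLE_FMT_S16P expects (frame->data[0] = left, frame->data[1] = right).
void deinterleave_s16_stereo(const int16_t* interleaved, int nb_samples,
                             int16_t* left, int16_t* right)
{
    for (int i = 0; i < nb_samples; ++i) {
        left[i]  = interleaved[2 * i];      // channel 0
        right[i] = interleaved[2 * i + 1];  // channel 1
    }
}

// Note: if you keep using avcodec_fill_audio_frame, it expects one contiguous
// buffer laid out plane after plane (all left samples, then all right samples).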
I know this question is old, but for posterity: I'm working on some audio resampling code, and after I ended up with audio that sounded very similar to the MP3 the author linked, I identified the cause as a mismatch between the sampling rate the resampler expects for its input and the actual data.

Streaming H.264 over RTP with libavformat

I've been trying for the past week to implement H.264 streaming over RTP, using x264 as an encoder and libavformat to pack and send the stream. The problem is, as far as I can tell, it's not working correctly.
Right now I'm just encoding random data (x264_picture_alloc) and extracting NAL frames from libx264. This is fairly simple:
x264_picture_t pic_out;
x264_nal_t* nals;
int num_nals;
int frame_size = x264_encoder_encode(this->encoder, &nals, &num_nals, this->pic_in, &pic_out);

if (frame_size <= 0)
{
    return frame_size;
}

// push NALs into the queue
for (int i = 0; i < num_nals; i++)
{
    // create a NAL storage unit
    NAL nal;
    nal.size = nals[i].i_payload;
    nal.payload = new uint8_t[nal.size];
    memcpy(nal.payload, nals[i].p_payload, nal.size);

    // push the storage into the NAL queue
    {
        // lock and push the NAL to the queue
        boost::mutex::scoped_lock lock(this->nal_lock);
        this->nal_queue.push(nal);
    }
}
nal_queue is used for safely passing frames over to a Streamer class which will then send the frames out. Right now it's not threaded, as I'm just testing to try to get this to work. Before encoding individual frames, I've made sure to initialize the encoder.
But I don't believe x264 is the issue, as I can see frame data in the NALs it returns.
Streaming the data is accomplished with libavformat, which is first initialized in a Streamer class:
Streamer::Streamer(Encoder* encoder, string rtp_address, int rtp_port, int width, int height, int fps, int bitrate)
{
    this->encoder = encoder;

    // initialize the AV context
    this->ctx = avformat_alloc_context();
    if (!this->ctx)
    {
        throw runtime_error("Couldn't initialize AVFormat output context");
    }

    // get the output format
    this->fmt = av_guess_format("rtp", NULL, NULL);
    if (!this->fmt)
    {
        throw runtime_error("Unsuitable output format");
    }
    this->ctx->oformat = this->fmt;

    // try to open the RTP stream
    snprintf(this->ctx->filename, sizeof(this->ctx->filename), "rtp://%s:%d", rtp_address.c_str(), rtp_port);
    if (url_fopen(&(this->ctx->pb), this->ctx->filename, URL_WRONLY) < 0)
    {
        throw runtime_error("Couldn't open RTP output stream");
    }

    // add an H.264 stream
    this->stream = av_new_stream(this->ctx, 1);
    if (!this->stream)
    {
        throw runtime_error("Couldn't allocate H.264 stream");
    }

    // initialize codec
    AVCodecContext* c = this->stream->codec;
    c->codec_id = CODEC_ID_H264;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->bit_rate = bitrate;
    c->width = width;
    c->height = height;
    c->time_base.den = fps;
    c->time_base.num = 1;

    // write the header
    av_write_header(this->ctx);
}
This is where things seem to go wrong: av_write_header above seems to do absolutely nothing; I've used Wireshark to verify this. For reference, I use Streamer streamer(&enc, "10.89.6.3", 49990, 800, 600, 30, 40000); to initialize the Streamer instance, with enc being a reference to an Encoder object used to handle x264 previously.
Now when I want to stream out a NAL, I use this:
// grab a NAL
NAL nal = this->encoder->nal_pop();
cout << "NAL popped with size " << nal.size << endl;

// initialize a packet
AVPacket p;
av_init_packet(&p);
p.data = nal.payload;
p.size = nal.size;
p.stream_index = this->stream->index;

// send it out
av_write_frame(this->ctx, &p);
At this point, I can see RTP data appearing over the network, and it looks like the frames I've been sending, even including a little copyright blob from x264. But, no player I've used has been able to make any sense of the data. VLC quits wanting an SDP description, which apparently isn't required.
I then tried to play it through gst-launch:
gst-launch udpsrc port=49990 ! rtph264depay ! decodebin ! xvimagesink
This will sit waiting for UDP data, but when it is received, I get:
ERROR: element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: No RTP
format was negotiated. Additional debug info:
gstbasertpdepayload.c(372): gst_base_rtp_depayload_chain ():
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Input buffers
need to have RTP caps set on them. This is usually achieved by setting
the 'caps' property of the upstream source element (often udpsrc or
appsrc), or by putting a capsfilter element before the depayloader and
setting the 'caps' property on that. Also see
http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README
As I'm not using GStreamer to do the streaming itself, I'm not quite sure what it means by RTP caps. But it makes me wonder if I'm not sending enough information over RTP to describe the stream. I'm pretty new to video and I feel like there's some key thing I'm missing here. Any hints?
H.264 is an encoding standard. It specifies how video data is compressed and stored in a format that can be decompressed into a video stream at a later point.
RTP is a transmission protocol. It specifies the format and order of packets that can carry audio-video data that was encoded by an arbitrary encoder.
GStreamer expects to receive data that conforms to the RTP protocol. Is your expectation that libavformat will produce RTP packets immediately readable by GStreamer warranted? Maybe GStreamer expects an additional stream description that would enable it to accept and decode the streamed packets using the proper decoder? Maybe it requires an additional RTSP exchange or an SDP stream descriptor file?
The error message states pretty clearly that an RTP format has not been negotiated. "Caps" is shorthand for capabilities: the receiver needs to know the transmitter's capabilities to set up the receiving/decoding machinery correctly.
I strongly suggest trying at least to create an SDP file for your RTP stream. libavformat should be able to do it for you.
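A minimal sketch of that last suggestion, assuming a libavformat new enough to expose av_sdp_create (older releases call it avf_sdp_create) and reusing the ctx set up in the Streamer constructor above; the output file name is a placeholder:

// Generate an SDP description for the already-configured RTP output context
// and save it, so a player such as VLC can open the .sdp file instead of the
// bare rtp:// URL.  Error handling omitted.
char sdp[2048];
AVFormatContext* contexts[1] = { this->ctx };
if (av_sdp_create(contexts, 1, sdp, sizeof(sdp)) == 0)
{
    FILE* f = fopen("stream.sdp", "w");
    fputs(sdp, f);
    fclose(f);
}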