We have a captured pcap file that contains RTP Opus payloads per RFC 6716. We can strip the RTP headers and extract the Opus payloads; now we want to encapsulate the payloads as Ogg Opus per the spec https://datatracker.ietf.org/doc/html/draft-ietf-codec-oggopus-07 (Ogg Encapsulation for the Opus Audio Codec) and send them out, so that VLC can play back the captured Opus. We don't want to save to an Ogg file and then have VLC play it; we want to send the Ogg Opus to VLC directly as soon as each packet is encapsulated. Does anyone have a reference implementation of the encapsulation, or a third-party library I could refer to?
The packets can be read using the libpcap library and then encapsulated in Ogg using the libogg library. There is an example program called opusrtp in the opus-tools package that can sniff for Opus RTP packets on the loopback interface using libpcap and write them to Ogg. You would want to do something similar, but replace the pcap_open_live() call with pcap_open_offline() to read from a pcap save file, and write the Ogg pages from libogg to a socket instead of a file. Also define OPUS_PAYLOAD_TYPE to the RTP payload type you want to look for.
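Here is a minimal sketch of that approach, assuming Ethernet/IPv4/UDP framing with fixed header offsets, a placeholder payload type of 120, a 20 ms frame size for the granule position, and an already-connected socket; the OpusHead/OpusTags header packets required by the spec, and all error handling, are omitted for brevity:

    /* Minimal sketch: read Opus RTP packets from a pcap save file and
     * push Ogg pages to a socket. Assumes Ethernet + IPv4 (no options)
     * + UDP framing; OpusHead/OpusTags header packets omitted. */
    #include <pcap/pcap.h>
    #include <ogg/ogg.h>
    #include <sys/socket.h>

    #define OPUS_PAYLOAD_TYPE 120   /* assumption: set to your stream's PT */

    static void send_pages(int sock, ogg_stream_state *os)
    {
        ogg_page og;
        while (ogg_stream_pageout(os, &og) > 0) {
            send(sock, og.header, og.header_len, 0);
            send(sock, og.body, og.body_len, 0);
        }
    }

    void extract_to_socket(const char *pcap_path, int sock)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *pc = pcap_open_offline(pcap_path, errbuf);
        ogg_stream_state os;
        ogg_stream_init(&os, 0x1234);      /* arbitrary serial number */

        struct pcap_pkthdr *hdr;
        const u_char *data;
        ogg_int64_t granulepos = 0;
        ogg_int64_t packetno = 2;          /* 0 and 1 are OpusHead/OpusTags */

        while (pcap_next_ex(pc, &hdr, &data) == 1) {
            /* fixed offsets: 14 (Ethernet) + 20 (IPv4) + 8 (UDP) = 42 */
            const u_char *rtp = data + 42;
            int rtp_len = hdr->caplen - 42;
            if (rtp_len <= 12 || (rtp[1] & 0x7f) != OPUS_PAYLOAD_TYPE)
                continue;
            int csrc = (rtp[0] & 0x0f) * 4; /* skip CSRC list if present */

            ogg_packet op;
            op.packet = (unsigned char *)rtp + 12 + csrc;
            op.bytes = rtp_len - 12 - csrc;
            op.b_o_s = 0;
            op.e_o_s = 0;
            granulepos += 960;             /* assumption: 20 ms @ 48 kHz */
            op.granulepos = granulepos;
            op.packetno = packetno++;
            ogg_stream_packetin(&os, &op);
            send_pages(sock, &os);
        }
        pcap_close(pc);
        ogg_stream_clear(&os);
    }

Note that ogg_stream_pageout() only emits a page once it is full; since you want to send to VLC as soon as each packet is encapsulated, you can call ogg_stream_flush() instead to force a page out per packet, at the cost of more container overhead.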
I had a similar need, and following the advice from this answer I wrote an adaptation of opusrtp that takes a pcap file as input and generates the .opus file from it.
The gist was indeed to use pcap_open_offline() instead of pcap_open_live(), set the correct payload type, and adjust a few other details to the input file format.
I have the modified opusrtp in a fork on GitHub.
You can use it with something like
./opusrtp --extract PCAPFILE
It generates rtpdump.opus, which you can then transform as needed.
Related
I need to implement VP9 decoding over RTP in an existing project which is written in C++ for the VS9 compiler. As a result I need the bytes of the decoded picture. I tried using libvpx: I compiled libvpx 1.8.0 for the VS9 compiler and added it to my project. I already have the part where I receive packets on some port, then parse the raw packet as RTP and pass the payload to vpx_codec_decode, but libvpx cannot parse the payload.
I can record audio using Media Foundation, but it only gives me a PCM wave buffer. I want to grab the full buffer, encode it to MP3, and then use the new buffer for networking.
What is the right way to encode the audio after receiving the samples? I have gotten lost reading through MediaSession, MediaSinks, SinkWriter, the Transcode API, the Transform API, the Source Resolver, etc.
I see there is an MP3 encoder object, but I can't find documentation on how to use it. I also found an MP3 MediaSink, but I'm not sure how the MediaSink fits into the SourceReader / SinkWriter scheme, or how to create and use the IMFByteStream it requires.
Is Media Foundation the right Windows API for this task?
The overall procedure is as follows:
The client records voice for some duration (e.g., 5 sec) in some format (WebM or WAV).
It then sends the recording to the server using a websocket.
The server receives several packets (each packet is 4096 bytes), and each packet is passed to the Opus decoder.
But the Opus decoder returns an invalid packet error.
The server is written in C++ (using the libwebsocket and libopus libraries) on Ubuntu.
Could anyone help me with how to do this?
A general procedure or some example code would be fine.
It's difficult to find info or a community for this.
Thanks.
The Opus decoder may be expecting an Ogg Opus container file. If you're using WebM, you could extract the encoded audio frames and pass them to a raw Opus decoder that is not dependent on the Ogg container. You could also look at how the ffmpeg project decodes WebM Opus files to PCM.
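As an illustration of the raw-decoder route, here is a minimal libopus sketch, assuming 48 kHz stereo and that each buffer passed in is exactly one Opus packet extracted from the container; feeding arbitrary 4096-byte websocket chunks of a WebM or WAV file is exactly what produces the invalid packet error, because the container must be parsed first:

    /* Minimal sketch: decode a single raw Opus packet with libopus.
     * Each input buffer must be exactly one Opus packet, not a raw
     * chunk of the container file. */
    #include <opus/opus.h>
    #include <stdio.h>

    #define SAMPLE_RATE 48000
    #define CHANNELS    2
    #define MAX_FRAME   5760   /* 120 ms at 48 kHz, the libopus maximum */

    int decode_packet(OpusDecoder *dec, const unsigned char *pkt, int pkt_len,
                      opus_int16 *pcm /* MAX_FRAME * CHANNELS samples */)
    {
        int samples = opus_decode(dec, pkt, pkt_len, pcm, MAX_FRAME, 0);
        if (samples < 0)
            fprintf(stderr, "opus_decode failed: %s\n", opus_strerror(samples));
        return samples;  /* samples per channel, or a negative error code */
    }

    int main(void)
    {
        int err;
        OpusDecoder *dec = opus_decoder_create(SAMPLE_RATE, CHANNELS, &err);
        if (err != OPUS_OK) return 1;
        /* ... feed packets extracted from the container to decode_packet() ... */
        opus_decoder_destroy(dec);
        return 0;
    }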
Could the client send an Ogg Opus file instead of a WebM Opus file? There's a chunk-based decoder written in C that can decode Ogg Opus files very quickly. It's intended for WebAssembly, but the C code is not dependent on WASM. See opus_chunk_decoder.c:
https://github.com/AnthumChris/opus-stream-decoder/tree/master/src
I'm developing an app which sends an MPEG-2 TS stream using the FFmpeg API (avio_open, avformat_new_stream, etc.).
The problem is that the app already has AAC-LC audio, so the audio frames do not need to be encoded; my app just passes through the data received from the socket buffer.
To open and send MPEG-TS using FFmpeg, I must have an AVFormatContext, which as far as I know is created by the FFmpeg API for an encoder.
Can I create an AVFormatContext manually for already-encoded AAC-LC data, or do I have to decode and re-encode the data? The information I have is the sample rate, codec, and bitrate...
Any help will be greatly appreciated. Thanks in advance.
Yes, you can use the encoded data as-is if your container supports it. There are two steps involved here: encoding and muxing. Encoding compresses the data; muxing mixes streams together into the output file so that the packets are properly interleaved. The muxing example in the FFmpeg distribution helped me with this.
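As a rough sketch of that split (mux without encode), the following sets up an MPEG-TS AVFormatContext for pre-encoded AAC-LC and writes packets through it. The 48 kHz/stereo parameters and the UDP output URL are placeholder assumptions, and whether the muxer wants ADTS headers kept or stripped depends on your setup:

    /* Minimal sketch: mux already-encoded AAC-LC into MPEG-TS without
     * re-encoding. Sample rate, channel count, and the UDP output URL
     * are placeholder assumptions; error handling is omitted. */
    #include <libavformat/avformat.h>

    static int write_aac_frame(AVFormatContext *oc, AVStream *st,
                               uint8_t *data, int size, int64_t pts)
    {
        AVPacket *pkt = av_packet_alloc();
        pkt->data = data;            /* one complete encoded AAC frame */
        pkt->size = size;
        pkt->pts  = pkt->dts = pts;  /* in st->time_base units */
        pkt->stream_index = st->index;
        int ret = av_interleaved_write_frame(oc, pkt);
        av_packet_free(&pkt);
        return ret;
    }

    int open_ts_output(AVFormatContext **out_oc, AVStream **out_st)
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, "mpegts",
                                       "udp://127.0.0.1:1234"); /* assumption */

        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
        st->codecpar->codec_id    = AV_CODEC_ID_AAC;
        st->codecpar->sample_rate = 48000;                      /* assumption */
        av_channel_layout_default(&st->codecpar->ch_layout, 2); /* FFmpeg 5.1+ */

        avio_open(&oc->pb, oc->url, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);  /* the muxer sets st->time_base */

        *out_oc = oc;
        *out_st = st;
        return 0;
    }

After the last frame, call av_write_trailer(oc), then avio_closep(&oc->pb) and avformat_free_context(oc).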
You might also take a look at the following class: https://sourceforge.net/p/karlyriceditor/code/HEAD/tree/src/ffmpegvideoencoder.cpp - this file is from one of my projects and contains a video encoder. Starting from line 402 you'll see the setup for non-converted audio. It is a kind of hackish way, but it worked. Unfortunately, I still ended up re-encoding the audio, because for my formats it was not possible to achieve the frame-perfect synchronization I needed.
From this WebRTC Code and API page:
StartRTPDump
This function enables capturing of RTP packets to a binary file on a specific channel and for a given direction. The file can later be replayed using e.g. RTP Tools' rtpplay since the binary file format is compatible with the rtpdump format.
NOTE: It is recommended that you use this API for debugging purposes only since the created files can become very large.
What is a recommended way to save RTP packets into a file for later processing?
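Since the quoted note says the binary format is compatible with rtpdump, one option is to write that format yourself; below is a minimal sketch following the rtptools file layout (an ASCII header line, a 16-byte binary file header, then an 8-byte header per packet, all fields big-endian), with the start-time and source fields left at zero for simplicity:

    /* Minimal sketch of writing RTP packets in rtpdump format, per the
     * rtptools file layout. addr/port describe the capture target and
     * are written into the text header. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <stdint.h>

    FILE *rtpdump_open(const char *path, const char *addr, uint16_t port)
    {
        FILE *f = fopen(path, "wb");
        if (!f) return NULL;
        fprintf(f, "#!rtpplay1.0 %s/%u\n", addr, port);

        /* 16-byte file header: start time (sec/usec), source, port + pad */
        uint32_t hdr[4] = { htonl(0), htonl(0), htonl(0),
                            htonl((uint32_t)port << 16) };
        fwrite(hdr, 1, sizeof hdr, f);
        return f;
    }

    /* offset_ms: milliseconds since the start of the recording */
    void rtpdump_write(FILE *f, const uint8_t *rtp, uint16_t len,
                       uint32_t offset_ms)
    {
        uint16_t length = htons((uint16_t)(len + 8)); /* entry incl. header */
        uint16_t plen   = htons(len);                 /* 0 would mean RTCP */
        uint32_t offset = htonl(offset_ms);
        fwrite(&length, 2, 1, f);
        fwrite(&plen,   2, 1, f);
        fwrite(&offset, 4, 1, f);
        fwrite(rtp, 1, len, f);
    }

A file written this way should be replayable with rtpplay, the same as the output of StartRTPDump.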