How to write a ROS AudioData message into a WAV file? - C++

I'm using a ReSpeaker Mic Array v2.0 on my robot, with the following git repo: https://github.com/furushchev/respeaker_ros.git to capture the audio picked up by the device. I subscribed to its raw audio ROS topic /audio, which is just byte-array data (http://docs.ros.org/noetic/api/audio_common_msgs/html/msg/AudioData.html).
How can I write the AudioData message's uint8[] data into a WAV file in C++? I would like to play the WAV file with other tools afterwards.
I saw that the ROS audio_common library example uses GStreamer to do the writing, but I'm quite confused after reading the code (https://github.com/ros-drivers/audio_common/blob/master/audio_capture/src/audio_capture.cpp).

The example you saw uses GStreamer's alsasrc to capture audio from the mic in this line:
_source = gst_element_factory_make("alsasrc", "source");
So GStreamer's pipeline internally handles the captured audio byte array and, when the input parameters are dst_type=="filesink" and format=="wave", encodes it with
_filter = gst_element_factory_make("wavenc", "filter");
and creates the .wav file with
_sink = gst_element_factory_make("filesink", "sink");
On the other hand, running that code with input parameters dst_type=="appsink" and format=="wave" captures the same audio bytes but, instead of writing them to a file, publishes them on the ROS topic /audio.
If you cannot (for any reason) use this code with input parameters dst_type=="filesink" and format=="wave", I suppose you will need to use GStreamer's appsrc element and feed it the bytes from your AudioData messages. In that case, the rest of the GStreamer pipeline for encoding and writing to file should remain the same as in the example; a sketch of that approach follows.
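For illustration, here is a minimal, untested sketch of that appsrc route. The caps values (S16LE, 16000 Hz, mono) and the output filename are my assumptions; check what respeaker_ros actually publishes and adjust accordingly:
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <ros/ros.h>
#include <audio_common_msgs/AudioData.h>

static GstElement *pipeline = nullptr;
static GstElement *appsrc = nullptr;

// Push every incoming AudioData byte array into the GStreamer pipeline.
void audioCallback(const audio_common_msgs::AudioData::ConstPtr &msg)
{
  GstBuffer *buf = gst_buffer_new_allocate(nullptr, msg->data.size(), nullptr);
  gst_buffer_fill(buf, 0, msg->data.data(), msg->data.size());
  gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf);  // takes ownership of buf
}

int main(int argc, char **argv)
{
  gst_init(&argc, &argv);
  ros::init(argc, argv, "audio_to_wav");
  ros::NodeHandle nh;

  // appsrc -> wavenc -> filesink: the same tail as audio_capture's "filesink" mode.
  pipeline = gst_parse_launch(
      "appsrc name=src format=time ! wavenc ! filesink location=out.wav", nullptr);
  appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "src");

  // Tell appsrc what the raw bytes are; these values are assumptions.
  GstCaps *caps = gst_caps_new_simple("audio/x-raw",
      "format",   G_TYPE_STRING, "S16LE",
      "rate",     G_TYPE_INT,    16000,
      "channels", G_TYPE_INT,    1,
      "layout",   G_TYPE_STRING, "interleaved", NULL);
  gst_app_src_set_caps(GST_APP_SRC(appsrc), caps);
  gst_caps_unref(caps);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  ros::Subscriber sub = nh.subscribe("/audio", 100, audioCallback);
  ros::spin();

  // Send EOS so wavenc can finalize the WAV header before tearing down.
  gst_app_src_end_of_stream(GST_APP_SRC(appsrc));
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}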

Related

Write H.264 stream in buffer to a streamable mp4 using ffmpeg

I wrote code to create an H.264 stream; it has a loop that generates H.264-encoded frames.
while (true) {
    ...
    x264_encoder_encode(encoder, &buffer, &i_buffer, &pic_in, &pic_out);
    ...
    /* TODO: write one frame from the buffer to a streamable mp4 file */
}
Every single time, an H.264 encoded frame is generated and stored in the buffer. How can I write it into a streamable mp4 file directly through the buffer?
I spent a lot of time searching for a solution. All I can find is how to read a stream from a file using
avformat_open_input(&fmtCtx, in_filename, 0, 0)
Is there any way to read directly from buffer without a file?
MP4 is actually not streamable. So, in other words, you can't do it at all. I ran into that very problem.
The reason it won't work is that when you open an MP4 file, the player needs all sorts of parameters, which by default get saved at the end of the file. When you create an MP4, you can always forcibly save that info at the start. However, to know what those parameters are, you need all the data first. And without those parameters, the software trying to load the MP4 fails very early on. This is true for some other formats, such as WebM videos and .m4a or .wav for audio.
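(That "forcibly save that info at the start" step is what FFmpeg exposes as the faststart flag when remuxing; note that it still needs the complete file first, which is exactly the problem:)
ffmpeg -i in.mp4 -c copy -movflags faststart out.mp4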
What you have to do is stream the actual H.264, possibly over RTSP, or in a format of your own if you're in control of both sides.
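To make that concrete, here is a minimal sketch of the "stream the actual H.264" route, reusing the names from the question's loop: dump the Annex-B NAL units as-is. A raw .h264 elementary stream can be written or piped incrementally and remuxed into MP4 later, once it is complete (e.g. ffmpeg -i out.h264 -c copy out.mp4):
#include <cstdio>
#include <x264.h>

// Assumes encoder, pic_in and pic_out are set up as in the question's code.
void dump_raw_h264(x264_t *encoder, x264_picture_t &pic_in, x264_picture_t &pic_out)
{
    FILE *out = std::fopen("out.h264", "wb");
    x264_nal_t *nals = nullptr;  // the question's "buffer"
    int n_nals = 0;              // the question's "i_buffer"
    while (true /* until input is exhausted */) {
        // Returns the total byte size of the NALs produced for this frame;
        // x264 guarantees their payloads are contiguous in memory.
        int frame_size = x264_encoder_encode(encoder, &nals, &n_nals, &pic_in, &pic_out);
        if (frame_size > 0)
            std::fwrite(nals[0].p_payload, 1, frame_size, out);
        // ... load the next picture into pic_in; at end of input, drain
        // x264_encoder_delayed_frames() before closing ...
    }
    std::fclose(out);
}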

ffmpeg c++ API encode mpegts with KLV data stream

I need to encode an mpegts video using the FFmpeg C++ API. The output video shall have two streams: the first one of type AVMEDIA_TYPE_VIDEO, and the second one of type AVMEDIA_TYPE_DATA, containing a set of KLV data.
I have written my own KLV library to manage the KLV format.
However, I'm not able to create a new video "from scratch" by combining the two streams. Following the implementation in FFMPEG C api h.264 encoding / MPEG2 ts streaming problems, I can successfully encode an mpegts video with a single video stream.
However, I'm not able to add a new AVMEDIA_TYPE_DATA stream to the output video: as soon as I add a data stream using methods like avformat_new_stream(...), neither the data stream nor the video stream is produced and the output file is empty.
Can anyone suggest a tutorial page or a sample on how to properly add a data stream to my output video in mpegts format?
Thanks a lot!
I was able to get a KLV stream added to a muxed output by starting with the "muxing.c" example that comes with the FFmpeg source, and modifying it as follows.
First, I created the AVStream as follows, where "oc" is the AVFormatContext (muxer) variable:
AVStream *klv_stream = avformat_new_stream(oc, NULL);
klv_stream->codec->codec_type = AVMEDIA_TYPE_DATA;
klv_stream->codec->codec_id = AV_CODEC_ID_TIMED_ID3;
klv_stream->time_base = AVRational{ 1, 30 };
klv_stream->id = oc->nb_streams - 1;
Then, during the encoding/muxing loop:
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = (uint8_t*)GetKlv(pkt.size);  // GetKlv() also sets pkt.size (see note below)
auto res = write_frame(oc, &video_st.st->time_base, klv_stream, &pkt);
free(pkt.data);  // release the malloc()'ed KLV buffer once it has been muxed
(The GetKlv() function returns a malloc()'ed array of binary data that would be replaced by whatever you're using to get your encoded KLV. It sets pkt.size to the length of the data.)
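(For reference, the write_frame() helper from that muxing.c example does essentially the following; this is paraphrased from the FFmpeg sources, so check the version you have:)
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base,
                       AVStream *st, AVPacket *pkt)
{
    // Rescale packet timestamps from the codec time base to the stream time base.
    av_packet_rescale_ts(pkt, *time_base, st->time_base);
    pkt->stream_index = st->index;
    // Write the compressed frame to the media file.
    return av_interleaved_write_frame(fmt_ctx, pkt);
}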
With this modification, and specifying a ".ts" target file, I get a three-stream file that plays just fine in VLC. The KLV stream has a stream_type of 0x15, indicating synchronous KLV.
Note the codec_id value of AV_CODEC_ID_TIMED_ID3. According to the libavformat source file "mpegtsenc.c", a value of AV_CODEC_ID_OPUS should result in stream_type 6, for asynchronous KLV (no accompanying PTS or DTS). This is actually important for my application, but I'm unable to get it to work -- the call to avformat_write_header() throws a division by zero error. If I get that figured out, I'll add an update here.

Encoding video on H.263 to send over RTP

I'm developing an application to send video over RTP to a client that can play only H.263 (1996) and H.263+ (1998).
To do this, I've encoded the video using libav, following these steps (this is only part of the code):
av_register_all();
avformat_network_init();
Fmt = av_guess_format("rtp", NULL, NULL);
...
st = add_video_stream(FmtCtx, CODEC_ID_H263);
...
avio_open(&FmtCtx->pb, rtp_url, URL_WRONLY);
and then finally enter a loop where I encode the video. The problem is that the stream generated by this program is encoded as H.263-2000 (or H.263++), which the other side cannot understand; the same thing happens whether I use CODEC_ID_H263 or CODEC_ID_H263P in the initialization.
Is it possible to encode in those old H.263 versions using libav? I haven't managed to do it, not even using ffmpeg commands. The stream is always H.263-2000 (PT=96).
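One hedged guess worth checking: PT=96 is a dynamic payload type, which is what RFC 4629 ("H263-1998"/"H263-2000") packetization uses, and FFmpeg's RTP muxer picks that packetization by default even when the bitstream itself is baseline H.263. The muxer also implements the older RFC 2190 payload format, selectable through its rtpflags option (in FFmpeg at least; I have not checked the Libav fork), for example:
AVDictionary *opts = NULL;
av_dict_set(&opts, "rtpflags", "rfc2190", 0);  // ask for RFC 2190 H.263 packetization
avformat_write_header(FmtCtx, &opts);          // hand the option to the RTP muxer
av_dict_free(&opts);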

gstreamer find out decoding bit rate

The function query_position(gst.FORMAT_BYTES, None)[0] returns the number of bytes in the pipeline after GStreamer has decoded the video/audio. I want to know the number of bytes of the source file that were consumed to decode up to this point in time. Is there a function in the GStreamer API to do this?
Please read the seeking chapter of the pygst docs. You can replace pos_int = self.player.query_position(gst.FORMAT_TIME, None)[0] with your version to get the bytes in real time. They are using a thread object there.
You can also add a timeout method; in Python it's gobject.timeout_add(interval, callback, ...).
I obtained the downloaded data size from the souphttpsrc source element using the onGotChunk event. (onGotChunk is an MPEG-DASH-specific patch to the souphttpsrc element.)
In general, the API
gboolean gst_element_query_duration (GstElement *element, GstFormat format, gint64 *duration);
can be used. Pass the source element as the first argument to this function and check the result.
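For illustration, a sketch of querying the source element for its byte position in the GStreamer 1.0 C API (the pygst calls above are the 0.10 equivalent); the element name "source" is an assumption, use whatever your pipeline names it:
gint64 bytes_consumed = 0;
GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "source");
if (src && gst_element_query_position(src, GST_FORMAT_BYTES, &bytes_consumed))
    g_print("bytes read from source so far: %" G_GINT64_FORMAT "\n", bytes_consumed);
if (src)
    gst_object_unref(src);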

Saving H.264 RTP stream without re-encoding?

My C++ application receives a H.264 RTP video stream.
Right now it decodes the stream, saves it into a YUV file, and later I use ffmpeg to re-encode the file into something suitable to watch on a Windows PC (e.g. MPEG-4 AVI).
Shouldn't it be possible to save the H.264 stream into an AVI (or similar) container without having to decode and re-encode it? That would require an H.264 decoder on the PC for playback, but it should be much more efficient.
How could that be done ? Are there any libraries supporting that ?
Using ffmpeg is correct, but the answers posted so far don't look right to me.
the correct switch should be:
-vcodec copy
Your program could pipe the RTP itself through ffmpeg, even invoking it using popen3().
It seems that you need to use an intermediate SDP file: I speculate that you can create it as a named pipe or with tmpfile(), have your application write to it, and use the file as the intermediary.
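(For illustration, such an SDP file would look roughly like this; the address, port, and payload type are placeholders for whatever your stream actually uses:)
v=0
o=- 0 0 IN IP4 127.0.0.1
s=H264 over RTP
c=IN IP4 127.0.0.1
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000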
The command-line would be something like:
int p[3];
const char* const out_fmt = "avi";
// "sdp" fills the missing value after "-f" in the original; popen3() stands
// for a spawn-a-child-with-three-pipes helper, not a standard libc function
const char* cmd[] = {"ffmpeg","-f","sdp","-i",temp_sdp_filename,"-vcodec","copy","-f",out_fmt,"-",NULL};
if(-1 == popen3(p,cmd)) ...
// write the RTP that you receive to p[STDIN_FILENO]
// read the AVI from p[STDOUT_FILENO]
// read any messages and error text from p[STDERR_FILENO]
I believe that in this circumstance ffmpeg is clever enough to repackage the container (RTP stream vs. AVI) without transcoding the video and audio (this is the -vcodec copy switch); therefore, you'd have no loss of quality and it would be blazingly fast.
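For reference, the standalone equivalent of that pipeline would be something like the following, where session.sdp stands for the intermediate file described above (modern FFmpeg builds also insist on a protocol whitelist for SDP input):
ffmpeg -protocol_whitelist file,udp,rtp -i session.sdp -vcodec copy -f avi out.avi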