I'm developing an application to send video over RTP to a client that can only play H.263 (1996) and H.263+ (1998).
To do this I've encoded the video using libav, following these steps (this is only part of the code):
av_register_all();
avformat_network_init();
Fmt = av_guess_format("rtp", NULL, NULL);
...
st = add_video_stream(FmtCtx, CODEC_ID_H263);
...
avio_open(&FmtCtx->pb, rtp_url, URL_WRONLY);
I then enter a loop where I encode the video. The problem is that the stream generated by this program is encoded as H.263-2000 (or H.263++), which the other side cannot understand; the same thing happens whether I use CODEC_ID_H263 or CODEC_ID_H263P in the initialization.
Is it possible to encode in those older H.263 versions using libav? I haven't managed to do it, not even using ffmpeg commands. The stream is always H.263-2000 (PT=96).
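For reference, a dynamic payload type (PT=96) is what FFmpeg's RTP muxer emits when it packetizes H.263 per RFC 4629, which receivers describe as the H.263-1998/2000 style. The muxer has an "rtpflags" option whose "rfc2190" flag switches to the original RFC 2190 packetization (static PT 34), which old H.263 (1996) receivers expect. A minimal sketch of passing that option when writing the header, assuming an FFmpeg build recent enough to have the flag:

AVDictionary *muxer_opts = NULL;
av_dict_set(&muxer_opts, "rtpflags", "rfc2190", 0); // request RFC 2190 H.263 packetization
if (avformat_write_header(FmtCtx, &muxer_opts) < 0) {
    // the muxer rejected the options or the header could not be written
}
av_dict_free(&muxer_opts);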
I'm using a ReSpeaker Mic Array v2.0 on my robot, and I used the following git repo: https://github.com/furushchev/respeaker_ros.git to capture the audio received by the ReSpeaker. I subscribed to its raw audio ROS topic /audio, which is just byte-array data (http://docs.ros.org/noetic/api/audio_common_msgs/html/msg/AudioData.html).
How can I write the AudioData message's uint8[] data into a WAV file in C++? I would like to play the WAV file by other means afterwards.
I saw that the ROS audio_common library example uses GStreamer to do the writing, but I'm quite confused after reading the code (https://github.com/ros-drivers/audio_common/blob/master/audio_capture/src/audio_capture.cpp).
The example you saw uses GStreamer's alsasrc to capture audio from the mic in this line:
_source = gst_element_factory_make("alsasrc", "source");
So GStreamer's pipeline internally handles/captures the audio byte array and, in the case of input parameters dst_type=="filesink" and format=="wave", encodes it with
_filter = gst_element_factory_make("wavenc", "filter");
and creates the .wav file with
_sink = gst_element_factory_make("filesink", "sink");
On the other hand, running that code with input parameters dst_type=="appsink" and format=="wave" captures audio bytes in the same way but, instead of writing them to a file, publishes them on the ROS topic /audio.
If you cannot (for any reason) use this code with input parameters dst_type=="filesink" and format=="wave", I suppose you will need to use GStreamer's appsrc element and feed it with the bytes from your AudioData message. In that case, the rest of the GStreamer pipeline for encoding and writing to file should remain the same as in the example, as in the sketch below.
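A minimal sketch of that approach, assuming GStreamer 1.x and that the /audio bytes are raw S16LE, 16 kHz, mono samples (check what respeaker_ros actually publishes; the pipeline string, caps and function names here are illustrative, not from the original post):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <cstdint>
#include <vector>

static GstElement *pipeline = nullptr;
static GstElement *appsrc = nullptr;

void init_pipeline()
{
    gst_init(nullptr, nullptr);
    // appsrc replaces alsasrc; the rest mirrors the audio_capture example:
    // raw bytes -> wavenc -> filesink
    pipeline = gst_parse_launch(
        "appsrc name=src ! wavenc ! filesink location=out.wav", nullptr);
    appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    // the caps must match the format respeaker_ros publishes (assumed here)
    GstCaps *caps = gst_caps_new_simple("audio/x-raw",
        "format", G_TYPE_STRING, "S16LE",
        "rate", G_TYPE_INT, 16000,
        "channels", G_TYPE_INT, 1,
        "layout", G_TYPE_STRING, "interleaved", nullptr);
    gst_app_src_set_caps(GST_APP_SRC(appsrc), caps);
    gst_caps_unref(caps);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}

// call this from your /audio subscriber callback with msg->data
void push_audio(const std::vector<uint8_t> &data)
{
    GstBuffer *buf = gst_buffer_new_allocate(nullptr, data.size(), nullptr);
    gst_buffer_fill(buf, 0, data.data(), data.size());
    gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf); // takes ownership of buf
}

// call this once on shutdown; wavenc needs EOS to finalize the WAV header
void finish_recording()
{
    gst_app_src_end_of_stream(GST_APP_SRC(appsrc));
    // wait for the EOS to reach the sink before tearing the pipeline down
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_EOS);
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
}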
I need to encode an mpegts video using the ffmpeg C++ API. The output video shall have two streams: the first one shall be of type AVMEDIA_TYPE_VIDEO; the second one shall be of type AVMEDIA_TYPE_DATA and shall contain a set of KLV data.
I have written my own KLV library to manage the KLV format.
However, I'm not able to create a new video "from scratch" by combining the two streams. Following the implementation in FFMPEG C api h.264 encoding / MPEG2 ts streaming problems, I can successfully encode an mpegts video with a single video stream.
What I can't do is add a new AVMEDIA_TYPE_DATA stream to the output: as soon as I add a data stream using methods like avformat_new_stream(...), muxing stops working and neither the data stream nor the video stream is produced; the output file is empty.
Can anyone suggest a tutorial page or a sample showing how to properly add a data stream to an output video in mpegts format?
Thanks a lot!
I was able to get a KLV stream added to a muxed output by starting with the "muxing.c" example that comes with the FFmpeg source, and modifying it as follows.
First, I created the AVStream as follows, where "oc" is the AVFormatContext (muxer) variable:
AVStream *klv_stream = avformat_new_stream(oc, NULL);
klv_stream->codec->codec_type = AVMEDIA_TYPE_DATA;
klv_stream->codec->codec_id = AV_CODEC_ID_TIMED_ID3;
klv_stream->time_base = AVRational{ 1, 30 };
klv_stream->id = oc->nb_streams - 1;
Then, during the encoding/muxing loop:
AVPacket pkt;
av_init_packet(&pkt);
pkt.data = (uint8_t*)GetKlv(pkt.size);
auto res = write_frame(oc, &video_st.st->time_base, klv_stream, &pkt);
free(pkt.data);
(The GetKlv() function returns a malloc()'ed array of binary data that would be replaced by whatever you're using to get your encoded KLV. It sets pkt.size to the length of the data.)
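(The write_frame() helper also comes from the stock muxing.c example; in recent FFmpeg versions it is essentially the following sketch: rescale the packet's timestamps, then hand it to the interleaving muxer.)

static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base,
                       AVStream *st, AVPacket *pkt)
{
    // rescale packet timestamps from the codec time base to the stream time base
    av_packet_rescale_ts(pkt, *time_base, st->time_base);
    pkt->stream_index = st->index;
    // write the frame; the muxer interleaves it with the other streams
    return av_interleaved_write_frame(fmt_ctx, pkt);
}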
With this modification, and specifying a ".ts" target file, I get a three-stream file that plays just fine in VLC. The KLV stream has a stream_type of 0x15, indicating synchronous KLV.
Note the codec_id value of AV_CODEC_ID_TIMED_ID3. According to the libavformat source file "mpegtsenc.c", a value of AV_CODEC_ID_OPUS should result in stream_type 6, for asynchronous KLV (no accompanying PTS or DTS). This is actually important for my application, but I'm unable to get it to work -- the call to avformat_write_header() throws a division by zero error. If I get that figured out, I'll add an update here.
I have searched all around and cannot find any examples or tutorials on how to access a webcam using ffmpeg in C++. Any sample code or any help pointing me to some documentation would be greatly appreciated.
Thanks in advance.
I have been working on this for months now. Your first "issue" is that ffmpeg (libavcodec and the other ffmpeg libs) does NOT access webcams, or any other device.
For a basic USB webcam, or an audio/video capture card, you first need driver software to access that device. On Linux, these drivers fall under the Video4Linux (V4L2, as it is known) category, which are modules that are part of most distros. If you are working with MS Windows, then you need to get an SDK that allows you to access the device. MS may have something for accessing generic devices, but from my experience they are not very capable, if they work at all. If you've made it this far, then you now have raw frames (video and/or audio).
THEN you get to the ffmpeg part - libavcodec - which takes the raw frames (audio and/or video) and encodes them into streams, which ffmpeg can then mux into your final container.
I have searched, but have found very few examples of all of these, and most are piecemeal.
If you don't need to actually code all of this yourself, the command-line ffmpeg, as well as vlc, can access these devices, capture, save to files, and even stream.
That's the best I can do for now.
ken
For Windows, use dshow.
For Linux (like Ubuntu), use Video4Linux (V4L2).
FFmpeg can take input from V4L2 and do the processing.
To find the USB video device path, type: ls /dev/video*
E.g.: /dev/video(n) where n = 0 / 1 / 2 …
AVInputFormat – the struct which holds the information about the input device format / media device format.
av_find_input_format("v4l2") [linux]
avformat_open_input(AVFormatContext, "/dev/video(n)", AVInputFormat, NULL)
If the return value is != 0, there is an error.
Now you have accessed the camera using FFmpeg and can continue the operation.
Sample code is below.
#include <iostream>
extern "C" {
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
}
using namespace std;

int CaptureCam()
{
    avdevice_register_all(); // register the device demuxers (v4l2, dshow, ...)
    avcodec_register_all();
    av_register_all();

    const char *dev_name = "/dev/video0"; // here mine is video0, it may vary
    AVInputFormat *inputFormat = av_find_input_format("v4l2");

    AVDictionary *options = NULL;
    av_dict_set(&options, "framerate", "20", 0);

    AVFormatContext *pAVFormatContext = NULL;
    // check video source; pass &options so the framerate request reaches the demuxer
    if (avformat_open_input(&pAVFormatContext, dev_name, inputFormat, &options) != 0)
    {
        cout << "\nOops, couldn't open video source\n\n";
        return -1;
    }
    cout << "\nSuccess!";
    return 0;
} // end function
Note: the header file <libavdevice/avdevice.h> must be included.
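From there, a minimal read loop (a sketch continuing from CaptureCam() above; decoding the packets is a separate step) could look like this:

AVPacket packet;
while (av_read_frame(pAVFormatContext, &packet) >= 0)
{
    // packet now holds one captured chunk from /dev/video0
    // (typically rawvideo or mjpeg, depending on the camera)
    av_free_packet(&packet); // av_packet_unref() in newer FFmpeg versions
}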
This doesn't really answer the question, as I don't have a pure ffmpeg solution for you. However, I personally use Qt for webcam access. It is C++ and has a much better API for accomplishing this. It does add a very large dependency to your code, however.
It definitely depends on the webcam - for example, at work we use IP cameras that deliver a stream of jpeg data over the network. USB will be different.
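(For that kind of network camera, libavformat can often open the MJPEG HTTP stream directly; a minimal sketch, with an illustrative URL:)

extern "C" {
#include <libavformat/avformat.h>
}

int open_ip_camera(AVFormatContext **ctx)
{
    av_register_all();
    avformat_network_init();
    // the URL is an assumed example; many MJPEG cameras expose a path like this
    return avformat_open_input(ctx, "http://192.168.0.10/video.mjpg", NULL, NULL);
}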
You can look at the DirectShow samples, e.g. PlayCap (but they show AmCap and DVCap samples too). Once you have a DirectShow input device (chances are whatever device you have will provide this natively), you can hook it up to ffmpeg via the dshow input device, as sketched below.
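A minimal sketch of the libavdevice side of that (the device name is illustrative; list the real names with ffmpeg -list_devices true -f dshow -i dummy):

extern "C" {
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
}

int open_dshow_camera(AVFormatContext **ctx)
{
    avdevice_register_all(); // registers the dshow input device
    AVInputFormat *dshow = av_find_input_format("dshow");
    // "video=..." selects a capture device by its DirectShow friendly name
    return avformat_open_input(ctx, "video=Integrated Camera", dshow, NULL);
}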
And having spent 5 minutes browsing the ffmpeg site to get those links, I see this...
I am writing a client-server system that uses the FFMPEG library to parse an H.264 stream into NAL units on the server side, then uses channel coding to send them over the network to the client side, where my application must be able to play the video.
The question is how to play the received AVPackets (NAL units) in my application as a video stream.
I have found this tutorial helpful and used it as a base for both the server and the client side.
Some sample code or a resource related to playing video not from a file, but from data inside the program, using the FFMPEG library would be very helpful.
I am sure that the received information is sufficient to play the video, because I tried saving the received data as a .h264 or .mp4 file and it can be played by VLC.
From what I understand of your question, you have the AVPackets and want to play a video. In reality this is two problems: 1. decoding your packets, and 2. playing the video.
For decoding your packets with FFmpeg, you should take a look at the documentation for AVPacket, AVCodecContext and avcodec_decode_video2 to get some ideas; the general idea is that you want to do something along the lines of the following (just wrote this in the browser, take it with a grain of salt):
//the context; set this appropriately based on your video. See the above links for the documentation
AVCodecContext *decoder_context;
std::vector<AVPacket> packets; //assume this has your packets
...
AVFrame *decoded_frame = av_frame_alloc();
int ret = -1;
int got_frame = 0;
for (AVPacket packet : packets)
{
    avcodec_get_frame_defaults(decoded_frame); //reset the frame before reuse
    ret = avcodec_decode_video2(decoder_context, decoded_frame, &got_frame, &packet);
    if (ret < 0) {
        //had an error decoding the current packet
        break;
    }
    if (got_frame)
    {
        //send to whatever video player queue you're using/do whatever with the frame
        ...
    }
    got_frame = 0;
    av_free_packet(&packet);
}
It's a pretty rough sketch, but that's the general idea for decoding the AVPackets. As for playing the video, you have many options, which will likely depend more on your clients. What you're asking about is a pretty large problem; I'd advise familiarizing yourself with the FFmpeg documentation and the examples provided on the FFmpeg site. Hope that makes sense.
My C++ application receives a H.264 RTP video stream.
Right now it decodes the stream, saves it into a YUV file, and later I use ffmpeg to re-encode the file into something suitable to watch on a Windows PC (e.g. MPEG-4 in AVI).
Shouldn't it be possible to save the H.264 stream into an AVI (or similar) container without having to decode and re-encode it? That would require some H.264 decoder on the PC to watch it, but it should be much more efficient.
How could that be done? Are there any libraries supporting that?
Using ffmpeg is correct, but the answers posted so far don't look right to me.
The correct switch should be:
-vcodec copy
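For example, if you have already saved the raw H.264 elementary stream to a file, a plain remux into AVI with no transcoding (filenames are illustrative) is:

ffmpeg -i recording.h264 -vcodec copy recording.avi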
Your program could pipe the RTP itself through ffmpeg - even invoking it using popen3().
It seems that you need to use an intermediate SDP file - I speculate that you could specify a file you created as a named pipe, or with tmpfile(), which your application writes to - using the file as an intermediary.
The command line would be something like:
int p[3];
const char* const out_fmt = "avi";
const char* cmd[] = {"ffmpeg","-f","sdp","-i",temp_sdp_filename,"-vcodec","copy","-f",out_fmt,"-",NULL}; // "sdp" input format assumed here
if(-1 == popen3(p,cmd)) ...
// write the rtp that you receive to p[STDIN_FILENO]
// read the avi from p[STDOUT_FILENO]
// read any messages and error text from p[STDERR_FILENO]
I believe that in this circumstance ffmpeg is clever enough to repackage the container (RTP stream vs. AVI) without transcoding the video and audio (this is the -vcodec copy switch); therefore, you'd have no loss of quality and it'd be blazingly fast.