Read SEI data from an HEVC video with FFmpeg - C++

I’ve been trying to create some HEVC videos programmatically using the FFmpeg C++ libraries and the x265 encoder, using --master-display, --max-cll and other SEI data options from x265. Now, to make sure this information is being written correctly, I’d like to know how I can read this SEI data back from the generated video file, preferably using the FFmpeg C++ libraries and functions.
I have implemented a video decoder that works by calling av_read_frame() and decoding the frames from each AVPacket obtained. I’m not sure whether I can get the SEI information before this process, from these packets, or with a similar approach.

The SEI reading code lives here; you can add debug messages to see whether individual values are being read as expected. The consumer code for the SEI bits is here; it calculates the angle at which the video should be presented to the user. This is exported in the AVFrame as display-matrix side data, which you can read using the API in display.h.
The application can then use this angle to rotate the image accordingly, e.g. using the rotate avfilter.
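For completeness: if the goal is specifically to verify the x265 --master-display and --max-cll values, the decoder exports those SEI messages as frame side data, alongside the display matrix mentioned above. Below is a minimal sketch of reading them back; the dump_hdr_side_data() helper name is mine, and it is meant to be called on frames returned by avcodec_receive_frame().

```cpp
extern "C" {
#include <libavutil/frame.h>
#include <libavutil/rational.h>
#include <libavutil/display.h>
#include <libavutil/mastering_display_metadata.h>
}
#include <cstdio>

// Inspect a decoded AVFrame for the HDR metadata written by x265
// (--master-display / --max-cll) and for the display-matrix side data
// mentioned above. "frame" is assumed to come from avcodec_receive_frame().
static void dump_hdr_side_data(const AVFrame *frame)
{
    if (const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA)) {
        const auto *md = reinterpret_cast<const AVMasteringDisplayMetadata *>(sd->data);
        if (md->has_luminance)
            std::printf("mastering display: min %.4f cd/m2, max %.4f cd/m2\n",
                        av_q2d(md->min_luminance), av_q2d(md->max_luminance));
        if (md->has_primaries)
            std::printf("white point: (%.4f, %.4f)\n",
                        av_q2d(md->white_point[0]), av_q2d(md->white_point[1]));
    }

    if (const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL)) {
        const auto *cll = reinterpret_cast<const AVContentLightMetadata *>(sd->data);
        std::printf("MaxCLL %u, MaxFALL %u\n", cll->MaxCLL, cll->MaxFALL);
    }

    if (const AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX)) {
        double angle = av_display_rotation_get(
            reinterpret_cast<const int32_t *>(sd->data));
        std::printf("display rotation: %.1f degrees\n", angle);
    }
}
```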

Related

how to create AVPacket manually with encoded AAC data to send mpegts?

I'm developing an app which sends an MPEG-2 TS stream using the FFmpeg API (avio_open, avformat_new_stream, etc.).
The problem is that the app already has AAC-LC audio, so the audio frames do not need to be encoded; my app just passes through the data received from the socket buffer.
To open and send MPEG-TS using FFmpeg, as far as I know I must have an AVFormatContext, which is created by the FFmpeg API for the encoder.
Can I create an AVFormatContext manually for already-encoded AAC-LC data, or should I decode and re-encode the data? The information I know is the sample rate, codec and bitrate.
Any help will be greatly appreciated. Thanks in advance.
Yes, you can use the encoded data as-is if your container supports it. There are two steps involved here - encoding and muxing. Encoding compresses the data; muxing mixes it together in the output file so that the packets are properly interleaved. The muxing example in the FFmpeg distribution helped me with this.
You might also take a look at the following class: https://sourceforge.net/p/karlyriceditor/code/HEAD/tree/src/ffmpegvideoencoder.cpp - this file is from one of my projects and contains a video encoder. Starting from line 402 you'll see the setup for non-converted audio - it is a somewhat hackish way, but it worked. Unfortunately I still ended up re-encoding the audio, because for my formats it was not possible to achieve the frame-perfect synchronization I needed.
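To illustrate the mux-only path, here is a rough sketch of writing already-encoded AAC-LC into MPEG-TS without touching an encoder. The file name, sample rate and bitrate are placeholders for the values you already know, error handling is omitted, and the channel-layout call assumes FFmpeg 5.1 or newer.

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
}

// Sketch: mux already-encoded AAC-LC into MPEG-TS without re-encoding.
int main()
{
    AVFormatContext *oc = nullptr;
    avformat_alloc_output_context2(&oc, nullptr, "mpegts", "out.ts");

    AVStream *st = avformat_new_stream(oc, nullptr);
    st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    st->codecpar->codec_id    = AV_CODEC_ID_AAC;
    st->codecpar->sample_rate = 48000;           // placeholder
    st->codecpar->bit_rate    = 128000;          // placeholder
    av_channel_layout_default(&st->codecpar->ch_layout, 2); // FFmpeg 5.1+

    avio_open(&oc->pb, "out.ts", AVIO_FLAG_WRITE);
    avformat_write_header(oc, nullptr);          // writes PAT/PMT

    // For each AAC frame received from the socket (pseudo-loop):
    //   AVPacket *pkt = av_packet_alloc();
    //   av_new_packet(pkt, aac_frame_size);
    //   memcpy(pkt->data, aac_frame_bytes, aac_frame_size);
    //   pkt->stream_index = st->index;
    //   pkt->pts = pkt->dts = av_rescale_q(frame_index * 1024,   // 1024 samples per AAC frame
    //                                      AVRational{1, 48000}, st->time_base);
    //   av_interleaved_write_frame(oc, pkt);
    //   av_packet_free(&pkt);

    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}
```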

FFmpeg C++, H.264 parser build

I am currently working on a project that simulates webcam video transmission in C++. On the sender side, I capture the raw webcam video with v4l2, encode it with FFmpeg, put the video data into an array and transmit it. On the decoder side, the video data is received into an array, decoded and played. The program works fine with codec_id AV_CODEC_ID_MPEG1VIDEO, but when I try to replace it with AV_CODEC_ID_H264, problems happen in decoding; please refer to FFmpeg c++ H264 decoding error. Some people suggested I use a parser, but I have no idea what a parser in FFmpeg looks like. Is there any simple example of how to build a parser for H.264 in FFmpeg? I cannot find such a tutorial on Google.
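For reference, FFmpeg's own video-decoding example uses a parser for exactly this; the sketch below follows that pattern. decode_packet() is a stand-in for your existing avcodec_send_packet / avcodec_receive_frame code, and buf/buf_size stand for the byte array received from the network.

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdint>

// Sketch of an H.264 parser loop. The parser scans the Annex-B byte stream
// and splits it into whole access units that the decoder can accept.
void parse_and_decode(AVCodecContext *dec_ctx, const uint8_t *buf, int buf_size)
{
    AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264);
    AVPacket *pkt = av_packet_alloc();

    while (buf_size > 0) {
        int used = av_parser_parse2(parser, dec_ctx,
                                    &pkt->data, &pkt->size,
                                    buf, buf_size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (used < 0)
            break;                      // parsing error
        buf      += used;
        buf_size -= used;

        if (pkt->size > 0) {
            // decode_packet(dec_ctx, pkt);   // your existing decode path
        }
    }

    av_packet_free(&pkt);
    av_parser_close(parser);
}
```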

How to feed video data into a DirectShow filter to compress/encode and save to file?

First of all, here is what I'm trying to accomplish:
We'd like to add the capability to our commercial application to generate a video file to visualize data. It should be saved in a reasonably compressed format. It is important that the encoding library/codecs are licensed such that we're allowed to use and sell our software without restriction. It's also important that the media format can easily be played by a customer, i.e. can be played by Windows Media Player without requiring a codec pack to be installed.
I'm trying to use DirectShow in C++ by creating a source filter with one output pin that generates the video. I'm closely following the DirectShow samples called Bouncing Ball and Push Source. In GraphEdit I can successfully connect to a video renderer and see the video play. I have also managed to connect to AVI Mux and then file writer to write an uncompressed AVI file. The only issue with this is the huge file size. However, I have not been able to save the video in a compressed format. This problem also happens with the Ball and Push Source samples.
I can connect the output pin to a WM ASF Writer, but when I click play I get "This graph can't play. Unspecified error (Return code: 0x80004005)."
I can't even figure out how to connect to the WMVideo9 Encoder DMO ("These filters cannot agree on a connection"). I could successfully save to mjpeg, but compression was not very substantial.
Please let me know if I'm doing something wrong in GraphEdit or if my source filter code needs to be modified. Alternatively, if there is another (non-DirectShow) option that would work for me I'm open to suggestions. Thanks.
You don't give enough details to help with your modification of the filters; however, the Ball sample generates output which can be written to a file.
Your choice of the WM ASF Writer filter is okay - it is a stock filter and it is more or less easy to deal with. There is, however, a caveat: you need to select a video-only profile on the filter first, and then connect the video input. The WM ASF Writer won't run with an unconnected input pin, and in its default state it also has an audio input. Of course this can also be done programmatically.
The graph can be as simple as this, and it can be run to generate a playable file.
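A rough sketch of the programmatic version of that setup is below. The profile GUID is only an example from wmsysprf.h; the stock system profiles generally contain an audio stream as well, so for a true video-only profile you would build one with the IWMProfileManager API and apply it with ConfigureFilterUsingProfile() instead.

```cpp
#include <dshow.h>
#include <dshowasf.h>   // IConfigAsfWriter, CLSID_WMAsfWriter
#include <wmsysprf.h>   // system profile GUIDs such as WMProfile_V80_256Video

// Create the WM ASF Writer, configure its profile *before* connecting the
// video input pin, and add it to the graph. The output file name is set
// separately through IFileSinkFilter before running the graph.
HRESULT AddConfiguredAsfWriter(IGraphBuilder *graph, IBaseFilter **writer)
{
    HRESULT hr = CoCreateInstance(CLSID_WMAsfWriter, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_PPV_ARGS(writer));
    if (FAILED(hr)) return hr;

    IConfigAsfWriter *config = nullptr;
    hr = (*writer)->QueryInterface(IID_PPV_ARGS(&config));
    if (SUCCEEDED(hr)) {
        // Example system profile; replace with a custom video-only profile
        // built via IWMProfileManager if needed.
        hr = config->ConfigureFilterUsingProfileGuid(WMProfile_V80_256Video);
        config->Release();
    }
    if (FAILED(hr)) return hr;

    return graph->AddFilter(*writer, L"WM ASF Writer");
}
```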

FFMpeg- Raw compressed data to video

I'm trying to use FFmpeg to create a video. So far I've been playing with a multiplexing example:
http://ffmpeg.org/doxygen/trunk/muxing_8c-source.html, and i'm able to create a compressed video from an already existing video.
Because my program is going to run on an embedded platform I would like to use some custom code (generated by a colleague) to compress the video data and place it into the video file.
So I'm looking for a way to create a video file in C/C++ using FFmpeg in which I have full control over the compression part (basically preventing FFmpeg from doing the compression for me and inserting my own code instead).
To clarify: I'm planning to use this to save footage from an intelligent camera into a compressed H.264 MPEG-4 file.
You could pipe the output with -vcodec rawvideo to your custom program, or write it as a codec and have ffmpeg handle it.
By the way, ffmpeg was superseded by avconv. ffmpeg only exists for backwards compatibility now.
Edit: apparently avconv is a newer fork of ffmpeg, and seems to have more support. Either way, the options are almost the same.
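If you go the library route instead of piping, the mux-only approach might look roughly like this: the external encoder produces the H.264 access units and FFmpeg only writes the container. The open_output()/write_access_unit() names are made up, MPEG-TS is used because it accepts Annex-B NAL units directly (an .mp4 output would additionally need the SPS/PPS copied into codecpar->extradata), and error handling is omitted. When recording stops, finish with av_write_trailer(), avio_closep() and avformat_free_context() as in the muxing example.

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mathematics.h>
}
#include <cstdint>
#include <cstring>

struct Output {
    AVFormatContext *oc = nullptr;
    AVStream *st = nullptr;
    AVRational in_tb{1, 25};     // time base of the incoming frame numbers
};

// Set up an MPEG-TS output with a single H.264 stream; no encoder is opened.
static Output open_output(const char *filename, int width, int height, int fps)
{
    Output out;
    out.in_tb = AVRational{1, fps};
    avformat_alloc_output_context2(&out.oc, nullptr, "mpegts", filename);
    out.st = avformat_new_stream(out.oc, nullptr);
    out.st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    out.st->codecpar->codec_id   = AV_CODEC_ID_H264;
    out.st->codecpar->width      = width;
    out.st->codecpar->height     = height;
    avio_open(&out.oc->pb, filename, AVIO_FLAG_WRITE);
    avformat_write_header(out.oc, nullptr);
    return out;
}

// One call per encoded access unit produced by the external encoder.
static void write_access_unit(Output &out, const uint8_t *data, int size,
                              int64_t frame_index, bool keyframe)
{
    AVPacket *pkt = av_packet_alloc();
    av_new_packet(pkt, size);
    std::memcpy(pkt->data, data, size);
    pkt->stream_index = out.st->index;
    pkt->flags        = keyframe ? AV_PKT_FLAG_KEY : 0;
    pkt->pts = pkt->dts = av_rescale_q(frame_index, out.in_tb, out.st->time_base);
    av_interleaved_write_frame(out.oc, pkt);
    av_packet_free(&pkt);
}
```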

Recording application output to video using FFmpeg (or similar)

We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I could look at options for recording to a simple video file for later optimising/encoding, or for writing directly to a sensibly encoded format.
FFmpeg was suggested in another post but it looks a bit daunting to me. Is it the best option, if not what can be suggested? We can work with LGPL but not full GPL.
We're working on Windows (Win32 not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically after how to do 3 functions:
startRecording() does whatever initialization is needed
recordFrame() takes pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down video system, etc
Check out the sources to Taksi on sourceforge. http://taksi.sourceforge.net/
You need two things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI, not the newer COM APIs, but it still might work for you.
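For the three functions asked for above, a minimal skeleton on top of libavformat/libavcodec might look like the following. It is only a sketch: error handling is stripped, AV_CODEC_ID_MPEG4 is chosen purely because it is available in an LGPL build (libx264 would pull in GPL), and the frame is assumed to arrive as tightly packed YUV420P; a real implementation would convert from RGB with libswscale and derive pts from a wall-clock timestamp.

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
}
#include <cstdint>

// State shared by the three functions; a real implementation would wrap
// this in a class rather than use globals.
static AVFormatContext *fmt = nullptr;
static AVCodecContext  *enc = nullptr;
static AVStream        *st  = nullptr;
static AVFrame         *frm = nullptr;
static int64_t          next_pts = 0;

bool startRecording(const char *filename, int width, int height, int fps)
{
    avformat_alloc_output_context2(&fmt, nullptr, nullptr, filename);
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    st  = avformat_new_stream(fmt, codec);
    enc = avcodec_alloc_context3(codec);
    enc->width     = width;
    enc->height    = height;
    enc->pix_fmt   = AV_PIX_FMT_YUV420P;
    enc->time_base = AVRational{1, fps};
    enc->bit_rate  = 4000000;
    if (fmt->oformat->flags & AVFMT_GLOBALHEADER)
        enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    if (avcodec_open2(enc, codec, nullptr) < 0)
        return false;
    avcodec_parameters_from_context(st->codecpar, enc);
    st->time_base = enc->time_base;

    frm = av_frame_alloc();
    frm->format = enc->pix_fmt;
    frm->width  = width;
    frm->height = height;
    av_frame_get_buffer(frm, 0);

    avio_open(&fmt->pb, filename, AVIO_FLAG_WRITE);
    return avformat_write_header(fmt, nullptr) >= 0;
}

// Send one frame (or a flush signal) to the encoder and write out whatever
// packets it produces.
static void encode_and_write(AVFrame *frame)
{
    AVPacket *pkt = av_packet_alloc();
    avcodec_send_frame(enc, frame);
    while (avcodec_receive_packet(enc, pkt) == 0) {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(fmt, pkt);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}

bool recordFrame(const uint8_t *yuv420p)   // tightly packed Y, U, V planes
{
    uint8_t *src_data[4];
    int      src_linesize[4];
    av_frame_make_writable(frm);
    av_image_fill_arrays(src_data, src_linesize, yuv420p,
                         AV_PIX_FMT_YUV420P, frm->width, frm->height, 1);
    av_image_copy(frm->data, frm->linesize,
                  (const uint8_t **)src_data, src_linesize,
                  AV_PIX_FMT_YUV420P, frm->width, frm->height);
    frm->pts = next_pts++;          // real code would derive this from a clock
    encode_and_write(frm);
    return true;
}

void endRecording()
{
    encode_and_write(nullptr);      // flush delayed packets
    av_write_trailer(fmt);
    avio_closep(&fmt->pb);
    avcodec_free_context(&enc);
    av_frame_free(&frm);
    avformat_free_context(fmt);
    fmt = nullptr;
}
```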