I am trying to decode an H.264 video that I encoded using FFmpeg. The encoder used is libx264rgb with the AV_PIX_FMT_BGR0 pixel format. The encoded video plays well inside ffplay.
When I decode the frames, I obtain a planar pixel format (AV_PIX_FMT_GBRP) which is different from the original. Is this normal? If so, is there a way to disable the planar format (either at encoding or at decoding)? That would let me skip the planar-to-packed conversion overhead at runtime.
I used this decoder sample with an AV_CODEC_ID_H264 decoder instead of AV_CODEC_ID_MPEG1VIDEO.
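For reference, the packing step I would like to avoid looks roughly like this (a minimal sketch using libswscale; 'decoded' is a placeholder for the AVFrame returned by avcodec_receive_frame() in my code, and error handling is trimmed):

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/frame.h>
}

// Convert the decoded planar GBRP frame back to packed BGR0.
SwsContext* sws = sws_getContext(
    decoded->width, decoded->height, AV_PIX_FMT_GBRP,  // source: what the decoder gives me
    decoded->width, decoded->height, AV_PIX_FMT_BGR0,  // destination: what I fed the encoder
    SWS_BILINEAR, nullptr, nullptr, nullptr);

AVFrame* packed = av_frame_alloc();
packed->format = AV_PIX_FMT_BGR0;
packed->width  = decoded->width;
packed->height = decoded->height;
av_frame_get_buffer(packed, 0);

sws_scale(sws, decoded->data, decoded->linesize, 0, decoded->height,
          packed->data, packed->linesize);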
Thank you!
I coded an encoder using FFmpeg (C++). The requirements for this encoder are:
The output format should be uncompressed AVI,
Preferably using the RGB24/YUV444 pixel format, since we do not want chroma subsampling,
Most standard players should support the format (Windows Media Player (WMP), VLC).
Using the encoder I wrote, I can write a number of file types right now:
Lossless H.264-encoded video using the YUV420P pixel format and an AVI container. (Obviously not uncompressed and chroma subsampled; however, both WMP and VLC play it without any problem.)
MPEG-4-encoded video using the YUV420P pixel format and an AVI container. (Obviously not uncompressed and chroma subsampled; however, both WMP and VLC play it without any problem.)
AYUV-encoded video using the YUVA444P pixel format. (Uncompressed as far as I understand, and not chroma subsampled. However, VLC does not play this.)
FFV1-encoded video using the YUV444P pixel format. (Lossless and not chroma subsampled. However, WMP does not play this.)
The above is derived from this very useful post.
So I am now looking into the RAWVIDEO encoder from FFmpeg. I can't get it to work, nor can I find an example in the FFmpeg documentation of how to use this encoder for writing video. Can somebody point me in the right direction or supply sample code for this?
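For what it's worth, this is roughly what I am attempting (a simplified sketch; the dimensions, frame rate, and pixel format are placeholders from my setup, and error handling is trimmed):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Open the rawvideo encoder with a packed RGB pixel format.
const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_RAWVIDEO);
AVCodecContext* ctx = avcodec_alloc_context3(codec);
ctx->width     = 640;                // placeholder
ctx->height    = 480;                // placeholder
ctx->time_base = AVRational{1, 25};  // placeholder frame rate
ctx->pix_fmt   = AV_PIX_FMT_RGB24;   // no chroma subsampling
int ret = avcodec_open2(ctx, codec, nullptr);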
Also, if there is another direction I should follow to meet my requirements, feel free to point me to it.
Thanks in advance
I've written a C++ converter based on FFmpeg which can take a link to an HLS stream and convert it into a local .mp4 video. So far, so good; the converter works like a charm, no questions about that.
PROBLEM: No matter what input source I provide to the converter, at the end of the conversion I need to get a video with key-frames ONLY. I need such a video for perfect seeking, forward and reverse.
It's a well-known fact that dependent video frames (P and B) rely on their reference frame (the I-frame), because that frame contains the full pixel map. Based on that, we should be able to recreate an I-frame for each P- and B-frame by merging its data with its I-frame. That's why a command such as ffmpeg -i video.mp4 output%4d.jpg works.
QUESTION: How can I implement an algorithm that merges frames in order to produce a key-frames-only video at the end? What kind of quirks do I need to know about when merging the data of AVPackets?
Thanks.
You cannot "merge" P and B-frames of a compressed stream (e.g. with H.264 codec), to obtain I-frames.
What ffmpeg does with
ffmpeg -i video.mp4 output%4d.jpg
is decode each frame (so it needs to start from an I-frame and then decode all subsequent P- and B-frames in the stream), compress each one back to JPEG, and output a JPEG image for every frame in the original input stream.
If you want to convert an input stream with P/B frames to an intra-only stream (with all I-frames), you need to transcode the stream.
That means decoding all frames of the original stream and encoding them back as an intra-only stream.
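For example, with the ffmpeg command-line tool you can force an all-intra H.264 stream by setting the GOP size to 1 (a sketch; pick your own quality settings):
ffmpeg -i video.mp4 -c:v libx264 -g 1 -crf 18 output.mp4
With -g 1, every frame becomes a key-frame, at the cost of a much larger file.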
I received some video data via RTP/RTSP/SIP; the data is encoded as H.264 and sent by an IP camera. I would like to convert the H.264 keyframe data into a picture and analyze whether it contains faces. I do not want to use a library as huge as FFmpeg; can I do it with just libx264 and OpenCV? How?
Thanks.
No, that is not possible. x264 cannot decode (it is an H.264 encoder only), and it cannot encode JPEG/PNG either. FFmpeg is what you need. If it is too large, do a custom compile that includes only the features you need, and link statically so unused functions are stripped out.
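A minimal build for your use case might look something like this (a sketch; exact component names depend on your FFmpeg version and on how you receive the stream):
./configure --disable-everything --disable-programs --disable-doc \
    --enable-decoder=h264 --enable-parser=h264 \
    --enable-static --disable-shared
That keeps only the H.264 decoder; you would then feed the decoded frames to OpenCV for face detection.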
I am compressing frames coming from a webcam with libx264. So far I have used YUY2 raw frames and swscale to convert the frames to I420, which x264 can use.
Anyway, I would like to add support for MJPEG webcams (a webcam usually provides both, but MJPEG allows higher frame rates and resolutions). What can I use to transcode MJPEG to some format that can be used by x264?
If you already use swscale, why not use ffmpeg/libav (libavcodec) for decoding MJPEG?
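A minimal decode step might look like this (a sketch; jpeg_buf and jpeg_size are placeholders for one compressed MJPEG frame captured from the webcam, and error handling is trimmed):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Open the MJPEG decoder once, up front.
const AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
AVCodecContext* ctx  = avcodec_alloc_context3(codec);
avcodec_open2(ctx, codec, nullptr);

// Per captured frame: wrap the compressed bytes in a packet and decode.
AVPacket* pkt = av_packet_alloc();
pkt->data = jpeg_buf;   // placeholder: compressed MJPEG frame
pkt->size = jpeg_size;  // placeholder: its size in bytes

AVFrame* frame = av_frame_alloc();
avcodec_send_packet(ctx, pkt);
if (avcodec_receive_frame(ctx, frame) == 0) {
    // frame->format is typically AV_PIX_FMT_YUVJ422P or similar; run it
    // through swscale to get I420 for x264, exactly as with the YUY2 path.
}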
I'm using FFmpeg to decode a video stream from an IP camera. I have example code that can decode a video stream with any codec into YUV frames.
But my case is special; I will describe it below.
The IP camera stream is MJPEG, and I want to use FFmpeg to handle it, but I don't want to decode the frames into YUV. I want to keep each frame in JPEG format and save those JPEG buffers into image files (*.jpg).
So far, I can do it by converting the YUV frame (after decoding) to JPEG, but this causes bad performance. Since the video stream is MJPEG, I think I can get the JPEG data before it is decoded to YUV, but I don't know how to do it.
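What I have in mind is something like the following (a sketch; it assumes each demuxed MJPEG packet is a complete JPEG image, which I believe is the case for this kind of camera stream, and the URL is a placeholder):

extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>

int main() {
    // Open the camera stream and find the video stream index.
    AVFormatContext* fmt_ctx = nullptr;
    avformat_open_input(&fmt_ctx, "rtsp://camera/stream", nullptr, nullptr);
    avformat_find_stream_info(fmt_ctx, nullptr);
    int video_idx = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);

    // Dump each compressed MJPEG packet straight to a .jpg file: no decoding at all.
    AVPacket* pkt = av_packet_alloc();
    int frame_no = 0;
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == video_idx) {
            char name[32];
            std::snprintf(name, sizeof(name), "frame%04d.jpg", frame_no++);
            if (FILE* f = std::fopen(name, "wb")) {
                std::fwrite(pkt->data, 1, pkt->size, f);
                std::fclose(f);
            }
        }
        av_packet_unref(pkt);
    }
    avformat_close_input(&fmt_ctx);
    return 0;
}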
Can someone help me?
Many thanks,
T&T