I have raw video buffer data that I need to stream over the network as and when clients request it. The data is very large, so I need to compress it and store it at my server location.
One way I understand of doing this is to convert the data into a video file (".avi"), save it, and then stream it frame by frame on request.
I have used OpenCV's VideoWriter to convert the buffer to a Mat and then write it to an ".avi" file with MPEG-4 encoding.
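For reference, here is a minimal sketch of what I mean, assuming the raw buffer holds packed 8-bit BGR frames of known dimensions (the frame size, FPS, and the MP42 FOURCC are placeholder assumptions, not my real values):

    #include <opencv2/opencv.hpp>

    // Wrap each raw BGR frame in a Mat (no copy) and append it to a
    // compressed .avi via VideoWriter.
    void writeBufferToAvi(unsigned char* rawBgr, int width, int height,
                          int nFrames)
    {
        cv::VideoWriter writer("out.avi",
                               CV_FOURCC('M', 'P', '4', '2'), // MPEG-4 v2
                               25.0,                          // assumed FPS
                               cv::Size(width, height));
        size_t frameBytes = (size_t)width * height * 3;
        for (int i = 0; i < nFrames; ++i)
        {
            cv::Mat frame(height, width, CV_8UC3, rawBgr + i * frameBytes);
            writer << frame;  // encodes and appends one frame
        }
    }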
However, I want to know if there is any way I can compress the raw buffer data and save it to a file directly. Can anyone tell me whether this is possible?
Thank You.
I'm using ffmpeg to encode H.263 and MPEG-4 videos.
I take frames from a camera with OpenCV.
I'd like to know if it's possible to store in a memory buffer the frames that I currently write to a file with the av_write_frame() function.
And if it's possible, how do you do it? (which functions of the API do you use, and how?)
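From what I've read, a custom AVIOContext might be the way to redirect the muxer's output into memory; here is a sketch of what I have in mind (the callback and buffer handling are my assumptions, not verified code):

    #include <vector>
    extern "C" {
    #include <libavformat/avformat.h>
    }

    // Muxer write callback: collect encoded bytes instead of hitting disk.
    static int writePacket(void* opaque, uint8_t* buf, int bufSize)
    {
        std::vector<uint8_t>* out = static_cast<std::vector<uint8_t>*>(opaque);
        out->insert(out->end(), buf, buf + bufSize);
        return bufSize;
    }

    // Replace the file I/O of an AVFormatContext with an in-memory sink,
    // so av_write_frame() appends encoded data to 'sink' instead of a file.
    void attachMemorySink(AVFormatContext* fmtCtx, std::vector<uint8_t>* sink)
    {
        const int ioBufSize = 4096;
        unsigned char* ioBuf = (unsigned char*)av_malloc(ioBufSize);
        fmtCtx->pb = avio_alloc_context(ioBuf, ioBufSize,
                                        1 /* write flag */, sink,
                                        NULL /* no read */, writePacket,
                                        NULL /* no seek */);
    }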
I've written a C++ converter based on FFmpeg that can take a link to an HLS stream and convert it into a local .mp4 video. So far, so good; the converter works like a charm, no questions about that.
PROBLEM: No matter what input source I provide to the converter, at the end of the conversion I need to receive a video with key-frames ONLY. I need such a video for perfect seeking, both forward and in reverse.
It's a well-known fact that predicted video frames (P and B) depend on their reference frame (the I-frame), because that frame contains the full pixel map. Based on that, we should be able to recreate an I-frame for each P and B frame by merging their data with their I-frame. That's why a command like ffmpeg -i video.mp4 output%4d.jpg works.
QUESTION: How can I implement an algorithm that merges frames so as to recreate a key-frames-ONLY video at the end? What kind of quirks do I need to know about when merging the data of AVPackets?
Thanks.
You cannot "merge" P- and B-frames of a compressed stream (e.g. with the H.264 codec) to obtain I-frames.
What ffmpeg does with
ffmpeg -i video.mp4 output%4d.jpg
is decode each frame (so it needs to start from an I-frame, then decode all subsequent P- and B-frames in the stream), re-compress each one to JPEG, and output a JPEG image for every frame in the original input stream.
If you want to convert an input stream with P/B frames to an intra-only stream (with all I-frames), you need to transcode the stream.
That means decoding all frames from the original stream and encoding them back into an intra-only stream.
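For example, on the command line you can force an intra-only encode by setting the GOP size to 1 (the choice of libx264 here is just an example; use whatever encoder you need):

    ffmpeg -i video.mp4 -c:v libx264 -g 1 output_intra.mp4

In the API, the same idea corresponds to decoding each AVPacket to an AVFrame and re-encoding with the encoder's GOP size configured so that every frame comes out as a key-frame.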
I am trying to send an IplImage to another program as an array of bytes in C++/CLI. That program needs to save the IplImage as a JPEG image. Let's say the IplImage is img, which I am obtaining via cvQueryFrame from an AVI video file; I am returning img->imageData to the other program. Does img->imageData contain the header for the JPEG image to be saved, or does it only contain the data? If it only contains the data, how can I include the header? I could save the image using cvSaveImage and then read it back, but there should be a more direct way (maybe cvEncodeImage?).
thanks.
The data in the IplImage has already been decoded into OpenCV's own format (typically 8-bit B,G,R) once it has been read from disk, so img->imageData contains only raw pixels, with no JPEG header.
Newer versions of OpenCV can encode/decode an image in memory; see imencode.
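For example (wrapping the IplImage in a cv::Mat; the quality value is just an example):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Encode an IplImage to a complete JPEG (header + data) in memory.
    std::vector<uchar> encodeToJpeg(IplImage* img)
    {
        cv::Mat mat = cv::cvarrToMat(img);      // no copy, just a wrapper
        std::vector<uchar> jpegBytes;
        std::vector<int> params;
        params.push_back(CV_IMWRITE_JPEG_QUALITY);
        params.push_back(90);                   // example quality
        cv::imencode(".jpg", mat, jpegBytes, params);
        return jpegBytes;                       // ready to send or fwrite()
    }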
If you need the entire JPEG data, you might want to consider reading the entire file from the disk with fread() instead of cvLoadImage().
If that's not going to work for you, consider Martin's answer.
I'm writing a program in C++ using the DirectShow API in order to accelerate the video-encoding part. The program should read a video stream from a video capture card and pass it to the encoder, without the intermediate raw-data file that is usually used.
But the encoder is not my software; in fact, it was bought.
This encoder accepts a raw-data file together with its details and produces an encoded file as output. So I've decided to read the video stream from the capture card, save it to a buffer, and when the buffer reaches the appropriate size (as specified by the encoder), pass it to the encoder.
But I'm new to DirectShow as well as to multimedia programming as a whole, so what I'm asking for is advice about which function to use to read the stream, or a complete solution, or any useful links.
Thanks in Advance
EDIT 1: What I meant by "accelerate" is to feed the video stream into the encoder directly, instead of creating an intermediate YUV file and making the encoder read that file.
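From the docs it seems the Sample Grabber filter with a callback is the usual way to pull raw frames out of a DirectShow graph; here is a sketch of what I think the callback would look like (graph construction omitted, and the buffer handling is my assumption):

    #include <windows.h>
    #include <qedit.h>   // ISampleGrabberCB, classic DirectShow SDK
    #include <vector>

    // Callback invoked by the Sample Grabber for every frame in the graph.
    class FrameSink : public ISampleGrabberCB
    {
        std::vector<BYTE> accumulated;  // grows until the encoder's chunk size
    public:
        STDMETHODIMP_(ULONG) AddRef()  { return 2; }  // object has static lifetime
        STDMETHODIMP_(ULONG) Release() { return 1; }
        STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
        {
            if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown)
            { *ppv = this; return S_OK; }
            return E_NOINTERFACE;
        }
        STDMETHODIMP SampleCB(double, IMediaSample*) { return E_NOTIMPL; }
        STDMETHODIMP BufferCB(double /*time*/, BYTE* buf, long len)
        {
            accumulated.insert(accumulated.end(), buf, buf + len);
            // When accumulated.size() reaches the encoder's expected chunk
            // size, hand the buffer to the encoder and clear it here.
            return S_OK;
        }
    };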
I am having a bit of a problem.
I get a raw char* buffer from a camera and I need to add these tags before I can save it to disk. Writing the file to disk and reading it back again is not an option, as this will happen thousands of times.
The buffer data I receive from the camera does not contain any EXIF information, apart from the width, height, and pixels per inch.
Any ideas? (C++)
Look at this PDF; on page 20 there is a diagram showing where to place or modify your EXIF information. What is the difference compared with a file on disk?
Does the JPEG buffer from your camera already contain an EXIF section?
What's the difference? Why would doing it to a file on the disk be any different from doing it in memory?
Just do whatever it is you do after you read the file from disk.
As far as I know, EXIF data in a JPEG is a contiguous subpart of the file (an APP1 segment near the start).
So:
1. prepare the EXIF data in memory
2. write the part of the JPEG file up to the EXIF position
3. write the prepared EXIF data
4. write the rest of the JPEG file
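A minimal sketch of those steps done in memory, assuming the camera buffer is a plain JPEG starting with the SOI marker and that you have already built the "Exif\0\0" + TIFF payload (building that payload is the hard part, not shown):

    #include <vector>
    #include <cstdint>

    // Insert an APP1 (EXIF) segment right after the SOI marker (FF D8).
    // exifPayload must contain "Exif\0\0" followed by the TIFF data and
    // must fit in 64 KB (a JPEG segment length is 16 bits).
    std::vector<uint8_t> insertExif(const uint8_t* jpeg, size_t jpegSize,
                                    const std::vector<uint8_t>& exifPayload)
    {
        std::vector<uint8_t> out;
        out.reserve(jpegSize + exifPayload.size() + 4);
        out.insert(out.end(), jpeg, jpeg + 2);                 // SOI: FF D8
        uint16_t segLen = (uint16_t)(exifPayload.size() + 2);  // incl. length field
        out.push_back(0xFF); out.push_back(0xE1);              // APP1 marker
        out.push_back((uint8_t)(segLen >> 8));                 // big-endian length
        out.push_back((uint8_t)(segLen & 0xFF));
        out.insert(out.end(), exifPayload.begin(), exifPayload.end());
        out.insert(out.end(), jpeg + 2, jpeg + jpegSize);      // rest of the JPEG
        return out;
    }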
You might want to take a look at the Exiv2 library. I know it can work on files, but I suppose it also has functions to work on memory buffers.
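If it does, I'd expect the usage to look roughly like this (the in-memory open() overload and the tag assignment are from my recollection of the Exiv2 examples; verify against the docs):

    #include <exiv2/exiv2.hpp>

    // Attach EXIF data to a JPEG held in memory (sketch, unverified).
    void tagInMemory(const Exiv2::byte* jpegBuf, long jpegSize)
    {
        Exiv2::Image::AutoPtr image =
            Exiv2::ImageFactory::open(jpegBuf, jpegSize);  // in-memory open
        image->readMetadata();

        Exiv2::ExifData exif;
        exif["Exif.Image.XResolution"] = Exiv2::Rational(300, 1);  // example tag
        image->setExifData(exif);
        image->writeMetadata();  // rewrites the in-memory image

        // The modified bytes should then be retrievable via image->io().
    }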