Convert raw RGB32 file to JPEG or PNG using FFmpeg - C++

Context
I used a C++ program to write raw bytes to a file (image.raw) in RGB32 format:
R G B A R G B A R G B A ...
and I want to be able to view it in some way. I have the dimensions of the image.
My tools are limited to command line commands (e.g. ffmpeg). I have visited the ffmpeg website for instructions, but it deals more with converting videos to images.
Questions
Is it possible to turn this file into a viewable file type (e.g. .jpeg, .png) using ffmpeg? If so, how would I do it?
If it's not possible, is there a way I can use another command?
If that's still not viable, is there any way I can manipulate the RGB32 bytes inside a C++ program to make them more suitable, without the use of external libraries? I also don't want to encode .jpeg myself.

Use the rawvideo demuxer:
ffmpeg -f rawvideo -pixel_format rgba -video_size 320x240 -i input.raw output.png
Since there is no header specifying the assumed video parameters, you must specify them, as shown above, in order to decode the data correctly.
See ffmpeg -pix_fmts for a list of supported input pixel formats which may help you choose the appropriate -pixel_format.
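For reference, a minimal C++ sketch of producing such a raw RGBA file (the 320x240 size and the input.raw name match the command above; the gradient content is just an example):

#include <cstdint>
#include <fstream>
#include <vector>

int main() {
    const int width = 320, height = 240;             // must match -video_size
    std::vector<uint8_t> pixels(width * height * 4); // 4 bytes per pixel: R G B A
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            uint8_t* p = &pixels[(y * width + x) * 4];
            p[0] = static_cast<uint8_t>(x * 255 / width);   // R
            p[1] = static_cast<uint8_t>(y * 255 / height);  // G
            p[2] = 128;                                     // B
            p[3] = 255;                                     // A, fully opaque
        }
    }
    std::ofstream out("input.raw", std::ios::binary);
    out.write(reinterpret_cast<const char*>(pixels.data()),
              static_cast<std::streamsize>(pixels.size()));
}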

get a single frame from raw RGBA data
ffmpeg -y -f rawvideo -pix_fmt rgba -ss 00:01 -r 1 -s 320x240 -i input.raw -frames:v 1 output.png
-y overwrite output
-r input framerate (placed before -i)
-ss skip to this time
-frames:v number of frames to output

Related

Creating a .bmp grayscale image from a vector of uint8_t

I'm trying to read a .bmp grayscale image from a file with a given width and height, convert it to a std::vector<uint8_t>, run some sort of filter function on that vector, and then create a new image from that std::vector. I'm stuck on the last part.
How do I create a .bmp file from a given std::vector<uint8_t>, height and width?
P.S. I'm trying to do this without using external libraries.
This is the code I have thus far:
#include <cstdint>
#include <fstream>
#include <vector>

class Image {
    int height;
    int width;
    std::vector<uint8_t> image;
public:
    Image(int height, int width) : height(height), width(width) {}
    void read_image(const char* pic);
    void save_image(const std::vector<uint8_t>& new_image);
    std::vector<uint8_t> filter_image(int ww, int wh, double filter);
};

void Image::read_image(const char* pic) {
    std::ifstream file(pic, std::ios_base::in | std::ios_base::binary);
    if (!file.is_open()) return;
    while (file.peek() != EOF) {
        this->image.push_back(static_cast<uint8_t>(file.get()));
    }
}

void Image::save_image(const std::vector<uint8_t>& new_image) {
    // what to do here?
}
A .bmp file does not only store raw pixel data. It begins with a header describing the image stored inside the file: width, height, pixel size, color type, etc... The read_image() function you wrote reads the whole file, including the header, and running any image processing algorithm on your vector will ruin your data and produce garbage.
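For concreteness, if you do decide to write the .bmp yourself, a minimal sketch of an 8-bit grayscale BMP writer could look like this (it assumes a little-endian host and that the pixel vector holds exactly width * height bytes; the function name save_bmp_gray is just illustrative):

#include <algorithm>
#include <cstdint>
#include <fstream>
#include <vector>

// Write an 8-bit grayscale BMP: file header, info header, gray palette, padded rows.
void save_bmp_gray(const char* path, const std::vector<uint8_t>& px, int w, int h) {
    const int rowSize = (w + 3) & ~3;               // each row padded to 4 bytes
    const uint32_t dataOffset = 14 + 40 + 256 * 4;  // headers + 256-entry palette
    const uint32_t fileSize = dataOffset + rowSize * h;
    std::ofstream f(path, std::ios::binary);
    auto u16 = [&](uint16_t v) { f.write(reinterpret_cast<const char*>(&v), 2); };
    auto u32 = [&](uint32_t v) { f.write(reinterpret_cast<const char*>(&v), 4); };
    f.write("BM", 2); u32(fileSize); u32(0); u32(dataOffset);  // BITMAPFILEHEADER
    u32(40); u32(w); u32(h); u16(1); u16(8);                   // BITMAPINFOHEADER
    u32(0); u32(rowSize * h); u32(2835); u32(2835); u32(256); u32(0);
    for (int i = 0; i < 256; ++i) {                            // gray palette, BGRA order
        const uint8_t entry[4] = { uint8_t(i), uint8_t(i), uint8_t(i), 0 };
        f.write(reinterpret_cast<const char*>(entry), 4);
    }
    std::vector<char> row(rowSize, 0);
    for (int y = h - 1; y >= 0; --y) {                         // BMP stores rows bottom-up
        std::copy(px.begin() + y * w, px.begin() + (y + 1) * w, row.begin());
        f.write(row.data(), rowSize);
    }
}

As you can see, most of the work is bookkeeping for the headers, the palette, and the 4-byte row padding, which is exactly what the raw-file approach described next avoids.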
If you are learning image processing, it would be far easier to use raw image files. A raw image file contains only pixel data, without any metadata. When working with a raw image file, it is your responsibility to know the width and height of the image, as well as the pixel encoding.
Converting an image file to a raw image file, and vice versa, involves the use of an external tool. ffmpeg is such a tool: a cross-platform command-line program, and it is easy to find it packaged for any operating system.
To convert from a file in almost any format to a raw image file, use the following (ffmpeg deduces the size of the image from the input file). The order of the parameters is important:
ffmpeg -i your_file.jpeg -f rawvideo -c rawvideo -pix_fmt gray output.raw
When converting back to your input format, you have to explicitly tell ffmpeg the size of your picture. Again, the order of the parameters is important:
ffmpeg -f rawvideo -c rawvideo -pix_fmt gray -s 1280x720 -i input.raw your_processed_file.jpeg
Adapt the width and height to the real size of your image, or ffmpeg will resize it. You can also play with the pixel type: gray specifies an 8-bit-per-pixel grayscale format, but you can use rgb24 to keep color information (use ffmpeg -pix_fmts to see a list of all available formats).
If you are lucky enough to have ffplay available in your ffmpeg package, you can view the raw file directly on screen:
ffplay -f rawvideo -c rawvideo -pix_fmt gray -s 1280x720 input.raw
Additionally, some image-processing applications are able to open raw image files directly: GIMP, Photoshop, ...
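With that approach, the save_image() from the question above reduces to a plain binary dump (a minimal sketch; output.raw is an assumed name, and since no header is written you must keep track of the dimensions yourself for the later ffmpeg conversion):

void Image::save_image(const std::vector<uint8_t>& new_image) {
    // Raw output: pixel bytes only, no header
    std::ofstream file("output.raw", std::ios_base::out | std::ios_base::binary);
    file.write(reinterpret_cast<const char*>(new_image.data()),
               static_cast<std::streamsize>(new_image.size()));
}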

How can I make all frames in my video become i-frames?

I use ffmpeg to encode my video in C++. I need to be able to decode any H.264 frame without depending on other frames, so I need every frame in my video to be an I-frame. But I don't know which parameters to set in order to do this. What should I do to make all my video frames I-frames?
ffmpeg -i yourfile -c:v libx264 -x264opts keyint=1 out.mp4
-x264opts keyint=1 sets the keyframe interval to 1 (I believe you can also use -g 1). You probably want to set other rate-control parameters too, e.g. -crf 10 (for quality) and -preset veryslow (for speed); see the FFmpeg H.264 encoding guide for details.
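Since the question mentions encoding from C++, the equivalent settings through the libavcodec API might look like the following (a minimal sketch with error handling omitted; only the intra-only knobs are shown, not a full encode loop, and the function name is illustrative):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

// Configure an H.264 encoder context so every frame comes out as an I-frame.
AVCodecContext* make_intra_only_context(int width, int height) {
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width = width;
    ctx->height = height;
    ctx->time_base = {1, 25};            // assumed 25 fps
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;
    ctx->gop_size = 1;                   // keyframe interval of 1: all I-frames
    ctx->max_b_frames = 0;               // no B-frames in between
    // Same effect as the command-line -x264opts keyint=1 for the libx264 wrapper
    av_opt_set(ctx->priv_data, "x264opts", "keyint=1", 0);
    avcodec_open2(ctx, codec, nullptr);  // error handling omitted
    return ctx;
}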

Ffmpeg: writing audio and video to a stream in an mpeg format

I had the commands for exporting a video stream to an mpeg file working correctly with the following code:
ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
Now, I wanted to add the commands so that audio can be used as well since there's no option for that right now.
I've edited the previous command to the next one:
ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -f s16le -ac 1 -ar 44100 -i - -acodec pcm_s16le -ac 1 -b:a 320k -ar 44100 -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
So as you can see I added a new input with the settings for the audio I'm going to be inputting (I'm going to test this with the values of a sine wave).
I'm writing the data to the file like this:
// Write a frame to the ffmpeg stream
fwrite(frame, sizeof(unsigned char*) * frameWidth * frameHeight, 1, ffmpeg);
// Write multiple sound samples per written frame
for (int t = 0; t < 44100/24; ++t)
fwrite(&audio, sizeof(short int), 1, ffmpeg);
The first line writes only the video (the frame object is a render-to-texture of the video I'm inputting).
After that I try to add the audio, using a for-loop so I can write multiple samples per video frame (otherwise there would only be 24 audio samples per second).
This does render with a couple of issues:
The rendered video shows green flashes
The video slides across the screen. For example, if it slides 200 pixels to the right, those pixels get rendered on the other side. A bit of the frame that should be at the bottom is also rendered at the top (so the frame slides down as well, but this offset is constant; it doesn't move over time).
I can't figure out where my mistake is. I've tried multiple codecs and tried different orders for the commands but it stays the same or gets worse.
Thanks in advance

How to convert YUV frames to a video?

How can I convert a set of YUV frames to a video, and later convert the video back to YUV frames without any loss, using C? (I don't want to convert to RGB in between.)
If you have a raw YUV file, you need to tell ffmpeg which pixel format/subsampling is used. Raw YUV has no header, so you also need to specify the width and height of the data.
The following ffmpeg command encodes a 1080p YUV 4:2:0 file to H.264 using the x264 encoder and places the result in an MP4 container. This operation is, however, not lossless:
ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1080 -r 25 -i input.yuv \
-c:v libx264 output.mp4
To get the YUV frames back again, do:
ffmpeg -i output.mp4 frames.yuv
If you are looking for a lossless encoder, try HuffYUV or FFV1.
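For example, with FFV1 the same pattern becomes (Matroska is an assumed container choice here, as it is a common match for FFV1):

ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1080 -r 25 -i input.yuv \
    -c:v ffv1 output.mkv

Decoding back with ffmpeg -i output.mkv frames.yuv should then reproduce the original YUV bytes, since FFV1 is lossless.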

How to write YUV 420 video frames from RGB data using OpenCV or other image processing library?

I have an array of rgb data generated from glReadPixels().
Note that RGB data is pixel packed (r1,g1,b1,r2,g2,b2,...).
How can I quickly write a YUV video frame using OpenCV or another C++ library, so that I can stream it to FFmpeg? Converting an RGB pixel to a YUV pixel is not a problem, as there are many conversion formulas available online. However, writing the YUV frame is the main problem for me. I have been trying to write the YUV video frame for the last few days without success.
This is one of my other questions about writing a YUV frame and the issues that I encountered: Issue with writing YUV image frame in C/C++
I don't know what is wrong with my current approach in writing the YUV frame to a file.
So right now I may want to use an existing library (if any) that accepts RGB data, converts it to YUV, and writes the YUV frame directly to a file or to a pipe. Of course it would be much better if I could fix my existing program to write the YUV frame, but every software development project has a deadline, so time is also a priority for me and my project team members.
FFmpeg will happily receive RGB data in. You can see what pixel formats FFmpeg supports by running:
ffmpeg -pix_fmts
Any entry with an I in the first column can be used as an input.
Since you haven't specified the pixel bit depth, I am going to assume it's 8-bit and use the rgb8 pixel format. So to get FFmpeg to read rgb8 data from stdin you would use the following command (I am cat-ing data in, but you would be supplying it via your pipe):
cat data.rgb | ffmpeg -f rawvideo -pix_fmt rgb8 -s WIDTHxHEIGHT -i pipe:0 output.mov
Since it is a raw pixel format with no framing, you need to substitute WIDTH and HEIGHT with the appropriate values of your image dimensions so that FFmpeg knows how to frame the data.
I have specified the output as a MOV file, but you would need to configure your FFmpeg/Red5 output accordingly.
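To supply the data from C++ instead of cat, one option is to open a pipe to the ffmpeg process with POSIX popen. A minimal sketch (note the question's packed r,g,b layout actually corresponds to rgb24 rather than rgb8, so that format is used here; the dimensions and frame content are placeholders):

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int width = 640, height = 480;  // assumed dimensions
    char cmd[256];
    std::snprintf(cmd, sizeof(cmd),
                  "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s %dx%d -i pipe:0 output.mov",
                  width, height);
    std::FILE* ff = popen(cmd, "w");      // ffmpeg reads frames from its stdin
    if (!ff) return 1;
    std::vector<uint8_t> frame(width * height * 3, 0);  // one black RGB24 frame
    for (int i = 0; i < 100; ++i)         // write 100 identical frames
        std::fwrite(frame.data(), 1, frame.size(), ff);
    pclose(ff);
}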
OpenCV does not support the YUV format directly, as you know, so it's really up to you to find a way to do RGB <-> YUV conversions.
There are interesting posts around that show how to load and create YUV frames on disk while storing the data as an IplImage.
ffmpeg will write an AVI file with YUV, but as karl says, there isn't direct support for it in OpenCV.
Alternatively (and possibly more simply) you can just write the raw UYVY values to a file and then use ffmpeg to convert it to an AVI/MP4 in any format you want. It's also possible to write directly to a pipe and call ffmpeg directly from your app, avoiding the temporary YUV file.
E.g. to convert an HD YUV 4:2:2 stream to an H.264 MP4 file at 30 fps:
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 1920x1080 -r 30 -i input.yuv -vcodec libx264 output.mp4
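If you do want to do the RGB-to-YUV conversion yourself before handing the data to ffmpeg, a sketch of converting a packed RGB24 buffer to planar YUV 4:2:0 (I420) and appending it to a file might look like this (using the common BT.601 studio-range integer approximation; for brevity the chroma is sampled from the top-left pixel of each 2x2 block rather than averaged, and even width/height are assumed):

#include <cstdint>
#include <cstdio>
#include <vector>

// Append one packed-RGB24 frame to `out` as planar YUV 4:2:0 (I420):
// full-resolution Y plane first, then the quarter-size U and V planes.
void write_i420(const uint8_t* rgb, int w, int h, std::FILE* out) {
    std::vector<uint8_t> y(w * h), u((w / 2) * (h / 2)), v((w / 2) * (h / 2));
    for (int j = 0; j < h; ++j) {
        for (int i = 0; i < w; ++i) {
            const uint8_t* p = rgb + (j * w + i) * 3;
            const int r = p[0], g = p[1], b = p[2];
            y[j * w + i] = static_cast<uint8_t>(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
            if (j % 2 == 0 && i % 2 == 0) {  // one chroma sample per 2x2 block
                const int c = (j / 2) * (w / 2) + i / 2;
                u[c] = static_cast<uint8_t>(((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                v[c] = static_cast<uint8_t>(((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
            }
        }
    }
    std::fwrite(y.data(), 1, y.size(), out);
    std::fwrite(u.data(), 1, u.size(), out);
    std::fwrite(v.data(), 1, v.size(), out);
}

The resulting file is raw yuv420p data, so it can be wrapped into a container with the yuv420p commands shown in the previous answer.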