How to convert YUY2 to JPEG without reading/writing files (C++)?

I have a single frame (YUY2) from a webcam in an LPVIDEOHDR structure, captured with vfw32 (capGrabFrame(...) and a CALLBACK on it). I need to get from VHDR->lpData another buffer (char*, LPBYTE... doesn't matter) containing the compressed JPEG. In all the examples I have found on the web (GDI+, libjpeg), this conversion is performed by reading/writing files...
So how can I convert YUY2 to JPEG entirely in a memory buffer?
PS: if RGB is needed, I already have a YUY2-to-RGB algorithm
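One way to avoid files entirely, if libjpeg is an option: libjpeg 8+ and libjpeg-turbo provide jpeg_mem_dest(), which writes the compressed image to a malloc'd memory buffer. A minimal sketch, assuming the frame has already been run through your YUY2-to-RGB routine into a packed 24-bit RGB buffer (the function name and parameters are placeholders):

#include <cstdlib>
#include <jpeglib.h>

// Hedged sketch: compress a packed 24-bit RGB buffer to JPEG entirely in memory.
// Assumes libjpeg 8+ / libjpeg-turbo, which provide jpeg_mem_dest().
// The caller owns *outBuf afterwards and must free() it.
bool rgb_to_jpeg_in_memory(const unsigned char* rgb, int width, int height,
                           unsigned char** outBuf, unsigned long* outSize)
{
    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);

    *outBuf = nullptr;
    *outSize = 0;
    jpeg_mem_dest(&cinfo, outBuf, outSize);   // compressed data goes to a memory buffer

    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 3;               // packed R,G,B
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 85, TRUE);       // 85 is just an example quality

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = const_cast<unsigned char*>(rgb) + cinfo.next_scanline * width * 3;
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    return true;
}

The resulting JPEG lives in *outBuf / *outSize and never touches the disk. GDI+ can do the same thing by saving to an IStream instead of a file, but the libjpeg route keeps it to one dependency.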

Related

OpenCV C++ convert buffer image from YUV 4:2:2 (UYVY format) into YUV

I have a video streamed from a source camera, that provides each frame into a uchar* buffer.
This buffer is YUV 4:2:2, using UYVY format.
I want to obtain a workable Mat in YUV, where I can split channels and do every filter I want.
The problem is that UYVY is a packed (4:2:2 subsampled) format, and I can't find any way to unpack the data directly into YUV without going through BGR.
For example, I tried this, but for performance reasons I don't want to go through BGR:
cvtColor(inMat, middleMat, COLOR_YUV2BGR_UYVY);
cvtColor(middleMat, outMat, COLOR_BGR2YUV);
I need something similar to:
cvtColor(inMat, outMat, COLOR_YUV2YUV_UYVY);
Thanks in advance
Giuseppe
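OpenCV has no COLOR_YUV2YUV_UYVY constant, so one possible workaround (just a sketch, not a drop-in answer) is to deinterleave the UYVY buffer yourself: each 4-byte group packs U, Y0, V, Y1 for two neighbouring pixels, so a single pass can fill a 3-channel YUV Mat without ever touching BGR. Width is assumed to be even here:

#include <opencv2/opencv.hpp>

// Hedged sketch: unpack a UYVY (YUV 4:2:2) buffer into a CV_8UC3 Mat laid out
// as Y,U,V per pixel, comparable to the output of COLOR_BGR2YUV, without
// going through BGR. Chroma is simply replicated for each pixel pair.
cv::Mat uyvyToYuv(const uchar* uyvy, int width, int height)
{
    cv::Mat yuv(height, width, CV_8UC3);
    for (int y = 0; y < height; ++y) {
        const uchar* src = uyvy + y * width * 2;        // 2 bytes per pixel
        cv::Vec3b* dst = yuv.ptr<cv::Vec3b>(y);
        for (int x = 0; x < width; x += 2) {
            uchar u  = src[0];
            uchar y0 = src[1];
            uchar v  = src[2];
            uchar y1 = src[3];
            dst[x]     = cv::Vec3b(y0, u, v);
            dst[x + 1] = cv::Vec3b(y1, u, v);
            src += 4;
        }
    }
    return yuv;   // cv::split() on this gives the separate Y, U, V planes
}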

Decoded video gives different format from encoded one

I am trying to decode an H.264 video that I encoded using FFmpeg. The encoder used is libx264rgb with the AV_PIX_FMT_BGR0 pixel format. The encoded video plays well inside ffplay.
When I decode the frames, I obtain a planar pixel format (AV_PIX_FMT_GBRP), which is different from the original. Is this normal? If so, is there a way to disable the planar format (either at encoding or decoding)? This would allow me to skip the overhead of repacking from planar at runtime.
I used this decoder sample with an AV_CODEC_ID_H264 decoder instead of AV_CODEC_ID_MPEG1VIDEO.
Thank you!
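The planar GBRP layout is simply how FFmpeg's H.264 decoder hands back RGB content, so the usual route is to repack it after decoding with libswscale rather than trying to disable it. A hedged sketch (the function name is made up; 'decoded' stands for the AVFrame returned by the decoder):

extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

// Hedged sketch: repack a decoded planar AV_PIX_FMT_GBRP frame into packed
// AV_PIX_FMT_BGR0 using libswscale.
AVFrame* gbrp_to_bgr0(const AVFrame* decoded)
{
    AVFrame* out = av_frame_alloc();
    out->format = AV_PIX_FMT_BGR0;
    out->width  = decoded->width;
    out->height = decoded->height;
    av_frame_get_buffer(out, 0);                 // allocate the packed buffer

    // SWS_POINT is enough here: this is a pure repack, no scaling involved.
    SwsContext* sws = sws_getContext(decoded->width, decoded->height,
                                     (AVPixelFormat)decoded->format,
                                     out->width, out->height, AV_PIX_FMT_BGR0,
                                     SWS_POINT, nullptr, nullptr, nullptr);
    sws_scale(sws, decoded->data, decoded->linesize, 0, decoded->height,
              out->data, out->linesize);
    sws_freeContext(sws);
    return out;                                   // caller frees with av_frame_free()
}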

libjpeg decode nv12 or yuv420p

I'm trying to get a YUV420 planar or semi-planar (NV12) image out of a JPEG using libjpeg.
I see that there is an option to specify the output format as JCS_YCbCr, which would generally be a YUV format, but as far as I understand it would give me the data as arrays of 3 elements { Y, U, V }. So to get the image into the right format I would have to rearrange and subsample the pixels myself, and I want to avoid that for performance reasons.
So I was wondering: is there a way to configure libjpeg to output a YUV420p / NV12 buffer directly?
Just take a look at gst_jpeg_decode() in the GStreamer source tree. This function, along with gst_jpeg_decode_direct(), does exactly what you want to do.
Note that it gives YUV420 planar output, bypassing all color conversion done by libjpeg. (This assumes that the input JPEG is encoded in the YUV420 color space, aka I420, which is true for almost all JPEGs out there.)
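For reference, the trick those GStreamer functions rely on is libjpeg's raw-data interface: set raw_data_out on the decompress struct and pull whole iMCU rows per component with jpeg_read_raw_data(), which skips the YCbCr-to-RGB stage entirely. A rough sketch, assuming a 4:2:0 (I420) JPEG and, to keep it short, dimensions that are multiples of 16; interleaving the U and V planes afterwards would give NV12:

#include <vector>
#include <jpeglib.h>

// Hedged sketch: decode a 4:2:0-encoded JPEG straight into I420 planes using
// libjpeg's raw-data API, bypassing its color conversion.
bool decode_jpeg_to_i420(const unsigned char* jpg, unsigned long jpgSize,
                         std::vector<unsigned char>& yuv, int& width, int& height)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_mem_src(&cinfo, const_cast<unsigned char*>(jpg), jpgSize);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.out_color_space = JCS_YCbCr;   // keep the data in Y/Cb/Cr
    cinfo.raw_data_out = TRUE;           // ask for per-plane raw output
    cinfo.do_fancy_upsampling = FALSE;
    jpeg_start_decompress(&cinfo);

    width  = cinfo.output_width;
    height = cinfo.output_height;
    yuv.resize(width * height * 3 / 2);  // I420 layout: Y plane, then U, then V
    unsigned char* yPlane = yuv.data();
    unsigned char* uPlane = yPlane + width * height;
    unsigned char* vPlane = uPlane + (width / 2) * (height / 2);

    // For 4:2:0, libjpeg delivers 16 luma rows (and 8 chroma rows) per call.
    JSAMPROW yRows[16], uRows[8], vRows[8];
    JSAMPARRAY planes[3] = { yRows, uRows, vRows };

    while (cinfo.output_scanline < cinfo.output_height) {
        int row = cinfo.output_scanline;
        for (int i = 0; i < 16; ++i)
            yRows[i] = yPlane + (row + i) * width;
        for (int i = 0; i < 8; ++i) {
            uRows[i] = uPlane + (row / 2 + i) * (width / 2);
            vRows[i] = vPlane + (row / 2 + i) * (width / 2);
        }
        jpeg_read_raw_data(&cinfo, planes, 16);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    return true;
}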

How to write YUV 420 video frames from RGB data using OpenCV or other image processing library?

I have an array of rgb data generated from glReadPixels().
Note that RGB data is pixel packed (r1,g1,b1,r2,g2,b2,...).
How can I quickly write a YUV video frame using OpenCV or another C++ library, so that I can stream it to FFmpeg? Converting an RGB pixel to a YUV pixel is not a problem, as there are many conversion formulas available online. However, writing the YUV frame is the main problem for me. I have been trying to write the YUV video frame for the last few days and have not been successful.
This is one of my other question about writing YUV frame and the issues that I encountered: Issue with writing YUV image frame in C/C++
I don't know what is wrong with my current approach in writing the YUV frame to a file.
So right now I may want to use an existing library (if any) that accepts RGB data, converts it to YUV, and writes the YUV frame directly to a file or to a pipe. Of course it would be much better if I could fix my existing program to write the YUV frame, but you know, there is also a deadline in every software development project, so time is also a priority for me and my project team members.
FFmpeg will happily accept RGB data as input. You can see which pixel formats FFmpeg supports by running:
ffmpeg -pix_fmts
Any entry with an I in the first column can be used as an input.
Since you haven't specified the pixel bit depth, I am going to assume it's 8-bit and use the rgb8 pixel format. So to get FFmpeg to read rgb8 data from stdin you would use the following command (I am cat-ing the data in, but you would be supplying it via your pipe):
cat data.rgb | ffmpeg -f rawvideo -pix_fmt rgb8 -s WIDTHxHEIGHT -i pipe:0 output.mov
Since it is a raw pixel format with no framing, you need to substitute WIDTH and HEIGHT with the appropriate values for your image dimensions so that FFmpeg knows how to frame the data.
I have specified the output as a MOV file, but you would need to configure your FFmpeg/Red5 output accordingly.
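For completeness, here is a rough sketch of feeding frames into that pipe from C++ rather than with cat. It assumes POSIX popen() and 24-bit packed RGB (the rgb24 pixel format, which is what glReadPixels with GL_RGB / GL_UNSIGNED_BYTE produces) instead of rgb8; the dimensions and frame count are placeholders:

#include <cstdio>
#include <vector>

// Hedged sketch: stream packed rgb24 frames from the application straight
// into FFmpeg's stdin, so no intermediate file is ever written.
const int WIDTH = 1280, HEIGHT = 720;

int main()
{
    // -f rawvideo: headerless frames; FFmpeg frames them using -s and -pix_fmt.
    FILE* ff = popen(
        "ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 30 -i pipe:0 "
        "-vcodec libx264 -pix_fmt yuv420p output.mov", "w");
    if (!ff) return 1;

    std::vector<unsigned char> frame(WIDTH * HEIGHT * 3);
    for (int i = 0; i < 300; ++i) {   // e.g. 10 seconds at 30 fps
        // ... fill 'frame' here (e.g. glReadPixels with GL_RGB / GL_UNSIGNED_BYTE) ...
        fwrite(frame.data(), 1, frame.size(), ff);
    }
    pclose(ff);                       // flushes and finalizes the output file
    return 0;
}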
OpenCV does not support the YUV format directly, as you know, so it's really up to you to find a way to do RGB <-> YUV conversions.
This is a very interesting post as it shows how to load and create YUV frames on disk, while storing the data as an IplImage.
FFmpeg will write an AVI file with YUV, but as karl says there isn't direct support for it in OpenCV.
Alternatively (and possibly simpler) you can just write the raw UYVY values to a file and then use FFmpeg to convert it to an AVI/MP4 in any format you want. It's also possible to write directly to a pipe and call FFmpeg directly from your app, avoiding the temporary YUV file.
E.g., to convert an HD YUV 4:2:2 stream to an H.264 MP4 file at 30 fps:
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 1920x1080 -r 30 -i input.yuv -vcodec libx264 output.mp4
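If you do go the raw-YUV route, the packing step itself is short. A sketch of an RGB-to-YUYV 4:2:2 converter using the common BT.601 integer approximation, averaging the chroma of each horizontal pixel pair (width is assumed to be even):

#include <cstdint>

// Hedged sketch: convert a packed rgb24 buffer into packed YUYV 4:2:2
// (Y0 U Y1 V per pixel pair) with BT.601 integer coefficients.
static inline uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

void rgb24_to_yuyv422(const uint8_t* rgb, uint8_t* yuyv, int width, int height)
{
    for (int i = 0; i < width * height; i += 2) {
        const uint8_t* p0 = rgb + 3 * i;       // first pixel of the pair
        const uint8_t* p1 = p0 + 3;            // second pixel of the pair

        int y0 = (( 66 * p0[0] + 129 * p0[1] +  25 * p0[2] + 128) >> 8) + 16;
        int y1 = (( 66 * p1[0] + 129 * p1[1] +  25 * p1[2] + 128) >> 8) + 16;
        // Chroma from the average of the two pixels (4:2:2 subsampling).
        int r = (p0[0] + p1[0]) / 2, g = (p0[1] + p1[1]) / 2, b = (p0[2] + p1[2]) / 2;
        int u = ((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128;
        int v = ((112 * r -  94 * g -  18 * b + 128) >> 8) + 128;

        *yuyv++ = clamp8(y0);
        *yuyv++ = clamp8(u);
        *yuyv++ = clamp8(y1);
        *yuyv++ = clamp8(v);
    }
}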

Convert YUV to lossy compression with OpenCV

How would I go about converting an image in YUV colorspace to a JPEG image?
I have raw image data saved in a char* variable:
char* frame = (char*)camera->getFrame(); // YUV colorspace image data
I need to convert this to a JPEG image data instead. I don't want to save it to disk because I will be sending it in a stream.
OpenCV itself does not export this functionality. The cleanest approach is to use libjpeg for the encoding. See the answers to these questions:
Convert IplImage into a JPEG without using CvSaveImage in OpenCV
OpenCV to use in memory buffers or file pointers
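For what it's worth, more recent OpenCV releases can also do the in-memory JPEG step themselves via cv::imencode. A minimal sketch, assuming the camera delivers packed YUY2 and that the frame dimensions are known, neither of which is stated in the question:

#include <vector>
#include <opencv2/opencv.hpp>

// Hedged sketch: wrap the raw YUY2 frame in a Mat, convert to BGR, and let
// cv::imencode produce the JPEG bytes in memory (no file is written).
std::vector<uchar> yuy2FrameToJpeg(char* frame, int width, int height)
{
    // A YUY2 frame is 2 bytes per pixel, viewed here as a 2-channel Mat.
    cv::Mat yuy2(height, width, CV_8UC2, frame);

    cv::Mat bgr;
    cv::cvtColor(yuy2, bgr, cv::COLOR_YUV2BGR_YUY2);

    std::vector<uchar> jpeg;
    std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 85 };  // example quality
    cv::imencode(".jpg", bgr, jpeg, params);
    return jpeg;   // ready to be pushed onto the network stream
}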
Check the OpenCV source for the file cvcolor.cpp. This has all the color conversions in it.
I suggest you modify the existing routines near this line:
/* BGR/RGB -> YCrCb */
They are almost exactly what you need for the YUV conversion, provided the data is 4:4:4 and not 4:2:2 or 4:1:1.
For JPEG compression:
The JPEG encoder and decoder are in grfmt_jpeg.cpp, which happens to #include "jpeglib.h".
You can call these directly.