libjpeg decode nv12 or yuv420p - libjpeg

I'm trying to get a YUV420 planar (YUV420p) or semi-planar (NV12) image out of a JPEG using libjpeg.
I see that there is an option to set the output format to JCS_YCbCr, which is generally a YUV format, but as far as I understand it would give me the data as interleaved triplets { Y, U, V }. So to get the image into the right format I would have to rearrange and subsample the pixels myself, and I want to avoid that for performance reasons.
So I was wondering: is there a way to configure libjpeg to output a YUV420p / NV12 buffer directly?

Just take a look at gst_jpeg_decode() in the GStreamer source tree. This function, along with gst_jpeg_decode_direct(), does exactly what you want to do.
Note that it gives YUV420 planar (I420) output, bypassing all color conversion done by libjpeg. (This assumes that the input JPEG is encoded with 4:2:0 chroma subsampling, which is true for almost all JPEGs out there.)
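For reference, the mechanism those GStreamer functions build on is libjpeg's raw-data interface (raw_data_out together with jpeg_read_raw_data()). A rough sketch of that path, assuming a baseline 4:2:0 JPEG, jpeg_mem_src() from libjpeg 8+ / libjpeg-turbo, and Y/U/V planes allocated with width and height rounded up to a multiple of 16 so the per-iMCU-row reads never run past the buffers:

    #include <cstdio>
    #include <jpeglib.h>

    // Decode straight into caller-provided I420 planes, skipping upsampling and
    // the YCbCr->RGB conversion entirely.
    void decodeToI420(const unsigned char* jpg, unsigned long jpgSize,
                      unsigned char* yPlane, int yStride,
                      unsigned char* uPlane, unsigned char* vPlane, int uvStride) {
        jpeg_decompress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_mem_src(&cinfo, const_cast<unsigned char*>(jpg), jpgSize);
        jpeg_read_header(&cinfo, TRUE);

        cinfo.out_color_space = JCS_YCbCr;
        cinfo.raw_data_out = TRUE;              // hand back the planes untouched
        jpeg_start_decompress(&cinfo);

        JSAMPROW yRows[16], uRows[8], vRows[8];
        JSAMPARRAY planes[3] = { yRows, uRows, vRows };

        while (cinfo.output_scanline < cinfo.output_height) {
            JDIMENSION line = cinfo.output_scanline;   // top of this iMCU row
            for (int i = 0; i < 16; ++i)
                yRows[i] = yPlane + (line + i) * yStride;
            for (int i = 0; i < 8; ++i) {
                uRows[i] = uPlane + (line / 2 + i) * uvStride;
                vRows[i] = vPlane + (line / 2 + i) * uvStride;
            }
            // One call consumes one iMCU row: 16 luma lines and 8 chroma lines for 4:2:0.
            jpeg_read_raw_data(&cinfo, planes, 16);
        }
        jpeg_finish_decompress(&cinfo);
        jpeg_destroy_decompress(&cinfo);
    }

To get NV12 rather than I420 you would still have to interleave the Cb and Cr planes afterwards (or interleave each iMCU row from a small scratch buffer), since libjpeg only produces fully planar output.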

Related

OpenCV C++ convert buffer image from YUV 4:2:2 (UYVY format) into YUV

I have a video streamed from a source camera that provides each frame in a uchar* buffer.
This buffer is YUV 4:2:2, using the UYVY format.
I want to obtain a workable Mat in YUV, where I can split channels and apply whatever filters I want.
The problem is that UYVY is a packed format and I can't find any way to expand the data directly into a YUV Mat without going through BGR.
For example, I tried this, but for performance reasons I don't want to go through BGR:
cvtColor(inMat, middleMat, COLOR_YUV2BGR_UYVY);
cvtColor(middleMat, outMat, COLOR_BGR2YUV);
I need something similar to:
cvtColor(inMat, outMat, COLOR_YUV2YUV_UYVY);
Thanks in advance
Giuseppe
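As far as I know there is no single-step cvtColor code for this, but the unpacking itself is cheap to do by hand. A minimal sketch, assuming an even width, a CV_8UC2 UYVY input Mat, and output in the same Y,U,V channel order that COLOR_BGR2YUV produces (chroma is simply duplicated up to 4:4:4):

    #include <opencv2/core.hpp>

    // Unpack UYVY (U0 Y0 V0 Y1 per pixel pair) into a 3-channel 8-bit YUV Mat,
    // duplicating each chroma sample across the two pixels it covers.
    cv::Mat uyvyToYuv444(const cv::Mat& uyvy) {          // uyvy: CV_8UC2, width x height
        cv::Mat yuv(uyvy.rows, uyvy.cols, CV_8UC3);
        for (int r = 0; r < uyvy.rows; ++r) {
            const uchar* src = uyvy.ptr<uchar>(r);
            uchar* dst = yuv.ptr<uchar>(r);
            for (int c = 0; c < uyvy.cols; c += 2) {
                uchar u = src[0], y0 = src[1], v = src[2], y1 = src[3];
                dst[0] = y0; dst[1] = u; dst[2] = v;     // pixel c
                dst[3] = y1; dst[4] = u; dst[5] = v;     // pixel c+1
                src += 4; dst += 6;
            }
        }
        return yuv;
    }

The resulting Mat can then be split() into Y, U and V planes and filtered without ever touching BGR.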

Is the output of libjpeg always RGB (or luminance for monochrome images)?

I use libjpeg to manipulate JPEG images.
My question is simple:
Is the output of libjpeg always RGB (or luminance for monochrome images)?
I'm not an expert on color spaces...
Thanks!
Strictly speaking, the data in a JPEG is always stored as YCbCr (YUV), so that is what comes out of the file. You may request other colorspaces, but typically only RGB or (less commonly) CMYK are supported, and the underlying data in the file remains YCbCr. Grayscale simply leaves the two chroma channels out and encodes only Y.
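A quick way to check for yourself with libjpeg (jpeg_mem_src() needs libjpeg 8+ or libjpeg-turbo; the buffer arguments here are placeholders): after jpeg_read_header(), cinfo.jpeg_color_space tells you how the file stores its components, and cinfo.out_color_space is what jpeg_read_scanlines() will return, which libjpeg defaults to RGB (or grayscale) unless you override it before jpeg_start_decompress().

    #include <cstdio>
    #include <jpeglib.h>

    void printColorSpaces(unsigned char* jpegBuf, unsigned long jpegSize) {
        jpeg_decompress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_mem_src(&cinfo, jpegBuf, jpegSize);
        jpeg_read_header(&cinfo, TRUE);

        // JCS_YCbCr for ordinary color JPEGs, JCS_GRAYSCALE for monochrome, ...
        std::printf("stored as:  %d\n", (int)cinfo.jpeg_color_space);
        // Defaulted by libjpeg to JCS_RGB or JCS_GRAYSCALE; can be overridden.
        std::printf("decoded as: %d\n", (int)cinfo.out_color_space);

        jpeg_destroy_decompress(&cinfo);
    }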

YCbCr input directly from jpeg loader

I need to extract CbCr chroma data from JPEG images, for image analysis. (in C/C++)
As I understand it, the JPEG "raw data" is compressed YCbCr. Am I correct in this assumption? How can I verify this for a given image?
I am currently using the TurboJPEG lib. The documentation of tjDecompressToYUV says that it:
Decompress a JPEG image to a YUV planar image. This function
performs JPEG decompression but leaves out the color conversion step, so a
planar YUV image is generated instead of an RGB image.
I am a bit confused as to the output of this function. I thought that YUV and YCbCr were slightly different color spaces. Does this mean that for UV chroma I'd need to manipulate the output, and that the output "UV" components are actually CbCr components?
The JPEG standard has no knowledge of color spaces. It simply compresses color components.
It is the specific file format (e.g. JFIF, EXIF, Adobe) that specifies the color format. In most cases it is YCbCr. In some cases it is not (some Adobe files).
This link may explain the confusion:
http://en.wikipedia.org/wiki/Yuv#Confusion_with_Y.27CbCr
YUV and YCbCr are similar but different. If there is no color conversion step, I have to believe that they (the TurboJPEG docs) have simply conflated YUV and YCbCr.
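A sketch with the TurboJPEG API (libjpeg-turbo 1.4 or later), assuming the compressed image is already in memory: tjDecompressHeader3() reports the subsampling and the colorspace actually stored in the file, which answers the "how can I verify" part, and the planes that tjDecompressToYUV2() then writes are the file's Y, Cb and Cr data without any conversion.

    #include <turbojpeg.h>
    #include <vector>
    #include <cstdio>

    bool decodeToPlanarYuv(const unsigned char* jpegBuf, unsigned long jpegSize,
                           std::vector<unsigned char>& yuv, int& w, int& h, int& subsamp) {
        tjhandle tj = tjInitDecompress();
        int colorspace = 0;
        if (tjDecompressHeader3(tj, jpegBuf, jpegSize, &w, &h, &subsamp, &colorspace) != 0) {
            std::fprintf(stderr, "%s\n", tjGetErrorStr());
            tjDestroy(tj);
            return false;
        }
        // colorspace is TJCS_YCbCr for the common case; TJCS_GRAY, TJCS_CMYK, ... otherwise.
        yuv.resize(tjBufSizeYUV2(w, /*pad=*/1, h, subsamp));
        int ok = tjDecompressToYUV2(tj, jpegBuf, jpegSize, yuv.data(), w, /*pad=*/1, h, 0);
        tjDestroy(tj);
        return ok == 0;
    }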

How to write YUV 420 video frames from RGB data using OpenCV or other image processing library?

I have an array of rgb data generated from glReadPixels().
Note that RGB data is pixel packed (r1,g1,b1,r2,g2,b2,...).
How can I quickly write YUV video frames using OpenCV or another C++ library, so that I can stream them to FFmpeg? Converting an RGB pixel to a YUV pixel is not a problem, as there are many conversion formulas available online. However, writing the YUV frame is the main problem for me. I have been trying to write the YUV video frame for the last few days and have not been successful.
This is one of my other question about writing YUV frame and the issues that I encountered: Issue with writing YUV image frame in C/C++
I don't know what is wrong with my current approach in writing the YUV frame to a file.
So right now I would rather use an existing library (if any) that accepts RGB data, converts it to YUV, and writes the YUV frame directly to a file or to a pipe. Of course it would be much better if I could fix my existing program to write the YUV frame, but you know, there is a deadline in every software development project, so time is also a priority for me and my project team members.
FFmpeg will happily accept RGB data as input. You can see which pixel formats FFmpeg supports by running:
ffmpeg -pix_fmts
Any entry with an I in the first column can be used as an input.
Your data is packed with one byte per channel (r1,g1,b1,r2,g2,b2,...), so the matching pixel format is rgb24 (FFmpeg's rgb8 is a single-byte 3:3:2 format, not 8 bits per channel). So to get FFmpeg to read rgb24 data from stdin you would use the following command (I am cat-ing data in, but you would be supplying it via your pipe):
cat data.rgb | ffmpeg -f rawvideo -pix_fmt rgb24 -s WIDTHxHEIGHT -i pipe:0 output.mov
Since it is a raw pixel format with no framing, you need to substitute WIDTH and HEIGHT with the appropriate values for your image dimensions so that FFmpeg knows how to frame the data.
I have specified the output as a MOV file, but you would need to configure your FFmpeg/Red5 output accordingly.
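A rough sketch of the pipe variant on a POSIX system (popen), assuming rgb24 frames as above; the 1280x720 size, 30 fps rate, frame count, and output name are placeholders:

    #include <cstdio>
    #include <vector>

    int main() {
        const int w = 1280, h = 720;
        // Options after -i apply to the output; yuv420p is what most H.264 players expect.
        FILE* ff = popen(
            "ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 30 -i pipe:0 "
            "-vcodec libx264 -pix_fmt yuv420p output.mov", "w");
        if (!ff) return 1;

        std::vector<unsigned char> frame(w * h * 3);    // packed r,g,b per pixel
        for (int i = 0; i < 300; ++i) {
            // ... fill `frame`, e.g. from glReadPixels (note GL returns rows bottom-up,
            // so you may need to flip vertically) ...
            fwrite(frame.data(), 1, frame.size(), ff);  // one full frame per write
        }
        pclose(ff);
        return 0;
    }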
OpenCV does not support the YUV format directly, as you know, so it's really up to you to find a way to do RGB <-> YUV conversions.
This is a very interesting post as it shows how to load and create YUV frames on the disk, while storing the data as IplImage.
FFmpeg will write an AVI file with YUV, but as karl says there isn't direct support for it in OpenCV.
Alternatively (and possibly simpler) you can just write the raw UYVY values to a file and then use ffmpeg to convert it to an AVI/MP4 in any format you want. It's also possible to write directly to a pipe and call ffmpeg directly from your app, avoiding the temporary YUV file.
E.g. to convert an HD YUV 4:2:2 stream to an H.264 MP4 file at 30 fps:
ffmpeg -f rawvideo -pix_fmt yuyv422 -s 1920x1080 -r 30 -i input.yuv -vcodec libx264 output.mp4

Convert YUV to lossy compression with OpenCV

How would I go about converting an image in YUV colorspace to a JPEG image?
I have raw image data saved in a char* variable:
char* frame = (char*)camera->getFrame(); // YUV colorspace image data
I need to convert this to JPEG image data instead. I don't want to save it to disk, because I will be sending it in a stream.
OpenCV itself does not export this functionality. The cleanest option is to use libjpeg for encoding. See the answers to these questions:
Convert IplImage into a JPEG without using CvSaveImage in OpenCV
OpenCV to use in memory buffers or file pointers
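A rough sketch of the libjpeg route with an in-memory destination (jpeg_mem_dest() needs libjpeg 8+ or libjpeg-turbo), assuming the camera frame is packed 4:4:4 YCbCr; if getFrame() actually returns a subsampled or planar layout, it would need repacking first:

    #include <cstdio>
    #include <jpeglib.h>

    // Compress a packed 4:4:4 YCbCr frame to JPEG in memory.
    // Returns a malloc'd buffer (free() it when done) and sets *jpegSize.
    unsigned char* encodeYCbCrToJpeg(const unsigned char* frame, int width, int height,
                                     unsigned long* jpegSize) {
        jpeg_compress_struct cinfo;
        jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_compress(&cinfo);

        unsigned char* jpegBuf = nullptr;       // libjpeg allocates and grows this
        *jpegSize = 0;
        jpeg_mem_dest(&cinfo, &jpegBuf, jpegSize);

        cinfo.image_width = width;
        cinfo.image_height = height;
        cinfo.input_components = 3;
        cinfo.in_color_space = JCS_YCbCr;       // data is already YCbCr: no RGB->YCbCr step
        jpeg_set_defaults(&cinfo);
        jpeg_set_quality(&cinfo, 85, TRUE);
        jpeg_start_compress(&cinfo, TRUE);

        while (cinfo.next_scanline < cinfo.image_height) {
            JSAMPROW row = const_cast<unsigned char*>(frame)
                           + (size_t)cinfo.next_scanline * width * 3;
            jpeg_write_scanlines(&cinfo, &row, 1);
        }
        jpeg_finish_compress(&cinfo);
        jpeg_destroy_compress(&cinfo);
        return jpegBuf;
    }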
Check the OpenCV source for the file cvcolor.cpp. This has all the color conversions in it.
I suggest you modify the existing routines near this line:
/* BGR/RGB -> YCrCb */
They are almost exactly what you need for YUV encoding, if your data is 4:4:4 and not 4:2:2 or 4:1:1.
For JPEG compression:
The JPEG encoder and decoder are in grfmt_jpeg.cpp, which happens to #include "jpeglib.h".
You can call these directly.