I want a pair of conversion algorithms, one from RGB to YUV, the other from YUV to RGB, that are inverses of each other. That is, a round-trip conversion should leave the value unchanged. (If you like, read YUV here as any of Y'UV, YUV, YCbCr, or YPbPr.)
Does such a thing exist? If so, what is it?
Posted solutions (How to perform RGB->YUV conversion in C/C++?, http://www.fourcc.org/fccyvrgb.php, http://en.wikipedia.org/wiki/YUV) are inverses (the two 3x3 matrices are inverses of each other) only when the clamping to [0,255] is omitted. But omitting that clamping allows things like negative luminance, which plays merry havoc with image processing in YUV space. Retaining the clamping makes the conversion nonlinear, which makes it tricky to define an inverse.
Yes, invertible transformations exist.
equasys GmbH posted invertible transformations from RGB to YUV, YCbCr, and YPbPr, along with explanations of which situation each is appropriate for, what this clamping is really about, and links to references. (Like a good SO answer.)
For my own application (jpg images, not analog voltages) YCbCr was appropriate, so I wrote code for those two transformations. Indeed, there-and-back-again values differed by less than 1 part in 256, for many images; and the before-and-after images were visually indistinguishable.
PIL's colour space conversion YCbCr -> RGB gets credit for mentioning equasys's web page.
Other answers, which are unlikely to improve on equasys's precision and concision:
https://code.google.com/p/imagestack/ includes rgb_to_x and x_to_rgb functions, but I didn't try to compile and test them.
Cory Nelson's answer links to code with similar functions, but it says that inversion isn't possible in general, contradicting equasys.
The source code of FFmpeg, OpenCV, VLFeat, or ImageMagick.
2019 Edit: Here's the C++ code from GitHub mentioned in my comment:
void YUVfromRGB(double& Y, double& U, double& V, const double R, const double G, const double B)
{
    Y =  0.257 * R + 0.504 * G + 0.098 * B +  16;
    U = -0.148 * R - 0.291 * G + 0.439 * B + 128;
    V =  0.439 * R - 0.368 * G - 0.071 * B + 128;
}

void RGBfromYUV(double& R, double& G, double& B, double Y, double U, double V)
{
    Y -= 16;
    U -= 128;
    V -= 128;
    R = 1.164 * Y             + 1.596 * V;
    G = 1.164 * Y - 0.392 * U - 0.813 * V;
    B = 1.164 * Y + 2.017 * U;
}
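As a quick check of the round-trip behaviour, here is a minimal test harness of my own (not part of the original code) that pushes every 8-bit RGB value through the two functions above and reports the worst error after rounding and clamping the result back to [0, 255]:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Assumes YUVfromRGB and RGBfromYUV from above are declared in this translation unit.
int main() {
    int maxError = 0;
    for (int r = 0; r < 256; ++r)
        for (int g = 0; g < 256; ++g)
            for (int b = 0; b < 256; ++b) {
                double Y, U, V, R, G, B;
                YUVfromRGB(Y, U, V, r, g, b);
                RGBfromYUV(R, G, B, Y, U, V);
                // Round to nearest and clamp back into the 8-bit range.
                int ri = std::min(255, std::max(0, (int)std::lround(R)));
                int gi = std::min(255, std::max(0, (int)std::lround(G)));
                int bi = std::min(255, std::max(0, (int)std::lround(B)));
                maxError = std::max({ maxError, std::abs(ri - r), std::abs(gi - g), std::abs(bi - b) });
            }
    std::printf("maximum round-trip error: %d\n", maxError);
    return 0;
}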
RGB to YUV and back again
There is a nice diagram on Wikipedia's YUV article that depicts the layout of YUV420p. However, if you're like me, you actually want NV21 (sometimes called YUV420sp), which interleaves the V and U components in a single plane, so in that case the diagram is wrong; but it still gives you the intuition for how the format works.
This format (NV21) is the standard picture format on Android camera
preview. YUV 4:2:0 planar image, with 8 bit Y samples, followed by
interleaved V/U plane with 8bit 2x2 subsampled chroma samples.
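To make that concrete, here is the byte layout of a tiny, hypothetical 4x2 NV21 image, following the quoted specification, written as C++ comments:

// NV21 layout for width = 4, height = 2 (8 pixels, 12 bytes total):
//
//   bytes 0..7  : Y00 Y01 Y02 Y03 Y10 Y11 Y12 Y13   (one Y per pixel)
//   bytes 8..11 : V00 U00 V01 U01                    (one V/U pair per 2x2 block)
//
// So the chroma pair for pixel (row, col) starts at
//   uvIndex = width * height + (row / 2) * width + (col / 2) * 2
// with V at uvIndex and U at uvIndex + 1.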
So a lot of code I've seen just codes literally to this specification without taking endianness into account. Furthermore, it tends to support only YUV to RGB, and only one or two formats. I, however, wanted something a little more trustworthy, and it turns out the C++ code from the Android source code repository does the trick. It is pretty much straight C++ and should be easy to use in any project.
JNI/C++ code that takes an RGB565 image and converts it to NV21
From Java in this case (though just as easily from C or C++), you pass in a byte array containing the RGB565 image and get back an NV21 byte array.
#include <jni.h>
#include <cstring>
#include <cstdint>
#include "Converters.h"
#define JNI(X) JNIEXPORT Java_algorithm_ImageConverter_##X
#ifdef __cplusplus
extern "C" {
#endif
void JNI(RGB565ToNV21)(JNIEnv *env, jclass *, jbyteArray aRGB565in, jbyteArray aYUVout, jint width, jint height) {
    // get jbyte array into C space from the JVM
    jbyte *rgb565Pixels = env->GetByteArrayElements(aRGB565in, NULL);
    jbyte *yuv420sp = env->GetByteArrayElements(aYUVout, NULL);

    size_t pixelCount = width * height;
    uint16_t *rgb = (uint16_t *) rgb565Pixels;

    // This format (NV21) is the standard picture format on Android camera preview. YUV 4:2:0 planar
    // image, with 8 bit Y samples, followed by interleaved V/U plane with 8bit 2x2 subsampled
    // chroma samples.
    int uvIndex = pixelCount;
    for (int row = 0; row < height; row++) {
        for (int column = 0; column < width; column++) {
            int pixelIndex = row * width + column;
            uint8_t y = 0;
            uint8_t u = 0;
            uint8_t v = 0;
            chroma::RGB565ToYUV(rgb[pixelIndex], &y, &u, &v);
            yuv420sp[pixelIndex] = y;
            if (row % 2 == 0 && pixelIndex % 2 == 0) {
#if __BYTE_ORDER == __LITTLE_ENDIAN
                yuv420sp[uvIndex++] = u;
                yuv420sp[uvIndex++] = v;
#else
                yuv420sp[uvIndex++] = v;
                yuv420sp[uvIndex++] = u;
#endif
            }
        }
    }

    // release temp reference of jbyte array
    env->ReleaseByteArrayElements(aYUVout, yuv420sp, 0);
    env->ReleaseByteArrayElements(aRGB565in, rgb565Pixels, 0);
}
#ifdef __cplusplus
}
#endif
Converters.h
As you will see in the header, there are many different conversion options available, to and from a number of formats.
/*
* Copyright (C) 2011 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef HW_EMULATOR_CAMERA_CONVERTERS_H
#define HW_EMULATOR_CAMERA_CONVERTERS_H
#include <endian.h>
#ifndef __BYTE_ORDER
#error "could not determine byte order"
#endif
/*
* Contains declaration of framebuffer conversion routines.
*
* NOTE: RGB and big/little endian considerations. Wherever in this code RGB
* pixels are represented as WORD, or DWORD, the color order inside the
* WORD / DWORD matches the one that would occur if that WORD / DWORD would have
* been read from the typecasted framebuffer:
*
* const uint32_t rgb = *reinterpret_cast<const uint32_t*>(framebuffer);
*
* So, if this code runs on the little endian CPU, red color in 'rgb' would be
* masked as 0x000000ff, and blue color would be masked as 0x00ff0000, while if
* the code runs on a big endian CPU, the red color in 'rgb' would be masked as
* 0xff000000, and blue color would be masked as 0x0000ff00,
*/
namespace chroma {
/*
* RGB565 color masks
*/
#if __BYTE_ORDER == __LITTLE_ENDIAN
static const uint16_t kRed5 = 0x001f;
static const uint16_t kGreen6 = 0x07e0;
static const uint16_t kBlue5 = 0xf800;
#else // __BYTE_ORDER
static const uint16_t kRed5 = 0xf800;
static const uint16_t kGreen6 = 0x07e0;
static const uint16_t kBlue5 = 0x001f;
#endif // __BYTE_ORDER
static const uint32_t kBlack16 = 0x0000;
static const uint32_t kWhite16 = kRed5 | kGreen6 | kBlue5;
/*
* RGB32 color masks
*/
#if __BYTE_ORDER == __LITTLE_ENDIAN
static const uint32_t kRed8 = 0x000000ff;
static const uint32_t kGreen8 = 0x0000ff00;
static const uint32_t kBlue8 = 0x00ff0000;
#else // __BYTE_ORDER
static const uint32_t kRed8 = 0x00ff0000;
static const uint32_t kGreen8 = 0x0000ff00;
static const uint32_t kBlue8 = 0x000000ff;
#endif // __BYTE_ORDER
static const uint32_t kBlack32 = 0x00000000;
static const uint32_t kWhite32 = kRed8 | kGreen8 | kBlue8;
/*
* Extracting, and saving color bytes from / to WORD / DWORD RGB.
*/
#if __BYTE_ORDER == __LITTLE_ENDIAN
/* Extract red, green, and blue bytes from RGB565 word. */
#define R16(rgb) static_cast<uint8_t>((rgb) & kRed5)
#define G16(rgb) static_cast<uint8_t>(((rgb) & kGreen6) >> 5)
#define B16(rgb) static_cast<uint8_t>(((rgb) & kBlue5) >> 11)
/* Make 8 bits red, green, and blue, extracted from RGB565 word. */
#define R16_32(rgb) static_cast<uint8_t>((((rgb) & kRed5) << 3) | (((rgb) & kRed5) >> 2))
#define G16_32(rgb) static_cast<uint8_t>((((rgb) & kGreen6) >> 3) | (((rgb) & kGreen6) >> 9))
#define B16_32(rgb) static_cast<uint8_t>((((rgb) & kBlue5) >> 8) | (((rgb) & kBlue5) >> 14))
/* Extract red, green, and blue bytes from RGB32 dword. */
#define R32(rgb) static_cast<uint8_t>((rgb) & kRed8)
#define G32(rgb) static_cast<uint8_t>((((rgb) & kGreen8) >> 8) & 0xff)
#define B32(rgb) static_cast<uint8_t>((((rgb) & kBlue8) >> 16) & 0xff)
/* Build RGB565 word from red, green, and blue bytes. */
#define RGB565(r, g, b) static_cast<uint16_t>((((static_cast<uint16_t>(b) << 6) | (g)) << 5) | (r))
/* Build RGB32 dword from red, green, and blue bytes. */
#define RGB32(r, g, b) static_cast<uint32_t>((((static_cast<uint32_t>(b) << 8) | (g)) << 8) | (r))
#else // __BYTE_ORDER
/* Extract red, green, and blue bytes from RGB565 word. */
#define R16(rgb) static_cast<uint8_t>(((rgb) & kRed5) >> 11)
#define G16(rgb) static_cast<uint8_t>(((rgb) & kGreen6) >> 5)
#define B16(rgb) static_cast<uint8_t>((rgb) & kBlue5)
/* Make 8 bits red, green, and blue, extracted from RGB565 word. */
#define R16_32(rgb) static_cast<uint8_t>((((rgb) & kRed5) >> 8) | (((rgb) & kRed5) >> 14))
#define G16_32(rgb) static_cast<uint8_t>((((rgb) & kGreen6) >> 3) | (((rgb) & kGreen6) >> 9))
#define B16_32(rgb) static_cast<uint8_t>((((rgb) & kBlue5) << 3) | (((rgb) & kBlue5) >> 2))
/* Extract red, green, and blue bytes from RGB32 dword. */
#define R32(rgb) static_cast<uint8_t>(((rgb) & kRed8) >> 16)
#define G32(rgb) static_cast<uint8_t>(((rgb) & kGreen8) >> 8)
#define B32(rgb) static_cast<uint8_t>((rgb) & kBlue8)
/* Build RGB565 word from red, green, and blue bytes. */
#define RGB565(r, g, b) static_cast<uint16_t>((((static_cast<uint16_t>(r) << 6) | g) << 5) | b)
/* Build RGB32 dword from red, green, and blue bytes. */
#define RGB32(r, g, b) static_cast<uint32_t>((((static_cast<uint32_t>(r) << 8) | g) << 8) | b)
#endif // __BYTE_ORDER
/* An union that simplifies breaking 32 bit RGB into separate R, G, and B colors.
*/
typedef union RGB32_t {
    uint32_t color;
    struct {
#if __BYTE_ORDER == __LITTLE_ENDIAN
        uint8_t r; uint8_t g; uint8_t b; uint8_t a;
#else   // __BYTE_ORDER
        uint8_t a; uint8_t b; uint8_t g; uint8_t r;
#endif  // __BYTE_ORDER
    };
} RGB32_t;
/* Clips a value to the unsigned 0-255 range, treating negative values as zero.
*/
static __inline__ int
clamp(int x)
{
    if (x > 255) return 255;
    if (x < 0)   return 0;
    return x;
}
/********************************************************************************
* Basics of RGB -> YUV conversion
*******************************************************************************/
/*
* RGB -> YUV conversion macros
*/
#define RGB2Y(r, g, b) (uint8_t)(((66 * (r) + 129 * (g) + 25 * (b) + 128) >> 8) + 16)
#define RGB2U(r, g, b) (uint8_t)(((-38 * (r) - 74 * (g) + 112 * (b) + 128) >> 8) + 128)
#define RGB2V(r, g, b) (uint8_t)(((112 * (r) - 94 * (g) - 18 * (b) + 128) >> 8) + 128)
/* Converts R8 G8 B8 color to YUV. */
static __inline__ void
R8G8B8ToYUV(uint8_t r, uint8_t g, uint8_t b, uint8_t* y, uint8_t* u, uint8_t* v)
{
    *y = RGB2Y((int)r, (int)g, (int)b);
    *u = RGB2U((int)r, (int)g, (int)b);
    *v = RGB2V((int)r, (int)g, (int)b);
}

/* Converts RGB565 color to YUV. */
static __inline__ void
RGB565ToYUV(uint16_t rgb, uint8_t* y, uint8_t* u, uint8_t* v)
{
    R8G8B8ToYUV(R16_32(rgb), G16_32(rgb), B16_32(rgb), y, u, v);
}

/* Converts RGB32 color to YUV. */
static __inline__ void
RGB32ToYUV(uint32_t rgb, uint8_t* y, uint8_t* u, uint8_t* v)
{
    RGB32_t rgb_c;
    rgb_c.color = rgb;
    R8G8B8ToYUV(rgb_c.r, rgb_c.g, rgb_c.b, y, u, v);
}
/********************************************************************************
* Basics of YUV -> RGB conversion.
* Note that due to the fact that guest uses RGB only on preview window, and the
* RGB format that is used is RGB565, we can limit YUV -> RGB conversions to
* RGB565 only.
*******************************************************************************/
/*
* YUV -> RGB conversion macros
*/
/* "Optimized" macros that take specialy prepared Y, U, and V values:
* C = Y - 16
* D = U - 128
* E = V - 128
*/
#define YUV2RO(C, D, E) clamp((298 * (C) + 409 * (E) + 128) >> 8)
#define YUV2GO(C, D, E) clamp((298 * (C) - 100 * (D) - 208 * (E) + 128) >> 8)
#define YUV2BO(C, D, E) clamp((298 * (C) + 516 * (D) + 128) >> 8)
/*
* Main macros that take the original Y, U, and V values
*/
#define YUV2R(y, u, v) clamp((298 * ((y)-16) + 409 * ((v)-128) + 128) >> 8)
#define YUV2G(y, u, v) clamp((298 * ((y)-16) - 100 * ((u)-128) - 208 * ((v)-128) + 128) >> 8)
#define YUV2B(y, u, v) clamp((298 * ((y)-16) + 516 * ((u)-128) + 128) >> 8)
/* Converts YUV color to RGB565. */
static __inline__ uint16_t
YUVToRGB565(int y, int u, int v)
{
    /* Calculate C, D, and E values for the optimized macro. */
    y -= 16; u -= 128; v -= 128;
    const uint16_t r = (YUV2RO(y,u,v) >> 3) & 0x1f;
    const uint16_t g = (YUV2GO(y,u,v) >> 2) & 0x3f;
    const uint16_t b = (YUV2BO(y,u,v) >> 3) & 0x1f;
    return RGB565(r, g, b);
}

/* Converts YUV color to RGB32. */
static __inline__ uint32_t
YUVToRGB32(int y, int u, int v)
{
    /* Calculate C, D, and E values for the optimized macro. */
    y -= 16; u -= 128; v -= 128;
    RGB32_t rgb;
    rgb.r = YUV2RO(y,u,v) & 0xff;
    rgb.g = YUV2GO(y,u,v) & 0xff;
    rgb.b = YUV2BO(y,u,v) & 0xff;
    return rgb.color;
}
/* YUV pixel descriptor. */
struct YUVPixel {
    uint8_t Y;
    uint8_t U;
    uint8_t V;

    inline YUVPixel()
        : Y(0), U(0), V(0)
    {
    }

    inline explicit YUVPixel(uint16_t rgb565)
    {
        RGB565ToYUV(rgb565, &Y, &U, &V);
    }

    inline explicit YUVPixel(uint32_t rgb32)
    {
        RGB32ToYUV(rgb32, &Y, &U, &V);
    }

    inline void get(uint8_t* pY, uint8_t* pU, uint8_t* pV) const
    {
        *pY = Y; *pU = U; *pV = V;
    }
};
/* Converts an YV12 framebuffer to RGB565 framebuffer.
* Param:
* yv12 - YV12 framebuffer.
* rgb - RGB565 framebuffer.
* width, height - Dimensions for both framebuffers.
*/
void YV12ToRGB565(const void* yv12, void* rgb, int width, int height);
/* Converts an YV12 framebuffer to RGB32 framebuffer.
* Param:
* yv12 - YV12 framebuffer.
* rgb - RGB32 framebuffer.
* width, height - Dimensions for both framebuffers.
*/
void YV12ToRGB32(const void* yv12, void* rgb, int width, int height);
/* Converts an YU12 framebuffer to RGB32 framebuffer.
* Param:
* yu12 - YU12 framebuffer.
* rgb - RGB32 framebuffer.
* width, height - Dimensions for both framebuffers.
*/
void YU12ToRGB32(const void* yu12, void* rgb, int width, int height);
/* Converts an NV12 framebuffer to RGB565 framebuffer.
* Param:
* nv12 - NV12 framebuffer.
* rgb - RGB565 framebuffer.
* width, height - Dimensions for both framebuffers.
*/
void NV12ToRGB565(const void* nv12, void* rgb, int width, int height);
/* Converts an NV12 framebuffer to RGB32 framebuffer.
* Param:
* nv12 - NV12 framebuffer.
* rgb - RGB32 framebuffer.
* width, height - Dimensions for both framebuffers.
*/
void NV12ToRGB32(const void* nv12, void* rgb, int width, int height);
/* Converts an NV21 framebuffer to RGB565 framebuffer.
* Param:
* nv21 - NV21 framebuffer.
* rgb - RGB565 framebuffer.
* width, height - Dimensions for both framebuffers.
*/
void NV21ToRGB565(const void* nv21, void* rgb, int width, int height);
/* Converts an NV21 framebuffer to RGB32 framebuffer.
* Param:
* nv21 - NV21 framebuffer.
* rgb - RGB32 framebuffer.
* width, height - Dimensions for both framebuffers.
*/
void NV21ToRGB32(const void* nv21, void* rgb, int width, int height);
}; /* namespace chroma */
#endif /* HW_EMULATOR_CAMERA_CONVERTERS_H */
Converters.cpp
/*
* Copyright (C) 2011 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
 * Contains implementation of framebuffer conversion routines.
*/
#define LOG_NDEBUG 0
#define LOG_TAG "EmulatedCamera_Converter"
#include "Converters.h"
namespace chroma {
static void _YUV420SToRGB565(const uint8_t* Y,
                             const uint8_t* U,
                             const uint8_t* V,
                             int dUV,
                             uint16_t* rgb,
                             int width,
                             int height)
{
    const uint8_t* U_pos = U;
    const uint8_t* V_pos = V;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x += 2, U += dUV, V += dUV) {
            const uint8_t nU = *U;
            const uint8_t nV = *V;
            *rgb = YUVToRGB565(*Y, nU, nV);
            Y++; rgb++;
            *rgb = YUVToRGB565(*Y, nU, nV);
            Y++; rgb++;
        }
        if (y & 0x1) {
            U_pos = U;
            V_pos = V;
        } else {
            U = U_pos;
            V = V_pos;
        }
    }
}

static void _YUV420SToRGB32(const uint8_t* Y,
                            const uint8_t* U,
                            const uint8_t* V,
                            int dUV,
                            uint32_t* rgb,
                            int width,
                            int height)
{
    const uint8_t* U_pos = U;
    const uint8_t* V_pos = V;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x += 2, U += dUV, V += dUV) {
            const uint8_t nU = *U;
            const uint8_t nV = *V;
            *rgb = YUVToRGB32(*Y, nU, nV);
            Y++; rgb++;
            *rgb = YUVToRGB32(*Y, nU, nV);
            Y++; rgb++;
        }
        if (y & 0x1) {
            U_pos = U;
            V_pos = V;
        } else {
            U = U_pos;
            V = V_pos;
        }
    }
}
void YV12ToRGB565(const void* yv12, void* rgb, int width, int height)
{
    const int pix_total = width * height;
    const uint8_t* Y = reinterpret_cast<const uint8_t*>(yv12);
    /* YV12 stores the V (Cr) plane before the U (Cb) plane. */
    const uint8_t* V = Y + pix_total;
    const uint8_t* U = V + pix_total / 4;
    _YUV420SToRGB565(Y, U, V, 1, reinterpret_cast<uint16_t*>(rgb), width, height);
}

void YV12ToRGB32(const void* yv12, void* rgb, int width, int height)
{
    const int pix_total = width * height;
    const uint8_t* Y = reinterpret_cast<const uint8_t*>(yv12);
    const uint8_t* V = Y + pix_total;
    const uint8_t* U = V + pix_total / 4;
    _YUV420SToRGB32(Y, U, V, 1, reinterpret_cast<uint32_t*>(rgb), width, height);
}

void YU12ToRGB32(const void* yu12, void* rgb, int width, int height)
{
    const int pix_total = width * height;
    const uint8_t* Y = reinterpret_cast<const uint8_t*>(yu12);
    const uint8_t* U = Y + pix_total;
    const uint8_t* V = U + pix_total / 4;
    _YUV420SToRGB32(Y, U, V, 1, reinterpret_cast<uint32_t*>(rgb), width, height);
}
/* Common converter for YUV 4:2:0 interleaved to RGB565.
 * Y, U, and V point to the Y, U, and V planes, where U and V values are interleaved.
 */
static void _NVXXToRGB565(const uint8_t* Y,
                          const uint8_t* U,
                          const uint8_t* V,
                          uint16_t* rgb,
                          int width,
                          int height)
{
    _YUV420SToRGB565(Y, U, V, 2, rgb, width, height);
}

/* Common converter for YUV 4:2:0 interleaved to RGB32.
 * Y, U, and V point to the Y, U, and V planes, where U and V values are interleaved.
 */
static void _NVXXToRGB32(const uint8_t* Y,
                         const uint8_t* U,
                         const uint8_t* V,
                         uint32_t* rgb,
                         int width,
                         int height)
{
    _YUV420SToRGB32(Y, U, V, 2, rgb, width, height);
}
void NV12ToRGB565(const void* nv12, void* rgb, int width, int height)
{
    const int pix_total = width * height;
    const uint8_t* y = reinterpret_cast<const uint8_t*>(nv12);
    _NVXXToRGB565(y, y + pix_total, y + pix_total + 1,
                  reinterpret_cast<uint16_t*>(rgb), width, height);
}

void NV12ToRGB32(const void* nv12, void* rgb, int width, int height)
{
    const int pix_total = width * height;
    const uint8_t* y = reinterpret_cast<const uint8_t*>(nv12);
    _NVXXToRGB32(y, y + pix_total, y + pix_total + 1,
                 reinterpret_cast<uint32_t*>(rgb), width, height);
}

void NV21ToRGB565(const void* nv21, void* rgb, int width, int height)
{
    const int pix_total = width * height;
    const uint8_t* y = reinterpret_cast<const uint8_t*>(nv21);
    _NVXXToRGB565(y, y + pix_total + 1, y + pix_total,
                  reinterpret_cast<uint16_t*>(rgb), width, height);
}

void NV21ToRGB32(const void* nv21, void* rgb, int width, int height)
{
    const int pix_total = width * height;
    const uint8_t* y = reinterpret_cast<const uint8_t*>(nv21);
    _NVXXToRGB32(y, y + pix_total + 1, y + pix_total,
                 reinterpret_cast<uint32_t*>(rgb), width, height);
}

};  /* namespace chroma */
Once you clamp, you're done: the clipped values become a different color and you can't get back to the original. I've written some of my own code to convert between all of those formats and more if you'd like to see it, but it won't help recover clamped colors.
The conversion is lossy by necessity. Since 8-bit YUV only uses Y values in [16, 235] and U, V values in [16, 240], it has fewer possible colors than RGB using [0, 255]. However, far more colors than that are lost in the conversion, because the results are rounded.
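For example, feeding two neighboring RGB values through the YUVfromRGB code from the first answer and rounding to the nearest integer already produces a collision:

RGB (0, 0, 0) -> Y = 16.000, U = 128.000, V = 128.000 -> rounds to (16, 128, 128)
RGB (1, 0, 0) -> Y = 16.257, U = 127.852, V = 128.439 -> rounds to (16, 128, 128)

Two distinct RGB triples land on the same 8-bit YUV triple, so no inverse can restore both of them.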
I took all 16.8 million colors through the conversion code posted by Camille Goudeseune, applying rounding and integer conversion to the result (this seems slightly better than truncation without rounding). All Y values fell within [16, 235] and all U/V values within [16, 240], as they should; no clipping was required (there were no out-of-range values, and the whole limited range was actually used). When converting back to RGB, a range of [-2, 257] was produced because of rounding errors, and this can be fixed by clipping.
Of the 16.8M RGB colors, only 15.9 % were present after the round trip, and 15.7 % were restored to exactly the color they started as.
| System         | How to determine                            | Number of colors | Proportion | Accuracy |
|----------------|---------------------------------------------|------------------|------------|----------|
| 8-bit RGB      | 256 * 256 * 256                             | 16 777 216       | 100.0 %    | 100.0 %  |
| 8-bit YUV      | 220 * 225 * 225                             | 11 137 500       | 66.4 %     | -        |
| YUV from RGB   | Count unique YUV of all RGB colors          | 2 666 665        | 15.9 %     | -        |
| RGB-YUV-RGB    | Count unique RGB after round trip, clipping | 2 666 625        | 15.9 %     | 15.7 %   |
| All YUV in RGB | 8-bit YUV converted to RGB with clipping    | 2 956 551        | 17.6 %     | 15.7 %   |
Note: In the last conversion, over 8 million colors turned into invalid RGB triplets, with red ranging over [-179, 434], green over [-135, 390] and blue over [-226, 481]. Real-world SDR YUV material (movies) contains such "out of range" values, which might be better displayed as intended in HDR10 with brighter saturated colors rather than clipped to SDR (but the latter is standard practice).
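Here is a minimal sketch of the enumeration behind the table above (my own harness, not the exact code used; it assumes the YUVfromRGB / RGBfromYUV functions from the first answer are in scope, and applies round-to-nearest plus clipping as described):

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <unordered_set>

// Assumes YUVfromRGB and RGBfromYUV from the first answer are declared here.
int main() {
    std::unordered_set<uint32_t> uniqueYUV, uniqueRGB;
    long exact = 0;
    for (int r = 0; r < 256; ++r)
        for (int g = 0; g < 256; ++g)
            for (int b = 0; b < 256; ++b) {
                double Y, U, V, R, G, B;
                YUVfromRGB(Y, U, V, r, g, b);
                // Quantize YUV to 8 bits by rounding to the nearest integer.
                int yi = (int)std::lround(Y), ui = (int)std::lround(U), vi = (int)std::lround(V);
                uniqueYUV.insert((uint32_t)((yi << 16) | (ui << 8) | vi));
                RGBfromYUV(R, G, B, yi, ui, vi);
                // Round and clip the reconstructed RGB back to [0, 255].
                int ri = std::min(255, std::max(0, (int)std::lround(R)));
                int gi = std::min(255, std::max(0, (int)std::lround(G)));
                int bi = std::min(255, std::max(0, (int)std::lround(B)));
                uniqueRGB.insert((uint32_t)((ri << 16) | (gi << 8) | bi));
                if (ri == r && gi == g && bi == b) ++exact;
            }
    std::printf("unique YUV: %zu  unique round-trip RGB: %zu  exactly restored: %ld\n",
                uniqueYUV.size(), uniqueRGB.size(), exact);
    return 0;
}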
Since YUV has so many fewer colors than RGB, it is recommended to add quantization (dither) noise to reduce visible banding: add random noise in [-0.5, 0.5) to each YUV value while it is still in floating point in the [16, 235] / [16, 240] range. This should be done in conversions both ways, although it produces some visible noise on otherwise solid surfaces. The better option is to use 10-bit (HDR) formats, where banding is far less visible and no quantization noise is necessary.
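A minimal sketch of that dithering step (the helper name and RNG choice are mine), applied to a floating-point Y, U, or V value just before it is quantized to 8 bits:

#include <cmath>
#include <random>

// Adds dither noise in [-0.5, 0.5) before rounding to 8 bits, so banding in
// smooth gradients turns into fine noise instead of visible steps.
static int ditherTo8Bit(double value, std::mt19937& rng) {
    std::uniform_real_distribution<double> noise(-0.5, 0.5);
    double dithered = value + noise(rng);
    if (dithered < 0.0)   dithered = 0.0;
    if (dithered > 255.0) dithered = 255.0;
    return (int)std::lround(dithered);
}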
In my application, once I load an image into an SDL_Surface object, I need to go through each RGB value in the image and replace it with another RGB value from a lookup function.
(rNew, gNew, bNew) = lookup(rCur, gCur, bCur);
It seems surface->pixels gets me the pixels. I would appreciate it if someone could explain how to obtain the R, G, and B values from a pixel and replace them with the new RGB value.
Use built-in functions SDL_GetRGB and SDL_MapRGB
#include <stdint.h>
/*
...
*/
short int x = 200 ;
short int y = 350 ;
uint32_t pixel = *( ( uint32_t * )screen->pixels + y * screen->w + x ) ;
uint8_t r ;
uint8_t g ;
uint8_t b ;
SDL_GetRGB( pixel, screen->format , &r, &g, &b );
screen->format deals with the format so you don't have to.
You can also use SDL_Color instead of writing r,g,b variables separately.
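To complete the read-modify-write loop from the question, SDL_MapRGB packs the modified color back into the surface's pixel format. A minimal sketch for a 32-bit surface, building on the snippet above (the lookup function and its signature are the asker's hypothetical per-pixel transform):

/* Assumed prototype of the asker's per-pixel transform (hypothetical): */
void lookup(Uint8 r, Uint8 g, Uint8 b, Uint8 *rNew, Uint8 *gNew, Uint8 *bNew);

/* Read, transform, and write back the pixel at (x, y) on a 32-bit surface. */
Uint32 *pixels = (Uint32 *)screen->pixels;
Uint32 *p = pixels + y * (screen->pitch / 4) + x;   /* pitch is in bytes */

Uint8 r, g, b;
SDL_GetRGB(*p, screen->format, &r, &g, &b);          /* unpack using the surface format */

Uint8 rNew, gNew, bNew;
lookup(r, g, b, &rNew, &gNew, &bNew);

*p = SDL_MapRGB(screen->format, rNew, gNew, bNew);   /* pack back in the same format */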
Depending on the format of the surface, the pixels are arranged as an array in the buffer. For typical 32-bit surfaces it is R G B A R G B A ..., where each component is 8 bits and every 4 bytes form one pixel.
First of all, you need to lock the surface to safely access its data for modification. To manipulate the array you need to know the number of bits per pixel and the order of the channels (A, R, G, B). As Photon said, if it is 32 bits per pixel the array can be RGBARGBA...; if it is 24, the array can be RGBRGB... (it can also be BGR, blue first).
// i assume the signature of lookup to be
int lookup(Uint8 r, Uint8 g, Uint8 b, Uint8 *rnew, Uint8 *gnew, Uint8 *bnew);

SDL_LockSurface( surface );
/* Surface is locked */
/* Direct pixel access on surface here */
Uint8 byteincrement = surface->format->BytesPerPixel;
int position;
/* assumes no row padding, i.e. surface->pitch == surface->w * byteincrement */
for (position = 0; position < surface->w * surface->h * byteincrement; position += byteincrement)
{
    Uint8* curpixeldata = (Uint8*)surface->pixels + position;
    /* assuming the channel order is R, G, B starting at the first byte; you need to
       know the actual order (it can be BGR, for instance), otherwise the code is wrong */
    Uint8* rdata = curpixeldata + 0;
    Uint8* gdata = curpixeldata + 1;
    Uint8* bdata = curpixeldata + 2;
    /* those pointers point to r, g, b; use them as you want */
    lookup(*rdata, *gdata, *bdata, rdata, gdata, bdata);
}

SDL_UnlockSurface( surface );