As far as I know, OpenCV does not offer a per-pixel add(), only addWeighted(), which applies a single scalar weight to all pixels. Even using C-style pointer access, the fastest means of pixel access I know of, my custom alpha compositing function is still painfully slow: it takes nearly 2 seconds for a 1400x900 image. I don't think building in release mode alone will optimize that away... Is there a way to increase the speed?
I'm writing alphaCompositeLayers(), an alpha compositing function that weights each pixel of the background cv::Mat by the alpha value of the corresponding pixel of the foreground cv::Mat. Both cv::Mats are CV_8UC4 (unsigned char, 4 channels):
// mat1 is the foreground, mat0 the background
cv::Mat alphaCompositeLayers(const cv::Mat &mat0, const cv::Mat &mat1) {
    cv::Mat res = mat0.clone();
    int nRows = res.rows;
    int nCols = res.cols * res.channels();
    // both mats must be continuous for the flattened single-row walk
    if (res.isContinuous() && mat1.isContinuous()) {
        nCols *= nRows;
        nRows = 1;
    }
    for (int u = 0; u < nRows; u++) {
        unsigned char *resrgb = res.ptr<unsigned char>(u);
        const unsigned char *matrgb = mat1.ptr<unsigned char>(u);
        for (int v = 0; v < nCols; v += 4) {
            // composite alpha: bgA * (1 - fgA) + fgA, kept in the 0..255 range
            unsigned char newalpha = cv::saturate_cast<unsigned char>(resrgb[v + 3] * (255.0f - matrgb[v + 3]) / 255.0f + matrgb[v + 3]);
            resrgb[v] = cv::saturate_cast<unsigned char>(resrgb[v] * resrgb[v + 3] / 255.0f * (255 - matrgb[v + 3]) / 255.0f + matrgb[v] * matrgb[v + 3] / 255.0f); // / newalpha);
            resrgb[v + 1] = cv::saturate_cast<unsigned char>(resrgb[v + 1] * resrgb[v + 3] / 255.0f * (255 - matrgb[v + 3]) / 255.0f + matrgb[v + 1] * matrgb[v + 3] / 255.0f); // / newalpha);
            resrgb[v + 2] = cv::saturate_cast<unsigned char>(resrgb[v + 2] * resrgb[v + 3] / 255.0f * (255 - matrgb[v + 3]) / 255.0f + matrgb[v + 2] * matrgb[v + 3] / 255.0f); // / newalpha);
            resrgb[v + 3] = newalpha;
        }
    }
    return res;
}
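For reference, the per-pixel math above is the standard "over" compositing operator on straight-alpha colors, with $a_f$ and $a_b$ the foreground and background alphas scaled to $[0, 1]$:

$$a_{out} = a_f + a_b\,(1 - a_f), \qquad C_{out} = C_f\,a_f + C_b\,a_b\,(1 - a_f)$$

The commented-out / newalpha division would convert $C_{out}$ from premultiplied back to straight alpha.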
Here's another function multiplyLayerByAlpha() that multiplies each pixel by its alpha value (0% opacity = black, 100% opacity = pixel color):
cv::Mat multiplyLayerByAlpha(const cv::Mat &mat) {
    cv::Mat res = mat.clone();
    int nRows = res.rows;
    int nCols = res.cols * res.channels();
    if (res.isContinuous()) {
        nCols *= nRows;
        nRows = 1;
    }
    for (int u = 0; u < nRows; u++) {
        unsigned char *resrgb = res.ptr<unsigned char>(u);
        for (int v = 0; v < nCols; v += 4) {
            resrgb[v] = cv::saturate_cast<unsigned char>(resrgb[v] * resrgb[v + 3] / 255.0f);
            resrgb[v + 1] = cv::saturate_cast<unsigned char>(resrgb[v + 1] * resrgb[v + 3] / 255.0f);
            resrgb[v + 2] = cv::saturate_cast<unsigned char>(resrgb[v + 2] * resrgb[v + 3] / 255.0f);
        }
    }
    return res;
}
Given an array of cv::Mats, for example {mat0, mat1, mat2} with mat2 foremost (on top of all three), I basically run this:
cv::Mat resultingCvMat = multiplyLayerByAlpha(
alphaCompositeLayers(
mat0,
alphaCompositeLayers(mat1, mat2)
)
);
How can I make the program compute resultingCvMat faster? With plain C++ techniques like multi-threading (if so, how)? Or with built-in OpenCV functions (again, how)?
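For illustration, here is a minimal multi-threaded sketch of the same blend, assuming a reasonably recent OpenCV (3.2+) where cv::parallel_for_ accepts a lambda; alphaCompositeLayersParallel is a made-up name. The row range is split across the available worker threads and the per-pixel math stays the same:

#include <opencv2/core.hpp>

cv::Mat alphaCompositeLayersParallel(const cv::Mat &mat0, const cv::Mat &mat1) {
    cv::Mat res = mat0.clone();
    cv::parallel_for_(cv::Range(0, res.rows), [&](const cv::Range &range) {
        for (int u = range.start; u < range.end; u++) {
            unsigned char *resrgb = res.ptr<unsigned char>(u);
            const unsigned char *matrgb = mat1.ptr<unsigned char>(u);
            for (int v = 0; v < res.cols * 4; v += 4) {
                float fa = matrgb[v + 3] / 255.0f;   // foreground alpha
                float ba = resrgb[v + 3] / 255.0f;   // background alpha
                for (int c = 0; c < 3; c++)          // blend B, G, R
                    resrgb[v + c] = cv::saturate_cast<unsigned char>(
                        resrgb[v + c] * ba * (1.0f - fa) + matrgb[v + c] * fa);
                resrgb[v + 3] = cv::saturate_cast<unsigned char>(
                    255.0f * (fa + ba * (1.0f - fa)));
            }
        }
    });
    return res;
}

Beyond threading, replacing the per-channel float divisions with integer arithmetic (or a precomputed 256-entry lookup table per alpha value) usually buys more than parallelism alone.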
My function getHeightOfTerrain() calls a barycentric-formula function that does not return the correct height for the one test height I set in heightMapFromArray[][].
I've tried following the OpenGL Java game tutorials 14, 21, and 22 by "ThinMatrix", and I am confused about how to use my array heightMapforBaryCentric in both of the supplied functions, and how to set the arguments passed to the baryCentric() function so that I can solve the problem.
int createTerrain(int height, int width)
{
    float scaleit = 1.5f;
    float holder[6] = { 0.f, 0.f, 0.f, 0.f, 0.f, 0.f };
    //each outer iteration steps z by two and creates one row of quads
    //(two triangles each); because each step advances by two (x = x + 2
    //at the bottom), the number of quads is half the x range
    //number of vertices: 80 x 80 x 6
    for (int z = 0, z2 = 0; z < iterationofHeightMap; z2++)
    {
        //column
        for (int x = 0, x2 = 0; x < iterationofHeightMap; x2++)
        {
            //relevant - A: first triangle - on left side
            //[row][column]
            holder[0] = heightMapFromArray[z][x];
            vertices.push_back(glm::vec3(x, holder[0], z));
            //match height map with online barycentric use
            heightMapforBaryCentric[x2][z2] = holder[0];

            holder[1] = heightMapFromArray[z + 2][x];
            vertices.push_back(glm::vec3(x, holder[1], z + 2));
            heightMapforBaryCentric[x2][z2 + 1] = holder[1];

            holder[2] = heightMapFromArray[z + 2][x + 2];
            vertices.push_back(glm::vec3(x + 2, holder[2], z + 2));
            heightMapforBaryCentric[x2 + 1][z2 + 1] = holder[2];

            //relevant - B: second triangle (on right side)
            holder[3] = heightMapFromArray[z][x];
            vertices.push_back(glm::vec3(x, holder[3], z));

            //note: index order must stay [z][x] as above; the original had
            //these two lookups transposed ([x+2][z+2] and [x+2][z])
            holder[4] = heightMapFromArray[z + 2][x + 2];
            vertices.push_back(glm::vec3(x + 2, holder[4], z + 2));

            holder[5] = heightMapFromArray[z][x + 2];
            vertices.push_back(glm::vec3(x + 2, holder[5], z));
            //the fourth grid corner was never stored in the original;
            //getHeightOfTerrain() reads [gridX+1][gridZ], so store it here
            heightMapforBaryCentric[x2 + 1][z2] = holder[5];

            x = x + 2;
        }
        z = z + 2;
    }
    return 1;
}
float getHeightOfTerrain(float worldX, float worldZ) {
    float terrainX = worldX;
    float terrainZ = worldZ;
    float gridSquareSize = 2.0f;   // float, not int: used in the float math below
    gridX = (int)floor(terrainX / gridSquareSize);
    gridZ = (int)floor(terrainZ / gridSquareSize);
    xCoord = (float)fmod(terrainX, gridSquareSize) / gridSquareSize;
    zCoord = (float)fmod(terrainZ, gridSquareSize) / gridSquareSize;
    if (xCoord <= (1 - zCoord))
    {
        //left triangle
        answer = baryCentric(
            glm::vec3(0.0f, heightMapforBaryCentric[gridX][gridZ], 0.0f),
            glm::vec3(0.0f, heightMapforBaryCentric[gridX][gridZ + 1], 1.0f),
            glm::vec3(1.0f, heightMapforBaryCentric[gridX + 1][gridZ + 1], 1.0f),
            glm::vec2(xCoord, zCoord));
    }
    else
    {
        //right triangle
        answer = baryCentric(
            glm::vec3(0, heightMapforBaryCentric[gridX][gridZ], 0),
            glm::vec3(1, heightMapforBaryCentric[gridX + 1][gridZ + 1], 1),
            glm::vec3(1, heightMapforBaryCentric[gridX + 1][gridZ], 0),
            glm::vec2(xCoord, zCoord));
    }
    return answer;
}
float baryCentric(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3 , glm::vec2 pos) {
float det = (p2.z - p3.z) * (p1.x - p3.x) + (p3.x - p2.x) * (p1.z - p3.z);
float l1 = ((p2.z - p3.z) * (pos.x - p3.x) + (p3.x - p2.x) * (pos.y - p3.z)) / det;
float l2 = ((p3.z - p1.z) * (pos.x - p3.x) + (p1.x - p3.x) * (pos.y - p3.z)) / det;
float l3 = 1.0f - l1 - l2;
return (l1 * p1.y + l2 * p2.y + l3 * p3.y);
}
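As a side note, baryCentric() itself is the standard barycentric interpolation and appears correct. Here is a quick hypothetical sanity check (assuming glm and the function above are in scope) that isolates it from the grid indexing:

#include <glm/glm.hpp>
#include <cstdio>

float baryCentric(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3, glm::vec2 pos); // as above

int main() {
    glm::vec3 p1(0.0f, 10.0f, 0.0f);   // height 10 at corner (0,0)
    glm::vec3 p2(0.0f, 20.0f, 1.0f);   // height 20 at corner (0,1)
    glm::vec3 p3(1.0f, 30.0f, 1.0f);   // height 30 at corner (1,1)
    printf("%f\n", baryCentric(p1, p2, p3, glm::vec2(0.0f, 0.0f)));         // expect 10
    printf("%f\n", baryCentric(p1, p2, p3, glm::vec2(1.0f, 1.0f)));         // expect 30
    printf("%f\n", baryCentric(p1, p2, p3, glm::vec2(1.0f / 3, 2.0f / 3))); // expect 20 (centroid)
    return 0;
}

If those print as expected, the wrong heights must come from how heightMapforBaryCentric is filled and looked up, not from the interpolation itself.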
My expected result was that the height at the center of the test grid would be the set value 0.5, declining gradually away from it. Instead, the returned heights were all the same, varied randomly, or increased; usually they stayed under a value of one.
Here is my Sobel filter function, performed on a grayscale image. Apparently I'm not doing my calculations correctly, because I keep getting an all-black image. I have already turned in the project, but it is bothering me that the results aren't right.
int sobelH[3][3] = { -1, 0, 1,
                     -2, 0, 2,
                     -1, 0, 1 },
    sobelV[3][3] = {  1,  2,  1,
                      0,  0,  0,
                     -1, -2, -1 };
//variable declaration
int mag;
int pix_x, pix_y = 0;
int img_x, img_y;
for (img_x = 0; img_x < img->x; img_x++)
{
    for (img_y = 0; img_y < img->y; img_y++)
    {
        pix_x = 0;
        pix_y = 0;
        //calculating the X and Y convolutions
        for (int i = -1; i <= 1; i++)
        {
            for (int j = -1; j <= 1; j++)
            {
                pix_x += (img->data[img_y * img->x + img_x].red + img->data[img_y * img->x + img_x].green + img->data[img_y * img->x + img_x].blue) * sobelH[1 + i][1 + j];
                pix_y += (img->data[img_y * img->x + img_x].red + img->data[img_y * img->x + img_x].green + img->data[img_y * img->x + img_x].blue) * sobelV[1 + i][1 + j];
            }
        }
        //Gradient magnitude
        mag = sqrt((pix_x * pix_x) + (pix_y * pix_y));
        if (mag > RGB_COMPONENT_COLOR)
            mag = 255;
        if (mag < 0)
            mag = 0;
        //Setting the new pixel value
        img->data[img_y * img->x + img_x].red = mag;
        img->data[img_y * img->x + img_x].green = mag;
        img->data[img_y * img->x + img_x].blue = mag;
    }
}
Although your code could use some other improvements, the main problem is that you compute the convolution at the constant position (img_x, img_y): the loop variables i and j never enter the pixel index. What you need to do is sample the neighborhood, for example:
pix_x += (img->data[(img_y + i) * img->x + (img_x + j)].red + img->data[(img_y + i) * img->x + (img_x + j)].green + img->data[(img_y + i) * img->x + (img_x + j)].blue) * sobelH[1 + i][1 + j];
The coefficients of a Sobel kernel sum to zero, so convolving it with a constant patch yields zero; sampling the same pixel nine times therefore produces an all-black image.
Note that the example above does not take the border of the image into account. You should make sure not to access pixels that lie outside your pixel array.
Another mistake is that you're writing into the input image. You write the result at location (x, y), then compute the filter for location (x+1, y) using the already-modified value at (x, y), which is the wrong value to use.
You need to write your results to a separate output image.
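Putting both fixes together, here is a minimal sketch, assuming the question's layout (img->x is the width, img->y the height, img->data a row-major pixel array) and a hypothetical second image out of the same size for the results:

for (int img_y = 1; img_y < img->y - 1; img_y++)
{
    for (int img_x = 1; img_x < img->x - 1; img_x++)   // borders skipped
    {
        int pix_x = 0, pix_y = 0;
        for (int dy = -1; dy <= 1; dy++)
        {
            for (int dx = -1; dx <= 1; dx++)
            {
                //sample the neighbor, not the center pixel
                int idx = (img_y + dy) * img->x + (img_x + dx);
                int gray = img->data[idx].red + img->data[idx].green + img->data[idx].blue;
                pix_x += gray * sobelH[1 + dy][1 + dx];
                pix_y += gray * sobelV[1 + dy][1 + dx];
            }
        }
        int mag = (int)sqrt((double)(pix_x * pix_x + pix_y * pix_y));
        if (mag > 255)
            mag = 255;   // summing 3 channels can overshoot the 8-bit range
        //write to the output image, never back into the input
        out->data[img_y * img->x + img_x].red = mag;
        out->data[img_y * img->x + img_x].green = mag;
        out->data[img_y * img->x + img_x].blue = mag;
    }
}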
I have C++ code for a B-spline curve that has 4 points. If I want to change it to 6 points, what should I change in the code?
You can check the code:
#include "graphics.h"
#include <math.h>
int main(void) {
int gd, gm, page = 0;
gd = VGA;
gm = VGAMED;
initgraph(&gd, &gm, "");
point2d pontok[4] = { 100, 100, 150, 200, 170, 130, 240, 270 }; //pontok means points
int ap = -1; // no active point yet (avoids reading an uninitialized value below)
for (;;) {
setactivepage(page);
cleardevice();
for (int i = 0; i < 4; i++)
circle(integer(pontok[i].x), integer(pontok[i].y), 3);
double t = 0;
moveto((1.0 / 6) * (pontok[0].x * pow(1 - t, 3) +
pontok[1].x * (3 * t * t * t - 6 * t * t + 4) +
pontok[2].x * (-3 * t * t * t + 3 * t * t + 3 * t + 1) +
pontok[3].x * t * t * t),
(1.0 / 6) * (pontok[0].y * pow(1 - t, 3) +
pontok[1].y * (3 * t * t * t - 6 * t * t + 4) +
pontok[2].y * (-3 * t * t * t + 3 * t * t + 3 * t + 1) +
pontok[3].y * t * t * t));
for (t = 0; t <= 1; t += 0.01)
lineto(
(1.0 / 6) * (pontok[0].x * pow(1 - t, 3) +
pontok[1].x * (3 * t * t * t - 6 * t * t + 4) +
pontok[2].x * (-3 * t * t * t + 3 * t * t + 3 * t + 1) +
pontok[3].x * t * t * t),
(1.0 / 6) * (pontok[0].y * pow(1 - t, 3) +
pontok[1].y * (3 * t * t * t - 6 * t * t + 4) +
pontok[2].y * (-3 * t * t * t + 3 * t * t + 3 * t + 1) +
pontok[3].y * t * t * t));
/* Egerkezeles */ //Egerkezeles means mouse event handling
if (!balgomb)
ap = getactivepoint((point2d *)pontok, 4, 5);
if (ap >= 0 && balgomb) { //balgomb means left mouse button
pontok[ap].x = egerx; //eger means mouse
pontok[ap].y = egery;
}
/* Egerkezeles vege */
setvisualpage(page);
page = 1 - page;
if (kbhit())
break;
}
getch();
closegraph();
return 0;
}
From your formula, with the 1/6 factor and those basis polynomials, you are drawing one segment of a uniform cubic B-spline, which is closely related to a cubic Bezier curve. For 6 control points you have two options: evaluate more B-spline segments over a sliding window of 4 points, or switch to a single degree-5 Bezier curve. For the latter, you can google "cubic Bezier curve"; the Wikipedia page contains the formula for any degree of Bezier curve, and you get the 6-point version by using degree = 5.
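For illustration, here is a sketch of the sliding-window option, reusing the question's basis polynomials unchanged; bsplineX/bsplineY are made-up helper names, and N control points yield N - 3 segments (so 6 points draw 3 segments):

double bsplineX(const point2d *p, double t) {
    return (1.0 / 6) * (p[0].x * pow(1 - t, 3) +
                        p[1].x * (3 * t * t * t - 6 * t * t + 4) +
                        p[2].x * (-3 * t * t * t + 3 * t * t + 3 * t + 1) +
                        p[3].x * t * t * t);
}
double bsplineY(const point2d *p, double t) {
    return (1.0 / 6) * (p[0].y * pow(1 - t, 3) +
                        p[1].y * (3 * t * t * t - 6 * t * t + 4) +
                        p[2].y * (-3 * t * t * t + 3 * t * t + 3 * t + 1) +
                        p[3].y * t * t * t);
}

// inside the drawing loop, for point2d pontok[6]:
moveto((int)bsplineX(&pontok[0], 0), (int)bsplineY(&pontok[0], 0));
for (int seg = 0; seg <= 6 - 4; seg++)      // segments 0, 1, 2
    for (double t = 0; t <= 1; t += 0.01)
        lineto((int)bsplineX(&pontok[seg], t), (int)bsplineY(&pontok[seg], t));

Remember to also change the two hard-coded 4s (the circle-drawing loop and the getactivepoint() call) to 6.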
I have a buffer containing a "raw" BGRA texture with one byte per color.
The lines are in reversed order (the texture is upside down).
The BGRA buffer is all green (0, 255, 0, 255).
I need to convert that to RGBA and flip the texture's lines.
I tried this:
// bgra is an unsigned char*
int width = 1366;
int height = 768;
unsigned char* rgba = new unsigned char[width * height * 4];
for(int y = height - 1; y >= 0; y--)
{
for(int x = 0; x < width; x++)
{
rgba[(x * y * 4)] = bgra[(x * y * 4) + 2];
rgba[(x * y * 4) + 1] = bgra[(x * y * 4) + 1];
rgba[(x * y * 4) + 2] = bgra[(x * y * 4)];
rgba[(x * y * 4) + 3] = bgra[(x * y * 4) + 3];
}
}
But the result when rendered is not a full green screen but a corrupted image (screenshot omitted).
What might I be doing wrong here?
You're indexing wrong.
This is how it should be done:
rgba[(x + y * width) * 4] = bgra[(x + y * width) * 4 + 2];
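For illustration, here is the full loop with that index fix plus the vertical flip the question asks for (the index fix alone still leaves the image upside down); a sketch assuming the same width, height, bgra and rgba variables as in the question:

for (int y = 0; y < height; y++)
{
    int srcRow = y * width * 4;
    int dstRow = (height - 1 - y) * width * 4;   // mirrored destination row
    for (int x = 0; x < width; x++)
    {
        int s = srcRow + x * 4;
        int d = dstRow + x * 4;
        rgba[d]     = bgra[s + 2];   // R (BGRA keeps red at offset 2)
        rgba[d + 1] = bgra[s + 1];   // G
        rgba[d + 2] = bgra[s];       // B
        rgba[d + 3] = bgra[s + 3];   // A
    }
}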
I've been trying to tackle a YUV422-to-RGB conversion problem for about a week. I've visited many different websites and have gotten different formulas from each one. If anyone else has any suggestions, I would be glad to hear them. The formulas below give me an image with either an overall purple or a green hue. As of this moment I haven't found a formula that gives me back a proper RGB image. I have included all my various chunks of code below.
//for(int i = 0; i < 1280 * 720 * 3; i=i+3)
//{
// /*m_RGB->imageData[i] = pData[i] + pData[i+2]*((1 - 0.299)/0.615);
// m_RGB->imageData[i+1] = pData[i] - pData[i+1]*((0.114*(1-0.114))/(0.436*0.587)) - pData[i+2]*((0.299*(1 - 0.299))/(0.615*0.587));
// m_RGB->imageData[i+2] = pData[i] + pData[i+1]*((1 - 0.114)/0.436);*/
// m_RGB->imageData[i] = pData[i] + 1.403 * (pData[i+1] - 128);
// m_RGB->imageData[i+1] = pData[i] + 0.344 * (pData[i+1] - 128) - 0.714 * (pData[i+2] - 128);
// m_RGB->imageData[i+2] = pData[i] + 1.773 * (pData[i+2] - 128);
//}
for(int i = 0, j=0; i < 1280 * 720 * 3; i+=6, j+=4)
{
/*m_RGB->imageData[i] = pData[j] + pData[j+3]*((1 - 0.299)/0.615);
m_RGB->imageData[i+1] = pData[j] - pData[j+1]*((0.114*(1-0.114))/(0.436*0.587)) - pData[j+3]*((0.299*(1 - 0.299))/(0.615*0.587));
m_RGB->imageData[i+2] = pData[j] + pData[j+1]*((1 - 0.114)/0.436);
m_RGB->imageData[i+3] = pData[j+2] + pData[j+3]*((1 - 0.299)/0.615);
m_RGB->imageData[i+4] = pData[j+2] - pData[j+1]*((0.114*(1-0.114))/(0.436*0.587)) - pData[j+3]*((0.299*(1 - 0.299))/(0.615*0.587));
m_RGB->imageData[i+5] = pData[j+2] + pData[j+1]*((1 - 0.114)/0.436);*/
/*m_RGB->imageData[i] = pData[j] + 1.403 * (pData[j+3] - 128);
m_RGB->imageData[i+1] = pData[j] + 0.344 * (pData[j+1] - 128) - 0.714 * (pData[j+3] - 128);
m_RGB->imageData[i+2] = pData[j] + 1.773 * (pData[j+1] - 128);
m_RGB->imageData[i+3] = pData[j+2] + 1.403 * (pData[j+3] - 128);
m_RGB->imageData[i+4] = pData[j+2] + 0.344 * (pData[j+1] - 128) - 0.714 * (pData[j+3] - 128);
m_RGB->imageData[i+5] = pData[j+2] + 1.773 * (pData[j+1] - 128);*/
BYTE Cr = pData[j+3] - 128;
BYTE Cb = pData[j+1] - 128;
/*m_RGB->imageData[i] = pData[j] + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
m_RGB->imageData[i+1] = pData[j] - ((Cb >> 2) + (Cb >> 4) + (Cb >> 5)) - ((Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5));
m_RGB->imageData[i+2] = pData[j] + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
m_RGB->imageData[i+3] = pData[j+2] + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
m_RGB->imageData[i+4] = pData[j+2] - ((Cb >> 2) + (Cb >> 4) + (Cb >> 5)) - ((Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5));
m_RGB->imageData[i+5] = pData[j+2] + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);*/
/*int R1 = clamp(1 * pData[j] + 0 * Cb + 1.4 * Cr, 0, 255), R2 = clamp(1 * pData[j+2] + 0 * Cb + 1.4 * Cr, 0, 255);
int G1 = clamp(1 * pData[j] - 0.343 * Cb - 0.711 * Cr, 0, 255), G2 = clamp(1 * pData[j+2] - 0.343 * Cb - 0.711 * Cr, 0, 255);
int B1 = clamp(1 * pData[j] + 1.765 * Cb + 0 * Cr, 0, 255), B2 = clamp(1 * pData[j+2] + 1.765 * Cb + 0 * Cr, 0, 255);*/
/*int R1 = clamp(pData[j] + 1.403 * (pData[j+3] - 128), 0, 255), R2 = clamp(pData[j+2] + 1.403 * (pData[j+3] - 128), 0, 255);
int G1 = clamp(pData[j] + 0.344 * (pData[j+1] - 128) - 0.714 * (pData[j+3] - 128), 0, 255), G2 = clamp(pData[j+2] + 0.344 * (pData[j+1] - 128) - 0.714 * (pData[j+3] - 128), 0, 255);
int B1 = clamp(pData[j] + 1.773 * (pData[j+1] - 128), 0, 255), B2 = clamp(pData[j+2] + 1.773 * (pData[j+1] - 128), 0, 255);*/
int R1 = clamp((298 * (pData[j] - 16) + 409 * (pData[j+3] - 128) + 128) >> 8, 0, 255), R2 = clamp((298 * (pData[j+2] - 16) + 409 * (pData[j+3] - 128) + 128) >> 8, 0, 255);
int G1 = clamp((298 * (pData[j] - 16) - 100 * (pData[j+1] - 128) - 208 * (pData[j+3] - 128) + 128) >> 8, 0, 255), G2 = clamp((298 * (pData[j+2] - 16) - 100 * (pData[j+1] - 128) - 208 * (pData[j+3] - 128) + 128) >> 8, 0, 255);
int B1 = clamp((298 * (pData[j] - 16) + 516 * (pData[j+1] - 128) + 128) >> 8, 0, 255), B2 = clamp((298 * (pData[j+2] - 16) + 516 * (pData[j+1] - 128) + 128) >> 8, 0, 255);
//printf("R: %d, G: %d, B: %d, R': %d, G': %d, B': %d \n", R1, G1, B1, R2, G2, B2);
m_RGB->imageData[i] = (char)R1;
m_RGB->imageData[i+1] = (char)G1;
m_RGB->imageData[i+2] = (char)B1;
m_RGB->imageData[i+3] = (char)R2;
m_RGB->imageData[i+4] = (char)G2;
m_RGB->imageData[i+5] = (char)B2;
/*m_RGB->imageData[i] = (char)(clamp(1.164 * (pData[j] - 16) + 1.793 * (Cr), 0, 255));
m_RGB->imageData[i+1] = (char)(clamp(1.164 * (pData[j] - 16) - 0.534 * (Cr) - 0.213 * (Cb), 0, 255));
m_RGB->imageData[i+2] = (char)(clamp(1.164 * (pData[j] - 16) + 2.115 * (Cb), 0, 255));
m_RGB->imageData[i+3] = (char)(clamp(1.164 * (pData[j+2] - 16) + 1.793 * (Cr), 0, 255));
m_RGB->imageData[i+4] = (char)(clamp(1.164 * (pData[j+2] - 16) - 0.534 * (Cr) - 0.213 * (Cb), 0, 255));
m_RGB->imageData[i+5] = (char)(clamp(1.164 * (pData[j+2] - 16) + 2.115 * (Cb), 0, 255));*/
}
Any help is greatly appreciated.
Some clues to help you along:
You are confusing Cr with Cb.
Assuming UYVY/422
Y1 = data[j+0];
Cr = data[j+1];
Y2 = data[j+2];
Cb = data[j+3];
Your conversion calculations are weird, and incorrect for HD.
For SD
R = max(0, min(255, 1.164(Y - 16) + 1.596(Cr - 128)));
G = max(0, min(255, 1.164(Y - 16) - 0.813(Cr - 128) - 0.391(Cb - 128)));
B = max(0, min(255, 1.164(Y - 16) + 2.018(Cb - 128)));
For HD
R = max(0, min(255, 1.164(Y - 16) + 1.793(Cr - 128)));
G = max(0, min(255, 1.164(Y - 16) - 0.534(Cr - 128) - 0.213(Cb - 128)));
B = max(0, min(255, 1.164(Y - 16) + 2.115(Cb - 128)));
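For reference, a minimal C++ sketch of the SD formulas above, assuming the byte order given at the top of this answer (Y1 at j+0, Cr at j+1, Y2 at j+2, Cb at j+3); yuv422ToRgbSD and clamp255 are made-up names, and the chroma offsets must be swapped for other FOURCCs:

static inline unsigned char clamp255(float v) {
    return (unsigned char)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

// converts pixelPairs groups of 4 input bytes into 2 RGB pixels each
void yuv422ToRgbSD(const unsigned char *data, unsigned char *rgb, int pixelPairs) {
    for (int p = 0, j = 0, i = 0; p < pixelPairs; p++, j += 4, i += 6) {
        float y1 = data[j]     - 16.0f;
        float cr = data[j + 1] - 128.0f;
        float y2 = data[j + 2] - 16.0f;
        float cb = data[j + 3] - 128.0f;
        rgb[i]     = clamp255(1.164f * y1 + 1.596f * cr);               // R1
        rgb[i + 1] = clamp255(1.164f * y1 - 0.813f * cr - 0.391f * cb); // G1
        rgb[i + 2] = clamp255(1.164f * y1 + 2.018f * cb);               // B1
        rgb[i + 3] = clamp255(1.164f * y2 + 1.596f * cr);               // R2
        rgb[i + 4] = clamp255(1.164f * y2 - 0.813f * cr - 0.391f * cb); // G2
        rgb[i + 5] = clamp255(1.164f * y2 + 2.018f * cb);               // B2
    }
}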
You could simply use ConvertFrame, which is part of the DeckLink SDK.
Your problem is that there are lots of YUV422 formats out there. You must find the exact one (the FOURCC code for the specific video you're using) and then figure out the correct way to decode it.
What you can do is to save some video from your board, open it in VLC, and look at the Codec details to find the exact FOURCC used.
http://www.fourcc.org/yuv.php
Assuming packed 4:2:2, I don't see any of your blocks sampling the input data correctly. In packed 4:2:2 the input data goes Y1 U1 Y2 V1, Y3 U2 Y4 V2, ..., where the overall image is a full-resolution Y (luma) image plus U and V images each at half horizontal resolution.
Here's where I would start: unpack alternating values of the input and extract a grayscale image:
for (uint i = 0, j = 0; i < 1280 * 720 * 3; i += 3, j += 2) {
m_RGB->imageData[i] = pData[j];
m_RGB->imageData[i+1] = pData[j];
m_RGB->imageData[i+2] = pData[j];
}
Once you have that tuned to produce a grayscale image then introduce U and V by looking at pData[j+1] and pData[j+3] (or, on even pixels, pData[j-1] and pData[j+1]). Simplifying that is why some algorithms do two YUV pixels at a time.
When that works consider extracting the U and V images and properly resampling them to full resolution to produce a 444 image. Simply duplicating U and V for adjacent pixels is like upscaling by duplicating pixels.
(Note that other arrangements like 4:2:0 have even more complicated co-siting.)
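To make the resampling idea concrete, here is a sketch for one row, assuming the Y1 U1 Y2 V1 layout above; upsampleURow is a made-up name, and V is handled identically at byte offset 3. Even pixels take the nearest chroma sample and odd pixels average their two neighbors, which is linear interpolation instead of plain duplication:

void upsampleURow(const unsigned char *row, unsigned char *uFull, int width) {
    for (int x = 0; x < width; x++) {
        int pair = x / 2;                  // which 4-byte Y-U-Y-V group
        int u0 = row[pair * 4 + 1];        // U sample of this group
        if (x % 2 == 0 || pair + 1 >= width / 2)
            uFull[x] = (unsigned char)u0;  // nearest sample (or image edge)
        else
            uFull[x] = (unsigned char)((u0 + row[(pair + 1) * 4 + 1] + 1) / 2);
    }
}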
I also struggled with the conversion; this is what I ended up with:
// Get the bytes (UYVY order: U, Y1, V, Y2)
var u = bytes[0];
var y1 = bytes[1];
var v = bytes[2];
var y2 = bytes[3];
// Convert; the cast to signed byte is important!
// (shown for the first pixel, y1; repeat with y2 for the second pixel)
var r = y1 + (1.403 * (sbyte)v);
var g = y1 - (0.344 * (sbyte)u) - (0.714 * (sbyte)v);
var b = y1 + (1.770 * (sbyte)u);
if (r < 0)
r = 0;
else if (r > 255)
r = 255;
if (g < 0)
g = 0;
else if (g > 255)
g = 255;
if (b < 0)
b = 0;
else if (b > 255)
b = 255;
return Color.FromArgb((byte)r, (byte)g, (byte)b);
u and v are cast to sbyte, while y1 and y2 stay plain bytes.