I followed the code at this link (read pixel value in bmp file) to read the RGB values of pixels, and when the entire image is one color and I read a random pixel's values, they are correct. After this I tried to extend the function so it would also count how many unique colors there are, and I added a box of a different color to the image, but the function still only finds one color. I'm wondering whether I'm somehow not looking at all the bytes contained in the BMP, but I'm not sure how that could be, as I'm new to this.
To make sure the code wasn't finding differently colored pixels but failing to add them to the list of unique colors, I tried printing output whenever a color is found that differs from the one that is always found, but no output ever came from it.
struct Color {
int R = -1;
int G = -1;
int B = -1;
};
unsigned char* readBMP(char* filename) {
int i;
FILE* f = fopen(filename, "rb");
unsigned char info[54];
fread(info, sizeof(unsigned char), 54, f);
int width = *(int*)&info[18]; // *(int*) is used because the integer stored at offset 18 in the header gives the BMP's width
int height = *(int*)&info[22]; // same reasoning: the integer at offset 22 gives the height
int size = 3 * width * height;
unsigned char* data = new unsigned char[size];
fread(data, sizeof(unsigned char), size, f);
fclose(f);
// Windows stores BMP pixel data as BGR triples; this switches them to RGB
for(i = 0; i < size; i += 3){
unsigned char tmp = data[i];
data[i] = data[i+2];
data[i+2] = tmp;
}
i = 0; // i is the x value of the pixel that is having its RGB values checked
int j = 0; // j is the y value of the pixel that is having its RGB values checked
unsigned char R = data[3 * (i * width + j)]; // value of R of the pixel at (i,j)
unsigned char G = data[3 * (i * width + j) + 1]; // value of G of the pixel at (i,j)
unsigned char B = data[3 * (i * width + j) + 2]; // value of B of the pixel at (i,j)
std::cout << "value of R is " << int(R);
std::cout << " value of G is " << int(G);
std::cout << " value of B is " << int(B);
Color num_colors[5];
int count;
int z;
int flag;
int iterator;
int sum;
for(count = 0; count < size; count += 1){
unsigned char R = data[3 * (i * width + j)];
unsigned char G = data[3 * (i * width + j) + 1];
unsigned char B = data[3 * (i * width + j) + 2];
sum = int(R) + int(G) + int(B);
if(sum != 301) {// 301 is the sum of the RGB values of the color that the program does manage to find
std::cout << sum;
}
flag = 0;
for(z = 0; z < 5; z += 1){
if(num_colors[z].R == R && num_colors[z].G == G && num_colors[z].B == B){
flag = 1;
}
}
if(flag == 1){
continue;
}
iterator = 0;
while(num_colors[iterator].R != -1){
iterator += 1;
}
num_colors[iterator].R = R;
num_colors[iterator].G = G;
num_colors[iterator].B = B;
}
int number = 0;
for(int r = 0; r < 5; r += 1){
std::cout << "\nValue of R here: " << num_colors[r].R;
if(num_colors[r].R != -1){
number += 1;
}
}
std::cout << "\nNumber of colors in image: " << number;
return data;
}
https://imgur.com/a/dXllIWL
This is the picture I'm using, so two colors should be found, but the code only finds red pixels.
Your problem is that you are always checking the RGB values of the pixel at (0, 0):
i = 0; // i is the x value of the pixel that is having its RGB values checked
int j = 0; // j is the y value of the pixel that is having its RGB values checked
...
for(count = 0; count < size; count += 1){
unsigned char R = data[3 * (i * width + j)];
unsigned char G = data[3 * (i * width + j) + 1];
unsigned char B = data[3 * (i * width + j) + 2];
i and j define the X and Y position of the pixel you are checking, but notice that you never change them in the loop, so your loop keeps doing the same thing over and over again. What you probably want is a double loop going through all coordinates in your image:
for(int y=0; y<height; y++)
for(int x=0; x<width; x++){
unsigned char R = data[3 * (y * width + x) + 0];
unsigned char G = data[3 * (y * width + x) + 1];
unsigned char B = data[3 * (y * width + x) + 2];
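For what it's worth, here is a minimal sketch (my own, not tested against your file) of the counting logic with that double loop in place; it reuses your Color struct and the data, width and height from readBMP:
Color num_colors[5];
int number = 0;                          // how many distinct colors found so far
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        unsigned char R = data[3 * (y * width + x) + 0];
        unsigned char G = data[3 * (y * width + x) + 1];
        unsigned char B = data[3 * (y * width + x) + 2];
        bool seen = false;
        for (int z = 0; z < number; z++)
        {
            if (num_colors[z].R == R && num_colors[z].G == G && num_colors[z].B == B)
            {
                seen = true;
                break;
            }
        }
        if (!seen && number < 5)         // record a new color while there is room
        {
            num_colors[number].R = R;
            num_colors[number].G = G;
            num_colors[number].B = B;
            number++;
        }
    }
}
std::cout << "\nNumber of colors in image: " << number;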
Here I am trying to normalize an RGB image.
Here is my code.
cv::Mat channels[3], normalize_rgb;
split(image, channels);
for (int i = 0; i < image.size().height; i++)
{
for (int j = 0; j < image.size().width; j++)
{
int b = (int)(image).at<cv::Vec3b>(i, j)[0];
int g = (int)(image).at<cv::Vec3b>(i, j)[1];
int r = (int)(image).at<cv::Vec3b>(i, j)[2];
double sum = b + g + r;
double bnorm = b / sum * 255;
double gnorm = g / sum * 255;
double rnorm = r / sum * 255;
channels[0].at<uchar>(i, j) = bnorm;
channels[1].at<uchar>(i, j) = gnorm;
channels[2].at<uchar>(i, j) = rnorm;
}
}
merge(channels, 3, normalize_rgb);
normalize = normalize_rgb.clone();
Problem: after normalizing, the r, g, b values are very small and get rounded down to 0, and hence I get a black image.
Can anyone please help me figure out the problem? Thanks.
Change image to double first. It is black because when you divide int by int you get zero. And then you're trying to set uchar = double which is completely wrong.
Try:
double bnorm = double (b) / sum * 255;
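For example, a minimal sketch of the full inner loop with the division done in floating point (assuming image is a CV_8UC3 cv::Mat and channels comes from split, as in the question; the zero-sum guard and cv::saturate_cast are my additions):
for (int i = 0; i < image.size().height; i++)
{
    for (int j = 0; j < image.size().width; j++)
    {
        cv::Vec3b px = image.at<cv::Vec3b>(i, j);
        double sum = px[0] + px[1] + px[2];
        if (sum == 0) sum = 1;  // avoid dividing by zero on pure black pixels
        channels[0].at<uchar>(i, j) = cv::saturate_cast<uchar>(px[0] / sum * 255.0);
        channels[1].at<uchar>(i, j) = cv::saturate_cast<uchar>(px[1] / sum * 255.0);
        channels[2].at<uchar>(i, j) = cv::saturate_cast<uchar>(px[2] / sum * 255.0);
    }
}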
You need to divide the sum by the number of pixels.
In a 1000x1000 image you want to divide by the average, not by one million times the average.
Note that dividing by the average may give you values bigger than 255 (for values higher than the average), so clamping is required after the scaling. Probably a better solution would be to aim for the average becoming 128, not 255, with
double bnorm = b * k;
double gnorm = g * k;
double rnorm = r * k;
where k is computed once per image as
double average = sum / (image.size().height*image.size().width);
double k = 127.0 / average;
but note that this can still produce values higher than 255 in output.
If you want to stretch, you need to normalize with the maximum value, not with the average, and in that case clamping is not needed.
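Going back to the average-based scaling above, here is an illustrative sketch (averaging over the three channels and using cv::saturate_cast for the clamping are my own choices):
cv::Scalar channelMeans = cv::mean(image);  // per-channel averages over the whole image
double average = (channelMeans[0] + channelMeans[1] + channelMeans[2]) / 3.0;
double k = 127.0 / average;                 // computed once per image

for (int i = 0; i < image.size().height; i++)
{
    for (int j = 0; j < image.size().width; j++)
    {
        cv::Vec3b& px = image.at<cv::Vec3b>(i, j);
        for (int c = 0; c < 3; c++)
            px[c] = cv::saturate_cast<uchar>(px[c] * k);  // clamp to [0, 255]
    }
}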
I found a link related to RGB normalization; there is a mathematical derivation in the article.
So actually, you need to divide by a per-pixel total: each total is just the sum of that pixel's channels, not a sum over the whole image (width() * height()). Maybe you should try this:
for (int i = 0; i < image.size().height; i++)
{
    for (int j = 0; j < image.size().width; j++)
    {
        int b = (int)(image).at<cv::Vec3b>(i, j)[0];
        int g = (int)(image).at<cv::Vec3b>(i, j)[1];
        int r = (int)(image).at<cv::Vec3b>(i, j)[2];
        int sum = b + g + r;
        // cast to double so the division is not truncated to zero
        double bnorm = (double)b / sum * 255;
        double gnorm = (double)g / sum * 255;
        double rnorm = (double)r / sum * 255;
        channels[0].at<uchar>(i, j) = bnorm;
        channels[1].at<uchar>(i, j) = gnorm;
        channels[2].at<uchar>(i, j) = rnorm;
    }
}
I am trying to apply the Sobel operator in the HSV color space (I was told to do this in HSV by my guide, but I don't understand why it would work better in HSV than in RGB).
I have built a function that converts from RGB to HSV. While I have some mediocre knowledge of C++, I am getting confused by the image processing, so I tried to keep the code as simple as possible, meaning I don't care (at this stage) about time or space.
Looking at the results I got as gray-level BMP photos, my V and S seem to be fine, but my H looks like gibberish.
I have 2 questions here:
1. How should a normal H photo in gray levels look compared to the source photo?
2. Where did I go wrong in the code:
void RGBtoHSV(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double Rn, Gn, Bn;
double C;
double H, S, V;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
Rn = (1.0*image[row][column][R]) / 255;
Gn = (1.0*image[row][column][G] )/ 255;
Bn = (1.0*image[row][column][B] )/ 255;
//double RGBn[3] = { Rn, Gn, Bn };
double max = Rn;
if (max < Gn) max = Gn;
if (max < Bn) max = Bn;
double min = Rn;
if (min > Gn) min = Gn;
if (min > Bn) min = Bn;
C = max - min;
H = 0;
if (max==0)
{
S = 0;
H = -1; // undefined
V = max;
}
else
{
/* if (max == Rn)
H = (60.0* ((int)((Gn - Bn) / C) % 6));
else if (max == Gn)
H = 60.0*( (Bn - Rn)/C + 2);
else
H = 60.0*( (Rn - Gn)/C + 4);
*/
if (max == Rn)
H = ( 60.0* ( (Gn - Bn) / C) ) ;
else if (max == Gn)
H = 60.0*((Bn - Rn) / C + 2);
else
H = 60.0*((Rn - Gn) / C + 4);
V = max; //AKA lightness
S = C / max; //saturation
}
while (H < 0)
H += 360;
while (H>360)
H -= 360;
Him[row][column] = (float)H;
Vim[row][column] = (float)V;
Sim[row][column] = (float)S;
}
}
}
Also my HSVtoRGB:
void HSVtoRGB(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double R1, G1, B1;
double C;
double V;
double S;
double H;
int Htag;
double Htag2;
double x;
double m;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
H = (double)Him[row][column];
S = (double)Sim[row][column];
V = (double)Vim[row][column];
C = V*S;
Htag = (int) (H / 60.0);
Htag2 = H/ 60.0;
//x = C*(1 - abs(Htag % 2 - 1));
double tmp1 = fmod(Htag2, 2);
double temp=(1 - abs(tmp1 - 1));
x = C*temp;
//switch (Htag)
switch (Htag)
{
case 0 :
R1 = C;
G1 = x;
B1 = 0;
break;
case 1:
R1 = x;
G1 = C;
B1 = 0;
break;
case 2:
R1 = 0;
G1 = C;
B1 = x;
break;
case 3:
R1 = 0;
G1 = x;
B1 = C;
break;
case 4:
R1 = x;
G1 = 0;
B1 = C;
break;
case 5:
R1 = C;
G1 = 0;
B1 = x;
break;
default:
R1 = 0;
G1 = 0;
B1 = 0;
break;
}
m = V - C;
//this is also good change I found
//image[row][column][R] = unsigned char( (R1 + m)*255);
//image[row][column][G] = unsigned char( (G1 + m)*255);
//image[row][column][B] = unsigned char( (B1 + m)*255);
image[row][column][R] = round((R1 + m) * 255);
image[row][column][G] = round((G1 + m) * 255);
image[row][column][B] = round((B1 + m) * 255);
}
}
}
void HSVfloattoGrayconvert(unsigned char grayimage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS], float hsvimage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS], char hsv)
{
//grayimage , flaotimage , h/s/v
float factor;
if (hsv == 'h' || hsv == 'H') factor = (float) 1 / 360;
else factor = 1;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
grayimage[row][column] = (unsigned char) (0.5f + 255.0f * (float)hsvimage[row][column] / factor);
}
}
}
and my main:
unsigned char ColorImage1[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
float Himage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float Vimage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float Simage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char ColorImage2[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
unsigned char HimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char VimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char SimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char HAfterSobel[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char VAfterSobel[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char SAfterSobal[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char HSVcolorAfterSobal[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
unsigned char RGBAfterSobal[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
int KernelX[3][3] = {
{-1,0,+1}, {-2,0,2}, {-1,0,1 }
};
int KernelY[3][3] = {
{-1,-2,-1}, {0,0,0}, {1,2,1}
};
void main()
{
//work
LoadBgrImageFromTrueColorBmpFile(ColorImage1, "P22A.bmp");
// add noise
AddSaltAndPepperNoiseRGB(ColorImage1, 350, 255);
StoreBgrImageAsTrueColorBmpFile(ColorImage1, "saltandpepper.bmp");
AddGaussNoiseCPPstileRGB(ColorImage1, 0.0, 1.0);
StoreBgrImageAsTrueColorBmpFile(ColorImage1, "Saltandgauss.bmp");
//saves hsv in float array
RGBtoHSV(ColorImage1, Himage, Vimage, Simage);
//saves hsv float arrays in unsigned char arrays
HSVfloattoGrayconvert(HimageGray, Himage, 'h');
HSVfloattoGrayconvert(VimageGray, Vimage, 'v');
HSVfloattoGrayconvert(SimageGray, Simage, 's');
StoreGrayImageAsGrayBmpFile(HimageGray, "P22H.bmp");
StoreGrayImageAsGrayBmpFile(VimageGray, "P22V.bmp");
StoreGrayImageAsGrayBmpFile(SimageGray, "P22S.bmp");
WaitForUserPressKey();
}
Edit: changed the code and added sources for the equations.
Sources for the equations:
http://www.rapidtables.com/convert/color/hsv-to-rgb.htm
http://www.rapidtables.com/convert/color/rgb-to-hsv.htm
Edit 3:
Following #gpasch's advice, using a better reference and deleting the mod 6, I am now able to restore the original RGB photo! But unfortunately my H photo in grayscale is now even more chaotic than before.
I'll edit the code above so it has more info about how I am saving the H grayscale photo.
That is the peril of going through garbage web sites; I suggest the following:
https://www.cs.rit.edu/~ncs/color/t_convert.html
That mod 6 seems fishy there.
You also need to make sure you understand that H is in degrees from 0 to 360; if your filter expects 0..1 you have to change it.
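For example (my own sketch), inside the per-pixel loop that rescaling could be as simple as:
float Hnorm = Him[row][column] / 360.0f;  // map H from degrees [0, 360] to [0, 1]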
I am trying to apply the Sobel operator in the HSV color space (I was told to do this in HSV by my guide, but I don't understand why it would work better in HSV than in RGB)
It depends on what you are trying to achieve. If you're trying to do edge detection based on brightness for example, then just working with say the V channel might be simpler than processing all three channels of RGB and combining them afterwards.
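As a rough illustration (my own sketch, not part of the original answer), applying the question's KernelX/KernelY to just the V channel could look like this, writing the gradient magnitude into VAfterSobel (needs <cmath> for sqrt):
for (int row = 1; row < NUMBER_OF_ROWS - 1; row++)
{
    for (int column = 1; column < NUMBER_OF_COLUMNS - 1; column++)
    {
        int gx = 0, gy = 0;
        for (int dr = -1; dr <= 1; dr++)
        {
            for (int dc = -1; dc <= 1; dc++)
            {
                gx += KernelX[dr + 1][dc + 1] * VimageGray[row + dr][column + dc];
                gy += KernelY[dr + 1][dc + 1] * VimageGray[row + dr][column + dc];
            }
        }
        int magnitude = (int)(sqrt((double)(gx * gx + gy * gy)) + 0.5);
        if (magnitude > 255) magnitude = 255;   // clamp to the unsigned char range
        VAfterSobel[row][column] = (unsigned char)magnitude;
    }
}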
How should a normal H photo in gray levels look compared to the source photo?
You would see regions which are a similar colour appear as a similar shade of grey, and for a real-world scene you would still see gradients. But where there are spatially adjacent regions with colours far apart in hue, there would be a sharp jump. The shapes would generally be recognisable though.
Where did I go wrong in the code:
There are two main problems with your code. The first is that the hue scaling in HSVfloattoGrayconvert is wrong. Your code is setting factor=1.0/360.0f but then dividing by the factor, which means it's multiplying by 360. If you simply multiply by the factor, it produces the expected output. This is because the earlier calculation uses normalised values (0..1) for S and V but angle in degrees for H, so you need to divide by 360 to normalise H.
Second, the conversion back to RGB has a problem, mainly in the calculation of Htag: you want the original (fractional) value when calculating x, but the floor of it only when switching on the sector.
Note that despite what #gpasch suggested, the mod 6 operation is actually correct. This is because the conversion you are using is based on the hexagonal colour space model for HSV, and this is used to determine which sector your colour is in. For a continuous model, you could use a radial conversion instead which is slightly different. Both are well explained on Wikipedia.
I took your code, added a few functions to generate input data and save output files so it is completely standalone, and fixed the bugs above while making minimal changes to the source.
Given the following generated input image:
the Hue channel extracted is:
The saturation channel is:
and finally value:
After fixing up the HSV to RGB conversion, I verified that the resulting output image matches the original.
The updated code is below (as mentioned above, changed minimally to make a standalone test):
#include <cstdio>   // for FILE, fopen, fprintf and fwrite in SaveImage
#include <string>
#include <cmath>
#include <cstdlib>
enum ColorIndex
{
R = 0,
G = 1,
B = 2,
};
namespace
{
const unsigned NUMBER_OF_COLUMNS = 256;
const unsigned NUMBER_OF_ROWS = 256;
const unsigned NUMBER_OF_COLORS = 3;
};
void RGBtoHSV(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double Rn, Gn, Bn;
double C;
double H, S, V;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
Rn = image[row][column][R] / 255.0;
Gn = image[row][column][G] / 255.0;
Bn = image[row][column][B] / 255.0;
double max = Rn;
if (max < Gn) max = Gn;
if (max < Bn) max = Bn;
double min = Rn;
if (min > Gn) min = Gn;
if (min > Bn) min = Bn;
C = max - min;
H = 0;
if (max==0)
{
S = 0;
H = 0; // Undefined
V = max;
}
else
{
if (max == Rn)
H = 60.0*fmod((Gn - Bn) / C, 6.0);
else if (max == Gn)
H = 60.0*((Bn - Rn) / C + 2);
else
H = 60.0*((Rn - Gn) / C + 4);
V = max; //AKA lightness
S = C / max; //saturation
}
while (H < 0)
H += 360.0;
while (H > 360)
H -= 360.0;
Him[row][column] = (float)H;
Vim[row][column] = (float)V;
Sim[row][column] = (float)S;
}
}
}
void HSVtoRGB(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double R1, G1, B1;
double C;
double V;
double S;
double H;
double Htag;
double x;
double m;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
H = (double)Him[row][column];
S = (double)Sim[row][column];
V = (double)Vim[row][column];
C = V*S;
Htag = H / 60.0;
double x = C*(1.0 - fabs(fmod(Htag, 2.0) - 1.0));
int i = floor(Htag);
switch (i)
{
case 0 :
R1 = C;
G1 = x;
B1 = 0;
break;
case 1:
R1 = x;
G1 = C;
B1 = 0;
break;
case 2:
R1 = 0;
G1 = C;
B1 = x;
break;
case 3:
R1 = 0;
G1 = x;
B1 = C;
break;
case 4:
R1 = x;
G1 = 0;
B1 = C;
break;
case 5:
R1 = C;
G1 = 0;
B1 = x;
break;
default:
R1 = 0;
G1 = 0;
B1 = 0;
break;
}
m = V - C;
image[row][column][R] = round((R1 + m) * 255);
image[row][column][G] = round((G1 + m) * 255);
image[row][column][B] = round((B1 + m) * 255);
}
}
}
void HSVfloattoGrayconvert(unsigned char grayimage[][NUMBER_OF_COLUMNS], float hsvimage[][NUMBER_OF_COLUMNS], char hsv)
{
//grayimage , flaotimage , h/s/v
float factor;
if (hsv == 'h' || hsv == 'H') factor = 1.0f/360.0f;
else factor = 1.0f;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
grayimage[row][column] = (unsigned char) (0.5f + 255.0f * (float)hsvimage[row][column] * factor);
}
}
}
int KernelX[3][3] = {
{-1,0,+1}, {-2,0,2}, {-1,0,1 }
};
int KernelY[3][3] = {
{-1,-2,-1}, {0,0,0}, {1,2,1}
};
void GenerateTestImage(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS])
{
for (unsigned y = 0; y < NUMBER_OF_ROWS; y++)
{
for (unsigned x = 0; x < NUMBER_OF_COLUMNS; x++)
{
image[y][x][R] = x % 256;
image[y][x][G] = y % 256;
image[y][x][B] = (255-x) % 256;
}
}
}
void GenerateTestImage(unsigned char image[][NUMBER_OF_COLUMNS])
{
for (unsigned y = 0; y < NUMBER_OF_ROWS; y++)
{
for (unsigned x = 0; x < NUMBER_OF_COLUMNS; x++)
{
image[x][y] = x % 256;
}
}
}
// Color (three channel) images
void SaveImage(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS], const std::string& filename)
{
FILE* fp = fopen(filename.c_str(), "wb");  // binary mode so the pixel data isn't altered on Windows
fprintf(fp, "P6\n%u %u\n255\n", NUMBER_OF_COLUMNS, NUMBER_OF_ROWS);
fwrite(image, NUMBER_OF_COLORS, NUMBER_OF_ROWS*NUMBER_OF_COLUMNS, fp);
fclose(fp);
}
// Grayscale (single channel) images
void SaveImage(unsigned char image[][NUMBER_OF_COLUMNS], const std::string& filename)
{
FILE* fp = fopen(filename.c_str(), "wb");  // binary mode so the pixel data isn't altered on Windows
fprintf(fp, "P5\n%u %u\n255\n", NUMBER_OF_COLUMNS, NUMBER_OF_ROWS);
fwrite(image, 1, NUMBER_OF_ROWS*NUMBER_OF_COLUMNS, fp);
fclose(fp);
}
unsigned char ColorImage1[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
unsigned char Himage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char Simage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char Vimage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float HimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float SimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float VimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
int main()
{
// Test input
GenerateTestImage(ColorImage1);
SaveImage(ColorImage1, "test_input.ppm");
//saves hsv in float array
RGBtoHSV(ColorImage1, HimageGray, VimageGray, SimageGray);
//saves hsv float arrays in unsigned char arrays
HSVfloattoGrayconvert(Himage, HimageGray, 'h');
HSVfloattoGrayconvert(Vimage, VimageGray, 'v');
HSVfloattoGrayconvert(Simage, SimageGray, 's');
SaveImage(Himage, "P22H.pgm");
SaveImage(Vimage, "P22V.pgm");
SaveImage(Simage, "P22S.pgm");
// Convert back to get the original test image
HSVtoRGB(ColorImage1, HimageGray, VimageGray, SimageGray);
SaveImage(ColorImage1, "test_output.ppm");
return 0;
}
The input image was generated by a very simple algorithm which gives us gradients in each dimension, so we can easily inspect and verify the expected output. I used ppm/pgm files as they are simpler to write and more portable than BMP.
Hope this helps - let me know if you have any questions.
The program runs but the curved line isn't being displayed.
Here is my code; note that I have 4 vertices in an array.
void GLWidget::drawControlPolygon(){
for (int i = 0; i < vertices.size()-1;i++){
drawEdge(vertices[i], vertices[i+1], RGBValue(0,0,0));
}
}
void GLWidget::drawDeCasteljau(float t) {
Point p;
int N_PTS = 4;
p.x = pow((1-t),3)*vertices[0].x + 3*t*pow((1-t),2)*vertices[1].x + 3*(1-t)*pow(t,2)*vertices[2].x + pow(t,3)*vertices[3].x;
p.y = pow((1-t),3)*vertices[0].y + 3*t*pow((1-t),2)*vertices[1].y + 3*(1-t)*pow(t,2)*vertices[2].y + pow(t,3)*vertices[3].y;
p.z = pow((1-t),3)*vertices[0].z + 3*t*pow((1-t),2)*vertices[1].z + 3*(1-t)*pow(t,2)*vertices[2].z + pow(t,3)*vertices[3].z;
int bezPoints[3][3] ;
for (float u = 0.0; u <= 1.0; u += t) {
for (int diag = N_PTS-2; diag >= 0; diag--) {
for (int i = 0; i <= diag; i++) {
int j = diag - i;
bezPoints[i][j] = (1.0-u)*bezPoints[i][j+1] + u*bezPoints[i+1][j];
}
}
// set the pixel for this parameter value
//Set pixel method for theImage object.
// void setPixel(Index row, Index col, Byte red, Byte green, Byte blue, Byte alpha=255);
// void setPixel(Index row, Index col, RGBValue colour, Byte alpha = 255);
theImage.setPixel(bezPoints[0], bezPoints[0][0], RGBValue());
}
}
void GLWidget::drawBezierCurve() {
}
For the full class, here is the link to it:
https://www.dropbox.com/s/j6jw51uhz30m3tb/testApp.cc?dl=0
So far the output looks like this
Thanks!
I tried a quick and dirty translation of the code here.
However, my version outputs noise comparable to grey t-shirt material, or heather if it please you:
#include <fstream>
#include "perlin.h"
double Perlin::cos_Interp(double a, double b, double x)
{
ft = x * 3.1415927;
f = (1 - cos(ft)) * .5;
return a * (1 - f) + b * f;
}
double Perlin::noise_2D(double x, double y)
{
/*
int n = (int)x + (int)y * 57;
n = (n << 13) ^ n;
int nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff;
return 1.0 - ((double)nn / 1073741824.0);
*/
int n = (int)x + (int)y * 57;
n = (n<<13) ^ n;
return ( 1.0 - ( (n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0);
}
double Perlin::smooth_2D(double x, double y)
{
corners = ( noise_2D(x - 1, y - 1) + noise_2D(x + 1, y - 1) + noise_2D(x - 1, y + 1) + noise_2D(x + 1, y + 1) ) / 16;
sides = ( noise_2D(x - 1, y) + noise_2D(x + 1, y) + noise_2D(x, y - 1) + noise_2D(x, y + 1) ) / 8;
center = noise_2D(x, y) / 4;
return corners + sides + center;
}
double Perlin::interp(double x, double y)
{
int x_i = int(x);
double x_left = x - x_i;
int y_i = int(y);
double y_left = y - y_i;
double v1 = smooth_2D(x_i, y_i);
double v2 = smooth_2D(x_i + 1, y_i);
double v3 = smooth_2D(x_i, y_i + 1);
double v4 = smooth_2D(x_i + 1, y_i + 1);
double i1 = cos_Interp(v1, v2, x_left);
double i2 = cos_Interp(v3, v4, x_left);
return cos_Interp(i1, i2, y_left);
}
double Perlin::perlin_2D(double x, double y)
{
double total = 0;
double p = .25;
int n = 1;
for(int i = 0; i < n; ++i)
{
double freq = pow(2, i);
double amp = pow(p, i);
total = total + interp(x * freq, y * freq) * amp;
}
return total;
}
int main()
{
Perlin perl;
ofstream ofs("./noise2D.ppm", ios_base::binary);
ofs << "P6\n" << 512 << " " << 512 << "\n255\n";
for(int i = 0; i < 512; ++i)
{
for(int j = 0; j < 512; ++j)
{
double n = perl.perlin_2D(i, j);
n = floor((n + 1.0) / 2.0 * 255);
unsigned char c = n;
ofs << c << c << c;
}
}
ofs.close();
return 0;
}
I don't believe that I strayed too far from the aforementioned site's directions aside from adding in the ppm image generation code, but then again I'll admit to not fully grasping what is going on in the code.
As you'll see by the commented section, I tried two (similar) ways of generating pseudorandom numbers for noise. I also tried different ways of scaling the numbers returned by perlin_2D to RGB color values. These two ways of editing the code have just yielded different looking t-shirt material. So, I'm forced to believe that there's something bigger going on that I am unable to recognize.
Also, I'm compiling with g++ and the c++11 standard.
EDIT: Here's an example: http://imgur.com/Sh17QjK
To convert a double in the range of [-1.0, 1.0] to an integer in range [0, 255]:
n = floor((n + 1.0) / 2.0 * 255.99);
To write it as a binary value to the PPM file:
ofstream ofs("./noise2D.ppm", ios_base::binary);
...
unsigned char c = n;
ofs << c << c << c;
Is this a direct copy of your code? You assigned an integer to what should be the Y fractional value - it's a typo, and it will throw the entire noise algorithm off if you don't fix it:
double Perlin::interp(double x, double y)
{
int x_i = int(x);
double x_left = x - x_i;
int y_i = int(y);
double y_left = y = y_i; // This should have a minus, not an "=", like the line above
.....
}
My guess is that if you're successfully generating the bitmap with the proper color computation, you're getting vertical bars or something along those lines.
You also need to remember that the Perlin generator usually spits out numbers in the range of -1 to 1 and you need to multiply the resultant value as such:
value * 127 + 128 = {R, G, B}
to get a good grayscale image.
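A minimal sketch of that mapping for the inner loop of your main, reusing perl and ofs from your code (the clamp is my addition in case the noise slightly overshoots [-1, 1]):
double v = perl.perlin_2D(i, j);       // roughly in [-1, 1]
int gray = (int)(v * 127.0 + 128.0);   // map [-1, 1] to about [1, 255]
if (gray < 0)   gray = 0;              // clamp, in case the noise overshoots
if (gray > 255) gray = 255;
unsigned char c = (unsigned char)gray;
ofs << c << c << c;                    // same value for R, G and B gives grayscale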