I have a for loop with a condition inside it that finds a value in a buffer at a given index:
// uint index = ...
// const float *bufferPtr = ...
// uint stride = ...
// uint vertexCount = ...
for (uint i = 0; i < vertexCount; i++) {
float xVal = *bufferPtr++;
float yVal = *bufferPtr++;
float zVal = *bufferPtr++;
bufferPtr += stride;
if (i == index) {
qDebug() << "Vertex coord: " << xVal << " , " << yVal << " , " << zVal;
}
}
I tried to replace the for loop (and the condition inside it) with direct access by index:
float xVal = *(bufferPtr + index * stride + 0);
float yVal = *(bufferPtr + index * stride + 1);
float zVal = *(bufferPtr + index * stride + 2);
qDebug() << "Vertex coord without loop: " << xVal << " , " << yVal << " , " << zVal;
But the output logs give me different results:
Vertex coord: 14.574 , -8.236 , 7.644
Vertex coord without loop: 20.67 , -19.098 , 18.536
Vertex coord: 14.552 , -8.024 , 7.842
Vertex coord without loop: -0.361096 , 0.109164 , 0.926117
Vertex coord: 14.722 , -8.18 , 7.842
Vertex coord without loop: 20.648 , -19.052 , 18.522
I cannot figure out why the results are different :(
FIX
As suggested by LanceDeGate's answer, the issue was resolved by reducing stride by 3 before the loop:
stride = stride - 3; // Three floats per vertex
for (uint i = 0; i < vertexCount; i++) {
float xVal = *bufferPtr++;
float yVal = *bufferPtr++;
float zVal = *bufferPtr++;
bufferPtr += stride;
if (i == index) {
qDebug() << "Vertex coord: " << xVal << " , " << yVal << " , " << zVal;
}
}
Now the logs are the same:
Vertex coord: -0.522632 , -0.803892 , -9.02102
Vertex coord without loop: -0.522632 , -0.803892 , -9.02102
Vertex coord: -0.39095 , -2.04955 , -8.91668
Vertex coord without loop: -0.39095 , -2.04955 , -8.91668
Vertex coord: -0.259928 , -0.804899 , -9.03231
Vertex coord without loop: -0.259928 , -0.804899 , -9.03231
Maybe it's because a whole stride is added to bufferPtr after the three bufferPtr++ increments, so each iteration actually advances by stride + 3 floats.
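That means a direct lookup matching the original (unmodified) loop would have had to scale the index by stride + 3 instead, something like:
float xVal = *(bufferPtr + index * (stride + 3) + 0);
float yVal = *(bufferPtr + index * (stride + 3) + 1);
float zVal = *(bufferPtr + index * (stride + 3) + 2);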
Maybe this is what you mean:
float xVal = *bufferPtr;
float yVal = *(bufferPtr+1);
float zVal = *(bufferPtr+2);
bufferPtr += stride;
or
float xVal = *bufferPtr++;
float yVal = *bufferPtr++;
float zVal = *bufferPtr++;
bufferPtr += (stride-3);
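Both variants advance the pointer by exactly stride floats per vertex, which is what the index * stride arithmetic in your direct lookup assumes.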
A first hint:
Please provide, if possible, a full example which everybody can compile. It takes some time to get your code up and running...
OK, as I understand it, your code is something like this:
#include <iostream>

float var[]= { 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20};
size_t elements = sizeof(var)/sizeof(float);
int stride = 2;
int vertexCount = elements/(3+stride);
void f( float* bufferPtr, int index )
{
for (unsigned int i = 0; i < vertexCount; i++)
{
float xVal = *bufferPtr++;
float yVal = *bufferPtr++;
float zVal = *bufferPtr++;
bufferPtr += stride;
if (i == index) {
std::cout << "Vertex coord: " << xVal << " , " << yVal << " , " << zVal << std::endl;
}
}
}
which can be simplified to:
void f2( float* bufferPtr, int index )
{
struct Data
{
float x;
float y;
float z;
float dummy[2]; // stride
};
Data& d = (reinterpret_cast<Data*>(bufferPtr))[index];
std::cout << "Vertex coord: " << d.x << " " << d.y << " " << d.z << std::endl;
}
int main()
{
f( var, 2 );
f2( var, 2 );
}
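Note that this cast-based version relies on Data being laid out as exactly five consecutive floats with no padding. If you want to be defensive, a compile-time check (a small sketch) can be added next to the struct:
static_assert(sizeof(Data) == 5 * sizeof(float), "unexpected padding in Data");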
With the following test case, I get the correct result:
#include <iostream>
int main()
{
float tabla[16] = {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
unsigned int index = 0;
const float *bufferPtr = &tabla[0];
unsigned int stride = 2;
unsigned int vertexCount = 2;
for (unsigned int i = 0; i < vertexCount; i++) {
float xVal = *bufferPtr++;
float yVal = *bufferPtr++;
float zVal = *bufferPtr++;
bufferPtr += stride;
if (i == index) {
std::cout << "Vertex coord: " << xVal << " , " << yVal << " , " << zVal << std::endl;
}
}
const float *bufferPtr2 = &tabla[0];
float xVal2 = *(bufferPtr2 + index * stride + 0);
float yVal2 = *(bufferPtr2 + index * stride + 1);
float zVal2 = *(bufferPtr2 + index * stride + 2);
std::cout << "Vertex coord without loop: " << xVal2 << " , " << yVal2 << " , " << zVal2 << std::endl;
return 0;
}
Output:
Vertex coord: 0 , 1 , 2
Vertex coord without loop: 0 , 1 , 2
Essentially, I didn't change your code at all. The only difference is that I added both tests in one main function and, obviously, used a different buffer pointer (bufferPtr2) that I initialise with the first address of the array tabla. Are you sure you reset your pointer before trying the alternative method? It's hard to say, because you've only provided snippets of your code.
I'm trying to create a Vector2D class for my game, but I think I'm getting the math wrong.
When I create a new Vector2D object, its constructor automatically sets x and y to 1, 1.
Vector2D vec;
std::cout << " x: " << vec.GetX() << " y: " << vec.GetY() << " angle rad: " << vec.GetAngleRad() << " magnitude: " << vec.GetMagnitude() << std::endl;
system("pause");
return 0;
and it outputs:
x: 1
y: 1
angle in rad: 0.785398
magnitude: 1.41421
(which is exactly what I expect)
But the problem is that when I pass anything to the SetAngleRad function, I get some weird results.
For example:
Vector2D vec;
vec.SetAngleRad(3);
std::cout << " x: " << vec.GetX() << " y: " << vec.GetY() << " angle rad: " << vec.GetAngleRad() << " magnitude: " << vec.GetMagnitude() << std::endl;
system("pause");
return 0;
I would expect it to output angle in rad: 3
but instead I get
angle in rad: 0.141593.
This is the Vector2D class (I've tried to comment my code so you can see what I was thinking when I wrote it):
#include "Vector2D.h"
Vector2D::Vector2D():
_x(1.0f),
_y(1.0f)
{
}
Vector2D::~Vector2D()
{
}
void Vector2D::SetX(float x)
{
_x = x;
}
float Vector2D::GetX()
{
return _x;
}
void Vector2D::SetY(float y)
{
_y = y;
}
float Vector2D::GetY()
{
return _y;
}
void Vector2D::SetAngleRad(float angle)
{
float hypotenuse = GetMagnitude();
SetX( cos(angle) * hypotenuse); // cos of angle = x / hypotenuse
// so x = cos of angle * hypotenuse
SetY( sin(angle) * hypotenuse); //sin of angle = y / hypotenuse
// so y = sin of angle * hypotenuse
}
float Vector2D::GetAngleRad()
{
float hypotenuse = GetMagnitude();
return asin( _y / hypotenuse ); // if sin of angle A = y / hypotenuse
// then asin of y / hypotenuse = angle
}
void Vector2D::SetMagnitude(float magnitude)
{
float angle = GetAngleRad();
float hypotenuse = GetMagnitude();
SetX( (cos(angle) * hypotenuse) * magnitude ); // cos of angle = x / hypotenuse
// so cos of angle * hypotenuse = x
// multiplied by the new magnitude
SetY( (sin(angle) * hypotenuse) * magnitude); //sin of angle = y / hypotenuse
// so sin of angle * hypotenuse = y
// multipied by the new magnitude
}
float Vector2D::GetMagnitude()
{
return sqrt( (_x * _x) + (_y * _y) ); // a^2 + b^2 = c^2
//so c = sqrt( a^2 + b^2 )
}
So I'd really appreciate it if someone could explain to me what I'm doing wrong here :)
To get the angle in the full circle range, you have to use both the y and x components with the atan2 function:
return atan2( _y, _x );
Note that the result range is -Pi..Pi; correct a negative result by adding 2*Pi if you need the range 0..2*Pi.
Another issue: the SetMagnitude method actually multiplies the current magnitude by the given factor, while the name suggests that the method should set it (so a vector of length 2 will have magnitude 4 after applying SetMagnitude(2)).
So it would be better to remove the *hypotenuse multiplication (or change the method name).
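Putting both suggestions together, a minimal sketch of the two methods could look like this:
float Vector2D::GetAngleRad()
{
    return atan2( _y, _x ); // full -Pi..Pi range; add 2*Pi to a negative result if you need 0..2*Pi
}
void Vector2D::SetMagnitude(float magnitude)
{
    float angle = GetAngleRad();
    SetX( cos(angle) * magnitude ); // scale directly to the requested length
    SetY( sin(angle) * magnitude ); // no extra *hypotenuse factor
}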
I have read this and I tried to implement it in C++, but the output is quite different. I have no idea what is wrong.
The code I used:
double cordinate_print()
{
int x, y;
int number_of_chunks = 5;
double angle=0;
double x_p[5] ; // number of chunks
double y_p[5]; // number of chunks
//double x0, y0 = radious;
double rad = 150;
for (int i = 0; i < number_of_chunks; i++)
{
angle = i * (360 / number_of_chunks);
float degree = (angle * 180 / M_PI);
x_p[i] = 0 + rad * cos(degree);
y_p[i] = 0 + rad * sin(degree);
//printf("x-> %d y-> %d \n", x_p[i], y_p[i]);
cout << "x -> " << x_p[i] << "y -> " << y_p[i] << "\n";
}
//printing x and y values
printf("\n \n");
return 0;
}
Output
x -> 150 y -> 0
x -> -139.034 y -> -56.2983
x -> 107.74 y -> 104.365
x -> -60.559 y -> -137.232
x -> 4.77208 y -> 149.924
The correct output
(150,0)
(46,142)
(-121,88)
(-121,-88)
(46,-142)
The issue is with the conversion of degrees into radians:
float degree = (angle * 180 / M_PI);
The correct conversion formula is
float radian = (angle * M_PI / 180);
Also, as mentioned in the comments, use good variable names to avoid confusion.
Since your angles are in degrees, you need to convert them to radians before passing them to sin() and cos(), and then multiply by the radius.
double cordinate_print()
{
int number_of_chunks = 5;
double degrees = 0; // <-- correction
double x_p[5]; // number of chunks
double y_p[5]; // number of chunks
double radius = 150; // <-- correction
for (int i = 0; i < number_of_chunks; i++)
{
degrees = i * (360 / number_of_chunks); // <-- correction
float radian = (degrees * (M_PI / 180)); // <-- correction
x_p[i] = radius * cos(radian); // <-- correction
y_p[i] = radius * sin(radian); // <-- correction
cout << "x -> " << x_p[i] << "y -> " << y_p[i] << "\n";
}
//printing x and y values
printf("\n \n");
return 0;
}
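With these corrections the five angles are 0, 72, 144, 216 and 288 degrees, so the points should come out at approximately (150, 0), (46.4, 142.7), (-121.4, 88.2), (-121.4, -88.2) and (46.4, -142.7), i.e. the expected output from the question up to rounding.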
I want to write a program that will calculate a deltaE (distance between colors) in AnLab color space.
The formula is here: Click Here, where deltaVy is the difference between the brightness coordinates, and delta(Vx-Vy) and delta(Vz-Vy) are differences between the color coordinates.
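Written out with the coefficients I ended up using in my code below, my reading of the formula is:
deltaE = 40 * sqrt( 0.23 * (deltaVy)^2 + (delta(Vx - Vy))^2 + 0.4 * (delta(Vz - Vy))^2 )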
So,
1. How to calculate deltaVx, deltaVy, deltaVz?
2. Maybe someone has some more information about this color space? I would appreciate it.
So far I have made a conversion from RGB to XYZ, but got stuck on the deltaVx, deltaVy, deltaVz calculations.
EDIT:
After some searching I found that Vx, Vy, Vz come from adding shrinking coordinates to XYZ. But after all the calculations I don't know why the distance is wrong (a very big value). I have edited my code too, so it now implements the formulas. Also, for testing purposes I've used two very similar colors (so deltaE should now be a small value, but something is wrong).
Edited code so far:
#include <iostream>
#include <math.h>
using namespace std;
class colorRGB
{
public:
double R1, G1, B1; // RGB colors
double X, Y, Z; // vectors
double Vx, Vy, Vz;
double L, a, b;
colorRGB(double R, double G, double B) //Make new color
{
this->R1 = R;
this->G1 = G;
this->B1 = B;
}
void wypisz() // print the values ("wypisz" is Polish for "print")
{
cout << "X: " << X << " " << "Y: " << Y << " " << "Z: " << Z << endl;
cout << "Vx: " << Vx << " Vy: " << Vy << " Vz: " << Vz << endl;
cout << "L: " << L << " a: " << a << " b: " << b << endl;
cout << "--------------------------------------------------------------" << endl;
}
void calcLab(double pVx, double pVy, double pVz) // calulate L*ab
{
L = 9.2*(pVy);
a = 40 * (pVx - pVy);
b = 16 * (pVy - pVz);
}
void RGBtoXYZ()
{
double R = R1 / 255; // R from 0 to 255 etc.
double G = G1 / 255;
double B = B1 / 255;
if (R > 0.04045) { // R to X
R = (R + 0.055) / 1.055;
R = pow(R, 2.4);
}
else R = R / 12.92;
if (G > 0.04045) { // G to Y
G = (G + 0.055) / 1.055;
G = pow(G, 2.4);
}
else G = G / 12.92;
if (B > 0.04045) { // B to Z
B = (B + 0.055) / 1.055;
B = pow(B, 2.4);
}
else B = B / 12.92;
R = R * 100;
G = G * 100;
B = B * 100;
//Illuminant = D65
X = (R * 0.4124) + (G * 0.3576) + (B * 0.1805);
Y = (R * 0.2126) + (G * 0.7152) + (B * 0.0722);
Z = (R * 0.0193) + (G * 0.1192) + (B * 0.9505);
X = X - Y; // subtract from X and Z (prepare variables to further calculations)
Z = Z - Y;
Vx = X + 0.4124 + 0.3576 + 0.1805; // add shrinking coordinates
Vy = Y + 0.2126 + 0.7152 + 0.0722;
Vz = Z + 0.0193 + 0.1192 + 0.9505;
calcLab(Vx, Vy, Vz);
}
};
void calcDeltaVar(colorRGB *color1, colorRGB *color2) {
int x = 0;
double dE;
double ins;
double dVy = color1->Vy - color2->Vy;
double dVxVy = color1->Vx - color2->Vy;
double dVzVy = color1->Vz - color2->Vy;
ins = 0.23*(pow(dVy, 2)) + pow(dVxVy, 2) + 0.4*(pow(dVzVy, 2));
dE = 40 * (sqrt(ins));
x++;
cout << "Delta Ean for pair " << x << " is: " << dE << endl;
};
int main()
{
colorRGB rgb(255.0, 255.0, 255.0);
colorRGB rgb1(254.0, 254.0, 254.0);
rgb.RGBtoXYZ();
rgb.wypisz();
rgb1.RGBtoXYZ();
rgb1.wypisz();
calcDeltaVar(&rgb, &rgb1);
return 0;
}
Thanks in advance :)
I am trying to generate a set of points that I will connect to make a polygon. The data has to be generated in a systematic way.
I am trying to generate the point set by randomly deriving a radial coordinate r and evenly incrementing the angular coordinate theta, so that all the points are linked in order without crossing each other. I followed the correct formulas and I increment the angle, but the data comes out negative because of sin and cos. I wanted to know if I'm doing this correctly.
struct Point2D {
int x;
int y;
};
Point2D poly[10];
int N = 80;
int x = (rand() % N + 1) * 5;
int y = (rand() % N + 1) * 5;
int r = sqrt(x*x + y*y);
int theta = int(atan ((float)y/(float)x) * 180 / PI);
cout << "x " << x << " y " << y << " r " << r << " theta " << theta << endl;
for (int i = 0; i < 10; i++) {
Point2D p;
p.x = x;
p.y = y;
poly[i] = p;
theta += 20;
x = r * sin(theta);
y = r * cos(theta);
cout << "x " << x << " y " << y << endl;
}
sin and cos return points on a unit circle centered around (0, 0), as paddy pointed out. To have no negative values in the points of your polygon, you'll need to shift the origin of that circle. You're already changing its size with r * sin(theta); you can accomplish a minimal translation with:
x = r * cos(theta) + r;
y = r * sin(theta) + r;
When I make this change to your program, I don't get negative values anymore.
Having said that, I suspect that you're not incrementing theta the way you intend. If you're trying to divide the circle into 10 equal angles, then theta should be a float or double and incremented like this:
theta += (2 * M_PI / 10);
theta is in radians, so 2 * M_PI is once around the unit circle.
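Putting the two fixes together, a minimal sketch of the generation loop (assuming the conventional x = r*cos(theta), y = r*sin(theta), and the +r shift from above) would be:
double theta = 0.0;
for (int i = 0; i < 10; i++) {
    poly[i].x = int(r * cos(theta) + r); // +r shift keeps x non-negative
    poly[i].y = int(r * sin(theta) + r); // +r shift keeps y non-negative
    theta += 2 * M_PI / 10;              // 10 equal steps around the circle, in radians
}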