Comparison between GLfloats - C++

So, I am writing a little OpenGL program that picks the color of one square and adds 0.01 to its value, so the color becomes brighter. I keep the color values for each square in an array, and I have a variable that holds the maximum value one element of the color can reach; in this case that value is one.
This is part of the function
for (GLint i = 0; i < 3; i++) {
    if (colors[selectedSquare][i] > 0) {
        colors[selectedSquare][i] += 0.01;
        if (colors[selectedSquare][i] == maxColor) {
            flag = false;
        }
    }
}
I call this function from glutTimerFunc, increasing the color value by 0.01 each time. When the color value reaches 1 (maxColor), I start reducing it in another part of the function.
The problem here is that the comparison
(colors[selectedSquare][i] == maxColor)
never becomes true. I added some output to check, and this is what I got:
colors[selectedSquare][i] value = 0.99 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 0
colors[selectedSquare][i] value = 1 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 0
colors[selectedSquare][i] value = 1.01 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 0
colors[selectedSquare][i] value = 1.02 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 0
But the interesting thing starts here: when I change the comparison to
((int)colors[selectedSquare][i] == maxColor)
I get this output
colors[selectedSquare][i] value = 0.99 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 0
colors[selectedSquare][i] value = 1 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 0
colors[selectedSquare][i] value = 1.01 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 1
colors[selectedSquare][i] value = 1.02 size = 4
maxColor value = 1 size = 4
(colors[selectedSquare][i] == maxColor) is 1
I measured the sizes using sizeof(), and colors and maxColor are declared like this:
GLfloat (Memoria::colors)[9][3] = {
{ 0.80, 0.80, 0.00 },
{ 0.00, 0.80, 0.80 },
{ 0.80, 0.00, 0.00 },
{ 0.00, 0.80, 0.00 },
{ 0.00, 1.00, 1.00 },
{ 1.00, 0.00, 0.00 },
{ 1.00, 0.00, 1.00 },
{ 1.00, 1.00, 0.00 },
{ 1.00, 1.00, 1.00 },
};
const GLfloat maxColor;
Both belong to the same class, but colors is static.
I hope someone knows what the problem is.

Directly comparing floating-point values is a bad idea. You could use >= instead of ==, or test for "equal within a tolerance" with something like
if (fabs(colors[selectedSquare][i] - maxColor) < delta)
where delta is the precision you want to use.
Your problem is that floating-point numbers are almost never stored exactly the way you expect them to be. There are always small errors in the digits far beyond the decimal point.

Related

How would I find the length? I have 8 nodes and 11 tubes

So I have 11 tubes and 8 nodes.
The length of a tube is the distance from node 1 to node 2.
8 //nodes
11 //tubes
0 0 -50 //node_x, node_y, flow
1000 0 -50 //node_x, node_y, flow
2000 0 0 //node_x, node_y, flow
0 500 0 //node_x, node_y, flow
500 500 0 //node_x, node_y, flow // NODES
0 1000 -50 //node_x, node_y, flow
1000 1000 0 //node_x, node_y, flow
2000 1000 150 //node_x, node_y, flow
1 2 0.5 //Node_1, Node_2, Diameter
2 3 0.5 //Node_1, Node_2, Diameter
1 4 0.5 //Node_1, Node_2, Diameter
4 5 0.5 //Node_1, Node_2, Diameter
2 5 0.5 //Node_1, Node_2, Diameter
2 8 0.5 //Node_1, Node_2, Diameter // TUBES
3 8 0.5 //Node_1, Node_2, Diameter
4 6 0.5 //Node_1, Node_2, Diameter
6 7 0.5 //Node_1, Node_2, Diameter
5 7 0.5 //Node_1, Node_2, Diameter
7 8 0.5 //Node_1, Node_2, Diameter
As seen in the text file above, we have 8 node entries with x, y, and flow values, and 11 tube entries with their node 1 and node 2 ids and a diameter.
That means the first tube connects id 1 and id 2, i.e. nodes (0, 0, -50) and (1000, 0, -50), so its length is 1000. As shown in the picture we need the x value here, so x is returned in the C++.
My code right now is hard coded which is like this:
for (int i = 0; i < 11; i++)
{
    //1
    if (node1_[i].id() == 0 && node2_[i].id() == 1)
    {
        return node2_->x();
    }
    //2
    if (node1_[i].id() == 1 && node2_[i].id() == 2)
    {
        return node1_->x();
    }
    //3
    if (node1_[i].id() == 0 && node2_[i].id() == 3)
    {
        return node2_->y();
    }
    //4
    if (node1_[i].id() == 3 && node2_[i].id() == 4)
    {
        return node2_->x();
    }
    //5
    if (node1_[i].id() == 1 && node2_[i].id() == 4)
    {
        return std::sqrt((node2_->x()) * (node2_->x()) + (node2_->y()) * (node2_->y()));
    }
    //6
    if (node1_[i].id() == 1 && node2_[i].id() == 7)
    {
        return std::sqrt((node1_->x()) * (node1_->x()) + (node2_->y()) * (node2_->y()));
    }
    //7
    if (node1_[i].id() == 2 && node2_[i].id() == 7)
    {
        return node2_->y();
    }
    //8
    if (node1_[i].id() == 3 && node2_[i].id() == 5)
    {
        return node1_->y();
    }
    //9
    if (node1_[i].id() == 5 && node2_[i].id() == 6)
    {
        return node2_->x();
    }
    //10
    if (node1_[i].id() == 4 && node2_[i].id() == 6)
    {
        return std::sqrt((node1_->x()) * (node1_->y()) + (node1_->x()) * (node1_->y()));
    }
    //11
    if (node1_[i].id() == 6 && node2_[i].id() == 7)
    {
        return node1_->x();
    }
}
How can I remove the manually written ids and make it data driven, so that it knows which ids to pick by itself?
length[0] = 1000
length[1] = 1000
length[2] = 500
length[3] = 500
length[4] = 707.107
length[5] = 1414.21
length[6] = 1000
length[7] = 500
length[8] = 1000
length[9] = 707.107
length[10] = 1000
These are the values I am getting, and they are correct, but I would like to change the way I am doing it: replace the hard-coded method with a data-driven one.
To "generalize" your calculations you need a "generalized" function that computes a length from the nodes connected to a tube. That part you have to work out yourself; after reading in the data, everything needed can be calculated.
Here is an idea for reading the data:
Define classes for Tube and Node.
Use a std::vector to contain them (std::vector<Tube> tubes; and std::vector<Node> nodes;).
Store each tube inside the Node class.
Read in all the tubes and nodes; while reading the tubes, assign the lower-numbered node as the location of the tube.
Loop over the nodes and calculate, using each node's tubes, with the generalized function.

A many-to-one mapping in the natural domain using discrete input variables?

I would like to find a mapping f: X --> N, with multiple discrete natural variables X of varying dimension, where f produces a unique number between 0 and the product of all dimensions (exclusive). For example, assume X = {a,b,c} with dimensions |a| = 2, |b| = 3, |c| = 2. f should produce 0 to 11 (2*3*2 = 12 values).
a b c | f(X)
0 0 0 | 0
0 0 1 | 1
0 1 0 | 2
0 1 1 | 3
0 2 0 | 4
0 2 1 | 5
1 0 0 | 6
1 0 1 | 7
1 1 0 | 8
1 1 1 | 9
1 2 0 | 10
1 2 1 | 11
This is easy when all dimensions are equal. Assume binary for example:
f(a=1,b=0,c=1) = 1*2^2 + 0*2^1 + 1*2^0 = 5
Using this naively with varying dimensions, we would get overlapping values:
f(a=0,b=1,c=1) = 0*2^2 + 1*3^1 + 1*2^0 = 4
f(a=1,b=0,c=0) = 1*2^2 + 0*3^1 + 0*2^0 = 4
A computationally fast function is preferred as I intend to use/implement it in C++. Any help is appreciated!
OK, the most important part here is the math and the algorithmics. You have variable dimensions of sizes (from least significant to most significant) d0, d1, ..., dn. A tuple (x0, x1, ..., xn) with xi < di represents the following number: x0 + d0*x1 + ... + d0*d1*...*d(n-1)*xn
In pseudo-code, I would write:
result = 0
loop for i = n to 0 step -1
    result = result * d[i] + x[i]
To implement it in C++, my advice would be to create a class whose constructor takes the dimensions (simply a vector<int> containing them), with a method that accepts an array or vector of the same size containing the values. Optionally, you could check that no input value is greater than or equal to its dimension.
A possible C++ implementation could be:
#include <stdexcept>
#include <vector>

class F {
    std::vector<int> dims;
public:
    F(std::vector<int> d) : dims(std::move(d)) {}
    int to_int(const std::vector<int>& x) const {
        if (x.size() != dims.size()) {
            throw std::invalid_argument("Wrong size");
        }
        int result = 0;
        for (int i = static_cast<int>(dims.size()) - 1; i >= 0; i--) {
            if (x[i] >= dims[i]) {
                throw std::invalid_argument("Value >= dimension");
            }
            result = result * dims[i] + x[i];
        }
        return result;
    }
};

Merging multiple .txt files into a csv

*New to Python.
I'm trying to merge multiple text files into one CSV; example below:
filename.csv
Alpha
0
0.1
0.15
0.2
0.25
0.3
text1.txt
Alpha,Beta
0,10
0.2,20
0.3,30
text2.txt
Alpha,Charlie
0.1,5
0.15,15
text3.txt
Alpha,Delta
0.1,10
0.15,20
0.2,50
0.3,10
Desired output in the csv file: -
filename.csv
Alpha Beta Charlie Delta
0 10 0 0
0.1 0 5 10
0.15 0 15 20
0.2 20 0 50
0.25 0 0 0
0.3 30 0 10
The code I've been working with (and others that were provided) gives me an answer similar to what is at the bottom of the page:
def mergeData(indir="Dir Path", outdir="Dir Path"):
    dfs = []
    os.chdir(indir)
    fileList = glob.glob("*.txt")
    for filename in fileList:
        left = "/Path/Final.csv"
        right = filename
        output = "/Path/finalMerged.csv"
        leftDf = pandas.read_csv(left)
        rightDf = pandas.read_csv(right)
        mergedDf = pandas.merge(leftDf, rightDf, how='inner', on="Alpha", sort=True)
        dfs.append(mergedDf)
    outputDf = pandas.concat(dfs, ignore_index=True)
    outputDf = pandas.merge(leftDf, outputDf, how='inner', on='Alpha', sort=True, copy=False).fillna(0)
    print(outputDf)
    outputDf.to_csv(output, index=0)

mergeData()
This is the answer I get, however, instead of the desired result:
Alpha Beta Charlie Delta
0 10 0 0
0.1 0 5 0
0.1 0 0 10
0.15 0 15 0
0.15 0 0 20
0.2 20 0 0
0.2 0 0 50
0.25 0 0 0
0.3 30 0 0
0.3 0 0 10
IIUC, you can create a list of all DataFrames (dfs), append mergedDf to it inside the loop, and finally concat all the DataFrames into one:
import pandas
import glob
import os

def mergeData(indir="dir/path", outdir="dir/path"):
    dfs = []
    os.chdir(indir)
    fileList = glob.glob("*.txt")
    for filename in fileList:
        left = "/path/filename.csv"
        right = filename
        output = "/path/filename.csv"
        leftDf = pandas.read_csv(left)
        rightDf = pandas.read_csv(right)
        mergedDf = pandas.merge(leftDf, rightDf, how='right', on="Alpha", sort=True)
        dfs.append(mergedDf)
    outputDf = pandas.concat(dfs, ignore_index=True)
    #add missing rows from leftDf (in sample Alpha - 0.25)
    #fill NaN values by 0
    outputDf = pandas.merge(leftDf, outputDf, how='left', on="Alpha", sort=True).fillna(0)
    #columns are converted to int
    outputDf[['Beta', 'Charlie']] = outputDf[['Beta', 'Charlie']].astype(int)
    print(outputDf)
    outputDf.to_csv(output, index=0)

mergeData()
Alpha Beta Charlie
0 0.00 10 0
1 0.10 0 5
2 0.15 0 15
3 0.20 20 0
4 0.25 0 0
5 0.30 30 0
EDIT:
The problem is that you changed the how='left' parameter of the second merge to how='inner':
def mergeData(indir="Dir Path", outdir="Dir Path"):
    dfs = []
    os.chdir(indir)
    fileList = glob.glob("*.txt")
    for filename in fileList:
        left = "/Path/Final.csv"
        right = filename
        output = "/Path/finalMerged.csv"
        leftDf = pandas.read_csv(left)
        rightDf = pandas.read_csv(right)
        mergedDf = pandas.merge(leftDf, rightDf, how='inner', on="Alpha", sort=True)
        dfs.append(mergedDf)
    outputDf = pandas.concat(dfs, ignore_index=True)
    #need left join, not inner
    outputDf = pandas.merge(leftDf, outputDf, how='left', on='Alpha', sort=True, copy=False).fillna(0)
    print(outputDf)
    outputDf.to_csv(output, index=0)

mergeData()
Alpha Beta Charlie Delta
0 0.00 10.0 0.0 0.0
1 0.10 0.0 5.0 0.0
2 0.10 0.0 0.0 10.0
3 0.15 0.0 15.0 0.0
4 0.15 0.0 0.0 20.0
5 0.20 20.0 0.0 0.0
6 0.20 0.0 0.0 50.0
7 0.25 0.0 0.0 0.0
8 0.30 30.0 0.0 0.0
9 0.30 0.0 0.0 10.0
import pandas as pd
data1 = pd.read_csv('samp1.csv',sep=',')
data2 = pd.read_csv('samp2.csv',sep=',')
data3 = pd.read_csv('samp3.csv',sep=',')
df1 = pd.DataFrame({'Alpha':data1.Alpha})
df2 = pd.DataFrame({'Alpha':data2.Alpha,'Beta':data2.Beta})
df3 = pd.DataFrame({'Alpha':data3.Alpha,'Charlie':data3.Charlie})
mergedDf = pd.merge(df1, df2, how='outer', on ='Alpha',sort=False)
mergedDf1 = pd.merge(mergedDf, df3, how='outer', on ='Alpha',sort=False)
a = pd.DataFrame(mergedDf1)
print(a.drop_duplicates())
output:
Alpha Beta Charlie
0 0.00 10.0 NaN
1 0.10 NaN 5.0
2 0.15 NaN 15.0
3 0.20 20.0 NaN
4 0.25 NaN NaN
5 0.30 30.0 NaN
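The per-file left-merge can also be folded over the file list with functools.reduce, which keeps the approach data driven for any number of files. This sketch uses small in-memory DataFrames standing in for filename.csv and the three text files (the variable names are illustrative):

```python
from functools import reduce

import pandas as pd

# Hypothetical in-memory stand-ins for filename.csv and the text files.
base = pd.DataFrame({"Alpha": [0, 0.1, 0.15, 0.2, 0.25, 0.3]})
text1 = pd.DataFrame({"Alpha": [0, 0.2, 0.3], "Beta": [10, 20, 30]})
text2 = pd.DataFrame({"Alpha": [0.1, 0.15], "Charlie": [5, 15]})
text3 = pd.DataFrame({"Alpha": [0.1, 0.15, 0.2, 0.3], "Delta": [10, 20, 50, 10]})

# Left-merge every file onto the base Alpha column in turn, so the base
# keeps all of its rows (including Alpha 0.25), then fill the gaps with 0.
merged = reduce(
    lambda acc, df: pd.merge(acc, df, how="left", on="Alpha", sort=True),
    [text1, text2, text3],
    base,
).fillna(0)

print(merged)
```

Because each merge is a left join onto the base frame, every Alpha row appears exactly once, giving the single-row-per-Alpha layout asked for in the question.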

Eigen Sparse valuePtr is displaying zeros while leaving out valid values

I don't understand the result I get when I try to iterate over the valuePtr of a sparse matrix. Here is my code:
#include <iostream>
#include <vector>
#include <Eigen/Sparse>

using namespace Eigen;

int main()
{
    SparseMatrix<double> sm(4, 5);
    std::vector<int> cols = {0, 1, 4, 0, 4, 0, 4};
    std::vector<int> rows = {0, 0, 0, 2, 2, 3, 3};
    std::vector<double> values = {0.2, 0.4, 0.6, 0.3, 0.7, 0.9, 0.2};
    for (std::size_t i = 0; i < cols.size(); i++)
        sm.insert(rows[i], cols[i]) = values[i];
    std::cout << sm << std::endl;
    int nz = sm.nonZeros();
    std::cout << "non_zeros : " << nz << std::endl;
    for (auto it = sm.valuePtr(); it != sm.valuePtr() + nz; ++it)
        std::cout << *it << std::endl;
    return 0;
}
Output:
0.2 0.4 0 0 0.6 // The values are in the matrix
0 0 0 0 0
0.3 0 0 0 0.7
0.9 0 0 0 0.2
non_zeros : 7
0.2 // but valuePtr() does not point to them
0.3 // I expected: 0.2, 0.3, 0.9, 0.4, 0.6, 0.7, 0.2
0.9
0
0.4
0
0
I don't understand why I am getting zeros. What's going on here?
According to the documentation for SparseMatrix:
Unlike the compressed format, there might be extra space inbetween the
nonzeros of two successive columns (resp. rows) such that insertion of
new non-zero can be done with limited memory reallocation and copies.
[...]
A call to the function makeCompressed() turns the matrix into the standard compressed format compatible with many library.
For example:
This storage scheme is better explained on an example. The following
matrix
0 3 0 0 0
22 0 0 0 17
7 5 0 1 0
0 0 0 0 0
0 0 14 0 8
and one of its possible sparse, column major representation:
Values: 22 7 _ 3 5 14 _ _ 1 _ 17 8
InnerIndices: 1 2 _ 0 2 4 _ _ 2 _ 1 4
[...]
The "_" indicates available free space to quickly insert new elements.
Since valuePtr() simply returns a pointer to the Values array, you'll see the empty spaces (the zeroes that got printed) unless you make the matrix compressed.

GSL histogram issue

I am trying to compute the cumulative distribution function for a set of values.
I computed the histogram using GSL and I tried to compute the CDF from it, but it seems like the values are shifted by one position.
This is the code I am using:
gHist = gsl_histogram_alloc((maxRange - minRange) / 5);
gsl_histogram_set_ranges_uniform(gHist, minRange, maxRange);

for (int j = 0; j < ValidDataCount; j++)
    gsl_histogram_increment(gHist, ValAdd[j]);

gsl_histogram_pdf *p = gsl_histogram_pdf_alloc(gsl_histogram_bins(gHist));
gsl_histogram_pdf_init(p, gHist);

for (int j = 0; j < gsl_histogram_bins(gHist) + 1; j++)
    printf("%f ", p->sum[j]);
The histogram is like this:
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 ... it goes on like this; there is a total of 20 values.
And the cdf is:
0.00 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.05 0.1 0.1 ...
Why is there a 0 in the first position? Shouldn't it start with 0.05?
Thank you.
GSL allocates sum as an array of size n+1, where n is the number of bins, even though only n entries are necessary to describe the PDF. This extra element exists because GSL defines sum[0] = 0.
In the GSL source code "pdf.c" you can see that:
gsl_histogram_pdf *gsl_histogram_pdf_alloc (const size_t n)
{
    (...)
    p->sum = (double *) malloc ((n + 1) * sizeof (double));
}

int gsl_histogram_pdf_init (gsl_histogram_pdf * p, const gsl_histogram * h)
{
    (...)
    p->sum[0] = 0;
    for (i = 0; i < n; i++)
    {
        sum += (h->bin[i] / mean) / n;
        p->sum[i + 1] = sum;
    }
}