Setting a vector to a matrix: algorithm help in C++

I have an array X with M*N elements, and I'm trying to create an M x N matrix A containing the same data. I'm using GSL for the matrix, and X is declared as a plain array. I'm having trouble: I keep getting overlapping values in the matrix.
Here is an example of what I am trying to do:
Vector X[4*2]
1,2,3,4,5,6,7,8
Matrix A 4X2
1, 2
3, 4
5, 6
7, 8
//here's one of my many failed attempts as an example
//creation of array x here
x[n*m] = someCbasedformulafromtheweb(n, m);
//gsl matrix allocation for matrix A, n x m
gsl_matrix * A = gsl_matrix_alloc(n, m);
for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        // setting the x[i*j] entry of gsl_matrix A at position i, j
        gsl_matrix_set (A,i,j, x[i*j]);
    }
}

I don't have GSL to play with, but wouldn't this work?
for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 2; ++j)
        gsl_matrix_set(A, i, j, X[2*i + j]);

Your problem is at this line:
gsl_matrix_set (A,i,j, x[i*j]);
Here is a table of what x[i*j] evaluates to:
i | j | x[i*j]
0 | 0 | x[0]
0 | 1 | x[0]
1 | 0 | x[0]
1 | 1 | x[1]
2 | 0 | x[0]
2 | 1 | x[2]
3 | 0 | x[0]
3 | 1 | x[3]
Instead you need to use:
gsl_matrix_set (A,i,j, x[2*i+j]);
i | j | x[2*i+j]
0 | 0 | x[0]
0 | 1 | x[1]
1 | 0 | x[2]
1 | 1 | x[3]
2 | 0 | x[4]
2 | 1 | x[5]
3 | 0 | x[6]
3 | 1 | x[7]
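Putting the fix together: below is a minimal, self-contained sketch (my own illustration, using only the documented gsl_matrix_alloc/gsl_matrix_set/gsl_matrix_get calls). Since GSL matrices are stored row-major, an alternative worth knowing is gsl_matrix_view_array(x, n, m), which wraps the flat array as an n x m matrix without copying.

#include <cstdio>
#include <gsl/gsl_matrix.h>

int main() {
    const size_t n = 4, m = 2;  // rows, columns
    double x[n * m] = {1, 2, 3, 4, 5, 6, 7, 8};

    // Copy the flat array into the matrix, row by row.
    gsl_matrix *A = gsl_matrix_alloc(n, m);
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < m; j++)
            gsl_matrix_set(A, i, j, x[m*i + j]);  // m*i + j, not i*j

    // Print to verify the layout: 1 2 / 3 4 / 5 6 / 7 8.
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++)
            std::printf("%g ", gsl_matrix_get(A, i, j));
        std::printf("\n");
    }
    gsl_matrix_free(A);
    return 0;
}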

Related

Why is the sum of bitwise AND and bitwise XOR equal to bitwise OR?

Is there a reason why this happens?
#include <stdio.h>
int main(void) {
    int i, j; // with short here, i was read as 0: scanf("%d") expects int
    printf("Enter two integers: ");
    scanf("%d %d", &i, &j);
    printf("\n%d & %d = %d\n", i, j, (i & j));
    printf("\n%d ^ %d = %d\n", i, j, (i ^ j));
    printf("\n%d | %d = %d\n", i, j, (i | j));
    if ((i | j) == (i & j) + (i ^ j))
        printf("\nYES\n");
    else
        printf("\nNO\n");
    return 0;
}
First note that i & j and i ^ j are disjoint: if a bit is set in one of them, the corresponding bit is necessarily reset in the other. That's a consequence of the truth tables of AND and XOR. AND has only a single row with a 1 in it, and XOR has a 0 in that row, so they're never simultaneously both 1.
That means we can forget about the special complications of addition (there is no carry, which makes addition purely bitwise: equivalent to both OR and XOR), and analyze this expression as if we were dealing with just booleans.
One way to look at it is that i & j exactly compensates for the case that i ^ j does not cover. If you write out the truth tables: (only 1 bit shown)
i  j  i&j  i^j  (i&j)|(i^j)
0  0   0    0        0
0  1   0    1        1
1  0   0    1        1
1  1   1    0        1
The last column has values identical to i | j.
Using the logic gate truth tables, we can easily see how it works.
+---+---+------------+-----------+------------+
| A | B | AND output | OR output | XOR output |
+---+---+------------+-----------+------------+
| 0 | 0 |     0      |     0     |     0      |
| 0 | 1 |     0      |     1     |     1      |
| 1 | 0 |     0      |     1     |     1      |
| 1 | 1 |     1      |     1     |     0      |
+---+---+------------+-----------+------------+
For instance, let i = 5, j = 6. In binary, i = 00000101 and j = 00000110.
(i | j) = (00000101 | 00000110) = 00000111 = 7
(i & j) = (00000101 & 00000110) = 00000100 = 4
(i ^ j) = (00000101 ^ 00000110) = 00000011 = 3
(i & j) + (i ^ j) = 00000100 + 00000011 = 00000111 = (i | j)
Therefore, (i | j) = (i & j) + (i ^ j).
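A quick sanity check of the identity for the example above (a sketch of mine, not from the original answer); the last line confirms that i & j and i ^ j share no set bits, which is why the addition cannot carry:

#include <cstdio>

int main() {
    int i = 5, j = 6;
    std::printf("(i & j) + (i ^ j) = %d\n", (i & j) + (i ^ j)); // 7
    std::printf("i | j             = %d\n", i | j);             // 7
    std::printf("(i & j) & (i ^ j) = %d\n", (i & j) & (i ^ j)); // 0: disjoint
    return 0;
}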

A many-to-one mapping in the natural domain using discrete input variables?

I would like to find a mapping f: X --> N over multiple discrete natural variables X of varying dimension, where f produces a unique number from 0 up to (but not including) the product of all dimensions. For example, assume X = {a,b,c} with dimensions |a| = 2, |b| = 3, |c| = 2. f should produce 0 to 11 (2*3*2 = 12 values).
a b c | f(X)
0 0 0 | 0
0 0 1 | 1
0 1 0 | 2
0 1 1 | 3
0 2 0 | 4
0 2 1 | 5
1 0 0 | 6
1 0 1 | 7
1 1 0 | 8
1 1 1 | 9
1 2 0 | 10
1 2 1 | 11
This is easy when all dimensions are equal. Assume binary for example:
f(a=1,b=0,c=1) = 1*2^2 + 0*2^1 + 1*2^0 = 5
Using this naively with varying dimensions we would get overlapping values:
f(a=0,b=1,c=1) = 0*2^2 + 1*3^1 + 1*2^0 = 4
f(a=1,b=0,c=0) = 1*2^2 + 0*3^1 + 0*2^0 = 4
A computationally fast function is preferred as I intend to use/implement it in C++. Any help is appreciated!
OK, the most important part here is the math and the algorithmics. You have variable dimensions of size (from least significant to most significant) d0, d1, ..., dn. A tuple (x0, x1, ..., xn) with xi < di represents the following number: x0 + d0*x1 + d0*d1*x2 + ... + d0*d1*...*d(n-1)*xn
In pseudo-code, I would write:
result = 0
loop for i = n to 0 step -1
    result = result * d[i] + x[i]
To implement it in C++, my advice would be to create a class whose constructor takes the dimensions (simply a vector<int> containing them), with a method that accepts an array or vector of the same size containing the values. Optionally, you can check that no input value is greater than or equal to its dimension.
A possible C++ implementation could be:
#include <stdexcept>
#include <vector>

class F {
    std::vector<int> dims;
public:
    F(const std::vector<int>& d) : dims(d) {}
    int to_int(const std::vector<int>& x) const {
        if (x.size() != dims.size()) {
            throw std::invalid_argument("Wrong size");
        }
        int result = 0;
        // Horner-style accumulation, most significant variable first.
        for (int i = (int)dims.size() - 1; i >= 0; i--) {
            if (x[i] >= dims[i]) {
                throw std::invalid_argument("Value >= dimension");
            }
            result = result * dims[i] + x[i];
        }
        return result;
    }
};
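A quick check against the table in the question (assuming the class above is in scope; note that values are passed least significant first, i.e. {c, b, a}):

#include <iostream>

int main() {
    F f({2, 3, 2});  // dims: |c| = 2, |b| = 3, |a| = 2
    // Table row a=1, b=0, c=1 should map to 7.
    std::cout << f.to_int({1, 0, 1}) << "\n";  // prints 7
    return 0;
}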

What is meant by dict?

I came across a program like the one below:
int firstMissingPositive(vector<int> &A) {
    vector<bool> dict(A.size() + 1, false);
    for (int i = 0; i < A.size(); i++) {
        if (A[i] > 0 && A[i] < dict.size()) dict[A[i]] = true;
    }
    if (A.size() == 1 && A[0] != 1) return 1;
    else if (A.size() == 1 && A[0] == 1) return 2;
    int i = 0;
    for (i = 1; i < dict.size(); i++) {
        if (dict[i] == false) return i;
    }
    return i;
}
In this program, I could not understand what the following line means:
vector<bool> dict(A.size()+1,false);
What is dict, and what does this statement do?
It's simply a variable.
The definition of the variable calls a specific constructor of the vector to initialize it with a specific size, and initialize all elements to a specific value.
It's equivalent to
vector<bool> dict;
dict.resize(A.size()+1,false);
See e.g. this std::vector constructor reference for more information about available constructors.
It is a definition of a variable "dict" of type vector<bool>. (And please Google it first.)
You are declaring a container of bools (variables that store only 0/1), with one more element than the vector A (A.size()+1), all set to false -> 0.
It calls this constructor
vector (size_type n, const value_type& val,
const allocator_type& alloc = allocator_type());
Example:
This is vector A:
  0   1   2   3   4   <- Indexes
+---+---+---+---+---+
| 0 | 1 | 2 | 3 | 4 | (int)
+---+---+---+---+---+
Its size is 5, so the code declares a container of size 6 (A.size()+1), initialized to 0's:
  0   1   2   3   4   5  <- Indexes
+---+---+---+---+---+---+
| 0 | 0 | 0 | 0 | 0 | 0 | (bool)
+---+---+---+---+---+---+
In this case it's used to flag which values appear in the first vector.
Such a vector<bool> is also often used for a Sieve of Eratosthenes: you set 1 at each index that is prime. For the numbers 0-5 it would be:
  0   1   2   3   4   5
+---+---+---+---+---+---+
| 0 | 0 | 1 | 1 | 0 | 1 |
+---+---+---+---+---+---+
Then you know at which indexes of vector A the primes are:
for (int i = 0; i < A.size(); i++)
{
    if (dict[i] == true)
    {
        std::cout << "Prime number: " << A[i] << std::endl;
    }
}
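For completeness, here is a minimal sieve along those lines (my own sketch, not part of the original answer), using the same vector<bool> constructor as dict:

#include <iostream>
#include <vector>

int main() {
    const int n = 30;
    std::vector<bool> is_prime(n + 1, true);  // everything marked prime at first
    is_prime[0] = is_prime[1] = false;
    for (int p = 2; p * p <= n; p++)
        if (is_prime[p])
            for (int q = p * p; q <= n; q += p)
                is_prime[q] = false;          // mark multiples as composite
    for (int p = 2; p <= n; p++)
        if (is_prime[p])
            std::cout << p << " ";            // 2 3 5 7 11 13 17 19 23 29
    std::cout << "\n";
    return 0;
}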

Int into array in C++: help please

I am trying to separate an integer into an array of digits. I have been using modulo 10 and then dividing by 10, but I believe that will only work for numbers of 6 digits or fewer. I may be wrong, but it is not working for me. This is what I have:
for (int i = 0; i <= 8; i++) {
    intAr[i] = intVal % 10;
    intVal /= 10;
}
It is not working for me, and any help would be lovely.
The problem, I guess, is that the digits end up reversed in the array. So try this:
for (int i = 8; i >= 0; i--)
{
    intAr[i] = intVal % 10;
    intVal /= 10;
}
This will work and will store the number correctly in the array.
If you are expecting the digits to be stored in the array in their original left-to-right order (most significant digit first), you'll need to reverse where you store the values:
for (int i = 0; i < 9; i++)
{
    intAr[9 - i - 1] = intVal % 10;
    intVal /= 10;
}
This will store your number (103000648) like this
|-0-|-1-|-2-|-3-|-4-|-5-|-6-|-7-|-8-|
| 1 | 0 | 3 | 0 | 0 | 0 | 6 | 4 | 8 |
instead of
|-0-|-1-|-2-|-3-|-4-|-5-|-6-|-7-|-8-|
| 8 | 4 | 6 | 0 | 0 | 0 | 3 | 0 | 1 |
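Put together as a complete program (a sketch assuming exactly 9 digits; shorter numbers simply end up with leading zeros):

#include <iostream>

int main() {
    int intVal = 103000648;  // the value used in the answer above
    int intAr[9];

    // Extract digits right to left, storing from the end of the array,
    // so the most significant digit lands at index 0.
    for (int i = 8; i >= 0; i--) {
        intAr[i] = intVal % 10;
        intVal /= 10;
    }

    for (int i = 0; i < 9; i++)
        std::cout << intAr[i] << " ";  // 1 0 3 0 0 0 6 4 8
    std::cout << "\n";
    return 0;
}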

Solving a simple sparse linear system of equation using csparse: cs_cholsol

I am using Microsoft Visual Studio 2008 on Windows 7 x64. I am trying to solve the following linear system Ax=b using csparse, where A is positive definite.
| 1 0 0 1 |
A = | 0 3 1 0 |
| 0 1 2 1 |
| 1 0 1 2 |
| 1 |
b = | 1 |
| 1 |
| 1 |
I used the following code:
int Ncols = 4, Nrows = 4, nnz = 10;
int cols[] = {0, 3, 1, 2, 1, 2, 3, 0, 2, 3};
int rows[] = {0, 0, 1, 1, 2, 2, 2, 3, 3, 3};
double vals[] = {1, 1, 3, 1, 1, 2, 1, 1, 1, 2};
cs *Operator = cs_spalloc(Ncols, Nrows, nnz, 1, 1);
int j;
for (j = 0; j < nnz; j++)
{
    Operator->i[j] = rows[j];
    Operator->p[j] = cols[j];
    Operator->x[j] = vals[j];
    Operator->nz++;
}
for (j = 0; j < nnz; j++)
    cout << Operator->i[j] << " " << Operator->p[j] << " " << Operator->x[j] << endl;
Operator = cs_compress(Operator);
for (j = 0; j < nnz; j++)
    cout << Operator->i[j] << " " << Operator->p[j] << " " << Operator->x[j] << endl;
// Right hand side
double b[] = {1, 1, 1, 1};
// Solving Ax = b
int status = cs_cholsol(0, Operator, &b[0]); // status = 0 means error.
To make sure that I had created the sparse matrix correctly, I printed the row and column indices as well as the values to the console, before and after cs_compress. The following is the output:
Before:
0 0 1
0 3 1
1 1 3
1 2 1
2 1 1
2 2 2
2 3 1
3 0 1
3 2 1
3 3 2
After:
0 0 1
3 2 1
1 4 3
2 7 1
1 10 1
2 -6076574517017313795 2
3 -6076574518398440533 1
0 -76843842582893653 1
2 0 1
3 0 2
Because of the trash values that can be observed after calling cs_compress, the solution of Ax=b does not match the one I calculated with MATLAB. MATLAB gives the following solution:
| 2.0000 |
x = | 0.0000 |
| 1.0000 |
|-1.0000 |
Interestingly, I don't have this problem with the following code, which solves Ax=b where A is a 3x3 identity matrix.
int Ncols = 3, Nrows = 3, nnz = Nrows;
cs *Operator = cs_spalloc(Ncols, Nrows, nnz, 1, 1);
int j;
for (j = 0; j < nnz; j++) {
    Operator->i[j] = j;
    Operator->p[j] = j;
    Operator->x[j] = 1.0;
    Operator->nz++;
}
Operator = cs_compress(Operator);
double b[] = {1, 2, 3};
int status = cs_cholsol(0, Operator, &b[0]); // status = 1 means no error.
Could someone please help me fix the problem that I have with cs_compress?
Having never worked with CSparse before, I skimmed the source code.
When you call cs_spalloc() to create Operator, you are creating a triplet matrix (indicated by setting the last parameter to 1). But after the call to cs_compress(), the result is no longer a triplet (you can detect this by checking the result and seeing that Operator->nz is -1 after compression, which marks compressed-column format). So it is an error to traverse the compressed matrix as if it were still a triplet.
You can use the cs_print() API to print your sparse matrix.
As an aside, your code leaks memory: the compressed matrix is a new allocation, and the original uncompressed matrix is not freed by cs_compress().
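To make that concrete, here is a sketch of the corrected flow (assuming the standard CSparse helpers cs_entry(), cs_print(), and cs_spfree()). cs_entry() appends a triplet entry and maintains nz for you, so there is no need to poke at the struct fields directly:

#include <iostream>
#include "cs.h"

int main() {
    int rows[] = {0, 0, 1, 1, 2, 2, 2, 3, 3, 3};
    int cols[] = {0, 3, 1, 2, 1, 2, 3, 0, 2, 3};
    double vals[] = {1, 1, 3, 1, 1, 2, 1, 1, 1, 2};
    double b[] = {1, 1, 1, 1};

    // Build the triplet form, then compress to column form.
    cs *T = cs_spalloc(4, 4, 10, 1, 1);
    for (int k = 0; k < 10; k++)
        cs_entry(T, rows[k], cols[k], vals[k]);
    cs *A = cs_compress(T);
    cs_spfree(T);      // free the triplet; avoids the leak noted above

    cs_print(A, 0);    // the safe way to inspect a compressed matrix

    // cs_cholsol solves in place and returns 1 on success.
    if (cs_cholsol(0, A, b))
        std::cout << b[0] << " " << b[1] << " " << b[2] << " " << b[3]
                  << "\n";  // expect 2 0 1 -1, matching the MATLAB result
    cs_spfree(A);
    return 0;
}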