How do I add a header for rows in a 2D array? - C++

I need to output a 2D array that has a label header for both the columns and the rows.
The columns are easy: I just output a string above the table. But I cannot figure out how to add the word ROWS in vertical letters at the beginning of the table.
It has to look like this:
        C o l u m n s
    |  1  2  3  4  5  6
------------------------
  1 |  2  3  4  5  6  7
R 2 |  3  4  5  6  7  8
O 3 |  4  5  6  7  8  9
W 4 |  5  6  7  8  9 10
S 5 |  6  7  8  9 10 11
  6 |  7  8  9 10 11 12
I cannot figure out how to produce the ROWS label.
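One way to do it (a minimal sketch, assuming a 6x6 int table filled like the example, where each entry is row + column): store the side label as a string with one character per row, " ROWS " here, and print that character at the start of each row.

#include <iostream>
#include <iomanip>
#include <string>

int main()
{
    const int SIZE = 6;                     // assumed table size, as in the example
    int table[SIZE][SIZE];

    // fill with the sample values: entry = row + column (both 1-based)
    for (int r = 0; r < SIZE; ++r)
        for (int c = 0; c < SIZE; ++c)
            table[r][c] = (r + 1) + (c + 1);

    const std::string sideLabel = " ROWS "; // one character per row, padded with spaces

    std::cout << "        C o l u m n s\n";
    std::cout << "    |";
    for (int c = 0; c < SIZE; ++c)
        std::cout << std::setw(3) << c + 1;
    std::cout << '\n' << std::string(5 + 3 * SIZE, '-') << '\n';

    for (int r = 0; r < SIZE; ++r)
    {
        // the vertical ROWS label: one letter (or space) in front of each row
        std::cout << sideLabel[r] << ' ' << r + 1 << " |";
        for (int c = 0; c < SIZE; ++c)
            std::cout << std::setw(3) << table[r][c];
        std::cout << '\n';
    }
    return 0;
}

For a larger table you would need a label string at least as long as the number of rows (or pad and center it yourself).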

Related

Changing observation from ID to ID-year pair

I have this data
ID  A1  A2  B1  B2  C
 1   0   1   2   3  4
 2   5   6   7   8  9
Here, A1 means A at year 1, A2 means A at year 2. Same goes for B.
I want to make a dataset where each row is an ID-year pair, not just an ID.
Like this:
ID  year  A  B  C
 1     1  0  2  4
 1     2  1  3  4
 2     1  5  7  9
 2     2  6  8  9
Luckily, A and B have the same number of years.
Honestly, I am stuck; all I could come up with was to create the desired data structure first and copy and paste things manually. But the data is too big to do that by hand.
How should I go about it?
EDIT:
The names of the variables are actually more like the ones below:
ID  A00  A01  B00  B01  C
 1    0    1    2    3  4
 2    5    6    7    8  9
See help for the reshape command. It's a reshape long problem.
clear
input ID A1 A2 B1 B2 C
1 0 1 2 3 4
2 5 6 7 8 9
end
reshape long A B , i(ID) j(Year)
list, sepby(ID)
     +-----------------------+
     | ID   Year   A   B   C |
     |-----------------------|
  1. |  1      1   0   2   4 |
  2. |  1      2   1   3   4 |
     |-----------------------|
  3. |  2      1   5   7   9 |
  4. |  2      2   6   8   9 |
     +-----------------------+

Stripping off the zeros from a list containing pandas DataFrames

I have a list:
[0, 0, 0, DataFrame1, 0, 0, DataFrame2, 0, 0, DataFrame3]
where each DataFrame is a pandas DataFrame.
What I am trying to do is strip out the zeros (which are integers). Is there any way I can do this without using a loop? I tried to use the set function, but it does not work with pandas DataFrames.
The result should resemble this:
[DataFrame1, DataFrame2, DataFrame3]
As @Zero suggested in a comment, you can filter the list with an isinstance check. First build an example list:
import numpy as np
import pandas as pd

# build a list that mixes zeros and DataFrames
l = []
df = pd.DataFrame(np.random.randint(0, 10, (5, 5)))
l.append(0)
l.append(0)
l.append(df)
l.append(0)
l.append(0)
l.append(df)
print(l)
[0, 0,    0  1  2  3  4
0  5  4  9  6  7
1  9  9  2  7  3
2  4  9  4  8  3
3  4  6  2  5  5
4  8  1  2  1  8, 0, 0,    0  1  2  3  4
0  5  4  9  6  7
1  9  9  2  7  3
2  4  9  4  8  3
3  4  6  2  5  5
4  8  1  2  1  8]
Then keep only the DataFrames with a list comprehension:
[x for x in l if isinstance(x, pd.DataFrame)]
Output:
[   0  1  2  3  4
0  5  4  9  6  7
1  9  9  2  7  3
2  4  9  4  8  3
3  4  6  2  5  5
4  8  1  2  1  8,    0  1  2  3  4
0  5  4  9  6  7
1  9  9  2  7  3
2  4  9  4  8  3
3  4  6  2  5  5
4  8  1  2  1  8]

Sort rows in a dataframe based on highest values in the whole dataframe

I have a dataframe with probability values in three category columns [A, B, C]. I want to sort the rows of this dataframe so that the row with the highest probability value in the whole dataframe (irrespective of the column) comes first, followed by the row with the second highest probability value, and so on.
Can someone help me with this?
In [15]: df = pd.DataFrame(np.random.randint(1, 10, (10,3)))
In [16]: df
Out[16]:
   0  1  2
0  9  2  8
1  6  6  9
2  2  4  9
3  2  1  2
4  2  5  3
5  3  4  9
6  8  7  3
7  6  4  1
8  3  3  8
9  7  2  7
In [17]: df.iloc[df.apply(np.max, axis=1).sort_values(ascending=False).index]
Out[17]:
   0  1  2
5  3  4  9
2  2  4  9
1  6  6  9
0  9  2  8
8  3  3  8
6  8  7  3
9  7  2  7
7  6  4  1
4  2  5  3
3  2  1  2

Mapping the subdivision of a matrix to a vector

I am trying to map the subdivision of a matrix to an array.
By subdivision of a matrix I mean a box like the 3x3 boxes in a 9x9 sudoku matrix.
To do so I use:
grid[x][y] = box[x/3 + (y/3)*3];
But it does not work. Any suggestion for a solution, and an explanation of why it does not work?
EDIT:
I know how to map a vector to a matrix.
I want to map a vector to a portion of a square matrix, just like in the sudoku game.
EDIT2:
Basically, what I want is to be able to map a tuple to a box number; for example, with 3x3 boxes and a 9x9 matrix:
(0,0) => 1
(0,1) => 1
(8,8) => 9
Updated Answer to Edit2:
If you want a mapping like:
1 2 3
4 5 6
7 8 9
then your original code is almost what you want (just add 1):
for (int y = 0; y < 9; ++y)
{
    for (int x = 0; x < 9; ++x)
    {
        int index = x/3 + (y/3) * 3 + 1;
        printf("%d ", index);
    }
    printf("\n");
}
Which outputs:
1 1 1 2 2 2 3 3 3
1 1 1 2 2 2 3 3 3
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
4 4 4 5 5 5 6 6 6
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9
7 7 7 8 8 8 9 9 9
7 7 7 8 8 8 9 9 9
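If the goal is to actually copy each 3x3 box of the grid into its own array (not just compute the box number), the same integer divisions give the box index and the remainders give the position inside the box; the original grid[x][y] = box[x/3 + (y/3)*3] drops that within-box position, which is likely why it did not work. A minimal sketch under that assumption, with box[boxIndex][cellIndex] as the target layout:

#include <cstdio>

int main()
{
    int grid[9][9];
    int box[9][9];                            // box[boxIndex][cellIndex]

    // fill the grid with something recognisable: each cell holds its box number 1..9
    for (int y = 0; y < 9; ++y)
        for (int x = 0; x < 9; ++x)
            grid[x][y] = x / 3 + (y / 3) * 3 + 1;

    for (int y = 0; y < 9; ++y)
    {
        for (int x = 0; x < 9; ++x)
        {
            int boxIndex  = x / 3 + (y / 3) * 3;  // which 3x3 box the cell belongs to
            int cellIndex = x % 3 + (y % 3) * 3;  // where the cell sits inside that box
            box[boxIndex][cellIndex] = grid[x][y];
        }
    }

    // each printed row is one box; every value in box b should be b + 1
    for (int b = 0; b < 9; ++b)
    {
        for (int c = 0; c < 9; ++c)
            printf("%d ", box[b][c]);
        printf("\n");
    }
    return 0;
}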

Can I use lists in R as a proxy for a data frame with columns of unequal length?

My understanding of data frames in R is that they have to be rectangular: it is not possible to have a data frame with unequal column lengths. Can I use lists in R to achieve this? What are the pros and cons of such an approach?
You can use lists to store whatever you want, even data frames or other lists! You can indeed assign vectors of different lengths, or even completely different objects. A list gives you the same functionality as a data frame in that you can index it using the dollar sign:
> fooList <- list(a=1:12, b=1:11, c=1:10)
> fooList$a
[1] 1 2 3 4 5 6 7 8 9 10 11 12
> fooDF <- data.frame(a=1:10, b=1:10, c=1:10)
> fooDF$a
[1] 1 2 3 4 5 6 7 8 9 10
But numeric indexing is different:
> fooList[[1]]
[1] 1 2 3 4 5 6 7 8 9 10 11 12
> fooDF[,1]
[1] 1 2 3 4 5 6 7 8 9 10
as well as the structure and printing method:
> fooList
$a
[1] 1 2 3 4 5 6 7 8 9 10 11 12
$b
[1] 1 2 3 4 5 6 7 8 9 10 11
$c
[1] 1 2 3 4 5 6 7 8 9 10
> fooDF
    a  b  c
1   1  1  1
2   2  2  2
3   3  3  3
4   4  4  4
5   5  5  5
6   6  6  6
7   7  7  7
8   8  8  8
9   9  9  9
10 10 10 10
Simply put, a data frame is like a matrix, while a list is more of a container.
A list is meant to keep all sorts of things together, whereas a data frame is the usual data format (a subject/case in each row and a variable in each column). It is used in a lot of analyses, lets you index a subject's scores, can be transformed more easily, and so on.
However, if you have columns of unequal length, then I doubt each row represents a subject/case in your data. In that case you probably don't need much of the functionality of data frames.
If each row does represent a subject/case, then you should use NA for the missing values and use a data frame.