I'm trying to write a function that, when given 2 arguments (the 2 leftmost columns), produces the third column as a result:
0 0 0
1 0 3
2 0 2
3 0 1
0 1 1
1 1 0
2 1 3
3 1 2
0 2 2
1 2 1
2 2 0
3 2 3
0 3 3
1 3 2
2 3 1
3 3 0
I know there will be a modulus involved but I can't quite figure it out.
The underlying problem: 4 people are sitting at a table; given a person and a target, from the person's perspective, which seat is the target sitting in?
Thanks
If a and b are the positions of the two persons, their "distance" is:
(4+b-a) % 4
This also shows that the fourth block in your example is wrong.
Assuming that last block of numbers is wrong, I think you're looking for (4 + b - a) % 4 == c (for columns a, b, c).
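For reference, a minimal Python sketch (just an illustration; the question isn't language-specific) that reproduces the table from the formula:

def seat_offset(a, b, n=4):
    # seat of target b, counted from person a's perspective around a table of n seats
    return (n + b - a) % n

for b in range(4):
    for a in range(4):
        print(a, b, seat_offset(a, b))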
I couldn't find how the domain map maps the indices of a multi-dimensional domain onto the multi-dimensional target locales.
1.) How is the (one-dimensional) array of target locales arranged in a multi-dimensional fashion matching the distribution's dimensionality, so that indices can be mapped to it?
2.) The documentation states that for the multi-dimensional case, the computation should be done in every dimension. Take the domain {1..8, 1..8} ==> dom,
and assume dom is block-distributed over 6 target locales.
Steps in the mapping:
1. For the 1st dimension (1..8), do the computation:
if idx satisfies low <= idx <= high, then
floor((idx - low) * N / (high - low + 1)) gives me an index, say i.
2. Repeat the same for the 2nd dimension, which gives me an index, say j.
Now I have a tuple (i, j).
How is this mapped to the two-dimensional target locales array?
What does the domain map do to turn the 1-D target locales array into the distribution's dimensionality?
Is it something like a reshape function?
Please let me know if this lacks sufficient information.
The specific details about how a domain's indices are mapped to a program's locales are not defined by the Chapel language itself, but rather by the implementation of the domain map used to declare the domain. In the comments under your question, you mention that you're referring to the Block distribution, so I'll focus on that in my answer (documented here), but note that any other domain map could take a different approach.
The Block distribution takes an optional targetLocales argument which permits you to specify the set of locales to be targeted, as well as their virtual topology. For instance, if I declare and populate a few arrays of locales:
var grid1: [1..3, 1..2] locale,  // a 3 x 2 array of locales
    grid2: [1..2, 1..3] locale;  // a 2 x 3 array of locales

for i in 1..3 {
  for j in 1..2 {
    grid1[i,j] = Locales[(2*(i-1) + j-1)%numLocales];
    grid2[j,i] = Locales[(3*(j-1) + i-1)%numLocales];
  }
}
I can then pass them in as the targetLocales arguments to a few instances of a Block-distributed domain:
use BlockDist;

config const n = 8;

const D = {1..n, 1..n},
      D1 = D dmapped Block(D, targetLocales=grid1),
      D2 = D dmapped Block(D, targetLocales=grid2);
Each domain will distribute its n rows to the first dimension of its targetLocales grid and its n columns to the second dimension. We can see the results of this distribution by declaring arrays of integers over these domains and assigning them in parallel to make each element store its owning locale's ID, as follows:
var A1: [D1] int,
    A2: [D2] int;

forall a in A1 do
  a = here.id;
forall a in A2 do
  a = here.id;

writeln(A1, "\n");
writeln(A2, "\n");
When running on six or more locales (./a.out -nl 6), the output is as follows, revealing the underlying grid structure:
0 0 0 0 1 1 1 1
0 0 0 0 1 1 1 1
0 0 0 0 1 1 1 1
2 2 2 2 3 3 3 3
2 2 2 2 3 3 3 3
2 2 2 2 3 3 3 3
4 4 4 4 5 5 5 5
4 4 4 4 5 5 5 5
0 0 0 1 1 1 2 2
0 0 0 1 1 1 2 2
0 0 0 1 1 1 2 2
0 0 0 1 1 1 2 2
3 3 3 4 4 4 5 5
3 3 3 4 4 4 5 5
3 3 3 4 4 4 5 5
3 3 3 4 4 4 5 5
For a 1-dimensional targetLocales array, the documentation says:
If the rank of targetLocales is 1, a greedy heuristic is used to reshape the array of target locales so that it matches the rank of the distribution and each dimension contains an approximately equal number of indices.
For example, if we distribute to a 1-dimensional 4-element array of locales:
var grid3: [1..4] locale;

for i in 1..4 do
  grid3[i] = Locales[(i-1)%numLocales];

var D3 = D dmapped Block(D, targetLocales=grid3);
var A3: [D3] int;

forall a in A3 do
  a = here.id;

writeln(A3);
we can see that the target locales form a square, as expected:
0 0 0 0 1 1 1 1
0 0 0 0 1 1 1 1
0 0 0 0 1 1 1 1
0 0 0 0 1 1 1 1
2 2 2 2 3 3 3 3
2 2 2 2 3 3 3 3
2 2 2 2 3 3 3 3
2 2 2 2 3 3 3 3
The documentation is intentionally vague about how a 1D targetLocales argument will be reshaped if it's not a perfect square, but we can find out what's done in practice by using the targetLocales() query on the domain. Also, note that if no targetLocales array is supplied, the entire Locales array (which is 1D) is used by default. As an illustration of both these things, if the following code is run on six locales:
var D0 = D dmapped Block(D);
writeln(D0.targetLocales());
we get:
LOCALE0 LOCALE1
LOCALE2 LOCALE3
LOCALE4 LOCALE5
illustrating that the current heuristic matches our explicit grid1 declaration above.
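To make the per-dimension computation from the original question concrete, here is a rough Python sketch (my own illustration, not Chapel's actual implementation) that applies the floor((idx - low) * N / (high - low + 1)) formula independently in each dimension and uses the resulting (i, j) tuple as an ordinary 2-D index into the target-locale grid; with a 3 x 2 grid numbered row-major like grid1, it reproduces the A1 output shown above:

def block_dim(idx, low, high, n_locs):
    # per-dimension formula from the question: floor((idx - low) * N / (high - low + 1))
    return (idx - low) * n_locs // (high - low + 1)

rows = (1, 8)        # first dimension of the {1..8, 1..8} domain
cols = (1, 8)        # second dimension
grid_shape = (3, 2)  # a 3 x 2 target-locale grid, like grid1 above

for r in range(rows[0], rows[1] + 1):
    line = []
    for c in range(cols[0], cols[1] + 1):
        i = block_dim(r, rows[0], rows[1], grid_shape[0])   # locale row
        j = block_dim(c, cols[0], cols[1], grid_shape[1])   # locale column
        line.append(str(i * grid_shape[1] + j))             # row-major locale ID, matching grid1
    print(" ".join(line))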
For each row of data in a DataFrame I would like to compute the number of unique values in columns A and B for that particular row and a reference row within the group identified by another column ID. Here is a toy dataset:
import pandas as pd

d = {'ID' : pd.Series([1,1,1,2,2,2,2,3,3])
    ,'A' : pd.Series([1,2,3,4,5,6,7,8,9])
    ,'B' : pd.Series([1,2,3,4,11,12,13,14,15])
    ,'REFERENCE' : pd.Series([1,0,0,0,0,1,0,1,0])}
data = pd.DataFrame(d)
The data looks like this:
In [3]: data
Out[3]:
A B ID REFERENCE
0 1 1 1 1
1 2 2 1 0
2 3 3 1 0
3 4 4 2 0
4 5 11 2 0
5 6 12 2 1
6 7 13 2 0
7 8 14 3 1
8 9 15 3 0
Now, within each group defined by ID, I want to compare each record with the reference record and compute the number of unique A and B values for the combination. For instance, I can compute the value for data record 3 by taking len(set([4,4,6,12])) (its own A and B, 4 and 4, together with the reference record's A and B, 6 and 12), which gives 3. The result should look like this:
A B ID REFERENCE CARDINALITY
0 1 1 1 1 1
1 2 2 1 0 2
2 3 3 1 0 2
3 4 4 2 0 3
4 5 11 2 0 4
5 6 12 2 1 2
6 7 13 2 0 4
7 8 14 3 1 2
8 9 15 3 0 3
The only way I can think of implementing this is with for loops that iterate over each grouped object and then over each record within it, computing it against the reference record. This is not Pythonic and very slow. Can anyone please suggest a vectorized approach to achieve the same?
I would create a new column where I combine A and B into a tuple, then group by ID. Then use groups = dict(list(groupby)) and get the length of each frame using len().
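Another option, sketched below under the assumption that every ID group has exactly one reference row: pull out the reference rows, merge them back onto every row of the matching group, and count distinct values per row (the apply step is still per-row, but there are no explicit Python loops over the groups):

import pandas as pd

d = {'ID': pd.Series([1, 1, 1, 2, 2, 2, 2, 3, 3]),
     'A': pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9]),
     'B': pd.Series([1, 2, 3, 4, 11, 12, 13, 14, 15]),
     'REFERENCE': pd.Series([1, 0, 0, 0, 0, 1, 0, 1, 0])}
data = pd.DataFrame(d)

# Each ID's reference row, renamed so it can sit next to the original columns.
ref = (data.loc[data['REFERENCE'] == 1, ['ID', 'A', 'B']]
           .rename(columns={'A': 'A_REF', 'B': 'B_REF'}))

# Broadcast the reference values onto every row of the matching ID group.
merged = data.merge(ref, on='ID', how='left')

# Count distinct values among the row's A, B and the reference row's A, B.
data['CARDINALITY'] = merged[['A', 'B', 'A_REF', 'B_REF']].apply(lambda r: r.nunique(), axis=1)
print(data)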
I would like to create a dummy variable that looks at the variable "count" and labels rows as 1 starting from the last row of each id. As an example, ID 1 has a count of 3, so the last three rows of this id follow the pattern 0,0,1,1,1. Similarly, ID 4, which has a count of 1, will have 0,0,0,1. The IDs have different numbers of rows. The variable "wish" shows what I want to obtain as the final output.
input byte id count wish str9 date
1 3 0 22sep2006
1 3 0 23sep2006
1 3 1 24sep2006
1 3 1 25sep2006
1 3 1 26sep2006
2 4 1 22mar2004
2 4 1 23mar2004
2 4 1 24mar2004
2 4 1 25mar2004
3 2 0 28jan2003
3 2 0 29jan2003
3 2 1 30jan2003
3 2 1 31jan2003
4 1 0 02dec1993
4 1 0 03dec1993
4 1 0 04dec1993
4 1 1 05dec1993
5 1 0 08feb2005
5 1 0 09feb2005
5 1 0 10feb2005
5 1 1 11feb2005
6 3 0 15jan1999
6 3 0 16jan1999
6 3 1 17jan1999
6 3 1 18jan1999
6 3 1 19jan1999
end
For future questions, you should provide your failed attempts. This shows that you have done your part, namely, researching your problem.
One way is:
clear
set more off
*----- example data -----
input ///
byte id count wish str9 date
1 3 0 22sep2006
1 3 0 23sep2006
1 3 1 24sep2006
1 3 1 25sep2006
1 3 1 26sep2006
2 4 1 22mar2004
2 4 1 23mar2004
2 4 1 24mar2004
2 4 1 25mar2004
3 2 0 28jan2003
3 2 0 29jan2003
3 2 1 30jan2003
3 2 1 31jan2003
4 1 0 02dec1993
4 1 0 03dec1993
4 1 0 04dec1993
4 1 1 05dec1993
5 1 0 08feb2005
5 1 0 09feb2005
5 1 0 10feb2005
5 1 1 11feb2005
6 3 0 15jan1999
6 3 0 16jan1999
6 3 1 17jan1999
6 3 1 18jan1999
6 3 1 19jan1999
end
list, sepby(id)
*----- what you want -----
bysort id: gen wish2 = _n > (_N - count)
list, sepby(id)
I assume you already sorted your date variable within ids.
One way to accomplish this would be to use within-group row numbers using 'bysort'-type logic:
***Create variable of within-group row numbers.
bysort id: gen obsnum = _n
***Calculate total number of rows within each group.
by id: egen max_obsnum = max(obsnum)
***Subtract the count variable from the group row count.
***This is the number of rows where we want the dummy to equal zero.
gen max_obsnum_less_count = max_obsnum - count
***Create the dummy to equal one when the row number is
***greater than this last variable.
gen dummy = (obsnum > max_obsnum_less_count)
***Clean up.
drop obsnum max_obsnum max_obsnum_less_count
Given an array of size N whose elements denote the capacities of containers, in how many ways can M identical objects be distributed so that each container is filled at the end?
For example,
for arr = {2,1,2,1}, N = 4, and M = 10, there turn out to be 35 ways.
Please help me out with this question.
First calculate the sum of the container sizes. In your case 2+1+2+1 = 6; let this be P. Find the number of ways of choosing P objects from M. There are M choices for the first object, M-1 for the second, M-2 for the third, etc. This gives us M * (M-1) * ... * (M-P+1), or M! / (M-P)!. This will give us more states than you want, for example:
1 2 | 3 | 4 5 | 6
2 1 | 3 | 4 5 | 6
There are q! ways of arranging q objects in q slots, so we need to divide by factorial(arr[0]), factorial(arr[1]), etc. In this case divide by 2! * 1! * 2! * 1! = 4.
I'm getting a much larger number than 35: 10! / 4! = 151200, and dividing that by 4 gives 37800, so I'm not sure if I have understood your question correctly.
Ah, so looking at the problem, you need to find N integers n1, n2, ..., nN so that n1 + n2 + ... + nN = M and n1 >= arr[1], n2 >= arr[2], and so on.
That looks quite simple. Let P be as above. Take the first P objects and give each container its minimum number, arr[1], arr[2], etc. You will have M - P objects left; let this be R.
Essentially the problem simplifies to finding N numbers >= 0 which sum to R. This is a classic problem. As it's a challenge I won't do the answer for you, but if we break the N=4, R=4 case down you may see the pattern:
4 0 0 0 - 1 case starting with 4
3 1 0 0 - 3 cases starting with 3
3 0 1 0
3 0 0 1
2 2 0 0 - 6 cases
2 1 1 0
2 1 0 1
2 0 2 0
2 0 1 1
2 0 0 2
1 3 0 0 - 10 cases
1 2 1 0
1 2 0 1
1 1 2 0
1 1 1 1
1 1 0 2
1 0 3 0
1 0 2 1
1 0 1 2
1 0 0 3
0 4 0 0 - 15 cases
0 3 1 0
0 3 0 1
0 2 2 0
0 2 1 1
0 2 0 2
0 1 3 0
0 1 2 1
0 1 1 2
0 1 0 3
0 0 4 0
0 0 3 1
0 0 2 2
0 0 1 3
0 0 0 4
You should recognise the numbers 1, 3, 6, 10, 15.
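To spell the counting recipe out in code, here is a small Python sketch of the approach described above (sum of capacities P, remainder R = M - P, then the standard stars-and-bars count of ways to split R over the N containers):

from math import comb

def count_ways(arr, m):
    # Give each container its minimum first, then distribute the remaining
    # r = m - sum(arr) objects freely over the n containers (stars and bars).
    n = len(arr)
    r = m - sum(arr)
    if r < 0:
        return 0  # not enough objects to fill every container
    return comb(r + n - 1, n - 1)

print(count_ways([2, 1, 2, 1], 10))  # 35, matching the example in the question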
Sorry, I probably have a simple question.
I have an SFrame that looks like this:
A B C
0 1 2
0 2 3
1 2 3
1 3 4
2 3 1
2 3 3
. . .
I also have another SFrame, which looks like this:
A B C
0 1 4
0 2 5
I want to replace the rows of the first SFrame that have the same A & B values, but with the new C values:
A B C
0 1 4
0 2 5
1 2 3
1 3 4
2 3 1
2 3 3
. . .
It could be all the columns in the first SFrame, but also just one column (an SArray).
I tried it with the following:
sfr['C'][sfr['A']==0] = sfr2['C']
or just
sfr[sfr['A']==0] = sfr2
but got the following error message:
TypeError: 'SArray' object does not support item assignment
Anyway, when I replace the SArray C with one of the same length, this solution works... The problem is the different lengths of the SFrames...
For the moment, I have found myself a simple solution.
I create a list of all the values I want to end up with in the first SFrame, then convert this list to an SArray and add it as a new column (the number of columns is not important for me)...
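For reference, a rough sketch of that list-based workaround (assuming the turicreate package; the older graphlab package exposes the same SFrame/SArray API, and I'm assuming that iterating an SFrame yields one dictionary per row):

import turicreate as tc

sfr = tc.SFrame({'A': [0, 0, 1, 1, 2, 2],
                 'B': [1, 2, 2, 3, 3, 3],
                 'C': [2, 3, 3, 4, 1, 3]})
sfr2 = tc.SFrame({'A': [0, 0], 'B': [1, 2], 'C': [4, 5]})

# Lookup of replacement C values keyed by (A, B).
replacements = {(row['A'], row['B']): row['C'] for row in sfr2}

# Full-length list of C values: the new value where (A, B) matches, the old value otherwise.
new_c = [replacements.get((row['A'], row['B']), row['C']) for row in sfr]

# Assigning a full-length list/SArray replaces the whole column, sidestepping item assignment.
sfr['C'] = tc.SArray(new_c)
print(sfr)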