I'm sure my browser is not broken, but OpenGL's reference documentation uses a very strange, undocumented matrix representation that I cannot understand.
For example, this page: https://www.opengl.org/sdk/docs/man2/xhtml/glFrustum.xml
Description
glFrustum describes a perspective matrix that produces a perspective projection. The current matrix (see glMatrixMode) is multiplied by this matrix and the result replaces the current matrix, as if glMultMatrix were called with the following matrix as its argument:
2 nearVal right - left 0 A 0 0 2 nearVal top - bottom B 0 0 0 C D 0 0 -1 0
A = right + left right - left
B = top + bottom top - bottom
C = - farVal + nearVal farVal - nearVal
D = - 2 farVal nearVal farVal - nearVal
So I cannot make sense of the "following matrix" they mention. It doesn't even have 16 values if I read it as a linear array.
This one is even harder: https://www.opengl.org/sdk/docs/man2/xhtml/glMultMatrix.xml
Examples
If the current matrix is C and the coordinates to be transformed are v = v 0 v 1 v 2 v 3 , then the current transformation is C × v , or
c 0 c 4 c 8 c 12 c 1 c 5 c 9 c 13 c 2 c 6 c 10 c 14 c 3 c 7 c 11 c 15 × v 0 v 1 v 2 v 3
Calling glMultMatrix with an argument of m = m 0 m 1 ... m 15 replaces the current transformation with C × M × v , or
c 0 c 4 c 8 c 12 c 1 c 5 c 9 c 13 c 2 c 6 c 10 c 14 c 3 c 7 c 11 c 15 × m 0 m 4 m 8 m 12 m 1 m 5 m 9 m 13 m 2 m 6 m 10 m 14 m 3 m 7 m 11 m 15 × v 0 v 1 v 2 v 3
Where v is represented as a 4 × 1 matrix.
Does your browser support MathML? Without it, the page will show this unformatted mess. This is how it looks in a browser supporting MathML:
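For reference, the matrix the glFrustum page is trying to display, written out here in LaTeX:
\begin{pmatrix}
\frac{2\,\mathrm{nearVal}}{\mathrm{right}-\mathrm{left}} & 0 & A & 0 \\
0 & \frac{2\,\mathrm{nearVal}}{\mathrm{top}-\mathrm{bottom}} & B & 0 \\
0 & 0 & C & D \\
0 & 0 & -1 & 0
\end{pmatrix}
\qquad
A = \frac{\mathrm{right}+\mathrm{left}}{\mathrm{right}-\mathrm{left}},\quad
B = \frac{\mathrm{top}+\mathrm{bottom}}{\mathrm{top}-\mathrm{bottom}},\quad
C = -\frac{\mathrm{farVal}+\mathrm{nearVal}}{\mathrm{farVal}-\mathrm{nearVal}},\quad
D = -\frac{2\,\mathrm{farVal}\,\mathrm{nearVal}}{\mathrm{farVal}-\mathrm{nearVal}}
Note also that OpenGL reads the 16 values of m in column-major order, which is why the glMultMatrix page shows m0 m4 m8 m12 across the first row.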
The problem in your case is two-fold:
1: You are using a browser that does not support MathML, which the OpenGL 2.x reference pages require.
2: The OpenGL 2.x documentation has not been updated with MathJax, which renders MathML even in browsers without native support. The modern OpenGL 4.x documentation uses MathJax, but it only covers the core profile stuff.
So there's no real solution for you, outside of getting a browser with MathML support. The Khronos Group has made it abundantly clear that they're not even going to fix wrong information in the man2 pages, let alone upgrade them with MathJax.
I am trying to solve Euler 18 in Dyalog APL, and I am not able to understand why my solution does not work.
The problem is as follows:
By starting at the top of the triangle below and moving to adjacent
numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Taking the example, which I represent this way:
d ← (3 0 0 0) (7 4 0 0) (2 4 6 0) (8 5 9 3)
I am trying to solve it this way:
{⍵+((2⌈/⍺)),0}/⌽d
Which gives me this array: 22 19 15 0, where the largest number is 22. That is not the right answer to the problem, which would be 23.
I am getting this behavior (left to right for ease of reading):
(2⌈/(8 5 9 3),0)+(2⌈/(2 4 6 0),0)+(2⌈/(7 4 0 0),0)+(2⌈/(3 0 0 0),0)
Which gives me the same result as the function.
What I would expect is this behavior (where each statement is substituted directly in the next line):
(2⌈/(8 5 9 3)),0
(2 4 6 0)+8 9 9 0
(2⌈/(10 13 15 0)),0
(7 4 0 0)+13 15 15 0
(2⌈/(20 19 15 0)),0
(3 0 0 0) + 20 19 15 0
23 19 15 0
I am wondering what I am misunderstanding in the APL evaluation process that leads to a different result from the one I am expecting.
Thank you!
/ works in the reverse way to what you expected - it evaluates through the array right-to-left.
F/a b c d is ⊂a F b F c F d, or, with parentheses, ⊂(a F (b F (c F d))).
After removing the ⌽ and swapping ⍺ and ⍵, you get {⍺+(2⌈/⍵),0}/d, which gives the result you want.
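For readers more comfortable with a procedural rendering, here is a rough Python sketch of the same bottom-up fold (not APL, and using ragged rows instead of zero padding, purely for illustration):
from functools import reduce

rows = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]

def step(lower, upper):
    # pairwise maxima of the accumulated lower row (APL's 2⌈/),
    # then add the row above
    best = [max(a, b) for a, b in zip(lower, lower[1:])]
    return [u + b for u, b in zip(upper, best)]

# fold from the bottom row upward, mirroring the right-to-left APL reduction
print(reduce(step, reversed(rows)))  # [23]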
I have a data frame in the following form
name v1 v2 v3
x 1 4 7
y 2 5 8
z 3 6 9
I want to multiply each value in the middle two columns by the value in the final column, output would be:
name v1 v2 v3
x 7 28 7
y 16 40 8
z 27 54 9
My current attempt gives an error, "Index object has no attribute apply":
df[df.columns[1:-2]] = df.columns[1:-2].apply(lambda x : (x*df.columns[-1]))
You can use iloc for selecting columns by position together with mul. (The original attempt fails because df.columns[1:-2] is an Index of column labels, not the column data, and an Index has no apply method.)
print (df.iloc[:, 1:-1])
v1 v2
0 1 4
1 2 5
2 3 6
df.iloc[:, 1:-1] = df.iloc[:, 1:-1].mul(df.iloc[:, -1], axis=0)
print (df)
name v1 v2 v3
0 x 7 28 7
1 y 16 40 8
2 z 27 54 9
Solution with selecting columns by names:
df[['v1','v2']] = df[['v1','v2']].mul(df['v3'], axis=0)
print (df)
name v1 v2 v3
0 x 7 28 7
1 y 16 40 8
2 z 27 54 9
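For what it's worth, the same row-wise scaling can also be written with plain numpy broadcasting; a small sketch on the example frame (mul with axis=0 is what keeps the operation aligned on the index):
import pandas as pd

df = pd.DataFrame({'name': ['x', 'y', 'z'],
                   'v1': [1, 2, 3],
                   'v2': [4, 5, 6],
                   'v3': [7, 8, 9]})

# broadcast the (3, 1) column of v3 values against the (3, 2) block of v1/v2
df[['v1', 'v2']] = df[['v1', 'v2']].values * df[['v3']].values
print(df)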
I have a data structure that consists of a three-level nested dict that keeps counts of occurrences of a three-part object. I'd like to build a DataFrame out of it with a specific shape, but I can't figure out a way to do it that doesn't consume a lot of working memory, because the table is quite large (several GB at full extent).
The basic functionality looks like this:
class SparseCubeTable:
def __init__(self):
self.table = {}
self.dim1 = []
self.dim2 = []
self.dim3 = []
def increment(self, dim1, dim2, dim3):
if dim1 in self.table:
if dim2 in self.table[dim1]:
if dim3 in self.table[dim1][dim2]:
self.table[dim1][dim2][dim3] += 1
else:
self.dim3.append(dim3)
self.table[dim1][dim2][dim3] = 1
else:
self.dim2.append(dim2)
self.dim3.append(dim3)
self.table[dim1][dim2] = {dim3:1}
else:
self.dim1.append(dim1)
self.dim2.append(dim2)
self.dim3.append(dim3)
self.table[dim1] = {dim2:{dim3:1}}
This was constructed to make summing over keys easier, among other things. A SparseCubeTable is used like this:
In [23]: example = SparseCubeTable()
In [24]: example.increment("thing1", "thing2", "thing3")
In [25]: example.increment("thing1", "thing2", "thing3")
In [26]: example.increment("thing4", "thing5", "thing6")
In [27]: example.increment("thing1", "thing3", "thing5")
And you can get the data like this:
In [29]: example.table['thing1']['thing2']['thing3']
Out[29]: 2
The sort of DataFrame I want looks like this:
1 2 3 4
thing1 thing2 thing3 2
thing1 thing3 thing5 1
thing4 thing5 thing6 1
The DataFrame is going to be saved as an HDF5 db with columns 1-3 indexed and statistical transformations on column 4 (that require the whole table be temporarily in memory).
The problem is that the pandas.DataFrame.from_dict function builds a whole other sort of structure with the keys used as row labels, as far as I understand it. However, trying to use from_records forces me to copy out the whole data structure into a list, meaning that I now have double the memory size to worry about.
I tried implementing the solution in:
Create a pandas DataFrame from generator?
but in 0.12.0 what it ends up doing is first building a giant list of strings, which is even worse. I assume writing the structure out to a csv and reading it back in is also going to be terrible on memory.
Is there a better way of doing this? Or should I just try to squeeze memory even further in the SparseCubeTable somehow? It seems so wasteful to have to build an intermediate list data structure to use from_records.
Here is code for a memory-efficient solution.
Create some data that looks like yours: a list of 1000 3-tuples (this assumes pandas is imported as pd, with DataFrame and Index available in the namespace).
In [1]: import random
In [2]: tags = [ 'thing{0}'.format(i) for i in xrange(100) ]
In [3]: data = [ (random.choice(tags),random.choice(tags),random.choice(tags)) for i in range(1000) ]
Our writing function makes sure that the index is globally unique when we write (it's not strictly necessary, but since the index is actually written it is 'nicer'):
In [4]: def write(store,c):
   ...:     df = DataFrame(c, columns=['dim1','dim2','dim3'])
   ...:     try:
   ...:         nrows = store.get_storer('df').nrows   # rows already written
   ...:     except:                                    # first chunk: the 'df' node does not exist yet
   ...:         nrows = 0
   ...:     df.index += nrows                          # keep the written index globally unique
   ...:     store.append('df', df, data_columns=True)
   ...:     return []
   ...:
In [5]: collector = []
In [6]: store = pd.HDFStore('data.h5',mode='w')
Iterate through your data (or a stream, or whatever), and write it in chunks:
In [7]: for i, d in enumerate(data):
   ...:     collector.append(d)
   ...:     if i % 100 == 0 and i:
   ...:         collector = write(store,collector)
   ...:
In [8]: write(store,collector)
Out[8]: []
The store now looks like this:
In [9]: store
Out[9]:
<class 'pandas.io.pytables.HDFStore'>
File path: data.h5
/df frame_table (typ->appendable,nrows->1000,ncols->3,indexers->[index],dc->[dim1,dim2,dim3])
In [10]: store.select('df')
Out[10]:
dim1 dim2 dim3
0 thing28 thing87 thing29
1 thing62 thing70 thing50
2 thing64 thing12 thing98
3 thing33 thing98 thing46
4 thing46 thing5 thing76
5 thing2 thing9 thing21
6 thing1 thing63 thing68
7 thing42 thing30 thing45
8 thing56 thing71 thing77
9 thing99 thing10 thing91
10 thing40 thing9 thing10
11 thing70 thing54 thing59
12 thing94 thing65 thing3
13 thing93 thing24 thing25
14 thing95 thing94 thing86
15 thing41 thing55 thing3
16 thing88 thing10 thing47
17 thing89 thing58 thing33
18 thing16 thing66 thing55
19 thing68 thing20 thing99
20 thing34 thing71 thing28
21 thing67 thing87 thing97
22 thing77 thing74 thing6
23 thing63 thing41 thing30
24 thing14 thing62 thing66
25 thing20 thing36 thing67
26 thing33 thing19 thing58
27 thing0 thing71 thing24
28 thing1 thing48 thing42
29 thing18 thing12 thing4
30 thing85 thing97 thing20
31 thing73 thing71 thing70
32 thing91 thing43 thing48
33 thing45 thing6 thing87
34 thing0 thing28 thing8
35 thing56 thing38 thing61
36 thing39 thing92 thing35
37 thing69 thing26 thing22
38 thing16 thing16 thing79
39 thing4 thing16 thing12
40 thing81 thing79 thing1
41 thing77 thing90 thing83
42 thing53 thing17 thing89
43 thing53 thing15 thing37
44 thing25 thing7 thing20
45 thing44 thing14 thing25
46 thing62 thing84 thing23
47 thing83 thing50 thing60
48 thing68 thing64 thing24
49 thing73 thing53 thing43
50 thing86 thing67 thing31
51 thing75 thing63 thing82
52 thing8 thing10 thing90
53 thing34 thing23 thing12
54 thing66 thing97 thing26
55 thing66 thing53 thing27
56 thing79 thing22 thing37
57 thing43 thing82 thing66
58 thing87 thing53 thing92
59 thing33 thing71 thing97
... ... ...
[1000 rows x 3 columns]
In [11]: store.close()
Then you can do interesting things. If you are not reading the entire set into memory, you may want to chunk this, which is a bit more involved if you are counting things (a chunked version is sketched after the output below).
In [56]: pd.read_hdf('data.h5','df').apply(lambda x: x.value_counts())
Out[56]:
dim1 dim2 dim3
thing0 12 6 8
thing1 14 7 8
thing10 10 10 7
thing11 8 10 14
thing12 11 14 11
thing13 11 12 7
thing14 8 14 3
thing15 12 11 11
thing16 7 10 11
thing17 16 9 13
thing18 13 8 10
thing19 11 7 8
thing2 9 5 17
thing20 6 7 11
thing21 7 8 8
thing22 4 17 14
thing23 14 11 7
thing24 10 5 14
thing25 11 11 12
thing26 13 10 15
thing27 12 15 16
thing28 11 10 8
thing29 7 7 8
thing3 11 14 14
thing30 11 16 8
thing31 7 6 12
thing32 8 12 9
thing33 13 12 12
thing34 12 8 5
thing35 6 10 8
thing36 6 9 13
thing37 8 10 12
thing38 7 10 4
thing39 14 11 7
thing4 9 7 10
thing40 12 8 9
thing41 8 16 11
thing42 9 11 13
thing43 8 6 13
thing44 9 13 11
thing45 7 13 7
thing46 12 8 13
thing47 9 10 9
thing48 8 9 9
thing49 4 8 7
thing5 13 7 7
thing50 14 12 9
thing51 5 7 11
thing52 9 11 12
thing53 9 15 15
thing54 7 9 13
thing55 6 10 10
thing56 12 11 11
thing57 12 9 11
thing58 12 12 10
thing59 6 13 10
thing6 8 5 7
thing60 12 9 6
thing61 5 9 9
thing62 8 10 8
... ... ...
[100 rows x 3 columns]
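As a rough illustration of the chunking mentioned above, you can accumulate counts chunk by chunk instead of reading everything at once. This is only a sketch, assuming the data.h5/df store written earlier and a reasonably recent pandas:
import pandas as pd

totals = None
with pd.HDFStore('data.h5') as store:
    # iterate over the table in chunks instead of loading it whole
    for chunk in store.select('df', chunksize=250):
        counts = chunk.groupby(['dim1', 'dim2', 'dim3']).size()
        totals = counts if totals is None else totals.add(counts, fill_value=0)

print(totals.sort_values(ascending=False).head())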
You can then do a 'groupby' like this:
In [69]: store = pd.HDFStore('data.h5')
In [61]: dim1 = Index(store.select_column('df','dim1').unique())
In [66]: store.close()
In [67]: groups = dim1[0:10]
In [68]: groups
Out[68]: Index([u'thing28', u'thing62', u'thing64', u'thing33', u'thing46', u'thing2', u'thing1', u'thing42', u'thing56', u'thing99'], dtype='object')
In [70]: pd.read_hdf('data.h5','df',where='dim1=groups').apply(lambda x: x.value_counts())
Out[70]:
dim1 dim2 dim3
thing1 14 2 1
thing10 NaN 1 1
thing11 NaN 1 2
thing12 NaN 5 NaN
thing13 NaN 1 NaN
thing14 NaN 1 1
thing15 NaN 1 1
thing16 NaN 1 3
thing17 NaN NaN 2
thing18 NaN 1 1
thing19 NaN 1 2
thing2 9 1 1
thing20 NaN 2 NaN
thing21 NaN NaN 1
thing22 NaN 2 2
thing23 NaN 2 3
thing24 NaN 2 1
thing25 NaN 3 2
thing26 NaN 2 2
thing27 NaN 3 1
thing28 11 NaN NaN
thing29 NaN 1 2
thing30 NaN 2 NaN
thing31 NaN 1 1
thing32 NaN 1 1
thing33 13 1 2
thing34 NaN 1 NaN
thing35 NaN 1 NaN
thing36 NaN 1 1
thing37 NaN 1 2
thing38 NaN 3 NaN
thing39 NaN 3 1
thing4 NaN 2 NaN
thing41 NaN NaN 1
thing42 9 1 1
thing43 NaN NaN 1
thing44 NaN 1 2
thing45 NaN NaN 2
thing46 12 NaN 1
thing47 NaN 1 1
thing48 NaN 1 NaN
thing49 NaN 1 NaN
thing5 NaN 2 2
thing50 NaN NaN 3
thing51 NaN 2 2
thing52 NaN 1 3
thing53 NaN 2 4
thing55 NaN NaN 2
thing56 12 1 1
thing57 NaN NaN 3
thing58 NaN 1 2
thing6 NaN NaN 1
thing60 NaN 1 1
thing61 NaN 1 4
thing62 8 2 1
thing63 NaN 1 1
thing64 15 NaN 1
thing66 NaN 1 2
thing67 NaN 2 NaN
thing68 NaN 1 1
... ... ...
[90 rows x 3 columns]
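Finally, to get the exact table shape from the question (one row per (dim1, dim2, dim3) triple with its count in a fourth column), a groupby over the three data columns does it. A minimal sketch, again assuming the data.h5/df store from above:
import pandas as pd

df = pd.read_hdf('data.h5', 'df')   # or select chunk-wise as sketched earlier

# one row per (dim1, dim2, dim3) triple with its occurrence count,
# matching the four-column table asked for in the question
counts = df.groupby(['dim1', 'dim2', 'dim3']).size().reset_index(name='count')
print(counts.head())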
This one is a bit weird, but I will start at the beginning:
As far as I can gather, there are 3 ways to open an OpenGL window in Haskell: GLUT, GLFW and SDL. I don't want to use GLUT at all, because it forces you to use IORefs and basically work in the IO monad only. So I tried GLFW and made a little thingie on my laptop, which runs Xubuntu with the XFCE desktop.
Now I was happy and copied it to my desktop, a fairly freshly installed standard Ubuntu with Unity, and was amazed to see nothing. The very same GLFW code that worked fine on the laptop was caught in an endless loop before it opened the window.
So then I ported it all to SDL. Same code, same window, and SDL crashes with:
Main.hs: user error (SDL_SetVideoMode
SDL message: Couldn't find matching GLX visual)
I have checked back with SDLgears, which uses the same method to open a window, and it works fine. Same with some other 3D applications, so OpenGL itself is working fine.
What baffles me is that it works under Xubuntu but not under Ubuntu. Am I missing something here? Oh, and if it helps, here is the window-opening function:
runGame w h (Game g) = withInit [InitVideo] $ do
glSetAttribute glRedSize 8
glSetAttribute glGreenSize 8
glSetAttribute glBlueSize 8
glSetAttribute glAlphaSize 8
glSetAttribute glDepthSize 16
glSetAttribute glDoubleBuffer 1
_ <- setVideoMode w h 32 [OpenGL, Resizable]
matrixMode $= Projection
loadIdentity
perspective 45 (fromIntegral w / fromIntegral h) 0.1 10500.0
matrixMode $= Modelview 0
loadIdentity
shadeModel $= Smooth
hint PerspectiveCorrection $= Nicest
depthFunc $= Just Lequal
clearDepth $= 1.0
g
This error message is trying to tell you that your combination of bit depths for the color, depth and alpha buffers (a "GLX visual") is not supported. To see which ones you can use on your system, try running glxinfo.
$ glxinfo
...
65 GLX Visuals
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x023 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 16 16 16 16 0 0 None
0x024 24 tc 0 32 0 r . . 8 8 8 8 . . 0 24 8 16 16 16 16 0 0 None
0x025 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 0 16 16 16 16 0 0 None
0x026 24 tc 0 32 0 r . . 8 8 8 8 . . 0 24 0 16 16 16 16 0 0 None
0x027 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
...
Can someone explain to me what this does?
#define ROUNDUP(n,width) (((n) + (width) - 1) & ~unsigned((width) - 1))
Provided width is a power of 2 (2, 4, 8, 16, 32, etc.), it returns the smallest multiple of width that is greater than or equal to n.
So with width = 16: 5->16, 7->16, 15->16, 16->16, 17->32, 18->32, etc.
EDIT: I started out providing an explanation of why this works as it does, as I sense that's really what the OP wants, but it turned into a rather convoluted story. If the OP is still confused, I'd suggest working through a few simple examples, say width = 16 and n = 15, 16, 17. Remember that & is bitwise AND, ~ is bitwise complement, and use binary representation exclusively as you work through the examples.
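To make that concrete, here is a small Python rendering of the same expression, with the examples above worked out (Python rather than C, purely for illustration):
def roundup(n, width):
    # width must be a power of two: width - 1 is then an all-ones bit mask,
    # so adding width - 1 and clearing the low bits rounds up to a multiple
    return (n + width - 1) & ~(width - 1)

# worked example with width = 16: n = 17 -> 17 + 15 = 32 = 0b100000,
# and & ~15 clears the low four bits, leaving 32
for n in (5, 15, 16, 17):
    print(n, roundup(n, 16))   # 16, 16, 16, 32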
It rounds n up to the next multiple of width - but I think width needs to be a power of 2.
For example width == 8, n = 5:
(5 + 8 - 1) & ~(7)
= 12 & ~7
= 8
So 5 rounds to 8. Anything 1 - 8 rounds to 8. 9 to 16 rounds to 16. Etc. (0 rounds to 0)
It defines a macro called ROUNDUP which takes two parameters, n and width, and returns the value (n + width - 1) & ~unsigned(width - 1).
:)
Try this if you think you know what it does:
std::string s("WTF");
std::complex<double> c(-11,5);
ROUNDUP(s, c);
It won't compile in C because of the unsigned(width - 1) cast - that function-style cast is C++ only. Here is what it does, as long as width is confined to powers of 2 (for any other width, width - 1 is not an all-ones bit mask, so the AND does not clear down to a multiple of width):
n width ROUNDUP(n,width)
----------------
0 4 0
1 4 4
2 4 4
3 4 4
4 4 4
5 4 8
6 4 8
7 4 8
8 4 8
9 4 12
10 4 12
11 4 12
12 4 12
13 4 16
14 4 16
15 4 16
16 4 16
17 4 20
18 4 20
19 4 20
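And just to illustrate the power-of-two caveat mentioned above, a tiny Python check (illustrative only):
# with width = 6, width - 1 = 0b101 is not a contiguous all-ones mask;
# the AND clears bits 0 and 2 of the sum, which does not round to a multiple of 6
print((7 + 6 - 1) & ~(6 - 1))   # 8, not the expected 12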