Get C/C++/LaTeX code from a Jaxpr or JAX XLA Computation object (GLSL)

I am making procedurally generated terrain, for which I used the Classic Perlin Noise given here. Now, to calculate the normal to the terrain, I need the differential of this function, so I rewrote it in Python and used jax.grad to differentiate it. I then created a computation graph as shown here, but it was too complicated to turn into code manually.
The closest solution I have found is the jax2tex library, but it's deprecated and doesn't work anymore.
Since I need it for a compute shader, I can't run it using an XLA runtime in C++; I need to write GLSL code from it.
My question is: is there something like jax2tex that can present the XLA code in an easy-to-understand way, so I can write GLSL code from it?
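One pragmatic workaround (my own suggestion, not something from the question): if the noise function can be re-expressed symbolically in SymPy, then sympy.diff plus sympy.ccode will emit C-syntax source for the derivative, which is close enough to GLSL to adapt by hand. The fade polynomial below is just a stand-in for one small ingredient of Perlin noise:

```python
import sympy as sp

# Hypothetical stand-in: the Perlin fade curve 6t^5 - 15t^4 + 10t^3.
# The full noise function would be built the same way, term by term.
x = sp.symbols('x')
fade = 6*x**5 - 15*x**4 + 10*x**3

dfade = sp.diff(fade, x)   # symbolic derivative
c_src = sp.ccode(dfade)    # C-syntax output, easy to adapt to GLSL
print(c_src)
```

For a full multi-variable noise function the expressions get large, but sympy.cse() can factor out common subexpressions before code generation.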

Related

equivalent of wavedec (matlab function) in opencv

I am trying to rewrite MATLAB code in C++, and I am still blocked on this line:
[c, l]=wavedec(S,4,'Dmey');
Is there something like that in OpenCV?
If someone has an idea about it, please share it. Thanks in advance.
If you are willing to integrate your code with Python, then PyWavelets might be an option.
The function you're looking for is in there: it's called Discrete Meyer (dmey). Not sure what you're planning to do with it; maybe you're processing some images or something, but dmey is OK, just not very widely used. You might want to find some GitHub code and integrate it into whatever you're doing to see if it works first, and based on that you can also change the details of your currently posted function (you might find something more efficient).
1-D wavelet decomposition (wavedec)
In your code, c and l stand for coefficients and levels. You're passing level four with a Dmey wavelet. If you have one-dimensional data, the following map is roughly how your decomposition would look:
There are usually two decomposition models used in wavelets. One is called packet decomposition, which, from an architecture standpoint, is similar to a full binary tree:
The other one, which is the one you're most likely using, is less computationally expensive because it does not decompose both branches of the tree; it only performs the mathematical decomposition on one branch. Maybe these images will shed some light:
[1-D decomposition diagram]
[2-D decomposition diagram]
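To make the [c, l] output concrete, here is a toy one-dimensional decomposition in Python using the Haar wavelet (much simpler than Dmey, but the bookkeeping structure is the same):

```python
import numpy as np

def haar_wavedec(signal, level):
    """Toy 1-D wavelet decomposition with the Haar wavelet (not Dmey),
    returning (c, l) in the same spirit as MATLAB's wavedec:
    c = concatenated coefficients, l = bookkeeping lengths."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(level):
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)  # low-pass branch
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)  # high-pass branch
        details.append(detail)
        a = approx  # only the approximation branch is decomposed further
    c = np.concatenate([a] + details[::-1])
    l = [len(a)] + [len(d) for d in details[::-1]] + [len(signal)]
    return c, l

S = np.arange(16.0)
c, l = haar_wavedec(S, 4)
print(l)  # [1, 1, 2, 4, 8, 16]
```

Note how only one branch (the approximation) is decomposed at each level, matching the non-packet tree described above; the energy of the signal is preserved because the Haar filters are orthonormal.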
Notes:
If you have a working model in MATLAB, you might want to look at C/C++ Code Generation in MATLAB, which will automatically convert MATLAB code to C++.
References:
Images are from Wikipedia or mathworks.com
Wiki
mathworks
Wavelet 2D

Does tensorflow c++ API support automatic differentiation for backpropagation?

Does the TensorFlow C++ API support automatic differentiation to back-propagate the gradient?
If I write a graph in C++ and would like to run it from C++ code (not from Python!), will automatic differentiation work?
Let's suppose every op in the graph has a gradient implementation.
I think the documentation regarding what the TensorFlow C++ API can and can't do is very poor.
Thank you very much for the help
Technically it can, but AFAIK the automatic differentiation is only "configured" in Python. What I mean by this is that, at a lower level, each TensorFlow operation does not declare itself what its gradient is (that is, the corresponding operation that computes its gradient). That is instead declared at the Python level. For example, you can take a look at math_ops.py. You will see that, among other things, there are several functions decorated with @ops.RegisterGradient(...). What this decorator does is add that function to a global registry (in Python) of operations and their gradients. So, for example, optimizer classes are largely implemented in Python, since they make use of this registry to build the backpropagation computation (as opposed to making use of native TensorFlow primitives to that end, which do not exist).
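The registry mechanism described above can be sketched in a few lines of plain Python (a toy illustration, not TensorFlow's actual implementation; the names RegisterGradient and backprop here only mimic the real API):

```python
import math

# Global registry mapping op names to their gradient functions,
# analogous to what @ops.RegisterGradient builds up in TensorFlow.
GRADIENT_REGISTRY = {}

def RegisterGradient(op_name):
    """Decorator that records an op's gradient function in the registry."""
    def decorator(grad_fn):
        GRADIENT_REGISTRY[op_name] = grad_fn
        return grad_fn
    return decorator

@RegisterGradient("Square")
def _square_grad(x, upstream):
    return upstream * 2 * x          # d(x^2)/dx = 2x, chained with upstream

@RegisterGradient("Exp")
def _exp_grad(x, upstream):
    return upstream * math.exp(x)    # d(e^x)/dx = e^x

def backprop(op_name, x, upstream=1.0):
    # This Python-level lookup is exactly what a pure C++ build lacks:
    return GRADIENT_REGISTRY[op_name](x, upstream)

print(backprop("Square", 3.0))  # 6.0
```

Because the registry lives in Python, a C++ program that only loads the ops and kernels has no way to look up which op computes which gradient, which is the point made above.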
So the point is that you can do the same computations using the same ops (which are then implemented with the same kernels), but I don't think that C++ has (or will ever have) such gradient registry (and optimizer classes), so you would need to work out or copy that backpropagation construction by yourself. In general, the C++ API is not well suited to building the computation graph.
Now a different question (and maybe this was what you were asking about in the first place) is whether you can run an already existing graph that does backpropagation in C++. By this I mean building a computation graph in Python, creating an optimizer (which in turn creates the necessary operations in the graph to compute the gradient and update the variables) and exporting the graph, then load that graph in C++ and run it. That is entirely possible and no different to running any other kind of thing in TensorFlow C++.

General tips for optimizing distance matrices in OpenGL

I'll be taking a look at some C++ code that utilizes OpenGL to render some information. I've been told that they are using distance matrices to accomplish part of the rendering and that it currently runs in O(N^2) time.
I don't have the code just yet, but do you all know of any common problems or coding mistakes that would cause this to run in O(N^2)? Do you all know of any ways to reduce this, perhaps down to O(N log N)?
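One common source of the N^2 cost is a brute-force all-pairs distance loop. If only distances below some radius actually matter for the rendering, a uniform spatial grid can cut the comparisons dramatically. A hedged Python sketch (the real code base may call for a KD-tree or other structure instead):

```python
import math
from collections import defaultdict

def neighbor_pairs_naive(points, r):
    """O(N^2): compare every pair of points."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= r:
                pairs.append((i, j))
    return pairs

def neighbor_pairs_grid(points, r):
    """Bucket points into cells of size r, then compare each point only
    against points in its own and adjacent cells. For roughly uniform
    data this is near O(N) instead of O(N^2)."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // r), int(y // r))].append(idx)
    pairs = set()
    for i, (x, y) in enumerate(points):
        cx, cy = int(x // r), int(y // r)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    if i < j and math.dist(points[i], points[j]) <= r:
                        pairs.add((i, j))
    return sorted(pairs)

pts = [(0, 0), (0.5, 0), (5, 5)]
print(neighbor_pairs_grid(pts, 1.0))  # [(0, 1)]
```

If the algorithm genuinely needs every pairwise distance (a full distance matrix), the N^2 work is inherent and the optimizations are constant-factor ones (SIMD, caching, moving it to the GPU).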

How do you calculate transformation matrix for shader in OpenGL

In newer OpenGL specifications, the matrix manipulation functions have been removed. You need to calculate the transformation matrices by hand and pass them to the shaders. Although glRotate, glScale, etc. disappeared, nothing was provided in their place...
My question:
how do you handle the transformations? Do you dig the theory and implement all by hand, or use some predefined libraries? Is there any "official" OpenGL solution?
For example, datenwolf points to his hand-made C library in this post. For Java users (Android) there is the AffineTransform class, but it applies to 3x3 matrices, so extra effort is needed to apply it to OpenGL's mat4.
What is your solution?
how do you handle the transformations? Do you dig the theory and implement all by hand, or use some predefined libraries?
Either way goes. But the thing is: In a real program that deals with 3D geometry you need those transformation matrices for a lot more than just rendering stuff. Say you have some kind of physics simulation running. The position of rigid objects is usually represented by their transformation matrix. So if doing a physics sim, you've got that transformation matrix lying around somewhere anyway, so you just use that.
In fully integrated simulation engines you'll also want to avoid redundancies, so you take some physics simulation library like ODE, Bullet or so, and modify it in a way that it can work directly on your object-representing structures without copying the data into library-specific records for processing and then back.
So you usually end up with some mixture. Some of the math comes in preexisting libraries, others you implement yourself.
I agree with datenwolf, but to give an example I use Eigen, which is a fantastic general purpose matrix math library.
In OpenGL 3.0 and above, the glTranslate(), glRotate(), ftransform(), etc. functions are deprecated, but they can still be used in the compatibility profile.
A better way is to use a math library like GLM (http://glm.g-truc.net/), which is compatible with the GLSL specifications.
The projection matrix, model matrix and view matrix are passed to the shader as uniform variables.
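For illustration, here is a minimal NumPy sketch (my own, not taken from any of the libraries mentioned) of building the classic transformation matrices by hand; in C++ you would get the equivalent from GLM or Eigen:

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 translation matrix, the hand-rolled equivalent of glTranslate."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotate_z(angle_rad):
    """4x4 rotation about the Z axis (what glRotatef(deg, 0, 0, 1) built)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

def perspective(fovy_rad, aspect, near, far):
    """4x4 perspective projection, same formula gluPerspective used."""
    f = 1.0 / np.tan(fovy_rad / 2)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

# Combined model-view-projection, uploaded to the shader as a uniform mat4:
mvp = (perspective(np.radians(60), 16 / 9, 0.1, 100.0)
       @ rotate_z(np.radians(90))
       @ translate(1, 2, 3))
```

Note the multiplication order: the transform written last (translate) is applied to the vertex first, matching the old fixed-function matrix stack semantics.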

Playing with geometry?

Does anyone have some useful beginner tutorials and code snippets for playing with basic geometric shapes and geometric proofs in code?
In particular, something with the ability to easily create functions and recursively draw them on the screen. Additional (but not absolute) requirements: support for Objective-C and basic window drawing routines for OS X and Cocoa.
A specific question: how would one write a test to validate that a shape is in fact a square, triangle, etc.? The idea being that you could draw a bunch of shapes, fit them together, and test and analyze the emergent shape that arises from the set of sub-shapes.
This is not a homework question. I am not in school. Just wanted to experiment with drawing code and geometry. And looking for an accessible way to play and experiment with shapes and geometry programming.
I am open to Java and Processing, or Actionscript/Haxe and Flash but would also like to use Objective C and Xcode to build projects as well.
What I am looking for are some clear tutorials to get me started down the path.
Some specific applications include clear examples of how to display for example parts of a Cantor Set, Mandelbrot Set, Julia set, etc...
One aside, I was reading on Wikipedia about the "Russell's Paradox". And the wiki article stated:
Let us call a set "abnormal" if it is a member of itself, and "normal" otherwise. For example, take the set of all squares. That set is not itself a square, and therefore is not a member of the set of all squares. So it is "normal". On the other hand, if we take the complementary set that contains all non-squares, that set is itself not a square and so should be one of its own members. It is "abnormal".
The point about squares seems intuitively wrong to me. All the squares added together seem to imply a larger square. Obviously I get the larger paradox about sets. But what I am curious about is playing around with shapes in code and analyzing them empirically in code. So for example a potential routine might be draw four squares, put them together with no space between them, and analyze the dimensions and properties of the new shape that they make.
Perhaps even allowing free hand drawing with a mouse. But for now just drawing in code is fine.
If you're willing to use C++ I would recommend two libraries:
boost::GGL generic geometry library handles lots of geometric primitives such as polygons, lines, points, and so forth. It's still pretty new, but I have a feeling that it's going to be huge when it's officially added into boost.
CGAL, the Computational Geometry Algorithms Library: this thing is huge, and will do almost anything you'll ever need for geometry programming. It has very nice bindings for Qt as well if you're interested in doing some graphical stuff.
I guess OpenGL might not be the best starting point for this. It's quite low-level, and you will have to fight with unexpected behavior and actual driver issues. If you emphasize the "playing" part, go for Processing. It's a programming environment specifically designed to play with computer graphics.
However, if you really want to take the shape-testing path, an in-depth study of computer vision algorithms is inevitable. On the other hand, if you just want to compare your shapes to a reference image, without rotation, scaling, or other distortions, the Visual Difference Predictor library might help you.
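For the simpler, non-vision case, the square-validation question from the post can be answered purely from pairwise distances. A small Python sketch of my own as an illustration:

```python
import itertools

def is_square(points, tol=1e-9):
    """Check whether four 2-D points form a square, in any vertex order:
    of the 6 pairwise squared distances, the 4 smallest must be equal
    (the sides), the 2 largest must be equal (the diagonals), and
    diagonal^2 must equal 2 * side^2."""
    if len(points) != 4:
        return False
    d2 = sorted((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in itertools.combinations(points, 2))
    side, diag = d2[0], d2[5]
    if side <= tol:  # degenerate: coincident points
        return False
    return (all(abs(d - side) < tol for d in d2[:4])
            and all(abs(d - diag) < tol for d in d2[4:])
            and abs(diag - 2 * side) < tol)

print(is_square([(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
print(is_square([(0, 0), (2, 0), (2, 1), (0, 1)]))  # False (rectangle)
```

Working with squared distances avoids square roots, and sorting the six values makes the test independent of the order in which the corners were drawn, so it also accepts rotated squares.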
I highly recommend NeHe for any beginner OpenGL programmer, once you complete the first few tutorials you should be able to have fun with geometry any way you want.
Hope that helps