Playing with geometry? (OpenGL)

Does anyone have some useful beginner tutorials and code snippets for playing with basic geometric shapes and geometric proofs in code?
In particular, something with the ability to easily create functions and recursively draw them on the screen. Additional, though not absolute, requirements: support for Objective-C and basic window drawing routines for OS X and Cocoa.
A specific question: how would one write a test to validate that a shape is in fact a square, triangle, etc.? The idea being that you could draw a bunch of shapes, fit them together, and then test and analyze the emergent shape that arises from the set of sub-shapes.
This is not a homework question. I am not in school. Just wanted to experiment with drawing code and geometry. And looking for an accessible way to play and experiment with shapes and geometry programming.
I am open to Java and Processing, or ActionScript/Haxe and Flash, but would also like to use Objective-C and Xcode to build projects as well.
What I am looking for are some clear tutorials to get me started down the path.
Some specific applications include clear examples of how to display, for example, parts of the Cantor set, the Mandelbrot set, the Julia set, etc.
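For instance, the kind of recursive routine I have in mind might look like this rough sketch in plain C++ (it only computes the Cantor-set intervals; drawing would just map each interval to a rectangle on screen):

```cpp
#include <iostream>

// Recursively emit the intervals of the Cantor set down to a given depth.
void cantor(double left, double right, int depth) {
    if (depth == 0) {
        std::cout << "[" << left << ", " << right << "]\n";
        return;
    }
    double third = (right - left) / 3.0;
    cantor(left, left + third, depth - 1);    // keep the left third
    cantor(right - third, right, depth - 1);  // keep the right third
}

int main() {
    cantor(0.0, 1.0, 3);  // prints the 8 intervals at depth 3
}
```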
One aside: I was reading on Wikipedia about Russell's Paradox, and the article stated:
Let us call a set "abnormal" if it is a member of itself, and "normal" otherwise. For example, take the set of all squares. That set is not itself a square, and therefore is not a member of the set of all squares. So it is "normal". On the other hand, if we take the complementary set that contains all non-squares, that set is itself not a square and so should be one of its own members. It is "abnormal".
The point about squares seems intuitively wrong to me. All the squares added together seem to imply a larger square. Obviously I get the larger paradox about sets. But what I am curious about is playing around with shapes in code and analyzing them empirically. So, for example, a potential routine might draw four squares, put them together with no space between them, and analyze the dimensions and properties of the new shape they make.
Perhaps even allowing free hand drawing with a mouse. But for now just drawing in code is fine.
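As a rough sketch of the kind of test I mean (plain C++; the Point type and tolerance are placeholders, not tied to any particular library): four equal sides plus equal diagonals imply a square.

```cpp
#include <cmath>

struct Point { double x, y; };

// Squared distance between two points (avoids a sqrt).
double dist2(Point a, Point b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    return dx * dx + dy * dy;
}

// Do four points, given in order around the outline, form a square?
bool isSquare(const Point p[4], double eps = 1e-9) {
    double side = dist2(p[0], p[1]);
    if (side < eps) return false;                // degenerate shape
    for (int i = 1; i < 4; ++i)                  // four equal sides
        if (std::fabs(dist2(p[i], p[(i + 1) % 4]) - side) > eps) return false;
    double diag = dist2(p[0], p[2]);             // equal diagonals => right angles
    return std::fabs(dist2(p[1], p[3]) - diag) <= eps
        && std::fabs(diag - 2.0 * side) <= eps;  // diagonal^2 = 2 * side^2
}

int main() {
    Point sq[4] = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
    return isSquare(sq) ? 0 : 1;  // returns 0: it is a square
}
```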

If you're willing to use C++ I would recommend two libraries:
boost::GGL, the Generic Geometry Library: it handles lots of geometric primitives such as polygons, lines, points, and so forth. It's still pretty new, but I have a feeling it's going to be huge when it's officially added into Boost.
CGAL, the Computational Geometry Algorithms Library: this thing is huge, and will do almost anything you'll ever need for geometry programming. It has very nice bindings for Qt as well if you're interested in doing some graphical stuff.
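As a small taste of the first option, here is a minimal sketch in the style of Boost.Geometry (the name GGL shipped under once accepted into Boost; headers and namespaces may differ between versions). It unions two adjacent unit squares and inspects the emergent shape, much like the experiment described in the question:

```cpp
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/io/wkt/wkt.hpp>
#include <iostream>
#include <vector>

namespace bg = boost::geometry;
typedef bg::model::d2::point_xy<double> point_t;
typedef bg::model::polygon<point_t> polygon_t;

int main() {
    polygon_t a, b;
    bg::read_wkt("POLYGON((0 0,0 1,1 1,1 0,0 0))", a);  // unit square
    bg::read_wkt("POLYGON((1 0,1 1,2 1,2 0,1 0))", b);  // its right neighbor

    std::vector<polygon_t> merged;
    bg::union_(a, b, merged);

    // One resulting piece of area 2 means the squares fused into a rectangle.
    std::cout << "pieces: " << merged.size()
              << ", area: " << bg::area(merged.front()) << "\n";
}
```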

I guess OpenGL might not be the best starting point for this. It's quite low-level, and you will have to fight with unexpected behavior and actual driver issues. If you emphasize the "playing" part, go for Processing. It's a programming environment specifically designed to play with computer graphics.
However, if you really want to take the shape-testing path, an in-depth study of computer vision algorithms is inevitable. On the other hand, if you just want to compare your shapes to a reference image, without rotation, scaling, or other distortions, the Visual Difference Predictor library might help you.

I highly recommend the NeHe tutorials for any beginner OpenGL programmer; once you complete the first few, you should be able to have fun with geometry any way you want.
Hope that helps

Related

How to use Unreal 4 ProceduralMeshComponent classes in code?

I was trying to find some examples or documentation on how to implement functionality using the Unreal 4 ProceduralMeshComponents through code. The documentation of these classes on the website is very sparse and only provides the barest details of how they function:
https://docs.unrealengine.com/latest/INT/BlueprintAPI/Components/ProceduralMesh/index.html
I know, I know, they are already exposed to the Blueprint Editor so I am aware I can use them in the engine itself. However, I want to understand the exact ins and outs of the process, which means that I need to implement this in a project through code.
Also I feel that using these components through Blueprint nodes alone limits the extent of what can be done with this powerful functionality.
I have also searched for examples (both on the net and on the forums) but can't find any that don't involve using Blueprints in some way. The other problem is that this functionality was introduced relatively recently; before that, Rama (a stellar Unreal user) had put up a similar API that allowed procedural mesh generation, but it is deprecated now and many examples refer to that version instead.
Don't get me wrong, I'm not dissing Blueprints here. I love the tool and consider them one of the best bits of Unreal 4. But for my purpose I require the process to be completely exposed to me from start to finish.
I would appreciate any resources or examples that you could share that implement the Unreal Procedural Mesh classes completely through code towards some effect.
This is quite a big question, since Procedural Mesh Components can be used in very many ways. They just represent an arbitrary runtime-generated mesh.
Most of the functions listed in the documentation are quite self-explanatory if you know the data representations of meshes in 3D applications.
A mesh can have several LODs. Each individual mesh LOD is made up of sections. Each section can have a visual representation and a collision representation. For the "visual" representation, there are lists of point locations, lists of triangles that are represented by three point indices, lists of edges connecting two points, lists of normals for each triangle, lists of UV-space positions for each point, etc. For "collision" representation of the meshes, there are of course separate lists, in most cases smaller in size for more optimized calculation.
Depending on your use case, you fill these lists with the data you need. This data can be generated in whatever way, of course; that is up to you. Whatever you do, you just need to have some arrays of the needed stuff, be it points, edges, etc.
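To make that concrete, here is a minimal sketch in UE4 C++ of filling those arrays and pushing them into the component. AMyProceduralActor, its Mesh member, and BuildTriangle are hypothetical names; CreateMeshSection is the component's actual API:

```cpp
#include "ProceduralMeshComponent.h"

void AMyProceduralActor::BuildTriangle()
{
    TArray<FVector> Vertices;            // point locations
    Vertices.Add(FVector(0.f, 0.f, 0.f));
    Vertices.Add(FVector(0.f, 100.f, 0.f));
    Vertices.Add(FVector(100.f, 0.f, 0.f));

    TArray<int32> Triangles;             // one triangle = three vertex indices
    Triangles.Add(0); Triangles.Add(1); Triangles.Add(2);

    TArray<FVector> Normals;             // one normal per vertex, all facing up
    Normals.Init(FVector(0.f, 0.f, 1.f), 3);

    TArray<FVector2D> UV0;               // UV-space position per vertex
    UV0.Add(FVector2D(0.f, 0.f));
    UV0.Add(FVector2D(0.f, 1.f));
    UV0.Add(FVector2D(1.f, 0.f));

    // Colors and tangents may be left empty; the final flag also builds the
    // separate collision representation mentioned above.
    Mesh->CreateMeshSection(0, Vertices, Triangles, Normals, UV0,
                            TArray<FColor>(), TArray<FProcMeshTangent>(), true);
}
```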
can't find any that don't involve using Blueprints in some way
The beauty of UE is that any Blueprint example can act as a C++ example. You can easily recreate a BP graph in code one-to-one, since BP nodes are based on C++ functions marked UFUNCTION(BlueprintCallable). To see the C++ source of the function behind a node, you can double-click the node.
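For instance (an illustrative declaration only, with made-up names), the Blueprint node and the C++ side are literally the same function:

```cpp
#include "GameFramework/Actor.h"
#include "MyProceduralActor.generated.h"

UCLASS()
class AMyProceduralActor : public AActor
{
    GENERATED_BODY()
public:
    // Any function declared like this shows up as a node in the BP editor,
    // so recreating a BP graph in C++ is mostly calling the same functions.
    UFUNCTION(BlueprintCallable, Category = "Procedural")
    void RebuildMesh(int32 Resolution);
};
```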
P.S. I understand it might not be ideal that I give you a long-winded explanation instead of code examples, but for a question this broad there's really no one-size-fits-all code snippet.
Plus, procedural mesh generation is a massive world of its own, with each individual scenario requiring thousands of lines of specialized code. If you make a procedural terrain vs a procedural animal vs a procedural chair, you can imagine the code is both very complex and so specific it's nearly useless for the other use cases.

2D DirectX 10 Game Development

I am attempting to create a 2D game using DirectX 10. I've been having lots of problems figuring out how to actually get started on such a thing. I've found mixed advice, such as just using shaders and sticking with Direct3D, and I've seen some people using libraries such as SFML. I am rather new to DirectX, and am wondering what I should be using to create a 2D game (shaders or a library). Are shaders something I should look into, or should I just use a library?
Well, if you are interested in learning DirectX itself, you should try to write your own 2D engine, which with a little experience isn't as hard as it may seem. But if you want to get straight to game development, you can take a look at some engines that take care of that part. Shaders can really enhance scenes (3D as well as 2D), and if I were you, I would definitely use them instead of just simple unprocessed textures. Most engines won't take the shader programming off your hands, so you will probably need to look into HLSL anyway.
Also, what I have experienced with several engines and libraries: at some point they come to an end, and if you are enthusiastic about your project, you don't want to live with those limitations. That's why I would recommend writing your own engine, which you can easily expand as needed.
A good starting point for pure DirectX:
http://www.rastertek.com/tutdx10.html (Mainly for 3D but also covers 2D, which you will notice isn't that different)

Which library for voxel data structure?

I'm working in C++ with large voxel grids in a scientific context and I'm trying to decide which library to use. Only a fraction of the voxel grid holds values, but there might be several per voxel (e.g. a struct), determined by raytracing. I'm not trying to render anything, but I have to determine the potential number of rays passing through the entire target area, so an awful lot of ray-box computations will have to be calculated, and preferably very fast...
So far, I found
OpenVDB http://www.openvdb.org/
Field3d http://sites.google.com/site/field3d/
The latter appeals a bit more, because it seems simpler/easier to use.
My question is: which of them would be better suited for tasks that are not aimed at rendering/visualization? Which one is faster when computing a lot of ray-box intersections (no viewpoint-dependent culling possible)? Suggestions, anyone?
In any case, I want to use an existing C++ library and not write a kd-tree/octree etc. myself. I don't have the time to reinvent the wheel.
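For reference, the sort of sparse usage I have in mind with OpenVDB looks roughly like this (a minimal sketch, not production code; the grid type and coordinates are arbitrary):

```cpp
#include <openvdb/openvdb.h>
#include <iostream>

int main() {
    openvdb::initialize();

    // A float grid with background value 0; only voxels we touch use memory.
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(0.0f);
    openvdb::FloatGrid::Accessor acc = grid->getAccessor();

    acc.setValue(openvdb::Coord(100, 200, 300), 1.5f);  // one active voxel

    std::cout << "active voxels: " << grid->activeVoxelCount() << "\n";
    std::cout << "value: " << acc.getValue(openvdb::Coord(100, 200, 300)) << "\n";
}
```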
I would advise
OpenSceneGraph
Ogre3D
VTK
I have personally used the first two. However, VTK is also a popular alternative. All three of them support voxel based rendering.

Any benefits of learning 3d software rasterization & theory before jumping into OpenGL/Direct3D?

I came across a very interesting book. I have done some 2D games, but 3D is a whole new ballpark for me. I just need to know if there are any benefits to learning 3D software rasterization and theory before jumping into OpenGL/Direct3D. Any reasons for either approach? Thanks in advance!
It is handy stuff to know, but largely unnecessary for learning OpenGL or Direct3D. It might be (marginally) more useful when getting into pixel shaders.
if there any benefits of learning 3d software rasterization
You'll get a deeper understanding of the internal workings of 3D APIs.
I think that if you're serious about working with 3D, you should be able to write a CSG raytracer and a software rasterizer with texture-mapping support, and know a few related algorithms. Or AT LEAST you should have the knowledge that would allow you to write those if you wanted.
Francis Hill's book "Computer Graphics Using OpenGL" has a few chapters about writing a raytracer and combining software rasterization with an OpenGL-rendered scene, which is definitely something you have to read.
Without that knowledge you'll screw up when you have to write a tool that HAS to calculate something in software mode. Classic examples of such tools are radiosity/lightmap calculation, raytracing, and so on. Some of those tasks can be accelerated by the GPU, but the GPU won't magically handle everything for you, and you'll have to deal with code VERY similar to standard software rasterization.
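To give a flavor of what such software-mode code looks like, here is a minimal half-space triangle rasterizer sketch (the framebuffer layout and types are assumptions; a real rasterizer adds fill rules, clipping, and sub-pixel precision):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Signed area test: positive when point c lies to the left of edge a->b.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Fill a counter-clockwise triangle into an 8-bit framebuffer of size w x h.
void fillTriangle(std::vector<uint8_t>& fb, int w, int h,
                  Vec2 v0, Vec2 v1, Vec2 v2, uint8_t color) {
    int minX = std::max(0,     (int)std::min({v0.x, v1.x, v2.x}));
    int maxX = std::min(w - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int minY = std::max(0,     (int)std::min({v0.y, v1.y, v2.y}));
    int maxY = std::min(h - 1, (int)std::max({v0.y, v1.y, v2.y}));
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};       // sample at the pixel center
            if (edge(v0, v1, p) >= 0 &&
                edge(v1, v2, p) >= 0 &&
                edge(v2, v0, p) >= 0)
                fb[y * w + x] = color;        // inside all three half-spaces
        }
}
```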
I would skip the rasterization theory. Most of the tricky parts are fixed in current GPU, so when you move on to hardware accelerated graphics, you won't be able to apply your knowledge.
If you first learn about rasterization theory, you will probably have a better understanding of why it is useful to specify things such as pixel center coordinates and why there's a difference between the way D3D9, D3D10 and OpenGL handle it, but at the end of the day it doesn't really matter for most developers. Try the other way: Learn the APIs first and whenever you step over some weird specification, try to find a rationale in rasterization theory.
Unnecessary. Good tutorials will tell you everything you need to know to use the relevant technology. The fact is that if everyone needed a doctorate in rasterization theory before they could use OGL/D3D, there'd be way, way less OGL/D3D games than there are.
I learned it (wrote a software rasterizer) and am extremely happy with that. I would say it is much more helpful for doing OpenGL than, for example, knowing assembly is for coding in C: it is extremely helpful for understanding how things work, and for the feeling that everything is possible.

Modifying an image with OpenGL?

I have a device to acquire XRay images. Due to some technical constrains, the detector is made of heterogeneous pixel size and multiple tilted and partially overlapping tiles. The image is thus distorted. The detector geometry is known precisely.
I need a function converting these distorted images into a flat image with homogeneous pixel size. I have already done this on the CPU, but I would like to give it a try with OpenGL to use the GPU in a portable way.
I have no experience with OpenGL programming, and most of the information I could find on the web was useless for this use case. How should I proceed? How do I do this?
Images are 560x860 pixels, and we have batches of 720 images to process. I'm on Ubuntu.
OpenGL is for rendering polygons. You might be able to do multiple passes and use shaders to get what you want, but you are better off rewriting the algorithm in OpenCL. The bonus then is that you have something portable that will even use multi-core CPUs if no graphics accelerator card is available.
Rather than OpenGL, this sounds like a CUDA, or more generally GPGPU problem.
If you have C or C++ code to do it already, CUDA should be little more than figuring out the types you want to use on the GPU and how the algorithm can be tiled.
If you want to do this with OpenGL, you'd normally do it by supplying the current data as a texture, writing a fragment shader that processes that data, and setting it up to render to a texture. Once the output texture is fully rendered, you can retrieve it back to the CPU and write it out as a file.
I'm afraid it's hard to do much more than a very general sketch of the overall flow without knowing more about what you're doing -- but if (as you said) you've already done this on the CPU, you apparently already have a pretty fair idea of most of the details.
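For what it's worth, a rough outline of that flow in C++ (assuming a current GL 3.x context and an already-compiled correction shader; all names are placeholders and error checking is omitted):

```cpp
#include <GL/glew.h>  // any loader that provides GL 3.x entry points

void correctImage(const float* pixels /*560x860 input*/, float* out) {
    GLuint srcTex, dstTex, fbo;

    // 1. Upload the distorted image as a single-channel float texture.
    glGenTextures(1, &srcTex);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 560, 860, 0,
                 GL_RED, GL_FLOAT, pixels);

    // 2. Create the output texture and attach it to a framebuffer object.
    glGenTextures(1, &dstTex);
    glBindTexture(GL_TEXTURE_2D, dstTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 560, 860, 0,
                 GL_RED, GL_FLOAT, NULL);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, dstTex, 0);

    // 3. Draw a full-screen quad here; the fragment shader samples srcTex
    //    at the geometrically corrected coordinates for each output pixel.

    // 4. Read the corrected image back to the CPU.
    glReadPixels(0, 0, 560, 860, GL_RED, GL_FLOAT, out);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &srcTex);
    glDeleteTextures(1, &dstTex);
}
```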
At heart what you are asking here is "how can I use a GPU to solve this problem?"
Modern GPUs are essentially linear algebra engines, so your first step would be to define your problem as a matrix that transforms an input coordinate (x, y) to its output in homogeneous space:
For example, you would represent a transformation of scaling x by ½, scaling y by 1.2, and translating up and left by two units as:

\[
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix} 0.5 & 0 & -2 \\ 0 & 1.2 & 2 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]

and you can work out analogous transforms for rotation, shear, etc., as well.
Once you've got your transform represented as a matrix-vector multiplication, all you need to do is load your source data into a texture, specify your transform as the projection matrix, and render it to the result. The GPU performs the multiplication per pixel. (You can also write shaders, etc, that do more complicated math, factor in multiple vectors and matrices and what-not, but this is the basic idea.)
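As a tiny CPU-side illustration of that per-pixel operation (plain C++ with made-up names), here is the example transform applied to one point:

```cpp
#include <iostream>

struct Vec3 { double x, y, w; };  // homogeneous 2D point

// Multiply a 3x3 homogeneous transform by a point.
Vec3 apply(const double m[3][3], Vec3 p) {
    return { m[0][0] * p.x + m[0][1] * p.y + m[0][2] * p.w,
             m[1][0] * p.x + m[1][1] * p.y + m[1][2] * p.w,
             m[2][0] * p.x + m[2][1] * p.y + m[2][2] * p.w };
}

int main() {
    // The example transform above: scale x by 0.5, y by 1.2,
    // translate left and up by two units.
    const double M[3][3] = { {0.5, 0.0, -2.0},
                             {0.0, 1.2,  2.0},
                             {0.0, 0.0,  1.0} };
    Vec3 p = apply(M, {4.0, 5.0, 1.0});
    std::cout << p.x << ", " << p.y << "\n";  // prints 0, 8
}
```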
That said, once you have your problem expressed as a linear transform, you can make it run a lot faster on the CPU too by leveraging e.g. SIMD or one of the many linear algebra libraries out there. Unless you need real-time performance or have a truly immense amount of data to process, using CUDA/GL/shaders etc. may be more trouble than it's strictly worth, as there's a bit of clumsy machinery involved in initializing the libraries, setting up render targets, learning the details of graphics development, and so on.
Simply converting your inner loop from ad-hoc math to a well-optimized linear algebra subroutine may give you enough of a performance boost on the CPU that you're done right there.
You might find this tutorial useful (it's a bit old, but note that it does contain some OpenGL 2.x GLSL after the Cg section). I don't believe there are any shortcuts to image processing in GLSL, if that's what you're looking for... you do need to understand a lot of the 3D rasterization aspect and historical baggage to use it effectively, although once you do have a framework for inputs and outputs set up you can forget about that and play around with your own algorithms in shader code relatively easily.
Having been doing this sort of thing for years (initially using Direct3D shaders, but more recently with CUDA), I have to say that I entirely agree with the posts here recommending CUDA/OpenCL. It makes life much simpler, and generally runs faster. I'd have to be pretty desperate to go back to a graphics API implementation of non-graphics algorithms now.