OpenGL - sphere shrinking and expanding

I'm revising for an OpenGL exam and keep coming across this question on past papers. It's not something I've been taught and I was wondering if anyone could set me off in the right direction.
Sorry I haven't added what I have so far, there's not much because I don't really understand the question either.
"You wish to create a simple animation that shows a small red sphere shrinking and expanding. Specifically the radius oscillates sinusoidally between 0.3 and 0.5 in magnitude.
(i) Discuss the role of the glutIdleFunc in the animation.
(ii) Write the display method that performs the above animation;
assume the radius vector R is of type double and is declared with
global scope."

The glutIdleFunc documentation could set you off in the right direction. The idle callback is where GLUT lets you do calculations in the background between frames, so a possible answer could discuss how the animation behaves with that callback set versus not set (or simply left empty).
If you are allowed to use glutSolidSphere or glutWireSphere, the display method could be quite simple if you know the basics of OpenGL (assuming you've studied and attended class :). But if you have to use OpenGL 3.3 or 4.0+, you will probably have to come up with an algorithm to first generate the vertices of the sphere (simpler) and then the indices of the vertices (a little trickier). There are numerous examples on the Internet and Stack Overflow of how to do that.
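For instance, a minimal sketch of the two callbacks, assuming classic GLUT with a double-buffered window and that glutSolidSphere is allowed (the setup in main() is omitted):

    #include <GL/glut.h>
    #include <math.h>

    double R = 0.4; /* the radius, declared with global scope as the question requires */

    /* Idle callback: GLUT calls this whenever no window events are pending,
       so the radius keeps updating "in the background" between frames. */
    void idle(void)
    {
        double t = glutGet(GLUT_ELAPSED_TIME) / 1000.0; /* seconds since start */
        R = 0.4 + 0.1 * sin(t);   /* oscillates sinusoidally between 0.3 and 0.5 */
        glutPostRedisplay();      /* request a redraw with the new radius */
    }

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glColor3f(1.0f, 0.0f, 0.0f);  /* small red sphere */
        glutSolidSphere(R, 32, 32);
        glutSwapBuffers();
    }

    /* registered in main() with glutDisplayFunc(display); glutIdleFunc(idle); */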
Good luck on your exam!

Related

Perspective projection based on 4 points in 2D

I'm writing to ask about homography and perspective projection.
I'm trying to write a piece of code that will "warp" my image so that its corners align with 4 reference points in 3D space. However, the game engine I'm running it in already lets me get their screen positions, so I have the screen-space coordinates of both sets of corners, (xi, yi) and (ui, vi), normalized to values between 0 and 1.
I have to mention that I don't have a degree in mathematics, which seems to be a requirement in the posts I've seen on this topic so far, but I'm hoping there is actually a solution to this problem that one can comprehend. I never had a chance to take classes in Computer Vision.
The reason I came here is that in all the posts I've seen online, the simplest explanation I came across is that each point, written in homogeneous coordinates as a 3x1 vector, is multiplied by a 3x3 homography matrix consisting of 9 components h1, h2, h3 ... h9, and this transformation matrix transforms each point to the correct perspective. And that's where I'm hitting a brick wall - how do I calculate the transformation matrix? It feels like it should be a relatively simple algebraic task, but apparently it's not.
At this point I've spent days reading on the topic, and the solutions I've come across are either based on MATLAB (which has a ton of mathematical functions built in), or include elaborations and discussions that don't really explain much; sometimes they suggest tons of different parameters and simplifications but rarely explain why or what their purpose is, or they reference books and studies that have since been removed from the web. I found myself more confused than when I began. Most of the resources I managed to find online were also written in a different context - image stitching and 3D engine development.
I also want to mention that I need to run this code each frame on the CPU, and I'm fairly concerned about the cost of running too many matrix transformations and solving a ton of linear algebra equations.
I apologize for not asking about any specific code, but my general question is - can anyone point me in the right direction with this issue?
Limit the problem you deal with.
For example, if you always warp the entire rectangular image, you can treat the source corners as {(0,0), (1,0), (0,1), (1,1)}.
This simplifies the equations enough that you can solve them by hand and implement the answer yourself.
Note: a homography is scale invariant, so you can reduce the degrees of freedom from 9 to 8 (e.g. solve the equations under the constraint h9 = 1).
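To make that concrete, here is a sketch of the closed-form unit-square solution, following Heckbert's "Fundamentals of Texture Mapping and Image Warping" (the names are illustrative, not from any library). It costs only a handful of multiplications, so running it every frame on the CPU is not a concern:

    #include <array>

    struct Vec2 { double x, y; };

    // Returns H (row-major 3x3, h9 = 1) mapping the unit square to the quad
    // p0..p3, with (0,0)->p0, (1,0)->p1, (1,1)->p2, (0,1)->p3. A point maps as
    //   x' = (h1*u + h2*v + h3) / (h7*u + h8*v + 1)
    //   y' = (h4*u + h5*v + h6) / (h7*u + h8*v + 1)
    std::array<double, 9> unitSquareToQuad(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3)
    {
        double sx = p0.x - p1.x + p2.x - p3.x;  // both zero exactly when the
        double sy = p0.y - p1.y + p2.y - p3.y;  // mapping is affine
        double g = 0.0, h = 0.0;
        if (sx != 0.0 || sy != 0.0) {
            double dx1 = p1.x - p2.x, dx2 = p3.x - p2.x;
            double dy1 = p1.y - p2.y, dy2 = p3.y - p2.y;
            double den = dx1 * dy2 - dy1 * dx2; // zero only for degenerate quads
            g = (sx * dy2 - sy * dx2) / den;    // h7
            h = (dx1 * sy - dy1 * sx) / den;    // h8
        }
        return { p1.x - p0.x + g * p1.x,  p3.x - p0.x + h * p3.x,  p0.x,
                 p1.y - p0.y + g * p1.y,  p3.y - p0.y + h * p3.y,  p0.y,
                 g,                       h,                       1.0 };
    }

To warp in the other direction (screen quad back to the unit square), invert the matrix; for a 3x3 that is also cheap and closed-form.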
Best advice I can give: read a good book on the subject. For example, "Multiple View Geometry" by Hartley and Zisserman

How flexible are OpenGL's quadric functionality and transformation matrices?

To give you an idea of where I'm coming from, this started as a teaching exercise to get a 12-year-old video game addict into coding. The 2D games I did in SDL with him, and that was fine because I wasn't planning on going into 3D. Yeah, right! So now I'm in at the deep end in OpenGL, mainly trying to figure out exactly what it can and cannot do. I understand the theory (still working on Béziers and NURBS, if the truth be told) and could code the whole thing by hand with calculated triangular vertices, but I'd hate to spend days on that only to be told that there's a built-in function/library that does the whole thing faster and easier.
Quadrics seem to be extremely powerful but not terribly flexible. Consider the human head - roughly speaking a 3x4x3 ellipsoid - or a torso as a truncated cone that is taller than it is wide, and wider than it is thick. Again, a quadric shape, but with independent x, y and z radii. Since only one radius is provided, am I right in thinking that I would have to generate it around the origin and then apply a scaling matrix to adjust the radii? Furthermore, if this is so, am I also correct in thinking that saving the results into a vertex array rather than a display list leaves the system neither knowing nor caring how they got there?
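For what it's worth, origin-plus-scale is indeed the standard approach with quadrics. A minimal sketch using the legacy GLU API for the 3x4x3 "head" (note that non-uniform scaling also scales the normals, so GL_NORMALIZE is needed for correct lighting):

    #include <GL/glu.h>

    void drawHead(GLUquadric* quad)   /* quad from gluNewQuadric() at startup */
    {
        glEnable(GL_NORMALIZE);       /* renormalize normals after the scale */
        glPushMatrix();
        glScalef(3.0f, 4.0f, 3.0f);   /* independent x, y and z radii */
        gluSphere(quad, 1.0, 32, 32); /* unit sphere; the scale does the rest */
        glPopMatrix();
    }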
Transformations: I'm familiar with the basic transformations but, again, consider the torso. It can achieve maybe a 45 degree twist from the hips to the shoulders, distributed linearly across the entire length, or even a sideways lean. These are applied around the Y or Z axis respectively, but I've obviously missed something about applying transformations that are based on an independent value (e.g. rot = dist * (max_rot / max_dist)). Again, I could do this by hand (and will probably have to, in order to apply the correct physics), but does OpenGL have this functionality built in somewhere?
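For reference, the fixed-function pipeline has no such distributed transform built in - it applies one matrix to every vertex in a batch - so a linear twist has to be done per vertex, either on the CPU or (these days) in a vertex shader. A CPU-side sketch, with illustrative names:

    #include <cmath>

    struct Vertex { float x, y, z; };

    /* Rotate each vertex about the Y axis by an angle proportional to its
       height, i.e. rot = dist * (max_rot / max_dist) as described above. */
    void twistY(Vertex* v, int count, float maxRot, float maxDist)
    {
        for (int i = 0; i < count; ++i) {
            float a = v[i].y * (maxRot / maxDist);
            float c = std::cos(a), s = std::sin(a);
            float x = v[i].x, z = v[i].z;
            v[i].x =  c * x + s * z;
            v[i].z = -s * x + c * z;
        }
    }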
Pointers to any other areas of research I need to put in would be appreciated.

Bullet physics multi-sphere body getting sucked through ground

I have made several attempts to fix this and have read all I could find here, on the forums, and on Google. I used a CCD threshold much lower than my object's movement speed and a CCD swept-sphere radius much smaller than the object's half radius. The only thing this does is make the multi-sphere get stuck on seams. I also tried setting ERP/ERP2 to 0.9/1.0.
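For reference, the CCD settings described above look like this in Bullet's C++ API (a sketch; the question appears to use a language binding, so exact names may differ):

    #include <btBulletDynamicsCommon.h>

    void enableCcd(btRigidBody* body)
    {
        /* Trigger CCD once the body moves more than this distance per step;
           it should be well below the movement speed per step. */
        body->setCcdMotionThreshold(0.05f);
        /* Radius of the embedded sphere used for the sweep test; much
           smaller than half of the 0.45 radius. */
        body->setCcdSweptSphereRadius(0.2f);
    }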
[EDIT] OK, so after some more reading: CCD will not work if the sphere is already touching the ground, and ERP only affects objects with joints, if I understand correctly.
The ground is a trimesh made in Blender, using obtainStaticNodeShape to get the shape. I have tried scaling the mesh to get smaller polygons, but even the smallest size acceptable for the game does not work: about 32k indices with 11k polys over 500x500 units, while the multi-sphere has a radius of 0.45 units.
[EDIT] The multi-sphere is two spheres on top of each other, and they are restricted to angular movement around the Y-axis only, so no rolling.
The sphere gets "sucked" through the ground fast; it does not sink slowly. Making the fixed timestep smaller, 1/420 with 64 substeps, did not give any better results. This happens most often while ascending or descending a slope. My ground is gently sloped, but an incline of 20% seems to be enough for it to fall through a lot, though it can happen on level ground too, just not as often.
When I did my first test I used a big stretched out cube as ground and it worked well.
So my problem now is that I don't even know why this is happening, so I have no idea what to try next. Can anyone please give me a solution or some pointers?
Is there any use in increasing the multi-sphere's size? (For the game I cannot increase it by more than 25-30%.) I have not explicitly set any collision margins, but I think doing so would just make my sphere float above the ground? Is there any profit in changing the ground from a static object to a kinematic one?
Would it work to use a ray test from the sphere straight down and push it up if it is lower than the ground? I think not - why would it fall through if it could detect the ground in the first place?
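For reference, such a ray-test safety net would look roughly like this in Bullet's C++ API (a sketch; as noted, it can only help while the body is still above the surface, which may be exactly the problem):

    #include <btBulletDynamicsCommon.h>

    void keepAboveGround(btDiscreteDynamicsWorld* world,
                         btRigidBody* body, btScalar radius)
    {
        btVector3 from = body->getWorldTransform().getOrigin();
        btVector3 to   = from - btVector3(0, radius * 2, 0); /* straight down */

        btCollisionWorld::ClosestRayResultCallback cb(from, to);
        world->rayTest(from, to, cb);

        if (cb.hasHit()) {
            btScalar groundY = cb.m_hitPointWorld.getY();
            if (from.getY() - radius < groundY) {     /* sunk below the surface */
                btTransform t = body->getWorldTransform();
                t.getOrigin().setY(groundY + radius); /* push it back on top */
                body->setWorldTransform(t);
            }
        }
    }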
[EDIT: additional info]
There are quite a few occurrences of similar problems floating around on forums and also here at Stack Overflow. Most seem to be about very small objects. Small objects (under 0.2 m) are clearly not a good option for Bullet unless you want to increase the number of simulation steps quite a lot. My problem does not seem to fall under this category, since my smallest object is 0.9 m in diameter.
I have now also done a debug draw to see the normals of the trimesh that I use as ground. I cannot find any errors with the normals.
I also tried to increase the collision margins of the spheres, but to no avail.
I further tried to use suggested settings:
    ((btDefaultCollisionConfiguration)world.collisionConfiguration).setPlaneConvexMultipointIterations(3, 3);
    ((btDefaultCollisionConfiguration)world.collisionConfiguration).setConvexConvexMultipointIterations(3, 3);
No difference.
I did, however, read about big trimeshes not working very well for raycasting; my mesh is big, 512x512 units, but I am not sure whether this could cause my object to fall through it?
I also read that sphere shapes have problems with trimeshes, but again I am not sure whether this applies to my case? The sphere I am using is locked for rotation on all axes.
I have also tried using a btCapsule, but it gave the same results. Would a cylinder work better?
[EDIT]
I have tried using a cylinder instead, since the sphere and capsule did not work. The cylinder is working a lot better, though it has still fallen through once. The cylinder was jerking around a lot before it went through, where the sphere/capsule would just slip through fast and easily. Maybe this is a clue to the underlying problem? A cylinder is not the best shape for a character, though.
Another possible reason could be a triangle in the mesh having very long sides or a large ratio between side lengths. I found a few of those on a slope where my sphere always falls through. If this is indeed the problem, can I do anything about it other than manually editing the mesh in Blender?
As you can see, there are a lot of these questions and a lot of possible answers, and I have no idea which one corresponds to my case. Someone with better insight giving some pointers would mean a lot - thanks!

How to make room reflections using a cubemap

I am trying to use a cube map of the inside of a room to create some reflections on walls, ceiling and floor.
But when I use the cube map, the reflected image is not correct. The point of view seems to be wrong.
To try to make it correct, I use a different cube map for each wall, the floor and the ceiling. Each cube map is rendered from the center of its plane, looking into the room.
Are there specialized techniques to achieve such an effect?
But when I use the cube map, the reflected image is not correct.
Yes, this is to be expected.
Are there specialized techniques to achieve such an effect?
Indeed there are; by which I mean that years ago I came across a tech demo by ATI in which they implemented some correction. IIRC it was part of their "Ruby" (the ATI demo, not the language) series of presentations and papers. Unfortunately I can't find it anymore.
EDIT: At SIGGRAPH 2012 a technique called "parallax-corrected cubemaps" was presented in a paper about real-time illumination. This looks very similar.
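The gist of that technique: instead of sampling the cube map with the raw reflection vector, intersect the reflected ray with a proxy of the room (its bounding box) and aim the lookup at the hit point from the cube map's capture position. A sketch of the math in C++ (normally this lives in a fragment shader; the Vec3 helpers are illustrative):

    #include <algorithm>

    struct Vec3 { float x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 div(Vec3 a, Vec3 b) { return {a.x / b.x, a.y / b.y, a.z / b.z}; }
    static Vec3 vmax(Vec3 a, Vec3 b) {
        return {std::max(a.x, b.x), std::max(a.y, b.y), std::max(a.z, b.z)};
    }

    // boxMin/boxMax bound the room; cubePos is where the cube map was rendered.
    Vec3 parallaxCorrect(Vec3 fragPos, Vec3 reflDir,
                         Vec3 boxMin, Vec3 boxMax, Vec3 cubePos)
    {
        Vec3 t1   = div(sub(boxMax, fragPos), reflDir);   // slab entry/exit
        Vec3 t2   = div(sub(boxMin, fragPos), reflDir);   // distances per axis
        Vec3 tFar = vmax(t1, t2);
        float t   = std::min({tFar.x, tFar.y, tFar.z});   // nearest exit
        Vec3 hit  = { fragPos.x + reflDir.x * t,          // point on the room's
                      fragPos.y + reflDir.y * t,          // walls hit by the
                      fragPos.z + reflDir.z * t };        // reflected ray
        return sub(hit, cubePos);                         // corrected lookup dir
    }

With one cube map rendered from the room's center plus this correction, a single map can serve all the surfaces instead of one per wall.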

OpenGL - Simple 2D clipping/occlusion method?

I'm working on a relatively small 2D (top-view) game demo, using OpenGL for my graphics. It's going for a basic stealth-based angle, and as such with all my enemies I'm drawing a sight arc so the player knows where they are looking.
One of my problems so far is that when I draw this sight arc (as a filled polygon) it naturally shows through any walls on the screen since there's nothing stopping it:
http://tinyurl.com/43y4o5z
I'm curious how I might best be able to prevent something like this. I already have code in place that will let me detect line intersections with walls and so on (for the enemy sight detection), and I could theoretically use this to detect such a case and draw the polygon accordingly, but that would likely be quite fiddly and/or inefficient, so I figure if there's any built-in OpenGL system that can do this for me it would probably do it much better.
I've tried looking for questions on topics like clipping/occlusion, but I'm not even sure if these are exactly what I should be looking for; my OpenGL skills are limited. It seems that anything using, say, glClipPlane or glScissor wouldn't be suited to this due to the large number of individual walls and so on.
Lastly, this is just a demo I'm making in my spare time, so graphics aren't exactly my main worry. If there's a (reasonably) painless way to do this then I'd hope someone can point me in the right direction; if there's no simple way then I can just leave the problem for now or find other workarounds.
This is essentially a shadowing problem. Here's how I'd go about it:
For each point around the edge of your arc, trace a (2D) ray from the enemy towards the point, looking for intersections with the green boxes. If the green boxes are always going to be axis-aligned, the math will be a lot easier (look for Ray-AABB intersection). Rendering the intersection points as a triangle fan will give you your arc.
As you mention that you already have the line-wall intersection code going, then as long as that will tell you the distance from the enemy to the wall, then you'll be able to use it for the sight arc. Don't automatically assume it'll be too slow - we're not running on 486s any more. You can always reduce the number of points around the edge of your arc to speed things up.
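A sketch of the 2D ray/AABB slab test that answer refers to (names are illustrative):

    #include <algorithm>

    struct Box2 { float minX, minY, maxX, maxY; };

    /* Ray origin (ox,oy), direction (dx,dy) (need not be normalized).
       Returns true and the parameter tHit of the first hit in front of
       the origin, or false if the box is missed. */
    bool rayAabb2D(float ox, float oy, float dx, float dy,
                   const Box2& b, float& tHit)
    {
        float ix = 1.0f / dx, iy = 1.0f / dy; /* inf is fine for axis-parallel rays */
        float tx1 = (b.minX - ox) * ix, tx2 = (b.maxX - ox) * ix;
        float ty1 = (b.minY - oy) * iy, ty2 = (b.maxY - oy) * iy;
        float tmin = std::max(std::min(tx1, tx2), std::min(ty1, ty2));
        float tmax = std::min(std::max(tx1, tx2), std::max(ty1, ty2));
        if (tmax < 0.0f || tmin > tmax) return false; /* behind the ray, or a miss */
        tHit = (tmin >= 0.0f) ? tmin : tmax;          /* first hit in front */
        return true;
    }

For each ray of the arc, clamp its length to the nearest tHit over all boxes (or to the arc radius if nothing is hit) and feed the resulting endpoints to a GL_TRIANGLE_FAN.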
OpenGL's built-in occlusion handling is designed for 3D tasks and I can't think of a simple way to rig it to achieve the effect you are after. If it were me, the way I would solve this is to use a fragment shader program, but be forewarned that this definitely does not fall under "a (reasonably) painless way to do this". Briefly, you first render a binary "occlusion map" which is black where there are walls and white otherwise. Then you render the "viewing arc" like you are currently doing with a fragment program that is designed to search from the viewer towards the target location, searching for an occluder (black pixel). If it finds an occluder, then it renders that pixel of the "viewing arc" as 100% transparent. Overall though, while this is a "correct" solution I would definitely say that this is a complex feature and you seem okay without implementing it.
I figure if there's any built-in OpenGL system that can do this for me it would probably do it much better.
OpenGL is a drawing API, not a geometry processing library.
Actually your intersection-test method is the right way to do it. However, to speed it up you should use a spatial subdivision structure. Your case practically cries out for a binary space partitioning (BSP) tree. BSP trees have the nice property that the average complexity for finding intersections of a line with walls is about O(log n), with a worst case of O(n log n); in other words, BSP trees are very efficient. See the BSP FAQ for details: http://www.opengl.org//resources/code/samples/bspfaq/index.html