Whenever I use FTExtrudeFont and add depth to the text, then add a light and change the viewing angle, the result breaks as seen below: the extruded sides show through the face that is supposed to cover them.
I simply added depth to the text using FTExtrudeFont and rendered it. I also changed the order in which the parts are rendered:
font.Render([textValue UTF8String],-1,FTPoint(),FTPoint(),0x0002); // FTGL::RENDER_BACK
font.Render([textValue UTF8String],-1,FTPoint(),FTPoint(),0x0004); // FTGL::RENDER_SIDE
font.Render([textValue UTF8String],-1,FTPoint(),FTPoint(),0x0001); // FTGL::RENDER_FRONT
since RENDER_ALL produces an even worse overlap.
I am following a tutorial series about skeletal animation on YouTube (https://www.youtube.com/watch?v=f3Cr8Yx3GGA) and have run into a problem: everything works fine, except that when I rotate one of the bones (or "joints"), it gets rotated around the scene origin, meaning it does not stay in place but is translated as well. The following image illustrates the problem:
How can I make it so that the translation doesn't happen? I have been going over the tutorial series multiple times now, but cannot identify which step would prevent this from happening.
The code is very large and split across around a dozen files, and I don't know which section might be causing the issue, so I don't think there is much point in posting it all here. It should be similar to the code in the tutorial, even though I am using C++ while he is working in Java; the tutorial code can be found here: https://github.com/TheThinMatrix/OpenGL-Animation. Even general advice on how this issue is normally solved in skeletal animation should hopefully be enough for me to identify the part that is wrong and move on from there.
Rotation matrices on their own can only describe rotations around the origin (Wikipedia). However, rotations can be used in conjunction with translations to change where the origin is to get the desired effect. For example, you could:
Translate the object so that it is centered around the origin
Rotate the object to the desired orientation
Translate the object back to the original position
Or, to phrase it in a different but functionally equivalent way (a code sketch follows these steps):
Move the origin to the object's position
Rotate the object to the desired orientation
Reset the origin back to its original position
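As a concrete illustration of those steps, here is a minimal sketch using GLM (the helper name and parameters are placeholders, not code from the tutorial), rotating a model matrix about an arbitrary pivot instead of the scene origin:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotate `model` by `angle` radians around `axis`, about the point `pivot`.
// With GLM the call written last is applied to the vertices first, so reading
// from the bottom up matches the steps above.
glm::mat4 rotateAround(const glm::mat4& model, float angle,
                       const glm::vec3& axis, const glm::vec3& pivot)
{
    glm::mat4 m(1.0f);
    m = glm::translate(m, pivot);      // 3. move the origin back
    m = glm::rotate(m, angle, axis);   // 2. rotate around the origin
    m = glm::translate(m, -pivot);     // 1. move the pivot to the origin
    return m * model;
}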
Related question: Rotating an object around a fixed point in opengl
You just need to pay attention to what you are rotating around.
A way to fix this: Rotate it first and then translate it. Rotate the object while it is at the origin and then translate the object to where you want it.
Repeat this whenever things change throughout your program: start the object at the origin, do the desired rotation, and then translate it out to its final resting position.
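For example, with GLM that ordering looks like this (just a sketch; jointPosition, angle, and axis are placeholder values), with the translation multiplied on the left so it is applied after the rotation:
// Rotate while the object sits at the origin, then translate it into place.
// Matrices apply right-to-left, so the rotation acts on the vertices first.
glm::mat4 model = glm::translate(glm::mat4(1.0f), jointPosition)
                * glm::rotate(glm::mat4(1.0f), angle, axis);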
I currently have a ray tracer that can read .obj models and then render the objects described in them. Until now, I was basically working with .obj models whose vertices were close to the origin, generally within about 10 units and at most around 100.
Now I have downloaded a different model whose vertices are far away from the origin, always at least hundreds of units away, with some vertices around 5000 units away along some axes.
The problem is that now I cannot focus the entire car!
One of my tests placed the camera at a distance of -3639 from the origin.
And the result was this:
Then I stepped the camera further away, to -4639, and this was produced:
Changing my approach, I decided to move the camera closer, placing it at -2639.
The result:
So at -2639 I am able to see the entire car, but it does not fit in my field of view. At -3669 the light is already fading away for some reason.
I imagine it might be possible to see the full car properly lit using an intermediate distance between -2669 and -3669 and also experimenting with the field of view value, but there is something odd about the light not covering the entire car at -3669, and I would like to find out the reason.
So I would appreciate suggestions about the cause of this issue and about how to proceed in this kind of situation, that is, how to fit the entire car into view.
Your question mentions that you are changing the camera position. However, the images show the lit area changing between the various cases: just a spotlight in one case, and more of the car lit in another.
Most likely, in the third case, nothing of the car is lit, hence everything comes up black. Start by making the light stay the same when the camera moves, and see if that fixes your issue.
If you move the camera: it could help to look into the settings for the front and back (near and far) clipping planes.
If you don't move the camera: the FOV should be larger if the object is larger. I would avoid doing this, as it will likely lead to more problems once you load more than one object of different sizes.
Personally, I would scale the input from the file, ideally to some SI unit that makes sense.
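A minimal sketch of that idea, rescaling the vertex positions right after loading so the model fits in a unit-sized box around the origin (the function and its use of GLM are my own assumptions, not the asker's code):
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Recentre and rescale a loaded mesh so its longest side is one unit.
void normalizeMesh(std::vector<glm::vec3>& vertices)
{
    if (vertices.empty()) return;

    glm::vec3 lo = vertices[0], hi = vertices[0];
    for (const glm::vec3& v : vertices) { lo = glm::min(lo, v); hi = glm::max(hi, v); }

    glm::vec3 size = hi - lo;
    float extent = std::max(size.x, std::max(size.y, size.z));
    float scale = (extent > 0.0f) ? 1.0f / extent : 1.0f;
    glm::vec3 centre = 0.5f * (lo + hi);

    for (glm::vec3& v : vertices)
        v = (v - centre) * scale;   // vertices now lie roughly within [-0.5, 0.5]
}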
I have been using OpenGL for a while now and continue to stay positive about making progress. However, I have now hit an issue that I have been unable to solve, and it has been taking a while. The issue is that I would like to:
Create points on screen sequentially (to appear every second for example)
Move these points independently
So far I have two methods on paper. One is to upload all the vertices to a VBO up front and make each point visible (draw it) one at a time. The other is to create an empty VBO (allocated with NULL data) and upload the data for each point as it appears.
Note that I want to transform these points independently of each other. Can a uniform still be used for that? If so, how can I set this up so that I can draw a point, transform it, draw the next point, transform it, and so on?
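Something like the following loop is what I have in mind for the uniform approach (only a sketch; offsetLocation, offsets, and visiblePoints are placeholder names):
// For each point that should currently be shown: update its transform
// uniform, then draw just that single vertex from the VBO.
for (std::size_t i = 0; i < visiblePoints; ++i)
{
    glUniform2fv(offsetLocation, 1, &offsets[i][0]);    // per-point transform
    glDrawArrays(GL_POINTS, static_cast<GLint>(i), 1);  // draw the i-th point
}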
If I'm going about this completely wrong, or there is a better method, then please say so.
Many thanks!
For the past few weeks, I have been working on an algorithm that finds hidden surfaces of complex meshes and removes them. These hidden surfaces are completely occluded, and will never be seen. Due to the nature of the meshes I'm working with, there are a ton of these hidden triangles. In some cases, there are more hidden surfaces than visible surfaces. As removing them manually is prohibitive for larger meshes, I am looking to automate this with software.
My current algorithm consists of the following steps (a rough code sketch follows the list):
Generating several points on the surface of a triangle.
For each point, generate a hemisphere sampler aligned to the normal of the triangle.
Cast rays up into the hemispheres.
If fewer than a certain number of rays are unoccluded, I flag the triangle for deletion.
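Roughly, in code (sampleTrianglePoints, sampleCosineHemisphere, and sceneHit stand in for my real ray-tracer routines; this is a sketch of the idea, not the actual implementation):
#include <vector>
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, direction; };
struct Triangle { glm::vec3 a, b, c, normal; };

std::vector<glm::vec3> sampleTrianglePoints(const Triangle&);   // placeholder
glm::vec3 sampleCosineHemisphere(const glm::vec3& normal);      // placeholder
bool sceneHit(const Ray&);                                      // placeholder: does the ray hit anything?

// Flag a triangle as hidden if too few of its hemisphere rays escape the mesh.
bool isHidden(const Triangle& tri, int samplesPerPoint, int minUnoccluded)
{
    int unoccluded = 0;
    for (const glm::vec3& p : sampleTrianglePoints(tri))          // 1. points on the surface
    {
        for (int i = 0; i < samplesPerPoint; ++i)
        {
            glm::vec3 dir = sampleCosineHemisphere(tri.normal);   // 2./3. hemisphere ray
            Ray ray{ p + 1e-4f * tri.normal, dir };               // offset to avoid self-hits
            if (!sceneHit(ray))
                ++unoccluded;
        }
    }
    return unoccluded < minUnoccluded;                            // 4. flag for deletion
}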
However, this algorithm is causing a lot of grief, as it is very inconsistent. While some of the "occluded" faces are not detected as occluded by the algorithm, I'm more worried about clearly visible faces that get removed due to issues with the current implementation. Therefore, I'm mainly wondering about two things:
Is there a better way to find and remove these hidden surfaces than raytracing?
Should I investigate non-random ray generation? I'm currently generating random directions in a cosine-weighted hemisphere, which could be causing issues. The only reason I haven't investigated this is because I have yet to find an algorithm to generate evenly-spaced rays in a hemisphere.
Note: this is intended to be an object-space algorithm, i.e. visibility from any angle, not from a fixed camera.
I've actually never implemented ray tracing, but I have a few suggestions anyhow. As your goal is to detect every hidden triangle, you could turn the problem around and instead find every visible triangle.
I'm thinking of something along the lines of either:
Ray trace from the outside in, towards the centre or perpendicular to the surface, and mark any triangle that is hit as visible.
Cull all others.
or
Choose a view of your model.
Rasterize the model (for example, using a different colour for each triangle).
Mark any triangle that shows up in the rendered image as visible.
Change the orientation and repeat.
Cull all non-visible triangles.
The advantage of the last one is that it should be relatively cheap to implement using a graphics API, if you can read/write the pixels reliably.
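For the read-back step, something along these lines should work (a sketch that assumes triangle i was rasterized with the flat colour i+1 encoded in its RGB channels and that 0 is the background; the encoding is only an illustration):
#include <cstddef>
#include <vector>
#include <GL/gl.h>

// Mark every triangle whose index colour appears in the current framebuffer.
void markVisible(int width, int height, std::vector<bool>& visible)
{
    std::vector<unsigned char> pixels(static_cast<std::size_t>(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    for (std::size_t i = 0; i < pixels.size(); i += 4)
    {
        unsigned id = pixels[i] | (pixels[i + 1] << 8) | (pixels[i + 2] << 16);
        if (id != 0 && id - 1 < visible.size())
            visible[id - 1] = true;   // seen from this orientation
    }
}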
A disadvantage of both is the resolution needed: triangles inside small openings that should not be culled may be culled anyway, so the number of rays may become prohibitive (in the first algorithm) or you may need very large off-screen frame buffers (in the second).
A couple of ideas that may help.
Use a connectivity test to determine what is connected to your main model (if there is one).
Use a variant of Depth Peeling (I've used it to convert shells into voxels; once you know what is inside the models you want to keep, i.e. the voxels, you can intersect away the junk that you want to remove).
Create a connectivity graph and prune the graph based on the complexity of connected groups.
For the following code segment, my problem is that the two objects intersect, but the views (lower figure) are not correct: object 1 (the box) is inserted into the cylinder, yet in the side view (lower figure) it looks like the yellow box is behind the cylinder. How can I make it look like they intersect?
glColor3f(1,1,0);
drawobj1();   // draw box
glColor3f(1,0.5,0);
drawobj2();   // draw cylinder using gluCylinder
It is behind the cylinder. It is both inside and behind it. Part of the box is inside it, and part of it is behind it.
Imagine a fork embedded in the side of a can. You can rotate the can so that it appears like the cylinder in your diagram. The fork is still embedded in it, but from that angle you can only suspect that it is, based on what you know about the length of a fork.
Your problem is the lack of visual depth cues, brought on by the fact that this scene lacks lighting, textures, and everything else that your brain normally would use to actually interpret something.
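For instance, a minimal fixed-function setup like the following adds a light and depth testing, which restores those cues (the light position is an arbitrary illustrative value, and the drawing code is assumed to supply normals, e.g. via gluQuadricNormals for the cylinder):
glEnable(GL_DEPTH_TEST);                            // resolve which surface is in front
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_COLOR_MATERIAL);                        // keep the glColor3f() tints as material colour
GLfloat lightPos[] = { 1.0f, 1.0f, 2.0f, 0.0f };    // simple directional light
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);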