I'm working on a graphics project trying to create an impossible cube in 3D. An impossible cube looks like this:
The trick behind it is that two of the edges are 'cut', and the picture is taken from a specific angle to give the illusion of impossibility.
Well, I'm trying to make this, but instead of a static image I want to be able to animate it (rotate it around) while maintaining the impossible properties.
I have managed to make a cube in Blender, as you can see in the screenshot below:
I would like to hear your suggestions as to how I can achieve the desired effect. One idea would be to make transparent the portion of an edge that has one or more edges behind it, so that every time the camera angle changes, the transparent patch moves along with it.
It doesn't have to be done in Blender exclusively, so any solutions in OpenGL etc. are welcome.
To give you an idea of what the end result should be, this is a link to such an illustration:
3D Impossible Cube Illusion Animation
It's impossible (heh). Try to imagine rotating the cube so that the impossibly-in-front bit moves to the left. As soon as it would "cross" the current leftmost edge, the two properties of "it's in front" and "it's in the back" cannot both be satisfied at the same time.
If you have back-face culling enabled but depth testing disabled, and draw the primitives in the right order, you should get the Escher cube without any need for cuts. This should be relatively easy to animate.
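To make that concrete, here is a minimal sketch of that state setup in legacy OpenGL; drawBackPortions() and drawFrontEdges() are hypothetical placeholders for your own drawing code, called in painter's order:
#include <GL/gl.h>

void drawBackPortions();  // hypothetical: the geometry that should read as "behind"
void drawFrontEdges();    // hypothetical: the edges that must impossibly sit "in front"

void drawEscherCube()
{
    glEnable(GL_CULL_FACE);    // hide faces pointing away from the camera
    glCullFace(GL_BACK);
    glDisable(GL_DEPTH_TEST);  // later draws always paint over earlier ones
    drawBackPortions();        // draw order replaces the depth buffer's sorting
    drawFrontEdges();
}
Because nothing is depth-tested, whatever you draw last wins, which is exactly the property the illusion needs.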
I'm making a game in which an object should move along a Bézier curve. I've already computed and successfully drawn the tangent and normal lines, but I can't seem to make the box move along the direction of the tangent. I'm really new to SFML; I hope someone could give me some advice and direction.
Edited: For example, I want to move the object from the (0,0) coordinate to (3,7) when I press only the right arrow key. I know that I should use the concepts of vectors and normalization, but I didn't understand the tutorial videos I watched about them.
There are two parts to this problem: detecting input and then actually moving your object. Assuming you've got the input covered (please say if not), I'll focus on moving the object:
If an object inherits from sf::Transformable, you will be able to use many transform functions such as setPosition(x,y) and move(x,y) (they do different things!)
A basic example based on yours, using sf::RectangleShape which inherits from sf::Transformable:
#include <SFML/Graphics.hpp>

sf::RectangleShape shape({5.f, 5.f});      // a square, 5 pixels wide
sf::Vector2f movementThisFrame(3.f, 7.f);  // this would be the value from your curve
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right))
    shape.move(movementThisFrame);
A few things to mention:
You may find using events for input works better, especially if you only want one action per press (isKeyPressed directly queries the key's state, whereas events notify you of presses and releases once each).
move() is relative to the current position, whereas setPosition() is absolute; don't forget that!
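Since the question mentions vectors and normalization: to move at a constant speed along the curve rather than jumping by the raw tangent, normalize the tangent (divide it by its length) and scale it by your speed and the frame time. A minimal sketch, assuming your curve code hands you a tangent vector; stepAlongTangent is an illustrative name, not an SFML function:
#include <SFML/Graphics.hpp>
#include <cmath>

// Turn a curve tangent into a fixed-speed movement offset for this frame.
sf::Vector2f stepAlongTangent(sf::Vector2f tangent, float speed, float dt)
{
    float len = std::sqrt(tangent.x * tangent.x + tangent.y * tangent.y);
    if (len == 0.f)
        return {0.f, 0.f};             // degenerate tangent: don't move
    sf::Vector2f dir = tangent / len;  // normalization: unit-length direction
    return dir * (speed * dt);         // distance covered this frame
}
You would then call shape.move(stepAlongTangent(tangent, speed, dt)); each frame while the key is held.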
I'm developing a rendering engine using OpenGL and I want to know:
Should duplicated vertices (for flat shading, we need to duplicate vertices, as we have two or more normals for a single vertex) be created in the model, or should an algorithm be implemented in the engine to work out when vertices need to be duplicated? An example would be a model of a rock, which has sharp edges and smooth surfaces.
It makes sense to me that the artist would duplicate vertices for sharp edges in the modelling software, as the engine has no idea what the artist's intentions are (in regard to model features). The engine could identify which vertices should be duplicated by checking the angle between face normals, but to me, doing this could override intended features of the model.
This is specifically for .obj models, as different exporters may (I haven't looked into it) provide options to cater for this need.
You should probably be defining the duplicate vertices yourself, insofar as they're not really duplicate vertices.
In Graphics Programming terms, a "vertex" is supposed to define all the necessary information to define a single point. This includes, but is not necessarily limited to: Position, Normal, Texture Coordinates, and Untextured Colors.
So in general, a vertex is only a "duplicate" if all of this defined data is identical (plus or minus epsilon) when comparing two points. If you write an algorithm to detect and remove such duplicates, I'd say there's no problem.
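For illustration, a minimal sketch of that duplicate test, assuming a vertex layout of position, normal, and texture coordinates (the Vertex struct and the epsilon value are placeholders for whatever your engine uses):
#include <cmath>

struct Vertex {
    float px, py, pz;  // position
    float nx, ny, nz;  // normal
    float u, v;        // texture coordinates
};

bool nearlyEqual(float a, float b, float eps = 1e-6f)
{
    return std::fabs(a - b) <= eps;
}

// Two vertices are duplicates only if EVERY attribute matches within epsilon.
bool isDuplicate(const Vertex& a, const Vertex& b)
{
    return nearlyEqual(a.px, b.px) && nearlyEqual(a.py, b.py) &&
           nearlyEqual(a.pz, b.pz) && nearlyEqual(a.nx, b.nx) &&
           nearlyEqual(a.ny, b.ny) && nearlyEqual(a.nz, b.nz) &&
           nearlyEqual(a.u,  b.u)  && nearlyEqual(a.v,  b.v);
}
A naive pairwise scan is O(n²); sorting the vertices or hashing quantized attributes is the usual way to make this fast on real meshes.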
Where you'll get a problem is when you're expecting an algorithm to accurately decide whether a vertex should be "smooth" or "flat", because no single algorithm will ever get it right. Especially in your case: if you expected the rock to always be smooth shaded (which is reasonable for a particularly worn rock), you'd probably be okay, but given that it needs both smooth surfaces and sharp edges, an angle-threshold algorithm will always get some of them wrong: there will be shallow angles that should nevertheless be shaded flat, and sharp angles that should nevertheless be shaded smoothly. You won't get it right unless the model itself provides those rules.
So, to sum up: just create the duplicate vertices in the model. Don't try to algorithm your way out of it. Most decent 3D modelling programs provide features that make this process relatively painless.
So my problem, as I said in the title, is that I have an image in a perspective view and I want to transform it into an orthographic view.
But as far as I can understand from this example, the distances from the camera to the NearClip plane and to the FarClip plane are required.
I was wondering whether I'm going about this completely wrong, and whether there is a way to accomplish it without knowing those distances?
If so, I suppose it's something easy such as a matrix multiplication, but after a few hours of research I turn to you, looking for any help that can come...
Thanks a lot!
Best regards!
EDIT: I will explain the context; maybe it can help.
I have a fisheye camera that took a panoramic picture (like the one below, for example).
And my final goal is to create 6 cube faces (6 images that correspond to the up, down, right, left, front and back faces of a cube, as seen from inside it). So I tried to use an equirectangular projection to create a picture that contains the 6 faces.
But the problem is that the fisheye captures a perspective view, so my 6 pictures are in perspective, and I want them to be orthographic... :'(
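For the cube-face step described above, the standard route goes through a 3D direction vector per output pixel. A minimal sketch, assuming the panorama has already been remapped to an equirectangular image; Vec3 and facePixelToLatLong are illustrative names, and dir/right/up are the basis vectors of the face being generated:
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a pixel (u, v) in [0,1]^2 on one cube face to longitude/latitude
// on the equirectangular panorama.
void facePixelToLatLong(Vec3 dir, Vec3 right, Vec3 up,
                        float u, float v, float& lon, float& lat)
{
    // Point on the unit cube face, then normalize to a view direction.
    float a = 2.f * u - 1.f;
    float b = 2.f * v - 1.f;
    Vec3 p { dir.x + a * right.x + b * up.x,
             dir.y + a * right.y + b * up.y,
             dir.z + a * right.z + b * up.z };
    float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    lon = std::atan2(p.z / len, p.x / len);  // -pi..pi, horizontal axis
    lat = std::asin(p.y / len);              // -pi/2..pi/2, vertical axis
}
Each face pixel then samples the panorama at ((lon + pi) / (2 pi) * width, (pi/2 - lat) / pi * height). Note this produces the usual perspective cube faces; it does not by itself answer the orthographic part of the question.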
No, this is not possible without making several assumptions, such as distances or object sizes.
Of course, you don't have any information about what is behind your objects from your perspective. That information would not be available even if you had the distances.
If that were possible, there would be no need for 3D imaging or telecentric lenses.
Of course, you could also assume that your objects are spheres; then you would know what to add in your reconstruction, but in general this is not viable.
This may be an old question, but the existing answer of "not possible" is not correct for pictures that are less extreme than the example. Photoshop has a Lens Correction tool, as does the free program GIMP. A tutorial for the Photoshop tool is at https://helpx.adobe.com/photoshop/using/correcting-image-distortion-noise.html#correct_lens_distortion_and_adjust_perspective showing that it can be done through Choose Filter > Lens Correction. And though you would need to know specific measurements from the camera or scene to correct the image perfectly, you can get pretty close by assuming that some objects have straight edges or that certain lines are parallel.
GIMP's tool is under Filters -> Distorts -> Lens Distortion, and some examples can be found at http://www.texturemate.com/content/how-easily-remove-lens-distortion-photos-using-gimp and there's a StackExchange answer for it at https://gamedev.stackexchange.com/questions/129415/converting-real-life-perspective-photos-into-orthographic-view-for-texture-creat
Neither of these may be extensive enough to un-distort an image from a fisheye lens, but these options are available to anyone who finds this page and hopes to adjust an image with more common distortions.
I have a landscape (generated via Perlin noise) and a ball. I want the ball to move along the surface like a geodesic, i.e. an implementation of basic physics: gravitation and friction.
I thought of raycasting around the ball toward the landscape, choosing the lowest point, and moving the ball to that point, but that won't work in every case and it won't allow the ball to jump (with inertia).
So, what is the best way/algorithm to implement such feature?
P.S. I don't want to use any libraries.
It'll take some time, but it's not THAT hard. You need to calculate the ball's new position while ignoring the height field entirely (gravity and inertia only), and then, after this step, check for collisions (basic collision detection between a sphere and a triangle mesh). If a collision is detected, generate the collision data and resolve it by applying an impulse OR a force in the appropriate direction, using the motion direction and the collision normal. If you've never worked with collision detection before, it'll probably take some extra time to learn the algorithms involved: how to detect a collision, and how to generate the collision data (normal, penetration depth, etc.). A sketch of the overall loop follows below.
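A minimal sketch of that integrate-then-resolve loop for a heightfield landscape. heightAt() and normalAt() are hypothetical functions over your Perlin data, and pushing the ball out vertically is an approximation that holds for gentle slopes:
#include <cmath>

struct Vec3 { float x, y, z; };

float heightAt(float x, float z);  // hypothetical: landscape height from the noise
Vec3  normalAt(float x, float z);  // hypothetical: unit surface normal at (x, z)

void stepBall(Vec3& pos, Vec3& vel, float radius, float dt)
{
    const float g = 9.81f;
    const float restitution = 0.4f;  // 0 = no bounce, 1 = perfectly elastic

    // 1. Integrate while ignoring the terrain (gravity + inertia only).
    vel.y -= g * dt;
    pos.x += vel.x * dt;
    pos.y += vel.y * dt;
    pos.z += vel.z * dt;

    // 2. Detect penetration of the sphere into the heightfield.
    float penetration = (heightAt(pos.x, pos.z) + radius) - pos.y;
    if (penetration > 0.f) {
        Vec3 n = normalAt(pos.x, pos.z);
        pos.y += penetration;  // push the ball back out (vertical approximation)

        // 3. Resolve: remove the velocity component going into the surface
        //    and reflect part of it back, an impulse-style response.
        float vn = vel.x * n.x + vel.y * n.y + vel.z * n.z;
        if (vn < 0.f) {
            vel.x -= (1.f + restitution) * vn * n.x;
            vel.y -= (1.f + restitution) * vn * n.y;
            vel.z -= (1.f + restitution) * vn * n.z;
        }
    }
}
Friction can be added in step 3 by also damping the velocity component tangent to the surface.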
Building your own physics library for this task can take months of individual effort. If you still don't want to bring an external engine into your project, I suggest at least checking open-source engines to see how they handle things.
I can suggest Bullet Physics for a start.
I'm getting 3D points from the Kinect via OpenNI. Let's say I have:
X = [93.7819,76.8463,208.386,322.069,437.946,669.999]
Y = [-260.147,-250.011,-230.717,-211.104,-195.538,-189.851]
Z = [958,942,950,945,940,955]
Those are the points I was able to capture from my moving ball. Now I would like to compute something like an interpolation or a least-squares fit with those points to know the trajectory of the ball. I could then know where the ball is going and where it will hit the wall.
I'm not sure which mathematical tool to use or how to translate it into C++. I've seen lots of resources for 2D interpolation (cubic, ...) or least squares, but it seems to be harder for 3D, or maybe I missed something.
Best regards
EDIT: the question was marked as too broad by moderators, so I will reduce the scope using the responses I got: if I use 2D polynomial regression on the three planes separately (thanks yephick), what can I use in C++ to implement it?
For what you are interested in there's hardly any difference between 3D and 2D.
All you do is work with the planes independently (the XY plane, the XZ plane, and the YZ plane). This reduces the complexity significantly and allows you to "draw" much simpler diagrams on a piece of paper while you work on the problem.
Once you have figured out the coordinates in each of the planes, it is quite trivial not only to reconcile the coordinates into 3D space but also to get an added benefit of error checking. For example, an X coordinate found in the XY plane should match (or be "close enough" to) the same X coordinate found in the XZ plane.
If the accuracy is not too critical, you don't even need to go higher than the first power of polynomial approximation, using just a plain old arithmetic average of two consecutive points.
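To address the C++ part of the edit: a least-squares polynomial fit per coordinate is small enough to implement without a library. A minimal sketch for a quadratic fit (a reasonable model for Y under gravity; the fitQuadratic name and the use of the sample index as t are illustrative choices), solving the 3x3 normal equations by Gaussian elimination:
#include <array>
#include <vector>

// Fit y(t) = c0 + c1*t + c2*t^2 to the samples by least squares.
std::array<double, 3> fitQuadratic(const std::vector<double>& t,
                                   const std::vector<double>& y)
{
    double A[3][4] = {};  // augmented normal-equation matrix [N | rhs]
    for (std::size_t k = 0; k < t.size(); ++k) {
        double p[3] = { 1.0, t[k], t[k] * t[k] };
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j)
                A[i][j] += p[i] * p[j];
            A[i][3] += p[i] * y[k];
        }
    }
    for (int i = 0; i < 3; ++i)            // forward elimination (the normal
        for (int r = i + 1; r < 3; ++r) {  // matrix is positive definite, so
            double f = A[r][i] / A[i][i];  // no pivoting is needed here)
            for (int c = i; c < 4; ++c)
                A[r][c] -= f * A[i][c];
        }
    std::array<double, 3> coeff{};
    for (int i = 2; i >= 0; --i) {         // back substitution
        double s = A[i][3];
        for (int j = i + 1; j < 3; ++j)
            s -= A[i][j] * coeff[j];
        coeff[i] = s / A[i][i];
    }
    return coeff;
}
Fitting X, Y and Z each against t = {0, 1, 2, ...} gives three coefficient sets; evaluating them at a future t extrapolates the trajectory toward the wall.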
You can use spline interpolation to create a smooth trajectory.
If you're not in the "mood" to implement it yourself, a quick Google search will give you open-source libraries like SINTEF's SISL that have such functionality.