Closed. This question needs details or clarity. It is not currently accepting answers. (Closed 5 years ago.)
My goal is to implement a voxel terrain system for Unreal Engine, and things went well until I produced a bunch of chunks with jagged voxel terrains.
I used 2D simplex noise to calculate the height values. However, each chunk ended up with its own heightmap, which resulted in incoherent, jagged terrain at the chunk boundaries.
So, how can I create smooth terrain whose chunks all sample the same heightmap using 2D simplex noise?
To make the terrain seamless, you need to set up the underlying noise function in such a way that sampling at the edges of adjacent tiles produces the same values; it is impossible to obtain the desired result with noise functions that are completely independent from each other in each tile. The approach is discussed in detail here.
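The key idea can be sketched in a few lines: sample the noise in world coordinates (chunk origin plus local index) rather than restarting the coordinates in each chunk. The sketch below uses a simple hash-based value noise standing in for simplex noise, and the `CHUNK_SIZE` and `SCALE` constants are illustrative assumptions, not part of the original question:

```python
import math

def lattice_hash(ix, iy, seed=1337):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    h = (ix * 374761393 + iy * 668265263 + seed * 974711) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h & 0xFFFF) / 65536.0

def fade(t):
    # Smoothstep fade curve so the interpolation is continuous across cells.
    return t * t * (3.0 - 2.0 * t)

def value_noise2d(x, y):
    """2D value noise: bilinear interpolation of hashed lattice values."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    v00 = lattice_hash(ix, iy)
    v10 = lattice_hash(ix + 1, iy)
    v01 = lattice_hash(ix, iy + 1)
    v11 = lattice_hash(ix + 1, iy + 1)
    tx, ty = fade(fx), fade(fy)
    top = v00 + (v10 - v00) * tx
    bot = v01 + (v11 - v01) * tx
    return top + (bot - top) * ty

CHUNK_SIZE = 16   # voxels per chunk side (assumed)
SCALE = 0.05      # world-to-noise frequency (assumed)

def chunk_heights(chunk_x, chunk_y):
    """Heights for one chunk, sampled in WORLD coordinates, not local ones."""
    return [[value_noise2d((chunk_x * CHUNK_SIZE + i) * SCALE,
                           (chunk_y * CHUNK_SIZE + j) * SCALE)
             for j in range(CHUNK_SIZE + 1)]
            for i in range(CHUNK_SIZE + 1)]
```

Because every chunk evaluates the same global noise function, the last column of chunk (0, 0) is identical to the first column of chunk (1, 0), so the seams disappear; the same sampling pattern works with a real simplex noise implementation.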
Closed. This question needs details or clarity. It is not currently accepting answers. (Closed 6 years ago.)
I want to learn how I can track a moving object in OpenGL. The position of the object will be a continuous input. Also, what should happen when the object moves out of the screen?
You have to position and orient your camera towards the object. That means you will have to provide the correct View Matrix.
You can use functions such as gluLookAt() to generate a View Matrix that points towards a specific object.
If you don't know what a view matrix is, I suggest the tutorials at http://learnopengl.com; check out the camera page, which explains how camera matrices work in OpenGL.
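To show what gluLookAt() actually computes, here is a small sketch that builds the same kind of view matrix by hand (language-agnostic pseudo-implementation in Python; the math is what matters). Feeding the tracked object's position in as `target` each frame keeps the camera pointed at it:

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

def look_at(eye, target, up):
    """Build a row-major 4x4 view matrix, the same construction gluLookAt uses."""
    f = normalize(sub(target, eye))      # camera forward
    r = normalize(cross(f, up))          # camera right
    u = cross(r, f)                      # corrected up
    return [
        [ r[0],  r[1],  r[2], -dot(r, eye)],
        [ u[0],  u[1],  u[2], -dot(u, eye)],
        [-f[0], -f[1], -f[2],  dot(f, eye)],   # the camera looks down -Z
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, p):
    """Apply the 4x4 matrix to a 3D point (w = 1)."""
    return [m[row][0]*p[0] + m[row][1]*p[1] + m[row][2]*p[2] + m[row][3]
            for row in range(3)]
```

With `eye = (0, 0, 5)` and `target = (0, 0, 0)`, transforming the target yields `(0, 0, -5)`: straight ahead of the camera on the -Z axis, which is exactly the "camera follows the object" behaviour asked about. Because the camera is re-aimed every frame, the object can never leave the screen.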
Closed. This question needs to be more focused. It is not currently accepting answers. (Closed 7 years ago.)
I'm in the middle of implementing deferred shading in an engine I'm working on, and now I have to decide whether to store positions in a full RGB32F texture or reconstruct them from the depth buffer. It basically comes down to an RGB32F texel fetch versus a matrix-vector multiplication in the fragment shader, and the trade-off between memory and extra ALU operations.
Please direct me to useful resources and tell me your own experience with the subject.
In my opinion it is preferable to recalculate the position from depth; this is what I do in my deferred engine. The recalculation is fast enough that it doesn't even show up when I profile the render loop. That (virtually no performance impact), compared to ~24 MB of extra video memory usage for a 1920x1080 RGB32F texture, made it an easy choice for me.
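The reconstruction itself is just the inverse of the projection. Here is a round-trip sketch of the math (Python standing in for shader code; the FOV, aspect and near/far values are assumed for illustration): project a view-space point to NDC the way a standard OpenGL perspective matrix does, then recover it from NDC x/y plus the stored depth.

```python
import math

# Assumed perspective parameters (illustrative only).
FOV_Y = math.radians(60.0)
ASPECT = 16.0 / 9.0
NEAR, FAR = 0.1, 100.0
F = 1.0 / math.tan(FOV_Y / 2.0)
A = (FAR + NEAR) / (NEAR - FAR)       # the two projection entries that map z
B = 2.0 * FAR * NEAR / (NEAR - FAR)

def project(view_pos):
    """View space -> normalized device coordinates (OpenGL convention)."""
    x, y, z = view_pos                 # z is negative in front of the camera
    w = -z                             # clip-space w for a perspective matrix
    return [F / ASPECT * x / w, F * y / w, (A * z + B) / w]

def reconstruct(ndc):
    """NDC (as recovered from gl_FragCoord + the depth buffer) -> view space."""
    nx, ny, nz = ndc
    z = -B / (nz + A)                  # invert the projected depth
    w = -z
    return [nx * w * ASPECT / F, ny * w / F, z]
```

In a fragment shader the same thing is a handful of multiply-adds per pixel, which is why it doesn't show up in profiles.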
Closed. This question needs details or clarity. It is not currently accepting answers. (Closed 8 years ago.)
I'm searching for a solution to the following problem.
I have some geographical coordinates like this:
Lat. 32.5327 Lon. 95.5019 time 15:44:44
Lat. 32.5339 Lon. 96.1439 time 15:48:31
These are positions of an object and the times at which it was at those positions.
What I need is to compute, at some interval of time (30 seconds, for example), what the position of the object was between these points.
Interpolating over a sphere along the shortest path between two points would require something like slerp (spherical linear interpolation).
But for distances of less than 100 km you will end up with (more or less) a straight line, so don't bother and just do a linear interpolation.
As @chux pointed out, linear interpolation will exhibit significant artifacts when interpolating near the poles.
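For the short distances in the question, the linear version is a few lines. This sketch treats latitude/longitude as plain numbers and interpolates by timestamp at a fixed step (the 30-second default matches the example in the question; function names are mine):

```python
def to_seconds(hhmmss):
    """Parse 'HH:MM:SS' into seconds since midnight."""
    h, m, s = (int(part) for part in hhmmss.split(":"))
    return h * 3600 + m * 60 + s

def interpolate_track(p0, p1, step=30):
    """Linearly interpolated (lat, lon, t_seconds) fixes every `step` seconds."""
    lat0, lon0, t0 = p0[0], p0[1], to_seconds(p0[2])
    lat1, lon1, t1 = p1[0], p1[1], to_seconds(p1[2])
    out = []
    t = t0
    while t <= t1:
        f = (t - t0) / (t1 - t0)       # fraction of the way along the segment
        out.append((lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0), t))
        t += step
    return out
```

Running it on the two fixes from the question, `interpolate_track((32.5327, 95.5019, "15:44:44"), (32.5339, 96.1439, "15:48:31"))` yields one estimated position every 30 seconds along the segment. For long legs or high latitudes, swap the lerp for slerp on unit vectors as described above.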
Closed. This question is opinion-based. It is not currently accepting answers. (Closed 9 years ago.)
I'm implementing a Chessboard class to represent the chessboard, and I have to implement the possible transformations (reflections and rotations) of the board.
The possible transformations include combinations of:
1. Vertical Reflection
2. Horizontal Reflection
3. Diagonal Reflection
Thus, there are 8 possible transformations of the chessboard.
There are 64 squares on the chessboard, numbered [0..63].
Thus, the total number of values needed to represent all the transformations is 8*64 (number of transformations * board size).
There are two fundamental ways to represent the transformed_board using Arrays:
One-Dimensional Array with transformed_board[8*64]
Two-Dimensional Array with transformed_board[8][64]
Questions:
Which approach is better?
What are the pros and cons of each approach?
How will each affect performance with respect to time?
The memory layout is the same for both, so there isn't really any "real" difference whatsoever. It's just a matter of whether you want the compiler to do the offset calculation for you or not, so just go with the syntax you like better.
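The equivalence is easy to demonstrate: `transformed_board[t][s]` and `transformed_board[t*64 + s]` name the same element, with `t*64 + s` being exactly the offset calculation a C or C++ compiler emits for the 2D subscript. A small sketch (Python used for illustration; the dummy `t * 1000 + s` values are mine):

```python
N_TRANSFORMS, N_SQUARES = 8, 64

# Two-dimensional layout: one row per transformation.
table2d = [[t * 1000 + s for s in range(N_SQUARES)] for t in range(N_TRANSFORMS)]

# One-dimensional layout: the same values stored row after row,
# which is how a C array transformed_board[8][64] sits in memory.
flat = [t * 1000 + s for t in range(N_TRANSFORMS) for s in range(N_SQUARES)]

def at_flat(t, s):
    """Manual offset calculation: what the compiler does for [t][s]."""
    return flat[t * N_SQUARES + s]
```

Both layouts store 512 contiguous values and agree element for element, so the choice really is only about subscript syntax.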
Closed. This question needs to be more focused. It is not currently accepting answers. (Closed 16 days ago.)
I am dealing with Constructive Solid Geometry (CSG) modeling in OpenGL.
I want to know how to implement the boolean operations. I have read about the Goldfeather algorithm and I know about OpenCSG, but after reading its source code I found it too complicated to understand. I just need a simple, short OpenGL example of how to implement it.
There is no restriction on the algorithm, as long as it is easy to implement.
OpenGL will not help you. OpenGL is a rendering library/API: it draws points, lines and triangles, and it's up to you to tell it what to draw. OpenGL does not maintain a scene or even have a notion of coherent geometric objects. Hence CSG is not something that belongs in OpenGL.
Nicol Bolas is correct - OpenGL will not help with CSG, it only provides a way to draw 3D things onto a 2D screen.
OpenCSG essentially "fakes" CSG by using OpenGL's depth buffers, stencils and shaders to make it appear that 3D objects have had a boolean operation performed on them.
CSG is a huge task, and I doubt you will ever find an algorithm that is "easy to understand".
Have a look at this project: http://code.google.com/p/carve/ which performs CSG on triangles/faces that you would then draw with OpenGL.
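To make the "CSG lives in your geometry code, not in OpenGL" point concrete, here is the smallest possible CSG formulation: point-membership classification, where a solid is just a predicate and the boolean operations are logical combinations of predicates. This is only a conceptual sketch with axis-aligned boxes (all names are mine); a mesh-based library like carve does the much harder extra work of computing the boundary surface you would actually hand to OpenGL.

```python
# A solid is modelled as a membership predicate: point -> inside?
def box(xmin, ymin, zmin, xmax, ymax, zmax):
    """Axis-aligned box as a characteristic function."""
    return lambda p: (xmin <= p[0] <= xmax and
                      ymin <= p[1] <= ymax and
                      zmin <= p[2] <= zmax)

# The three CSG boolean operations are just logic on the predicates.
def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)
```

For example, with `a = box(0, 0, 0, 2, 2, 2)` and `b = box(1, 1, 1, 3, 3, 3)`, the solid `difference(a, b)` contains `(0.5, 0.5, 0.5)` but not `(1.5, 1.5, 1.5)`. Image-space techniques like OpenCSG's Goldfeather pass evaluate essentially this classification per pixel on the GPU instead of producing geometry.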