Tessellation in DirectX 11 and in OpenGL: what and how?

DirectX 11 supports tessellation. What is the real purpose of this feature?
And how can I determine whether it's working in applications built with DX11 or OpenGL?

There are many fields where tessellation can be applied.
For example, level of detail on a face: up close you can have 2-3 million triangles on the face without putting a heavy load on the GPU, and that way you achieve über graphics.
Another use is terrain; DICE uses it in their terrain system to enhance the overall look and make it more realistic.
http://publications.dice.se/publications.asp?show_category=yes&which_category=Rendering
So basically it's one of DX11's best features, giving you really good control over adding extra polygons to meshes (higher detail, better-looking graphics).
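For a feel of how this is exposed on the API side, here is a minimal C++ sketch using OpenGL 4 tessellation (the GL counterpart to the DX11 feature); the program/VAO handles and the level values are placeholders, not code from the DICE material above:

```cpp
// Minimal OpenGL 4 tessellation setup (C++). Assumes a program object that
// already links vertex, tessellation control, tessellation evaluation and
// fragment shaders -- those handles are hypothetical here.
#include <GL/glew.h>

void drawTessellatedPatches(GLuint program, GLuint vao, GLsizei vertexCount)
{
    glUseProgram(program);
    glBindVertexArray(vao);

    // Tell GL how many control points make up one patch (3 = triangle patch).
    glPatchParameteri(GL_PATCH_VERTICES, 3);

    // Optional: default tessellation levels, used when no control shader sets them.
    const GLfloat outer[4] = { 8.0f, 8.0f, 8.0f, 8.0f };
    const GLfloat inner[2] = { 8.0f, 8.0f };
    glPatchParameterfv(GL_PATCH_DEFAULT_OUTER_LEVEL, outer);
    glPatchParameterfv(GL_PATCH_DEFAULT_INNER_LEVEL, inner);

    // Patches (not triangles) are the primitive type fed to the tessellator.
    glDrawArrays(GL_PATCHES, 0, vertexCount);
}
```

In a real LOD scheme you would drop the default levels and let the tessellation control shader pick levels per patch based on camera distance.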

Related

Way to have a 3D animated/rigged character in OpenGL

If I want a 3D animated/rigged character in an OpenGL game, how would I get it into OpenGL? If I make an animated/rigged character in 3ds Max, is it possible to use that character in OpenGL? It would be helpful if someone could give a proper way, or a tutorial link, to export an animated model from 3D software to OpenGL.
OpenGL is a very low-level 3D API which only offers the bare bones. OpenGL can display triangles and fill them with color, and that's about it. It comes with some helper functions to manipulate point clouds, but it's really very basic.
Animation and rigging are way beyond what it can do. What you need is a framework which supports animation and rigging and then uses OpenGL to display the result.
Since you don't specify any requirements, it's hard to say which one would be good for you. Blender is probably a good place to start: its game engine runs on many platforms and supports OpenGL, animation and rigging.
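If you do end up rolling your own loader rather than using a full engine, one common route is the Open Asset Import Library (Assimp), which can read the Collada/FBX files that 3ds Max exports. A hedged C++ sketch of just the import step (skinning and bone animation, the hard part, are not shown):

```cpp
// Minimal Assimp import sketch (C++). Loads a mesh exported from 3ds Max
// (e.g. Collada/FBX) and copies positions into a flat array ready for a VBO.
// Skeletal animation itself (aiMesh::mBones, aiAnimation) is not shown here.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <vector>

std::vector<float> loadPositions(const char* path)
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(
        path, aiProcess_Triangulate | aiProcess_GenSmoothNormals);

    std::vector<float> positions;
    if (!scene || !scene->HasMeshes())
        return positions;                       // failed to load

    const aiMesh* mesh = scene->mMeshes[0];     // first mesh only, for brevity
    for (unsigned i = 0; i < mesh->mNumVertices; ++i) {
        positions.push_back(mesh->mVertices[i].x);
        positions.push_back(mesh->mVertices[i].y);
        positions.push_back(mesh->mVertices[i].z);
    }
    return positions;                           // upload with glBufferData
}
```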

Average triangles in DirectX 10

So I basically want to check some information for my project.
I have a GTX 460 video card. I wrote a DX10 program that draws 20k triangles on the screen, and I get 28 FPS in a Release build. Each of those triangles is issued with its own DrawIndexed call, so there is of course overhead from making that many draws.
Anyway, I would like to know: how many triangles could I draw on screen with this hardware, and at what FPS? I think 20k triangles is not nearly enough to load some good models into a game scene.
Sorry for my terrible English.
Sounds like you are issuing a single draw call per triangle primitive. This is very bad, hence the horrid FPS; you should aim to draw as many triangles as possible per draw call. This can be done in a few ways:
Profile your code: both NVIDIA and AMD have free tools to help you find out why your code is slow, letting you focus where it really matters, so use them.
Index buffers & triangle strips to reduce bandwidth
Grouping of verts by material type/state/texture to improve batching
Instancing of primitive groups: draw multiple models/meshes in one call (see the sketch after this list)
Remove as much redundant state change (setting of shaders, textures, buffers, parameters) as possible; this goes hand-in-hand with the grouping mentioned earlier
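As a rough illustration of the instancing point in the list above, here is a hedged C++/Direct3D 10 sketch; the device, buffers, vertex/instance structs and counts are placeholder names, and the input layout and shaders are assumed to be set up elsewhere:

```cpp
// Hypothetical D3D10 instancing sketch (C++): draws `instanceCount` copies of
// one indexed mesh in a single call instead of one DrawIndexed per triangle.
// `device`, `vb`, `instanceVb` and `ib` are assumed to be created elsewhere.
#include <d3d10.h>

struct Vertex       { float pos[3]; float normal[3]; }; // example per-vertex data
struct InstanceData { float world[16]; };                // example per-instance data

void drawMeshInstanced(ID3D10Device* device,
                       ID3D10Buffer* vb, ID3D10Buffer* instanceVb,
                       ID3D10Buffer* ib,
                       UINT indexCount, UINT instanceCount)
{
    ID3D10Buffer* buffers[2] = { vb, instanceVb };
    UINT strides[2] = { sizeof(Vertex), sizeof(InstanceData) };
    UINT offsets[2] = { 0, 0 };

    // Slot 0: per-vertex data, slot 1: per-instance data (e.g. a world matrix).
    device->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    device->IASetIndexBuffer(ib, DXGI_FORMAT_R32_UINT, 0);
    device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // One call for many meshes -- the whole point of batching/instancing.
    device->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
}
```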
The DX SDK has examples of implementing each of these. The exact number of triangles you can draw at a decent FPS (either 30 or 60 if you want vsync) varies greatly depending on the complexity of shading them; however, if they are drawn as simply as possible, you should be able to push a few million with ease.
I would recommend taking a good look at the innards of an open-source DX11 engine (not many DX10 projects exist, but the API is almost identical), such as Hieroglyph 3, and going through the SDK tutorials.
There are also quite a few presentations on increasing performance with DX10, but profile your code before diving into the suggestions full-on. Here are a few from the hardware vendors themselves (with color-coded hints for NVIDIA vs AMD hardware):
GDC '08
GDC '09

Curves representable in OpenGL

I am a beginner in CAD development and want to know some things about OpenGL.
My main objective is to represent conics, cycloids, epicycloids, hypocycloids, involutes, etc.
Can I directly represent them using some trigonometry, or do I need to convert these curves into B-splines?
I am currently developing the kernel and want to design it so that I can display the above-mentioned curves (there is no use in supporting these curves in the kernel if I can't graphically represent them!).
I don't know much about OpenGL, so please pardon me if my question is really stupid!
I tried searching over here but could not find anything useful.
OpenGL can directly render Bezier curves and surfaces using evaluators and even NURBS using the GLU API. See the OpenGL Programming Guide for more information. So you could transform those curves and surfaces into this form.
But I highly recommend that you not use these features, as they are deprecated (dropped from the core of newer OpenGL versions) and nowadays likely to be implemented in software rather than hardware.
Instead you should implement your own evaluation routines for such curves and surfaces, which evaluate the corresponding equations at a specified sampling rate and generate a simple vertex array (and maybe an index array). This way you stay future-proof, as these can be rendered as standard line strips or triangular meshes using VBOs (the only way to render something in modern OpenGL).
And you even stay API-agnostic, as a general vertex array can also be rendered using Direct3D or whatever. So this way you don't pollute your CAD kernel with draw calls. All it needs is a function to transform parametric curves and surfaces into arrays of vertices (and maybe indices), and the client/user of the kernel is responsible for drawing these with whatever API they like.
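To make the sampling idea concrete, here is a small C++ sketch that evaluates an epicycloid at a fixed sampling rate and fills a flat, API-agnostic vertex array; the radii and sample count are arbitrary example parameters:

```cpp
// Sample an epicycloid x(t), y(t) into a flat vertex array (C++).
// R = radius of the fixed circle, r = radius of the rolling circle.
// The output is API-agnostic: the kernel itself never issues draw calls.
#include <cmath>
#include <vector>

std::vector<float> sampleEpicycloid(float R, float r, int samples)
{
    std::vector<float> vertices;
    if (samples < 2)
        return vertices;
    vertices.reserve(3 * samples);

    const float twoPi = 6.28318530717958647692f;
    for (int i = 0; i < samples; ++i) {
        float t = twoPi * static_cast<float>(i) / (samples - 1);
        float x = (R + r) * std::cos(t) - r * std::cos((R + r) / r * t);
        float y = (R + r) * std::sin(t) - r * std::sin((R + r) / r * t);
        vertices.push_back(x);
        vertices.push_back(y);
        vertices.push_back(0.0f);   // planar curve, z = 0
    }
    return vertices;
}
```

The caller can then upload the array with glBufferData and draw it as a line strip, or hand it to any other API.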
If I am not wrong, OpenGL only works with flat polygons. Even so, you can check whether the GLUT libraries have any method to draw the aforementioned figures, or google for a .obj of those figures and scale, rotate and translate them to the desired position.

OpenGL Picking from a large set

I'm trying, in JOGL, to pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing 2x (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth me using it?
I've investigated multithreaded CPU approaches to picking these quads that completely sidestep OpenGL all together. Do you think a OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like color picking or similar techniques very much, because they seem too impractical for most situations. I never understood why so many tutorials aimed at people who are new to OpenGL (or even programming) focus on a kind of picking that is just useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh in a model you clicked on: impractical.
If you have a large number of quads you will probably need a good spatial partitioning, or at least (better: also) a scene graph. Ok, you don't need this, but it helps A LOT. Look at some tutorials on scene graphs for further information; it's a good thing to know if you are starting with 3D programming, because you get to know a lot of concepts and not only OpenGL code.
So what to do now to start with some picking? Take the inverse of your modelview matrix (iirc with gluUnProject(...)) at the position of your mouse cursor. With the orientation of your camera you can now cast a ray into your spatial structure (or your scene graph that holds a spatial structure). Now check for collisions with your quads. I currently have no link, but if you search for the inverse modelview matrix you should find pages that explain this better and in more detail than would be practical here.
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand this technique.
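A hedged C++ sketch of that unproject-and-raycast step (in JOGL the equivalent calls live on the GLU class): build a world-space ray from the mouse position, then test it against each quad; the Quad representation here (a corner plus two perpendicular edge vectors, as for billboards) is just an illustrative assumption:

```cpp
// Build a picking ray from a mouse position with gluUnProject (C++), then test
// it against a quad. The Quad struct is a hypothetical representation:
// a corner point plus two perpendicular edge vectors (fine for billboards).
#include <GL/glu.h>
#include <cmath>

struct Vec3 { double x, y, z; };
struct Quad { Vec3 corner, edgeU, edgeV; };

static Vec3   sub(Vec3 a, Vec3 b)   { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3   cross(Vec3 a, Vec3 b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
static double dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Unproject the mouse at the near and far plane to get a world-space ray.
void mouseRay(double mx, double my,
              const GLdouble model[16], const GLdouble proj[16],
              const GLint view[4], Vec3& origin, Vec3& dir)
{
    double winY = view[3] - my;               // GL's window origin is bottom-left
    Vec3 nearP, farP;
    gluUnProject(mx, winY, 0.0, model, proj, view, &nearP.x, &nearP.y, &nearP.z);
    gluUnProject(mx, winY, 1.0, model, proj, view, &farP.x,  &farP.y,  &farP.z);
    origin = nearP;
    dir    = sub(farP, nearP);
}

// Ray/quad test: intersect the quad's plane, then check the (u, v) bounds.
bool hitQuad(Vec3 origin, Vec3 dir, const Quad& q, double& tOut)
{
    Vec3 n = cross(q.edgeU, q.edgeV);
    double denom = dot(n, dir);
    if (std::fabs(denom) < 1e-12) return false;            // ray parallel to quad
    double t = dot(n, sub(q.corner, origin)) / denom;
    if (t < 0.0) return false;                              // behind the camera
    Vec3 p   = { origin.x + t*dir.x, origin.y + t*dir.y, origin.z + t*dir.z };
    Vec3 rel = sub(p, q.corner);
    double u = dot(rel, q.edgeU) / dot(q.edgeU, q.edgeU);
    double v = dot(rel, q.edgeV) / dot(q.edgeV, q.edgeV);
    if (u < 0.0 || u > 1.0 || v < 0.0 || v > 1.0) return false;
    tOut = t;
    return true;
}
```

Feed the returned t values through your quadtree/scene graph traversal and keep the smallest one to get the front-most quad.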
I've never faced such a problem, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, maybe you can group the quads by space to avoid testing all of them. For example, you can group the quads into two boxes and first test which box is hit, so you only test the quads inside it.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can hurt the asynchronous behaviour between the GPU and CPU).
Another possibility seems to me using transform feedback and a geometry shader that does the scissor test. The GS can then discard all faces that do not contain the mouse position. The transform feedback buffer contains then exactly the information about hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach works also nice with instancing (additionally write the instance id to the buffer)
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
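For reference, the CPU-side plumbing for that transform feedback idea could look roughly like this C++ sketch; the geometry shader in `program` is assumed to discard faces not under the cursor and to emit one point with a float varying named `hitDepth` (a made-up name) per surviving face:

```cpp
// Rough transform feedback capture setup (C++/OpenGL). The geometry shader in
// `program` is assumed to do the scissor test and write one "hitDepth" float
// (hypothetical varying name) per face that contains the mouse position.
#include <GL/glew.h>
#include <vector>

std::vector<float> capturePickDepths(GLuint program, GLuint vao,
                                     GLsizei vertexCount, GLsizei maxHits)
{
    // Buffer that will receive the captured varyings.
    GLuint tfo = 0;
    glGenBuffers(1, &tfo);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfo);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, maxHits * sizeof(float),
                 nullptr, GL_DYNAMIC_READ);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfo);

    // The varying list must be set before linking; normally done once at startup.
    const char* varyings[] = { "hitDepth" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);

    // We only want the captured data, not pixels, so skip rasterization.
    glUseProgram(program);
    glBindVertexArray(vao);
    glEnable(GL_RASTERIZER_DISCARD);

    GLuint query = 0;
    glGenQueries(1, &query);
    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
    glBeginTransformFeedback(GL_POINTS);      // must match the GS output type
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glEndTransformFeedback();
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);
    glDisable(GL_RASTERIZER_DISCARD);

    // Read back how many faces were hit and their depths.
    GLuint written = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &written);
    std::vector<float> depths(written);
    if (written > 0)
        glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                           written * sizeof(float), depths.data());
    glDeleteQueries(1, &query);
    glDeleteBuffers(1, &tfo);
    return depths;
}
```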
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.

How to do ray tracing in modern OpenGL?

So I'm at the point where I should begin lighting my flatly colored models. The application is a test case for implementing only the latest methods, so I realized that ideally it should implement ray tracing (since, theoretically, it might be ideal for real-time graphics in a few years).
But where do I start?
Assume I have never done lighting in old OpenGL, so I would be going directly to non-deprecated methods.
The application has currently properly set up vertex buffer objects, vertex, normal and color input and it correctly draws and transforms models in space, in a flat color.
Is there a source of information that would take one from flat colored vertices to all that is needed for a proper end result via GLSL? Ideally with any other additional lighting methods that might be required to complement it.
I would not advise trying actual ray tracing in OpenGL, because you need a lot of hacks and tricks for that and, if you ask me, there is no point in doing this anymore at all.
If you want to do ray tracing on GPU, you should go with any GPGPU language, such as CUDA or OpenCL because it makes things a lot easier (but still, far from trivial).
To illustrate the problem a bit further:
For ray tracing, you need to trace secondary rays and test for intersection with the geometry. Therefore, you need access to the geometry in some clever way inside your shader. However, inside a fragment shader you cannot access the geometry unless you store it "coded" into some texture. The vertex shader also does not provide you with this geometry information natively, and geometry shaders only know their neighbors, so here the trouble already starts.
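One common workaround for that access problem is to pack the triangle data into a buffer texture that the fragment shader can then read with texelFetch. A hedged C++ sketch of the upload side (the one-RGBA32F-texel-per-vertex layout is just an illustrative choice):

```cpp
// Pack triangle vertex positions into a texture buffer object (C++/OpenGL 3.1+)
// so a ray tracing fragment shader can fetch them with texelFetch(samplerBuffer, i).
// One RGBA32F texel per vertex (w is padding) -- an illustrative layout only.
#include <GL/glew.h>
#include <vector>

GLuint createSceneBufferTexture(const std::vector<float>& xyzw, GLuint& bufferOut)
{
    glGenBuffers(1, &bufferOut);
    glBindBuffer(GL_TEXTURE_BUFFER, bufferOut);
    glBufferData(GL_TEXTURE_BUFFER, xyzw.size() * sizeof(float),
                 xyzw.data(), GL_STATIC_DRAW);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_BUFFER, tex);
    // Attach the buffer's storage to the texture; the shader sees a samplerBuffer.
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, bufferOut);
    return tex;
}
```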
Next, you need acceleration data-structures to get any reasonable frame-rates. However, traversing e.g. a Kd-Tree inside a shader is quite difficult and if I recall correctly, there are several papers solely on this problem.
If you really want to go this route, though, there are a lot of papers on the topic; it should not be too hard to find them.
A ray tracer requires extremely well-designed access patterns and caching to reach good performance. However, you have little control over these inside GLSL, and optimizing the performance can get really tough.
Another point to note is that, at least to my knowledge, real-time ray tracing on GPUs is mostly limited to static scenes, because e.g. kd-trees only work (well) for static scenes. If you want dynamic scenes, you need other data structures (e.g. BVHs, iirc?), but then you constantly need to maintain those. If I haven't missed anything, there is still a lot of research going on just on this issue.
You may be confusing some things.
OpenGL is a rasterizer. Forcing it to do raytracing is possible, but difficult. This is why raytracing is not "ideal for real time graphics in a few years". In a few years, only hybrid systems will be viable.
So, you have three possibilities.
Pure ray tracing. Render only a fullscreen quad, and in your fragment shader read your scene description packed into a buffer (like a texture), traverse the hierarchy, and compute ray-triangle intersections (see the sketch after this list).
Hybrid ray tracing. Rasterize your scene the normal way, and use ray tracing in your shader on the parts of the scene that really require it (refraction, ... though it can be simulated in rasterization).
Pure rasterization. The fragment shader does its normal job.
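Since all of these options ultimately come down to a ray-triangle test somewhere, here is a plain C++ sketch of the standard Möller-Trumbore intersection; in the pure ray tracing option this is the kind of routine you would port into the fragment shader that reads the packed scene buffer:

```cpp
// Möller-Trumbore ray/triangle intersection (C++). Returns true and the ray
// parameter t if the ray origin + t*dir hits triangle (v0, v1, v2).
// This is the CPU form; a GLSL port of the same math goes in the shader.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool rayTriangle(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;       // ray parallel to triangle
    float inv = 1.0f / det;

    Vec3 s = sub(origin, v0);
    float u = dot(s, p) * inv;                    // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;

    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                  // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;

    t = dot(e2, q) * inv;                         // distance along the ray
    return t > eps;
}
```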
What exactly do you want to achieve? I can improve the answer depending on your needs.
Anyway, this SO question is highly related. Even if that particular implementation has a bug, it's definitely the way to go. Another possibility is OpenCL, but the concept is the same.
As of 2019, ray tracing is an option for real-time rendering, but it requires high-end GPUs that most users don't have.
Some of these GPUs are designed specifically for ray tracing.
OpenGL currently does not support hardware-accelerated ray tracing.
DirectX 12 on Windows does have support for it. It is recommended to wait a few more years before creating a ray-tracing-only renderer, although it is possible using DirectX 12 with current desktop and laptop hardware. Support on mobile may take a while.
OpenGL (GLSL) can be used for ray (path) tracing. However, there are a few better options: NVIDIA OptiX (CUDA toolkit, cross-platform), DirectX 12 (with the DXR ray tracing extension, Windows only), Vulkan (with the VKR ray tracing extension, cross-platform and widely used), Metal (macOS only), Falcor (a framework built on DXR, VKR and OptiX), and Intel Embree (CPU ray tracing only).
I found some of the other answers verbose and wordy. For visual examples showing that YES, functional ray tracers absolutely CAN be built using the OpenGL API, I highly recommend checking out some of the projects people are making on https://www.shadertoy.com/ (warning: lag).
To answer the topic: OpenGL has no RTX extension, but Vulkan does, and interop is possible. Example here: https://github.com/nvpro-samples/gl_vk_raytrace_interop
As for the actual question: to light the triangles, there are tons of techniques; look up "forward", "forward+" or "deferred" renderers. The technique to use depends on your goal. The simplest and best-looking these days is image-based lighting (IBL) with physically based shading (PBS). Basically, you use a cubemap and blur it more or less depending on the glossiness of the object. For a simple object viewer you don't need more.