When to use Nvidia CUDA instead of plain OpenGL shaders [closed]

I'm wondering what the architectural benefits of the two approaches are: using Nvidia CUDA or OpenGL shaders for a computation. I want to determine which parts of my application would be better implemented in CUDA and which in OpenGL.
Certainly, if platform independence is not a concern, you gain finer-grained control over threads and memory by using CUDA. But to display results computed by CUDA through OpenGL, you have to go through a somewhat "tricky" interoperability API.
Are there any best practices for when to use each of these architectures, and when to use a combined approach?

Most rendering tasks would be far harder to implement in CUDA. Shaders are fully integrated with rendering APIs such as OpenGL, which automate and provide the most common and efficient tools for rendering geometry. Things like texture sampling and polygon rasterisation are built into every shading language. If you were to re-implement them in something like CUDA, it would not only take a lot of time and effort, it would probably also end up running slower than a shader.
CUDA is designed, in my view, for other complex computational work, such as physics simulations, that exploits the naturally scalable parallelism of the GPU. For plain rendering of geometry, there is little to gain from a compute framework like CUDA or OpenCL.
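
That said, the combined approach the question asks about is common in practice: simulate in CUDA, render with OpenGL, and share a buffer through the graphics interop API so the data never leaves the GPU. Below is a minimal sketch of that pattern, assuming a current GL context and a vertex buffer vbo already created with glBufferData; the kernel is a stand-in for real work and error handling is omitted.

    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    // Placeholder kernel: writes positions into the mapped GL buffer.
    __global__ void fillPositions(float4* pos, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) pos[i] = make_float4(i * 0.001f, 0.0f, 0.0f, 1.0f);
    }

    void computeIntoVbo(GLuint vbo, int numVerts) {
        // Register the GL buffer with CUDA (in real code, once at init time).
        cudaGraphicsResource* res = nullptr;
        cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsMapFlagsWriteDiscard);

        // Map it so the kernel can write into GL-owned memory directly.
        cudaGraphicsMapResources(1, &res);
        float4* dPtr = nullptr;
        size_t bytes = 0;
        cudaGraphicsResourceGetMappedPointer((void**)&dPtr, &bytes, res);

        fillPositions<<<(numVerts + 255) / 256, 256>>>(dPtr, numVerts);

        // Unmap before OpenGL uses the buffer again (e.g. glDrawArrays).
        cudaGraphicsUnmapResources(1, &res);
        cudaGraphicsUnregisterResource(res);
    }

This keeps both sides doing what they are best at: CUDA owns the computation, OpenGL owns the drawing, and no round trip through CPU memory is needed.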

Related

Interoperability between Metal Shading Language (MSL) and C++ Libraries [closed]

I'm developing an iOS app and have solved the Swift/C++ interoperability on the CPU side by wrapping the C++ classes in Objective-C. But on the GPU side, I don't know whether there is any way to call into and retrieve data from a pure C++ class (not MSL). Since MSL is C++-based, my intuition says yes, but I haven't found any information pointing that way...
I'm trying to use libraries like CGAL or FastNoise to update a massive number of particles.
Welcome!
I'm afraid you can't just access arbitrary data from any class in your GPU code. Programming for the GPU requires a very different paradigm: you have to explicitly send data to the GPU, trigger computational tasks (a.k.a. "kernels") to run on it, wait for their completion, and transfer the data back to CPU memory.
I recommend you check out some tutorials on Metal; there are sample projects in which image data is modified on the GPU using compute kernels.
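
A full Metal sample needs an Objective-C or Swift host, so purely as an illustration of that upload/dispatch/wait/readback pattern, here is the same flow written in CUDA C++; only the API names differ from Metal's.

    #include <cuda_runtime.h>
    #include <vector>

    // Stand-in for real per-particle work.
    __global__ void doubleValues(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main() {
        std::vector<float> host(1024, 1.0f);
        int n = (int)host.size();
        float* dev = nullptr;

        cudaMalloc(&dev, n * sizeof(float));                // allocate GPU memory
        cudaMemcpy(dev, host.data(), n * sizeof(float),
                   cudaMemcpyHostToDevice);                 // explicitly send data
        doubleValues<<<(n + 255) / 256, 256>>>(dev, n);     // trigger the kernel
        cudaDeviceSynchronize();                            // wait for completion
        cudaMemcpy(host.data(), dev, n * sizeof(float),
                   cudaMemcpyDeviceToHost);                 // transfer results back
        cudaFree(dev);
        return 0;
    }

In Metal the corresponding steps are creating an MTLBuffer, encoding a dispatch with a compute command encoder, committing the command buffer, and waiting for its completion. A C++ library like CGAL or FastNoise can only participate on the CPU side of that boundary; the kernel itself must be written in MSL.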

What is the difference between FreeGLUT vs GLFW? [closed]

My university started teaching a course that includes OpenGL programming. They have us use FreeGLUT to create a window and a context for OpenGL, but I found an online course at lynda.com about OpenGL that uses GLFW instead of FreeGLUT.
So I want to know which one I should use, and what the differences between the two are.
FreeGLUT:
Based on the GLUT API.
GLUT has been around for about as long as OpenGL itself.
Many tutorials and examples out there use GLUT.
Takes care of implementing the event loop and works through callbacks (good for simple stuff, though it makes things like precisely timed animation loops and low-latency input much harder).
GLFW:
Designed from scratch with the experience of other frameworks in mind.
Gives much finer control over context creation and window attributes.
GLFW 2 provided basic threading support functions (thread creation, synchronization); these were removed in GLFW 3.
GLFW 2 provided basic image file loading support; this was also removed in GLFW 3.
Gives very detailed access to input devices.
The event loop is under the programmer's control, which allows for much more precise timing and lower latency (see the sketch after this list).
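
For illustration, a minimal GLFW 3 skeleton showing the loop under the program's control (window size and title are arbitrary):

    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return -1;

        GLFWwindow* window = glfwCreateWindow(640, 480, "Demo", nullptr, nullptr);
        if (!window) { glfwTerminate(); return -1; }
        glfwMakeContextCurrent(window);

        // The program owns the loop: it decides exactly when to poll
        // input, update the simulation, and present the frame.
        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT);
            // ... update state and issue OpenGL draw calls here ...

            glfwSwapBuffers(window);  // present the frame
            glfwPollEvents();         // process input at a time of our choosing
        }
        glfwTerminate();
        return 0;
    }

With GLUT, the equivalent program would instead register display and idle callbacks and hand control to glutMainLoop, which is exactly the loss of control over timing described above.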

CSG Modeling in OpenGL [closed]

I am dealing with Constructive Solid Geometry (CSG) modeling in OpenGL.
I want to know how to implement the binary operations. I read something about the Goldfeather algorithm, and I know about OpenCSG, but after reading its source code I found it too complicated to understand. I just need a simple, short OpenGL example of how to implement this.
There's no restriction on the algorithm as long as it is easy to implement.
OpenGL will not help you. OpenGL is a rendering library/API. It draws points, lines and triangles; it's up to you to tell it what to draw. OpenGL does not maintain a scene, nor does it even have a notion of coherent geometric objects. Hence CSG is not something that belongs in OpenGL.
Nicol Bolas is correct: OpenGL will not help with CSG; it only provides a way to draw 3D things onto a 2D screen.
OpenCSG is essentially "fake" CSG, using OpenGL's depth buffers, stencils and shaders to make it appear that a boolean operation has been performed on the 3D objects.
CSG is a huge task, and I doubt you will ever find an algorithm that is easy to understand.
Have a look at this project: http://code.google.com/p/carve/ which performs CSG on the triangles/faces, which you would then draw with OpenGL.

Ray tracing tutorial on GLSL? [closed]

I haven't found a good ray tracing tutorial for GLSL; I found a great one for CUDA, but I really want a GLSL one too. I read the Stanford Graphics paper on GPU ray tracing and I want to see a GLSL implementation.
Shading languages really aren't meant for ray tracing. The structure of a rasterizer just doesn't make them a good fit for most ray-tracing tasks. Yes, ray tracers can use rasterizers to do parallel ray computations, and that's useful. But the bulk of the algorithm doesn't fit the model of a rasterizer.
Indeed, now that there are GP-GPU-specific languages like OpenCL and CUDA, most of the research time and money is invested in them, not in shoehorning GP-GPU functionality into a rasterizer. It just isn't worth the effort to work around the limitations of a rasterizing pipeline to do ray tracing; you'll get better performance with a real GP-GPU language.
And isn't performance the whole reason to do GP-GPU in the first place?

opengl vbo advice [closed]

I have a model designed in Blender that I am going to render in my game engine (built using OpenGL) after exporting it to COLLADA. The model in Blender is divided into groups. These groups contain vertices, normals and texture coordinates, each with their own index. Is it better to render each group as a separate VBO, or should I render the whole model as a single VBO?
Rules of thumb are:
always do as few OpenGL state changes as possible;
always talk to OpenGL as little as possible.
The main reason is that talking to the GPU is costly. The GPU is much happier if you give it something that will take a long time to do and then don't bother it again while it's working. So a single VBO is likely the better solution, unless it would lead to a substantial increase in the amount of storage you need to use and hence work against caching elsewhere. A sketch of the single-VBO approach follows.
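
As a sketch of the single-VBO approach, assuming an interleaved position + texture-coordinate layout (the Group struct, attribute layout and texture handling here are illustrative, and a loader such as GLEW is assumed to be initialised):

    #include <GL/glew.h>
    #include <cstddef>

    // One range of vertices per Blender group, all in the same VBO.
    struct Group { GLint firstVertex; GLsizei vertexCount; GLuint texture; };

    GLuint uploadModel(const float* interleaved, size_t bytes) {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, bytes, interleaved, GL_STATIC_DRAW); // one upload
        return vbo;
    }

    void drawModel(GLuint vbo, const Group* groups, int numGroups) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);  // one buffer bind for all groups
        // Interleaved layout: position (3 floats) + texcoord (2 floats).
        const GLsizei stride = 5 * sizeof(float);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                              (void*)(3 * sizeof(float)));

        for (int i = 0; i < numGroups; ++i) {
            glBindTexture(GL_TEXTURE_2D, groups[i].texture);  // per-group state
            glDrawArrays(GL_TRIANGLES, groups[i].firstVertex, groups[i].vertexCount);
        }
    }

Each group still gets its own draw call so per-group state such as textures can change, but the vertex data is uploaded once and bound once, which is the "talk to OpenGL as little as possible" rule in action.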