SDL and Dynamic Super Resolution?

Is there a way to do DSR with SDL and OpenGL? As far as I know this is an NVidia thing (I have an NVidia card), so would this be something done in a shader? I can't find anything in the SDL reference and some googling around doesn't reveal anything either.

Off the top of my head, the best way to do that would be using framebuffer objects.
You do your rendering into an FBO (FBO Documentation) that is larger than your screen resolution, then you downsample that FBO into another framebuffer that fits the size of the screen, using a pixel shader.
This is pure OpenGL, so you should have no trouble doing it alongside SDL.
The OpenGL wiki has some snippets of code for rendering to an FBO, which should be useful to get you started. And since what you want to do is basically downsampling, you might be interested in this thread.
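A minimal sketch of that idea, assuming a 2x supersampling factor and a context with FBO support plus GL 3.0+ (or GL_EXT_framebuffer_blit) for the downsampling blit; screenW/screenH are placeholders for your window size:

    /* Render target at twice the window resolution. A real renderer would
       also attach a depth renderbuffer here. */
    GLuint fbo, colorTex;
    int bigW = screenW * 2, bigH = screenH * 2;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, bigW, bigH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    /* Draw the scene at the higher resolution. */
    glViewport(0, 0, bigW, bigH);
    /* drawScene(); */

    /* Downsample with a linear-filtered blit; replacing this with a
       fullscreen-quad pass and a pixel shader gives you control over the
       filter kernel. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, bigW, bigH, 0, 0, screenW, screenH,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
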

Related

point rendering in openGL and GLSL

Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I had spent a fair bit of time working in OpenCL that I realized it's hard to know whether it's right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, which was very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the OpenGL default program; I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only grab the pixels that are enclosed by polygons created from my points. I'm sure there is a way around this (it would be crazy for there not to be), but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
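To illustrate, a hypothetical shader pair for procedural halos (the uniform name and colour values are invented for this sketch; on GL 2.x you may also need glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) for gl_PointSize to take effect):

    /* Vertex shader: position the point and set its on-screen size. */
    const char *vsSrc =
        "#version 120\n"
        "uniform float pointSize;\n"
        "void main() {\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "    gl_PointSize = pointSize;\n"
        "}\n";

    /* Fragment shader: fade alpha with distance from the sprite centre,
       giving a soft halo that blending can composite additively. */
    const char *fsSrc =
        "#version 120\n"
        "void main() {\n"
        "    float d = distance(gl_PointCoord, vec2(0.5));\n"
        "    float alpha = 1.0 - smoothstep(0.0, 0.5, d);\n"
        "    gl_FragColor = vec4(1.0, 0.9, 0.6, alpha);\n"
        "}\n";
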
Simply draw point sprites (using GL_POINT_SPRITE) with the blend functions GL_SRC_ALPHA and GL_ONE, and the halos should become visible. Blending is what produces the halos, so look for more info on that topic.
You also have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
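In C, the state setup this answer describes looks roughly like this (particleCount is a placeholder):

    glEnable(GL_POINT_SPRITE);          /* not needed on a 3.2+ core profile */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);  /* additive: overlapping halos add up */
    glDepthMask(GL_FALSE);              /* keep depth test, skip depth writes */
    glDrawArrays(GL_POINTS, 0, particleCount);
    glDepthMask(GL_TRUE);
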

How to make textured fullscreen quad in OpenGL 2.0 using SDL?

Simple task: draw a fullscreen quad with a texture, nothing more, so we can be sure the texture will fill the whole screen space. (We will do some more shader magic later.)
Drawing a fullscreen quad with a simple fragment shader was easy, but now we have been stuck for a whole day trying to make it textured. We have read plenty of tutorials, but none of them helped. Those about SDL mainly use OpenGL 1.x; those about OpenGL 2.0 are not about texturing, or not about SDL. :(
The code is here. Everything is in colorLUT.c, and the fragment shader is in colorLUT.fs. The result is a window of the same size as the image, and if you comment out the last line in the shader, you get a nice red/green gradient, so the shader itself is fine.
Texture initialization hasn't changed compared to OpenGL 1.4, so those tutorials will work fine.
If the fragment shader works but you don't see the texture (you get a black screen), then texture loading is broken or the texture hasn't been set up correctly. Disable the shader and try displaying a textured polygon with fixed-function rendering.
You may want to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before initializing the texture. The default value is 4.
An easier way to align the texture to the screen is to add a vertex shader and pass texture coordinates through, instead of trying to calculate them using gl_FragCoord (see the sketch after this list).
You're passing the surface size into the "resolution" uniform. This is an error; you should be passing the viewport size instead.
You may want to generate mipmaps. Either generate them yourself, or use GL_GENERATE_MIPMAP, which is available in OpenGL 2 (but was deprecated in later versions).
OpenGL.org has specifications for OpenGL 2.0 and GLSL 1.10. Download them and use them as a reference when in doubt.
The NVIDIA OpenGL SDK has examples you may want to check; they cover shaders.
And there's the "OpenGL Orange Book" (OpenGL Shading Language), which specifically deals with shaders.
Next time, include the code in the question itself.
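To illustrate the vertex-shader tip from the list above, a minimal OpenGL 2.0-era sketch (shader sources as C strings; it assumes program has already been compiled and linked, and the quad vertices are given directly in clip space):

    const char *vsSrc =
        "#version 110\n"
        "void main() {\n"
        "    gl_TexCoord[0] = gl_MultiTexCoord0;  /* pass texcoords through */\n"
        "    gl_Position = gl_Vertex;             /* already in clip space */\n"
        "}\n";

    const char *fsSrc =
        "#version 110\n"
        "uniform sampler2D tex;\n"
        "void main() {\n"
        "    gl_FragColor = texture2D(tex, gl_TexCoord[0].st);\n"
        "}\n";

    /* Fullscreen quad in clip space; immediate mode for brevity. */
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "tex"), 0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
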

Render Mona Lisa to PBO

After reading this article I wanted to try the same thing, but to speed things up I wanted the rendering part to be performed on the GPU; it should be obvious why triangles (or any other geometric objects) are better rendered on the GPU than the CPU.
The task:
Render a 'set of vertices'.
Estimate the difference, pixel by pixel, between the rendered 'set of vertices' and the Mona Lisa image (the Mona Lisa lives on the GPU, in a texture or a PBO; no big difference).
The problem:
This arises when using OpenCL or CUDA together with the OpenGL FBO (Frame Buffer Object) extension.
In this case, according to our task:
Render the 'set of vertices' (handled by OpenGL).
Estimate the difference, pixel by pixel, between the rendered 'set of vertices' and the Mona Lisa image (handled by OpenCL or CUDA).
So in this case I'm forced to copy from the FBO to a PBO (Pixel Buffer Object) to make the rendered 'set of vertices' available to OpenCL/CUDA. I know how fast device-to-device memory copies are, but given that I need to do thousands of these copies, it makes sense to avoid them...
This problem leaves three choices:
Render with OpenGL directly to a PBO (somehow; I don't know how, and it might be impossible).
Render the image and estimate the difference between the images entirely with OpenGL (somehow; I don't know how, maybe by using shaders. The only problem is that I've never written a shader in my life, and this might take months of work for me...).
Render the image and estimate the difference between the images entirely with OpenCL/CUDA (I know how to do this, but it will also take months to get a stable and more or less optimized version of the renderer implemented in OpenCL or CUDA).
The question:
Can anybody help me with writing a shader for the above process, or maybe point out a way of rendering the Mona Lisa to a PBO without copies from the FBO?
My gut feeling is that the shader approach is also going to have the same I/O problem. You certainly can compare textures in a shader, as long as the GPU supports PS 4.0 or higher; but you've still got to get the source texture (the Mona Lisa) onto the device in the first place.
Edit: I've been digging around a bit, and this forum post might provide some insight:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=221384&page=1.
The poster, Komat, provides an example of the shader on the second page.
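For what it's worth, the per-pixel comparison itself is short in GLSL. A hypothetical fragment shader along these lines (the sampler names are invented) writes the absolute difference of the two images; the total error can then be reduced on the GPU, e.g. by repeatedly downsampling the result or reading back a small mipmap level:

    const char *diffFsSrc =
        "#version 110\n"
        "uniform sampler2D rendered;   /* your 'set of vertices' render */\n"
        "uniform sampler2D reference;  /* the Mona Lisa */\n"
        "void main() {\n"
        "    vec4 a = texture2D(rendered,  gl_TexCoord[0].st);\n"
        "    vec4 b = texture2D(reference, gl_TexCoord[0].st);\n"
        "    gl_FragColor = abs(a - b);\n"
        "}\n";
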

OpenGL - Using glTexImage2D to fill the entire screen with a texture

Two questions -
What is the best way to use a texture in OpenGL to fill the entire window?
I want to use glTexImage2D to take in an array of ints containing colour data; how would I go about doing this? (I've found a couple of pages of reference material on glTexImage2D, but a tutorial on using it would be great.)
Clarification:
I have done texturing before. I simply need help on these two specific parts.
glTexImage2D just uploads texture data, nothing more. Once you have your texture, draw a texture-mapped quad the size of the screen and you will draw your texture's pixels to the screen.
An orthographic projection is usually used for this.
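A hedged sketch of both parts together, assuming width, height, and a pixels array of packed 4-byte RGBA values (one int per texel):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Upload the colour array; the byte order within each int must match
       the RGBA layout requested here. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Orthographic projection so the unit quad maps to the whole window. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
    glEnd();

For per-frame updates, glTexSubImage2D on the existing texture is cheaper than re-creating it with glTexImage2D.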
NeHe provides tutorials for almost any OpenGL topic.
The first lesson on using textures is #6.
Also, you could just upload the pixels with glDrawPixels if you don't need to update them too often.
There is a nice example from NeHe on how to use textures here:

OpenGL primitives too dark when multitexturing?

I'm having a problem getting accurate primitive colours when I'm using multitexturing elsewhere in the scene. Basically, I have some lines and polygons that I am trying to render over a video texture (I'm using 3-stage multitexturing to create the video texture). Anyhow, I know the problem is not alpha-related. In fact, I know that if, in my texture update function, I just comment out the calls to glBindTexture() for texture units 1 and 2 (leaving texture unit 0), the primitive colour is fine. Is OpenGL trying to multitexture the primitives too (even though I'm obviously not setting texture coordinates for the primitives)?
Make sure to disable multitexturing when you're not using it. OpenGL is a state machine, so if you turn on a texture it will stay on until you explicitly turn it off.
Just because you're not setting coordinates doesn't mean OpenGL will assume you're not using the texture.
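For example, a sketch of resetting the units that the video pass uses (the unit count matches the 3-stage setup described above):

    /* Texture state is per-unit: disable each unit explicitly before
       drawing the untextured lines and polygons. */
    glActiveTexture(GL_TEXTURE2); glDisable(GL_TEXTURE_2D);
    glActiveTexture(GL_TEXTURE1); glDisable(GL_TEXTURE_2D);
    glActiveTexture(GL_TEXTURE0); glDisable(GL_TEXTURE_2D);
    /* ... draw primitives with plain colours ... */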