GLSL: Off-screen texture sampling

What happens when a shader reaches the primitive edge and there is a
    color = texture2D(texture, vec2(texCoord.x + some_positive_value, texCoord.y));
somewhere in it? I mean, what value does color get in such a call, transparent black (0, 0, 0, 0)? There seems to be no error in doing this, but I really need to ask whether this is safe to use, and whether there are any visible artifacts to expect. I'm making a blur shader, and all the tutorials I've seen use this method to access adjacent pixels.

You define what happens. What you're after is "texture wrapping".
But there's still the problem with the blur itself. There is no data outside the texture, so either you apply a wrap mode (GL_CLAMP_TO_EDGE is probably what you want) and accept there will be imperfections, or render the input to the blur slightly larger.
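For example, the wrap mode is set on the application side when the texture is created. A minimal sketch, assuming a current GL context and a texture object called tex (the name is just for illustration):

    /* Choose how sampling outside the [0,1] range behaves.
       GL_CLAMP_TO_EDGE repeats the border texel, which is usually
       what a blur wants; GL_REPEAT would wrap to the opposite side. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);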
Possible imperfections are shown below. I've blurred a circle in GIMP before and after moving it past the edge, then filled the centre so you can see the difference better. Note the misshapen fourth circle, caused by the blur operation's assumption about how the colour continues outside the border.
Just so you know, texture2D applies filtering, which can be bypassed with texelFetch (note that texelFetch takes coordinates in pixels instead of normalized zero-to-one texture coordinates).

Related

How to draw an array of pixels directly to the screen with OpenGL?

I want to write pixels directly to the screen (not using vertices and polygons). I have investigated a variety of answers to similar questions, the most notable ones here and here.
I see a couple of ways drawing pixels to the screen might be possible, but they both seem to be indirect and use unnecessary floating-point operations:
Draw a GL_POINT for each pixel on the screen. I've tried this and it works, but it seems like an inefficient way to draw pixels onto the screen. Why write my data in floating point when it's going to be transformed into an array of pixel data?
Create a 2D quad that spans the entire screen and write a texture to it. Like the first option, this seems to be a roundabout way of putting pixels on the screen. The texture would still have to go through rasterization before getting put on the screen. Also, textures must be square, and most screens are not square, so I'd have to handle that problem.
How do I get a matrix of colors, where pixels[0][0] corresponds to the upper-left corner and pixels[1920][1080] corresponds to the bottom right, onto the screen in the most direct and efficient way possible using OpenGL?
Writing directly to the framebuffer seems like the most promising choice, but I have only seen people using the framebuffer for shading.
First off: OpenGL is a drawing API designed to make use of a rasterizer system that ingests homogeneous coordinates to define geometric primitives, which get transformed and, well, rasterized. Merely drawing pixels is not what the OpenGL API is concerned with. Also, most GPUs are floating-point processors by nature and can in fact process floating-point data more efficiently than integers.
Why write my data in floating-points when it's going to be transformed into an array of pixel data.
Because OpenGL is a rasterizer API, i.e. it takes primitive geometrical data and turns it into pixels. It doesn't deal with pixels as input data, except in the form of image objects (textures).
Also textures must be square, and most screens are not square, so I'd have to handle that problem.
Whoever told you that, or wherever you got that from: they are wrong. OpenGL-1.x had the constraint that textures had to be power-of-two sized in each direction, but width and height could differ. Ever since OpenGL-2, texture sizes are completely arbitrary.
However, a texture might not be the most efficient way to directly update single pixels on the screen either. It is, however, a great idea to first draw the pixels into a pixel buffer, which for display is loaded into a texture that then gets drawn onto a full-viewport quad.
However, if your goal is direct manipulation of on-screen pixels, without a rasterizer in between, then OpenGL is not the right API for the job. There are other, 2D graphics APIs that allow you to directly push pixels to the screen.
However, pushing individual pixels is very inefficient. I strongly recommend operating on a pixel buffer, which is then blitted or drawn as a whole for display. Doing it with OpenGL, by drawing a full-viewport textured quad, is as good and as efficient for this as any other graphics API.
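As a rough sketch of that pixel-buffer approach, assuming a current GL context, a 1920x1080 RGBA byte buffer called pixels and a texture object called tex (both names are made up here; drawing the full-viewport quad itself is omitted):

    /* Once, at startup: allocate texture storage matching the pixel buffer. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1920, 1080, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* Every frame: update the CPU-side buffer, upload it, then draw a
       full-viewport textured quad with this texture bound. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1920, 1080,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);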

Can "see through" objects in OpenGL

I'm not sure why this is happening; I'm only rendering a few simple primitive QUADs.
The red is meant to be in front of the yellow.
The yellow always goes in front of the red, even when it's behind it.
Is this a bug or simply me seeing the cube wrongly?
Turn the depth buffer and depth test on; otherwise OpenGL simply draws whatever is rendered last on top.
Your application needs to do at least the following to get depth buffering to work (a short code sketch follows these steps):
Ask for a depth buffer when you create your window.
Place a call to glEnable(GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.
Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.
Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT.
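In code, the GL side of those steps might look roughly like this (asking for a depth buffer happens at window/context creation and depends on your windowing library, so it is not shown):

    /* Once, after the context is created and made current. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);   /* the default comparison, shown for clarity */

    /* Every frame, before drawing: clear colour and depth together. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);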
See here http://www.opengl.org/resources/faq/technical/depthbuffer.htm
I had the same problem, but it was unrelated to the depth buffer, although I did see some change for the better when I enabled that. It had to do with the blend functions used, which combined pixel intensities at the last step of rendering. So I had to turn blending off rather than keep the glBlendFunc() setup.

GLSL object glowing

Is it possible to create a GLSL shader to get any object to be surrounded by a glowing effect?
Let's say I have a 3D cube, and if it's selected the cube should be surrounded by a blue glowing effect. Any hints?
Well, there are several ways of doing this. If each object is also represented in a winged-edge format, then it is trivial to calculate the silhouette and then extrude it to generate a glow. This, however, is very much a CPU method.
For a GPU method you could try rendering to an offscreen buffer with the stencil set to increment. If you then perform a blur on the image (though only writing to pixels where the stencil is non-zero), you will get a blur around the edge of the image, which can then be drawn into the main scene with alpha blending. This is more a blur than a glow, but it would be relatively easy to re-jig the brightness so that it renders a glow.
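A very rough sketch of the stencil marking described above, assuming the offscreen buffer was created with a stencil attachment and drawSelectedObject() stands in for your own draw call (the blur pass itself is omitted):

    /* While rendering the selected object into the offscreen buffer,
       increment the stencil value for every pixel it touches. */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawSelectedObject();   /* placeholder for your geometry */

    /* During the blur pass, only touch pixels where the stencil is non-zero. */
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);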
There are plenty of other methods too ... here are a couple of links for you to look through:
http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html
http://www.codeproject.com/KB/directx/stencilbufferglowspart1.aspx?display=Mobile
Have a hunt around on Google because there is lots of information :)

Blur with OpenGL?

I'm using OpenGL and drawing polygons in a 2D view. How could I blur a polygon without using GLSL, using only things like the stencil buffer and so on? Thanks
The normal method uses the accumulation buffer instead of the stencil buffer. You basically re-draw your polygon(s) a number of times, but change the viewing perspective slightly each time. Exactly what you change determines the style of blur you get. For example, if you want an effect like zooming a camera lens, you can change the view frustum slightly between frames. If you want motion blur, you change the camera view angle instead. With some extra work, you can do some slightly odd-looking effects, such as moving your viewpoint forward, and zooming back at the same time, so (most of) the scene remains roughly the same size, but the perspective you're viewing it from constantly changes.
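A loose sketch of that accumulation-buffer approach in legacy fixed-function GL, assuming the window was created with an accumulation buffer and that jitterX/jitterY and drawScene() are your own offsets and drawing code (all three names are placeholders):

    const int passes = 8;
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < passes; ++i) {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        /* Shift the view slightly each pass; the offsets decide the blur style. */
        glTranslatef(jitterX[i], jitterY[i], 0.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene();
        glPopMatrix();
        /* Add this pass into the accumulation buffer with equal weight. */
        glAccum(GL_ACCUM, 1.0f / passes);
    }
    /* Copy the accumulated average back into the colour buffer. */
    glAccum(GL_RETURN, 1.0f);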
There are two quick and dirty ways. GLSL or Cg is by far your best solution, especially if you need any decent blur (Gaussian, box, motion, etc). However, you can:
Render the image at a lower resolution, usually to a texture, then render that texture to the screen at high-res. It will blur the image, but you need to use trilinear or anisotropic filtering for it to look good. Usually it still won't, but those help.
Render the image to a texture, render once to screen with full opacity, then turn on blending, turn down alpha, and render the image shifted left a bit, right a bit, up a bit, down a bit... etc. You need low opacity for the in-front renders, but they will effectively blur the scene. You may also want to play with blending modes, SrcColor/InvSrcColor or DstColor/InvDstColor may be helpful.
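A loose sketch of that second option, assuming the scene has already been rendered into a texture and drawFullscreenQuad(dx, dy) is a helper of your own that draws that texture shifted by the given offset (the helper name and the offsets are purely illustrative):

    /* Base pass at full opacity. */
    glDisable(GL_BLEND);
    drawFullscreenQuad(0.0f, 0.0f);

    /* Low-alpha shifted passes accumulate a crude blur on top. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0f, 1.0f, 1.0f, 0.25f);
    const float off = 1.0f / 512.0f;   /* roughly one texel for a 512-wide target */
    drawFullscreenQuad(-off,  0.0f);
    drawFullscreenQuad( off,  0.0f);
    drawFullscreenQuad(0.0f, -off);
    drawFullscreenQuad(0.0f,  off);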
There are a few ways to do this without shaders, but none of them are optimal.

Antialiasing algorithm?

I have textures that I'm creating and would like to antialias them. I have access to each pixel's color; given this, how could I antialias the entire texture?
Thanks
I'm sorry, but true anti-aliasing does not consist of getting the average color from the neighbours as commented above. This will undoubtedly soften the edges, but it's not anti-aliasing, it's blurring. True anti-aliasing just cannot be done properly on a bitmap, since it has to be calculated at drawing time to tell which pixels and/or edges must be "softened" and which ones must not. For instance: imagine you draw a horizontal line which must be exactly 1 pixel thick (say "high") and must be placed exactly on an integer screen row coordinate. Obviously, you'll want it unsoftened, and a proper anti-aliasing algorithm will do that, drawing your line as a perfect row of solid pixels surrounded by perfect background-coloured pixels, with no tone blending at all. But if you take this same line once it's been drawn (i.e. as a bitmap) and apply the averaging method, you'll get blurring above and below the line, resulting in a horizontal line 3 pixels thick, which is not the goal. Of course, everything could be achieved through the right coding, but from a very different and much more complex approach.
The basic method for anti-aliasing is: for a pixel P, sample the pixels around it. P's new color is the average of its original color with those of the samples.
You might sample more or less pixels, change the size of the area around the pixel to sample, or randomly choose which pixels to sample.
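As a sketch of that basic averaging on a bitmap you already have, assuming a single-channel image stored as a width*height array of floats (the 3x3 neighbourhood is used here just as an example):

    /* Average each pixel with its 3x3 neighbourhood, skipping samples
       that fall outside the image. */
    void average_neighbours(const float *src, float *dst, int w, int h)
    {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float sum = 0.0f;
                int count = 0;
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                            continue;
                        sum += src[ny * w + nx];
                        ++count;
                    }
                }
                dst[y * w + x] = sum / (float)count;
            }
        }
    }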
As others have said, though: anti-aliasing isn't really something that's done to an image that's already a bitmap of pixels. It's a technique that's implemented in the 2D/3D rendering engine.
"Anti-aliasing" can refer to a broad range of different techniques. None of those would typically be something that you would do in advance to a texture.
Maybe you mean mip-mapping? http://en.wikipedia.org/wiki/Mipmap
It's not very meaningful to ask about antialiasing in terms this vague. It all depends on the nature of the textures and how they will be used.
Generally though, if you don't have control over the display environment and your textures might be scaled, rotated, or composited with other elements you don't have any control over, you should use transparency to indicate where your image ends and where it only partially fills a pixel.
It's also generally good to avoid sharp edges and features that are small relative to the amount of scaling that might be done.