I'm trying to put some blur around a sun, to give it a glow effect.
I'm using the method here:
http://nehe.gamedev.net/tutorial/radial_blur__rendering_to_a_texture/18004/
Basically, I draw a sphere; the code then grabs the current buffer as a texture and redraws it a number of times, stretching it each time.
This works great when the object is in the centre of the screen. However, I want to have it offset to, say, the top right.
This is the main part I believe that needs adjusting:
glColor4f(1.0f, 1.0f, 1.0f, alpha); // Set The Alpha Value (Starts At 0.2)
glTexCoord2f(0+spost,1-spost); // Texture Coordinate ( 0, 1 )
glVertex2f(0,0); // First Vertex ( 0, 0 )
glTexCoord2f(0+spost,0+spost); // Texture Coordinate ( 0, 0 )
glVertex2f(0,480); // Second Vertex ( 0, 480 )
glTexCoord2f(1-spost,0+spost); // Texture Coordinate ( 1, 0 )
glVertex2f(640,480); // Third Vertex ( 640, 480 )
glTexCoord2f(1-spost,1-spost); // Texture Coordinate ( 1, 1 )
glVertex2f(640,0); // Fourth Vertex ( 640, 0 )
For the life of me though, I can't work out how to offset it each time so that the blur is centred on the sun instead of the middle of the screen. I understand that the whole screen is being captured, but there must be a way to offset this when the texture is drawn...
How?
This maybe isn't a direct answer to your question, but you shouldn't focus too much on effects via the fixed pipeline. The NeHe tutorials are good, but a bit outdated. I recommend you just skim through the basics and incorporate shaders in your code. They're much faster and will let you create much more complex effects more easily.
Generally, if you want to scale around a point (sx,sy), you need to translate your world so that (sx,sy) is at the origin, do your scaling, and translate in the reverse so that (sx, sy) is back where it was originally.
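A minimal sketch of that applied to the blur quad, assuming sunX/sunY (names of my choosing) hold the sun's position in the same 640x480 screen space and zoom is the per-pass stretch factor:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(sunX, sunY, 0.0f);    // 3. move the pivot back where it was
glScalef(zoom, zoom, 1.0f);        // 2. the stretch step of the radial blur
glTranslatef(-sunX, -sunY, 0.0f);  // 1. bring the pivot to the origin
// ... draw the textured 640x480 quad exactly as before ...
glPopMatrix();
(The numbering in the comments is the order in which the transformations are applied to the vertices; OpenGL multiplies matrices in call order, so the last call acts first.)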
An image says a thousand words, so what about two? I have this map art:
In order to actually use this as a map I scale this texture 6 times. This however didn't go as expected:
All the OpenGL code is in my homebrew 2D OpenGL rendering library, and since OpenGL is a state machine it is hard to document the whole rendering process. But here is roughly what I do (the code is Python):
width, height = window.get_size()
glViewport(0, 0, width, height)
glMatrixMode(GL_PROJECTION)
glPushMatrix()
glLoadIdentity()
glOrtho(0.0, width, height, 0.0, 0.0, 1.0)
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glLoadIdentity()
# displacement trick for exact pixelization
glTranslatef(0.375, 0.375, 0.0)
glDisable(GL_DEPTH_TEST)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnable(self.texture.target) # GL_TEXTURE_2D
glBindTexture(self.texture.target, self.texture.id)
glTexParameteri(self.texture.target, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameteri(self.texture.target, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
glPushMatrix()
glTranslatef(self.x, self.y, 0.0) # self.x and self.y are negative int offsets
glScalef(self.scale_x, self.scale_y, 1.0) # scale_x and scale_y are both 6
glBegin(GL_QUADS)
glTexCoord2i(0, 0)
glVertex2i(0, 0)
glTexCoord2i(0, 1)
glVertex2i(0, self.texture.height)
glTexCoord2i(1, 1)
glVertex2i(self.texture.width, self.texture.height)
glTexCoord2i(1, 0)
glVertex2i(self.texture.width, 0)
glEnd()
glPopMatrix()
glDisable(self.texture.target)
However, this "blurring" bug doesn't occur when I use GL_TEXTURE_RECTANGLE_ARB. I'd like to also be able to use GL_TEXTURE_2D, so can someone please point out how to stop this from happening?
When in doubt, replace your texture with a same-size black-and-white checkerboard (1px cells). It will give you a good sense of what is going on: uniform gray means the displacement is wrong; waves mean the scaling is wrong.
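For instance, a quick sketch (w, h and textureId are assumed to match the texture under test; needs stdlib.h and the GL headers):
unsigned char *check = malloc(w * h * 4);
for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        unsigned char v = ((x ^ y) & 1) ? 255 : 0;  // alternate every pixel
        unsigned char *p = check + (y * w + x) * 4;
        p[0] = p[1] = p[2] = v;
        p[3] = 255;
    }
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, check);
free(check);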
Make sure you don't have mip-maps automatically generated and used.
Generally you don't need any special displacement; texels match pixels with a properly set up glOrtho.
Another important issue: use power-of-two textures, as older GPUs could use various schemes to support NPOT textures (scaling or padding transparently to the user), which could result in just this sort of blurring.
To manually work with NPOT textures you will need to pad them with clear pixels up to the next POT size and scale your UV values in glTexCoord2f(u,v) by the factor npotSize / potSize. Note that this is not compatible with tiling, but judging from your art you don't need it.
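A sketch of the padding approach in C (img, imgW, imgH are assumed to hold the NPOT RGBA pixel data and its dimensions; needs stdlib.h and string.h):
int potW = 1, potH = 1;
while (potW < imgW) potW <<= 1;  // round up to the next power of two
while (potH < imgH) potH <<= 1;
unsigned char *pot = calloc((size_t)potW * potH, 4);  // zeroed, i.e. clear pixels
for (int y = 0; y < imgH; ++y)
    memcpy(pot + (size_t)y * potW * 4, img + (size_t)y * imgW * 4, (size_t)imgW * 4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, potW, potH, 0, GL_RGBA, GL_UNSIGNED_BYTE, pot);
free(pot);
// when drawing, scale the UVs by npotSize / potSize:
float uMax = (float)imgW / potW;
float vMax = (float)imgH / potH;
glTexCoord2f(uMax, vMax);  // instead of glTexCoord2f(1, 1)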
# displacement trick for exact pixelization
glTranslatef(0.375, 0.375, 0.0)
Unfortunately this is not enough, since you also need to scale the texture. For an unscaled, untranslated texture matrix, to address a certain pixel i of a texture with dimension N you need to apply the formula
(2i + 1)/(2N)
You can derive your scaling and translation from that – or determine the texture coordinates directly.
EDIT due to comment.
Okay, let's say your texture is 300 pixels wide, and you want to address exactly pixels 20 to 250; then the texture coordinates to choose for an identity texture matrix would be
(2*20 + 1)/(2*300) = 41/600 = 0.0650
and
(2*250 + 1)/(2*300) = 501/600 = 0.8350
But you could apply a transformation through the texture matrix as well. You want to map the pixels 0…299 (for 300 pixels width the index goes from 0 to 300-1 = 299) to 0…1. So let's put in those figures:
(2*0 + 1)/(2*300) = 1/600 =~= 0.0017 = a
(2*299 + 1)/(2*300) = 599/600 =~= 0.9983 = b
b-a =~= 0.9967
So you have to scale down the range 0…1 by 0.9967 and offset it by 0.0017 in the width. The same calculation goes for the height←→t coordinates. Remember that the order of transformations matters: the coordinates must be scaled first and then translated, so in the matrix multiplication the translation is multiplied first:
// for a texture 300 pixels wide
glTranslatef(0.0017, …, …);
glScalef(0.9967, …, …);
If you want to use pixels instead of the range 0…1, further divide the scale by the texture width.
A BIG HOWEVER:
OpenGL-3 completely discarded the matrix manipulation functions and expects you to supply ready-to-use matrices and shaders. And in OpenGL fragment shaders there is a nice function, texelFetch, which you can use to fetch texture pixels directly with absolute coordinates. Using that would make things a lot easier!
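For example, a fragment shader along these lines (GLSL 1.30+; a sketch, not tied to the code above):
#version 130
uniform sampler2D tex;
out vec4 color;

void main()
{
    // texelFetch takes integer texel coordinates and an explicit mip level;
    // no filtering, wrapping or normalization is involved
    color = texelFetch(tex, ivec2(gl_FragCoord.xy), 0);
}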
I have a quad and I would like to use the gradient it produces as a texture for another polygon.
glPushMatrix();
glTranslatef(250,250,0);
glBegin(GL_POLYGON);
glColor3f(1.0f, 0.0f, 0.0f); // red; glColor3f takes floats in [0,1]
glVertex2f(10,0);
glVertex2f(100,0);
glVertex2f(100,100);
glVertex2f(50,50);
glVertex2f(0,100);
glEnd(); //End polygon coordinates
glPopMatrix();
glBegin(GL_QUADS); //Begin quadrilateral coordinates
glVertex2f(0,0);
glColor3f(0.0f, 1.0f, 0.0f); // green
glVertex2f(150,0);
glVertex2f(150,150);
glColor3f(1.0f, 0.0f, 0.0f); // red
glVertex2f(0,150);
glEnd(); //End quadrilateral coordinates
My goal is to give the 5-vertex polygon the same gradient as the quad (maybe a texture is not the best bet).
Thanks
Keep it simple!
It is very simple to create a gradient texture in code, e.g.:
// gradient white -> black
GLubyte gradient[2*3] = { 255,255,255, 0,0,0 };
// upload the two texels as a 1D texture; the GL interpolates between them
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 2, 0, GL_RGB, GL_UNSIGNED_BYTE, gradient);
// setup texture parameters, draw your polygon etc.
The graphics hardware and/or the GL will create a sweet-looking gradient from color one to color two for you (remember, that's one of the basic advantages of having hardware-accelerated polygon drawing: you don't have to do the interpolation work in software).
Your real problem is which texture coordinates to use on the 5-vertex polygon. But that was not your question... ;-)
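For what it's worth, here is a sketch of one possible assignment (the s values are guesses for illustration; gradientTex is assumed to be the id of the 1D texture created above):
glEnable(GL_TEXTURE_1D);
glBindTexture(GL_TEXTURE_1D, gradientTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glColor3f(1.0f, 1.0f, 1.0f);  // don't tint the texture
glBegin(GL_POLYGON);
glTexCoord1f(0.0f); glVertex2f(10, 0);
glTexCoord1f(1.0f); glVertex2f(100, 0);
glTexCoord1f(1.0f); glVertex2f(100, 100);
glTexCoord1f(0.5f); glVertex2f(50, 50);
glTexCoord1f(0.0f); glVertex2f(0, 100);
glEnd();
glDisable(GL_TEXTURE_1D);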
To do that, you'd have to do a render-to-texture. While this is commonplace and supported by practically every board, it's typically used for quite elaborate effects (e.g. mirrors).
If it's really just a gradient, I'd try to create the gradient in an app like Paint.NET. If you really need to create it at run-time, use a pixel shader to implement render-to-texture. I'm afraid explaining pixel shaders in a few words is a bit tough, but there are lots of tutorials on the net.
With the pixel shader, you gain a lot of control over the graphic card. This allows you to render your scene to a temporary buffer and then apply that buffer as a texture quite easily, plus a lot more functionality.
I'm using python but OpenGL is pretty much done exactly the same way as in any other language.
The problem is that when I render a texture or a line to a texture by means of a frame buffer object, it comes out upside down and too small, in the bottom-left corner. Very weird. I have these pictures to demonstrate:
This is how it looks,
www.godofgod.co.uk/my_files/Incorrect_operation.png
This is how it did look when I was using pygame instead. Pygame is too slow, I've learnt. My game would be unplayable without OpenGL's speed. Ignore the curved corners. I haven't implemented those in OpenGL yet. I need to solve this issue first.
www.godofgod.co.uk/my_files/Correct_operation.png
I'm not using depth.
What could cause this erratic behaviour? Here's the code, which you may find useful:
def texture_to_texture(target, surface, offset):
    # target is an object of a class which contains texture data; this texture
    # is the target. surface is the same, but is the texture to be drawn onto
    # the target. offset is where the surface texture will be drawn on the
    # target texture.
    # create_texture() builds textures (from image data or a block colour) if
    # not already created. It seems to work fine, as direct rendering of
    # textures to the screen works brilliantly.
    if target.texture is None:
        create_texture(target)
    if surface.texture is None:
        create_texture(surface)

    frame_buffer = glGenFramebuffersEXT(1)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer)
    # target.texture is the texture id from the object
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, target.texture, 0)

    glPushAttrib(GL_VIEWPORT_BIT)
    glViewport(0, 0, target.surface_size[0], target.surface_size[1])
    # the last argument converts the 0-255 colours to 0-1; the textures appear
    # to have the correct colour when drawn, so don't worry about that
    draw_texture(surface.texture, offset, surface.surface_size,
                 [float(c) / 255.0 for c in surface.colour])
    glPopAttrib()

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)
    # glDeleteFramebuffersEXT requires a sequence of ids, hence the odd
    # [int(frame_buffer)] conversion of the ctypes value
    glDeleteFramebuffersEXT(1, [int(frame_buffer)])
This function may also be useful:
def draw_texture(texture, offset, size, c):
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()  # load the model matrix

    # flat coloured quad first
    glColor4fv(c)
    glBegin(GL_QUADS)
    glVertex2i(*offset)                                   # top left
    glVertex2i(offset[0], offset[1] + size[1])            # bottom left
    glVertex2i(offset[0] + size[0], offset[1] + size[1])  # bottom right
    glVertex2i(offset[0] + size[0], offset[1])            # top right
    glEnd()

    # then the textured quad on top
    glColor4fv((1, 1, 1, 1))
    glBindTexture(GL_TEXTURE_2D, texture)
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 0.0)
    glVertex2i(*offset)                                   # top left
    glTexCoord2f(0.0, 1.0)
    glVertex2i(offset[0], offset[1] + size[1])            # bottom left
    glTexCoord2f(1.0, 1.0)
    glVertex2i(offset[0] + size[0], offset[1] + size[1])  # bottom right
    glTexCoord2f(1.0, 0.0)
    glVertex2i(offset[0] + size[0], offset[1])            # top right
    glEnd()
You don't show your projection matrix, so I'll assume it's identity too.
OpenGL framebuffer origin is bottom left, not top left.
The size issue is more difficult to explain. What is your projection matrix, after all?
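For reference, a sketch of what I'd expect while the FBO is bound (w and h being the attached texture's size; plain GL calls, so it maps 1:1 to the Python bindings):
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, w, 0, h, -1, 1);  // bottom-left origin, matching how GL sees the framebuffer
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// ... draw ...; use glOrtho(0, w, h, 0, -1, 1) instead for a top-left origin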
Also, you don't show how you use the texture, and I'm not sure what we're looking at in your "incorrect" image.
Some unrelated comments:
Creating a framebuffer each frame is not the right way to go about it; create it once and reuse it.
Come to think of it, why use a framebuffer at all? It seems the only thing you're after is blending into the frame buffer, and glEnable(GL_BLEND) does that just fine.
I am tasked with making a sun/moon object flow across the screen throughout a time-span (as it would in a regular day). One of the options available to me is to use a "billboard", which is a quad that is always facing the camera.
I have yet to use many DirectX libraries or techniques. This is my first graphics project. How does this make sense? And how can you use it to move a sun object across the screen?
Thanks :) This will be run on Windows machines only, and I only have the option of DirectX (9).
I have gotten this half working. I have a sun image displaying, but it sits at the front of my screen, covering about 80% of it, no matter which way my camera is pointing. Looking down towards the ground? Still a huge sun there. Why is this? Here is the code I used to create it...
void Sun::DrawSun()
{
    std::wstring hardcoded = L"..\\Data\\sun.png";
    m_SunTexture = MyTextureManager::GetInstance()->GetTextureData(hardcoded.c_str()).m_Texture;

    LPD3DXSPRITE sprite = NULL;
    if (SUCCEEDED(D3DXCreateSprite(MyRenderer::GetInstance()->GetDevice(), &sprite)))
    {
        //created!
    }

    sprite->Begin(D3DXSPRITE_ALPHABLEND);

    D3DXVECTOR3 pos;
    pos.x = 40.0f;
    pos.y = 20.0f;
    pos.z = 20.0f;

    HRESULT someHr;
    someHr = sprite->Draw(m_SunTexture, NULL, NULL, &pos, 0xFFFFFFFF);
    sprite->End();
}
Obviously, my position vector is hardcoded. Is this what I need to be changing? I have noticed in the documentation the possibility of D3DXSPRITE_BILLBOARD rather than D3DXSPRITE_ALPHABLEND; will this work? Is it possible to use both?
As per the tutorial mentioned in an earlier post, D3DXSPRITE is a 2D object and probably will not work for displaying within the 3D world. What is a smarter alternative?
The easiest way to draw a screen-aligned quad is to use point sprites with texture coordinate replacement.
I never did that with DirectX, but in OpenGL, enabling point sprites is a matter of two API calls that look like this:
glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
With this mode enabled, you draw a single vertex and, instead of a point, a screen-aligned quad is rendered. The coordinate replacement means that the quad is rasterized with interpolated texture coordinates, so you can place any texture on it. Usually you'll want something with an alpha channel to blend into the background seamlessly.
There should be an equivalently easy way to do it in D3D. In addition, if you write a shader, it may let you do some additional things, like discarding some of the texture's pixels.
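Put together, the OpenGL side looks roughly like this (sunTex and the sun position are placeholders):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, sunTex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
glPointSize(64.0f);            // on-screen size of the quad, in pixels
glBegin(GL_POINTS);
glVertex3f(sunX, sunY, sunZ);  // one vertex -> one screen-aligned textured quad
glEnd();
glDisable(GL_POINT_SPRITE);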
This tutorial might help
Also, google.
--Edit
To transform a quad into any other shape, use the alpha channel of the texture. If the alpha is 0, the pixel is not visible. You can't add an alpha channel to a JPEG, but you can to a PNG. Here's an example:
sun with alpha channel http://www.shiny.co.il/shooshx/sun.png
If you open this image in Photoshop you'll see that the background is invisible.
Using alpha blending might cause some problems if you have other things going on in the scene so if that happens, you can write a simple fragment shader which discards the pixels with alpha==0.
Okay, I'm not a DirectX expert, so I am going to assume a few things.
First, I am assuming you have some sort of DrawQuad() function inside your rendering class that takes the 4 corners and a texture.
We start by getting the current view matrix:
D3DMATRIX mat;
HRESULT hr = m_Renderer->GetRenderDevice()->GetTransform(D3DTS_VIEW, &mat);
Let's just set some arbitrary size:
float size = 20.0f;
Now we need to calculate two vectors: the up unit vector and the right unit vector.
D3DXVECTOR3 rightVect;
D3DXVECTOR3 viewMatrixA(mat._11, mat._21, mat._31); // first column of the view matrix: camera right
D3DXVECTOR3 viewMatrixB(mat._12, mat._22, mat._32); // second column: camera up
D3DXVec3Normalize(&rightVect, &viewMatrixA);
rightVect = rightVect * size * 0.5f;
D3DXVECTOR3 upVect;
D3DXVec3Normalize(&upVect, &viewMatrixB);
upVect = upVect * size * 0.5f;
Now we need to define a location for our object; I am just going with the origin.
D3DXVECTOR3 loc(0.0f, 0.0f, 0.0f);
Let's load the sun texture:
m_SunTexture = <insert magic texture load>
Now let's figure out the 4 corners, offsetting the location by both vectors:
D3DXVECTOR3 upperLeft = loc - rightVect + upVect;
D3DXVECTOR3 upperRight = loc + rightVect + upVect;
D3DXVECTOR3 lowerRight = loc + rightVect - upVect;
D3DXVECTOR3 lowerLeft = loc - rightVect - upVect;
Let's draw our quad. I am assuming this function exists; otherwise you'll need to do some vertex drawing yourself.
m_Renderer->DrawQuad(
    upperLeft.x, upperLeft.y, upperLeft.z,
    upperRight.x, upperRight.y, upperRight.z,
    lowerRight.x, lowerRight.y, lowerRight.z,
    lowerLeft.x, lowerLeft.y, lowerLeft.z, m_SunTexture);
Enjoy :)!
Yes, a billboard would typically be used for this. It's pretty straightforward to calculate the coordinates of the corners of the billboard and the texture parameters. The texture itself could be rendered using some other technique (in the application in system ram for instance).
It is simple enough to take the right and up vectors of the camera and add/subtract those (scaled appropriately) from the centre point, in world coordinates, at which you want to render the object.
How can I repeat a selection of a texture atlas?
For example, my sprite (selection) is within the texture coordinates:
GLfloat textureCoords[]=
{
.1f, .1f,
.3f, .1f,
.1f, .3f,
.3f, .3f
};
Then I want to repeat that sprite N times across a triangle strip (or quad) defined by:
GLfloat vertices[]=
{
-100.f, -100.f,
100.f, -100.f,
-100.f, 100.f,
100.f, 100.f
};
I know it has something to do with GL_REPEAT and textureCoords going past the range [0,1]. This, however, doesn't work (trying to repeat N = 10):
GLfloat textureCoords[]=
{
10.1f, 10.1f,
10.3f, 10.1f,
10.1f, 10.3f,
10.3f, 10.3f
};
We're seeing our full texture atlas repeated...
How would I do this the right way?
I'm not sure you can do that. I think OpenGL's texture coordinate modes only apply for the entire texture. When using an atlas, you're using "sub-textures", so that your texture coordinates never come close to 0 and 1, the normal limits where wrapping and clamping occurs.
There might be extensions to deal with this, I haven't checked.
EDIT: Normally, to repeat a texture, you'd draw a polygon that is "larger" than your texture implies. For instance, if you had a square texture that you wanted to repeat a number of times (say six) over a bigger area, you'd draw a rectangle that's six times as wide as it is tall. Then you'd set the texture coordinates to (0,0)-(6,1), and the texture mode to "repeat". When interpolating across the polygon, the texture coordinate that goes beyond 1 will, due to repeat being enabled, "wrap around" in the texture, causing the texture to be mapped six times across the rectangle.
This is a bit crude to explain without images.
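In code, the six-times example looks something like this (dimensions picked for illustration):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(6.0f, 0.0f); glVertex2f(600.0f, 0.0f);    // s runs 0..6 -> six repeats
glTexCoord2f(6.0f, 1.0f); glVertex2f(600.0f, 100.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 100.0f);
glEnd();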
Anyway, when you're texturing using just a part of the texture, there's no way to specify that larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle.
None of the texture wrap modes support the kind of operation you are looking for, i.e. they all map to the full [0,1] range, not some arbitrary subset. You basically have two choices: either create a new texture that only has the sprite you need from the existing texture, or write a GLSL fragment shader to map the texture coordinates appropriately.
While this may be an old topic, here's how I ended up doing it:
A workaround would be to create multiple meshes, glued together, each containing the subset of the texture's UVs.
E.g.:
I have a laser texture contained within a larger texture atlas, at U[0.05-0.1] & V[0.05-0.1].
I would then construct N meshes, each having U[0.05-0.1] & V[0.05-0.1] coordinates.
(N = length / texture.height, height being the dimension of the texture I would like to repeat; or more simply, the number of times I want to repeat the texture.)
This solution would be more cost effective than having to reload texture after texture.
Especially if you batch all render calls (as you should).
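As a sketch of the mesh construction (push_vertex is a hypothetical helper that appends x, y, u, v to the batched vertex/UV arrays; u0,v0-u1,v1 is the sprite's UV box and tileW/tileH its world-space size):
for (int i = 0; i < n; ++i) {
    float y0 = i * tileH, y1 = y0 + tileH;
    // two triangles per repeated tile, all reusing the same UV sub-rectangle
    push_vertex(0.0f,  y0, u0, v0);
    push_vertex(tileW, y0, u1, v0);
    push_vertex(tileW, y1, u1, v1);
    push_vertex(0.0f,  y0, u0, v0);
    push_vertex(tileW, y1, u1, v1);
    push_vertex(0.0f,  y1, u0, v1);
}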
(OpenGL ES 1.0,1.1,2.0 - Mobile Hardware 2011)
This can be done with a modulo of your texture coordinates in the shader; the mod repeats your sub-range coordinates.
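A sketch of that idea as a GLSL ES 2.0 fragment shader (uniform names are illustrative):
precision mediump float;
uniform sampler2D atlas;
uniform vec4 uvRect;   // xy = sub-rectangle origin in the atlas, zw = its size
uniform vec2 repeats;  // how many times to tile it
varying vec2 v_uv;     // 0..1 across the quad

void main()
{
    vec2 tiled = fract(v_uv * repeats);  // the modulo: wraps into [0,1)
    gl_FragColor = texture2D(atlas, uvRect.xy + tiled * uvRect.zw);
}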
I ran into your question while working on the same issue, although in HLSL and DirectX. I also needed mip mapping, and had to solve the related texture bleeding too.
I solved it this way:
min16float4 sample_atlas(Texture2D<min16float4> atlasTexture, SamplerState samplerState,
                         float2 uv, AtlasComponent atlasComponent)
{
    //Get LOD
    //Never wrap these, as that would cause the LOD value to jump on wrap
    //xy is left-top, zw is width-height of the atlas texture component
    float2 lodCoords = atlasComponent.Extent.xy + uv * atlasComponent.Extent.zw;
    uint lod = ceil(atlasTexture.CalculateLevelOfDetail(samplerState, lodCoords));

    //Get texture size
    float2 textureSize;
    uint levels;
    atlasTexture.GetDimensions(lod, textureSize.x, textureSize.y, levels);

    //Calculate component size and edge thickness - this is to avoid bleeding
    //Note my atlas components are well behaved: all power of 2, mostly similar
    //size, tightly packed, no gaps
    float2 componentSize = textureSize * atlasComponent.Extent.zw;
    float2 edgeThickness = 0.5 / componentSize;

    //Calculate texture coordinates - we only support wrap for now
    float2 wrapCoords = clamp(wrap(uv), edgeThickness, 1 - edgeThickness);
    float2 texCoords = atlasComponent.Extent.xy + wrapCoords * atlasComponent.Extent.zw;

    return atlasTexture.SampleLevel(samplerState, texCoords, lod);
}
Note the limitation is that the mip levels are blended this way, but in our use-case that is completely fine.
Can't be done...