So I have a BMP file that I want to show on a quad. The problem is that every time, the texture is split in two: the first half is shown next to the other half, but reversed.
For example, if the BMP is an image of "12", it is split into "1" and "2" and shown as "21", or one on top of the other. I tried other texture coordinates, but nothing worked.
I use gluOrtho2D(0, 300, 300, 0); if that helps, and my draw function is:
glBegin(GL_QUADS);
glTexCoord2i(0,0);glVertex2i(300,0);
glTexCoord2i(0,1);glVertex2i(300,300);
glTexCoord2i(1,1);glVertex2i(0,300);
glTexCoord2i(0,1);glVertex2i(0,0);
glEnd();
Your texture coordinates are wrong:
glBegin(GL_QUADS);
glTexCoord2i(0,0);glVertex2i(300,0);
glTexCoord2i(0,1);glVertex2i(300,300);
glTexCoord2i(1,1);glVertex2i(0,300);
glTexCoord2i(1,0);glVertex2i(0,0); // this one was repeated as (0,1)
glEnd();
It was my loading function all along; the coordinates had nothing to do with it. In case you have a problem like this, check your loading function as well.
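For anyone hitting the same thing: the usual BMP-loader bug that produces exactly this split/shifted look is ignoring the 4-byte row padding. A minimal sketch of the relevant rule (the stride formula is part of the BMP format; the GL call mentioned in the comments is what you would add before glTexImage2D):

```cpp
#include <cassert>

// BMP rows are padded so that each row's byte length is a multiple of 4.
// If a loader reads rows as width*3 bytes, every row after the first
// starts at the wrong offset and the image appears split or shifted.
int bmpRowStride(int width, int bytesPerPixel = 3) {
    return ((width * bytesPerPixel + 3) / 4) * 4;
}

// Other classic BMP issues to check in a loader:
//  - pixel order is BGR, not RGB (swap channels, or upload as GL_BGR)
//  - rows are stored bottom-up when biHeight is positive (image flipped)
//  - before glTexImage2D, call glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
//    if you upload tightly packed rows
```

A 300-pixel-wide 24-bit image happens to need no padding (900 bytes per row), but a 301-pixel-wide one needs 904, which is where loaders silently break.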
I'm searching for a function or way to make blending work only when the destination pixels' (i.e. the back buffer's) alpha value is greater than 0.
What I'm looking for is something like glAlphaFunc, which tests the incoming fragments, but in my case I want to test the fragments already in the back buffer.
Any ideas?
Thank you in advance.
P.S. I cannot do a pixel-by-pixel test in the drawing function because it is set as a callback for the user.
Wait, your answer is somewhat confusing, but I think what you're looking for is something like this: opengl - blending with previous contents of framebuffer
Sorry for this, but I think it's better to answer than to comment.
So, let me explain better by giving an example.
Let's say we have to draw something (whatever the user wants, like a table) and after that (before swapping the buffers, of course) we must draw the "saved" textures over it using blending.
Let's say we have to draw two transparent boxes. If those boxes are to be saved in a different texture, this can be done by:
Clear the screen with (0, 0, 0, 0)
Set the blend function to (GL_ONE, GL_ZERO)
Draw the box
Save it to a texture.
Now, whenever the user wants to redraw them all, he simply draws the main theme (the table) and over it draws the textures using the blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
This works fine. But if the user wants to save both boxes in one texture and the boxes overlap, how can we save the blending of those two boxes without blending them with the "cleared" background?
Summarizing, the final image of the whole painting should be a table with two boxes (say a yellow and a green one) over it, blended with (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
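One approach that fits this requirement (a sketch, not necessarily the only way): keep the saved texture in premultiplied-alpha form. Render the boxes into the (0, 0, 0, 0)-cleared texture with glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA) after premultiplying the source colors, and later composite the texture over the table with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA). The sketch below shows only the blending math, and that compositing via the intermediate texture gives the same pixel as blending the boxes directly over the table:

```cpp
#include <cassert>
#include <cmath>

struct RGBA { float r, g, b, a; };

// Premultiplied-alpha "over": out = src + dst * (1 - src.a).
// This is what glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA,
//                                  GL_ONE, GL_ONE_MINUS_SRC_ALPHA) computes.
RGBA overPremul(RGBA src, RGBA dst) {
    float k = 1.0f - src.a;
    return { src.r + dst.r * k, src.g + dst.g * k,
             src.b + dst.b * k, src.a + dst.a * k };
}

// Convert a straight-alpha color to premultiplied form.
RGBA premul(RGBA c) { return { c.r * c.a, c.g * c.a, c.b * c.a, c.a }; }

// Classic straight-alpha "over" against an opaque background, i.e.
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
RGBA overStraight(RGBA src, RGBA dst) {
    float k = 1.0f - src.a;
    return { src.r * src.a + dst.r * k, src.g * src.a + dst.g * k,
             src.b * src.a + dst.b * k, 1.0f };
}
```

The premultiplied "over" operator is associative, which is exactly why the overlap of the two boxes survives being flattened into one texture without picking up the cleared background.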
Step: Find 68 landmarks on a 2D image (with dlib)
So I know the coordinates of all 68 landmarks!
Create a 3D mask of a generic face (with OpenGL) -> Result
I know all the 3D coordinates of the face model as well!
Now I want to use this tutorial to texture-map all triangles from the 2D image onto the generic 3D face model.
Does anyone know an answer to my problem? If you need more information, just send me a message and I will send you what you need. Thanks, everybody!
EDIT: After finding this tutorial, I changed the size of my picture to get a width and a height that are powers of two.
Then I divide all my picture coordinates (the landmarks) by the size:
landmark(x) / width and landmark(y) / height
Picture:
Result:
The bigger the width and height, the better the image definition!
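Written out as code, the normalization step looks like this (the v flip is an assumption about the image having a top-left origin, as dlib's pixel coordinates do, versus OpenGL's bottom-left texture origin):

```cpp
#include <cassert>

struct UV { float u, v; };

// Convert a landmark in pixel coordinates (origin top-left, as dlib
// reports them) to OpenGL texture coordinates (origin bottom-left).
// x is normalized by the width, y by the height.
UV landmarkToUV(float x, float y, float width, float height) {
    return { x / width, 1.0f - y / height };
}
```

For a 1024x1024 picture, a landmark at pixel (512, 256) maps to (0.5, 0.75).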
What you're seeing looks like you passed all your vertices directly to glDrawArrays without any reuse, so each vertex is used for a single triangle in your result, rather than being shared by six or more triangles as in the original picture.
You need to use an element buffer to describe how all your triangles are made up of the vertices you have, and use glDrawElements to draw them.
Also note that some of the polygons in the original image are in fact not triangles. You'll probably want to insert additional triangles for those polygons (the insides of the eyes).
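As a sketch of what that looks like (names and layout are illustrative): build one index per triangle corner referring back into the shared vertex array, upload the indices to a GL_ELEMENT_ARRAY_BUFFER, and draw with glDrawElements. The helper below generates shared-vertex indices for a triangle fan around a center vertex, which is also roughly how the eye interiors could be filled:

```cpp
#include <cassert>
#include <vector>

// Indices for a triangle fan: vertex 0 is the center, vertices 1..n lie
// on the ring. Every triangle reuses the center and one ring neighbour,
// so each vertex is shared by several triangles instead of duplicated.
std::vector<unsigned> fanIndices(unsigned ringVertices) {
    std::vector<unsigned> idx;
    for (unsigned i = 1; i <= ringVertices; ++i) {
        idx.push_back(0);
        idx.push_back(i);
        idx.push_back(i % ringVertices + 1); // wrap back around to vertex 1
    }
    return idx;
}

// These indices would go into a GL_ELEMENT_ARRAY_BUFFER and be drawn with
// glDrawElements(GL_TRIANGLES, idx.size(), GL_UNSIGNED_INT, 0);
```

With 4 ring vertices this yields 4 triangles from only 5 stored vertices, instead of 12 duplicated vertices.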
I'm currently working on a cylinder shaped terrain produced by a height map.
What happens in the program is simple: there is one texture for the colors of the terrain, whose alpha value marks the regions I want to be invisible, and another ARGB texture in which A is the grayscale for the heights and RGB is the normal for the lighting.
The height texture is such that the A value goes from 1 to 255, and I'm reserving 0 for the regions with holes, meaning I don't want them to exist.
So in theory, no problem: I make those regions invisible based on the first texture. But in practice, the program treats the 0 as the minimum height and, even with the texture on top, creates lines towards these 0 regions, as if trying to build their triangles but never getting there, because I cut off the next vertex by making it invisible.
Notice the lines going to the center of the cylinder.
This is how it looks when I stop making those vertices invisible.
So, to clarify, I used the clip() function in the pixel shader to make them invisible.
Basically, what I need help with:
Is it possible, in the same way I use clip() in the pixel shader, to do something similar in the vertex shader and get rid of the unwanted vertices?
Basically, is it possible to just ignore the value 0?
Any ideas to fix this? I'm thinking of making every vertex that is 0 take the value of its neighbor; that way, those lines wouldn't go to the center but would stay in the same plane as the cylinder itself.
Another thing: we can see that the program interpolates the values from one vertex to the next, which is why it cuts off halfway to the invisible vertex.
I'm working with the Direct3D 11 API in C++, and the program uses tessellation.
Thank you for your time; I will be very glad for any input on this!
Well, I did resolve this issue somewhat.
I passed the height texture through a modifier that created another texture in which the zero values are substituted with the value of a neighboring non-zero pixel, or with 128.0f otherwise.
With that, the direction of the weird lines became more accurate, running along the surface instead of going to the center of the cylinder.
So basically I am making a 2D game with OpenGL/C++. I have a quad with a texture mapped onto it, and because I can't use non-power-of-two images (or at least I shouldn't), I have an image inside a power-of-two image and I wish to remove the excess with texture mapping.
GLfloat quadTexcoords[] = {
0.0, 0.0,
0.78125, 0.0,
0.78125, 0.8789,
0.0, 0.8789
};
glGenBuffers(1, &VBO_texcoords);
glBindBuffer(GL_ARRAY_BUFFER, VBO_texcoords);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadTexcoords), quadTexcoords, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is my texture coordinate code. Nothing special, I know. The X coordinate (0.78125) works fine and removes the excess image on the right side. However, the Y coordinate does not work. I have debugged my game and found that the correct coordinate is sent to the shader, but it won't map. It seems to work in reverse sometimes, but it is very unclear.
If I give it a Y coordinate like 20, the texture repeats multiple times but still leaves a little white line at the top of the quad. I haven't got the faintest idea what it could be.
Other details: the image I am trying to map is 800 by 450, and it is wrapped in an image of size 1024 by 512. I scale the quad by the aspect ratio of 800 by 450. I doubt this makes a difference, but you never know!
Thanks for your time.
EDIT: here is an example of what's happening.
This is the full image, mapped fully (0 to 1 in both X and Y). The blue portion is 200 pixels high and the full image is 300 pixels high.
The second image is mapped to two thirds of the Y axis (i.e. 0 to 0.6666 in Y). This should remove the white at the top, but that is not what is happening. I don't think the coordinates are back to front, as I took the mapping from several tutorials online.
It seems to be working in reverse sometimes but it is very unclear.
OpenGL assumes the viewport origin in the lower left, and texture coordinates running "along with" the flat memory texel order, first in the S and then in the T direction. In essence this means that, with one of the usual mappings, textures have their origin in the lower left, contrary to the upper-left origin found in most image manipulation programs.
So in your case, the white margin you see is simply the padding, which you probably applied to the texture image at the bottom instead of the top, where you should have put it. Why can't you use NPOT textures anyway? They're widely supported.
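For reference, the sub-rectangle extents from the question's numbers can be computed directly; which edge the unused band then appears on depends on which edge of the texel memory was padded, since t = 0 corresponds to the first row you uploaded:

```cpp
#include <cassert>

// Maximum texture coordinate covering only the used part of a
// power-of-two texture that holds a smaller image in its corner.
// These are the values to use in place of 1.0 in the texcoord array.
float maxCoord(int imageSize, int textureSize) {
    return static_cast<float>(imageSize) / static_cast<float>(textureSize);
}
```

This reproduces the 0.78125 (= 800/1024) and 0.87890625 (= 450/512) from the question's buffer.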
Not a real solution to your problem, but maybe a way to work around it:
You can scale the image to 1024x1024 (which deforms the image) and use 0-to-1 texture coordinates. Because the aspect ratio of your quad is 800:450, the image will still be displayed correctly.
So I'm rendering this diagram each frame:
https://dl.dropbox.com/u/44766482/diagramm.png
Basically, each second it moves everything one pixel to the left and every frame it updates the rightmost pixel column with current data. So a lot of changes are made.
It is completely constructed from GL_LINES, always from bottom to top.
However, those black missing columns are not intentional at all; it's just the rasterizer not picking them up.
I'm using integers for positions and bytes for colors; the projection matrix is exactly 1:1, so translating by 1 means moving 1 pixel. Orthographic.
So my problem is: how do I get rid of the black lines? I suppose I could write the data to a texture, but that seems expensive. Currently I use a VBO.
Render your columns as quads instead, with a width of 1 pixel; the rasterization rules of OpenGL will make sure you have no holes this way.
I realize the question is already closed, but you can also get the effect you want by drawing your lines centered at 0.5. A pixel's center is at 0.5, and a line drawn there will always be picked up by the rasterizer in the right place.
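Both suggestions can be sketched as small coordinate helpers, assuming the 1:1 orthographic projection from the question:

```cpp
#include <cassert>

// With a 1:1 orthographic projection, pixel column i is the half-open
// interval [i, i+1). A quad spanning these x edges always fills column i
// under OpenGL's quad/triangle fill rules, so no columns are skipped.
struct ColumnQuad { float x0, x1; };
ColumnQuad columnQuad(int i) { return { float(i), float(i + 1) }; }

// A vertical GL_LINES segment lands reliably in column i when its x
// coordinate sits on the pixel center, i + 0.5, per the diamond-exit
// line rasterization rule.
float lineX(int i) { return i + 0.5f; }
```

Either approach avoids the half-covered pixels that integer-coordinate lines produce.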