I'm using OpenGL to develop a 2D game, and I'm trying to map a texture around a circle, as shown in the image below. I've noticed that many games use this technique because it saves texture memory.
But I don't know which texture mapping technique is used. Any suggestions?
Just as genpfault pointed out:
Create a bunch of quads along two circles. Set their UV coordinates A, B, C, D as shown in the picture. To get point C, just add the distance h along the vector Center -> B.
PS: you will need a lot more quads than I drew.
Generate a donut of quads with appropriate texture coordinates.
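A minimal sketch of that approach in old-style immediate-mode OpenGL, to match the drawing above (a quad strip stands in for the individual quads; the function name and parameters are just placeholders):

#include <GL/gl.h>
#include <cmath>

// Draw a textured ring ("donut of quads") by walking two circles of radius
// innerR and outerR around (cx, cy). u wraps once around the ring, while v
// runs from the inner circle (v = 0) to the outer circle (v = 1).
void drawTexturedRing(float cx, float cy, float innerR, float outerR, int segments)
{
    glBegin(GL_QUAD_STRIP);
    for (int i = 0; i <= segments; ++i) {
        float angle = 2.0f * 3.14159265f * i / segments;
        float u = (float)i / segments;
        float c = std::cos(angle), s = std::sin(angle);
        glTexCoord2f(u, 0.0f);                        // point B on the inner circle
        glVertex2f(cx + innerR * c, cy + innerR * s);
        glTexCoord2f(u, 1.0f);                        // point C = B + h along the radius
        glVertex2f(cx + outerR * c, cy + outerR * s);
    }
    glEnd();
}

The more segments you use, the smoother the ring looks, exactly as the PS above notes.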
Here is a task that every GIS application can do: given some polygons, fill each polygon with a chosen color.
What is the best way of doing this repeatedly in OpenGL? That is, the polygons do not change, and I want to vary the data used for coloring to produce different renderings.
Redrawing polygons for each rendering is the most straightforward solution, but it seems to be a waste, since the geometries do not change at all.
Or is it better to create a stencil for each polygon, and stencil print the entire map? If there are too many polygons, will doing hundreds or thousands of rendering passes create a problem?
For each vertex of a polygon, map a certain color. That means that when you send the data to the shaders, each vertex carries two attributes: a position vector that the vertex shader needs and a color vector that will be used as the fragment color. That is the simplest way.
For example, think of a triangle drawn in OpenGL. If you send its vertices to the vertex shader and output a color from the fragment shader, every time a vertex enters the shader pipeline it is positioned accordingly, and the pixels it covers are shown in the given color.
The technique which I poorly explained (sorry, I am not the best at explanations) is used in the classic colored-triangle example in which the colors interpolate: red mapped to one corner, green to another, and blue to the last. If you instead map red to every corner, you get a solid red triangle. That is the basic principle. Oh, and you draw the minimum count of triangles, and you need only one pair of shaders.
Note: a polygon is made out of N triangles, and you need to map the same color to every vertex of each triangle drawn in that polygon.
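A minimal sketch of that setup, assuming a modern-GL context with a shader pair that reads a position at attribute location 0 and a color at location 1 (the struct layout and function names are placeholders):

#include <GL/glew.h>   // or any other GL function loader
#include <vector>
#include <cstddef>

// One interleaved vertex: a 2D position plus an RGB color. Every vertex of
// every triangle in a given polygon gets the same color, so the polygon is
// filled with a flat color.
struct Vertex {
    float x, y;     // position -> vertex shader, location 0
    float r, g, b;  // color    -> passed through to the fragment shader, location 1
};

void uploadPolygons(GLuint vao, GLuint vbo, const std::vector<Vertex>& verts)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                 verts.data(), GL_STATIC_DRAW);

    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, x));
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)offsetof(Vertex, r));
    glEnableVertexAttribArray(1);
}

Since the geometry never changes, you could also keep the colors in a second, separate buffer and update only that one (e.g. with glBufferSubData) whenever you want a different rendering.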
I think a bigger issue will be that OpenGL doesn't natively support arbitrary (possibly concave) polygons or vector drawing in general, but there are libraries for this. You'll have to use an existing solution for vector drawing, or failing that, convert your GIS data (usually a list of points per polygon) into triangles. This is likely the biggest obstacle.
The fact that the geometry doesn't change isn't really an issue: you would generally store the geometry in one or more buffers, then add logic to draw only what is visible inside your viewport area, perhaps even going as far as generating geometry only for the visible area.
See also this question and its answers.
Rendering Vector Graphics in OpenGL?
I'm fairly new to OpenGL. I have a 3D object and a 2D image drawn as a HUD. At the moment, it looks like this. What I want to do now is to put the 2D texture from the HUD onto the visible part of the 3D object (in this case, the front of a skull). As far as I know, what I need to do is:
Check which vertices are visible (again, as far as I know, and after searching StackOverflow, I think this question answers how to check whether a vertex is visible).
If a vertex is visible, transform that 3D point into a 2D point (just use gluProject to get the 2D coordinates; see the sketch after this list).
Now that I know the 2D coordinates of the vertex, I can compare them to the pixels of the texture, which brings me directly to texturing.
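For step 2, a minimal sketch of projecting one vertex with gluProject, using whatever matrices and viewport are currently set (the wrapper function is just an illustration):

#include <GL/gl.h>
#include <GL/glu.h>

// Project an object-space vertex to window coordinates. Note that winY is
// measured from the bottom of the viewport, not the top.
bool projectVertex(double x, double y, double z,
                   double& winX, double& winY, double& winZ)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);
    return gluProject(x, y, z, model, proj, viewport,
                      &winX, &winY, &winZ) == GL_TRUE;
}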
And here's the problem: I don't have any idea how to do step 3. I have the visible 3D vertices projected into 2D, I have the 2D texture, and no idea how to combine them. I was thinking of using it in a similar way to 2D drawing, but I have much more constrained points than in 2D quad texturing.
Step 1: Find 68 landmarks on a 2D image (with dlib).
So I know the coordinates of all 68 landmarks!
Step 2: Create a 3D mask of a generic face (with OpenGL) -> Result
I know all the 3D coordinates of the face model as well!
Now I want to use this tutorial to texture-map all the triangles from the 2D image onto the generic 3D face model.
Does anyone know an answer to my problem? If you need more information, just send me a message and I will provide what you need. Thanks, everybody!
EDIT: After finding this tutorial, I changed the size of my picture to get a width and a height that are powers of two.
And then I divide all my picture coordinates (landmarks) by the size:
landmark(x) / height and landmark(y) / width
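In code, that normalization might look like the sketch below (the divisors follow the text above exactly; swap them if your image axes are laid out the other way around):

// Hypothetical helper: turn a landmark position into a texture coordinate
// in [0, 1] by dividing by the (power-of-two) image size.
struct UV { float u, v; };

UV landmarkToUV(float x, float y, float imageWidth, float imageHeight)
{
    return UV{ x / imageHeight, y / imageWidth };
}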
Picture:
Result:
The bigger the width and height, the better the image definition!
What you're seeing looks like you passed all your vertices directly to glDrawArrays without any reuse, so each vertex is used for a single triangle in your result, rather than being shared by 6 or more triangles as in the original picture.
You need to use an element buffer to describe how all your triangles are made up of the vertices you have, and use glDrawElements to draw them.
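A minimal sketch of that change, assuming the vertex positions are already in a bound VAO/VBO (buffer names and the index type are placeholders):

#include <GL/glew.h>   // or any other GL function loader
#include <vector>

// Upload an index (element) buffer and draw with glDrawElements, so each
// shared vertex is stored once and referenced by several triangles.
void drawIndexed(GLuint ebo, const std::vector<GLuint>& indices)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
                 indices.data(), GL_STATIC_DRAW);

    // Three indices per triangle, each index pointing into the vertex buffer.
    glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);
}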
Also note that some of your polygons on the original image are in fact not triangles. You'll probably want to insert additional triangles for those polygons (the inside of the eyes).
I'm working on a scanline renderer for a class project. The renderer works so far: it reads in a model (mostly the Utah teapot), computes vertex/surface normals, and can do flat and Phong shading. I'm now working on adding texture mapping, which is where I'm running into problems (I cannot use any OpenGL methods other than actually drawing the points on the screen).
So, I read a texture into my app and have a 2D array of RGB values. I know that the concept is to map the texture from 2D texture space onto a simple intermediate 3D object (in my case, a cylinder). I also know that you then map the intermediate surface onto the object surface.
However, I don't actually know how to do those things :). I've found some formulas for mapping a texture to a cylinder, but they always seem to leave out details such as which values to use. I also don't know how to take a vertex coordinate of my object and get the cylinder value for that point. There are some other StackOverflow posts about mapping to a cylinder, but they 1) deal with newer OpenGL, shaders and such, and 2) don't deal with intermediate surfaces, so I'm not sure how to translate the knowledge from them.
So, any help on pseudo code for mapping a texture onto a 3D object using a cylinder as an intermediate surface would be greatly appreciated.
You keep using the phrase "intermediate surface", which does not describe the process correctly, yet hints at what you have in your head.
Basically, you're asking for a way to map every point on the teapot's surface onto a cylinder (assuming that the texture will be "wrapped" on the cylinder).
Just convert your surface point into cylindrical coordinates (r, theta, height), then use theta as u and height as v (texcoords).
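A minimal sketch of that mapping (the choice of the y axis as the cylinder axis and the height range are assumptions you would adapt to your model):

#include <cmath>

// Map a surface point (x, y, z) to texture coordinates (u, v) by treating
// the model as if wrapped in a cylinder around the y axis: the angle around
// the axis becomes u, the height along the axis becomes v.
void cylindricalUV(float x, float y, float z,
                   float minY, float maxY,        // height range of the model
                   float& u, float& v)
{
    const float PI = 3.14159265358979f;
    float theta = std::atan2(z, x);               // in (-PI, PI]
    u = (theta + PI) / (2.0f * PI);               // remap to [0, 1)
    v = (y - minY) / (maxY - minY);               // remap height to [0, 1]
}

You would then sample your 2D RGB array at roughly (u * textureWidth, v * textureHeight) for that surface point.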
This is what you are trying to achieve:
I have enjoyed learning to use OpenGL in the context of games programming, and I have experimented with creating small shapes. I'm wondering if there are any resources or apps that will generate code similar to the following with a simple paint-like interface.
glColor3f(1.0, 0.0, 0.0);
glBegin(GL_LINE_STRIP);
glVertex2f(1, 0);
glVertex2f(2, 3);
glVertex2f(4, 5);
glEnd();
I'm having trouble thinking of the correct dimensions to generate shapes and coming up with the correct co-ordinates.
To clarify, I'm not looking for a program I can just freely draw stuff in and expect it to create good code to use. Just more of a visual way of representing and modifying the sets of coordinates that you need.
I solved this to a degree by drawing a shape in paint and measuring the distances between the pixels relative to a single point, but it's not that elegant.
It sounds like you are looking for a way to import 2D geometry into your application. The best approach, in my opinion, would be to develop a content pipeline. It goes something like this:
You would create your content in a 3D modeling program like Google's Sketchup. In your case, you would draw 2D shapes using polygons.
You need a conversion tool to get the data out of the original format and into a format that your target application can understand. One way to get polygon and vertex data out of Sketchup is to export to Collada and have your tool read and process it. (The simplest format would be a list of triangles or lines.)
Write a geometry loader in your code that reads the data created by your conversion tool. You then need OpenGL code that uses vertex arrays to display the geometry.
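A minimal sketch of that last step, assuming the conversion tool writes a flat text file of 2D triangle vertices and that classic client-side vertex arrays are acceptable (the file format and names are assumptions):

#include <GL/gl.h>
#include <fstream>
#include <vector>

// Read a flat list of "x y" pairs, three per triangle, from a text file.
std::vector<float> loadTriangles(const char* path)
{
    std::vector<float> verts;
    std::ifstream in(path);
    float x, y;
    while (in >> x >> y) { verts.push_back(x); verts.push_back(y); }
    return verts;
}

// Draw the loaded triangles with a client-side vertex array.
void drawTriangles(const std::vector<float>& verts)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts.data());
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(verts.size() / 2));
    glDisableClientState(GL_VERTEX_ARRAY);
}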
The coordinates you'll use just depend on how you define your viewport and the resolution you're operating in. In fact, you might think about collecting the coordinates of the mouse clicks in whatever arbitrary coordinate system you want and then mapping those coordinates to OpenGL coordinates.
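For example, if you set an orthographic projection that matches the window's pixel size, mouse-click or image-editor coordinates can be used directly (a sketch; glOrtho would work just as well as gluOrtho2D):

#include <GL/gl.h>
#include <GL/glu.h>

// Make one OpenGL unit equal one pixel, with (0, 0) at the top-left corner.
void setPixelProjection(int windowWidth, int windowHeight)
{
    glViewport(0, 0, windowWidth, windowHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, (double)windowWidth, (double)windowHeight, 0.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}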
What kind of library are you expecting?
something like
drawSquare(dx,dy);?
drawCircle(radius);?
drawPoly(x1,y1,x2,y2....);?
Isn't that exactly the same as glVertex but with a different name? Where is the abstraction?
I made one of these... it would take a bitmap image and generate geometry from it. Try looking up triangulation.
The first step is generating the edge of the shape, converting it from pixels to vertices and edges: find all the edge pixels and put a vertex at each one, then use either the distance between vertices or (better) the difference in gradient between edges to cull vertices and reduce the poly count of the mesh.
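A minimal sketch of that edge-pixel pass, assuming the bitmap has already been reduced to a boolean "inside the shape" grid (the culling by distance or gradient is left out):

#include <vector>

struct Point2D { float x, y; };

// A pixel becomes an edge vertex if it is filled but has at least one empty
// 4-neighbour (pixels outside the image count as empty).
std::vector<Point2D> findEdgePixels(const std::vector<std::vector<bool>>& filled)
{
    std::vector<Point2D> edge;
    int h = (int)filled.size();
    int w = h ? (int)filled[0].size() : 0;
    auto empty = [&](int x, int y) {
        return x < 0 || y < 0 || x >= w || y >= h || !filled[y][x];
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (filled[y][x] &&
                (empty(x - 1, y) || empty(x + 1, y) ||
                 empty(x, y - 1) || empty(x, y + 1)))
                edge.push_back({ (float)x, (float)y });
    return edge;
}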
If your shape-drawing program works with 'vector graphics' rather than pixels, i.e. plotting points and having lines drawn between them, then you can skip that first step and only need to do the triangulation.
The second step, once you have your edges and vertices, is triangulation in order to generate the triangles; ear clipping, for instance, is a simple method.
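A minimal sketch of ear clipping for a simple, counter-clockwise polygon without holes (a naive O(n^2) version; real code would also handle clockwise and degenerate input):

#include <vector>
#include <cstddef>

struct Vec2 { float x, y; };

// z component of the cross product (b - a) x (c - a); positive for a
// counter-clockwise turn.
static float cross(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// p is inside triangle (a, b, c) if it lies on the same side of all edges.
static bool pointInTriangle(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c) {
    float d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);
}

// Returns indices into 'poly', three per output triangle.
std::vector<size_t> earClip(const std::vector<Vec2>& poly)
{
    std::vector<size_t> idx(poly.size());
    for (size_t i = 0; i < idx.size(); ++i) idx[i] = i;

    std::vector<size_t> tris;
    while (idx.size() > 3) {
        bool clipped = false;
        for (size_t i = 0; i < idx.size(); ++i) {
            size_t prev = idx[(i + idx.size() - 1) % idx.size()];
            size_t curr = idx[i];
            size_t next = idx[(i + 1) % idx.size()];
            // A convex corner...
            if (cross(poly[prev], poly[curr], poly[next]) <= 0) continue;
            // ...is an ear if no other remaining vertex lies inside it.
            bool ear = true;
            for (size_t j : idx) {
                if (j == prev || j == curr || j == next) continue;
                if (pointInTriangle(poly[j], poly[prev], poly[curr], poly[next])) {
                    ear = false;
                    break;
                }
            }
            if (!ear) continue;
            tris.push_back(prev); tris.push_back(curr); tris.push_back(next);
            idx.erase(idx.begin() + i);   // clip the ear and repeat
            clipped = true;
            break;
        }
        if (!clipped) break;              // degenerate input; avoid looping forever
    }
    if (idx.size() == 3) {
        tris.push_back(idx[0]); tris.push_back(idx[1]); tris.push_back(idx[2]);
    }
    return tris;
}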
As for the coordinates to use: that's entirely up to you, as others have said. To keep it simple, I'd just work in pixel coordinates.
You can then scale and translate as needed to transform the shape for use.