glLineWidth alternative? [duplicate] - c++

This question already has answers here:
OpenGL Line Width
(4 answers)
Closed 2 years ago.
glLineWidth is only guaranteed to support width 1; on Windows implementations it is typically capped around width 10. To overcome this limitation, the common suggestion is to "simply" render a rectangle instead.
Since this seems like a basic requirement (render 2D/3D lines of arbitrary width, mesh wireframe, etc.), I was wondering if anyone has a code snippet for it.
It would work similarly to what legacy OpenGL offers.
Input: two 3D points and width.
Output: It would render a 3D line that faces the camera with width in pixels.
Emphasis:
It needs to face the camera.
The width is in screen pixels.
Since it's a flat line in 3D, these properties aren't strictly well defined, so I guess the goal is "as much as possible" and "on average" (whatever that means). This is probably why glLineWidth is limited.

Something basic that doesn't address the nuances, but is enough for me at the moment (for now, only 2D lines, with a given world-space thickness):
GLUquadricObj *pQuadric = gluNewQuadric();
glPushMatrix();
// flatten y so the cylinder degenerates into a thin rectangle
glm::dmat4 S = glm::scale( glm::dvec3(1., 0.001 / radius, 1.) );
// translate to the first endpoint
glm::dmat4 T = glm::translate( toPoint<glm::dvec3>(p0) );
// rotate the cylinder axis (z) onto the y axis ...
glm::dvec3 xaxis(1, 0, 0);
glm::dmat4 R1 = glm::rotate( -M_PI / 2, xaxis );
// ... then orient the y axis along the line direction
glm::dvec3 u( toPoint<glm::dvec3>(p1 - p0) );
u = glm::normalize( u );
glm::dvec3 yaxis(0, 1, 0);
glm::dmat4 R2 = glm::orientation(u, yaxis);
// combine transforms: scale, rotate, then translate
glm::dmat4 A = T * R2 * R1 * S;
glMultMatrixd( (double*)&A[0] );
gluCylinder(pQuadric, radius, radius, height, 4, 1);
glPopMatrix();
gluDeleteQuadric(pQuadric);

Related

Implementing correct texture mapping in triangles with glTexCoord4f() in a Doom-like engine

A while ago I asked a similar question, but in that case I was trying to correct the perspective texture mapping of a trapezoid whose horizontal lines are always parallel, using glTexCoord4f(), and that is relatively simple. Now, however, I'm trying to fix the texture mapping of the floor and ceiling in my engine. The problem is that since both depend on the shape of the map, I need to use triangles to fill in the polygonal shapes the map may contain.
I tried a few variations of the method I used for the trapezoids. The attempt with the most "acceptable" results was when I calculated the length of the triangle's edges (in screen coordinates) and used each result as the 'q' in the corresponding glTexCoord4f(); that is how the code currently stands.
With that in mind, how can I fix this while using glTexCoord4f()?
Here is the code I used to correct the texture mapping of the walls (functional):
float u, v;
glEnable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
float sza = wyaa - wyab; //Size of the first vertical edge on the wall
float szb = wyba - wybb; //Size of the second vertical edge on the wall
//Does the wall have stretched textures?
if(!(*wall).streechTexture){
u = -texLength;
v = -texHeight;
}else{
u = -1;
v = -1;
}
glBindTexture (GL_TEXTURE_2D, texture.at((*wall).texture));
glBegin(GL_TRIANGLE_STRIP);
glTexCoord4f(0, 0, 0, sza);
glVertex3f(wxa, wyaa + shearing, -tza * 0.001953);
glTexCoord4f(u * szb, 0, 0, szb);
glVertex3f(wxb, wyba + shearing, -tzb * 0.001953);
glTexCoord4f(0, v * sza, 0, sza);
glVertex3f(wxa, wyab + shearing, -tza * 0.001953);
glTexCoord4f(u * szb, v * szb, 0, szb);
glVertex3f(wxb, wybb + shearing, -tzb * 0.001953);
glEnd();
glDisable(GL_TEXTURE_2D);
And here is the current code that renders both the floor and the ceiling (which needs to be fixed):
glEnable(GL_TEXTURE_2D);
glBindTexture (GL_TEXTURE_2D, texture.at((*floor).texture));
float difA, difB, difC;
difA = vectorMag(Vertex(fxa, fyaa), Vertex(fxb, fyba)); //Size of the first edge on the triangle
difB = vectorMag(Vertex(fxb, fyba), Vertex(fxc, fyca)); //Size of the second edge on the triangle
difC = vectorMag(Vertex(fxc, fyca), Vertex(fxa, fyaa)); //Size of the third edge on the triangle
glBegin(GL_TRIANGLE_STRIP); //Rendering the floor
glTexCoord4f(ua * difA, va * difA, 0, difA);
glVertex3f(fxa, fyaa + shearing, -tza * 0.001953);
glTexCoord4f(ub * difB, vb * difB, 0, difB);
glVertex3f(fxb, fyba + shearing, -tzb * 0.001953);
glTexCoord4f(uc * difC, vc * difC, 0, difC);
glVertex3f(fxc, fyca + shearing, -tzc * 0.001953);
glEnd();
glBegin(GL_TRIANGLE_STRIP); //Rendering the ceiling
glTexCoord4f(uc, vc, 0, 1);
glVertex3f(fxc, fycb + shearing, -tzc * 0.001953);
glTexCoord4f(ub, vb, 0, 1);
glVertex3f(fxb, fybb + shearing, -tzb * 0.001953);
glTexCoord4f(ua, va, 0, 1);
glVertex3f(fxa, fyab + shearing, -tza * 0.001953);
glEnd();
glDisable(GL_TEXTURE_2D);
Here is a picture of how it looks (for comparison: the floor shows the failed attempt at correct texture mapping, while the ceiling uses affine texture mapping):
I understand that it would be easier if I just set a normal perspective view, but that would simply defeat the whole purpose of the engine.
This is an issue only for the floor and ceiling (unless your camera can tilt), so you can render your walls as you are doing. But for floors and ceilings you have these basic options (as I mentioned in your old duplicate post):
Rasterize scan line on your own
So instead of rendering triangles (which old ray casters did not do), you render vertical lines pixel by pixel, using points instead of triangles. That will be much slower of course, as GL is better suited for polygonal primitives. See the draw_scanline functions here:
Efficient floor/ceiling rendering in Raycaster
Use perspective view and pass z coordinate
Looks like you added the z coordinate already, so now you just need to set a perspective view that matches your wall rendering; OpenGL will do the rest on its own. You should add something like gluPerspective to your GL_PROJECTION matrix, but just for your floors/ceilings ...
Pass the z coordinate and override the fragment shader
So you write a fragment shader that computes the perspective-correct texture mapping in it and outputs the wanted texel color, +/- some lighting. Here is an example of shader usage:
complete GL+GLSL+VAO/VBO C++ example
For more info see:
Ray Casting with different height size

dynamically render a 2d board in 3d view

I am a beginner in OpenGL. I am currently working on a program which takes as inputs the width and the length of a board. Given those inputs, I want to dynamically position my camera so that I can have a view of the whole board. Let's say that my window size is 1024x768.
Is there a mathematical formula to compute the parameters of the OpenGL function gluLookAt to make this possible?
The view I want to have on the board should look like this.
It doesn't matter if a board that is too big makes things look tiny. What matters most here is to position the camera so that the whole board is visible.
So far I am hopelessly, randomly changing the parameters of my gluLookAt function until I run into something decent for an X-size width and Y-size height.
my gluPerspective function:
gluPerspective(70 ,1024 / 768,1,1000)
my gluLookAt function for a 40 * 40 board:
gluLookAt(20, 20, 60, 20, -4, -20, 0, 1, 0);
how i draw my board (plane):
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
gluLookAt(20, 20, 60, 20, -4, -20, 0, 1, 0);
glBindTexture(GL_TEXTURE_2D, texture_sol);
glBegin(GL_QUADS);
glTexCoord2i(0, 0); glVertex3i(width, 0, height);
glTexCoord2i(10, 0); glVertex3i(0, 0, height);
glTexCoord2i(10, 10); glVertex3i(0, 0, 0);
glTexCoord2i(0, 10); glVertex3i(width, 0, 0);
glEnd();
The output looks as follows:
gluLookAt takes 2 points and a vector: the eye and centre positions, and the up vector. There's no issue with the last parameter; the first two are relevant to your question.
I see that your board in world space extends along the positive X and Z axes with some arbitrary width and height values. Let's take width = height = 1.0 for instance, so the board spans (0, 0), (1, 0), (1, 1), (0, 1); the Y value is ignored here since the board lies on the Y = 0 plane and has the same value for all vertices; these are just (X, Z) values.
Now coming to gluLookAt: eye is where the camera is in world space, and centre is the point you want the camera to be looking at (also in world space).
Say you want the camera to look at the centre of the board, so
centre = (width / 2.0f, 0, height / 2.0f);
Now you have to position the camera at its vantage point: say somewhere above the board, but towards the positive Z direction, since that's where the user is (assuming your world space is right-handed and the positive Z direction is towards the viewer), so
eye = (width / 2.0f, 5.0f, 1.0f);
Since the farthest point on Z is 0, I just added one more to be slightly farther than that. Y is how far above you want to see the board from; I just chose 5.0 as an example. These are just arbitrary values I came up with; you'll still have to experiment with them. But I hope you got the essence of how gluLookAt works.
Though this is written as an XNA tutorial, the basic technique and math behind it should carry over to OpenGL and your project:
Positioning the Camera to View All Scene Objects
Also see
OpenGL FAQ
8.070 How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.)
Edit in response to the comment question
A bounding sphere is simply a sphere that completely encloses your model. It can be described as:
A bounding sphere, S, of a point set P with n points is described by
a center point, c, and a radius, r.
So,
P = the vertices of your model (the board in this case)
c = origin of your model
r = distance from origin of the vertex, in P, farthest from the origin
So the bounding sphere for your board would be composed of the origin location (c) and the distance from a corner to the origin (r), assuming the board is a square so that all corners are equidistant from the centre.
For more complicated models, you may employ pre-created solutions [1] or implement your own calculations [2] [3]

How to infer translate, shear, etc from manual matrix operations?

While reading some code from UC Merced's TriPath Toolkit, I came across these lines:
float xmin, xmax, ymin, ymax;
float mat[16] = { 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1 };
TheLct->get_bounds ( xmin, xmax, ymin, ymax );
glMatrixMode ( GL_MODELVIEW );
glLoadIdentity ();
float width = xmax-xmin;
float height = ymax-ymin;
mat[0]=mat[5]=mat[10]= 1.8f * (1 / (width > height ? width : height));
glMultMatrixf ( mat );
mat[0]=mat[5]=mat[10]= 1;
mat[12]=-(xmin+width/2);
mat[13]=-(ymin+height/2);
glMultMatrixf ( mat );
In the first transformation, the first three diagonal 1's in the matrix are multiplied by a factor. From my limited knowledge of the identity matrix, this appears to be scaling by a factor.
The second transformation, however, I don't really understand:
mat[12]=-(xmin+width/2);
mat[13]=-(ymin+height/2);
glMultMatrixf ( mat );
First of all, I don't know what it even means to change indices 12 and 13 in such a matrix. I'm trying to figure it out by reading the wikipedia page on transformations, but I guess I don't have enough math-related domain knowledge to make sense of it.
Whereas the OpenGL resources I can find don't really seem to modify matrices in this manner; rather, they use functions like glScalef.
How can I relate manual matrix transformations such as the above to scaling, shearing, translating, and rotating?
The first matrix, as you correctly guessed, is a uniform scale matrix. The second matrix is just a translation (along the x and y axes). Note that the fixed-function matrix stack of the GL uses a column-major memory layout, where the translation part is always in m[12], m[13], m[14] (see also answer 9.005 in the old GL FAQ). The combined transformation is not a perspective projection (that would require that (m[3], m[7], m[11]) is not the null vector), but an affine one.
For an easy explanation of how all these numbers can be geometrically interpreted, you might find this article useful.
For the general form of each of these matrices (translation, scale, shear, rotation, perspective), see http://en.wikipedia.org/wiki/Transformation_matrix

OpenGL screen layout

I have some questions about the screen setup. Originally when I would draw a triangle, x = 1 was all the way to the right and x = -1 all the way to the left. I have since adjusted it to account for the window's aspect ratio. My new question is: how do I make the numbers used to render a 2D triangle correspond to pixel values? If my window is 480 pixels wide and 320 tall, I want to span the screen with a triangle by entering this:
glBegin(GL_TRIANGLES);
glVertex2f(240, 320);
glVertex2f(480, 0);
glVertex2f(0, 0);
glEnd();
but instead it currently looks like this
glBegin(GL_TRIANGLES);
glVertex2f(0, 1);
glVertex2f(1, -1);
glVertex2f(-1, -1);
glEnd();
Any ideas?
You need to use the functions glViewport and glOrtho with correct values. glViewport sets the part of your window that OpenGL renders into; glOrtho establishes the coordinate system within that part of the window.
So for your task you need to know the exact width and height of your window. If they are 480 and 320 respectively, then you need to call
glViewport(0, 0, 480, 320)
// or: glViewport ( 0,0,w,h)
somewhere, for example in your resize handler (with WINAPI that is the WM_SIZE message).
Next, when establishing OpenGL's scene, you need to specify OpenGL's coordinates. For an orthographic projection that matches the window's pixel dimensions (with the origin at the bottom-left corner, as in your desired triangle), use
glOrtho(0, 480, 0, 320, -100, 100)
// or: glOrtho ( 0, w, 0, h, -100, 100 );
which suits your purpose. Note that here I'm using a depth of 200 (z goes from -100 to 100).
Then, in your rendering routine, you may draw your triangle.
Since the second piece of code works for you, I assume your transformation matrices are all identity, or you have a shader that bypasses them. Also, your viewport spans the whole window.
In general if your viewport starts at (x0,y0) and has WxH size, the normalized coordinates (x,y) you feed to glVertex2f will be transformed to (vx,vy) as follows:
vx = x0 + (x * .5f + .5f) * W
vy = y0 + (y * .5f + .5f) * H
If you want to use pixel coordinates you can use the function
void vertex2(int x, int y)
{
// map pixel coordinates to [-1, 1] normalized device coordinates
float vx = (float(x) + .5f) / 480.f * 2.f - 1.f;
float vy = (float(y) + .5f) / 320.f * 2.f - 1.f;
glVertex3f(vx, vy, -1.f);
}
The -1 z value is the closest depth to the viewer. It's negative because the z is assumed to be reflected after the transformation (which is identity in your case).
The addition of .5f is because the rasterizer considers a pixel as a 1x1 quad and evaluates the coverage of your triangle in the middle of this quad.

How to tell the size of font in pixels when rendered with openGL

I'm working on the editor for Bitfighter, where we use the default OpenGL stroked font. We generally render the text with a line width of 2, but this makes smaller fonts less readable. What I'd like to do is detect when the font size will fall below some threshold and drop the line width to 1. The problem is, after all the transforms and such are applied, I don't know how to tell how tall (in pixels) a font of size <fontsize> will be rendered.
This is the actual inner rendering function:
if(---something--- < thresholdSizeInPixels)
glLineWidth(1);
float scaleFactor = fontsize / 120.0f;
glPushMatrix();
glTranslatef(x, y + (fix ? 0 : size), 0);
glRotatef(angle * radiansToDegreesConversion, 0, 0, 1);
glScalef(scaleFactor, -scaleFactor, 1);
for(S32 i = 0; string[i]; i++)
OpenglUtils::drawCharacter(string[i]);
glPopMatrix();
Just before calling this, I want to check the height of the font, then drop the linewidth if necessary. What goes in the ---something--- spot?
Bitfighter is a pure old-school 2D game, so there are no fancy 3D transforms going on. All code is in C++.
My solution was to combine the first part of Christian Rau's solution with a fragment of the second. Basically, I can get the current scaling factor with this:
static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview); // Fills modelview[]
float scalefact = modelview[0];
Then, I multiply scalefact by the fontsize in pixels, and multiply that by the ratio of windowHeight / canvasHeight to get the height in pixels that my text will be rendered.
That is...
textheight = scalefact * fontsize * windowHeight / canvasHeight
And I liked also the idea of scaling the line thickness rather than stepping from 2 to 1 when a threshold is crossed. It all works very nicely now.
where we use the default OpenGL stroked font
OpenGL doesn't do fonts. There is no default OpenGL stroked font.
Maybe you are referring to GLUT and its glutStrokeCharacter function. Then please take note that GLUT is not part of OpenGL; it's an independent library focused on providing a simplistic framework for small OpenGL demos and tutorials.
To answer your question: GLUT stroke fonts are defined in terms of vertices, so the usual transformations apply. Since usually all transformations are linear, you can simply transform the vector (0, base_height, 0) through modelview and projection, finally doing the perspective divide (gluProject does all this for you; GLU is not part of OpenGL either). The resulting vector is what you're looking for; take its length for scaling the width.
This should be determinable rather easily. The font's size in pixels just depends on the modelview transformation (actually only the scaling part), the projection transformation (which is a simple orthographic projection, I suppose) and the viewport settings, and of course on the size of an individual character of the font in untransformed form (what goes into the glVertex calls).
So you just take the font's basic size (let's consider the height only and call it height) and first apply the modelview transformation (assuming the scaling shown in the code is the only one):
height *= scaleFactor;
Next we do the projection transformation:
height /= (top-bottom);
with top and bottom being the values you used when specifying the orthographic transformation (e.g. using glOrtho). And last but not least we do the viewport transformation:
height *= viewportHeight;
with viewportHeight being, you guessed it, the height of the viewport specified in the glViewport call. The resulting height should be the height of your font in pixels. You can use it to scale the line width continuously (without an if); the line width parameter is a float anyway, so let OpenGL do the discretization.
If your transformation pipeline is more complicated, you could use a more general approach using the complete transformation matrices, perhaps with the help of gluProject to transform an object-space point to a screen-space point:
double x0, x1, y0, y1, z;
double modelview[16], projection[16];
int viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
gluProject(0.0, 0.0, 0.0, modelview, projection, viewport, &x0, &y0, &z);
gluProject(fontWidth, fontHeight, 0.0, modelview, projection, viewport, &x1, &y1, &z);
x1 -= x0;
y1 -= y0;
fontScreenSize = sqrt(x1*x1 + y1*y1);
Here I took the diagonal of the character and not only the height, to be less sensitive to rotations, and used the origin as the reference value to ignore translations.
You might also find the answers to this question interesting, which give some more insight into OpenGL's transformation pipeline.