I was wondering if it is possible to give a shape to a Texture in LibGDX.
In particular, I have a Texture and I want to make a button out of it. To do so, I want to give it a rounded-corner shape.
In a nutshell, I have this:
and I want this:
I've already read some similar questions, but without any clear answer. Has anyone run into this problem and found a smart solution?
Create a rounded-corner texture using the method below, and then add your text to it.
// Uses com.badlogic.gdx.graphics.{Color, Pixmap, Pixmap.Format, Texture}
public static Texture createPixmapRoundCornerRect(Color color, int width,
        int height, int radius) {
    Pixmap pixmap = new Pixmap(width, height, Format.RGBA8888);
    pixmap.setColor(color);
    // One filled circle per corner...
    pixmap.fillCircle(radius, radius, radius);
    pixmap.fillCircle(width - radius, radius, radius);
    pixmap.fillCircle(width - radius, height - radius, radius);
    pixmap.fillCircle(radius, height - radius, radius);
    // ...plus two overlapping rectangles fill in the body of the rounded rect.
    pixmap.fillRectangle(0, radius, width, height - (radius * 2));
    pixmap.fillRectangle(radius, 0, width - (radius * 2), height);
    Texture pixmaptex = new Texture(pixmap);
    pixmap.dispose(); // the Pixmap is no longer needed once uploaded to the Texture
    return pixmaptex;
}
This has already been answered here. You have to implement your own Texture that uses polygons to achieve what you are trying to do.
I'm trying to retain the ratio and sizes of the rendered content when resizing my window / framebuffer texture. I'm rendering exclusively on the xy-plane (z = 0) and would like to have an orthographic projection.
Some general questions: do I need to resize both glm::ortho and the viewport, or just one of them? Do both of them have to have the same number of pixels, or just the same aspect ratio? What about their x and y offsets?
I understand that I need to update the texture size.
What I have tried (among many other things):
One texture attachment on my framebuffer, a GL_RGB texture on GL_COLOR_ATTACHMENT0.
I render my framebuffer's attached texture to an ImGui window with ImGui::Image(). When this ImGui window is resized, I resize the texture attached to my FBO using the method below (I do not reattach the texture with glFramebufferTexture2D!):
void Texture2D::SetSize(glm::ivec2 size)
{
    Bind();
    this->size = size;
    // Reallocate the texture storage at the new size (contents are discarded).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, size.x, size.y, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    Unbind();
}
I also update the member variable windowSize.
In my rendering loop (for the framebuffer) I use
view = glm::lookAt(glm::vec3(0, 0, 1), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
float projWidth = (float)windowSize.x;
float projHeight = (float)windowSize.y;
proj = glm::ortho(-projWidth / 2.f, projWidth / 2.f, -projHeight / 2.f, projHeight / 2.f, -1.f, 1.f);
shaderProgram.SetMat4("view", view, 1);
shaderProgram.SetMat4("proj", proj, 1);
shaderProgram.SetMat4("world", world, 1); // world is identity matrix for now
glViewport(-windowSize.x/2, -windowSize.y/2, windowSize.x, windowSize.y);
I do have some variables here that I use to try to implement panning, but I first want to get resizing to work correctly.
EDIT:
OK, I have now tried different parameters for the viewport and the orthographic projection matrix:
auto aspect = (float)(windowSize.x)/(float)windowSize.y;
glViewport(0, 0, windowSize.x, windowSize.y);
proj = glm::ortho<float>(-aspect, aspect, -1/aspect, 1/aspect, 1, -1);
Resizing the window still stretches my render texture, but now it does so uniformly: x is scaled as much as y. My square sprites remain squares, but they become smaller as I decrease the window size.
I've found a way to resize the 2D interface so that it scales based on the aspect ratio. Give this a try and see if it solves your issue:
float aspectx = (float)windowSize.x / (float)windowSize.y;
float aspecty = (aspectx < 16.f / 9.f ? aspectx : 16.f / 9.f) / (16.f / 9.f);
float srcaspect = 4.f / 3.f;
float scale = 0.5f * (float)windowSize.y;
proj = glm::ortho(scale - (scale * aspectx / aspecty) - (scale - (scale * srcaspect)), scale + (scale * aspectx / aspecty) - (scale - (scale * srcaspect)), scale - (scale / aspecty), scale + (scale / aspecty), -1.f, 1.f);
And for future reference, do this if you want the orthographic projection scaled within the aspect ratio. The source aspect is what scales the object to fit within the original screen aspect:
float srcaspect = 4.f / 3.f;
float dstaspect = (float)windowSize.x / (float)windowSize.y;
float scale = 0.5f * (bottom - top);
float offset = scale + top;
// yscale here presumably plays the same role as aspecty above
proj = glm::ortho<float>(offset - (scale * dstaspect / yscale) - (offset - (scale * srcaspect)), offset + (scale * dstaspect / yscale) - (offset - (scale * srcaspect)), offset - (scale / yscale), offset + (scale / yscale), 1, -1);
I believe there were problems with my handling of FBOs, as I have now found a satisfactory solution, one that also seems to be the obvious one: simply using the width and height of the viewport (and of the FBO-attached texture), like this:
proj = glm::ortho<float>(-zoom * viewportSize.x / 2, zoom * viewportSize.x / 2, -zoom * viewportSize.y / 2, zoom * viewportSize.y / 2, 1, -1);
You might also want to divide the four parameters by a constant (possibly much larger than 2) to make the zooming range more natural around 1.
While I had tried this exact code before, I did not actually recreate the framebuffer, but simply called glTexImage2D() on the attached texture.
It seems as though one needs to delete and recreate the whole framebuffer. This was also stated to be the safer route in some posts here on SO, but I never found any source saying that it was actually required.
So, lesson learned: delete the old framebuffer and create a new one. Window resizing is not something that happens very often anyway.
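For reference, a minimal sketch of the delete-and-recreate approach described above (the function name, the raw GLuint handles and the GL_RGB format are assumptions mirroring the question, not my actual code):

void RecreateFramebuffer(GLuint &fbo, GLuint &colorTex, glm::ivec2 newSize)
{
    // Throw away the old FBO and its color attachment entirely
    // instead of resizing the texture in place with glTexImage2D.
    glDeleteTextures(1, &colorTex);
    glDeleteFramebuffers(1, &fbo);

    // Fresh color texture at the new size.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, newSize.x, newSize.y, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Fresh FBO with the texture attached.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        // handle the error (log, assert, ...)
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}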
My Allegro 5 game needs to draw a region of a tilesheet, so I used al_draw_bitmap_region. But now I have added the option to change the screen resolution, so I also need to scale that bitmap. Allegro 5 does not have something like al_draw_scaled_bitmap_region; it has al_draw_bitmap_region and al_draw_scaled_bitmap, but nothing that combines both.
Can somebody help me do both at once?
There is no al_draw_scaled_bitmap_region, but there is
al_draw_tinted_scaled_rotated_bitmap_region. You can just pass 'default'
values to the parameters you don't need.
al_draw_tinted_scaled_rotated_bitmap_region(
    bitmap,
    sx, sy, sw, sh,            // source bitmap region
    al_map_rgb(255, 255, 255), // tint color; plain white means no tint
    cx, cy,                    // center of rotation/scaling
    dx, dy,                    // destination
    xscale, yscale,            // scale
    0, 0);                     // angle and flags
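If you want a drop-in replacement for the missing function, you could wrap that call in a small helper. A rough sketch (the helper name and parameter order simply mirror al_draw_scaled_bitmap; they are my own choice, not an Allegro API):

void draw_scaled_bitmap_region(ALLEGRO_BITMAP *bitmap,
                               float sx, float sy, float sw, float sh,
                               float dx, float dy, float dw, float dh,
                               int flags)
{
    // Scale the source region so it fills the dw x dh destination rectangle.
    // With cx = cy = 0 the destination point (dx, dy) is the top-left corner.
    al_draw_tinted_scaled_rotated_bitmap_region(
        bitmap,
        sx, sy, sw, sh,
        al_map_rgb(255, 255, 255),
        0, 0,
        dx, dy,
        dw / sw, dh / sh,
        0, flags);
}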
You could also use transforms to scale your bitmap:
ALLEGRO_TRANSFORM trans, prevTrans;
// back up the current transform
al_copy_transform(&prevTrans, al_get_current_transform());
// scale using the new transform
al_identity_transform(&trans);
al_scale_transform(&trans, xscale, yscale);
al_use_transform(&trans);
al_draw_bitmap_region(bitmap, sx, sy, sw, sh, dx, dy, 0);
// restore the old transform
al_use_transform(&prevTrans);
I have made some shapes like this:
// Triangle
glBegin(GL_TRIANGLES);
glVertex3f(0.0,0.0,0);
glVertex3f(1.0,0.0,0);
glVertex3f(0.5,1.0,0);
glEnd();
// Cube using GLUT
glColor3f(0.0,0.0,1.0);
glutSolidCube(.5);
// Circle
glPointSize(2);
glColor3f(1.0,0.0,1.0);
glBegin(GL_POINTS);
float radius = .75;
for( float theta = 0 ; theta < 360 ; theta+=.01 )
glVertex3f( radius * cos(theta), radius * sin(theta), 0 );
glEnd();
Initially I keep my window size as 500x500 and the output is as shown :
However, if I change the width and height of my widget (not in proportion), the shapes get distorted (the circle looks oval, the equilateral triangle looks isosceles):
This is the widget update code :
void DrawingSurface::resizeGL(int width, int height)
{
// Update the drawable area in case the widget area changes
glViewport(0, 0, (GLint)width, (GLint)height);
}
I understand that I can keep the viewport itself at the same width and height, but then a lot of space gets wasted on the sides.
Q. Any solution for this ?
Q. How do game developers handle this in general, designing OpenGL game for different resolutions ?
P.S.: I do understand that this isn't modern OpenGL and also that there are better ways of making a circle.
They solve it by using the projection matrix: both the perspective and orthographic projections traditionally take the aspect ratio (width/height) into account and use it to adjust the result on screen.
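For the fixed-function pipeline shown in the question, a minimal sketch of an aspect-aware resizeGL could look like this (assuming the scene fits roughly within [-1, 1] on both axes; adjust the extents to your scene):

void DrawingSurface::resizeGL(int width, int height)
{
    glViewport(0, 0, (GLint)width, (GLint)height);

    float aspect = (float)width / (float)(height ? height : 1);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (aspect >= 1.0f)
        glOrtho(-aspect, aspect, -1.0, 1.0, -1.0, 1.0);             // widen x
    else
        glOrtho(-1.0, 1.0, -1.0 / aspect, 1.0 / aspect, -1.0, 1.0); // widen y

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

Either branch keeps one world unit the same length in pixels on both axes, so circles stay circular; the extra window space simply shows more of the world instead of stretching it.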
I'm working on the editor for Bitfighter, where we use the default OpenGL stroked font. We generally render the text with a linewidth of 2, but this makes smaller fonts less readable. What I'd like to do is detect when the fontsize will fall below some threshold, and drop the linewidth to 1. The problem is, after all the transforms and such are applied, I don't know how to tell how tall (in pixels) a font of size <fontsize> will be rendered.
This is the actual inner rendering function:
if(---something--- < thresholdSizeInPixels)
    glLineWidth(1);

float scaleFactor = fontsize / 120.0f;

glPushMatrix();
    glTranslatef(x, y + (fix ? 0 : size), 0);
    glRotatef(angle * radiansToDegreesConversion, 0, 0, 1);
    glScalef(scaleFactor, -scaleFactor, 1);

    for(S32 i = 0; string[i]; i++)
        OpenglUtils::drawCharacter(string[i]);
glPopMatrix();
Just before calling this, I want to check the height of the font, then drop the linewidth if necessary. What goes in the ---something--- spot?
Bitfighter is a pure old-school 2D game, so there are no fancy 3D transforms going on. All code is in C++.
My solution was to combine the first part of Christian Rau's solution with a fragment of the second. Basically, I can get the current scaling factor with this:
static float modelview[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview); // Fills modelview[]
float scalefact = modelview[0];
Then I multiply scalefact by the font size in pixels, and multiply that by the ratio windowHeight / canvasHeight to get the height in pixels at which my text will be rendered.
That is...
textheight = scalefact * fontsize * windowHeight / canvasHeight
I also liked the idea of scaling the line thickness rather than stepping from 2 to 1 when a threshold is crossed. It all works very nicely now.
where we use the default OpenGL stroked font
OpenGL doesn't do fonts. There is no default OpenGL stroked font.
Maybe you are referring to GLUT and its glutStrokeCharacter function. Then please take note that GLUT is not part of OpenGL. It's an independent library, focused on providing a simplistic framework for small OpenGL demos and tutorials.
To answer your question: GLUT stroke fonts are defined in terms of vertices, so the usual transformations apply. Since usually all transformations are linear, you can simply transform the vector (0, base_height, 0) through the modelview and projection matrices, finally doing the perspective divide (gluProject does all this for you; GLU is not part of OpenGL either). The resulting vector is what you're looking for; take its length for scaling the width.
This should be determinable rather easily. The font's size in pixels just depends on the modelview transformation (actually only the scaling part), the projection transformation (which is a simple orthographic projection, I suppose) and the viewport settings, and of course on the size of an individual character of the font in untransformed form (what goes into the glVertex calls).
So you just take the font's basic size (let's consider only the height and call it height) and first apply the modelview transformation (assuming the scaling shown in the code is the only one):
height *= scaleFactor;
Next we do the projection transformation:
height /= (top-bottom);
with top and bottom being the values you used when specifying the orthographic transformation (e.g. using glOrtho). And last but not least we do the viewport transformation:
height *= viewportHeight;
with viewportHeight being, you guessed it, the height of the viewport specified in the glViewport call. The resulting height should be the height of your font in pixels. You can use this to scale the line width continuously (without an if), as the line width parameter is a float anyway; let OpenGL do the discretization.
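Put together, a rough sketch of that computation (baseGlyphHeight, top, bottom, viewportHeight and thresholdSizeInPixels are placeholder names following this discussion, not Bitfighter's actual variables):

float pixelHeight = baseGlyphHeight   // untransformed character height (what goes into glVertex)
                  * scaleFactor       // modelview scaling
                  / (top - bottom)    // orthographic projection
                  * viewportHeight;   // viewport transform

// One possible continuous mapping instead of the 2-to-1 step: full thickness at or
// above the threshold, thinner (but never below 1 pixel) under it.
// std::min / std::max come from <algorithm>.
glLineWidth(std::min(2.0f, std::max(1.0f, 2.0f * pixelHeight / thresholdSizeInPixels)));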
If your transformation pipeline is more complicated, you could use a more general approach using the complete transformation matrices, perhaps with the help of gluProject to transform an object-space point to a screen-space point:
double x0, x1, y0, y1, z;
double modelview[16], projection[16];
int viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
gluProject(0.0, 0.0, 0.0, modelview, projection, viewport, &x0, &y0, &z);
gluProject(fontWidth, fontHeight, 0.0, modelview, projection, viewport, &x1, &y1, &z);
x1 -= x0;
y1 -= y0;
fontScreenSize = sqrt(x1*x1 + y1*y1);
Here I took the diagonal of the character rather than only the height, to better cope with rotations, and used the origin as the reference value to ignore translations.
You might also find the answers to this question interesting, which give some more insight into OpenGL's transformation pipeline.
My application is a vector drawing application. It works with OpenGL. I will be modifying it to use the Cairo 2D graphics library instead. The issue is with zooming. With OpenGL, the camera and scale factor work roughly like this:
float scalediv = Current_Scene().camera.ScaleFactor / 2.0f;
float cameraX = GetCameraX();
float cameraY = GetCameraY();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float left = cameraX - ((float)controls.MainGlFrame.Dimensions.x) * scalediv;
float right = cameraX + ((float)controls.MainGlFrame.Dimensions.x) * scalediv;
float bottom = cameraY - ((float)controls.MainGlFrame.Dimensions.y) * scalediv;
float top = cameraY + ((float)controls.MainGlFrame.Dimensions.y) * scalediv;
glOrtho(left,
right,
bottom,
top,
-0.01f,0.01f);
// Set the model matrix as the current matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
hdc = BeginPaint(controls.MainGlContext.mhWnd,&ps);
Mouse position is obtained like this:
POINT _mouse = controls.MainGlFrame.GetMousePos();
vector2f mouse = functions.ScreenToWorld(_mouse.x,_mouse.y,GetCameraX(),GetCameraY(),
Current_Scene().camera.ScaleFactor,
controls.MainGlFrame.Dimensions.x,
controls.MainGlFrame.Dimensions.y );
vector2f CGlEngineFunctions::ScreenToWorld(int x, int y, float camx, float camy, float scale, int width, int height)
{
// Move the given point to the origin, multiply by the zoom factor and
// add the model coordinates of the center point (camera position)
vector2f p;
p.x = (float)(x - width / 2.0f) * scale +
camx;
p.y = -(float)(y - height / 2.0f) * scale +
camy;
return p;
}
From there I draw the VBOs of triangles. This allows me to pan and zoom in. Given that Cairo can only draw based on coordinates, how can I make it so that a vertex is properly scaled and panned without using transformations? Basically, glOrtho usually sets up the projection, but I don't think I can do that with Cairo.
glOrtho is able to change the projection matrix instead of modifying the vertices, but how could I instead modify the vertices to get the same result?
Thanks
Given a vertex P, which was obtained from ScreenToWorld, how could I modify it so that it is scaled and panned according to the camera and scale factor? Because usually OpenGL would essentially do this for me.
I think Cairo can do what you want; see http://cairographics.org/matrix_transform/. Does that solve your problem, and if not, why not?
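As a rough sketch of how the glOrtho-style camera could map onto Cairo's transform API (assuming a cairo_t* named cr and the camera, scale factor and widget dimensions from the question; the function name is just illustrative):

void apply_camera(cairo_t *cr, float cameraX, float cameraY,
                  float scaleFactor, int widthPx, int heightPx)
{
    cairo_identity_matrix(cr);
    // Move the origin to the center of the widget.
    cairo_translate(cr, widthPx / 2.0, heightPx / 2.0);
    // One world unit maps to 1/scaleFactor pixels (zoom); flip y so +y points up,
    // matching the convention used in ScreenToWorld above.
    cairo_scale(cr, 1.0 / scaleFactor, -1.0 / scaleFactor);
    // Pan: shift the world so the camera position lands at the widget center.
    cairo_translate(cr, -cameraX, -cameraY);
}

After calling this, paths built from world coordinates (cairo_move_to, cairo_line_to, ...) end up panned and zoomed the same way the glOrtho-based view was; alternatively, you can apply the same formula to each vertex yourself before handing it to Cairo, which is the inverse of your ScreenToWorld function.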