How to create an arbitrary rectangle using a distance function in a GLSL fragment shader?

I want to use a distance function in a GLSL fragment shader to create an arbitrary rectangle. I want to specify the x, y coordinates of A and B and the length of C, as shown in the attached image. Is such a thing possible?
A method other than a distance function would also be fine. It would be helpful if you could tell me.
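One common way to do this, assuming the rectangle is meant to be oriented along the segment AB with total thickness C, is an oriented-box signed distance function. Below is a minimal CPU-side sketch in plain C++ of such a distance function (the name `sdOrientedBox` and the small vector helpers are illustrative; in a fragment shader the same math would be written in GLSL over vec2):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

static Vec2  sub(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
static float len(Vec2 a)         { return std::sqrt(dot(a, a)); }

// Signed distance from point p to a rectangle whose centerline runs
// from a to b, with total thickness th (the "C" length).
// Negative inside, zero on the edge, positive outside.
float sdOrientedBox(Vec2 p, Vec2 a, Vec2 b, float th) {
    float l = len(sub(b, a));
    Vec2  d = { (b.x - a.x) / l, (b.y - a.y) / l };          // unit axis
    Vec2  q = sub(p, { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f });
    // rotate q into the box's local frame
    Vec2  r = { d.x * q.x + d.y * q.y, -d.y * q.x + d.x * q.y };
    Vec2  e = { l * 0.5f, th * 0.5f };                       // half extents
    Vec2  v = { std::fabs(r.x) - e.x, std::fabs(r.y) - e.y };
    Vec2  vc = { std::max(v.x, 0.0f), std::max(v.y, 0.0f) };
    return len(vc) + std::min(std::max(v.x, v.y), 0.0f);
}
```

In the shader you would then test the sign, e.g. color the fragment when `sdOrientedBox(uv, A, B, C) < 0.0`, or use smoothstep on the distance for anti-aliased edges.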

Rotating, scaling and translating 2d points in GLM

I am currently trying to create a 2D game, and I am a bit shocked that I cannot find any transformations like rotate, scale, and translate for vec2.
For example, as far as I know, rotate is only available for a mat4x4, and a mat4x4 can only be multiplied by a vec4.
Is there any reason for that?
I want to calculate my vertices on the CPU:
std::vector<glm::mat2> m;
Gather all matrices, generate the vertices, fill the GPU buffer, and then draw everything in one draw call.
How would I do this in GLM? Just use a mat4 and then ignore the z and w components?
One thing you should understand is that you can't just ignore the z component, let alone the w component. It depends on how you want to view the problem. You want to use a mat2 and a vec2 for 2D, but alas, it's not as simple as you may think. You are using OpenGL, presumably with shaders, so you need at least a glm::mat3, or better, a glm::mat4. The reason is that although you want everything in 2D, it has to be done in terms of 3D: 2D is really just 3D with the z value fixed and the clip volume static, relative to the window size.
So, what I suggest is this:
For your model matrix, you have it as a glm::mat4 which contains all data, even if you don't intend to use it. Consistency is important here, especially for transformations.
Don't ignore the z and w components in glm::mat4; they are important because their values dictate where things appear on screen. OpenGL needs to know where in the z range a value lies, and matrix multiplication requires homogeneous coordinates, so the w component matters too. In other words, you are virtually stuck with glm::mat4.
To get transformations for glm::mat4 you should
#include <glm/gtc/matrix_transform.hpp>
Treat your sprites separately, as in having a Sprite class, and group them outside that class. Don't do the grouping recursively, as this leads to messy code. Generate vertices on a per-Sprite basis; don't worry about optimisation, because OpenGL (and the compiler on the C++ side) takes care of that for you. You'll soon realise that by breaking the problem down you can have something like
std::vector<Sprite*> sprites;
for (const auto& i : sprites)
    i->init();
// etc...
for (const auto& i : sprites)
    i->render();
With regards to the shaders, you shouldn't really have them inside the Sprite class. Have a resource loader, and simply have each Sprite class retrieve the shader from that loader.
And most importantly: remember the order of transformations!
With regards to transformations of sprites, you can have a glm::vec3 for sprite positions, setting the z component to 0. You can then move your sprite by having a function that takes x and y values and feeds them into the model matrix using glm::translate(..). For rotation, use glm::rotate() with functions that take rotation angles. Always rotate about the z-axis, so it should look similar to this:
modelMatrix = glm::rotate(modelMatrix, glm::radians(angle), glm::vec3(0.f, 0.f, 1.f));
As for scaling, again provide a suitable setScale() function that accepts two values, for x and y. Set the z component to 1 to prevent the sprite from being scaled in z, and feed the values into glm::scale like so:
modelMatrix = glm::scale(modelMatrix, glm::vec3(scaleX, scaleY, 1.f));
Remember that storing bare matrices provides little benefit: they're just numbers and don't indicate what you want to do with them. It's better for a Sprite class to encapsulate its matrix so that it's clear what it stands for.
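To see why 2D transforms need the extra dimension, here is a minimal std-only sketch of 2D transforms as 3x3 homogeneous matrices (no GLM; the helper names are illustrative). A plain mat2 * vec2 product cannot express translation, which is exactly why the translation column of the homogeneous matrix is needed:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<float, 3>, 3>;

Mat3 identity() {
    return {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
}

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Translation lives in the third column -- unreachable for a 2x2 matrix.
Mat3 translate(float tx, float ty) {
    Mat3 m = identity(); m[0][2] = tx; m[1][2] = ty; return m;
}

Mat3 rotate(float rad) {             // rotation about the z-axis
    Mat3 m = identity();
    m[0][0] =  std::cos(rad); m[0][1] = -std::sin(rad);
    m[1][0] =  std::sin(rad); m[1][1] =  std::cos(rad);
    return m;
}

Mat3 scale(float sx, float sy) {     // z stays 1, as in the text
    Mat3 m = identity(); m[0][0] = sx; m[1][1] = sy; return m;
}

// Apply to a 2D point treated as the homogeneous point (x, y, 1).
std::array<float, 2> apply(const Mat3& m, float x, float y) {
    return { m[0][0] * x + m[0][1] * y + m[0][2],
             m[1][0] * x + m[1][1] * y + m[1][2] };
}
```

Composing a model matrix as translate * rotate * scale reproduces the usual transformation order: the scale is applied to the point first, the translation last.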
It seems that I haven't looked hard enough:
GLM_GTX_matrix_transform_2d from glm/gtx/matrix_transform_2d.hpp.

How to map a texture around a circle?

I'm using OpenGL to develop a 2D game, and I'm trying to map a texture around a circle, as shown in the image below. I have noticed that many games use this technique because it saves texture memory.
But I don't know which texture mapping technique it used. Any suggestions?
Just as genpfault pointed out:
Create a bunch of quads along two circles. Set their UV coordinates A, B, C, D as shown in the picture. To get point C, just add the distance h along the vector from the center to B.
PS: you will need a lot more quads than I drew.
Generate a donut of quads with appropriate texture coordinates.
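A sketch of that donut generation, assuming a triangle-strip layout and a horizontal texture strip (the `Vertex` layout and function name are illustrative): u wraps once around the ring, v runs from the inner edge to the outer edge.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vertex { float x, y, u, v; };

// Build a triangle-strip "donut" of quads between an inner and an
// outer circle centered at (cx, cy). u runs around the ring (0..1),
// v runs from the inner edge (0) to the outer edge (1).
std::vector<Vertex> buildRing(float cx, float cy,
                              float rInner, float rOuter, int segments) {
    std::vector<Vertex> verts;
    const float twoPi = 6.28318530718f;
    for (int i = 0; i <= segments; ++i) {   // <= duplicates the seam to close the loop
        float t   = float(i) / float(segments);
        float ang = t * twoPi;
        float c = std::cos(ang), s = std::sin(ang);
        verts.push_back({ cx + rInner * c, cy + rInner * s, t, 0.0f });
        verts.push_back({ cx + rOuter * c, cy + rOuter * s, t, 1.0f });
    }
    return verts;
}
```

The duplicated seam vertices are what let u reach exactly 1.0 at the wrap point instead of jumping back to 0.0 mid-quad.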

GLSL-Only Texture Coordinates

There are a great number of questions about exactly two points in texture coordinates, and my shader already works with that concept: 1.0, 1.0 shows the entire image, and 1.0 / frame in one dimension or the other displays everything between 0.0 of the quad and that fraction of the texture.
What I'd like to do is, from the shader, control all four points of the texture coordinates. In every tutorial and every sample, the texture coordinate vec is always a vec2, implying that you only have control over the two end-points, and not the starting points. Is there a way to eliminate this limitation?
To give you an idea of why I want to do this (If it isn't blatantly obvious already), I'd like to pick a tile or animated frame out of a larger sheet.
Ideally, I'd also be able to find the dimensions (Width and height) of the image in the shader, but if necessary, it isn't that difficult to pass those values in. I believe at this time I'm using GLSL 2, meaning I'm unable to use the textureSize2D function in the shader (Already tried it).
To simplify things: the UV coordinate pair you pass to the texture command identifies a point to read from the texture. Just one point, not an area. Depending on the sampler state, and on whether minification or magnification occurs, more than one texel may be used to calculate the value at that point, but it is still one value associated with the UV you provided.
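So picking a tile out of a larger sheet is not about moving four corner points; it is about remapping each quad's local UV into the tile's sub-rectangle before sampling. A sketch of that remapping in plain C++ (the function name and parameters are illustrative; in a shader the tile index and sheet dimensions would arrive as uniforms):

```cpp
#include <array>
#include <cassert>

// Map a quad's local UV (0..1) into the UV sub-rectangle of one tile
// in a sheet of cols x rows tiles, indexed row-major from the corner.
std::array<float, 2> tileUV(float u, float v,
                            int tileIndex, int cols, int rows) {
    int   tx = tileIndex % cols;          // tile's column in the sheet
    int   ty = tileIndex / cols;          // tile's row in the sheet
    float tw = 1.0f / cols, th = 1.0f / rows;  // one tile in UV units
    return { (tx + u) * tw, (ty + v) * th };
}
```

In GLSL this is one line, something like `texCoord = (vec2(tx, ty) + localUV) * tileSize;`, which is why the vec2 attribute is not a limitation: the shader can rescale and offset it freely.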

[GLSL] How to compare the z values of all the vertices in world coordinates?

This might be a simple question. As a newbie on GLSL, I would rather ask here.
Now, in the vertex shader, I can get the position in world coordinate system in the following way:
gl_Position = ftransform();
posWorld = gl_ModelViewMatrix * gl_Vertex;
The question is: how can I get the max/min value of posWorld among all the vertices? That way I can get the range of vertex depths, rather than the range of the depth buffer.
If this is not possible, how can I get the z value of near/far plane in world coordinate system?
Yes it is possible with OpenGL. I'm doing a similar technique for calculating object's bounding box on GPU. Here are the steps:
Set up a 1x1 render buffer of type RGBA32F in its own FBO and make it the render target (no depth/stencil, just a single color plane). It can also be a pixel of a bigger texture, in which case you'll need to set up the viewport correctly.
Clear it with a base value: for 'min' some huge number, for 'max' a huge negative one.
Set the blend equation to 'min' or 'max' accordingly, with blend factors (1,1).
Draw your mesh with a vertex shader that outputs the position (0,0,0,1), and write your original vertex world position as the color.
You can go further optimizing from here. For example, you can get both 'min' and 'max' in one draw call by utilizing the geometry shader and negating the position for one of the output pixels.
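As a CPU reference for what that 1x1 blend pass computes, here is the same reduction in plain C++ (the struct and function names are illustrative): each vertex "writes" its world position into the same pixel, and the MIN/MAX blend equation keeps the extreme value.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// CPU reference for the GPU technique: the range of world-space z
// over all vertices. On the GPU, every vertex position is blended
// into one pixel with the blend equation set to MIN (or MAX).
std::pair<float, float> zRange(const std::vector<Vec3>& worldVerts) {
    float zMin =  1e30f;   // the "clear" value for the min pass
    float zMax = -1e30f;   // the "clear" value for the max pass
    for (const Vec3& v : worldVerts) {
        zMin = std::min(zMin, v.z);   // blend equation MIN
        zMax = std::max(zMax, v.z);   // blend equation MAX
    }
    return { zMin, zMax };
}
```

Reading the 1x1 target back after the draw call gives exactly this pair, without ever transferring the vertex data to the CPU.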
From what I know, I think this needs to be done manually with an algorithm based on parallel reduction. I would like someone to confirm whether an OpenGL or GLSL function that already does this exists.
On the other hand, you can access the normalized near/far planes within a fragment shader:
http://www.opengl.org/wiki/GLSL_Predefined_Variables#Fragment_shader_uniforms.
and with the help of some uniform variables you can recover the near/far values in world space.

How do I color / texture a 3d object dynamically?

I have a 3D model, composed of triangles. What I want to do is, given a point near to the model, I would like to color the model (triangles) to another color, say blue.
Right now, I have a bounding sphere around the model, and when a collision occurs, I want to approximately color the portions of the model where the collision occurred.
Can someone please suggest something I can use to make this happen?
Thanks
If you just have one or a small number of points to test against, the fastest-to-render method would probably be to write a shader in GLSL that conditionally modifies fragment colors based on world-space distance to your point(s).
An alternative that may be simpler if you've never done GLSL programming would be to use vertex arrays and maintain a map from your triangle vertices to coordinates indexing the vertex arrays; then you can take whatever vertices trigger the collision test and manually modify their associated color data on each frame.
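A sketch of that second, vertex-array approach in plain C++ (the struct layout and function name are illustrative): walk the vertex data, recolor every vertex within a radius of the contact point, then re-upload the color array to the GPU each frame.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Recolor every vertex within `radius` of the contact point p.
// `colors` is the per-vertex color array that backs the vertex buffer;
// after calling this, it would be re-uploaded (e.g. glBufferSubData).
void paintNear(const std::vector<Vec3>& verts, std::vector<Color>& colors,
               Vec3 p, float radius, Color paint) {
    for (size_t i = 0; i < verts.size(); ++i) {
        float dx = verts[i].x - p.x;
        float dy = verts[i].y - p.y;
        float dz = verts[i].z - p.z;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) <= radius)
            colors[i] = paint;   // e.g. blue: {0, 0, 1}
    }
}
```

The GLSL variant from the first suggestion does the same distance test per fragment instead of per vertex, which gives a smooth boundary without touching any buffers.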