I am wondering how to draw this effect in a computer program, either on the CPU or the GPU?
You have two lines there. For each pixel, pick the closer line and compute the distance to it; that distance determines the intensity at that point. Then fade to black toward the bottom of the image (use the pixel's y position for this).
Your lines appear to be at exactly 25% and 75% on the x axis, so the pseudocode looks like this:
for each pixel p: // p.x and p.y are normalized to the 0-1 range!
    intensity = (0.25 - min(abs(p.x - 0.25), abs(p.x - 0.75))) / 0.25; // intensity normalized to the 0-1 range
    intensity *= intensity; // distance squared
    intensity *= (1.0 - p.y); // top of the image is 0, bottom is 1
    display_intensity();
end
Depending on how you want to use this, you can create a texture on the CPU, or use a shader and calculate it in GLSL on the GPU.
I wish to size an OpenGL texture so that at the maximum clipping distance each pixel represents a fixed area, say 0.25 m2, i.e. a square of 0.5m by 0.5m.
For example:
GLint TextureWidth = GLint(HorizViewAngle*pi*RangeDistance / 180.0 / ResolutionAtDistance);
GLint TextureHeight = GLint(VertViewAngle*pi*RangeDistance / 180.0 / ResolutionAtDistance);
HorizViewAngle will typically be 120 degrees, with a VertViewAngle of 60 degrees, which is based on a human's field of view.
Furthermore, RangeDistance will be 100 and ResolutionAtDistance will be 0.5.
The above code is based on the arc-length formula: Arc = θ × (π/180) × r.
For setting the clipping distance I am using the OpenGL Utility Library (GLU) function gluPerspective:
GLdouble AspectRatio = HorizViewAngle / VertViewAngle ;
gluPerspective(VertViewAngle, AspectRatio, 0.01, RangeDistance);
I was wondering if anyone else has experience doing this in OpenGL and what issues I should look out for, such as image distortion that could affect the projected image. For example, I believe that if I were to extend the field of view to 180 degrees, it would be better to render two 90-degree textures and stitch them together.
I am rendering a tile map to an FBO, then moving the resulting buffer to a texture and rendering it on a full-screen quad (FSQ). From mouse click events I get the screen coordinates and move them to clip space [-1, 1]:
glm::vec2 posMouseClipSpace((2.0f * myCursorPos.x) / myDeviceWidth - 1.0f,
                            1.0f - (2.0f * myCursorPos.y) / myDeviceHeight);
I have logic in my program that selects a specific tile on the texture based on those coordinates.
Now, moving to 3D, I am texturing a semi-cylinder with the FBO I used in the previous step:
In this case I compute the ray-triangle intersection point where the ray hits the cylinder of radius r and height h. The idea is to map this intersection point into [-1, 1] space so I can keep the tile-selection logic in my program.
I use the Möller–Trumbore algorithm to find the points on the cylinder hit by a ray. Let's say the intersection point is (x, y) (I was not sure whether the point is in triangle, object, or world space; apparently it is world space).
I want to map that point into x: [-1, 1], y: [-1, 1].
I know the height of my cylinder, which is a quarter of the cylinder's circumference:
cylinderHeight = myRadius * (PI/2);
so the point on the Y axis can be mapped into [-1, 1] space:
vec2.y = (2.f * (intersectedPoint.y - myCylinder->position().y)) /
         (myCylinder->height()) - 1.f;
That works perfectly.
However, how do I compute the horizontal axis, which depends on the two variables x and z?
Currently my cylinder's radius is 1, so by coincidence a semi-cylinder placed at the origin spans (-1, 1) on the X axis, which made me think it was already in [-1, 1] space, but it turns out it is not.
My next approach was to use the arc length of a semicircle, s = r * PI, and plug that value into the equation:
vec2.x = (2.f * (intersectedPoint.x - myCylinder->position().x)) /
         (myCylinder->arcLength()) - 1.f;
but it is clearly off by 1 unit in the negative direction.
I appreciate the help.
From your description, it seems that you want to convert the world space intersection coordinate to its corresponding normalized texture coordinate.
For this you need the Z coordinate as well, since there are two "horizontal" coordinates. However, you don't need the arc length.
Using the relative X and Z coordinates of intersectedPoint, calculate the polar angle with atan2 and divide by PI (the angular range of the semicircular arc):
vec2.x = atan2(intersectedPoint.z - myCylinder->position().z,
               myCylinder->position().x - intersectedPoint.x) / PI;
So the default 2D clipping area of OpenGL goes from -1.0 (left) to 1.0 (right), and from -1.0 (bottom) to 1.0 (top).
The window I created for an OpenGL program is 640 pixels wide and 480 pixels high. The top-left pixel is (0, 0), the bottom-right pixel is (640, 480).
I also wrote a function to retrieve the coordinates when I click, drag, and release the mouse button (on click it's (x1, y1), on release it's (x2, y2)).
What should I do to convert (x1, y1) and (x2, y2) to the corresponding positions in the clipping area?
The answer given by @BDL might get you close enough for what you need, but the calculations are not quite correct.
The division needs to be by the number of pixels in each coordinate direction, because there are 640 (respectively 480) pixels within the coordinate range.
One subtle detail to take into account is that, when you get a given position from your mouse input, these will be the integer coordinates of the pixels. If you simply apply the scaling based on the window size, the resulting OpenGL coordinate would map to the left/bottom edge of the pixel. But what you most likely want is the center of the pixel. To precisely transform this into the OpenGL coordinate space, you can simply apply a 0.5 offset to your input value, moving the value from the edge to the center of the pixel.
For example, the left most pixel would have x-coordinate 0, the right most 639. The centers of these two, after applying the 0.5 offset, are 0.5 and 639.5. Applying this correction, you can also see that they are now both a distance of 0.5 away from the corresponding edges of the area at 0 and 640, making the whole thing symmetrical.
So the correct calculation is:
float xClip = ((xPix + 0.5f) / 640.0f) * 2.0f - 1.0f;
float yClip = 1.0f - ((yPix + 0.5f) / 480.0f) * 2.0f;
Or slightly simplified:
float xClip = (xPix + 0.5f) / 320.0f - 1.0f;
float yClip = 1.0f - (yPix + 0.5f) / 240.0f;
This takes the y-inversion into account.
I assume that the rightmost pixel is 639 (otherwise your window would be 641 pixels large).
The transformation is quite simple; we just need a linear mapping. To transform a point P from pixel coordinates to clip coordinates, one can use the following formula:
P_clip = (P_pixel / [319.5, 239.5]) - 1.0    (component-wise division)
Let's go over it step by step for the x coordinate. First we transform the [0, 639] range to the [0, 1] range by dividing by 639:
P_01 = P_pixel_x / 639
Then we transform from [0, 1] to [-1, 1] by multiplying by 2 and subtracting 1
P_clip_x = P_01 * 2 - 1
When one combines these two calculations and extends them to the y coordinate, one gets the equation given above.
I checked the result of the filter-width GLSL function (fwidth) by coloring it in red on a plane around the camera.
The result is a bizarre pattern. I expected a circular gradient on the plane, extending around the camera as a function of distance: pixels further away should uniformly represent larger UV steps between neighboring pixels.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how it can work properly if it isn't, because I want to anti-alias pixels as a function of the UV step between them.
float width = fwidth(i.uv)*.2;
return float4(width,0,0,1)*(2*i.color);
UVs that are close = black, and far = red.
Result:
The pattern above from fwidth is axis-aligned and has one axis of symmetry; it couldn't anti-alias a two-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard:
float2 xy0 = float2(i.uv.x, i.uv.z) + float2(-0.5, -0.5);
float c0 = length(xy0);                     // sqrt(x*x + y*y): polar-coordinate radius
float r0 = atan2(i.uv.x - .5, i.uv.z - .5); // polar-coordinate angle
float ww = round(sin(c0 * freq) * sin(r0 * 50) * .5 + .5);
Axis-independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by distance (in fact, by the time the fragment stage kicks in, there is no such thing as distance anymore).
I suggest you replace the fwidth visualization with a procedurally generated checkerboard (e.g. mod(floor(uv.s * k) + floor(uv.t * k), 2.0), where k is a scaling parameter). You'll see that the "density" of the checkerboard (and its aliasing artifacts) is highest where you've got the most red in your picture.
OpenGL can colour a rectangle with a gradient of colours from one side to the other. I'm using the following code for that in C++:
glBegin(GL_QUADS);
{
glColor3d(simulationSettings->hotColour.redF(), simulationSettings->hotColour.greenF(), simulationSettings->hotColour.blueF());
glVertex2d(keyPosX - keyWidth/2, keyPosY + keyHight/2);
glColor3d(simulationSettings->coldColour.redF(), simulationSettings->coldColour.greenF(), simulationSettings->coldColour.blueF());
glVertex2d(keyPosX - keyWidth/2, keyPosY - keyHight/2);
glColor3d(simulationSettings->coldColour.redF(), simulationSettings->coldColour.greenF(), simulationSettings->coldColour.blueF());
glVertex2d(keyPosX + keyWidth/2, keyPosY - keyHight/2);
glColor3d(simulationSettings->hotColour.redF(), simulationSettings->hotColour.greenF(), simulationSettings->hotColour.blueF());
glVertex2d(keyPosX + keyWidth/2, keyPosY + keyHight/2);
}
glEnd();
I'm using some Qt libraries to do the conversions between HSV and RGB. As you can see from the code, I'm drawing a rectangle with colour gradient from what I call hotColour to coldColour.
Why am I doing this? The program I made draws 3D Vectors in space and indicates their length by their colour. The user is offered to choose the hot (high value) and cold (low value) colours, and the program will automatically do the gradient using HSV scaling.
Why HSV scaling? Because HSV is single-valued along the colour map I'm using, and creating linear gradients with it is a very easy task. For the user to select the colours, I offer a QColorDialog colour map:
http://qt-project.org/doc/qt-4.8/qcolordialog.html
On this colour map, you can see that red appears on both the right and the left side, making a linear scale for this colour map impossible with RGB. But with HSV a linear scale is easily achievable: I just use a linear scale between 0 and 360 for the hue values.
With this paradigm, the hot and cold colours define the direction of the gradient. For example, if I choose a hue of 0 for cold and 359 for hot, HSV will give me a gradient between 0 and 359 that covers the whole spectrum of colours; whereas OpenGL will basically interpolate from red to red, which is no gradient at all.
How can I force OpenGL to use an HSV gradient rather than RGB? The only idea that occurs to me is to slice the rectangle I want to colour and draw many gradients over smaller rectangles, but I don't think that is the most efficient way to do it.
Any ideas?
How can I force OpenGL to use an HSV gradient rather than RGB?
I wouldn't call it "forcing" but "teaching". OpenGL's default way of interpolating vertex attribute vectors is barycentric interpolation of the individual vector elements, based on the NDC coordinates of the fragment.
You must tell OpenGL how to turn those barycentrically interpolated HSV values into RGB.
For this we introduce a fragment shader that treats the colour vertex attribute as HSV rather than RGB.
#version 120
varying vec3 vertex_hsv; /* set this in the vertex shader from the vertex attribute data */

vec3 hsv2rgb(vec3 hsv)
{
    float h = hsv.x * 6.; /* H: 0 = 0 degrees ... 1 = 360 degrees */
    float s = hsv.y;
    float v = hsv.z;
    float c = v * s; /* chroma */
    vec2 cx = vec2(c, c * (1. - abs(mod(h, 2.) - 1.)));
    vec3 rgb = vec3(0., 0., 0.);
    if( h < 1. ) {
        rgb.rg = cx;
    } else if( h < 2. ) {
        rgb.gr = cx;
    } else if( h < 3. ) {
        rgb.gb = cx;
    } else if( h < 4. ) {
        rgb.bg = cx;
    } else if( h < 5. ) {
        rgb.br = cx;
    } else {
        rgb.rb = cx;
    }
    return rgb + vec3(v - cx.x); /* add m = v - c to all channels */
}

void main()
{
    gl_FragColor = hsv2rgb(vertex_hsv);
}
You can do this with a fragment shader: draw a quad and apply a fragment shader that does the colouring you want. The way I would do it is to set the corner colours to the HSV values you want, then in the fragment shader convert the interpolated colour values from HSV back to RGB. For more information on fragment shaders, see the docs.