How to draw a lightened border (outer glow effect)? - C++

How can I draw a lightened border like this with GDI/GDI+?
Can anyone give me a train of thought? Thanks.

Using GDI+, I would recommend you use a PathGradientBrush. It allows you to fill a region with a series of colors around the edge that all blend toward a center color. You probably only need one edge color in this case. Create a GraphicsPath for a rounded rectangle and use FillPath() to fill it with a PathGradientBrush:
GraphicsPath graphicsPath;
//rect - for a bounding rect
//radius - for how 'rounded' the glow will look
int diameter = radius * 2;
graphicsPath.AddArc(Rect(rect.X, rect.Y, diameter, diameter), 180.0f, 90.0f);
graphicsPath.AddArc(Rect(rect.X + rect.Width - diameter, rect.Y, diameter, diameter), 270.0f, 90.0f);
graphicsPath.AddArc(Rect(rect.X + rect.Width - diameter, rect.Y + rect.Height - diameter, diameter, diameter), 0.0f, 90.0f);
graphicsPath.AddArc(Rect(rect.X, rect.Y + rect.Height - diameter, diameter, diameter), 90.0f, 90.0f);
graphicsPath.CloseFigure();
PathGradientBrush brush(&graphicsPath);
brush.SetCenterColor(centerColor); //would be some shade of blue, following your example
int colCount = 1;
brush.SetSurroundColors(&surroundColor, &colCount); //same as your center color, but with the alpha channel set to 0
//play with these numbers to get the glow effect you want
REAL blendFactors[] = {0.0, 0.1, 0.3, 1.0};
REAL blendPos[] = {0.0, 0.4, 0.6, 1.0};
//sets how transition toward the center is shaped
brush.SetBlend(blendFactors, blendPos, 4);
//sets the scaling on the center. you may want to have it elongated in the x-direction
brush.SetFocusScales(0.2f, 0.2f);
graphics.FillPath(&brush, &graphicsPath);
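For reference, the snippet above assumes a few variables declared elsewhere; a minimal, hypothetical setup might look like this (the names, values, and the hdc handle are placeholders):
// Hypothetical setup for the snippet above (values are placeholders)
Graphics graphics(hdc); //hdc - the target device context
graphics.SetSmoothingMode(SmoothingModeAntiAlias);
Rect rect(10, 10, 200, 80); //bounding rect of the glow
int radius = 16; //corner radius - how 'rounded' the glow looks
Color centerColor(255, 30, 144, 255); //opaque blue at the center
Color surroundColor(0, 30, 144, 255); //same color with the alpha channel set to 0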

1. Draw the border into an image slightly larger than the border itself.
2. Blur it.
3. Erase the inside of the border.
4. Draw the border over the blurred image.
5. Draw that image to the destination.
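A rough GDI+ sketch of those steps, reusing the graphicsPath and destination graphics from the answer above; margin and borderPen are placeholders, and the blur itself is left as a comment since any blur will do (GDI+ 1.1 ships a Blur effect, or you can write a simple box blur):
int margin = 10; //extra room so the blur has space to spread
Bitmap scratch(rect.Width + 2 * margin, rect.Height + 2 * margin, PixelFormat32bppARGB);
{
    Graphics g(&scratch);
    g.SetSmoothingMode(SmoothingModeAntiAlias);
    g.TranslateTransform((REAL)(margin - rect.X), (REAL)(margin - rect.Y));
    g.DrawPath(&borderPen, &graphicsPath); //1) draw the border into the scratch image
    //2) blur 'scratch' here with the blur of your choice
    //3) erase the inside of the border: overwrite the interior with transparent pixels
    g.SetCompositingMode(CompositingModeSourceCopy);
    SolidBrush clear(Color(0, 0, 0, 0));
    g.FillPath(&clear, &graphicsPath);
    g.SetCompositingMode(CompositingModeSourceOver);
    g.DrawPath(&borderPen, &graphicsPath); //4) redraw the crisp border on top
}
graphics.DrawImage(&scratch, rect.X - margin, rect.Y - margin); //5) composite to the destination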

Related

openGL: cannot correctly draw a sphere in front of camera

I'm trying to draw a little sphere in front of the camera, let's say 5 units away (C++, newbie in OpenGL and not very confident in trigonometry!).
I expect the sphere to stay in the middle of the view when I perform pan and tilt movements.
In my rendering loop, I calculate the coordinates of the sphere in the following way:
// 1) setting my camera
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(camera_angle[1], 0, 1, 0);
glRotatef(camera_angle[0], 1, 0, 0);
glRotatef(camera_angle[2], 0, 0, 1);
glTranslatef(camera_pos[0],camera_pos[1],camera_pos[2]);
// 2) retrieving camera pan tilt angles in radians:
double phi = camera_angle[1] *(M_PI/180.0); //pan
double theta = camera_angle[0] *(M_PI/180.0); //tilt
//3) calculating xyz coordinates of the sphere, if 5 units away from camera
double dist = -5;
double ax = camera_pos[0] + (-1)*(dist*sin(phi)*cos(theta));
double ay = camera_pos[1] + dist*sin(theta);
double az = camera_pos[2] + dist*cos(theta)*cos(phi);
//4) draw sphere
float ndiv = 2.0;
GLfloat f[]={1.0,1.0,1.0,1};
glPushMatrix();
glTranslated(ax, ay, az);
glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, f);
glShadeModel(GL_FLAT);
glBegin(GL_TRIANGLES);
for (int i=0;i<20;i++)
makeTri(vdata[tindices[i][0]], vdata[tindices[i][1]], vdata[tindices[i][2]], ndiv, 0.2);
glEnd();
glPopMatrix();
I found the trigonometric formula here:
spherical coordinate system
Note that I inverted some values like sin and cos, as I guess the correct order depends on the reference system (I guess OpenGL has some inverted axes).
Now I get a strange result, which can be seen in this video:
sphere behaviour
Please, ignore the coloured spheres in the background and the green square in the middle of the camera, just look at the white sphere in front of the camera.
As you can see, if I perform only pan or only tilt (look at the bottom-left values showing the exact camera angles), the white sphere stays in the exact centre of the view, as expected. Nevertheless, when pan and tilt are performed together, the sphere drifts:
the further the pan and tilt angles move away from 0 degrees, the more the sphere drifts. Moreover, the drift follows a circular trajectory, which is very suspicious to me.
Does anyone have an idea? Thanks.
To draw something that doesn't move relative to the camera, you simply start from an identity model-view matrix: with the transforms zeroed out, the translation is interpreted directly in camera (eye) space, so the sphere stays locked to the view no matter how the camera pans or tilts:
float ndiv = 2.0;
GLfloat f[]={1.0,1.0,1.0,1};
glPushMatrix();
glLoadIdentity(); // <------------- zero out transforms
glTranslated(0, 0, -5); // <------------- translate 5 units from the camera
glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, f);
glShadeModel(GL_FLAT);
glBegin(GL_TRIANGLES);
for (int i=0;i<20;i++)
makeTri(vdata[tindices[i][0]], vdata[tindices[i][1]], vdata[tindices[i][2]], ndiv, 0.2);
glEnd();
glPopMatrix();

Keeping the geometry intact on screen size change

I have made some shapes like this:
// Triangle
glBegin(GL_TRIANGLES);
glVertex3f(0.0,0.0,0);
glVertex3f(1.0,0.0,0);
glVertex3f(0.5,1.0,0);
glEnd();
// Cube using GLUT
glColor3f(0.0,0.0,1.0);
glutSolidCube(.5);
// Circle
glPointSize(2);
glColor3f(1.0,0.0,1.0);
glBegin(GL_POINTS);
float radius = .75;
for( float theta = 0 ; theta < 360 ; theta+=.01 )
glVertex3f( radius * cos(theta), radius * sin(theta), 0 );
glEnd();
Initially I keep my window size at 500x500 and the output is as shown:
However, if I change the width and height of my widget (not in proportion), the shapes get distorted (the circle looks oval, the equilateral triangle looks isosceles):
This is the widget update code:
void DrawingSurface::resizeGL(int width, int height)
{
// Update the drawable area in case the widget area changes
glViewport(0, 0, (GLint)width, (GLint)height);
}
I understand that I could keep the viewport itself at the same width and height, but then a lot of space would be wasted on the sides.
Q. Any solution for this?
Q. How do game developers handle this in general when designing OpenGL games for different resolutions?
P.S.: I do understand that this isn't modern OpenGL and that there are better ways of making a circle.
They solve it by using the projection matrix: both the perspective and orthographic projections traditionally take the aspect ratio (width/height) into account and use it to compensate, so the result on screen keeps its proportions.
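For example, an aspect-aware resize handler for the widget above could look roughly like this (a sketch for the fixed-function pipeline; it keeps the vertical extent fixed at [-1, 1] and widens the horizontal extent instead):
void DrawingSurface::resizeGL(int width, int height)
{
    glViewport(0, 0, (GLint)width, (GLint)height);
    float aspect = (height > 0) ? float(width) / float(height) : 1.0f;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    //circles stay circular because x and y are scaled consistently
    glOrtho(-aspect, aspect, -1.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}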

OpenGL circle radius issue when drawing square

I have a function that draws a circle.
glBegin(GL_LINE_LOOP);
for(int i = 0; i < 20; i++)
{
float theta = 2.0f * 3.1415926f * float(i) / float(20);//get the current angle
float rad_x = ratio*(radius * cosf(theta));//calculate the x component
float rad_y = radius * sinf(theta);//calculate the y component
glVertex2f(x + rad_x, y + rad_y);//output vertex
}
glEnd();
This works dandy. I save the x, y and radius values in my object.
However when I try and draw a square with the following function call:
newSquare(id, red, green, blue, x, (x + radius), y, (y + radius));
I get the following image.
As you can see, the square is nearly twice as wide as it should be (it looks more like the diameter). The following code is how I create my square box. It starts in the center of the circle, as it should, and should stretch out to the edge of the circle.
glBegin(GL_QUADS);
glVertex2f(x2, y2);
glVertex2f(x2, y1);
glVertex2f(x1, y1);
glVertex2f(x1, y2);
glEnd();
I can't seem to understand why this is!
If you're correcting the x-position for one object, you have to do it for all others as well.
However, if you continue this, you'll get into trouble very soon. In your case, only the width of objects is corrected but not their positions. You can solve all your problems by setting an orthographic projection matrix and you won't ever need to correct positions again. E.g. like so:
glMatrixMode(GL_PROJECTION); //switch to projection matrix
glLoadIdentity(); //reset it so the ortho projection isn't multiplied onto an old one
glOrtho(-ratio, ratio, -1, 1, -1, 1);
glMatrixMode(GL_MODELVIEW); //switch back to model view
where
ratio = window width / window height
This constructs a coordinate system where the top edge has y=1, the bottom edge y=-1 and the left and right sides have x=-ratio and x=ratio, respectively.
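With that projection in place, the circle (and the square built from its radius) can be specified with plain, uncorrected coordinates, for example:
//with the ortho projection above, no per-axis 'ratio' correction is needed
glBegin(GL_LINE_LOOP);
for(int i = 0; i < 20; i++)
{
    float theta = 2.0f * 3.1415926f * float(i) / float(20);
    glVertex2f(x + radius * cosf(theta), y + radius * sinf(theta));
}
glEnd();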

OpenGL screen layout

I have some questions about the screen setup. Originally, when I drew a triangle, an x value of 1 would be all the way to the right and -1 all the way to the left. Now I have adjusted it to account for the different aspect ratio of the window. My new question is: how do I make the numbers used to render a 2D triangle correspond to pixel values? If my window is 480 pixels wide and 320 tall, I want to be able to enter this to span the screen with a triangle:
glBegin(GL_TRIANGLES);
glVertex2f(240, 320);
glVertex2f(480, 0);
glVertex2f(0, 0);
glEnd();
but instead it currently looks like this
glBegin(GL_TRIANGLES);
glVertex2f(0, 1);
glVertex2f(1, -1);
glVertex2f(-1, -1);
glEnd();
Any ideas?
You need to use the functions glViewport and glOrtho with the correct values. Basically, glViewport sets the part of your window that OpenGL renders into, and glOrtho establishes the coordinate system within that part of the window.
So for your task you need to know the exact width and height of your window. If they are 480 and 320 respectively, then you need to call
glViewport(0, 0, 480, 320)
// or: glViewport ( 0,0,w,h)
somewhere, for example in your size-change handler (if you are using the WinAPI, that is the WM_SIZE message).
Next, when setting up the scene, you need to specify the coordinate range. For an orthographic projection you can make it match the window dimensions, so
glOrtho(-240, 240, -160, 160, -100, 100)
// or: glOrtho ( -w/2, w/2, -h/2, h/2, -100, 100 );
is suitable for your purpose. Note that here I'm using a depth of 200 (z goes from -100 to 100).
Then, in your rendering routine, you can draw your triangle.
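Putting the two calls together, here is a sketch of the setup plus a screen-spanning triangle in that coordinate system (note that with this glOrtho the origin sits at the window centre, so the corners are at ±240 and ±160 rather than 0..480 and 0..320):
glViewport(0, 0, 480, 320);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-240, 240, -160, 160, -100, 100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_TRIANGLES); //spans the whole window in this coordinate system
glVertex2f(0, 160);
glVertex2f(240, -160);
glVertex2f(-240, -160);
glEnd();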
Since the second piece of code is working for you, I assume your transformation matrices are all identity or you have a shader that bypasses them. Also your viewport is spanning the whole window.
In general if your viewport starts at (x0,y0) and has WxH size, the normalized coordinates (x,y) you feed to glVertex2f will be transformed to (vx,vy) as follows:
vx = x0 + (x * .5f + .5f) * W
vy = y0 + (y * .5f + .5f) * H
If you want to use pixel coordinates you can use the function
void vertex2(int x, int y)
{
// invert the viewport mapping above: pixel coordinates -> normalized [-1, 1]
float vx = 2.f * (float(x) + .5f) / 480.f - 1.f;
float vy = 2.f * (float(y) + .5f) / 320.f - 1.f;
glVertex3f(vx, vy, -1.f);
}
The -1 z value is the closest depth to the viewer. It's negative because the z is assumed to be reflected after the transformation (which is identity in your case).
The addition of .5f is because the rasterizer considers a pixel as a 1x1 quad and evaluates the coverage of your triangle in the middle of this quad.
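With that helper in place, the triangle from the question can be written directly in pixel coordinates, for example:
glBegin(GL_TRIANGLES);
vertex2(240, 320);
vertex2(480, 0);
vertex2(0, 0);
glEnd();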

OpenGL: draw rectangles with borders?

Check the image I produced. What I want to do is draw those rectangles with borders and set the background colour to a different one. How can I do that?
Should I use glRectf(top_left_x, top_left_y, bottom_right_x, bottom_right_y)?
if loop == 0:
    ratio = 0.10

glBegin(GL_QUADS)
while ratio <= 1.0:
    width = window_width / 2
    height = window_height
    long_length = width * ratio
    short_length = height * (1.0 - ratio)
    top_left_x = (width - long_length) / 2.0
    top_left_y = (height - window_height * (1.0 - ratio)) / 2
    bottom_right_x = top_left_x + long_length
    bottom_right_y = top_left_y + short_length
    glColor(1.0, 1.0, 1.0, 0.5)
    glVertex3f(top_left_x, top_left_y, 0.0)
    glVertex3f(top_left_x + long_length, top_left_y, 0.0)
    glVertex3f(bottom_right_x, bottom_right_y, 0.0)
    glVertex3f(bottom_right_x - long_length, bottom_right_y, 0.0)
    ratio += 0.05
glEnd()
You can draw an unfilled rectangle this way:
glBegin(GL_LINE_LOOP); //GL_LINE_LOOP connects the last vertex back to the first
glVertex2d(top_left_x, top_left_y);
glVertex2d(top_right_x, top_right_y);
glVertex2d(bottom_right_x, bottom_right_y);
glVertex2d(bottom_left_x, bottom_left_y);
glEnd();
OpenGL uses a state machine, so to change the color just put:
glColor3f(R, G, B);
before your drawing primitives.
So, putting it together, your steps should be:
choose fill color
draw fill rect with glRectf
choose border color
draw unfilled rect with the code I posted
Repeat these steps for each rectangle you are drawing, of course; a combined sketch follows below.
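Put together, one rectangle might look roughly like this (the colors are placeholders):
glColor3f(0.2f, 0.4f, 0.8f); //1) choose fill color
glRectf(top_left_x, top_left_y, bottom_right_x, bottom_right_y); //2) draw the filled rect
glColor3f(1.0f, 1.0f, 1.0f); //3) choose border color
glBegin(GL_LINE_LOOP); //4) draw the unfilled rect on top
glVertex2f(top_left_x, top_left_y);
glVertex2f(bottom_right_x, top_left_y);
glVertex2f(bottom_right_x, bottom_right_y);
glVertex2f(top_left_x, bottom_right_y);
glEnd();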