I have two RGBA images (simple 2D raster of type GL_UNSIGNED_BYTE, not textures or anything) of the same scene, one sharp, one blurred. With the blurred image displayed, I need to create an effect where the sharp image shows through in one or more (possibly overlapping) circular areas, smoothly blending with the background blurred image around the edges of the circles. I used to do the following.
Call this at the initialization:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
For every circle, I made a copy of the sharp image and assigned new values to its alpha components, starting from 255 at the center of the circle and falling to 0 towards the edges. Then I rendered the copies one by one with glDrawPixels(), starting from the blurred image. It works, but as the number of those circular areas grows, it gets noticeably slow.
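For reference, this is roughly how I build the per-circle alpha falloff (a simplified sketch; the helper name and the exact linear falloff are just for illustration):
#include <math.h>

/* Illustrative helper: writes a radial alpha falloff (255 at the circle's
   center, 0 at its edge) into the alpha channel of an RGBA8 image copy. */
static void make_circle_alpha(unsigned char *rgba, int w, int h,
                              int cx, int cy, int radius)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float d = sqrtf((float)((x - cx) * (x - cx) + (y - cy) * (y - cy)));
            float a = 1.0f - d / (float)radius;              /* linear falloff */
            if (a < 0.0f) a = 0.0f;
            rgba[(y * w + x) * 4 + 3] = (unsigned char)(a * 255.0f + 0.5f);
        }
    }
}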
I was thinking of using some alpha buffers (don't know the correct term), so that I create a small image with a cut out alpha circle in the middle, render its alpha component at one or several places of the framebuffer, then somehow blend the blurred image and the sharp image with those pre-rendered alpha values. So I wrote this in my display() function:
//render the alpha-mask first, at one place for now
GLfloat rp[4];
glGetFloatv(GL_CURRENT_RASTER_POSITION, rp);
glRasterPos2f(0.2f, 0.2f); //just some arbitrary coordinates on the screen
glColorMask(false, false, false, true); //think I only need the alpha-channel
glBlendFunc(GL_ONE, GL_ZERO);
glDrawPixels(sharp_mask_width, sharp_mask_height, GL_RGBA, GL_UNSIGNED_BYTE, alpha_mask);
//render the blurred image, blending with the previously rendered alpha
glRasterPos2f(rp[0], rp[1]);
glColorMask(true, true, true, true); //all channels
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA); //make a hole where alpha mask had the maximum alpha
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataBlurred);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataSharp);
glFlush();
It seems I don't understand something important about all this blending stuff, because nothing is rendered at all; all I get is an empty screen. The closest I got was an effect remotely similar to what I need, by messing with the glBlendFunc parameters and blending RGBs instead of alphas.
How, if at all, can it be done?
I know I will probably burn in hell for using outdated OpenGL in the year 2015, when programmable shaders rule the world and cure cancer, but if possible I'd much prefer an answer describing how to do it in old style OpenGL, so I don't have to change that crap of a legacy program too much...
I've been working in OpenGL for a while relatively smoothly, but recently I've noticed that when I render a primitive with a transparent texture onto my fbo texture (custom frame buffer), it makes the fbo texture transparent at the pixels where the primitive's texture is transparent. The problem is that there are things behind this primitive (with solid color) already rendered before the transparent one. So the fbo texture should not be transparent at those pixels - blending a solid & transparent color should result in a solid color, shouldn't it?
So basically, OpenGL is adding transparency to my fbo texture just because the last primitive drawn has transparent pixels, even though there are solid colors behind it already drawn into the fbo texture. Shouldn't OpenGL blend the transparent texture with the fbo's existing pixels, and result in a solid color if the fbo texture is already filled with solid colors before rendering the transparent primitive?
What happens when I render my fbo texture to the default framebuffer is that the clear color bleeds through parts of it - where the last drawn texture is transparent. But when I render the same scene straight to the default OpenGL framebuffer, the scene looks fine and the clear color does not bleed into the transparent texture.
What's even more interesting is that the glClearColor color is only visible where the primitive's texture's alpha has a gradient; the clear color has no influence where the texture alpha is 1.0 or 0.0... is this a bug? It seems to affect the primitive's texture the most at pixels with 0.5 alpha, and going further above or below that decreases the glClearColor influence.
I know it's a bit of a complex question/situation; I honestly tried my best to explain it.
I'm using:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
to draw both the partly transparent primitive into the fbo-texture, and then the fbo texture to the default framebuffer
This is what the fbo-texture drawn into the default opengl fbo looks like:
glClearColor is set to red.
blending a solid & transparent color should result in a solid color, shouldn't it?
Only if your blend mode tells it to. And it doesn't:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This applies the same blending operation to all four components of the color. The final alpha is therefore the source alpha multiplied by itself, plus the destination alpha multiplied by (1 - source alpha).
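Written out (this is just the standard blend equation, not code from your program), the alpha that ends up in the framebuffer is:
dst.a = src.a * src.a + dst.a * (1 - src.a)
So a fragment with src.a = 0.5 written over an opaque destination (dst.a = 1.0) leaves dst.a = 0.75. The framebuffer alpha dips the most where the source alpha is around 0.5, which matches your observation that the clear color shows through most strongly at those pixels when the FBO texture is later blended onto the default framebuffer.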
Now, you could use separate blend functions for the color and the alpha:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE);
The last two parameters specify the blending for just the alpha, while the first two specify the blending for the RGB colors. So this preserves the destination alpha. Separate blend functions have been core OpenGL since version 1.4, and are also available through the EXT_blend_func_separate extension on older hardware.
But it seems to me that you probably don't want to change the alpha at all. So simply mask alpha writes when rendering to the texture:
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
Don't forget to undo the alpha mask when you want to write to it again.
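For illustration, the whole pass could look roughly like this (a sketch only; drawOpaqueStuff(), drawTransparentPrimitive(), drawFboTextureToScreen() and the fbo name are placeholders, not your actual code):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);            // your FBO
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);              // opaque clear inside the FBO
glClear(GL_COLOR_BUFFER_BIT);

drawOpaqueStuff();                                 // placeholder: solid background geometry

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);  // keep the texture's alpha opaque
drawTransparentPrimitive();                        // placeholder
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);   // restore alpha writes

glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawFboTextureToScreen();                          // textured quad, same blend func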
In libGdx, I'm trying to create a shaped texture: take a fully visible rectangular texture and mask it to obtain a shaped texture, as shown here:
Here I test it on a rectangle, but I will want to use it on any shape. I have looked into this tutorial and came up with the idea to first draw the texture, and then the mask, with this blending function:
batch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_ALPHA);
GL20.GL_ZERO - because I really don't want to paint any pixels from the mask itself
GL20.GL_SRC_ALPHA - from the original texture I want to keep only those pixels where the mask was visible (= white).
Crucial part of the test code:
batch0.enableBlending();
batch0.begin();
batch0.draw(original, 0, 0); //to see the original
batch0.draw(mask, width1, 0); //and the mask
batch0.draw(original, 0, height1); //base for the result
batch0.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_ALPHA);
batch0.draw(mask, 0, height1); //draw mask on result
batch0.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch0.end();
The center of the texture gets selected well, but instead of transparency around it, I see black:
Why is the result black and not transparent?
(Full code - Warning: very messy)
What you're trying to do looks like a pretty clever use of blending. But I believe the exact way you apply it is "broken by design". Let's walk through the steps:
You render your background with red and green squares.
You render an opaque texture on top of your background.
You erase parts of the texture you rendered in step 2 by applying a mask.
The problem is that for the parts you erase in step 3, the previous background is not coming back. It really can't, because you wiped it out in step 2. The background of the whole texture area was replaced in step 2, and once it's gone there's no way to bring it back.
Now the question is of course how you can fix this. There are two conventional approaches I can think of:
You can combine the texture and mask by rendering them into an off-screen framebuffer object (FBO). You perform steps 1 and 2 as you do now, but render into an FBO with a texture attachment. The texture you rendered into is then a texture with alpha values that reflect your mask, and you can use this texture to render into your default framebuffer with standard blending.
You can use a stencil buffer. Masking out parts of rendering is a primary application of stencil buffers, and using stencil would definitely be a very good solution for your use case. I won't elaborate on the details of how exactly to apply stencil buffers to your case in this answer. You should be able to find plenty of examples both online and in books, including in other answers on this site, if you search for "OpenGL stencil". For example this recent question deals with doing something similar using a stencil buffer: OpenGL stencil (Clip Entity).
So those would be the standard solutions. But inspired by the idea in your attempt, I think it's actually possible to get this to work with just blending. The approach that I came up with uses a slightly different sequence and different blend functions. I haven't tried this out, but I think it should work:
You render the background as before.
Render the mask. To prevent it from wiping out the background, disable writing to the color components of the framebuffer, and only write to the alpha component. This leaves the mask in the alpha component of the framebuffer.
Render the texture, using the alpha component from the framebuffer (DST_ALPHA) for blending.
You will need a framebuffer with an alpha component for this to work. Make sure that you request alpha bits for your framebuffer when setting up your context/surface.
The code sequence would look like this:
// Draw background.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
// Draw mask.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
// Draw texture.
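Filled in slightly more (still only a sketch; drawBackground(), drawMask() and drawTexture() are placeholder calls, and the mask is assumed to carry its shape in its alpha channel):
glDisable(GL_BLEND);
drawBackground();                                   // placeholder: the red/green squares

// Write only into the framebuffer's alpha channel; the mask's alpha
// (assumed to encode the shape) replaces whatever alpha was there.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
drawMask();                                         // placeholder

// Now blend the texture against the alpha sitting in the framebuffer.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawTexture();                                      // placeholder: the texture to be shaped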
A very late answer, but with the current version this is very easy. You simply draw the mask, set the blend function so that the destination (the mask) is multiplied by the source color, and then draw the original. You'll only see the original image where the mask is.
//create batch with blending
SpriteBatch maskBatch = new SpriteBatch();
maskBatch.enableBlending();
maskBatch.begin();
//draw the mask
maskBatch.draw(mask, 0, 0);
//store original blending and set correct blending
int src = maskBatch.getBlendSrcFunc();
int dst = maskBatch.getBlendDstFunc();
maskBatch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_COLOR);
//draw original
maskBatch.draw(original, 0, 0);
//reset blending
maskBatch.setBlendFunction(src, dst);
//end batch
maskBatch.end();
If you want more info on the blending options, check How to do blending in LibGDX
I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things, but they fail due to how OpenGL works.
I will explain how it would work:
If I draw one white pixel and move it around by one pixel per frame in some direction, then each frame the screen pixels should lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, with an even one-color-value difference between each pixel and the previous one.
Edit: I would prefer to know a non-shader way of doing this, but if that's not possible then I can accept a shader way too.
Edit2: Since there is some confusion around here, I would like to point out that I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT, it does not work the way I want it to: there is a limit on how dark the pixels can get, so some pixels always stay "visible" (above zero color value), because 1*0.9 = 0.9 gets rounded back to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a proportional (percentage-based) fade I want a linear one, so it always subtracts 1 from each R/G/B value on the 0-255 scale instead of using a percentage.
Edit3: Still some confusion left, so let's be clear: I want to improve the effect you get by leaving GL_COLOR_BUFFER_BIT out of glClear(). I don't want to see the pixels on my screen FOREVER, so I want to make them darker over time, by drawing a quad over my scene that reduces each pixel's color value by 1 (on a 0-255 scale).
Edit4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory, and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, etc. But as explained above a few times, it's not working very well. The big NOs are: 1) reading pixels from the screen, modifying them one by one in a for loop and uploading them back; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the color values, since that won't make the pixels black: they will stay on the screen forever at a certain brightness (see the explanation somewhere above).
If I draw one white pixel and move it around by one pixel per frame in some direction, then each frame the screen pixels should lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, with an even one-color-value difference between each pixel and the previous one.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB components produces a different color, not a darker version of the same color. The RGB color (255, 128, 0), if you subtract 1 from it 128 times, becomes (127, 0, 0). The first color is orange, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthbuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.

glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthbuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);

    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthbuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthbuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise this:
shaderOutput = texture(prevImage, texCoord) * 0.95; // scale by a fade factor; 0.95 keeps 95% of the previous color each frame
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
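For reference, the fixed-function texture environment route would look roughly like this (an untested sketch; it scales the previous frame's texel by a constant, and 0.95 is an arbitrary fade factor):
// Set up GL_COMBINE so each fetched texel is multiplied by a constant color.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_CONSTANT);

const GLfloat fade[4] = { 0.95f, 0.95f, 0.95f, 1.0f };   // arbitrary fade factor
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, fade);

// Then draw the fullscreen quad with the previous frame's texture bound,
// exactly where the shader version would otherwise run.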
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation. Usually a render target does not "remember" what has been drawn to it in previous frames. I can think of a way to do this by alternately rendering to one of a pair of FBOs and using their alpha channel for it, but you would need a shader for that. What you would do is first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one, dropping them when alpha == 0 and otherwise darkening them as their alpha decreases, and then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the darkening should be a proportional scale (a multiplicative factor rather than a fixed subtraction) so the components fade away equally (and not so that one that started at a lower value becomes zero before the others do).
If you only have a few moving pixels, you could re-render the pixel at all previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a really large number of pixels, the dual FBO approach might work.
I write "ticks" and not "frames" because frames can take a varying amount of time depending on the renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping its color for the frames in between.
One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen (via glReadPixels, IIRC), put those into a texture, and draw a screen-sized rectangle with that texture; you can then modulate the color of the rectangle towards black to get the effect you want.
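A rough sketch of that idea (untested; pixels, tex, width and height are assumptions - a CPU buffer and a screen-sized GL_RGBA texture created up front - and a pixel-aligned orthographic projection is assumed for the quad):
// Read back the framebuffer and upload it into the pre-created texture.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Draw a fullscreen quad with that texture, tinted slightly towards black.
// The default GL_MODULATE texture environment multiplies texel * color.
glEnable(GL_TEXTURE_2D);
glColor3f(0.95f, 0.95f, 0.95f);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f((GLfloat)width, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f((GLfloat)width, (GLfloat)height);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, (GLfloat)height);
glEnd();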
It is the drivers; Windows itself only ships a very old software OpenGL implementation (version 1.1). All newer versions come with the drivers from ATI/AMD, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL which works with OpenGL would allow you to directly work with the display surface's pixels in a reasonable fashion, as well as just pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry, and subtract 1 from each RGB component.
That's the easiest solution I can see anyway, if using additional libraries with OpenGL is an option.
I'm a little new to OpenGL. I am making a 2D application, and I defined a Quad class which defines a square with a texture on it. It loads these textures from a texture atlas, and it does this correctly. Everything works with regular textures and they display correctly, but things go wrong when the texture image is not a square.
For example, I want a Quad to have a star texture, with the star showing up and the area around the star image that still lies in the Quad being transparent. But what ends up happening is that the star shows up fine, and then behind it is another texture from my texture atlas that fills the Quad. I assume the texture behind it is just the last texture I loaded into the system? Either way, I don't want that texture to show up.
Here's what I mean. I want the star but not the cloud-ish texture behind it showing up:
The important part of my render function is:
glDisable(GL_CULL_FACE);
glVertexPointer(vertexStride, GL_FLOAT, 0, vertexes);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(colorStride, GL_FLOAT, 0, colors);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvCoordinates);
//render
glDrawArrays(renderStyle, 0, vertexCount);
It seems like the obvious choice would be to use an RGBA texture, and make everything but the star transparent by setting the alpha channel to zero for those pixels (and enable alpha blending for the texture unit).
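The state setup for that would be roughly the following (a sketch; either blending or the old fixed-function alpha test works, pick whichever suits you):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // blend the star's alpha against the background

// ...or, for a hard cutout without any blending/sorting concerns,
// discard fully transparent texels with the fixed-function alpha test:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);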
Use an image manipulation program. Photoshop is a great one, gimp is a free one. You don't really use OpenGL to crop your textures. Rather, your textures need to be prepared beforehand for your program.
There should be some sort of very easy tool to remove everything outside of the star. By remove, I mean make it transparent, which will require an alpha channel. This means you need to make sure that the way you load your textures in your program takes into account 32-bit colors (RGBA - red, green, blue, alpha), not just 24-bit colors (RGB - red, green, blue).
This will make everything behind your star see-through, or transparent.
Also, just an afterthought, it looks like you could be taking a copyrighted image off the internet and using it in your game/program. If you're doing anything commercial, I'd strongly recommend creating your own textures.
You want to make a call to glBindTexture(GL_TEXTURE_2D, 0); after you have mapped your texture.
Here is an example from some code I've written:
// Bind the texture
glBindTexture(GL_TEXTURE_2D, image.getID());
// Draw a QUAD with setting texture coordinates
glBegin(GL_QUADS);
{
// Top left corner of the texture
glTexCoord2f(0, 0);
glVertex2f(x, y);
// Top right corner of the texture
glTexCoord2f(image.getRelativeWidth(), 0);
glVertex2f(x+image.getImageWidth(), y);
// Bottom right corner of the texture
glTexCoord2f(image.getRelativeWidth(), image.getRelativeHeight());
glVertex2f(x+image.getImageWidth()-20, y+image.getImageHeight());
// Bottom left corner of the texture
glTexCoord2f(0, image.getRelativeHeight());
glVertex2f(x+20, y+image.getImageHeight());
}
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
I am no expert but this certainly solved what you are experiencing for me.
I am drawing lots of small black rectangles to the white screen and as they move about and zoom it doesn't look very graceful.
How can I draw them so if the edge lies between pixels, the pixels will be grey rather than black?
Sub-pixel rendering is actually more complicated than regular anti-aliasing. If you're using a display with pixels in RGB format, then a gray shape ending halfway through a pixel might be rendered as yellow or as cyan, depending on which side of the pixel the shape lies. This can look strange when the resulting image is not drawn in native resolution or on a display with a different layout than expected, but otherwise it can look quite nice.
Here is a sample of sub-pixel rendering applied to text; the left panel is color, the center panel is the displayed version of the color, and the right panel is the perceived brightness. Notice that accuracy in hue is exchanged for accuracy in brightness.
One approach might be to render each channel separately, each slightly offset in rendering space by the appropriate amount, so that the combined image is in full color. Each of the channels must be the same resolution as the resulting image; each channel is rendered with anti-aliasing the same way as the original would have been, except the other colors are ignored. Once the channels are created, they can be combined with simple addition. I don't know of a "pure" solution like the glEnable/glHint calls available for normal anti-aliasing, but one may exist, or may in the future.
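One way to realize the per-channel offset in fixed-function GL is to render the scene three times, once per color channel, writing each channel directly with a color mask instead of adding separate images afterwards (a sketch only; drawScene() is a placeholder, the 1/3-pixel offsets assume an RGB-stripe panel, and a pixel-aligned orthographic projection is assumed):
static const GLfloat offsets[3] = { -1.0f / 3.0f, 0.0f, 1.0f / 3.0f };
static const GLboolean masks[3][3] = {
    { GL_TRUE,  GL_FALSE, GL_FALSE },   // red pass
    { GL_FALSE, GL_TRUE,  GL_FALSE },   // green pass
    { GL_FALSE, GL_FALSE, GL_TRUE  },   // blue pass
};
for (int c = 0; c < 3; ++c) {
    glColorMask(masks[c][0], masks[c][1], masks[c][2], GL_TRUE);
    glPushMatrix();
    glTranslatef(offsets[c], 0.0f, 0.0f);   // shift by a third of a pixel
    drawScene();                            // placeholder, drawn with the usual AA state
    glPopMatrix();
}
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);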
glEnable(GL_POLYGON_SMOOTH); should do it (GL_SMOOTH itself is a glShadeModel() argument, not an enable flag; points and lines have GL_POINT_SMOOTH and GL_LINE_SMOOTH).
What you are effectively asking for is anti-aliasing, and there are numerous ways of doing it. One way is summarized in this gamedev topic: http://www.gamedev.net/topic/107637-glenablegl_polygon_smooth/.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);