Color picking with AntiAliasing in OpenGL? - opengl

I'm having a problem with color picking and antialiasing in OpenGL. When AA is activated results from glReadPixels are obviously wrong on object edges and object intersections. For example:
I render a box #28 (RGBA: 28, 0, 0, 0) near a box #32 (RGBA: 32, 0, 0, 0). With AA enabled I can get a wrong glReadPixels value (e.g. 30) where the two boxes overlap, or a value of 14 on a box's edge, due to the AA algorithm.
I have ~4000 objects I need to be able to pick (it's a jigsaw puzzle game). It is vital to be able to select objects by shape.
I've tried to disable AA with glDisable(GL_MULTISAMPLE), but it does not work with certain AA modes (I read it depends on the AA implementation: supersampling, multisampling, coverage sampling, ...).
So, how do I pick an underlying object?
Is there a way to temporarily disable AA?
Should I use a different buffer or even a separate rendering context?
Any other suggestion?

Why not use an FBO as your pick buffer?

I use this hack: pick not just one pixel, but all 3x3 = 9 pixels around the picking point. If they are all the same, we are safe. Otherwise, the point must be on an edge and we can skip it.
int renderer::pick_(int x, int y)
{
    static_assert(__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__,
                  "only works on little-endian architecture");
    static_assert(sizeof(int) == 4,
                  "only works on architecture that has int size of 4");
    // sort of edge detection: selection only happens at non-edge pixels,
    // since an edge may cause an anti-aliasing glitch
    int ids[3*3];
    glReadPixels(x-1, y-1, 3, 3, GL_RGBA, GL_UNSIGNED_BYTE, ids);
    for (auto& id : ids) id &= 0x00FFFFFF;  // mask out alpha
    if (ids[0] == 0x00FFFFFF) return -1;    // pure white for background
    // prevent anti-aliasing glitch
    bool same = true;
    for (auto id : ids) same = (same && id == ids[0]);
    if (same) return ids[0];
    return -2; // edge
}

Related

Understanding image dithering and how they help blending CSM

So I wish to implement dithering as a blend mode between my cascade shadow map splits.
I had no idea what it was, so I've watched this video to try and understand it. As far as I understand it, it's a way to map an image's colors to a limited palette while trying to maintain a convincing gradient between differently colored pixels.
Now from this video I understand how to calculate what color my eye will see based on the weights of the dithering pattern. What I do not understand is how we take an image with 4 bytes of pixel data and, for example, try to map it to 1 byte of pixel data. How can we map each pixel color in the original image to a dither pattern whose weighted average will look like the original color if we're basically limited? Say we were limited to only 5 colors; I'm guessing not every possible weighted-average combination of dither patterns using these 5 palette colors could reproduce the original pixel color, so how can this be achieved? Also, is a dither pattern calculated for each pixel to achieve a dithered image?
Besides these general questions about image dithering, I'm still having difficulties understanding how this technique helps us blend between cascade splits. As far as actually implementing it in code, I've seen an example that takes the screen-space coordinates of a fragment and calculates a dither value (I'm not sure what it's calculating, actually, because it doesn't return a matrix, it returns a float):
float GetDither2(ivec2 p)
{
    float d = 0.0;
    if((p.x & 1) != (p.y & 1))
        d += 2.0;
    if((p.y & 1) == 1)
        d += 1.0;
    d *= 0.25;
    return d;
}
float GetDither4(ivec2 p)
{
    float d = GetDither2(p);
    d = d * 0.25 + GetDither2(p >> 1);
    return d;
}
float threshold = GetDither4(ivec2(gl_FragCoord.xy));
if(factor <= threshold)
{
    // sample current cascade
}
else
{
    // sample next cascade
}
And then it samples either cascade map based on this returned float.
So my brain can't translate what I learned (that you can use a dither pattern to simulate a larger color palette) into this example, which uses the returned float as a threshold and compares it to some blend factor just to decide which shadow map to sample. It made me more confused.
Would appreciate a good explanation of this 🙏
EDIT:
OK, I see the correlation between the algorithm I was provided with and the Wikipedia article about ordered dithering, which as far as I understand is the preferred dithering algorithm because, according to the article:
Additionally, because the location of the dithering patterns always
stays the same relative to the display frame, it is less prone to
jitter than error-diffusion methods, making it suitable for
animations.
Now I see the code tries to get this threshold value for a given screen coordinate, although it seems to me it got it a bit wrong, because the calculation of the threshold is as follows:
Mpre(i,j) = (Mint(i,j)+1) / n^2
And it needs to set float d = 1.0 instead of float d = 0.0, if I'm not mistaken.
Secondly, I'm not sure about right-shifting the ivec2 screen coordinate (I'm not even sure what the behavior of a bitwise shift on a vector is in GLSL, but I assume it's just a component-wise operation). I tried plugging in (calculating by hand) a given screen coordinate (2,1), according to my assumptions about the bitwise operation, and got a different threshold result from what should be the threshold value of this position in a 4x4 Bayer matrix.
So I'm skeptical about how well this code implements the ordered dithering algorithm.
Thirdly, I'm still not sure what this threshold value has to do with choosing between shadow map 1 or 2, rather than reducing the color palette of a given pixel. This logic hasn't settled in my mind yet, as I do not understand how a dithering threshold for a given screen coordinate helps choose the right map to sample from.
Lastly, won't using screen coordinates cause jitter? Take a shadowed fragment at world position (x,y,z) whose screen coordinates for a given frame are (i,j). If the camera moves, won't this fragment's screen coordinates change, making the dither threshold calculated for it change with each movement and causing the dither pattern to jitter?
EDIT2:
I tried to blend the maps as follows, although the result doesn't look so good. Any ideas?
const int indexMatrix8x8[64] = int[](
     0, 32,  8, 40,  2, 34, 10, 42,
    48, 16, 56, 24, 50, 18, 58, 26,
    12, 44,  4, 36, 14, 46,  6, 38,
    60, 28, 52, 20, 62, 30, 54, 22,
     3, 35, 11, 43,  1, 33,  9, 41,
    51, 19, 59, 27, 49, 17, 57, 25,
    15, 47,  7, 39, 13, 45,  5, 37,
    63, 31, 55, 23, 61, 29, 53, 21
);
for (int i = 0; i < NR_LIGHT_SPACE; i++) {
    if (fs_in.v_FragPosClipSpaceZ <= u_CascadeEndClipSpace[i]) {
        shadow = isInShadow(fs_in.v_FragPosLightSpace[i], normal, lightDirection, i) * u_ShadowStrength;
        int x = int(mod(gl_FragCoord.x, 8));
        int y = int(mod(gl_FragCoord.y, 8));
        float threshold = (indexMatrix8x8[(x + y * 8)] + 1) / 64.0;
        if (u_CascadeBlend >= threshold)
        {
            shadow = isInShadow(fs_in.v_FragPosLightSpace[i + 1], normal, lightDirection, i + 1) * u_ShadowStrength;
        }
        break;
    }
}
Basically, if I understand what I'm doing: I get the threshold value from the matrix for each screen coordinate of a shadowed pixel, and if the blend factor is greater than or equal to that threshold then I sample the second map instead.
Here're the results:
The larger red box is where the split between map occurs.
The smaller red box shows that there is some dither pattern, but the image isn't as blended as I think it should be.
First of all, I have no knowledge about CSM, so I'll focus on the dithering and blending. Firstly, see these:
my very simple dithering in C++ I come up with
image dithering routine that accepts an amount of dithering? and its variation
They basically answer your question about how to compute the dithering pattern/pixels.
Also, it's important to have a good palette for dithering that reduces your 24/32 bpp into 8 bpp (or less). There are 2 basic approaches:
reduce colors (color quantization)
so compute a histogram of the original image and pick significant colors from it that more or less cover the whole image information. For more info see:
Effective gif/image color quantization?
dithering palette
dithering uses averaging of pixels to generate the desired color, so we need colors that can generate all the colors we want. So it's good to have a few (2..4) shades of each base color (R,G,B,C,M,Y) and some (>=4) shades of gray. From these you can combine any color and intensity you want (if you have enough pixels).
#1 is the best, but it is per-image, so you need to compute a palette for each image. That can be a problem, as that computation is nasty CPU-hungry stuff. Also, on old 256-color modes you could not show 2 different palettes at the same time (which with true color is no longer a problem), so a dithering palette is usually the better choice.
You can even combine the two for impressive results.
The better the used palette is, the less grainy the result is...
The standard VGA 16- and 256-color palettes were specially designed for dithering, so it's a good idea to use them...
Standard VGA 16 color palette:
Standard VGA 256 color palette:
Here also C++ code for the 256 colors:
//---------------------------------------------------------------------------
//--- EGA VGA palette -------------------------------------------------------
//---------------------------------------------------------------------------
#ifndef _vgapal_h
#define _vgapal_h
//---------------------------------------------------------------------------
unsigned int vgapal[256]=
{
0x00000000,0x00220000,0x00002200,0x00222200,
0x00000022,0x00220022,0x00001522,0x00222222,
0x00151515,0x00371515,0x00153715,0x00373715,
0x00151537,0x00371537,0x00153737,0x00373737,
0x00000000,0x00050505,0x00000000,0x00030303,
0x00060606,0x00111111,0x00141414,0x00101010,
0x00141414,0x00202020,0x00242424,0x00202020,
0x00252525,0x00323232,0x00303030,0x00373737,
0x00370000,0x00370010,0x00370017,0x00370027,
0x00370037,0x00270037,0x00170037,0x00100037,
0x00000037,0x00001037,0x00001737,0x00002737,
0x00003737,0x00003727,0x00003717,0x00003710,
0x00003700,0x00103700,0x00173700,0x00273700,
0x00373700,0x00372700,0x00371700,0x00371000,
0x00371717,0x00371727,0x00371727,0x00371737,
0x00371737,0x00371737,0x00271737,0x00271737,
0x00171737,0x00172737,0x00172737,0x00173737,
0x00173737,0x00173737,0x00173727,0x00173727,
0x00173717,0x00273717,0x00273717,0x00373717,
0x00373717,0x00373717,0x00372717,0x00372717,
0x00372525,0x00372531,0x00372536,0x00372532,
0x00372537,0x00322537,0x00362537,0x00312537,
0x00252537,0x00253137,0x00253637,0x00253237,
0x00253737,0x00253732,0x00253736,0x00253731,
0x00253725,0x00313725,0x00363725,0x00323725,
0x00373725,0x00373225,0x00373625,0x00373125,
0x00140000,0x00140007,0x00140006,0x00140015,
0x00140014,0x00150014,0x00060014,0x00070014,
0x00000014,0x00000714,0x00000614,0x00001514,
0x00001414,0x00001415,0x00001406,0x00001407,
0x00001400,0x00071400,0x00061400,0x00151400,
0x00141400,0x00141500,0x00140600,0x00140700,
0x00140606,0x00140611,0x00140615,0x00140610,
0x00140614,0x00100614,0x00150614,0x00110614,
0x00060614,0x00061114,0x00061514,0x00061014,
0x00061414,0x00061410,0x00061415,0x00061411,
0x00061406,0x00111406,0x00151406,0x00101406,
0x00141406,0x00141006,0x00141506,0x00141106,
0x00141414,0x00141416,0x00141410,0x00141412,
0x00141414,0x00121414,0x00101414,0x00161414,
0x00141414,0x00141614,0x00141014,0x00141214,
0x00141414,0x00141412,0x00141410,0x00141416,
0x00141414,0x00161414,0x00101414,0x00121414,
0x00141414,0x00141214,0x00141014,0x00141614,
0x00100000,0x00100004,0x00100000,0x00100004,
0x00100010,0x00040010,0x00000010,0x00040010,
0x00000010,0x00000410,0x00000010,0x00000410,
0x00001010,0x00001004,0x00001000,0x00001004,
0x00001000,0x00041000,0x00001000,0x00041000,
0x00101000,0x00100400,0x00100000,0x00100400,
0x00100000,0x00100002,0x00100004,0x00100006,
0x00100010,0x00060010,0x00040010,0x00020010,
0x00000010,0x00000210,0x00000410,0x00000610,
0x00001010,0x00001006,0x00001004,0x00001002,
0x00001000,0x00021000,0x00041000,0x00061000,
0x00101000,0x00100600,0x00100400,0x00100200,
0x00100303,0x00100304,0x00100305,0x00100307,
0x00100310,0x00070310,0x00050310,0x00040310,
0x00030310,0x00030410,0x00030510,0x00030710,
0x00031010,0x00031007,0x00031005,0x00031004,
0x00031003,0x00041003,0x00051003,0x00071003,
0x00101003,0x00100703,0x00100503,0x00100403,
0x00000000,0x00000000,0x00000000,0x00000000,
0x00000000,0x00000000,0x00000000,0x00000000,
};
//---------------------------------------------------------------------------
class _vgapal_init_class
{
public: _vgapal_init_class();
} vgapal_init_class;
//---------------------------------------------------------------------------
_vgapal_init_class::_vgapal_init_class()
{
    int i;
    BYTE a; // BYTE = unsigned char (from windows.h)
    union { unsigned int dd; BYTE db[4]; } c;
    for (i=0;i<256;i++)
    {
        c.dd=vgapal[i];
        c.dd=c.dd<<2;
        a=c.db[0];
        c.db[0]=c.db[2];
        c.db[2]=a;
        vgapal[i]=c.dd;
    }
}
//---------------------------------------------------------------------------
#endif
//---------------------------------------------------------------------------
//--- end. ------------------------------------------------------------------
//---------------------------------------------------------------------------
Now back to your question about blending by dithering
Blending is merging of 2 images of the same resolution together by some amount (weights). So each pixel color is computed like this:
color = w0*color0 + w1*color1;
where color? are the pixels in the source images and w? are weights, and all weights together sum up to 1:
w0 + w1 = 1;
here example:
Draw tbitmap with scale and alpha channel faster
and preview (the dots are dithering from my GIF encoder):
But blending by dithering is done differently. Instead of blending colors, we use some percentage of pixels from one image and the rest from the second one. So:
if (Random()<w0) color = color0;
else color = color1;
Where Random() returns a pseudo-random number in range <0,1>. As you can see, no combining of colors is done; you simply choose which image you copy the pixel from... Here's a preview:
Now the dots are caused by the blending by dithering: the intensities of the images are very far from each other, so it does not look good. But if you dither relatively similar images (like your shadow map layers), the result should be good enough (with almost no performance penalty).
To speed this up, it's usual to precompute the Random() outputs for some box (8x8, 16x16, ...) and reuse that for the whole image (it's a bit blocky, but that is sometimes used as a fun effect...). This way it can also be done branchlessly (if you store pointers to the source images instead of a random value). It can also be done fully with integers (no floating point) if the weights are integers, for example <0..255>...
Now, to make a cascade/transition from image0 to image1 or whatever, just do something like this:
for (w0=1.0;w0>=0.0;w0-=0.05)
{
w1=1.0-w0;
render blended images;
Sleep(100);
}
render image1;
I got the dither blend to work in my code as follows:
for (int i = 0; i < NR_LIGHT_SPACE; i++) {
    if (fs_in.v_FragPosClipSpaceZ <= u_CascadeEndClipSpace[i])
    {
        float fade = fadedShadowStrength(fs_in.v_FragPosClipSpaceZ, 1.0 / u_CascadeEndClipSpace[i], 1.0 / u_CascadeBlend);
        if (fade < 1.0) {
            int x = int(mod(gl_FragCoord.x, 8));
            int y = int(mod(gl_FragCoord.y, 8));
            float threshold = (indexMatrix8x8[(x + y * 8)] + 1) / 64.0;
            if (fade < threshold)
            {
                shadow = isInShadow(fs_in.v_FragPosLightSpace[i + 1], normal, lightDirection, i + 1) * u_ShadowStrength;
            }
            else
            {
                shadow = isInShadow(fs_in.v_FragPosLightSpace[i], normal, lightDirection, i) * u_ShadowStrength;
            }
        }
        else
        {
            shadow = isInShadow(fs_in.v_FragPosLightSpace[i], normal, lightDirection, i) * u_ShadowStrength;
        }
        break;
    }
}
First I check whether we're close to the cascade split with a fading factor, taking into account the fragment's clip-space position and the end of the cascade in clip space, via fadedShadowStrength (I use this function for normal blending between cascades to know when to start blending; basically, if the blending factor u_CascadeBlend is set to 0.1, for example, then we blend once we're at least 90% of the way into the current cascade, clip-space-z-wise).
Then, if we need to fade (if (fade < 1.0)), I just compare the fade factor to the threshold from the matrix and choose the shadow map accordingly.
Results:

Why a String drawn on a Graphics object change its position depending on the used skin?

If I draw a String onto a Graphics (from a mutable image) at a specific position, why does the String's position move (on the Y axis) depending on the simulator skin that is used?
public void paints(Graphics g, Image background, Image watermark, int width, int height) {
    g.drawImage(background, 0, 0);
    g.drawImage(watermark, 0, 0);
    g.setColor(0xFF0000);
    // Upper left corner
    g.fillRect(0, 0, 10, 10);
    // Lower right corner
    g.setColor(0x00FF00);
    g.fillRect(width - 10, height - 10, 10, 10);
    g.setColor(0xFF0000);
    Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
    g.setFont(f);
    // Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
    g.drawString("HelloWorld",
            (int) (848),
            (int) (610));
}
This is the way I save a screenshot programmatically with Codename One:
Image screenshot = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
f.revalidate();
f.setVisible(true);
drawing.paintComponent(screenshot.getGraphics(), true);
String imageFile = FileSystemStorage.getInstance().getAppHomePath() + "screenshot.png";
try(OutputStream os = FileSystemStorage.getInstance().openOutputStream(imageFile)) {
    ImageIO.getImageIO().save(screenshot, os, ImageIO.FORMAT_PNG, 1);
} catch(IOException err) {
    err.printStackTrace();
}
And here is the result with the iPhone 6 skin :
And with the Xoom skin :
Thanks a lot to anyone who could give me hints on how to solve this problem and always draw the String at the same position regardless of the skin (and device) used!
Regards,
Is the "drawing" component on the screen?
If so, it will be sized differently based on the specific device you are running on, as each device has different densities/resolutions. So elements on the screen will appear in different positions, which is normally what we want.

SDL and c++ -- More efficient way of leaving a trail behind the player?

So I'm fairly new to SDL, and I'm trying to make a little snowboarding game. When the player is moving down the hill, I want to leave a trail of off-colored snow behind him. Currently, the way I have this working is that I have an array (with 1000 elements) that stores the player's last positions. Then each frame, I have a for loop that loops 1000 times to draw the trail texture at all of these last 1000 positions of the player...
I feel this is extremely inefficient, and I'm looking for some better alternatives!
The Code:
void Player::draw()
{
    if (posIndex >= 1000)
    {
        posIndex = 0;
    }
    for (int i = 0; i < 1000; i++) // Loop through all the 1000 past positions of the player
    {
        // pastPlayerPos is an array of SDL_Rects that stores the player's last 1000 positions
        // This line calculates the location to draw the trail texture
        SDL_Rect trailRect = {pastPlayerPos[i].x, pastPlayerPos[i].y, 32, 8};
        // This draws the trail texture
        SDL_RenderCopy(Renderer, Images[IMAGE_TRAIL], NULL, &trailRect);
    }
    // This draws the player
    SDL_Rect drawRect = {(int)x, (int)y, 32, 32};
    SDL_RenderCopy(Renderer, Images[0], NULL, &drawRect);
    // This stores the past position
    SDL_Rect tempRect = {(int)x, (int)y, 0, 0};
    pastPlayerPos[posIndex] = tempRect;
    posIndex++; // This is to cycle through the array to store the new position
}
This is the result, which is exactly what I'm trying to accomplish, but I'm just looking for a more efficient way. If there isn't one, I will stick with this.
There are multiple solutions. I'll give you two.
1.
Create a screen-sized surface and fill it with alpha. On each player move, draw the player's current position into this surface, so each movement adds extra data to this would-be mask. Then blit this surface onto the screen (beware of blit order). In your case it could be improved by disabling alpha, initially filling the surface with white, and blitting it first, before anything else. With that approach you can skip clearing the screen after the flip, by the way.
I recommend starting with this one.
2.
Not an easy one, but it may be more efficient (it depends). Save only the points where the player actually changed movement direction, then draw a chain of lines between these points. There are, however, no builtin functions in SDL to draw lines; maybe there are in SDL_gfx, I never tried it. This approach may be better if you'll use an OpenGL backend later on; with plain SDL (or any other ordinary 2D drawing library) it's not too useful.

Drawing points of handwritten stroke using DrawEllipse (GDI+)

I'm working on an application that draws handwritten strokes. Strokes are internally stored as vectors of points and they can be transformed into std::vector<Gdiplus::Point>. The points are so close to each other that simply drawing each point should result in an image of a continuous stroke.
I'm using Graphics.DrawEllipse (GDI+) method to draw these points. Here's the code:
// prepare bitmap:
Bitmap *bitmap = new Gdiplus::Bitmap(w, h, PixelFormat32bppRGB);
Graphics graphics(bitmap);
// draw the white background:
SolidBrush myBrush(Color::White);
graphics.FillRectangle(&myBrush, 0, 0, w, h);
Pen blackPen(Color::Black);
blackPen.SetWidth(1.4f);
// draw stroke:
std::vector<Gdiplus::Point> stroke = getStroke();
for (UINT i = 0; i < stroke.size(); ++i)
{
    // draw point:
    graphics.DrawEllipse(&blackPen, stroke[i].X, stroke[i].Y, 2, 2);
}
At the end I just save this bitmap as a PNG image and sometimes the following problem occurs:
When I saw this "hole" in my stroke, I decided to draw my points again, but this time using an ellipse with width and height set to 1 and a redPen with width set to 0.1f. So right after the code above I added the following code:
Pen redPen(Color::Red);
redPen.SetWidth(0.1f);
for (UINT i = 0; i < stroke.size(); ++i)
{
    // draw point:
    graphics.DrawEllipse(&redPen, stroke[i].X, stroke[i].Y, 1, 1);
}
And the new stroke I've got looked like this:
When I use Graphics.DrawRectangle instead of DrawEllipse while drawing this new red stroke, it never happens that this stroke (drawn by drawing rectangles) would have different width or holes in it:
I can't think of any possible reason why drawing circles would result in this weird behaviour. How come the stroke is always continuous and never deformed in any way when I use Graphics.DrawRectangle?
Could anyone explain, what's going on here? Am I missing something?
By the way I'm using Windows XP (e.g. in case it's a known bug). Any help will be appreciated.
I made the wrong assumption that if I use Graphics.DrawEllipse to draw a circle with width and height of 2 px using a pen with a width of about 2 px, it would result in a filled circle with a diameter of about 4-5 px being drawn.
But I've found out that I actually can't rely on the width of the pen while drawing a circle this way. This method is meant only for drawing of border of this shape, thus for drawing filled ellipse it's much better to use Graphics.FillEllipse.
Another quite important fact to consider is that both of the mentioned functions take as parameters the coordinates of the "upper-left corner of the rectangle that specifies the boundaries of the ellipse", so I should subtract half of the width/height (i.e. the radius) from both coordinates to make sure the original coordinates specify the middle of the circle.
Here's the new code:
// draw the white background:
SolidBrush whiteBrush(Color::White);
graphics.FillRectangle(&whiteBrush, 0, 0, w, h);
// draw stroke (Fill* functions take a Brush, not a Pen):
SolidBrush blackBrush(Color::Black);
std::vector<Gdiplus::Point> stroke = getStroke();
for (UINT i = 0; i < stroke.size(); ++i)
    graphics.FillEllipse(&blackBrush, stroke[i].X - 2, stroke[i].Y - 2, 4, 4);
// draw original points:
SolidBrush redBrush(Color::Red);
std::vector<Gdiplus::Point> origStroke = getOriginalStroke();
for (UINT i = 0; i < origStroke.size(); ++i)
    graphics.FillRectangle(&redBrush, origStroke[i].X, origStroke[i].Y, 1, 1);
which yields following result:
So in case someone faces the same problem I did, the solution is above.

Removal of OpenGL rubber banding artefacts

I'm working with some OpenGL code for scientific visualization and I'm having issues getting its rubber banding working on newer hardware. The code is drawing a "Zoom Window" over an existing scene with one corner of the "Zoom Window" at the stored left-click location, and the other under the mouse as it is moved. On the second left-click the scene zooms into the selected window.
The symptoms I am seeing as the mouse is moved across the scene are:
Rubber-banding artefacts appear: the lines used to create the "Zoom Window" are not removed from the buffer by the second "RenderLogic" pass (see code below)
I can clearly see the contents of the previous buffer flashing up and disappearing as the buffers are swapped
The above problem doesn't happen on low-end hardware such as the integrated graphics on a netbook I have. Also, I can't recall this problem occurring ~5 years ago when this code was written.
Here are the relevant code sections, trimmed down to focus on the relevant OpenGL:
// Called by every mouse move event
// Makes use of current device context (m_hDC) and rendering context (m_hRC)
void MyViewClass::DrawLogic()
{
    BOOL bSwapRv = FALSE;
    // Make the rendering context current
    if (!wglMakeCurrent(m_hDC, m_hRC))
    {
        // ... error handling
    }
    // Perform the logic rendering
    glLogicOp(GL_XOR);
    glEnable(GL_COLOR_LOGIC_OP);
    // Draws the rectangle on the buffer using the XOR op
    RenderLogic();
    bSwapRv = ::SwapBuffers(m_hDC);
    // Removes the rectangle from the buffer via a second pass
    RenderLogic();
    glDisable(GL_COLOR_LOGIC_OP);
    // Release the rendering context
    if (!wglMakeCurrent(NULL, NULL))
    {
        // ... error handling
    }
}
void MyViewClass::RenderLogic(void)
{
    glLineWidth(1.0f);
    glColor3f(0.6f, 0.6f, 0.6f);
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(1, 0x0F0F);
    glBegin(GL_LINE_LOOP);
    // Uses custom "Point" class with Coords() method returning double*
    // Draw rectangle with corners at clicked location and current location
    glVertex2dv(m_pntClickLoc.Coords());
    glVertex2d(m_pntCurrLoc.X(), m_pntClickLoc.Y());
    glVertex2dv(m_pntCurrLoc.Coords());
    glVertex2d(m_pntClickLoc.X(), m_pntCurrLoc.Y());
    glEnd();
    glDisable(GL_LINE_STIPPLE);
}
// Setup code that might be relevant to the buffer configuration
bool MyViewClass::SetupPixelFormat()
{
    PIXELFORMATDESCRIPTOR pfd = {
        sizeof(PIXELFORMATDESCRIPTOR),
        1,                     // Version number
        PFD_DRAW_TO_WINDOW     // Format must support window
        | PFD_SUPPORT_OPENGL   // Format must support OpenGL
        | PFD_DOUBLEBUFFER,    // Must support double buffering
        PFD_TYPE_RGBA,         // Request an RGBA format
        32,                    // Select a 32 bit colour depth
        0, 0, 0, 0, 0, 0,      // Colour bits ignored
        8,                     // Alpha buffer bits
        0,                     // Shift bit ignored
        0,                     // No accumulation buffer
        0, 0, 0, 0,            // Accumulation bits ignored
        16,                    // 16 bit Z-buffer
        0,                     // No stencil buffer
        0,                     // No auxiliary buffers
        PFD_MAIN_PLANE,        // Main drawing layer
        0,                     // Number of overlay and underlay planes
        0, 0, 0                // Layer masks ignored
    };
    PIXELFORMATDESCRIPTOR chosen_pfd;
    memset(&chosen_pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
    chosen_pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
    // Find the closest match to the pixel format
    m_uPixelFormat = ::ChoosePixelFormat(m_hDC, &pfd);
    // Make sure a pixel format could be found
    if (!m_uPixelFormat)
        return false;
    ::DescribePixelFormat(m_hDC, m_uPixelFormat, sizeof(PIXELFORMATDESCRIPTOR), &chosen_pfd);
    // Set the pixel format for the view
    ::SetPixelFormat(m_hDC, m_uPixelFormat, &chosen_pfd);
    return true;
}
Any pointers on how to remove the artefacts will be much appreciated.
#Krom - image below
With OpenGL it's canonical to redraw the whole viewport if anything changes. Consider this: modern systems animate complex scenes at well over 30 FPS.
But I understand that you may want to avoid this. The usual approach is to first copy the frontbuffer into a texture before drawing the first rubber band. Then, for each rubber-band redraw, "reset" the image by drawing a viewport-filling quad with that texture.
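A sketch of that approach in legacy OpenGL (this is a fragment, not a complete program: it assumes a current context, an identity modelview/projection, and a backdropTex texture already allocated at window size with glTexImage2D):

```cpp
// One-time, at the first left-click: snapshot the rendered scene.
glBindTexture(GL_TEXTURE_2D, backdropTex);
glReadBuffer(GL_FRONT);                    // scene is on the front buffer
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

// On every mouse move: restore the snapshot, then draw the band on top.
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);                         // viewport-filling quad
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);
RenderLogic();                             // rubber band drawn normally, no XOR
::SwapBuffers(m_hDC);
```

Since the backdrop is fully redrawn each frame, no second XOR pass is needed, which removes the dependence on exact buffer contents that causes the artefacts.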
I know I'm posting to a year-and-a-half-old question, but in case anyone else comes across this:
I've had this happen to me myself. It's because you are trying to remove the lines from the wrong buffer. For example, you draw your rectangle on buffer A, call SwapBuffers, and then try to remove the rectangle from buffer B. What you want to do is keep track of two "zoom window" rectangles while you're drawing, one for buffer A and one for buffer B, and keep track of which one is the most recent.
If you're using Vista/7 and Aero, try switching to the Classic theme.