OpenGL: Current raster position stays invalid (freaky error, makes no sense)

I am getting a rather odd error while displaying bitmap texts on screen and using the glRasterPos3f function to position them. When I start the application, all my texts fit inside the view and everything works fine and dandy. However, things start to get really messed up as soon as one of them gets outside the view - all my texts disappear and won't reappear ever again, not even if I set the view back to its original position.
I did some investigation and made an explicit check of the raster position validity flag, like this:
glRasterPos3f(xPos, yPos, zPos);
// check raster position validity
GLboolean valid;
pin_ptr<GLboolean> p_valid = &valid;
glGetBooleanv(GL_CURRENT_RASTER_POSITION_VALID, p_valid);
if (!valid)
return;
Well, this is where things really start to boggle my mind - the condition at the end of this code gets triggered not just when a text position goes outside the view, but on every frame from then on! It drives me to despair. Even if I restore the view to its usual working state, the validity bit appears to stay cleared forever. Any ideas on what might be the cause of this, or maybe how to restore the raster position manually somehow?
EDIT: Some pics...
Initial state, all is well:
http://i.imgur.com/SFGU4QI.png
I zoom in, some raster position gets invalid, texts disappear:
http://i.imgur.com/cj2xVAs.png
When I zoom back out again, there still aren't any texts...

All OpenGL raster operations are discarded if the position passed to glRasterPos is transformed to a point outside the clip volume. So if your text starts out at a position outside the visible viewport, it won't show up. And if the text extends beyond the visible viewport, everything after the last visible character will get messed up.
Which means glRasterPos is rather useless. Its use is strongly discouraged, as are all OpenGL raster operations; in fact, they have been removed entirely from modern (core profile) OpenGL versions.
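If you have to stay on the fixed-function raster path for now, a common workaround (just a sketch, not something from the question) is to set the raster position at a point that is guaranteed to pass the clip test and then shift it with a null glBitmap call, which moves the raster position without re-running the validity test. On OpenGL 1.4+ you can also use glWindowPos2f, which takes window coordinates and always leaves the raster position valid. The offsets below are hypothetical names for illustration:
// xOffsetPixels / yOffsetPixels: window-space offsets from the safe anchor
// point to where the text should actually start (possibly off-screen).
glRasterPos3f(0.0f, 0.0f, 0.0f);               // some point known to be inside the view
glBitmap(0, 0, 0.0f, 0.0f,                     // empty bitmap: draws nothing,
         xOffsetPixels, yOffsetPixels, NULL);  // just moves the raster position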

Related

How to detect if an image contains only white color with C++

We are writing a piece of software which downloads tiles from the internet from WMS servers (these are map servers, and they provide images as map data for various locations on the globe) and then displays them inside a window, using Qt and some OpenGL bindings.
Some of these servers contain data only for specific regions of the planet, and if you request an area outside of what they support, they give you just a blank white image, which we do not want to use since it occupies extra space. So the question is:
How to identify whether an image contains only 1 color (white), or not.
What we have tried till now is the following:
Create a QImage, loop over every pixel of it, see if it differs from white. This is extremely slow, and since we want this to be a more or less realtime application, this idea sadly does not work.
Check if the image size is the same as an empty image size, but this also does not work, since it might happen that:
There is another image with the same size which actually contains data
It might be that tiles which are over an ocean have just one color, a light blue, and we need those tiles.
Do some "post processing" of the downloaded images and remove them from the scene later, but this looks ugly from the user's perspective, with tiles just appearing and then disappearing...
Request transparent images from the WMS servers, but due to some OpenGL mishaps these images render as plain black, though only on some (mostly low-end) video cards.
Any idea, library to use, direction or even code is welcome, and we need a C++ solution, since our app is C++.
Edit for those suggesting to sample pixels only from a few points in the map:
[two example tile images]
The two images above (yes, the left image contains a very tiny piece of Norway in the corner) would be wrongly eliminated if we assumed the image is entirely white based on sampling only a few points, in case none of those points happened to touch any color other than white. Link to the second image: https://wms.geonorge.no/skwms1/wms.sjokartraster2?LAYERS=all&SRS=EPSG:900913&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&BBOX=-313086.067812500,9079495.966562500,0.000000000,9392582.034375001&WIDTH=256&HEIGHT=256&TRANSPARENT=false
The correct and most reliable way would be to uncompress the PNG bytes and check each pixel in a tight loop.
The most common reason an image-processing routine is "slow" is making a function call per pixel. So if you are calling QImage::pixel in a nested loop for each row/column, it will not have the performance you desire.
Instead, take advantage of the fact that QImage gives you raw image bytes via the scanLine method or the bits method:
Something like this might work:
const int height = qimage.height();
const int bytes_per_line = qimage.bytesPerLine();
// A reference scanline of 0xFF bytes: opaque white in 32-bit (A)RGB formats.
// (needs <vector> and <cstring>)
std::vector<unsigned char> white_row(bytes_per_line, 0xff);
bool allWhite = true;
for (int row = 0; allWhite && (row < height); row++)
{
    const unsigned char* row_data = qimage.constScanLine(row);
    allWhite = (memcmp(row_data, white_row.data(), bytes_per_line) == 0);
}
The above loop stops as soon as it finds a row containing a non-white byte, so non-blank tiles are rejected quickly.

How to get the cursor position in Windows?

I need to check a pixel's color, and I let the user pick the position on the screen.
I've tried both GetCursorPos() and GetPhysicalCursorPos(), but neither returns the correct value.
(I noticed that getting my screen resolution via GetSystemMetrics returns a wrong resolution as well: it reports 2560x1440, but it really is 3840x2160.)
So, when multiplying the result of GetCursorPos() by 1.5, I do get the correct cursor position for my monitor. However, the positions on my other two 1080p monitors are still wrong; they weren't correct before either.
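A hedged sketch, under the assumption that the 2560x1440 vs. 3840x2160 mismatch comes from 150% display scaling applied to a process that is not DPI-aware (so Windows virtualizes the coordinates): opting the process into per-monitor DPI awareness should make GetCursorPos and GetSystemMetrics report physical pixels on every monitor. Declaring this in the application manifest is preferred; the API call below (Windows 10 1703+) is the programmatic equivalent.
#include <windows.h>

int main()
{
    // Assumption: no DPI awareness is declared in the manifest yet.
    SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

    POINT pt;
    if (GetCursorPos(&pt))
    {
        // pt.x and pt.y are now in physical screen pixels.
    }
    return 0;
}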

Clips are not linking up end to end

It has been a long while since I last used After Effects.
Currently I am running into a problem: I have a few pieces of footage, and I've tried changing the frame rate etc., but it does not seem to help.
You can see in the image that the lower-bar clip does not start right at the end of the first clip, since the first clip ends slightly past the 07f mark.
As such, if I try to add in another clip, it can only start at the 07f or the 08f mark.
Is there any way I can make them 'link' up end to end?
Just hold the Shift key while dragging the second piece of footage.
And to easily change the frame rate you can use the Video Copilot frame rate converter.

C++ and Qt: Paint Program - Rendering Transparent Lines Without Alpha Joint Overlap

I have started to create a paint program that interacts with drawing tablets. Depending on the pressure of the pen on the tablet I change the alpha value of the line being drawn. That mechanism works.
Thin lines look decent and it looks like a real sketch. But since I am drawing lines between two points (like in the Qt scribble tutorial), there is an alpha overlap at the line joints, and it is very noticeable for thick strokes.
This is the effect with line-to-line conjunction:
As you can see, there is an ugly alpha blend between the line segments.
In order to solve this I decided to use a QPainterPath to render lines.
Two problems with this:
A long, continuous, thick path quickly lags the program.
Since the path is connected it acts as one, so any change to the alpha value affects the entire path (which I don't want, since I want to preserve the blending effect).
The following images use a QPainterPath.
The blend effect I want to keep.
The following image shows the 2nd problem which changes the alpha and thickness of the entire path
The red text should read: "if more pressure is added without removing the pen from the tablet surface the line thickens" (and alpha becomes opaque)
Another thing is that with this approach I can only get a blending trail from dark to light (or from a thick to a thin path width), but not from light to dark. I am not sure why this effect occurs, but my best guess is that it has to do with the line segments of the path being updated as a whole.
I did make the program increase/decrease alpha and line thickness based on the pressure of the pen on the tablet.
The problem is that I want to render lines without the alpha overlap and QPainterPath updates the entire path's alpha and thickness which I don't want.
This is the code that creates the path:
switch (event->type()) {
case QEvent::TabletPress:
    if (!onTablet) {
        onTablet = true;
        // empty for new segment
        freePainterPath();
        path = new QPainterPath(event->pos());
    }
    break;
case QEvent::TabletRelease:
    if (onTablet)
        onTablet = false;
    break;
case QEvent::TabletMove:
    if (path != NULL)
        path->lineTo(event->pos());
    if (onTablet) {
        // checks the pressure of the pen on the tablet to change alpha/line thickness
        brushEffect(event);
        QPainter painter(&pixmap);
        // renders the path
        paintPixmap(painter, event);
    }
    break;
default:
    break;
}
update();
The desired effect that I want as a single path (image created with Krita paint program):
To emulate the Krita paint program:
Keep a backup of the original target surface.
Paint with your brush onto a scratch surface that starts out completely transparent.
On that surface, your compositing rule is "take maximum opacity".
Keep track of the dirty regions of that surface, and do a traditional composite of (scratch surface) onto (original target surface) and display the result. Make sure this operation doesn't damage the original target surface.
Now, you don't have to keep the entire original target surface -- just the parts you have drawn on with this tool. (A good tile based lazy-write imaging system will make this easy).
Depending on the segment size you are drawing with, you may want to interpolate between segments to make the strength of the brush be a bit less sharp. The shape of your brush may also need work. But these are independent of the transparency problem.
As for the Qt strangeness, I don't know enough Qt to tell you how to deal with the quirks of Qt's brush code. But the above "key-mask" strategy should solve your alpha overlap problem.
I do not know how to do this in Qt. Glancing at the Qt compositing modes I don't see an obvious way to say "take maximum" as the resulting alpha. Maybe something involving both color and alpha channels in some clever way.
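For what it's worth, one way to approximate that "take maximum" rule without a built-in composition mode is to merge each freshly painted segment into the scratch layer by hand. A rough sketch, assuming both images use QImage::Format_ARGB32_Premultiplied and the brush is a single colour (the helper name mergeMax and the dirty-rect handling are made up for illustration):
#include <QImage>
#include <QColor>
#include <algorithm>

// Merge a freshly painted segment into the scratch layer, keeping whichever
// pixel is more opaque. With premultiplied alpha and a single brush colour,
// a per-channel max behaves like a max-opacity rule. `dirty` is assumed to
// lie inside both images.
void mergeMax(QImage &scratch, const QImage &segment, const QRect &dirty)
{
    for (int y = dirty.top(); y <= dirty.bottom(); ++y) {
        QRgb *dst = reinterpret_cast<QRgb *>(scratch.scanLine(y));
        const QRgb *src = reinterpret_cast<const QRgb *>(segment.constScanLine(y));
        for (int x = dirty.left(); x <= dirty.right(); ++x) {
            dst[x] = qRgba(std::max(qRed(src[x]),   qRed(dst[x])),
                           std::max(qGreen(src[x]), qGreen(dst[x])),
                           std::max(qBlue(src[x]),  qBlue(dst[x])),
                           std::max(qAlpha(src[x]), qAlpha(dst[x])));
        }
    }
}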
I know this question is very old, and has an accepted answer, but in case someone else needs the answer, here it is:
You need to set the composition mode of the painter to Source; by default it blends source and destination (SourceOver).
painter.setCompositionMode(QPainter::CompositionMode_Source);
If you want your transparent areas to show through underlying drawings, you need to set the composition mode of your result back to CompositionMode_SourceOver and draw over destination.
I don't know if you still look for an answer, but I hope this helps someone.
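For illustration, a minimal sketch of that two-pass idea; the names strokeLayer, canvas, pen, lastPos and currentPos are placeholders, not taken from the question's code, and strokeLayer is assumed to be a transparent QImage (Format_ARGB32_Premultiplied):
// Pass 1: draw the segment onto the stroke layer, replacing pixels outright
// so overlapping joints don't accumulate alpha.
QPainter layerPainter(&strokeLayer);
layerPainter.setCompositionMode(QPainter::CompositionMode_Source);
layerPainter.setPen(pen); // pen carries the pressure-based alpha/width
layerPainter.drawLine(lastPos, currentPos);
layerPainter.end();

// Pass 2: blend the whole layer over the canvas with a normal alpha blend.
QPainter canvasPainter(&canvas);
canvasPainter.setCompositionMode(QPainter::CompositionMode_SourceOver);
canvasPainter.drawImage(0, 0, strokeLayer);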

changing textureRect of a CCSprite created by CCRenderTexture

I have a CCSprite which gradually needs to be exhausted linearly from one end, let's say from left to right. For this purpose, I am trying to change the textureRect property of the sprite so that the part that got exhausted is 'outside' the displayed frame of the sprite.
I did this sort of thing before with a sprite loaded from a spritesheet, and it worked perfectly. But I created this CCSprite using CCRenderTexture, and when I change the textureRect property, the entire sprite disappears.
The first image is the original CCSprite which I get from CCRenderTexture. The second image shows what I want to achieve. The black dotted rectangular portion of the sprite needs to be omitted; only the blue dotted portion of the sprite needs to be displayed. Essentially, this blue dotted rectangle is my textureRect.
Is there any way I could make my sprite shrink from one end?
Also, is there any difference between a sprite created normally and one created using CCRenderTexture?
I have done a similar thing before using a low-level hack.
There is a workaround if you use CCProgressTimer; that's very easy, and I think it should be enough for your example.
But you said in a comment that you have some special requirements like "exhaust it from both ends at once", so some low-level hacking is needed. My solution from my last project was:
1) Get the texture image's raw data. In cocos2d you can use CCRenderTexture and in cocos2d-x you can use CCImage.
2) CCRenderTexture has a method - (BOOL) saveToFile:(NSString *)name format:(tCCImageFormat)format. You can read its source code and then try to save into a 2D array instead, like byte raw[1024][768]. Each element in this array represents one pixel of your picture (the type may not be byte, I'm not sure, I nearly forget the details). The format MUST BE PNG, since transparency will be needed.
3) Modify the raw data directly: set the transparency of the pixels you want to disappear to 0x0.
4) Re-initialize a CCRenderTexture using the picture data you modified.
I can't provide the code directly since it is a trade secret and a core part of one of my projects, but I can share my approach. You will also need some knowledge about how the PNG format works. Read:
https://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header
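The withheld code stays withheld, but as a rough sketch of what step 3 boils down to (assuming a plain RGBA8888 buffer, one byte per channel; how you obtain the buffer and feed it back into a texture depends on your cocos2d version, and the helper name eraseRightOf is made up):
// Clear the alpha of every pixel at or to the right of cutoffX so that part
// of the sprite disappears. raw points to width * height * 4 bytes (RGBA).
void eraseRightOf(unsigned char *raw, int width, int height, int cutoffX)
{
    for (int y = 0; y < height; ++y) {
        for (int x = cutoffX; x < width; ++x) {
            raw[(y * width + x) * 4 + 3] = 0x00; // alpha byte of pixel (x, y)
        }
    }
}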
Turns out I was making a silly mistake. While supplying values to the textureRect (a CGRect), I was actually setting textureRect.origin.y to the height of the texture, which made my textureRect go beyond (above) the texture area. This explains why the sprite was disappearing.