Draw shape-shifting objects on the (desktop) screen - C++

I'm currently developing a program to show and control animated sprites on the desktop screen. My problem now is actually drawing them onto the screen. The user should still be able to access other applications, as long as a sprite does not obstruct them.
My attempts are below, and I hope someone can point me in the right direction. I don't really care which library I need to use, as long as the performance is good enough for something around 20-30 animated sprites.
My attempts so far:
My first attempt was with Qt. I used a QWidget with a QLabel in it to show the pixmap of an object. The pixmap itself had an alpha channel, and I used QWidget's setMask(pixmap.mask()) method to remove anything I don't want to show. But this method can't be used for rapidly shifting shapes, like moving creatures: if setMask is called every 50-100 ms to change the mask to the next movement phase, the CPU load gets too high with a lot of creatures moving at the same time.
My second attempt was to use one QWidget for all creatures. This way setMask is called only once, instead of once for every creature. It's possible to move more creatures this way, but the screen flickers like hell when moving the mouse pointer over the creatures.
My third attempt used the XShape functions from Xlib to change the shape of each creature, but the performance is not much better than setMask.
I also tried transparency in Qt, but if I use a QWidget over the whole screen, the CPU load of X gets really high while moving the mouse. I don't know if I can do something better here.

Create a QGLWidget and learn to use the OpenGL API to draw sprites within it, even if only using glDrawPixels rather than texture objects.
You certainly won't have any problems drawing a few tens of sprites, and the time spent learning OpenGL will be a good investment if you aspire to do more complex graphical things in future.
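A minimal sketch of that approach (Qt 4's QGLWidget; the texture file name and sprite position are placeholders, and a real program would loop over all creatures in one paintGL pass):

#include <QGLWidget>
#include <QTimer>
#include <QImage>

// One textured quad per sprite, repainted ~30 times per second.
class SpriteCanvas : public QGLWidget {
public:
    SpriteCanvas(QWidget *parent = 0)
        : QGLWidget(parent), texture(0), x(0), y(0) {
        QTimer *timer = new QTimer(this);
        // updateGL() is an inherited slot, so no Q_OBJECT is needed here
        connect(timer, SIGNAL(timeout()), this, SLOT(updateGL()));
        timer->start(33);   // ~30 fps
    }

protected:
    void initializeGL() {
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);                                 // honor alpha
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        // "creature.png" is a placeholder for your sprite image
        texture = bindTexture(QImage("creature.png"));
    }

    void resizeGL(int w, int h) {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, w, h, 0, -1, 1);   // pixel coordinates, y down
        glMatrixMode(GL_MODELVIEW);
    }

    void paintGL() {
        glClear(GL_COLOR_BUFFER_BIT);
        glBindTexture(GL_TEXTURE_2D, texture);
        x += 1;                        // placeholder animation step
        glBegin(GL_QUADS);             // loop over all creatures here
        glTexCoord2f(0, 0); glVertex2f(x,      y);
        glTexCoord2f(1, 0); glVertex2f(x + 64, y);
        glTexCoord2f(1, 1); glVertex2f(x + 64, y + 64);
        glTexCoord2f(0, 1); glVertex2f(x,      y + 64);
        glEnd();
    }

private:
    GLuint texture;
    float x, y;   // sprite position
};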

Not sure if this is your language, but ESheep is on GitHub and could get you started: https://github.com/Adrianotiger/desktopPet

Related

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post the code here because it has grown huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one at the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is painted only when it needs repainting (in other words, only when its state has changed), etc.
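In condensed form, the paint path looks roughly like this (a sketch, not my real code; GdiplusStartup and error handling are omitted, and DrawControls is a placeholder):

#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

static Bitmap *gBackBuffer = NULL;   // offscreen bitmap, client-area sized

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_SIZE:
        // recreate the back buffer at the new client size
        delete gBackBuffer;
        gBackBuffer = new Bitmap(LOWORD(lParam), HIWORD(lParam));
        return 0;

    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hWnd, &ps);
        if (gBackBuffer) {
            {   // scope: finish offscreen drawing before the blit
                Graphics offscreen(gBackBuffer);
                offscreen.Clear(Color(255, 240, 240, 240)); // background
                // DrawControls(offscreen);  // placeholder: paint controls
            }
            Graphics screen(hdc);
            screen.DrawImage(gBackBuffer, 0, 0);  // copy to front buffer
        }
        EndPaint(hWnd, &ps);
        return 0;
    }
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}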
The system is working fine, with one exception: while the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done, another image drawing starts up.
Now you may think this is a common artifact that happens in many other applications, since resizing the back buffer is not always as fast as resizing the window. But I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (it's usually just white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant, fullscreen-sized offscreen surface could minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
I also wonder how frameworks such as Qt render a windowless GUI so smoothly; even if you resize a complex Qt GUI window very fast, only negligible artifacts appear. As far as I know, Qt can use OpenGL for GUI rendering, but that is only a secondary option.
If I use DirectX, then real-time resizing is even harder. OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control. I don't know of a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
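If so, the usual fix is to claim the erase yourself so the default handler never paints the class background brush first (a sketch; alternatively, register the window class with hbrBackground set to NULL):

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_ERASEBKGND:
        // Report the background as already erased. WM_PAINT repaints
        // every pixel from the back buffer anyway, so letting
        // DefWindowProc fill the client area first just causes flicker.
        return 1;
    // ... WM_PAINT, WM_SIZE, etc. as before ...
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}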
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.

Are offscreen animations ignored by rendering and CPU?

Just wondering how Cocos manages CPU cycles and the graphics engine for CCSprites that are offscreen, including those in the middle of an animation. If you have many animated sprites going on and off the screen, I could check and stop each animation when it's off the screen, then restart it when it is about to come back on, but I'm wondering if this is necessary?
Suppose you had a layer with a bunch of them and you make the layer invisible, but don't stop the sprite animations. Will they still use CPU time?
I just did a quick test (good question :) ) in a game where I can slide the screen over a large map that contains images of soldiers performing an 'idle' animation. They continue running when off-screen (I tacked a CCCallFunc onto a sequence inside a repeat-forever, calling a simple selector that logs).
I suspect they would also run when the object is not visible. It kind of makes sense, especially for animations. In my use case, if the animation were stopped, it could cause a cognitive disconnect when the user slides the soldier in and out of view, especially when the soldier is walking on the map: he could actually walk into view without the user having interacted with the screen at all.
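That said, if profiling ever shows off-screen actions costing real CPU time, the manual check described in the question is short. A sketch, written against the cocos2d-x 2.x C++ API (the Objective-C API has equivalently named methods); updateSpriteActivity is a hypothetical helper you would call from your scroll handler:

#include "cocos2d.h"
USING_NS_CC;

// Pause a sprite's actions/schedulers while it is outside the visible
// rect, and resume them when it comes back.
void updateSpriteActivity(CCSprite *sprite, const CCRect &visibleRect)
{
    // boundingBox() is expressed in the parent's coordinate space, so
    // visibleRect must be in that same space (offset by the map scroll).
    if (visibleRect.intersectsRect(sprite->boundingBox()))
        sprite->resumeSchedulerAndActions();  // bring the animation back
    else
        sprite->pauseSchedulerAndActions();   // stop ticking off-screen
}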

Cocos2D - Large Image

Is it possible to use a large image in Cocos2D, and allow, via swiping or pinching, for the user to zoom in and out?
I see from this post that the max resolution for a Cocos2D image is 2048x2048. That is obviously larger than a device viewport, so I want the user to be able to move around the image.
I'm not creating a game; I'm making a sort of interactive biological cell that will allow the user to tap arbitrary organelles and see a popup of information about them.
Here is an idea of what the image will be, and obviously cramming the whole thing into a device viewport is not possible.
So really, before I delve too deep into this project, I'm just curious whether it is possible to use a large image that the user can arbitrarily move around, and whether I can detect organelle touches, perhaps via CCSprites?
I recommend subclassing CCSprite and using your large image as the class's image. CCSprites certainly can detect touches by simply adding the basic CCTouchDispatcher delegate to the sprite's class:
[[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self priority:-1 swallowsTouches:YES];
Then also add this method to your CCSprite subclass:
-(BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
You can do anything you want with the touches at this point, scroll or whatever suits your needs.
You could break your image up into multiple sprites and use a CCLayer to manage touches instead; it just depends on whether you really need your image to be that large, or whether the limits for a single image are enough to work with, considering they are pretty large too. My method here is a lot less complicated than that.
The max texture size is limited by OpenGL ES, not just cocos2d, and it varies by device. However, you can load the image into more than one texture and then position and move those textures around the screen. So you really can have the appearance of an image of any size you like, but programmatically you will have to manage the different sprites (tiles) of the image.
CCSprites don't detect touches by themselves. CCLayers get the touch events; you can then do a hit test to see whether a touch hits a given CCSprite.

Tiling with QGraphicsScene and QGraphicsView

I'm creating an image visualizer in Qt that opens large images (2 GB+).
I'm doing this by breaking the large image into several 512x512 tiles. I then create a QGraphicsScene of the original image's size and use addPixmap to add each tile onto the scene. So to the end user it ultimately looks like one huge image, when in fact it is a contiguous array of smaller images stuck together on the scene. First off, is this a good approach?
Loading all the tiles onto the scene takes up a lot of memory, so I'm thinking of only loading the tiles that are visible in the view. I've already managed to subclass QGraphicsScene and override its drag event, enabling me to know which tiles need to be loaded next based on movement. My problem is tracking movement of the scrollbars. Is there any way I can create an event that gets called every time a scrollbar moves? Subclassing QGraphicsView is not an option.
QGraphicsScene is smart enough not to render what isn't visible, so here's what you need to do:
Instead of loading and adding pixmaps, add classes that wrap the pixmap and only load it when they are first rendered. (Computer scientists like to call this a "proxy pattern".) You could then unload the pixmap based on a timer. (It would be transparently re-loaded if unloaded too soon.) You could even notify this proxy of the current zoom level, so that it loads lower-resolution images when they will be rendered smaller.
Edit: here's some code to get you started. Note that everything QGraphicsScene draws is a QGraphicsItem (if you call ::addPixmap, it's converted to a QGraphicsPixmapItem behind the scenes), so that's what you want to subclass:
(I haven't even compiled this, so "caveat lector", but it's doing the right thing ;)
class MyPixmap : public QGraphicsItem {
public:
    // make sure to set `item` to nullptr in the constructor
    MyPixmap()
        : QGraphicsItem(...), item(nullptr) {
    }

    // you will need to add a destructor
    // (and probably a copy constructor and assignment operator)

    QRectF boundingRect() const {
        // return the size
        return QRectF( ... );
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option,
               QWidget *widget) {
        if (nullptr == item) {
            // load item:
            item = new QGraphicsPixmapItem( ... );
        }
        item->paint(painter, option, widget);
    }

private:
    // you'll probably want to store information about where you're
    // going to load the pixmap from, too
    QGraphicsPixmapItem *item;
};
then you can add your pixmaps to the QGraphicsScene using QGraphicsScene::addItem(...)
Although an answer has already been chosen, I'd like to express my opinion.
I don't like the selected answer, especially because of the usage of timers. A timer to unload the pixmaps? Say the user actually wants to take a good look at the image, and after a couple of seconds - bam, the image is unloaded, and he has to do something for the image to reappear. Or maybe you add another timer that loads the pixmaps after another couple of seconds? Or you check among your thousands of items to see which are visible? Not only is this very irritating and wrong, it also means your program will be using resources all the time. Say the user minimizes your program and plays a movie; he will wonder why on earth the movie freezes every couple of seconds...
Well, if I misunderstood the proposed idea of using timers, excuse me.
Actually, the idea that mmutz suggested is better. It reminded me of the Mandelbrot example. Take a look at it. Instead of calculating what to draw, you can rewrite that part to load the part of the image that you need to show.
In conclusion I will propose another solution using QGraphicsView in a much simpler way:
1) Check the size of the image without loading it (use QImageReader).
2) Make your scene's size equal to that of the image.
3) Instead of using pixmap items, reimplement the drawBackground() function. One of the parameters gives you the newly exposed rectangle, meaning that if the user scrolls just a little bit, you load and draw only that new part (to load only part of an image, use the setClipRect() and read() methods of the QImageReader class). If there are transformations, you can get them from the other parameter (the QPainter) and apply them to the image before you draw it.
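A minimal sketch of that idea (LazyImageScene and m_path are names I made up; it re-decodes the exposed rectangle on every paint, which is exactly the part you would move to a thread as suggested below):

#include <QGraphicsScene>
#include <QImageReader>
#include <QPainter>

// QGraphicsScene subclass that decodes only the exposed part of a big
// image on demand, instead of holding pixmap items for the whole thing.
class LazyImageScene : public QGraphicsScene {
public:
    explicit LazyImageScene(const QString &path) : m_path(path) {
        QImageReader reader(path);
        // steps 1 and 2: size the scene without loading the image
        setSceneRect(QRectF(QPointF(0, 0), reader.size()));
    }

protected:
    void drawBackground(QPainter *painter, const QRectF &exposed) {
        QImageReader reader(m_path);
        QRect wanted = exposed.toAlignedRect()
                           .intersected(QRect(QPoint(0, 0), reader.size()));
        if (wanted.isEmpty())
            return;
        reader.setClipRect(wanted);     // decode just the exposed rectangle
        QImage part = reader.read();
        if (!part.isNull())
            painter->drawImage(wanted.topLeft(), part);
    }

private:
    QString m_path;   // path to the big image on disk
};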
In my opinion the best solution would be to combine my solution with the threading shown in the Mandelbrot example.
The only problem I can think of now is the user zooming out by a big scale factor: then you will need a lot of resources for some time, to load and scale a huge image. I see now that there is a QImageReader function I haven't tried yet, setScaledSize(), which may do just what we need: if you set a scaled size and then load the image, maybe it won't load the entire image first - try it. Another way is simply to limit the scale factor, which you should do anyway if you stick with the pixmap-item method.
Hope this helps.
Unless you absolutely need the view to be a QGraphicsView (e.g. because you place other objects on top of the large background pixmap), I'd really recommend just subclassing QAbstractScrollArea and reimplementing scrollContentsBy() and paintEvent().
Add in an LRU cache of pixmaps (see QPixmapCache for inspiration, though that one is global), make paintEvent() pull used pixmaps to the front, and you're set.
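A skeleton of that approach (BigImageView and tileAt() are names I invented; tileAt() would load a tile from disk or front an LRU cache, and the exact scrollbar ranges would be adjusted in resizeEvent(), omitted here):

#include <QAbstractScrollArea>
#include <QPainter>
#include <QPaintEvent>
#include <QScrollBar>

class BigImageView : public QAbstractScrollArea {
public:
    explicit BigImageView(const QSize &imageSize, QWidget *parent = 0)
        : QAbstractScrollArea(parent) {
        horizontalScrollBar()->setRange(0, imageSize.width());
        verticalScrollBar()->setRange(0, imageSize.height());
    }

protected:
    void scrollContentsBy(int /*dx*/, int /*dy*/) {
        viewport()->update();   // repaint at the new scroll offset
    }

    void paintEvent(QPaintEvent *event) {
        // paint events for the viewport are delivered here
        QPainter p(viewport());
        QPoint offset(horizontalScrollBar()->value(),
                      verticalScrollBar()->value());
        // only touch the 512x512 tiles overlapping the dirty region
        QRect dirty = event->rect().translated(offset);
        for (int ty = dirty.top() / 512; ty <= dirty.bottom() / 512; ++ty)
            for (int tx = dirty.left() / 512; tx <= dirty.right() / 512; ++tx)
                p.drawPixmap(QPoint(tx * 512, ty * 512) - offset,
                             tileAt(tx, ty));
    }

private:
    QPixmap tileAt(int tx, int ty) {
        // placeholder: load tile (tx, ty) from disk, via an LRU cache
        QPixmap pm(512, 512);
        pm.fill(Qt::lightGray);
        return pm;
    }
};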
If this sounds like more work than the QGraphicsItem, believe me, it's not :)

Maximum Size of QPixmap/QImage Windows

I have a QGraphicsView for a very wide QGraphicsScene. I need to draw the background in drawBackground(), and the background is a bit complicated (a long loop), although it doesn't need to be repainted constantly. I store it in a static QPixmap (I tried QImage too) inside drawBackground(), and that pixmap is what I draw onto the view's painter. Only when needed is the QPixmap painted on again.
If I didn't use a static pixmap, the complicated background would be regenerated every time I scroll sideways, for example. The problem is that there is apparently a maximum width for pixmaps on Windows; on my computer it's 32770. I could store a list of pixmaps and draw them side by side, but it would make the code uglier, and I also don't know what the maximum pixmap width is on every Windows machine. Since this might be a well-known problem, I was wondering if anyone has a better solution.
Thanks.
You can probably avoid the Windows limit by using the unaccelerated raster paint engine, but 32770x1024 at 32 bits per pixel is about 128 MiB of pixmap; you probably don't want to do that even if Windows would let you.
You've already thought of the usual answer (tile it into more reasonably-sized chunks and load/generate them on demand). The other piece of the usual solution is to use something like QPixmapCache to keep the recently-used tiles around, so you don't regenerate them too often (only when the user scrolls a long way).
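A sketch of that caching step (renderTile() stands in for your long background loop, restricted to one tile; the 1024-pixel tile size is an arbitrary choice):

#include <QPainter>
#include <QPixmapCache>
#include <QString>

// Fetch one background tile, regenerating it only on a cache miss.
// QPixmapCache evicts least-recently-used entries automatically.
QPixmap backgroundTile(int tileIndex)
{
    QString key = QString("bg-tile-%1").arg(tileIndex);
    QPixmap tile;
    if (!QPixmapCache::find(key, &tile)) {
        tile = QPixmap(1024, 1024);
        tile.fill(Qt::transparent);
        QPainter p(&tile);
        // renderTile(p, tileIndex);   // placeholder: the complicated
        //                             // background, drawn for this tile only
        p.end();
        QPixmapCache::insert(key, tile);
    }
    return tile;
}

drawBackground() then only computes which tile indices intersect the exposed rect and draws those tiles side by side.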
You didn't say how complex your complex background is, but you might also want to look at the Mandelbrot set example for how to do piecewise rendering of an (infinitely) large background pixmap on-demand, without blocking the UI.
This is the common use case for the tiling pattern. Basically you split the background into small images.
I'm not sure why you think "it would make the code uglier". It is certainly not a one-liner, but depending on whether or not you have a fixed-size background image, the tiling code is usually pretty straightforward.