Pygame: slow performance using pygame.Surface and convert_alpha() - python-2.7

I'm developing a simple tile game that displays a grid image and paints it with successive layers of images. So I have:
list_of_image_tiles = { GRASS: pygame.image.load('/grass.png').convert_alpha(), TREES: pygame.image.load('/trees.png').convert_alpha(), etc}
Then later on I blit these:
DISPLAYSURF.blit(list_of_images[lists_of_stuff][TREES], (col*TILESIZE,row*TILESIZE))
DISPLAYSURF.blit(list_of_images[lists_of_stuff][GRASS], (col*TILESIZE,row*TILESIZE))
Note that for brevity I've not included a lot of code, but it does basically work. The problem is that performance is painfully slow: if I comment out the DISPLAYSURF blits, performance leaps forward, so I think I need a better way to do the DISPLAYSURF part, or possibly the pygame.image.load part (is convert_alpha() the best way, bearing in mind I need the layered-image approach?).
I read that something called Psyco might help, but I'm not sure how to fit that in. Any ideas on how to improve the performance are most welcome.

There are a couple of things you can do.
Perform the "multi-layer" blit just once to an intermediate surface, then blit that surface to the DISPLAYSURF every frame.
Identify the parts of the screen that need to be updated and use pygame.display.update(rectangle_list) instead of pygame.display.flip().
Edit: to add an example of point 1.
Note: you didn't give much of your code, so I just fit this to the way I do it.
# Build up the level surface once, when you enter a level.
level = Surface((LEVEL_WIDTH * TILESIZE, LEVEL_HEIGHT * TILESIZE))
for row in range(LEVEL_HEIGHT):
    for col in range(LEVEL_WIDTH):
        level.blit(list_of_images[lists_of_stuff][TREES], (col * TILESIZE, row * TILESIZE))
        level.blit(list_of_images[lists_of_stuff][GRASS], (col * TILESIZE, row * TILESIZE))
Then, in the main loop, during the draw phase:
# Blit only the part of the level that should be on the screen.
# 'view' is a Rect describing which tiles should currently be visible.
disp = DISPLAYSURF.get_rect()
level_area = Rect((view.left * TILESIZE, view.top * TILESIZE), disp.size)
DISPLAYSURF.blit(level, disp, area=level_area)

You should use a colorkey whenever you don't need per-pixel alpha. I changed all the convert_alpha() calls in my code to plain convert() and set a colorkey for the fully opaque parts of the image. Performance increased ten-fold!

Related

How to detect if an image contains only white color with C++

We are writing a piece of software which downloads tiles from WMS servers on the internet (these are map servers that provide images as map data for various locations on the globe) and then displays them inside a window, using Qt and some OpenGL bindings.
Some of these servers contain data only for specific regions of the planet, and if you request an area outside of what they support, they return just a blank white image, which we do not want to use since it occupies extra space. So the question is:
How do we identify whether an image contains only one color (white), or not?
What we have tried till now is the following:
Create a QImage and loop over every pixel of it to see if it differs from white. This is extremely slow, and since we want this to be a more or less real-time application, this idea sadly does not work.
Check if the image size is the same as an empty image size, but this also does not work, since it might happen that:
There is another image with the same size which actually contains data
It might be that tiles which are over an ocean have just one color, a light blue, and we need those tiles.
Do a "post processing" of the downloaded images and remove them from the scene later, but this looks ugly from the users' perspective that tiles are just appearing and disappearing ...
Request transparent images from the WMS servers, but due to some OpenGL mishappenings, when rendering, these images appear as black only on some (mostly low-end) video cards.
Any idea, library to use, direction or even code is welcome, and we need a C++ solution, since our app is C++.
Edit for those suggesting to sample pixels only from a few points in the map:
[two example tile images; see the link below for the second one]
The two images above (yes, the left image contains a very tiny piece of Norway in the corner) would be wrongly eliminated if we assumed the image is entirely white based on sampling only a few points, in case none of those points happened to touch any color other than white. Link to the second image: https://wms.geonorge.no/skwms1/wms.sjokartraster2?LAYERS=all&SRS=EPSG:900913&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&BBOX=-313086.067812500,9079495.966562500,0.000000000,9392582.034375001&WIDTH=256&HEIGHT=256&TRANSPARENT=false
The correct and most reliable way would be to uncompress the PNG bytes and check each pixel in a tight loop.
The most usual source of an image-processing routine being "slow" is invoking a function call per pixel. So if you are calling QImage::pixel in a nested loop for each row/column, it will not have the performance you desire.
Instead, take advantage of the fact that QImage gives you the raw image bytes via the scanLine method or the bits method.
Something like this might work:
#include <cstring> // std::memcmp
#include <vector>

const int bytes_per_line = qimage.bytesPerLine();
// A reference row of all-0xFF bytes; for RGB32/ARGB32 formats a fully
// white row compares equal to this byte-for-byte.
std::vector<unsigned char> white_row(bytes_per_line, 0xff);

bool allWhite = true;
for (int row = 0; allWhite && (row < qimage.height()); row++)
{
    // constScanLine avoids the deep copy the non-const overload may trigger
    const unsigned char* row_data = qimage.constScanLine(row);
    allWhite = !std::memcmp(row_data, white_row.data(), bytes_per_line);
}
The above loop terminates quickly, the moment a row containing a non-white pixel is encountered.

Is there any way to save the path and restore it in Cairo?

I have two signal-drawing graphs in a gtkmm application.
The problem comes when I have to paint a graph with many points (around 300-350k), each joined to the next by a line, since repainting all of the points on every iteration slows things down a lot.
bool DrawArea::on_draw(const Cairo::RefPtr<Cairo::Context>& c)
{
    cairo_t* cr = c->cobj();
    // xSignal.size() == ySignal.size() == 350000
    for (size_t j = 0; j < xSignal.size() - 1; ++j)
    {
        cairo_move_to(cr, xSignal[j], ySignal[j]);
        cairo_line_to(cr, xSignal[j + 1], ySignal[j + 1]);
    }
    cairo_stroke(cr);
    return true;
}
I know that cairo_stroke_preserve() exists, but I think it is not valid for me, because when I switch between graphs the preserved path disappears.
I've been searching the Cairo documentation for a way to save the path and restore it, but I don't see anything. Back in 2007 a Cairo user suggested the same thing for the project's 'to do' list, but apparently it has not been done.
Any suggestion?
It's not necessary that you draw everything in on_draw. What I understand from your post is that you have a real-time waveform-drawing application where samples arrive at fixed periods (every few milliseconds, I presume). There are three approaches you can follow.
Approach 1
This one is good particularly when you have limited memory and do not care about retaining the plot if the window is resized or uncovered. The following could be the function that receives samples (one by one).
NOTE: Variables prefixed with m_ are class members.
void DrawingArea::PlotSample(int nSample)
{
    Cairo::RefPtr<Cairo::Context> refCairoContext;
    double dNewY;

    // Get the window's cairo context
    refCairoContext = get_window()->create_cairo_context();

    // TODO Scale and transform sample to new Y coordinate
    dNewY = nSample;

    // Clear the area for the new waveform segment
    refCairoContext->rectangle(m_dPreviousX + 1, // see note below on the + 1
                               m_dPreviousY,
                               ERASER_WIDTH,
                               get_allocated_height());
    refCairoContext->set_source_rgb(0, 0, 0);
    refCairoContext->fill();

    // Set up the cairo context for the trace
    refCairoContext->set_source_rgb(1, 1, 1);
    refCairoContext->set_antialias(Cairo::ANTIALIAS_SUBPIXEL); // this is up to you
    refCairoContext->set_line_width(1); // it's 2 by default, and better that way with anti-aliasing

    // Add the sub-path and stroke
    refCairoContext->move_to(m_dPreviousX, m_dPreviousY);
    m_dPreviousX += m_dXStep;
    refCairoContext->line_to(m_dPreviousX, dNewY);
    refCairoContext->stroke();

    // Update coordinates
    if (m_dPreviousX >= get_allocated_width())
    {
        m_dPreviousX = 0;
    }
    m_dPreviousY = dNewY;
}
While clearing the area, the X coordinate has to be offset by 1, because otherwise the 'eraser' would wipe the anti-aliasing off the last column and your trace would have jagged edges. It may need to be more than 1 depending on your line thickness.
Like I said before, with this method your trace will be cleared if the widget is resized or 'revealed'.
Approach 2
Even here the samples are plotted the same way as before. The only difference is that each sample received is also pushed into a buffer. When the window is resized or 'revealed', the widget's on_draw is called, and there you can plot all the samples in one go (see the sketch below). Of course you'll need some memory (quite a lot if you keep 350K samples in the queue), but the trace stays on screen no matter what.
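A minimal sketch of this approach, assuming a std::vector<double> member m_Samples for the queue (the incremental drawing itself stays exactly as in approach 1):
void DrawingArea::PlotSample(int nSample)
{
    m_Samples.push_back(nSample); // remember the sample...
    // ...then erase and stroke the new segment directly, as in approach 1
}

bool DrawingArea::on_draw(const Cairo::RefPtr<Cairo::Context>& refCairoContext)
{
    // Called on resize/expose: replay the whole queue in one pass
    double dX = 0;
    refCairoContext->set_source_rgb(1, 1, 1);
    refCairoContext->set_line_width(1);
    for (std::size_t i = 0; i + 1 < m_Samples.size(); ++i)
    {
        refCairoContext->move_to(dX, m_Samples[i]);
        dX += m_dXStep;
        refCairoContext->line_to(dX, m_Samples[i + 1]);
    }
    refCairoContext->stroke();
    return true;
}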
Approach 3
This one also takes up a little memory (possibly much more, depending on the size of your widget) and uses an off-screen buffer. Here, instead of storing samples, we store the rendered result. Override the widget's on_map and on_size_allocate methods to create an off-screen buffer.
void DrawingArea::CreateOffscreenBuffer(void)
{
    Glib::RefPtr<Gdk::Window> refWindow = get_window();
    Gtk::Allocation oAllocation = get_allocation();

    if (refWindow)
    {
        Cairo::RefPtr<Cairo::Context> refCairoContext;

        m_refOffscreenSurface =
            refWindow->create_similar_surface(Cairo::CONTENT_COLOR,
                                              oAllocation.get_width(),
                                              oAllocation.get_height());
        refCairoContext = Cairo::Context::create(m_refOffscreenSurface);
        // TODO paint the background (grids, maybe?)
    }
}
Now when you receive samples, instead of drawing into the window directly, draw into the off-screen surface; then block-copy it by setting this surface as the source of your window's cairo context and filling a rectangle over the newly plotted sample. Also, in your widget's on_draw, just set this surface as the source of the widget's cairo context and do a Cairo::Context::paint() (both halves are sketched below). This approach is particularly useful if your widget doesn't get resized, and the advantage is that the blitting (transferring the contents of one surface to the other) is much faster than plotting individual line segments.
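A rough sketch of both halves, reusing m_refOffscreenSurface from the snippet above (the erase-and-stroke part is elided):
void DrawingArea::PlotSample(int nSample)
{
    // Render the new segment into the off-screen surface first
    Cairo::RefPtr<Cairo::Context> refOffscreenContext =
            Cairo::Context::create(m_refOffscreenSurface);
    // TODO erase and stroke the new segment here, as in approach 1

    // Then blit just the updated strip to the window
    Cairo::RefPtr<Cairo::Context> refWindowContext =
            get_window()->create_cairo_context();
    refWindowContext->set_source(m_refOffscreenSurface, 0, 0);
    refWindowContext->rectangle(m_dPreviousX, 0,
                                m_dXStep + 1, get_allocated_height());
    refWindowContext->fill();
}

bool DrawingArea::on_draw(const Cairo::RefPtr<Cairo::Context>& refCairoContext)
{
    // On expose/resize, simply paint the whole off-screen surface
    refCairoContext->set_source(m_refOffscreenSurface, 0, 0);
    refCairoContext->paint();
    return true;
}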
To answer your question:
There are cairo_copy_path() and cairo_append_path() (and also cairo_copy_path_flat() and cairo_path_destroy()).
Thus, you can save a path with cairo_copy_path() and later append it to the current path with cairo_append_path().
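A minimal sketch, using the loop from your on_draw:
// Build the expensive path once and keep a snapshot of it
for (size_t j = 0; j < xSignal.size() - 1; ++j)
{
    cairo_move_to(cr, xSignal[j], ySignal[j]);
    cairo_line_to(cr, xSignal[j + 1], ySignal[j + 1]);
}
cairo_path_t* saved_path = cairo_copy_path(cr); // copy the current path
cairo_stroke(cr);                               // stroking clears the current path

// Later, replay the saved path instead of rebuilding it point by point
cairo_append_path(cr, saved_path);
cairo_stroke(cr);

// When you no longer need it
cairo_path_destroy(saved_path);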
To answer your not-question:
I doubt that this will speed up your drawing. Appending these lines to the current path is unlikely to be slow. Rather, I would expect the actual drawing of these lines to be slow.
You write "it slows down a lot to paint all the points each iteration.". I am not sure what "each iteration" refers to, but why are you drawing all these points all the time? Wouldn't it make more sense to only draw them once and then to re-use the drawn result?

Speeding up drawing bitmap magnification within second bitmap with blend

The following code stretches a bitmap, blends it with an existing background, maintains the transparent area of the primary graphic, and then displays the blend within a window (imgScreen). This works fine when the level of stretch is not large, or when it is actually shrinking the initial bitmap. However, when stretching the graphic it is very slow.
I have limited experience with C++ and this kind of graphics, so perhaps there is another, more efficient way to do this. The primary bitmap to be sized is always square. Any ideas are much appreciated..!
I was going to try not drawing the clipped area, but from tests it seems the initial stretch is causing the slowdown... I'm also having trouble seeing how to calculate the non-clipped area... Drawing to controls seems a waste, but it seems to be the only way to use built-in functions like StretchDraw and the alpha-blending Draw overload.
std::auto_ptr<Graphics::TBitmap> bmap(new Graphics::TBitmap);
std::auto_ptr<Graphics::TBitmap> bmap1(new Graphics::TBitmap);
int s = newsize;
TRect sR = Rect(X, Y, X + s, Y + s);
TRect tR = Rect(0, 0, s, s);

bmap->SetSize(s, s);
bmap->Canvas->StretchDraw(Rect(0, 0, s, s), Form1->Image4->Picture->Bitmap); // scale
bmap1->SetSize(s, s);
bmap1->Canvas->CopyRect(tR, Form1->imgScreen->Canvas, sR);                   // background
bmap1->Canvas->Draw(0, 0, bmap.get());                                       // combine
Form1->imgTemp->Picture->Assign(bmap1.get());
Form1->imgScreen->Canvas->Draw(X, Y, Form1->imgTemp->Picture->Bitmap, alpha);
It displays correctly, but as the graphic gets larger the draw rate slows down quickly...

Remove moving objects to get the background model from multiple images

I want to find the background in multiple images captured with a fixed camera. The camera detects moving objects (animals) and captures sequential images, so I need to build a simple background-model image by processing 5 to 10 captured images that share the same background.
Can someone help me, please?
Is your eventual goal to find the foreground? Can you show some images?
If the animals move fast enough, they will create a lot of intensity changes, while background pixels will remain closely correlated across most of the frames. I won't write you real code, but I will give you pseudo-code in an OpenCV style. The main idea is to average only the correlated pixels:
Mat Iseq[10];                    // your sequence
Mat result, Iacc = 0, Icnt = 0;  // Iacc and Icnt are float types
loop through your sequence, i = 0; i < N-1; i++
    matchTemplate(Iseq[i], Iseq[i+1], result, CV_TM_CCOEFF_NORMED);
    mask = 1 & (result > 0.9);   // get the correlated part, which is probably background
    Iacc += Iseq[i] & mask + Iseq[i+1] & mask; // accumulate background information
    Icnt += 2*mask;              // keep count
end of loop
Mat Ibackground = Iacc.mul(1.0/Icnt); // average background (moving parts fade away)
To improve the result you may reduce the image resolution or apply a blur to enhance correlation. You can also clean small connected components out of every mask, for example by erosion. A more concrete version of this idea is sketched below.
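If you want something closer to compilable code, here is one way to realize the same idea. Note that matchTemplate() on two same-sized images yields a single score rather than a per-pixel map, so this sketch builds the "stable pixel" mask from a per-pixel absolute difference instead; the threshold of 10 and the CV_8UC3 input format are assumptions:
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: average only pixels that stay (nearly) unchanged between
// consecutive frames, so moving animals are excluded from the mean.
cv::Mat EstimateBackground(const std::vector<cv::Mat>& frames)
{
    cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_32FC3); // sum of stable pixels
    cv::Mat cnt = cv::Mat::zeros(frames[0].size(), CV_32FC1); // per-pixel counts
    for (std::size_t i = 0; i + 1 < frames.size(); ++i)
    {
        cv::Mat diff, gray, mask, f32;
        cv::absdiff(frames[i], frames[i + 1], diff);
        cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
        mask = gray < 10;                 // stable -> probably background
        frames[i].convertTo(f32, CV_32FC3);
        cv::add(acc, f32, acc, mask);     // accumulate stable pixels only
        cv::add(cnt, cv::Scalar(1), cnt, mask);
    }
    cv::Mat cnt3, background;
    cv::Mat channels[] = { cnt, cnt, cnt };
    cv::merge(channels, 3, cnt3);
    cv::divide(acc, cnt3, background);    // average; moving parts fade away
    background.convertTo(background, CV_8UC3);
    return background;
}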
If
each pixel location appears as background in more than half the frames, and
the colour of a pixel does not vary much across the subset of frames in which it is background,
then there's a very simple algorithm: for each pixel location, just take the median intensity over all frames.
How come? Suppose the image is greyscale (this makes it easier to explain, but the process works for colour images too: just treat each colour component separately). If a particular pixel location shows background in more than half the frames, then when you take that pixel's intensities across all frames and sort them, a background-coloured value must appear at the half-way (median) position. (In the worst case, all the background-coloured values get pushed to the very front or the very back of this ordering, but even then there are enough of them to cover the half-way point.) A sketch of this is given below.
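A sketch of the per-pixel median in OpenCV terms (assuming a handful of same-sized 8-bit frames; each colour component is treated separately, as described above):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Per-pixel (per-component) median across frames; fine for 5-10 images.
cv::Mat MedianBackground(const std::vector<cv::Mat>& frames)
{
    cv::Mat background(frames[0].size(), frames[0].type());
    std::vector<unsigned char> values(frames.size());
    for (int r = 0; r < background.rows; ++r)
        for (int c = 0; c < background.cols * background.channels(); ++c)
        {
            for (std::size_t i = 0; i < frames.size(); ++i)
                values[i] = frames[i].ptr<unsigned char>(r)[c]; // same component, every frame
            // A partial sort up to the median position is all we need
            std::nth_element(values.begin(),
                             values.begin() + values.size() / 2,
                             values.end());
            background.ptr<unsigned char>(r)[c] = values[values.size() / 2];
        }
    return background;
}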
If you only have 5 images it's going to be hard to identify background and most sophisticated techniques probably won't work. For general background identification methods, see Link

Drawing large numbers of pixels in OpenGL

I've been working on some sound-processing code and now I'm doing some visualizations. I finished making a spectrogram, but the way I am drawing it is too slow.
I'm using OpenGL to do 2D drawing, which has made searching for help more difficult. Also I am very new to OpenGL, so I don't know the standard way things are done.
I am storing the r,g,b values for each pixel in a large matrix.
Each time I get a small sound segment, I process it and convert it to a column of pixels. Everything is shifted to the left by 1 pixel, and the new column is put at the end.
Each time I redraw, I loop through, setting the color and drawing each pixel individually, which seems like a horribly inefficient way to do this.
Is there a better way to do this? Is there some method for simply shifting a bunch of pixels over?
There are many ways to improve your drawing speed.
The simplest would be to allocate an RGB texture that you draw using a screen-aligned textured quad.
Each time you want to draw a new line, use glTexSubImage2D to load a new subset of the texture, and then redraw the quad.
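Roughly, assuming a width x height RGB texture created up front with glTexImage2D, the per-column update could look like the sketch below; 'tex', 'x' (the column index being overwritten) and 'column' (height*3 bytes of new pixels) are illustrative names:
// Upload one new spectrogram column into the existing texture...
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                x, 0,              // x/y offset inside the texture
                1, height,         // one column, full height
                GL_RGB, GL_UNSIGNED_BYTE, column);

// ...then redraw the screen-aligned quad (legacy immediate mode)
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
Note that this also removes the need to shift anything: treat the texture as a circular buffer of columns and offset the texture coordinates by the write position when drawing the quad.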
Are you perhaps passing a lot more data to the graphics card than you have pixels? This could happen if your FFT size is much larger than the height of the drawing area, or if the number of spectral lines is a lot more than its width. If so, it's possible that the bottleneck is passing too much data across the bus. Try reducing the number of spectral lines, either by averaging them or by picking (taking the maximum in each bin for a set of consecutive lines); see the sketch below.
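For instance, a simple max-picking reduction might look like this (all names are illustrative):
#include <algorithm>
#include <cstddef>
#include <vector>

// Collapse the FFT's spectral lines down to 'width' bins by taking the
// maximum of each group of consecutive lines (leftover lines are ignored).
std::vector<float> ReduceBins(const std::vector<float>& lines, std::size_t width)
{
    std::vector<float> out(width, 0.0f);
    const std::size_t per_bin = lines.size() / width;
    for (std::size_t b = 0; b < width; ++b)
        for (std::size_t k = 0; k < per_bin; ++k)
            out[b] = std::max(out[b], lines[b * per_bin + k]);
    return out;
}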
GL_POINTS, VBO, GL_STREAM_DRAW.
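Spelled out, that one-liner amounts to something like the following sketch; the interleaved vertex layout is an assumption:
#include <cstddef> // offsetof
#include <vector>

struct Vertex { float x, y; unsigned char r, g, b; };
std::vector<Vertex> points; // filled with the spectrogram's pixels

// Once at startup:
GLuint vbo;
glGenBuffers(1, &vbo);

// Every frame: stream the points to the VBO and draw them in one call
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(Vertex),
             points.data(), GL_STREAM_DRAW); // re-uploaded each frame
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (const void*)0);
glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(Vertex),
               (const void*)offsetof(Vertex, r));
glDrawArrays(GL_POINTS, 0, (GLsizei)points.size());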
I know this is an old question, but . . .
Use a circular buffer to store the pixels, and then simply call glDrawPixels twice with the appropriate offsets. Something like this untested C:
#define SIZE_X 800
#define SIZE_Y 600

unsigned char pixels[SIZE_Y][SIZE_X*2][3];
int start = 0;

/* Write the new column just past the visible window of the ring buffer. */
void add_line(const unsigned char line[SIZE_Y][3]) {
    int i, j, coord = (start + SIZE_X) % (2*SIZE_X);
    for (i = 0; i < SIZE_Y; ++i)
        for (j = 0; j < 3; ++j)
            pixels[i][coord][j] = line[i][j];
    start = (start + 1) % (2*SIZE_X);
}

/* Draw the SIZE_X visible columns in at most two pieces. */
void draw(void) {
    int w = 2*SIZE_X - start; /* contiguous columns available from 'start' */
    if (w > SIZE_X) w = SIZE_X;
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 2*SIZE_X); /* real row stride of the buffer */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, start);
    glRasterPos2i(0, 0); /* assumes a pixel-aligned orthographic projection */
    glDrawPixels(w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    if (w < SIZE_X) { /* window wraps: draw the remainder from column 0 */
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
        glRasterPos2i(w, 0);
        glDrawPixels(SIZE_X - w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); /* restore defaults */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
}