I'm relatively new to love2d and was wondering if there is a simple way to draw a linear gradient without using an image. I'm trying to draw a scene that is at dusk, and want a subtle gradient from the top of the background to the bottom, but creating an image large enough to fill the background seems like it would be too large.
Any thoughts?
Try using an image which is 1px wide by the height needed, and repeat it horizontally like so:
-- load
bgImage = love.graphics.newImage('gradient.png')
bgImage:setWrap('repeat', 'clamp')
bgQuad = love.graphics.newQuad(
    0, 0,                                    -- top-left corner of the quad
    WIDTH, bgImage:getHeight(),              -- quad size: full screen width, image height
    bgImage:getWidth(), bgImage:getHeight()  -- reference (texture) dimensions
)
-- draw
love.graphics.drawq(bgImage, bgQuad, X, Y)
Replace X, Y, and WIDTH with the values you need. Using a quad here lets LÖVE handle the horizontal repeat, which keeps the draw fast. (In LÖVE 0.9 and later, drawq was merged into love.graphics.draw, so call love.graphics.draw(bgImage, bgQuad, X, Y) instead.)
(Hopefully this works, I haven't tested it.)
If you are worried about the size of the image and about performance, the best approach is to make an image that is 1 x n pixels, where n is the number of colors in the gradient.
For example, if you want a vertical background gradient with 2 colors, stretch that tiny image over the whole window:
love.graphics.draw(img,0,0,0,love.graphics.getWidth(),love.graphics.getHeight()/2)
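For reference, here is a minimal, untested sketch of building that 1 x n image in code instead of loading a PNG (the color values assume LÖVE 11's 0-1 range, and the two dusk-ish colors are just placeholders):
-- build a 1x2 gradient image in code; the two colors are placeholders
local data = love.image.newImageData(1, 2)
data:setPixel(0, 0, 0.25, 0.15, 0.45, 1)  -- top color
data:setPixel(0, 1, 0.95, 0.55, 0.30, 1)  -- bottom color
local img = love.graphics.newImage(data)
img:setFilter('linear', 'linear')         -- let the GPU blend between the two rows
function love.draw()
    -- stretch the tiny image over the whole window, as in the one-liner above
    love.graphics.draw(img, 0, 0, 0,
        love.graphics.getWidth(), love.graphics.getHeight() / 2)
end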
:)
The following code stretches a bitmap, blends it with an existing background, keeps the transparent area of the primary graphic, and then displays the blend within a window (imgScreen). This works fine when the amount of stretch is small, or when it is actually shrinking the initial bitmap, but when stretching the graphic up it is very slow.
I have limited experience with C++ and this kind of graphics, so perhaps there is another, more efficient way to do this. The primary bitmap to be resized is always square. Any ideas are much appreciated!
I was going to try not drawing the clipped area, but from tests it seems the initial stretch is causing the slowdown. I'm also having trouble seeing how to calculate the non-clipped area. Drawing to controls seems wasteful, but it appears to be the only way to use built-in functions like StretchDraw and the alpha-blended Draw overload.
std::auto_ptr<Graphics::TBitmap> bmap(new Graphics::TBitmap);
std::auto_ptr<Graphics::TBitmap> bmap1(new Graphics::TBitmap);
int s = newsize;
TRect sR = Rect(X,Y,X+s,Y+s);
TRect tR = Rect(0,0,s,s);
bmap->SetSize(s,s);
bmap->Canvas->StretchDraw(Rect(0, 0, s, s), Form1->Image4->Picture->Bitmap); // scale
bmap1->SetSize(s,s);
bmap1->Canvas->CopyRect(tR, Form1->imgScreen->Canvas, sR); //background
bmap1->Canvas->Draw(0,0,bmap.get()); // combine
Form1->imgTemp->Picture->Assign(bmap1.get());
Form1->imgScreen->Canvas->Draw(X, Y, Form1->imgTemp->Picture->Bitmap, alpha);
It displays correctly, but as the graphic gets larger the draw rate slows down quickly.
I'm developing a simple tile game that displays a grid image and paints it with successive layers of images. So I have-
list_of_image_tiles = { GRASS: pygame.image.load('/grass.png').convert_alpha(), TREES: pygame.image.load('/trees.png').convert_alpha(), etc}
Then later on I blit these-
DISPLAYSURF.blit(list_of_images[lists_of_stuff][TREES], (col*TILESIZE,row*TILESIZE))
DISPLAYSURF.blit(list_of_images[lists_of_stuff][GRASS], (col*TILESIZE,row*TILESIZE))
Note that for brevity I've not included a lot of code, but it does basically work, except that performance is painfully slow. If I comment out the DISPLAYSURF stuff, performance leaps forward, so I think I need a better way to do the DISPLAYSURF stuff, or possibly the pygame.image.load bits (is convert_alpha() the best way, bearing in mind I need the layered-image approach?).
I read that something called Psyco might help, but I'm not sure how to fit that in. Any ideas on how to improve the performance are most welcome.
There are a couple of things you can do.
Perform the "multi-layer" blit just once to a surface then just blit that surface every frame to the DISPLAYSURF.
Identify the parts of the screen that need to be updated and use pygame.display.update(rectangle_list) instead of pygame.display.flip() (see the sketch after the example below).
Edit: adding an example of point 1.
Note: you didn't give much of your code, so I've just fitted this to the way I do it.
# build up the level surface once, when you enter a level
level = pygame.Surface((LEVEL_WIDTH * TILESIZE, LEVEL_HEIGHT * TILESIZE))
for row in range(LEVEL_HEIGHT):
    for col in range(LEVEL_WIDTH):
        level.blit(list_of_images[lists_of_stuff][TREES], (col * TILESIZE, row * TILESIZE))
        level.blit(list_of_images[lists_of_stuff][GRASS], (col * TILESIZE, row * TILESIZE))
Then, in the main loop, during the draw phase:
# blit only the part of the level that should be on the screen
# 'view' is a Rect describing which tiles should be visible
disp = DISPLAYSURF.get_rect()
level_area = pygame.Rect((view.left * TILESIZE, view.top * TILESIZE), disp.size)
DISPLAYSURF.blit(level, disp, area=level_area)
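And a rough, untested sketch of point 2; the names player_image, player_rect, dx and dy are made up for the example, and it assumes the level and the screen share coordinates (no scrolling):
dirty_rects = []
old_rect = player_rect.copy()                      # where the sprite was last frame
player_rect.move_ip(dx, dy)                        # move the sprite
DISPLAYSURF.blit(level, old_rect, area=old_rect)   # repair the background behind it
DISPLAYSURF.blit(player_image, player_rect)        # draw it at the new position
dirty_rects += [old_rect, player_rect]
pygame.display.update(dirty_rects)                 # instead of pygame.display.flip()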
You should use a colorkey whenever you don't need per-pixel alpha. I just changed all the convert_alpha() calls in my code to plain convert() and set a colorkey for the parts of the image that should be transparent. Performance increased TEN FOLD!
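A minimal sketch of that change (it assumes the transparent areas of the source image are filled with a single flat color, magenta here):
tile = pygame.image.load('grass.png').convert()   # plain convert(), no per-pixel alpha
tile.set_colorkey((255, 0, 255))                  # magenta pixels are skipped when blitting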
I have a black area around my image, and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried converting the image to grayscale and then thresholding it to binary, but that doesn't isolate the border, because the result also contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this:
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0)
use cv::findContours to find white segments
remove segments that don't touch image borders
use cv::drawContours to draw the remaining segments to a mask.
There is probably a solution that is more efficient at runtime, but you should be able to prototype this one quite quickly.
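A quick, untested sketch of those steps (the function and variable names are just for illustration; img is assumed to be the grayscale input):
#include <opencv2/opencv.hpp>

cv::Mat makeBorderMask(const cv::Mat& img)
{
    // 1. inverse-binarize: pixels with the value 0 become white, everything else black
    cv::Mat bin;
    cv::threshold(img, bin, 0, 255, cv::THRESH_BINARY_INV);
    // 2. find the white segments
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    // 3. + 4. keep only segments that touch the image border and draw them, filled, into the mask
    cv::Mat mask = cv::Mat::zeros(img.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i) {
        cv::Rect r = cv::boundingRect(contours[i]);
        bool touchesBorder = r.x == 0 || r.y == 0 ||
                             r.x + r.width == img.cols || r.y + r.height == img.rows;
        if (touchesBorder)
            cv::drawContours(mask, contours, (int)i, cv::Scalar(255), cv::FILLED);
    }
    return mask;
}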
I would like to create a fake "explosion" effect in SDL. For this, I would like the screen to go from what it is currently, and fade to white.
Originally, I thought about using SDL_FillRect like so (where explosionTick is the current alpha value):
SDL_FillRect(screen , NULL , SDL_MapRGBA(screen->format , 255, 255 , 255, explosionTick ));
But instead of a reverse-fading rectangle, it shows up completely white with no alpha. The other method I tried involved using a fullscreen bitmap filled with a transparent white (with an alpha value of 1), and blitting it once for each explosionTick, like so:
for(int a=0; a<explosionTick; a++){
SDL_BlitSurface(boom, NULL, screen, NULL);
}
But this ended up being too slow to run in real time.
Is there any easy way to achieve this effect without losing performance? Thank you for your time.
Well, you need blending, and AFAIK the only way SDL does it is with SDL_BlitSurface. So you just need to optimize that blit. I suggest benchmarking these options:
Try SDL_SetAlpha to use per-surface alpha instead of per-pixel alpha (see the sketch after this list). In theory it's less work for SDL, so you can hope for some speed gain, but I never measured it and have had problems with it in the past.
You don't really need a fullscreen bitmap; just repeat a thick row. It should be less memory intensive, and maybe there is a cache gain. You can also probably fake some smoothness by doing half the lines in each pass (fewer pixels to blit, and it should still look like a whole-screen effect).
For optimal performance, check that your bitmap is in the display format: see SDL_DisplayFormatAlpha, or possibly SDL_DisplayFormat if you use per-surface alpha.
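Here is an untested sketch of the first option (SDL 1.2 API), combined with the display-format advice:
/* build one opaque white surface in the display format, once */
SDL_Surface *tmp = SDL_CreateRGBSurface(SDL_SWSURFACE, screen->w, screen->h, 32, 0, 0, 0, 0);
SDL_FillRect(tmp, NULL, SDL_MapRGB(tmp->format, 255, 255, 255));
SDL_Surface *flash = SDL_DisplayFormat(tmp);
SDL_FreeSurface(tmp);

/* every frame: per-surface alpha taken from explosionTick (0..255) */
SDL_SetAlpha(flash, SDL_SRCALPHA, (Uint8)explosionTick);
SDL_BlitSurface(flash, NULL, screen, NULL);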
I've been working on some sound processing code and now I'm doing some visualizations. I finished making a spectrogram, but the way I am drawing it is too slow.
I'm using OpenGL to do 2D drawing, which has made searching for help more difficult. Also I am very new to OpenGL, so I don't know the standard way things are done.
I am storing the r,g,b values for each pixel in a large matrix.
Each time I get a small sound segment, I process it and convert it to a column of pixels. Everything is shifted to the left by 1 pixel, and the new line is put at the end.
Each time I redraw, I am looping through setting the color and drawing each pixel individually, which seems like a horribly inefficient way to do this.
Is there a better way to do this? Is there some method for simply shifting a bunch of pixels over?
There are many ways to improve your drawing speed.
The simplest is to allocate an RGB texture that you draw with a screen-aligned textured quad.
Each time you want to add a new line, use glTexSubImage2D to upload a new subset of the texture, then redraw the quad.
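Something along these lines, untested and using the old fixed-function API; TEX_W, TEX_H, col and column are illustrative names:
/* one-time setup: allocate an empty RGB texture */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEX_W, TEX_H, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

/* when a new column of pixels arrives: upload just that column */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            /* a 1-pixel-wide RGB row is 3 bytes */
glTexSubImage2D(GL_TEXTURE_2D, 0, col, 0, 1, TEX_H, GL_RGB, GL_UNSIGNED_BYTE, column);

/* each frame: one screen-aligned textured quad */
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();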
Are you perhaps passing a lot more data to the graphics card than you have pixels? This could happen if your FFT size is much larger than the height of the drawing area, or if the number of spectral lines is a lot more than its width. If so, the bottleneck could be pushing too much data across the bus. Try reducing the number of spectral lines, either by averaging them or by peak-picking (taking the maximum in each bin over a set of consecutive lines).
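For instance, a small untested helper that collapses n FFT bins down to m display lines by taking the maximum of each group (all of the names here are made up):
void decimate_max(const float *bins, int n, float *lines, int m)
{
    int group = n / m;                      /* consecutive bins per display line */
    for (int i = 0; i < m; ++i) {
        float peak = bins[i * group];
        for (int j = 1; j < group; ++j)
            if (bins[i * group + j] > peak)
                peak = bins[i * group + j];
        lines[i] = peak;
    }
}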
GL_POINTS, a VBO, and GL_STREAM_DRAW.
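An untested sketch of that idea: one point per pixel with interleaved position and color, re-uploaded each frame as a GL_STREAM_DRAW VBO (num_points and point_data are illustrative names):
/* setup, once */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* every frame: orphan the buffer and upload the new interleaved x,y,r,g,b data */
glBufferData(GL_ARRAY_BUFFER, num_points * 5 * sizeof(float), point_data, GL_STREAM_DRAW);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 5 * sizeof(float), (const void *)0);
glColorPointer(3, GL_FLOAT, 5 * sizeof(float), (const void *)(2 * sizeof(float)));
glDrawArrays(GL_POINTS, 0, num_points);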
I know this is an old question, but . . .
Use a circular buffer to store the pixels, and then call glDrawPixels at most twice with the appropriate offsets. Something like this untested C:
#define SIZE_X 800
#define SIZE_Y 600
/* circular buffer twice the screen width: the visible window is the
   SIZE_X columns starting at 'start', wrapping around at most once */
unsigned char pixels[SIZE_Y][SIZE_X*2][3];
int start = 0;
void add_line(const unsigned char line[SIZE_Y][1][3]) {
    int i, j, coord = (start + SIZE_X) % (2*SIZE_X);  /* column just past the visible window */
    for (i = 0; i < SIZE_Y; ++i)
        for (j = 0; j < 3; ++j)
            pixels[i][coord][j] = line[i][0][j];
    start = (start + 1) % (2*SIZE_X);                 /* scroll left by one column */
}
void draw(void) {
    int w = 2*SIZE_X - start;                         /* columns left before the buffer wraps */
    if (w > SIZE_X) w = SIZE_X;
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 2*SIZE_X);    /* real row width of the buffer */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, start);      /* start reading at column 'start' */
    glWindowPos2i(0, 0);
    glDrawPixels(w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    if (w < SIZE_X) {                                 /* wrapped part, read from column 0 */
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
        glWindowPos2i(w, 0);
        glDrawPixels(SIZE_X - w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
}