I am wondering if I can change the window size and scale everything inside by about 50%. I would love to keep all the logic behind the program still valid (for example, where pygame would register a click at 1000 px, after the change it would register it at 500 px). I don't think there is an easy answer...
There are two different ways you could go about this. If you want to pick a screen size and have it stay the same until you decide to change it on the backend, you can go to each call of pygame.blit(foo) or pygame.draw.shape(foobar) and scale up the dimensions of foo and foobar by 50%. That's the simple way, but it has some drawbacks. For example, if you ever want to change the screen size again, you'll have to do it all over again. Certainly not ideal.
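As a rough illustration of that first option, here is a minimal sketch (the SCALE value and the coordinates are hypothetical; pygame.transform.scale does the resizing):
SCALE = 1.5  # hypothetical global factor

# before: screen.blit(foo, (100, 200))
# after, scaled by hand at each call site:
scaled_foo = pygame.transform.scale(foo, (int(foo.get_width() * SCALE), int(foo.get_height() * SCALE)))
screen.blit(scaled_foo, (int(100 * SCALE), int(200 * SCALE)))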
If you want the screen to be re-sizable by the user, or if you want to offer yourself greater flexibility in choosing your own screen size, you have to have some formulas to determine where to draw something based on screen size. For example, let's say you're doing a chess game. You could have something like this:
SCREEN_WIDTH = 500
TILE_WIDTH = SCREEN_WIDTH / 8
Then your drawBoard function could look like:
for i in range(8):
    for j in range(8):
        ...
        # color = black or white depending on which tile we're drawing
        pygame.draw.rect(mainWindow, color, (i * TILE_WIDTH, j * TILE_WIDTH, TILE_WIDTH, TILE_WIDTH))
Now, if you want a larger screen, your tiles will scale up with it. Or, to make it a little more complicated, let's say you want a buffer between the chessboard and the window edge. Then you could do
SCREEN_WIDTH = 500
BUFFER = 20
TILE_WIDTH = (SCREEN_WIDTH - BUFFER * 2) / 8
And your drawBoard function could look like:
for i in range(8):
    for j in range(8):
        ...
        # color = black or white depending on which tile we're drawing
        pygame.draw.rect(mainWindow, color, (BUFFER + i * TILE_WIDTH, BUFFER + j * TILE_WIDTH, TILE_WIDTH, TILE_WIDTH))
This way is harder in the short term, but totally worth it in the long run. You'll also have to factor in whether you want everything to resize, whether there's a min/max size, and a couple of other things, but I'm sure you can figure that out. The same formulas also let you map mouse clicks back to board coordinates.
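For example, a minimal sketch of click handling under this scheme (tile_at is a hypothetical helper; BUFFER and TILE_WIDTH are the constants defined above):
def tile_at(pos):
    # map a mouse position to board coordinates, or None if off the board
    col = int((pos[0] - BUFFER) // TILE_WIDTH)
    row = int((pos[1] - BUFFER) // TILE_WIDTH)
    if 0 <= col < 8 and 0 <= row < 8:
        return col, row
    return None

# usage inside the event loop:
for event in pygame.event.get():
    if event.type == pygame.MOUSEBUTTONDOWN:
        clicked = tile_at(event.pos)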
I'm creating a simple window manager for future projects and I seem to have run into a problem. I have a snippet of code which is supposed to move the viewport to the middle of the window whenever somebody resizes it, and it seems to work completely fine when changing position on the x-axis, as seen here. Unfortunately, it doesn't work on the y-axis; the viewport shows up at the bottom of the window instead. Here is the code that handles this:
/* create viewport */
if (win->width > win->height)
    glViewport((win->width / 2 - win->viewport.width / 2), 0, win->viewport.width, win->viewport.height);
else
    /* FIXME: viewport appears at bottom of window, i have no idea why */
    glViewport(0, (win->height / 2 - win->viewport.height / 2), win->viewport.width, win->viewport.height);
I have changed a number of variables in the equation, but none of them yielded any results. I have run the equation outside of glViewport and it returns the desired numbers, yet OpenGL keeps putting the viewport's position at (0,0) and I have yet to figure out why. If it helps at all, I'm using OpenGL 3.3 and SDL2 on a Windows machine.
If anybody could tell me what I need to do to fix this, I would greatly appreciate it. Please and thank you.
I have run into a similar problem with SDL2 too.
I think the missing part is that you are not considering the aspect ratio.
Also, when using SDL2 with OpenGL you should consider that the drawable area can be different from the window area.
Assuming w and h are the original sizes,
draw_w and draw_h the current drawable area size,
and view_w and view_h the viewport area size,
we can calculate it as:
SDL_GL_GetDrawableSize(window, &draw_w, &draw_h);
float ratio = (float)w / (float)h;
if (draw_w / ratio < draw_h)
{
    view_w = draw_w;
    view_h = (int)((float)view_w / ratio);
}
else
{
    view_h = draw_h;
    view_w = (int)((float)view_h * ratio);
}
x = (draw_w - view_w) / 2;
y = (draw_h - view_h) / 2;
glViewport(x, y, view_w, view_h);
I have used a similar function applied in an SDL2 event filter, triggered on:
if (event->type == SDL_WINDOWEVENT && (event->window.event == SDL_WINDOWEVENT_SIZE_CHANGED || event->window.event == SDL_WINDOWEVENT_EXPOSED))
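For context, a minimal sketch of wiring that condition up with SDL_AddEventWatch (handle_resize is a hypothetical wrapper around the viewport code above):
static int resize_watch(void *userdata, SDL_Event *event)
{
    if (event->type == SDL_WINDOWEVENT &&
        (event->window.event == SDL_WINDOWEVENT_SIZE_CHANGED ||
         event->window.event == SDL_WINDOWEVENT_EXPOSED))
    {
        SDL_Window *window = SDL_GetWindowFromID(event->window.windowID);
        handle_resize(window); /* recompute and apply the viewport */
    }
    return 0;
}

/* after creating the window: */
SDL_AddEventWatch(resize_watch, NULL);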
I have two graphs drawing signals in a gtkmm application.
The problem comes when I have to paint a graph with many points (around 300-350k) joined by lines to the following points, since repainting all the points on every iteration slows things down a lot.
bool DrawArea::on_draw(const Cairo::RefPtr<Cairo::Context>& c)
{
    cairo_t* cr = c->cobj();
    // xSignal.size() == ySignal.size() == 350000
    for (std::size_t j = 0; j + 1 < xSignal.size(); ++j)
    {
        cairo_move_to(cr, xSignal[j], ySignal[j]);
        cairo_line_to(cr, xSignal[j + 1], ySignal[j + 1]);
    }
    cairo_stroke(cr);
    return true;
}
I know that cairo_stroke_preserve exists, but I think it is not valid for me, because when I switch between graphs the drawing disappears.
I've been researching how to save the path and restore it in the Cairo documentation, but I don't see anything. In 2007, a Cairo user suggested the same thing in the documentation's 'to do' list, but apparently it has not been done.
Any suggestions?
It's not necessary to draw everything in on_draw. What I understand from your post is that you have a real-time waveform drawing application where samples are available at fixed periods (every few milliseconds, I presume). There are three approaches you can follow.
Approach 1
This is good particularly when you have limited memory and do not care about retaining the plot if the window is resized or uncovered. The following could be the function that receives samples (one by one).
NOTE: Variables prefixed with m_ are class members.
void DrawingArea::PlotSample(int nSample)
{
    Cairo::RefPtr<Cairo::Context> refCairoContext;
    double dNewY;

    // Get the window's cairo context
    refCairoContext = get_window()->create_cairo_context();

    // TODO Scale and transform sample to new Y coordinate
    dNewY = nSample;

    // Clear the full column height for the new waveform segment
    {
        refCairoContext->rectangle(m_dPreviousX + 1, // see note below on the + 1
                                   0,
                                   ERASER_WIDTH,
                                   get_allocated_height());
        refCairoContext->set_source_rgb(0, 0, 0);
        refCairoContext->fill();
    }

    // Set up the cairo context for the trace
    {
        refCairoContext->set_source_rgb(1, 1, 1);
        refCairoContext->set_antialias(Cairo::ANTIALIAS_SUBPIXEL); // this is up to you
        refCairoContext->set_line_width(1); // it's 2 by default, and better that way with anti-aliasing
    }

    // Add sub-path and stroke
    refCairoContext->move_to(m_dPreviousX, m_dPreviousY);
    m_dPreviousX += m_dXStep;
    refCairoContext->line_to(m_dPreviousX, dNewY);
    refCairoContext->stroke();

    // Update coordinates
    if (m_dPreviousX >= get_allocated_width())
    {
        m_dPreviousX = 0;
    }
    m_dPreviousY = dNewY;
}
While clearing the area, the X coordinate has to be offset by 1 because otherwise the 'eraser' will wipe off the anti-aliasing on the last column and your trace will have jagged edges. It may need to be more than 1 depending on your line thickness.
Like I said before, with this method your trace will get cleared if the widget is resized or 'revealed'.
Approach 2
Even here the samples are plotted the same way as before. The only difference is that each sample received is also pushed into a buffer. When the window is resized or 'revealed', the widget's on_draw is called, and there you can plot all the samples in one go, as in the sketch below. Of course you'll need some memory (quite a lot if you have 350K samples in the queue), but the trace stays on screen no matter what.
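A minimal sketch of this approach, assuming a std::deque<int> member m_samples and a hypothetical ReceiveSample entry point; PlotSample is the function from Approach 1:
void DrawingArea::ReceiveSample(int nSample)
{
    m_samples.push_back(nSample); // keep every sample for later redraws
    PlotSample(nSample);          // plot it immediately, as in Approach 1
}

bool DrawingArea::on_draw(const Cairo::RefPtr<Cairo::Context>& /* refCairoContext */)
{
    // Replay the whole buffer when the widget is resized or revealed.
    // PlotSample draws straight to the window, so the passed context is unused here.
    m_dPreviousX = 0;
    m_dPreviousY = 0;
    for (int nSample : m_samples)
        PlotSample(nSample);
    return true;
}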
Approach 3
This one also takes up a little bit of memory (probably much more, depending on the size of your widget), and uses an off-screen buffer. Here, instead of storing samples, we store the rendered result. Override the widget's on_map and on_size_allocate methods to create an off-screen buffer.
void DrawingArea::CreateOffscreenBuffer(void)
{
    Glib::RefPtr<Gdk::Window> refWindow = get_window();
    Gtk::Allocation oAllocation = get_allocation();

    if (refWindow)
    {
        Cairo::RefPtr<Cairo::Context> refCairoContext;

        m_refOffscreenSurface = refWindow->create_similar_surface(Cairo::CONTENT_COLOR,
                                                                  oAllocation.get_width(),
                                                                  oAllocation.get_height());
        refCairoContext = Cairo::Context::create(m_refOffscreenSurface);
        // TODO paint the background (grids, maybe?)
    }
}
Now when you receive samples, instead of drawing into the window directly, draw into the off-screen surface. Then blit from it by setting this surface as the source of your window's cairo context and drawing a rectangle covering the newly plotted sample. Also, in your widget's on_draw, just set this surface as the source of the widget's cairo context and do a Cairo::Context::paint(), as in the sketch below. This approach is particularly useful if your widget doesn't get resized, and the advantage is that the blitting (where you transfer contents of one surface to the other) is way faster than plotting individual line segments.
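A minimal sketch of that on_draw, assuming m_refOffscreenSurface is the member created above:
bool DrawingArea::on_draw(const Cairo::RefPtr<Cairo::Context>& refCairoContext)
{
    // copy the off-screen surface onto the widget in one blit
    refCairoContext->set_source(m_refOffscreenSurface, 0, 0);
    refCairoContext->paint();
    return true;
}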
To answer your question:
There are cairo_copy_path() and cairo_append_path() (and also cairo_copy_path_flat() and cairo_path_destroy()).
Thus, you can save a path with cairo_copy_path() and later append it to the current path with cairo_append_path().
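For example, a minimal sketch of that pairing, assuming cr is a valid cairo_t*:
cairo_path_t *saved = cairo_copy_path(cr); /* snapshot the current path */
/* ... later, after the path has been cleared ... */
cairo_append_path(cr, saved); /* replay the saved path */
cairo_stroke(cr);
cairo_path_destroy(saved); /* free the snapshot when done */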
To answer your not-question:
I doubt that this will speed up your drawing. Appending these lines to the current path is unlikely to be slow. Rather, I would expect the actual drawing of these lines to be slow.
You write "it slows down a lot to paint all the points each iteration". I am not sure what "each iteration" refers to, but why are you drawing all these points all the time? Wouldn't it make more sense to draw them once and then re-use the drawn result?
I'm developing a simple tile game that displays a grid image and paints it with successive layers of images. So I have:
list_of_image_tiles = {
    GRASS: pygame.image.load('/grass.png').convert_alpha(),
    TREES: pygame.image.load('/trees.png').convert_alpha(),
    # etc.
}
Then later on I blit these:
DISPLAYSURF.blit(list_of_images[lists_of_stuff][TREES], (col * TILESIZE, row * TILESIZE))
DISPLAYSURF.blit(list_of_images[lists_of_stuff][GRASS], (col * TILESIZE, row * TILESIZE))
Note that for brevity I've not included a lot of code, but it does basically work; it's just that performance is painfully slow. If I comment out the DISPLAYSURF stuff, performance leaps forward, so I think I need a better way to do the DISPLAYSURF stuff, or possibly the pygame.image.load bits (is convert_alpha() the best way, bearing in mind I need the layered-image approach?).
I read that something called Psyco might help, but I'm not sure how to fit that in. Any ideas on how to improve the performance are most welcome.
There are a couple of things you can do.
1. Perform the "multi-layer" blit just once to an intermediate surface, then blit that surface to DISPLAYSURF every frame.
2. Identify the parts of the screen that need to be updated and use pygame.display.update(rectangle_list) instead of pygame.display.flip().
Edit to add example of 1.
Note: you didn't give much of your code, so I just fit this with how I do it.
# build up the level surface once, when you enter a level
level = pygame.Surface((LEVEL_WIDTH * TILESIZE, LEVEL_HEIGHT * TILESIZE))
for row in range(LEVEL_HEIGHT):
    for col in range(LEVEL_WIDTH):
        level.blit(list_of_images[lists_of_stuff][TREES], (col * TILESIZE, row * TILESIZE))
        level.blit(list_of_images[lists_of_stuff][GRASS], (col * TILESIZE, row * TILESIZE))
Then, in the main loop, during the draw part:
# blit only the part of the level that should be on the screen
# view is a Rect describing which tiles should be viewable
disp = DISPLAYSURF.get_rect()
level_area = pygame.Rect((view.left * TILESIZE, view.top * TILESIZE), disp.size)
DISPLAYSURF.blit(level, disp, area=level_area)
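And a minimal sketch of tip 2, under the assumption that you collect the rectangles changed this frame (Surface.blit returns the affected Rect):
dirty_rects = []
dirty_rects.append(DISPLAYSURF.blit(level, disp, area=level_area))
pygame.display.update(dirty_rects)  # repaint only what changed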
You should use a colorkey whenever you don't need per-pixel alpha. I just changed all the convert_alpha() calls in my code to plain convert() and set a colorkey, since my images only need fully opaque or fully transparent pixels. Performance increase: TEN FOLD!
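A minimal sketch of that change ('/trees.png' is the path from the question above; the magenta key color is an assumption about how transparency is marked in the image):
tile = pygame.image.load('/trees.png').convert()  # plain convert(), no per-pixel alpha
tile.set_colorkey((255, 0, 255))  # pixels of this exact color become fully transparent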
So I'm playing around with creating a simple game engine in C++. I needed to render some text, so I used this tutorial (http://learnopengl.com/#!In-Practice/Text-Rendering) for guidance. It uses the FreeType 2 library.
Everything works great, and the text renders as it should. But now that I'm fleshing out the UI and creating labels, I would like to be able to change the size of the text. I can do so by scaling the text, but I would prefer to specify the size in pixels.
Here you can see the scaling in action:
GLfloat xpos = x + ch.Bearing.x * scale;
GLfloat ypos = y + linegap + (font.Characters['H'].Bearing.y - ch.Bearing.y) * scale;
GLfloat w = ch.Size.x * scale;
GLfloat h = ch.Size.y * scale;
So in my renderText method I just pass a scale variable and it scales the text. But I would prefer to use pixels, as that is more user-friendly. Is there any way I could do this in FreeType 2, or am I stuck with a scale variable?
Assuming you don't want to regenerate the glyphs at a different resolution, but instead want to specify scale as a unit of pixels instead of a ratio (i.e. you want to say scale = 14 pixels instead of scale = 29%), then you can do the following: Save the height value you passed to FT_Set_Pixel_Sizes (which is 48 in the tutorial). Now if you want a 14-pixel render, just divide 14 by that number (48), so it would be scale = 14.0f / 48.0f. That will give you the scaling needed to render at a 14-pixel scale from a font that was originally generated with a 48-pixel height.
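For illustration, a minimal sketch of that conversion (ScaleForPixelHeight is a hypothetical helper, and the usage line assumes a renderText(text, x, y, scale, color) signature like the tutorial's):
const GLfloat generatedPixelHeight = 48.0f; // the height passed to FT_Set_Pixel_Sizes

GLfloat ScaleForPixelHeight(GLfloat desiredPixelHeight)
{
    // e.g. 14-pixel text from 48-pixel glyphs -> scale = 14 / 48
    return desiredPixelHeight / generatedPixelHeight;
}

// usage: renderText(text, x, y, ScaleForPixelHeight(14.0f), color);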
You might want to play with your OpenGL texture filters or mipmapping as well when you do this to improve your results. Additionally, fonts sometimes have low-resolution pixel hinting, which helps them be rendered clearly even at low resolutions; unfortunately this hinting information is lost/not used when you generate a high res texture and then scale it down to a smaller render size, so it might not look as clear as you desire.
I'm relatively new to love2d and was wondering if there is a simple way to draw a linear gradient without using an image. I'm trying to draw a scene that is at dusk, and want a subtle gradient from the top of the background to the bottom, but creating an image large enough to fill the background seems like it would be too large.
Any thoughts?
Try using an image which is 1px wide by the height needed, and repeat it horizontally like so:
-- load
bgImage = love.graphics.newImage('gradient.png')
bgImage:setWrap('repeat', 'clamp')
bgQuad = love.graphics.newQuad(
0, 0,
WIDTH, bgImage:getHeight(),
bgImage:getWidth(), bgImage:getHeight()
)
-- draw
love.graphics.drawq(bgImage, bgQuad, X, Y)
Replace X, Y, and WIDTH with the values you need. Using a quad here allows Löve to handle the horizontal repeat for really fast drawing.
(Hopefully this works, I haven't tested it.)
If you are worried about the size of the image and about performance, the best way is to make an image of 1 x n pixels, where n is the number of colors in the gradient.
For example, if you want a vertical background gradient with 2 colors, stretch the image over the window when you draw it: the x-scale is the full window width (the image is 1 pixel wide), the y-scale is half the window height (the image is 2 pixels tall), and the texture filter blends between the two colors:
love.graphics.draw(img, 0, 0, 0, love.graphics.getWidth(), love.graphics.getHeight() / 2)
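If you would rather not ship a PNG at all, here is a minimal sketch of building that 1 x 2 image in code (the color values assume a pre-0.11 LÖVE, where channels range from 0 to 255):
local data = love.image.newImageData(1, 2)
data:setPixel(0, 0, 40, 40, 90, 255)   -- dusk sky at the top
data:setPixel(0, 1, 200, 120, 60, 255) -- horizon glow at the bottom
img = love.graphics.newImage(data)
img:setFilter('linear', 'linear') -- blend smoothly between the two pixels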
:)