Let's say I make a surface like this:
cairo_surface_t* surface = cairo_pdf_surface_create("pdffile.pdf", 40000, 40000);
cairo_t* cr = cairo_create(surface);
That's a big surface! The reason for doing so is that I don't know the size of my drawing until I've plotted it (it's a complicated graph, generated on the fly). Once I've plotted it, cropping the surface seems like it should be trivial. So how do I do it?
Draw to a recording surface instead of a PDF surface. The recording surface can then be painted onto a smaller PDF surface. Cairo also supports unbounded recording surfaces, so this works even when your drawing is wider or taller than 40k pixels.
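For example, a minimal sketch of that approach (needs cairo.h and cairo-pdf.h; the output file name is just a placeholder):
cairo_surface_t* rec = cairo_recording_surface_create(CAIRO_CONTENT_COLOR_ALPHA, NULL); /* NULL extents = unbounded */
cairo_t* cr = cairo_create(rec);
/* ... plot the graph here ... */
cairo_destroy(cr);

double x, y, w, h;
cairo_recording_surface_ink_extents(rec, &x, &y, &w, &h); /* bounding box of what was actually drawn */

cairo_surface_t* pdf = cairo_pdf_surface_create("pdffile.pdf", w, h);
cairo_t* cr2 = cairo_create(pdf);
cairo_set_source_surface(cr2, rec, -x, -y); /* shift the drawing to the origin */
cairo_paint(cr2);
cairo_destroy(cr2);
cairo_surface_destroy(pdf);
cairo_surface_destroy(rec);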
I'd like to be able to apply a transparent texture to each face of a simple 3D cube. Each texture is simply the name of the face, e.g. "Front", "Back", "Left", etc. Normally I'd just create an image in a paint app, save it, and load the image as a texture at runtime. However, the twist is that I'd like this text to be in multiple (and a growing number of) languages, and although creating six new images for each language is one option, it would be preferable to dynamically generate textures from translated strings instead.
So the ideal solution is to be able to render a text string to a 2D texture at runtime (using any font and colour I choose), then render the resultant texture over each surface as a transparent overlay. Can this be done relatively easily?
Getting the texture onto the cube is easy enough; the tricky part is getting the text (a string) rendered to a texture (ID3D11Texture2D). It seems like it should be very doable. I'm very new at this, so please keep any explanations in beginner-friendly language where possible.
I guess the cube textures do not change too frequently, since the user should not switch languages too often. In this case I would use a toolkit like Qt to generate the textures, similar to this:
// width, height, font, and text are assumed to be defined elsewhere
QImage image(width, height, QImage::Format_ARGB32);
image.fill(Qt::transparent);
QPainter painter;
painter.begin(&image);
painter.setRenderHints(QPainter::Antialiasing
| QPainter::TextAntialiasing);
painter.setFont(font);
painter.setPen(Qt::black);
QFontMetrics metrics(font); // ascent() gives the baseline offset for the first line
painter.drawText(0, metrics.ascent(), text);
painter.end();
To access the texture data:
const uchar* data = image.constBits(); // QImage has no data(); use bits()/constBits()
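From there, a hypothetical sketch of uploading those pixels into an ID3D11Texture2D ('device' is assumed to be your ID3D11Device*, error handling omitted). QImage::Format_ARGB32 stores pixels as BGRA on little-endian machines, which matches DXGI_FORMAT_B8G8R8A8_UNORM:
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = image.width();
desc.Height = image.height();
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_IMMUTABLE;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = image.constBits();
init.SysMemPitch = image.bytesPerLine(); // bytes per scanline

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&desc, &init, &texture); // 'device' is assumed to exist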
A different solution would be to use signed distance field text rendering. With this approach you also need to render the text to a texture first (render-to-texture) and then apply it to the cube.
I want to create an app in Cocos2d/Cocos2d-x in which I have an image that is not visible at first, but when I move my finger on the device it starts drawing: only the part of the image where I move my finger is revealed.
Thanks in Advance
There are two ways I can think of drawing an image.
The first way would be like a brush. You would use a RenderTexture and draw/visit a brush sprite into this texture. If you just need to draw with solid colors (which can have opacity) you could also use the primitive draw commands (drawCircle, drawPoly, drawSegment). You will need a high rate of touch tracking, and you will likely want to draw segments or Bezier curves between touch movements to catch fast strokes; see the sketch after the links below.
http://discuss.cocos2d-x.org/t/using-rendertexture-to-render-one-sprite-multiple-times/16332/3
http://discuss.cocos2d-x.org/t/freehand-drawing-app-with-cocos2d-x-v3-3-using-rendertexture/17567/9
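A minimal sketch of the brush approach, assuming the cocos2d-x v3 API and running inside a Layer's init() ("brush.png" is a placeholder image):
auto size = cocos2d::Director::getInstance()->getVisibleSize();
auto rt = cocos2d::RenderTexture::create(size.width, size.height);
this->addChild(rt);

auto brush = cocos2d::Sprite::create("brush.png"); // placeholder brush image
brush->retain(); // kept alive manually, never added to the scene graph

auto listener = cocos2d::EventListenerTouchOneByOne::create();
listener->onTouchBegan = [](cocos2d::Touch*, cocos2d::Event*) { return true; };
listener->onTouchMoved = [rt, brush](cocos2d::Touch* touch, cocos2d::Event*) {
    rt->begin();                              // draw into the offscreen texture
    brush->setPosition(touch->getLocation()); // stamp the brush at the touch point
    brush->visit();                           // for fast strokes, also stamp along the
    rt->end();                                // segment from the previous touch point
};
_eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);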
Searching on how other drawing games work would be useful.
The second way I can envision is similar to revealing, except using an inverse mask: you would draw a fixed image, but reveal that image by drawing.
http://www.raywenderlich.com/4428/how-to-mask-a-sprite-with-cocos2d-2-0
There are more elaborate ways to handle the drawing into the RenderTexture in order to have the brush design tile correctly and repeat based on a given size, but that'll involve making something closer to an image editing tool.
I am using OpenGL for a 2D-based game which has been developed for a resolution of 640x480 pixels. Thus, I set up my OpenGL doublebuffer like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 480, 0, 0, 1);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This works really well and I can draw all my sprites and background scrollers using hardware-accelerated GL textures. Now I'd like to support other window sizes as well, i.e. the user should be able to run the game at 800x600, 1024x768, etc., so all graphics should be scaled to the new resolution. Of course I could do this by simply applying scaling factors to all my vertices when drawing the textures as quads, but I don't think I'd be able to achieve pixel-perfect positioning that way... and pixel-perfect positioning is of course very important for 2D games!
Thus, I'd like to ask if there's a possibility to work with a static 640x480 doublebuffer and have it scaled only just before it is drawn to the screen, i.e. something like this:
1) My doublebuffer will always be 640x480 pixels, no matter what the real output window size is.
2) Once I call glfwSwapBuffers() the 640x480 doublebuffer should be scaled to the actual window size which can be smaller or larger than 640x480.
Is this possible somehow? I think this would be the easiest solution for my game because manually scaling all vertices is likely to give me some problems when it comes to pixel-perfect positioning, isn't it?
Thanks!
I set up my OpenGL doublebuffer like this:
I think you don't know what "doublebuffer" means. It means that you perform drawing on an invisible buffer which is then revealed to the user once the drawing is finished, so that the user doesn't see the drawing process.
The code snippet you have there is the projection setup. And hardcoding the dimensions in pixel units there is just wrong.
but pixel-perfect positioning is of course very important for 2D games!
No, not really. Instead of "pixel" units (which don't really exist in OpenGL except for texture image indexing and the viewport) you should use something like world units. For example, in a simple jump-and-run platformer like Super Mario World you could say that each block is one unit high. The Yoshi sprite would be 2 units high, Mario 1.5, and so on.
The important thing is that you keep your sprite rendering dimensions independent of the screen resolution. This is especially important given all the varying screen resolutions and aspect ratios out there. Don't force the user into resolutions you think are appropriate; people have computers with big screens and they want to use them.
Also the appearance of your sprites depends largely on the texture images and filtering method you use. If you want to achieve a pixelated look, just make the texture images low resolution and use a GL_NEAREST magnification filter, OpenGL will do the rest (however you should provide minification mipmaps and use GL_LINEAR_MIPMAP_LINEAR for minification, so that things don't look awful on small resolutions).
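For example, a minimal sketch of that filter setup (spriteTex is a placeholder texture name; glGenerateMipmap assumes GL 3.0+ or ARB_framebuffer_object):
glBindTexture(GL_TEXTURE_2D, spriteTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);              // crisp pixels when scaled up
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // smooth when scaled down
glGenerateMipmap(GL_TEXTURE_2D);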
Thus, I'd like to ask if there's a possibility to work with a static 640x480 doublebuffer and have it scaled only just before it is drawn to the screen, i.e. something like this:
Yes, you can use a framebuffer object for this. Create a set of textures (color and depth-stencil) at the rendering dimensions (like 640×480), render to them, and when finished draw the color texture to a viewport-filling quad on the main framebuffer.
Like before, render at 640x480 but to an offscreen texture. Then render a screen-sized (800x600, 1024x768,...) quad with this texture applied to it.
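A minimal sketch of that setup, assuming an OpenGL 3.0+ context (windowWidth/windowHeight are placeholders; glBlitFramebuffer is the shortest path, a textured quad works just as well):
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// each frame: render the 640x480 scene into the FBO ...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 640, 480);
// ... draw sprites here ...

// ... then stretch the result to the real window size
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 640, 480, 0, 0, windowWidth, windowHeight,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);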
I have a YUV overlay that I want to draw a HUD over. Think of a video with a scrubber bar. I want to know what the fastest method of doing this would be. The platform I am on does not support Hardware Surfaces.
Currently I do things in this order:
Draw YUV overlay directly to screen
Blit scrubber bar directly to screen
Would there be any speed advantage in doing something like:
Draw YUV overlay to temporary SDL_Surface
Blit scrubber bar to temporary SDL_Surface
Blit temporary SDL_Surface to screen
I think the second way would be faster. Looking at program flow, every time you blit to the screen you might get stuck waiting for the direct blit to finish. Blitting to a temporary surface is just copying from one C array to another, so you can push the final blit to screen to the end of your program logic.
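A hypothetical sketch of that composite-then-blit order, assuming the SDL 1.2 API and that the video frame is available as an RGB SDL_Surface (an SDL_Overlay itself can only be displayed with SDL_DisplayYUVOverlay, not blitted to another surface; frame, scrubber, and barRect are placeholders):
SDL_Surface* temp = SDL_CreateRGBSurface(SDL_SWSURFACE, screen->w, screen->h,
                                         screen->format->BitsPerPixel, 0, 0, 0, 0);
SDL_BlitSurface(frame, NULL, temp, NULL);        // video frame
SDL_BlitSurface(scrubber, NULL, temp, &barRect); // HUD on top
SDL_BlitSurface(temp, NULL, screen, NULL);       // single blit to the screen
SDL_Flip(screen);
SDL_FreeSurface(temp); // or keep it around and reuse it every frame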
I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use the DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not.
On every map update I have to draw a portion of the map, the control buttons, and the markers; the buttons and markers are preloaded on a DirectDraw surface as a mipmap. So the only thing I do is blit from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping because my application runs in windowed mode).
Previously I used a premultiplied-alpha surface with a 32-bit ARGB pixel format for the image mipmap. Everything looked good, but drawing the entire "scene" was horribly slow; I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key, and drawing is much faster.
My mipmap surface is generated at program load: images are loaded from files, scaled (with filtering), and drawn onto the mipmap. The problem is that the scaling process blends pixel colors, so pixels on the border of a sprite region get blended with the surrounding fuchsia pixels, resulting in semi-fuchsia colors that are not treated as the color key. When I blit with the color-key option, the sprites have thin fuchsia-like borders, and it looks really bad.
How can I solve this problem? I could use alpha blitting, but it is too slow, even in the ARGB 1555 format.