How do I write an image into an SVG file using cairo? - c++

I have some code that looks like this:
cairo_surface_t * surface = cairo_svg_surface_create("0.svg", 512, 512);
cairo_t * context = cairo_create(surface);
int * data = new int[512*512];
// fill the data...
cairo_surface_t * image_surface =
cairo_image_surface_for_data(data, 512, 512, 512*4);
cairo_set_source_surface(context, image_surface, 0, 0);
cairo_paint(context);
// do some other drawing ...
cairo_surface_flush(surface);
cairo_surface_finish(surface);
cairo_surface_destroy(surface);
cairo_destroy(context);
However, the SVG always appears corrupted: the image is not properly written, and all subsequent drawing commands have no effect. Changing the surface type to PS, i.e.:
cairo_surface_t * surface = cairo_ps_surface_create("0.ps", 512, 512);
produces a perfectly correct PS document.
Any help fixing the SVG would be appreciated.
EDIT: Forgot to provide version info.
Cairo 1.10.2 as given by cairo_version_string().
g++ 4.5.2
Running on Ubuntu 11.04
EDIT(2): OK, I have traced this down to PNG problems with cairo and discovered that cairo_surface_write_to_png does not behave as expected either. Both this function and attempting to embed an image in an SVG cause "out of memory" errors, and I still don't know why.

It looks like you may have forgotten to specify the SVG version:
cairo_svg_surface_restrict_to_version (surface, CAIRO_SVG_VERSION_1_2);
You can do this immediately after creating the surface.
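For example:
cairo_surface_t * surface = cairo_svg_surface_create("0.svg", 512, 512);
cairo_svg_surface_restrict_to_version (surface, CAIRO_SVG_VERSION_1_2);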

Perhaps posting the resulting SVG as plain text would help.

I cannot find cairo_image_surface_for_data in the Cairo documentation. Did you mean cairo_image_surface_create_for_data? Note that it also takes a cairo_format_t argument, which your call is missing. If so, you need to use cairo_format_stride_for_width to calculate the array size, and the bitmap data needs to be in the format Cairo expects. Since both the SVG and PNG outputs are corrupted, this strongly suggests that the problem is with the input data.
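A minimal corrected sketch might look like this (untested; assumes premultiplied ARGB32 pixel data and reuses the 512x512 size from the question):
int stride = cairo_format_stride_for_width(CAIRO_FORMAT_ARGB32, 512);
unsigned char * data = new unsigned char[stride * 512];
// fill the data with premultiplied ARGB32 pixels ...
cairo_surface_t * image_surface = cairo_image_surface_create_for_data(
    data, CAIRO_FORMAT_ARGB32, 512, 512, stride);
cairo_set_source_surface(context, image_surface, 0, 0);
cairo_paint(context);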

Related

DRM + GBM OpenGL rendering flickering R-PI 4

I am using GBM, DRM and EGL to render my scene to an HDMI display on my R-PI 4B. No X server is installed, and the R-PI boots into text mode before launching my application.
First, my problem: I notice a lot of flickering throughout all of my rendering. So far I do not render much; I render a bunch of text elements and a texture, and I can see it flicker against my background without any changes to the texture or the other elements rendered.
I have attached a video here:
https://streamable.com/0pq3d0
It does not seem so bad in the video, but to the naked eye the flickering is horrible (the black flashes).
For now I assume it has something to do with how I do my end_frame rendering:
void graphics_device_drm::end_frame() {
    auto sync = eglCreateSyncKHR(_display, EGL_SYNC_FENCE_KHR, nullptr);
    glFlush();
    eglClientWaitSyncKHR(_display, sync, 0, EGL_FOREVER_KHR);
    eglDestroySyncKHR(_display, sync);
    eglSwapBuffers(_display, _surface);
    auto bo = gbm_surface_lock_front_buffer(_gbm_surface);
    const auto handle = gbm_bo_get_handle(bo).u32;
    const auto pitch = gbm_bo_get_stride(bo);
    uint32_t fb;
    drmModeAddFB(_device, _width, _height, 24, 32, pitch, handle, &fb);
    drmModeSetCrtc(_device, _crtc->crtc_id, fb, 0, 0, &_connector_id, 1, &_mode);
    if (_previous_bo) {
        drmModeRmFB(_device, _previous_fb);
        gbm_surface_release_buffer(_gbm_surface, _previous_bo);
    }
    _previous_bo = bo;
    _previous_fb = fb;
}
It almost seems like it is using only one single buffer for rendering. I don't really understand the DRM and GBM methods, so I assume I am doing something wrong there. Any pointers would be appreciated.
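(For reference, my understanding is that the conventional way to present without tearing is to schedule a page flip with drmModePageFlip instead of calling drmModeSetCrtc every frame, and to wait for the flip event before touching the buffers again. A rough, untested sketch of what I mean; the present() helper is hypothetical, the member variables mirror the code above:)
static void page_flip_handler(int, unsigned int, unsigned int,
                              unsigned int, void * data) {
    *static_cast<bool *>(data) = false; // flip has completed
}

void graphics_device_drm::present(uint32_t fb) {
    bool pending = true;
    drmModePageFlip(_device, _crtc->crtc_id, fb,
                    DRM_MODE_PAGE_FLIP_EVENT, &pending);
    drmEventContext ev = {};
    ev.version = DRM_EVENT_CONTEXT_VERSION;
    ev.page_flip_handler = page_flip_handler;
    while (pending)
        drmHandleEvent(_device, &ev); // blocks until the vblank flip event arrives
}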
As a matter of fact, it apparently was not related to my code but was an R-PI/driver issue. Nevertheless, the following change in /boot/config.txt did the trick:
# dtoverlay=vc4-fkms-v3d
dtoverlay=vc4-kms-v3d-pi4
The commented line (with fkms) is what I had before; the uncommented line is the fix. I assume that for this to work you also need the latest Mesa libraries compiled, which I had already done anyway. No flickering whatsoever now!

Realsense SDK draw 3D scanning boundaries

I am writing a C++ app for 3D scanning. I'd like to modify the image I get from
PXCImage* image = scanner->AcquirePreviewImage();
or directly the QImage I create from
QImage* qimage = new QImage(imageData.planes[0], width, height, QImage::Format_RGB32);
to display the rectangle which represents the scanned area, as in the image below (taken from the C# example).
I am probably just missing some understanding of the SDK, but I'd be grateful if someone could explain it to me.
(I am using PXC3DScan::ScanningMode::VARIABLE, in case that affects the process.)
I received an answer from Intel:
Drawing the scan boundary is not supported when the variable scan mode is selected. We will provide this feature in a future release. Thanks!
(Current SDK version 7.0.23.8048)

How to use SDL_MapRGB with SDL 2.0

I am trying to get a 24-bit color from RGB values. I want to use SDL_MapRGB, but I don't know what the pixel format is. Since it's SDL 2.0, I am using SDL_Window and SDL_Renderer.
SDL_Surface* surface = //however you created your surface
SDL_PixelFormat* myPixelFormat = surface->format;
This is from the page https://wiki.libsdl.org/SDL_PixelFormat, which you'll want to look over for more information.
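Putting it together, a minimal sketch (assuming the window was created with SDL_CreateWindow and you are drawing to its surface rather than through SDL_Renderer):
SDL_Surface* surface = SDL_GetWindowSurface(window);
Uint32 orange = SDL_MapRGB(surface->format, 255, 128, 0); // pack RGB into the surface's format
SDL_FillRect(surface, NULL, orange);  // fill the whole surface with that color
SDL_UpdateWindowSurface(window);      // present the changes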
Take a look at the window (or maybe it's called a "surface"; it's been a while, and it was SDL 1.x back then): it includes a pixel format specification for drawing on that window, which you should use.

Is it possible to draw a Windows bitmap to a cairo surface?

I'm working on a project where Cairo was chosen as the graphics library (running on Xlib) in an OpenSUSE Linux environment. I have very little experience working with graphics libraries or graphic file formats, and I was wondering whether it is possible to draw a Windows bitmap image to a Cairo surface. It appears to be relatively straightforward to draw a PNG in Cairo, but I've been looking everywhere for information on drawing bitmaps and couldn't really find anything. I pieced together the following code:
int height = 256;
int width = 256;
cairo_format_t format = CAIRO_FORMAT_RGB24;
int stride = cairo_format_stride_for_width (format, width);
unsigned char *bitmapData;
bitmapData = (unsigned char *)(malloc (stride * height));
std::ifstream myFile ("exampleBitmapImage.bmp", std::ios::in | std::ios::binary);
myFile.read ((char *)bitmapData, stride * height);
cairo_surface_t *imageSurface = cairo_image_surface_create_for_data (bitmapData, format, width, height, stride);
cairo_set_source_surface (cs, imageSurface, 0, 0);
cairo_paint (cs);
cairo_show_page (cs);
cairo_surface_destroy (imageSurface);
myFile.close();
Strangely, when I run this it displays the image upside-down and backwards at 1/64 of its size, 8 times in a row, and then fills what would be the remainder of the image (the remaining 7/8) with black. I suspect it has something to do with the file format, and that I'm parsing the binary data incorrectly and feeding it to Cairo with improper settings. Can anyone give guidance on how to get this working properly? I apologize for my lack of knowledge and wish to understand this problem better. Any help is greatly appreciated, thanks! :)
Multiply the stride by -1; that should flip your bitmap.
Look up the BMP file format (http://en.wikipedia.org/wiki/BMP_file_format), implement a bitmap header parser, and set the encoding correctly.
Right now you are guessing that the encoding is RGB24, and you have Cairo interpreting the bitmap header as image data.
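As a rough sketch of what that could look like (untested; assumes an uncompressed 32-bit-per-pixel BMP, whose 4-byte pixel-array offset sits at byte 10 of the file header; reuses height, stride and bitmapData from the question; needs <fstream>, <vector>, <iterator>, <cstring> and <cstdint>):
std::ifstream myFile ("exampleBitmapImage.bmp", std::ios::in | std::ios::binary);
std::vector<char> raw ((std::istreambuf_iterator<char>(myFile)),
                       std::istreambuf_iterator<char>());
uint32_t pixelOffset;
std::memcpy (&pixelOffset, &raw[10], sizeof pixelOffset); // offset to the pixel array
const char *pixels = raw.data() + pixelOffset;
// BMP rows are stored bottom-up; copy them top-down into the cairo buffer
for (int y = 0; y < height; ++y)
    std::memcpy (bitmapData + y * stride,
                 pixels + (height - 1 - y) * stride,
                 stride);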

C++ (LINUX) set X11 window background with DevIL

I am trying to set a background image for one of my windows created with Xlib.
I would like the image to be a JPEG or PNG. I downloaded DevIL (which I prefer to use because it supports a lot of formats).
So, my questions, is, how do I do it? I can't find any specific tutorial or help.
I understand how I can use DevIL to load an image into a stream, but how do I put that on the window? I found an answer here: Load image onto a window using xlib, but I don't know how, or which function should receive the image bytes.
As I also understand it, I should have an XImage to hold the image, which I would use with XPutImage. What I don't understand is how to get the image's bytes from DevIL into the XImage.
Does somebody know of a helpful page, or have some clues about how I should do this?
Thanks!
The Xlib function used to create an XImage is XCreateImage, and its usage looks like this (you can read a full description in the link):
XImage *XCreateImage(display, visual, depth, format, offset, data,
                     width, height, bitmap_pad, bytes_per_line)
where the relevant argument for your specific question would be data, a char* that points to where you keep the image data loaded in with DevIL. With this, you should then be able to follow the steps in the other answer you already found.
Edited to add:
You still have to tell DevIL how to format your image data so that XCreateImage can understand it. For example, the following pair of function calls will create an XImage that appears correctly:
ilCopyPixels(
    0, 0, 0,
    image_width, image_height, 1,
    IL_BGRA, IL_UNSIGNED_BYTE,
    image_data
);
// ...
XImage* background = XCreateImage(
    display,
    XDefaultVisual(display, XDefaultScreen(display)),
    XDefaultDepth(display, XDefaultScreen(display)),
    ZPixmap,
    0,
    image_data,
    image_width,
    image_height,
    32,
    0
);
If you instead chose IL_RGBA, the colors would be off!
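To then get the image onto the window, the steps from the linked answer boil down to something like this (window and gc are assumed to already exist):
XPutImage(
    display, window, gc, background,
    0, 0,                       // source x, y
    0, 0,                       // destination x, y
    image_width, image_height
);
XFlush(display);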