Direct2D offscreen rendering buggy? - C++

I'm trying to render a bitmap using an offscreen BitmapRenderTarget and then draw that on the screen. It works just fine, but when I try to render separate bitmaps onto it, the render doesn't seem to work and it only clips the original picture.
Here's what it looks like: http://img827.imageshack.us/img827/7991/clipped.png
I'm using a compatible render target created from the HwndRenderTarget. The funny thing is, when I render them using the on-screen HwndRenderTarget, they come out just fine.
Like this: http://img141.imageshack.us/img141/4825/workingj.png
I'm using CopyFromRenderTarget to get the bitmap out of the render target, as GetBitmap doesn't work for me with the BitmapRenderTarget for some reason. This is in Visual Studio 2010, C++.
Anyone know what's going on here?
---- EDIT ----
An interesting thing to note: I tried putting Clear after getting the bitmap and then calling EndDraw, but then it only gets the first bitmap, and the other bitmaps don't get drawn at all.

I was experimenting a bit and noticed that I don't need to call EndDraw on the BitmapRenderTarget at all for it to produce the bitmaps I need. I can call EndDraw on it when I'm done using the offscreen render target and it works just fine.
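For reference, here's a stripped-down sketch of the flow I'm using (variable names are made up, error checking and Release() calls are omitted):

ID2D1BitmapRenderTarget *bitmapRT = NULL;
hwndRT->CreateCompatibleRenderTarget(&bitmapRT);

bitmapRT->BeginDraw();
bitmapRT->Clear(D2D1::ColorF(D2D1::ColorF::White));
// ... DrawBitmap() calls for the separate bitmaps go here ...
bitmapRT->EndDraw();

// Get the result back out of the offscreen target.
ID2D1Bitmap *offscreen = NULL;
bitmapRT->GetBitmap(&offscreen);
// (Since GetBitmap misbehaves for me, I instead create a bitmap on the
// HWND target and call CopyFromRenderTarget(NULL, bitmapRT, NULL) on it.)

hwndRT->BeginDraw();
hwndRT->DrawBitmap(offscreen);
hwndRT->EndDraw();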

Related

X11 - Change String contents after draw?

I just started creating an X11 application.
I am rendering text to the display using XDrawString(...).
Now, given that I'd like to add something like a clock, a counter or something else that changes constantly, how would I "overwrite" the already rendered text?
The way it currently works is that it just renders again and leaves the old contents behind.
From Java I know "BufferedImages", where I would render everything before transferring it to the actual screen. With this, the old contents on the display would be overwritten.
Is there a similar mechanism in X11 or do I have to paint the whole screen white and then render everything again on top of it?
I am using C++ along the X11 libs with the gcc compiler.
Thanks!
I think the only option is to use the X11 double-buffer library for C++. I have since settled on SDL instead.
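For completeness, the BufferedImage-style approach can also be sketched in plain Xlib by drawing into an off-screen Pixmap and copying it to the window each frame (dpy, win, gc, width, height and text are placeholders here):

// Create the back buffer once, matching the window's depth.
Pixmap back = XCreatePixmap(dpy, win, width, height,
                            DefaultDepth(dpy, DefaultScreen(dpy)));

// Each frame: wipe the buffer, redraw the current text, then blit it.
XSetForeground(dpy, gc, WhitePixel(dpy, DefaultScreen(dpy)));
XFillRectangle(dpy, back, gc, 0, 0, width, height);
XSetForeground(dpy, gc, BlackPixel(dpy, DefaultScreen(dpy)));
XDrawString(dpy, back, gc, 10, 20, text, strlen(text));
XCopyArea(dpy, back, win, gc, 0, 0, width, height, 0, 0);
XFlush(dpy);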

SDL function to clear only part of the screen?

I am using C++ and SDL 2. Is there any function in SDL or any available algorithm to clear only a part of the screen?
I tried using SDL_RenderSetViewport() in the following way, but it didn't work:
SDL_RenderSetViewport(renderer, &rect);
SDL_RenderClear(renderer);
I thought that whatever was present in the given rectangle would be cleared, but it wasn't.
Use SDL_SetRenderDrawColor() with the clear color, then SDL_RenderFillRect() with the region you want to 'clear'.
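Something along these lines (the rectangle and color are placeholders):

SDL_Rect rect = { 100, 100, 200, 150 };          // region to "clear"
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);  // your clear color
SDL_RenderFillRect(renderer, &rect);
SDL_RenderPresent(renderer);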

How to screen-capture a window that uses OpenGL?

I'm trying to capture the pixels of an OpenGL application (specifically, MEmu) and later convert it to an OpenCV Mat. I used the hwnd2mat() function from this post as a reference but ran into some problems.
Only the window frame is captured, as seen here:
Further investigation led me to believe that the problem is that StretchBlt (or BitBlt) can't capture the OpenGL pixels.
I tried to use glReadPixels (and convert to Mat using this), but it doesn't read any pixels: wglCreateContext returns NULL, probably because my application does not own MEmu's DC, so wglMakeCurrent does nothing and the pixels are never read.
I was able to create a modified, workaround version of hwnd2mat that gets the WindowRect of MEmu's hwnd but then uses GetDC(NULL) to capture only the portion of the screen where MEmu is located. This works, but any windows that end up on top of MEmu get captured as well.
I can work with this, sure, but I was wondering if there is a way to use glReadPixels on a window that I don't own, or a way to make hwnd2mat work on the contents of a window that uses OpenGL.
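For reference, the workaround I mentioned boils down to something like this (error handling omitted, hwnd is MEmu's window handle):

// Capture the screen region occupied by the window via the desktop DC.
// Anything overlapping the window gets captured too -- that's the limitation.
RECT rc;
GetWindowRect(hwnd, &rc);
int width  = rc.right  - rc.left;
int height = rc.bottom - rc.top;

HDC screenDC = GetDC(NULL);                 // desktop DC, not MEmu's own DC
HDC memDC    = CreateCompatibleDC(screenDC);
HBITMAP hbm  = CreateCompatibleBitmap(screenDC, width, height);
HGDIOBJ old  = SelectObject(memDC, hbm);

BitBlt(memDC, 0, 0, width, height, screenDC, rc.left, rc.top, SRCCOPY);

// Copy the pixels into a 4-channel BGRA Mat.
cv::Mat img(height, width, CV_8UC4);
BITMAPINFOHEADER bi = {};
bi.biSize        = sizeof(BITMAPINFOHEADER);
bi.biWidth       = width;
bi.biHeight      = -height;                 // negative height = top-down rows
bi.biPlanes      = 1;
bi.biBitCount    = 32;
bi.biCompression = BI_RGB;
GetDIBits(memDC, hbm, 0, height, img.data,
          reinterpret_cast<BITMAPINFO*>(&bi), DIB_RGB_COLORS);

SelectObject(memDC, old);
DeleteObject(hbm);
DeleteDC(memDC);
ReleaseDC(NULL, screenDC);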

Save Gtk.DrawingArea to Bitmap

I need to save the image of my DrawingArea object to a Bitmap, but I can't find out how to do it. Can anybody tell me how to save a DrawingArea image to a Bitmap?
There are a few ways to do this; it depends on exactly what you want to do and whether/why you really need a System.Drawing.Bitmap.
Copy the Widget
You can P/Invoke gtk_widget_get_snapshot to get a Gdk.Pixbuf.
The Pixbuf can be saved into a file, or copied into a System.Drawing.Bitmap.
Using System.Drawing
You could port your drawing code to the System.Drawing API.
In your DrawingArea's Expose method, use Gtk.DotNet.Graphics.FromDrawable to get a System.Drawing.Graphics for your widget and draw onto that using your ported drawing code.
Then, you can create a System.Drawing.Bitmap and use the same drawing code to draw to it.
Using Cairo
You could port your drawing code to Mono.Cairo (the new GTK# drawing APIs, which are much more powerful than System.Drawing).
In your DrawingArea's Expose method, use Gdk.CairoHelper.Create to get a Cairo context for your widget and draw onto that using your ported drawing code.
Then, you can use your Cairo drawing logic to write to a Cairo ImageSurface, which can be saved into a file, or copied into a System.Drawing.Bitmap.
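As a rough sketch of the ImageSurface idea, here it is against the underlying C Cairo API (the Mono.Cairo calls mirror these closely; the size and drawing commands are placeholders):

// Off-screen surface that your shared drawing code renders into.
cairo_surface_t *surface = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 400, 300);
cairo_t *cr = cairo_create(surface);

// ... run the same drawing code you use in the Expose handler ...
cairo_set_source_rgb(cr, 1.0, 0.0, 0.0);
cairo_rectangle(cr, 20, 20, 100, 60);
cairo_fill(cr);

// Save to a file (or read the surface data back to build a Bitmap).
cairo_surface_write_to_png(surface, "drawing.png");
cairo_destroy(cr);
cairo_surface_destroy(surface);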

How is this 3D rendering on the desktop done?

I read a topic on OpenGL.org where a guy made this:
http://coreytabaka.com/programming/cube-demo/
He said he would release the source code, but he never did. Does anyone know how I could achieve the same effect?
It has to do with clearing the window with alpha but drawing on it as well; I just don't get how to set OpenGL up like that. From there I can do my own stuff, but I'd like a base for this running in C++ with Visual Studio.
Does anybody have something like this lying around, or can you show pieces of the code needed to get this kind of rendering done?
Render the 3D scene to a pbuffer.
Use a color key to blend the pbuffer to the screen.
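One way to get the keyed-transparency part on Windows (skipping the pbuffer and rendering straight into a layered window) looks roughly like this; the key color here is arbitrary:

// Make the GL window layered and declare magenta the transparent key color.
SetWindowLongPtr(hwnd, GWL_EXSTYLE,
                 GetWindowLongPtr(hwnd, GWL_EXSTYLE) | WS_EX_LAYERED);
SetLayeredWindowAttributes(hwnd, RGB(255, 0, 255), 0, LWA_COLORKEY);

// In the render loop: clear to the key color, then draw the scene on top.
// Anything left at the key color shows the desktop through it.
glClearColor(1.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the scene ...
SwapBuffers(hdc);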