OpenGL graphics editor: implementation of an eraser tool

I'm writing a menu-based OpenGL graphics editor. It is pretty basic. Every time I choose a new option in the pop-down menu, the older drawing disappears, which prevents me from using the eraser tool. Could anybody tell me how to solve this problem? Thanks.

Your question is vague. Anyway...
Are you drawing the picture directly to the screen/window? Then of course the drawing will disappear.
Paint the picture into a texture (using framebuffer objects or similar; see the NVIDIA OpenGL SDK for examples), then render that texture to the screen. Every time you need to repaint the window: clear the screen, draw the texture, then draw the menu.
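For illustration, here is a minimal sketch of that approach (assuming an OpenGL 3.0+ context loaded via GLEW and a GLUT-style main loop; all names are placeholders and error checking is omitted):

    #include <GL/glew.h>
    #include <GL/glut.h>

    GLuint fbo = 0, canvasTex = 0;

    void createCanvas(int w, int h) {
        // Texture that stores the accumulated drawing (the "canvas").
        glGenTextures(1, &canvasTex);
        glBindTexture(GL_TEXTURE_2D, canvasTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Framebuffer object so strokes (and eraser strokes) render into the texture.
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, canvasTex, 0);
        glClearColor(1, 1, 1, 1);
        glClear(GL_COLOR_BUFFER_BIT);              // start with a blank canvas
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void drawStrokeToCanvas() {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        // ... draw the new stroke here; the eraser simply draws in the background color ...
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void display() {
        glClear(GL_COLOR_BUFFER_BIT);
        // Draw the canvas texture as a full-screen quad (identity matrices assumed).
        glColor3f(1, 1, 1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, canvasTex);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
        glDisable(GL_TEXTURE_2D);
        // ... draw the menu/overlay on top ...
        glutSwapBuffers();                         // or whatever swap call your toolkit uses
    }

The menu selection then only decides which tool drawStrokeToCanvas() applies; the canvas texture itself survives every repaint.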

Related

Draw OpenGL to an offscreen bitmap

I've inherited a project which renders a 3D scene directly to the window using OpenGL. The code works fine, but we're now drawing an icon onto the 3D view to "Exit 3D view mode". This also works fine, but it results in a lot of flickering as the view is rapidly rotated.
I'd like to be able to draw to an off-screen bitmap (i.e. without an HWND), then draw my icon onto the bitmap, and finally StretchBlt the bitmap to the window using double buffering. We do this in other contexts (such as zooming into an image which does not need OpenGL) and it works great. My problem is that I am an OpenGL novice, and all attempts at starting with the DC of the off-screen bitmap and creating an HWND from this DC fail, usually when selecting a pixel format for the DC.
There are a few questions asking similar things here on Stack Overflow (e.g. this question, which has no accepted answer). Is this possible? If so, is there a relatively straightforward tutorial describing the procedure? If the process is extremely complex and requires detailed OpenGL knowledge, I may just have to leave it and live with the flickering, because it is a rarely used mode in our software.
Just draw the icon with OpenGL itself, as a textured quad.
All this draw-to-a-bitmap, copy-to-DC, StretchBlt business involves several round trips to and from graphics memory (wasting bandwidth), and StretchBlt will likely not be GPU accelerated. All in all, what you want to do is inefficient and may even reduce quality.
I presume you have the icon stored in your executable as a resource. The simplest way to go about it is to create a memory DC (CreateCompatibleDC) with a DIBSECTION (CreateDIBSection), draw the icon into that, and load the DIBSECTION data into an OpenGL texture. Then, to draw the icon, use glViewport to select the destination rectangle in window coordinates and an identity transform to draw a quad covering the whole viewport (position values (-1,-1)→(1,1) with texture coordinates (0,0)→(1,1) give the right outcome).
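As a rough illustration of the drawing step (fixed-function OpenGL; iconTex is assumed to hold the DIBSECTION data already, and the placement in the top-right corner is made up for the example):

    void drawIcon(int winWidth, int winHeight, GLuint iconTex, int iconW, int iconH) {
        // Select the destination rectangle in window coordinates (here: top-right corner).
        glViewport(winWidth - iconW, winHeight - iconH, iconW, iconH);

        // Identity transforms: the quad below then covers exactly this viewport.
        glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, iconTex);
        glEnable(GL_BLEND);                               // only needed if the icon has alpha
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();

        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
        glMatrixMode(GL_PROJECTION); glPopMatrix();

        // Restore the full-window viewport for the 3D scene.
        glViewport(0, 0, winWidth, winHeight);
    }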
Important side fix: in case your program does silly things like setting the viewport and the fixed-function GL_PROJECTION matrix in a window-resize handler, clean up that anti-pattern and move this code to where it belongs: the drawing code.

Rendering over DirectX window with Awesomium (semi-transparent & rounded elements)

I wonder if it's possible to use Awesomium to render the GUI over a DirectX 11 game (I do NOT use .NET; it's a C++/DirectX 11 game)?
It would involve:
Rendering the scene on the window with DirectX 11 (just as I am doing it now).
Rendering the GUI with Awesomium from HTML/CSS over the previously rendered scene.
Note that some GUI elements should be semi-transparent or rounded - so it's not only rendering on some rect, but also blending.
Is it possible? Or maybe I could make it another way (e.g. telling Awesomium to use DirectX for rendering somehow)?
Or maybe I could draw a semi-transparent DirectX texture in Awesomium and then render it over the scene with DirectX? I know that rendering to a texture resource is possible with Awesomium, but does it support transparency and semi-transparency?
If not, are there good alternatives for what I wanted to achieve with Awesomium?
Yes, it can be done.
If you look at the documentation of the Awesomium WebView class, it has a surface() method which returns the view's backing bitmap.
Here is the C++ documentation for the class:
http://awesomium.com/docs/1_7_0/cpp_api/class_awesomium_1_1_web_view.html
You can copy this bitmap to a texture in DirectX and render it as a layer on top of your game to create your UI.
You also have to route and translate input into Awesomium. You can style your UI however you like using HTML, CSS and JavaScript; you can make it rounded this way and introduce transparency.
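The input routing boils down to forwarding your window's mouse (and keyboard) messages to the view, roughly like this (a sketch assuming Awesomium 1.7 and a Win32 message loop; coordinates must already be relative to the view):

    #include <Windows.h>
    #include <Awesomium/WebView.h>

    // Forward mouse messages from the game window to the Awesomium view.
    void ForwardMouseInput(Awesomium::WebView* view, UINT msg, int x, int y) {
        switch (msg) {
        case WM_MOUSEMOVE:   view->InjectMouseMove(x, y);                         break;
        case WM_LBUTTONDOWN: view->InjectMouseDown(Awesomium::kMouseButton_Left); break;
        case WM_LBUTTONUP:   view->InjectMouseUp(Awesomium::kMouseButton_Left);   break;
        }
    }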
I won't repeat a perfectly good tutorial on doing this. You can find one here.
http://www.gamedev.net/blog/32/entry-2260646-sweet-snippets-rendering-web-pages-to-texture-using-awesomium-and-direct3d/
How you render your texture after it is written has nothing to do with Awesomium. Choose your blend modes and/or use shaders on the output texture for the desired effect.
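For completeness, the per-frame copy from the Awesomium surface into a GPU texture could look roughly like this (a sketch assuming Awesomium 1.7 and a uiTexture created elsewhere as DXGI_FORMAT_B8G8R8A8_UNORM with D3D11_USAGE_DYNAMIC and CPU write access; error handling is trimmed):

    #include <Awesomium/WebView.h>
    #include <Awesomium/BitmapSurface.h>
    #include <d3d11.h>

    void UpdateUITexture(Awesomium::WebView* view, ID3D11DeviceContext* ctx, ID3D11Texture2D* uiTexture) {
        Awesomium::BitmapSurface* surface =
            static_cast<Awesomium::BitmapSurface*>(view->surface());
        if (!surface || !surface->is_dirty())
            return;                                         // nothing new to upload this frame

        D3D11_MAPPED_SUBRESOURCE mapped;
        if (FAILED(ctx->Map(uiTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            return;

        // The surface holds 32-bit BGRA pixels, so transparency (and therefore rounded,
        // semi-transparent elements) comes through in the alpha channel.
        surface->CopyTo(static_cast<unsigned char*>(mapped.pData),
                        static_cast<int>(mapped.RowPitch),  // destination row span
                        4,                                  // bytes per pixel
                        false,                              // keep BGRA, don't convert to RGBA
                        false);                             // don't flip vertically
        ctx->Unmap(uiTexture, 0);
        surface->set_is_dirty(false);
    }

Render uiTexture as a full-screen quad in a final pass with an alpha blend state enabled (e.g. SRC_ALPHA / INV_SRC_ALPHA) so the HTML UI composites over the scene.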

How to take reliable QGLWidget snapshot

In my application I take snapshots of a QGLWidget's contents for two purposes:
Not redrawing the scene all the time when only an overlay changes, using a cached pixmap instead
Letting the user take screenshots of particular plots (the 3D scene)
The first thing I tried is grabFrameBuffer(). It is the natural function to use for the first purpose, as what is currently visible in the widget is exactly what I want to cache.
PROBLEM: On some hardware (e.g. Intel integrated graphics, Mac OS X with GeForce graphics), the image obtained does not contain the current screen content but the content before that. So if the scene is drawn twice, you see the second drawing on the screen but the first drawing in the image (which should be the content of the back buffer?).
The second thing I tried is renderPixmap(). This renders using paintGL(), but not using paintEvent(). I have all my stuff in paintEvent(), as I use Qt's painting functionality and only a small piece of the code uses native GL (beginNativePainting(), endNativePainting()).
I also tried the regular QWidget snapshot capability (QPixmap::fromWidget(), or whatever it is called), but there the GL framebuffer comes out black.
Any ideas on how to resolve the issue and get a reliable depiction of the currently drawn scene?
Render the current scene to a framebuffer object and save the data from the framebuffer to a file, or grab the current back buffer after glFlush(). Anything else may include artifacts or an incomplete scene.
It seems that QGLWidget::grabFrameBuffer() internally calls glReadPixels(). On double-buffered configurations the initial read buffer is the back buffer (GL_BACK); switch to the front buffer with glReadBuffer(GL_FRONT) before calling QGLWidget::grabFrameBuffer() so that you grab the image that is actually displayed on the screen.
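In Qt code that looks roughly like this (a sketch; makeCurrent() first makes the widget's GL context active):

    glWidget->makeCurrent();
    glReadBuffer(GL_FRONT);                       // read the buffer that is actually on screen
    QImage img = glWidget->grabFrameBuffer();     // internally ends up in glReadPixels()
    glReadBuffer(GL_BACK);                        // restore the default for double buffering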
The result of QGLWidget::grabFrameBuffer(), like that of every other OpenGL call, depends on the video driver. Qt merely forwards your call to the driver and grabs the image it returns, but the content of the image is not under Qt's control. Other than making sure you have installed the latest driver for your video card, there is not much you can do except report a bug to your video card manufacturer and pray.
I call paintGL() and then glFlush() before using grabFrameBuffer(). Calling paintGL() draws the current frame again right before grabbing the framebuffer, which gives an exact copy of what is currently showing.

New to OpenGL, working on "paint" program

I'm taking a computer graphics course this semester at college and our first assignment is to build a program that works much like Microsoft paint. We need to set options for drawing with shapes of different colors, sizes, and transparency parameters.
I'm having trouble finding information on how to program the ability to draw with a given shape on mouse drag. I'm not asking for the solution in code, but guidance on where to study functions that might accomplish this.
I'm completely new to OpenGL (but not to C++) and I own "Computer Graphics with OpenGL", 4th ed., by Hearn & Baker. None of its topics suggest this capability.
What's probably being asked of you is to create a single-buffered window (or switch to drawing on the front buffer) and to draw some shape at the mouse pointer's location when a button is pressed (and dragged), without clearing the front buffer in between. For added robustness, draw to a texture attached to a framebuffer object, so that dragging some other window over yours will not corrupt the user's drawing.
Keywords: set the viewport to the window size, use an ortho projection matching the window bounds, and do not call glClear (except to reset the picture).
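A minimal sketch of that idea (assuming GLUT and a fixed 640x480 window; the "brush" is just a small quad drawn at the drag position):

    #include <GL/glut.h>

    const int WIDTH = 640, HEIGHT = 480;

    void motion(int x, int y) {                          // called while a button is held and the mouse moves
        glColor3f(1.0f, 0.0f, 0.0f);                     // current brush color
        glRectf((float)x - 4, (float)(HEIGHT - y) - 4,   // GLUT's y axis points down, GL's points up
                (float)x + 4, (float)(HEIGHT - y) + 4);
        glFlush();                                       // single buffer: make the stroke visible immediately
    }

    void display() {
        static bool first = true;
        if (first) {                                     // reset the picture only once; clearing on every
            glClear(GL_COLOR_BUFFER_BIT);                // expose event would erase the drawing
            first = false;
        }
        glFlush();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);     // single-buffered, as described above
        glutInitWindowSize(WIDTH, HEIGHT);
        glutCreateWindow("mini paint");

        glViewport(0, 0, WIDTH, HEIGHT);                 // viewport = window size
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(0, WIDTH, 0, HEIGHT);                 // ortho projection in window coordinates
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glClearColor(1, 1, 1, 1);

        glutDisplayFunc(display);
        glutMotionFunc(motion);
        glutMainLoop();
        return 0;
    }

As noted above, drawing into an FBO-attached texture instead of the front buffer makes the picture survive window exposes and is the more robust route.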

Size of OpenGL context in an SFML window

I'm currently working on a voxel editor and everything is going fine.
I have my SFML window and my model to work with. I was just wondering if it is possible with SFML to set the 3D context to a certain specific size.
I'm asking this because my model is currently shown on the screen with no problem at all, except that now I want to create an options/settings screen with SFML, and my buttons would end up on top of my 3D model. I would like 75% of the left side of my window to be my 3D context, and the 25% on the right to be blank space to fill in with my buttons.
To do what you want to do, I believe what you're looking for is this: http://www.sfml-dev.org/documentation/2.0/classsf_1_1View.php#details
I think the context is attached to the window as a whole. Also be aware that SFML itself is for 2D graphics; once you want 3D rendering, you're going to need to use OpenGL directly. SFML is a wrapper around OpenGL, so there's no problem with using SFML to set up and manage things and OpenGL directly for the 3D rendering.
http://www.sfml-dev.org/tutorials/2.0/window-opengl.php
Try:
glViewport(x, y, width, height);
Source: https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glViewport.xml
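For example (a sketch against SFML 2.x; the actual voxel rendering and button logic are left out), you can restrict the GL viewport to the left 75% of the window and let SFML draw the 2D panel on the right 25%:

    #include <SFML/Graphics.hpp>
    #include <SFML/OpenGL.hpp>

    int main() {
        sf::RenderWindow window(sf::VideoMode(800, 600), "voxel editor",
                                sf::Style::Default, sf::ContextSettings(24)); // 24-bit depth buffer

        while (window.isOpen()) {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed)
                    window.close();

            window.clear();

            // 3D part: left 75% of the window, rendered with raw OpenGL.
            window.setActive(true);
            glViewport(0, 0, window.getSize().x * 3 / 4, window.getSize().y);
            // ... render the voxel model here ...

            // 2D part: SFML-drawn panel/buttons on the right 25%.
            window.pushGLStates();
            sf::RectangleShape panel(sf::Vector2f(window.getSize().x * 0.25f, (float)window.getSize().y));
            panel.setPosition(window.getSize().x * 0.75f, 0.f);
            panel.setFillColor(sf::Color(40, 40, 40));
            window.draw(panel);
            // ... draw buttons here ...
            window.popGLStates();

            window.display();
        }
        return 0;
    }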