How can I create a pre-render input window in MFC? - c++

I have modified Nehe's terrain tutorial so that it generates a terrain using Perlin noise instead of loading the static .raw file that comes with the tutorial. I want to specify the parameters for the Perlin noise (frequency, amplitude, number of octaves) before rendering the terrain. In fact, I just want to create a window that takes those 3 parameters and then dies; I don't need anything else. I do my interfaces in GLUT, and I just want this particular app to run this way.
How can I do that? What should be modified in the Nehe project? As I understand it, MFC doesn't have a built-in input box?

I am not familiar with the tutorial, but if you need the input box only to process the parameters, not to be seen, could you just create the window as invisible?
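A common MFC pattern for a one-shot parameter prompt is a modal dialog shown before the NeHe render window is created; once DoModal returns, you read the values and the dialog is gone. The following is only a rough sketch, not drop-in code: it assumes you create a dialog template IDD_PARAMS with edit controls IDC_FREQUENCY, IDC_AMPLITUDE and IDC_OCTAVES in the resource editor, and that the project is built with MFC enabled:

#include <afxwin.h>
#include "resource.h"   // assumed to define IDD_PARAMS and the IDC_* ids

class CParamsDlg : public CDialog
{
public:
    CParamsDlg() : CDialog(IDD_PARAMS),
                   m_frequency(1.0), m_amplitude(1.0), m_octaves(4) {}

    double m_frequency;
    double m_amplitude;
    int    m_octaves;

protected:
    virtual void DoDataExchange(CDataExchange *pDX)
    {
        CDialog::DoDataExchange(pDX);
        DDX_Text(pDX, IDC_FREQUENCY, m_frequency);
        DDX_Text(pDX, IDC_AMPLITUDE, m_amplitude);
        DDX_Text(pDX, IDC_OCTAVES,   m_octaves);
    }
};

// Call this once, before generating the terrain / creating the GL window.
bool AskNoiseParameters(double &freq, double &amp, int &octaves)
{
    CParamsDlg dlg;
    if (dlg.DoModal() != IDOK)   // the dialog is gone as soon as this returns
        return false;
    freq    = dlg.m_frequency;
    amp     = dlg.m_amplitude;
    octaves = dlg.m_octaves;
    return true;
}

If AskNoiseParameters returns false (the user cancelled), you could fall back to default noise settings.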

Related

How to create a cursor in X11 from raw data c++

I have been searching around about this problem for a while. I am making a cross-platform program and I have figured out how to load an animated cursor with the Windows API and how to create a cursor at run time from raw bitmap data. However, I can't find good documentation for this for X11, for the Unix/Linux build of my program. I know I need to use the XRender extension functions XRenderCreateCursor and XRenderCreateAnimCursor from this documentation https://www.x.org/releases/X11R7.6/doc/libXrender/libXrender.txt but I do not know how to use these functions and the documentation does not show any examples.
Also the raw image data is in the ARGB format, and I want to support the Alpha channel if possible with these cursors.
Could someone show me how to use the X11 and XRender (or XCursor) libraries to create a cursor, static and animated, and possibly how to do it so the cursor can be used with any X11 window?
Thanks!
PS.
I am actually editing an open-source library for cross-platform GUIs that I am using for my program, and I am trying to add this feature to the library, but I am not used to programming with X11.
When it comes to X, nothing is simple.
First, review the specification of the X render extension.
The steps for creating an animated cursor are as follows.
First, you need to create a PICTURE for each frame of the animated cursor, using CreatePicture.
Use CreateCursor to create a CURSOR from each PICTURE. CreateCursor returns a CURSOR handle.
Then, you take the list of all CURSORs for all of the frames, and then use CreateAnimCursor to create a single CURSOR representing the animated cursor.
This all comes down to creating a PICTURE for each frame. A PICTURE is created using CreatePicture from a DRAWABLE and a PICTFORMAT. The DRAWABLE would be the PIXMAP holding the actual image data for the cursor's frame, and the PICTFORMAT specifies which channels in the pixmap represent the red, green, blue, and alpha components; it must be one of the enumerated PICTFORMATs returned by QueryPictFormats.
For more information, see the aforementioned X render extension specification.
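In libXrender terms (the client-side functions are XRenderCreatePicture, XRenderCreateCursor and XRenderCreateAnimCursor), a minimal sketch of the above might look like the following. It assumes a 32-bit TrueColor visual is available (usually true on a composited desktop), glosses over byte-order issues, and does no error checking:

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/Xrender.h>
#include <cstdlib>

// Build one CURSOR frame from raw 32-bit ARGB data via a PICTURE.
static Cursor cursorFromARGB(Display *dpy, const char *argb,
                             unsigned w, unsigned h,
                             unsigned xhot, unsigned yhot)
{
    Window root = DefaultRootWindow(dpy);

    // 32-bit-deep pixmap that will hold the frame's pixels.
    Pixmap pix = XCreatePixmap(dpy, root, w, h, 32);
    GC gc = XCreateGC(dpy, pix, 0, nullptr);

    // Wrap the raw buffer in an XImage and upload it into the pixmap.
    XVisualInfo vinfo;
    XMatchVisualInfo(dpy, DefaultScreen(dpy), 32, TrueColor, &vinfo);
    XImage *img = XCreateImage(dpy, vinfo.visual, 32, ZPixmap, 0,
                               const_cast<char *>(argb), w, h, 32, 0);
    XPutImage(dpy, pix, gc, img, 0, 0, 0, 0, w, h);
    img->data = nullptr;      // the caller keeps ownership of the pixel buffer
    XDestroyImage(img);
    XFreeGC(dpy, gc);

    // PICTURE in the standard ARGB32 PICTFORMAT, then the cursor itself.
    XRenderPictFormat *fmt = XRenderFindStandardFormat(dpy, PictStandardARGB32);
    Picture pic = XRenderCreatePicture(dpy, pix, fmt, 0, nullptr);
    Cursor cur = XRenderCreateCursor(dpy, pic, xhot, yhot);

    XRenderFreePicture(dpy, pic);
    XFreePixmap(dpy, pix);
    return cur;
}

// Combine per-frame cursors into one animated cursor (delay is per frame, in ms).
static Cursor animatedCursor(Display *dpy, Cursor *frames, int nframes,
                             unsigned long delayMs)
{
    XAnimCursor *anim = (XAnimCursor *)malloc(nframes * sizeof(XAnimCursor));
    for (int i = 0; i < nframes; ++i) {
        anim[i].cursor = frames[i];
        anim[i].delay  = delayMs;
    }
    Cursor cur = XRenderCreateAnimCursor(dpy, nframes, anim);
    free(anim);
    return cur;
}

The resulting CURSOR can be attached to any window with XDefineCursor(dpy, window, cursor). If you would rather avoid the protocol-level work, the Xcursor library (XcursorImageLoadCursor / XcursorImagesLoadCursor) accepts ARGB pixel data and per-frame delays directly.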

Save Gtk.DrawingArea to Bitmap

I need to save an image of my DrawingArea object to a Bitmap, but I can't find out how to do it. Can anybody tell me how to save the DrawingArea image to a Bitmap?
There are a few ways to do this; it depends on exactly what you want to do and whether/why you really need a System.Drawing.Bitmap.
Copy the Widget
You can P/Invoke gtk_widget_get_snapshot to get a Gdk.Pixbuf.
The Pixbuf can be saved to a file, or copied into a System.Drawing.Bitmap.
Using System.Drawing
You could port your drawing code to the System.Drawing API.
In your DrawingArea's Expose method, use Gtk.DotNet.Graphics.FromDrawable to get a System.Drawing.Graphics for your widget and draw onto that using your ported drawing code.
Then, you can create a System.Drawing.Bitmap and use the same drawing code to draw to it.
Using Cairo
You could port your drawing code to Mono.Cairo (the new GTK# drawing APIs, which are much more powerful than System.Drawing).
In your DrawingArea's Expose method, use Gdk.CairoHelper.Create to get a Cairo context for your widget and draw onto that using your ported drawing code.
Then, you can use your Cairo drawing logic to write to a Cairo ImageSurface, which can be saved to a file or copied into a System.Drawing.Bitmap.
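At the level of the underlying Cairo API (which Mono.Cairo mirrors fairly closely), the idea is simply to point the same drawing code at an offscreen ImageSurface instead of the widget. A rough sketch, assuming a drawScene routine containing the drawing code from your Expose handler:

#include <cairo.h>

void drawScene(cairo_t *cr);   // assumed: the same code your Expose handler uses

void saveDrawingToPng(int width, int height)
{
    // Offscreen ARGB surface instead of the widget's window surface.
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
    cairo_t *cr = cairo_create(surface);

    drawScene(cr);                                    // reuse the drawing code
    cairo_surface_write_to_png(surface, "drawing.png");

    cairo_destroy(cr);
    cairo_surface_destroy(surface);
}

From Gtk#, Cairo.ImageSurface and its WriteToPng method do the same thing; the PNG (or the surface's raw pixel data) can then be loaded into a System.Drawing.Bitmap if you still need one.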

How to get X to render to an OpenGL texture?

I am trying to write a compositor, like Compiz, but with different graphical effects. I am stuck at the first step, though, which is that I can't find how to get X to render windows to a texture instead of to the framebuffer. Any advice on where to start?
X11 composition goes roughly like the following.
You redirect windows into an offscreen area; the Composite extension has the functions for this.
You use the Damage extension to find out which windows have changed their contents.
In the compositor you use the GLX_EXT_texture_from_pixmap extension to bind each window's contents to a corresponding OpenGL texture (a sketch follows below this list).
You draw the textures into a composition layer window; the Composite extension provides you with a special screen layer, between the regular window layer and the screensaver layer, in which the window used for composition is created.
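A condensed sketch of the redirect / damage / texture steps, assuming an open Display, a GLX context that is already current, and an FBConfig chosen with GLX_BIND_TO_TEXTURE_RGBA_EXT; extension and error checks are left out:

#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xdamage.h>
#include <GL/glx.h>
#include <GL/glxext.h>

void bindWindowToTexture(Display *dpy, Window win, GLXFBConfig fbconfig)
{
    // 1. Redirect the window's rendering into an offscreen pixmap.
    XCompositeRedirectWindow(dpy, win, CompositeRedirectAutomatic);

    // 2. Ask for Damage events so we know when its contents change.
    XDamageCreate(dpy, win, XDamageReportNonEmpty);

    // 3. Wrap the offscreen pixmap in a GLX pixmap that can back a texture
    //    (GLX_EXT_texture_from_pixmap).
    Pixmap pix = XCompositeNameWindowPixmap(dpy, win);
    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glxpix = glXCreatePixmap(dpy, fbconfig, pix, attribs);

    // 4. Attach it to the currently bound texture object.
    PFNGLXBINDTEXIMAGEEXTPROC pglXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress(
            (const GLubyte *)"glXBindTexImageEXT");
    pglXBindTexImageEXT(dpy, glxpix, GLX_FRONT_LEFT_EXT, NULL);

    // The compositor then draws this texture into the window it created on
    // the overlay obtained from XCompositeGetOverlayWindow().
}

On each Damage notification you would rebind the pixmap (glXReleaseTexImageEXT, then glXBindTexImageEXT again) before redrawing the composition.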

creating starting page for opengl game

I am working on a graphics game project in OpenGL and I want to make a front page for the game containing an image, a few buttons, and some text. The buttons perform different actions when clicked, e.g. a start button for starting the game. Can anyone please suggest how I can do this?
How can I do it?
Well, by implementing it. OpenGL is not a game engine, nor a scene graph, nor a UI toolkit. It's merely a drawing API providing you the means to draw nice pictures, and that's it. Anything beyond that is the task of either a 3rd party library/toolkit, or your own code, or a combination of both.
A usual approach to modelling this behaviour is to introduce application states. Here is a related question.
You could model your StartScreenState by drawing a plane with buttons using an orthographic projection and not drawing (or not yet having initialized) the rest. When the player clicks on 'start', you switch to a perspective projection and display the game contents.
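A minimal sketch of that state idea (all names here are illustrative, not part of any particular framework):

enum class AppState { StartScreen, Playing };

struct Game
{
    AppState state = AppState::StartScreen;

    void render()
    {
        if (state == AppState::StartScreen) {
            // Orthographic projection: background image plus textured
            // quads acting as buttons.
            drawStartScreen();
        } else {
            // Perspective projection: the actual game scene.
            drawScene();
        }
    }

    void onMouseClick(int x, int y)
    {
        if (state == AppState::StartScreen && hitsStartButton(x, y))
            state = AppState::Playing;   // switch projections next frame
    }

    // Stubs standing in for real drawing / hit-testing code.
    void drawStartScreen() {}
    void drawScene() {}
    bool hitsStartButton(int, int) { return true; }
};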
I don't know that I would even use OpenGL for that. OpenGL is for rendering colored/textured triangles/quads so that you can do tons of stuff graphically. There's no such thing as "load an image to coordinate x,y on the screen". The equivalent would be "draw two triangles with these vertices that make up a rectangle and are textured with this image". Which is why I would probably stay away from OpenGL to do this, because you don't really need to use any of the awesome features that OpenGL has.
A very common UI framework that I believe works well alongside OpenGL, if you really want to use the two together, is Qt. It should make your life easier in terms of UI stuff. See wiki and dev page.

Size of OPENGL context in SFML WINDOW

I'm currently working on a voxel editor and everything is going fine.
I have my SFML window and my model to work with. I was just wondering if it is possible with SFML to set the 3D context to a certain specific size.
I'm asking this because my model is currently shown on screen with no problem at all, except that now I want to create some options/settings with SFML, and my buttons would end up on top of my 3D model. I would like 75% of the left side of my window to be my 3D context and the 25% on the right to be blank, with space to put my buttons.
To do what you want to do, I believe what you're looking for is this: http://www.sfml-dev.org/documentation/2.0/classsf_1_1View.php#details
I think the context is attached to the window in general. Also be aware that SFML itself is for 2D graphics; once you want 3D rendering, you're going to need to use OpenGL directly. SFML is a wrapper over OpenGL, so there's no problem with using SFML to help set up and manage things, and OpenGL directly for rendering needs.
http://www.sfml-dev.org/tutorials/2.0/window-opengl.php
try:
glViewport(x,y,width,height);
source: https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glViewport.xml
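For the 75% / 25% split described above, a sketch might look like this (assuming you re-run it whenever the window is resized):

#include <GL/gl.h>

void layoutViewport(unsigned windowWidth, unsigned windowHeight)
{
    // The 3D context occupies the left 75% of the window; the right 25%
    // stays free for the SFML-drawn UI.
    GLsizei glWidth = (GLsizei)(windowWidth * 3 / 4);
    glViewport(0, 0, glWidth, (GLsizei)windowHeight);

    // Keep the projection's aspect ratio in sync with the new viewport
    // (aspect = glWidth / windowHeight), or the model will look stretched.
}

If you draw the UI with sf::RenderWindow, wrap the 2D calls in pushGLStates()/popGLStates() so they don't clobber your 3D state.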