How to develop a Maya Viewport Extension in C++ (MFC) [closed]

I need to develop a Maya viewport extension in C++ (with MFC), and I also need to control the view style (such as top view, left view, etc.). Can I use the Maya SDK to achieve this, and how do I go about it? Thanks.

I'm not perfectly sure what you mean by "Viewport Extension".
I guess you're trying to write a Maya Plug-In that features your own type of viewport, which is commonly called "Model View".
MFC does not have anything to do with this.
You use Maya MEL/Python commands to create Maya windows, panels and other UI-elements.
(if there is a hack to make Maya work with windows you've created yourself, I don't know it).
You can develop custom viewports in Maya by creating two classes:
A viewport or model view, and a "model editor command".
Model View
One is your viewport class, let's call it "MyViewport".
It has to inherit "MPx3dModelView".
Normally you will associate a camera with the viewport. This lets you control from where you see the scene. You can have multiple cameras connected to your viewport (multi-pass display, for example stereo 3D), or none at all (but then you must set all rendering parameters yourself, which can be tedious).
The (callback) functions you inherit from that class allow you to set up the details for your viewport.
See the Maya documentation on MPx3dModelView for how to use it:
http://download.autodesk.com/us/maya/2010help/API/class_m_px3d_model_view.html
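Roughly, such a class might look like the sketch below. The class name MyViewport and the view type string are placeholders I'm assuming here, not names from the Maya devkit.
// Minimal sketch of a custom model view (assumed names, Maya C++ API)
#include <maya/MPx3dModelView.h>
#include <maya/MString.h>

class MyViewport : public MPx3dModelView
{
public:
    MyViewport() {}
    virtual ~MyViewport() {}
    // name Maya uses to identify this kind of view
    virtual MString viewType() const { return MString("MyModelView"); }
    // creator callback used by the model editor command below
    static void* creator() { return new MyViewport(); }
};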
Model Editor Command
The other class you'll need is a viewport command.
That is the thing that gets called when someone tries to create your viewport.
It has to inherit "MPxModelEditorCommand".
Its most important feature is that it can create an instance of your model view class.
See the Maya documentation on MPxModelEditorCommand for how to use it:
http://download.autodesk.com/us/maya/2011help/API/class_m_px_model_editor_command.html
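A matching command could be sketched like this; again, MyModelViewCmd and the command name string are assumptions of mine, chosen to line up with the registration code further down.
// Minimal sketch of the model editor command that creates the view above (assumed names)
#include <maya/MPxModelEditorCommand.h>
#include <maya/MString.h>

class MyModelViewCmd : public MPxModelEditorCommand
{
public:
    static const MString commandName;
    // creator callback for the command itself
    static void* creator() { return new MyModelViewCmd(); }
    // creator callback for the custom view managed by this command
    static void* createModelView() { return new MyViewport(); }
};
const MString MyModelViewCmd::commandName("MyModelView");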
Registering the viewport command with the Maya plug-in
In order to be able to create your viewport, you must register your Model Editor Command with the plugin.
In your initializePlugin function (the one you export from the plug-in):
#include <maya/MFnPlugin.h>

MStatus initializePlugin( MObject obj )
{
    MFnPlugin plugin( obj, PLUGIN_COMPANY, "1.0", "Any" );
    // register the model editor command so the custom viewport can be created from MEL/Python
    MStatus status = plugin.registerModelEditorCommand( MyModelViewCmd::commandName, MyModelViewCmd::creator, MyModelViewCmd::createModelView );
    return status;
}
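For completeness, the plug-in should also deregister the command when it is unloaded; a minimal sketch, assuming the same names as above:
MStatus uninitializePlugin( MObject obj )
{
    MFnPlugin plugin( obj );
    // remove the model editor command again when the plug-in is unloaded
    plugin.deregisterModelEditorCommand( MyModelViewCmd::commandName );
    return MS::kSuccess;
}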
Writing a script that creates your viewport
Finally, you use MEL or Python scripting in Maya to create your user interface.
In the simplest setup, you simply create a window and then call your model editor command to create your custom viewport in this window.
window MyWindow;
paneLayout MyWindowPane;
MyModelView MyModelView1;
showWindow MyWindow;  // make the window visible

Related

Simple OpenGL GUI Framework User Interaction Advice? [closed]

I'm designing a simple GUI framework from scratch as a project, using OpenGL and nothing else external, and I need some advice on how I might implement user interaction.
Basically, I've a base class GUIItem from which all elements inherit. This gives each item some basic variables such as position, a vector to contain child elements as well as some basic functions for mouse movement and clicking.
All elements are set up as above, with their relevant member variables.
What I'm struggling with is how to implement user interaction properly. In my window manager I would create a new instance of an item, say GUIButton, and call it button1. Upon a click, the window manager would iterate through its list of elements and any child elements they may have, calculate a rectangular area around each object based on its coordinates, height and width, and then run any "on click" function associated with that item, such as changing the value of textlabel1.
Firstly, is there a better way to do this calculation? It would work for rectangular elements, but spherical and other non-rectangular objects would have a much larger erroneous area which could be clicked. Ideally I would check pixels, but I've no real idea how that would be achieved. I've heard about but never used GLUT (my project only allows its use for handling mouse/keyboard interaction). Does GLUT provide anything to assist in this case?
My main issue is with handling what happens when an "on click" event actually occurs. At the moment GUIButton, for example, has an "on click" function built in, so as far as I can see I'd have to make it a virtual function, meaning that each new button I created would need its own class just to override the "on click" function, and each instance of a button would be an instance of a unique class that simply inherits from GUIButton. This seems messy to me, as I've no idea where I would store all those classes, and it seems like a lot of extra code. Would I be creating a button1.cpp and button1.h file?
Any advice would really be welcome: I'm new to C++ and OpenGL, this is the first time I've been exposed to GUI programming, and there's not a lot to go on when an existing GUI framework is the usual choice.
if you want something stupidly simple and fast then you could:
create shadow screen buffer containing ID/index/pointer instead of color
pre-render this buffer
Just render each of your visual components into it, but instead of coloring/texturing, fill in the ID/index/pointer of the rendered component. Do not forget to clear it with some NULL value first... After this you should have a mask of your components. You need to do this just once...
On mouse events
you simply convert the mouse coordinates to shadow screen space and pick the value. If it is NULL, then you clicked (or hovered, etc.) on an empty area. If it contains an ID, update or call the callbacks for that component. If you have a list of all components, then the ID can be the list index; otherwise use the component's actual pointer, or encode it in the style (component_type, component_index). As you can see, this is pretty fast O(1) item selection no matter how many components you have... The shadow screen can have a different resolution than your actual screen (to preserve memory).
This gives pixel-perfect mouse selection accuracy no matter the shape of your components, without the need for nested component search loops.
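Here is a rough CPU-side sketch of that idea; the names ShadowBuffer, fillRect and pick are mine, and a real implementation could just as well render the IDs into an off-screen GPU buffer.
// Minimal sketch of the ID "shadow screen" picking described above (assumed names)
#include <algorithm>
#include <cstdint>
#include <vector>

class ShadowBuffer
{
    int w, h;
    std::vector<int32_t> ids;   // one component ID per pixel, -1 means empty
public:
    ShadowBuffer(int width, int height) : w(width), h(height), ids(width * height, -1) {}
    void clear() { std::fill(ids.begin(), ids.end(), -1); }
    // called while pre-rendering: mark the pixels a component covers with its ID
    void fillRect(int x0, int y0, int xs, int ys, int32_t id)
    {
        for (int y = std::max(0, y0); y < std::min(h, y0 + ys); ++y)
            for (int x = std::max(0, x0); x < std::min(w, x0 + xs); ++x)
                ids[y * w + x] = id;
    }
    // O(1) pick on mouse events: read the ID under the cursor, -1 if empty
    int32_t pick(int mouseX, int mouseY) const
    {
        if (mouseX < 0 || mouseY < 0 || mouseX >= w || mouseY >= h) return -1;
        return ids[mouseY * w + mouseX];
    }
};
Non-rectangular components would simply write their own shape into the buffer instead of a rectangle, which is where the pixel-perfect accuracy comes from.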
[Notes]
As I did this stuff myself, here are some hints:
create a window class containing the configuration of your components for a single screen. Programs usually have several screens with different sets of components, and rebuilding the screens dynamically over and over again just because you switch page/screen sucks.
use separate lists of components, one list per component type.
create an IDE editor for your windows (see the drag & drop example in C++, it might come in handy for this). Add get/set functions controlled by a string/enum or flag to easily obtain/change properties, which makes an Object Inspector possible. This is also how my IDE looks:
The window is saved from the IDE directly as C++ code that I can just copy into my app. This is the above example without the knob (I forgot to save it):
//---------------------------------------------------------------------------
// OpenGL VCL window beg: win
win.grid.allocate(0);
win.grid.num=0;
win.scale.allocate(0);
win.scale.num=0;
win.button.allocate(0);
win.button.num=0;
win.knob.allocate(0);
win.knob.num=0;
win.scrollbar.allocate(3);
win.scrollbar.num=3;
win.scrollbar[0].x0=200.0;
win.scrollbar[0].y0=19.0;
win.scrollbar[0].xs=256.0;
win.scrollbar[0].ys=16.0;
win.scrollbar[0].fxs=8.0;
win.scrollbar[0].fys=19.0;
win.scrollbar[0].name="_vcl_scrollbar0";
win.scrollbar[0].hint="";
win.scrollbar[0].min=0.000;
win.scrollbar[0].max=1.000;
win.scrollbar[0].pos=0.000;
win.scrollbar[0].dpos=0.100;
win.scrollbar[0].horizontal=1;
win.scrollbar[0].style=0;
win.scrollbar[0].resize();
win.scrollbar[1].x0=200.0;
win.scrollbar[1].y0=45.0;
win.scrollbar[1].xs=256.0;
win.scrollbar[1].ys=16.0;
win.scrollbar[1].fxs=8.0;
win.scrollbar[1].fys=19.0;
win.scrollbar[1].name="_vcl_scrollbar1";
win.scrollbar[1].hint="";
win.scrollbar[1].min=0.000;
win.scrollbar[1].max=1.000;
win.scrollbar[1].pos=0.000;
win.scrollbar[1].dpos=0.100;
win.scrollbar[1].horizontal=1;
win.scrollbar[1].style=0;
win.scrollbar[1].resize();
win.scrollbar[2].x0=200.0;
win.scrollbar[2].y0=70.0;
win.scrollbar[2].xs=256.0;
win.scrollbar[2].ys=16.0;
win.scrollbar[2].fxs=8.0;
win.scrollbar[2].fys=19.0;
win.scrollbar[2].name="_vcl_scrollbar2";
win.scrollbar[2].hint="";
win.scrollbar[2].min=0.000;
win.scrollbar[2].max=1.000;
win.scrollbar[2].pos=0.000;
win.scrollbar[2].dpos=0.100;
win.scrollbar[2].horizontal=1;
win.scrollbar[2].style=0;
win.scrollbar[2].resize();
win.interpbox.allocate(0);
win.interpbox.num=0;
win.dblist.allocate(0);
win.dblist.num=0;
// OpenGL VCL window end: win
//---------------------------------------------------------------------------
Look at the images here, plotting real-time data on an oscilloscope, for some ideas (I got this working for both GDI and OpenGL).
It is better to use pixel units instead of OpenGL's <-1,+1> screen units, for better visual quality and editing comfort.

How to draw directly on the screen in windows? [closed]

Say for example that I wanted to draw a red square (or multiple red squares) in the middle of the screen and still be able to see everything not covered by the square, a little bit like a splash screen.
I want to implement this on Windows, but I don't know the best way to draw on top of the screen. These are my ideas so far:
I initially attempted to draw directly onto the desktop by obtaining a Device Context for it.
Make the rectangle/s a separate window as they would be very easy to move around and I wouldn't have to worry about transparency.
Create a transparent window that covers the entire screen and stays on top. Draw the rectangle/s on the client area of the screen
I think that the best of the 3 ideas is the last one, but any opinions on my ideas, or new ideas would be much appreciated :)
Creating a transparent window and then drawing in it is, in my experience, messy.
Why don't you just take a screenshot of the background and set it as the background for your snake game? Whenever they click somewhere/die, you just close your game.
You could catch the last click and resubmit it after closing the game.
That is how I would do it, but of course that's just an opinion.
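For illustration, grabbing the current desktop contents with plain Win32 GDI might look roughly like this (a sketch; the helper name CaptureDesktop is assumed):
// Hedged sketch: copy the current screen contents into a bitmap (Win32 GDI)
#include <windows.h>

HBITMAP CaptureDesktop()
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);
    HDC screenDC = GetDC(NULL);                           // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);
    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);   // copy the screen pixels
    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;   // draw this as the game background; DeleteObject() it when done
}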

Qt and OpenGL, using one context for multiple widgets

I recently asked a question about how to get around sharing issues with vertex array objects and framebuffer objects across multiple contexts, and I was then convinced that using multiple contexts just causes more headaches than it solves.
I am using Qt, and currently my setup is that I have one invisible QGLWidget which I then use in the constructors of my visible QGLWidgets in order to share resources. This works great, except that I cannot share certain things across the contexts.
I wish to find a solution where I am able to use a single context to render all of my different widgets. This question refers to using the QGLWidget constructor where you pass in the QGLContext you want shared; however, this does not seem to use one common context, but instead sets the context to be used by one QGLWidget. When you try to use it on a second widget, a qWarning tells you that the QGLContext must refer to the widget you are passing it to.
The goal of my application is to have two separate GUIs which render different scenes yet share the same context. Currently I have a 'World' editor which edits a scene and saves it to a file to be used in my game engine, and I also have a 'Material' editor which allows you to graphically edit a material, similar to UDK's Material editor; there is a preview window which uses OpenGL.
Ideally I would like to keep my current design of having one unified game editor which is navigable by tabs, rather than having separate programs for each part of the editor.
The only thing that seemed like a decent solution was using a QGraphicsView and setting a QGLWidget as the viewport; however, this does not seem to work at all. I can render basic primitives, but anything more and it falls apart.
Does anyone have experience dealing with this issue of multiple OpenGL Widgets, and if so could you explain the process you took to achieve your goal?
I don't quite understand why you are having so much trouble. I'm building a CAD-like app, so I share a few contexts, like this:
I use an application-wide hidden QGLWidget as a member of my main window class, this is the context shaders are loaded in.
For each document window, the window class has a hidden QGLWidget member, this is the context geometry is loaded in. The shader context is used as the 'shared' widget for it, allowing documents access to the application wide shaders.
Each of the 5 viewports in each document window is a visible QGLWidget, this is where the actual rendering takes place. The document window geometry QGLWidget is used as the 'shared' widget, so the viewports have access to the document-wide geometry data and the application-wide shaders.
The shared widget parameter allows you to create an 'inheritance' tree of contexts: every context has access to its own data and all of its ancestors' data (but not its children's or siblings').
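In code, that hierarchy boils down to passing the right shareWidget into the QGLWidget constructor at each level. A minimal sketch, with class and member names assumed by me:
// Hedged sketch of the shared-context hierarchy described above (Qt 4 QGLWidget)
#include <QGLWidget>
#include <QMainWindow>
#include <QWidget>

class MainWindow : public QMainWindow
{
    QGLWidget *m_shaderContext;      // hidden, application-wide: shaders live here
public:
    MainWindow() : m_shaderContext(new QGLWidget(this)) { m_shaderContext->hide(); }
    QGLWidget *shaderContext() const { return m_shaderContext; }
};

class DocumentWindow : public QWidget
{
    QGLWidget *m_geometryContext;    // hidden, document-wide: geometry lives here
public:
    explicit DocumentWindow(MainWindow *app) : QWidget(app)
    {
        // share with the application-wide shader context
        m_geometryContext = new QGLWidget(this, app->shaderContext());
        m_geometryContext->hide();
        // each visible viewport shares with the document's geometry context,
        // and therefore transitively with the shader context as well
        for (int i = 0; i < 5; ++i)
        {
            QGLWidget *viewport = new QGLWidget(this, m_geometryContext);
            viewport->show();
        }
    }
};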

creating starting page for opengl game

I am working on a graphics game project in OpenGL and I want to make a front page for the game containing an image, a few buttons and some text. The buttons perform different actions on click, e.g. a start button for starting the game. Can anyone please suggest how I can do it?
How can I do it?
Well, by implementing it. OpenGL is not a game engine, nor a scene graph, nor a UI toolkit. It's merely a drawing API providing you the means to draw nice pictures, and that's it. Anything beyond that is the task of either a 3rd party library/toolkit, or your own code, or a combination of both.
A usual approach to model this behaviour is by introducing application states. Here is a related question.
You could model your StartScreenState by drawing a plane with buttons using an orthographic projection and not drawing the rest (or not having initialized it yet). When the player clicks 'start', you switch to a perspective projection and display the game contents.
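A minimal sketch of that state switch, with the enum and function names being my own placeholders:
// Hedged sketch: dispatch rendering on an application state
enum class AppState { StartScreen, InGame };

static AppState g_state = AppState::StartScreen;

void renderFrame()
{
    switch (g_state)
    {
    case AppState::StartScreen:
        // orthographic projection: draw the background image and the buttons
        break;
    case AppState::InGame:
        // perspective projection: draw the actual game scene
        break;
    }
}
// the start button's click handler would simply set: g_state = AppState::InGame;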
I don't know that I would even use OpenGL for that. OpenGL is for rendering colored/textured triangles/quads so that you can do tons of stuff graphically. There's no such thing as "load an image to coordinate x,y on the screen". The equivalent would be "draw two triangles with these vertices that make up a rectangle and are textured with this image". Which is why I would probably stay away from OpenGL to do this, because you don't really need to use any of the awesome features that OpenGL has.
A very common UI framework that I believe nestles in well with OpenGL, if you really want to use the two together, is Qt. It should make your life easier in terms of UI stuff. See the wiki and dev pages.

Size of OPENGL context in SFML WINDOW

I'm currently working on a voxel editor and everything is going fine.
I have my SFML window and my model to work with. I was just wondering whether it is possible with SFML to set the 3D context to a certain specific size.
I'm asking this because my model is currently shown on the screen with no problem at all, except that now I want to create some options settings with SFML and my buttons will be on top of my 3D model. I would like 75% of the left side of my window to be my 3D context and the 25% at the right to be blank space to fill in my buttons.
To do what you want to do, I believe what you're looking for is this: http://www.sfml-dev.org/documentation/2.0/classsf_1_1View.php#details
I think the context is attached to the window as a whole. Also be aware that SFML is for 2D graphics; once you want 3D rendering, you're going to need to use OpenGL directly. SFML is a wrapper around OpenGL calls, so there's no problem with using SFML to help set up and manage things, and OpenGL directly for rendering.
http://www.sfml-dev.org/tutorials/2.0/window-opengl.php
try:
glViewport(x,y,width,height);
source: https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glViewport.xml
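Putting that together for the 75%/25% split described in the question, a minimal sketch (SFML 2.x; the window size values are assumed) could look like this:
// Hedged sketch: restrict the GL viewport to the left 75% of an SFML window
#include <SFML/Window.hpp>
#include <SFML/OpenGL.hpp>

int main()
{
    sf::Window window(sf::VideoMode(800, 600), "Voxel editor");
    window.setActive(true);
    // left 75% of the window for the 3D view, right 25% stays free for the UI
    glViewport(0, 0,
               static_cast<GLsizei>(window.getSize().x * 0.75f),
               static_cast<GLsizei>(window.getSize().y));
    // ... render the voxel model here, then draw the UI on the right ...
    return 0;
}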