Lesson 07 gave me the idea that we can load one object into many renderers to see it from different views. However, I have several issues with it: the dirty property belongs to the object and its attributes and is not a function of the renderer, so when the first renderer finishes it marks everything clean (dirty == false) and the following renderers have nothing to do.
For example, if my object (root) is an empty object with 2 children (child1, child2), each containing non-empty objects:
- the first renderer works fine (i.e. it adds root, child1 and child2), but the others only add the empty objects because root.dirty == false (see renderer3D.js, line 591)
- the first renderer computes a bounding box fitting the scene, but the others don't because root/child1/child2.points.dirty == false (see renderer3D.js, line 793)
So my question is: is it possible to have one complex object and manage it in different renderers, given that every object has a property, dirty, whose state depends on the renderer? Or should I copy the object and link events so that transformations in one renderer are reflected in the others? Or are further modifications needed?
I just created a jsfiddle to set up the scenario you described:
http://jsfiddle.net/haehn/ZdzeR/
It all works fine: scene is an X.object which holds a mesh and a cube. Adding it to all three renderers works and shows both objects.
I have the following problem:
I want to build an application composed of several views which render a common OpenGL scene, each from a different point of view and with different illumination and other options.
Basically, my question is: what is the best way to do that with Qt?
My first attempt was to create multiple QOpenGLWidgets with a common, shared QOpenGLContext in which I stored the textures, but also the meshes and shaders.
But it didn't work for meshes, because Vertex Array Objects do not seem to be shareable.
After a lot of tries, a possible solution is to store one VAO for each widget that needs the mesh (see the sketch below), but this looks really awkward.
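In rough code, that workaround might look like the following sketch, assuming Qt 5's QOpenGLVertexArrayObject (the function name and the map are illustrative): keep one VAO per context for the same shared buffer, and create it lazily in whichever context is current.

#include <QHash>
#include <QOpenGLContext>
#include <QOpenGLVertexArrayObject>

// One VAO per context for the same shared vertex buffer (VAOs themselves
// cannot be shared across contexts, but buffer objects can).
QHash<QOpenGLContext *, QOpenGLVertexArrayObject *> vaos;

QOpenGLVertexArrayObject *vaoForCurrentContext()
{
    QOpenGLContext *ctx = QOpenGLContext::currentContext();
    QOpenGLVertexArrayObject *vao = vaos.value(ctx);
    if (!vao) {
        vao = new QOpenGLVertexArrayObject;
        vao->create();   // created in, and tied to, the current context
        vao->bind();
        // ... bind the shared VBO and set up attribute pointers here ...
        vao->release();
        vaos.insert(ctx, vao);
    }
    return vao;
}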
So, I wonder if there is a good alternative for this kind of problem, or perhaps good documentation for understanding how QOpenGLContext sharing works.
The simplest idea I've imagined is to create only one QOpenGLContext and use it in the different widgets. But I don't know how to create a QOpenGLContext on its own, nor what kind of QWidget is able to display these renderings.
It's my first post, so I don't know if this is clear enough or whether I need to describe my whole architecture.
You already tried shared contexts, so I'll skip that option.
An OpenGL context is bound to a window: if you want only one context, the straightforward answer is to have only one window.
Using the widgets module, you can have multiple views of the same scene using multiple viewports within a single QOpenGLWidget. Something like:
void myWidget::paintGL() {
    // ...
    // lower-left quadrant
    glViewport(0, 0, this->width() / 2, this->height() / 2);
    // draw scene from one point of view

    // upper-right quadrant
    glViewport(this->width() / 2, this->height() / 2,
               this->width() / 2, this->height() / 2);
    // draw scene from another point of view
    // ...
}
You should probably design a viewport class to store and manage the rendering parameters for each viewport.
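As a minimal sketch (the names, fields, and the fixed-function glPolygonMode call are illustrative assumptions, not a prescribed design; GL headers are assumed available as in paintGL()):

#include <QRect>
#include <QMatrix4x4>

struct Viewport {
    QRect rect;              // region of the widget, in pixels
    QMatrix4x4 camera;       // view transform for this viewport
    bool wireframe = false;  // per-view rendering option

    void apply() const {
        glViewport(rect.x(), rect.y(), rect.width(), rect.height());
        glPolygonMode(GL_FRONT_AND_BACK, wireframe ? GL_LINE : GL_FILL);
    }
};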
The drawback is that you will have to detect which viewport the user is clicking in to handle interactions: some kind of check on whether event.pos.x lies between 0 and this->width()/2, and so on, as in the sketch below.
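For example, a hypothetical helper on the widget could map a click position to a quadrant index; remember that Qt's y axis points down while glViewport's origin is the bottom-left corner:

int myWidget::viewportAt(const QPoint &pos) const
{
    const int col = pos.x() < this->width() / 2 ? 0 : 1;
    const int row = pos.y() < this->height() / 2 ? 0 : 1; // 0 = top half in Qt coordinates
    return row * 2 + col;  // index into a 2x2 grid of viewports
}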
Another way could be to drop the widgets module and use Qt Quick and QML: a Quick window declares a single OpenGL context, and each Quick item is like a viewport, but encapsulated in its own object, so you don't have to think about where the user is interacting.
Inherit QQuickItem instead of QOpenGLWidget and export your class to QML using the qmlRegisterType() function. You can then create a QQuickView in your program to load QML code where you declare your items. There is an example in Qt's documentation.
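A minimal sketch of the registration step; MyViewport, MyModule, and main.qml are placeholder names for your own QQuickItem subclass, QML module, and scene file:

#include <QGuiApplication>
#include <QQuickView>
#include <QtQml>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    // make the C++ type usable from QML as "MyViewport"
    qmlRegisterType<MyViewport>("MyModule", 1, 0, "MyViewport");
    QQuickView view;
    view.setSource(QUrl("qrc:/main.qml")); // main.qml declares MyViewport items
    view.show();
    return app.exec();
}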
I think that since multiple views/surfaces can update independently, it is unfortunately not possible to have one single QOpenGLContext do the job. And sharing contexts has the limitations you already pointed out in your question.
QOpenGLContext can be moved to a different thread with moveToThread().
Do not call makeCurrent() from a different thread than the one to which the QOpenGLContext object belongs. A context can only be current in one thread and against one surface at a time, and a thread only has one context current at a time.
Link: http://doc.qt.io/qt-5/qopenglcontext.html
So one way you can get it working is to update your views sequentially: make the context current on each view, render, then move on to the next view. This guarantees that the context is current on only one view at any given time. Perhaps use a QMutex to serialize the updates.
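A minimal sketch of that sequential scheme, assuming QWindow-based views (a QWindow is a QSurface, so the single context can be made current against each one in turn):

#include <QList>
#include <QOpenGLContext>
#include <QWindow>

// Render each view in turn with the one shared context; the context is
// current against exactly one surface at any moment.
void renderAll(QOpenGLContext *ctx, const QList<QWindow *> &views)
{
    for (QWindow *view : views) {
        if (!ctx->makeCurrent(view))
            continue;
        // ... issue this view's GL draw calls here ...
        ctx->swapBuffers(view);
    }
    ctx->doneCurrent();
}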
Alternatively, you can pass the context around among threads and serialize their updates that way, but this is a bad approach.
Goal:
In my application I'm trying to implement multiple viewports to allow the user to view a scene from multiple perspectives. Each of my viewports needs to be able to switch between wireframe, shaded, lighting, etc. I can currently render from different perspectives in each viewport, but I have issues.
Problem:
When I try to set various settings such as glPolygonMode() or qglClearColor() within any viewport, these settings only seem to apply to a single viewport, generally the very last viewport that was created. This isn't a signals/slots issue, since these connections are handled internally within each widget, and cannot be mixed up between widgets.
Attempts at solving the problem:
Since I'm using Qt as the library for managing all UI-related things, I'm sure Qt has taken care of a lot of the OpenGL setup for me, so there may be things I'm overlooking that I don't know about.
I've checked the constructors available for QGLWidget and seen that a QGLWidget can take another QGLWidget as a "shareWidget", or alternatively a QGLContext object.
I currently use the "shareWidget" route, because without it, for some reason, I can't get textures to bind in more than one viewport. However, this doesn't solve the problem of not being able to switch between wireframe and shaded in each QGLWidget instance.
I've also tried the QGLContext route. By default each QGLWidget creates a new context anyway, but when trying to assign new ones, or share a single context between all of them, I would just get issues with my shaders not linking (I believe the initializeGL slot is not getting called in that case), leading to a crash every time a context is shared with another QGLWidget:
ASSERT: "QOpenGLFunctions::isInitialized(d_ptr)" in file
c:\work\build\qt5_workdir\w\s\qtbase\include\qtgui../../src/gui/opengl/qopenglfunctions.h,
line 2018
Details:
Currently, my application takes on the following hierarchy:
Application
  Window
    ViewportWidget [dynamic array]
      QGLWidget (custom variation)
The only thing each QGLWidget needs to share is the pointer to the current "map", so that each can render the map based on whatever settings are set within that particular widget's instance.
I perform the following steps to set up a viewport (a condensed sketch follows the error message below):
I create a new ViewportWidget, parent it, and add it to the appropriate frame and layout. If the viewport isn't the first one, it also passes the very first QGLWidget to be used as a "shareWidget".
The viewport then creates a QGLFormat with a swap interval of 1, and passes said format into the constructor of a new QGLWidget.
I am then forced to call makeCurrent() on the viewport, otherwise I crash with the reason:
ASSERT: "false" in file qgl.cpp, line 122
Is it even possible to have separate QGLWidgets with different polygon modes or clear colors? I'm just worried that I'm doing something wrong that will bite me later on, which I want to avoid.
I am trying to draw on two UserControls with the same OpenGL context in the same form. In other words, I want to show the same picture twice, simultaneously, on the form. My tool is VC6 and I use C++.
I've tried many methods but failed. Could someone give me a simple code sample or some advice?
Edit:
It looks like there are two possibilities: either copy the final image to the second GUI element, or create a second "device context" for the second element, use *MakeCurrent to switch to it (see the link and discussion below), and re-render or blit the result.
Copy:
Assuming the GL context draws directly to your primary GUI element (which would stop you using the GUI library to do the copy), you can copy the data via glReadPixels, or investigate drawing to a texture via a Frame Buffer Object and using glGetTexImage. Then find some way to display the raw image data on the second GUI element (a part I have no experience with).
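A minimal glReadPixels sketch, assuming the source context is current and the framebuffer is width x height pixels:

#include <vector>
#include <GL/gl.h>

// With the source GL context current:
std::vector<unsigned char> pixels(width * height * 4);
glPixelStorei(GL_PACK_ALIGNMENT, 1);  // tightly packed rows
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
// Rows arrive bottom-up; flip them before handing the data to most GUI toolkits.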
MakeCurrent:
Make current OpenGL context on Linux
I recently asked a question about how to get around sharing issues with vertex array objects and frame buffer objects across multiple contexts, and I was convinced that using multiple contexts just causes more headaches than it solves.
I am using Qt, and currently my setup is that I have one invisible QGLWidget which I then pass to the constructors of my visible QGLWidgets in order to share resources. This works great, except that I cannot share certain things across the contexts.
I wish to find a solution where I can use a single context to render all of my different widgets. This question refers to using the QGLWidget constructor that takes the QGLContext you want shared; however, this does not seem to use one common context, but instead sets the context to be used by one QGLWidget. When you try to use it on a second widget, a qWarning informs you that the QGLContext must refer to the widget you are passing it to.
The goal of my application is to have two separate GUIs which render different scenes yet share the same context. Currently I have a "World" editor, which edits a scene and saves it to a file to be used in my game engine, and a "Material" editor, which allows you to graphically edit a material (similar to UDK's Material editor) with a preview window that uses OpenGL.
Ideally I would like to keep my current design of having one unified game editor which is navigable by tabs, rather than having separate programs for each part of the editor.
The only approach that seemed like a decent solution was using a QGraphicsView with a QGLWidget as the viewport, but this does not seem to work at all: I can render basic primitives, but anything more complex falls apart.
Does anyone have experience dealing with this issue of multiple OpenGL Widgets, and if so could you explain the process you took to achieve your goal?
I don't quite understand why you are having so much trouble. I'm building a CAD-like app, so I share a few contexts, like this:
I use an application-wide hidden QGLWidget as a member of my main window class; this is the context shaders are loaded in.
For each document window, the window class has a hidden QGLWidget member; this is the context geometry is loaded in. The shader context is used as the "shared" widget for it, allowing documents access to the application-wide shaders.
Each of the 5 viewports in each document window is a visible QGLWidget; this is where the actual rendering takes place. The document window's geometry QGLWidget is used as the "shared" widget, so the viewports have access to the document-wide geometry data and the application-wide shaders.
The shareWidget parameter allows you to create an "inheritance" tree of contexts: every context has access to its own and all its ancestors' data (but not its children's or siblings').
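In code, that tree is built purely through the shareWidget constructor argument; the names below are illustrative:

QGLWidget *shaderContext   = new QGLWidget(mainWindow);                 // hidden; application-wide shaders
QGLWidget *geometryContext = new QGLWidget(docWindow, shaderContext);   // hidden; per-document geometry
QGLWidget *viewport        = new QGLWidget(docWindow, geometryContext); // visible; actual rendering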
I am making a top-down shooter that makes extensive use of TMX maps created with the "Tiled" application. Within my TMX map, I have a "Background" layer with floor tiles, which appears beneath my characters (CCSprites).
I have another layer in the TMX file called "Foreground" which I would like to appear "above" my CCSprites, giving the illusion of them walking underneath various objects.
I tried using the vertexZ property of the CCNode class to do this:
CCTMXLayer *backgroundLayer = ...
CCSprite *spriteNode = ...
CCTMXLayer *foregroundLayer = ...
[backgroundLayer setVertexZ:1];
[spriteNode setVertexZ:2];
[foregroundLayer setVertexZ:3];
...but it turns out vertexZ actually alters the node's visual appearance within the OpenGL view. It effectively makes a CCNode appear larger, or closer to the user, when it has a higher vertexZ value. I don't want that; all I want is a sort of layers-of-an-impossibly-thin-cake effect, without any visual differences between the layers.
So I thought I would try altering the zOrder property of the nodes, like this:
[[backgroundLayer parent] reorderChild:backgroundLayer z:1];
[[spriteNode parent] reorderChild:spriteNode z:2];
[[foregroundLayer parent] reorderChild:foregroundLayer z:3];
But I realized there's a fundamental problem with what I'm doing here, since my spriteNode is a direct child of the CCScene, but the background and foreground nodes are both children of my CCTMXTiledMap, which itself is a child of the CCScene.
So I'm basically trying to slip a CCSprite between two layers of the map, which, from the CCScene's perspective, are really just two parts of the same layer.
It seems I could create an additional CCTMXTiledMap instance just to hold the foreground layer, but that seems like overkill. My other thought was to create CCSprites to serve the same purpose, but it seems like there has to be a better way.
Yes. I have used Tiled once, very lightly, and I believe there is an option to add an Object Layer to your TMX tiled map (Tiled -> Layer -> Add Object Layer...). Once the map is imported into your build, you can link a CCSprite with the corresponding object layer you created. I would also post your question on the cocos2d forum, as people there are more experienced and better equipped to answer this with examples.