I am trying to create a rendering web server for 3D data (initially sent by the web clients), offering real-time images back to the clients over WebSockets.
My problem is with the 3D rendering instances. Using VTK, I have not found a way to keep numerous rendering "instances" (one per web client) alive in the same code/server. I tried OpenGL directly as well, with the same result. I know about options such as VTK.js, but I need the rendering to happen on the server side. I am stuck!
Is there any way to render multiple 3D graphics instances from the same code? My server runs on Flask; for rendering I tried VTK in C++ connected through DLLs, and the VTK Python wrapper, but neither works.
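One direction worth sketching: VTK can render off-screen, so each connected client could get its own vtkRenderWindow that never opens a window. The following is only a minimal C++ sketch, assuming a VTK build with off-screen support (OSMesa or EGL on a headless server); the per-client map, the GetScene/RenderFrame helpers and the sphere placeholder are illustrative, not part of any existing codebase.

```cpp
// Minimal sketch (assumption: VTK built for headless/off-screen use).
// Each client gets its own off-screen vtkRenderWindow; the framebuffer is
// grabbed with vtkWindowToImageFilter and could then be encoded and pushed
// over a WebSocket. The map and session id are illustrative only.
#include <vtkSmartPointer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderer.h>
#include <vtkSphereSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkWindowToImageFilter.h>
#include <vtkPNGWriter.h>
#include <map>
#include <string>

struct ClientScene {
    vtkSmartPointer<vtkRenderWindow> window;
    vtkSmartPointer<vtkRenderer>     renderer;
};

std::map<std::string, ClientScene> g_scenes;  // one off-screen scene per client id

ClientScene& GetScene(const std::string& clientId) {
    auto it = g_scenes.find(clientId);
    if (it != g_scenes.end()) return it->second;

    ClientScene scene;
    scene.window   = vtkSmartPointer<vtkRenderWindow>::New();
    scene.renderer = vtkSmartPointer<vtkRenderer>::New();
    scene.window->SetOffScreenRendering(1);      // no on-screen window needed
    scene.window->SetSize(800, 600);
    scene.window->AddRenderer(scene.renderer);

    // Placeholder geometry; in practice the actors would be built from the
    // data the client sent.
    auto source = vtkSmartPointer<vtkSphereSource>::New();
    auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(source->GetOutputPort());
    auto actor = vtkSmartPointer<vtkActor>::New();
    actor->SetMapper(mapper);
    scene.renderer->AddActor(actor);

    return g_scenes.emplace(clientId, scene).first->second;
}

// Render one frame for a client and write it to disk (a WebSocket push would
// read the image buffer instead of saving a file).
void RenderFrame(const std::string& clientId, const std::string& outPath) {
    ClientScene& scene = GetScene(clientId);
    scene.window->Render();

    auto grab = vtkSmartPointer<vtkWindowToImageFilter>::New();
    grab->SetInput(scene.window);
    grab->Update();

    auto writer = vtkSmartPointer<vtkPNGWriter>::New();
    writer->SetFileName(outPath.c_str());
    writer->SetInputConnection(grab->GetOutputPort());
    writer->Write();
}
```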
Related
I display a 2D texture in OpenGL using Qt.
Unfortunately I have found out that I need to support running my application via Remote Desktop to a Windows 7 PC. In this case I need to use OpenGL ES 2.0 API (ANGLE).
Due to low bandwidth my 2D visualization seems to be lagging.
My texture may have higher resolution than the screen so that it needs to be minified.
When not using Remote Desktop, my approach has been to specify a very detailed texture and let the graphics card do the minification.
However, now I am wondering whether the OpenGL calls are executed in software locally rather than on the remote machine. In that case, would the textures have to be transmitted via TCP/IP?
Does this mean that I should do minification myself before using the textures?
As an example, instead of using a 2048x2048 texture I could bin 2x2 pixels in C++ and upload a 1024x1024 texture.
Alternatively, could I use glGenerateMipmap?
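For concreteness, here is a small sketch of the two options weighed above, using only calls available in OpenGL ES 2.0 (the ANGLE path): GPU mipmap generation via glGenerateMipmap versus a 2x2 box filter in C++ before upload. The function and variable names are illustrative. (Note that, as the answer below explains, neither option reduces what RDP itself has to transmit.)

```cpp
// Sketch of the two options discussed in the question (OpenGL ES 2.0 / ANGLE
// compatible calls). Option A uploads the full-resolution texture and lets the
// GPU build the smaller mip levels; Option B box-filters 2x2 blocks on the CPU
// and uploads the half-resolution image.
#include <GLES2/gl2.h>
#include <vector>
#include <cstdint>

// Option A: full-size upload + GPU mipmaps
// (ES 2.0 requires power-of-two sizes for mipmapping; 2048x2048 qualifies).
void UploadWithMipmaps(const uint8_t* rgba, int w, int h, GLuint tex) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}

// Option B: 2x2 CPU binning before upload (halves each dimension).
std::vector<uint8_t> Bin2x2(const uint8_t* rgba, int w, int h) {
    std::vector<uint8_t> out((w / 2) * (h / 2) * 4);
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            for (int c = 0; c < 4; ++c) {
                int sum = rgba[((2 * y) * w + 2 * x) * 4 + c]
                        + rgba[((2 * y) * w + 2 * x + 1) * 4 + c]
                        + rgba[((2 * y + 1) * w + 2 * x) * 4 + c]
                        + rgba[((2 * y + 1) * w + 2 * x + 1) * 4 + c];
                out[(y * (w / 2) + x) * 4 + c] = static_cast<uint8_t>(sum / 4);
            }
    return out;
}
```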
I think several things are being conflated here: RDP simply transfers the entire rendered remote desktop, whatever is on it, so it is not the case that the "OpenGL calls are executed in software locally". Hence, unfortunately, reducing the texture size in your app will not help, even if you remove the texture entirely (try it). RDP is not really suitable for real-time animation.
Your app is better off running locally on the user's machine, so think about how to distribute your OpenGL app to your users.
If you cannot install your app on the users' machines, or give them an installation kit, then
turning your app into a browser app may be a better option.
WebGL exists for exactly this kind of application, and it is a standard too:
https://www.khronos.org/webgl/
I have an application that runs on Nintendo 3DS -- it uses a variant of OpenGL to render 3D animation. The user is able to store these scenes online as data files. That is, only the data needed to render the scene is stored - the image frames are rendered on the device.
Additionally I would like for people to be able to view these scenes online. One way might be to render them in the browser via WebGL, but I'm worried about the amount of time and memory this would require. I would rather have the server render the scenes into movie files which can be played from a web page.
I don't have a lot of experience with server side programming - is it possible for a server program to render frames to an OpenGL context? They would be offscreen framebuffers since there is no screen.
Any suggestions on an approach to doing that? I've used PHP mostly for web programming, but it seems like that is not feasible for this. Ideally I'd like to write a C++ program which ran on the server, that way I could re-use code from the 3DS. Is that possible? Where can I read about doing this?
Server-side rendering is possible, and it would give the user more consistent results than relying on consistent WebGL behavior across different browsers and platforms (in addition to the time/memory concerns you already mentioned). Users with capable browsers and platforms won't see any benefit from it, though, so you'll want to consider what your users want and the platforms they're using.
For Windows-based servers, using OpenGL (with off-screen framebuffers) and "no screen" will present a challenge: you need to start with a window to establish a graphics context. (There may be a provision for establishing a "windowless" graphics context on Linux.) You will also need to work out how to manage the GPU resources for rendering, as the GPU has limits on the number of concurrent rendering requests it can support before slowing down and/or failing to allocate resources (e.g. framebuffer memory).
One alternative might be to use the Mesa (software OpenGL) implementation - it won't be as fast, but in theory it would scale with added server CPU and memory, which matches how most web servers scale out: see the Mesa off-screen rendering info and the sketch below.
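For reference, here is a rough sketch of that OSMesa path: the GL context renders into a plain block of memory, so no window, X server, or GPU is needed. The calls are from Mesa's <GL/osmesa.h> (link against libOSMesa); the sizes and the clear-only "rendering" are placeholders.

```cpp
// OSMesa off-screen rendering sketch: render into a memory buffer, then hand
// the pixels to an image or video encoder. Error handling is minimal.
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main() {
    const int width = 640, height = 480;

    // 32-bit RGBA context with a 16-bit depth buffer, no stencil/accum.
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, nullptr);
    if (!ctx) { std::fprintf(stderr, "OSMesaCreateContextExt failed\n"); return 1; }

    std::vector<unsigned char> buffer(width * height * 4);
    if (!OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height)) {
        std::fprintf(stderr, "OSMesaMakeCurrent failed\n");
        return 1;
    }

    // Render a frame as usual; afterwards `buffer` holds the pixels.
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glFinish();

    OSMesaDestroyContext(ctx);
    return 0;
}
```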
It looks like, once the renderer is written, spawning the C++ executable with arguments from PHP is trivial - although you may wish to route any long-running renders to a separate rendering server to keep your web server responsive.
For the last few days I have been looking through the Chromium and WebKit source code, reading wikis, and watching Google videos. What I want to do is take what WebKit renders and place it into a GL texture, but I need different DOM nodes in different textures. I have a few questions, and I'm not sure whether I should use Chromium or implement my own simple browser. Chromium obviously has many nice features, but it is very large and extensive. I also suspect that its algorithms for splitting render layers are unpredictable (I want pretty much full control).
Where in WebKit's or Chromium's source should I look to find where the raster data is output? It would be convenient if I could get access to Chromium's render-layer raster data before it is composited, but as I said, the render layers would probably be mixed in a way I don't want them to be.
Is WebKit GPU-accelerated? In that case I should be able to access the data directly. I know Chromium+Blink is, but I can't find out whether WebKit on its own is.
How much work is it to put together a simple browser?
P.S. I can't use Awesomium because I need to render different DOM nodes/subtrees into different textures. Chromium Embedded Framework doesn't appear to have support for DOM manipulation either and I believe it just renders the entire page and gives you the raster data.
I'm trying to develop an Apache2 module that utilizes OpenGL to perform off-screen rendering and dynamically generate images that I can then send back to the client.
Apache2 is running on an Ubuntu 12.04 machine and I created a test module that renders a quad and stores the frame as an image to disk using OpenGL/GLX. But when the module receives a client request, it crashes at XOpenDisplay(0) with a segmentation fault. Any ideas what could be going wrong?
Edit:
All the examples I have seen talk about using a pixel buffer (PBuffer). As far as I know, these are deprecated and FBOs should be used instead. Can someone explain how to create a context and use FBOs to perform off-screen rendering?
While it is technically perfectly possible to do windowless, display-server-less, off-screen, GPU-accelerated rendering with OpenGL, in practice it is nearly impossible these days because you need a display environment to actually get access to the GPU. Fortunately the structure of graphics systems is changing (hybrid graphics, display compositors). Mesa already provides an off-screen context-creation mode (OSMesa), but it is far from feature-complete.
So right now you'll need some kind of display-server drawable to work with, on which you can bind a context. X11 offers two kinds of GPU-accelerated drawables: windows and PBuffers. You can use FBOs with either (PBuffers are technically windows that cannot be mapped to the root window and have an off-screen canvas). The easiest way to go is to create a regular window on an X server but never show it; you can still create an OpenGL context on it and create FBOs, as shown in numerous tutorials and sketched below. But for OpenGL to work, the X server you use must be active, hold the console, and be configured to use the GPU (theoretically, with newer hybrid-graphics-capable X servers and drivers it should be possible to configure the X server to use a dummy display device and the GPU as a secondary device for accelerated rendering, but I have never tried that so far).
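As a concrete illustration of the "regular window that is never shown" route: the sketch below opens the display, creates a small window without mapping it, makes a GLX context current on it, and then sets up an FBO as the actual render target. It assumes GLX plus GLEW for the framebuffer-object entry points and abbreviates error handling; it is a sketch of the technique described above, not production code.

```cpp
// Hidden X window + GLX context + FBO off-screen render target.
#include <GL/glew.h>
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <cstdio>

int main() {
    // XOpenDisplay returns NULL when no usable DISPLAY is available (typical
    // for a daemon such as an Apache worker); always check it before use --
    // dereferencing a NULL Display* gives exactly the kind of segfault
    // described in the question above.
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) { std::fprintf(stderr, "no X display available\n"); return 1; }

    static int visualAttribs[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), visualAttribs);
    if (!vi) { std::fprintf(stderr, "no suitable visual\n"); return 1; }

    // Create a small window but never call XMapWindow, so it stays off screen.
    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 16, 16, 0,
                               vi->depth, InputOutput, vi->visual, CWColormap, &swa);

    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
    glXMakeCurrent(dpy, win, ctx);
    glewInit();  // load the FBO entry points

    // Off-screen render target: FBO with a color renderbuffer.
    const int width = 800, height = 600;
    GLuint fbo, color;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &color);
    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        glViewport(0, 0, width, height);
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the quad here, then glReadPixels into a buffer and encode it ...
    }

    glXMakeCurrent(dpy, None, nullptr);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}
```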
I am trying to develop a 3D game in OpenGL and I need to create many 3D objects. I am a beginner in OpenGL. I have tried several 3D packages, such as Blender, MODO, Unity 3D, and Cheetah.
I can easily create my objects with these tools, export them as Wavefront .OBJ, and convert them to a header file using a Perl script. This header file is then added to my OpenGL project.
The 3D objects show up, but not correctly. The script I used converts the .OBJ to .h using TRIANGLES, and the object is drawn with triangles but is incomplete. It doesn't work at all when I use TRIANGLE STRIP or FAN; there seem to be problems with the vertices.
Is the problem with my script, or have I gone about this the wrong way? Is there a better way to import 3D objects into OpenGL directly?
The link below is the best resource I have found for getting 3D objects into OpenGL; I got the scripts from there:
http://www.heikobehrens.net/2009/08/27/obj2opengl/
Please help.
You don't want to go that way. Immediate-mode drawing (glBegin/glEnd with GL_TRIANGLES and friends) is extremely slow in OpenGL.
Instead, you should pick a decent format and write a loader for it (or use one you find on the web). Good formats would be 3ds, obj (if gzipped), or Collada.
Here's an example tutorial on loading from Milkshape files.
Once you load your objects programmatically, you can use vertex arrays, or even better VBOs, to display them (see the sketch after this answer). This is way faster.
Google for a mesh loader for your favorite format, or write one yourself.
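To make the VBO suggestion concrete, here is a minimal fixed-function sketch (OpenGL 1.5+ or OpenGL ES 1.1): the loader's flat vertex and normal arrays (obj2opengl already emits data in this shape) are uploaded to buffer objects once, then drawn each frame with glDrawArrays. The function and variable names are placeholders for whatever your loader produces.

```cpp
// Header assumption: a GL header exposing the 1.5 buffer-object calls
// (e.g. <OpenGLES/ES1/gl.h> on iOS, or GLEW/GLAD on desktop).
#include <GL/gl.h>

GLuint vboVertices = 0, vboNormals = 0;
GLsizei vertexCount = 0;

// Call once after loading the mesh: upload positions and normals to GPU memory.
void CreateMeshVBOs(const GLfloat* positions, const GLfloat* normals, GLsizei count) {
    vertexCount = count;

    glGenBuffers(1, &vboVertices);
    glBindBuffer(GL_ARRAY_BUFFER, vboVertices);
    glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(GLfloat), positions, GL_STATIC_DRAW);

    glGenBuffers(1, &vboNormals);
    glBindBuffer(GL_ARRAY_BUFFER, vboNormals);
    glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(GLfloat), normals, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

// Call every frame: draw straight from the buffers, no per-vertex calls.
void DrawMesh() {
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, vboVertices);
    glVertexPointer(3, GL_FLOAT, 0, nullptr);   // data comes from the bound VBO

    glBindBuffer(GL_ARRAY_BUFFER, vboNormals);
    glNormalPointer(GL_FLOAT, 0, nullptr);

    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```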
I have written a reader/renderer for AC3D files that works fine on the iPhone (OpenGL ES).
Feel free to have a look at it here.
There is also an OBJ loader by Jeff LaMarche on Google Code.
AC3D can reduce the triangle count pretty well, and as an alternative I ported QVis to the Mac. My reader/renderer also tries to build tri-strips.
About VBOs: I have not seen any performance gain when using them on the iPhone, and I'm not the only one.