OfflineAudioContext only rendering and calling onComplete once - offline

I've created an offline context to render a visualization based on the rendered buffer, and I call startRendering() to get the rendered buffer in the onComplete callback. If I then create a new set of connected audio nodes, calling startRendering() again does nothing. Do I have to recreate the whole offline context for each render?

Yes, that's how it is supposed to work: an OfflineAudioContext renders exactly once, so you have to create a new offline context (with a new node graph) for each render.

Related

How to provide the OpenGL context of an empty QML Item to a 3rd-party renderer?

My application contains 3rd-party code which paints something using OpenGL. All I can do is provide a full-screen surface for it and mouse/touchscreen events. Say I create a full-screen-sized Item and leave it to that 3rd-party renderer. The renderer emits some signals to me (maybe from an arbitrary thread, by means of QMetaObject::invokeMethod) to update the view. How do I temporarily block QML from painting into the context?
How can what is described above be implemented? Is it technically possible to create such a workflow?
Is there a wiser way to achieve this? Maybe using an FBO would be better?
Can I do this asynchronously, i.e. so that the renderer has its own message queue in a separate thread?
You should have a look at QQuickFramebufferObject.
The way it works:
You create a subclass and instantiate it in QML. You should override its
Renderer *QQuickFramebufferObject::createRenderer()
You create your own subclass of Renderer (http://doc.qt.io/qt-5/qquickframebufferobject-renderer.html).
All the code you put inside its void Renderer::render() will affect the FBO item and be rendered to the screen. Qt is in charge of binding your FBO.
If you need to trigger a redraw from the UI, call void Renderer::update().
Your third-party renderer should call this method when it needs to draw, but all your OpenGL calls (glDrawArrays, ...) should be issued inside the render() function of the Renderer; Qt binds your FBO before calling this method. Your QQuickFramebufferObject could listen to the third-party signals, modify (via some state, or std::functions) what its Renderer's render() method does, and then call for an update, as sketched below.
Here is a good example:
http://blog.qt.io/blog/2015/05/11/integrating-custom-opengl-rendering-with-qt-quick-via-qquickframebufferobject/
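A minimal sketch of that setup, assuming Qt 5 with the Qt Quick module; the class names ThirdPartyItem / ThirdPartyRenderer and the thirdPartyPaint() call are made up for illustration:

    #include <QQuickFramebufferObject>
    #include <QOpenGLFramebufferObject>
    #include <QOpenGLFunctions>

    class ThirdPartyRenderer : public QQuickFramebufferObject::Renderer,
                               protected QOpenGLFunctions
    {
    public:
        ThirdPartyRenderer() { initializeOpenGLFunctions(); }

        void render() override
        {
            // Qt has already bound the item's FBO; just issue GL calls here,
            // e.g. hand control to the third-party code.
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            // thirdPartyPaint();  // hypothetical call into the 3rd-party renderer
        }

        QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) override
        {
            QOpenGLFramebufferObjectFormat format;
            format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
            return new QOpenGLFramebufferObject(size, format);
        }
    };

    class ThirdPartyItem : public QQuickFramebufferObject
    {
        Q_OBJECT
    public:
        Renderer *createRenderer() const override { return new ThirdPartyRenderer; }

    public slots:
        // Connect the third party's "frame ready" signal to this slot
        // (a queued connection if it is emitted from another thread).
        void requestRepaint() { update(); }  // schedules a re-render of the item
    };

Register ThirdPartyItem with qmlRegisterType and instantiate it in QML; each time requestRepaint() runs, Qt re-invokes render() with the FBO bound.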

Save and reload screen in opengl

I have written a program like MS Paint with GLUT/OpenGL, and I want to save the current screen and reload it after some drawing.
How can I save the current display screen into a variable and reload it when needed?
You'll likely want to store it in a Framebuffer Object, and swap those out at will.
This depends strongly on your use-case scenario, though.
http://www.opengl.org/wiki/Framebuffer_Object
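A minimal sketch of that idea, assuming an OpenGL 3.0+ context and an extension loader such as GLEW; the names savedFbo/savedTex and the two helper functions are made up for illustration. It copies the window contents into an FBO-backed texture with glBlitFramebuffer and copies them back later:

    #include <GL/glew.h>

    GLuint savedFbo = 0, savedTex = 0;

    void saveScreen(int width, int height)
    {
        if (!savedFbo) {
            glGenTextures(1, &savedTex);
            glBindTexture(GL_TEXTURE_2D, savedTex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glGenFramebuffers(1, &savedFbo);
            glBindFramebuffer(GL_FRAMEBUFFER, savedFbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, savedTex, 0);
        }
        // Copy the window (default framebuffer 0) into our FBO.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, savedFbo);
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void restoreScreen(int width, int height)
    {
        // Copy our FBO back into the window.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, savedFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }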

Multiple RenderTarget in DirectX 11 (C++)

I want to render a scene and display it on the monitor, while rendering another one to a texture.
Do I need to create two swapchains? How do I create the second swapchain in this case? I tried to call CreateSwapChainForCoreWindow but got memory access exceptions.
Swapchains are really just for displaying stuff.
To render to something, you have to bind a render target view to the pipeline via the OMSetRenderTargets() call on the device context. You can create render target views via CreateRenderTargetView(), which takes a resource as input. Textures are resources too... you just have to create them with the D3D11_BIND_RENDER_TARGET flag.
Those are just a few cues that should point you in the right direction.
Btw, swap chains have buffers, which are resources that can be used to create a render target view as well. That's how you render to a swap chain; the rendering itself really doesn't have anything to do with "swap chains" at all.
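A minimal sketch of those steps, assuming an existing device and immediate context; backBufferRTV stands in for a render target view you created earlier from the swap chain's back buffer (IDXGISwapChain::GetBuffer + CreateRenderTargetView), and error handling is omitted:

    #include <d3d11.h>

    void RenderBothPasses(ID3D11Device* device,
                          ID3D11DeviceContext* context,
                          ID3D11RenderTargetView* backBufferRTV)
    {
        D3D11_TEXTURE2D_DESC texDesc = {};
        texDesc.Width = 1024;
        texDesc.Height = 1024;
        texDesc.MipLevels = 1;
        texDesc.ArraySize = 1;
        texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        texDesc.SampleDesc.Count = 1;
        texDesc.Usage = D3D11_USAGE_DEFAULT;
        // Render target *and* shader resource, so the result can be sampled later.
        texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        ID3D11Texture2D* renderTex = nullptr;
        device->CreateTexture2D(&texDesc, nullptr, &renderTex);

        ID3D11RenderTargetView* renderTexRTV = nullptr;
        device->CreateRenderTargetView(renderTex, nullptr, &renderTexRTV);

        // Pass 1: render the off-screen scene into the texture.
        context->OMSetRenderTargets(1, &renderTexRTV, nullptr);
        // ... draw calls for the scene that goes into the texture ...

        // Pass 2: bind the swap chain's back buffer again and render the
        // scene that is shown on the monitor, then Present() as usual.
        context->OMSetRenderTargets(1, &backBufferRTV, nullptr);
        // ... draw calls for the on-screen scene ...

        renderTexRTV->Release();
        renderTex->Release();
    }

So a single swap chain is enough; the second "target" is just a texture with its own render target view.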

Is it possible to render to FBO in a headless mode using LWJGL?

I need to develop an app using the Java wrapper for OpenGL, LWJGL. The app will run on a remote server in headless mode. I am trying to understand if and how this is possible, considering the fact that the GL context in LWJGL (and in other APIs) is created via Java UI elements like Canvas etc. In my case I need to be able to init a GL context without creating a window, as the drawing targets will be FBOs from which the pixel buffers will be rendered to texture. There is one possible solution already: the PBuffer (I guess pixel buffer) object in LWJGL. It indeed doesn't need a GL context created via a window, as it creates one internally. I don't want to use this method, both because it is an older (and weaker) concept than the framebuffer object and because I am using OpenGL 3.3, so I really don't want to mix in any legacy pipeline.
I basically have 2 questions:
1. Can I create a context without setting up a window, to use for FBO-based rendering (headless mode)?
2. If the answer to the first question is negative, can I run such an app on the remote server with a window still initialized just for the sake of context access?
UPDATE:
The question can be closed. I tested it by doing the first initialization with PBuffers to set up a context. FBO rendering then works as expected.
I found the answer on my own. One should set up a PBuffer first to create a headless GL context. Once it is created, we can use FBOs to render frames into images.

Can you create OpenGL context without opening a window?

Occasionally I hit places where I'd want to get an OpenGL framebuffer object, but where I'm not interested in opening a window of any kind.
Is it possible to create an OpenGL context without attaching it to a window of any kind?
Yes! You can use the desktop window as the window passed to OpenGL, as long as you don't try to display anything on it ;)
Just call GetDesktopWindow and pass the result as an argument when creating the new OpenGL window.
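One way to read that suggestion in raw Win32/WGL terms is sketched below; note this is a hack, and setting a pixel format on the desktop window's DC may not work on every system:

    #include <windows.h>
    #include <GL/gl.h>

    int main()
    {
        HWND hwnd = GetDesktopWindow();   // existing window; nothing is ever drawn to it
        HDC dc = GetDC(hwnd);

        PIXELFORMATDESCRIPTOR pfd = {};
        pfd.nSize = sizeof(pfd);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

        HGLRC ctx = wglCreateContext(dc);
        wglMakeCurrent(dc, ctx);

        // ... create an FBO here and render into it, never into the window ...

        wglMakeCurrent(nullptr, nullptr);
        wglDeleteContext(ctx);
        ReleaseDC(hwnd, dc);
        return 0;
    }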
http://www.opengl.org/wiki/Creating_an_OpenGL_Context
According to this web page, WGL_ARB_create_context can be used to create a context without a window. I have not actually tried it myself. I used freeGLUT to create the context and then rendered off-screen to a framebuffer + renderbuffer. I exit the program without ever calling glutMainLoop. It is kludgy, but it works for my purposes.
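A minimal sketch of that freeGLUT workflow, assuming freeGLUT and GLEW are available; drawing and image output are left as placeholders:

    #include <GL/glew.h>
    #include <GL/freeglut.h>
    #include <vector>

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
        glutInitWindowSize(1, 1);
        glutCreateWindow("context only");   // only needed to get a current GL context
        glutHideWindow();
        glewInit();

        const int width = 512, height = 512;

        // Off-screen color target: FBO + renderbuffer.
        GLuint fbo = 0, rbo = 0;
        glGenFramebuffers(1, &fbo);
        glGenRenderbuffers(1, &rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, rbo);

        glViewport(0, 0, width, height);
        glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... real drawing would go here ...

        std::vector<unsigned char> pixels(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        // ... write `pixels` to an image file, send it over the network, etc. ...

        return 0;   // never call glutMainLoop()
    }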
Yes, you can perform off-screen rendering with OpenGL, but the exact way to set it up depends on the operating system.
The closest you get to an OS-independent way would be to use Mesa 3D, but then your off-screen rendering would not be hardware-accelerated.
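If you go the Mesa route, its OSMesa interface renders straight into a client-side buffer with no window system at all; a minimal sketch (software rendering, assuming the OSMesa headers and library are installed):

    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <vector>

    int main()
    {
        const int width = 512, height = 512;
        std::vector<unsigned char> buffer(width * height * 4);

        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, nullptr);
        OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height);

        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw; the resulting pixels land directly in `buffer` ...

        glFinish();
        OSMesaDestroyContext(ctx);
        return 0;
    }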