I'm doing double-buffering by creating a render target with its associated depth and stencil buffer, drawing to it, and then drawing a fullscreen, possibly stretched, quad with the back buffer as the texture.
To do this I'm using a CreateTexture() call to create the back buffer, and then a GetSurfaceLevel() call to get the render surface from that texture. This works fine.
However, I'd like to use CreateRenderTarget() directly. It returns a Surface. But then I need a Texture to draw a quad to the front buffer.
The problem is, I can't find a function to get a texture from a surface. I've searched the DX8.1 docs again and again with no luck. Does such a function even exist?
You can create an empty texture matching the size and color format of the surface, then copy the contents of the surface into the texture's surface.
Here is a snippet from my DirectX 9 code, without error handling and other complications. It actually fills the whole mipmap chain.
Note StretchRect, which does the actual copying by stretching the source surface to match the geometry of the destination surface.
IDirect3DSurface9* srcSurface = renderTargetSurface;
IDirect3DTexture9* tex = textureFromRenderTarget;
int levels = tex->GetLevelCount();
for (int i = 0; i < levels; i++)
{
    IDirect3DSurface9* destSurface = 0;
    tex->GetSurfaceLevel(i, &destSurface);
    // Stretch-copy the render-target surface into this mip level.
    pd3dd->StretchRect(srcSurface, NULL, destSurface, NULL, D3DTEXF_LINEAR);
    // GetSurfaceLevel adds a reference, so release it here.
    destSurface->Release();
}
But of course, this is for DirectX 9. For 8.1 you can try CopyRects or Blt.
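For the DX8.1 copy route, a minimal sketch might look like this (untested, no error handling; pDevice8, srcSurface8 and tex8 are placeholder names, and CopyRects requires both surfaces to have the same format and does no stretching):
IDirect3DSurface8* destSurface = 0;
tex8->GetSurfaceLevel(0, &destSurface);
// Passing NULL rect/point arrays with a count of 0 copies the entire surface.
pDevice8->CopyRects(srcSurface8, NULL, 0, destSurface, NULL);
destSurface->Release();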
On DX9 there is ID3DXRenderToSurface, which can use the surface from a texture directly. I am not sure whether that's possible with DX8.1, but the copy method above should work.
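If you go the D3DX route on DX9, a rough sketch (again without error handling, and with illustrative names and formats) could be:
// Create a D3DX helper that renders straight into the texture's top-level surface.
ID3DXRenderToSurface* rts = NULL;
D3DXCreateRenderToSurface(pd3dd, width, height, D3DFMT_A8R8G8B8,
                          TRUE, D3DFMT_D24S8, &rts);
IDirect3DSurface9* texSurface = NULL;
tex->GetSurfaceLevel(0, &texSurface);
rts->BeginScene(texSurface, NULL);   // second argument is an optional viewport
// ... draw the scene here ...
rts->EndScene(D3DX_FILTER_NONE);
texSurface->Release();
rts->Release();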
If backwards compatibility is the reason you're using D3D8, try using SwiftShader instead. http://transgaming.com/business/swiftshader
It's a software implementation of D3D9. You can utilize it when you don't have a video card. It costs about 12k though.
Related
I'm making a game in Libgdx.
The only way I have ever known how to use shaders is to have the batch affect the given textures one after another. This is what I normally do in my code:
shader = new ShaderProgram(Gdx.files.internal("shaders/shader.vert"), Gdx.files.internal("shaders/shader.frag"));
batch.setShader(shader);
And that's about all of the needed code.
Anyway, I do not want this separation between textures. However, I can't find any way to affect the whole screen at once with a shader, as if the whole screen were just one big texture. To me, that seems like the most logical way to use a shader.
So, does anyone know how to do something like this?
Draw all textures (players, actors, landscape, ...) with the same batch. If you also want the background affected by the same shader, draw a static texture the size of the screen behind everything and draw it with that same batch.
This is quite easy with FBO objects; you can get "the whole screen as just one big texture", as you said in your question:
First of all, before any rendering, create your FBO object and begin it:
FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, Width, Height, false);
fbo.begin();
Then do all of your normal rendering:
Gdx.gl.glClearColor(0.2f, 0.2f, 0.2f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
...
Batch b = new SpriteBatch(...
//Whatever rendering code you have
Finally save that FBO into a texture or sprite, do any transformation needed on it, and prepare and use your shader on it.
fbo.end();
SpriteBatch b = new SpriteBatch();
Sprite s = new Sprite(fbo.getColorBufferTexture());
s.flip(false,true); //Coord systems in the buffer and on screen differ
b.setShader(your_shader);
b.begin();
your_shader.setUniformMatrix("u_projTrans",camera.combined); //if you have camera
viewport.apply(); //if you have viewport
b.draw(s,0,0,viewportWidth,viewportHeight);
b.end();
b.setShader(null);
And this is all!
Essentially, what you are doing is rendering all your assets, game scene and stages into a buffer, then saving that buffer image into a texture, and finally rendering that texture with the shader effect you want.
As you may notice, this is highly inefficient, since you are copying your whole screen into a buffer. Also note that some older drivers only support power-of-2 sizes for the FBO, so you may have to keep that in mind; check here for more information on the topic.
So I am still trying to get the same result within OpenGL and DirectX.
Now my problem is using Rendertargets and Viewports.
What I have learned so far:
DirectX's back buffer stretches if the window is resized
-- the size of the rendered texture changes
OpenGL's back buffer is resized if the window is resized
-- the rendered texture stays where it is rendered
Now what I did here was to change OpenGL's viewport to the window size. Now both have the same result: the rendered texture is stretched.
One con:
- OpenGL's viewport size can't be set like in DirectX, because it is tied to the window size.
Now, when rendering to a render target, this happens:
DirectX: size matters; if the render target is larger than the back buffer the texture only takes up a small area, and if it is smaller than the back buffer the texture takes up a large area.
OpenGL: size doesn't matter; the rendered content stays the same / stretches.
Now my question is:
How do I get the same result in OpenGL as in DirectX?
I want the result I have in DirectX within OpenGL. Is my idea of doing it so far right, or is there a better one?
What I did: draw everything in OpenGL into a framebuffer object and blit that to the back buffer. Now the content is rescalable, just like in DirectX.
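For reference, the blit itself is only a few calls. A minimal sketch, assuming the FBO is named fbo and was created at a fixed fboWidth x fboHeight while the window is winWidth x winHeight:
// Read from the offscreen framebuffer, draw into the default framebuffer (the window).
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
// Stretch the fixed-size render target to the current window size,
// which mimics the DirectX back-buffer behaviour on resize.
glBlitFramebuffer(0, 0, fboWidth, fboHeight,
                  0, 0, winWidth, winHeight,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, 0);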
I wrote a program using OpenGL. It does something simple: it draws a teapot. To make it look nice on the screen, I enabled multisample anti-aliasing, and it does look nice. Look at the following bitmap:
But when I save it as a BMP picture, it looks bad. I use an FBO and a PBO to do this. Here is part of my code:
// Create the FBO with a color renderbuffer and a depth renderbuffer.
glGenFramebuffers(1, &m_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);

glGenRenderbuffers(1, &m_renderBufferColor);
glBindRenderbuffer(GL_RENDERBUFFER, m_renderBufferColor);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB,
                      m_subImageWidth, m_subImageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, m_renderBufferColor);

glGenRenderbuffers(1, &m_renderBufferDepth);
glBindRenderbuffer(GL_RENDERBUFFER, m_renderBufferDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT,
                      m_subImageWidth, m_subImageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, m_renderBufferDepth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Create the PBO used to read the pixels back.
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glGenBuffers(1, &m_subImageBuffer);
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_subImageBuffer);
glBufferData(GL_PIXEL_PACK_BUFFER, m_bufferSize,
             NULL, GL_STREAM_READ);

// Read the rendered image back through the PBO and save it.
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_subImageBuffer);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
// Note: read the pixels in BGR order.
glReadPixels(0, 0, m_subImageWidth, m_subImageHeight,
             GL_BGR, GL_UNSIGNED_BYTE, bufferOffset(0));
GLUtils::checkForOpenGLError(__FILE__, __LINE__);

m_subPixels[i] = static_cast<GLubyte*>(glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY));
gltGenBMP(subImageFile, GLT_BGR, m_subImageWidth, m_subImageHeight, m_subPixels[i]);
// Unmap while the PBO is still bound, then unbind it.
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
I am very curious: why do the two differ, rendering to the default framebuffer versus saving to a BMP picture?
Actually, what I want to do is render 9 small bitmaps from 9 adjacent angles and then synthesize one bitmap to display on a stereoscopic 3D screen. But the synthesized bitmap looks bad.
Could someone tell me why?
Just because you enable multisampling on your default framebuffer does not mean your FBO will have it too. You need to use glRenderbufferStorageMultisample when creating the FBO's renderbuffers; a short sketch follows the links below.
See: FBO Blitting is not working
And: http://www.opengl.org/wiki/GL_EXT_framebuffer_multisample
This is also relevant: glReadPixels from FBO fails with multisampling
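A minimal sketch of that approach (the 4x sample count, width/height and resolveFbo are placeholders; resolveFbo is a single-sample FBO built like the one in the question):
// Multisampled FBO: color + depth renderbuffers with 4 samples.
GLuint msaaFbo, msaaColor, msaaDepth;
glGenFramebuffers(1, &msaaFbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);

glGenRenderbuffers(1, &msaaColor);
glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGB8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaColor);

glGenRenderbuffers(1, &msaaDepth);
glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, msaaDepth);

// ... render the scene into msaaFbo ...

// Resolve into the single-sample FBO, because glReadPixels cannot read
// directly from a multisampled framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Then read the pixels from resolveFbo through the PBO as before.
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFbo);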
I am currently trying to draw shadows over a scene in Direct3D 9. I am trying to do some multi-pass rendering and am having trouble understanding how to use/set the blend mode.
I have done a depth pass which fills the depth buffer, and I then have a loop over all the lights in the scene. Within that loop I have two inner loops which both iterate over all the shapes in the scene.
I have this sort of setup:
for (number of shapes)
{
    // render from camera position and fill depth buffer
}
for (number of lights)
{
    for (number of shapes)
    {
        // render to shadow map
    }
    for (number of shapes)
    {
        // render to screen
    }
}
In PIX I can see that it loops through each light, but when I run it only the last light in the light array is displayed. I think it is something to do with the blend mode.
I have looked into blend modes and found information about source and destination blending. Is this what I need? Could someone help explain it, please?
Thanks in advance,
Mark
[EDIT]
I got both lights visible using the following code
hr = device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
hr = device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
hr = device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
The shadows do not look correct but I am getting closer to the desired result.
Any more advice would be great,
Thanks,
Mark
I also think that you have the wrong alpha blend states set. I assume that you are doing a SRC blend, replacing your old image with the new one rather than doing any alpha blending.
You need to think about what you want. I assume that you want SRC_OVER blending.
The Porter-Duff rules can give you a hint, and they are very easy to implement in DirectX. For an example, look here.
[Edit] I should read more carefully.
Alpha blending and pixel shaders are independent, so you can of course use the pixel shader to change the color value of your source, perhaps to add some special effect or whatever you want to try.
But as soon as you want to blend your source over a destination, and you don't want to replace all the pixels with new ones, you need to enable alpha blending, e.g. like this:
hr = device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
hr = device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
hr = device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
This code will multiply all source pixels by their alpha value and all destination pixels by (1 - source alpha), and combine them into the new destination.
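In other words, per pixel the fixed-function blender computes roughly the following (just a sketch of the arithmetic; it all happens in hardware):
// SRCBLEND = SRCALPHA, DESTBLEND = INVSRCALPHA (SRC_OVER):
//   result.rgb = src.rgb * src.a + dest.rgb * (1 - src.a)
// SRCBLEND = ONE, DESTBLEND = ONE (additive, as in your edit):
//   result.rgb = src.rgb + dest.rgb
// which is why ONE/ONE is the usual choice for accumulating several lights.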
I am creating two render targets, both of which must share the back buffer's depth buffer, so it is important that I give them the same multisampling parameters. However, pDevice->CreateTexture(..) does not offer any parameters for setting the multisampling type. So I created two render target surfaces using pDevice->CreateRenderTarget(...), giving the same values as the depth buffer; now the depth buffer works in conjunction with my render targets. However, I am unable to render them over the screen properly, because alpha blending does not work with ->StretchRect (or so I have been told, and it did not work when I tried).
So the title of this question is basically my question. How do I:
- convert a surface to a texture, or
- create a texture with certain multisampling parameters, or
- render a surface properly with an alpha layer?
The documentation for StretchRect specifically explains how to do this:
Using StretchRect to downsample a Multisample Rendertarget
You can use StretchRect to copy from one rendertarget to another. If the source rendertarget is multisampled, this results in downsampling the source rendertarget. For instance you could:
Create a multisampled rendertarget.
Create a second rendertarget of the same size, that is not multisampled.
Copy (using StretchRect) the multisample rendertarget to the second rendertarget.
Note that use of the extra surface involved in using StretchRect to downsample a Multisample Rendertarget will result in a performance hit.
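In code, that documented sequence looks roughly like this (a sketch without error handling; the 4x sample count and the variable names are placeholders):
IDirect3DSurface9* msaaRT = NULL;
IDirect3DSurface9* resolveSurface = NULL;
IDirect3DTexture9* resolveTex = NULL;
// 1. Multisampled render target.
pDevice->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                            D3DMULTISAMPLE_4_SAMPLES, 0, FALSE, &msaaRT, NULL);
// 2. Non-multisampled render target of the same size, created as a texture so it can be drawn later.
pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &resolveTex, NULL);
resolveTex->GetSurfaceLevel(0, &resolveSurface);
// ... render the scene into msaaRT ...
// 3. Downsample (resolve) the multisampled surface into the texture's surface.
pDevice->StretchRect(msaaRT, NULL, resolveSurface, NULL, D3DTEXF_NONE);
// resolveTex can now be set with SetTexture() and drawn with alpha blending.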
So, a new response to an old question, but I came across this and thought I'd supply an answer in case someone else comes across it while running into this problem. Here is the solution, with a stripped-down version of my wrappers and functions for it.
I have a game in which the renderer has several layers, one of which is a geometry layer. When rendering, it iterates over all layers, calling their Draw functions. Each layer has its own instance of my RenderTarget wrapper. When the layer Draws, it "activates" its render target, clears the buffer to alpha, draws the scene, then "deactivates" its render target. After all layers have drawn to their render targets, all of those render targets are then combined onto the backbuffer to produce the final image.
GeometryLayer::Draw
* Activates the render target used by this layer
* Sets needed render states
* Clears the buffer
* Draws geometry
* Deactivates the render target used by this layer
void GeometryLayer::Draw( const math::mat4& viewProjection )
{
    m_pRenderTarget->Activate();

    pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

    pDevice->Clear(0, 0, D3DCLEAR_TARGET, m_clearColor, 1.0, 0);

    pDevice->BeginScene();
    pDevice->Clear(0, 0, D3DCLEAR_ZBUFFER, 0, 1.0, 0);

    for(auto it = m->visibleGeometry.begin(); it != m->visibleGeometry.end(); ++it)
        it->second->Draw(viewProjection);

    pDevice->EndScene();

    m_pRenderTarget->Deactivate();
}
My RenderTarget wrapper contains an IDirect3DTexture9* (m_pTexture) which is used with D3DXCreateTexture to generate the texture to be drawn to. It also contains an IDirect3DSurface9* (m_pSurface) which is given by the texture. It also contains another IDirect3DSurface9* (m_pMSAASurface).
In the initialization of my RenderTarget, there is an option to enable multisampling. If this option is turned off, the m_pMSAASurface is initialized to nullptr. If this option is turned on, the m_pMSAASurface is created for you using the IDirect3DDevice9::CreateRenderTarget function, specifying my current multisampling settings as the 4th and 5th arguments.
RenderTarget::Init
* Creates a texture
* Gets a surface off the texture (adds to surface's ref count)
* If MSAA, creates msaa-enabled surface
void RenderTarget::Init(const int width, const int height, const bool enableMSAA)
{
    m_bEnableMSAA = enableMSAA;

    // Plain (non-multisampled) render-target texture.
    D3DXCreateTexture(pDevice,
        width,
        height,
        1,
        D3DUSAGE_RENDERTARGET,
        D3DFMT_A8R8G8B8,
        D3DPOOL_DEFAULT,
        &m_pTexture
    );
    m_pTexture->GetSurfaceLevel(0, &m_pSurface);

    if(enableMSAA)
    {
        // Separate multisampled surface, using the device's current MSAA settings.
        Renderer::GetInstance()->GetDevice()->CreateRenderTarget(
            width,
            height,
            D3DFMT_A8R8G8B8,
            d3dpp.MultiSampleType,
            d3dpp.MultiSampleQuality,
            false,
            &m_pMSAASurface,
            NULL
        );
    }
}
If this MSAA setting is off, RenderTarget::Activate sets m_pSurface as the render target. If this MSAA setting is on, RenderTarget::Activate sets m_pMSAASurface as the render target and enables the multisampling render state.
RenderTarget::Activate
* Stores the current render target (adds to that surface's ref count)
* If not MSAA, sets surface as the new render target
* If MSAA, sets msaa surface as the new render target, enables msaa render state
void RenderTarget::Activate()
{
    pDevice->GetRenderTarget(0, &m_pOldSurface);
    if(!m_bEnableMSAA)
    {
        pDevice->SetRenderTarget(0, m_pSurface);
    }
    else
    {
        pDevice->SetRenderTarget(0, m_pMSAASurface);
        pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, true);
    }
}
If this MSAA setting is off, RenderTarget::Deactivate simply restores the original render target. If this MSAA setting is on, RenderTarget::Deactivate restores the original render target too, but also copies m_pMSAASurface onto m_pSurface.
RenderTarget::Deactivate
* If MSAA, disables MSAA render state
* Restores the previous render target
* Drops ref counts on the previous render target
void RenderTarget::Deactivate()
{
    if(m_bEnableMSAA)
    {
        pDevice->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, false);
        pDevice->StretchRect(m_pMSAASurface, NULL, m_pSurface, NULL, D3DTEXF_NONE);
    }
    pDevice->SetRenderTarget(0, m_pOldSurface);
    m_pOldSurface->Release();
    m_pOldSurface = nullptr;
}
When the Renderer later asks the geometry layer for its RenderTarget texture in order to combine it with the other layers, that texture has the image copied from m_pMSAASurface on it. Assuming you're using a format with an alpha channel, this texture can be blended with others, as I'm doing with the render targets of several layers.
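For completeness, the final composition step described above can be as simple as drawing each layer's texture over the back buffer with alpha blending, for example with ID3DXSprite (a sketch; layers and GetTexture() are placeholders for however you store and expose the layer render targets):
ID3DXSprite* sprite = NULL;
D3DXCreateSprite(pDevice, &sprite);

pDevice->BeginScene();
sprite->Begin(D3DXSPRITE_ALPHABLEND);   // enables SRCALPHA/INVSRCALPHA blending

for (size_t i = 0; i < layers.size(); ++i)
{
    D3DXVECTOR3 pos(0.0f, 0.0f, 0.0f);
    sprite->Draw(layers[i]->GetTexture(), NULL, NULL, &pos, 0xFFFFFFFF);
}

sprite->End();
pDevice->EndScene();

sprite->Release();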