I'm writing a program on Windows using C++, OpenGL 2.1 and SDL, and am having some issues with vertex colors.
I'm using glColor3f to set the color for each vertex, but it doesn't seem to be working: every vertex is drawn red no matter what color I pick. I checked the values being passed to glColor3f and they are indeed not 1.f, 0.f, 0.f...
Has anyone ever encountered such a problem, or know what might be causing it? Am I maybe not including some lib I should? Or do you reckon it might be an issue with the SDL initialization code (it shouldn't be, as I've used it correctly before)?
EDIT4: Solved. It was indeed a GPU issue; I got the drivers from the manufacturer and it is now working properly... go figure.
EDIT: I'm also not using any lighting, textures or anything of the sort. Graphically I just set up the window and tell OpenGL some vertices and colors to draw.
EDIT2: Sure, but I highly doubt there's any issue there:
int GLmain::redraw(GLvoid)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_LINE_STRIP);
    for(int i = 0; i < s->divs.size(); i++){
        vec3 v = s->divs.at(i).getPosition();
        vec3 c = s->divs.at(i).getColor();
        glColor3f(c.get(0), c.get(1), c.get(2));
        glVertex3f(v.get(0), v.get(1), v.get(2));
    }
    glEnd();
    return TRUE;
}
As you can see, pretty standard stuff. c holds values between 0.0 and 1.0.
I just tried running some other work I had done with OpenGL and everything is showing up red as well (it wasn't before), so I'm guessing it has something to do with the libs I'm using:
opengl32.lib
sdl.lib
sdlmain.lib
glu32.lib
SDL is version 1.2.14. Also, could it be a problem with my GPU? Everything else shows up with normal colors though: web browser, videos, games, etc.
EDIT3: SDL initialization code:
int done = 0;
int w = 800;
int h = 600;
GLenum gl_error;
char* sdl_error;
SDL_Event event;
Init_OpenGL(argc, argv, w, h); // start window and OpenGL
SDL_WM_SetCaption( "MySlinky", "" ); /* Set the window manager title bar */
void Init_OpenGL(int argc, char* argv[], int w, int h)
{
    int rgb_size[3];
    int bpp = 0;
    Uint32 video_flags = SDL_OPENGL;
    if( SDL_Init( SDL_INIT_VIDEO ) < 0 ) {
        fprintf(stderr,"Couldn't initialize SDL: %s\n",SDL_GetError());
        exit( 1 );
    }
    /* See if we should detect the display depth */
    if ( bpp == 0 ) {
        if ( SDL_GetVideoInfo()->vfmt->BitsPerPixel <= 8 ) {
            bpp = 8;
        } else {
            bpp = 16; /* More doesn't seem to work */
        }
    }
    /* Initialize the display */
    switch (bpp) {
        case 8:
            rgb_size[0] = 3;
            rgb_size[1] = 3;
            rgb_size[2] = 2;
            break;
        case 15:
        case 16:
            rgb_size[0] = 5;
            rgb_size[1] = 5;
            rgb_size[2] = 5;
            break;
        default:
            rgb_size[0] = 8;
            rgb_size[1] = 8;
            rgb_size[2] = 8;
            break;
    }
    SDL_GL_SetAttribute( SDL_GL_RED_SIZE, rgb_size[0] );   // Sets bits per channel
    SDL_GL_SetAttribute( SDL_GL_GREEN_SIZE, rgb_size[1] );
    SDL_GL_SetAttribute( SDL_GL_BLUE_SIZE, rgb_size[2] );
    SDL_GL_SetAttribute( SDL_GL_DEPTH_SIZE, 24 );          // 24-bit depth buffer
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
    SDL_GL_SetAttribute( SDL_GL_ACCELERATED_VISUAL, 1 );
    if ( SDL_SetVideoMode( w, h, bpp, video_flags ) == NULL ) {
        fprintf(stderr, "Couldn't set GL mode: %s\n", SDL_GetError());
        SDL_Quit();
        exit(1);
    }
}
There are two possible causes:
Either you have enabled some render state that will eventually result in red vertices. E.g. textures, materials (!?), lighting and so on.
The code "around" your OpenGL calls has bugs (e.g.: are you really sure, that c.get(1) does not return 0?).
EDIT: A third one: setup of the OpenGL rendering context failed, or the context does not work as expected/intended! E.g. it does not have the expected properties, bit depths and so on.
To eliminate any doubt, please add the following line, and check its results: printf("c=(%f,%f,%f)",(float)c.get(0),(float)c.get(1),(float)c.get(2));
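To make both checks concrete, here is a rough sketch of what I would drop into redraw(); none of this is meant as your final code, and the SDL_GL_GetAttribute part assumes the SDL headers are visible in that file:
// Sketch: reset state that could tint the lines, dump the values, verify the context
glDisable(GL_TEXTURE_2D);   // a stale bound texture can modulate the vertex color
glDisable(GL_LIGHTING);     // with lighting on, glColor3f is ignored unless GL_COLOR_MATERIAL is enabled
int rbits = 0, gbits = 0, bbits = 0;
SDL_GL_GetAttribute(SDL_GL_RED_SIZE,   &rbits);  // what the context actually gave you
SDL_GL_GetAttribute(SDL_GL_GREEN_SIZE, &gbits);
SDL_GL_GetAttribute(SDL_GL_BLUE_SIZE,  &bbits);
printf("framebuffer bits: R%d G%d B%d\n", rbits, gbits, bbits);
glBegin(GL_LINE_STRIP);
for(int i = 0; i < s->divs.size(); i++){
    vec3 c = s->divs.at(i).getColor();
    printf("c=(%f,%f,%f)\n", (float)c.get(0), (float)c.get(1), (float)c.get(2));
    // ... glColor3f / glVertex3f as before
}
glEnd();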
Can you try this:
glShadeModel( GL_SMOOTH );
and tell if the problem is fixed?
Put the call in the Init_OpenGL function.
Not sure if this applies to you, but all I needed was one call to glUseProgram(NULL) to tell OpenGL to use the fixed-function pipeline, and the problem was fixed.
Related
Can anyone tell me why my code works on my local machine but not on a remote machine I'm trying to push to?
local driver version: NVIDIA-SMI 460.91.03 Driver Version: 460.91.03
remote driver version: NVIDIA-SMI 435.21 Driver Version: 435.21
When trying to run on remote machine, I keep getting:
QGLFramebufferObject: Framebuffer incomplete attachment.
QGLFramebufferObject: Framebuffer incomplete attachment.
Framebuffer is not valid, may be out of memoryor context is not valid
Could not bind framebuffer
Image passed to GenImage is null!
Framebuffer code:
/*--------------------------------------------------------------------------*/
/*--------------------------------------------------------------------------*/
void GlowFramebuffer::Create( int width , int height )
{
QGLFramebufferObjectFormat format;
if( m_format == GLOW_R )
{
format.setInternalTextureFormat(GL_RED );
m_framebuffer =
QSharedPointer<QGLFramebufferObject>(
new QGLFramebufferObject(width, height, format) );
} else if ( m_attachment == GLOW_DEPTH_STENCIL ) {
format.setAttachment( QGLFramebufferObject::CombinedDepthStencil );
m_framebuffer =
QSharedPointer<QGLFramebufferObject>(
new QGLFramebufferObject(width, height, format) );
}
else // GLOW_RGBA
{
m_framebuffer =
QSharedPointer<QGLFramebufferObject>(
new QGLFramebufferObject(width, height) );
}
SetClearColor( m_clear_color );
}
/*--------------------------------------------------------------------------*/
/*--------------------------------------------------------------------------*/
void GlowFramebuffer::Create( const QSize& size )
{
Create( size.width() , size.height() );
if( !m_framebuffer->isValid() )
{
qCritical() << "Framebuffer is not valid, may be out of memory"
"or context is not valid";
}
}
/*--------------------------------------------------------------------------*/
/*--------------------------------------------------------------------------*/
int GlowFramebuffer::CopyMultiTexture( GlowFilter filter , GlowFormat format )
{
GLint width = m_framebuffer->width();
GLint height = m_framebuffer->height();
GLuint FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
GLenum glfilter = (filter == GLOW_NEAREST) ? GL_NEAREST : GL_LINEAR;
GLenum glformat = (format == GLOW_R ) ? GL_R : GL_RGBA;
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
// "Bind" the newly created texture : all future texture functions will modify this texture
glBindTexture(GL_TEXTURE_2D, renderedTexture);
// Give an empty image to OpenGL ( the last "0" )
glTexImage2D(GL_TEXTURE_2D, 0,glformat, width, height, 0,glformat, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, glfilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, glfilter);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedTexture, 0);
// Set the list of draw buffers.
GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
if( status != GL_FRAMEBUFFER_COMPLETE )
{
qCritical() << "Error with framebuffer!";
}
GLuint handle = m_framebuffer->handle();
GLClear();
glBindFramebuffer( GL_DRAW_FRAMEBUFFER , FramebufferName );
glBindFramebuffer( GL_READ_FRAMEBUFFER , handle );
glDrawBuffer( GL_BACK );
glBlitFramebuffer( 0 , 0 , width , height , 0 , 0 , width , height , GL_COLOR_BUFFER_BIT , GL_NEAREST );
glBindTexture( GL_TEXTURE_2D , 0 );
glBindFramebuffer( GL_FRAMEBUFFER , handle );
glDeleteFramebuffers( 1 , &FramebufferName );
return renderedTexture;
}
I know it's likely because FBOs are specific to each machine and driver. To ensure it's compatible, you need to check the system to make sure the format with which you created your framebuffer is valid.
I think it's failing on the remote machine at this line:
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );
if( status != GL_FRAMEBUFFER_COMPLETE )
{
qCritical() << "Error with framebuffer!";
}
I think the glformat variable is the format that needs adjusting in this call:
glTexImage2D(GL_TEXTURE_2D, 0,glformat, width, height, 0,glformat, GL_UNSIGNED_BYTE, 0);
How do I have it pick the appropriate format to make the FBO on the remote machine?
Code for the image we're trying to attach to the framebuffer:
bool Product::SetImgInfo(GeoRadarProd &prod, const JsonConfig &config)
{
DataInput input(FLAGS_input.c_str());
QString file_name_data = FLAGS_file_name.c_str();
int width = config.GetInt("width");
int height = config.GetInt("height");
if(!input.isValid())
{
LOG(INFO) << "Input is not valid";
return false;
}
QByteArray data = input.getByteArray();
VLOG(3) << "width from config: "<< width;
VLOG(3) << "height from config: "<< height;
VLOG(3) << "data : "<< data.size();
QImage image_data;
image_data.fromData(data, "PNG");
VLOG(3) << "what is file_name_data ???: " << file_name_data.toStdString();
VLOG(3) << "is image_data load???: " << image_data.load(file_name_data, "PNG");
VLOG(3) << "is image_data null???: " << image_data.isNull();
VLOG(3) << "image data width: "<< image_data.width();
VLOG(3) << "image data height: "<< image_data.height();
VLOG(3)<< "Original Format was tif";
VLOG(3)<<"Data Img H: "<< image_data.height()<<" W: "<<image_data.width();
QImage new_image(image_data.width(), image_data.height(), QImage::Format_RGBA8888);
// Format_ARGB32 , Format_RGBA8888_Premultiplied
VLOG(3)<<"New Img H: "<<new_image.height()<<" W: "<<new_image.width();
VLOG(3)<<"Setting img data";
for(int idx = 0; idx < image_data.width(); idx++)
{
for(int idy = 0; idy < image_data.height(); idy++)
{
int index_value = image_data.pixelIndex(idx, idy);
uint color_value;
if(index_value == 0 )
{
color_value = qRgba((int(0)), 0, 0, 0);
}
else
{
//! +1*20 to have a wider spread in the palette
//! and since the values start from 0
// index_value = index_value + 1;
color_value = qRgb(((int)index_value ), 0, 0);
}
new_image.setPixel(idx, idy, color_value);
}
}
const QImage& img = QGLWidget::convertToGLFormat(new_image);
prod.setQImageData(img);
return true;
}
The image format you use is never the cause of this completeness error:
Framebuffer incomplete attachment.
This error means that one of the attachments violates OpenGL's rules on attachment completeness. This is not a problem of the wrong format (per se); it is a code bug.
If you see this on one implementation, you should see it on all implementations running the same FBO setup code. Attachment completeness rules are not allowed to change based on the implementation (with one exception). So if you're seeing this error appear or not appear depending on the implementation, one of the following is happening:
There is a bug in one or more of the implementations. Your code may or may not be following the attachment completeness rules, but implementations are supposed to implement the same rules.
You are not in fact using the same FBO setup logic on the different implementations. Since you're creating a QGLFramebufferObject rather than a proper FBO, a lot of the FBO setup code is out of your hands. So maybe stop using Qt for this.
You are running on implementations that differ significantly in which version of OpenGL they implement. See, while the attachment rules don't allow implementation variance, this is only true for a specific version of OpenGL itself. New versions of the API can change the attachment rules, but backwards compatibility guarantees ensure that code that worked on older versions continues to work on newer ones.
But that only works in one direction: code written against newer OpenGL versions may not be compatible with older implementation versions. Not unless you specifically check the standards and write code that ought to work on all versions you intend to run on.
Again, most of your FBO logic is hidden behind Qt's implementation, so there's no simple way to know what's going on. It would be good to ditch Qt and do it yourself to make sure.
Now, the above assumes that when Qt tells you "QGLFramebufferObject: Framebuffer incomplete attachment", it really means the GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT completeness status. It's possible Qt is lying to you and your problem really is the format.
When making textures meant to be used by FBOs, you should stick to the image formats that are required to work for FBO-attached images. GL_RED is not one of them.
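If you want to take Qt out of the picture for a quick test, a minimal raw-GL sketch along these lines should be complete on any conforming implementation; GL_RGBA8 is one of the formats required to be color-renderable, and width/height plus the qCritical logging are just borrowed from your code:
// Sketch only: a bare FBO with a required color-renderable format, no Qt involved.
GLuint fbo = 0, colorTex = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    qCritical() << "Raw FBO incomplete, status =" << status;
If this raw version is complete on the remote machine while the QGLFramebufferObject one is not, the problem is in what Qt sets up, not in your formats.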
I made a program that has two different states: one is for menu display ("Menu State"), and the other is for drawing some stuff ("Draw State").
But I came across a weird thing: if I load certain PNGs as textures and copy them to the renderer for display, then leave "Menu State" to enter "Draw State", the textures somehow make the rectangle color display improperly (for example, green goes dark).
In my code, switching to a new state (invoking MenuState::onExit()) erases the texture map (a map of texture smart pointers indexed by std::string), so the loaded textures don't even exist in the "Draw State".
I couldn't figure out what went wrong. Here is some of my code:
void TextureManager::DrawPixel(int x, int y, int width, int height, SDL_Renderer *pRenderer)
{
SDL_Rect rect;
rect.x = x;
rect.y = y;
rect.w = width;
rect.h = height;
SDL_SetRenderDrawColor(pRenderer, 0, 255, 0, 255);//same color value
SDL_RenderFillRect(pRenderer, &rect);
}
static bool TextureManagerLoadFile(std::string filename, std::string id)
{
return TextureManager::Instance().Load(filename, id, Game::Instance().GetRenderer());
}
bool TextureManager::Load(std::string filename, std::string id, SDL_Renderer *pRenderer)
{
if(m_textureMap.count(id) != 0)
{
return false;
}
SDL_Surface *pTempSurface = IMG_Load(filename.c_str());
SDL_Texture *pTexutre = SDL_CreateTextureFromSurface(pRenderer, pTempSurface);
SDL_FreeSurface(pTempSurface);
if(pTexutre != 0)
{
m_textureMap[id] = std::make_unique<textureData>(pTexutre, 0, 0);
SDL_QueryTexture(pTexutre, NULL, NULL, &m_textureMap[id]->width, &m_textureMap[id]->height);
return true;
}
return false;
}
void TextureManager::ClearFromTextureMap(std::string textureID)
{
m_textureMap.erase(textureID);
}
bool MenuState::onEnter()
{
if(!TextureManagerLoadFile("assets/Main menu/BTN PLAY.png", "play_button"))
{
return false;
}
if(!TextureManagerLoadFile("assets/Main menu/BTN Exit.png", "exit_button"))
//replace different png file here will also affect the outcome
{
return false;
}
if(!TextureManagerLoadFile("assets/Main menu/BTN SETTINGS.png", "setting_button"))
{
return false;
}
int client_w,client_h;
SDL_GetWindowSize(Game::Instance().GetClientWindow(),&client_w, &client_h);
int playBtn_w = TextureManager::Instance().GetTextureWidth("play_button");
int playBtn_h = TextureManager::Instance().GetTuextureHeight("play_button");
int center_x = (client_w - playBtn_w) / 2;
int center_y = (client_h - playBtn_h) / 2;
ParamsLoader pPlayParams(center_x, center_y, playBtn_w, playBtn_h, "play_button");
int settingBtn_w = TextureManager::Instance().GetTextureWidth("setting_button");
int settingBtn_h = TextureManager::Instance().GetTuextureHeight("setting_button");
ParamsLoader pSettingParams(center_x , center_y + (playBtn_h + settingBtn_h) / 2, settingBtn_w, settingBtn_h, "setting_button");
int exitBtn_w = TextureManager::Instance().GetTextureWidth("exit_button");
int exitBtn_h = TextureManager::Instance().GetTuextureHeight("exit_button");
ParamsLoader pExitParams(10, 10, exitBtn_w, exitBtn_h, "exit_button");
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pPlayParams, s_menuToPlay));
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pSettingParams, s_menuToPlay));
m_gameObjects.push_back(std::make_shared<MenuUIObject>(&pExitParams, s_menuExit));
//change order of the 3 line code above
//or replace different png for exit button, will make the rectangle color different
std::cout << "Entering Menu State" << std::endl;
return true;
}
bool MenuState::onExit()
{
for(auto i : m_gameObjects)
{
i->Clean();
}
m_gameObjects.clear();
TextureManager::Instance().ClearFromTextureMap("play_button");
TextureManager::Instance().ClearFromTextureMap("exit_button");
TextureManager::Instance().ClearFromTextureMap("setting_button");
std::cout << "Exiting Menu State" << std::endl;
return true;
}
void Game::Render()
{
SDL_SetRenderDrawColor(m_pRenderer, 255, 255, 255, 255);
SDL_RenderClear(m_pRenderer);
m_pGameStateMachine->Render();
SDL_RenderPresent(m_pRenderer);
}
(Screenshots: menu state, correct color, wrong color)
EDIT: Also, I found out that this weird phenomenon only happens when the renderer is created with the SDL_RENDERER_ACCELERATED flag and a driver index of -1 or 0. For example, SDL_CreateRenderer(m_pWindow, 1, SDL_RENDERER_ACCELERATED); or SDL_CreateRenderer(m_pWindow, -1, SDL_RENDERER_SOFTWARE); works fine!
I have been plagued by this very same issue. The link provided by ekodes is how I resolved it, as order of operations had no effect for me.
I was able to pull the d3d9 device via SDL_RenderGetD3D9Device(), then call SetTextureStageState as described in ekodes' d3d blending link.
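For reference, this is roughly what that looked like for me; the stage-state values came from the linked DirectX answer, so treat them as a starting point rather than the official SDL way (pRenderer is the same SDL_Renderer your draw code uses):
#include <SDL_system.h>   // SDL_RenderGetD3D9Device, SDL 2.0.1+, Windows only
#include <d3d9.h>
// Sketch: take the stale texture out of the color equation so untextured
// primitives use only the diffuse (draw) color.
IDirect3DDevice9* d3dDevice = SDL_RenderGetD3D9Device(pRenderer);
if (d3dDevice) {
    d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG2);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
}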
I was having the same issue. I got a vibrant green color when trying to render a light gray.
The combination of parameters that fixes the issue for you pertains to the driver being used. -1 selects the first driver that meets the criteria; in this case it needs to be accelerated.
Using SDL_GetRendererInfo I was able to see this happens when using the "direct3d" driver.
I found this question talking about blending in direct3d.
I figured it out eventually. In addition to Alpha Blending there is a Color Blending. So DirectX merges color of the last texture with the last primitive.
The question does provide a fix for this in DirectX; however, I'm not sure how to apply it with SDL. I also have not been able to find a solution for this problem in SDL.
I was drawing Green text with SDL_ttf, which uses a texture. Then drawing a gray rectangle for another component elsewhere on the screen.
What's strange is that it doesn't seem to happen all the time; mine predominantly happens with SDL_ttf. At first I thought it might be a byproduct of TTF_RenderText_Blended, but it happens with the other render functions as well. It also does not appear to be affected by the blend mode of the texture generated by those functions.
So in my case, I was able to change the order of the operations to get the correct color.
Alternatively, using the OpenGL driver appeared to fix this as well, similar to what you mentioned (this was driver index 1 for me).
I'm not sure this classifies as an "Answer" but hopefully it helps someone out or points them in the right direction.
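If it helps anyone, this is roughly how I checked which backend SDL picked and how to request the OpenGL one without hard-coding a driver index (sketch only; m_pWindow is the window from the question):
// Sketch: ask for the opengl backend via a hint, then confirm what we actually got.
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");   // must be set before SDL_CreateRenderer
SDL_Renderer* renderer = SDL_CreateRenderer(m_pWindow, -1, SDL_RENDERER_ACCELERATED);
SDL_RendererInfo info;
if (SDL_GetRendererInfo(renderer, &info) == 0)
    std::cout << "renderer driver: " << info.name << std::endl;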
I tried to render my surface using the alpha channel, but when I set an alpha value, it renders with random colors and is not semi-transparent.
// Init memory
Q3DSurface *poSurface = new Q3DSurface();
QSurface3DSeries *poSeries = new QSurface3DSeries();
QSurfaceDataArray *poDataArray = new QSurfaceDataArray();
// Generating test surface series
for ( int i = 0, k = 0; i < 10; ++i)
{
QSurfaceDataRow *poRow = new QSurfaceDataRow();
for ( int j = 0; j < 10; ++j )
{
float x = j;
float y = i;
float z = k;
poRow->append( QSurfaceDataItem( QVector3D( x, y, z ) ) );
}
poDataArray->append( poRow );
if ( i % 2 == 0 )
{
++k;
}
}
//
poSeries->dataProxy()->resetArray( poDataArray );
poSurface->addSeries( poSeries );
// Setting color with alpha value
poSeries->setBaseColor( QColor( 100, 100, 100, 100 ));
// Show surface widget
QWidget *poWidget = QWidget::createWindowContainer( poSurface );
poWidget->setWindowTitle( "test ");
poWidget->show();
What am I doing wrong?
I'm not sure what you mean by "random colours", but at a guess, are you accounting for the default lighting? The effect of the 3D lighting can make colours look different from what they are explicitly set to.
With regard to your transparency setting, I think this code looks fine. You are setting the RGBA values as R=100, G=100, B=100, A=100 which will produce a grey colour. Under the default light this may look like light/dark patches because of the function you have graphed and the way the light "bounces" off the edges.
Try changing your code slightly to see if this is really what is happening:
poSeries->dataProxy()->resetArray( poDataArray );
poSurface->addSeries( poSeries );
//PICK A DARK THEME THAT WILL HELP TO ILLUSTRATE THE EFFECT
poSurface->activeTheme()->setType(Q3DTheme::ThemeEbony);
//TURN THE AMBIENT LIGHTING UP TO FULL
poSurface->activeTheme()->setAmbientLightStrength(1.0f);
// Setting color with alpha value
//SET IT TO RED WITH A FULL ALPHA CHANNEL
poSeries->setBaseColor( QColor( 100, 0, 0, 255 ));
// Show surface widget
QWidget *poWidget = QWidget::createWindowContainer( poSurface );
poWidget->setWindowTitle( "test ");
poWidget->show();
This should produce a dark red image of your graph with a dark background (just to make things clearer). Now put the alpha value back to what you wanted originally and you will see what effect this has on the colouring:
// Setting color with alpha value: "washed out" red colour
poSeries->setBaseColor( QColor( 100, 0, 0, 100 ));
You can probably see that it is the colour (rather than the mesh) that is being rendered at the transparency setting set through "setBaseColor".
Unfortunately I cannot tell you how to render transparently the Q3DSurface itself, but I hope that helps a little.
I am fairly new to DirectX 10 programming, and I have been trying to do the following with my limited skills (though I have a strong background with OpenGL).
I am trying to display 2 different textured quads, 1 per monitor. To do so, I understood that I need a single D3D10 device, multiple (2) swap chains, and a single vertex buffer.
While I think I'm able to create all of those, I'm still pretty unsure how to handle all of them. Do I need multiple ID3D10RenderTargetView(s)? How and where should I use OMSetRenderTargets(...)?
Other than MSDN, documentation or explanation of those concepts is rather limited, so any help would be very welcome. Here is some code I have:
Here's the rendering code
for(int i = 0; i < screenNumber; i++){
//clear scene
pD3DDevice->ClearRenderTargetView( pRenderTargetView, D3DXCOLOR(0,1,0,0) );
//fill vertex buffer with vertices
UINT numVertices = 4;
vertex* v = NULL;
//lock vertex buffer for CPU use
pVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**) &v );
v[0] = vertex( D3DXVECTOR3(-1,-1,0), D3DXVECTOR4(1,0,0,1), D3DXVECTOR2(0.0f, 1.0f) );
v[1] = vertex( D3DXVECTOR3(-1,1,0), D3DXVECTOR4(0,1,0,1), D3DXVECTOR2(0.0f, 0.0f) );
v[2] = vertex( D3DXVECTOR3(1,-1,0), D3DXVECTOR4(0,0,1,1), D3DXVECTOR2(1.0f, 1.0f) );
v[3] = vertex( D3DXVECTOR3(1,1,0), D3DXVECTOR4(1,1,0,1), D3DXVECTOR2(1.0f, 0.0f) );
pVertexBuffer->Unmap();
// Set primitive topology
pD3DDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP );
//set texture
pTextureSR->SetResource( textureSRV[textureIndex] );
//get technique desc
D3D10_TECHNIQUE_DESC techDesc;
pBasicTechnique->GetDesc( &techDesc );
// This is where you actually use the shader code
for( UINT p = 0; p < techDesc.Passes; ++p )
{
//apply technique
pBasicTechnique->GetPassByIndex( p )->Apply( 0 );
//draw
pD3DDevice->Draw( numVertices, 0 );
}
//flip buffers
pSwapChain[i]->Present(0,0);
}
And here's the code for creating rendering targets, which I am not sure is good
for(int i = 0; i < screenNumber; ++i){
//try to get the back buffer
ID3D10Texture2D* pBackBuffer;
if ( FAILED( pSwapChain[1]->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*) &pBackBuffer) ) ) return fatalError("Could not get back buffer");
//try to create render target view
if ( FAILED( pD3DDevice->CreateRenderTargetView(pBackBuffer, NULL, &pRenderTargetView) ) ) return fatalError("Could not create render target view");
pBackBuffer->Release();
pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, NULL);
}
return true;
}
I hope I got the gist of what you wish to do - render different content on two different monitors while using a single graphics card (graphics adapter) which maps its output to those monitors. For that, you're going to need one device (for the single graphics card/adapter) and enumerate just how many outputs there are at the user's machine.
So, in total - that means one device, two outputs, two windows and therefore - two swap chains.
Here's a quick result of my little experiment (screenshot omitted).
A little introduction
With DirectX 10+, this falls into the DXGI (DirectX Graphics Infrastructure) which manages the common low-level logistics involved with DirectX 10+ development which, as you probably know, dumped the old requirement of enumerating feature sets and the like - requiring every DX10+ capable card to share in on all of the features defined by the API. The only thing that varies is the extent and capability of the card (in other words, lousy performance is preferable to the app crashing and burning). This was all within DirectX 9 in the past, but people at Microsoft decided to pull it out and call it DXGI. Now, we can use DXGI functionality to set up our multi monitor environment.
Do I need multiple ID3D10RenderTargetView(s)?
Yes, you do need multiple render target views; the count depends (like the swap chains and windows) on the number of monitors you have. But, to save on spewing words, let's write it out as simply as possible, with additional information where it's needed:
Enumerate all adapters available on the system.
For each adapter, enumerate all outputs available (and active) and create a device to accompany it.
With the enumerated data stored in a suitable structure (think arrays which can quickly relinquish size information), use it to create n windows, swap chains, render target views, depth/stencil textures and their respective views where n is equal to the number of outputs.
With everything created, for each window you are rendering into, you can define special routines which will use the available geometry (and other) data to output your results - which resolves to what each monitor gets in fullscreen (don't forget to adjust the viewport for every window accordingly).
Present your data by iterating over every swap chain which is linked to its respective window and swap buffers with Present()
Now, while this is rich in word count, some code is worth a lot more. This is designed to give you a coarse idea of what goes into implementing a simple multimonitor application. So, assumptions are that there is only one adapter ( a rather bold statement nowadays ) and multiple outputs - and no failsafes. I'll leave the fun part to you. Answer to the second question is downstairs...
Do note there's no memory management involved. We assume everything magically gets cleaned up when it is not needed for illustration purposes. Be a good memory citizen.
Getting the adapter
IDXGIAdapter* adapter = NULL;
void GetAdapter() // applicable for multiple ones with little effort
{
    // remember, we assume there's only one adapter (example purposes)
    for( int i = 0; DXGI_ERROR_NOT_FOUND != factory->EnumAdapters( i, &adapter ); ++i )
    {
        // get the description of the adapter, assuming no failure
        DXGI_ADAPTER_DESC adapterDesc;
        HRESULT hr = adapter->GetDesc( &adapterDesc );
        // Getting the outputs active on our adapter
        EnumOutputsOnAdapter();
    }
}
Acquiring the outputs on our adapter
std::vector<IDXGIOutput*> outputArray; // contains outputs per adapter
void EnumOutputsOnAdapter()
{
    IDXGIOutput* output = NULL;
    for(int i = 0; DXGI_ERROR_NOT_FOUND != adapter->EnumOutputs(i, &output); ++i)
    {
        // get the description
        DXGI_OUTPUT_DESC outputDesc;
        HRESULT hr = output->GetDesc( &outputDesc );
        outputArray.push_back( output );
    }
}
Now, I must assume that you're at least aware of the Win32 API considerations: creating window classes, registering them with the system, creating windows, etc. Therefore, I will not cover window creation itself, only elaborate on how it pertains to multiple windows. Also, I will only consider the fullscreen case here, but creating it in windowed mode is more than possible and rather trivial.
Creating the actual windows for our outputs
Since we assume existence of just one adapter, we only consider the enumerated outputs linked to that particular adapter. It would be preferable to organize all window data in neat little structures, but for the purposes of this answer, we'll just shove them into a simple struct and then into yet another std::vector object, and by them I mean handles to respective windows (HWND) and their size (although for our case it's constant).
But still, we have to address the fact that we have one swap chain, one render target view, one depth/stencil view per window. So, why not feed all of that in that little struct which describes each of our windows? Makes sense, right?
struct WindowDataContainer
{
    // Direct3D 10 stuff per window data
    IDXGISwapChain* swapChain;
    ID3D10RenderTargetView* renderTargetView;
    ID3D10DepthStencilView* depthStencilView;
    // window goodies
    HWND hWnd;
    int width;
    int height;
};
Nice. Well, not really. But still... Moving on! Now to create the windows for outputs:
std::vector<WindowDataContainer*> windowsArray;
void CreateWindowsForOutputs()
{
for( int i = 0; i < outputArray.size(); ++i )
{
IDXGIOutput* output = outputArray.at(i);
DXGI_OUTPUT_DESC outputDesc;
output->GetDesc( &outputDesc );
int x = outputDesc.DesktopCoordinates.left;
int y = outputDesc.DesktopCoordinates.top;
int width = outputDesc.DesktopCoordinates.right - x;
int height = outputDesc.DesktopCoordinates.bottom - y;
// Don't forget to clean this up. And all D3D COM objects.
WindowDataContainer* window = new WindowDataContainer;
window->hWnd = CreateWindow( windowClassName,
windowName,
WS_POPUP,
x,
y,
width,
height,
NULL,
0,
instance,
NULL );
// show the window
ShowWindow( window->hWnd, SW_SHOWDEFAULT );
// set width and height
window->width = width;
window->height = height;
// shove it in the std::vector
windowsArray.push_back( window );
//if first window, associate it with DXGI so it can jump in
// when there is something of interest in the message queue
// think fullscreen mode switches etc. MSDN for more info.
if(i == 0)
factory->MakeWindowAssociation( window->hWnd, 0 );
}
}
Cute, now that's done. Since we only have one adapter and therefore only one device to accompany it, create it as usual. In my case, it's simply a global interface pointer which can be accessed all over the place. We are not going for code of the year here, so why the hell not, eh?
Creating the swap chains, views and the depth/stencil 2D texture
Now, our friendly swap chains... You might be used to actually creating them by invoking the "naked" function D3D10CreateDeviceAndSwapChain(...), but as you know, we've already made our device. We only want one. And multiple swap chains. Well, that's a pickle. Luckily, our DXGIFactory interface has swap chains on its production line which we can receive for free with complementary kegs of rum. Onto the swap chains then, create for every window one:
void CreateSwapChainsAndViews()
{
for( int i = 0; i < windowsArray.size(); i++ )
{
WindowDataContainer* window = windowsArray.at(i);
// get the dxgi device
IDXGIDevice* DXGIDevice = NULL;
device->QueryInterface( IID_IDXGIDevice, ( void** )&DXGIDevice ); // COM stuff, hopefully you are familiar
// create a swap chain
DXGI_SWAP_CHAIN_DESC swapChainDesc;
// fill it in
HRESULT hr = factory->CreateSwapChain( DXGIDevice, &swapChainDesc, &window->swapChain );
DXGIDevice->Release();
DXGIDevice = NULL;
// get the backbuffer
ID3D10Texture2D* backBuffer = NULL;
hr = window->swapChain->GetBuffer( 0, IID_ID3D10Texture2D, ( void** )&backBuffer );
// get the backbuffer desc
D3D10_TEXTURE2D_DESC backBufferDesc;
backBuffer->GetDesc( &backBufferDesc );
// create the render target view
D3D10_RENDER_TARGET_VIEW_DESC RTVDesc;
// fill it in
device->CreateRenderTargetView( backBuffer, &RTVDesc, &window->renderTargetView );
backBuffer->Release();
backBuffer = NULL;
// Create depth stencil texture
ID3D10Texture2D* depthStencil = NULL;
D3D10_TEXTURE2D_DESC descDepth;
// fill it in
device->CreateTexture2D( &descDepth, NULL, &depthStencil );
// Create the depth stencil view
D3D10_DEPTH_STENCIL_VIEW_DESC descDSV;
// fill it in
device->CreateDepthStencilView( depthStencil, &descDSV, &window->depthStencilView );
}
}
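For completeness, here is one plausible way to fill in the swap chain description that the // fill it in comment above leaves open; the format, buffer count and refresh rate are assumptions you should adapt to your own needs:
// Sketch of a per-window swap chain description (fullscreen on that output).
DXGI_SWAP_CHAIN_DESC swapChainDesc;
ZeroMemory( &swapChainDesc, sizeof( swapChainDesc ) );
swapChainDesc.BufferDesc.Width = window->width;
swapChainDesc.BufferDesc.Height = window->height;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;   // assumption; query the output for the real mode
swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapChainDesc.SampleDesc.Count = 1;                    // no MSAA
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 1;
swapChainDesc.OutputWindow = window->hWnd;
swapChainDesc.Windowed = FALSE;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;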
We now have everything we need. All that you need to do is define a function which iterates over all windows and draws different stuff appropriately.
How and where should I use OMSetRenderTargets(...)?
In the just mentioned function which iterates over all windows and uses the appropriate render target (courtesy of our per-window data container):
void MultiRender( )
{
// Clear them all
for( int i = 0; i < windowsArray.size(); i++ )
{
WindowDataContainer* window = windowsArray.at(i);
// There is the answer to your second question:
device->OMSetRenderTargets( 1, &window->renderTargetView, window->depthStencilView );
// Don't forget to adjust the viewport, in fullscreen it's not important...
D3D10_VIEWPORT Viewport;
Viewport.TopLeftX = 0;
Viewport.TopLeftY = 0;
Viewport.Width = window->width;
Viewport.Height = window->height;
Viewport.MinDepth = 0.0f;
Viewport.MaxDepth = 1.0f;
device->RSSetViewports( 1, &Viewport );
// TO DO: AMAZING STUFF PER WINDOW
}
}
Of course, don't forget to run through all the swap chains and swap their buffers on a per-window basis. The code here is just for the purposes of this answer; it requires a bit more work, error checking (failsafes) and contemplation to get it working just the way you like it - in other words - it should give you a simplified overview, not a production solution.
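Something like this, reusing the same per-window containers (sketch only):
void PresentAll()
{
    for( int i = 0; i < windowsArray.size(); i++ )
    {
        WindowDataContainer* window = windowsArray.at(i);
        window->swapChain->Present( 0, 0 );   // swap buffers for this window
    }
}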
Good luck and happy coding! Sheesh, this is huge.
I am making an application that does some custom image processing. The program will be driven by a simple menu in the console. The user will input the filename of an image, and that image will be displayed using openGL in a window. When the user selects some processing to be done to the image, the processing is done, and the openGL window should redraw the image.
My problem is that my image is never drawn to the window; instead, the window is always black. I think it may have to do with the way I am organizing the threads in my program. The main execution thread handles the menu input/output and the image processing and makes calls to the Display method, while a second thread runs the OpenGL main loop.
Here is my main code:
#include <iostream>
#include <GL/glut.h>
#include "ImageProcessor.h"
#include "BitmapImage.h"
using namespace std;
DWORD WINAPI openglThread( LPVOID param );
void InitGL();
void Reshape( GLint newWidth, GLint newHeight );
void Display( void );
BitmapImage* b;
ImageProcessor ip;
int main( int argc, char *argv[] ) {
DWORD threadID;
b = new BitmapImage();
CreateThread( 0, 0, openglThread, NULL, 0, &threadID );
while( true ) {
char choice;
string path = "TestImages\\";
string filename;
cout << "Enter filename: ";
cin >> filename;
path += filename;
b = new BitmapImage( path );
Display();
cout << "1) Invert" << endl;
cout << "2) Line Thin" << endl;
cout << "Enter choice: ";
cin >> choice;
if( choice == '1' ) {
ip.InvertColour( *b );
}
else {
ip.LineThinning( *b );
}
Display();
}
return 0;
}
void InitGL() {
int argc = 1;
char* argv[1];
argv[0] = new char[20];
strcpy( argv[0], "main" );
glutInit( &argc, argv );
glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowPosition( 0, 0 );
glutInitWindowSize( 800, 600 );
glutCreateWindow( "ICIP Program - Character recognition using line thinning, Hilbert curve, and wavelet approximation" );
glutDisplayFunc( Display );
glutReshapeFunc( Reshape );
glClearColor(0.0,0.0,0.0,1.0);
glEnable(GL_DEPTH_TEST);
}
void Reshape( GLint newWidth, GLint newHeight ) {
/* Reset viewport and projection parameters */
glViewport( 0, 0, newWidth, newHeight );
}
void Display( void ) {
glClear (GL_COLOR_BUFFER_BIT); // Clear display window.
b->Draw();
glutSwapBuffers();
}
DWORD WINAPI openglThread( LPVOID param ) {
InitGL();
glutMainLoop();
return 0;
}
Here is my draw method for BitmapImage:
void BitmapImage::Draw() {
    cout << "Drawing" << endl;
    if( _loaded ) {
        glBegin( GL_POINTS );
        for( unsigned int i = 0; i < _height * _width; i++ ) {
            glColor3f( _bitmap_image[i*3] / 255.0, _bitmap_image[i*3+1] / 255.0, _bitmap_image[i*3+2] / 255.0 );
            // invert the y-axis while drawing
            glVertex2i( i % _width, _height - (i / _width) );
        }
        glEnd();
    }
}
Any ideas as to the problem?
Edit: The problem was technically solved by starting a glutTimer from the openglThread which calls glutPostRedisplay() every 500 ms. This is OK for now, but I would prefer a solution where I only redisplay when I actually change the bitmap (to save on processing time), and one where I don't have to run another thread (I'm assuming the timer is another thread). This is mainly because the main processing thread is going to be doing a lot of intensive work, and I would like to dedicate most of the resources to that thread rather than anything else.
I've had this problem before - it's pretty annoying. The problem is that all of your OpenGL calls must be done in the thread where you started the OpenGL context. So when you want your main (input) thread to change something in the OpenGL thread, you need to somehow signal to the thread that it needs to do stuff (set a flag or something).
Note: I don't know what your BitmapImage loading function (here, your constructor) does, but it probably has some OpenGL calls in it. The above applies to that too! So you'll need to signal to the other thread to create a BitmapImage for you, or at least to do the OpenGL-related part of creating the bitmap.
A few points:
Generally, if you're going the multithreaded route, it's preferable if your main thread is your GUI thread i.e. it does minimal tasks keeping the GUI responsive. In your case, I would recommend moving the intensive image processing tasks into a thread and doing the OpenGL rendering in your main thread.
For drawing your image, you're using individual vertices instead of a textured quad. Unless you have a very good reason, it's much faster to use a single textured quad (with the processed image as the texture). Check out glTexImage2D and glTexSubImage2D; a rough sketch follows after these points.
Rendering at a framerate of 2fps (500ms, as you mentioned) will have negligible impact on resources if you're using an OpenGL implementation that is accelerated, which is almost guaranteed on any modern system, and if you use a textured quad instead of a vertex per pixel.
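To make the textured-quad point concrete, here is a rough sketch using your existing _bitmap_image / _width / _height members and plain GL 1.x calls; the orientation, filtering and alignment choices are assumptions to adapt:
// Sketch: upload the bitmap once, update it after each processing step, draw one quad.
GLuint g_imageTex = 0;
void CreateImageTexture(const unsigned char* pixels, int w, int h) {
    glGenTextures(1, &g_imageTex);
    glBindTexture(GL_TEXTURE_2D, g_imageTex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);               // rows are tightly packed RGB
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
void UpdateImageTexture(const unsigned char* pixels, int w, int h) {
    glBindTexture(GL_TEXTURE_2D, g_imageTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
void DrawImageQuad() {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, g_imageTex);
    glBegin(GL_QUADS);                                   // fills the window with default matrices
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);   // flip V to invert the y-axis
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
On older hardware the texture may need power-of-two dimensions; if so, round the texture size up and only update/draw the sub-region.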
Your problem may be in Display() at the line
b->Draw();
I don't see where b is passed into the scope of Display().
You need to make OpenGL calls on the thread in which the context was created (glutInitDisplayMode). Hence the glXX calls inside the Display method, which runs on a different thread, will not be defined. You can see this easily by dumping the function address; it will hopefully be undefined or NULL.
It sounds like the 500 ms timer is calling Display() regularly; after two calls it has filled the back buffer and the front buffer with the same rendering. Display() continues to be called until the user enters something, which the OpenGL thread never knows about, but since the global variable b is now different, the thread blindly uses it in Display().
So how about doing what Jesse Beder says and using a global int, call it flag, to signal when the user has entered something. For example:
set flag = 1; after you do the b = new BitmapImage( path );
then set flag = 0; after you call Display() from the OpenGL thread.
You loop on the timer, but now check whether flag == 1. You only need to call glutPostRedisplay() when flag == 1, i.e. when the user entered something.
Seems like a good way to do it without a sleep/wake mechanism. Accessing global variables from more than one thread can also be unsafe. I think the worst that can happen here is the OpenGL thread misreads flag as 0 when it should read 1; it should then catch it after no more than a few iterations. If you get strange behavior, go to synchronization.
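To sketch that out with the glutTimer from the edit (names are made up; I've used std::atomic to sidestep the miss-read concern, but a plain int in the spirit of the above would mostly work too):
#include <atomic>
std::atomic<int> g_imageDirty(0);
// main (input) thread: after b = new BitmapImage( path ) or after processing, do
//     g_imageDirty = 1;
// OpenGL thread: register once before glutMainLoop() with
//     glutTimerFunc(500, TimerFunc, 0);
void TimerFunc(int /*value*/) {
    if (g_imageDirty.exchange(0) == 1)   // consume the flag set by the input thread
        glutPostRedisplay();             // only redraw when the bitmap actually changed
    glutTimerFunc(500, TimerFunc, 0);    // re-arm the timer
}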
With the code you show, you call Display() twice in main(). Actually, main() doesn't even need to call Display(), the OpenGL thread does it.