Drawing a rectangle in DirectX - C++

Using the tutorial here, I have managed to get a red triangle up on my screen: http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-4
CUSTOMVERTEX OurVertices[] =
{
    { 0,     0,   0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) },
    { WIDTH, 0,   0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) },
    { 0,     300, 0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) },
    { WIDTH, 300, 0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) }
};

d3dDevice->CreateVertexBuffer(3 * sizeof(CUSTOMVERTEX),
                              0,
                              CUSTOMFVF,
                              D3DPOOL_MANAGED,
                              &vBuffer,
                              NULL);

VOID* pVoid;                                      // pointer used for locking the buffer
vBuffer->Lock(0, 0, (void**)&pVoid, 0);           // lock vBuffer, the buffer made earlier
memcpy(pVoid, OurVertices, sizeof(OurVertices));  // copy vertices into the vertex buffer
vBuffer->Unlock();                                // unlock vBuffer

d3dDevice->SetFVF(CUSTOMFVF);
d3dDevice->SetStreamSource(0, vBuffer, 0, sizeof(CUSTOMVERTEX));
d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);
But you can see that I really want to be drawing a rectangle.
I have changed the primitive to draw two triangles and extended the buffer size to 4 * sizeof(CUSTOMVERTEX), but I can't really say I understand how to get from my triangle to the rectangle I would like.
Is there a better way of drawing a rectangle than using a quad, considering I just want to sling some text on top of it, something like this:
http://1.bp.blogspot.com/-6HjFVnrVM94/TgRq8oP4U-I/AAAAAAAAAKk/i8N0OZU999E/s1600/monkey_island_screen.jpg

I had to extend my buffer to allow for the 4-vertex array:
d3dDevice->CreateVertexBuffer(4 * sizeof(CUSTOMVERTEX),
                              0,
                              CUSTOMFVF,
                              D3DPOOL_MANAGED,
                              &vBuffer,
                              NULL);
And then changed the draw primitive from TRIANGLELIST to TRIANGLESTRIP, extending the number of triangles drawn to 2:
d3dDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
Source: http://www.mdxinfo.com/tutorials/tutorial4.php
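Putting both changes together, here is a minimal sketch (assuming the same CUSTOMVERTEX layout, CUSTOMFVF, and WIDTH as above). With this vertex ordering the strip emits triangles (0,1,2) and (1,3,2), which together cover the rectangle:
d3dDevice->CreateVertexBuffer(4 * sizeof(CUSTOMVERTEX), 0, CUSTOMFVF,
                              D3DPOOL_MANAGED, &vBuffer, NULL);

VOID* pVoid;
vBuffer->Lock(0, 0, (void**)&pVoid, 0);
memcpy(pVoid, OurVertices, sizeof(OurVertices)); // all four vertices this time
vBuffer->Unlock();

d3dDevice->SetFVF(CUSTOMFVF);
d3dDevice->SetStreamSource(0, vBuffer, 0, sizeof(CUSTOMVERTEX));
d3dDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2); // two triangles = one rectangle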

Related

Register a texture array from OpenGL to CUDA

I want to register a texture array created with OpenGL to CUDA. For that I simply use the interoperability function cudaGraphicsGLRegisterImage (see the CUDA documentation):
void registerTextureInCUDA()
{
    // _textureDepth = 2 here
    GLenum target = _textureDepth < 2 ? GL_TEXTURE_2D : GL_TEXTURE_2D_ARRAY;
    GLuint texture = 0;
    GLsizei width = 2;
    GLsizei height = 2;
    GLsizei layerCount = 2;
    GLsizei mipLevelCount = 1;
    // Read your texels here. In the current example, we have 2*2*2 = 8 texels,
    // with each texel being 4 GLubytes.
    GLubyte texels[32] =
    {
        // Texels for first image.
        0,   0,   0,   255,
        255, 0,   0,   255,
        0,   255, 0,   255,
        0,   0,   255, 255,
        // Texels for second image.
        255, 255, 255, 255,
        255, 255, 0,   255,
        0,   255, 255, 255,
        255, 0,   255, 255,
    };
    glGenTextures(1, &texture);
    glBindTexture(target, texture);
    // No error after this call
    GL_CHECK();
    CUDA_CHECK(cudaGraphicsGLRegisterImage(&_pGraphicsResource, texture, target,
                                           cudaGraphicsRegisterFlagsWriteDiscard));
}
For a simple GL_TEXTURE_2D I get no error and I can write into the texture normally, but with GL_TEXTURE_2D_ARRAY I get the following error:
Cuda error: 1 invalid argument
In the CUDA documentation this return code does not seem to be expected for this call.
What argument could be the cause here?
I found the problem. I didn't allocate storage for the texture; I just needed to add this line before registering the texture in CUDA:
glTexStorage3D(target, mipLevelCount, GL_RGBA8, width, height, layerCount);
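For completeness, a minimal sketch of the fixed function body (reusing the variables and CHECK macros from the question; the storage allocation has to come before the register call):
glGenTextures(1, &texture);
glBindTexture(target, texture);
// Allocate immutable storage first: registering a texture that has no storage
// is what made cudaGraphicsGLRegisterImage fail with "invalid argument".
glTexStorage3D(target, mipLevelCount, GL_RGBA8, width, height, layerCount);
GL_CHECK();
CUDA_CHECK(cudaGraphicsGLRegisterImage(&_pGraphicsResource, texture, target,
                                       cudaGraphicsRegisterFlagsWriteDiscard));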

OpenGL how to manage negative normals?

With a cube defined as in the following code, you can see that normals are often negative on one axis (even if we calculate them).
OpenGL manages this with its fixed pipeline (correct me if I'm wrong), but with the programmable pipeline it causes artifacts like black faces. (My previous Stack Overflow question provides code.)
I managed to get my code running with an operation on my normals (normal = 0.5 + 0.5 * normal;), but even if the result looks OK, I wonder whether my normals are still valid. (And is this operation the best option?)
I mean, from a shader point of view, can I still use them to shade or brighten my models? How do you usually do it?
The mentioned normals:
const GLfloat cube_vertices[] = {
1, 1, 1, -1, 1, 1, -1,-1, 1, // v0-v1-v2 (front)
-1,-1, 1, 1,-1, 1, 1, 1, 1, // v2-v3-v0
1, 1, 1, 1,-1, 1, 1,-1,-1, // v0-v3-v4 (right)
1,-1,-1, 1, 1,-1, 1, 1, 1, // v4-v5-v0
1, 1, 1, 1, 1,-1, -1, 1,-1, // v0-v5-v6 (top)
-1, 1,-1, -1, 1, 1, 1, 1, 1, // v6-v1-v0
-1, 1, 1, -1, 1,-1, -1,-1,-1, // v1-v6-v7 (left)
-1,-1,-1, -1,-1, 1, -1, 1, 1, // v7-v2-v1
-1,-1,-1, 1,-1,-1, 1,-1, 1, // v7-v4-v3 (bottom)
1,-1, 1, -1,-1, 1, -1,-1,-1, // v3-v2-v7
1,-1,-1, -1,-1,-1, -1, 1,-1, // v4-v7-v6 (back)
-1, 1,-1, 1, 1,-1, 1,-1,-1 }; // v6-v5-v4
const GLfloat cube_normalsI[] = {
0, 0, 1, 0, 0, 1, 0, 0, 1, // v0-v1-v2 (front)
0, 0, 1, 0, 0, 1, 0, 0, 1, // v2-v3-v0
1, 0, 0, 1, 0, 0, 1, 0, 0, // v0-v3-v4 (right)
1, 0, 0, 1, 0, 0, 1, 0, 0, // v4-v5-v0
0, 1, 0, 0, 1, 0, 0, 1, 0, // v0-v5-v6 (top)
0, 1, 0, 0, 1, 0, 0, 1, 0, // v6-v1-v0
-1, 0, 0, -1, 0, 0, -1, 0, 0, // v1-v6-v7 (left)
-1, 0, 0, -1, 0, 0, -1, 0, 0, // v7-v2-v1
0,-1, 0, 0,-1, 0, 0,-1, 0, // v7-v4-v3 (bottom)
0,-1, 0, 0,-1, 0, 0,-1, 0, // v3-v2-v7
0, 0,-1, 0, 0,-1, 0, 0,-1, // v4-v7-v6 (back)
0, 0,-1, 0, 0,-1, 0, 0,-1 }; // v6-v5-v4
No, this makes no sense at all. Either you need to update your question or you got it all wrong.
Normals may face any direction, and a normal being negative on one axis is completely natural. Why wouldn't it be? From what you describe, you seem to be working with lighting. Part of lighting uses the normal to determine the angle between the light source and the surface. The idea here is that as you turn the normal away, a light ray effectively covers a larger part of the surface, which reduces the density of the reflected light. With basic math you can see that the correlation is cos(angle), so parallel vectors produce the highest brightness. Since we are working with vectors, we are better off replacing cos with the dot product.
So at some point you have
float factor = dot(normalize(normal), normalize(lightSource - surfacePoint));
Let's have 2 examples here:
normal = (0, 1, 0)
lightSource = (0, 1, 0)
surfacePoint = (0, 0, 0)
dot((0, 1, 0), (0, 1, 0)) = 0+1+0 = 1
and turn it around:
normal = (-1, 0, 0)
lightSource = (-3, 1, 0)
surfacePoint = (0, 1, 0)
dot((-1, 0, 0), normalize((-3, 0, 0))) = dot((-1, 0, 0), (-1, 0, 0)) = 1+0+0 = 1
so even though the positions are completely different and the normal is negative, we get the same result for the same angle (in both cases the two vectors are parallel).
The only question here is what to do when the dot product is negative. That happens when the normal faces away from the light. In your case you have a cube and all normals point outwards. What if you needed to be inside the cube and still have lighting? You would get
normal = (0, 1, 0)
lightSource = (0, 0, 0)
surfacePoint = (0, 1, 0)
dot((0, 1, 0), (0, -1, 0)) = 0-1+0 = -1
Because of such cases you need to either clamp the values or use the absolute value. Clamping will leave the interior of the cube black (not lit), while the absolute value will light it as well:
fragmentColor += lightColor * dotFactor;           // do nothing: negative values darken the area
fragmentColor += lightColor * abs(dotFactor);      // absolute value: lighten even when facing away
fragmentColor += lightColor * max(0.0, dotFactor); // clamp: no negative contributions
But none of this has anything to do with normals facing some direction in an absolute coordinate system. It only depends on the relative positions of the normal, the pixel location, and the light source.
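To make this concrete, here is a minimal C++ sketch of the same computation (the small vec3 type is hand-rolled for illustration; in GLSL you would use the built-in dot, normalize, and max directly):
#include <algorithm>
#include <cmath>

struct vec3 { float x, y, z; };

static vec3  operator-(vec3 a, vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(vec3 a, vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  normalize(vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse factor as described above; max() is the clamping variant,
// so surfaces facing away from the light contribute nothing.
float lambert(vec3 normal, vec3 lightSource, vec3 surfacePoint)
{
    return std::max(0.0f, dot(normalize(normal), normalize(lightSource - surfacePoint)));
}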

How to show a PNG image with 25% opacity using GDI+? (MFC)

I am trying to output a PNG image using GDI+ in MFC, and I want to output it with 25% opacity. Below is the way to output a PNG image at x=10, y=10:
CDC *pDC = GetDC();
Graphics graphics(pDC->m_hDC);
Image image(L"test1.png", FALSE);
graphics.DrawImage(&image, 10, 10);
But I don't know how to make it translucent. Any idea?
To draw the image with alpha blending, declare a Gdiplus::ImageAttributes and a Gdiplus::ColorMatrix with the required alpha channel:
float alpha = 0.25f;
Gdiplus::ColorMatrix matrix =
{
1, 0, 0, 0, 0,
0, 1, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, alpha, 0,
0, 0, 0, 0, 1
};
Gdiplus::ImageAttributes attrib;
attrib.SetColorMatrix(&matrix);
graphics.DrawImage(&image,
Gdiplus::Rect(10, 10, image.GetWidth(), image.GetHeight()),
0, 0, image.GetWidth(), image.GetHeight(), Gdiplus::UnitPixel, &attrib);
See also: Using a Color Matrix to Transform a Single Color
Note that GetDC() is usually not used in MFC. If you do use it, be sure to call ReleaseDC(pDC) when pDC is no longer needed, or simply use CClientDC dc(this), which has automatic cleanup. If painting is done in OnPaint, then use CPaintDC, which also has automatic cleanup:
void CMyWnd::OnPaint()
{
CPaintDC dc(this);
Gdiplus::Graphics graphics(dc);
...
}
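Putting the two snippets together, a complete handler might look like this (a sketch only; it assumes GDI+ has already been started elsewhere, e.g. with GdiplusStartup at application init):
void CMyWnd::OnPaint()
{
    CPaintDC dc(this);                      // released automatically
    Gdiplus::Graphics graphics(dc);
    Gdiplus::Image image(L"test1.png", FALSE);

    Gdiplus::ColorMatrix matrix =
    {
        1, 0, 0, 0,     0,
        0, 1, 0, 0,     0,
        0, 0, 1, 0,     0,
        0, 0, 0, 0.25f, 0,                  // 25% opacity on the alpha channel
        0, 0, 0, 0,     1
    };
    Gdiplus::ImageAttributes attrib;
    attrib.SetColorMatrix(&matrix);

    graphics.DrawImage(&image,
        Gdiplus::Rect(10, 10, (INT)image.GetWidth(), (INT)image.GetHeight()),
        0, 0, (INT)image.GetWidth(), (INT)image.GetHeight(),
        Gdiplus::UnitPixel, &attrib);
}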

OpenGL overwrites Windows' buttons

I have following code:
void DrawGLScene(unsigned char *drawing_bytes, HDC hdc, int xWidth, int yWidth) {
if ((!xWidth) || (!yWidth)) return;
BOOL returnVal = wglMakeCurrent(hdc, hrc);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, xWidth, yWidth, 0, GL_BGR_EXT, GL_UNSIGNED_BYTE, drawing_bytes);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glViewport(0,0,xWidth,yWidth); // Reset The Current Viewport
glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
glLoadIdentity(); // Reset The Projection Matrix
// Calculate The Aspect Ratio Of The Window
gluPerspective(25.0f,1.0f,0.1f,100.0f);
glMatrixMode(GL_MODELVIEW); // Select The Modelview Matrix
glLoadIdentity(); // Reset The Modelview Matrix
glEnable(GL_TEXTURE_2D); // Enable Texture Mapping
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glDisable(GL_DEPTH_TEST); // Disable Depth Testing
glDepthFunc(GL_LEQUAL); // The Type Of Depth Testing To Do
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
glLoadIdentity(); // Reset The View
glTranslatef(0.0f,0.0f,-5.0f);
glBindTexture(GL_TEXTURE_2D, texture);
glColor4f(1.0, 1.0, 1.0, 1.0);
glBegin(GL_QUADS);
// Front Face
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 1.0f, 0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 1.0f, 0.5f);
glEnd();
SwapBuffers(hdc);
}
This code overwrites my buttons created earlier via
hInstallButton = CreateWindow(TEXT("button"), TEXT(""),
WS_VISIBLE | WS_CHILD | BS_AUTOCHECKBOX,
137, 70, 13, 13,
hWnd, (HMENU) 1, GetModuleHandle(NULL), NULL);
The issue is the SwapBuffers() call, which hides the buttons for good.
This is generated by the PIXELFORMATDESCRIPTOR
static PIXELFORMATDESCRIPTOR pfd= // pfd Tells Windows How We Want Things To Be
{
sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
1, // Version Number
PFD_DRAW_TO_WINDOW | // Format Must Support Window
PFD_SUPPORT_OPENGL | // Format Must Support OpenGL
PFD_DOUBLEBUFFER, // Must Support Double Buffering
PFD_TYPE_RGBA, // Request An RGBA Format
24, // Select Our Color Depth
0, 0, 0, 0, 0, 0, // Color Bits Ignored
0, // No Alpha Buffer
0, // Shift Bit Ignored
0, // No Accumulation Buffer
0, 0, 0, 0, // Accumulation Bits Ignored
16, // 16Bit Z-Buffer (Depth Buffer)
0, // No Stencil Buffer
0, // No Auxiliary Buffer
PFD_MAIN_PLANE, // Main Drawing Layer
0, // Reserved
0, 0, 0 // Layer Masks Ignored
};
How can I force a single buffer, or somehow write the buttons to both buffers? I am at a loss here and don't know how to do this properly (except maybe recreating the buttons on each WM_PAINT call).
Edit:
I tried it with a subwindow (see code), but it creates a second window instead of embedding into the first window.
BOOL InitInstance(HINSTANCE hInstance, int nCmdShow)
{
HWND hWnd;
hInst = hInstance; // store the instance handle in the global variable
DWORD dwExStyle; // Window Extended Style
DWORD dwStyle; // Window Style
hWnd = CreateWindow(szWindowClass, szTitle, WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, hInstance, NULL);
if (!hWnd) {
return FALSE;
}
dwExStyle=WS_EX_APPWINDOW | WS_EX_WINDOWEDGE; // Window Extended Style (CS_OWNDC is a class style and belongs in wndClass.style below)
dwStyle=WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN; // Windows Style
WNDCLASS wndClass;
wndClass.style = CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
wndClass.lpfnWndProc = WndProc;
wndClass.cbClsExtra = 0;
wndClass.cbWndExtra = 0;
wndClass.hInstance = hInstance;
wndClass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wndClass.hCursor = LoadCursor(NULL, IDC_ARROW);
wndClass.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
wndClass.lpszMenuName = NULL;
wndClass.lpszClassName = "Test Window";
RegisterClass(&wndClass);
hWndOpenGL = CreateWindowEx( dwExStyle, // Extended Style For The Window
"Test Window", // Class Name
"Testy test", // Window Title
dwStyle, // Required Window Style
0, 0, // Window Position
800,
600,
hWnd, // Parent Window
NULL, // No Menu
hInstance, // Instance
NULL);
//CreateWindow(szWindowClass, szTitle, WS_OVERLAPPEDWINDOW,
//CW_USEDEFAULT, 0, CW_USEDEFAULT-500, 0, hWnd, NULL, hInstance, NULL);
static PIXELFORMATDESCRIPTOR pfd= // pfd Tells Windows How We Want Things To Be
{
sizeof(PIXELFORMATDESCRIPTOR), // Size Of This Pixel Format Descriptor
1, // Version Number
PFD_DRAW_TO_WINDOW | // Format Must Support Window
PFD_SUPPORT_OPENGL | // Format Must Support OpenGL
PFD_DOUBLEBUFFER, // Must Support Double Buffering
PFD_TYPE_RGBA, // Request An RGBA Format
24, // Select Our Color Depth
0, 0, 0, 0, 0, 0, // Color Bits Ignored
0, // No Alpha Buffer
0, // Shift Bit Ignored
0, // No Accumulation Buffer
0, 0, 0, 0, // Accumulation Bits Ignored
16, // 16Bit Z-Buffer (Depth Buffer)
0, // No Stencil Buffer
0, // No Auxiliary Buffer
PFD_MAIN_PLANE, // Main Drawing Layer
0, // Reserved
0, 0, 0 // Layer Masks Ignored
};
hdcOpenGL=GetDC(hWndOpenGL);
GLuint PixelFormat; // Holds The Results After Searching For A Match
PixelFormat=ChoosePixelFormat(hdcOpenGL,&pfd);
SetPixelFormat(hdcOpenGL,PixelFormat,&pfd);
hrc=wglCreateContext(hdcOpenGL);
ShowWindow(hWnd, nCmdShow);
UpdateWindow(hWnd);
ShowWindow(hWndOpenGL, nCmdShow);
UpdateWindow(hWndOpenGL);
return TRUE;
}
I'm guessing that you created the buttons as children of the OpenGL window. If you did, then you did something the WGL and Win32 API documentation explicitly mentions will break things.
The fix is simple: the OpenGL window should be a sibling of the buttons and have its very own DC. Create a dedicated subwindow for OpenGL operations with the CS_OWNDC window class flag and the WS_CLIPSIBLINGS | WS_CLIPCHILDREN window styles set. Both the OpenGL subwindow and the buttons are created with the desired container window as their parent, as sketched below.
That way the buttons will not get clobbered by OpenGL operations, even with a double-buffered pixel format.
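A minimal sketch of that layout (class name, window procedure, and sizes are illustrative; the GL child would normally get its own WndProc rather than DefWindowProc):
// Register a class for the OpenGL child with its own DC.
WNDCLASS wc = {};
wc.style         = CS_OWNDC;                 // class flag, not an extended window style
wc.lpfnWndProc   = DefWindowProc;
wc.hInstance     = hInstance;
wc.lpszClassName = TEXT("OpenGLChild");
RegisterClass(&wc);

// The GL window is a WS_CHILD of the main window, so it embeds
// instead of opening a second top-level window.
HWND hWndOpenGL = CreateWindowEx(0, TEXT("OpenGLChild"), NULL,
    WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN,
    0, 0, 800, 600,
    hWnd, NULL, hInstance, NULL);

// The buttons are siblings of the GL window, parented to the same container.
HWND hButton = CreateWindow(TEXT("button"), TEXT(""),
    WS_VISIBLE | WS_CHILD | BS_AUTOCHECKBOX,
    137, 70, 13, 13,
    hWnd, (HMENU)1, hInstance, NULL);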

DX11 C++ Shader can't receive Instance Buffer content

I would like to draw instances of an .obj file. After I implemented instancing instead of drawing each object with its own draw() function (which worked just fine), the instances are not positioned correctly. Probably the data from the instance buffer is not reaching the shader correctly.
D3DMain.cpp - creating input layout
struct INSTANCE {
    //D3DXMATRIX matTrans;
    D3DXVECTOR3 matTrans; // member name restored; it is assigned as insertInstance.matTrans below
};
/***/
// create the input layout object
D3D11_INPUT_ELEMENT_DESC ied[] =
{
//vertex buffer
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0},
//instance buffer
{"INSTTRANS", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1},
//{"INSTTRANS", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1},
//{"INSTTRANS", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1},
//{"INSTTRANS", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1},
};
if (FAILED(d3ddev->CreateInputLayout(ied, 4, VS->GetBufferPointer(), VS->GetBufferSize(), &pLayout))) throw(std::string("Input Layout Creation Error"));
d3ddevcon->IASetInputLayout(pLayout);
World.cpp - setting up instance buffer
std::vector<INSTANCE> instanceBuffer;
INSTANCE insertInstance;
D3DXMATRIX scaleMat, transMat;
D3DXMatrixScaling(&scaleMat, 50.0f, 50.0f, 50.0f);
int i=0;
for (std::list<SINSTANCES>::iterator it = sInstances.begin(); it != sInstances.end(); it++) {
if ((*it).TypeID == typeId) {
//do something
D3DXMatrixTranslation(&transMat, (*it).pos.x, (*it).pos.y, (*it).pos.z);
insertInstance.matTrans = (*it).pos;//scaleMat * transMat;
instanceBuffer.push_back(insertInstance);
i++;
}
}
instanceCount[typeId] = i;
//create new IB
D3D11_BUFFER_DESC instanceBufferDesc;
ZeroMemory(&instanceBufferDesc, sizeof(instanceBufferDesc));
instanceBufferDesc.Usage = D3D11_USAGE_DEFAULT;
instanceBufferDesc.ByteWidth = sizeof(INSTANCE) * i;
instanceBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
instanceBufferDesc.CPUAccessFlags = 0;
instanceBufferDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA instanceData;
ZeroMemory(&instanceData, sizeof(instanceData));
instanceData.pSysMem = &instanceBuffer[0];
if (FAILED(d3ddev->CreateBuffer(&instanceBufferDesc, &instanceData, &instanceBufferMap[typeId]))) throw(std::string("Failed to Update Instance Buffer"));
OpenDrawObj.cpp - drawing .obj file
UINT stride[2] = {sizeof(VERTEX), sizeof(INSTANCE)};
UINT offset[2] = {0, 0};
ID3D11Buffer* combinedBuffer[2] = {meshVertBuff, instanceBuffer};
d3ddevcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3ddevcon->IASetVertexBuffers(0, 2, combinedBuffer, stride, offset);
d3ddevcon->IASetIndexBuffer(meshIndexBuff, DXGI_FORMAT_R32_UINT, 0);
std::map<std::wstring, OBJMATERIAL>::iterator fit;
for (std::vector<DRAWLIST>::iterator it = drawList.begin(); it != drawList.end(); it++) {
fit = objMaterials.find((*it).material);
if (fit != objMaterials.end()) {
if ((*fit).second.texture != NULL) {
d3ddevcon->PSSetShaderResources(0, 1, &((*fit).second.texture));
}
d3ddevcon->DrawIndexedInstanced((*it).indexCount, instanceCount, (*it).startIndex, 0, 0);
}
}
The drawing function (above) is called here; I pass the instance buffer (a map<int, ID3D11Buffer*>) and the instance counts:
(*it).second->draw(0.0f, 0.0f, 0.0f, 0, instanceBufferMap[typeId], instanceCount[typeId]);
shader.hlsl
struct VIn
{
float4 position : POSITION;
float3 normal : NORMAL;
float2 texcoord : TEXCOORD;
//row_major float4x4 instTrans : INSTTRANS;
float4 instTrans : INSTTRANS;
uint instanceID : SV_InstanceID;
};
VOut VShader(VIn input)
{
VOut output;
//first: transforming instance
//output.position = mul(input.instTrans, input.position);
output.position = input.position;
output.position.xyz *= 50.0; //scale
output.position.z += input.instTrans.z; //apply only z value
float4 transPos = mul(world, output.position); //transform position with world matrix
output.position = mul(view, transPos); //project to screen
The input.instTrans in the last file is incorrect and contains random data.
Do you have any ideas?
So I found the bug; it was in a totally unexpected location...
Here is the code snippet:
ID3D10Blob *VS, *VS2, *PS, *PS2; // <- I only used VS and PS before
//volume shader
if (FAILED(D3DX11CompileFromFile(L"resources/volume.hlsl", 0, 0, "VShader", "vs_5_0", D3D10_SHADER_PREFER_FLOW_CONTROL | D3D10_SHADER_SKIP_OPTIMIZATION, 0, 0, &VS, 0, 0))) throw(std::string("Volume Shader Error 1"));
if (FAILED(D3DX11CompileFromFile(L"resources/volume.hlsl", 0, 0, "PShader", "ps_5_0", D3D10_SHADER_PREFER_FLOW_CONTROL | D3D10_SHADER_SKIP_OPTIMIZATION, 0, 0, &PS, 0, 0))) throw(std::string("Volume Shader Error 2"));
// encapsulate both shaders into shader objects
if (FAILED(d3ddev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pvolumeVS))) throw(std::string("Volume Shader Error 1A"));
if (FAILED(d3ddev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pvolumePS))) throw(std::string("Volume Shader Error 2A"));
//sky shader
if (FAILED(D3DX11CompileFromFile(L"resources/sky.hlsl", 0, 0, "VShader", "vs_5_0", D3D10_SHADER_OPTIMIZATION_LEVEL3, 0, 0, &VS2, 0, 0))) throw(std::string("Sky Shader Error 1"));
if (FAILED(D3DX11CompileFromFile(L"resources/sky.hlsl", 0, 0, "PShader", "ps_5_0", D3D10_SHADER_OPTIMIZATION_LEVEL3, 0, 0, &PS2, 0, 0))) throw(std::string("Sky Shader Error 2"));
// encapsulate both shaders into shader objects
if (FAILED(d3ddev->CreateVertexShader(VS2->GetBufferPointer(), VS2->GetBufferSize(), NULL, &pskyVS))) throw(std::string("Sky Shader Error 1A"));
if (FAILED(d3ddev->CreatePixelShader(PS2->GetBufferPointer(), PS2->GetBufferSize(), NULL, &pskyPS))) throw(std::string("Sky Shader Error 2A"));
Using separate blobs for compiling the two shaders solved the problem, though I have no idea why. Thank you for the support ;)
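A plausible explanation (only a guess, since the original single-blob code isn't shown): if the same ID3D10Blob pointer is reused for several compiles, anything created from it afterwards refers to whichever shader was compiled last, for example:
ID3D10Blob *VS = nullptr;
// First compile: VS holds volume.hlsl's bytecode...
D3DX11CompileFromFile(L"resources/volume.hlsl", 0, 0, "VShader", "vs_5_0", 0, 0, 0, &VS, 0, 0);
// ...second compile overwrites the pointer (and leaks the first blob):
D3DX11CompileFromFile(L"resources/sky.hlsl", 0, 0, "VShader", "vs_5_0", 0, 0, 0, &VS, 0, 0);
// Any CreateInputLayout(ied, 4, VS->GetBufferPointer(), VS->GetBufferSize(), &pLayout)
// issued after this point is validated against sky.hlsl's input signature, so
// INSTTRANS never lines up with the per-instance stream.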