Using the following code:
Sleep(3000);
keybd_event(VK_SHIFT, 0, 0, 0);
keybd_event(VK_DOWN, 0, 0, 0);
keybd_event(VK_DOWN, 0, KEYEVENTF_KEYUP, 0);
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP, 0);
I expect Windows to select a line of text if I place my cursor in an editor during the sleep (so that it is the foreground window).
However, this just moves the cursor down a line.
The following works, but surely this isn't the way it's supposed to be done:
Sleep(3000);
keybd_event(VK_SHIFT, 0, KEYEVENTF_EXTENDEDKEY, 0);
keybd_event(VK_DOWN, 0, 0, 0);
keybd_event(VK_DOWN, 0, KEYEVENTF_KEYUP, 0);
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP | KEYEVENTF_EXTENDEDKEY, 0);
keybd_event(VK_SHIFT, 0, 0, 0);
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP, 0);
It appears that by using the KEYEVENTF_EXTENDEDKEY flag I can hold Shift down for the arrow key, but no matter how I try to release it with that flag, it stays down. I can, however, press and release Shift normally afterwards to clear the held key.
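For what it's worth, here is a minimal sketch using SendInput, the documented successor to keybd_event, which submits the whole sequence as one atomic batch. Note that the arrow keys are extended keys, so if KEYEVENTF_EXTENDEDKEY belongs anywhere it is on VK_DOWN rather than VK_SHIFT. This is a sketch of the usual approach, not tested against your editor:
#include <windows.h>

// Simulate Shift+Down as a single SendInput batch.
INPUT inputs[4] = {};

inputs[0].type = INPUT_KEYBOARD;
inputs[0].ki.wVk = VK_SHIFT;                       // Shift down

inputs[1].type = INPUT_KEYBOARD;
inputs[1].ki.wVk = VK_DOWN;
inputs[1].ki.dwFlags = KEYEVENTF_EXTENDEDKEY;      // arrow keys are extended keys

inputs[2].type = INPUT_KEYBOARD;
inputs[2].ki.wVk = VK_DOWN;
inputs[2].ki.dwFlags = KEYEVENTF_EXTENDEDKEY | KEYEVENTF_KEYUP;

inputs[3].type = INPUT_KEYBOARD;
inputs[3].ki.wVk = VK_SHIFT;
inputs[3].ki.dwFlags = KEYEVENTF_KEYUP;            // Shift up

SendInput(4, inputs, sizeof(INPUT));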
With a cube defined as in the code below, you can see that the normals are often negative along one axis (even if we calculate them ourselves).
OpenGL's fixed pipeline handles this (correct me if I'm wrong), but with the programmable pipeline it causes artifacts such as black faces. (My previous Stack Overflow question provides code.)
I managed to get my code working by applying an operation to my normals (normal = (0.5 + 0.5 * normal);), but even though the result looks OK, I wonder whether my normals are still valid. (And is this operation the best approach?)
I mean, from a shader's point of view, can I still use them to shade or brighten my models? How is this usually done?
The normals in question:
const GLfloat cube_vertices[] = {
1, 1, 1, -1, 1, 1, -1,-1, 1, // v0-v1-v2 (front)
-1,-1, 1, 1,-1, 1, 1, 1, 1, // v2-v3-v0
1, 1, 1, 1,-1, 1, 1,-1,-1, // v0-v3-v4 (right)
1,-1,-1, 1, 1,-1, 1, 1, 1, // v4-v5-v0
1, 1, 1, 1, 1,-1, -1, 1,-1, // v0-v5-v6 (top)
-1, 1,-1, -1, 1, 1, 1, 1, 1, // v6-v1-v0
-1, 1, 1, -1, 1,-1, -1,-1,-1, // v1-v6-v7 (left)
-1,-1,-1, -1,-1, 1, -1, 1, 1, // v7-v2-v1
-1,-1,-1, 1,-1,-1, 1,-1, 1, // v7-v4-v3 (bottom)
1,-1, 1, -1,-1, 1, -1,-1,-1, // v3-v2-v7
1,-1,-1, -1,-1,-1, -1, 1,-1, // v4-v7-v6 (back)
-1, 1,-1, 1, 1,-1, 1,-1,-1 }; // v6-v5-v4
const GLfloat cube_normalsI[] = {
0, 0, 1, 0, 0, 1, 0, 0, 1, // v0-v1-v2 (front)
0, 0, 1, 0, 0, 1, 0, 0, 1, // v2-v3-v0
1, 0, 0, 1, 0, 0, 1, 0, 0, // v0-v3-v4 (right)
1, 0, 0, 1, 0, 0, 1, 0, 0, // v4-v5-v0
0, 1, 0, 0, 1, 0, 0, 1, 0, // v0-v5-v6 (top)
0, 1, 0, 0, 1, 0, 0, 1, 0, // v6-v1-v0
-1, 0, 0, -1, 0, 0, -1, 0, 0, // v1-v6-v7 (left)
-1, 0, 0, -1, 0, 0, -1, 0, 0, // v7-v2-v1
0,-1, 0, 0,-1, 0, 0,-1, 0, // v7-v4-v3 (bottom)
0,-1, 0, 0,-1, 0, 0,-1, 0, // v3-v2-v7
0, 0,-1, 0, 0,-1, 0, 0,-1, // v4-v7-v6 (back)
0, 0,-1, 0, 0,-1, 0, 0,-1 }; // v6-v5-v4
No, this makes no sense at all. Either you need to update your question or you got it all wrong.
Normals may face any direction, and it is completely natural for a normal to be negative along one axis. Why wouldn't it be? From what you are describing, you seem to be working on lighting. Part of lighting uses the normal to find the angle between the light source and the surface. The idea is that as you turn the normal away, a light ray effectively covers a larger part of the surface, which reduces the density of the reflected light. With basic math you can see that the relation is cos(angle), so parallel vectors produce the highest brightness. Since we are working with vectors, we are better off replacing the cosine with a dot product.
So at some point you have
float factor = dot(normalize(normal), normalize(lightSource - surfacePoint));
Let's take two examples:
normal = (0, 1, 0)
lightSource = (0, 1, 0)
surfacePoint = (0, 0, 0)
dot((0, 1, 0), (0, 1, 0)) = 0+1+0 = 1
and turn it around:
normal = (-1, 0, 0)
lightSource = (-3, 1, 0)
surfacePoint = (0, 1, 0)
dot((-1, 0, 0), normalize((-3, 0, 0))) = dot((-1, 0, 0), (-1, 0, 0)) = 1+0+0 = 1
So even though the positions are completely different and the normal is negative, we get the same result for the same angle (in both cases the vectors are parallel).
The only question is what to do when the dot product is negative, which happens when the normal faces away from the light. In your case you have a cube and all the normals point outwards. But what if you needed to be inside the cube and still have lighting? You would get:
normal = (0, 1, 0)
lightSource = (0, 0, 0)
surfacePoint = (0, 1, 0)
dot((0, 1, 0), (0, -1, 0)) = 0-1+0 = -1
Because of such cases you need to either clamp the values or take the absolute value. Clamping leaves the interior of the cube black (unlit), while the absolute value lights it as well:
fragmentColor += lightColor*dotFactor; // do nothing: a negative factor darkens the area
fragmentColor += lightColor*abs(dotFactor); // absolute value: light the surface even when it faces away
fragmentColor += lightColor*max(0.0, dotFactor); // clamp: no negative contributions
But none of this has anything to do with normals facing a particular direction in the absolute coordinate system. It depends only on the relative positions of the normal, the surface point, and the light source.
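To make this concrete, here is a minimal C++ sketch of the same computation; Vec3, dot, and normalize are hand-rolled stand-ins for their GLSL equivalents:
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse factor: the cosine of the angle between the normal and the
// direction towards the light, clamped so back-facing surfaces get 0.
float diffuseFactor(Vec3 normal, Vec3 lightSource, Vec3 surfacePoint)
{
    Vec3 toLight = { lightSource.x - surfacePoint.x,
                     lightSource.y - surfacePoint.y,
                     lightSource.z - surfacePoint.z };
    float d = dot(normalize(normal), normalize(toLight));
    return d > 0.0f ? d : 0.0f; // the max(0.0, dotFactor) variant from above
}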
I am trying to draw a PNG image using GDI+ in MFC, and I want to draw it with 25% opacity. Below is how I currently draw the PNG image at x=10, y=10:
CDC* pDC = GetDC();
Graphics graphics(pDC->m_hDC);
Image image(L"test1.png", FALSE);
graphics.DrawImage(&image, 10, 10);
But I don't know how to make it translucent. Any ideas?
To draw the image with alpha blending, declare a Gdiplus::ImageAttributes and a Gdiplus::ColorMatrix with the required alpha value. GDI+ applies the 5×5 matrix to each color treated as the row vector [R G B A 1], so setting the fourth diagonal element to 0.25 scales the alpha channel to 25%:
float alpha = 0.25f;
Gdiplus::ColorMatrix matrix =
{
1, 0, 0, 0, 0,
0, 1, 0, 0, 0,
0, 0, 1, 0, 0,
0, 0, 0, alpha, 0,
0, 0, 0, 0, 1
};
Gdiplus::ImageAttributes attrib;
attrib.SetColorMatrix(&matrix);
graphics.DrawImage(&image,
Gdiplus::Rect(10, 10, image.GetWidth(), image.GetHeight()),
0, 0, image.GetWidth(), image.GetHeight(), Gdiplus::UnitPixel, &attrib);
See also: Using a Color Matrix to Transform a Single Color
Note that GetDC() is usually not needed in MFC. If you do use it, be sure to call ReleaseDC(pDC) when pDC is no longer needed, or simply use CClientDC dc(this), which cleans up automatically. If painting is done in OnPaint, use CPaintDC, which also cleans up automatically:
void CMyWnd::OnPaint()
{
CPaintDC dc(this);
Gdiplus::Graphics graphics(dc);
...
}
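One assumption worth stating explicitly: GDI+ must be initialized before any Gdiplus object is created. If that is not already done elsewhere in the application (e.g. in InitInstance), something like this is needed:
ULONG_PTR gdiplusToken;
Gdiplus::GdiplusStartupInput gdiplusStartupInput;
Gdiplus::GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
// ... use GDI+ ...
Gdiplus::GdiplusShutdown(gdiplusToken); // on shutdown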
I would like to draw instances of an .obj file. After I implemented instancing in place of drawing each object with its own draw() function (which worked just fine), the instances are not positioned correctly. Probably the data from the instance buffer is not reaching the shader correctly.
D3DMain.cpp - creating input layout
struct INSTANCE {
//D3DXMATRIX matTrans;
D3DXVECTOR3 matTrans;
};
/***/
// create the input layout object
D3D11_INPUT_ELEMENT_DESC ied[] =
{
//vertex buffer
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0},
//instance buffer
{"INSTTRANS", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1},
//{"INSTTRANS", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1},
//{"INSTTRANS", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1},
//{"INSTTRANS", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1},
};
if (FAILED(d3ddev->CreateInputLayout(ied, 4, VS->GetBufferPointer(), VS->GetBufferSize(), &pLayout))) throw(std::string("Input Layout Creation Error"));
d3ddevcon->IASetInputLayout(pLayout);
World.cpp - setting up instance buffer
std::vector<INSTANCE> instanceBuffer;
INSTANCE insertInstance;
D3DXMATRIX scaleMat, transMat;
D3DXMatrixScaling(&scaleMat, 50.0f, 50.0f, 50.0f);
int i=0;
for (std::list<SINSTANCES>::iterator it = sInstances.begin(); it != sInstances.end(); it++) {
if ((*it).TypeID == typeId) {
//do something
D3DXMatrixTranslation(&transMat, (*it).pos.x, (*it).pos.y, (*it).pos.z);
insertInstance.matTrans = (*it).pos;//scaleMat * transMat;
instanceBuffer.push_back(insertInstance);
i++;
}
}
instanceCount[typeId] = i;
//create new IB
D3D11_BUFFER_DESC instanceBufferDesc;
ZeroMemory(&instanceBufferDesc, sizeof(instanceBufferDesc));
instanceBufferDesc.Usage = D3D11_USAGE_DEFAULT;
instanceBufferDesc.ByteWidth = sizeof(INSTANCE) * i;
instanceBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
instanceBufferDesc.CPUAccessFlags = 0;
instanceBufferDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA instanceData;
ZeroMemory(&instanceData, sizeof(instanceData));
instanceData.pSysMem = &instanceBuffer[0];
if (FAILED(d3ddev->CreateBuffer(&instanceBufferDesc, &instanceData, &instanceBufferMap[typeId]))) throw(std::string("Failed to Update Instance Buffer"));
OpenDrawObj.cpp - drawing .obj file
UINT stride[2] = {sizeof(VERTEX), sizeof(INSTANCE)};
UINT offset[2] = {0, 0};
ID3D11Buffer* combinedBuffer[2] = {meshVertBuff, instanceBuffer};
d3ddevcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3ddevcon->IASetVertexBuffers(0, 2, combinedBuffer, stride, offset);
d3ddevcon->IASetIndexBuffer(meshIndexBuff, DXGI_FORMAT_R32_UINT, 0);
std::map<std::wstring, OBJMATERIAL>::iterator fit;
for (std::vector<DRAWLIST>::iterator it = drawList.begin(); it != drawList.end(); it++) {
fit = objMaterials.find((*it).material);
if (fit != objMaterials.end()) {
if ((*fit).second.texture != NULL) {
d3ddevcon->PSSetShaderResources(0, 1, &((*fit).second.texture));
}
d3ddevcon->DrawIndexedInstanced((*it).indexCount, instanceCount, (*it).startIndex, 0, 0);
}
}
The drawing function (above) is called here; I pass the instance buffer (a map<int, ID3D11Buffer*>) and the instance counts:
(*it).second->draw(0.0f, 0.0f, 0.0f, 0, instanceBufferMap[typeId], instanceCount[typeId]);
shader.hlsl
struct VIn
{
float4 position : POSITION;
float3 normal : NORMAL;
float2 texcoord : TEXCOORD;
//row_major float4x4 instTrans : INSTTRANS;
float4 instTrans : INSTTRANS;
uint instanceID : SV_InstanceID;
};
VOut VShader(VIn input)
{
VOut output;
//first: transforming instance
//output.position = mul(input.instTrans, input.position);
output.position = input.position;
output.position.xyz *= 50.0; //scale
output.position.z += input.instTrans.z; //apply only z value
float4 transPos = mul(world, output.position); //transform position with world matrix
output.position = mul(view, transPos); //project to screen
the "input.instTrans" in the last file is incorrect and contains ramdom data.
Do you have any ideas?
So I found the bug; it was in a totally unexpected location...
Here is the code snippet:
ID3D10Blob *VS, *VS2, *PS, *PS2; // <- I previously used only VS and PS for both shader files
//volume shader
if (FAILED(D3DX11CompileFromFile(L"resources/volume.hlsl", 0, 0, "VShader", "vs_5_0", D3D10_SHADER_PREFER_FLOW_CONTROL | D3D10_SHADER_SKIP_OPTIMIZATION, 0, 0, &VS, 0, 0))) throw(std::string("Volume Shader Error 1"));
if (FAILED(D3DX11CompileFromFile(L"resources/volume.hlsl", 0, 0, "PShader", "ps_5_0", D3D10_SHADER_PREFER_FLOW_CONTROL | D3D10_SHADER_SKIP_OPTIMIZATION, 0, 0, &PS, 0, 0))) throw(std::string("Volume Shader Error 2"));
// encapsulate both shaders into shader objects
if (FAILED(d3ddev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pvolumeVS))) throw(std::string("Volume Shader Error 1A"));
if (FAILED(d3ddev->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pvolumePS))) throw(std::string("Volume Shader Error 2A"));
//sky shader
if (FAILED(D3DX11CompileFromFile(L"resources/sky.hlsl", 0, 0, "VShader", "vs_5_0", D3D10_SHADER_OPTIMIZATION_LEVEL3, 0, 0, &VS2, 0, 0))) throw(std::string("Sky Shader Error 1"));
if (FAILED(D3DX11CompileFromFile(L"resources/sky.hlsl", 0, 0, "PShader", "ps_5_0", D3D10_SHADER_OPTIMIZATION_LEVEL3, 0, 0, &PS2, 0, 0))) throw(std::string("Sky Shader Error 2"));
// encapsulate both shaders into shader objects
if (FAILED(d3ddev->CreateVertexShader(VS2->GetBufferPointer(), VS2->GetBufferSize(), NULL, &pskyVS))) throw(std::string("Sky Shader Error 1A"));
if (FAILED(d3ddev->CreatePixelShader(PS2->GetBufferPointer(), PS2->GetBufferSize(), NULL, &pskyPS))) throw(std::string("Sky Shader Error 2A"));
Using separate blobs for compiling the shaders solved the problem, though I have no idea why. Thanks for the support, though ;)
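A plausible explanation (an assumption, not verified): CreateInputLayout validates the element descriptions against the input signature of the bytecode it is handed, so it must receive the same blob that produced the vertex shader actually bound at draw time. Reusing one blob for two shader files can leave the layout paired with the wrong bytecode. A sketch of the safe pairing, using the names from the snippets above:
// Keep each vertex shader's blob alive and distinct until the input
// layout has been created against that same bytecode.
ID3D10Blob *VS = NULL;
if (FAILED(D3DX11CompileFromFile(L"resources/volume.hlsl", 0, 0, "VShader", "vs_5_0", 0, 0, 0, &VS, 0, 0))) throw(std::string("Volume Shader Error 1"));
if (FAILED(d3ddev->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pvolumeVS))) throw(std::string("Volume Shader Error 1A"));
if (FAILED(d3ddev->CreateInputLayout(ied, 4, VS->GetBufferPointer(), VS->GetBufferSize(), &pLayout))) throw(std::string("Input Layout Creation Error"));
VS->Release(); // safe to release once the shader and layout exist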
Using the tutorial here, I have managed to get a red triangle up on my screen: http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-4
CUSTOMVERTEX OurVertices[] =
{
{ 0, 0, 0, 1.0f, D3DCOLOR_XRGB( 127, 0, 0 ) },
{ WIDTH, 0, 0, 1.0f, D3DCOLOR_XRGB( 127, 0, 0 ) },
{ 0, 300, 0, 1.0f, D3DCOLOR_XRGB( 127, 0, 0 ) },
{ WIDTH, 300, 0, 1.0f, D3DCOLOR_XRGB( 127, 0, 0 ) }
};
d3dDevice->CreateVertexBuffer(3*sizeof(CUSTOMVERTEX),
0,
CUSTOMFVF,
D3DPOOL_MANAGED,
&vBuffer,
NULL);
VOID* pVoid; // the void pointer
vBuffer->Lock(0, 0, (void**)&pVoid, 0); // lock vBuffer, the buffer we made earlier
memcpy(pVoid, OurVertices, sizeof(OurVertices)); // copy the vertices into the vertex buffer
vBuffer->Unlock(); // unlock vBuffer
d3dDevice->SetFVF(CUSTOMFVF);
d3dDevice->SetStreamSource(0, vBuffer, 0, sizeof(CUSTOMVERTEX));
d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 1);
But you can see that I really want to be drawing a rectangle.
I have changed the primitive to draw two triangles and extended the buffer size to 4 * sizeof(CUSTOMVERTEX), but I can't really say I understand how to get from my triangle to the rectangle I would like.
Also, is there a better way of drawing a rectangle than using a quad, considering I just want to sling some text on top of it, something like this:
http://1.bp.blogspot.com/-6HjFVnrVM94/TgRq8oP4U-I/AAAAAAAAAKk/i8N0OZU999E/s1600/monkey_island_screen.jpg
I had to extend my buffer to allow for a 4-vertex array:
d3dDevice->CreateVertexBuffer(4*sizeof(CUSTOMVERTEX),
0,
CUSTOMFVF,
D3DPOOL_MANAGED,
&vBuffer,
NULL);
Then I changed the draw primitive from TRIANGLELIST to TRIANGLESTRIP and increased the number of triangles drawn to 2:
d3dDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
In a strip, every vertex after the first two adds one triangle, so the four vertices yield the two triangles of the rectangle; the zigzag ordering of the vertex array (top-left, top-right, bottom-left, bottom-right) ensures consecutive triangles share an edge.
Source: http://www.mdxinfo.com/tutorials/tutorial4.php
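Putting it all together, a minimal sketch of the full quad path. CUSTOMVERTEX (pre-transformed XYZRHW coordinates plus a diffuse color) and CUSTOMFVF are assumed to be defined as in the tutorial:
// Four pre-transformed vertices in zigzag order for a triangle strip.
CUSTOMVERTEX OurVertices[] =
{
    { 0,     0,   0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) }, // top-left
    { WIDTH, 0,   0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) }, // top-right
    { 0,     300, 0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) }, // bottom-left
    { WIDTH, 300, 0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) }  // bottom-right
};

d3dDevice->CreateVertexBuffer(4 * sizeof(CUSTOMVERTEX), 0, CUSTOMFVF,
                              D3DPOOL_MANAGED, &vBuffer, NULL);

VOID* pVoid;
vBuffer->Lock(0, 0, (void**)&pVoid, 0);           // lock the buffer
memcpy(pVoid, OurVertices, sizeof(OurVertices));  // copy all four vertices
vBuffer->Unlock();                                // unlock it again

d3dDevice->SetFVF(CUSTOMFVF);
d3dDevice->SetStreamSource(0, vBuffer, 0, sizeof(CUSTOMVERTEX));
d3dDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2); // 4 vertices -> 2 triangles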