I have this in my Notepad:
hello
I want to simulate selecting text in C++ using Windows' keybd_event function. Here is my code:
keybd_event(VK_SHIFT, 0, 0, 0);
for (size_t i = 0; i < 5; i++)
{
    keybd_event(VK_LEFT, 0, 0, 0);
    keybd_event(VK_LEFT, 0, KEYEVENTF_KEYUP, 0);
}
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP, 0);
But after I run this, it doesn't select anything; the caret just moves toward the start of the file. Why isn't this working?
Adding KEYEVENTF_EXTENDEDKEY makes the selection work correctly:
https://learn.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-keybdinput#members
#include <windows.h>

int main()
{
    Sleep(2000);
    keybd_event(VK_SHIFT, MapVirtualKey(VK_SHIFT, 0), KEYEVENTF_EXTENDEDKEY, 0);
    for (size_t i = 0; i < 5; i++)
    {
        keybd_event(VK_LEFT, 0, 0, 0);
        keybd_event(VK_LEFT, 0, KEYEVENTF_KEYUP, 0);
        Sleep(20);
    }
    keybd_event(VK_SHIFT, MapVirtualKey(VK_SHIFT, 0), KEYEVENTF_EXTENDEDKEY | KEYEVENTF_KEYUP, 0);
    return 0;
}
When you use VK_SHIFT with keybd_event, there is a known problem where Shift may not be released properly, so I recommend using SendInput instead of keybd_event.
For higher-level operations, I also recommend you use UI Automation.
https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-sendinput
https://learn.microsoft.com/en-us/windows/win32/winauto/entry-uiauto-win32
I created a program that copies some data from one file and pastes it into another program.
To copy this data I use this:
keybd_event(VK_CONTROL, 0, KEYEVENTF_EXTENDEDKEY, 0);
Sleep(500);
keybd_event(VK_CONTROL, 0, KEYEVENTF_KEYUP, 0);
keybd_event(0x43, 0, KEYEVENTF_EXTENDEDKEY, 0);
Sleep(1);
keybd_event(0x43, 0, KEYEVENTF_KEYUP, 0);
Everything works fine, but after I stop the program, Ctrl is still held down and I have to restart my PC to release it. I tried changing the order, but that didn't help either:
keybd_event(VK_CONTROL, 0, KEYEVENTF_EXTENDEDKEY, 0);
keybd_event(0x43, 0, KEYEVENTF_EXTENDEDKEY, 0);
Sleep(500);
keybd_event(VK_CONTROL, 0, KEYEVENTF_KEYUP, 0);
Sleep(1);
keybd_event(0x43, 0, KEYEVENTF_KEYUP, 0);
What should I do to fix this?
My incoming data is 3840x2160; how should I edit my DrawImage function for this resolution? If you have a sample for 1080p, you can share that too.
I made some changes to the values in the code below; do not take those exact values into account.
public void DrawImage(int image)
{
    GL.MatrixMode(MatrixMode.Projection);
    GL.PushMatrix();
    GL.LoadIdentity();
    GL.Ortho(0, 3840, 0, 2160, -1, 1);

    GL.MatrixMode(MatrixMode.Modelview);
    GL.PushMatrix();
    GL.LoadIdentity();

    GL.Disable(EnableCap.Lighting);
    GL.Enable(EnableCap.Texture2D);
    //GL.Color4(1, 0, 0, 1);
    GL.BindTexture(TextureTarget.Texture2D, image);

    GL.Begin(BeginMode.Quads);
    // width (3840) runs along x and height (2160) along y, matching the Ortho call
    GL.TexCoord2(0, 0);
    GL.Vertex3(0, 0, 0);
    GL.TexCoord2(1, 0);
    GL.Vertex3(3840, 0, 0);
    GL.TexCoord2(1, 1);
    GL.Vertex3(3840, 2160, 0);
    GL.TexCoord2(0, 1);
    GL.Vertex3(0, 2160, 0);
    GL.End();

    GL.Disable(EnableCap.Texture2D);

    GL.PopMatrix();
    GL.MatrixMode(MatrixMode.Projection);
    GL.PopMatrix();
    GL.MatrixMode(MatrixMode.Modelview);
}
I'm learning DirectX 11, and I wanted to get my head around DirectX debugging because I got an access violation reading location error on line 199 (creating the input layout).
I am trying to get an error box with DirectX errors to show up, because I read somewhere that it is good programming practice to have that box appear with information about the errors.
Any ideas?
Also, help with the input layout would be appreciated.
ID3DBlob *VS, *PS;
#if defined(DEBUG) || defined(_DEBUG)
#ifndef HR
#define HR(x) \
{ \
HRESULT hr = (x); \
if(FAILED(hr)) \
{ \
DXTrace(__FILE__, (DWORD)__LINE__, hr, L#x, true); \
} \
}
#endif
#else
#ifndef HR
#define HR(x) (x)
#endif
#endif
D3DX11CompileFromFile(L"shaders.fx", 0, 0, "VS", "vs_5_0", 0, 0, 0, &VS, 0, 0);
D3DX11CompileFromFile(L"shaders.fx", 0, 0, "PS", "ps_5_0", 0, 0, 0, &PS, 0, 0);
device->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &vShader);
device->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pShader);
VS->Release();
PS->Release();
context->VSSetShader(vShader, 0, 0);
context->PSSetShader(pShader, 0, 0);
// define the input layout
D3D11_INPUT_ELEMENT_DESC layout[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }
};
UINT numElements = ARRAYSIZE(layout);
//below gives me access violation error and says that &inputLayout is NULL
HR(device->CreateInputLayout(layout, numElements, VS->GetBufferPointer(), VS->GetBufferSize(), &inputLayout));
According to the code above, you are releasing the VS blob before you create the input layout. Direct3D needs the original vertex shader bytecode at the time you create the input layout so that it can validate that the two match.
A simple fix is to move VS->Release(); to after the call to CreateInputLayout.
A better answer is to remove all explicit use of Release and instead rely on a smart pointer like Microsoft::WRL::ComPtr.
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;
...
ComPtr<ID3DBlob> VS, PS;
...
D3DX11CompileFromFile(L"shaders.fx", 0, 0, "VS", "vs_5_0", 0, 0, 0, &VS, 0, 0);
D3DX11CompileFromFile(L"shaders.fx", 0, 0, "PS", "ps_5_0", 0, 0, 0, &PS, 0, 0);
device->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &vShader);
device->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pShader);
context->VSSetShader(vShader, 0, 0);
context->PSSetShader(pShader, 0, 0);
...
// VS is still alive at this point, so CreateInputLayout can validate against it
HR(device->CreateInputLayout(layout, numElements, VS->GetBufferPointer(), VS->GetBufferSize(), &inputLayout));
Whenever VS and PS go out of scope, they will take care of cleaning themselves up.
I have a 64-byte block and want to append a 64-bit (8-byte) block of data at the end.
typedef unsigned char uint1; // 1 byte
typedef unsigned int uint4;  // 4 bytes

// The 64-byte block:
const int BLOCKSIZE = 64;
static uint1 padding[BLOCKSIZE] = {
0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
};
// [[10000000][00000000].........[00000000]]
// The 64-bit (8-byte) block:
uint4 appendix[2] = {};
appendix[1] = 0x000000ff;
// [[00000000000000000000000000000000][00000000000000000000000011111111]]
After I memcpy 8 bytes from appendix into the last 8 bytes of padding,
memcpy(&padding[56], &appendix, 8);
it looks like this:
static uint1 padding[BLOCKSIZE] = {
0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0, 0, 0
};
but shouldn't it look like this?
static uint1 padding[BLOCKSIZE] = {
0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff
};
I don't know what's wrong here. Can you help me?
appendix[1] = 0x000000ff;
// [[00000000000000000000000000000000][00000000000000000000000011111111]]
You're making assumptions about the byte order (endianness), and you can't make such assumptions. Depending on the byte order of the architecture, appendix could alternatively be represented like this:
// [[00000000000000000000000000000000][11111111000000000000000000000000]]
If you want to set the last byte specifically, then you need to operate on bytes, not multi-byte integers. Like this for example:
uint1 appendix[8] = {};
appendix[7] = 0xff;
If you indeed need the last 8 bytes to represent two 4-byte integers, your code is correct in that regard, and only your assumption about what the memory should look like is wrong.
If the integer must be in a particular byte order, for example for sending it over a network, then you must convert it appropriately. POSIX provides htonl and its sister functions for exactly that; they are also provided by MSVC.
You're also making the assumption that unsigned int is 4 bytes, which is not guaranteed. Use uint32_t instead if you need a 4-byte unsigned integer.
Update:
My goal is to implement MD5, and I need to append a 64-bit representation of the length of a file.
According to rfc1321:
... a sequence of
bytes can be interpreted as a sequence of 32-bit words, where each
consecutive group of four bytes is interpreted as a word with the
low-order (least significant) byte given first.
MD5 is little-endian. Therefore, writing a 2×4-byte array without converting the byte order will work correctly only on a little-endian processor.
I recommend using an 8×1 byte array so that you can control the order of the bytes exactly as the specification requires. Alternatively, if you're on Linux or another platform that provides them, you could use the htole32 and le32toh functions to convert to the correct byte order. On other platforms you may need to implement them yourself.
So, as far as I'm able to understand RFC 1321, I need a 64-bit integer representation of the original message (file) size. The file size is 64 bytes. In a 64-bit integer, the value 64 in binary is either:
0000000000000000000000000000000000000000000000000000000001000000
or:
0000001000000000000000000000000000000000000000000000000000000000
I have decoding functions for both, but I don't know which is right for MD5.
You should read about endianness. Your first option is the big-endian representation; since MD5 is little-endian, the bytes must be stored low-order first. Note that only the byte order changes, not the bit order within each byte, so your second bit string is not the little-endian form either.