I am trying to pass some data from my kernel driver up to the user application.
I have defined the structure in my header file shared by my driver and application:
typedef struct _CallBack
{
    HANDLE  hParId;
    HANDLE  hProId;
    BOOLEAN bCreate;
} CB_INFO, *PCB_INFO;
In my driver, I have a switch statement
case IOCTL_CODE:
    if (outputBufferLength >= sizeof(PCB_INFO))
    {
        callback->hParId = deviceExtension->hParId;
        callback->hProId = deviceExtension->hProId;
        callback->bCreate = deviceExtension->bCreate;
        Irp->IoStatus.Information = outputBufferLength;
        Status = STATUS_SUCCESS;
    }
I have tried debugging the code with DbgPrint; there was nothing wrong with the if statement, as outputBufferLength is 12 and sizeof(PCB_INFO) is 8.
As for the DeviceIoControl code in my application:
DeviceIoControl(
    driver,
    IOCTL_CODE,
    0,
    0,
    &callback,
    sizeof(callback),
    &bytesReturn,
    NULL);
I have checked bytesReturn and it does not return 0; it returns 12.
Other info:
I am using 64-bit Windows 7.
I have no idea what is wrong and would appreciate any form of help. I would be glad to provide more of my code if you need more details. Could it be something to do with writing the driver on a 64-bit platform, or is there just something wrong with my code?
Thanks in advance!
Firstly, PCB_INFO is a pointer type, so sizeof(PCB_INFO) is the size of a pointer, not the size of the buffer it points to. Use sizeof(CB_INFO) (or equivalently sizeof(*callback)) instead. The code shown in the question is actually writing past the end of the buffer, so the results are unpredictable.
Secondly, your structure includes two elements of type HANDLE which has a different size in 32-bit and 64-bit architectures. In most situations Windows automatically takes care of converting ("thunking") between 32-bit and 64-bit structures, but in the case of I/O control codes this is your driver's responsibility. This is described in the DDK article Supporting 32-Bit I/O in Your 64-Bit Driver.
Alternatively you can make your application 64-bit, or change the structure so that it uses only elements of constant size.
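To make that concrete, here is a minimal, untested sketch combining both suggestions, assuming the same names the question uses (callback, deviceExtension, outputBufferLength) and a buffered IOCTL. Storing the handle values as fixed-size 64-bit integers sidesteps the 32/64-bit HANDLE size mismatch:
// Hypothetical revision: fixed-size fields plus a size check against the
// structure, not the pointer.
typedef struct _CallBack
{
    ULONG64 hParId;    // handle values stored as 64-bit integers on both bitnesses
    ULONG64 hProId;
    BOOLEAN bCreate;
} CB_INFO, *PCB_INFO;

case IOCTL_CODE:
    if (outputBufferLength >= sizeof(CB_INFO))   // size of the structure, not the pointer
    {
        callback->hParId  = (ULONG64)(ULONG_PTR)deviceExtension->hParId;
        callback->hProId  = (ULONG64)(ULONG_PTR)deviceExtension->hProId;
        callback->bCreate = deviceExtension->bCreate;
        Irp->IoStatus.Information = sizeof(CB_INFO);
        Status = STATUS_SUCCESS;
    }
    break;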
It has taken me two days, but I've finally narrowed down the source of an ERROR_ACCESS_DENIED (5) error when attempting to CallNamedPipe as a problem with structure alignment. We had a 32-bit service and a 32-bit application, and I am trying to update the service to be a 64-bit service. The strange thing is that everything was working fine in 32-bit mode, but in 64-bit mode CallNamedPipe from the 32-bit application was reporting an access denied error.
The service was already setting up a SECURITY_ATTRIBUTES structure and populating the lpSecurityDescriptor member with a properly initialized PSECURITY_DESCRIPTOR. And this wasn't reporting any errors when passed to CreateNamedPipe. I still don't know why it wasn't reporting an error; maybe bad security attributes silently falls back to a default instead of failing.
Through many gyrations (including some earlier incomplete/incorrect attempts at changing structure alignment - debugging service startup code is not easy), I came to realize that the project setting that sets the default struct alignment to 1 byte (/Zp1) was causing problems. When I finally used #pragma pack(push,8) before all occurrences of #include <windows.h> and #pragma pack(pop) afterward, then things started working.
My question now is why this is necessary. I see that there are many header files in the Windows API that explicitly set structure alignment by including pshpack1.h, pshpack2.h, pshpack4.h, pshpack8.h and poppack.h. How am I to know when the Windows API controls its own packing and when it becomes my responsibility to have the proper pack level set? Shouldn't every Windows API declaration that cares about structure alignment set the proper packing, so I don't have to sift through all the code in the system for anything that includes Windows API header files? One alternative would be to change the project setting to use the default structure alignment, but I have to assume the 1-byte setting exists because we have even more code in our system relying on 1-byte structure alignment than we have relying on the Windows API.
This is what the server side code looks like:
BOOL OpenMyPipe()
{
    SECURITY_ATTRIBUTES sa;
    PSECURITY_DESCRIPTOR pSD;

    printf("sizeof(SECURITY_ATTRIBUTES) == %d\n", sizeof(SECURITY_ATTRIBUTES));
    pSD = (PSECURITY_DESCRIPTOR)GlobalAlloc(LPTR, SECURITY_DESCRIPTOR_MIN_LENGTH);
    if (pSD == NULL)
        return FALSE;
    if (!InitializeSecurityDescriptor(pSD, SECURITY_DESCRIPTOR_REVISION))
        return FALSE;
    if (!SetSecurityDescriptorDacl(pSD, TRUE, (PACL)NULL, FALSE))
        return FALSE;

    sa.nLength = sizeof(sa);
    sa.lpSecurityDescriptor = pSD;
    sa.bInheritHandle = FALSE;

    char szPipeName[MAX_PATH];
    sprintf(szPipeName, "\\\\.\\pipe\\%s%s", "__SQLTST_", "MAINMR");

    hPipe = CreateNamedPipe(szPipeName, PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
                            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                            1, 0, 0, NMPWAIT_WAIT_FOREVER, &sa);
    if (hPipe == INVALID_HANDLE_VALUE)
        return FALSE;
    return TRUE;
}
For simplicity's sake I verified this with a small VB.NET client:
Sub Main()
    Dim pipes = System.IO.Directory.GetFiles("\\.\pipe\")
    Using pipe As New System.IO.Pipes.NamedPipeClientStream(".", "__SQLTST_MAINMR")
        Dim message(16) As Byte
        pipe.Connect(3000)
        Array.Copy(BitConverter.GetBytes(Process.GetCurrentProcess().Id), message, 4)
        pipe.Write(message, 0, 16)
    End Using
End Sub
I believe the error only occurs when the server side code is running under a different account like the SYSTEM account. I don't know how to easily test that, though. What I do know is that the above code will not fail even with no SECURITY_ATTRIBUTES set up when it's all running under the same account as regular application code. Also, of course, you have to set the structure alignment in the server code to 1 byte to see the error.
The Windows SDK expects packing to be 8 bytes. From Using the Windows Headers:
Projects should be compiled to use the default structure packing, which is currently 8 bytes because the largest integral type is 8 bytes. Doing so ensures that all structure types within the header files are compiled into the application with the same alignment the Windows API expects. It also ensures that structures with 8-byte values are properly aligned and will not cause alignment faults on processors that enforce data alignment.
This is necessary to ensure that data structures are aligned as the system expects. I suspect the reason for not doing it explicitly is that they want the default, so why ask for anything else. Changing the packing is relatively rare and should be confined to specific circumstances. If Microsoft added in #pragma pack(push,8) to every header file, they would be implicitly saying it is normal to change alignment.
Unaligned structures save space, but reduce performance as alignment faults are generated when accessing unaligned members.
The Windows SDK does change alignments for structures for a number of reasons. One might be for file formats that need to read either 32 or 64-bit data structures. For example, the PE-file format can be read using either IMAGE_THUNK_DATA64 or IMAGE_THUNK_DATA32. The former needs 8 byte padding whilst the latter needs 4 byte padding. Similarly, Wininet.h will pack the data structures differently depending on whether it is being compiled for 32-bit or 64-bit code. These are legitimate changes in packing, but with a specific reason.
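For reference, a minimal sketch of the workaround the question arrived at: if a project forces 1-byte packing globally (/Zp1), restore the SDK's expected 8-byte packing just around the Windows headers, then return to the project default. The include shown is illustrative; the same wrapping applies to any header that pulls in Windows API declarations.
#pragma pack(push, 8)   // Windows headers expect the default 8-byte packing
#include <windows.h>
#pragma pack(pop)       // back to the project-wide /Zp1 packing

// Project headers that rely on 1-byte packing can follow here as before.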
Using the following code, more or less copy-pasted from the MSDN example of
GetAdaptersAddresses, I get the return value 122, which means ERROR_INSUFFICIENT_BUFFER (according to this system error code list).
ULONG outBufLen = 150000; // Tried for different (large) values here...
PIP_ADAPTER_ADDRESSES pAddresses = (IP_ADAPTER_ADDRESSES *) malloc(outBufLen);
DWORD dwRetVal = GetAdaptersAddresses(AF_INET, 0, NULL, pAddresses, &outBufLen);
// ....
free(pAddresses);
The documentation of GetAdaptersAddresses does not list ERROR_INSUFFICIENT_BUFFER as one of the expected return values. (It lists ERROR_BUFFER_OVERFLOW, which should adjust outBufLen to the needed value, but that remains unchanged).
Using GetAdaptersInfo instead leads to the same symptoms.
This error does not occur on my development machine, but on one virtual and one real clean Windows 7 x86 SP1 installation (added the VC++ redistributables).
As a C++ newbie, am I doing something wrong? What could cause this error, and how can I fix it? =)
First of all, you can, as others suggested, make two calls: one to find out the required buffer size, and then the query itself. Especially since you are seeing the error, your first step should be to ask the API what size it expects.
Second, you need to know that this API is not quite safe in 32-bit processes that consume large amounts of memory, where buffers can span into the upper 2 GB of the address space. The API might start acting in a weird way, either due to its own bug or a bug in an underlying layer. See the details on MS Connect: GetAdaptersAddresses API incorrectly returns no adapters for a process with high memory consumption.
The fact that the error code is not "one of the expected return values" suggests that the error comes from an underlying layer and this API just passes it up on internal failure. As a clue, disabling a network adapter on the system might make the error go away.
Visual Studio deployed a library named "IPHLPAPI.dll" together with my project which caused the problem. Deleting this file solved it.
Why this was the case is subject to further research =)
First, a buffer is a block of memory.
So "insufficient" could mean that you haven't given it enough memory somehow. Or it could be a block of memory you don't have access to. Maybe the address doesn't even exist.
Look at this:
ERROR_INSUFFICIENT_BUFFER
122 (0x7A)
The data area passed to a system call is too small.
This really sounds like the buffer doesn't have enough allocated memory, or something similar.
Maybe outBufLen has to be a specific length, such as the size of the memory block. Sometimes an API doesn't check the 'name' but compares each variable's size; this idea came from the High Level Shader Language.
So I would take a closer look at:
ULONG outBufLen = 150000; // Tried for different (large) values here...
PIP_ADAPTER_ADDRESSES pAddresses = (IP_ADAPTER_ADDRESSES *) malloc(outBufLen);
Good luck!
To know the exact buffer size required, you can just pass NULL into pAddresses and size will be set to the required value. You may want to rewrite your code slightly to make that work:
DWORD rv, size = 0;
PIP_ADAPTER_ADDRESSES adapter_addresses;

// First call with a NULL buffer: fails with ERROR_BUFFER_OVERFLOW and sets size.
rv = GetAdaptersAddresses(AF_INET, 0, NULL, NULL, &size);
if (rv != ERROR_BUFFER_OVERFLOW)
    return false; // ERROR

// Second call with a buffer of the size reported by the first call.
adapter_addresses = (PIP_ADAPTER_ADDRESSES)malloc(size);
rv = GetAdaptersAddresses(AF_INET, 0, NULL, adapter_addresses, &size);
if (rv != ERROR_SUCCESS) {
    free(adapter_addresses);
    return false; // ERROR
}
I am helping a friend to finish a final year project in which he has this circuit that we want to switch on and off using a C++ program.
I initially thought it would be easy, but I have failed to implement this program.
The main problem is that Windows XP and above don't allow direct access to hardware, so some websites suggest that I need to write a driver or find an existing one.
I have also looked at some projects online but they seem to work for Windows XP but fail to work for Windows 7.
Also, most projects were written in VB or C# which I am not familiar with.
Question:
Is there a suitable driver that works for Windows XP and Windows 7, and if yes how can I use it in my code? (code snippets would be appreciated)
Is there a cross-platform way of communicating with parallel ports?
Have a look at codeproject: here, here and here. You'll find treasures.
The first link works for Windows 7, both 32-bit and 64-bit.
You shouldn't need to write a driver or anything -- you just call CreateFile with a filename like "LPT1" to open up a handle to the parallel port, and then you can use WriteFile to write data to it. For example:
HANDLE parallelPort = CreateFile("LPT1", GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);
if(parallelPort == INVALID_HANDLE_VALUE)
{
// handle error
}
...
// Write the string "foobar" (and its null terminator) to the parallel port.
// Error checking omitted for expository purposes.
const char *data = "foobar";
DWORD bytesWritten = 0;   // WriteFile requires this when lpOverlapped is NULL
WriteFile(parallelPort, data, strlen(data)+1, &bytesWritten, NULL);
...
CloseHandle(parallelPort);
Is there a C++ .NET function that I can call that will detect if my program is running in compatibility mode? If there is not one, could someone show me the code for one? Thanks.
For example:
Program loads up
Compatibility Mode check
if true then exit
else run
From another forum:
After a few Google searches went in vain, I decided to experiment myself. I found that the compatibility settings for each executable are stored, as I thought they would be, in the Windows registry.
The key where the settings are stored is HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers
For each application that has its compatibility settings specified, there exists a value under that key whose name is the path to the executable and whose data is a string consisting of the compatibility settings.
The keywords in the string that specify the compatibility settings are: WIN95, WIN98, NT4SP5, WIN2000, 256COLOR, 640X480, DISABLETHEMES, DISABLECICERO.
If multiple settings are specified (or are to be specified), the data consists of the settings above separated by a space each. The first four settings are mutually exclusive, i.e. only one of them is to be specified (if at all). I haven't tested the consequences of specifying multiple operating systems.
So, back to addressing your problem. To check if an executable (let's say "C:\path\executable.exe") is set to run in 256 color mode, there would be a value named "C:\path\executable.exe" (without the quotes, even if the path contains spaces) under the key [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers], and the data associated with that value would contain the string "256COLOR". If it is also set to run in compatibility mode under Windows 98/ME, the data would be "WIN98 256COLOR".
So, the approach is simple. Test whether there is a value with the full path of the executable under the key mentioned above. If there isn't, the executable has not been given any compatibility settings. If the value exists, retrieve its data and check for the presence of "256COLOR" in the data. Accordingly, the presence of "WIN95", "WIN98", "NT4SP5" or "WIN2000" would mean that the executable is set to run in compatibility mode for that particular operating system.
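Here is a minimal sketch of that registry check in plain Win32; the executable path and the "256COLOR" keyword are only illustrative, so adjust them to the setting you care about:
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HKEY hKey;
    char data[256] = {0};
    DWORD size = sizeof(data);
    const char *exePath = "C:\\path\\executable.exe";   // value name is the full path

    if (RegOpenKeyExA(HKEY_CURRENT_USER,
                      "Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags\\Layers",
                      0, KEY_QUERY_VALUE, &hKey) == ERROR_SUCCESS)
    {
        if (RegQueryValueExA(hKey, exePath, NULL, NULL, (LPBYTE)data, &size) == ERROR_SUCCESS)
        {
            // The data is a space-separated list such as "WIN98 256COLOR".
            if (strstr(data, "256COLOR") != NULL)
                printf("256 color compatibility mode is set\n");
        }
        else
        {
            printf("No compatibility settings for this executable\n");
        }
        RegCloseKey(hKey);
    }
    return 0;
}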
Get the version of the operating system returned by GetVersionEx and compare it to the file version of kernel32.dll. When in application compatibility mode, GetVersionEx will always return the version of the operating system that is being 'emulated'. If the two versions differ, you are in application compatibility mode.
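A minimal sketch of that comparison, assuming the version.lib APIs (GetFileVersionInfo/VerQueryValue); only the major and minor versions are compared here, and error handling is kept short:
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#pragma comment(lib, "version.lib")

BOOL LooksLikeCompatMode(void)
{
    // Version reported to the application (possibly the 'emulated' one).
    OSVERSIONINFOA osInfo = { sizeof(osInfo) };
    GetVersionExA(&osInfo);

    // File version of kernel32.dll, which reflects the real operating system.
    char path[MAX_PATH];
    GetSystemDirectoryA(path, MAX_PATH);
    strcat_s(path, MAX_PATH, "\\kernel32.dll");

    DWORD handle = 0;
    DWORD size = GetFileVersionInfoSizeA(path, &handle);
    if (size == 0)
        return FALSE;

    BOOL result = FALSE;
    void *block = malloc(size);
    if (block && GetFileVersionInfoA(path, 0, size, block))
    {
        VS_FIXEDFILEINFO *ffi = NULL;
        UINT len = 0;
        if (VerQueryValueA(block, "\\", (LPVOID *)&ffi, &len) && ffi != NULL)
        {
            DWORD fileMajor = HIWORD(ffi->dwFileVersionMS);
            DWORD fileMinor = LOWORD(ffi->dwFileVersionMS);
            if (fileMajor != osInfo.dwMajorVersion || fileMinor != osInfo.dwMinorVersion)
                result = TRUE;   // reported OS version differs from the real one
        }
    }
    free(block);
    return result;
}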
The answer above helped me to get a "solution" for the question at hand. It is probably not the most elegant, but it seems to work. Obviously you can get a bit more creative with the return type; a plain boolean does not suffice here. I think a native API for this would be good.
typedef VOID (NTAPI* TRtlGetNtVersionNumbers)(LPDWORD pdwMajorVersion, LPDWORD pdwMinorVersion, LPDWORD pdwBuildNumber);

bool IsRunningCompatMode()
{
    TRtlGetNtVersionNumbers RtlGetNtVersionNumbers =
        (TRtlGetNtVersionNumbers)GetProcAddress(GetModuleHandleA("ntdll.dll"), "RtlGetNtVersionNumbers");
    assert(RtlGetNtVersionNumbers);
    if (RtlGetNtVersionNumbers)
    {
        // Version reported to the application (affected by compatibility shims).
        OSVERSIONINFO osInfo = {0};
        osInfo.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);
        GetVersionEx(&osInfo);

        // Real version straight from ntdll, unaffected by the shim layer.
        DWORD dwMajorVersion;
        DWORD dwMinorVersion;
        DWORD dwBuildNumber;
        RtlGetNtVersionNumbers(&dwMajorVersion, &dwMinorVersion, &dwBuildNumber);
        dwBuildNumber &= 0x0000FFFF;

        if (osInfo.dwBuildNumber != dwBuildNumber)
        {
            return true;
        }
    }
    return false;
}
I'm working on a machine that has 8 gigs of memory installed and I'm trying to programmatically determine how much memory is installed in the machine. I've already attempted using sysctlbyname() to get the amount of memory installed, however it seems to be limited to returning a signed 32 bit integer.
uint64_t total = 0;
size_t size = sizeof(total);
if( !sysctlbyname("hw.physmem", &total, &size, NULL, 0) )
m_totalMemory = total;
The above code, no matter what type is passed to sysctlbyname, always returns 2147483648 in the total variable. I've been searching through IOKit and IORegistryExplorer for another route of determining installed memory, but have come up with nothing so far. I've found IODeviceTree:/memory in IORegistryExplorer, but there's no field in there for size. I'm not finding anything anywhere else in the IO Registry either. Is there a way to access this information via IOKit, or a way to make sysctlbyname return more than a 32-bit signed integer?
You can use sysctl() and query HW_MEMSIZE. This returns the memory size as a 64-bit integer, instead of the default 32-bit integer. The man page gives the details.
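A minimal sketch of that query, assuming macOS (the same value is also reachable via sysctlbyname("hw.memsize", ...)):
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t memsize = 0;
    size_t size = sizeof(memsize);
    int mib[2] = { CTL_HW, HW_MEMSIZE };   // 64-bit physical memory size

    if (sysctl(mib, 2, &memsize, &size, NULL, 0) == 0)
        printf("Installed memory: %llu bytes\n", (unsigned long long)memsize);

    return 0;
}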
The easy way:
[[NSProcessInfo processInfo] physicalMemory]