Close shared files programmatically - C++

The company I'm working with has a program written in ye olde VB6, which is updated pretty frequently, and most clients run the executable from a mapped network drive. This actually causes surprisingly few issues, the biggest of which is automatic updates. Currently the updater program (written in C++) renames the existing exe, then downloads and places the new version in the old version's place. This generally works fine, but in some environments it simply fails.
The workaround is running this command from Microsoft:
for /f "skip=4 tokens=1" %a in ('net files') do net files %a /close
This command closes all network files that are shared (well... most of them), and then the updater can replace the exe.
In C++ I can use the system() function to run that command, or I could redirect the output of net files and iterate through the results looking for the particular file in question, then run net file <id> /close on each match. But it would be much nicer if there were WinAPI functions with similar capabilities, for better reliability and future-proofing.
Is there any way for me to programmatically find all network shared files and close relevant ones?

You can programmatically do what net file /close does. Just include lmshare.h (via Lm.h) and link against Netapi32.lib. You have two functions to use: NetFileEnum to enumerate all open network files (on a given computer) and NetFileClose to close them.
A quick and dirty example (it assumes the program is running on the server itself and that there are not too many open connections, see the last paragraph; no error checking is done):
FILE_INFO_2* pFiles = NULL;
DWORD nRead = 0, nTotal = 0;
NetFileEnum(
    NULL,                   // servername: NULL means localhost
    L"c:\\directory\\path", // basepath: directory where the VB6 program lives (a wide string, the API takes LMSTR)
    NULL,                   // username: NULL searches for all users
    2,                      // level: we just need the resource ID
    (LPBYTE*)&pFiles,       // bufptr: a double pointer, the API allocates the buffer
    MAX_PREFERRED_LENGTH,   // prefmaxlen: collect as much as possible
    &nRead,                 // entriesread: number of entries stored in pFiles
    &nTotal,                // totalentries: ignored here
    NULL                    // resume_handle: ignored here
);
for (DWORD i = 0; i < nRead; ++i)
    NetFileClose(NULL, pFiles[i].fi2_id);
NetApiBufferFree(pFiles);
Refer to MSDN for details about NetFileEnum and NetFileClose. Note that NetFileEnum may return ERROR_MORE_DATA if more data is available.
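If more entries exist than fit in one buffer, NetFileEnum returns ERROR_MORE_DATA; a resume handle lets you keep calling it until everything has been read. A rough sketch, assuming the same headers and Netapi32.lib as above and with minimal error handling:
#include <windows.h>
#include <lm.h>
#pragma comment(lib, "Netapi32.lib")

// Close every open network file underneath basePath, resuming the enumeration
// for as long as NetFileEnum reports ERROR_MORE_DATA.
void CloseAllOpenFilesUnder(LPWSTR basePath)
{
    DWORD_PTR resume = 0;
    NET_API_STATUS status;
    do {
        FILE_INFO_2* pFiles = NULL;
        DWORD nRead = 0, nTotal = 0;
        status = NetFileEnum(NULL, basePath, NULL, 2,
                             (LPBYTE*)&pFiles, MAX_PREFERRED_LENGTH,
                             &nRead, &nTotal, &resume);
        if (status == NERR_Success || status == ERROR_MORE_DATA) {
            for (DWORD i = 0; i < nRead; ++i)
                NetFileClose(NULL, pFiles[i].fi2_id);  // close this open network file
        }
        if (pFiles)
            NetApiBufferFree(pFiles);
    } while (status == ERROR_MORE_DATA);               // more entries are still pending
}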


Is it good to use ntdll.dll in a win32 console application?

Short:
In my C++ project I need to read/write extended file properties. I managed it using alternate data streams (ADS). My problem is that to open the ADS I need to use the CreateFile API, but it does not fulfill my needs. NtCreateFile would fulfill all of them (or alternatively NtSetEaFile and NtQueryEaFile), but NtCreateFile is not directly accessible from a Win32 console application.
I know I can use this function easily via GetProcAddress, but I would like to hear your opinions in case I have missed something. Some other libraries already use this pattern, for example Chromium (https://github.com/chromium-googlesource-mirror/chromium/blob/1c1996b75d3611f56d14e2b30e7ae4eabc101486/src/sandbox/src/win_utils.cc, function ResolveNTFunctionPtr).
But I'm uncertain, because this C++ project is not a hobby project, and I ask myself whether this is dangerous or not.
I guess NtCreateFile is maybe the safest choice, because it is well documented, declared in the winternl.h header, and unchanged since Windows 2000. But what about NtSetEaFile and NtQueryEaFile, which fit my needs perfectly? They are only half documented; documentation exists for ZwSetEaFile and ZwQueryEaFile (also unchanged since Windows 2000).
Why I want to do this:
I want to write and read extended properties of files via ADS. But when writing the extended property of a given file for the first time, I need to open the ADS with OPEN_ALWAYS. If the file does not exist, that creates a new file, even though I am not touching the file's main data stream. To avoid this I first obtain a handle to the original file and use that HANDLE to check whether the file still exists.
But I don't want to block any file with reduced sharing rights, because from my point of view that is a very bad pattern; the user needs to have full access to any file at any time. Because of that we open all HANDLEs with FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE. And now I have the race:
auto hFile = CreateFileW(originalPath, …, FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE, …);
// This is the little race: if somebody renames (or deletes) originalPath in between,
// the second CreateFileW call will create an empty file at originalPath (the old path).
auto hADS = CreateFileW(originalPath + adsName, …, FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE, OPEN_ALWAYS, …);
This is a real issue, especially because it happens from time to time in our tests. NtCreateFile would fix it, because I can create the second HANDLE relative to the first HANDLE, so there is no race. Alternatively, NtSetEaFile and NtQueryEaFile would help, because then I only need one HANDLE.
The thing is, the application does not need to be future-proof, because ADS only works on NTFS anyway, and who knows when NTFS will be replaced. But I don't want flaky behaviour; I want to be able to trust these methods. I am fine if the API changes in the future and the software has to adapt, but I want to be sure that every Windows version from Windows 7 upward can deal with it. Does anybody have experience to share? I would very much like to hear it.
This question is asking the wrong thing. The solution to your problem is not NtCreateFile, but CreateFile with dwCreationDisposition set to OPEN_EXISTING.
From documentation:
OPEN_EXISTING
Opens a file or device, only if it exists. If the specified file or device does not exist, the function fails and the last-error code is set to ERROR_FILE_NOT_FOUND.
Simply open the file if it exists and set whatever you want. If the file has been renamed, CreateFile fails with ERROR_FILE_NOT_FOUND.
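A minimal sketch of that suggestion, assuming originalPath is a std::wstring and ":MyExtendedProps" is just a placeholder stream name:
// Open the ADS only if it already exists; OPEN_EXISTING never creates anything,
// so a concurrent rename or delete simply makes the call fail.
HANDLE hADS = CreateFileW((originalPath + L":MyExtendedProps").c_str(),
                          GENERIC_READ | GENERIC_WRITE,
                          FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE,
                          NULL,
                          OPEN_EXISTING,           // fail instead of creating a new file
                          FILE_ATTRIBUTE_NORMAL,
                          NULL);
if (hADS == INVALID_HANDLE_VALUE && GetLastError() == ERROR_FILE_NOT_FOUND)
{
    // The file (or that particular stream) no longer exists under this name.
}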
THE PROBLEM
Now, to your actual question: which method is better, and is it really impossible to use ntdll.dll in a Win32 console application?
Again, your "better" method, GetProcAddress, is "wrong" in the same way as linking against ntdll.dll: in Windows 11, or Windows 12, or Windows 3030 the function may be removed, and then both solutions (static as well as dynamic import) will fail.
It is not really unsafe to use this kind of API if documentation exists. In the case of NtSetEaFile, NtQueryEaFile and NtCreateFile you can find a description in Microsoft's docs (keep in mind NtXxx == ZwXxx).
But this API can change in the future, and Microsoft does not guarantee that it will provide the same methods in the next Windows version. If you can, use the public API, because then you are safe; if not, it is a case-by-case decision. In this case the three methods have been unchanged since Windows 2000. Also, NtSetEaFile and NtQueryEaFile are used by Microsoft for WSL (Windows Subsystem for Linux), and NtCreateFile in particular is used by a wide range of open-source projects. So it is very unlikely that this API will change.
In my use case another aspect is important: I wanted to use ADS, and ADS is only supported on NTFS, so using ADS does not guarantee future compatibility either. That made it an easy decision to use NtSetEaFile and NtQueryEaFile.
But how can you use this kind of API? Both dynamic and static linking are possible; which is better depends on your needs. For static linking you need to download the latest WDK (Windows Driver Kit) and link against ntdll.lib. For dynamic linking you access the DLL directly via GetModuleHandle and look up the address of the method with GetProcAddress; under Windows, ntdll.dll is loaded into every application. In both cases you don't directly get a header file: you have to declare the prototypes yourself or take them from the WDK.
In my project dynamic linking was the best choice. The reason is that on every Windows version the right implementation is chosen, and if the method is not available I have the chance to deactivate the feature in my software instead of crashing. Microsoft recommends the dynamic way for that last reason.
Simple pseudo-code (dynamic case):
// Definitions normally found in the DDK headers / winternl.h:
typedef LONG NTSTATUS;

typedef struct _FILE_FULL_EA_INFORMATION {
    ULONG  NextEntryOffset;
    UCHAR  Flags;
    UCHAR  EaNameLength;
    USHORT EaValueLength;
    CHAR   EaName[1];
} FILE_FULL_EA_INFORMATION, *PFILE_FULL_EA_INFORMATION;

typedef struct _IO_STATUS_BLOCK {
    union {
        NTSTATUS Status;
        PVOID Pointer;
    };
    ULONG_PTR Information;
} IO_STATUS_BLOCK, *PIO_STATUS_BLOCK;

typedef NTSTATUS (WINAPI *NtSetEaFileFunction)(IN HANDLE FileHandle,
                                               OUT PIO_STATUS_BLOCK IoStatusBlock,
                                               IN PVOID Buffer,
                                               IN ULONG Length);

HMODULE ntdll = GetModuleHandle(L"ntdll.dll");
NtSetEaFileFunction function = nullptr;
FARPROC *function_ptr = reinterpret_cast<FARPROC *>(&function);
*function_ptr = GetProcAddress(ntdll, "NtSetEaFile"); // the exported name must match the typedef
// function can now be used normally (after checking that it is not nullptr).
The other answer is incorrect, because the root of my problem is that I need to use OPEN_ALWAYS. Of course, if you don't need that flag, everything is fine. But in my case there is a point where I need to create the ADS, and it will not be created without the OPEN_ALWAYS flag.
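For completeness, here is a rough, untested sketch of the relative open the question mentions: pass the handle of the original file as RootDirectory to NtCreateFile so the stream name is resolved against that handle instead of a path that somebody might rename in the meantime. The stream name, the constants copied from the driver-kit headers, the static link against ntdll.lib, and the assumption that NTFS accepts a bare ":stream" name relative to a file handle are all mine, not confirmed by the question:
#include <windows.h>
#include <winternl.h>
#pragma comment(lib, "ntdll.lib")

// winternl.h does not define these create-disposition/create-options values;
// they are copied from the driver-kit headers.
#ifndef FILE_OPEN_IF
#define FILE_OPEN_IF                 0x00000003
#endif
#ifndef FILE_NON_DIRECTORY_FILE
#define FILE_NON_DIRECTORY_FILE      0x00000040
#endif
#ifndef FILE_SYNCHRONOUS_IO_NONALERT
#define FILE_SYNCHRONOUS_IO_NONALERT 0x00000020
#endif
#ifndef OBJ_CASE_INSENSITIVE
#define OBJ_CASE_INSENSITIVE         0x00000040
#endif

// Open (or create) an alternate data stream relative to an already opened file handle.
HANDLE OpenAdsRelative(HANDLE hOriginalFile)
{
    UNICODE_STRING name;
    RtlInitUnicodeString(&name, L":MyExtendedProps");   // hypothetical stream name

    OBJECT_ATTRIBUTES oa;
    InitializeObjectAttributes(&oa, &name, OBJ_CASE_INSENSITIVE, hOriginalFile, NULL);

    HANDLE hAds = NULL;
    IO_STATUS_BLOCK iosb = {};
    NTSTATUS status = NtCreateFile(&hAds,
                                   FILE_GENERIC_READ | FILE_GENERIC_WRITE,
                                   &oa, &iosb, NULL, FILE_ATTRIBUTE_NORMAL,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                   FILE_OPEN_IF,        // like OPEN_ALWAYS, but relative to hOriginalFile
                                   FILE_NON_DIRECTORY_FILE | FILE_SYNCHRONOUS_IO_NONALERT,
                                   NULL, 0);
    return (status >= 0) ? hAds : INVALID_HANDLE_VALUE; // status >= 0 means NT_SUCCESS
}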

Could DropBox interfere with DeleteFile()/rename()

I had the following code which got executed every two minutes all day long:
int successfully_deleted = DeleteFile(dest_filename);
if (!successfully_deleted)
{
    // this never happens
}
rename(source_filename,dest_filename);
Once every several hours the rename() would fail with errno=13 (EACCES). The files involved were all sitting on a DropBox directory and I had a hunch that DropBox could be the cause. I figured that it might just be possible that the DeleteFile() function may return with a non-zero successfully_deleted but actually DropBox could still be busy doing some stuff in relation to the deletion that prevented rename() from succeeding. What I did next was to change rename() to my_rename() which would attempt a rename() and upon any failure would Sleep() for one second and try a second time. Sure enough that has worked perfectly ever since. What's more, I get a diagnostic message displaying first-attempt-failures every several hours. It has never failed on the second attempt.
So you could say that the problem is entirely solved... but I would like to understand what might be going on so as to better defend myself against any related DropBox issues in the future...
Really I would like to have a new super_delete() function which does not return until the file is properly deleted and finished with in all respects.
Under Windows, a request to delete a file never actually deletes the file right away. It just marks its FCB (File Control Block) with a special flag (FCB_STATE_DELETE_ON_CLOSE). The real deletion happens only when the last file handle is closed.
The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED.
Also, if a section (memory-mapped file) is open on the file, the file cannot even be marked for deletion; the API call fails with STATUS_CANNOT_DELETE. So in general it is not always possible to delete a file.
If other open handles exist for the file (but no section!), then starting with Windows 10 RS1 there is new delete functionality: FileDispositionInformationEx with FILE_DISPOSITION_POSIX_SEMANTICS. In this case:
Normally a file marked for deletion is not actually deleted until all open handles for the file have been closed and the link count for the file is zero. When marking a file for deletion using FILE_DISPOSITION_POSIX_SEMANTICS, the link gets removed from the visible namespace as soon as the POSIX delete handle has been closed, but the file's data streams remain accessible by other existing handles until the last handle has been closed.
ULONG DeletePosix(PCWSTR lpFileName)
{
    HANDLE hFile = CreateFileW(lpFileName, DELETE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               0, OPEN_EXISTING,
                               FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT, 0);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        return GetLastError();
    }

    // Requires a Windows 10 SDK for FILE_DISPOSITION_INFO_EX and the flag names.
    FILE_DISPOSITION_INFO_EX fdi = { FILE_DISPOSITION_FLAG_DELETE | FILE_DISPOSITION_FLAG_POSIX_SEMANTICS };

    ULONG dwError = SetFileInformationByHandle(hFile, FileDispositionInfoEx, &fdi, sizeof(fdi))
                        ? NOERROR : GetLastError();

    // win10 rs1: the file is removed from its parent folder here
    CloseHandle(hFile);

    return dwError;
}
Update
Sorry, I didn't read the question correctly the first time; I thought DeleteFile returned error 13.
Now I understand that DeleteFile succeeds but rename fails immediately after.
It could just be a sync issue with the filesystem. After calling DeleteFile, the file will be deleted when the OS commits the changes to the filesystem; that may not happen immediately.
If you need to perform multiple operations on the same path, you could have a look at transacted file operations: https://learn.microsoft.com/it-it/windows/desktop/api/winbase/nf-winbase-deletefiletransacteda.
-- OLD ANSWER --
That is correct. If another application holds handles to that file, DeleteFile will fail.
Citing MSDN docs https://learn.microsoft.com/en-us/windows/desktop/api/winbase/nf-winbase-deletefile :
The DeleteFile function fails if an application attempts to delete a file that has other handles open for normal I/O or as a memory-mapped file (FILE_SHARE_DELETE must have been specified when other handles were opened).
This applies to dropbox, the antivirus, or in general, any other application that may open those files.
Dropbox may open the file to compute its hash (to look for changes) at any moment. Same goes with the antivirus.
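Given that, a retry wrapper along the lines of the asker's my_rename() is about the only practical defence. A small sketch; the retry count and delay are arbitrary:
#include <cstdio>
#include <cerrno>
#include <windows.h>

// Retry rename() a few times when it fails with EACCES, which is what you see
// when Dropbox or an antivirus still holds a handle to one of the files.
int rename_with_retry(const char* from, const char* to, int attempts = 5)
{
    for (int i = 0; i < attempts; ++i)
    {
        if (std::rename(from, to) == 0)
            return 0;          // success
        if (errno != EACCES)
            break;             // a different error: retrying will not help
        Sleep(1000);           // give the other process time to close its handle
    }
    return -1;                 // still failing after all attempts
}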

C++ WinINet InternetReadFile function refresh

I am trying to get the content of a file using WinINet in C++. The file is an XML file generated by an executable on a server.
The code to initialize, connect and read a file at the specified server address is working:
// Connect to internet.
m_hInternet = InternetOpen(L"HTTPRIP",INTERNET_OPEN_TYPE_PRECONFIG,NULL,NULL,0);
// Check if worked.
if( !m_hInternet )
return;
// Connect to selected URL.
m_hUrl = InternetOpenUrlA(m_hInternet, strUrl.c_str(), NULL, 0, INTERNET_FLAG_PRAGMA_NOCACHE | INTERNET_FLAG_RESYNCHRONIZE, 0);
// Check if worked.
if( !m_hUrl )
return;
if( InternetReadFile(m_hUrl, buf, BUFFER_SIZE, &bytesread) && bytesread != 0 )
{
// Put into std::string.
strData = std::string(buf,buf+bytesread);
}
Now I want to re-read the file (same address). The server updates the file at 50 Hz, and I want my code to read the file only if it has been updated by the server. Can InternetReadFile do that kind of thing? Maybe with a flag, but I didn't find anything on MSDN.
Thanks for your help.
There is no way in the HTTP protocol for you to do that directly, hence there is no such function in WinINet. The easiest solution, if the file is relatively small, might be to download the file and see if it has changed. If the file is large, let the server that writes the file also write a timestamp, checksum or counter file next to it.
Then your code would download the checksum file, see if it's changed, and in that case download the original file.
Or another solution would be to put a timestamp or similar data in the beginning of the XML file, and stop downloading the file if the timestamp (or checksum) is not updated. (This comes with its own drawbacks of course, you may have to write your own parser.)
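A rough sketch of that polling idea, reusing the asker's WinINet session handle; the URL of the small stamp file is an assumption and error handling is omitted:
#include <windows.h>
#include <wininet.h>
#include <string>
#pragma comment(lib, "wininet.lib")

// Download a small file (e.g. a timestamp or checksum written next to the XML)
// into a string, bypassing the cache so we always see the current contents.
std::string FetchSmallFile(HINTERNET hInternet, const std::string& url)
{
    std::string data;
    HINTERNET hUrl = InternetOpenUrlA(hInternet, url.c_str(), NULL, 0,
                                      INTERNET_FLAG_RELOAD | INTERNET_FLAG_PRAGMA_NOCACHE, 0);
    if (!hUrl)
        return data;
    char buf[1024];
    DWORD read = 0;
    while (InternetReadFile(hUrl, buf, sizeof(buf), &read) && read != 0)
        data.append(buf, read);
    InternetCloseHandle(hUrl);
    return data;
}

// Usage: poll the stamp file and only download the XML when it changes.
//   std::string stamp = FetchSmallFile(m_hInternet, "http://server/data.xml.stamp");
//   if (stamp != lastStamp) { lastStamp = stamp; /* re-read the XML as before */ }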
If the HTTP server has a page with info (e.g. a timestamp) about this file (it does not matter that the file is generated; the page may be generated too), you can examine this page.
Since you know that the server updates the file at a (nearly) constant rate, your app may just use a timer.
P.S. I doubt there's really any sense in reading a file 50 times every second.

Correctly creating and running a win32 service with file I/O

I've written a very simple service application based on this code example.
As part of its normal operation, the application assumes there is a file in the directory it is located in, or in its execution path.
When I 'install' the service and then subsequently 'start' it from the service manager in Control Panel, the application fails because it can't find the file to open and read from (even though the file is in the same directory as the installed executable).
My question is: when a Windows service is run, what is its working directory supposed to be?
When calling 'CreateService' there only seems to be a path parameter for the binary, not for execution. Is there some way to indicate where the binary should be executed from?
I've tried this on Windows Vista and Windows 7 and get the same issue.
Since Windows services are run from a different context than normal user-mode applications, it's best if you don't make any assumptions about working directories or relative paths. Aside from differences in working directories, a service could run using a completely different set of permissions, etc.
Using an absolute path to the file that your service needs should avoid this problem entirely. Absolute paths will be interpreted the same regardless of the working directory, so this should make the working directory of your service irrelevant. There are several ways to go about this:
Hard-code the absolute path - This is perhaps the easiest way to avoid the problem, however it's also the least flexible. This method is probably fine for basic development and testing work, but you probably want something a bit more sophisticated before other people start using your program.
Store the absolute path in an environment variable - This gives you an extra layer of flexibility since the path can now be set to any arbitrary value and changed as needed (a sketch of reading it follows this list). Since a service can run as a different user with a different set of environment variables, there are still some gotchas with this approach.
Store an absolute path in the registry - This is probably the most fool-proof method. Retrieving the path from the registry will give you the same result for all user accounts, plus this is relatively easy to set up at install time.
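A small sketch of the environment-variable option, assuming an installer has defined a system-wide variable; the name MYSERVICE_DATA_PATH is made up:
#include <windows.h>
#include <string>

// Read the absolute data path from an environment variable set at install time.
std::wstring GetDataPathFromEnvironment()
{
    wchar_t path[MAX_PATH] = {};
    DWORD len = GetEnvironmentVariableW(L"MYSERVICE_DATA_PATH", path, MAX_PATH);
    return (len > 0 && len < MAX_PATH) ? std::wstring(path, len) : std::wstring();
}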
By default, the current directory for your Windows service is the System32 folder.
A promising solution is creating an environment variable that keeps the full path of your input location and retrieving the path from this variable at runtime.
If you use the same path as the binary, you could just read the binary path and modify it accordingly, but that is a quick fix rather than a designed solution. If I were you, I would either create a system-wide environment variable and store the value there, or (even better) use the Windows registry to store the service configuration.
Note:
You may need to grant yourself some privileges using the AdjustTokenPrivileges function; you can see an example in the ModifyPrivilege function here.
Also be sure to use HKEY_LOCAL_MACHINE and not HKEY_CURRENT_USER. Services run under a different user account, so their HKCU will be different from what you see in your registry editor.
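A hedged sketch of reading such a value from HKLM; the key and value names (SOFTWARE\MyService, DataPath) are invented for illustration and would be written by your installer:
#include <windows.h>
#include <string>
#pragma comment(lib, "Advapi32.lib")

// Read the absolute path of the data file from the registry (HKLM, so every
// account, including the service account, sees the same value).
std::wstring GetDataPathFromRegistry()
{
    wchar_t path[MAX_PATH] = {};
    DWORD size = sizeof(path);
    LONG rc = RegGetValueW(HKEY_LOCAL_MACHINE,
                           L"SOFTWARE\\MyService",   // hypothetical key
                           L"DataPath",              // hypothetical value name
                           RRF_RT_REG_SZ, NULL, path, &size);
    return (rc == ERROR_SUCCESS) ? std::wstring(path) : std::wstring();
}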
Today I solved this problem as it was needed for some software I was developing.
As people above have said, you can hardcode the directory of a specific file, but that would mean whatever config files need to be loaded would have to be placed there.
For me, this service was being installed on > 50,000 computers.
We designed it to load from the directory the service executable is running from.
Now, this is easy enough to set up and achieve as a non-system process (I did most of my testing as a non-system process). But the thing is that the system wrapper you used (and I used as well) relies on Unicode formatting, so traditional ways of doing it don't work as well.
The commented parts of the code should explain this. There are some redundancies, I know, but I just wanted a working version when I wrote this.
Fortunately, you can just use GetModuleFileNameA to get the path in ANSI form.
The code I used is:
char buffer[MAX_PATH];                                      // create buffer
DWORD size = GetModuleFileNameA(NULL, buffer, MAX_PATH);    // get the exe path in ANSI
std::string configLoc;                                      // make string
for (size_t i = 0; i < strlen(buffer); i++)                 // iterate through the characters of buffer
{
    if (buffer[i] == '\\')                                  // if buffer has a '\' in it, escape it as '\\'
    {
        configLoc = configLoc + "\\\\";                     // 4 chars in the literal = 2 in the string
    }
    else
    {
        configLoc = configLoc + buffer[i];                  // else just add the char as normal
    }
}
// Complete the location
configLoc = configLoc.substr(0, configLoc.length() - 17);   // cut the exe name off the end
                                                            // (change this to fit your needs)
configLoc += "\\\\login.cfg";                               // add the config file to the end of the string
From here on, you can simply pass configLoc to a new ifstream and then process the contents.
Use this function to adjust the working directory of the service to be the directory of the exe it is running from:
void AdjustCurrentWorkingDir() {
    TCHAR szBuff[1024];
    DWORD dwRet = GetModuleFileName(NULL, szBuff, 1024);        // gets the full path of the exe
    if (dwRet != 0 && GetLastError() != ERROR_INSUFFICIENT_BUFFER) {
        *(_tcsrchr(szBuff, _T('\\')) + 1) = 0;                   // keep only the parent directory of the exe
        if (SetCurrentDirectory(szBuff) == 0) {
            // Error
        }
    }
}

How to see if a subfile of a directory has changed

In Windows, is there an easy way to tell if a folder has a subfile that has changed?
I verified, and the last modified date on the folder does not get updated when a subfile changes.
Is there a registry entry I can set that will modify this behavior?
If it matters, I am using an NTFS volume.
I would ultimately like to have this ability from a C++ program.
Scanning an entire directory recursively will not work for me because the folder is much too large.
Update: I really need a way to do this without a process running while the change occurs. So installing a file system watcher is not optimal for me.
Update2: The archive bit will also not work because it has the same problem as the last modification date. The file's archive bit will be set, but the folders will not.
This article should help. Basically, you create one or more notification objects such as:
HANDLE dwChangeHandles[2];
dwChangeHandles[0] = FindFirstChangeNotification(
lpDir, // directory to watch
FALSE, // do not watch subtree
FILE_NOTIFY_CHANGE_FILE_NAME); // watch file name changes
if (dwChangeHandles[0] == INVALID_HANDLE_VALUE)
{
printf("\n ERROR: FindFirstChangeNotification function failed.\n");
ExitProcess(GetLastError());
}
// Watch the subtree for directory creation and deletion.
dwChangeHandles[1] = FindFirstChangeNotification(
lpDrive, // directory to watch
TRUE, // watch the subtree
FILE_NOTIFY_CHANGE_DIR_NAME); // watch dir name changes
if (dwChangeHandles[1] == INVALID_HANDLE_VALUE)
{
printf("\n ERROR: FindFirstChangeNotification function failed.\n");
ExitProcess(GetLastError());
}
and then you wait for a notification:
while (TRUE)
{
// Wait for notification.
printf("\nWaiting for notification...\n");
DWORD dwWaitStatus = WaitForMultipleObjects(2, dwChangeHandles,
FALSE, INFINITE);
switch (dwWaitStatus)
{
case WAIT_OBJECT_0:
// A file was created, renamed, or deleted in the directory.
// Restart the notification.
if ( FindNextChangeNotification(dwChangeHandles[0]) == FALSE )
{
printf("\n ERROR: FindNextChangeNotification function failed.\n");
ExitProcess(GetLastError());
}
break;
case WAIT_OBJECT_0 + 1:
// Restart the notification.
if (FindNextChangeNotification(dwChangeHandles[1]) == FALSE )
{
printf("\n ERROR: FindNextChangeNotification function failed.\n");
ExitProcess(GetLastError());
}
break;
case WAIT_TIMEOUT:
// A time-out occurred. This would happen if some value other
// than INFINITE is used in the Wait call and no changes occur.
// In a single-threaded environment, you might not want an
// INFINITE wait.
printf("\nNo changes in the time-out period.\n");
break;
default:
printf("\n ERROR: Unhandled dwWaitStatus.\n");
ExitProcess(GetLastError());
break;
}
}
This is perhaps overkill, but the IFS kit from MS or the FDDK from OSR might be an alternative. Create your own filesystem filter driver with simple monitoring of all changes to the filesystem.
ReadDirectoryChangesW
Some excellent sample code in this CodeProject article
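For reference, a minimal synchronous sketch of ReadDirectoryChangesW (real code would normally use overlapped I/O so the call can be cancelled; the filter flags and buffer size are just examples):
#include <windows.h>
#include <stdio.h>

// Watch a directory tree and print each change notification as it arrives.
void WatchDirectory(LPCWSTR lpDir)
{
    HANDLE hDir = CreateFileW(lpDir, FILE_LIST_DIRECTORY,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              NULL, OPEN_EXISTING,
                              FILE_FLAG_BACKUP_SEMANTICS,   // required to open a directory
                              NULL);
    if (hDir == INVALID_HANDLE_VALUE)
        return;

    BYTE buffer[64 * 1024];
    DWORD bytesReturned = 0;
    while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer),
                                 TRUE,                       // watch the whole subtree
                                 FILE_NOTIFY_CHANGE_FILE_NAME |
                                 FILE_NOTIFY_CHANGE_LAST_WRITE |
                                 FILE_NOTIFY_CHANGE_SIZE,
                                 &bytesReturned, NULL, NULL))
    {
        FILE_NOTIFY_INFORMATION* info = (FILE_NOTIFY_INFORMATION*)buffer;
        for (;;)
        {
            wprintf(L"action %lu: %.*ls\n", info->Action,
                    (int)(info->FileNameLength / sizeof(WCHAR)), info->FileName);
            if (info->NextEntryOffset == 0)
                break;
            info = (FILE_NOTIFY_INFORMATION*)((BYTE*)info + info->NextEntryOffset);
        }
    }
    CloseHandle(hDir);
}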
If you can't run a process when the change occurs, then there's not much you can do except scan the filesystem and check the modification date/time. This requires you to store each file's last date/time and compare against it, though.
You can speed this up by using the archive bit (though it may mess up your backup software, so proceed carefully).
An archive bit is a file attribute present in many computer file systems, notably FAT, FAT32, and NTFS. The purpose of an archive bit is to track incremental changes to files for the purpose of backup, also called archiving.
As the archive bit is a binary bit, it is either 1 or 0, or in this case more frequently called set (1) and clear (0). The operating system sets the archive bit any time a file is created, moved, renamed, or otherwise modified in any way. The archive bit therefore represents one of two states: "changed" and "not changed" since the last backup.
Archive bits are not affected by simply reading a file. When a file is copied, the original file's archive bit is unaffected, however the copy's archive bit will be set at the time the copy is made.
So the process would be:
Clear the archive bit on all the files
Let the file system change over time
Scan all the files - any with the archive bit set have changed
This will eliminate the need for your program to keep state, and since you're only going over the directory entries (where the bit is stored) and they are clustered, it should be very, very fast.
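A sketch of steps 1 and 3 combined, assuming a recursive FindFirstFileW walk (only the directory entries are read); error handling is omitted:
#include <windows.h>
#include <string>
#include <vector>

// Collect files whose archive bit is set (i.e. changed since the last scan)
// and optionally clear the bit again so the next scan starts fresh.
void ScanArchiveBits(const std::wstring& dir, bool clearBit, std::vector<std::wstring>& changed)
{
    WIN32_FIND_DATAW fd;
    HANDLE hFind = FindFirstFileW((dir + L"\\*").c_str(), &fd);
    if (hFind == INVALID_HANDLE_VALUE)
        return;
    do {
        std::wstring name = fd.cFileName;
        if (name == L"." || name == L"..")
            continue;
        std::wstring full = dir + L"\\" + name;
        if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) {
            ScanArchiveBits(full, clearBit, changed);            // recurse into subfolders
        } else if (fd.dwFileAttributes & FILE_ATTRIBUTE_ARCHIVE) {
            changed.push_back(full);                             // changed since the bit was last cleared
            if (clearBit)
                SetFileAttributesW(full.c_str(),
                                   fd.dwFileAttributes & ~FILE_ATTRIBUTE_ARCHIVE);
        }
    } while (FindNextFileW(hFind, &fd));
    FindClose(hFind);
}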
If you can run a process during the changes, however, then you'll want to look at the FileSystemWatcher class. Here's an example of how you might use it.
It also exists in .NET (for future searchers of this type of problem)
Perhaps you can leave a process running on the machine watching for changes and creating a file for you to read later.
-Adam
Perhaps you can use the NTFS 5 Change Journal with DeviceIoControl, as explained here.
If you are not opposed to using .NET the FileSystemWatcher class will handle this for you fairly easily.
From the double post someone mentioned: WMI Event Sink
Still looking for a better answer though.
Nothing easy. If you have a running app you can use the Win32 file change notification APIs (FindFirstChangeNotification) as suggested in the other answers. Warning: circa 2000, the Trend Micro real-time virus scanner would group changes together, making it necessary to use really large buffers when requesting the file system change lists.
If you don't have a running app, you can turn on NTFS journaling and scan the journal for changes (http://msdn.microsoft.com/en-us/library/aa363798(VS.85).aspx), but this can be slower than scanning the whole directory when the number of changes is larger than the number of files.
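A compressed sketch of reading that journal with DeviceIoControl; it assumes administrator rights and an active journal on the C: volume, and skips error handling:
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int wmain()
{
    // Open the volume itself (needs administrator rights).
    HANDLE hVol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE)
        return 1;

    USN_JOURNAL_DATA journal = {};
    DWORD bytes = 0;
    if (!DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                         &journal, sizeof(journal), &bytes, NULL))
        return 1;                               // journal disabled or no access

    READ_USN_JOURNAL_DATA_V0 read = {};         // plain READ_USN_JOURNAL_DATA on older SDKs
    read.StartUsn = journal.FirstUsn;           // or a USN you saved from the previous run
    read.ReasonMask = USN_REASON_FILE_CREATE | USN_REASON_FILE_DELETE |
                      USN_REASON_DATA_OVERWRITE | USN_REASON_DATA_EXTEND;
    read.UsnJournalID = journal.UsnJournalID;

    BYTE buffer[65536];
    if (DeviceIoControl(hVol, FSCTL_READ_USN_JOURNAL, &read, sizeof(read),
                        buffer, sizeof(buffer), &bytes, NULL))
    {
        // The first 8 bytes of the output are the USN to continue from next time.
        BYTE* p = buffer + sizeof(USN);
        while (p < buffer + bytes)
        {
            USN_RECORD* rec = (USN_RECORD*)p;
            wprintf(L"%.*ls\n", (int)(rec->FileNameLength / sizeof(WCHAR)),
                    (WCHAR*)((BYTE*)rec + rec->FileNameOffset));
            p += rec->RecordLength;
        }
    }
    CloseHandle(hVol);
    return 0;
}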