Can I open the same file twice (with CreateFileA), using different flags (in this case, one with FILE_FLAG_NO_BUFFERING, and one without)?
In detail, this is the case: during startup, I create a temporary file (with FILE_FLAG_DELETE_ON_CLOSE). I fill it up sequentially, and I don't want to worry about doing unbuffered I/O in this part. Then, while the process is running, I want to access that file using unbuffered I/O, because I have my own caching logic. Thus, I'm thinking of opening the same file again, this time with FILE_FLAG_NO_BUFFERING, and then closing the old handle. I want the two handles' lifetimes to overlap for two reasons:
Concurrency. If I close the old handle before I open the new one, someone else might mess with my file in the meantime.
FILE_FLAG_DELETE_ON_CLOSE would delete my file when I close the first handle without having another one open. This is a minor annoyance that I could work around.
Just remember to include FILE_SHARE_DELETE in the share mode of both opens. I think FILE_FLAG_DELETE_ON_CLOSE is the only flag that affects more than just "your" handle.
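A minimal sketch of the dance, assuming the first handle was created with FILE_FLAG_DELETE_ON_CLOSE as described in the question (the file name and access rights are placeholders):

    #include <windows.h>

    int main() {
        // First open: buffered, delete-on-close, shared so a second open can succeed.
        HANDLE hBuffered = CreateFileW(L"scratch.tmp",
            GENERIC_READ | GENERIC_WRITE,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            nullptr, CREATE_ALWAYS,
            FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, nullptr);
        if (hBuffered == INVALID_HANDLE_VALUE) return 1;

        // ... fill the file sequentially with ordinary buffered writes ...

        // Second open: unbuffered, while the first handle is still alive.
        // FILE_SHARE_DELETE here is what makes this open succeed.
        HANDLE hUnbuffered = CreateFileW(L"scratch.tmp", GENERIC_READ,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            nullptr, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, nullptr);
        if (hUnbuffered == INVALID_HANDLE_VALUE) return 1;

        CloseHandle(hBuffered);   // the file lives on; hUnbuffered still refers to it
                                  // (the delete disposition set by the first handle persists)

        // ... sector-aligned reads through hUnbuffered ...
        CloseHandle(hUnbuffered); // last handle closes, the file is deleted
        return 0;
    }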
Isn't that a bit contradictory?
You want to open the file twice because, if you opened it only after closing the previous handle, someone might mess with your file in the meantime. But in reality, the one trying to mess with the file here is you.
If you cannot guarantee exclusive access to the file, how can you prevent anyone else from doing something to it? And if you do open it exclusively, how can you reopen it at all?
AFAIK, once a file is opened exclusively, no further opens are allowed, even from the same process.
Related
We create a file for use as a memory-mapped file.
we open with GENERIC_READ | GENERIC_WRITE
we use share with FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE
we use file attributes FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE
we create the file successfully, and we can reopen it as many times as we want with the same flags.
Once one handle has been closed, we can no longer open any more handles; the call fails with ERROR_ACCESS_DENIED. We can cause this by closing any of the handles, either the first from CreateFile(CREATE_ALWAYS) or the others from CreateFile(OPEN_EXISTING).
Is there any way to avoid this? We use the memory-mapped file for communication between the different processes that must share resources, and these processes sometimes start and stop. Right now, as soon as we close one handle, we are stuck, unable to open the memory-mapped file again.
I have tried changing the open calls to use FILE_ATTRIBUTE_NORMAL, so that only the create call uses FILE_FLAG_DELETE_ON_CLOSE, but that has no effect on this situation.
The problem you're running into is that once a file handle opened with FILE_FLAG_DELETE_ON_CLOSE is closed, the operating system will no longer allow new handles to be created.
The gory details: when processing an IRP_MJ_CLEANUP (which is what happens when a handle is closed) for a file opened for delete-on-close, Windows file systems set an internal flag on the file object indicating that it's on its way out. Subsequent open attempts on the file fail with STATUS_DELETE_PENDING, which the Win32 subsystem maps to the ERROR_ACCESS_DENIED code you're seeing.
For your use case, you might want to consider using the Named Shared Memory (MSDN) pattern. Basically, let the operating system manage the space for your shared memory. Just make sure you apply the appropriate security attributes, and you're good to go.
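A minimal sketch of that pattern; the mapping name "Local\MySharedBlock" and the 64 KiB size are made up, and real code should supply proper SECURITY_ATTRIBUTES:

    #include <windows.h>

    int main() {
        // Back the mapping with the page file instead of a disk file.
        HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                         PAGE_READWRITE, 0, 64 * 1024,
                                         L"Local\\MySharedBlock");
        if (!hMap) return 1;

        void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
        if (view) {
            // ... read/write shared data; other processes use
            // OpenFileMapping(..., L"Local\\MySharedBlock") to join in ...
            UnmapViewOfFile(view);
        }
        CloseHandle(hMap); // the OS frees the region when the last handle closes
        return 0;
    }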
It turns out the venerable Raymond Chen replied to this on his Microsoft devblog, The Old New Thing. Bukes is correct but as an alternative, as Raymond says in his article:
It looks like they really want the file to remain valid (including allowing further CreateFile calls to succeed) for as long as any open handle continues to refer to the file. Fortunately, the customer needed the handle only to create a memory-mapped view. The file pointer was not important. Therefore, the customer could use DuplicateHandle instead of CreateFile to get additional handles to the file. Since all of the handles refer to the same file object, the file object will not delete the file until all of the handles are closed.
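A sketch of that alternative, assuming a delete-on-close file like the one in the question (the file name is a placeholder):

    #include <windows.h>

    int main() {
        HANDLE hFirst = CreateFileW(L"shared.tmp", GENERIC_READ | GENERIC_WRITE,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, nullptr,
            CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
            nullptr);
        if (hFirst == INVALID_HANDLE_VALUE) return 1;

        // Duplicate instead of reopening: both handles refer to one file object.
        HANDLE hSecond = nullptr;
        if (!DuplicateHandle(GetCurrentProcess(), hFirst,
                             GetCurrentProcess(), &hSecond,
                             0, FALSE, DUPLICATE_SAME_ACCESS)) return 1;

        CloseHandle(hFirst);  // no delete yet: hSecond still references the object
        // ... use hSecond, e.g. as the backing handle for CreateFileMapping ...
        CloseHandle(hSecond); // last reference gone; now the file is deleted
        return 0;
    }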
There is a static library I use in my program which can only take filenames as its input, not actual file contents. There is nothing I can do about the library's source code. So I want to: create a brand-new file, store the data to be processed into it, flush it onto the disk(?), pass its name to the library, then delete it.
But I also want this process to be rather secure:
1) the file must be created anew, without any bogus data (maybe it's not critical, but whatever);
2) anyone but my process must not be able read or write from/to this file (I want the library to process my actual data, not bogus data some wiseguy managed to plug in);
3) after I'm done with this file, it must be deleted (okay, if someone calls TerminateProcess() on me, I guess there is not much that can be done, but still).
The library seems to use non-Unicode fopen() to open the given file though, so I am not quite sure how to handle all this, since the program is intended to run on Windows. Any suggestions?
You have a lot of suggestions already, but another option that I don't think has been mentioned is using named pipes. It will depend on the library in question as to whether it works or not, but it might be worth a try. You can create a named pipe in your application using the CreateNamedPipe function, and pass the name of the pipe to the library to operate on (the filename you would pass would be \\.\pipe\PipeName). Whether the library accepts a filename like that or not is something you would have to try, but if it works the advantage is your file never has to actually be written to disk.
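A sketch of how that might look; the pipe name "MyDataPipe" is made up, and whether the library's fopen() accepts "\\.\pipe\MyDataPipe" as a filename is exactly the part you would have to try:

    #include <windows.h>

    int main() {
        HANDLE hPipe = CreateNamedPipeW(
            L"\\\\.\\pipe\\MyDataPipe",
            PIPE_ACCESS_OUTBOUND,          // we write, the library reads
            PIPE_TYPE_BYTE | PIPE_WAIT,
            1,                             // a single instance is enough here
            64 * 1024, 64 * 1024,          // suggested buffer sizes
            0, nullptr);
        if (hPipe == INVALID_HANDLE_VALUE) return 1;

        // Hand "\\.\pipe\MyDataPipe" to the library (in another thread or
        // process), then wait for it to open its end:
        if (ConnectNamedPipe(hPipe, nullptr) ||
            GetLastError() == ERROR_PIPE_CONNECTED) {
            const char data[] = "contents the library should see";
            DWORD written = 0;
            WriteFile(hPipe, data, sizeof(data) - 1, &written, nullptr);
        }
        CloseHandle(hPipe);
        return 0;
    }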
This can be achieved using the CreateFile and GetTempFileName functions (if you don't know whether you can write to the current working directory, you may also want to use GetTempPath).
Determine a directory to store your temporary file in; the current directory (".") or the result of GetTempPath would be good candidates.
Use GetTempFileName to create a temporary file name.
Finally, call CreateFile to create the temporary file.
For the last step, there are a few things to consider:
The dwFlagsAndAttributes parameter of CreateFile should probably include FILE_ATTRIBUTE_TEMPORARY.
The dwFlagsAndAttributes parameter should probably also include FILE_FLAG_DELETE_ON_CLOSE to make sure that the file gets deleted no matter what (this probably also works if your process crashes, in which case the system closes all handles for you).
The dwShareMode parameter of CreateFile should probably be FILE_SHARE_READ so that other attempts to open the file will succeed, but only for reading. This means that your library code will be able to read the file, but nobody will be able to write to it.
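A sketch combining the three steps and the considerations above (the "tmp" prefix is arbitrary, and error handling is abbreviated):

    #include <windows.h>

    HANDLE CreateSecureTempFile(wchar_t (&path)[MAX_PATH]) {
        wchar_t dir[MAX_PATH];
        if (GetTempPathW(MAX_PATH, dir) == 0) return INVALID_HANDLE_VALUE;
        // GetTempFileName creates an empty placeholder file and returns its name.
        if (GetTempFileNameW(dir, L"tmp", 0, path) == 0) return INVALID_HANDLE_VALUE;

        // Reopen it with the attributes discussed above: delete-on-close,
        // readable (but not writable) by others.
        return CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ, nullptr, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                           nullptr);
    }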
This article should give you some good guidelines on the issue.
The gist of the matter is this:
The POSIX mkstemp() function is the secure and preferred solution where available. Unfortunately, it is not available in Windows, so you would need to find a wrapper that properly implements this functionality using Windows API calls.
On Windows, the tmpfile_s() function is the only one that actually opens the temporary file atomically (instead of simply generating a filename), protecting you from a race condition. Unfortunately, this function does not allow you to specify which directory the file will be created in, which is a potential security issue.
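For completeness, a tmpfile_s() usage sketch (MSVC / C11 Annex K); the file is created and opened in one step and removed automatically when closed:

    #include <stdio.h>

    int main() {
        FILE* fp = nullptr;
        if (tmpfile_s(&fp) == 0 && fp) {
            fputs("scratch data", fp);
            fclose(fp); // the temporary file is deleted here
        }
        return 0;
    }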
First, you can create the file in the user's temporary folder (e.g. C:\Users\<user>\AppData\Local\Temp); it is a perfect place for such files. Second, when creating the file, you can specify what kind of access sharing you allow.
Fragment of CreateFile help page on MSDN:
dwShareMode:

0: Prevents other processes from opening a file or device if they request delete, read, or write access.

FILE_SHARE_DELETE: Enables subsequent open operations on a file or device to request delete access. Otherwise, other processes cannot open the file or device if they request delete access. If this flag is not specified, but the file or device has been opened for delete access, the function fails. Note: delete access allows both delete and rename operations.

FILE_SHARE_READ: Enables subsequent open operations on a file or device to request read access. Otherwise, other processes cannot open the file or device if they request read access. If this flag is not specified, but the file or device has been opened for read access, the function fails.

FILE_SHARE_WRITE: Enables subsequent open operations on a file or device to request write access. Otherwise, other processes cannot open the file or device if they request write access. If this flag is not specified, but the file or device has been opened for write access or has a file mapping with write access, the function fails.
Whilst the suggestions given are good, such as using FILE_SHARE_READ, FILE_FLAG_DELETE_ON_CLOSE, etc., I don't think there is a completely safe way to do this.
I have used Process Explorer to close files that are meant to prevent a second process starting - I did this because the first process got stuck and was "not killable and not dead, but not responding", so I had a valid reason to do this - and I didn't want to reboot the machine at that particular point due to other processes running on the system.
If someone uses a debugger of some sort [including something non-commercial, written specifically for this purpose], attaches to your running process, sets a breakpoint and stops the code, then closes the file you have open, it can write to the file you just created.
You can make it harder, but you can't stop someone with sufficient privileges/skills/capabilities from intercepting your program and manipulating the data.
Note that file/folder protection only works if you reliably know that users don't have privileged accounts on the machine. Typical Windows users are either admins right away or have a second account for admin purposes, and I have access to sudo/root on nearly all of the Linux boxes I use at work. There are some fileservers to which I don't [and shouldn't] have root access, but on all the boxes I use myself or can borrow for testing purposes, I can get to a root environment. This is not very unusual.
A solution I can think of is to find a different library that uses a different interface [or get the sources of the library and modify it so that it does]. Not that this prevents a "stop, modify and go" attack using the debugger approach described above.
Create your file in your executable's folder using the CreateFile API. You can give the file a fresh UUID as its name each time it is created, so that no other process can guess the name in order to open it, and set its attribute to hidden. After using it, just delete the file. Is that enough?
How can I check if a file is still being written? I need to wait for a file to be created, written and closed again by another process, so I can go on and open it again in my process.
In general, this is a difficult problem to solve. You can ask whether a file is open, under certain circumstances; however, if the other process is a script, it might well open and close the file multiple times. I would strongly recommend you use an advisory lock, or some other explicit method for the other process to communicate when it's done with the file.
That said, if that's not an option, there is another way. If you look in the /proc/<pid>/fd directories, where <pid> is the numeric process ID of some running process, you'll see a bunch of symlinks to the files that process has open. The permissions on the symlink reflect the mode the file was opened for - write permission means it was opened for write mode.
So, if you want to know if a file is open, just scan over every process's /proc entry, and every file descriptor in it, looking for a writable symlink to your file. If you know the PID of the other process, you can directly look at its proc entry, as well.
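A hedged sketch of that scan (Linux-only, C++17); as noted below, it only sees processes you are allowed to inspect:

    #include <filesystem>
    namespace fs = std::filesystem;

    // Returns true if some visible process has `target` open for writing.
    bool isOpenForWriting(const fs::path& target) {
        std::error_code ec;
        for (const auto& proc : fs::directory_iterator("/proc", ec)) {
            std::error_code ignore;
            for (const auto& fd : fs::directory_iterator(proc.path() / "fd", ignore)) {
                if (fs::read_symlink(fd.path(), ignore) != target) continue;
                // The symlink's own permission bits mirror the open mode.
                auto perms = fs::symlink_status(fd.path(), ignore).permissions();
                if ((perms & fs::perms::owner_write) != fs::perms::none)
                    return true;
            }
        }
        return false;
    }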
This has some major downsides, of course. First, you can only see open files for your own processes, unless you're root. It's also relatively slow, and only works on Linux. And again, if the other process opens and closes the file several times, you're stuck - you might end up seeing it during the closed period, and there's no easy way of knowing if it'll open it again.
You could let the writing process write a sentinel file (say "sentinel.ok") after it is finished writing the data file your reading process is interested in. In the reading process you can check for the existence of the sentinel before reading the data file, to ensure that the data file is completely written.
@blu3bird's idea of using a sentinel file isn't bad, but it requires modifying the program that's writing the file.
Here's another possibility that also requires modifying the writer, but it may be more robust:
Write to a temporary file, say "foo.dat.part". When writing is complete, rename "foo.dat.part" to "foo.dat". That way a reader either won't see "foo.dat" at all, or will see a complete version of it.
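A sketch of that pattern using the example names above (on POSIX systems, rename() within one filesystem is atomic):

    #include <cstdio>

    void writeDataFile(const char* data) {
        FILE* f = std::fopen("foo.dat.part", "wb");
        if (!f) return;
        std::fputs(data, f);
        std::fclose(f);                          // everything flushed before the rename
        std::rename("foo.dat.part", "foo.dat");  // readers see nothing or a complete file
    }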
You can try using inotify
http://en.wikipedia.org/wiki/Inotify
If you know that the file will be opened once, written and then closed, it would be possible for your app to wait for the IN_CLOSE_WRITE event.
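A minimal sketch of waiting for that event, assuming a made-up watched directory /tmp/watched (Linux-only):

    #include <sys/inotify.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = inotify_init();
        if (fd < 0) return 1;
        // Watch the directory; the event names whichever file was closed.
        int wd = inotify_add_watch(fd, "/tmp/watched", IN_CLOSE_WRITE);
        if (wd < 0) return 1;

        alignas(inotify_event) char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf)); // blocks until an event arrives
        if (n > 0) {
            const inotify_event* ev = reinterpret_cast<const inotify_event*>(buf);
            std::printf("closed after writing: %s\n",
                        ev->len ? ev->name : "(watched file)");
        }
        inotify_rm_watch(fd, wd);
        close(fd);
        return 0;
    }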
However, if the behaviour of the other application doing the writing is more like open, write, close, open, write, close..., then you'll need some other mechanism to determine when the other app has truly finished with the file.
So I have many log files that I need to write to. They are created when the program begins, and they are saved to file when the program closes.
I was wondering if it is better to do:
fopen() at the start of the program, then close the files when the program ends; I would just write to the files when needed. Will anything (such as other file I/O) be slowed down by these files still being "open"?
OR
I save what needs to be written into a buffer, and then open the file, write from the buffer, and close the file when the program ends. I imagine this would be faster?
Well, fopen(3) + fwrite(3) + fclose(3) is a buffered I/O package, so another layer of buffering on top of it might just slow things down.
In any case, go for a simple and correct program. If it seems to run slowly, profile it, and then optimize based on evidence and not guesses.
Short answer:
A big number of open files shouldn't slow down anything
Writing to a file will be buffered anyway
So you can leave those files open, but do not forget to check the limit on open files in your OS.
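A sketch of checking that limit programmatically; _getmaxstdio() is MSVC-specific and getrlimit() is the POSIX counterpart:

    #ifdef _WIN32
    #include <stdio.h>   // _getmaxstdio
    long maxOpenStreams() { return _getmaxstdio(); } // 512 streams by default
    #else
    #include <sys/resource.h>
    long maxOpenStreams() {
        rlimit rl{};
        getrlimit(RLIMIT_NOFILE, &rl);
        return static_cast<long>(rl.rlim_cur);       // soft limit on descriptors
    }
    #endif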
Part of the point of log files is being able to figure out what happened when/if your program runs into a problem. Quite a few people also do log file analysis in (near) real-time. Your second scenario doesn't work for either of these.
I'd start with the first approach, but with a high-enough level interface that you could switch to the second if you really needed to. I wouldn't view that switch as a major benefit of the high-level interface though -- the real benefit would normally be keeping the rest of the code a bit cleaner.
There is no good reason to buffer log messages in your program and write them out on exit. Simply write them as they're generated using fprintf. The stdio system will take care of the buffering for you. Of course this means opening the file (with fopen) from the beginning and keeping it open.
For log files, you will probably want a functional interface that flushes the data to disk after each complete message, so that if the program crashes (it has been known to happen), the log information is safe. Leaving stuff in standard I/O buffers means excavating the data from a core dump - which is less satisfactory than having the information on disk safely.
Other I/O really won't be affected by holding one - or even a few - log files open. You lose a few file descriptors, perhaps, but that is not often a serious problem. When it is a problem, you use one file descriptor for one log file - and you keep it open so you can log information. You might elect to map stderr to the log file, leaving that as the file descriptor that's in use.
It's been mentioned that the FILE* returned by fopen is already buffered. For logging, you should probably also look into using the setbuf() or setvbuf() functions to change the buffering behavior of the FILE*.
In particular, you might want to set the buffering mode to line-at-a-time, so the log file is flushed automatically after each line is written. You can also specify the size of the buffer to use.
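A sketch of that, assuming a log file named app.log; note that some runtimes (e.g. MSVC) treat _IOLBF as full buffering, so this is most useful on POSIX platforms:

    #include <cstdio>

    int main() {
        FILE* log = std::fopen("app.log", "a");
        if (!log) return 1;
        static char buf[8192];                        // buffer size is a choice
        std::setvbuf(log, buf, _IOLBF, sizeof(buf));  // flush on every '\n'
        std::fprintf(log, "started\n");               // handed to the OS right away
        std::fclose(log);
        return 0;
    }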
I have to develop an application which parses a log file and sends specific data to a server. It has to run on both Linux and Windows.
The problem appears when I want to test the log rolling system (which appends .1 to the name of the file and creates a new one with the same name). On Windows (I haven't tested on Linux yet) I can't rename a file that I have opened with std::ifstream() (exclusive access?), even if I open it in "input mode" (ios::in).
Is there a cross-platform way to open a file in a non-exclusive way?
Is there a way to open a file in a non-exclusive way?
Yes, using Win32, passing the various FILE_SHARE_Xxxx flags to CreateFile.
is it cross platform?
No, it requires platform-specific code.
Due to annoying backwards compatibility concerns (DOS applications, being single-tasking, assume that nothing can delete a file out from under them, i.e. that they can fclose() and then fopen() without anything going amiss; Win16 preserved this assumption to make porting DOS applications easier, Win32 preserved this assumption to make porting Win16 applications easier, and it's awful), Windows defaults to opening files exclusively.
The underlying OS infrastructure supports deleting/renaming open files (although I believe it does have the restriction that memory-mapped files cannot be deleted, which I think isn't a restriction found on *nix), but the default opening semantics do not.
C++ has no notion of any of this; the C++ operating environment is much the same as the DOS operating environment--no other applications running concurrently, so no need to control file sharing.
It's not the reading operation that's requiring the exclusive mode, it's the rename, because this is essentially the same as moving the file to a new location.
I'm not sure but I don't think this can be done. Try copying the file instead, and later delete/replace the old file when it is no longer read.
Win32 filesystem semantics require that a file you rename not be open (in any mode) at the time you do the rename. You will need to close the file, rename it, and then create the new log file.
Unix filesystem semantics allow you to rename a file that's open because the filename is just a pointer to the inode.
If you are only reading from the file I know it can be done with windows api CreateFile. Just specify FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE as the input to dwShareMode.
Unfortunately, this is not cross-platform, but there might be something similar for Linux.
See msdn for more info on CreateFile.
EDIT: Just a quick note about Greg Hewgill's comment. I've just tested with the FILE_SHARE* stuff (to be 100% sure), and it is possible to both delete and rename files on Windows if you open them read-only and specify the FILE_SHARE* parameters.
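A sketch of such an open (the log file name is a placeholder):

    #include <windows.h>

    int main() {
        // Read-only open that still lets the logger rename or delete the file.
        HANDLE h = CreateFileW(L"app.log", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                               nullptr);
        if (h == INVALID_HANDLE_VALUE) return 1;
        // ... ReadFile(h, ...) as usual; a concurrent rename now succeeds ...
        CloseHandle(h);
        return 0;
    }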
I'd make sure you don't keep files open. This leads to weird stuff if your app crashes, for example.
What I'd do:
Abstract the reading / writing / rolling over to a new file into one class, and arrange closing of the file when you want to roll over to a new one in that class. (This is the neatest way, and since you already have the roll-over code, you're already halfway there.)
If you must have multiple read/write access points, need all the features of fstreams, and don't want to write that complete a wrapper, then the only cross-platform solution I can think of is to always close the file when you don't need it, and have the roll-over code try a few times to acquire exclusive access to the file before giving up when it needs to roll over.