How to throw "The file is in use" - c++

I have some files (with .xml extensions) that my app requires to be present as long as the application is open.
So, is there a cross-platform way to mark these files as "in use" so that the user cannot delete or modify them?

Since you specify you need it to work cross-platform, you might want to use Qt with QFile::setPermissions and set it to QFileDevice::ReadOwner. Do note the platform-specific notes the documentation makes. There is nothing similar in the C++ Standard Library as far as I am aware.
Edit: turns out I was wrong! Since C++17 you can use std::filesystem::permissions and set the permissions to read-only.
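For reference, a minimal sketch of that C++17 approach (the path here is a placeholder):

```cpp
// Minimal sketch: mark a file read-only with C++17 <filesystem>.
// "data/config.xml" is a placeholder path.
#include <filesystem>
#include <iostream>

int main() {
    namespace fs = std::filesystem;
    const fs::path file = "data/config.xml";

    std::error_code ec;
    // Remove all write bits (owner, group, others); leave the rest untouched.
    fs::permissions(file,
                    fs::perms::owner_write | fs::perms::group_write | fs::perms::others_write,
                    fs::perm_options::remove,
                    ec);
    if (ec) {
        std::cerr << "Could not change permissions: " << ec.message() << '\n';
    }
}
```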

These steps could work for you:
read and store the file(s) in memory (or perhaps in temporary storage if memory is a problem)
implement a file-change watcher (concrete solutions here: How do I make my program watch for file modification in C++?)
if a change occurs, you could:
overwrite the changed file (from the data you have in memory or in temporary storage), or create the file anew if it was deleted
notify the user that the files have been changed and the program might not work correctly
Not sure about this one, never tried it, but it could theoretically block access:
open the file (stream) for reading and writing but don't close it until the program finishes,
and optionally even read from or write to the file (see the sketch below).
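A minimal sketch of that last idea, assuming a placeholder path. Note this is not a guaranteed lock: on POSIX an open descriptor does not prevent unlink(), and whether Windows blocks deletion depends on the sharing mode the runtime opens the file with.

```cpp
// Sketch of the "keep the file open" idea. "data/config.xml" is a placeholder.
#include <fstream>
#include <iostream>

int main() {
    std::fstream f("data/config.xml", std::ios::in | std::ios::out | std::ios::binary);
    if (!f) {
        std::cerr << "Could not open file\n";
        return 1;
    }
    // ... run the application; keep `f` alive until shutdown ...
}   // the stream is closed here, when the program finishes
```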

File management is always tricky because it is operating-system dependent, although most OSes behave similarly. Here are some ideas that could work:
If the .xml files are never modified: make them read-only. C++17 introduced a way to configure file permissions (https://en.cppreference.com/w/cpp/filesystem/permissions), so you can always ensure from your application that they are read-only. This will not prevent a user from deleting them, but on Linux, for example, "rm" will warn before removing a read-only file.
If the files are not that big, I would just parse the XML files at the beginning of the program and keep the data structures in RAM, so you can then forget about the actual files on disk.
An alternative would be to copy the .xml files to a temporary location; rarely will someone delete temporary files. The "tmp" directories are platform dependent, but the C standard library has a function to create temporary files, so you could create one for each XML file and copy its contents: http://www.cplusplus.com/reference/cstdio/tmpfile/
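A sketch of the tmpfile idea, copying one XML file's contents into a temporary file that the C runtime removes automatically when it is closed or the program exits normally:

```cpp
// Copy an XML file's contents into a std::tmpfile()-created temporary file.
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>

std::FILE* copy_to_tmpfile(const std::string& xml_path) {
    std::ifstream in(xml_path, std::ios::binary);
    if (!in) return nullptr;
    std::string contents((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());

    std::FILE* tmp = std::tmpfile();
    if (!tmp) return nullptr;
    std::fwrite(contents.data(), 1, contents.size(), tmp);
    std::rewind(tmp);   // ready to be read from the beginning
    return tmp;         // caller keeps it open for the program's lifetime
}
```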

Since this is platform-dependent and you want a cross-platform solution, you'll have to handle it with preprocessor flags. Consider all the platforms your application will support and write special code for each of them. As far as I know, on Windows and Linux you can use std::filesystem::permissions: just set read-only and the OS will warn the user when they try to remove any of the marked files. Also, tmpfile, mentioned in other answers, could be a good fit if you don't strictly need to set file permissions.
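A sketch of such a preprocessor split, assuming Windows uses SetFileAttributesA and everything else falls back to the C++17 filesystem call (the helper name is made up):

```cpp
// Per-platform read-only helper selected with preprocessor flags.
#include <filesystem>

#ifdef _WIN32
#include <windows.h>
#endif

bool make_read_only(const std::filesystem::path& p) {
#ifdef _WIN32
    return SetFileAttributesA(p.string().c_str(), FILE_ATTRIBUTE_READONLY) != 0;
#else
    std::error_code ec;
    std::filesystem::permissions(p,
        std::filesystem::perms::owner_write |
        std::filesystem::perms::group_write |
        std::filesystem::perms::others_write,
        std::filesystem::perm_options::remove, ec);
    return !ec;
#endif
}
```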

Related

Should I bother about user might mess up my program's files?

I am writing a C++ program for Linux which creates some files on the disk during its work. These files contain information about the program's internal object state, so that the next time the program is started it reads these files to resume the previous session. Some of these files are also read from and written to during execution to read or write some variable values. The problem is that modifying/renaming/deleting these files would lead to undefined behavior, segmentation faults and other surprises causing the program to crash. I am certainly not going to restrict the user from accessing files on his/her machine, but inside the program I can always check whether a file has been modified before accessing it, to at least prevent the program from crashing. It would involve many extra checks and make the code larger, though.
The question is: what is a good practice to deal with such issues? Should I even be so paranoid, or just expect the user to be smart enough not to mess with the program's files?
First, check your resources and whether it is worth the effort. Would the user even be tempted to trace and edit these files?
If so, my advice is this: don't be concerned with whether or not the file has been modified. Rather, you should validate the input you get (from the file).
This may not be the most satisfying answer, but error handling is a big part of programming, especially when it comes to input validation.
Let's assume you are writing to ~/.config/yourApp/state.conf.
Prepare defaults.
Does the file exist? If not, use defaults.
Use some well-known structure like INI, JSON, YAML, TOML, you name it, that the user can understand and the application can check for integrity (libraries may help here). If the file is broken, use defaults.
In case the user deletes a single entry, use the default value.
Some values deserve special checking, in case an out-of-bounds value would lead to undefined behavior. If a value is out of bounds, use a default.
Ignore unknown fields.
In case you can't recover from malformed input, provide a meaningful error message, so the user may be able to restore a working state (restore from a backup or start over from the beginning).
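A minimal sketch of this "validate, then fall back to defaults" approach, using a made-up key=value state file with hypothetical keys ("volume", "logging") and bounds; a real application would more likely use an INI/JSON/TOML library:

```cpp
#include <fstream>
#include <string>

struct Settings {
    int  volume  = 50;      // default, also the fallback for bad input
    bool logging = false;
};

Settings load_settings(const std::string& path) {
    Settings s;                          // start from defaults
    std::ifstream in(path);
    if (!in) return s;                   // file missing: use defaults

    std::string line;
    while (std::getline(in, line)) {
        const auto eq = line.find('=');
        if (eq == std::string::npos) continue;      // malformed line: skip
        const std::string key   = line.substr(0, eq);
        const std::string value = line.substr(eq + 1);

        if (key == "volume") {
            try {
                const int v = std::stoi(value);
                if (v >= 0 && v <= 100) s.volume = v;   // bounds check
            } catch (...) { /* unparsable: keep default */ }
        } else if (key == "logging") {
            s.logging = (value == "true" || value == "1");
        }
        // unknown keys are ignored
    }
    return s;
}
```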
If these files are really important to your program, you can simply hide them by prefixing their names with a dot ".". This way the user will not know the files are there unless they look for them specifically.
If you want to access those files from anywhere in the system, you can also store them in a hidden temp folder, which can be accessed easily.
e.g. ~/.temp_proj/<file>
This way the folder is hidden from the user and you can use it for your own program.
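A small sketch of creating such a hidden directory (dot-prefix hiding is POSIX-style only; on Windows you would additionally need the hidden file attribute):

```cpp
#include <cstdlib>
#include <filesystem>
#include <string>

std::filesystem::path hidden_data_dir() {
    const char* home = std::getenv("HOME");        // e.g. /home/user
    std::filesystem::path dir = std::string(home ? home : ".") + "/.temp_proj";
    std::filesystem::create_directories(dir);      // no-op if it already exists
    return dir;
}
```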

How can I find the underlying file type in C++?

On *nix systems there is a command called 'file' which can tell you the underlying type of a file. Say, if you rename a binary executable to foo.txt, or rename an mp3 file to .txt, the system will still tell you the real type of the file. But on Windows, there seems to be no such functionality; if you rename an executable to .txt, you cannot execute it. Can anyone explain to me how this is done on *nix systems, and how can I find the real type of a file using C++, especially on Windows, where I cannot use std::system("file blah")?
The file utility uses the libmagic library. It recognises the file type by parsing "magic" byte patterns in the file.
Of course, you can program recognition of some formats yourself, but sometimes this requires plenty of work, e.g. when you try to differentiate between different variants of MP4.
The developers of that library have done a huge amount of work, so it's advisable to use their results if you want good answers about what format you are dealing with (this is a big field, really; if you need to know what format you are working with, it is better to rely on them than on your own code).
file utility - http://www.darwinsys.com/file/
You can download the source code and see how many different recognition rules they really use.
Download the archive file-4.26 -> magic -> Magdir
Personally I had luck compiling file 4.26 on Windows ftp://ftp.astron.com/pub/file/
Caution: it's merely a convention that files of certain formats should have predefined signatures; it holds almost always and helps identify file formats properly.
If that is not a point of concern, you can surely trust the signature. But keep in mind that anyone with enough knowledge and motivation can open a file in a hex editor and, by playing with the bits, make it look like another format.
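For completeness, a minimal sketch of querying libmagic directly from C++ (link with -lmagic; the header is <magic.h>):

```cpp
#include <magic.h>
#include <iostream>

int main(int argc, char** argv) {
    if (argc < 2) return 1;

    magic_t cookie = magic_open(MAGIC_MIME_TYPE);  // or 0 for a textual description
    if (!cookie) return 1;

    if (magic_load(cookie, nullptr) != 0) {        // load the default magic database
        std::cerr << magic_error(cookie) << '\n';
        magic_close(cookie);
        return 1;
    }

    const char* type = magic_file(cookie, argv[1]);
    std::cout << argv[1] << ": " << (type ? type : "unknown") << '\n';

    magic_close(cookie);
}
```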
Even in Unix/Linux, the system doesn't actually definitively know a file's type. The "file" program makes an educated guess by comparing the file's contents against a database of patterns that characterize a variety of common file types, but it's no more than a guess — it doesn't know about all possible file formats, and it can be wrong about the ones that it does know.
It's entirely possible to write a program like "file" for Windows; it doesn't depend on any special capabilities in the OS. Cygwin provides a Windows port of the "file" program, for example.
The issue of renaming a program to have a .txt extension is unrelated to the "file" program. That comes from the fact that Windows decides whether a file is executable based on its name (specifically, its extension), whereas Unix/Linux decides whether a file is executable based on its permissions — not its contents. If you chmod a-x a program on a Linux system, the system will consider it non-executable, just like if you remove the .exe extension from a program on Windows.
The command reference suggests that the type information is saved to an external place for further use. It also mentions magic numbers, which refer to file signatures.
Being 100% sure of a file type is theoretically impossible, since there are no precise rules about what a certain type must contain. Even if there were such rules, it would be possible to alter a file so it looks like another one. While both signatures and extensions can give you a good idea of what the type actually is, you still need to handle the possibility of dealing with the wrong type.
The UNIX file command uses heuristics. There is a database of magic numbers, usually in /usr/share/file/magic and /etc/magic/, that allows you to add new file "types" to be recognized by the file command. It simply probes the file, looking for magic numbers (signatures) in its contents.
UNIX traditionally doesn't have the same kind of file extension and type associations that Windows does, although Linux has been accumulating that in recent times.
I would think that on Windows you'd want to at least check the file extension association, to be correct. But even for a given extension (such as .txt), an individual program may perform its own heuristics. For example, Notepad has to make an educated guess at the character encoding when it opens a file. Raymond Chen wrote a good read about it on his blog: The Old New Thing - The Notepad file encoding problem, redux
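If you only care about a handful of formats, a hand-rolled signature check is a reasonable sketch of the same heuristic (the format list here is illustrative, not exhaustive, and like file(1) it is only a guess):

```cpp
#include <array>
#include <cstring>
#include <fstream>
#include <string>

std::string sniff_type(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::array<unsigned char, 8> h{};
    in.read(reinterpret_cast<char*>(h.data()), h.size());

    static const unsigned char png[] = {0x89, 'P', 'N', 'G'};
    static const unsigned char elf[] = {0x7F, 'E', 'L', 'F'};

    if (std::memcmp(h.data(), png, 4) == 0)    return "PNG image";
    if (std::memcmp(h.data(), elf, 4) == 0)    return "ELF executable";
    if (h[0] == 'M' && h[1] == 'Z')            return "Windows PE executable";
    if (std::memcmp(h.data(), "%PDF", 4) == 0) return "PDF document";
    return "unknown";
}
```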

Override c library file functions?

I am working on a game, and one of the requirements per the licence agreement of the sound assets I am using is that they be distributed in a way that makes them inaccessible to the end user. So, I am thinking about aggregating them into a flat file, encrypting them, or some such. The problem is that the sound library I am using (Hekkus Sound System) only accepts a 'char*' file path and handles file reading internally. So, if I am to continue to use it, I will have to override the C stdio file functions to handle encryption or whatever I decide to do. This seems doable, but it worries me. Looking on the web, I see people running into strange, frustrating problems doing this on the platforms I am concerned with (Win32, Android and iOS).
Does there happen to be a cross-platform library out there that takes care of this? Is there a better approach entirely you would recommend?
Do you have the option of using a named pipe instead of an ordinary file? If so, you can present the pipe to the sound library as the file to read from, and you can decrypt your data and write it to the pipe, no problem. (See Beej's Guide for an explanation of named pipes.)
Overriding stdio so that a library whose internals you don't know works in a way its developer didn't intend does not look like the right approach to me, and it isn't really easy either. Implementing a ramdrive needs so much effort that I recommend searching for another audio library instead.
The Hekkus Sound System, as far as I can tell, was built by a single person and last updated in 2012. I wouldn't rely on a library maintained by only one person who doesn't share the sources.
My advice: invest your time in searching for a proper sound library instead of searching for a fishy workaround for this one.
One possibility is to use an encrypted loopback filesystem (google for additional resources).
The way this works is that you put your assets on an encrypted filesystem, which actually lives in a simple file. This filesystem gets mounted someplace as a loopback device, and the password needs to be supplied at attach/mount time. Once mounted, all files are available as regular files to your software; otherwise, the files are encrypted and inaccessible.
It's compiler-dependent and not a guaranteed feature, but many compilers allow you to embed files/resources directly into the exe and read them in your code as if from disk. You could embed your sound files that way. It will significantly increase the size of your exe, however.
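On Windows, for instance, this can be done with a resource script plus the resource APIs; a sketch with made-up resource names:

```cpp
// resources.rc (compiled and linked into the exe):
//   IDR_SOUND1 RCDATA "sound1.dat"
#include <windows.h>
#include <utility>

// Returns a pointer to the embedded blob and its size, or {nullptr, 0}.
std::pair<const void*, DWORD> load_embedded_sound(int resource_id) {
    HMODULE module = GetModuleHandle(nullptr);                 // this .exe
    HRSRC   res    = FindResource(module, MAKEINTRESOURCE(resource_id), RT_RCDATA);
    if (!res) return {nullptr, 0};
    HGLOBAL handle = LoadResource(module, res);
    if (!handle) return {nullptr, 0};
    return {LockResource(handle), SizeofResource(module, res)};
}
```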
Another UNIX-based approach:
The environment variable LD_PRELOAD can be used to override any shared library an executable has been linked against. All symbols exported by a library mentioned in LD_PRELOAD are resolved to that library, including calls to libc functions like open, read, and close. Using the libdl, it is also possible for the wrapping library to call through to the original implementation.
So, all you need to do is to start the process which uses the Hekkus Sound System in an environment that has LD_PRELOAD set appropriately, and you can do anything you like to the file that it reads.
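A minimal sketch of such an LD_PRELOAD shim for open() (Linux/glibc; the decryption or path-redirection hook itself is left as a comment):

```cpp
// Build roughly as:
//   g++ -shared -fPIC -o libshim.so shim.cpp -ldl
//   LD_PRELOAD=./libshim.so ./game
#ifndef _GNU_SOURCE
#define _GNU_SOURCE          // for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <fcntl.h>
#include <cstdarg>
#include <cstdio>

extern "C" int open(const char* path, int flags, ...) {
    using open_fn = int (*)(const char*, int, ...);
    static open_fn real_open = reinterpret_cast<open_fn>(dlsym(RTLD_NEXT, "open"));

    std::fprintf(stderr, "open() intercepted: %s\n", path);
    // ...here one could redirect the path, or decrypt into a temp file first...

    mode_t mode = 0;
    if (flags & O_CREAT) {               // the mode argument is only present with O_CREAT
        va_list args;
        va_start(args, flags);
        mode = va_arg(args, mode_t);
        va_end(args);
    }
    return real_open(path, flags, mode);
}
```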
Note, however, that there is absolutely no way you can keep the data inaccessible to the user: the very fact that he has to be able to hear it means he has to have access. Even if all software in the chain used encryption, and your user were not willing to hack hardware, it would not be exactly difficult to connect the audio output jack to an audio input jack, would it? And you can't forbid your user to use earphones, can you? And, of course, the kernel can see all audio output unencrypted and can send a copy somewhere else...
The solution to your problem would be a ramdisk.
http://en.wikipedia.org/wiki/RAM_drive
Using a piece of RAM as if it were a disk.
There is software available for this too; caching databases in RAM is becoming popular.
And it keeps the file off the disk, where it would be easily accessible to the user.

File table in Ubuntu OS

Does the Linux/Ubuntu OS create a table which keeps an entry for every file with its absolute address on the hard drive?
Just curious to know, because I am planning to write a file-searcher program.
I know there are terminal commands like find etc., but as I will be programming in C, I was wondering whether Ubuntu keeps any such table and, if so, how I can access it.
Update:
As some people mentioned, there is no such thing. So if I want to make a file-searcher program, I would have to search each and every folder of every directory, starting from the root directory. The resulting program will be very sluggish and will perform poorly! So is there a better way, or is my way good enough?
The "thing" you describe is commonly called a file system and as you may know there's a choice of file systems available for Linux: ext3, ext4, btrfs, Reiser, xfs, jffs, and others.
The table you describe would probably map quite well onto the inode-directory combo.
From my point of view, the entire management of where files are physically located on the harddisk is none of the user's business, it's strictly the operating system's domain and not something to mess with unless you have an excellent excuse (like you're writing a data recovery program) and very deep knowledge of the file system(s) involved. Moreover, in most cases a file's storage will not be contiguous, but spread over several locations on the disk (fragments).
But the more important question here is probably: what exactly do you hope to achieve by finding files this way?
EDIT: based on OP's comment I think there may be a serious misunderstanding here - I can't see the link between absolute file addresses and a file searcher, but that may be due to a fundamental difference between our respective understanding of "absolute address" in the context of a file system.
If you just want to look at all files in the file system you can either
1) perform a recursive directory read, or
2) use the database prepared by updatedb, as suggested by SmartGuyz.
As you want to look into the files anyway - and that's where almost all the runtime will be spent - I can't think of any advantage 2) would have over 1), and 2) has the disadvantage of an external dependency: the file prepared by updatedb must exist and be very fresh.
An SO question about more advanced ways of traversing directories than good old opendir/readdir/closedir: Efficiently Traverse Directory Tree with opendir(), readdir() and closedir()
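For option 1), a minimal sketch using C++17's std::filesystem (name matching and argument handling are kept deliberately simple):

```cpp
#include <filesystem>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    namespace fs = std::filesystem;
    const fs::path root    = argc > 1 ? argv[1] : "/";
    const std::string name = argc > 2 ? argv[2] : "";

    // skip_permission_denied avoids aborting on unreadable directories
    for (auto it = fs::recursive_directory_iterator(
             root, fs::directory_options::skip_permission_denied);
         it != fs::recursive_directory_iterator(); ++it) {
        if (it->path().filename() == name)
            std::cout << it->path() << '\n';
    }
}
```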
EDIT2, based on the OP's question addendum: yes, traversing directories takes time, but that's life. Consider the next best thing, i.e. locate and friends. It depends on a "database" that is updated regularly (typically once daily), so files that were added or renamed after the last scheduled update will not be found, and files that were removed after it will still be mentioned in the database even though they no longer exist. And that assumes locate is even installed on the target machine, something you can't be sure of.
As with most things in programming, it never hurts to look at previous solutions to the same problem, so may I suggest you read the documentation of GNU findutils?
No, there is no single table of block addresses of files; you need to go deeper.
First of all, the file layout depends on the filesystem type (e.g. ext2, ext3, btrfs, reiserfs, jfs, xfs, etc.). This is abstracted by the Linux kernel, which provides drivers for access to files on a lot of filesystems, and a specific partition with its filesystem is abstracted under the single Virtual File System (the single file-directory tree, which contains other devices as its subtrees).
So, basically no: you need to use the kernel's abstract interfaces (readdir(), /proc/mounts and so on) to search for files, or roll your own userspace drivers (e.g. through FUSE) to examine raw block devices (/dev/sda1 etc.) if you really need to examine low-level details (which requires a lot of understanding of kernel/filesystem internals and is highly error-prone).
updatedb -l 0 -o db_file -U source_directory
This will create a database with files, I hope this will help you.
No. The file system is actually structured with directories, each directory containing files and directories.
Within Linux, all of this is managed in the kernel with inodes.
YES.
Conceptually, it does create a table of every file's location on the disc**. There are a lot of details which muddy this picture slightly.
However, you should usually not care. You don't want to work at that level, nor should you. There are many filesystems in Linux which all do it in a slightly (or even significantly) different way.
** Not actually the physical location. A hard drive may map the logical blocks to physical blocks in some way determined by its firmware.

How to create a virtual file?

I'd like to simulate a file without writing it to disk. I have a file at the end of my executable and I would like to give its path to a dll. Of course, since it doesn't have a real path, I have to fake it.
I first tried using named pipes under Windows to do it. That would allow for a path like \\.\pipe\mymemoryfile, but I can't make it work, and I'm not sure the dll would support a path like this.
Second, I found CreateFileMapping and GetMappedFileName. Can they be used to simulate a file in a fragment of another ? I'm not sure this is what this API does.
What I'm trying to do seems similar to boxedapp. Any ideas about how they do it? I suppose it's something like API interception (like Detours), but that would be a lot of work. Is there another way to do it?
Why? I'm interested in this specific solution because I'd like to hide the data, for the benefit of distributing only one file, but also for geeky reasons of making it work that way ;)
I agree that copying data to a temporary file would work and be a much easier solution.
Use BoxedApp and do not worry.
You can store the data in an NTFS stream. That way you can get a real path pointing to your data that you can give to your dll in the form of
x:\myfile.exe:mystreamname
This works precisely like a normal file; however, it only works if the file system is NTFS. This is standard under Windows nowadays, but is of course not an option if you want to support older systems or would like to be able to run this from a USB stick or similar. Note that any streams present in a file will be lost if the file is sent as an attachment in mail or simply copied from an NTFS partition to a FAT32 partition.
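A minimal sketch of writing into such a stream with the Win32 API (Windows/NTFS only; the path mirrors the placeholder above):

```cpp
#include <windows.h>
#include <iostream>

int main() {
    const char* stream_path = "x:\\myfile.exe:mystreamname";   // placeholder path

    HANDLE h = CreateFileA(stream_path, GENERIC_WRITE, 0, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        std::cerr << "CreateFileA failed: " << GetLastError() << '\n';
        return 1;
    }

    const char data[] = "hidden payload";
    DWORD written = 0;
    WriteFile(h, data, sizeof(data) - 1, &written, nullptr);
    CloseHandle(h);
    // The stream can later be opened for reading with the same path.
}
```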
I'd say that the most compatible way would be to write your data to an actual file, although you can of course do it one way on NTFS systems and another on FAT systems. I recommend against that because of the added complexity. The appropriate way would of course be to distribute your files separately, but since you've indicated that you don't want this, you should write the data to a temporary file and give the dll the path to that file. Make sure you write the temporary file to the user's temp directory (you can find the path using GetTempPath in C/C++).
Your other option would be to write a filesystem filter driver, but that is a road that I strongly advise against. That sort of defeats the purpose of using a single file as well...
Also, in case you want only a single file for distribution, how about using a zip file or an installer?
Pipes are for communication between processes running concurrently. They don't store data for later access, and they don't have the same semantics as files (you can't seek or rewind a pipe, for instance).
If you're after file-like behaviour, your best bet will always be to use a file. Under Windows, you can pass FILE_ATTRIBUTE_TEMPORARY to CreateFile as a hint to the system to avoid flushing data to disk if there's sufficient memory.
If you're worried about the performance hit of writing to disk, the above should be sufficient to avoid the performance impact in most cases. (If the system is low enough on memory to force the file data out to disk, it's probably also swapping heavily anyway -- you've already got a performance problem.)
If you're trying to avoid writing to disk for some other reason, can you explain why? In general, it's quite hard to stop data from ever hitting the disk -- the user can always hibernate the machine, for instance.
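For reference, a sketch of the FILE_ATTRIBUTE_TEMPORARY hint from the answer above (Windows only; the path is a placeholder, and FILE_FLAG_DELETE_ON_CLOSE is an extra touch not mentioned in the answer):

```cpp
#include <windows.h>

int main() {
    HANDLE h = CreateFileA("C:\\Temp\\assets.bin",
                           GENERIC_READ | GENERIC_WRITE,
                           0,                     // no sharing
                           nullptr,
                           CREATE_ALWAYS,
                           FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                           nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // ...write the decrypted data and use it via this handle...

    CloseHandle(h);    // with FILE_FLAG_DELETE_ON_CLOSE the file vanishes here
}
```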
Since you don't have control over the DLL you have to assume that the DLL expects an actual file. It probably at some point makes that assumption which is why named pipes are failing on you.
The simplest solution is to create a temporary file in the temp directory, write the data from your EXE to the temp file and then delete the temporary file.
Is there a reason you are embedding this "pseudo-file" at the end of your EXE instead of just distributing it with your application? You are obviously already distributing this third-party DLL with your application, so one more file doesn't seem like it is going to hurt you.
Another question: will this data be changing? That is, are you expecting to write data back to this "pseudo-file" in your EXE? I don't think that will work well. Standard users may not have write access to the EXE, and writing to it would probably drive anti-virus software nuts.
And no, CreateFileMapping and GetMappedFileName definitely won't work, since they don't give you a file name that can be passed to CreateFile. If you could somehow get this DLL to accept a HANDLE, then that would work.
And I wouldn't even bother with API interception. Just hand the DLL a path to an actual file.
Reading your question made me think: if you can pretend an area of memory is a file and have kind of "virtual path" to it, then this would allow loading a DLL directly from memory which is what LoadLibrary forbids by design by asking for a path name. And this is why people write their own PE loader when they want to achieve that.
I would say you can't achieve what you want with file mapping: the purpose of file mapping is to treat a portion of a file as if it were physical memory, and you want the reverse.
Using Detours implies that you would have to replicate everything the intercepted DLL function does, except for obtaining data from a real file; hence it's not generic. Or, even more intricate: let's pretend the DLL uses fopen; then you provide your own fopen that detects a special pattern in the path and you mimic the C runtime internals... Hmm, is it really worth all the pain? :D
Please explain why you can't extract the data from your EXE and write it to a temporary file. Many applications do this -- it's the classic solution to this problem.
If you really must provide a "virtual file", the cleanest solution is probably a filesystem filter driver. "clean" doesn't mean "good" -- a filter is a fully documented and supported solution, so it's cleaner than API hooking, injection, etc. However, filesystem filters are not easy.
OSR Online is the best place to find Windows filesystem information. The NTFSD mailing list is where filesystem developers hang out.
How about using some sort of RAM disk and writing the file to it? I have tried some ramdisks myself, though never found a good one; tell me if you are successful.
Well, if you need to have the virtual file allocated in your exe, you will need to create a vector, stream or char array big enough to hold all of the virtual data you want to write.
That is the only solution I can think of that does not do any I/O to disk (even if you don't write to a file).
If you need to keep a file-like path syntax, just write a class that mimics that behaviour and writes to your memory buffer instead of to a file. It's as simple as it gets. Remember KISS.
Cheers
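A minimal sketch of such a memory-buffer class (the interface names are made up):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <string>

// Mimics basic file behaviour (write/read/seek) on top of a std::string.
class MemoryFile {
public:
    void write(const void* data, std::size_t n) {
        buf_.append(static_cast<const char*>(data), n);
    }
    std::size_t read(void* out, std::size_t n) {
        n = std::min(n, buf_.size() - pos_);
        std::memcpy(out, buf_.data() + pos_, n);
        pos_ += n;
        return n;
    }
    void seek(std::size_t pos) { pos_ = std::min(pos, buf_.size()); }

private:
    std::string buf_;
    std::size_t pos_ = 0;
};
```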
Open the file called "NUL:" for writing. It's writable, but the data are silently discarded. Kinda like /dev/null of *nix fame.
You cannot memory-map it though. Memory-mapping implies read/write access, and NUL is write-only.
I'm guessing that this dll can't take a stream? It's almost too simple to ask, but if it can, you could just use that.
Have you tried using the \\?\ prefix when using named pipes? Many APIs support using \\?\ to pass the remainder of the path directly through without any parsing/modification.
http://msdn.microsoft.com/en-us/library/aa365247(VS.85,lightweight).aspx
Why not just add it as a resource - http://msdn.microsoft.com/en-us/library/7k989cfy(VS.80).aspx - the same way you would add an icon.