Does the Linux/Ubuntu OS create a table that keeps an entry for every file stored on the hard drive, along with its absolute address?
Just curious, because I am planning to write a file searcher program.
I know there are terminal commands like find, but since I will be programming in C, I was wondering whether Ubuntu maintains any such thing and, if so, how I can access that table.
Update:
As some people mentioned, there is no such thing, so if I want to make a file searcher program I would have to search every folder of every directory, starting from the root directory. The resulting program would be very sluggish and perform poorly! So is there a better way, or is my way good enough?
The "thing" you describe is commonly called a file system and as you may know there's a choice of file systems available for Linux: ext3, ext4, btrfs, Reiser, xfs, jffs, and others.
The table you describe would probably map quite well onto the inode-directory combo.
From my point of view, the entire management of where files are physically located on the hard disk is none of the user's business; it's strictly the operating system's domain and not something to mess with unless you have an excellent excuse (like writing a data recovery program) and very deep knowledge of the file system(s) involved. Moreover, in most cases a file's storage will not be contiguous, but spread over several locations on the disk (fragments).
But the more important question here is probably: what exactly do you hope to achieve by finding files this way?
EDIT: based on OP's comment I think there may be a serious misunderstanding here - I can't see the link between absolute file addresses and a file searcher, but that may be due to a fundamental difference between our respective understanding of "absolute address" in the context of a file system.
If you just want to look at all files in the file system you can either
perform a recursive directory read or
use the database prepared by updatedb as suggested by SmartGuyz
As you want to look into the files anyway - and that's where almost all the runtime will be spent - I can't think of any advantage 2) would have over 1), and 2) has the disadvantage of an external dependency: the file prepared by updatedb must exist and be very fresh.
An SO question about more advanced ways of traversing directories than good old opendir/readdir/closedir: Efficiently Traverse Directory Tree with opendir(), readdir() and closedir()
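For completeness, here's a minimal sketch of option 1), a plain recursive traversal with opendir()/readdir()/closedir() (error handling kept short; directories reached only through symlinks are not descended into because lstat is used):

    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>
    #include <sys/stat.h>

    /* recursively print every entry below 'path' */
    static void walk(const char *path)
    {
        DIR *dir = opendir(path);
        if (!dir)
            return;                     /* e.g. permission denied: skip it */

        struct dirent *e;
        while ((e = readdir(dir)) != NULL) {
            if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                continue;

            char full[4096];
            snprintf(full, sizeof(full), "%s/%s", path, e->d_name);
            puts(full);

            struct stat st;
            if (lstat(full, &st) == 0 && S_ISDIR(st.st_mode))
                walk(full);             /* recurse into subdirectories */
        }
        closedir(dir);
    }

    int main(int argc, char *argv[])
    {
        walk(argc > 1 ? argv[1] : ".");
        return 0;
    }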
EDIT2, based on OP's question addendum: yes, traversing directories takes time, but that's life. Consider the next best thing, i.e. locate and friends. It depends on a "database" that is updated regularly (typically once daily), so files added or renamed after the last scheduled update will not be found, and files removed after the last scheduled update will still be mentioned in the database although they don't exist anymore. And that's assuming locate is even installed on the target machine, something you can't be sure of.
As with most things in programming, it never hurts to look at previous solutions to the same problem, so may I suggest you read the documentation of GNU findutils?
No, there is no single table of the block addresses of files; you need to go deeper.
First of all, the file layout depends on the filesystem type (e.g. ext2, ext3, btrfs, reiserfs, jfs, xfs, etc.). The Linux kernel abstracts this away: it provides drivers for accessing files on a lot of filesystems, and each partition with its filesystem is presented through the single Virtual File System (the single file-directory tree, which contains other devices as its subtrees).
So, basically, no: you need to use the kernel's abstract interfaces (readdir(), /proc/mounts and so on) in order to search for files, or roll your own userspace drivers (e.g. through FUSE) to examine raw block devices (/dev/sda1 etc.) if you really need to examine low-level details (this requires a lot of understanding of kernel/filesystem internals and is highly error-prone).
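As an aside, enumerating the mounted filesystems via /proc/mounts is straightforward with glibc's getmntent() interface; a minimal sketch:

    #include <stdio.h>
    #include <mntent.h>

    int main(void)
    {
        /* /proc/mounts lists everything the kernel currently has mounted */
        FILE *fp = setmntent("/proc/mounts", "r");
        if (!fp) {
            perror("setmntent");
            return 1;
        }
        struct mntent *m;
        while ((m = getmntent(fp)) != NULL)
            printf("%s mounted on %s (type %s)\n",
                   m->mnt_fsname, m->mnt_dir, m->mnt_type);
        endmntent(fp);
        return 0;
    }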
updatedb -l 0 -o db_file -U source_directory
This will create a database of file names, which you can then query with locate -d db_file pattern; I hope this helps.
No. The file system is actually structured with directories, each directory containing files and directories.
Within Linux, all of this is managed in the kernel with inodes.
YES.
Conceptually, it does create a table of every file's location on the disc**. There are a lot of details which muddy this picture slightly.
However, you should usually not care. You don't want to work at that level, nor should you. There are many filesystems in Linux which all do it in a slightly (or even significantly) different way.
** Not the actual physical location. A hard drive may map logical blocks to physical blocks in some way determined by its firmware.
So every directory, file, queue or whatever in Linux creates its own inodes that can be accessed in one way or another. How would I go about implementing my own inode type that doesn't quite fit any of the existing descriptions? A custom something that is visible in the file system but isn't a file? Do I have to extend the kernel, or is there some simpler approach?
So every directory, file, queue or whatever in Linux creates its own inodes that can be accessed in one way or another.
False. Directories, files, etc. do not create their own inodes. They are stored using inodes belonging to the filesystem that holds them. The inodes are not even created specifically for particular files -- all inodes are created as part of filesystem creation, before any files are stored on it.*
How would I go about implementing my own inode type that doesn't quite fit any of the existing descriptions?
It's unclear why you think you need a custom inode type, but if you do, then you need a whole custom filesystem. You will need to write either kernel drivers or FUSE drivers implementing it, plus all the needed utilities for formatting a device with that FS, mounting and unmounting it, checking it for errors, etc.
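To give an idea of the scale involved, a minimal read-only FUSE filesystem (the classic "hello" example, written against the FUSE 2.x API) already looks roughly like this; a real custom filesystem would need far more than these four operations:

    /* build with: gcc hello_fs.c `pkg-config fuse --cflags --libs` -o hello_fs */
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    static const char *hello_path = "/hello";
    static const char *hello_str  = "Hello, world!\n";

    static int hello_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
        } else if (strcmp(path, hello_path) == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = strlen(hello_str);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                             off_t offset, struct fuse_file_info *fi)
    {
        (void)offset; (void)fi;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0);
        filler(buf, "..", NULL, 0);
        filler(buf, hello_path + 1, NULL, 0);
        return 0;
    }

    static int hello_open(const char *path, struct fuse_file_info *fi)
    {
        if (strcmp(path, hello_path) != 0)
            return -ENOENT;
        if ((fi->flags & O_ACCMODE) != O_RDONLY)
            return -EACCES;
        return 0;
    }

    static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                          struct fuse_file_info *fi)
    {
        (void)fi;
        if (strcmp(path, hello_path) != 0)
            return -ENOENT;
        size_t len = strlen(hello_str);
        if ((size_t)offset >= len)
            return 0;
        if (offset + size > len)
            size = len - offset;
        memcpy(buf, hello_str + offset, size);
        return (int)size;
    }

    static struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .open    = hello_open,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &hello_ops, NULL);
    }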
A custom something that is visible in the file system but isn't a file? Do I have to extend the kernel or is there some simpler approach?
Everything is a file. This is one of the principles of UNIX. But perhaps you mean something that isn't a regular file. Unfortunately for you, even a custom file system and inode wouldn't be enough to give you a custom file type. The partition of filesystem entries into regular files, directories, character and block special files, etc. is deeply ingrained in the kernel and the standard file management APIs and utilities. You would not only have to extend the kernel (beyond writing filesystem drivers), but also modify the C standard library, several standard utilities, and probably a bunch of other libraries and utilities affected by those changes. In the end, you would basically have your own whole operating system.
But maybe your premise is wrong. UNIX has been going along just fine with pretty much its current file model for a very long time. It's unclear why you want what you say you want, but there are at least two simpler options that might suit you:
Write a kernel driver for a character or block device with a filesystem interface, and use the system's existing facilities to link one or more device instances to the filesystem as a character or block special file.
Embed what you want to do in regular files / directories / etc.
*More or less. I ignore special administrative actions that may in some cases be able to expand a filesystem and add inodes to it in the process.
We need to implement a feature in our program that would sync two or more watched folders.
In reality, the folders will reside on different computers on the local network, but to narrow down the problem, let's assume the tool runs on a single computer, and has a list of watched folders that it needs to sync, so any changes to one folder should propagate to all others.
There are several problems I've thought about so far:
Deleting files is a valid change, so if folder A has a file but folder B doesn't, it could mean that the file was created in folder A and needs to propagate to folder B, but it could also mean that the file was deleted in folder B and needs to propagate to folder A.
Files might be changed/deleted simultaneously in several directories, and with conflicting changes, I need to somehow resolve the conflicts.
One or more of the folders might be offline at any time, so changes must be stored and later propagated to it when it comes online.
I am not sure what kind of help if any the community can offer here, but I'm thinking about these:
If you know of a tool that already does this, please point it out. Our product is closed-source and commercial, however, so its license must be compatible with that for us to be able to use it.
If you know of any existing literature or research on the problem (papers and such), please link to it. I assume that this problem would have been researched already.
Or, if you have general advice on the best way to approach this problem: which algorithms to use, how to resolve conflicts or race conditions if they exist, and other gotchas.
The OS is Windows, and I will be using Qt and C++ to implement it, if no tools or libraries exist.
It's not exceptionally hard. You just need to compare the relevant change journal records. Of course, in a distributed network you have to assume the clocks are synchronized.
And yes, if a complex file (anything you can't parse) is edited while the network is split, you cannot avoid problems. This is known as the CAP theorem: your system cannot be Consistent, always Available, and also tolerant of Partitioning (going offline).
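For the watched-folder side of the question on Windows (a simpler, per-directory alternative to reading the change journal directly), a rough sketch with ReadDirectoryChangesW might look like this; C:\watched is a placeholder path, and a real sync tool would use overlapped I/O and one watcher per configured folder:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* open the directory itself; the path is a placeholder */
        HANDLE dir = CreateFileW(L"C:\\watched", FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 NULL, OPEN_EXISTING,
                                 FILE_FLAG_BACKUP_SEMANTICS, NULL);
        if (dir == INVALID_HANDLE_VALUE) return 1;

        DWORD buf[16 * 1024];   /* DWORD-aligned buffer for the notifications */
        DWORD bytes;
        /* blocking loop for illustration only */
        while (ReadDirectoryChangesW(dir, buf, sizeof(buf), TRUE,
                                     FILE_NOTIFY_CHANGE_FILE_NAME |
                                     FILE_NOTIFY_CHANGE_DIR_NAME |
                                     FILE_NOTIFY_CHANGE_LAST_WRITE,
                                     &bytes, NULL, NULL)) {
            if (bytes == 0)
                continue;   /* buffer overflow notification: rescan instead */
            FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *)buf;
            for (;;) {
                wprintf(L"action %lu: %.*ls\n", fni->Action,
                        (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
                if (fni->NextEntryOffset == 0) break;
                fni = (FILE_NOTIFY_INFORMATION *)((BYTE *)fni + fni->NextEntryOffset);
            }
        }
        CloseHandle(dir);
        return 0;
    }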
I am working on a game, and one of the requirements per the licence agreement of the sound assets I am using is that they be distributed in a way that makes them inaccessible to the end user. So, I am thinking about aggregating them into a flat file, encrypting them, or some such. The problem is that the sound library I am using (Hekkus Sound System) only accepts a 'char*' file path and handles file reading internally. So, if I am to continue to use it, I will have to override the C stdio file functions to handle encryption or whatever I decide to do. This seems doable, but it worries me. Looking on the web, I see people running into strange, frustrating problems doing this on the platforms I am concerned with (Win32, Android and iOS).
Does there happen to be a cross-platform library out there that takes care of this? Is there a better approach entirely you would recommend?
Do you have the option of using a named pipe instead of an ordinary file? If so, you can present the pipe to the sound library as the file to read from, and you can decrypt your data and write it to the pipe, no problem. (See Beej's Guide for an explanation of named pipes.)
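On the POSIX side (Android/iOS), a rough sketch of that idea: create a FIFO with mkfifo, hand its path (a plain char*) to the sound library, and feed decrypted bytes into it from another thread. decrypt_and_write() below is just a placeholder for whatever decryption you use, and note that a pipe cannot be seeked, which the library may require:

    /* build with: gcc fifo_demo.c -pthread -o fifo_demo */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <pthread.h>

    /* placeholder for decrypting the packed asset and streaming it out */
    static void decrypt_and_write(int fd)
    {
        const char *demo = "decrypted sound data would go here";
        write(fd, demo, strlen(demo));
    }

    static void *feeder(void *arg)
    {
        const char *fifo_path = arg;
        int fd = open(fifo_path, O_WRONLY);   /* blocks until a reader opens it */
        if (fd >= 0) {
            decrypt_and_write(fd);
            close(fd);
        }
        return NULL;
    }

    int main(void)
    {
        char fifo_path[] = "/tmp/mysound.fifo";   /* placeholder path */
        mkfifo(fifo_path, 0600);

        pthread_t t;
        pthread_create(&t, NULL, feeder, fifo_path);

        /* here you would pass fifo_path to the sound library;
           as a stand-in, just read the data back ourselves */
        FILE *f = fopen(fifo_path, "rb");
        char buf[4096];
        size_t n = fread(buf, 1, sizeof(buf), f);
        printf("read %zu bytes from the pipe\n", n);
        fclose(f);

        pthread_join(t, NULL);
        unlink(fifo_path);
        return 0;
    }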
Overriding stdio so that a library whose inner workings you don't know behaves in a way its developer never had in mind doesn't look like the right approach to me, and it isn't really easy. Implementing a RAM drive needs so much effort that I recommend searching for another audio library instead.
The Hekkus Sound System, as far as I can tell, was built by a single person and last updated in 2012. I wouldn't rely on a library with only one person working on it and no shared sources.
My advice: invest your time in finding a proper sound library instead of a fishy workaround for this one.
One possibility is to use an encrypted loopback filesystem (Google for additional resources).
The way this works is that you put your assets on an encrypted filesystem, which actually lives in a simple file. This filesystem gets mounted someplace as a loopback device. The password needs to be supplied at attach/mount time. Once mounted, all files are available as regular files to your software; otherwise, the files are encrypted and inaccessible.
It's compiler-dependent and not a guaranteed feature, but many allow you to embed files/resources directly into the exe and read them in your code as if from disk. You could embed your sound files that way. It will significantly increase the size of your exe however.
Another UNIX-based approach:
The environment variable LD_PRELOAD can be used to override any shared library an executable has been linked against. All symbols exported by a library mentioned in LD_PRELOAD are resolved to that library, including calls to libc functions like open, read, and close. Using libdl, it is also possible for the wrapping library to call through to the original implementation.
So, all you need to do is to start the process which uses the Hekkus Sound System in an environment that has LD_PRELOAD set appropriately, and you can do anything you like to the file that it reads.
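A minimal sketch of such a shim for open() (the decryption hook is a placeholder; a real shim would likely also wrap read()/close() or fopen()):

    /* wrapper.c - LD_PRELOAD shim around open()
     * build with: gcc -shared -fPIC wrapper.c -o wrapper.so -ldl
     * run with:   LD_PRELOAD=./wrapper.so ./game
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdarg.h>
    #include <fcntl.h>

    typedef int (*open_fn)(const char *, int, ...);

    int open(const char *path, int flags, ...)
    {
        /* look up the real open() in the next library (normally libc) */
        open_fn real_open = (open_fn)dlsym(RTLD_NEXT, "open");

        mode_t mode = 0;
        if (flags & O_CREAT) {
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t)va_arg(ap, int);
            va_end(ap);
        }

        fprintf(stderr, "intercepted open(%s)\n", path);
        /* here you could detect your encrypted asset paths and return a
         * descriptor for already-decrypted data instead */
        return real_open(path, flags, mode);
    }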
Note, however, that there is absolutely no way that you can keep the data inaccessible from the user: the very fact that he has to be able to hear it means he has to have access. Even if all software in the chain would use encryption, and your user is not willing to hack hardware, it would not be exactly difficult to connect the audio output jack with an audio input jack, would it? And you can't forbid you user to use earphones, can you? And, of course, the kernel can see all audio output unencrypted and can send a copy somewhere else...
The solution to your problem would be a ramdisk.
http://en.wikipedia.org/wiki/RAM_drive
Using a piece of memory in RAM as if it were a disk.
There is software available for this too; caching databases in RAM is becoming popular.
And it keeps the file off the disk, where it would be easily accessible to the user.
My problem is pretty complicated and potentially impossible, but here we go:
Using C++,
I'm currently working on a universal server engine for a game project of mine. Universal, because every part of the engine will be loaded dynamically after startup. Game objects will also inherit from a base object and have overloaded "Simulate" functions. That way, every object has its own specific behavior, and I can do something I call "C++ scripting", which is a lot faster than interpreted Lua script files. It's also more dynamic.
(Please, no solutions that would kill the C++ "scripting" part, like "forget the dynamic linking, that's insane". This performance boost is absolutely necessary, since I'm working with large voxel maps.)
My Problem:
That adds up to a lot of .dll/.so files, and I want to pack them into a simple archive so I can compress them with zlib and maybe bundle everything together with textures and sounds into little "object packages".
Now, the Windows DLL API and the Linux SO API won't allow me to load a dll/so file from a memory address, which is a shame. (Am I right there, or can I bypass that? :) ) I don't want to unzip and temporarily save those files on the filesystem, because there are hundreds to thousands of them and that would increase the loading time a lot.
Also I'm not interested in more external dependencies like boost.
So here are my Questions:
Is there a cross-platform method to create virtual files IN memory with a real path?
That way I could bypass the slow IO speeds of HDDs.
Or is it really not such a big deal to use temp files, because the file buffers of modern operating systems are fast/intelligent enough to NOT write all those files to disc?
(Actually, Linux supports virtual file systems, but Windows does not...)
I hope you guys can help me there :)
Not with the WinAPI, that's for sure, but you can do it manually. You can load it into memory, fill its import table and call exported functions (after you have called DllMain). I saw a program where someone actually created a new process with that method... See the PE documentation for details, but it works.
It's also relatively easy to do, since you only need to find the PE import tables and do what the dynamic linker does: fill them with jumps and addresses. DLLs contain position-independent code, so no relocation is needed.
It should be the same on Linux (only using the ELF structure), but if you have a better solution with virtual file systems, you should use that.
I'd like to simulate a file without writing it on disk. I have a file at the end of my executable and I would like to give its path to a dll. Of course since it doesn't have a real path, I have to fake it.
I first tried using named pipes under Windows to do it. That would allow for a path like \\.\pipe\mymemoryfile, but I can't make it work, and I'm not sure the dll would support a path like this.
Second, I found CreateFileMapping and GetMappedFileName. Can they be used to simulate a file in a fragment of another? I'm not sure this is what this API does.
What I'm trying to do seems similar to boxedapp. Any ideas about how they do it? I suppose it's something like API interception (like Detours), but that would be a lot of work. Is there another way to do it?
Why? I'm interested in this specific solution because I'd like to hide the data, for the benefit of distributing only one file, but also for the geeky reason of making it work that way ;)
I agree that copying data to a temporary file would work and be a much easier solution.
Use BoxedApp and do not worry.
You can store the data in an NTFS stream. That way you can get a real path pointing to your data that you can give to your dll in the form of
x:\myfile.exe:mystreamname
This works precisely like a normal file; however, it only works if the file system used is NTFS. This is standard under Windows nowadays, but is of course not an option if you want to support older systems or would like to be able to run this from a USB stick or similar. Note that any streams present in a file will be lost if the file is sent as an attachment in mail or simply copied from an NTFS partition to a FAT32 partition.
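A small sketch of writing and then reading back such a stream with plain CreateFile calls (the file and stream names are placeholders):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* "data.bin:mystreamname" is a placeholder; any path on NTFS works */
        const char *path = "data.bin:mystreamname";
        DWORD written, readback;

        HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;
        WriteFile(h, "secret payload", 14, &written, NULL);
        CloseHandle(h);

        char buf[64] = {0};
        h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                        OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;
        ReadFile(h, buf, sizeof(buf) - 1, &readback, NULL);
        CloseHandle(h);

        printf("stream contents: %s\n", buf);
        return 0;
    }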
I'd say that the most compatible way would be to write your data to an actual file, but you can of course do it one way on NTFS systems and another on FAT systems. I recommend against that because of the added complexity. The appropriate way would be to distribute your files separately, of course, but since you've indicated that you don't want this, you should in that case write the data to a temporary file and give the dll the path to that file. Make sure you write the temporary file to the user's temp directory (you can find the path using GetTempPath in C/C++).
Your other option would be to write a filesystem filter driver, but that is a road that I strongly advise against. That sort of defeats the purpose of using a single file as well...
Also, in case you want only a single file for distribution, how about using a zip file or an installer?
Pipes are for communication between processes running concurrently. They don't store data for later access, and they don't have the same semantics as files (you can't seek or rewind a pipe, for instance).
If you're after file-like behaviour, your best bet will always be to use a file. Under Windows, you can pass FILE_ATTRIBUTE_TEMPORARY to CreateFile as a hint to the system to avoid flushing data to disk if there's sufficient memory.
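A minimal sketch of that hint (the file name is a placeholder; FILE_ATTRIBUTE_TEMPORARY only advises the cache manager, it does not guarantee the data never reaches disk):

    #include <windows.h>

    int main(void)
    {
        /* FILE_ATTRIBUTE_TEMPORARY hints that the data should stay in the
         * cache and not be flushed to disk unless memory pressure forces it */
        HANDLE h = CreateFileA("scratch.tmp",               /* placeholder name */
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ, NULL, CREATE_ALWAYS,
                               FILE_ATTRIBUTE_TEMPORARY, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        DWORD written;
        WriteFile(h, "payload", 7, &written, NULL);
        CloseHandle(h);

        /* ... let the DLL read "scratch.tmp" here ... */

        DeleteFileA("scratch.tmp");   /* clean up afterwards */
        return 0;
    }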
If you're worried about the performance hit of writing to disk, the above should be sufficient to avoid the performance impact in most cases. (If the system is low enough on memory to force the file data out to disk, it's probably also swapping heavily anyway -- you've already got a performance problem.)
If you're trying to avoid writing to disk for some other reason, can you explain why? In general, it's quite hard to stop data from ever hitting the disk -- the user can always hibernate the machine, for instance.
Since you don't have control over the DLL you have to assume that the DLL expects an actual file. It probably at some point makes that assumption which is why named pipes are failing on you.
The simplest solution is to create a temporary file in the temp directory, write the data from your EXE to the temp file and then delete the temporary file.
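A hedged sketch of that sequence using GetTempPath/GetTempFileName (the payload here is a placeholder for the data extracted from the EXE):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        char dir[MAX_PATH], name[MAX_PATH];
        DWORD written;

        /* locate the user's temp directory and get a unique file name in it */
        if (!GetTempPathA(MAX_PATH, dir)) return 1;
        if (!GetTempFileNameA(dir, "snd", 0, name)) return 1;

        /* write the embedded payload out (placeholder data here) */
        HANDLE h = CreateFileA(name, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;
        WriteFile(h, "embedded data", 13, &written, NULL);
        CloseHandle(h);

        printf("pass this path to the DLL: %s\n", name);

        /* ... after the DLL is done with it ... */
        DeleteFileA(name);
        return 0;
    }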
Is there a reason you are embedding this "pseudo-file" at the end of your EXE instead of just distributing it with your application? You are obviously already distributing this third-party DLL with your application, so one more file doesn't seem like it is going to hurt.
Another question: will this data be changing? That is, are you expecting to write data back to this "pseudo-file" in your EXE? I don't think that will work well. Standard users may not have write access to the EXE, and that would probably drive anti-virus software nuts.
And no, CreateFileMapping and GetMappedFileName definitely won't work, since they don't give you a file name that can be passed to CreateFile. If you could somehow get this DLL to accept a HANDLE, then that would work.
And I wouldn't even bother with API interception. Just hand the DLL a path to an actual file.
Reading your question made me think: if you can pretend an area of memory is a file and have kind of "virtual path" to it, then this would allow loading a DLL directly from memory which is what LoadLibrary forbids by design by asking for a path name. And this is why people write their own PE loader when they want to achieve that.
I would say you can't achieve what you want with file mapping: the purpose of file mapping is to treat a portion of a file as if it was physical memory, and you're wanting the reciprocal.
Using Detours implies that you would have to replicate everything the intercepted DLL function does except obtaining data from a real file; hence it's not generic. Or, even more intricate: let's pretend the DLL uses fopen; then you provide your own fopen that detects a special pattern in the path and you mimic the C runtime internals... Hmm, is it really worth all the pain? :D
Please explain why you can't extract the data from your EXE and write it to a temporary file. Many applications do this -- it's the classic solution to this problem.
If you really must provide a "virtual file", the cleanest solution is probably a filesystem filter driver. "clean" doesn't mean "good" -- a filter is a fully documented and supported solution, so it's cleaner than API hooking, injection, etc. However, filesystem filters are not easy.
OSR Online is the best place to find Windows filesystem information. The NTFSD mailing list is where filesystem developers hang out.
How about using some sort of RAM disk and writing the file to that disk? I have tried some RAM disks myself, though I never found a good one; tell me if you are successful.
Well, if you need to have the virtual file allocated in your exe, you will need to create a vector, stream or char array big enough to hold all of the virtual data you want to write.
That is the only solution I can think of without doing any I/O to disk (even if you don't write to a file).
If you need to keep a file-like path syntax, just write a class that mimics that behaviour and, instead of writing to a file, write to your memory buffer. It's as simple as it gets. Remember KISS.
Cheers
Open the file called "NUL:" for writing. It's writable, but the data are silently discarded. Kinda like /dev/null of *nix fame.
You cannot memory-map it though. Memory-mapping implies read/write access, and NUL is write-only.
I'm guessing that this DLL can't take a stream? It's almost too simple to ask, but if it can, you could just use that.
Have you tried using the \\?\ prefix when using named pipes? Many APIs support using \\?\ to pass the remainder of the path directly through without any parsing/modification.
http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx
Why not just add it as a resource - http://msdn.microsoft.com/en-us/library/7k989cfy(VS.80).aspx - the same way you would add an icon.
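As a sketch of reading such an embedded resource back at runtime (the resource ID 101 and the custom type "PAYLOAD" are assumptions; they must match whatever your .rc file declares, e.g. 101 PAYLOAD "mydata.bin"):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HMODULE self = GetModuleHandle(NULL);
        /* look up the embedded blob by ID and custom type */
        HRSRC res = FindResourceA(self, MAKEINTRESOURCEA(101), "PAYLOAD");
        if (!res) return 1;

        HGLOBAL blk = LoadResource(self, res);
        const void *data = LockResource(blk);
        DWORD size = SizeofResource(self, res);
        if (!data) return 1;

        printf("embedded resource is %lu bytes\n", (unsigned long)size);
        /* 'data' can now be decrypted, handed to a decoder, or written
           to a temporary file for a library that insists on a path */
        return 0;
    }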