I would like to log every syscall on a specified directory, and I've found this repository: https://github.com/rflament/loggedfs
It creates a virtual filesystem with FUSE and logs everything in it, just like I want.
I tried to port it to Mac, but it uses a "trick" that doesn't work on OS X: the lstat call gets stuck for about 10 seconds and then crashes.
I would like to understand why.
This is the main part of my code:
// g++ -Wall main.cpp `pkg-config fuse --cflags --libs` -o hello
#define FUSE_USE_VERSION 26

#include <fuse.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static char *path;
static int savefd;

static int getattr(const char *path, struct stat *stbuf)
{
    int res;
    char rPath[1024];

    strcpy(rPath, ".");
    strcat(rPath, path);
    res = lstat(rPath, stbuf); // Mac stuck here
    return (res == -1 ? -errno : 0);
}

static void* loggedFS_init(struct fuse_conn_info* info)
{
    fchdir(savefd);
    close(savefd);
    return NULL;
}

int main(int argc, char *argv[])
{
    struct fuse_operations oper;

    bzero(&oper, sizeof(fuse_operations));
    oper.init = loggedFS_init;
    oper.getattr = getattr;

    path = strdup(argv[argc - 1]);
    printf("chdir to %s\n", path);
    chdir(path);
    savefd = open(".", 0);

    return fuse_main(argc, argv, &oper, NULL);
}
I had a very close look at LoggedFS and tested it for POSIX compliance using pjdfstest, resulting in 3 issues (or groups of issues). I ended up re-implementing it in Python, fully POSIX compliant. I have not tested it on OS X yet, so I'd be happy to receive some feedback ;)
The "trick" you are mentioning could be the root cause of your issue, although I am not entirely sure. It causes a fundamental problem by adding another character to the path, which leads to issues when the length of path gets close to PATH_MAX. libfuse already passes paths with a leading / into FUSE operations. The additional . plus the "misleading" / (root of the mounted filesystem, not the "global" root folder) are two characters "too many", effectively reducing the maximum allowed path length to PATH_MAX minus 2. I explored options of altering PATH_MAX and informing user land software about a smaller PATH_MAX, which turned out to be impossible.
There is a way around this, however. Do not close the file descriptor savefd in the init routine. Keep it open and instead close it in the destroy routine, which is called by FUSE when the filesystem is unmounted. You can then use savefd to specify paths relative to it, with fstatat (Linux, OS X / BSD) instead of lstat. Its prototype looks like this:
int fstatat(int dirfd, const char *pathname, struct stat *buf,
            int flags);
You have to pass savefd into dirfd and remove the leading / from the content of path before passing it into pathname.
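For illustration, here is a minimal sketch of what getattr could look like with this approach (the relative-path handling is my own and untested on OS X):

#include <fcntl.h>     // AT_SYMLINK_NOFOLLOW
#include <sys/stat.h>  // fstatat
#include <errno.h>

static int savefd;  // opened in main(), kept open until the destroy routine

static int getattr(const char *path, struct stat *stbuf)
{
    // Strip the leading '/'; the root of the mounted filesystem becomes ".".
    const char *rel = (path[1] == '\0') ? "." : path + 1;
    // AT_SYMLINK_NOFOLLOW makes fstatat behave like lstat.
    int res = fstatat(savefd, rel, stbuf, AT_SYMLINK_NOFOLLOW);
    return (res == -1 ? -errno : 0);
}

This keeps the original path length intact and no longer depends on the current working directory.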
I'm implementing a native host for a browser extension. I designed my implementation around std::cin instead of C-style getchar().
The issue here is that std::cin is not opened in binary mode, which matters on Windows-based hosts because the Chrome browser doesn't deal well with Windows-style \r\n line endings, so I have to read in binary mode.
To read in binary mode, I have to use _setmode(_fileno(stdin), _O_BINARY);
My IDE can't find a definition for _fileno, and I found that the workaround is to use the following macro:
#if !defined(_fileno)
#define _fileno(__F) ((__F)->_file)
#endif
However, I'm not confident about this macro. I believe something is wrong, but I'm using the latest MinGW compiler and am not sure why it's not defined.
Update: it seems the function is hidden behind a __STRICT_ANSI__ guard, and I have no idea how to disable it.
In any case, the program compiles fine and the browser starts it. When I send a message from the browser, the application is able to read the length of the message, but when it tries to read the content, the std::cin.read() call puts nothing into the buffer vector. The message is also not null-terminated, but I don't think that is causing the issue.
I also made an attempt to send a dummy message to the browser without reading first, but it seems to freeze the browser.
#include <iostream>
#include <cstdio>
#include <string>
#include <vector>

#ifdef __WIN32
#include <fcntl.h>
#include <io.h>
#endif

#if !defined(_fileno)
#define _fileno(__F) ((__F)->_file)
#endif

enum class Platforms {
    macOS = 1,
    Windows = 2,
    Linux = 3
};

Platforms platform;

#ifdef __APPLE__
constexpr Platforms BuildOS = Platforms::macOS;
#elif __linux__
constexpr Platforms BuildOS = Platforms::Linux;
#elif __WIN32
constexpr Platforms BuildOS = Platforms::Windows;
#endif

void sendMessage(std::string message) {
    auto *data = message.data();
    auto size = uint32_t(message.size());
    std::cout.write(reinterpret_cast<char *>(&size), 4);
    std::cout.write(data, size);
    std::cout.flush();
}

int main() {
    if constexpr (BuildOS == Platforms::Windows) {
        // Chrome doesn't deal well with Windows style \r\n
        _setmode(_fileno(stdin), _O_BINARY);
        _setmode(_fileno(stdout), _O_BINARY);
    }

    while (true) {
        std::uint32_t messageLength;
        // First four bytes contain the message length
        std::cin.read(reinterpret_cast<char*>(&messageLength), 4);
        if (std::cin.eof())
        {
            break;
        }

        std::vector<char> buffer;
        // Allocate ahead
        buffer.reserve(std::size_t(messageLength) + 1);
        std::cin.read(&buffer[0], messageLength);

        std::string message(buffer.data(), buffer.size());
        sendMessage("{type: 'Hello World'}");
    }
}
Solution:
buffer.reserve(std::size_t(messageLength) + 1);
should be
buffer.resize(std::size_t(messageLength) + 1);
or we can presize the buffer during construction with
std::vector<char> buffer(messageLength + 1);
Problem Explanation:
buffer.reserve(std::size_t(messageLength) + 1);
reserves capacity but doesn't change the size of the vector, so technically
std::cin.read(&buffer[0], messageLength);
is undefined behavior (it writes into storage the vector doesn't yet treat as elements), and at
std::string message(buffer.data(), buffer.size());
buffer.size() is still 0.
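For reference, a minimal sketch of the corrected body of the loop, using the same names as the code above:

std::vector<char> buffer(messageLength);             // size == messageLength
std::cin.read(buffer.data(), messageLength);         // writes into real elements
std::string message(buffer.data(), buffer.size());   // size() now matches

The payload from the browser is not null-terminated, so the extra +1 element is only needed if you want a terminator for C-style APIs.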
I'm trying to solve a binary exploitation problem from picoCTF, but I'm having trouble with gdb.
Here is the source code of the problem (I've commented some parts to help me).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <wchar.h>
#include <locale.h>

#define BUF_SIZE 32
#define FLAG_LEN 64
#define KEY_LEN 4

void display_flag() {
    char buf[FLAG_LEN];
    FILE *f = fopen("flag.txt", "r");
    if (f == NULL) {
        printf("'flag.txt' missing in the current directory!\n");
        exit(0);
    }
    fgets(buf, FLAG_LEN, f);
    puts(buf);
    fflush(stdout);
}

// loads value into key, global variables ie not on stack
char key[KEY_LEN];
void read_canary() {
    FILE *f = fopen("/problems/canary_3_257a2a2061c96a7fb8326dbbc04d0328/canary.txt", "r");
    if (f == NULL) {
        printf("[ERROR]: Trying to Read Canary\n");
        exit(0);
    }
    fread(key, sizeof(char), KEY_LEN, f);
    fclose(f);
}

void vuln() {
    char canary[KEY_LEN];
    char buf[BUF_SIZE];
    char user_len[BUF_SIZE];
    int count;
    int x = 0;
    memcpy(canary, key, KEY_LEN); // copies "key" to canary, an array on the stack
    printf("Please enter the length of the entry:\n> ");
    while (x < BUF_SIZE) {
        read(0, user_len + x, 1);
        if (user_len[x] == '\n') break;
        x++;
    }
    sscanf(user_len, "%d", &count); // gives count the value of the len of user_len
    printf("Input> ");
    read(0, buf, count); // reads count bytes to buf from stdin
    // compares canary (variable on stack) to key
    // if overwriting need to get the value of key and maintain it, i assume its constant
    if (memcmp(canary, key, KEY_LEN)) {
        printf("*** Stack Smashing Detected *** : Canary Value Corrupt!\n");
        exit(-1);
    }
    printf("Ok... Now Where's the Flag?\n");
    fflush(stdout);
}

int main(int argc, char **argv) {
    setvbuf(stdout, NULL, _IONBF, 0);
    int i;
    gid_t gid = getegid();
    setresgid(gid, gid, gid);
    read_canary();
    vuln();
    return 0;
}
When I run this normally with ./vuln, I get normal execution. But when I open it in gdb with gdb ./vuln and then run it with run, I get the [ERROR]: Trying to Read Canary message. Is this something that is intended to make the problem challenging? I don't want the solution; I just don't know whether this is intended behaviour or a bug. Thanks.
I don't want the solution, I just don't know if this is intended behaviour or a bug.
I am not sure whether you'll consider it intended behavior, but it's definitely not a bug.
Your ./vuln is a set-gid program. As such, it runs as group canary_3 when run outside of GDB, but as your own group when run under GDB (for obvious security reasons).
We can assume that the canary_3 group has read permission on canary.txt, but your group does not.
P.S. If you printed strerror(errno) (as the comments suggested), the resulting Permission denied message would have made the failure obvious.
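For example, here is a small sketch of that diagnostic inside read_canary() (it adds <errno.h> to the problem's source):

#include <errno.h>   // errno
#include <string.h>  // strerror

if (f == NULL) {
    // Report why fopen() failed, e.g. "Permission denied"
    printf("[ERROR]: Trying to Read Canary (%s)\n", strerror(errno));
    exit(0);
}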
I am working with mmap() to read big files quickly, basing my code on the answers to this question (Fast textfile reading in c++).
I am using the second version from sehe's answer:
#include <algorithm>
#include <iostream>
#include <cstring>
#include <cstdlib>   // exit

// for mmap:
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>

const char* map_file(const char* fname, size_t& length);

int main()
{
    size_t length;
    auto f = map_file("test.cpp", length);
    auto l = f + length;

    uintmax_t m_numLines = 0;
    while (f && f != l)
        if ((f = static_cast<const char*>(memchr(f, '\n', l - f))))
            m_numLines++, f++;

    std::cout << "m_numLines = " << m_numLines << "\n";
}

void handle_error(const char* msg) {
    perror(msg);
    exit(255);
}

const char* map_file(const char* fname, size_t& length)
{
    int fd = open(fname, O_RDONLY);
    if (fd == -1)
        handle_error("open");

    // obtain file size
    struct stat sb;
    if (fstat(fd, &sb) == -1)
        handle_error("fstat");
    length = sb.st_size;

    const char* addr = static_cast<const char*>(mmap(NULL, length, PROT_READ, MAP_PRIVATE, fd, 0u));
    if (addr == MAP_FAILED)
        handle_error("mmap");

    // TODO close fd at some point in time, call munmap(...)
    return addr;
}
and it works just great.
But if I run it in a loop over several files (I just change the main() function name to:
void readFile(std::string &nomeFile) {
and then get the file content into the f object with:
size_t length;
auto f = map_file(nomeFile.c_str(), length);
auto l = f + length;
and call it from main() in a loop over a list of filenames), after a while I get:
open: Too many open files
I imagine there must be a way to release the file opened by open() after working on it, but I cannot figure out how and exactly where to do it. I tried:
int fc = close(fd);
at the end of the readFile() function, but it changed nothing.
Thanks a lot in advance for any help!
EDIT:
After the important suggestions I received, I made some performance comparisons between different approaches with mmap() and std::cin(); check out fast file reading in C++, comparison of different strategies with mmap() and std::cin() results interpretation for the results.
Limit to the number of concurrently open files
As you can imagine, keeping a file open consumes resources. So there is in any case a practical limit to the number of open file descriptors on your system. This is why it's highly recommended to close files that you no longer need.
The exact limit depends on the OS and the configuration. If you want to know more, there are already a lot of answers available for this kind of question.
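If you want to inspect that limit from inside a program, here is a minimal POSIX sketch with getrlimit (RLIMIT_NOFILE is the per-process file-descriptor limit):

#include <sys/resource.h>
#include <cstdio>

int main()
{
    struct rlimit lim;
    if (getrlimit(RLIMIT_NOFILE, &lim) == 0)
        std::printf("soft limit on open file descriptors: %llu\n",
                    (unsigned long long) lim.rlim_cur);
}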
Special case of mmap
Obviously, with mmap() you open a file. And doing so repeatedly in a loop risks reaching, sooner or later, the fatal file descriptor limit, as you have experienced.
The idea of trying to close the file is not bad. The problem is that, on its own, it does not work. This is specified in the POSIX documentation:
The mmap() function adds an extra reference to the file associated
with the file descriptor fildes which is not removed by a subsequent
close() on that file descriptor. This reference is removed when there
are no more mappings to the file.
Why? Because mmap() links the file in a special way to the virtual memory management in your system, and this file will be needed as long as you use the address range to which it was mapped.
So how do you remove those mappings? The answer is to use munmap():
The function munmap() removes any mappings for those entire pages
containing any part of the address space of the process starting at
addr and continuing for len bytes.
And of course, close() the file descriptor that you no longer need. A prudent approach would be to close it after munmap(), but in principle, at least on a POSIX-compliant system, it should not matter when you close it. Nevertheless, check your latest OS documentation to be on the safe side :-)
Note: file mapping is also available on Windows; the documentation about closing the handles is ambiguous about potential memory leaks if there are remaining mappings. This is why I recommend prudence about the moment of closing.
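To illustrate, here is a minimal sketch of the cleanup in readFile(), assuming map_file() is modified to also hand the descriptor back through an extra out-parameter (that parameter is my addition, not part of the original answer):

#include <sys/mman.h>
#include <unistd.h>

void readFile(const std::string& nomeFile)
{
    size_t length;
    int fd;                                // filled in by the modified map_file()
    const char* f = map_file(nomeFile.c_str(), length, fd);

    // ... count lines exactly as before ...

    munmap(const_cast<char*>(f), length);  // remove the mapping first
    close(fd);                             // then release the descriptor
}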
I want to delete all the files whose names begin with a given substring.
CString Formatter = _T("C:\\logs\\test\\test_12-12-2018_1*.*");
DeleteFile(Formatter);
I intend to delete the following files with the above code:
C:\logs\test\test_12-12-2018_1_G1.txt
C:\logs\test\test_12-12-2018_1_G2.txt
C:\logs\test\test_12-12-2018_1_G3.txt
C:\logs\test\test_12-12-2018_1_G4.txt
When I check the error with GetLastError, I get ERROR_INVALID_NAME.
Any idea how to fix this?
DeleteFile doesn't take wildcards. It looks like what you need is a FindFirstFile/FindNextFile/FindClose loop to turn your wildcard into a list of full file names.
#include <windows.h>
#include <pathcch.h>
#pragma comment(lib, "pathcch.lib")

// (In a function now)
WIN32_FIND_DATAW wfd;
WCHAR wszPattern[MAX_PATH];
HANDLE hFind;
INT nDeleted = 0;

PathCchCombine(wszPattern, MAX_PATH, L"C:\\Logs\\Test", L"test_12-12-2018_1*.*");
SetCurrentDirectoryW(L"C:\\Logs\\Test");

hFind = FindFirstFileW(wszPattern, &wfd);
if (hFind == INVALID_HANDLE_VALUE)
{
    // Handle error & exit
}

do
{
    DeleteFileW(wfd.cFileName);
    nDeleted++;
}
while (FindNextFileW(hFind, &wfd));

FindClose(hFind);
wprintf(L"Deleted %d files.\n", nDeleted);
Note that PathCchCombine, FindFirstFileW, and DeleteFileW can all fail, and robust code would check their return values and handle failures appropriately. Also, if FindNextFileW returns 0 and the last error code is not ERROR_NO_MORE_FILES, then it failed because of an actual error (not because there was nothing left to find), and that needs to be handled as well.
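For instance, here is a minimal sketch of that last check, replacing the bare FindClose call after the do/while loop above:

// The loop has exited; determine whether it was exhaustion or a real error.
DWORD err = GetLastError();
if (err != ERROR_NO_MORE_FILES)
{
    // Enumeration stopped because of an actual failure; report or handle it.
    wprintf(L"FindNextFileW failed with error %lu\n", err);
}
FindClose(hFind);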
Also, if speed is a concern (your example of deleting four files in the same directory doesn't seem like it needs it), replace the line hFind = FindFirstFileW(...) with:
hFind = FindFirstFileExW(wszPattern, FindExInfoBasic, (LPVOID)&wfd, FindExSearchNameMatch, NULL, FIND_FIRST_EX_LARGE_FETCH);
Although you can search for the file names, and then call DeleteFile individually for each, my advice would be to use one of the Windows shell functions to do the job instead.
For example, you could use code something like this:
#define _WIN32_IE 0x500
#include <windows.h>
#include <shellapi.h>
#include <shlobj.h>
#include <cstdio>    // fprintf
#include <iostream>
#include <string>

static char const *full_path(std::string const &p) {
    static char path[MAX_PATH + 2] = { 0 };
    char *ignore;

    GetFullPathName(p.c_str(), sizeof(path), path, &ignore);
    return path;
}

static int shell_delete(std::string const &name) {
    SHFILEOPSTRUCT op = { 0 };

    op.wFunc = FO_DELETE;
    op.pFrom = full_path(name);
    op.fFlags = FOF_ALLOWUNDO | FOF_SILENT | FOF_WANTNUKEWARNING | FOF_NOCONFIRMATION;
    return !SHFileOperation(&op);
}

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "Usage: delete <filename> [filename ...]");
        return 1;
    }

    for (int i = 1; i < argc; i++)
        shell_delete(argv[i]);
}
One obvious advantage of this is that you can pass the FOF_ALLOWUNDO flag (as I have in the code above), which moves the files to the recycle bin instead of removing them permanently. Of course, you can omit that flag if you want the files nuked.
Depending on what you're doing, there are a few other flags that might be handy, such as FOF_FILESONLY, to delete only files, not directories that might match the wildcard you specify, and FOF_NORECURSION to have it not recurse into subdirectories at all.
Microsoft considers SHFileOperation obsolescent, and has (in Windows Vista, if memory serves) "replaced" it with IFileOperation. IFileOperation is a COM interface, though, so unless you're using COM elsewhere in your code, chances are pretty good that using it will add a fair amount of extra work for (at least in this case) little or no real advantage. If you're already using COM, however, this might be worth considering.
I'm using code like the following to check whether a file has been created before continuing. The thing is, the file shows up in the file browser well before it is detected by stat... Is there a problem with doing this?
//... do something
struct stat buf;
while(stat("myfile.txt", &buf))
sleep(1);
//... do something else
Alternatively, is there a better way to check whether a file exists?
Using inotify, you can arrange for the kernel to notify you when a change to the file system (such as a file creation) takes place. This may well be what your file browser is using to know about the file so quickly.
The "stat" system call is collecting different information about the file, such as, for example, a number of hard links pointing to it or its "inode" number. You might want to look at the "access" system call which you can use to perform existence check only by specifying "F_OK" flag in "mode".
There is, however, a little problem with your code: it puts the process to sleep for a second every time it checks for a file which doesn't exist yet. To avoid that, you can use the inotify API, as suggested by Jerry Coffin, in order to get notified by the kernel when the file you are waiting for appears. Keep in mind that inotify does not notify you if the file is already there, so in fact you need to use both access and inotify to avoid a race condition in case the file was created just before you started watching for it.
There is no better or faster way to check whether a file exists. If your file browser still shows the file slightly faster than this program detects it, then Greg Hewgill's idea about renaming is probably what is happening.
Here is a C++ code example that sets up an inotify watch, checks whether the file already exists, and waits for it otherwise:
#include <cstdio>
#include <cstring>
#include <string>
#include <unistd.h>
#include <sys/inotify.h>

int
main ()
{
    const std::string directory = "/tmp";
    const std::string filename = "test.txt";
    const std::string fullpath = directory + "/" + filename;

    int fd = inotify_init ();
    int watch = inotify_add_watch (fd, directory.c_str (),
                                   IN_MODIFY | IN_CREATE | IN_MOVED_TO);

    if (access (fullpath.c_str (), F_OK) == 0)
    {
        printf ("File %s exists.\n", fullpath.c_str ());
        return 0;
    }

    char buf [1024 * (sizeof (inotify_event) + 16)];
    ssize_t length;
    bool isCreated = false;

    while (!isCreated)
    {
        length = read (fd, buf, sizeof (buf));
        if (length < 0)
            break;

        inotify_event *event;
        for (size_t i = 0; i < static_cast<size_t> (length);
             i += sizeof (inotify_event) + event->len)
        {
            event = reinterpret_cast<inotify_event *> (&buf[i]);
            if (event->len > 0 && filename == event->name)
            {
                printf ("The file %s was created.\n", event->name);
                isCreated = true;
                break;
            }
        }
    }

    inotify_rm_watch (fd, watch);
    close (fd);
}
Your code will check if the file is there every second. You can use inotify to get an event instead.