QSettings - Sync issue between two processes - C++

I am using QSettings in non-GUI products to store their settings in XML files. This is written as a library which gets used in C and C++ programs. There will be one XML file per product. Each product might have more than one sub-product, and they are written into the XML grouped by sub-product as follows -
File: "product1.xml"
<product1>
  <subproduct1>
    <settings1>..</settings1>
    ....
    <settingsn>..</settingsn>
  </subproduct1>
  ...
  <subproductn>
    <settings1>..</settings1>
    ....
    <settingsn>..</settingsn>
  </subproductn>
</product1>
File: productn.xml
<productn>
  <subproduct1>
    <settings1>..</settings1>
    ....
    <settingsn>..</settingsn>
  </subproduct1>
  ...
  <subproductn>
    <settings1>..</settings1>
    ....
    <settingsn>..</settingsn>
  </subproductn>
</productn>
The code in one process does the following -
QSettings *settings = new QSettings("product1.xml", XmlFormat);
settings->setValue("settings1", <value>);
sleep(20);
settings->setValue("settings2", <value2>);
settings->sync();
When the first process goes to sleep, I start another process which does the following -
QSettings *settings = new QSettings("product1.xml", XmlFormat);
settings->remove("settings1");
settings->setValue("settings3", <value3>);
settings->sync();
I would expect settings1 to be removed from product1.xml, but it still persists in the file at the end of the above two processes. I am not using QCoreApplication(..) in my settings library. Please point out any issues in the above design.

This is kind of an odd thing to be doing, but note that the sync() call is what actually writes the file to disk. If you want your second process to actually see the changes you've made, you need to call sync() before the second process accesses the file in order to guarantee that it sees your modifications. So I would try putting a settings->sync() call right before your sleep(20).
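For example, a minimal sketch of the writer process with the extra sync() call (assuming XmlFormat is the custom format registered elsewhere with QSettings::registerFormat(), and using illustrative keys and values):
#include <QSettings>
#include <unistd.h>

extern const QSettings::Format XmlFormat;   // registered in the settings library

void writerProcess()
{
    QSettings settings("product1.xml", XmlFormat);
    settings.setValue("settings1", 1);
    settings.sync();   // flush settings1 to disk before the other process runs
    sleep(20);         // the second process removes settings1 during this window
    settings.sync();   // also reloads changes made by other processes
    settings.setValue("settings2", 2);
    settings.sync();
}
Note that sync() not only writes unsaved changes but also reloads settings that another process has changed in the meantime, so the call after the sleep lets the first process pick up the removal of settings1 before it writes settings2.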

Maybe you have to do delete settings; after the sync() to make sure it is not open, then do the writing in the other process?

Does this compile? What implementation of XmlFormat are you using, and which OS? There must be some special code in your project for storing to / reading from XML - something in that code must work differently from what you expect.

Related

Does linux "rename" function call block until copy(when source and target in different disks) is completed

If a C/C++ app calls the rename (https://linux.die.net/man/3/rename) function where 'newpath' is on a different disk volume/partition, and assuming the copy from the current path to the new path takes time:
Does 'rename' block until the copy from the current path to the new one is completed, or does it return immediately (or quickly) while the copying happens asynchronously?
I'd imagine it would return immediately with an error code:
Errors
The rename() function shall fail if:
[...]
EXDEV
The links named by new and old are on
different file systems and the implementation
does not support links between file systems.
That said, I don't have a Linux box handy to test with, so I could be wrong about that.
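If you do need to handle that case yourself, the usual pattern is to try rename() first and fall back to an explicit copy plus unlink when it fails with EXDEV (roughly what mv does). A rough sketch, with a deliberately simple, non-atomic copy:
#include <cerrno>
#include <cstdio>
#include <fstream>

// Hypothetical helper: move a file, copying it when the destination is on
// a different filesystem (rename() fails with EXDEV in that case).
bool moveFile(const char *oldPath, const char *newPath)
{
    if (std::rename(oldPath, newPath) == 0)
        return true;                  // same filesystem: atomic and immediate
    if (errno != EXDEV)
        return false;                 // some other error

    std::ifstream in(oldPath, std::ios::binary);
    std::ofstream out(newPath, std::ios::binary | std::ios::trunc);
    out << in.rdbuf();                // the slow part happens here, in our code
    if (!in || !out)
        return false;
    out.close();
    return std::remove(oldPath) == 0; // drop the original after a successful copy
}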

when to call _findclose?

Note: Since the problem is solved, I've added comments to my original post.
According to "http://msdn.microsoft.com/en-us/library/6tkkkc1y%28v=vs.90%29.aspx", it states:
*You must call _findclose after you are finished using either the _findfirst or _findnext function (or any variants). This frees up resources used by these functions in your application.*
--comment: it is vague, but what Microsoft is trying to say is: some users just need to find the first file (they don't need to call _findnext), then call _findclose; other users call _findnext (they must have already called _findfirst) and, after finishing with it, call _findclose. _findnext can actually be called multiple times, while _findclose is only responsible for a handle, which is created by _findfirst.
And the following is a piece of code that is widely used to list the files in a directory. --comment: it is correct.
For example, if there are 2 files and 1 directory in the directory, then:
.
..
ddd
file1.txt
file2.txt
_findfirst is called once. The handle's corresponding fileinfo is the system directory "." (is that right?)
--comment: no. The handle represents a group of files+directories; the fileinfo acts as the "cursor". (fileinfo always contains the "name" field; I bet the implementation of _findnext uses the "name" to find the next entry in the group of files+directories specified by the handle.)
_findnext is called 4 times. (The first argument is always the handle corresponding to ".", is that right?)
--comment: yes + no. The first argument is always the same handle; the handle does NOT correspond to any single fileinfo, but to a group of them.
My questions are:
Does "_findclose" be called ONCE is enough?
*--comment:* yes.
if _findnext will not change the handle value, how can it "remember" where to start to find the next file(or directory)? (sorry, maybe I was thinking in the "linked list" pattern.)
*--comment:* I bet is using fileinfo's name field. Just as in Windows Explorer, we sort the contents in a folder, given a file name, we can know their position in the list, so we can "find next".
Are there any harm to call _findclose more times than needed? (like crash or something)
*--comment:* a stupid question. Sorry!
Or is the following code wrong at all? If yes, what's the correct way to implement it?
--It is correct code.
// List the files in the directory
intptr_t file;
_finddata_t filedata;
file = _findfirst(desc.c_str(), &filedata);
if (file != -1)
{
    do
    {
        cout << filedata.name << endl;
        // Or put the file name in a vector here
    } while (_findnext(file, &filedata) == 0);
    _findclose(file);   // close the handle returned by _findfirst
}
else
{
    cout << "No described files found" << endl;
}
I asked this because I've hit an issue where an application keeps a directory locked so it cannot be deleted while the process is alive. However, I can guarantee that "_findclose" is called on every return value from "_findfirst". If I add "_findclose" after calling "_findnext", the issue is fixed perfectly. Can you help me explain it?
--comment: pardon. Don't use "guarantee" too easily. That's where the bug is.
Note: I don't have a problem understanding what a handle is - like opening a file, read/write/read/write..., then closing the file handle. I just find the documentation describing these three APIs vague.
--comment: go improve your English.
Thank you in advance.
Your calls to _findclose should match with your calls to _findfirst -- i.e., each time you call _findfirst, you should have a matching call to _findclose.
In the code above, since you have only one call to _findfirst, it's correct to have only one call to _findclose.
If you were doing a recursive search of subdirectories, then you'd end up with multiple calls to _findfirst as you descend the hierarchy, and matching calls to _findclose as you finish and ascend back up the hierarchy.
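To illustrate, a sketch of such a recursive search (Windows CRT, <io.h>; the wildcard handling is kept deliberately simple):
#include <io.h>
#include <iostream>
#include <string>

void listRecursive(const std::string &dir)
{
    _finddata_t entry;
    intptr_t handle = _findfirst((dir + "\\*").c_str(), &entry);
    if (handle == -1)
        return;                               // nothing matched: no handle to close
    do
    {
        std::string name = entry.name;
        if (name == "." || name == "..")
            continue;
        if (entry.attrib & _A_SUBDIR)
            listRecursive(dir + "\\" + name); // descends: a new _findfirst/_findclose pair
        else
            std::cout << dir << "\\" << name << "\n";
    } while (_findnext(handle, &entry) == 0);
    _findclose(handle);                       // exactly one close per successful _findfirst
}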
You only need to call _findclose once, when you are finished.
On Windows, a directory may be locked if it is the current directory of your process. Try calling _chdir.
If that doesn't work... are you opening any of the files in the directory you're searching? An open file may lock the directory as well.
It may be useful to let Process Explorer take a look at your app. It can tell you for sure which handle you have left open.

how to JUDGE another program's result via cpp?

I've got a series of cpp source files and I want to write another program to JUDGE whether they run correctly (give input and compare their output with the standard output). So how to:
call/spawn another program, and give a file to be its standard input
limit the time and memory of the child process (maybe setrlimit thing? is there any examples?)
do not let the process read/write any file
use a file to be its standard output
compare the output with the standard output.
I think the 2nd and 3rd are the core parts of this problem. Is there any way to do this?
ps. system is Linux
To do this right, you probably want to spawn the child program with fork, not system.
This allows you to do a few things. First of all, you can set up some pipes to the parent process so the parent can supply the input to the child, and capture the output from the child to compare to the expected result.
Second, it will let you call seteuid (or one of its close relatives like setreuid) to set the child process to run under a (very) limited user account, to prevent it from writing to files. When fork returns in the parent, you'll want to call setrlimit to limit the child's CPU usage.
Just to be clear: rather than directing the child's output to a file, then comparing that to the expected output, I'd capture the child's output directly via a pipe to the parent. From there the parent can write the data to a file if desired, but can also compare the output directly to what's expected, without going through a file.
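A rough sketch of that approach - the uid and the limits are placeholders, error handling is omitted, and the parent must run as root for the setuid call to succeed:
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string>

const uid_t UNPRIVILEGED_UID = 65534;   // assumption: e.g. the "nobody" account

std::string runCandidate(const char *path, const std::string &input)
{
    int inPipe[2], outPipe[2];
    pipe(inPipe);                           // parent -> child stdin
    pipe(outPipe);                          // child stdout -> parent

    pid_t pid = fork();
    if (pid == 0)                           // child
    {
        dup2(inPipe[0], STDIN_FILENO);
        dup2(outPipe[1], STDOUT_FILENO);
        close(inPipe[1]); close(outPipe[0]);

        rlimit cpu{2, 2};                   // 2 seconds of CPU time
        setrlimit(RLIMIT_CPU, &cpu);
        rlimit mem{64u << 20, 64u << 20};   // 64 MB of address space
        setrlimit(RLIMIT_AS, &mem);

        setuid(UNPRIVILEGED_UID);           // drop privileges so it cannot touch our files
        execl(path, path, (char *)nullptr);
        _exit(127);                         // exec failed
    }

    // parent: feed the input, then collect the output
    close(inPipe[0]); close(outPipe[1]);
    write(inPipe[1], input.data(), input.size());
    close(inPipe[1]);                       // child sees EOF on stdin

    std::string output;
    char buf[4096];
    ssize_t n;
    while ((n = read(outPipe[0], buf, sizeof buf)) > 0)
        output.append(buf, n);
    close(outPipe[0]);
    waitpid(pid, nullptr, 0);
    return output;
}
For inputs larger than the pipe buffer you would want to interleave the writes and reads (or feed the input from a second thread) to avoid a deadlock, but the structure stays the same.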
std::string command = "/bin/local/app < my_input.txt > my_output_file.txt 2> my_error_file.txt";
int rv = std::system( command.c_str() );
1) The std::system function (from the C standard library, declared in <cstdlib>) allows you to execute a program (basically as if invoked from a shell). Note that this approach is inherently insecure, so only use it in a trusted environment.
2) You will need to use threads to be able to achieve this. There are a number of thread libraries available for C++, but I cannot give you a recommendation.
[After edit in OP's post]
3) This one is harder. You either have to write a wrapper that monitors read/write access to files or do some Linux/Unix privilege magic to prevent it from accessing files.
4) You can redirect the output of a program (that it thinks goes to the standard output) by adding > outFile.txt after the way you would normally invoke the program (see 1)) -- e.g. otherapp > out.txt
5) You could run diff on the saved file (from 4)) against the "golden standard"/expected output captured in another file. Or use some other method that better fits your needs (for example if you don't care about certain formatting as long as the "content" is there). -- This part really depends on your needs; diff does a basic comparison job well.
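For instance, a minimal driver built on that redirect-and-diff idea (expected_output.txt is a placeholder for your reference output; the other file names come from the snippet above):
#include <cstdlib>

int main()
{
    // run the candidate with its input and output redirected
    int run = std::system("/bin/local/app < my_input.txt > my_output_file.txt 2> my_error_file.txt");
    if (run != 0)
        return 1;                           // failed to run or exited with an error

    // diff exits with 0 when the two files are identical
    int cmp = std::system("diff -q my_output_file.txt expected_output.txt > /dev/null");
    return (cmp == 0) ? 0 : 2;
}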

Read from a file when a new line has been written to it by another process

What is the fastest method in C++ to read a new line from a file which is being written by another process? Or how can my program be notified that there is a new line in the file so it can read it? (on Linux)
The fastest method is to use pipes or events (for Windows apps).
If you still want to use files, first make sure the file has really been modified (use seek and compare the position with the previous value). Then go to the previous seek position and read from there.
It would also be better to use a mutex if you read data from the file.
Assuming the OS supports concurrent file access, all you should need to do is seek to EOF, wait for the stat to change then try to read from the file. You might want to add in a sleep to slow down the loop.
The 'tail' command on POSIX (with the -f option) implements this - source code is available.
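A minimal polling sketch of that idea - seek to the end once, then keep trying to read new lines and sleep between attempts:
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

void followFile(const std::string &path)
{
    std::ifstream in(path);
    in.seekg(0, std::ios::end);                // start at the current end of file
    std::string line;
    for (;;)
    {
        while (std::getline(in, line))
            std::cout << "new line: " << line << "\n";
        in.clear();                            // clear the EOF flag so we can retry
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}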
Off the top of my head, have you tried something like this:
Count the lines in the file, and store the count.
Get the size of the file (google it, I don't want to ruin the fun :D).
Then try to read from the last line you stored whenever the size of the file changes... and again and again.
Have fun :)
Use inotify to get notifications about file changes and then reread from your last position if the file is now larger than before.
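For example, a bare-bones inotify sketch (Linux only; error handling omitted):
#include <sys/inotify.h>
#include <unistd.h>
#include <fstream>
#include <iostream>
#include <string>

void followWithInotify(const std::string &path)
{
    int fd = inotify_init();
    inotify_add_watch(fd, path.c_str(), IN_MODIFY);

    std::ifstream in(path);
    in.seekg(0, std::ios::end);                // remember the current end of the file

    char events[4096];
    std::string line;
    for (;;)
    {
        read(fd, events, sizeof events);       // blocks until the file is modified
        in.clear();                            // reset EOF from the previous pass
        while (std::getline(in, line))
            std::cout << "new line: " << line << "\n";
    }
}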

Editing an /etc/fstab entry in C++

I'm trying to edit the /etc/fstab file on a CentOS installation using C++. The idea being that based on another config file I will add entries that don't exist in the fstab, or edit entries in the fstab file where the mount point is the same. This lets us set the system up properly on initial bootup.
I've found setmntent() and getmntent() for iterating over the existing entries, so I can easily check whether an entry in fstab also exists in my config file. And I can then use addmntent() to add any entry that doesn't already exist - the documentation says nothing about this being able to edit an entry, only add a new entry to the end of the file. There seems to be no way to edit an existing entry or delete an entry. It seems odd that this feature doesn't exist - only the CR and not the UD of CRUD.
I'd rather not have to write my own parser if I can at all help it.
My other alternative is to:
open the file using setmntent()
read the whole of fstab into memory using getmntent() and perform any additions and/or edits
close the file using endmntent()
open /etc/fstab for writing
close /etc/fstab (thus emptying the file)
open the fstab using setmntent()
loop through the entries I read in previously and write them out using addmntent()
Which although probably fine, just seems a bit messy.
When modifying system configuration files such as /etc/fstab, keep in mind that they are critical state and that, should your "edit" be interrupted by a power loss, the result might be a system that fails to boot.
The way to deal with this is:
create an empty output:
FILE* out = setmntent("/etc/fstab.new", "w");
open the original for input:
FILE* in = setmntent("/etc/fstab", "r");
copy the contents:
while (m = getmntent(in)) { addmntent(out, m); }
make sure the output has it all:
fflush(out); endmntent(out); endmntent(in);
atomically replace /etc/fstab:
rename("/etc/fstab.new", "/etc/fstab");
It's left as an exercise to the reader to change the body of the while loop to make a modification to an existing element, to substitute a specifically crafted mntent or whatever. If you have specific questions on that please ask.
UN*X semantics for rename() guarantee that even in the case of power loss, you'll have either the original version or your new updated one.
There's a reason why there is no modifymntent() - it would encourage bad programming / bad ways of changing system-critical files. You say at the end of your post "... probably fine ..." - it's not. The only safe way to change a system configuration file is to write a complete modified copy, sync that to safe storage, and then use rename to replace the old one.
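Putting those steps together, a sketch of the whole copy-modify-rename cycle; the specific edit here (changing the mount options of a hypothetical "/data" entry) is only an illustration:
#include <mntent.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>

bool rewriteFstab()
{
    FILE *in  = setmntent("/etc/fstab", "r");
    FILE *out = setmntent("/etc/fstab.new", "w");
    if (!in || !out)
        return false;

    struct mntent *m;
    while ((m = getmntent(in)) != nullptr)
    {
        struct mntent entry = *m;               // getmntent reuses its internal buffer
        if (std::strcmp(entry.mnt_dir, "/data") == 0)
            entry.mnt_opts = const_cast<char *>("defaults,noatime");  // example edit
        addmntent(out, &entry);
    }

    fflush(out);
    fsync(fileno(out));                         // make sure the new copy is on disk
    endmntent(out);
    endmntent(in);

    return std::rename("/etc/fstab.new", "/etc/fstab") == 0;
}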