Exception from Boost.Log when date changes to next day - C++

I use Boost.Log with this config:
[Sinks.2]
Filter="%Severity% >= 2"
Destination=TextFile
AutoFlush=true
Format="[%TimeStamp%] [%ThreadID%] <%Severity%> %Message%"
Asynchronous=false
Target="logs"
FileName="logs/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"
RotationTimePoint="00:00:00"
RotationSize=104857600
MinFreeSpace=4294967296
MaxSize=4294967296
ScanForFiles=All
When the date changes to the next day, my program crashes with this exception:
terminate called after throwing an instance of
'boost::filesystem::filesystem_error'
what(): boost::filesystem::last_write_time: No such file or directory: "/root/work/hy-trade/bin/debug/logs/quo.20181027T173106.Sat.00000.log.detail"
I checked my disk space and found that the free space was less than MinFreeSpace in the config, and that the file quo.20181027T173106.Sat.00000.log.detail does not exist.
How can I avoid this exception?
The Boost version is 1.67.
Thank you.

It looks like someone had already deleted the log file before it was rotated. It may have been an external process, or Boost.Log itself.
With Boost.Log, this can happen if you have multiple file sinks that write log files into the same directory, which is also used as the target directory for the rotated files (i.e. the FileName parameter includes the path specified in the Target parameter, and there are multiple sinks that use that path). The problem is that, with ScanForFiles=All, the library scans the target directory for any files but does not update the file counter used for creating new files. This means that if the file "quo.20181027T173106.Sat.00000.log.detail" was present in that directory when your process started, it would be considered an old file, even though your process would still be writing new logs to it. Then, when a file rotation happens and the storage limits are exceeded (e.g. if MinFreeSpace is not satisfied), that file may be deleted. The rotation has to happen on another sink that also stores files in the same "logs" directory.
To solve the problem you can do one of the following (both options are sketched below):
Use ScanForFiles=Matching in your settings so that the file counter is updated after scanning. This will make sure that new log files have unique names and don't get deleted prematurely.
Write log files to a different directory than your target storage, i.e. specify FileName so that it doesn't point to the same directory as Target.
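For example (a sketch adapted from the config above; the "active" subdirectory name is an arbitrary choice, not something Boost.Log mandates), the first option is a one-line change:
ScanForFiles=Matching
and the second keeps the active log files out of the Target directory:
FileName="active/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"
Target="logs"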
Also, you may want to add exception handling to avoid crashing in case of errors (which may still happen for whatever reason on filesystem operations). See here and here for more info (also, follow the links in those sections).
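As a minimal sketch of that, exceptions thrown while a sink processes a log record can be suppressed through the logging core (make_exception_suppressor is Boost.Log's documented utility for this; note that it silently drops the failed record):

#include <boost/log/core.hpp>
#include <boost/log/utility/exception_handler.hpp>

void init_logging_exception_handling()
{
    namespace logging = boost::log;
    // Swallow any exception raised during log record processing, so a
    // filesystem error during rotation cannot terminate the process.
    logging::core::get()->set_exception_handler(
        logging::make_exception_suppressor());
}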

Related

Does the Linux "rename" function call block until the copy is completed (when source and target are on different disks)?

If a C/C++ app calls the rename (https://linux.die.net/man/3/rename) function where 'newpath' is on a different disk volume/partition, and assuming that copying from the current path to the new path takes time:
Does the 'rename' call block until the copy from the current path to the new one is completed? Or does it return immediately (or quickly) while the copying happens asynchronously?
I'd imagine it would return immediately with an error code:
Errors
The rename() function shall fail if:
[...]
EXDEV
The links named by new and old are on different file systems and the implementation does not support links between file systems.
That said, I don't have a Linux box handy to test with, so I could be wrong about that.
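A small sketch of that behaviour (the paths are made-up examples; on Linux the call fails fast with EXDEV instead of copying anything):

#include <cstdio>
#include <cerrno>
#include <cstring>
#include <iostream>

int main()
{
    // rename() never copies file contents; across file systems it simply
    // fails, and the caller must fall back to copy + unlink itself.
    if (std::rename("/tmp/source.txt", "/mnt/other-volume/target.txt") != 0)
    {
        if (errno == EXDEV)
            std::cerr << "cross-device rename: need copy + unlink\n";
        else
            std::cerr << "rename failed: " << std::strerror(errno) << '\n';
    }
    return 0;
}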

Linux : Can a directory rename be partially executed due to a power/transient mechanical failure in Server/Disk?

The C/C++ rename function can be used to rename a directory.
Assume the situation below (on Linux):
Directory X has files A, B and C.
X is renamed to Y (using the C/C++ rename function). While the operation is in progress the server's/disk's power goes out. Then it is restarted.
Is there now a possibility of some files being in directory X while others are in Y?
e.g.
X : B
Y : A, C
Rename only changes a name; the "id" of the file stays the same - and on Linux a directory is itself a file which contains the "links" to other files and directories.
Since a file system always relies on the physical block storage of some hardware, the complete block where the name is stored must be rewritten and linked into the file system structure.
If a power failure happens in between, the "directory file" can be corrupted - more than a single renaming operation is involved!
BUT:
Modern file systems have many options to detect and repair such situations. E.g. ext4 keeps a journal in the background: if an operation is interrupted, the journal records that it was started but not completed, and when the partition/fs is mounted the repair takes place automatically. If that is not possible, an fsck can do the job.
The situation where only "some files" have been moved is definitely never possible, because renaming a directory does not create a new directory and move the file names/links into it - it only gives a new name to an existing directory.
As a user: simply use a modern file system and almost all power-down failures can be recovered on restart. You may find your file system in the "old" or the "new" state, but not in between.
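To make the "only a name changes" point concrete, here is a minimal sketch (assuming a directory named X exists in the current working directory; the names are made up) showing that the inode number survives the rename:

#include <sys/stat.h>
#include <cstdio>
#include <iostream>

int main()
{
    struct stat before{}, after{};
    if (stat("X", &before) != 0) return 1;
    if (std::rename("X", "Y") != 0) return 1;  // a single metadata update
    if (stat("Y", &after) != 0) return 1;
    // Same inode: the directory and its contents were never copied;
    // only the name entry pointing at that inode changed.
    std::cout << before.st_ino << " == " << after.st_ino << '\n';
    return 0;
}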
It depends on the implementation, but for all file systems I know renaming is just replacing a label, so renaming a directory doesn't differ from a file rename operation.
And when you rename a file and the power goes off, you never end up with two files X and Y each having half of the file's content.

Immuconf with Clojure not handling three config files

Whenever I add a third config file to my .immuconf.edn I get:
No configuration files were specified, and neither an .immuconf.edn file nor
an IMMUCONF_CFG environment variable was found
This is driving me crazy since I can't really find anything wrong.
Using this loads things OK:
["configs/betfair.edn" "configs/web-server.edn"]
However, this generates an error:
["configs/betfair.edn" "configs/web-server.edn" "~/betfair.edn"]
This is the content of betfair.edn
{:betfair {:usr "..."
           :pwd "..."
           :app-key "..." ;; key used
           :app-key-live "..."
           :app-key-test "..."}}
(where ... is replaced with actual strings)
Why am I getting this error when adding the third file and how can I fix this?
Make sure that the last file specified in your <project dir>/.immuconf.edn (~/betfair.edn) actually exists in your home directory.
Immuconf does some magic to replace ~ in filenames specified in .immuconf.edn with the value of (System/getProperty "user.home"), so you might check whether that system property points to the same directory where your ~/betfair.edn file is located.
I have recreated your setup and it works on my machine, so it is probably a problem with the locations of, or access rights to, your files. Unfortunately, the error handling for the no-arg invocation of (immuconf.config/load) doesn't help in troubleshooting, as it swallows any exceptions and returns nil. That exception would probably tell you what kind of error occurred (some file not found, or some IO error). You might want to file a pull request with a patch to log such errors as warnings instead of ignoring them.

Strange semantic error

I have reinstalled emacs 24.2.50 on a new linux host and started a new dotEmacs config based on magnars' emacs configuration. Since I had used CEDET to some success in my previous workflow, I started configuring it. However, there is some strange behaviour whenever I load a C++ source file.
[This Part Is Solved]
As expected, semantic parses all included files (and during the initial setup parses all files specified by the semantic-add-system-include variables), but it prints an error message that goes like this:
WARNING: semantic-find-file-noselect called for /usr/include/c++/4.7/vector while in set-auto-mode for /usr/include/c++/4.7/vector. You should call the responsible function into 'mode-local-init-hook'.
In the above example the error is printed for the STL vector, but a corresponding error message is printed for every file included by the one I'm visiting, and for any subsequent includes. As a result it takes quite a long time to finish, and unfortunately the process is repeated any time I open a new buffer.
[This Problem Is Solved Too]
Furthermore, it looks like the parsing doesn't really work: when I place the point above a non-C primitive type (i.e. not int, double, float, etc.), instead of printing the type's definition in the modeline, an error message like the following appears:
Idle Service Error semantic-idle-local-symbol-highlight-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, (((0) \"IndexMap\"))"
Idle Service Error semantic-idle-summary-idle-function: "#<buffer DEPFETResolutionAnalysis.cc> - Wrong type argument: stringp, ((\"fXBetween\" 0 nil nil))"
where DEPFETResolutionAnalysis.cc is the file & buffer I'm currently editing, and IndexMap and fXBetween are types defined in files included by the file I'm editing (or by some file included by it).
I have not tested any further features of CEDET/semantic as the problem is pretty annoying. My cedet config can be found here.
EDIT: With the help of Alex Ott I kinda solved the first problem. It was due to my horrible cedet initialisation. See his first answer for the proper way to configure CEDET!
There still remains the problem with the Idle Service Error (which, when enabling global-semantic-idle-local-symbol-highlight-mode, occurs permanently, not only when checking the definition of the type at point).
And there is the new problem of how to disable the site-wide init file(s).
EDIT2: I have executed semantic-debug-idle-function in a buffer where the problem occurs and it produces a ~700kb [sic!] output. It looks like it is performing some operations on a data container which, by the looks of it, contains information on all the symbols defined in the parsed files. As I have parsed a rather large package (~20MB of source files), this table is rather large. Can semantic handle a database that large, or is this impossible and the reason for my problem?
EDIT3: Deleting the contents of ~/.semanticdb and reparsing all includes did the trick. I still need to disable the site-wide init files, but as this is not related to CEDET I will close this question (the question related to the site-wide init files can be found here).
You need to change your init file so that it performs the loading of CEDET only once, not in a hook that is called for each .h/.hpp/.c/.cpp file. You can use this config as the base, and read more in the following article.
The problem you have is caused by Semantic trying to analyze header files: when it tries to open them, its initialization routines are called again, and again...
The first problem was solved by correctly configuring CEDET, which is described on Alex Ott's homepage. His answer solves this first problem. The config file specified in his answer is a great starting point for a nice config; I have used that very config to set up CEDET for my needs.
The second problem vanished once I updated CEDET from 1.1 to the bazaar (repository) version, which is explained here and in Alex' article. Additionally, one must delete the contents of the directory ~/.semanticdb (which contains the semantic database and was, I guess, corrupted).
I'd like to thank Alex Ott for his help and sticking with me throughout my journey to the solution :)

Editing an /etc/fstab entry in C++

I'm trying to edit the /etc/fstab file on a CentOS installation using C++. The idea being that based on another config file I will add entries that don't exist in the fstab, or edit entries in the fstab file where the mount point is the same. This lets us set the system up properly on initial bootup.
I've found setmntent() and getmntent() for iterating over the existing entries, so I can easily check whether an entry in fstab also exists in my config file. And I can then use addmntent() to add any entry that doesn't already exist - the documentation says nothing about this being able to edit an entry, only to add a new entry to the end of the file. There seems to be no way to edit an existing entry or delete one. It seems odd that this feature doesn't exist - only the CR and not the UD of CRUD.
I'd rather not have to write my own parser if I can at all help it.
My other alternative is to:
open the file using setmntent()
read the whole of fstab into memory using getmntent() and perform any additions and/or edits
close the file using endmntent()
open /etc/fstab for writing
close /etc/fstab (thus emptying the file)
open the fstab using setmntent()
loop through the entries I read in previously and write them out using addmntent()
Which although probably fine, just seems a bit messy.
When modifying system configuration files such as /etc/fstab, keep in mind that they are critical state: should your "edit" be interrupted by a power loss, the system might fail to boot.
The way to deal with this is:
create an empty output:
FILE* out = setmntent("/etc/fstab.new", "w");
open the original for input:
FILE* in = setmntent("/etc/fstab", "r");
copy the contents:
struct mntent* m; while ((m = getmntent(in)) != NULL) { addmntent(out, m); }
make sure the output has it all:
fflush(out); endmntent(out); endmntent(in);
atomically replace /etc/fstab:
rename("/etc/fstab.new", "/etc/fstab");
It's left as an exercise to the reader to change the body of the while loop to make a modification to an existing element, to substitute a specifically crafted mntent or whatever. If you have specific questions on that please ask.
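For instance, here is a self-contained sketch of that whole sequence which also modifies one entry on the way through (the /data mount point and the noatime option are made-up examples, not anything from the question):

#include <mntent.h>
#include <unistd.h>   // fsync, fileno
#include <cstdio>
#include <cstring>

int main()
{
    FILE* in  = setmntent("/etc/fstab", "r");
    FILE* out = setmntent("/etc/fstab.new", "w");
    if (!in || !out)
        return 1;

    // Copy every entry, rewriting the mount options of one of them.
    while (struct mntent* m = getmntent(in))
    {
        struct mntent e = *m;   // glibc reuses getmntent's internal buffer
        if (std::strcmp(e.mnt_dir, "/data") == 0)
            e.mnt_opts = const_cast<char*>("defaults,noatime");
        addmntent(out, &e);
    }

    fflush(out);
    fsync(fileno(out));   // sync the new file to stable storage
    endmntent(out);
    endmntent(in);

    // Atomic replacement: a crash leaves either the old or the new fstab.
    return std::rename("/etc/fstab.new", "/etc/fstab") == 0 ? 0 : 1;
}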
UN*X semantics for rename() guarantee that even in the case of power loss, you'll have either the original version or your new updated one.
There's a reason why there is no modifymntent() - it would encourage bad programming and bad ways of changing system-critical files. You say at the end of your post "... probably fine ..." - it is not. The only safe way to change a system configuration file is to write a complete modified copy, sync that to safe storage, and then use rename to replace the old one.