boost::log with multi-process - c++

I previously asked a question about how to use boost::log from multiple processes:
How to use Boost::log not to rewrite the log file?
That answer solves most of the problem. But in some rare cases, when one process is writing a log record while another process starts writing one, the output gets mangled:
13548:Tue Nov 30 17:33:41 2021
12592:Tue Nov 30 17:33:41 2021
13548:Tue Nov 30 17:33:41 2021
12592:Tue Nov 30 17:33:4572:Tue Nov 30 17:33:41 2021
17196:Tue Nov 30 17:33:41 2021
8572:Tue Nov 30 17:33:41 2021
17196:Tue Nov 30 17:33:41 2021
8572:Tue Nov 30 17:33:41 2021
On the fourth line, the tail of 17:33:4(1 2021) was overwritten with (8)572:Tue Nov ..., which was written by the other process.
How do I prevent this?

Boost.Log does not synchronize multiple processes writing to the same file. You must have a single process that writes logs to the file, while the other processes pass their log records to that writer. There are multiple ways to achieve this. For example, you can send your logs to a syslog service, or write your own log writer process. In the latter case, you can use an inter-process queue to pass log messages between processes.
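For illustration, a rough sketch of the queue-based variant using boost::interprocess::message_queue; the queue name, size limits, file name, and helper functions below are arbitrary placeholders, not anything Boost prescribes. Worker processes push formatted log lines into a shared queue, and a single writer process drains it and appends to the file:

// Sketch only: every logging process pushes formatted lines into a shared
// queue; one dedicated writer process pops them and appends to the file.
#include <boost/interprocess/ipc/message_queue.hpp>
#include <algorithm>
#include <fstream>
#include <string>

namespace ipc = boost::interprocess;

static const char* kQueueName = "my_app_log_queue";   // placeholder name
static const std::size_t kMaxMessages = 1024;
static const std::size_t kMaxMessageSize = 512;

// Called by every process that wants to log (e.g. from a custom sink backend).
void send_log_line(const std::string& line)
{
    ipc::message_queue mq(ipc::open_or_create, kQueueName, kMaxMessages, kMaxMessageSize);
    mq.send(line.data(), std::min(line.size(), kMaxMessageSize), 0 /* priority */);
}

// Runs in the single writer process.
void writer_loop()
{
    ipc::message_queue mq(ipc::open_or_create, kQueueName, kMaxMessages, kMaxMessageSize);
    std::ofstream file("app.log", std::ios::app);
    char buffer[kMaxMessageSize];
    for (;;)
    {
        ipc::message_queue::size_type received = 0;
        unsigned int priority = 0;
        mq.receive(buffer, sizeof(buffer), received, priority);  // blocks until a message arrives
        file.write(buffer, static_cast<std::streamsize>(received));
        file.put('\n');
        file.flush();
    }
}

If your Boost version is recent enough, also check whether its Boost.Log already ships an inter-process queue sink (sinks::text_ipc_message_queue_backend), which covers the producer side of this pattern for you.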
Alternatively, you can write your own sink backend that performs synchronization when writing to the shared file. However, I suspect the performance will be lower than with a single log writer process.
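A rough sketch of that second approach, using boost::interprocess::file_lock for cross-process exclusion; the class name and file names are placeholders, and note that file_lock does not create the lock file, it must already exist:

// Sketch of a custom sink backend that takes a cross-process file lock
// around every write, so records from different processes cannot interleave.
#include <boost/log/core.hpp>
#include <boost/log/core/record_view.hpp>
#include <boost/log/sinks/basic_sink_backend.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/interprocess/sync/file_lock.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/make_shared.hpp>
#include <fstream>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;
namespace ipc = boost::interprocess;

class locked_file_backend
    : public sinks::basic_formatted_sink_backend<char, sinks::synchronized_feeding>
{
public:
    locked_file_backend(const char* log_file, const char* lock_file)
        : m_file(log_file, std::ios::app)   // append mode, shared with other processes
        , m_lock(lock_file)                 // the lock file must already exist
    {
    }

    // Called by the sink frontend with the fully formatted record.
    void consume(logging::record_view const&, string_type const& formatted)
    {
        ipc::scoped_lock<ipc::file_lock> guard(m_lock);  // exclusive across processes
        m_file << formatted << std::endl;                // std::endl flushes while still locked
    }

private:
    std::ofstream m_file;
    ipc::file_lock m_lock;
};

void init_logging()
{
    typedef sinks::synchronous_sink<locked_file_backend> sink_t;
    boost::shared_ptr<sink_t> sink = boost::make_shared<sink_t>(
        boost::make_shared<locked_file_backend>("app.log", "app.log.lock"));
    logging::core::get()->add_sink(sink);
}

Every record now pays for an inter-process lock acquisition and a flush, which is why a dedicated writer process will usually be faster.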

Related

Why do processes hang randomly while my Mac is locked

This is on an iMac (late 2015) running macOS Catalina. All energy saving options are off.
I have noticed this with several long-running processes and so I made a simple example.
I ran
while true; do date; sleep 60; done
around 11 am and let the screen lock. After a few hours I came back and checked on it. Everything was fine for about an hour and a half, i.e. I got one line per minute, and then:
Mon Feb 6 12:32:39 CET 2023
Mon Feb 6 12:33:39 CET 2023
Mon Feb 6 12:41:33 CET 2023
Mon Feb 6 12:43:08 CET 2023
Mon Feb 6 12:52:57 CET 2023
Mon Feb 6 13:28:00 CET 2023
after which it goes back to normal for about half an hour and then starts being erratic again.
Experience with other long-running processes shows that the times this happens are fairly random.
Ideas on how to avoid this? I want my processes to run normally. Thanks.
As @Rob mentioned in his comment yesterday, turning off Power Nap in the Energy Saver settings seems to solve this problem. (I have had a chance to test it over several periods now, including overnight.)
Thanks!

Crash due to AfxGetThread returning NULL in mfc140!CFrameWnd::OnActivateTopLevel

This app statically links against multiple MFC DLLs, which are loaded at app startup. In some cases, when CPU usage is high, it crashes while activating the frame window; the crash is due to AfxGetThread returning NULL in mfc140!CFrameWnd::OnActivateTopLevel at app startup.
It has only crashed in client environments and while debugging through the Time Travel Debugger on my local system. The client crashes have mostly happened while the user was starting a new session in the morning, so there may have been some load on the machine.
The app's main window is part of mymain.dll. I added some logging to the mymain.dll app object's constructor to record the thread ID.
When I run the app normally on my system, the app object ctor is called on the same thread that later calls mfc140!CFrameWnd::OnActivateTopLevel, and there is no crash.
But when I run the app through the Time Travel Debugger, the app object ctor is called on a thread other than the main thread, and the app later crashes on the main thread in CFrameWnd::OnActivateTopLevel:
00 MyMain!CMyMainApp::CMyMainApp
01 MyMain!`dynamic initializer for 'theApp''
02 ucrtbase!initterm
03 MyMain!dllmain_crt_process_attach
04 MyMain!dllmain_dispatch
05 mscoreei!CorDllMain
06 mscoree!_CorDllMain_Exported
07 ntdll!LdrpCallInitRoutine
08 ntdll!LdrpInitializeNode
09 ntdll!LdrpInitializeGraphRecurse
10 ntdll!LdrpInitializeGraphRecurse
11 ntdll!LdrpInitializeProcess
12 ntdll!LdrpInitialize
13 ntdll!LdrInitializeThunk
Can someone help me understand why the loader initializes this MFC DLL on a background thread under certain conditions?
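For reference, the thread-ID logging mentioned above was essentially of this form (a rough sketch; CMyMainApp comes from the app, and the OutputDebugString formatting is just one way to emit it):

// In mymain.dll, after the usual MFC includes: record which thread runs the
// app object's ctor, to compare with the thread that later executes
// CFrameWnd::OnActivateTopLevel.
CMyMainApp::CMyMainApp()
{
    TCHAR msg[64];
    _stprintf_s(msg, _T("CMyMainApp ctor on thread %lu\n"), GetCurrentThreadId());
    ::OutputDebugString(msg);
}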

FFMPEG and AWS: What's the most efficient way to handle this?

I'm new to AWS, and I originally built the FFmpeg functions into my Node.js API. But I realized this is the wrong way to do it in a real-world app, and that you should use separate Lambda functions in AWS that handle the video editing separately from the main server.
I'm mainly a front-end developer but I'm open to learning new things.
I basically have the following process in my app:
The user uploads a video.
I need to take that video and add a watermark to it.
I then need a copy of the watermarked video in a smaller resolution.
I then need a 6-second GIF of the smaller-resolution video.
Finally, I need to upload the 3 edited files (two .mp4 files and one .gif) to S3, and remove the original, non-watermarked video.
Here are my questions to be clear:
Should I upload the original file to S3 or to the server? And why?
Is the process above doable in a single Lambda function? Or do I need more Lambda functions?
How would you handle this problem, personally?
I originally built it by chaining one function to the next with promises, but AWS seems like a different world of doing things, and the way I originally built it would not work.
Thanks a lot.
Update
Here are some tests I did with a couple videos:
                                     Test 1        Test 2        Test 3        Test 4        Test 5
Original video resolution            1080p         1080p         1080p         1080p         480p
Original video duration              23 minutes    15 minutes    11 minutes    3.5 minutes   5 minutes
Step 1: watermark original video     30 minutes    18 minutes    14 minutes    4 minutes     2 minutes
Step 2: watermark lower resolution   5 minutes     3 minutes     3 minutes     1 minute      skipped (already low res)
Step 3: create 6-second GIF          ~15 seconds   ~10 seconds   ~7 seconds    negligible    negligible
Total                                ~35 minutes   ~21 minutes   ~17 minutes   ~5 minutes    ~2 minutes

URLFetch Max Deadline is Only 60 Seconds for TaskQueues

Problem: I am calling URLFetch with a deadline of 480 seconds from within a TaskQueue task, but it is timing out after only 60 seconds.
The original question was asked in the official group more than a year ago, but it is still unanswered.
The bug was confirmed, but there has been no response from GAE technical support or developers. Maybe they're here?
While there is information on that old thread that suggests otherwise, I don't believe this is a bug that will be fixed (or that it is a bug at all). It's unfortunate that the issue has not been updated or closed.
A URLFetch, regardless of where you make it from within the App Engine world, has a maximum deadline of 60 seconds.
Requests on front-end instances within App Engine also have a maximum lifetime of 60 seconds.
Requests within the context of the task queue, however, have a lifetime of up to 10 minutes. That does not mean, though, that a URLFetch made from within the task queue context can exceed the 60-second deadline.

Is it possible in C/C++ in Linux to get informed when a specified date/time is reached?

Is it possible using standard C++ in Linux to get informed when a specified date/time is reached by the system time (assuming my process is up of course)?
I could just set a timer to the time I need to wait, but what happens if the user changes the system time? Can I get informed by the system that the user changed the system time to reset my timers?
The Linux kernel has such system calls, although they are not integrated into the libc API.
You can create a timer, get a file descriptor for it from the kernel, and do a select or epoll call on the descriptor to be notified when the timer fires.
The man page for it: http://www.kernel.org/doc/man-pages/online/pages/man2/timerfd_create.2.html
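A rough sketch of how that looks in practice; the 60-second delay is arbitrary, and TFD_TIMER_CANCEL_ON_SET (Linux 3.0 and later) is what addresses the "user changes the system time" concern, because it makes the read fail with ECANCELED when the real-time clock is set, so you can recompute and re-arm:

/* Sketch: wait for an absolute wall-clock time with timerfd and get
   notified if someone changes the system clock in the meantime. */
#include <sys/timerfd.h>
#include <unistd.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    int fd = timerfd_create(CLOCK_REALTIME, 0);
    if (fd < 0) { perror("timerfd_create"); return 1; }

    struct itimerspec its = {0};
    its.it_value.tv_sec = time(NULL) + 60;   /* fire 60 s from now, as an absolute time */

    /* TFD_TIMER_CANCEL_ON_SET: a later read() fails with ECANCELED if the
       real-time clock is set, so the timer can be recomputed and re-armed. */
    if (timerfd_settime(fd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET, &its, NULL) < 0) {
        perror("timerfd_settime");
        return 1;
    }

    uint64_t expirations;
    ssize_t n = read(fd, &expirations, sizeof expirations);  /* blocks; select/epoll also works */
    if (n < 0 && errno == ECANCELED)
        printf("system clock was changed - recompute and re-arm the timer\n");
    else if (n == (ssize_t)sizeof expirations)
        printf("timer fired\n");

    close(fd);
    return 0;
}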
Sure, just write code to do exactly what you want. For example, you could keep a list of all such timers, the date/time at which each is supposed to fire, and the code that should be executed when it does. Then every so often you could check the time to see whether any of your timers are past due. If so, fire them, remove them from the list, and go on about your business.
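A rough sketch of that do-it-yourself approach (the type and function names are placeholders): keep pending events keyed by their wall-clock due time and compare against the current time on every pass, so a changed system clock is simply picked up on the next check.

// Sketch: poll the wall clock periodically and fire whatever is due.
#include <chrono>
#include <ctime>
#include <functional>
#include <map>
#include <thread>

struct Scheduler {
    // due wall-clock time -> callback to run
    std::multimap<std::time_t, std::function<void()>> events;

    void add(std::time_t when, std::function<void()> callback) {
        events.emplace(when, std::move(callback));
    }

    // Call this every so often (here, once a second from main).
    void run_due() {
        const std::time_t now = std::time(nullptr);
        const auto end = events.upper_bound(now);   // everything due up to 'now'
        for (auto it = events.begin(); it != end; ++it)
            it->second();                           // fire the callback
        events.erase(events.begin(), end);          // then drop the fired entries
    }
};

int main() {
    Scheduler s;
    s.add(std::time(nullptr) + 5, [] { /* do something at now + 5 s */ });
    while (!s.events.empty()) {
        s.run_due();
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}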
See the documentation for the 'at' command (man at).
For example, at could send you an email at a given time, like 2:35 PM.
at 14:35
warning: commands will be executed using /bin/sh
at> mail -s "It is 2:35 PM" dbadmin < /dev/null
at><EOT> # After CTRL/D pressed.
job 9 at Tue May 8 14:35:00 2012
You can calculate the time from program start to the event, and call
sleep (difftime-1);
Then you could check whether the clock had been changed, but this way you can only correct by waiting longer; if the clock was set forward past the event time, you would already have skipped the event.