I have made a program in C++ for changing the password of a system, and I want to run it every 2 hours. That leaves me with two choices in C++: one is Sleep(ms) and the other is the newer thread library's this_thread::sleep_for(2h) (with 2h from std::chrono_literals).
The doubt I have been wondering about is: will pausing an exe for that long work the way we want? Is there any better way than what I mentioned?
I have also planned to install my exe as a Windows service.
any other better way than what I mentioned?
Yes.
I suggest that you do not pause the program at all. Simply do the thing, and exit.
Extract the scheduling part to a separate program. You don't even need to write this scheduler, because it already exists on most operating systems.
If you have some task that must be run periodically with long periods of waiting, you should use a program or script that does the task and exits, plus a scheduler which handles the waiting. There are also questions you need to consider, for example:
do you need to start your task if the scheduled time was missed (due to a reboot, for example)?
do you allow several instances of your task to run at once, if the time it takes to complete is longer than the wait period?
What you're trying to do is to implement a scheduler yourself. If this is what you want, note that Sleep is Windows-specific (the lowercase sleep is the POSIX function), while std::this_thread::sleep_for is cross-platform, so it's better to use the latter.
However, implementing schedulers yourself is generally not recommended, especially ones as simple as this.
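If you do decide to keep the loop in the program anyway, a minimal portable sketch could look like the following (change_password here is just a stand-in for whatever your program actually does):

#include <chrono>
#include <iostream>
#include <thread>

// Stand-in for the real password-changing logic.
void change_password() {
    std::cout << "changing password...\n";
}

int main() {
    using namespace std::chrono_literals;
    for (;;) {
        change_password();
        // Portable, unlike the Win32-only Sleep(ms).
        std::this_thread::sleep_for(2h);
    }
}

A scheduled task (Windows Task Scheduler, or cron on Linux) that simply runs the program every two hours remains the more robust option.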
I have a collection of executables that regularly update a collection of files every couple of minutes, 24/7. I am thinking about writing a single monitoring program that will continuously check the last write time (using the function stat()) of all these files, so that if any have not been updated recently enough it can ring an alarm. My concern, though, is that the very act of calling stat() may cause a program that is attempting to write to that file to fail. Need I worry? And if so, is there an alternative way to achieve my goal?
Yes, a stat call can be thought of as atomic, in that all the information it returns is guaranteed to be consistent. If you call stat at the same instant some other process is writing to the file, there should be no possibility that, say, the other process's write is reflected in st_mtime but not st_size.
And in any case, there's certainly no possibility that calling stat at the same instant some other process is writing to the file could cause that other process to fail. (That would be a serious and quite unacceptable bug in the operating system; one of an OS's main jobs is to ensure that unrelated processes can't accidentally interact with each other in such ways. This lack-of-interference property isn't usually what we mean by "atomic", though.)
With that said, though, the usual way to monitor a process is via its process ID. And there are probably plenty of prewritten packages out there to help you manage one or more processes that are supposed to run continuously, giving you clean start/stop and monitoring capabilities. (See s6 as an example. I know nothing about this package and am not recommending it; it's just the first one I came across in a web search.)
Another possibility, if you have any kind of IPC mechanism set up between your processes, is to set up a periodic heartbeat that each one publishes, so that a watchdog timer somewhere can detect a process dying.
If you want to keep monitoring your processes by the timeliness of the files they write, though, that sounds like a perfectly fine technique also.
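For concreteness, here is a minimal sketch of such a freshness check, assuming POSIX stat(); the path and the three-minute threshold are made up:

#include <sys/stat.h>
#include <ctime>
#include <iostream>

// Returns true if path was modified within the last max_age seconds.
bool fresh_enough(const char* path, std::time_t max_age) {
    struct stat st;
    if (stat(path, &st) != 0)
        return false;                      // a missing file counts as stale
    return std::time(nullptr) - st.st_mtime <= max_age;
}

int main() {
    if (!fresh_enough("/var/data/feed.dat", 180))   // hypothetical file
        std::cerr << "alarm: file not updated recently enough\n";
}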
I was reading this article about inter-process communication with message passing. In order to run the examples and see the results, it says, and I quote: "should be compiled and run at the same time". Does anyone have ideas about how exactly I should do this?
You can create a BAT file and start both programs almost simultaneously:
START first.exe
START second.exe
"should be compiled and run at the same time"
I think it is clear that a program cannot be run until after it is compiled (this is a minor grammatical issue in the article, and should be ignored).
In Linux, my preferred mechanism to launch a process is popen invoked by my C++ program.
In C++, it is easy for one thread (let us call this your start process) to use popen to launch as many processes as needed for your application (call these work processes).
I would then use messages to synchronize the start-up (i.e. work processes should initialize themselves, then wait at start-up for a "go" message from the start process). These start-up messages work in the same way as any other messages your application uses. This ensures that the multiple work processes are running at the same time (within the constraints of how many cores your system has available).
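A rough sketch of that launch step, assuming POSIX popen and hypothetical worker binaries:

#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Hypothetical work processes; substitute your own binaries.
    std::vector<std::string> workers = {"./worker_a", "./worker_b"};
    std::vector<FILE*> pipes;

    // "r" mode gives the start process a pipe to each worker's stdout.
    for (const auto& cmd : workers)
        if (FILE* p = popen(cmd.c_str(), "r"))
            pipes.push_back(p);

    // ... send the "go" message over your IPC mechanism of choice ...

    for (FILE* p : pipes)
        pclose(p);                         // waits for the child to exit
}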
I will have a Linux service that waits for messages from a central controller and carries out the tasks ordered by those messages. I think that to do this I need to create a new thread.
Moreover, one task has absolute priority over the others, and when the order for that task comes, I need to carry it out as soon as possible. Also, since all of this runs on an embedded system with restricted resources, I thought I would need to pause all the other threads that were created.
I imagine that I need something similar to this:
How to sleep or pause a PThread in c on Linux
But the question is not a duplicate. I do not have an exact point at which to pause the threads. I need to pause them wherever possible, and continue when the prioritized task is finished.
And here a way is suggested that seems obsolete; besides, I could use std::thread:
How to pause a pthread ANY TIME I want?
How could I prioritize one task?
(Maybe before that:) To dispatch the tasks, do I need to design something like a "thread manager", or is there a simpler approach?
Note: I have used the word "task" as it is, not as a technical term.
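For what it's worth, one common pattern for this kind of "pause wherever possible" behaviour is a shared flag plus a condition variable that workers check at points where it is safe to stop. A minimal sketch with std::thread, all names illustrative:

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool paused = false;

// Workers call this at safe points; it blocks while the urgent task runs.
void wait_if_paused() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return !paused; });
}

void set_paused(bool p) {
    { std::lock_guard<std::mutex> lk(m); paused = p; }
    cv.notify_all();
}

void worker() {
    for (;;) {
        wait_if_paused();
        // ... one chunk of ordinary work ...
    }
}

int main() {
    std::thread t(worker);
    set_paused(true);    // the prioritized task would run here
    set_paused(false);
    t.detach();          // sketch only; a real service would join on shutdown
}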
We have a decoding function that runs in its own thread to carry out its job.
The time of execution is usually well below a defined timeout value, but on some occasions it may take much longer to complete. Thus the need to have a timeout in order to make sure this function will not cause extra delays to the rest of the program.
This is currently being developed on Windows OS but I'm also looking at a portable solution to Linux.
The implementation so far has multiple checks within the decoding function to see if it still has time to continue or must abort processing, which is definitely not great practice, and I'm looking at improving this.
I'm aware that boost provides such facility, but we do not use boost in this project.
Here is an excellent article by Herb Sutter on the subject. The conclusion would be: your current approach is OK. Just have your decoding threads periodically check whether they have run out of time. The important thing is to strike a balance in how frequently you check.
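A sketch of that approach, using std::chrono for the deadline (the block count is a stand-in for real work):

#include <chrono>
#include <iostream>

using clock_type = std::chrono::steady_clock;

// The decoding loop tests the deadline once per block, balancing
// reaction time against checking overhead.
bool decode_with_timeout(std::chrono::milliseconds timeout) {
    const auto deadline = clock_type::now() + timeout;
    for (int block = 0; block < 1000; ++block) {
        if (clock_type::now() >= deadline)
            return false;                  // out of time; abort cleanly
        // ... decode one block here ...
    }
    return true;
}

int main() {
    std::cout << (decode_with_timeout(std::chrono::milliseconds(50))
                      ? "done\n" : "timed out\n");
}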
One way is to set a flag on timeout to instruct the thread instance to not report any completion, not continue and to delete/terminate itself ASAP. Reduce its priority to the lowest possible and forget about it. Create another thread object immediately, overwriting the old instance value, and use the new thread instance for subsequent decoding.
The lowest-priority orphaned thread will eventually die off by itself when it finally gets around to checking its suicide flag.
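A sketch of that abandon-and-replace idea with std::thread and an atomic flag (lowering the orphan's priority is OS-specific and omitted here; all names are illustrative):

#include <atomic>
#include <chrono>
#include <memory>
#include <thread>

struct Decoder {
    std::shared_ptr<std::atomic<bool>> abort_flag =
        std::make_shared<std::atomic<bool>>(false);
    std::thread worker;

    void start() {
        auto flag = abort_flag;            // each worker keeps its own flag
        worker = std::thread([flag] {
            while (!flag->load()) {
                // ... decode; check the flag between blocks ...
            }
            // flag set: report nothing, just fall off the end and die
        });
    }

    void abandon_and_restart() {
        abort_flag->store(true);           // tell the orphan to stop
        worker.detach();                   // forget about it
        abort_flag = std::make_shared<std::atomic<bool>>(false);
        start();                           // fresh thread for the next job
    }
};

int main() {
    Decoder d;
    d.start();
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    d.abandon_and_restart();               // pretend the first job timed out
    d.abort_flag->store(true);             // wind down the replacement too
    d.worker.join();
}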
When developing applications I am used to printing to the console in order to get useful debugging/tracing information. The application I am working on now is multi-threaded, and sometimes I see my printf outputs overlapping each other.
I tried to synchronize the screen using a mutex, but I ended up slowing down and blocking the app. How can I solve this issue?
I am aware of MT logging libraries, but since I log a lot, using them slows my app down a bit.
I was thinking of the following idea: instead of logging within my application, why not log outside it? I would like to send the logging information via socket to a second application process that actually prints it on the screen.
Are you aware of any library already doing this?
I use Linux/gcc.
You have 3 options. In increasing order of complexity:
1. Just use a simple mutex within each thread. The mutex is shared by all threads.
2. Send all the output to a single thread that does nothing but the logging.
3. Send all the output to a separate logging application.
Under most circumstances, I would go with #2. #1 is fine as a starting point, but in all but the most trivial applications you can run into problems serializing the application. #2 is still very simple, and simple is a good thing, but it is also quite scalable. You still end up doing the processing in the main application, but for the vast majority of applications you gain nothing by spinning this off to its own dedicated application.
Number 3 is what you'd do in performance-critical server-type applications, but the minimal performance gain you get with this approach is 1: very difficult to achieve, 2: very easy to screw up, and 3: not the only or even most compelling reason people generally take this approach. Rather, people typically take this approach when they need the logging service to be separated from the applications using it.
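A minimal sketch of option #2, for the sake of concreteness (names are illustrative): producers push strings onto a queue, and a single thread drains it and prints.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Logger {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> q;
    bool done = false;
    std::thread worker{[this] { run(); }};  // started last, after the members

    void run() {
        std::unique_lock<std::mutex> lk(m);
        for (;;) {
            cv.wait(lk, [this] { return !q.empty() || done; });
            while (!q.empty()) {
                std::string msg = std::move(q.front());
                q.pop();
                lk.unlock();               // never hold the lock during I/O
                std::cout << msg << '\n';
                lk.lock();
            }
            if (done) return;
        }
    }

public:
    void log(std::string msg) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(msg)); }
        cv.notify_one();
    }
    ~Logger() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
        worker.join();
    }
};

int main() {
    Logger log;
    log.log("hello from the logging thread");
}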
Which OS are you using?
Not sure about specific libraries, but one of the classical approaches to this sort of problem is to use a logging queue, which is serviced by a writer thread whose job is purely to write the log file.
You need to be aware, with either a threaded approach or a multi-process approach, that the write queue may back up, meaning it needs to be managed, either by discarding entries or by slowing down your application (which is obviously easier with the threaded approach).
It's also common to have some way of categorising your logging output, so that you can have one section of your code logging at a high level, whilst another section of your code logs at a much lower level. This makes it much easier to manage the amount of output that's being written to files and offers you the option of releasing the code with the logging in it, but turned off so that it can be used for fault diagnosis when installed.
As far as I know, a critical section is more lightweight.
Critical section
Using critical section
If you use gcc, you could use atomic accesses. Link.
Frankly, a mutex is the only way you really want to do that, so it's always going to be slow in your case because you're using so many print statements. To solve your problem, then: don't use so many printf statements; that's your issue to begin with.
Okay, is your solution using a mutex to print? Perhaps you should instead have a mutex on a message queue that another thread processes to print; that has a potential hang-up, but I think it will be faster. So, use an active logging thread that spins waiting for incoming messages to print. The networking solution could work too, but it requires more work; try this first.
What you can do is to have one queue per thread, and have the logging thread routinely go through each of these and post the messages somewhere.
This is fairly easy to set up, and the amount of contention can be very low (just a pointer swap or two, which can be done without locking anything).
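A sketch of that per-thread buffer with a swap (a truly lock-free version would use an atomic exchange; this one keeps a tiny mutex around the swap for simplicity, and all names are illustrative):

#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

struct ThreadLog {
    std::mutex m;                          // guards only the pointer swap
    std::unique_ptr<std::vector<std::string>> buf =
        std::make_unique<std::vector<std::string>>();

    void post(std::string msg) {           // called by the owning thread
        std::lock_guard<std::mutex> lk(m);
        buf->push_back(std::move(msg));
    }

    // Called by the logging thread: swap in an empty buffer, then drain
    // the full one outside the lock.
    std::unique_ptr<std::vector<std::string>> drain() {
        auto fresh = std::make_unique<std::vector<std::string>>();
        std::lock_guard<std::mutex> lk(m);
        buf.swap(fresh);
        return fresh;                      // now holds the old messages
    }
};

int main() {
    ThreadLog tl;
    tl.post("message from a worker");
    for (auto& msg : *tl.drain())          // the logging thread's pass
        std::cout << msg << '\n';
}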