Implementing Realtime in a Text-Adventure? - C++

I'm making a text-based RPG, and I'd really like to emulate time.
I could just make some time pass each time the player types something, but I'd like it to be better than that if possible. I was wondering if multithreading would be a good way to do this.
I was thinking of maybe just having a second, really simple thread in the background that just loops every 1000 ms. For every pass through its loop, the world time would increase by 1 second and the player would regenerate a bit of health and mana.
Is this something that multithreading could do, or is there something I don't know about that would make this not work? (I'd prefer not to spend a bunch of time struggling to learn it if it's not going to help me with this project.)

Yes, multithreading could certainly do this, but be wary: threading is usually more complicated than the alternative (which would be the main thread polling various update events as part of its main loop, which should be running at least once every 100 ms or so anyway).
In your case, if the clock thread follows pretty strict rules, you'll probably be "ok."
The clock thread is the only thread allowed to set/modify the time variables.
The main/ui thread is only allowed to read the time.
You must still use a system time function, since the thread sleep functions cannot be trusted for accuracy (depending on system activity, the thread's update loop may not run until some milliseconds after you requested it run).
If you implement it like that, then you won't even need to familiarize yourself with mutexes in order to get the thread up and running safely, and your time will be accurate.
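For example, here's a minimal sketch of that division of labor, assuming C++11 (the names are just illustrative, not from your project):

#include <atomic>
#include <chrono>
#include <thread>

// Written only by the clock thread, read only by the main/UI thread.
std::atomic<long long> worldSeconds{0};

void clockThread()
{
    using namespace std::chrono;
    const auto start = steady_clock::now();
    for (;;) {
        std::this_thread::sleep_for(milliseconds(250));   // only a pacing hint
        // Derive the time from the clock itself rather than by counting sleeps,
        // so any oversleeping never accumulates into drift.
        const auto elapsed = duration_cast<seconds>(steady_clock::now() - start);
        worldSeconds.store(elapsed.count(), std::memory_order_relaxed);
    }
}

The main thread can read worldSeconds whenever it likes and apply regeneration based on how far it has advanced since the last read.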
But! Here's some food for thought: what if you want to bind in-game triggers at specific times of the day? For example, a message that would be posted to the user "The sun has set" or similar. The code needed to do that will need to be running on the main thread anyway (unless you want to implement cross-thread message communication queues!), and will probably look an awful lot like basic periodic-check-and-update-clock code. So at that point you would be better off just keeping a simple unified thread model anyway.
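For comparison, a sketch of that unified single-threaded model: the main loop checks the clock every pass and fires whatever is due (the 60-second sunset and the regeneration hook are placeholders, not a suggestion for your actual game logic):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    auto lastTick = clock::now();
    long long worldSeconds = 0;
    bool sunHasSet = false;

    for (;;) {
        // ... poll for player input without blocking and handle commands ...

        // Catch up on any whole seconds that passed since the last pass.
        while (clock::now() - lastTick >= std::chrono::seconds(1)) {
            lastTick += std::chrono::seconds(1);
            ++worldSeconds;
            // regenerate a bit of health and mana here
        }
        if (!sunHasSet && worldSeconds >= 60) {   // pretend sunset is at the 60-second mark
            std::cout << "The sun has set.\n";
            sunHasSet = true;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(50));   // keep CPU usage low
    }
}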

I usually use a class named Simulation to step time forward. I don't have it in C++, but I've done threading in Java that steps time forward and activates events according to a schedule (or a random event at a planned time). You can take this and translate it to C++ (a C++ sketch follows the example below), or use it to see what an object-oriented implementation looks like.
package adventure;

public class Simulation extends Thread {
    // Custom priority queue of (object, wake-up time) pairs, earliest time first.
    // PriorityQueue and Wakeable here are the project's own classes, not java.util's.
    private PriorityQueue prioQueue;

    Simulation() {
        prioQueue = new PriorityQueue();
        start();
    }

    // Ask for SleepingObject.wakeup() to be called roughly `time` ms from now.
    public void wakeMeAfter(Wakeable SleepingObject, double time) {
        prioQueue.enqueue(SleepingObject, System.currentTimeMillis() + time);
    }

    public void run() {
        while (true) {
            try {
                sleep(5);   // poll every 5 ms
                if (prioQueue.getFirstTime() <= System.currentTimeMillis()) {
                    ((Wakeable) prioQueue.getFirst()).wakeup();
                    prioQueue.dequeue();
                }
            } catch (InterruptedException e) {
            }
        }
    }
}
To use it, you just instantiate it and add your objects:
Simulation sim = new Simulation();
// Load images to be used as appearance-parameter for persons
Image studAppearance = loadPicture("Person.gif");
// --- Add new persons here ---
new WalkingPerson(sim, this, "Peter", studAppearance);
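For reference, a rough C++11 translation of the same idea; this is only a sketch under my own naming (std::function stands in for Wakeable, and std::priority_queue for the custom PriorityQueue):

#include <chrono>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class Simulation {
public:
    Simulation() : worker_(&Simulation::run, this) {}

    // Schedule `callback` to fire roughly delayMs milliseconds from now.
    void wakeMeAfter(std::function<void()> callback, int delayMs) {
        std::lock_guard<std::mutex> guard(lock_);
        queue_.push({Clock::now() + std::chrono::milliseconds(delayMs),
                     std::move(callback)});
    }

private:
    using Clock = std::chrono::steady_clock;
    struct Event {
        Clock::time_point when;
        std::function<void()> callback;
        bool operator>(const Event &other) const { return when > other.when; }
    };

    void run() {
        for (;;) {
            std::this_thread::sleep_for(std::chrono::milliseconds(5));   // poll every 5 ms
            std::function<void()> due;
            {
                std::lock_guard<std::mutex> guard(lock_);
                if (!queue_.empty() && queue_.top().when <= Clock::now()) {
                    due = queue_.top().callback;
                    queue_.pop();
                }
            }
            if (due) due();   // run the callback outside the lock
        }
    }

    std::mutex lock_;
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
    std::thread worker_;   // never stopped or joined in this sketch; a real version needs both
};

Usage then mirrors the Java snippet: Simulation sim; sim.wakeMeAfter([]{ /* wake the person */ }, 1000);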

I'm going to assume that your program currently spends the majority of its time waiting for user input, which blocks your main thread irregularly and for relatively long stretches, preventing you from doing short time-dependent updates, and that you want to avoid complicated solutions (threading).
If you want to access the time in the main thread, doing so without a separate thread is relatively easy (see the example below).
If you don't need to do anything in the background while waiting for user input, couldn't you write a function that calculates the new value based on the amount of time that has passed while waiting? You could have a variable LastSystemTimeObserved that gets updated every time you need one of your time-dependent variables, and a function that calculates each variable's changed value based on how much time has passed since it was last called, instead of recalculating values every second.
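A minimal sketch of that lazy catch-up, with placeholder names and regeneration rates:

#include <algorithm>
#include <chrono>

struct Player {
    double health = 50.0;
    double mana   = 20.0;
};

using Clock = std::chrono::steady_clock;
Clock::time_point lastSystemTimeObserved = Clock::now();

// Call this right before any code that needs up-to-date stats.
void catchUp(Player &p)
{
    const auto now = Clock::now();
    const double elapsedSec =
        std::chrono::duration<double>(now - lastSystemTimeObserved).count();
    lastSystemTimeObserved = now;

    // Placeholder rates: 1 health and 0.5 mana per second, capped at a maximum.
    p.health = std::min(100.0, p.health + 1.0 * elapsedSec);
    p.mana   = std::min(50.0,  p.mana   + 0.5 * elapsedSec);
}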
If you do make a separate thread, be sure that you properly protect any variables that are accessed by both threads.

Related

Forcibly terminate method after a certain amount of time

Say I have a function whose prototype looks like this, belonging to class container_class:
std::vector<int> container_class::func(int param);
The function may or may not cause an infinite loop on certain inputs; it is impossible to tell which inputs will cause a success and which will cause an infinite loop. The function is in a library whose source I do not have and cannot modify (this is a bug and will be fixed in the next release in a few months, but for now I need a way to work around it), so solutions which modify the function or class will not work.
I've tried isolating the function using std::async and std::future, and using a while loop to constantly check the state of the thread:
using namespace std::chrono_literals;   // for the 0ms literal (C++14)

container_class c;   // note: "container_class c();" would declare a function, not an object
long start = get_current_time();   // get the current time in ms
auto future = std::async(&container_class::func, &c, 2);
while (future.wait_for(0ms) != std::future_status::ready) {
    if (get_current_time() - start > 1000) {
        // forcibly terminate future
    }
    std::this_thread::sleep_for(2ms);
}
This code has many problems. One is that I can't forcibly terminate the std::future object (and the thread that it represents).
At the far extreme, if I can't find any other solution, I can isolate the function in its own executable, run it, and then check its state and terminate it appropriately. However, I would rather not do this.
How can I accomplish this? Is there a better way than what I'm doing right now?
You are out of luck, sorry.
First off, C++ doesn't even guarantee that there will be a thread executing your future. Although it would be extremely hard (probably impossible) to implement all the std::async guarantees in a single thread, there is no direct prohibition of that, and there is certainly no guarantee of one thread per async call. Because of that, there is no way to cancel the async execution.
Second, there is no such facility even at the lowest level of the thread implementation. While pthread_cancel exists, it won't protect you from infinite loops that never visit a cancellation point, for example.
You cannot arbitrarily kill a thread in POSIX, and the C++ thread model is based on it. A process really can't act as a scheduler of its own threads, and while that is sometimes a pain, it is what it is.

Executing function for some amount of time

I am sorry if this was asked before, but I didn't find anything related to it. This is for my understanding; it's not homework.
I want to execute a function only for some amount of time. How do I do that? For example,
main()
{
....
....
func();
.....
.....
}
function func()
{
......
......
}
Here, my main function calls another function. I want that function to execute for only a minute. In that function, I will be getting some data from the user, so if the user doesn't enter the data, I don't want to be stuck in that function forever. Irrespective of whether the function has completed by that time or not, I want to come back to the main function and execute the next operation.
Is there any way to do it? I am on Windows 7 and I am using VS 2013.
Under Windows, the options are limited.
The simplest option would be for func() to explicitly and periodically check how long it has been executing (e.g. store its start time, periodically check the amount of time elapsed since that start time) and return if it has run longer than you wish.
It is possible (C++11 or later) to execute the function within another thread, and for main() to signal that thread when the required time period has elapsed. That is best done cooperatively. For example, main() sets a flag, the thread function checks that flag and exits when required to. Such a flag is usually best protected by a critical section or mutex.
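One way to implement that flag is std::atomic<bool> (a mutex-protected bool works just as well); a sketch:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> stopRequested{false};

void func()
{
    while (!stopRequested.load()) {
        // ... do a small chunk of work, or poll for user input with a short timeout ...
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
    // clean up and return promptly once the flag is seen
}

int main()
{
    std::thread worker(func);
    std::this_thread::sleep_for(std::chrono::minutes(1));   // let it run for a minute
    stopRequested.store(true);   // ask the worker to finish
    worker.join();               // wait for it to exit cleanly
    std::cout << "continuing with the next operation\n";
}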
An extremely unsafe way under Windows is for main() to forcibly terminate the thread. That is unsafe, as it can leave the program (and, in the worst cases, the operating system itself) in an unreliable state (e.g. if the terminated thread is in the middle of allocating memory, executing certain kernel functions, or manipulating the global state of a shared DLL).
If you want better or safer options, you will need a real-time operating system with strict memory and timing partitioning. To date, I have yet to encounter any substantiated documentation of any variant of Windows or unix (not even the real-time variants) with those characteristics. There are a couple of unix-like systems (e.g. LynxOS) with variants that have such properties.
I think a part of your requirement can be met using multithreading and a loop with a stopwatch.
Create a new thread.
Start a stopwatch.
Start a loop with one minute as the condition for the loop.
During each iteration, check whether the user has entered the input and process it.
When one minute is over, the loop quits.
I'm not sure about the feasibility of this idea; I'm just sharing it. I don't know much about C++, but in Node.js your requirement can be achieved using 'events'. Maybe such things exist in C++ too.

One-Shot Timer in C++ on WinCE

I'm writing an event handling function, f(d), which receives some data, d, and must take an action X(d), then sleep for 100 ms, then take another action Y(d). I would implement it as:
void f(d)
{
X(d);
Sleep(100);
Y(d);
}
However, f(d) is called from a single-threaded event handler, so the Sleep(100) is unacceptable.
I would like to do the following:
void f(d)
{
X(d);
ScheduleOneShotTimer(Y,d,100);
}
I could implement ScheduleOneShotTimer by creating a new thread for each call, passing the data as the thread parameter, and calling Sleep before executing Y(d). However, as this event may occur up to 100 times per second, I'm concerned about the overhead involved in creating and destroying all those threads.
Preferably there would be operating system level support for a "one-shot timer", but I don't think this is the case on CE. I know about SetTimer, but that is not applicable to me because I am writing a "Console Application" that has no message loop.
Any other suggestions for how to structure this would be appreciated.
Call the timeSetEvent API (a completely non-intuitive API name, I know). Use a callback function and the TIME_ONESHOT parameter.
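A sketch of what that might look like, assuming timeSetEvent and the multimedia timer flags are available on your CE image (Data and Y are the question's placeholders; check which library exports timeSetEvent on your platform -- winmm.lib on desktop Windows):

#include <windows.h>
#include <mmsystem.h>

struct Data { /* ... */ };    // hypothetical stand-in for "d"
void Y(Data *d);              // the delayed action, defined elsewhere

// Fired once by the multimedia timer, on a timer-owned worker thread,
// so Y() must be safe to call from outside the event handler's thread.
static void CALLBACK OnTimer(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser,
                             DWORD_PTR dw1, DWORD_PTR dw2)
{
    Y(reinterpret_cast<Data *>(dwUser));
}

void ScheduleOneShotTimer(Data *d, UINT delayMs)
{
    // d must stay alive until the callback has run.
    timeSetEvent(delayMs, 1 /* ms resolution */, OnTimer,
                 reinterpret_cast<DWORD_PTR>(d),
                 TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
}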
I'd create one thread that would keep a queue of timestamp-callback pairs, sleep for 100ms (or something smaller) and then execute all elapsed callbacks.
Of course, with all the inter-thread synchronization (locking on a critical section, etc.).
It's a performance-conscious solution, not a precision-oriented one. As callbacks pile up, it may take longer than exactly 100 ms to execute them. But since you're measuring time with Wait (which is not precise), I guess it may be good enough.

How to implement efficient C++ runtime statistics

I would like to know if there is a good way to monitor my application internals, ideally in the form of an existing library.
My application is heavily multithreaded, and uses a messaging system to communicate in-between threads and to the external world. My goal is to monitor what kind of messages are sent, at which frequency, etc.
There could also be other statistics in a more general way, like how many threads are spawned every minute, how often new/delete are called, or more specific aspects of the application; you name it.
What would be awesome is something like the "internal pages" you have in Google Chrome, like chrome://net-internals or chrome://tracing, but in a command-line fashion.
If there is a library that's generic enough to accommodate the specifics of my app, that would be great.
Otherwise I'm prepared to implement a small class that would do the job, but I don't know where to start. I think the most important thing is that the code shouldn't interfere too much, so that performance is not impacted.
Do you guys have some pointers on this matter?
Edit: my application runs on Linux, in an embedded environment, sadly not supported by Valgrind :(
I would recommend that in your code, you maintain counters that get incremented. The counters can be static class members or globals. If you use a class to define your counter, you can have the constructor register your counter with a single repository along with a name. Then, you can query and reset your counters by consulting the repository.
struct Counter {
    unsigned long c_;
    unsigned long operator++ () { return ++c_; }
    operator unsigned long () const { return c_; }
    void reset () { unsigned long c = c_; ATOMIC_DECREMENT(c_, c); }
    Counter (std::string name);
};

struct CounterAtomic : public Counter {
    unsigned long operator++ () { return ATOMIC_INCREMENT(c_, 1); }
    CounterAtomic (std::string name) : Counter(name) {}
};
ATOMIC_INCREMENT would be a platform specific mechanism to increment the counter atomically. GCC provides a built-in __sync_add_and_fetch for this purpose. ATOMIC_DECREMENT is similar, with GCC built-in __sync_sub_and_fetch.
struct CounterRepository {
    typedef std::map<std::string, Counter *> MapType;
    mutable Mutex lock_;
    MapType map_;

    void add (std::string n, Counter &c) {
        ScopedLock<Mutex> sl(lock_);
        if (map_.find(n) != map_.end()) throw n;
        map_[n] = &c;
    }

    Counter & get (std::string n) const {
        ScopedLock<Mutex> sl(lock_);
        MapType::const_iterator i = map_.find(n);
        if (i == map_.end()) throw n;
        return *(i->second);
    }
};

CounterRepository counterRepository;

Counter::Counter (std::string name) {
    counterRepository.add(name, *this);
}
If you know the same counter will be incremented by more than one thread, then use CounterAtomic. For counters that are specific to a thread, just use Counter.
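Usage would then look something like this (the counter names are just examples):

// Incremented from several worker threads, so use the atomic variant.
CounterAtomic packetsSent("packets_sent");
// Only ever touched by one thread.
Counter timerTicks("timer_ticks");

void onPacketSent() { ++packetsSent; }

void dumpStats() {
    // The repository lets monitoring code look counters up by name.
    unsigned long sent = counterRepository.get("packets_sent");
    // ... print or log `sent`, then optionally reset() ...
}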
I gather you are trying to implement the gathering of run-time statistics -- things like how many bytes you sent, how long you've been running, and how many times the user has activated a particular function.
Typically, in order to compile run-time statistics such as these from a variety of sources (like worker threads), I would have each source (thread) increment its own, local counters of the most fundamental data but not perform any lengthy math or analysis on that data yet.
Then back in the main thread (or wherever you want these stats analyzed & displayed), I send a RequestProgress type message to each of the worker threads. In response, the worker threads will gather up all the fundamental data and perhaps perform some simple analysis. This data, along with the results of the basic analysis, are sent back to the requesting (main) thread in a ProgressReport message. The main thread then aggregates all this data, does additional (perhaps costly) analysis, formatting and display to the user or logging.
The main thread sends this RequestProgress message either on user request (like when they press the S key), or on a timed interval. If a timed interval is what I'm going for, I'll typically implement another new "heartbeat" thread. All this thread does is Sleep() for a specified time, then send a Heartbeat message to the main thread. The main thread in turn acts on this Heartbeat message by sending RequestProgress messages to every worker thread the statistics are to be gathered from.
The act of gathering statistics seems like it should be fairly straightforward. So why such a complex mechanism? The answer is two-fold.
First, the worker threads have a job to do, and computing usage statistics isn't it. Trying to refactor these threads to take on a second responsibility orthogonal to their main purpose is a little like trying to jam a square peg into a round hole. They weren't built to do that, so the code will resist being written.
Second, the computation of run-time statistics can be costly if you try to do too much, too often. Suppose, for example, you have a worker thread that sends multicast data on the network, and you want to gather throughput data: how many bytes, over how long a time period, and an average of how many bytes per second. You could have the worker thread compute all of this on the fly itself, but it's a lot of work, and that CPU time is better spent by the worker thread doing what it's supposed to be doing -- sending multicast data. If instead you simply increment a counter for how many bytes you've sent every time you send a message, the counting has minimal impact on the performance of the thread. Then, in response to the occasional RequestProgress message, you can figure out the start and stop times and send just that along, letting the main thread do all the division etc.
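A compact sketch of that shape, with a single hypothetical worker and a toy mutex-protected mailbox standing in for the real messaging system (all names here are mine, not from the answer above):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

struct RequestProgress {};                        // main -> worker
struct ProgressReport { std::uint64_t bytes; };   // worker -> main

// Deliberately tiny thread-safe mailbox.
template <typename T>
class Mailbox {
public:
    void post(T msg) {
        std::lock_guard<std::mutex> g(m_);
        q_.push(std::move(msg));
        cv_.notify_one();
    }
    bool tryTake(T &out) {                 // non-blocking, used by the worker
        std::lock_guard<std::mutex> g(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
    T take() {                             // blocking, used by the main thread
        std::unique_lock<std::mutex> g(m_);
        cv_.wait(g, [this] { return !q_.empty(); });
        T out = std::move(q_.front());
        q_.pop();
        return out;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

Mailbox<RequestProgress> toWorker;
Mailbox<ProgressReport>  toMain;
std::atomic<bool> done{false};

void worker() {
    std::uint64_t bytesSent = 0;                  // cheap local counter
    while (!done) {
        bytesSent += 128;                         // pretend we sent a packet
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        RequestProgress req;
        if (toWorker.tryTake(req))                // only answer when asked
            toMain.post(ProgressReport{bytesSent});
    }
}

int main() {
    std::thread t(worker);
    for (int i = 0; i < 3; ++i) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        toWorker.post(RequestProgress{});         // the "heartbeat" asks for stats
        ProgressReport r = toMain.take();
        std::cout << "bytes so far: " << r.bytes << "\n";   // costly analysis belongs here
    }
    done = true;
    t.join();
}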
Use shared memory (POSIX, System V, mmap or whatever you have available). Put a fixed length array of volatile unsigned 32- or 64-bit integers (i.e. the largest you can atomically increment on your platform) in there by casting the raw block of memory to your array definition. Note that the volatile doesn't get you atomicity; it prevents compiler optimizations that might trash your stats values. Use intrinsics like gcc's __sync_add_and_fetch() or the newer C++11 atomic<> types.
You can then write a small program that attaches to the same block of shared memory and can print out one or all of the stats. This small stats-reader program and your main program would have to share a common header file that fixes the position of each stat in the array.
The obvious drawback here is that you're stuck with a fixed number of counters. But it's hard to beat, performance-wise. The impact is the atomic increment of an integer at various points in your program.
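A rough sketch of that setup using POSIX shared memory and the GCC builtin mentioned above (the segment name and stat indices are invented; the reader tool would include the same header so both sides agree on the layout):

#include <fcntl.h>      // shm_open, O_* flags
#include <sys/mman.h>   // mmap
#include <sys/stat.h>
#include <unistd.h>     // ftruncate
#include <cstdint>

// Shared between the application and the stats-reader tool.
enum StatIndex { STAT_MSGS_SENT = 0, STAT_BYTES_SENT, STAT_COUNT };
static const char *kShmName = "/myapp_stats";   // made-up name

// Use 32-bit counters instead if 8-byte atomic increments aren't supported on your target.
volatile std::uint64_t *attachStats()
{
    int fd = shm_open(kShmName, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, STAT_COUNT * sizeof(std::uint64_t)) != 0) return nullptr;
    void *p = mmap(nullptr, STAT_COUNT * sizeof(std::uint64_t),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    return p == MAP_FAILED ? nullptr
                           : static_cast<volatile std::uint64_t *>(p);
}

// In the application's hot paths:
inline void bump(volatile std::uint64_t *stats, StatIndex i, std::uint64_t by = 1)
{
    __sync_add_and_fetch(&stats[i], by);   // atomic increment, as suggested above
}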
In embedded systems, a common technique is to reserve a block of memory for a "log" and treat it like a circular queue. Write some code that can read this block of memory; that will help you take "snapshots" during run time.
Search the web for "debug logging"; it should turn up some source you could play with. Most shops I've been at usually roll their own.
Should you have extra non-volatile memory, you could reserve an area and write to that. This would also include files if your system is large enough to support a file system.
Worst case, write data out to a debug (serial) port.
For actual, real-time, measurements, we usually use an oscilloscope connected to a GPIO or test point and output pulses to the GPIO / Test point.
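For illustration, a bare-bones version of such a circular log; the sizes and field layout are arbitrary, and this single-writer version assumes it is only ever called from one context (add locking or per-core buffers otherwise):

#include <cstddef>
#include <cstdint>

struct LogEntry {
    std::uint32_t timestamp;   // e.g. a tick counter
    std::uint16_t eventId;     // what happened
    std::uint16_t data;        // small payload
};

static const std::size_t kLogSize = 256;        // power of two keeps wrapping cheap
static LogEntry g_log[kLogSize];                // or place this at a reserved address
static volatile std::uint32_t g_logHead = 0;    // next slot to write

// Cheap enough to call from almost anywhere.
inline void debugLog(std::uint32_t now, std::uint16_t eventId, std::uint16_t data)
{
    LogEntry &e = g_log[g_logHead % kLogSize];
    e.timestamp = now;
    e.eventId   = eventId;
    e.data      = data;
    ++g_logHead;   // wraps naturally; a snapshot tool reads g_log plus g_logHead
}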
Have a look at valgrind/callgrind.
It can be used for profiling, which is what I understand you are looking for. I do not think it works at runtime, though; it generates its report after your process has finished.
That's a good answer, @John Dibling! I had a system quite similar to this. However, my "stat" thread was querying workers 10 times per second, and it affected the performance of the worker threads: each time the "stat" thread asks for data, a critical section protecting that data (counters, etc.) is entered, which means the worker thread is blocked while the data is being retrieved. It turned out that under heavy worker-thread load, this 10 Hz stat querying affected the overall performance of the workers.
So I switched to a slightly different stat-reporting model: instead of actively querying the worker threads from the main thread, I now have the worker threads report their basic stat counters to their own exclusive statistics repositories, which can be queried by the main thread at any time with no direct impact on the workers.
If you are on C++11 you could use std::atomic<>
#include <atomic>

class GlobalStatistics {
public:
    static GlobalStatistics &get() {
        static GlobalStatistics instance;
        return instance;
    }

    void incrTotalBytesProcessed(unsigned int incrBy) {
        totalBytesProcessed += incrBy;
    }

    long long getTotalBytesProcessed() const { return totalBytesProcessed; }

private:
    std::atomic_llong totalBytesProcessed;

    GlobalStatistics() { }
    GlobalStatistics(const GlobalStatistics &) = delete;
    void operator=(const GlobalStatistics &) = delete;
};
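Usage is then just a matter of calling the singleton from anywhere (using the class exactly as defined above):

// In a worker thread, after processing a message of `len` bytes:
GlobalStatistics::get().incrTotalBytesProcessed(len);

// In the monitoring/reporting code:
long long total = GlobalStatistics::get().getTotalBytesProcessed();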

So I got myself a thread pool / task manager system. Should I use only it for all thread creation from now on?

So I have a thread pool that allows dynamic resizing and uses the task paradigm. I wonder: when people get such a thing, do they stop creating threads by hand altogether and just use tasks all the time? Is it common to use only the thread pool / task executor for thread creation inside a class?
My thread pool is based on boost::asio::io_service and works with boost::packaged_task. It is header-only; with Boost 1.47.0, all you need for it to work are a timer, my custom thread_group, and the thread_pool class. It was quite fun to develop such a small thing, but now I face a dilemma.
My task constructions look like:
boost::shared_ptr< boost::packaged_task<int> > task(new boost::packaged_task<int>( boost::bind(calculate_the_answer_to_life_the_universe_and_everything, argument_int_value )));
This is quite a lot of overhead when I want to create a function that never returns anything and has some run-again timer in it (for example, a file indexer that needs to check every 5 seconds whether the user has created any new file in some folder).
So, for example, I would have:
void infinite_thread()
{
    while (true)
    {
        timerForCaptureFame.restart();
        do_stuff();
        spendedTimeForCaptureFame = (int64_t)timerForCaptureFame.elapsed();
        if (spendedTimeForCaptureFame < desiredTimeForCaptureFame)
            boost::this_thread::sleep(boost::posix_time::milliseconds(
                desiredTimeForCaptureFame - spendedTimeForCaptureFame));
    }
}
and I would simply run this wrapper in a new thread with code like
boost::thread workerThread(infinite_thread);
But now that I have tasks, it could turn into
boost::shared_ptr< boost::packaged_task<void> > task(new boost::packaged_task<void>(infinite_thread));
task_manager->post<void>(task);
After a short amount of time, my task manager would notice that this task does not finish and would generally add a new thread to itself, keeping this one running.
So I really wonder: when people have a thread_pool/task_pool, is it common practice to use only it (for example, one per class) for thread creation, or do they mix tasks with "pure" threads?
There is no clear answer. There might be things that seem better suited for regular threads and don't quite fit the task paradigm, for example threads that need to last for the whole duration of the program, or that might outlive the thread pool. If it is never going to be taken back to the pool, then you might as well handle it as a separate thing.
Then again, since you already have the thread pool, you might want to just force all threads to be tasks even if they are infinitely long tasks... but beware of the law of the instrument. It might seem that every job is a task/nail to your new pool/golden hammer.