I'm writing a C++ program that records the date every time it is run, keeping a record over time. I want to test and debug this record-keeping functionality, but the records themselves can span years; letting time pass naturally to expose bugs would take... a while.
Is there an established way of simulating the passage of time, so I can debug this program more easily? I'm using C's ctime library to acquire dates.
If you want your system to be testable, you'll want to be able to replace your entire time engine.
There are a number of ways to replace a time engine.
The easiest would be to create a new time engine singleton that replicates the time engine API you are using (in your case, ctime).
Then sweep out every use of the ctime-based APIs and redirect them to your new time engine.
This new time engine can be configured to use ctime, or something else.
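For example, a minimal sketch of such a singleton might look like this (the class name and the test hooks are illustrative, not an established API):

#include <ctime>

class TimeEngine {
public:
    static TimeEngine& instance() {
        static TimeEngine engine;
        return engine;
    }

    // Drop-in replacement for std::time(nullptr).
    std::time_t now() const {
        return frozen_ ? frozenTime_ : std::time(nullptr) + offset_;
    }

    // Test hooks: shift the clock or pin it to a fixed instant.
    void advance(std::time_t seconds) { offset_ += seconds; }
    void freeze(std::time_t at) { frozen_ = true; frozenTime_ = at; }
    void thaw() { frozen_ = false; }

private:
    std::time_t offset_ = 0;
    std::time_t frozenTime_ = 0;
    bool frozen_ = false;
};

Every place that previously called std::time(nullptr) would call TimeEngine::instance().now() instead; a test can then call advance(60 * 60 * 24 * 365) to jump a year ahead.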
To be more reliable, you'll even want to change the binary layout and type of any data structures that interact with your old API, audit every case where you convert things to/from void pointers or reinterpret_cast them, and so on.
Another approach is dependency injection, where instead of using a singleton you pass the time-engine as an argument to your program. Every part of your program that needs access to time now either stores a pointer to the time engine, or takes it as a function argument.
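A minimal sketch of that variant, with an illustrative Clock interface and a made-up RecordKeeper consumer:

#include <ctime>

struct Clock {
    virtual ~Clock() = default;
    virtual std::time_t now() const = 0;
};

struct SystemClock : Clock {
    std::time_t now() const override { return std::time(nullptr); }
};

struct FakeClock : Clock {
    std::time_t current = 0;
    std::time_t now() const override { return current; }
    void advance(std::time_t seconds) { current += seconds; }
};

// Whatever keeps the record asks the injected clock for the date.
class RecordKeeper {
public:
    explicit RecordKeeper(const Clock& clock) : clock_(clock) {}
    void recordRun() { lastRun_ = clock_.now(); /* append to the record... */ }
private:
    const Clock& clock_;
    std::time_t lastRun_ = 0;
};

Production code constructs the keeper with a SystemClock; a test passes a FakeClock and advances it by days or years between calls.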
Now that you have control over time, you can arrange for it to pass faster, jump around, etc. Depending on your system you may want to be more or less realistic in how time passes; different schemes will expose different bugs, some real and some artifacts of the simulation.
Even after you do all of this you will not be certain that your program doesn't have time bugs. You may interact with OS time in ways you don't expect. But you can at least solve some of these problems.
Currently, I'm using a Blueprint script to spawn and delete around 60 actors within a radius of a flying pawn.
This creates framerate spikes, and I've read that this is quite a heavy process.
So obviously I would like to improve the performance of this process by simply moving the same logic into C++.
But I would like to know a couple of things first, to make sure what I'm doing is right.
1) Is the SpawnActor function in C++, by itself, faster than the Blueprint node?
2) Which properties of the Blueprint increase the processing time of spawning?
I know that, for example, enabling physics will increase the processing time, but are there any other properties I need to take into consideration?
Thanks to everyone taking the time to read this; any kind of help is much appreciated :)
You can't really say that C++'s SpawnActor is faster, since the Blueprint's SpawnActor ultimately calls the C++ SpawnActor. Of course, if you write the C++ directly you save the few function calls that route the Blueprint node to the C++ function. So I don't think the spike comes from calling SpawnActor through Blueprint, i.e. calling it from C++ won't fix it.
SpawnActor itself does have a cost, so calling it 60 times in a row will certainly cause a spike. I'm not sure exactly how much overhead is in SpawnActor, but at the very least allocating memory for the new Actors costs some time, and if your Actor template has a lot of components it takes longer still. A common technique is therefore to use an Actor Pool: pre-spawn some Actors and reuse them.
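As an illustration of the pool idea, here is a rough sketch against a UE4-style API; the class is made up, error handling is omitted, and in real code the pooled actors should live in a UPROPERTY array so the garbage collector keeps them alive:

#include "Engine/World.h"
#include "GameFramework/Actor.h"

class FSimpleActorPool {
public:
    // Pay the spawning cost once, e.g. at level start.
    void Prewarm(UWorld* World, TSubclassOf<AActor> Class, int32 Count) {
        for (int32 i = 0; i < Count; ++i) {
            AActor* Actor = World->SpawnActor<AActor>(Class);
            Deactivate(Actor);
            Free.Add(Actor);
        }
    }

    AActor* Acquire(const FVector& Location) {
        if (Free.Num() == 0) return nullptr;        // or grow the pool
        AActor* Actor = Free.Pop();
        Actor->SetActorLocation(Location);
        Actor->SetActorHiddenInGame(false);
        Actor->SetActorEnableCollision(true);
        Actor->SetActorTickEnabled(true);
        return Actor;
    }

    void Release(AActor* Actor) { Deactivate(Actor); Free.Add(Actor); }

private:
    void Deactivate(AActor* Actor) {
        Actor->SetActorHiddenInGame(true);
        Actor->SetActorEnableCollision(false);
        Actor->SetActorTickEnabled(false);
    }

    TArray<AActor*> Free;   // real code: keep these in a UPROPERTY so the GC sees them
};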
C++ will always be faster than Blueprints because the latter are template-based, although on the C++ side compilation and bug fixing won't be as easy.
The number of characters will always affect performance because of CPU time, but that's OK. Take into account that access to the characters goes through UE4 iterators and dedicated containers, so it is already optimised and thread-safe.
For mapping multiple custom signals to one slot in Qt, I basically have two options: QSignalMapper or a cast from the sender() pointer (see: http://doc.qt.digia.com/qq/qq10-signalmapper.html).
My question is: which is the more efficient code?
I want to use it in a time-critical section of my program.
Should I consider using separate signals/slots to optimize the code?
Thank you in advance.
You're most likely wrong about what "time critical" means and where your application is actually spending its CPU time. You can't make any arguments without actually measuring things. At this point I believe you're micro-optimizing and wasting your time. Don't do anything optimization-related unless you can measure the starting point and see improvements in real numbers.
If your signal-slot connection is invoked on the order of 1000 times per second, you can do pretty much anything you want - the overhead won't matter. It only starts to matter if you're in the 100k invocations/second range, and then probably you're architecting things wrongly to begin with.
A signal-slot connection without any parameters is always faster than one that sends some parameters. You can simply add a property to the sender object using the dynamic property system, and check for that property by using sender()->property("..."). Dynamic property look-up takes a bit more time than using qobject_cast<...>(sender()) and a call to a member function on a custom QObject or QWidget-derived class. But this is immaterial, because unless you can measure the difference, you don't need to worry about it. Premature optimization is truly the root of all evil.
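To make the two look-ups concrete, here is a small sketch of one shared slot; the Receiver class, the "channel" property and the QPushButton sender are all made up for illustration:

#include <QObject>
#include <QPushButton>
#include <QDebug>

class Receiver : public QObject {
    Q_OBJECT
public slots:
    void onAnyButtonClicked() {
        // Variant 1: a dynamic property set earlier with button->setProperty("channel", 3)
        const int channel = sender()->property("channel").toInt();

        // Variant 2: cast the sender back to its concrete type and use its members
        if (auto* button = qobject_cast<QPushButton*>(sender()))
            qDebug() << "channel" << channel << "clicked:" << button->text();
    }
};

Either way, measure both in your actual hot path before deciding that one of them is too slow.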
Standard hacking case: a hack injects itself into a running process and overwrites process memory using the WriteProcessMemory call. In games this is something you don't want, because it lets the hacker change parts of the game and give himself an advantage.
It is possible to force the user to run a third-party program alongside the game, and I need to know the best way to prevent such injection. I have already tried the EnumProcessModules function, which lists all of the process's DLLs, with no success. It seems to me that the hacks write directly into process memory (the end of the stack?), so they go undetected. At the moment I have come down to a few options.
Create a blacklist of files, file patterns, process names and memory patterns of the best-known public hacks and scan for them with the program. The problem with this is that I would need to maintain the blacklist and ship updates of the program to cover all available hacks. I also found this useful answer, Detecting memory access to a process, but it is possible that some existing DLL already uses those calls, so there could be false positives.
Use ReadProcessMemory to monitor changes at well-known memory offsets (hacks usually use the same offsets to achieve something). I would need to run a few hacks, monitor their behaviour and collect samples of that behaviour to compare against a normal run.
Would it be possible to somehow rearrange the process memory after it starts? Maybe just pushing the process memory down the stack could confuse the hack.
This is an example of the hack call:
WriteProcessMemory(phandler,0xsomeoffset,&datatowrite,...);
So unless the hack is smart enough to search for the actual start of the process, that alone would already be a great success. I wonder if there is a system call that could relocate the memory or somehow insert some null data in front of the stack.
So, what would be the best way to go about this? It is a really interesting and dark area of programming, so I would like to hear as many interesting ideas as possible. The goal is to either prevent the hack from working or to detect it.
Best regards
From time to time, compute the hash or CRC of the application's image in memory and compare it with a known hash or CRC.
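As an illustration, here is a minimal sketch of that idea using a simple FNV-1a hash; how you locate the code region (e.g. the .text section from the module's PE headers) is left out, and the names are made up:

#include <cstddef>
#include <cstdint>

// Hash an arbitrary byte range of the running image.
std::uint64_t HashRegion(const unsigned char* start, std::size_t size) {
    std::uint64_t h = 1469598103934665603ull;   // FNV-1a offset basis
    for (std::size_t i = 0; i < size; ++i) {
        h ^= start[i];
        h *= 1099511628211ull;                  // FNV-1a prime
    }
    return h;
}

// At startup:    baseline = HashRegion(codeStart, codeSize);
// Periodically:  if (HashRegion(codeStart, codeSize) != baseline) -> tampering suspected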
Our service http://activation-cloud.com provides the ability to check integrity of application against the signature stored in database.
I have an application that makes database requests. I guess it doesn't actually matter what kind of database I'm using, but let's say it's a simple SQLite-driven database.
Now, this application runs as a service and makes some number of requests per minute (that number might actually be huge).
I want to benchmark the queries to get their count and their maximal / minimal / average running time over some period, and I wish to design my own tool for this (obviously, such tools already exist, but I need my own for specific reasons :).
So, could you advise an approach for this task?
I guess there are several possible cases:
1) I have access to the application source code. Here, obviously, I want to make some sort of cross-application integration, probably using pipes. Could you advise on how this should be done, and (if there is one) on any other possible solution?
2) I don't have the sources. Is it even possible to perform some neat injection from my application to benchmark the other one? I hope there is a way, even a hacky one.
Thanks a lot.
See C++ Code Profiler for a range of profilers.
Or C++ Logging and performance tuning library for rolling your own simple version.
My answer is valid only for case 1).
In my experience, profiling is a fun but difficult task. Using professional tools can be effective, but it can take a lot of time to find the right one and learn how to use it properly. I usually start in a very simple way, with two very simple classes. The first one, ProfileHelper, records the start time in its constructor and the end time in its destructor. The second class, ProfileHelperStatistic, is a container with extra statistical capability (a std::multimap plus a few methods to return the average, standard deviation and other funny stuff).
ProfileHelper holds a reference to the container, and before exiting, its destructor pushes the data into the container. You can declare the ProfileHelperStatistic in main, and if you create a ProfileHelper on the stack at the beginning of a specific function, the job is done: the constructor of the ProfileHelper stores the starting time and the destructor pushes the result into the ProfileHelperStatistic.
It is fairly easy to implement and, with minor modifications, can be made cross-platform. The time to create and destroy the objects is not recorded, so it does not pollute the results. Calculating the final statistics can be expensive, so I suggest running that once at the end.
You can also customize the information stored in ProfileHelperStatistic by adding extra fields (a timestamp or memory usage, for example).
The implementation is fairly easy: two classes no bigger than 50 lines each (see the sketch after these hints). Just two hints:
1) catch all exceptions in the destructor!
2) consider using a collection with constant-time insertion if you are going to store a lot of data.
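A minimal sketch of the two classes described above, using the names from this answer (the statistics methods are left as a stub):

#include <chrono>
#include <map>
#include <string>

class ProfileHelperStatistic {
public:
    void add(const std::string& tag, double ms) { samples_.insert({tag, ms}); }
    // average(), standardDeviation(), etc. computed once at the end
private:
    std::multimap<std::string, double> samples_;
};

class ProfileHelper {
public:
    ProfileHelper(std::string tag, ProfileHelperStatistic& stats)
        : tag_(std::move(tag)), stats_(stats),
          start_(std::chrono::steady_clock::now()) {}

    ~ProfileHelper() {
        try {   // hint 1: never let the destructor throw
            const auto end = std::chrono::steady_clock::now();
            stats_.add(tag_, std::chrono::duration<double, std::milli>(end - start_).count());
        } catch (...) {}
    }

private:
    std::string tag_;
    ProfileHelperStatistic& stats_;
    std::chrono::steady_clock::time_point start_;
};

// Usage in a function to be measured:
// void runQuery() { ProfileHelper p("runQuery", globalStats); /* ... */ }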
This is a simple tool and it can help you profile your application very effectively. My suggestion is to start with a few macro-level functions (5-7 logical blocks) and then increase the granularity. Remember the 80-20 rule: 20% of the source code uses 80% of the time.
A last note about the database: databases tune performance dynamically, so if you run a query several times, by the end it will be quicker than at the beginning (Oracle does this, and I guess other databases do as well). In other words, if you test the application heavily and artificially, focusing on just a few specific queries, you can get overly optimistic results.
I guess it doesn't actually matter what kind of database I am using, but let's say it's a simple SQLite-driven database.
It's actually very important what kind of database you use, because the database manager might have integrated monitoring.
I can only speak about IBM DB/2, but I believe DB/2 is not the only DBMS with integrated monitoring tools.
Here, for example, is a short overview of what you can monitor in IBM DB/2:
statements (all executed statements, execution count, prepare-time, cpu-time, count of reads/writes: tablerows, bufferpool, logical, physical)
tables (count of reads / writes)
bufferpools (logical and physical reads/writes for data and index, read/write times)
active connections (running statements, count of reads/writes, times)
locks (all locks and type)
and many more
The monitor data can be accessed via SQL or an API from your own software, which is what DB2 Monitor does, for example.
Under Unix, you might want to use gprof and its graphical front-end, KProf. Compile your app with the -pg flag (I assume you're using g++), run it, and then run gprof on the output to observe the results.
Note, however, that this type of profiling measures the overall performance of an application, not just the SQL queries. If it's the performance of the queries you want to measure, you should use tools designed for your DBMS - for example, MySQL has a built-in query profiler (for SQLite, see this question: Is there a tool to profile sqlite queries?).
There is a (Linux) solution you might find interesting, since it can be used in both cases.
It's the LD_PRELOAD trick. It's an environment variable that lets you specify a shared library to be loaded right before your program is executed. The symbols loaded from this library override any others available on the system.
The basic idea is to use this custom library as a wrapper around the original functions.
There are plenty of resources available that explain how to use this trick: 1, 2, 3
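As a sketch of how such a wrapper could look for case 2) with SQLite, here is a hypothetical interposer that times every sqlite3_exec call; the file and library names are made up, and it only works if the target program loads SQLite as a shared library:

// Build: g++ -shared -fPIC sqlite_trace.cpp -o libsqlite_trace.so -ldl
// Run:   LD_PRELOAD=./libsqlite_trace.so ./the_service
#include <dlfcn.h>
#include <chrono>
#include <cstdio>

struct sqlite3;   // opaque, so the SQLite headers are not needed
using exec_fn = int (*)(sqlite3*, const char*,
                        int (*)(void*, int, char**, char**), void*, char**);

extern "C" int sqlite3_exec(sqlite3* db, const char* sql,
                            int (*cb)(void*, int, char**, char**),
                            void* arg, char** errmsg) {
    // Look up the real implementation the first time we are called.
    static exec_fn real = reinterpret_cast<exec_fn>(dlsym(RTLD_NEXT, "sqlite3_exec"));
    const auto start = std::chrono::steady_clock::now();
    const int rc = real(db, sql, cb, arg, errmsg);
    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                        std::chrono::steady_clock::now() - start).count();
    std::fprintf(stderr, "[sql-trace] %lld us  rc=%d  sql=%s\n",
                 static_cast<long long>(us), rc, sql);
    return rc;
}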
Here, obviously, I want to make some sort of cross-application integration, probably using pipes.
I don't think that's obvious at all.
If you have access to the application, I'd suggest dumping all the necessary information to a log file and process that log file later on.
If you want to be able to activate and deactivate this behavior on-the-fly, without re-starting the service, you could use a logging library that supports enabling/disabling log channels on-the-fly.
Then you'd only need to send a message to the service by whatever means (socket connection, ...) to enable/disable logging.
If you don't have access to the application, then I think the best way would be what MacGucky suggested: let the profiling/monitoring tools of the DBMS do it. E.g. MS-SQL has a nice profiler that can capture requests to the server, including all kinds of useful data (CPU time for each request, IO time, wait time etc.).
And if it's really SQLite (plus you don't have access to the source) then your chances are rather low. If the program in question uses SQLite as a DLL, then you could substitute your own version of SQLite, modified to write the necessary log files.
Use Apache JMeter to test the performance of your SQL queries under high load.
I have a wizard class that gets used a lot in my program. Unfortunately, the wizard takes a while to load mostly because the GUI framework is very slow. I tried to redesign the wizard class multiple times (like making the object reusable so it only gets created once) but I always hit a brick wall somewhere. So, at this point is it a huge ugly hack to just load 50 instances of this beast into a vector and just pop them off as I use them? That way the delay will only be noticed on startup and run fine thereafter. Too much of a hack? Is such a construct common?
In games, we often first allocate and construct everything needed for a game session, then recycle the short-lived objects, trying to get zero allocations/deallocations while the game session is running.
So no, it's not really a hack; it's just good sense to make the computer do less work so it runs faster. One strategy is "caching": in general, first compute your invariant data, then run with the dynamic parts. Memory allocation, object construction, etc. should be prepared before use, where possible and necessary.
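For what it's worth, here is a minimal sketch of the "build them all up front, pop them off as needed" approach from the question; the Wizard type and the pool size are placeholders:

#include <cstddef>
#include <memory>
#include <vector>

class Wizard { /* expensive to construct because of the GUI framework */ };

class WizardPool {
public:
    explicit WizardPool(std::size_t count) {
        pool_.reserve(count);
        for (std::size_t i = 0; i < count; ++i)
            pool_.push_back(std::make_unique<Wizard>());   // pay the cost at startup
    }

    std::unique_ptr<Wizard> acquire() {
        if (pool_.empty()) return std::make_unique<Wizard>();  // fall back if exhausted
        auto w = std::move(pool_.back());
        pool_.pop_back();
        return w;
    }

    void release(std::unique_ptr<Wizard> w) { pool_.push_back(std::move(w)); }

private:
    std::vector<std::unique_ptr<Wizard>> pool_;
};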
Unfortunately, the wizard takes a while to load mostly because the GUI framework is very slow.
Isn't a wizard just a form-based template? Shouldn't that carry essentially no overhead? Find what's slowing the framework down (uncompressed background image?) and fix the root cause.
As a stopgap, you could create the windows in the background and not display them until the user asks. But that's obviously just moving the problem somewhere else. Even if you create them in a background thread at startup, the user's first command might ask for the last wizard and then they have to wait 50x as long… which they'll probably interpret as a crash. At the very least, anticipate and test such corner cases. Also test on a low-RAM setup.
Yes it is bad practice, it breaks RFC2549 standard.
OK ok, I was just kidding. Do whatever is best for your application.
It isn't a matter of "hacks" or "standards".
Just make sure you have proper documentation about what isn't as straightforward as it should be (such as hacks).
Trust me: if a 5k investment produced a product with lots of hacks (such as Windows), then hacks must really help at some point.