I am creating a game for iOS. The game is written as a C++ library so that it can be ported to other platforms. I am also using Swift and Objective-C in the view controller to call the game's update and draw functions and to process user input.
I am having an issue where the game begins to stutter when I press a UIButton to perform some action in the game. I have noticed that even when the UIButton callback function is empty, the game still stutters.
The game idles at ~30% CPU usage. When I press the button, the CPU usage drops to ~15% for a few seconds, then gradually makes its way back up to 30%. It's during this time that the game stutters.
It seems to me that when the button is pressed, the operating system is allocating fewer resources to the app, causing it to slow down.
To try to resolve this problem, I moved the game update function onto a new thread to take some weight off the main thread. This did not work.
Any idea what could be causing this?
I'm writing a C++ application whose main window needs to receive real-time data from a server and draw plots and histograms in real time based on this data. I'm using GTK3 (actually its C++ binding, gtkmm) and Cairo.
In particular, data arrives from the network every second, and the display is refreshed each time data is received, so once per second. The refresh is done by calling the invalidate_rect() method for the entire drawing area, whose on_draw() handler redraws all figures and plots using the newly received data.
Now, the application works, but it's extremely unreliable. In particular, it freezes very often, especially when the overall CPU load increases; my application's own CPU and memory usage stay very low. Suddenly the window turns grey and becomes unresponsive, and I need to kill it with Ctrl-C, since even pressing the window close icon doesn't work.
I'm wondering: is it the wrong approach to call invalidate_rect() in the scenario above? What is a better way, using GTKMM/Cairo, to obtain smooth graphics in a reliable way?
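For reference, a minimal sketch of how this kind of refresh path is often wired in gtkmm, assuming the network read happens on a worker thread: a Glib::Dispatcher hands the notification to the GUI thread, which then invalidates the drawing area. The names PlotArea and fetch_latest_data are invented for illustration and are not from the question above.
#include <gtkmm.h>
#include <thread>

// Hypothetical drawing widget: all GTK/Cairo calls stay on the GUI thread.
class PlotArea : public Gtk::DrawingArea {
public:
    PlotArea() {
        // emitted by the worker thread, delivered on the GUI thread
        m_dispatcher.connect(sigc::mem_fun(*this, &PlotArea::on_new_data));
        // sketch only: the worker is detached, no clean shutdown handling
        std::thread([this] {
            for (;;) {
                fetch_latest_data();   // blocking network read, ~1 s cadence (assumed)
                m_dispatcher.emit();   // safe to call from the worker thread
            }
        }).detach();
    }

private:
    void on_new_data() {
        queue_draw();                  // invalidates the whole drawing area
    }

    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override {
        // redraw all plots/histograms from the most recently received data
        cr->set_source_rgb(1.0, 1.0, 1.0);
        cr->paint();
        return true;
    }

    void fetch_latest_data() { /* placeholder for the real network code */ }

    Glib::Dispatcher m_dispatcher;
};
The important point in this sketch is that only emit() is called from the worker thread; every GTK and Cairo call happens on the GUI thread.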
I am working on a Qt project in which the exact time at which certain events occur is of prime importance. To be specific: I have a very simple animation that must be drawn to the screen at a certain time, say t1. Once I issue the QWidget update to start the animation, it will take a small time dt (depending on screen refresh rates etc.) to actually show the update on screen. I need to measure this extra time dt, and I am unsure how to do it.
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
Similarly, when the user presses a key, it will be registered after a small delay that depends on the polling rate of the keyboard. I need to account for this delay as well. If I could get the polling rate, I would know on average how long the delay will be.
What you're asking for is--by definition--not possible from within the computer.
How would you expect to be able to tell when a pixel "actually showed up" on the screen, without a sensor stuck to the monitor and synchronized to an atomic clock that the computer also has access to? :-)
The odds are stacked even further against Qt because it's generally used as an abstraction layer on top of Win/OSX/Linux. Those aren't real-time operating systems of any kind in the first place.
All you can know is when you asked for something to happen. Then you can time how long it takes for you to get back control to make another request. You can set some expectations on your basic "frame rate" throughput by doing this, but there are countless factors that could lead to wide variations in performance at any moment in time.
If you can dig through to the kernel/driver level you can find out a closer-to-the-metal measure of when the actual effect went to the hardware. But that's not Qt's domain, and still doesn't tell you the "actual" answer of when the effect manifested in the outside world.
About the best you're going to get out of Qt is a periodic QTimer. It can make a callback at (roughly) millisecond resolution. If that's not good enough... you're going to need a smaller boat. :-)
You might get a little boost from stuff related to the search term "high resolution timer":
Qt high-resolution timer
http://qt-project.org/forums/viewthread/31941
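As a small sketch of the periodic-QTimer approach mentioned above, assuming Qt 5 or later (Qt::PreciseTimer) is available; QElapsedTimer is used here only to show the actual spacing between callbacks:
#include <QCoreApplication>
#include <QTimer>
#include <QElapsedTimer>
#include <QDebug>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);

    QElapsedTimer clock;
    clock.start();

    QTimer timer;
    timer.setTimerType(Qt::PreciseTimer);   // ask for ~millisecond accuracy (best effort)
    timer.setInterval(16);                  // roughly one frame at 60 Hz
    QObject::connect(&timer, &QTimer::timeout, [&clock] {
        // the only thing we can truly measure: when control came back to us
        qDebug() << "tick at" << clock.nsecsElapsed() / 1e6 << "ms";
    });
    timer.start();

    return app.exec();
}
Running this shows how much the callback spacing wanders around the requested interval, which is exactly the "countless factors" variability described above.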
I thought of using QTime and QElapsedTimer object in the paint event of the QWidget but I'm not sure if that would achieve my goal.
This is, in fact, the only way to do it, and is all you can actually do. There is nothing further that can be done without resorting to a real-time operating system, custom drivers, or external hardware.
You may not need both - the QElapsedTimer measuring the time passed since the last update is sufficient.
Do note that when the event loop is empty, the delay between invocation of widget.update() and the paintEvent executing is under a microsecond, assuming that your process wasn't preempted.
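A minimal sketch of that measurement, with the names AnimWidget and m_sinceUpdate made up for illustration: the timer is restarted when update() is requested and read back inside paintEvent(), giving the request-to-paint delay as seen from within the process.
#include <QWidget>
#include <QElapsedTimer>
#include <QPainter>
#include <QDebug>

class AnimWidget : public QWidget {
public:
    void startAnimation() {
        m_sinceUpdate.start();   // t1: the moment we ask for the repaint
        update();                // schedules a paint event on the event loop
    }

protected:
    void paintEvent(QPaintEvent*) override {
        // dt as seen from inside the process; the pixels appear some time later
        qDebug() << "update() -> paintEvent():"
                 << m_sinceUpdate.nsecsElapsed() / 1e6 << "ms";
        QPainter p(this);
        // ... draw the animation frame here ...
    }

private:
    QElapsedTimer m_sinceUpdate;
};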
It is a reaction-time experiment for some studies. A visual stimulus is presented, to which the user responds via keyboard or mouse. To find the reaction time precisely, I need to know when the stimulus was presented on the screen and when the key was pressed.
There is essentially only one way of doing it right without resorting to a realtime operating system or a custom driver, and a whole lot of ways of doing it wrong. So, what's the right way?
A small area of the screen needs to change color or brightness coincident with the presentation of the visual stimulus. You attach a fiber optic to the screen and feed it into a receiver attached to an external event timer. The contact closure in the keyboard is also fed to the same event timer. This lets you precisely time the latency of the response with no regard for operating system latencies, thread preemption, etc. The event timer can be something as cheap as an Arduino, if you are willing to do a bit more development work.
If you are showing the stimulus repetitively and need a certain timing between stimulus presentations, you simply repeat the presentation often and collect both response latency and stimulus-to-stimulus timing in your data. You can then discard the presentations that were outside of desired tolerances.
This approach is screen-agnostic and you can use it even on a mobile device, as long as it can somehow interface with your timer hardware. The timer hardware can of course be networked, making interfacing easy.
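To make the event-timer idea concrete, here is a toy sketch of what an Arduino-based timer might look like; the pin assignments (photodiode on A0, key contact on pin 2) and the threshold are assumptions for illustration, not part of the answer above.
// Toy event timer: reports the latency between a light change on the screen
// patch (photodiode) and the key contact closure, in microseconds.
const int PHOTO_PIN = A0;     // photodiode/fiber receiver (assumed wiring)
const int KEY_PIN   = 2;      // keyboard contact closure, active low (assumed)
const int THRESHOLD = 512;    // ADC level that counts as "stimulus on" (tune this)

void setup() {
  pinMode(KEY_PIN, INPUT_PULLUP);
  Serial.begin(115200);
}

void loop() {
  // wait for the screen patch to light up
  while (analogRead(PHOTO_PIN) < THRESHOLD) { }
  unsigned long stimulusUs = micros();

  // wait for the key contact to close
  while (digitalRead(KEY_PIN) == HIGH) { }
  unsigned long responseUs = micros();

  Serial.print("reaction time (us): ");
  Serial.println(responseUs - stimulusUs);

  // wait for both signals to clear before the next trial
  while (analogRead(PHOTO_PIN) >= THRESHOLD || digitalRead(KEY_PIN) == LOW) { }
}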
I'm working on a two player iOS game that is similar to checkers. I'm using cocos2d to create this game.
I want a 0.5 second pause between when the player's move renders and when the computer's move renders, to simulate think time.
The flow of the game is controlled using NSNotification events and looks like this...
Player (computer or human) submits a move -> The board adds the new sprite -> The game controller updates the current player and asks them to submit a move.
I've tried adding usleep(500000) at the end of the board update or the beginning of the game update. What ends up happening is that the sprites added in the board update for the human player don't show up until after the computer player has submitted its move. So the game waits 500 milliseconds and then updates with both moves.
Is there a way to force the CCLayer to update its child sprites before the usleep, or is there just a better way of adding this think time?
Thanks
If you schedule an update selector in your controller, you can let the think time slip by inside its (ccTime)dt callback.
in your .h
float _slipTime;
in your .m
// with other declaratives
static float THINK_TIME=.5f;
// last line before the stall
_slipTime=0.f;
[self schedule:@selector(pauseForThink:)];
-(void) pauseForThink:(ccTime) dt {
    _slipTime += dt;
    if (_slipTime > THINK_TIME) {
        [self unschedule:@selector(pauseForThink:)];
        // trigger here whatever you wanted to accomplish after
        // the think pause.
    }
}
OK, this is simple, but it prevents the main loop from being blocked (which is what happens with your sleep). When running at 60 FPS, the pauseForThink: method will be called about 30 times, and cocos2d will get 30 draw cycles during the pause.
I created a widget that serves as a kind of popup window and hence should have a drop shadow all around, to visually raise it from the background. I initialize the drop shadow effect in the constructor of my popup widget as follows:
dropshadow = new QGraphicsDropShadowEffect(this);
dropshadow->setBlurRadius(32);
dropshadow->setColor(QColor("#121212"));
dropshadow->setOffset(0,0);
setGraphicsEffect(dropshadow);
The application runs on an embedded system with an Intel Atom CPU, a custom Linux distribution, and Qt v4.7.3 running with a QWS server. When I disable the drop shadow, my CPU usage is less than 10% when the GUI is idle. Enabling the drop shadow raises the CPU usage to more than 80%. Profiling the app shows that most of the CPU time is spent within libQtGui.so.4.7.3.
Does anyone have an idea why the CPU usage explodes like this even though there is absolutely nothing going on in the GUI, not even mouse movement?
Edit: Changing the size of the popup changes the amount of CPU usage. Reducing the size to a quarter reduces the CPU usage to about a quarter. Very strange.
The problem was only partly with the drop shadow. It seems that repainting a drop shadow requires quite a lot of CPU time, which is OK if it is not redrawn too often. The problem was simple, really. The widget behind this popup was redrawn four to five times per second, and hence the popup needed to be redrawn too. This swallowed huge amounts of CPU time. The solution is equally simple: avoid repaint events if nothing really changes on screen.
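As a hedged illustration of that fix (the widget and member names here are invented, not from the original code): the background widget only calls update() when its displayed value actually changes, so an idle GUI triggers no repaints and the popup's shadow is never re-rendered unnecessarily.
// Hypothetical background widget that previously repainted 4-5 times per second.
void StatusWidget::setValue(double newValue)
{
    if (newValue == m_value)
        return;              // nothing visible changed: skip the repaint entirely
    m_value = newValue;
    update();                // only now schedule a paint event (which also redraws the popup's shadow)
}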
I am writing a game in C++ using SDL 1.2.14 and the OpenGL bindings included with it.
However, if the game is in fullscreen and I Alt-Tab out and then back into the game, the results are unpredictable. The game logic still runs; however, rendering stops. I only see the last frame of the game that was drawn before the Alt-Tab.
I've made sure to re-initialize the OpenGL context and reload all textures when I get an SDL_APPACTIVE = 1 event, and that seems to work for only one Alt-Tab; all subsequent Alt-Tabs stop rendering (I've made sure SDL_APPACTIVE is properly handled each time and that the context is set accordingly).
I'd hazard a guess that SDL does something under the hood when minimizing the application that I'm not aware of.
Any ideas?
It's good practice to "slow down" your fullscreen application when it loses focus. Two reasons:
The user may need to Alt-Tab and do something important (like closing a heavy application that's hogging resources). When they switch, the new application takes control, and the OS must release resources from your app as needed.
A modern OS makes heavy use of the GPU itself, which means it needs to reclaim some graphics memory to work.
Try shutting down every GL resource you use when APPACTIVE=0 and allocating them again on APPACTIVE=1. If that solves it, it was "your fault". If it does not, it's an SDL (or GL, or OS) bug.
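A minimal sketch of how that might look with SDL 1.2's activation events; releaseGLResources() and recreateGLResources() are hypothetical stand-ins for whatever teardown/reload the game already does.
#include <SDL/SDL.h>

// Hypothetical hooks into the game's own resource management.
void releaseGLResources();   // delete textures, display lists, FBOs, ...
void recreateGLResources();  // re-create context state and reload textures

void handleEvents(bool& appActive)
{
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        if (event.type == SDL_ACTIVEEVENT &&
            (event.active.state & SDL_APPACTIVE)) {
            if (event.active.gain == 0) {        // minimized / Alt-Tabbed away
                appActive = false;
                releaseGLResources();
            } else {                             // restored
                appActive = true;
                recreateGLResources();
            }
        }
    }
}
While appActive is false, the game can also skip rendering and sleep between logic updates, which is the "slow down when you lose focus" part of the advice above.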