I'm attempting to run two applications simultaneously on Windows 7. However, I'm finding that whichever has focus runs at normal speed while the other clearly runs far slower. (For reference, one is a Unity application and the other is a C++ DirectX application.) Has anyone encountered something like this? Is there a way to allow both applications to run at full speed? The system ought to have the resources to run both; neither is very complex, and when I monitor the system resources everything looks fine.
Windows automatically gives fewer system resources to unfocused programs, no matter their complexity or requirements. I don't believe you can disable that.
That makes sense. I looked into it a bit deeper and found that the Desktop Window Manager was the one causing the headache. I stopped the service, set the processor affinity for each application, and everything was golden after that.
Is there a standard Windows way to do things such as "start fan", "decrease fan speed" or the like, from C/C++?
I have a suspicion it might be ACPI, but I am a frail mortal and cannot read that kind of documentation.
Edit: e.g. Windows 7 lets you select, in your power plan, options such as "passive cooling" (fans only when things get hot?) vs. "active cooling" (keep the CPU proactively cool?). It seems the OS does have a generic way to control the fan.
I am at the moment working on a project that, among other things, controls the computer fans. Basically, the fans are controlled by the superIO chip of your computer. We access the chip directly using port-mapped IO, and from there we can get to the logical fan device. Using port-mapped IO requires the code to run in kernel mode, but Windows does not supply any drivers for generic port IO (with good reason, since it is a very powerful tool), so we wrote our own driver and used that.
If you want to go down this route, you basically need knowledge in two areas: driver development and how to access and interpret superIO chip information. When we started the project, we didn't know anything in either of these areas, so it has been learning by browsing, reading and finally doing. To gain the knowledge, we have been especially helped by looking at these links:
The WDK, which is the Windows Driver Kit. You need this to compile any driver you write for Windows. With it comes a lot of source code for example drivers, including a driver for general port-mapped IO called portio.
WinIO has source code for a driver in C, a DLL in C that programmatically installs and loads that driver, and some C# code for a GUI that loads the DLL and reads/writes to the ports. The driver is very similar to the one in portio.
lm-sensors is a Linux project that, among other things, detects your superIO chip. /prog/detect/sensors-detect is the Perl program that does the detecting, and we have spent some time going through the code to see how to interface with a superIO chip.
When we were going through the lm-sensors code, it was very nice to have tools like RapidDriver and RW-Everything, since they allowed us to simulate a run of sensors-detect. The latter is the more powerful and is very helpful in visualising the IO space, while the former provides easier access to some operations that map better to the ones in sensors-detect (read/write a byte to a port).
Finally, you need to find the datasheet of your superIO chip. From the examples I have seen, the environment controllers of each chip provide similar functionality (r/w fan speed, read temperature, read chip voltage) but vary in which registers you have to write to in order to get to this functionality. This place has had all the datasheets we have needed so far.
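To give a concrete picture of what "accessing the chip" means, here is a minimal, hypothetical C++ sketch of the index/data register pattern most superIO chips use. ReadPortByte/WritePortByte are assumed wrappers around a kernel port-IO driver (such as the portio sample above), since user-mode code cannot touch IO ports directly, and the register numbers follow a common Winbond/ITE convention; the real values must come from your chip's datasheet:

#include <cstdint>

// Assumed wrappers around a kernel port-IO driver (e.g. the WDK portio
// sample); user-mode code cannot issue in/out instructions on its own.
uint8_t ReadPortByte(uint16_t port);
void WritePortByte(uint16_t port, uint8_t value);

const uint16_t INDEX_PORT = 0x2E;  // some boards use 0x4E instead
const uint16_t DATA_PORT  = 0x2F;  // always INDEX_PORT + 1

// Classic index/data access: write the register number to the index port,
// then read or write its value through the data port.
uint8_t ReadConfigRegister(uint8_t reg) {
    WritePortByte(INDEX_PORT, reg);
    return ReadPortByte(DATA_PORT);
}

void WriteConfigRegister(uint8_t reg, uint8_t value) {
    WritePortByte(INDEX_PORT, reg);
    WritePortByte(DATA_PORT, value);
}

uint16_t IdentifyChip() {
    // Many chips expose a 16-bit ID in config registers 0x20/0x21. Entering
    // the chip's config mode first (chip-specific magic, e.g. writing 0x87
    // twice on many Winbond parts) is omitted here; see the datasheet.
    return (uint16_t)((ReadConfigRegister(0x20) << 8) | ReadConfigRegister(0x21));
}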
If you want something real quick to just lower the fans to a level where you know things won't overheat, there's the SpeedFan program. Figuring out how to configure the early versions to automatically lower fans to 50% on startup was so painful that my first approach was simply to byte-patch it to start the only superIO-managed fan I had at a lower speed. The newer versions are still a bit tough, but it's doable: there's a graphical slider system that looks like an audio equalizer, except that the x axis is temperature and the y axis is fan speed. You drag the sliders down one by one. Once you've figured out how to get manual control of the fan you want, this is the next step.
There's a project to monitor hardware (like fans) with C#:
http://code.google.com/p/open-hardware-monitor/
I haven't looked at it extensively, but the source code and its use of WinRing0.sys at least give the impression that, if you know which fan controller you have and have its datasheet, it should be modifiable to set values as well as read them. I don't know what tool (besides a kernel debugger) is suited to observing what SpeedFan does, if you'd rather snoop around and imitate SpeedFan instead of reading the datasheets and trying things out.
Yes, it would be ACPI, and to my knowledge Windows doesn't give much (if any) control over that from user space. So you'd have to start mucking with drivers, which is nigh impossible on Windows.
That said, Google reveals a few open-source Windows libraries for this for specific hardware... so depending on your hardware you might be able to find something.
ACPI may or may not allow you to adjust the fan settings. Some BIOS implementations may not allow that control at all; they may force control depending on the BIOS/CMOS settings. One would be hard-pressed to find a good use case where the BIOS control (even customized) is insufficient. I have come across situations where it indeed was, but not on all motherboard platforms.
The Windows Management Instrumentation (WMI) library does provide a Win32_Fan class and even a SetSpeed method. Alas, the docs say this is not implemented, so I guess it's not very helpful. But you may be able to control things by setting the power state.
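For reference, a minimal sketch of what querying that class looks like from C++ with the WMI COM API (error handling omitted; since SetSpeed is documented as unimplemented, this only reads what WMI exposes):

#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main() {
    // Standard WMI boilerplate: COM init, locator, connect to ROOT\CIMV2.
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void**>(&locator));

    IWbemServices* services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr, nullptr,
                           0, nullptr, nullptr, &services);
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                      nullptr, EOAC_NONE);

    // Enumerate the fans WMI knows about.
    IEnumWbemClassObject* results = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT * FROM Win32_Fan"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        nullptr, &results);

    IWbemClassObject* fan = nullptr;
    ULONG returned = 0;
    while (results->Next(WBEM_INFINITE, 1, &fan, &returned) == S_OK) {
        VARIANT name;
        fan->Get(L"Name", 0, &name, nullptr, nullptr);  // e.g. "Cooling Device"
        if (name.vt == VT_BSTR) std::wcout << name.bstrVal << std::endl;
        VariantClear(&name);
        fan->Release();
    }

    results->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}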
So I need to simulate Isis2 in ns-3. (I also need to modify Isis2 slightly, wrapping it with some C/C++ code, since I need at least quasi-real-time, mission-critical behavior.)
Since I am far from having any of that implemented, it would be interesting to know whether this is a suitable approach. I specifically need to monitor the performance of the consensus during sporadic wifi (ad hoc) behavior.
Would it make sense to virtualize a machine for each instance of Isis2, then use the tap bridge model and analyze the traffic in the ns-3 channel?
(I also need to log the events on each instance, composing the various data into a unified presentation.)
You need to start by building an Isis2 application program, and this would have to be done using C/CLI or C++/CLI. C++/CLI will be easier because the match with the Isis2 type system is closer. But as I type these words, I'm trying to remember whether Mono actually supports C++/CLI. If there isn't a Mono compiler for C++/CLI, you might be forced to use C# or IronPython. Basically, you have to work with what the compiler will support.
You'll build this and the library on your Mono platform and should test it out, which you can do on any Linux system. Once you have it working, that's the thing you'll experiment with on NS3. Notice that if you work on Windows, you would be able to use C++/CLI (for sure) and could then just make a Windows VM for NS3. So this would mean working on Windows, but not needing to learn C#.
This is because Isis2 is a library for group communication, multicast, file replication and sharing, DHTs and so forth and to access any particular functionality you need an application program to "drive" it. I wouldn't expect performance issues if you follow the recommendations in the video tutorials and the user manual; even for real-time uses the system is probably both fast enough and steady enough in its behavior.
Then yes, I would take a virtual machine with the needed binaries for Mono (Mono is loaded from DLLs so they need to be available at the right virtual file system locations) and your Isis2 test program and run that within NS3. I haven't tried this but don't see any reason it wouldn't work.
Keep in mind that the default timer settings for timeout and retransmission are very slow, tuned for running on Amazon AWS inside a data center. So once you have this working, but before simulating your wifi setup, you may want to experiment with tuning the system to be more responsive in that setting. I'm thinking that ISIS_DEFAULTTIMEOUT will probably be way too long for you, and the RTDELAY setting may also be too long. Amazon AWS is a peculiar environment, and what makes Isis2 stable in AWS might not be ideal in a wifi setting with very different goals... but all of those parameters can be tuned by just setting the desired values in the environment on the line that launches your test program (something like ISIS_DEFAULTTIMEOUT=... mono yourprogram.exe, with whatever value you settle on), or with the bash export command.
I am developing a multi-threaded application which ran fine on my development system, which has 8 cores. When I ran it on a PC with 2 cores, I encountered some synchronization issues.
Apart from turning off hyper-threading, is there any way of limiting the number of cores an application can use, so that I can emulate single- and dual-core environments for testing and debugging?
My application is written in C++ using Visual Studio 2010.
We always test in virtual machines nowadays since it's so easy to set up specific environments with given limitations.
For example, VMware easily allows you to limit the number of processors in use, how much memory there is, hard disk sizes, the presence of USB devices, floppies or printers, and all sorts of other wondrous things.
In fact, we have scripts which do all the work at the push of a button, from restoring the VM to a known initial state, then booting it up, installing the code over the network, running a test cycle then moving the results to an analysis machine on the network as well.
It greatly speeds up and simplifies the testing regime.
You want the SetProcessAffinityMask function or the SetThreadAffinityMask function.
The former works on the whole process and the latter on a specific thread.
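A minimal sketch of the process-wide variant, restricting the current process to the first two logical CPUs to reproduce dual-core behavior (the mask has one bit per CPU and must be a subset of the system's affinity mask):

#include <windows.h>
#include <iostream>

int main() {
    // Each set bit allows one logical processor: 0x3 = CPUs 0 and 1.
    DWORD_PTR mask = 0x3;
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::cerr << "SetProcessAffinityMask failed: " << GetLastError() << "\n";
        return 1;
    }
    // ... run the code under test here; it will only be scheduled on 2 cores.
    return 0;
}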
You can also limit the active cores via the Windows Task Manager: right-click the process name and select "Set Affinity".
I want to create a C++ application that will run on some Linux platform on a specific laptop. However, I do not want the users of this laptop to use any applications or system features other than this program, much like the kiosk modes you would find on computers in a typical internet café.
One issue is that the laptop will be booted by the user, so my software has to start automatically, leaving as little room as possible for the user to interfere with the process. It does not have to be completely secure, but it should be as close as possible.
What would be the best way to accomplish such a thing? Are there (free) Linux distributions specifically made for this (if not, I will probably use Arch Linux)? Are there any steps I could or should take in my program, or can I leave it all to the OS? Would creating my own little Linux distribution specifically for this be worth it?
This shouldn't be on Stack Overflow, but anyway: run a plain X session with no window manager, and start your program fullscreen inside that session. Done.
I run a small XUL application this way:
X :10 &                                             # start a bare X server on display :10 (no window manager)
sleep 10                                            # crude wait for the server to come up
DISPLAY=:10 xulrunner ~/zkfoxtemp/application.ini   # launch the app on that display
I would use a minimal live Linux distribution; I prefer Tiny Core Linux, but most will do.
- Using a minimal distribution ensures that the system has almost no features or programs you didn't put there, and makes it easy to modify according to your needs.
- Use a window manager, as many programs don't behave properly when run in a plain X session (especially if they use pop-up windows), but remove all its menus and shortcuts.
- Prefer booting from read-only media; this will minimize the chances of corruption (accidental or intentional).
- Remove unneeded services and features from the boot and login scripts.
I'm considering writing a new Windows GUI app, where one of the requirements is that the app must be very responsive, quick to load, and have a light memory footprint.
I've used WTL for previous apps I've built with this type of requirement, but as I use .NET all the time in my day job, WTL is getting more and more painful to go back to. I'm not interested in using .NET for this app, as I still find the performance of larger .NET UIs lacking, but I am interested in using a better C++ framework for the UI, like Qt.
What I want to be sure of before starting is that I'm not going to regret this on the performance front.
So: Is Qt fast?
I'll try to qualify the question with examples of what I'd like to come close to matching: my current WTL app is Programmer's Notepad. The version I'm working on weighs in at about 4 MB of code for a 32-bit, release-compiled build with a single language translation. On a modern fast PC it takes 1-3 seconds to load, which is important as people fire it up often to avoid IDEs etc. The memory footprint is usually 12-20 MB on 64-bit Win7 once you've been editing for a while. You can run the app non-stop, leave it minimized, whatever, and it always jumps to attention instantly when you switch to it.
For the sake of argument let's say I want to port my WTL app to Qt for potential future cross-platform support and/or the much easier UI framework. I want to come close to if not match this level of performance with Qt.
Just chiming in with my experience, in case you still haven't decided or anyone else is looking for more data points. I recently developed a pretty heavy application (regular QGraphicsView, OpenGL QGraphicsView, QtSQL database access, ...) with Qt 4.7, and I'm also a stickler for performance. That includes startup performance, of course; I like my applications to show up nearly instantly, so I spend quite a bit of time on that.
Speed: fantastic, I have no complaints. My heavy app, which needs to instantiate at least 100 widgets on startup alone (granted, a lot of those are QLabels), starts up in a split second (I don't notice any delay between double-clicking and the window appearing).
Memory: this is the bad part; in my experience, Qt with many subsystems does use a noticeable amount of memory. Then again, that does reflect heavy subsystem usage: QtXML, QtOpenGL, QtSQL, QtSVG, you name it, I use it. My current application manages to use about 50 MB at startup, but it starts up lightning fast and responds swiftly as well.
Ease of programming / API: Qt is an absolute joy to use, from its containers to its widget classes to its modules, all the while making memory management easy (the QObject system) and maintaining super performance. I had always written pure Win32 before this, and I will never go back. For example, with the QtConcurrent classes I was able to change a method invocation from myMethod(arguments) to QtConcurrent::run(this, &MyClass::myMethod, arguments), and with that single line a heavy non-GUI processing method was threaded. With a QFuture and QFutureWatcher I could monitor when the thread had ended (either with signals or just method checking). What ease of use! Very elegant design all around.
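To illustrate, here is a rough sketch of that QtConcurrent/QFutureWatcher pattern under Qt 4 (the class and method names are mine, not from the original app):

#include <QtCore/QObject>
#include <QtCore/QtConcurrentRun>
#include <QtCore/QFutureWatcher>

class MyClass : public QObject {
    Q_OBJECT
public:
    MyClass() {
        // finished() is emitted back in this object's thread when the
        // future completes, so the slot can safely touch the GUI.
        connect(&m_watcher, SIGNAL(finished()), this, SLOT(onWorkFinished()));
    }
    void startHeavyWork(int argument) {
        // Run processData on a thread-pool thread instead of the GUI thread.
        m_watcher.setFuture(QtConcurrent::run(this, &MyClass::processData, argument));
    }
private slots:
    void onWorkFinished() { /* safe to update the GUI here */ }
private:
    void processData(int argument) { /* heavy non-GUI work */ (void)argument; }
    QFutureWatcher<void> m_watcher;
};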
So in retrospect: very good performance (including app startup), quite high memory usage if many submodules are used, a fantastic API and possibilities, and cross-platform support.
Going with the native API is the most performant choice by definition; anything else is a wrapper around the native API.
What exactly do you expect to be the performance bottleneck? Any strict numbers? Honestly, the vague "very responsive, quick to load, and have a light memory footprint" sounds like a requirements-gathering bug to me. Performance is often overspecified.
To the point:
Qt's signal-slot mechanism is really fast. It's statically typed and translates with MOC into quite simple slot method calls (see the sketch after this list).
Qt offers nice multithreading support, so that you can have responsive GUI in one thread and whatever else in other threads without much hassle. That might work.
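As a sketch of the signal-slot point above, here is the canonical counter example in Qt 4 syntax; moc generates the connection plumbing at compile time:

#include <QtCore/QObject>

class Counter : public QObject {
    Q_OBJECT  // processed by moc, which generates the signal plumbing
public:
    Counter() : m_value(0) {}
    int value() const { return m_value; }
public slots:
    void setValue(int v) {
        if (v != m_value) {
            m_value = v;
            emit valueChanged(v);  // invokes every connected slot
        }
    }
signals:
    void valueChanged(int newValue);
private:
    int m_value;
};

// Usage:
//   Counter a, b;
//   QObject::connect(&a, SIGNAL(valueChanged(int)), &b, SLOT(setValue(int)));
//   a.setValue(42);  // b.value() == 42 afterwards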
Programmer's Notepad is a text editor which uses Scintilla as its text-editing core component and WTL as its UI library.
JuffEd is a text editor which uses QScintilla as its text-editing core component and Qt as its UI library.
I have installed the latest versions of Programmer's Notepad and JuffEd and studied the memory footprint of both editors using Process Explorer.
Empty file:
- juffed.exe Private Bytes: 4,532K Virtual Size: 56,288K
- pn.exe Private Bytes: 6,316K Virtual Size: 57,268K
"wtl\Include\atlctrls.h" (264K, ~10.000 lines, scrolled from beginning to end a few times):
- juffed.exe Private Bytes: 7,964K Virtual Size: 62,640K
- pn.exe Private Bytes: 7,480K Virtual Size: 63,180K
After a select all (Ctrl-A), cut (Ctrl-X) and paste (Ctrl-V):
- juffed.exe Private Bytes: 8,488K Virtual Size: 66,700K
- pn.exe Private Bytes: 8,580K Virtual Size: 63,712K
Note that while scrolling (Pg Down / Pg Up held down), JuffEd seemed to eat more CPU than Programmer's Notepad.
Combined exe and DLL sizes:
- juffed.exe QtXml4.dll QtGui4.dll QtCore4.dll qscintilla2.dll mingwm10.dll libjuff.dll: 14 MB
- pn.exe SciLexer.dll msvcr80.dll msvcp80.dll msvcm80.dll libexpat.dll ctagsnavigator.dll pnse.dll: 4.77 MB
The above comparison is not fair because JuffEd was not compiled with Visual Studio 2005, which should generate smaller binaries.
We have been using Qt for multiple years now, developing a good-sized UI application with various elements in the UI, including a 3D window. Whenever we hit a major slowdown in app performance, it is usually our fault (we do a lot of database access) and not the UI's.
They have done a lot of work over the last few years to speed up drawing (this is where most of the time is spent). In general, unless you are really implementing some kind of editor, there is not a lot of time spent executing code inside the UI; it mostly waits on input from the user.
Qt is a very nice framework, but there is a performance penalty. This has mostly to do with painting. Qt uses its own renderer for painting everything - text, rectangles, you name it... To the underlying window system every Qt application looks like a single window with a big bitmap inside. No nested windows, no nothing. This is good for flicker-free rendering and maximum control over the painting, but this comes at the price of completely forgoing any possibility for hardware acceleration. Hardware acceleration is still noticeable nowadays, e.g. when filling large rectangles in a single color, as is often the case in windowing systems.
That said, Qt is "fast enough" in almost all cases.
I mostly notice slowness when running on a Macbook whose CPU fan is very sensitive and will come to life after only a few seconds of moderate CPU activity. Using the mouse to scroll around in a Qt application loads the CPU a lot more than scrolling around in a native application. The same goes for resizing windows.
As I said, Qt is fast enough but if increased battery draining matters to you, or if you care about very smooth window resizing, then you don't have much choice besides going native.
Since you seem to consider a 3-second application startup "fast", it doesn't sound like you would care at all about Qt's performance, though. I would consider a 3-second startup dog-slow, but opinions on that naturally vary.
The overall program performance will of course be up to you, but I don't think you have to worry about the UI. Thanks to the graphics scene and OpenGL support, you can do fast 2D/3D graphics too.
Last but not least, an example from my own experience:
Using Qt on Linux and MFC on an Embedded XP machine, each with 128 MB of RAM: the Windows build uses MFC, the Linux build uses Qt. A custom user GUI with lots of painting, and a regular admin GUI with controls/widgets. From a user's point of view, Qt is as fast as MFC. Note: it was a full-screen program that could not be minimized.
Edited after you have added more info:
You can expect a larger executable size (especially with Qt/MinGW) and more memory usage. In your case, try playing with one of the IDEs (e.g. Qt Creator) or text editors written in Qt and see what you think.
I personally would choose Qt, as I've never seen a performance hit for using it. That said, you can get a little closer to native with wxWidgets and still have a cross-platform app. You'll never be quite as fast as straight Win32 or MFC (and family), but you gain a multi-platform audience. So the question for you is: is this worth a small trade-off?
My experience is mostly with MFC, and more recently with C#. MFC is pretty close to the bare metal, so unless you define a ton of data structures, it should be pretty quick.
For graphics painting, I always find it useful to render to a memory bitmap and then blit that to the screen. It looks faster, and it may even be faster, because it's not worrying about clipping.
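A hedged MFC sketch of that memory-bitmap technique (the view class name is hypothetical): draw everything into an off-screen bitmap, then copy it to the screen in one BitBlt:

#include <afxwin.h>

// Hypothetical CView-derived class; only the OnDraw override is shown.
void CMyView::OnDraw(CDC* pDC) {
    CRect rc;
    GetClientRect(&rc);

    // Build the frame in an off-screen DC/bitmap pair.
    CDC memDC;
    memDC.CreateCompatibleDC(pDC);
    CBitmap bmp;
    bmp.CreateCompatibleBitmap(pDC, rc.Width(), rc.Height());
    CBitmap* oldBmp = memDC.SelectObject(&bmp);

    memDC.FillSolidRect(rc, RGB(255, 255, 255));  // clear the back buffer
    // ... all normal drawing goes to memDC instead of pDC ...

    // One blit to the screen: no flicker, no per-primitive clipping work.
    pDC->BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(oldBmp);  // restore before the bitmap is destroyed
}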
There usually is some kind of performance problem that creeps in, in spite of my trying to avoid it. I use a very simple way to find these problems: just wait until it's being subjectively slow, pause it, and examine the call stack. I do this a number of times; 10 is usually more than enough. It's a poor man's profiler, but it works well: no fuss, no bother. The problem is always something no one could have guessed, and usually easy to fix. This is why it works.
If there are dialogs of any complexity, I use my own technique, Dynamic Dialogs, because I'm spoiled. They are not for the faint-of-heart, but are very flexible and perform nicely.
I once made an app to determine the "primeness" of a number (whether it was prime or composite).
I first attempted a Qt GUI, and it took 5 hours to return the answer for 1,299,827 on a computer with 8 GB of RAM and an AMD 1090T @ 4 GHz, running no other foreground processes under Linux.
My second attempt used a QProcess running a console application that used the exact same code. On a laptop with 1.3 GB of RAM and a 1.4 GHz CPU, the answer came back with no perceivable delay.
I will not deny, though, that Qt is far easier than GTK+ or Win32, and it handles things quite nicely; but separate intensive processing ENTIRELY from the GUI if you use it.
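To make that separation concrete, here is a rough Qt 4 sketch of the QProcess approach; "prime_worker" is a hypothetical console binary built from the same primality code, printing its verdict to stdout:

#include <QtCore/QObject>
#include <QtCore/QProcess>

class PrimeChecker : public QObject {
    Q_OBJECT
public:
    PrimeChecker() {
        // finished() fires on the GUI thread once the worker exits.
        connect(&m_worker, SIGNAL(finished(int,QProcess::ExitStatus)),
                this, SLOT(onFinished()));
    }
    void check(qint64 number) {
        // The heavy computation runs in a separate process, so the GUI
        // event loop stays fully responsive in the meantime.
        m_worker.start("prime_worker", QStringList() << QString::number(number));
    }
private slots:
    void onFinished() {
        QString verdict = m_worker.readAllStandardOutput().trimmed();
        // ... update the GUI with the verdict here ...
    }
private:
    QProcess m_worker;
};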