I am using Eclipse for C/C++ development and I am trying to compile and run a project. After I compile and run the project, my CPU usage eventually reaches 100%. I checked Task Manager and found that Eclipse isn't closing any of the previous builds; they keep running in the background and use the CPU heavily. How do I solve this problem? At 100% usage my PC becomes very, very slow.
If you don't want the build to use up all your CPU time (maybe because you want to do other stuff while building) then you could decrease the parallelism of the build to a point where it leaves one or more cores unused. For example, if you have 8 cores you could configure your build to only use 6 of them.
Your build will take longer, but your machine will be more responsive for other tasks while the build runs.
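If the project uses a make-based build, the standard way to cap parallelism is the -j flag; in Eclipse CDT a similar cap can usually be set in the project's C/C++ Build > Behavior settings ("Enable parallel build" with a fixed job count) instead of letting it use all cores. For example, on an 8-core machine:

    make -j6    # run at most 6 compile jobs at once, leaving ~2 of 8 cores free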
Adding more RAM seems to have solved my problem. Disk usage is also low now. Maybe since there wasn't enough RAM in my laptop, data had to be fetched from the disk directly, which made the disk usage go up.
I'm trying to boost the performance of my C++ facial tracking algorithm by optimizing the most time-consuming sections of my code. Interestingly, 86% of the entire tracking processing time is consumed by its feature extraction section. The feature extractor takes an image as input and returns the feature vector as output. I isolated the feature extractor from the tracker into a separate project so I could write an optimized version from scratch, and made sure the input and output formats and containers are the same as the ones used in the original tracker. Here's the breakdown of the resulting processing time.
Original version: 3 ms/frame
Optimized version: 1 ms/frame
As can be seen, it should be three times faster. But when I insert the new feature extractor into my tracker, it runs at about 2.5 ms/frame inside the tracker. I can't understand why it gets slower inside the tracker compared to when it runs as a standalone project.
But here's the catch that I discovered by accident. If I run the tracker and at the same time I run another project in the background, all of a sudden, the feature extractor inside the tracker would converge to 1 ms/frame. But as soon as I stop the background project it goes back to 2.5 ms/frame. This happens both on my laptop with Ubuntu 16.04 and on my Desktop PC with Windows 10.
In an attempt to understand this behavior I used the Ubuntu System Monitor and noticed the following about the 8 CPU cores.
When I run the feature extractor alone, 7 cores are engaged at about 5% and one core is engaged at 100%.
When I insert the feature extractor into the tracker and run the tracker, all 8 cores are engaged at about 30%.
Now, when I run the tracker and alongside it any other program (say, the standalone feature extractor), one of the cores that were engaged at 30% jumps up to 100%, which may explain the performance boost I get in the tracker as a result.
The evidence suggests that for the feature extractor to run at its maximum potential of 1 ms/frame, at least one of the cores should be engaged at 100%. My question is: how can I make this happen?
We have a C++-based multi-threaded application on Windows that captures network packets in real time using the WinPcap library and then processes these packets to monitor the network. This application is intended to run 24x7. Our application easily consumes 7-8 GB of RAM.
The issue we are observing:
Let's say the application is monitoring 100 Mbps of network traffic and consumes 60% CPU. We have observed that when the application keeps running for a longer duration, like a day or two, its CPU consumption increases to around 70-80%, even though it is still processing 100 Mbps of traffic (doing the same amount of work).
We have tried to debug this issue down to the thread level using Process Explorer and noticed that the packet-capturing threads start consuming more CPU over time. The issue is not resolved even after restarting the application; only a machine restart solves the problem.
We have observed that this issue is easily reproducible on Windows Server 2012 R2 during overnight runs. On Windows 7 the issue also happens, but only after a few days.
Any idea what might be causing this?
Thanks in advance.
What about memory allocation? Because you are using lots of memory, it could be a memory fragmentation problem: if you do many allocations/reallocations of buffers, it will of course cost the processor a lot of work to find and allocate the available space.
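As an illustration of one way to avoid repeated allocation (the buffer size and handler interface here are made up for the example), a preallocated buffer can be reused for every packet instead of allocating a new one each time:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical packet handler: reuse one preallocated buffer instead of
    // allocating and freeing a new buffer for every captured packet.
    class PacketProcessor {
    public:
        explicit PacketProcessor(std::size_t maxPacketSize)
            : scratch_(maxPacketSize) {}            // allocate once, up front

        void onPacket(const std::uint8_t* data, std::size_t len) {
            if (len > scratch_.size()) return;      // drop oversized packets (assumption)
            std::copy(data, data + len, scratch_.begin());
            // ... process scratch_[0..len) with no further heap allocation ...
        }

    private:
        std::vector<std::uint8_t> scratch_;         // reused for every packet
    };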
I finally found the reason for the above behavior: it was the WinPcap code that was causing it. After replacing that, we no longer observed this behavior.
I am developing a multi-threaded application that ran fine on my development system, which has 8 cores. When I ran it on a PC with 2 cores I encountered some synchronization issues.
Apart from turning off hyper-threading, is there any way of limiting the number of cores an application can use, so that I can emulate single- and dual-core environments for testing and debugging?
My application is written in C++ using Visual Studio 2010.
We always test in virtual machines nowadays since it's so easy to set up specific environments with given limitations.
For example, VMWare easily allows you to limit the number of processors in use, how much memory there is, hard disk sizes, the presence of USB or floppies or printers and all sorts of other wondrous things.
In fact, we have scripts which do all the work at the push of a button, from restoring the VM to a known initial state, then booting it up, installing the code over the network, running a test cycle then moving the results to an analysis machine on the network as well.
It greatly speeds up and simplifies the testing regime.
You want the SetProcessAffinityMask function or the SetThreadAffinityMask function.
The former works on the whole process and the latter on a specific thread.
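For example, a minimal sketch that restricts the current process to the first two logical processors (assuming the machine has at least two):

    #include <windows.h>
    #include <iostream>

    int main() {
        // Restrict the whole process to logical processors 0 and 1 (mask 0b11).
        // Adjust the mask to emulate the core count you want to test against.
        DWORD_PTR mask = 0x3;
        if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
            std::cerr << "SetProcessAffinityMask failed: " << GetLastError() << '\n';
            return 1;
        }

        // To pin a single thread instead, SetThreadAffinityMask works the same way:
        // SetThreadAffinityMask(GetCurrentThread(), 0x1);   // CPU 0 only

        // ... run the multi-threaded code under test here ...
        return 0;
    }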
You can also limit the active cores via the Windows Task Manager: right-click the process name and select "Set Affinity".
I have simple anisotropic filter C/C++ code that processes a .pgm image, which is a text file with greyscale information for each pixel, and after processing it generates an output image with the filter applied.
This program takes up to a few seconds to do about 10 iterations on an x86 CPU running Windows.
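For reference, a minimal sketch of a reader for the ASCII ("P2") flavour of that format (comment lines and the binary "P5" variant are ignored here) could look like this:

    #include <fstream>
    #include <string>
    #include <vector>

    // Greyscale image loaded from an ASCII PGM ("P2") file: a small text header
    // followed by one greyscale value per pixel, row by row.
    struct GreyImage {
        int width = 0, height = 0, maxval = 255;
        std::vector<int> pixels;                    // row-major greyscale values
    };

    bool loadPgm(const std::string& path, GreyImage& img) {
        std::ifstream in(path);
        std::string magic;
        if (!(in >> magic) || magic != "P2") return false;
        if (!(in >> img.width >> img.height >> img.maxval)) return false;
        img.pixels.resize(static_cast<std::size_t>(img.width) * img.height);
        for (int& p : img.pixels)
            if (!(in >> p)) return false;
        return true;
    }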
An academic finishing his master's degree in applied computing and I need to run the code on an FPGA (Altera DE2-115) to see whether there is a considerable performance gain when running the code directly on the soft processor (Nios II).
We have successfully booted the uClinux OS on the FPGA, but there are some errors with the device hardware, so we cannot access the SD card or Ethernet, and therefore we cannot get the code and the image onto the FPGA to test its performance.
So I am asking here for an alternative way to test our code's performance directly on a CPU with a file system, so the code can read the image and generate another one.
The alternative could either be a product that is low-cost and easy to use (I was thinking of a Raspberry Pi), or somewhere I could upload the code so that it runs automatically and gives me the reports.
Thanks in advance.
What you're trying to do is benchmark some software on a multi-GHz x86 processor versus a soft-core processor running at 50 MHz (as far as I can tell from the Altera docs)?
I can guarantee that it will be even slower on the FPGA! Since it is also running an OS (even embedded Linux), it also has threading overhead and so on. This cannot be considered running it "directly" on the CPU (whatever you mean by that).
If you really want to leverage the performance of an FPGA, you should "convert" your C code into an HDL and run it directly in hardware. Accessing the data should be possible; I don't know how it's done with an Altera board, but Xilinx has libraries for accessing data on an SD card with FAT.
You can use on-board SRAM or DDR2 RAM to run the OS and your application.
The hardware design in your FPGA must include a memory controller. In SOPC Builder or Qsys, select the external memory as the reset vector and compile the design.
Then open the Nios II build tools for Eclipse.
In Eclipse, create a new project by selecting Nios II Application and BSP.
Once the project is created, go to the BSP properties, enter the offset of the external memory in the Linker tab, and generate the BSP.
Compile the project and run it as Nios II hardware.
This will run your application from external memory.
You won't be able to see the image, but the 2-D array representing the image in memory can be printed to the console.
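As a rough illustration (the dimensions and pixel buffer here are hypothetical), dumping the greyscale values to the console so the filtered result can be inspected might look like this:

    #include <cstdio>

    const int WIDTH  = 8;   // hypothetical; in practice taken from the PGM header
    const int HEIGHT = 8;

    // Print a row-major greyscale image to the console so the result can be
    // checked without a display or a working file system.
    void printImage(const unsigned char* img) {
        for (int y = 0; y < HEIGHT; ++y) {
            for (int x = 0; x < WIDTH; ++x)
                std::printf("%3d ", img[y * WIDTH + x]);
            std::printf("\n");
        }
    }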
I have ported an app from the Visual Studio compiler to MinGW and I ran into a problem: performance degradation. CPU usage increased from 30% to 100%.
There is one interesting thing: if I run Windows Media Player before starting my app, or while it is running, my app's performance becomes fine again. CPU usage drops to 30% and it works faster (about 10 times faster).
I've googled it and found that this relates to a service called the Multimedia Class Scheduler Service (MMCSS). The main problem is that this service only exists on Windows Vista and later, but I've tested and ported my app on Windows XP.
So, does anyone know how to use this feature under XP? And how does Windows Media Player increase the performance of my app?
Windows Media Player changes the resolution of the system multimedia timer. Basically, this occurs when your application really should be using something like the High Performance Timer but is using the multimedia timer instead, which simply doesn't have and isn't intended to have the necessary accuracy or resolution to be a high-performance timer. As a result, any timings in your program essentially don't work as they should, which is especially bad if you're trying to sleep or block for a fixed time.
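To illustrate the two sides of this, a minimal sketch: raising the multimedia timer resolution yourself (roughly the side effect Windows Media Player has while it runs) versus measuring with the high-performance counter, which doesn't depend on the multimedia timer at all. Link against winmm (e.g. -lwinmm under MinGW):

    #include <windows.h>
    #include <mmsystem.h>
    #include <iostream>

    int main() {
        // Request 1 ms multimedia timer resolution for the whole system --
        // the side effect Windows Media Player has while it is running.
        timeBeginPeriod(1);

        Sleep(1);   // now sleeps close to 1 ms instead of the default ~15.6 ms

        // Preferred for timing: the high-performance counter.
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        // ... work to be timed ...
        QueryPerformanceCounter(&t1);
        std::cout << "elapsed ms: "
                  << 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart << '\n';

        timeEndPeriod(1);   // restore the previous timer resolution
        return 0;
    }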