CentOS 6.5 CPU spike - C++

I have developed an application in C. I run this application on "Red Hat Enterprise Linux Server release 5.8 (Tikanga)" and everything looks good, but when we deploy it on "CentOS release 6.5 (Final)" it starts causing problems: it occupies more cache memory, and after 30-45 minutes it produces a spike in which every CPU shows 100% utilization for 1-2 seconds.
I googled this issue and found reports of high CPU usage from usleep on CentOS 6.3.
One process in my application uses a 10-microsecond usleep. It takes less than 3% CPU on Red Hat, but on CentOS it takes quite a lot, around 90%. After reading the link, I changed the sleep from usleep(10) to usleep(1000) (i.e. 1 ms), and it then takes about 40% CPU.
I need to know whether the CentOS 6.5 kernel uses high-resolution timers, and whether I need to set any configuration option when compiling the kernel.
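A minimal reconstruction of the pattern being described, with the likely explanation in the comments; the loop body and names are invented for illustration, only the usleep(10) call comes from the question:

    #include <unistd.h>

    // Hypothetical polling loop sleeping 10 microseconds per pass.
    // On a kernel with high-resolution timers (RHEL/CentOS 6, 2.6.32) a
    // 10 us sleep is honored almost exactly, so this loop wakes up roughly
    // 100,000 times per second and the scheduling overhead dominates.
    // On the older tick-based RHEL 5 kernel (2.6.18) the same call is
    // silently rounded up to about a millisecond, so the loop wakes up
    // around 1,000 times per second and looks nearly free.
    void worker_loop(volatile bool &stop)
    {
        while (!stop) {
            // ... check for and perform a small amount of work ...
            usleep(10); // 10 microseconds
        }
    }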

In the first place, you are comparing apples and oranges: CentOS 6 corresponds to RHEL 6. Very likely your code would behave the same on RHEL 6.5 as it does on CentOS 6.5, and the same on CentOS 5.8 as on RHEL 5.8. It is misleading to describe the issue as a difference between RHEL and CentOS.
In the second place, if your CPU utilization is that strongly affected by a few usleep() calls (executed, apparently, very many times), then your code is flawed and you should fix it. Building a custom kernel to mask the problem would be pretty backward. Nevertheless, if the objective is more to move over to CentOS than to move up to a (somewhat) more up-to-date environment, then switch to CentOS 5 instead of to CentOS 6.
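If the short sleep exists only to poll for work, a standard fix is to block until work arrives instead of spinning. A minimal C++11 sketch (the class and member names are invented for illustration):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    class WorkQueue {
    public:
        void push(int item) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(item);
            }
            cv_.notify_one(); // wake the consumer only when there is work
        }

        int pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            // Blocks with ~0% CPU while idle, independent of the kernel's
            // timer resolution; there are no periodic wakeups at all.
            cv_.wait(lock, [this] { return !queue_.empty(); });
            int item = queue_.front();
            queue_.pop();
            return item;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<int> queue_;
    };

On toolchains too old for C++11 (as on RHEL 5), pthread_cond_wait() gives the same behavior.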

Related

What is clangd indexing and why is it slow?

I set up an older PC (with 2 GB memory) and wanted to use it as a compiling machine to free up my laptop. But what takes 2 minutes to build on my new laptop takes over half an hour on the old PC!
I'm working in Qt Creator on both machines, and on the old PC it shows "Indexing with clangd" and counts from 1 to 516 (the number of files in my project), but that step takes 30 minutes!
What is the "indexing with clangd" step?
Why is it SO SLOW? htop shows 20% free memory, and the CPUs are averaging 70%.
Any tips on how to speed this up?

Why do SQLite queries run slower on a Windows [server] machine compared to Ubuntu & macOS?

We have 3 machines: one has Windows Server 2012 R2 installed with decent specs (12 GB RAM, 3.6 GHz, 4 cores, 600 GB hard disk). The others are home laptops with regular specs running Ubuntu 20.04 and macOS. All work with the same SQLite DB.
In a loop, 4000 simple SELECT COUNT queries are run to calculate a certain value for a table row, each followed by an UPDATE of that calculated value in another table. We notice that:
On macOS, it takes 2-3 minutes
On Ubuntu, it takes 5 minutes
On Windows, it takes 3 hours 8 minutes!!!
Looking at the logs, we noticed that each SELECT + UPDATE pair together takes 1-3 seconds on Windows. Moreover, Ubuntu uses a core at 100% CPU for our program, while Windows Server utilizes less than 2%.
This is a very significant difference, and all machines are running the same source code. Is there anything we can do to make Windows Server perform the queries on par with Linux and macOS?
It turns out that performance was being hurt by a mutex lock taken before every SELECT + UPDATE. This was meant for thread safety, as the DB is expected to be accessed from multiple threads.
After changing the design so that the DB is accessed from a single thread, performance improved manifold: on Ubuntu it became 5x faster, and on Windows 10x faster!!
#prapin's comments also have some merit. We now execute all the UPDATEs within a single transaction, which speeds up the Windows runs by at least another 2x.
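For reference, a sketch of that single-transaction batching using the SQLite C API; the table and column names here are invented for illustration:

    #include <sqlite3.h>

    // Wrap all 4000 UPDATEs in one transaction so SQLite flushes to disk
    // once at COMMIT instead of once per statement.
    bool update_all(sqlite3 *db)
    {
        if (sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr) != SQLITE_OK)
            return false;

        sqlite3_stmt *stmt = nullptr;
        if (sqlite3_prepare_v2(db, "UPDATE results SET total = ? WHERE id = ?;",
                               -1, &stmt, nullptr) != SQLITE_OK)
            return false;

        for (int id = 0; id < 4000; ++id) {
            long long total = 0; // value from the SELECT COUNT step goes here
            sqlite3_bind_int64(stmt, 1, total);
            sqlite3_bind_int(stmt, 2, id);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }

        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr) == SQLITE_OK;
    }

The per-statement flush is presumably why Windows suffered most: each implicit transaction ends with an expensive sync to disk there, and one COMMIT amortizes that cost across all 4000 updates.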

Boost Compute buffer destructor behaving differently on different OSes

I'm having an issue with a bit of Boost.Compute code that I run on two machines. My dev machine runs Windows 7 with a Radeon WX9100 GPU, and everything runs fine. Another lab machine I am using is nearly identical, but runs Windows 10 and has the Windows 10 version of the Radeon driver.
The Windows 7 machine reports the OpenCL device name as "GFX900" and the Windows 10 machine reports it as "GFX901". A bitcoin mining site I found said this is fine for that model.
What is not fine is that the Boost.Compute/OpenCL memory buffers do not get freed from device memory on the Windows 10 machine, even (especially) if I use "BUFFERNAME.~buffer()".
I'm thinking this might be a driver issue, but I'm really not sure.
Thanks in advance for any help!
Eric
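One thing worth ruling out before blaming the driver: explicitly calling "BUFFERNAME.~buffer()" on a named object is undefined behavior in C++, because the destructor runs a second time when the object goes out of scope. A minimal sketch of the usual release pattern, assuming Boost.Compute's reference-counted wrapper around cl_mem (the function and variable names are invented for illustration):

    #include <boost/compute/core.hpp>

    namespace compute = boost::compute;

    void demo(compute::context &ctx)
    {
        compute::buffer buf(ctx, 1024 * sizeof(float));
        // ... use buf with command queues / kernels ...

        // To release device memory before the end of scope, replace the
        // object instead of invoking the destructor by hand:
        buf = compute::buffer(); // old cl_mem released when its refcount drops to 0
    } // the destructor runs exactly once here, on the now-empty buffer

Note also that clReleaseMemObject() only decrements a reference count, and the driver may defer the actual deallocation, so different readings on the two machines do not necessarily mean the memory leaked.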

VMware runs slowly after some time

I own a, let's say, decent laptop with 6 GB RAM and an i5 with 4 CPUs at ~2.5 GHz. I created a virtual machine with half of my resources and default options, changing nothing. After some time, say an hour, or if I do nothing in the virtual machine, the CPU starts to idle, and when I come back to the virtual machine it works VERY slowly; it looks like the VM freezes or something. Are there any options to improve performance in VMware? Why does it freeze? Also, VMware sometimes crashes. Any tips?
EDIT
I installed a new version of VMware; I will come back with an answer after a few hours to see how it goes...
SOLVED: It works great with the latest version of VMware... the version that caused me problems was from 2013.
With an i5 and 6 GB of RAM, VMware should run perfectly fine. Make sure that no other applications are running in your host operating system.
The probable causes are insufficient RAM or CPU overheating. You don't have to allot half of your resources unless you are actually going to use them in your virtual machine.
If possible, reinstall VMware with a more modest resource allocation; it need not be half of your system.

50% core utilization with mingw32-make -j enabled

My first post here, but I found a lot of answers regarding C++ and Qt, thanks!
When compiling my Qt project with mingw32-make.exe I only reach 50% CPU utilization (the Task Manager shows 50% on all 8 graphs (i7)). I have already tried using -j, -j8, -j9, and -j16, but nothing changes.
Also, my CPU never reaches its 2.4 GHz (probably due to the low 50% utilization). My energy-saving settings in Windows are set to "Höchstleistung" (Maximum Performance), and I checked the minimum CPU frequency setting: it is 100% on both battery and AC, but the CPU always stays at 1.2 GHz.
I noticed this issue after upgrading to Windows 8.1 (I didn't notice it immediately, so I'm not sure it is Windows 8.1), but one month ago all cores ran at 100%.
Thanks for any advice!
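As an aside on the -j experiments: plain -j with no number lets make spawn an unlimited number of parallel jobs, so an explicit count matching the logical core count (e.g. -j8 on an 8-thread i7) is the usual choice. A quick standard-library way to query that count:

    #include <iostream>
    #include <thread>

    int main()
    {
        // Logical cores as seen by the OS; pass this (or this + 1) to -j.
        // May print 0 if the count cannot be determined.
        std::cout << std::thread::hardware_concurrency() << '\n';
        return 0;
    }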
SOLVED!
On my Samsung laptop, the "silent" option was switched on in the Settings app; this reduces the CPU power regardless of the current Windows energy-saving options. Setting the option to off solved my problem.
Thanks anyway for all contributions!