I'm working on an application that prints the usage of each CPU core in C++ by reading and parsing /proc/stat, but I'm having a problem getting temperatures. The problem is that different applications show me different numbers of CPU cores. When I launch the system monitor it shows me 12 cores, and btop also shows me 12 cores, the same as my /proc/stat. But when I use something like sensors to get the temperature, it shows temperatures for only 6 cores, and the same goes for /sys/devices/platform/coretemp.0/hwmon/hwmon2/, which also shows temperatures for only 6 cores. I don't know where the problem is. How am I supposed to get the temperature of all 12 cores? I tried looking at the btop source code because it displays temperatures for all 12 cores, but I am terrible at reading source code, so I wasn't able to find anything useful :(.
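For reference, a minimal C++ sketch of reading the coretemp sensors under the hwmon path mentioned above might look like the following. It assumes the usual hwmon convention that each tempN_input file holds millidegrees Celsius and that a matching tempN_label file names the sensor; the hwmon2 index is taken from the question and can differ between boots.

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // Path taken from the question; the hwmonN index may differ per boot.
        const std::string base = "/sys/devices/platform/coretemp.0/hwmon/hwmon2/";

        // coretemp typically exposes temp1_* for the package and temp2_..tempN_* per core.
        for (int i = 1; i <= 32; ++i) {
            std::ifstream label(base + "temp" + std::to_string(i) + "_label");
            std::ifstream input(base + "temp" + std::to_string(i) + "_input");
            if (!input)
                continue;  // no such sensor, skip

            std::string name;
            std::getline(label, name);

            long millidegrees = 0;
            input >> millidegrees;
            std::cout << name << ": " << millidegrees / 1000.0 << " C\n";
        }
    }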
Related
I set up an older PC (with 2 GB of memory) and wanted to try using it as a compiling machine to free up my laptop. But what takes 2 minutes to build on my new laptop takes over half an hour on the old PC!
I'm working in Qt Creator on both machines, and on the old PC it shows "Indexing with clangd" and counts from 1 to 516 (the number of files in my project), but that step takes 30 minutes!
What is the "indexing with clangd" step?
Why is it SO SLOW? htop shows 20% free memory, and the CPUs are averaging 70%.
Any tips on how to speed this up?
I am in the process of creating a C++ application that measures disk usage. I've been able to retrieve current disk usage (read and write speeds) by reading /proc/diskstats at regular intervals.
I would now like to be able to display this usage as a percentage (I find it more user-friendly than raw numbers, which can be hard to interpret). Does anyone know of a way to retrieve the maximum (or nominal) disk I/O speed programmatically on Linux (an API call, reading a file, etc.)?
I am aware of various answers about measuring disk speeds (e.g. https://askubuntu.com/questions/87035/how-to-check-hard-disk-performance), but they all work by benchmarking. I would like to avoid such methods because they take time to run and cause heavy disk I/O while running (potentially degrading the performance of other running applications).
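For context, a rough sketch of the /proc/diskstats sampling approach described above might look like the following. It assumes the documented field order (sectors read and written are the 3rd and 7th per-device statistics, in 512-byte units) and uses "sda" purely as an example device name.

    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <thread>

    // Return sectors read/written for one device from /proc/diskstats.
    static bool read_sectors(const std::string& dev, long long& rd, long long& wr) {
        std::ifstream f("/proc/diskstats");
        std::string line;
        while (std::getline(f, line)) {
            std::istringstream in(line);
            int major, minor;
            std::string name;
            long long reads, reads_merged, sect_read, ms_read;
            long long writes, writes_merged, sect_written;
            in >> major >> minor >> name
               >> reads >> reads_merged >> sect_read >> ms_read
               >> writes >> writes_merged >> sect_written;
            if (name == dev) { rd = sect_read; wr = sect_written; return true; }
        }
        return false;
    }

    int main() {
        const std::string dev = "sda";  // example device name
        long long r1, w1, r2, w2;
        if (!read_sectors(dev, r1, w1)) return 1;
        std::this_thread::sleep_for(std::chrono::seconds(1));
        if (!read_sectors(dev, r2, w2)) return 1;
        // Sectors are 512 bytes regardless of the device's real sector size.
        std::cout << "read:  " << (r2 - r1) * 512 / 1024.0 << " KiB/s\n"
                  << "write: " << (w2 - w1) * 512 / 1024.0 << " KiB/s\n";
    }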
At the dawn of the IBM PC era there was a great DOS utility, whose name I forget, that measured the speed of the computer (maybe Speedtest? whatever). There was a bar about two-thirds of the way down the screen that represented the speed of the CPU. If you had a 4.0 MHz machine (not GHz!), the bar occupied 10% of the screen.
Two or three years later, '386 computers arrived, and the speed indicator bar outgrew not just its line but the whole screen, and it looked awful.
So there is no such thing as 100% disk speed, CPU speed, etc.
The best you can do: if your program runs for a while, remember the highest value seen and treat it as 100%. You could also save that value to a temp file.
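A minimal sketch of that running-maximum idea (the class name is made up for illustration; saving the value to a temp file is left out):

    #include <algorithm>

    // Track the highest value observed so far and report the current
    // sample as a percentage of that running maximum.
    class RunningMaxPercent {
        double max_seen_ = 0.0;
    public:
        double percent(double sample) {
            max_seen_ = std::max(max_seen_, sample);
            return max_seen_ > 0.0 ? 100.0 * sample / max_seen_ : 0.0;
        }
    };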
I'm working on a "search" project. The main idea is how to build a index to respond to the search request as fast as possible. The input is a query, such as "termi termj", ouput is docs where both termi and termj appear.
the index file looks like this:(each line is called a postlist, which is sorted array of unsigned int and can be compressed with good compression ratio)
term1:doc1, doc5, doc8, doc10
term2:doc10, doc51, doc111, doc10000
...
termN:doc2, doc4, doc10
The three main time-consuming steps are:
seek to termi's and termj's postlists in the file (random disk read)
decode the postlists (CPU)
calculate the intersection of the two postlists (CPU) (see the sketch just below)
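For reference, the intersection of two sorted postlists is typically a linear two-pointer merge; a minimal C++ sketch, assuming the postlists are already decoded into sorted vectors of doc IDs:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Linear two-pointer intersection of two sorted postlists.
    std::vector<uint32_t> intersect(const std::vector<uint32_t>& a,
                                    const std::vector<uint32_t>& b) {
        std::vector<uint32_t> out;
        std::size_t i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            if (a[i] < b[j])      ++i;
            else if (b[j] < a[i]) ++j;
            else { out.push_back(a[i]); ++i; ++j; }
        }
        return out;
    }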
My question is: how can I tell whether the application can't be made more efficient, i.e. whether it has a disk I/O bottleneck? How can I measure whether my computer is using its disk at 100 percent? Are there any tools on Linux to help? Is there a tool that can profile disk I/O as well as the Google CPU profiler measures CPU?
My development environment is Ubuntu 14.04.
CPU: 8 cores 2.6GHz
disk: SSD
The benchmark is currently about 2000 queries/second, but I don't know how to improve it.
Any suggestion will be appreciated! Thank you very much!
I have been running a Python octo.py script to do word/author counting on a series of files. The script works well -- I tried it on a limited set of data and got the correct results.
But when I run it on the complete data set it takes forever. I am running on a Windows XP laptop with a 2.33 GHz dual core and 2 GB of RAM.
I opened up my CPU usage and it shows the processors running at 0%-3% of maximum.
What can I do to force Octo.py to utilize more CPU?
Thanks.
Since your application isn't very CPU-intensive, the slow disk turns out to be the bottleneck. Old 5400 RPM laptop hard drives are very slow, and that, combined with fragmentation and low RAM (which limits disk caching), makes reading very slow. This in turn slows down processing and yields low CPU usage. You can try defragmenting, compressing the input files (as they shrink on disk, processing speed will increase), or other ways of improving I/O.
On Linux, is there a built-in C library function for getting the CPU load of the machine? Presumably I could write my own function for opening and parsing a file in /proc, but it seems like there ought to be a better way.
Doesn't need to be portable
Must not require any libraries beyond a base RHEL4 installation.
If you really want a C interface, use getloadavg(), which also works on Unixes without /proc.
It has a man page with all the details.
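A minimal usage sketch (getloadavg() is declared in <stdlib.h> on glibc and the BSDs and fills an array with the 1-, 5- and 15-minute averages):

    #include <cstdio>
    #include <cstdlib>

    int main() {
        double avg[3];
        // Returns the number of samples retrieved, or -1 on error.
        if (getloadavg(avg, 3) != 3) {
            std::fprintf(stderr, "getloadavg failed\n");
            return 1;
        }
        std::printf("load average: %.2f %.2f %.2f\n", avg[0], avg[1], avg[2]);
    }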
The preferred way to get information about CPU load on Linux is to read from /proc/stat, /proc/loadavg and /proc/uptime. All the normal Linux utilities such as top use this method.
From the proc(5) man page:
/proc/loadavg
The first three fields in this file are load average figures
giving the number of jobs in the run queue (state R) or waiting
for disk I/O (state D) averaged over 1, 5, and 15 minutes. They
are the same as the load average numbers given by uptime(1) and
other programs. The fourth field consists of two numbers separated
by a slash (/). The first of these is the number of currently
executing kernel scheduling entities (processes, threads); this
will be less than or equal to the number of CPUs.
The value after the slash is the number of kernel scheduling
entities that currently exist on the system. The fifth field is
the PID of the process that was most recently created on the
system.
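Following the field layout described above, a minimal parse of /proc/loadavg might look like this:

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        std::ifstream f("/proc/loadavg");
        double load1, load5, load15;
        std::string runnable_over_total;  // fourth field, e.g. "2/873"
        int last_pid;
        f >> load1 >> load5 >> load15 >> runnable_over_total >> last_pid;
        std::cout << "1 min: " << load1 << ", 5 min: " << load5
                  << ", 15 min: " << load15
                  << ", running/total: " << runnable_over_total
                  << ", last pid: " << last_pid << "\n";
    }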
My understanding is that parsing the contents of /proc is the official interface for that kind of thing (a number of files in there are really meant to be parsed before being presented to the user).
"Load average" may not be very useful. We find it to be of limited use, as it doesn't actually tell you how much CPU is being used, only the average number of tasks "ready to run". "Ready to run" is somewhat subjective, but not very helpful as it often includes processes waiting for IO.
On busy systems we see load averages of 20+ on machines with only 8 cores, while the CPUs are still relatively idle.
If you want to see how much CPU is in use, have a look at the various files in /proc.
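For example, overall CPU utilisation can be derived from two samples of the aggregate "cpu" line in /proc/stat; a minimal sketch, using only the first eight fields:

    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <thread>

    // Read the aggregate "cpu" line from /proc/stat and return total and idle jiffies.
    static void read_cpu(long long& total, long long& idle) {
        std::ifstream f("/proc/stat");
        std::string cpu;
        long long user, nice, system, idl, iowait = 0, irq = 0, softirq = 0, steal = 0;
        f >> cpu >> user >> nice >> system >> idl >> iowait >> irq >> softirq >> steal;
        idle = idl + iowait;
        total = user + nice + system + idl + iowait + irq + softirq + steal;
    }

    int main() {
        long long t1, i1, t2, i2;
        read_cpu(t1, i1);
        std::this_thread::sleep_for(std::chrono::seconds(1));
        read_cpu(t2, i2);
        double busy = 100.0 * (1.0 - double(i2 - i1) / double(t2 - t1));
        std::cout << "CPU busy: " << busy << " %\n";
    }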