Performance Issue in Executing Shell Commands - c++

In my application, I need to execute a large number of shell commands from C++ code. I found that the program takes more than 30 seconds to execute 6000 commands, which is unacceptable! Is there any better way to execute shell commands (from C/C++ code)?
//The function below is used to set rules for
//the Linux tool TC; at runtime there will
//be more than 6000 rules to set from the shell.
//The TC commands look like the examples below:
//tc qdisc del dev eth0 root
//tc qdisc add dev eth0 root handle 1:0 cbq bandwidth
// 10Mbit avpkt 1000 cell 8
//tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 5 allot 1514
// cell 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:3 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:1 classid 1:1001 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 8 allot 1514 cell
// 8 maxburst 20 avpkt 1000
//......
void CTCProxy::ApplyTCCommands(){
    FILE* OutputStream = NULL;
    //mTCCommands is a vector<string>;
    //every string in it is a TC rule
    int CmdCount = mTCCommands.size();
    for (int i = 0; i < CmdCount; i++){
        OutputStream = popen(mTCCommands[i].c_str(), "r");
        if (OutputStream){
            pclose(OutputStream);
        } else {
            printf("popen error!\n");
        }
    }
}
UPDATE
I tried putting all the shell commands into a shell script and having the test app call that script file using system("xxx.sh"). This time it took 24 seconds to execute all 6000 shell commands, less than before, but still much more than we expected. Is there any other way to bring the execution time below 10 seconds?

Most likely (based on my experience with this kind of thing), the majority of the time is spent starting a new process running a shell; the execution of the actual command inside the shell is very short. (And 6000 in 30 seconds doesn't sound too terrible, actually.)
There are a variety of ways you could approach this. I'd be tempted to combine it all into one shell script rather than running individual lines. This would involve writing all the tc command strings to a file and then passing that file to a single popen() (or system()) call.
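A minimal sketch of that idea, keeping the style of the question's code; the temp-file path and the use of system() for the single invocation are illustrative choices, not part of the original code:
//Sketch only: write every rule into one script, then run it with a
//single shell invocation instead of one shell per rule.
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

void ApplyTCCommandsAsScript(const std::vector<std::string>& commands)
{
    const char* scriptPath = "/tmp/tc_rules.sh"; //hypothetical location
    std::ofstream script(scriptPath);
    script << "#!/bin/sh\n";
    for (size_t i = 0; i < commands.size(); i++)
        script << commands[i] << '\n';
    script.close();

    //one shell process executes all 6000 rules
    std::string run = std::string("/bin/sh ") + scriptPath;
    if (std::system(run.c_str()) != 0)
        std::printf("script execution failed\n");
}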
Another thought: can you combine several commands into a single invocation?
If the commands are complete and directly executable (that is, no shell is needed to execute the program), you could also do your own fork and exec. This would save creating a shell process, which then creates the actual process.
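A rough fork()/exec() sketch of that approach, assuming each string is a complete command line for a directly executable binary; the whitespace splitting is deliberately naive and would need hardening for quoted arguments:
#include <cstdio>
#include <string>
#include <vector>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

//Run one command directly, with no intermediate shell process.
static void RunWithoutShell(const std::string& command)
{
    //naive split on single spaces into an argv array
    std::vector<std::string> tokens;
    size_t start = 0, end;
    while ((end = command.find(' ', start)) != std::string::npos) {
        if (end > start) tokens.push_back(command.substr(start, end - start));
        start = end + 1;
    }
    if (start < command.size()) tokens.push_back(command.substr(start));

    std::vector<char*> argv;
    for (size_t i = 0; i < tokens.size(); i++)
        argv.push_back(&tokens[i][0]);
    argv.push_back(NULL);

    pid_t pid = fork();
    if (pid == 0) {                //child: becomes the tc process
        execvp(argv[0], argv.data());
        _exit(127);                //only reached if exec failed
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0); //parent waits; several could run at once
    } else {
        std::perror("fork");
    }
}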
Also, you may consider running a small number of processes in parallel, which on any modern machine will likely speed things up by the number of processor cores you have.

You can start a shell (/bin/sh), pipe all commands to it, and parse the output. Or you can create a Makefile, as this would give you more control over how the commands are executed, plus parallel execution and error handling.
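A minimal sketch of the single-shell variant, feeding every rule to one /bin/sh over a pipe (output parsing omitted):
#include <cstdio>
#include <string>
#include <vector>

//All rules go to the stdin of a single shell process.
void ApplyTCCommandsViaOneShell(const std::vector<std::string>& commands)
{
    FILE* shell = popen("/bin/sh", "w");
    if (!shell) {
        std::printf("popen error!\n");
        return;
    }
    for (size_t i = 0; i < commands.size(); i++) {
        std::fputs(commands[i].c_str(), shell);
        std::fputc('\n', shell);
    }
    pclose(shell); //waits until the shell has executed every command
}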

Related

Strange behaviour of Parallel Boost Graph Library example code

I have set up simple tests with the Parallel Boost Graph Library (PBGL), which I have never used before, and observed entirely unexpected behaviour that I would like to understand.
My steps were as follows:
Dump test data in METIS format (a kind of social graph with 50 mln vertices and 100 mln edges);
Build modified PBGL example from graph_parallel\example\dijkstra_shortest_paths.cpp
The example was slightly extended to run the Eager, Crauser, and delta-stepping algorithms.
Note: building of the example required some obscure workaround about the MUTABLE_QUEUE define in crauser_et_al_shortest_paths.hpp (example code is in fact incompatible with the new mutable_queue)
int lookahead = 1;
delta_stepping_shortest_paths(g, start, dummy_property_map(), get(vertex_distance, g), get(edge_weight, g), lookahead);
dijkstra_shortest_paths(g, start, distance_map(get(vertex_distance, g)).lookahead(lookahead));
dijkstra_shortest_paths(g, start, distance_map(get(vertex_distance, g)));
Run
mpiexec -n 1 mytest.exe mydata.me
mpiexec -n 2 mytest.exe mydata.me
mpiexec -n 4 mytest.exe mydata.me
mpiexec -n 8 mytest.exe mydata.me
The observed behaviour:
-n 1:
mem usage: 35 GB in 1 running process, which utilizes exactly 1 hardware thread (processor load 12.5%)
delta stepping time: about 1 min 20 s
eager time: about 2 min
crauser time: about 3 min 20 s.
-n 2:
crash at the data-loading stage.
-n 4:
mem usage: 40+ GB in roughly equal parts across 4 running processes, each of which utilizes exactly 1 hardware thread
calculation times are unchanged within the margin of observation error.
-n 8:
mem usage: 44+ GB in roughly equal parts across 8 running processes, each of which utilizes exactly 1 hardware thread
calculation times are unchanged within the margin of observation error.
So, apart from the inappropriate memory usage and very low overall performance, the only changes I observe when more MPI processes are running are a slightly increased total memory consumption and a linear rise in processor load.
Nevertheless, it is evident that the initial graph is somehow partitioned between the processes (probably by vertex number ranges).
What is wrong with this test (and, probably, with my idea of MPI usage in general)?
My environment:
- one Win 10 PC with 64 GB of RAM and 8 cores;
- MS MPI 10.0.12498.5;
- MSVC 2017, toolset 141;
- boost 1.71
N.B. See original example code here.

How can I stream multiple files at the same time using HTTP::Server?

I'm working on an HTTP service that serves big files. I noticed that parallel downloads are not possible. The process serves only one file at a time and all other downloads are waiting until the previous downloads finish. How can I stream multiple files at the same time?
require "http/server"

server = HTTP::Server.new(3000) do |context|
  context.response.content_type = "application/data"
  f = File.open "bigfile.bin", "r"
  IO.copy f, context.response.output
end

puts "Listening on http://127.0.0.1:3000"
server.listen
Request one file at a time:
$ ab -n 10 -c 1 127.0.0.1:3000/
[...]
Percentage of the requests served within a certain time (ms)
50% 9
66% 9
75% 9
80% 9
90% 9
95% 9
98% 9
99% 9
100% 9 (longest request)
Request 10 files at once:
$ ab -n 10 -c 10 127.0.0.1:3000/
[...]
Percentage of the requests served within a certain time (ms)
50% 52
66% 57
75% 64
80% 69
90% 73
95% 73
98% 73
99% 73
100% 73 (longest request)
The problem here is that neither File#read nor context.response.output will ever block in this scenario. Crystal's concurrency model is based on cooperatively scheduled fibers, where switching fibers only happens when IO blocks. Reading from the disk using nonblocking IO is impossible, which means the only part that could block is writing to context.response.output. However, disk IO is a lot slower than network IO on the same machine, so the write never blocks: ab reads at a rate much faster than the disk can supply data, even from the disk cache. This example is practically the perfect storm to break Crystal's concurrency.
In the real world, it's much more likely that clients of the service will reside over the network from the machine, making the response write occasionally block. Furthermore, if you were reading from another network service or a pipe/socket you would also block. Another solution would be to use a threadpool to implement nonblocking file IO, which is what libuv does. As a side note, Crystal moved to libevent because libuv doesn't allow a multithreaded event loop (i.e. have any thread resume any fiber).
Calling Fiber.yield to pass execution to any pending fiber is the correct solution. Here's an example of how to block (and yield) while reading files:
def copy_in_chunks(input, output, chunk_size = 4096)
  size = 1
  while size > 0
    size = IO.copy(input, output, chunk_size)
    Fiber.yield
  end
end

File.open("bigfile.bin", "r") do |file|
  copy_in_chunks(file, context.response)
end
This is a transcription of the discussion here: https://github.com/crystal-lang/crystal/issues/4628
Props to GitHub users #cschlack, #RX14 and #ysbaddaden

Aerospike error: All batch queues are full

I am running an Aerospike cluster in Google Cloud. Following the recommendation in this post, I updated to the latest version (3.11.1.1) and re-created all servers. In fact, this change caused my 5 servers to operate at a much lower CPU load (it was around 75% before; now it is around 20%, as shown in the graph below):
Because of this low load, I decided to reduce the cluster size to 4 servers. When I did this, my application started to receive the following error:
All batch queues are full
I found this discussion about the topic, which recommends changing the parameters batch-index-threads and batch-max-unused-buffers with the command
asadm -e "asinfo -v 'set-config:context=service;batch-index-threads=NEW_VALUE'"
I tried many values for batch-index-threads (2, 4, 8, 16) and also changed the batch-max-unused-buffers parameter, but nothing solved the problem. I keep receiving the All batch queues are full error.
Here is the relevant part of my aerospike.conf:
service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
    paxos-recovery-policy auto-reset-master
    pidfile /var/run/aerospike/asd.pid
    service-threads 32
    transaction-queues 32
    transaction-threads-per-queue 4
    batch-index-threads 40
    proto-fd-max 15000
    batch-max-requests 30000
    replication-fire-and-forget true
}
I use 300GB SSD disks on these servers.
A quick note which may or may not pertain to you:
A common mistake we have seen in the past is that developers decide to use 'batch get' as a general purpose 'get' for single and multiple record requests. The single record get will perform better for single record requests.
It's possible that you are being constrained by the network between the clients and servers. Reducing from 5 to 4 nodes reduced the aggregate pipe. In addition, removing a node will start cluster migrations which adds additional network load.
I would look at the batch-max-buffer-per-queue config parameter.
Maximum number of 128KB response buffers allowed in each batch index
queue. If all batch index queues are full, new batch requests are
rejected.
In conjunction with raising this value from the default of 255, you will also want to raise batch-max-unused-buffers to at least batch-index-threads x batch-max-buffer-per-queue + 1. If you do not, new buffers will be created and destroyed constantly, because the number of free (unused) buffers is smaller than the number in use. The moment the batch response is served, the system will try to trim the buffers back down to the max unused number. You will see this reflected in the batch_index_created_buffers metric constantly rising.
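For example, both values could be raised at runtime using the same asadm pattern shown above; the numbers below are illustrative, not a sizing recommendation:
asadm -e "asinfo -v 'set-config:context=service;batch-max-buffer-per-queue=320'"
asadm -e "asinfo -v 'set-config:context=service;batch-max-unused-buffers=13000'"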
Be aware that you need to have enough DRAM for this. For example if you raise the batch-max-buffer-per-queue to 320 you will consume
40 (`batch-index-threads`) x 320 (`batch-max-buffer-per-queue`) x 128K = 1600MB
For the sake of performance the batch-max-unused-buffers should be set to 13000 which will have a max memory consumption of 1625MB (1.59GB) per-node.
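Following the same arithmetic as above, that figure comes from:
13000 (`batch-max-unused-buffers`) x 128K = 1,664,000K ≈ 1625MB (1.59GB)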

How to get total cpu usage in Linux using C++

I am trying to get total CPU usage in %. First I should say that "top" simply will not do: there is a delay between CPU samples, it requires 2 samples and several seconds, and that hangs my program (I do not want to give it its own thread).
The next thing I tried is "ps", which is instant but always gives a very high total (20+), and when I actually made my CPU do something it stayed at about 20...
Is there any other way to get total CPU usage? It does not matter whether it is measured over one second or a longer period... a longer period would be more useful, though.
cat /proc/stat
http://www.linuxhowtos.org/System/procstat.htm
I agree with this answer above. The cpu line in this file gives the total number of "jiffies" your system has spent doing different types of processing.
What you need to do is take 2 readings of this file, separated by whatever interval of time you require. The numbers are increasing values (subject to integer rollover), so to get the %cpu you need to calculate how many jiffies have elapsed over your interval versus how many jiffies were spent doing work.
e.g.
Suppose at 14:00:00 you have
cpu 4698 591 262 8953 916 449 531
total_jiffies_1 = (sum of all values) = 16400
work_jiffies_1 = (sum of user,nice,system = the first 3 values) = 5551
and at 14:00:05 you have
cpu 4739 591 289 9961 936 449 541
total_jiffies_2 = 17506
work_jiffies_2 = 5619
So the %cpu usage over this period is:
work_over_period = work_jiffies_2 - work_jiffies_1 = 68
total_over_period = total_jiffies_2 - total_jiffies_1 = 1106
%cpu = work_over_period / total_over_period * 100 = 6.1%
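A minimal C++ sketch of this two-reading calculation; it parses only the aggregate "cpu" line and, matching the example above, sums the first three fields as work:
#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>

//Read the aggregate "cpu" line of /proc/stat: total jiffies and "work" jiffies.
static void ReadCpuJiffies(long long& total, long long& work)
{
    std::ifstream stat("/proc/stat");
    std::string line;
    std::getline(stat, line);            //first line is the aggregate "cpu" line
    std::istringstream fields(line);
    std::string label;
    fields >> label;                     //"cpu"
    long long value;
    int index = 0;
    total = work = 0;
    while (fields >> value) {
        total += value;
        if (index < 3) work += value;    //user + nice + system
        ++index;
    }
}

int main()
{
    long long total1, work1, total2, work2;
    ReadCpuJiffies(total1, work1);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    ReadCpuJiffies(total2, work2);

    const double cpu = 100.0 * (work2 - work1) / (total2 - total1);
    std::cout << "cpu usage over the interval: " << cpu << " %" << std::endl;
}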
Try reading /proc/loadavg. The first three numbers are the number of processes actually running (i.e., using a CPU), averaged over the last 1, 5, and 15 minutes, respectively.
http://www.linuxinsight.com/proc_loadavg.html
Read /proc/cpuinfo to find the number of CPUs/cores available to the system.
Call getloadavg() (or alternatively read /proc/loadavg), take the first value, multiply it by 100 (to convert to percent), and divide by the number of CPUs/cores. If the value is greater than 100, truncate it to 100. Done.
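A minimal sketch of that recipe; std::thread::hardware_concurrency() stands in here for counting cores in /proc/cpuinfo, and on Linux/glibc getloadavg() is declared in <cstdlib>:
#include <algorithm>
#include <cstdlib>   //getloadavg() on Linux/glibc
#include <iostream>
#include <thread>

int main()
{
    double load[1];
    if (getloadavg(load, 1) != 1) {      //returns the number of samples read
        std::cerr << "getloadavg failed" << std::endl;
        return 1;
    }
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;           //hardware_concurrency() may return 0
    double percent = std::min(100.0, load[0] * 100.0 / cores);
    std::cout << "approximate cpu usage: " << percent << " %" << std::endl;
}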
Relevant documentation: man getloadavg and man 5 proc
N.B. Load average, as is usual on *NIX systems, can be more than 100% (per CPU/core) because it actually measures the number of processes ready to be run by the scheduler. With a Windows-like CPU metric, when the load is at 100% you do not really know whether that is optimal use of CPU resources or the system is overloaded. Under *NIX, optimal use of the CPU gives a loadavg of about 1.0 (or 2.0 for a dual system). If the value is much greater than the number of CPUs/cores, then you might want to plug extra CPUs into the box.
Otherwise, dig into the /proc file system.
cpu-stat is a C++ project that reads the Linux CPU counters from /proc/stat.
Get the CPUData.* and CPUSnapshot.* files from cpu-stat's src directory.
Quick implementation to get overall CPU usage:
#include "CPUSnapshot.h"
#include <chrono>
#include <thread>
#include <iostream>

int main()
{
    CPUSnapshot previousSnap;
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    CPUSnapshot curSnap;

    const float ACTIVE_TIME = curSnap.GetActiveTimeTotal() - previousSnap.GetActiveTimeTotal();
    const float IDLE_TIME   = curSnap.GetIdleTimeTotal() - previousSnap.GetIdleTimeTotal();
    const float TOTAL_TIME  = ACTIVE_TIME + IDLE_TIME;
    int usage = 100.f * ACTIVE_TIME / TOTAL_TIME;
    std::cout << "total cpu usage: " << usage << " %" << std::endl;
}
Compile it:
g++ -std=c++11 -o CPUUsage main.cpp CPUSnapshot.cpp CPUData.cpp
cat /proc/stat
http://www.linuxhowtos.org/System/procstat.htm
I suggest two files to start with:
/proc/stat and /proc/cpuinfo.
http://www.mjmwired.net/kernel/Documentation/filesystems/proc.txt
Have a look at this C++ lib.
The information is parsed from /proc/stat. It also parses memory usage from /proc/meminfo and ethernet load from /proc/net/dev.
----------------------------------------------
current CPULoad:5.09119
average CPULoad 10.0671
Max CPULoad 10.0822
Min CPULoad 1.74111
CPU: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
----------------------------------------------
network load: wlp0s20f3 : 1.9kBit/s : 920Bit/s : 1.0kBit/s : RX Bytes Startup: 15.8mByte TX Bytes Startup: 833.5mByte
----------------------------------------------
memory load: 28.4% maxmemory: 16133792 Kb used: 4581564 Kb Memload of this Process 170408 KB
----------------------------------------------

Capture CPU and Memory usage dynamically

I am running a shell script to execute a C++ application, which measures the performance of an API. I can capture the latency (the time taken to return a value for a given set of parameters) of the API, but I also wish to capture the CPU and memory usage alongside it, at intervals of, say, 5-10 seconds.
Is there a way to do this without affecting the performance of the system too much, and within the same script? I have found many examples of doing this outside (independently of) the script being run, but none where it is done within the same script.
If you are looking to capture CPU and memory utilization dynamically for the entire Linux box, then the following commands can help you too:
CPU
vmstat -n 15 10 | awk '{now=strftime("%Y-%m-%d %T "); print now $0}' > CPUDataDump.csv &
vmstat is used for collecting the CPU counters
-n prints the header only once instead of before every interval
15 is the delay: stats are collected every 15 seconds
10 is the number of intervals, so there will be 10 iterations in this example
awk '{now=strftime("%Y-%m-%d %T "); print now $0}' prepends a timestamp to each line
the trailing & keeps the dump running in the background so the script can continue
Memory
free -m -s 10 -c 10 | awk '{now=strftime("%Y-%m-%d %T "); print now $0}' > DataDumpMemoryfile.csv &
free is for memory stats collection
-m reports in megabytes (you can use -b for bytes, -k for kilobytes, -g for gigabytes)
-s 10 collects a sample every 10 seconds
-c 10 limits the output to 10 iterations
awk '{now=strftime("%Y-%m-%d %T "); print now $0}' prepends a timestamp to each line
the trailing & keeps the dump running in the background
I'd suggest using the 'time' command and also the 'vmstat' command. The first gives the CPU usage of the executable's run, and the second a periodic (e.g. once per second) dump of the system's CPU/memory/IO.
Example:
time dd if=/dev/zero bs=1K of=/dev/null count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 0.738194 seconds, 1.4 GB/s
0.218u 0.519s 0:00.73 98.6% 0+0k 0+0io 0pf+0w <== that's time result