How to get the SPID in Linux 2.6 from C++

I have a question: is there some way to get the SPID in Linux 2.6 from a C++ application? When I run "ps -amT" I can see the threads in the process:
root@10.67.100.2:~# ps -amT
PID SPID TTY TIME CMD
1120 - pts/1 00:00:20 sncmdd
- 1120 - 00:00:00 -
- 1125 - 00:00:00 -
- 1126 - 00:00:00 -
- 1128 - 00:00:00 -
- 1129 - 00:00:09 -
- 1130 - 00:00:00 -
- 1131 - 00:00:09 -
1122 - pts/1 00:00:00 snstatusdemuxd
- 1122 - 00:00:00 -
- 1127 - 00:00:00 -
- 1132 - 00:00:00 -
- 1133 - 00:00:00 -
And then in the filesystem I can see the threads:
root@10.67.100.2:~# ls /proc/1120/task/
1120 1125 1126 1128 1129 1130 1131
So is there some way I can get the SPID from my application so I can somehow identify what my SPID is in each running thread?
Thanks!
/Mike
Edit: I should add that the PID returned from getpid() is the same in each thread.
When I add this code to my threads:
// Log thread information to syslog
syslog(LOG_NOTICE, "ibnhwsuperv: gettid()= %ld, pthread_self()=%ld", (long int)syscall(224), pthread_self());
I get this result:
Jan 1 01:24:13 10 ibnhwsupervd[1303]: ibnhwsuperv: gettid()= -1, pthread_self()=839027488
Neither of which look like the SPID given by ps or in the proc filesystem.
Also, note that gettid does not return the SPID.

How about gettid()?
Edit: If your libc doesn't provide the gettid() function, you can invoke the syscall directly:
#include <sys/syscall.h>
syscall(SYS_gettid);
... or see the example on the gettid(2) manual page.

Related

Multiple processes have the same connection

When I start passenger, multiple processes have the same connection.
bundle exec passenger-status
Requests in queue: 0
* PID: 13830 Sessions: 0 Processed: 107 Uptime: 1h 24m 22s
CPU: 0% Memory : 446M Last used: 41s ago
* PID: 13909 Sessions: 0 Processed: 0 Uptime: 41s
CPU: 0% Memory : 22M Last used: 41s ago
ss -antp4 | grep ':3306 '
ESTAB 0 0 XXX.XXX.XXX.XXX:55488 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13909,fd=14),("ruby",pid=13830,fd=14),("ruby",pid=4672,fd=14)) #<= 4672 is preloader process?
ESTAB 0 0 XXX.XXX.XXX.XXX:55550 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13830,fd=24))
ESTAB 0 0 XXX.XXX.XXX.XXX:55552 XXX.XXX.XXX.XXX:3306 users:(("ruby",pid=13909,fd=24))
Is the connection using port 55488 correct?
I believe inconsistencies can occur when multiple processes share the same connection, but I can't find the problem in my application.
I am using Rails 4.x and Passenger 6.0.2.

Proxy server status capturing

My goal is to pull the key items we are tracking for KPIs from my servers. My plan is to run this daily via a cron job and have it email me once a week, so the numbers can be put into an Excel sheet to get the monthly KPIs. Here is what I have so far.
#!/bin/bash
server=server1
ports=({8400..8499})
for l in "${ports[@]}"
do
echo "checking on '$l'"
sp=$(curl -k --silent "https://$server:$l/server-status" | grep -E "Apache Server|Total accesses|CPU Usage|second|uptime" | sed 's/<[^>]*>//g')
echo "$l: $sp" >> kpi.tmp
done
grep -v '^$' kpi.tmp > kpi.out
The output shows like this.
8400:
8401: Apache Server Status for server1(via x.x.x.x)
Server uptime: 18 days 4 hours 49 minutes 37 seconds
Total accesses: 545 - Total Traffic: 15.2 MB
CPU Usage: u115.57 s48.17 cu0 cs0 - .0104% CPU load
.000347 requests/sec - 10 B/second - 28.6 kB/request
8402: Apache Server Status for server 1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 26 seconds
Total accesses: 33 - Total Traffic: 487 kB
CPU Usage: u118.64 s49.41 cu0 cs0 - .00968% CPU load
1.9e-5 requests/sec - 0 B/second - 14.8 kB/request
8403:
8404:
8405: Apache Server Status for server1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 28 seconds
Total accesses: 35 - Total Traffic: 545 kB
CPU Usage: u133.04 s57.48 cu0 cs0 - .011% CPU load
2.02e-5 requests/sec - 0 B/second - 15.6 kB/request
I am having a hard time figuring out how to filter the output into the form I would like. As you can see from my desired output, ports that return no data should be left out of the file, and some of the information should be cut from the returned data.
I would like my output to look like this:
8401:server1(via x.x.x.x)
Server uptime: 18 days 4 hours 49 minutes 37 seconds
Total accesses: 545 - Total Traffic: 15.2 MB
CPU Usage: .0104% CPU load
.000347 requests/sec - 10 B/second - 28.6 kB/request
8402: server1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 26 seconds
Total accesses: 33 - Total Traffic: 487 kB
CPU Usage: .00968% CPU load
1.9e-5 requests/sec - 0 B/second - 14.8 kB/request
8405: server1(via x.x.x.x)
Server uptime: 20 days 2 hours 20 minutes 28 seconds
Total accesses: 35 - Total Traffic: 545 kB
CPU Usage: .011% CPU load
2.02e-5 requests/sec - 0 B/second - 15.6 kB/request
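One way to get from the raw capture to that desired output is to post-process the collected lines in a separate pass. Here is a sketch; the function name filter_kpi is made up, and it assumes the exact mod_status text shown above:

```shell
# Sketch: filter the captured status lines (assumes the raw format above).
filter_kpi() {
  awk '
    /^[0-9]+:[[:space:]]*$/ { next }                        # drop ports with no data
    { sub(/Apache Server Status for /, "") }                # trim the long title prefix
    /CPU Usage:/ { sub(/CPU Usage: .* - /, "CPU Usage: ") } # keep only the %-load part
    { print }
  '
}
```

It could then be run after the collection loop as `filter_kpi < kpi.tmp > kpi.out`.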

Stackdriver Monitoring floods collectd uc_update: Value too old in syslog

Let me preface this by stating that I am not a DevOps engineer, so my experience with Linux administration is limited.
I basically followed this how-to (https://cloud.google.com/monitoring/agent/install-agent) and installed the agent on my Google Compute Engine instance.
Everything works and I get the new metrics in my Stackdriver account; however, my syslog is flooded with this:
instance-name collectd[26092]: uc_update: Value too old: name = <RandomNumber>/processes-all/ps_vm; value time = 1517218302.393; last cache update = 1517218302.393;
So I found this in my /opt/stackdriver/collectd/etc/collectd.conf file
Hostname "RandomNumber"
Interval 60
This makes sense: we don't use collectd for anything else besides Stackdriver, so it checks out that the <RandomNumber> in the log message is the same as the Hostname in the Stackdriver config.
Next I checked https://collectd.org/faq.shtml
I run this command for both /etc/collectd.conf and /opt/stackdriver/collectd/etc/collectd.conf
grep -i LoadPlugin /etc/collectd.conf | egrep -v '^[[:space:]]*#' | sort | uniq -c
1 LoadPlugin cpu
1 LoadPlugin interface
1 LoadPlugin load
1 LoadPlugin memory
1 LoadPlugin network
1 LoadPlugin syslog
grep -i LoadPlugin /opt/stackdriver/collectd/etc/collectd.conf | egrep -v '^[[:space:]]*#' | sort | uniq -c
1 LoadPlugin "match_regex"
1 LoadPlugin aggregation
1 LoadPlugin cpu
1 LoadPlugin df
1 LoadPlugin disk
1 LoadPlugin exec
1 LoadPlugin interface
1 LoadPlugin load
1 LoadPlugin match_regex
1 LoadPlugin match_throttle_metadata_keys
1 LoadPlugin memory
1 LoadPlugin processes
1 LoadPlugin stackdriver_agent
1 LoadPlugin swap
1 LoadPlugin syslog
1 LoadPlugin tcpconns
1 LoadPlugin write_gcm
As you can see, there are no repeating values.
I have run out of ideas. Can someone help?
Thank you.
P.S.
We are using Debian Stretch and running lighttpd with php.
P.S. More information
This is a more detailed log with the error in it; you can see the timestamps:
Jan 30 10:47:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_cputime; value time = 1517309269.877; last cache update = 1517309269.877;
Jan 30 10:48:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_cputime; value time = 1517309329.884; last cache update = 1517309329.884;
Jan 30 10:50:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_rss; value time = 1517309449.881; last cache update = 1517309449.881;
Jan 30 10:50:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/io_octets; value time = 1517309449.881; last cache update = 1517309449.884;
Jan 30 10:52:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/ps_vm; value time = 1517309569.889; last cache update = 1517309569.889;
Jan 30 10:52:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/disk_octets; value time = 1517309569.890; last cache update = 1517309569.890;
Jan 30 10:52:49 instance-name collectd[28953]: uc_update: Value too old: name = 5281367784029328076/processes-all/disk_octets; value time = 1517309569.890; last cache update = 1517309569.894;
This is the output of the ps command:
ps -e
PID TTY TIME CMD
1 ? 00:01:28 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:24 ksoftirqd/0
5 ? 00:00:00 kworker/0:0H
7 ? 00:41:17 rcu_sched
8 ? 00:00:00 rcu_bh
9 ? 00:00:02 migration/0
10 ? 00:00:00 lru-add-drain
11 ? 00:00:03 watchdog/0
12 ? 00:00:00 cpuhp/0
13 ? 00:00:00 cpuhp/1
14 ? 00:00:03 watchdog/1
15 ? 00:00:01 migration/1
16 ? 00:11:58 ksoftirqd/1
18 ? 00:00:00 kworker/1:0H
19 ? 00:00:00 cpuhp/2
20 ? 00:00:03 watchdog/2
21 ? 00:00:01 migration/2
22 ? 00:03:16 ksoftirqd/2
24 ? 00:00:00 kworker/2:0H
25 ? 00:00:00 cpuhp/3
26 ? 00:00:03 watchdog/3
27 ? 00:00:02 migration/3
28 ? 00:03:11 ksoftirqd/3
30 ? 00:00:00 kworker/3:0H
31 ? 00:00:00 kdevtmpfs
32 ? 00:00:00 netns
33 ? 00:00:00 khungtaskd
34 ? 00:00:00 oom_reaper
35 ? 00:00:00 writeback
36 ? 00:00:00 kcompactd0
38 ? 00:00:00 ksmd
39 ? 00:01:02 khugepaged
40 ? 00:00:00 crypto
41 ? 00:00:00 kintegrityd
42 ? 00:00:00 bioset
43 ? 00:00:00 kblockd
44 ? 00:00:00 devfreq_wq
45 ? 00:00:00 watchdogd
49 ? 00:01:16 kswapd0
50 ? 00:00:00 vmstat
62 ? 00:00:00 kthrotld
63 ? 00:00:00 ipv6_addrconf
130 ? 00:00:00 scsi_eh_0
131 ? 00:00:00 scsi_tmf_0
133 ? 00:00:00 bioset
416 ? 07:01:34 jbd2/sda1-8
417 ? 00:00:00 ext4-rsv-conver
443 ? 00:02:37 systemd-journal
447 ? 00:00:00 kauditd
452 ? 00:00:01 kworker/0:1H
470 ? 00:00:01 systemd-udevd
483 ? 00:00:26 cron
485 ? 00:00:37 rsyslogd
491 ? 00:00:00 acpid
496 ? 00:00:49 irqbalance
497 ? 00:00:21 systemd-logind
498 ? 00:00:36 dbus-daemon
524 ? 00:00:00 edac-poller
612 ? 00:00:02 kworker/2:1H
613 ? 00:00:00 dhclient
674 ? 00:00:00 vsftpd
676 ttyS0 00:00:00 agetty
678 tty1 00:00:00 agetty
687 ? 00:01:18 ntpd
795 ? 4-19:58:17 mysqld
850 ? 00:00:15 sshd
858 ? 00:04:06 google_accounts
859 ? 00:00:33 google_clock_sk
861 ? 00:01:05 google_ip_forwa
892 ? 01:31:57 kworker/1:1H
1154 ? 00:00:00 exim4
1160 ? 00:00:01 kworker/3:1H
4259 ? 00:00:00 kworker/2:1
6090 ? 00:00:00 kworker/0:1
6956 ? 00:00:00 sshd
6962 ? 00:00:00 sshd
6963 pts/0 00:00:00 bash
6968 pts/0 00:00:00 su
6969 pts/0 00:00:00 bash
6972 ? 00:00:00 kworker/u8:2
7127 ? 00:00:00 kworker/3:2
7208 ? 00:00:00 php-fpm7.0
7212 ? 00:00:00 kworker/0:0
10516 ? 00:00:00 systemd
10517 ? 00:00:00 (sd-pam)
10633 ? 00:00:00 kworker/2:2
11569 ? 00:00:00 kworker/3:1
12539 ? 00:00:00 kworker/1:2
13625 ? 00:00:00 kworker/1:0
13910 ? 00:00:00 sshd
13912 ? 00:00:00 systemd
13913 ? 00:00:00 (sd-pam)
13920 ? 00:00:00 sshd
13921 ? 00:00:00 sftp-server
13924 ? 00:00:00 sftp-server
14016 pts/0 00:00:00 tail
14053 ? 00:00:03 php-fpm7.0
14084 ? 00:00:00 sshd
14090 ? 00:00:00 sshd
14091 pts/1 00:00:00 bash
14098 ? 00:00:01 php-fpm7.0
14099 pts/1 00:00:00 su
14100 pts/1 00:00:00 bash
14105 ? 00:00:00 sshd
14106 ? 00:00:00 sshd
14107 ? 00:00:00 php-fpm7.0
14108 pts/1 00:00:00 ps
17456 ? 00:00:03 kworker/u8:1
17704 ? 01:38:36 lighttpd
21624 ? 00:00:30 perl
25593 ? 00:00:00 sshd
25595 ? 00:00:00 systemd
25596 ? 00:00:00 (sd-pam)
25602 ? 00:00:00 sshd
25603 ? 00:00:00 sftp-server
25641 ? 00:00:00 sftp-server
27001 ? 00:00:00 gpg-agent
28953 ? 00:01:20 stackdriver-col
Output of the ps command piped through grep:
root@instance-7:/home/# ps aux | grep collectd
root 6981 0.0 0.0 12756 976 pts/0 S+ 13:40 0:00 grep collectd
root 28953 0.1 1.1 1105712 41960 ? Ssl Jan29 3:16 /opt/stackdriver/collectd/sbin/stackdriver-collectd -C /opt/stackdriver/collectd/etc/collectd.conf -P /var/run/stackdriver-agent.pid
These should be normal messages from the Stackdriver agent (if the rate is, as you said, 2-3 messages per minute).
I suggest installing the ntp/ntpd service and syncing it to a time server, so you have the right time on your system.
Example NTP server: pool.ntp.org
You are just getting a duplicate: your message has identical timestamp values for both the new value to be added to the internal cache and the last value with the same name that was added to the cache.
value time = 1517218302.393
last cache update = 1517218302.393
You can refer to the collectd FAQ page (https://collectd.org/faq.shtml). It explains this kind of message, including an example that matches the one you got.
You should check:
- Whether more than one collectd daemon is running in your instance. To see the collectd processes you can run:
ps aux | grep collectd
- Whether the timestamps increase with each message. If so, another host could be reporting data using the same host name.
Since those logs do not seem to be affecting the instance, if they're flooding your Stackdriver you can exclude them from the default sink.
Using gcloud, this can be accomplished with the following command:
gcloud logging sinks update _Default --log-filter "$(echo $(gcloud logging sinks describe _Default --format "value(filter)") "AND NOT textPayload:\"uc_update: Value too old:\"")"

Print the first line occurrence of each matching pattern with sed

I would like to filter the output of the utility last based on a variable set of usernames.
This is sample output from last, unfiltered:
reboot system boot server Wed Apr 6 13:15 - 14:24 (01:09)
user1 pts/0 server Wed Apr 6 13:08 - 13:15 (00:06)
reboot system boot system Wed Apr 6 13:08 - 13:15 (00:06)
user1 pts/0 server Wed Apr 6 13:06 - down (00:01)
reboot system boot system Wed Apr 6 13:06 - 13:07 (00:01)
user1 pts/0 server Wed Apr 6 12:59 - down (00:06)
What I would like to do is pipe the output of last to sed. Then, using sed, I would print the first occurrence of each specified username, i.e. their last entry in wtmp. The output should appear like so:
reboot system boot server Wed Apr 6 13:15 - 14:24 (01:09)
user1 pts/0 server Wed Apr 6 13:08 - 13:15 (00:06)
The sed expression that I particularly like is,
last|sed '/user1/{p;q;}'
Unfortunately this only gives me the ability to match the first occurrence of one username. Using this syntax, is there a way I could specify multiple usernames? Thanks in advance!
awk is a better fit here than sed, thanks to awk's associative arrays. The pattern !seen[$1]++ is true only the first time a given first field (the username) appears:
last | awk '!seen[$1]++'
reboot system boot server Wed Apr 6 13:15 - 14:24 (01:09)
user1 pts/0 server Wed Apr 6 13:08 - 13:15 (00:06)
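If you want to restrict the output to a specific set of usernames rather than every user, the same idea extends with a whitelist. A sketch (the user list and the function name first_per_user are placeholders; adjust to taste):

```shell
# Sketch: print only the first (most recent) last(1) entry for each
# username in a chosen list.
first_per_user() {
  awk -v users="reboot user1" '
    BEGIN { split(users, list, " "); for (i in list) want[list[i]] = 1 }
    want[$1] && !seen[$1]++   # print only wanted users, first hit each
  '
}
# Usage: last | first_per_user
```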

regexp to wrap a line with ${color} and $color

Is there a way to have this regex put ${color orange} at the beginning, and $color at the end of the line where the date is found?
DJS=`date +%_d`;
cat thisweek.txt | sed s/"\(^\|[^0-9]\)$DJS"'\b'/'\1${color orange}'"$DJS"'$color'/
With this expression I get this:
Saturday Aug 13 12pm - 9pm 4pm - 5pm
Sunday Aug 14 9:30am - 6pm 1pm - 2pm
Monday Aug 15 6:30pm - 11:30pm None
Tuesday Aug 16 6pm - 11pm None
Wednesday Aug 17 Not Currently Scheduled for This Day
Thursday Aug ${color orange}18$color Not Currently Scheduled for This Day
Friday Aug 19 7am - 3:30pm 10:30am - 11:30am
What I want to have is this:
Saturday Aug 13 12pm - 9pm 4pm - 5pm
Sunday Aug 14 9:30am - 6pm 1pm - 2pm
Monday Aug 15 6:30pm - 11:30pm None
Tuesday Aug 16 6pm - 11pm None
Wednesday Aug 17 Not Currently Scheduled for This Day
${color orange}Thursday Aug 18 Not Currently Scheduled for This Day$color
Friday Aug 19 7am - 3:30pm 10:30am - 11:30am
Actually, it works for me. Depending on your version of sed, you might need to pass -r. Also, as tripleee says, don't use cat here:
DJS=`date +%_d`
sed -r s/"\(^\|[^0-9]\)$DJS"'\b'/'\1${color orange}'"$DJS"'$color'/ thisweek.txt
EDIT: OK, so with the new information I arrived at this:
sed -r "s/([^0-9]+19.+)/\${color orange}\1\$color/" thisweek.txt
This gives me the output
Saturday Aug 13 12pm - 9pm 4pm - 5pm
Sunday Aug 14 9:30am - 6pm 1pm - 2pm
Monday Aug 15 6:30pm - 11:30pm None
Tuesday Aug 16 6pm - 11pm None
Wednesday Aug 17 Not Currently Scheduled for This Day
Thursday Aug 18 Not Currently Scheduled for This Day
${color orange}Friday Aug 19 7am - 3:30pm 10:30am - 11:30am $color
(Note that it differs from yours, since it's Friday, at least in my time zone.)
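A variant that wraps the whole matching line rather than just the day number might look like this (a sketch; wrap_today is a made-up helper name, and it assumes GNU sed for -E and \b):

```shell
# Sketch: wrap the ENTIRE line containing the given day-of-month in
# ${color orange}...$color (assumes GNU sed).
wrap_today() {
  sed -E "s/^(.*(^|[^0-9])$1\b.*)\$/\${color orange}\1\$color/"
}
# Usage: wrap_today "$(date +%-d)" < thisweek.txt
```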